The ethical minefield of Big Tech in the US military


In June 2025, the US Army created Detachment 201, known as the Pentagon’s Executive Innovation Corps, to “fuse cutting-edge tech expertise with military innovation.” The initiative forms part of the Trump administration’s push to integrate Silicon Valley, and AI companies in particular, into military affairs, with the dual aims of accelerating the Army’s adoption of new technology and fostering an entrepreneurial culture within its ranks.

The corps selected a group of senior AI company executives to serve in its inaugural cohort of Army Reserve lieutenant colonels, drawn from Palantir, Meta, OpenAI, and Thinking Machines Lab. These officers will reportedly not be required to complete the Army Fitness Test or the military’s six-week Direct Commission Course.

Military adoption of new technology, and the wider military-industrial complex, have been constants of warfare for centuries, but the integration of Big Tech specifically into military affairs has been far less widespread. Ethical reservations and a focus on consumer-facing products long hindered closer ties between the US Army and Silicon Valley, with the latter considering it “anathema to work on projects with military applications”, according to The Wall Street Journal. However, contracts such as Palantir’s work on Project Maven, awarded during Donald Trump’s first term as president, together with a more tech-friendly president in Washington, have substantially improved this relationship.

The recent announcement that several prominent technology executives will be inducted into the US Army has ignited a firestorm of ethical questions, transparency concerns, and fears about the accelerating militarisation of the tech sector. The move effectively grants defence-adjacent authority, access, and prestige to corporate leaders from Silicon Valley, and it raises concerns about civilian control of the military, AI ethics on the battlefield, and conflicts of interest that cannot be ignored.

The ethics of AI in the military

Integrating Big Tech into the chain of command is highly likely to increase the use of AI in combat scenarios. AI is already used to improve battlefield lethality, typically through automated targeting built on biometric analysis databases, which can in turn be repurposed for surveillance. Lethal autonomous weapons (LAWs) represent AI’s ultimate and most ethically challenging application in a combat role; there is particular concern about the capacity of autonomous systems to identify, target, and eliminate perceived hostile entities without human oversight.

The potential for target misidentification by LAWs remains a dominant concern among lawmakers and activists. Image recognition and machine learning tools produce flawed results that can propagate faster and further than typical human error. The absence of effective legal accountability for target misidentification compounds the problem: current laws governing war and culpability for war crimes focus on human actions and do not clearly assign responsibility for AI-driven incidents.

AI-assisted surveillance systems present similarly significant ethical hurdles and are often deployed for domestic security purposes. Amazon’s Rekognition technology faced criticism after being tested by US law enforcement without sufficient oversight between 2017 and 2018. Reports emerged in 2025 that Palantir and the Trump administration may partner to develop a database of US citizens, something Palantir has denied. The New York Times posited that Trump could use such data to “advance his political agenda by policing immigrants and punishing critics”. Ultimately, the normalisation of Big Tech figures in the military could result in even more military-grade surveillance systems being turned on US citizens.

Conflicts of interest

The integration of corporate executives into a government organisation inevitably raises concerns about conflicts of interest. These companies stand to benefit substantially from military contracts, and their executives have now entered the military chain of command, albeit partially, without having to meet standard training requirements.

Potential conflicts of interest between Big Tech and the military have emerged before. Former Google CEO Eric Schmidt chaired the National Security Commission on Artificial Intelligence (NSCAI) while retaining a substantial stake in Alphabet, whose subsidiaries were simultaneously pursuing Pentagon contracts, creating a fear among Pentagon insiders of “shadow governance” by private technological interests.

Inducting executives from these same companies into military advisory roles risks collapsing the firewall between public service and private gain. It gives tech firms privileged insight into future military needs while allowing them to shape the procurement paths from which they profit: a textbook case of regulatory capture.

No formal mechanism currently exists to vet or limit the influence of these tech leaders within the military, especially when they hold both advisory roles and financial stakes. Unlike elected officials, they are not subject to public scrutiny, Freedom of Information Act (FOIA) transparency, or ethics disclosure requirements.

In democracies, the decision to go to war or use lethal force must be insulated from private profit motives. By formally inducting corporate leaders into the Army, even ceremonially, the US risks sending the message that the interests of private corporations are not merely aligned with national defence strategy but embedded within it.

Continued involvement of Big Tech in the US Armed Forces will bring pain before any future gain. Real reform of the Department of Defense will come through intense scrutiny of budgets and defence contractors, not the further integration of conflicted actors into the military.





