Caught in the Middle With AI
Some U.S. tech companies could be approaching a crossroads. Opportunities are exploding for artificial intelligence (AI) business with the federal government, which is gearing up for a push into AI for military action and intelligence-gathering. The past few months alone have seen the creation, under Section 1051 of the fiscal 2019 National Defense Authorization Act, of a new National Security Commission on Artificial Intelligence; the organization of the Pentagon's new Joint Artificial Intelligence Center (JAIC); the formation by the White House of a Select Committee on Artificial Intelligence (a/k/a the 'brain trust'), in response to a March 2018 report from the Center for Strategic and International Studies (CSIS) calling for a national 'machine intelligence' strategy; and rapidly expanding AI-related activity by the Defense Innovation Board, DARPA, and the Office of the Director of National Intelligence.
At the same time, scientists and researchers at some major Silicon Valley tech companies have expressed reservations about military applications of the technology, based on concerns over the use of AI to improve weapons targeting and increase the "lethality" of automated systems. These reservations have surfaced against a general background of public (and Congressional) unease over issues related to AI and surveillance, privacy, data and algorithmic bias, and the assurance of human control over the new systems.
Some tech companies have been caught in the middle, straddling this ideological fault line and called upon to resolve internal disagreements between marketing teams intent on landing lucrative government contracts and, on the other side, key scientists and research personnel asking for clarification of the terms and conditions under which their AI systems may be used.
The Great Game
It is clear that AI will be essential to maintaining military superiority, or at least competitiveness, in the coming decades. America's principal strategic adversaries have openly declared their intention to surpass the U.S. and dominate military applications of AI by 2030.
The federal government is highly motivated to work with Silicon Valley, since many of the most advanced scientists and researchers in the field have been drawn there, in part by the extraordinary compensation packages on offer from major tech companies. In some cases these star researchers have come to Silicon Valley from positions in government agencies or academia, which can rarely match the current market price for AI specialists. At the same time, U.S. government funding for basic and applied research on AI has been cut. If American military and intelligence forces are to maintain qualitative superiority over the country's strategic adversaries, it is important that the Defense Department maintain the ability to leverage the advances now taking place within private U.S. companies.
Suddenly Last Summer
Dissension over AI bubbled to the surface last spring and summer at a leading U.S. tech company in Silicon Valley, when researchers learned from media sources that the company's AI systems were to be deployed to facilitate the analysis of video surveillance footage used for drone targeting. Raucous company meetings ensued, as well as a petition signed by thousands of employees (and echoed by employees at some other Silicon Valley tech firms), eventually leading to resignations by a number of the company's high-level researchers. The company announced soon afterward that it would not renew the related defense contract. Senior management also promulgated a set of principles relating to lethal uses of the company's AI, addressing as well the social benefits of AI technology, data and algorithmic bias, privacy, and international law.
Fault Lines
What are the issues raised by Silicon Valley's dissident researchers, U.S. and European Union regulators, and privacy and consumer advocates? They include:
- Ultimate human control over automated weapons, particularly for decisions involving "lethality"
- Compliance with jurisdictional and international law
- Bias arising from skewed or inappropriately structured data and, as a subset, regulatory restrictions on AI processing that improperly affects legally protected groups
- Bias resulting from improperly designed algorithms
- Privacy and related principles of safety and data protection
- Transparency as to operations (subject to appropriate exceptions for national security matters relating to intelligence-gathering and military actions)
- Review, testing, controls (including third-party audits), and documentation to enable recreation of the algorithmic operations leading to a particular result, in the event of a challenge or alleged violation
- 'Circuit breakers' for malfunctioning AI
- Accountability for errors, and legal standards for tort liability
- A range of consumer rights, including rights of access, the right to be informed, the right to rectification of errors, and the right to object to decisions believed to be in error.
Sense and Sensibility
Some analysts and commentators on defense issues have discounted these and similar concerns. In a report published by the Center for Strategic and International Studies in March 2018, before the blow-up later that year, the authors emphasized the need to open up more data sets to provide the massive amounts of data needed to train the new AI systems. The report noted that the U.S. was finding it difficult to compete with authoritarian states untrammeled by legal restrictions or citizens' rights, and it pinned the blame in part on "privacy advocates . . . pushing for greater protections for data." The authors called on the U.S. government to develop a strategy to "combat . . . privacy policies that harm our global tech companies." The report also cited polling showing that most citizens value personalized services more highly than controls over their data, and it opined that it would be unwise for the U.S. to consider implementing a regime similar to the EU's General Data Protection Regulation (GDPR) "at the risk of stifling innovation."
Other senior officials with equally impeccable defense credentials have taken a different approach. Joshua Marcuse, executive director of the Pentagon's Defense Innovation Board, pointed out at a recent forum sponsored by Defense One that in the American system the people who know how to work with AI have the right to choose not to do so, with potentially devastating consequences for national security: "If this is not an ethics-first approach, led by a sense we'll do the right thing – a virtuous approach that's consistent with our values – we will lose." (Quoted in C4ISRNET, "Without a clearer ethics policy, the U.S. could lose the military tech battle to China," Jan. 28, 2019.)
At the Pentagon's request, the Board is now working on a set of principles for the ethical use of AI for warfare. The immediate objective is to create a framework for cooperation with Silicon Valley tech companies and their scientists. There is also a wider hope that the result of the process may help to establish standards for the safe use of AI in a broad range of contexts, both military and civilian.
Toward an American Singularity - Perhaps
With good will and constructive engagement by defense officials, tech company management, and concerned AI researchers, the U.S. may be able to achieve a broad consensus on ethical standards, one that provides assurance that the public interest is properly protected and that enables the federal government to retain the support of the nation's best minds in confronting AI challenges from authoritarian states that do not share fundamental American and democratic values. The outcome is anything but foreordained.