A.I. Adoption in the Military Is Too Rapid

Militaries’ Reliance on Artificial Intelligence Systems Raises Concerns

The Pentagon’s Push for A.I. Integration

Militaries are increasingly relying on artificial intelligence (A.I.) systems to make decisions about who or what to target and how to do it. The Pentagon is considering incorporating A.I. into many military tasks, potentially amplifying risks and introducing new and serious cybersecurity vulnerabilities. With Donald Trump’s administration in place, the tech industry is moving full steam ahead in its push to integrate A.I. products across the defense establishment, which could make a dangerous situation even more perilous for national security.

Partnerships and Initiatives

In recent months, technology companies have announced a slew of new partnerships and initiatives to integrate A.I. technologies into deadly weaponry. OpenAI, a company that has touted safety as a core principle, announced a new partnership with the defense tech startup Anduril, marking its entry into the military market. Anduril and Palantir, a data analytics firm, are in talks to form a consortium with a group of competitors to bid jointly for defense contracts. In November, Meta announced agreements to make its A.I. models available to the defense contractors Lockheed Martin and Booz Allen. Earlier in the year, the Pentagon selected the A.I. startup Scale AI to help with the testing and evaluation of large language models across a range of uses, including military planning and decision-making.

Concerns and Risks

Proponents argue that the integration of A.I. foundation models can help the United States retain its technological advantage. However, some of our country’s defense leaders have expressed concerns. Gen. Mark Milley recently said in a speech at Vanderbilt University that these systems are a "double-edged sword," posing real dangers in addition to potential benefits. In 2023, the Navy’s chief information officer, Jane Rathbun, said that commercial language models, such as OpenAI’s GPT-4 and Google’s Gemini, will not be ready for operational military use until security control requirements have been "fully investigated, identified and approved for use within controlled environments."

Cybersecurity Vulnerabilities

U.S. military agencies have previously used A.I. systems developed under the Pentagon’s Project Maven to identify targets for subsequent weapons strikes in Iraq, Syria, and Yemen. These systems and their analogues can speed up the process of selecting and attacking targets using image recognition. However, they have had problems with accuracy and can introduce greater potential for error. A 2021 test of one experimental target recognition program revealed an accuracy rate as low as 25 percent, a stark contrast to its advertised rate of 90 percent.

Foundation models are even more worrisome from a cybersecurity perspective. As most people who have played with a large language model know, foundation models frequently "hallucinate," asserting patterns that do not exist or producing nonsense. This means that they may recommend the wrong targets. Worse still, because we can’t reliably predict or explain their behavior, the military officers supervising these systems may be unable to distinguish correct recommendations from erroneous ones.

Conclusion

The integration of A.I. foundation models into military systems raises serious concerns about national security. Rather than grapple with these potential threats, the White House is encouraging full speed ahead. Mr. Trump has already repealed an executive action issued by the Biden administration that tried to address these concerns — an indication that the White House will be ratcheting down its regulation of the sector, not scaling it up.

FAQs

Q: What are A.I. foundation models?
A: A.I. foundation models are systems trained on very large pools of data and capable of a range of general tasks.

Q: What are the concerns about A.I. foundation models in the military?
A: The concerns include the potential for targeting errors, hallucinations by foundation models that officers may be unable to detect, and new cybersecurity vulnerabilities.

Q: What is the current state of A.I. integration in the military?
A: The Pentagon is considering incorporating A.I. into many military tasks, and technology companies are announcing new partnerships and initiatives to integrate A.I. technologies into deadly weaponry.

Q: What are the potential risks of A.I. integration in the military?
A: The potential risks include the amplification of existing targeting errors and the introduction of new and serious cybersecurity vulnerabilities.
