Earlier this year, House and Senate committees and subcommittees received alarming testimony about artificial intelligence and China. Alexandr Wang, CEO of Scale AI, emphasized that “The Chinese Communist Party deeply understands the potential for AI to disrupt warfare. … AI is China’s Apollo project.” Michèle Flournoy, former Under Secretary of Defense for Policy, highlighted China’s civil-military fusion policy, which allows the government to demand cooperation from any entity for military purposes. The United States, by contrast, relies on a private sector in which individuals and companies choose whether to contribute to national security.

The potential of AI was first demonstrated in board games such as chess and Go. Garry Kasparov, then the world chess champion, won his first match against IBM’s Deep Blue but lost the rematch. Similarly, Lee Sedol, regarded as one of the greatest Go players of his era, was defeated by an AI program, DeepMind’s AlphaGo. These events showcased the rapidly evolving capabilities of artificial intelligence.

The focus then shifted to poker, a game of imperfect information and high-stakes decision-making. Tuomas Sandholm, a computer science professor at Carnegie Mellon, recognized poker’s relevance to real-world scenarios. In 2017, Carnegie Mellon challenged professional poker players, including Jason Les, to compete against an AI program. Despite their initial confidence, the players lost to the AI, a sobering demonstration of how far the technology had advanced.

Sandholm’s AI company, Strategy Robot, now works as a Pentagon contractor, providing AI tools to support decision-making in military operations. U.S. policy, however, emphasizes human oversight of AI applications. Dr. Craig Martell, head of the Pentagon’s Chief Digital and Artificial Intelligence Office, is charged with ensuring that AI is acquired and deployed responsibly. The challenge lies in building confidence in AI systems, especially in life-or-death situations, and the fear of falling behind China in military AI remains a pressing concern.
Michèle Flournoy emphasized the need for both urgency and a strong ethical framework in developing AI for military purposes. A hard question remains, though: what happens when competitors do not hold themselves to the same ethical guidelines? The debate over human oversight versus machine autonomy is also less straightforward than it appears, since in war many mistakes are made by humans. The pattern of AI victories over human players in chess, Go, and poker points to real potential for AI in decision-making, and humans tend to overestimate their own judgment. Ultimately, the balance of oversight between humans and AI will evolve over time.
In the modern era of warfare, technology and artificial intelligence play increasingly vital roles. But with growing reliance on automated systems, emotionless algorithms, and autonomous weaponized robots comes an ethical dilemma: can AI be trusted in warfare?
Because the technology is still at an early stage, many are skeptical that artificial intelligence (AI) can be trusted with decisions involving life and death. AI-controlled weapons can act instantaneously and have proven highly accurate. On the other hand, AI-powered weapons may fail to distinguish among civilians, allies, noncombatants, and enemy forces. This lack of intent or judgment could lead to indiscriminate destruction, and there is no guarantee that an AI system would act discriminately, in compliance with international law. AI systems can also malfunction, be hacked, or generate their own goals, leading to unexpected actions with unknown consequences.
Despite the obvious pitfalls of trusting AI in warfare, the advantages of the technology are undeniable. From automated defensive systems to predictive analytics, AI weapons could revolutionize tactical decision-making and streamline operational efficiency. They can also aid surveillance and monitoring in conflict zones.
Ultimately, the decision to trust AI in warfare must be made by society as a whole. We must debate openly whether the benefits of AI technology outweigh its potential risks. The escalating military arms race makes this decision even more urgent, as we set limits on AI weapons and, ultimately, decide how much collective trust to place in this burgeoning technology.