A peculiar and striking statement has emerged, one that highlights unprecedented risks ahead and underscores how uncertain the planet’s future looks, even to the decision-makers who shape it. At the recent annual Asia-Pacific Economic Cooperation (APEC) summit, US President Joe Biden and Chinese President Xi Jinping agreed that decisions on the use of nuclear weapons should rest with humans, not artificial intelligence. The two leaders also stressed the importance of carefully assessing the risks tied to the development of military artificial intelligence, asserting that such advances must be approached with “wisdom and responsibility.”
This marks the first time the two nations have issued a joint statement addressing artificial intelligence and its military implications. Despite their significant political and economic differences, Xi Jinping told Biden that “China is prepared to collaborate with a new American administration and continue exploring the right path” towards mutual understanding and “achieving long-term peaceful coexistence.”
The statement garnered little attention from Western or American media. This is perhaps unsurprising: remarks and actions by a US president in the waning days of his administration rarely receive much consideration.
Nuclear weapons have been used against humans only once, at the end of World War II, when the United States dropped two atomic bombs on Hiroshima and Nagasaki, killing at least 129,000 people and causing devastating long-term health effects. The bombings compelled the Emperor of Japan to surrender unconditionally. Since then, nuclear weapons have never again been used against humans, evolving instead into a deterrent and a symbol of potential mass extermination for humanity and all life on Earth. From this, a fundamental principle for handling these lethal weapons has emerged: political manoeuvring is permitted, and often essential for maximizing economic and social gains, but actual use of nuclear weapons remains off limits, a restriction that serves as a condition for survival and the continuation of life.
Despite this golden rule, many nations aspire to join the nuclear club and be among the select few that possess the weapon, which at the very least ensures they will not suffer a devastating defeat in a confrontation with an adversary lacking a nuclear deterrent. Even states that already possess nuclear capabilities are reconsidering their doctrines of use. Russian President Vladimir Putin, for instance, has proposed revising Russia’s nuclear doctrine, which previously contemplated nuclear use only against a nuclear state targeting Russian territory; the new approach allows for the potential use of nuclear weapons even if a non-nuclear state attacks Russia with the backing of a nuclear power. In reality, there is no precise way to predict the impact of a single nuclear bomb: it depends on numerous factors, including the weather at the moment of detonation, the time of day, the geography of the target site, and whether the explosion occurs on the ground or in the air.
But what would happen if the members of the nuclear club left thousands of nuclear and hydrogen missiles in the hands of artificial intelligence, a technology that has advanced at breakneck speed over the past two years? It has begun to perform tasks once reserved for humans: generating realistic images and coordinates, mimicking voices, and displaying other capabilities absent from the technological landscape of just a few years ago. How could such unregulated intelligence be trusted with the lives of billions of people, the Earth itself, and the wider solar system we inhabit? The concern has spread to the intelligence community, research centres, and academic institutions in the United States, which have raised the alarm over hostile foreign entities that could gain access to advanced AI capabilities and use them to manage and activate nuclear weapons.
An open letter was recently published by a group of 11 current and former employees of OpenAI, the leading artificial intelligence company. The letter asserts that the financial motives of AI companies hinder effective oversight of AI development, potentially leading to scenarios beyond human control. It warns of the dangers of unregulated AI, from the spread of misinformation to the loss of control over autonomous AI systems at sensitive military sites, which could result in “human extinction.” The letter stresses that AI companies have only “weak commitments” to share information with governments about their systems’ capabilities and limitations, and that they cannot be relied upon to do so voluntarily.
The open letter is the latest in a series raising safety concerns about generative artificial intelligence, a technology advancing at an astonishing pace and ever-lower cost. It is clear that the verbal agreement between Biden and his Chinese counterpart was a rhetorical and moral gesture with no binding authority, ultimately futile so long as no effective regulations exist to limit, or at least control, what is deemed unethical or immoral. I am reminded of a remarkable work by the esteemed writer Lenin El-Ramly, his play “The Barbarian,” performed by the artist Mohamed Sobhi nearly 40 years ago, as if it were a forewarning of a dark future that could drag humanity back to primitive, barbaric times.
Dr. Hatem Sadek – Professor at Helwan University