A new report commissioned by the U.S. State Department paints an alarming picture of the "catastrophic" national security risks posed by rapidly evolving artificial intelligence, warning that time is running out for the federal government to avert disaster.

The findings were based on interviews with more than 200 people over more than a year, including top executives from leading AI companies, cybersecurity researchers, weapons of mass destruction experts and national security officials inside the government.

The report, released this week by Gladstone AI, flatly states that the most advanced AI systems could, in a worst case, "pose an extinction-level threat to the human species."

A U.S. State Department official confirmed to CNN that the agency commissioned the report as it constantly assesses how AI is aligned with its goal to protect U.S. interests at home and abroad. However, the official stressed the report does not represent the views of the U.S. government.

The warning in the report is another reminder that although the potential of AI continues to captivate investors and the public, there are real dangers too.

"AI is already an economically transformative technology. It could allow us to cure diseases, make scientific discoveries, and overcome challenges we once thought were insurmountable," Jeremie Harris, CEO and co-founder of Gladstone AI, told CNN on Tuesday.

"But it could also bring serious risks, including catastrophic risks, that we need to be aware of," Harris said. "And a growing body of evidence, including empirical research and analysis published in the world's top AI conferences, suggests that above a certain threshold of capability, AIs could potentially become uncontrollable."

White House spokesperson Robyn Patterson said President Joe Biden's executive order on AI is the "most significant action any government in the world has taken to seize the promise and manage the risks of artificial intelligence."

"The President and Vice President will continue to work with our international partners and urge Congress to pass bipartisan legislation to manage the risks associated with these emerging technologies," Patterson said.

'Clear and urgent need' to intervene

Researchers warn of two central dangers broadly posed by AI.

First, Gladstone AI said, the most advanced AI systems could be weaponized to inflict potentially irreversible damage. Second, the report said there are private concerns within AI labs that at some point they could "lose control" of the very systems they're developing, with "potentially devastating consequences to global security."

"The rise of AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons," the report said, adding there is a risk of an AI "arms race," conflict and "WMD-scale fatal accidents."

Gladstone AI's report calls for dramatic new steps aimed at confronting this threat, including launching a new AI agency, imposing "emergency" regulatory safeguards and limiting how much computing power can be used to train AI models.

"There is a clear and urgent need for the U.S. government to intervene," the authors wrote in the report.

Safety concerns

Harris, the Gladstone AI executive, said the "unprecedented level of access" his team had to officials in the public and private sector led to the startling conclusions. Gladstone AI said it spoke to technical and leadership teams from ChatGPT owner OpenAI, Google DeepMind, Facebook parent Meta and Anthropic.

"Along the way, we learned some sobering things," Harris said in a video posted on Gladstone AI's website announcing the report. "Behind the scenes, the safety and security situation in advanced AI seems pretty inadequate relative to the national security risks that AI may introduce fairly soon."

Gladstone AI's report said that competitive pressures are pushing companies to accelerate development of AI "at the expense of safety and security," raising the prospect that the most advanced AI systems could be "stolen" and "weaponized" against the United States.

The conclusions add to a growing list of warnings about the existential risks posed by AI, including even from some of the industry's most powerful figures.

Nearly a year ago, Geoffrey Hinton, known as the "Godfather of AI," quit his job at Google and blew the whistle on the technology he helped develop. Hinton has said there is a 10% chance that AI will lead to human extinction within the next three decades.

Hinton and dozens of other AI industry leaders, academics and others signed a statement last June that said "mitigating the risk of extinction from AI should be a global priority."

Business leaders are increasingly concerned about these dangers, even as they pour billions of dollars into AI. At last year's Yale CEO Summit, 42% of the CEOs surveyed said AI has the potential to destroy humanity five to ten years from now.

Human-like abilities to learn

In its report, Gladstone AI noted some of the prominent individuals who have warned of the existential risks posed by AI, including Elon Musk, Federal Trade Commission Chair Lina Khan and a former top executive at OpenAI.

Some employees at AI companies are sharing similar concerns in private, according to Gladstone AI.

"One individual at a well-known AI lab expressed the view that, if a specific next-generation AI model were ever released as open-access, this would be 'horribly bad,'" the report said, "because the model's potential persuasive capabilities could 'break democracy' if they were ever leveraged in areas such as election interference or voter manipulation."

Gladstone said it asked AI experts at frontier labs to privately share their personal estimates of the chance that an AI incident could lead to "global and irreversible effects" in 2024. The estimates ranged from 4% to as high as 20%, according to the report, which notes that the estimates were informal and likely subject to significant bias.

One of the biggest wildcards is how fast AI evolves, specifically AGI, a hypothetical form of AI with a human-like or even superhuman ability to learn.

The report says AGI is viewed as the "primary driver of catastrophic risk from loss of control" and notes that OpenAI, Google DeepMind, Anthropic and Nvidia have all publicly stated AGI could be reached by 2028, although others think it's much, much further away.

Gladstone AI notes that disagreements over AGI timelines make it hard to develop policies and safeguards, and that there is a risk that, if the technology develops more slowly than expected, regulation could "prove harmful."