The expert consensus is clear: unchecked development of superintelligent AI carries grave risks. Demand that your U.S. representatives take immediate action to ensure AI safety.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

— Statement published by the Center for AI Safety and signed by Bill Gates, Sam Altman, Demis Hassabis, and hundreds of AI scientists

Experts Are Sounding the Alarm

"If [AI models] are much smarter than us, they’ll be very good at manipulating us. You won’t realize what’s going on."

Geoffrey Hinton

"Godfather of AI" & Nobel Prize Winner

"A sandwich has more regulation than AI... We are building machines that are smarter than us, and we have no guarantee that we can control them."

Yoshua Bengio

Turing Award Winner & AI Pioneer

"A misaligned superintelligent AGI could cause grievous harm to the world"

Sam Altman

CEO of OpenAI

"The challenge now is the containment of the unleashed power and ensuring it continues to serve us and the planet."

Mustafa Suleyman

CEO of Microsoft AI

"In a benign scenario, probably none of us will have a job. And in the negative scenario, well, all bets are off. We're in deep trouble."

Elon Musk

CEO of Tesla, SpaceX, and Neuralink

"There's a 25% chance that the future of AI will go really, really badly"

Dario Amodei

CEO of Anthropic

Take Action Now: Contact Your U.S. Representatives