ControlAI
@ai_ctrl
Fighting to Keep Humanity in Control. Campaign: http://campaign.controlai.com Newsletter: https://controlai.news Discord: https://discord.com/invite/ptPScqtdc5
🚨OpenAI's latest blog post says superintelligence risks are "potentially catastrophic", and suggests the whole field might need to slow down development. They say nobody should deploy superintelligent AIs without being able to control them, and admit this still can't be done.
AI godfather Yoshua Bengio says it's a mistake to judge AI by what it can do today; the trajectory is exponential. "You have to look at the trajectory over the last decade and the trajectory is exponential progress on metrics like how do you do the job that a human would do."
Let's stop the corporate welfare for crybaby AI companies and treat them like we treat all other companies: with legally binding safety standards.
NEW: Over 80,000 have now joined the call to ban superintelligence! 95% of Americans say they don't want a race to build superintelligence. Max Tegmark says there are more regulations on sandwiches than on superintelligence.
There's a very simple argument for why developing superintelligence ends badly. Conjecture CEO Connor Leahy (@NPCollapse): "If you make something that is smarter than all humans, you don't know how to control it, how exactly does that turn out well for humans?"
And we now have another new supporter! The Rt Hon. the Lord Robathan has backed our campaign for binding regulation on the most powerful AI systems, acknowledging the extinction threat posed by AI! It's great to see so many coming together from across parties on this issue.
Experts continue to warn that superintelligence poses a risk of human extinction. In our newsletter this week, we're providing an important update on the progress of our UK campaign to prevent this threat, along with news on other developments in AI. controlai.news/p/85-uk-politi…
"No one can deny that this is real. " Conjecture CEO Connor Leahy (@NPCollapse) says the coalition calling for a ban on the development of superintelligence makes it harder and harder to ignore the danger of smarter-than-human AI.
AI godfather and Nobel Prize winner Geoffrey Hinton says AI companies are much more concerned with racing each other than ensuring that humanity actually survives. The second most-cited scientist in the world, Hinton has been warning repeatedly that superintelligence could cause…
AI godfather Geoffrey Hinton says countries will collaborate to prevent AI taking over. "On AI taking over they will collaborate 'cause nobody wants that. The Chinese Communist Party doesn't want AI to take over. Trump doesn't want AI to take over. They can collaborate on that."
The Guardian: Hundreds of AI safety and effectiveness tests have been found to be weak and flawed. UK AI Security Institute scientists and others checked over 440 benchmarks and found problems that undermine their validity. Robert Booth highlights that this comes after reports…
What happens if we don't cooperate to prevent dangerous AI development? What happens if rapid advancement of AI continues? Here's a new paper that models this. Without international coordination, there is no safe path. Take a look!
We model how rapid AI development may reshape geopolitics if there is no international coordination on preventing dangerous AI development. This results in bleak outcomes: a “winner” achieving permanent global dominance, human extinction, or preemptive major power war.
Our video with SciShow just passed 1M views in only 3 days! Out of the 204 videos they published in 2025, we are already:
- #1 in comments
- #3 in likes (closing in on #1)
- #5 in views
Audiences want to learn about AI risk, so if you're a creator, get in touch!
My biggest project yet is now LIVE! @hankgreen talks superintelligence, AI risk, and the many issues with AI today that present huge concerns as AI gets more powerful on SciShow. Happy Halloween! youtube.com/watch?v=90C3XV…
YouTube: We've Lost Control of AI