Existential Risk Observatory ⏸
@XRobservatory
Reducing AI x-risk by informing the public. We propose a Conditional AI Safety Treaty: https://time.com/7171432/conditional-ai-safety-treaty-trump/
Today, we propose the Conditional AI Safety Treaty in @TIME as a solution to AI's existential risks. AI poses a risk of human extinction, but this problem is not unsolvable. The Conditional AI Safety Treaty is a global response to avoid losing control over AI. How does it work?
Sometimes, it is hard to believe that this is all real. Are people really building a machine that could be about to kill every living thing on this planet? If this is not true, why are the best scientists in the world saying it is? If this is true, why is no one trying to do…
Our public defender successfully subpoenaed Sam Altman to appear at our trial where we will be tried for non-violently blocking the front door of OpenAI on multiple occasions and blocking the road in front of their office. All of our non-violent actions against OpenAI were an…
We agree with @GaryMarcus that AGI is not here yet. We are afraid though that when it is, it may take over and possibly cause human extinction.
Shame on the @ft for this confused and deeply misleading headline alleging that “AI pioneers claim human-level general intelligence is here,” which is absolutely false. - Bengio, shown and quoted, just published an article (with me and 30 others) claiming that AGI is NOT in fact…
Raising awareness about the existential risks of AI remains crucial. We think this documentary deserves a wide audience.
Making God - the documentary on AI you might actually watch all the way through. Watch our teaser below.
Another amazing comms effort by @So8res and @MIRIBerkeley! Increasingly, the public seems to believe the sincerity of those sounding the existential alarm bells, and seems to be open to the concept that ASI will actually go horribly wrong. That's major, hard-fought progress!
This interview is huge. Hank Green's audience (liberal, smart) has been somewhat skeptical of the idea that AGI could be here soon. But with this video, the comments are overwhelmingly positive. I think the skepticism has mostly been based on vibes. It's enticing to lump AI…
The safety mentioned by OpenAI is wholly inadequate for takeover-level AI. If anyone builds it, everyone may well die. Fortunately, OpenAI does not seem to have any more insight into how to build such AI than anyone else.
Yesterday we did a livestream. TL;DR: We have set internal goals of having an automated AI research intern by September of 2026 running on hundreds of thousands of GPUs, and a true automated AI researcher by March of 2028. We may totally fail at this goal, but given the…
Gary Marcus is right in the NY Times that we should pursue narrow AI rather than AGI. Narrow AI is often more effective at the task at hand. It also won't kill you and your family. Gary Marcus is also wrong to say that "Shifting focus away from chatbots doesn’t mean that…
Should we continue to pursue AGI, right now? And what the heck is AGI, anyway? I have three new articles today on these questions. The first (on whether chatbots are a waste of AI’s potential) is in the New York Times. The second is on AGI definitions, with @DanHendrycks,…
You might like
- PauseAI ⏸ (@PauseAI)
- The Centre for Long-Term Resilience (@LongResilience)
- Jess Whittlestone (@jesswhittles)
- Siméon (@Simeon_Cps)
- Otto Barten ⏸ (@BartenOtto)
- Katja Grace 🔍 (@KatjaGrace)
- Center for AI Safety (@ai_risks)
- Markus Anderljung (@Manderljung)
- David Krueger (@DavidSKrueger)
- Nik Samoylov (@NikSamoylov)
- Andrea Miotti (@_andreamiotti)
- Tyler John (@tyler_m_john)
- Joe Carlsmith (@jkcarlsmith)
- AI Impacts (@AIImpacts)
- Center for Human-Compatible AI (@CHAI_Berkeley)