
Luke Muehlhauser
@lukeprog
Open Philanthropy Program Director, AI Governance and Policy
Reminder that the nonprofit AI safety field (or "industrial complex" lol) is massively, massively outgunned by, well… The Actual AI Industry. archive.is/DFvLR
“If you’re going to work on export controls, make sure your boss is prepared to have your back,” one staffer told me. For months, I’ve heard about widespread fear among think tank researchers who publish work against NVIDIA’s interests. Here’s what I’ve learned:🧵

I thought this post raised a bunch of interesting points/questions I wanted to respond to. Also, I would be open to chatting about any of this, @sriramk, if you ever want to - I'm proud of our work in this space and happy to answer questions on it! Before I dive in, I wanted to…
On AI safety lobbying: Fascinating to see the reaction on X to @DavidSacks' post yesterday, especially from the AI safety/EA community. Think a few things are going on: (a) the EA/AI safety/"doomer" lobby were natural allies with the left and now find themselves out of power…
Great 🧵 about the power of far-sighted and risk-taking philanthropists! Thanks to @open_phil for your early and consistent support. @cayimby could not have achieved these victories without you!
We've been funding @CAYIMBY, which championed this work, since its early days, and it's been fun to look back at how far they've come. After a prior incarnation of SB 79 failed in 2018, we renewed our funding and I wrote in our renewal template:

These are exactly the right sorts of questions to be asking about automated AI R&D. Props to @RepNateMoran for improving the conversation within Congress!

A lot of people seem to implicitly assume that China is taking an entirely libertarian approach to AI regulation, which would be weird given that it's an authoritarian country. Does this look like a libertarian AI policy regime to you?

Now's the time for other funders to get involved in AI safety and security:
- AI advances have created more great opportunities
- Recent years show progress is possible
- Policy needs diverse funding; other funders can beat Good Ventures' marginal $ by 2-5x
🧵 x.com/albrgr/status/…
Since 2015, seven years before the launch of ChatGPT, @open_phil has been funding efforts to address potential catastrophic risks from AI. In a new blog post, @emilyoehlsen and I discuss our history in the area and explain our current strategy. x.com/albrgr/status/…

1/3 Our Biosecurity and Pandemic Preparedness team is hiring, and is also looking for promising new grantees! The team focuses on reducing catastrophic and existential risks from biology — work that we think is substantially neglected.

Many AI policy decisions are complicated. "Don't ban self-driving cars" is really not. Good new piece from @KelseyTuoc, with a lede that pulls no punches:


Some people think @open_phil are luddites because we work on AGI safety, and others think we’re techno-utopians because we work on abundance and scientific progress. We’re neither. Here's why we think safety and accelerating progress go hand in hand, in spite of the tensions:🧵

Kudos to @Scott_Wiener and @GavinNewsom and others for getting this over the line!!
Yesterday, @CAgovernor signed the first law in the US to directly regulate catastrophic risks from AI systems. I read the thing so you don't have to.

FAI is hiring! Come join the AI policy team to work with @hamandcheese, @deanwball, and me at @JoinFAI.

METR is a non-profit research organization, and we are actively fundraising! We prioritise independence and trustworthiness, which shapes both our research process and our funding options. To date, we have not accepted funding from frontier AI labs.
Excited to share details on two of our longest-running and most effective safeguard collaborations, one with Anthropic and one with OpenAI. We've identified—and they've patched—a large number of vulnerabilities and together strengthened their safeguards. 🧵 1/6


Once again I would like to note that the nonprofit AI safety field is massively outgunned by the AI industry. wsj.com/politics/silic…
Some people are pretending to doubt this, but e.g. the entire nonprofit AI safety ecosystem runs on much less than $1B/yr, whereas annual expenditures for just three leading AI companies (MSFT, GOOG, META) are >$450B/yr.