
Luke Muehlhauser

@lukeprog

Open Philanthropy Program Director, AI Governance and Policy

Pinned

Reminder that the nonprofit AI safety field (or "industrial complex" lol) is massively, massively outgunned by, well… The Actual AI Industry. archive.is/DFvLR


Luke Muehlhauser reposted

“If you’re going to work on export controls, make sure your boss is prepared to have your back,” one staffer told me. For months, I’ve heard about widespread fear among think tank researchers who publish work against NVIDIA’s interests. Here’s what I’ve learned:🧵


Luke Muehlhauser reposted

I thought this post raised a bunch of interesting points/questions I wanted to respond to. Also, I would be open to chatting about any of this @sriramk if you ever want to - I’m proud of our work in this space and happy to answer questions on it! Before I dive in I wanted to…

On AI safety lobbying: Fascinating to see the reaction on X to @DavidSacks post yesterday especially from the AI safety/EA community. Think a few things are going on (a) the EA / AI safety / "doomer" lobby were natural allies with the left and now find themselves out of power.…



Luke Muehlhauser reposted

Great 🧵 about the power of far-sighted and risk-taking philanthropists! Thanks to @open_phil for your early and consistent support. @cayimby could not have achieved these victories without you!

We've been funding @CAYIMBY, which championed this work, since its early days, and it's been fun to look back at how far they've come. After a prior incarnation of SB 79 failed in 2018, we renewed our funding and I wrote in our renewal template:



Luke Muehlhauser reposted

These are exactly the right sorts of questions to be asking about automated AI R&D. Props to @RepNateMoran for improving the conversation within Congress!


Luke Muehlhauser reposted

A lot of people seem to implicitly assume that China is going with an entirely libertarian approach to AI regulation, which would be weird given that they are an authoritarian country. Does this look like a libertarian AI policy regime to you?


Luke Muehlhauser reposted

Now's the time for other funders to get involved in AI safety and security:
- AI advances have created more great opportunities
- Recent years show progress is possible
- Policy needs diverse funding; other funders can beat Good Ventures' marginal $ by 2-5x 🧵 x.com/albrgr/status/…

Since 2015, seven years before the launch of ChatGPT, @open_phil has been funding efforts to address potential catastrophic risks from AI. In a new blog post, @emilyoehlsen and I discuss our history in the area and explain our current strategy. x.com/albrgr/status/…



Luke Muehlhauser reposted

1/3 Our Biosecurity and Pandemic Preparedness team is hiring, and is also looking for promising new grantees! The team focuses on reducing catastrophic and existential risks from biology — work that we think is substantially neglected.


Luke Muehlhauser reposted

Many AI policy decisions are complicated. "Don't ban self-driving cars" is really not. Good new piece from @KelseyTuoc, with a lede that pulls no punches:


Luke Muehlhauser reposted

Some people think @open_phil are luddites because we work on AGI safety, and others think we’re techno-utopians because we work on abundance and scientific progress. We’re neither. Here's why we think safety and accelerating progress go hand in hand, in spite of the tensions:🧵


Kudos to @Scott_Wiener and @GavinNewsom and others for getting this over the line!!

Yesterday, @CAgovernor signed the first law in the US to directly regulate catastrophic risks from AI systems. I read the thing so you don't have to.



Luke Muehlhauser reposted

FAI is hiring! Come join the AI policy team to work with @hamandcheese, @deanwball, and me at @JoinFAI.


Luke Muehlhauser reposted

METR is a non-profit research organization, and we are actively fundraising! We prioritise independence and trustworthiness, which shapes both our research process and our funding options. To date, we have not accepted funding from frontier AI labs.


Luke Muehlhauser reposted

Excited to share details on two of our longest running and most effective safeguard collaborations, one with Anthropic and one with OpenAI. We've identified—and they've patched—a large number of vulnerabilities and together strengthened their safeguards. 🧵 1/6


Once again I would like to note that the nonprofit AI safety field is massively outgunned by the AI industry. wsj.com/politics/silic…

Some people are pretending to doubt this, but e.g. the entire nonprofit AI safety ecosystem runs on much less than $1B/yr, whereas annual expenditures for just three leading AI companies (MSFT, GOOG, META) are >$450B/yr.


