
Luke Muehlhauser

@lukeprog

Open Philanthropy Program Director, AI Governance and Policy

Pinned Tweet

Reminder that the nonprofit AI safety field (or "industrial complex" lol) is massively, massively outgunned by, well… The Actual AI Industry. archive.is/DFvLR


Luke Muehlhauser reposted

Great 🧵 about the power of far-sighted and risk-taking philanthropists! Thanks to @open_phil for your early and consistent support. @cayimby could not have achieved these victories without you!

We've been funding @CAYIMBY, which championed this work, since its early days, and it's been fun to look back at how far they've come. After a prior incarnation of SB 79 failed in 2018, we renewed our funding and I wrote in our renewal template:



Luke Muehlhauser reposted

These are exactly the right sorts of questions to be asking about automated AI R&D. Props to @RepNateMoran for improving the conversation within Congress!


Luke Muehlhauser reposted

A lot of people seem to implicitly assume that China is going with an entirely libertarian approach to AI regulation, which would be weird given that they are an authoritarian country. Does this look like a libertarian AI policy regime to you?


Luke Muehlhauser reposted

Now's the time for other funders to get involved in AI safety and security:
- AI advances have created more great opportunities
- Recent years show progress is possible
- Policy needs diverse funding; other funders can beat Good Ventures' marginal $ by 2-5x🧵 x.com/albrgr/status/…

Since 2015, seven years before the launch of ChatGPT, @open_phil has been funding efforts to address potential catastrophic risks from AI. In a new blog post, @emilyoehlsen and I discuss our history in the area and explain our current strategy. x.com/albrgr/status/…



Luke Muehlhauser reposted

1/3 Our Biosecurity and Pandemic Preparedness team is hiring, and is also looking for promising new grantees! The team focuses on reducing catastrophic and existential risks from biology — work that we think is substantially neglected.


Luke Muehlhauser reposted

Many AI policy decisions are complicated. "Don't ban self-driving cars" is really not. Good new piece from @KelseyTuoc, with a lede that pulls no punches:


Luke Muehlhauser reposted

Some people think @open_phil are luddites because we work on AGI safety, and others think we’re techno-utopians because we work on abundance and scientific progress. We’re neither. Here's why we think safety and accelerating progress go hand in hand, in spite of the tensions:🧵


Kudos to @Scott_Wiener and @GavinNewsom and others for getting this over the line!!

Yesterday, @CAgovernor signed the first law in the US to directly regulate catastrophic risks from AI systems. I read the thing so you don't have to.



Luke Muehlhauser reposted

FAI is hiring! Come join the AI policy team to work with @hamandcheese, @deanwball, and me at @JoinFAI.


Luke Muehlhauser reposted

METR is a non-profit research organization, and we are actively fundraising! We prioritise independence and trustworthiness, which shapes both our research process and our funding options. To date, we have not accepted funding from frontier AI labs.


Luke Muehlhauser reposted

Excited to share details on two of our longest running and most effective safeguard collaborations, one with Anthropic and one with OpenAI. We've identified—and they've patched—a large number of vulnerabilities and together strengthened their safeguards. 🧵 1/6


Once again I would like to note that the nonprofit AI safety field is massively outgunned by the AI industry. wsj.com/politics/silic…

Some people are pretending to doubt this, but e.g. the entire nonprofit AI safety ecosystem runs on much less than $1B/yr, whereas annual expenditures for just three leading AI companies (MSFT, GOOG, META) are >$450B/yr.



Luke Muehlhauser reposted

Congratulations to Giving What We Can on reaching 10,000 pledgers who donate 10% of their income to high-impact charities! x.com/givingwhatweca…


10,000 people across the globe have now taken the 🔸10% Pledge 🚀
> $300M already donated
> $1B lifetime committed
10,000 people from 116 countries 🌐
This is a movement of action and optimism, funding work to solve the world's most pressing problems 🔥 givingwhatwecan.org/pledge



Luke Muehlhauser reposted

Exciting news! We saturated my $250,000 match! @patrickc is very generously contributing another $250,000 to extend the donation match! If we saturate Patrick’s match, we’ll have raised over $1 million dollars for @farmkind_giving! Thank you so much to the latest round of…

Huge thanks to this next round of folks for their contributions! Anonymous: $29,000 @kipperrii: $12,600 @JacobTref: $10,000 @DigitKyu: $5,000 @jimcox25: $4,000 @haoxingdu: $2,000 @louisvarge: $1,500 @Mjreard: $1,000 @maxwellfarrens: $1,000 We’ve raised over $220,000 so far!…



Luke Muehlhauser reposted

A quarter of a penny of charity per chicken.

Honestly the thing that motivated me to do this episode was learning that there's less than $200M/year of smart philanthropy on factory farming - GLOBALLY. Just to explain how fucking crazy that is:
1. It's insane how cheap the interventions that will spare BILLIONS of animals…



Nikolai Kapustin is the glorious musical future that Rhapsody in Blue promised us in 1924: open.spotify.com/playlist/61Ido…

