Center for AI Safety

@CAIS

Reducing societal-scale risks from AI. http://safe.ai http://ai-frontiers.org

Building societal defenses is one vital part of mitigating the risks from AI. Join the def/acc hackathon tomorrow to help make that happen! luma.com/def-acc-hack-sf


Center for AI Safety reposted

Can AI automate jobs? We created the Remote Labor Index to test AI’s ability to automate hundreds of long, real-world, economically valuable projects from remote work platforms. While AIs are smart, they are not yet that useful: the current automation rate is less than 3%.


Center for AI Safety reposted

The term “AGI” is currently a vague, moving goalpost. To ground the discussion, we propose a comprehensive, testable definition of AGI. Using it, we can quantify progress: GPT-4 (2023) was 27% of the way to AGI. GPT-5 (2025) is 58%. Here’s how we define and measure it: 🧵


Applications are open for the Winter session (Nov 3–Feb 1) of our free, online AI Safety, Ethics, and Society course! Open to all fields, no technical background required. Apply by October 10. More info and applications: aisafetybook.com/virtual-course

