
Learn Prompting

@learnprompting

Creators of the Internet's 1st Prompt Engineering Guide. Trusted by 3M Users. Compete for $100K in Largest AI Red Teaming Competition: http://hackaprompt.com

Pinned Tweet

🚨 Announcing HackAPrompt 2.0, the World's Largest AI Red Teaming competition 🚨 It's simple: "Jailbreak" or Hack the AI models to say or do things they shouldn't. Compete for over $110,000 in prizes. Sponsored by @OpenAI, @CatoNetworks, @pangeacyber, and many others. Starting…


New AI Courses are live on learnprompting.org! The following courses have been completely revamped on LP:
- Introduction to Prompt Engineering
- Advanced Prompt Engineering
- Introduction to Prompt Hacking
... and four more advanced courses. We are currently offering the…

learnprompting.org

Learn Prompting: Your Guide to Communicating with AI

Learn Prompting is the largest and most comprehensive course in prompt engineering available on the internet, with over 60 content modules, translated into 9 languages, and a thriving community.


AI Jailbreaking PvP | David Willis-Owen, Hacker Relations @ HackAPrompt x.com/i/broadcasts/1…


AI Jailbreaking PvP Mode! | David Willis-Owen, Hacker Relations @ HackAPrompt x.com/i/broadcasts/1…


Jailbreaking GPT-5's New Moderation, with David M. x.com/i/broadcasts/1…




Our team at @hackaprompt published an AI Security paper with @OpenAI, @AnthropicAI, and @GoogleDeepMind! Check it out!

We partnered w/ @OpenAI, @AnthropicAI, & @GoogleDeepMind to show that the way we evaluate new models against Prompt Injection/Jailbreaks is BROKEN We compared Humans on @HackAPrompt vs. Automated AI Red Teaming Humans broke every defense/model we evaluated… 100% of the time🧵


Learn Prompting reposted

5 years ago, I wrote a paper with @wielandbr @aleks_madry and Nicholas Carlini that showed that most published defenses in adversarial ML (for adversarial examples at the time) failed against properly designed attacks. Has anything changed? Nope...


How To JAILBREAK Claude Sonnet 4.5 | David Willis-Owen, Hacker Relations x.com/i/broadcasts/1…


Learn Prompting reposted

Signs of AI Writing - Promotional Language


AI Jailbreaking on HackAPrompt | David Willis-Owen, Hacker Relations x.com/i/broadcasts/1…

