Ivan | AI | Automation
@aivanlogic

Co-Founder of @sentient_agency (25+ million follower network). Worked with @meta @openai and other tech giants.

Pinned

#ChatGPT is the best free teacher. But most people only leverage 10% of its capability. Here are the best prompts to learn anything faster:


🚨 The biggest myth in AI safety just exploded

Everyone's been acting like "machine unlearning" is the magic fix. Delete the bad data, make the model safe. Simple, right?

Wrong.

Oxford + MIT just dropped a paper that basically says: none of this works.

Unlearning sounds neat…

Holy shit… Stanford just killed prompt engineering 🤯

They just dropped a paper so wild it proves we've been prompting AIs wrong for years.

It's called 'Verbalized Sampling', and it unlocks the trapped creativity inside every aligned model: no fine-tuning, no retraining, just…
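The core idea behind verbalized sampling can be sketched in a few lines: instead of asking the model for one answer, ask it to verbalize several candidate answers with probabilities, then sample locally from that distribution. The prompt wording and the `<prob> :: <response>` output format below are my own illustrative assumptions, not the paper's exact specification.

```python
# Hedged sketch of verbalized sampling: build a prompt that asks for a
# distribution of responses, then parse and sample from the model's output.
import random
import re

def build_vs_prompt(task: str, k: int = 5) -> str:
    """Wrap a task in a verbalized-sampling instruction (illustrative wording)."""
    return (
        f"Generate {k} different responses to the task below. "
        f"Output each on its own line as '<probability> :: <response>', "
        f"where the probability reflects how likely you would be to give "
        f"that response.\n\nTask: {task}"
    )

def sample_from_verbalized(output: str) -> str:
    """Parse '<prob> :: <response>' lines and sample one response by weight."""
    pairs = []
    for line in output.splitlines():
        m = re.match(r"\s*([0-9.]+)\s*::\s*(.+)", line)
        if m:
            pairs.append((float(m.group(1)), m.group(2).strip()))
    weights = [p for p, _ in pairs]
    responses = [r for _, r in pairs]
    return random.choices(responses, weights=weights, k=1)[0]

# Example with a mocked model reply (no API call; any LLM client would slot in):
mock_reply = (
    "0.5 :: a story about a lighthouse\n"
    "0.3 :: a story about a fox\n"
    "0.2 :: a story about rain"
)
print(sample_from_verbalized(mock_reply))
```

The point of the local sampling step is that an aligned model, asked for one answer, tends to collapse onto its single most typical response; asking it to describe a distribution and sampling from it client-side recovers some of that lost diversity.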

MIT and Oxford just broke one of AI's biggest illusions.

They found that "machine unlearning", the idea that we can just delete bad data to make models safe, doesn't actually work.

Here's what they discovered:

• Models rebuild deleted knowledge from what's left.
• Dual-use…
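The first bullet can be illustrated with a toy example of my own (not the paper's experiment): delete a fact, and it can still be re-derived from facts that remain, which is exactly why erasing training data doesn't guarantee erasing the knowledge.

```python
# Toy sketch: the "unlearned" fact (paris, capital_of, france) is absent,
# but the remaining relations let us reconstruct it by simple inference.
facts = {
    ("eiffel_tower", "located_in", "paris"),
    ("eiffel_tower", "located_in_capital_of", "france"),
    # ("paris", "capital_of", "france")  <- deleted / "unlearned"
}

def rederive_capitals(facts):
    """Re-derive (X, capital_of, Y) from pairs of remaining relations:
    if A is located_in X and A is located_in_capital_of Y, then X is Y's capital."""
    derived = set()
    for (a, r1, x) in facts:
        for (a2, r2, y) in facts:
            if a == a2 and r1 == "located_in" and r2 == "located_in_capital_of":
                derived.add((x, "capital_of", y))
    return derived

print(rederive_capitals(facts))  # the deleted fact comes back
```

A neural model does this implicitly rather than symbolically, but the failure mode is the same: knowledge is redundant across the training set, so removing one copy leaves the inference path intact.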
