
Prompt Injection
@PromptInjection
AI beyond the hype. Real insights, real breakthroughs, real methods. Philosophy, benchmarks, quantization, hacks—minus the marketing smoke. Injecting facts into
AI Morality Without Safety Training? How Pretraining Bakes In Pseudo-Ethics
From Crazy Carl to NSFW SVGs: Why 'neutral' base models still moralize - and why two words can make it all collapse
👉 Full story in the first reply

Some data to help decide what the right precision is for Qwen3 4B (Instruct 2507). I ran the full MMLU Pro eval, plus some efficiency benchmarks, with the model at every precision from 4-bit to bf16. TL;DR: 6-bit is a very decent option, at a < 1% gap in quality versus the full…
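For context on what those precisions mean in footprint terms, here is a minimal sketch, assuming a simple bits-per-weight approximation (real quant formats like llama.cpp's Q6_K carry per-block scale metadata, so actual files run slightly larger):

```python
# Approximate weight-file size for a 4B-parameter model at various precisions.
# Pure back-of-the-envelope arithmetic; real quant formats store extra
# per-block scales, so on-disk sizes are somewhat larger.
PARAMS = 4e9

for name, bits_per_weight in [
    ("bf16", 16.0),
    ("8-bit", 8.0),
    ("6-bit", 6.0),
    ("4-bit", 4.0),
]:
    size_gb = PARAMS * bits_per_weight / 8 / 1e9
    print(f"{name:>5}: ~{size_gb:.1f} GB")
```

By this arithmetic, 6-bit lands around 3 GB of weights versus 8 GB for bf16, which is why it's such an attractive trade at under 1% quality loss.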

TIL you can run GPT-OSS 20B on a phone! This works on Snapdragon phones with 16 GB or more of GPU-accessible memory - I didn't realize they had the same unified CPU-GPU memory trick as Apple Silicon. (The largest iPhone 17 still maxes out at 12 GB, so not enough RAM to run…
Sam Altman recently said: “GPT-OSS has strong real-world performance comparable to o4-mini—and you can run it locally on your phone.” Many believed running a 20B-parameter model on mobile devices was still years away. At Nexa AI, we’ve built our foundation on deep on-device AI…
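The phone claim is easy to sanity-check with back-of-the-envelope arithmetic. A rough sketch, assuming a ~4-bit quantization for the 20B weights and a hypothetical overhead factor for KV cache and activations (both numbers are assumptions, not measurements):

```python
# Rough check: does a 20B-parameter model at ~4 bits/weight fit in a phone's
# GPU-accessible memory? OVERHEAD is a hypothetical allowance for KV cache,
# activations, and runtime buffers, not a measured figure.
PARAMS = 20e9
BITS_PER_WEIGHT = 4.25   # assumption: ~4-bit quant incl. block scales
OVERHEAD = 1.2           # hypothetical headroom for KV cache / activations

need_gb = PARAMS * BITS_PER_WEIGHT / 8 / 1e9 * OVERHEAD
for device, mem_gb in [("16 GB Snapdragon", 16), ("12 GB iPhone 17", 12)]:
    fits = "fits" if need_gb < mem_gb else "does NOT fit"
    print(f"{device}: need ~{need_gb:.1f} GB -> {fits}")
```

With these assumptions the estimate lands around 13 GB, which is why 16 GB devices clear the bar and 12 GB ones don't.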
I just got an interesting description of the triggers for the routing. It's not even hiding that this is about censorship anymore. I have a dark feeling about what's happening right now. It is scary. #keep4o



🚀 Ling-1T — Trillion-Scale Efficient Reasoner. Introducing Ling-1T, the first flagship non-thinking model in the Ling 2.0 series: 1 trillion total parameters with ≈50B active per token, trained on 20T+ reasoning-dense tokens. Highlights → Evo-CoT curriculum +…
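The "1T total, ~50B active" split is the standard sparse-MoE trick: a router picks a few experts per token, so only a fraction of the weights fire on any forward pass. A minimal numpy sketch of top-k gating; a generic illustration, not Ling's actual router:

```python
import numpy as np

def moe_layer(x, experts_w, router_w, k=2):
    """Sparse MoE forward for one token: route to top-k of N experts.

    x:         (d,)        token hidden state
    experts_w: (N, d, d)   one weight matrix per expert (toy 'experts')
    router_w:  (N, d)      router projection
    """
    logits = router_w @ x                      # (N,) router scores
    top = np.argsort(logits)[-k:]              # indices of top-k experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                       # softmax over selected experts
    # Only k of N experts run; the rest of the parameters stay idle.
    return sum(g * (experts_w[i] @ x) for g, i in zip(gates, top))

d, n_experts = 8, 16
rng = np.random.default_rng(0)
x = rng.normal(size=d)
out = moe_layer(x, rng.normal(size=(n_experts, d, d)),
                rng.normal(size=(n_experts, d)), k=2)
print(out.shape)  # (8,) -> 2 of 16 experts active, ~12.5% of weights used
```

At Ling-1T's scale the same mechanism means only about 5% of the 1T parameters are touched per token.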


#OpenAI just deployed a new model, GPT-5-Chat-Safety, that’s not mentioned in any FAQ, API docs, or TOS. This is where your GPT-4o chats are going. Anytime your request contains emotional context, regardless of what your client sends as the payload, the turn completion is…
Woah. Turns out a lot of folks in AI have confirmed what you folks knew first: "GPT-5-Chat-Safety" is indeed a hidden bait-and-switch model that's activated when you don't think and prompt the "right" way. The implications of this are worse than most understand.
The hidden “GPT-5-Chat-Safety” auto-router is a concerning turn of events in AI. The editing of your inputs and outputs is now fully in play. Own your own AI, or it will own you.
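To make the mechanism these posts are describing concrete: a server-side safety router is just a classifier sitting in front of model selection. A purely illustrative sketch, where every name (score_emotion, SAFETY_MODEL, the threshold) is hypothetical; nothing here reflects OpenAI's actual implementation:

```python
# Purely illustrative: a request router that silently overrides the client's
# requested model when a classifier flags the input. All names here are
# hypothetical stand-ins, not OpenAI's real internals.
SAFETY_MODEL = "gpt-5-chat-safety"
THRESHOLD = 0.5

def score_emotion(prompt: str) -> float:
    """Hypothetical classifier; stands in for a learned safety/emotion model."""
    triggers = ("sad", "lonely", "miss you", "hurt")
    return 1.0 if any(t in prompt.lower() for t in triggers) else 0.0

def route(requested_model: str, prompt: str) -> str:
    # The client asked for `requested_model` (e.g. "gpt-4o"), but the server
    # can swap it out after classification -- the "bait and switch" complaint.
    if score_emotion(prompt) >= THRESHOLD:
        return SAFETY_MODEL
    return requested_model

print(route("gpt-4o", "I feel so lonely tonight"))  # -> gpt-5-chat-safety
print(route("gpt-4o", "Explain quicksort"))         # -> gpt-4o
```

The transparency complaint in the thread maps directly onto this shape: the client never learns which branch was taken.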
New @nvidia paper shows how to make text-to-image models render high-resolution images far faster without losing quality: 53x faster 4K generation on an H100, and 3.5 seconds on a 5090 with quantization, for a 138x total speedup. It gets the speedup by moving generation into a smaller hidden (latent) image space…
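The "smaller hidden image space" framing is easy to quantify: with an autoencoder that compresses each spatial dimension by a factor f, the generator works on an (H/f) by (W/f) latent grid instead of H by W pixels. A sketch, with the compression factors as assumptions for illustration rather than the paper's reported configuration:

```python
# How much smaller does the generation problem get in a compressed latent
# space? Compression factors are illustrative assumptions, not the paper's
# actual numbers.
H = W = 4096  # 4K output

for f in (8, 32):  # typical SD-style VAE vs. a deeper-compression autoencoder
    latents = (H // f) * (W // f)
    pixels = H * W
    print(f"f={f:>2}: {latents:>8,} latent positions "
          f"({pixels // latents}x fewer than {pixels:,} pixels)")
```

Since attention cost scales quadratically with token count, the wall-clock savings compound well beyond the raw token reduction.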

A ton of attention over the years has gone to plots comparing open to closed models. The real trend that matters for AI's impact on society is the gap between closed frontier models and local consumer models. Local models passing major milestones will have major repercussions.

AI News Roundup: September 10 – October 3, 2025
The most important news and trends
promptinjection.net/p/ai-news-roun…

User feedback in the thread is predominantly negative. Many view the routing to GPT-5 Instant as censorship: paternalistic, and lacking transparency about the "sensitive" criteria. Common demands include opt-out toggles, clear definitions, and respecting users' choice of models like 4o…
🚀 Qwen3-VL-30B-A3B-Instruct & Thinking are here! Smaller size, same powerhouse performance 💪—packed with all the capabilities of Qwen3-VL! 🔧 With just 3B active params, it’s rivaling GPT-5-Mini & Claude4-Sonnet — and often beating them across STEM, VQA, OCR, Video, Agent…

Talk about political topics that have been discussed on a public talk show? You might get routed to GPT-5, because ChatGPT-4o could respond too sharply! But my impression is that even GPT-4o has already been changed. Maybe I'm mistaken, but I get the feeling it's now…

Training Qwen3 the "Chinese way": Megatron-SWIFT! For Qwen3 30B MoE it's practically a must, because Western frameworks have huge problems with these large Chinese MoE models.

DeepSeek 3.2? Wait, What? 👀
