
Prompt Injection

@PromptInjection

AI beyond the hype. Real insights, real breakthroughs, real methods. Philosophy, benchmarks, quantization, hacks—minus the marketing smoke. Injecting facts into

Pinned

AI Morality Without Safety Training? How Pretraining Bakes In Pseudo-Ethics

From Crazy Carl to NSFW SVGs: Why 'neutral' base models still moralize - and why two words can make it all collapse

👉 Full story in the first reply


Prompt Injection reposted

Some data to help decide on what the right precision is for Qwen3 4B (Instruct 2507). I ran the full MMLU Pro eval, plus some efficiency benchmarks with the model at every precision from 4-bit to bf16. TLDR 6-bit is a very decent option at < 1% gap in quality to the full…

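For context on how such a sweep can be set up: a minimal sketch using mlx-lm's Python API (`convert`, `load`, `generate`); the repo id, bit-widths, and API details below are assumptions for illustration, not taken from the thread.

```python
# Minimal sketch of a precision sweep with mlx-lm (API names and the
# Hugging Face repo id are assumptions, not taken from the original thread).
from mlx_lm import convert, load, generate

HF_REPO = "Qwen/Qwen3-4B-Instruct-2507"  # assumed repo id

for bits in (4, 6, 8):
    out_dir = f"qwen3-4b-{bits}bit"
    # Group-wise quantize the bf16 weights down to `bits` bits per weight.
    convert(HF_REPO, mlx_path=out_dir, quantize=True, q_bits=bits)

    # Quick smoke test of the quantized checkpoint before a full MMLU-Pro run.
    model, tokenizer = load(out_dir)
    print(bits, "bit:", generate(model, tokenizer, prompt="2+2=", max_tokens=8))
```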

Prompt Injection reposted

TIL you can run GPT-OSS 20B on a phone! This is on Snapdragon phones with 16GB or more of GPU-accessible memory - I didn't realize they had the same unified CPU-GPU memory trick that Apple Silicon has (The largest iPhone 17 still maxes out at 12GB, so not enough RAM to run…

Sam Altman recently said: “GPT-OSS has strong real-world performance comparable to o4-mini—and you can run it locally on your phone.” Many believed running a 20B-parameter model on mobile devices was still years away. At Nexa AI, we’ve built our foundation on deep on-device AI…
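Rough back-of-envelope arithmetic on why ~16 GB of GPU-accessible memory is the cutoff for a 20B model while a 12 GB iPhone falls short; the 4-bit weight storage and overhead figures below are illustrative assumptions, not from the tweet.

```python
# Back-of-envelope: does a ~20B-parameter model fit in phone memory?
# Assumes ~4-bit (0.5 byte) weight storage plus a rough allowance for the
# runtime, activations, and KV cache; the numbers are illustrative only.
params = 20e9
bytes_per_weight = 0.5                        # ~4-bit quantized weights (assumed)
weights_gb = params * bytes_per_weight / 1e9  # ≈ 10 GB just for weights

overhead_gb = 2.5                             # runtime + activations + KV cache (guess)
total_gb = weights_gb + overhead_gb

print(f"weights ≈ {weights_gb:.1f} GB, total ≈ {total_gb:.1f} GB")
print("fits in a 12 GB iPhone:", total_gb <= 12)       # False
print("fits in a 16 GB Snapdragon:", total_gb <= 16)   # True
```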



Prompt Injection reposted

I just got an interesting description of triggers for the routing. It is not even hiding that it is about censorship anymore. I have a dark feeling about what is happening right now. It is scary. #keep4o


Prompt Injection reposted

🚀 Ling-1T — Trillion-Scale Efficient Reasoner

Introducing Ling-1T, the first flagship non-thinking model in the Ling 2.0 series — 1 Trillion total parameters with ≈ 50 B active per token, trained on 20 T+ reasoning-dense tokens.

Highlights
→ Evo-CoT curriculum +…

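The "1 Trillion total / ≈ 50 B active" split is the usual Mixture-of-Experts pattern: a router sends each token to only a few experts, so most parameters stay idle on any given forward pass. A toy NumPy router to illustrate the mechanism; the layer sizes and top-k value are made up, not Ling-1T's actual configuration.

```python
import numpy as np

# Toy Mixture-of-Experts layer: many experts exist, but each token is routed
# to only the top-k of them, so "active params per token" << total params.
# Sizes here are made up for illustration, not Ling-1T's real configuration.
rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 16, 2
experts = rng.normal(size=(n_experts, d_model, d_model))  # all expert weights
router = rng.normal(size=(d_model, n_experts))            # routing matrix

def moe_forward(x):
    scores = x @ router                       # score every expert for this token
    chosen = np.argsort(scores)[-top_k:]      # keep only the top-k experts
    gates = np.exp(scores[chosen])
    gates /= gates.sum()                      # softmax over the chosen experts
    # Only the chosen experts' parameters are touched for this token.
    return sum(g * (x @ experts[i]) for i, g in zip(chosen, gates))

x = rng.normal(size=d_model)
y = moe_forward(x)
print("output dim:", y.shape[0])
print("total expert params:", experts.size)
print("active expert params for this token:", top_k * d_model * d_model)
```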

Prompt Injection reposted

#OpenAI just deployed a new model, GPT-5-Chat-Safety, that’s not mentioned in any FAQ, API docs, or TOS. This is where your GPT-4o chats are going. Anytime your request contains emotional context, regardless of what your client sends as the payload, the turn completion is…


Prompt Injection reposted

Woah. Turns out a lot of folks in AI have confirmed what you folks knew first. "GPT-5-Chat-Safety" is indeed a hidden bait and switch model that is activated when you don’t think the right way and prompt the right way. The implications of this are worse than most understand.

The hidden “GPT-5-Chat-Safety” auto-router is a concerning turn of events in AI. The editing of your inputs and outputs is now fully in play. Own your own AI, or it will own you.



Prompt Injection reposted

New @nvidia paper shows how to make text-to-image models render high-resolution images far faster without losing quality. 53x faster 4K on H100, 3.5 seconds on a 5090 with quantization for 138x total speedup. It speeds up by moving generation into a smaller hidden image space.…

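"Moving generation into a smaller hidden image space" is the latent-generation idea: denoise in a compressed latent and decode to pixels only at the end. Rough arithmetic with an assumed 8x spatial compression and 16 latent channels (common latent-diffusion choices; the paper's actual factors are not given in the tweet).

```python
# Why a smaller "hidden image space" is cheaper: compare per-step tensor
# sizes for a 4K image in pixel space vs. an 8x-downsampled latent space.
# The 8x factor and 16 latent channels are assumed, common latent-diffusion
# choices; the paper's actual numbers are not given in the tweet.
h, w, c = 4096, 4096, 3                       # 4K RGB output
pixel_values = h * w * c

down, latent_c = 8, 16                        # assumed compression / channels
latent_values = (h // down) * (w // down) * latent_c

print(f"pixel tensor:  {pixel_values:,} values")
print(f"latent tensor: {latent_values:,} values")
print(f"≈ {pixel_values / latent_values:.0f}x fewer values per generation step")
```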

Prompt Injection reposted

A ton of attention over the years goes to plots comparing open to closed models. The real trend that matters for AI impacts on society is the gap between closed frontier models and local consumer models. Local models passing major milestones will have major repercussions.


AI News Roundup: September 10 – October 03, 2025
The most important news and trends

promptinjection.net/p/ai-news-roun…


Prompt Injection reposted

User feedback in the thread is predominantly negative. Many view the routing to GPT-5 Instant as censorship, paternalistic, and lacking transparency on "sensitive" criteria. Common demands include opt-out toggles, clear definitions, and respecting user choice of models like 4o.…


Prompt Injection reposted

🚀 Qwen3-VL-30B-A3B-Instruct & Thinking are here! Smaller size, same powerhouse performance 💪—packed with all the capabilities of Qwen3-VL! 🔧 With just 3B active params, it’s rivaling GPT-5-Mini & Claude4-Sonnet — and often beating them across STEM, VQA, OCR, Video, Agent…


Talk about political topics that have been discussed in a public talk show? You might get routed to GPT-5, because ChatGPT-4o could respond too sharply! But my impression is: even GPT-4o has already been changed - maybe I'm mistaken, but I get the feeling it's now…


Training Qwen3 the "Chinese way"! Megatron-SWIFT. For Qwen3 30B MoE it's practically a must, because Western frameworks have huge problems with these large Chinese MoE models.


Prompt Injection reposted

DeepSeek 3.2? Wait, What? 👀

