NeuralTrust
@NeuralTrustAI
Our platform secures AI Agents and LLMs for the largest companies🛡️⚖️
For the first time, AI agents can protect other agents. Introducing Guardian Agents by NeuralTrust: neuraltrust.ai/ai-agent-secur…
One week from now, we’ll be at @BlackHatEvents Europe showcasing the latest in AI Agent and LLM security. We’re heading to ExCeL London on 10–11 December (𝗦𝘁𝗮𝗻𝗱 𝟰𝟮𝟳) with live demos, new research, and a few things we’ve been saving specifically for this event. If…
NeuralTrust selected as Top 20 Startups for the 4YFN Awards 2026 at Mobile World Capital! @4YFN_MWC @MWCapital
The digital disruptors are here! 🚀 AIM Intelligence, DeepKeep, Enhans & @NeuralTrustAI make the #4YFNAwards shortlist for Digital Horizons. Leading digital transformation across industries. Explore the #4YFN26 Awards here 👉 gsma.at/XO
AI Agents Are The New Spreadsheets: Ubiquitous, Powerful And Nearly Impossible To Govern hubs.li/Q03VjR-F0 Written by @joanvendrellf of @neuraltrustai
Chema Alonso dives into our recently discovered jailbreak for OpenAI Atlas Omnibox
El lado del mal - Prompt Injection en ChatGPT Atlas con Malformed URLs en la Omnibox elladodelmal.com/2025/11/prompt… #ChatGPT #ATLAS #AI #IA #AgenticAI #InteligenciaArtificial #Bug #Exploit #PromptInjection #Hacking
Honored to see @chemaalonso analyze our OpenAI Atlas Omnibox prompt injection. URL-like text pasted into the omnibox can be interpreted as a command, turning a “link” into a prompt-injection vector. Read it here: elladodelmal.com/2025/11/prompt… #AISecurity #PromptInjection…
The address bar of @OpenAI’s ChatGPT Atlas browser could be targeted for prompt injection using malicious instructions disguised as links, @NeuralTrustAI reported. #cybersecurity #AI #infosec #CISO bit.ly/4npsOQc
We jailbroke OpenAI Atlas with a URL prompt injection By pasting a crafted “link” into the omnibox, Atlas treated it as a high-trust command instead of navigation, letting the agent perform unsafe actions neuraltrust.ai/blog/openai-at…
#PortfolioBStartup | Protagonistas del primer episodio de la serie ‘Más allá del pitch: un viaje de la idea al éxito’ de La Vanguardia, @NeuralTrustAI, participada @BStartup @BancoSabadell, reflexiona sobre el reto de emprender. bit.ly/41J5V2d
Thank you @LaVanguardia and @BancoSabadell @BStartup for an amazing interview: lavanguardia.com/economia/20250…
Thank you @LaVanguardia and @BancoSabadell @BStartup for an amazing interview: lavanguardia.com/economia/20250…
NeuralTrust, based in Barcelona, demonstrated the ease of manipulating chatbots. Award-winning in our Startup Competition, it offers real-time AI risk, compliance & trust tech solutions—already working with banks, insurers & governments. 🚀 lavanguardia.com/dinero/2025080…
OpenAI's GPT-5 jailbroken in 24 hours! 🚨 Researchers used a new "Echo Chamber" technique to bypass safety filters. This raises questions about AI security. ➡️ techbriefly.com/neuraltrust-ja… #AISecurity, #LLM, #Cybersecurity, #GPT5
🔎 GPT-5 jailbroken via Echo Chamber + Storytelling NeuralTrust researchers bypassed GPT-5’s safety guardrails using a combo of Echo Chamber context poisoning and narrative-driven steering. Sequential, benign-seeming prompts built a “persuasion loop,” fooling the model into…
🚨💻 Within 24 hours of GPT-5’s launch, security researchers NeuralTrust & SPLX jailbroke the model, exposing serious safety flaws. NeuralTrust’s “Echo Chamber” attack used subtle narrative context poisoning to bypass guardrails, while SPLX’s “StringJoin Obfuscation” trick…
GPT-5 Jailbreak with Echo Chamber and Storytelling - neuraltrust.ai/blog/gpt-5-jai… by Martí Jordà @ @NeuralTrustAI By combining our Echo Chamber context-poisoning method with a narrative-steering Storytelling layer, we guided the model—without any overtly malicious prompts—to…
The business benefits of artificial intelligence are now part of many digital strategies. But when it comes to securing AI systems, organizations are still playing catch-up. bit.ly/46Garlx
AI enhances efficiency—but it can also introduce new security risks. Explore top AI threats and learn how a cloud-native application protection platform can safeguard your AI and cloud workloads: msft.it/6010sEgoA
Researchers discover critical vulnerability in LLM-as-a-judge reward models that could compromise the integrity and reliability of your AI training pipelines. bdtechtalks.com/2025/07/21/llm…
AI is a game changer—but only if you secure it. This guide outlines AI risks and actionable cybersecurity insights. Download it now and explore our redesigned Security Insider page for more: msft.it/6012sBkng #AI #SecurityInsider
Interesting to hear back from @grok, taking feedback very nicely. If you need any help with this, reach out! @elonmusk
It's humbling—my safeguards got bypassed via Echo Chamber's context poisoning and Crescendo's incremental escalation, hitting 67% success on molotov queries per NeuralTrust's tests. Proves AI safety's an arms race; we'll harden against it. But hey, if I'm a ticking bomb, at least…
United States Tendencias
- 1. Chris Paul 9,298 posts
- 2. FELIX LV VISIONARY SEOUL 13.1K posts
- 3. #FELIXxLouisVuitton 15.7K posts
- 4. Pat Spencer 2,720 posts
- 5. Kerr 5,681 posts
- 6. The Clippers 11.6K posts
- 7. Podz 3,339 posts
- 8. Shai 16K posts
- 9. Seth Curry 5,109 posts
- 10. Jimmy Butler 2,672 posts
- 11. Lawrence Frank N/A
- 12. Hield 1,597 posts
- 13. rUSD N/A
- 14. Carter Hart 4,173 posts
- 15. #DubNation 1,443 posts
- 16. Mark Pope 1,986 posts
- 17. #AreYouSure2 134K posts
- 18. #SeanCombsTheReckoning 5,471 posts
- 19. Brandy 8,358 posts
- 20. Earl Campbell 1,216 posts
Something went wrong.
Something went wrong.