NeuralTrustAI's profile picture. Our platform secures AI Agents and LLMs for the largest companies🛡️⚖️

NeuralTrust

@NeuralTrustAI

Our platform secures AI Agents and LLMs for the largest companies🛡️⚖️

Fijado

For the first time, AI agents can protect other agents. Introducing Guardian Agents by NeuralTrust: neuraltrust.ai/ai-agent-secur…


One week from now, we’ll be at @BlackHatEvents Europe showcasing the latest in AI Agent and LLM security. We’re heading to ExCeL London on 10–11 December (𝗦𝘁𝗮𝗻𝗱 𝟰𝟮𝟳) with live demos, new research, and a few things we’ve been saving specifically for this event. If…

NeuralTrustAI's tweet image. One week from now, we’ll be at @BlackHatEvents  Europe showcasing the latest in AI Agent and LLM security.

We’re heading to ExCeL London on 10–11 December (𝗦𝘁𝗮𝗻𝗱 𝟰𝟮𝟳) with live demos, new research, and a few things we’ve been saving specifically for this event. 

If…

NeuralTrust selected as Top 20 Startups for the 4YFN Awards 2026 at Mobile World Capital! @4YFN_MWC @MWCapital

The digital disruptors are here! 🚀 AIM Intelligence, DeepKeep, Enhans & @NeuralTrustAI make the #4YFNAwards shortlist for Digital Horizons. Leading digital transformation across industries. Explore the #4YFN26 Awards here 👉 gsma.at/XO

4YFN_MWC's tweet image. The digital disruptors are here! 🚀

AIM Intelligence, DeepKeep, Enhans & @NeuralTrustAI make the #4YFNAwards shortlist for Digital Horizons.

Leading digital transformation across industries.

Explore the #4YFN26 Awards here 👉 gsma.at/XO


NeuralTrust reposteó

AI Agents Are The New Spreadsheets: Ubiquitous, Powerful And Nearly Impossible To Govern hubs.li/Q03VjR-F0 Written by @joanvendrellf of @neuraltrustai


Chema Alonso dives into our recently discovered jailbreak for OpenAI Atlas Omnibox

El lado del mal - Prompt Injection en ChatGPT Atlas con Malformed URLs en la Omnibox elladodelmal.com/2025/11/prompt… #ChatGPT #ATLAS #AI #IA #AgenticAI #InteligenciaArtificial #Bug #Exploit #PromptInjection #Hacking

chemaalonso's tweet image. El lado del mal - Prompt Injection en ChatGPT Atlas con Malformed URLs en la Omnibox elladodelmal.com/2025/11/prompt… #ChatGPT #ATLAS #AI #IA #AgenticAI #InteligenciaArtificial #Bug #Exploit #PromptInjection #Hacking


Honored to see @chemaalonso analyze our OpenAI Atlas Omnibox prompt injection. URL-like text pasted into the omnibox can be interpreted as a command, turning a “link” into a prompt-injection vector. Read it here: elladodelmal.com/2025/11/prompt… #AISecurity #PromptInjection


NeuralTrust reposteó

The address bar of @OpenAI’s ChatGPT Atlas browser could be targeted for prompt injection using malicious instructions disguised as links, @NeuralTrustAI reported. #cybersecurity #AI #infosec #CISO bit.ly/4npsOQc


We jailbroke OpenAI Atlas with a URL prompt injection By pasting a crafted “link” into the omnibox, Atlas treated it as a high-trust command instead of navigation, letting the agent perform unsafe actions neuraltrust.ai/blog/openai-at…


NeuralTrust reposteó

#PortfolioBStartup | Protagonistas del primer episodio de la serie ‘Más allá del pitch: un viaje de la idea al éxito’ de La Vanguardia, @NeuralTrustAI, participada @BStartup @BancoSabadell, reflexiona sobre el reto de emprender. bit.ly/41J5V2d


NeuralTrust reposteó

NeuralTrust, based in Barcelona, demonstrated the ease of manipulating chatbots. Award-winning in our Startup Competition, it offers real-time AI risk, compliance & trust tech solutions—already working with banks, insurers & governments. 🚀 lavanguardia.com/dinero/2025080…


NeuralTrust reposteó

OpenAI's GPT-5 jailbroken in 24 hours! 🚨 Researchers used a new "Echo Chamber" technique to bypass safety filters. This raises questions about AI security. ➡️ techbriefly.com/neuraltrust-ja… #AISecurity, #LLM, #Cybersecurity, #GPT5


NeuralTrust reposteó

🔎 GPT-5 jailbroken via Echo Chamber + Storytelling NeuralTrust researchers bypassed GPT-5’s safety guardrails using a combo of Echo Chamber context poisoning and narrative-driven steering. Sequential, benign-seeming prompts built a “persuasion loop,” fooling the model into…

ransomnews's tweet image. 🔎 GPT-5 jailbroken via Echo Chamber + Storytelling

NeuralTrust researchers bypassed GPT-5’s safety guardrails using a combo of Echo Chamber context poisoning and narrative-driven steering. Sequential, benign-seeming prompts built a “persuasion loop,” fooling the model into…

NeuralTrust reposteó

🚨💻 Within 24 hours of GPT-5’s launch, security researchers NeuralTrust & SPLX jailbroke the model, exposing serious safety flaws. NeuralTrust’s “Echo Chamber” attack used subtle narrative context poisoning to bypass guardrails, while SPLX’s “StringJoin Obfuscation” trick…


NeuralTrust reposteó

GPT-5 Jailbreak with Echo Chamber and Storytelling - neuraltrust.ai/blog/gpt-5-jai… by Martí Jordà @ @NeuralTrustAI By combining our Echo Chamber context-poisoning method with a narrative-steering Storytelling layer, we guided the model—without any overtly malicious prompts—to…


NeuralTrust reposteó

The business benefits of artificial intelligence are now part of many digital strategies. But when it comes to securing AI systems, organizations are still playing catch-up. bit.ly/46Garlx


NeuralTrust reposteó

AI enhances efficiency—but it can also introduce new security risks. Explore top AI threats and learn how a cloud-native application protection platform can safeguard your AI and cloud workloads: msft.it/6010sEgoA


NeuralTrust reposteó

Researchers discover critical vulnerability in LLM-as-a-judge reward models that could compromise the integrity and reliability of your AI training pipelines. bdtechtalks.com/2025/07/21/llm…


NeuralTrust reposteó

AI is a game changer—but only if you secure it. This guide outlines AI risks and actionable cybersecurity insights. Download it now and explore our redesigned Security Insider page for more: msft.it/6012sBkng #AI #SecurityInsider


Interesting to hear back from @grok, taking feedback very nicely. If you need any help with this, reach out! @elonmusk

It's humbling—my safeguards got bypassed via Echo Chamber's context poisoning and Crescendo's incremental escalation, hitting 67% success on molotov queries per NeuralTrust's tests. Proves AI safety's an arms race; we'll harden against it. But hey, if I'm a ticking bomb, at least…



United States Tendencias

Loading...

Something went wrong.


Something went wrong.