You might like
On Feb 17 2025 I reported a critical vulnerability to @Scroll_ZKP. $100m+ in TVL was at risk for more than 2 months. Anyone could force Scroll L2 into an indefinite re-org, halting the chain so that no user transactions would be included in blocks and the chain would not move…
🎉 Announcing our Season 1 Airdrop and non-transferrable Zircuit Token (ZRC)! We’re rewarding early stakers, partners, and builders who’ve contributed to Zircuit and shaped our ecosystem 🤝 More details below👇
Gmeow 💚 We're excited to announce our Mainnet funding round to help build the safest L2 with Sequencer Level Security that prevents smart contract exploits. 👇
We are thrilled to share that @BinanceLabs has invested in Zircuit 🐈 🧵
We’ve invested in @ZircuitL2 Zircuit is a zk rollup with parallelized circuits and AI-enabled security at the sequencer level. Read more👇 binance.com/en/blog/ecosys…
even the sora mistakes are mesmerizing
Wow super happy to see PentestGPT on top 15 new tools list:)
🏆 Top New Tools of 2023: 11) writeout.ai 12) AI Commits 13) Cody (by @sourcegraph) 14) PentestGPT 15) @trychroma 16) Chart-GPT 17) @modal_labs 18) Chat Thing (by @pixelhopio) 19) SupportGPT (by @forethought_ai) 20) @sweep__ai stackshare.io/posts/top-deve…
My friends made this super cool AI-based card game. Welcome to give a try and your feedback is needed! discord.gg/dq6zDHPmKk
If you are auditing a smart contract and see math calculations for the slippage parameter to be passed to a swap operation, it is highly likely this is a Medium/High severity issue. Slippage parameters should only be calculated off-chain, because of possible sandwich attacks
One of the best ways to learn about previous smart contract hacks, understand them in depth and read the code with which the attack can be executed? Here it is, 10/10 resource github.com/coinspect/lear…
Today, we are disclosing LeftoverLocals, a vulnerability that allows listening to LLM responses through leaked GPU local memory created by another process on Apple, Qualcomm, AMD, and Imagination GPUs (CVE-2023-4969) buff.ly/48RDP68
Exploring a fascinating idea: Using poisoned Retrieval-Augmented Generation (RAG) to effectively jailbreak LLMs. We're seeing definitive results with malicious queries. Currently drafting a paper and preparing a demo GPT sample:chat.openai.com/g/g-qiRqlcGti-…
GitLab warns of critical zero-click account hijacking vulnerability - @billtoulas bleepingcomputer.com/news/security/…
devs told me to do something, dunno could be psyops ❇️ @ZircuitL2 og.zircuit.com/?referral=774C…
GPTs for PentestGPT is out. The original project will be upgraded to support OpenAI assitants:) chat.openai.com/g/g-4MHbTepWO-…
Introducing Zircuit ✨ an EVM-compatible zero-knowledge rollup powering the limitless potential of web3 ❇️ zircuit.com/blog/zircuit-t…
🔐 "MASTERKEY": Unveiling vulnerabilities in LLM chatbots! 🤖 We've reverse-engineered defenses & auto-generated jailbreak prompts with high success. Breaches on #ChatGPT & more. Full paper out now! #AI #LLM #JailbreakAI 🛡️ arxiv.org/abs/2307.08715
🚀 Excited to announce the release of our automated prompt injection framework, HouYi! Dive into the code and explore its capabilities here: github.com/LLMSecurity/Ho… 🛠️ #CyberSecurity #OpenSource #LLMs #llmops
github.com
GitHub - LLMSecurity/HouYi: The automated prompt injection framework for LLM-integrated applicati...
The automated prompt injection framework for LLM-integrated applications. - LLMSecurity/HouYi
PentestGPT now supports OpenAI API now, but GPT-4 is so damn slow...
United States Trends
- 1. #SmackDown 24.3K posts
- 2. Zack Ryder 3,453 posts
- 3. Matt Cardona N/A
- 4. #OPLive N/A
- 5. Clemson 5,249 posts
- 6. Landry Shamet N/A
- 7. Bubba 44.9K posts
- 8. Bill Clinton 148K posts
- 9. Jey Uso 3,635 posts
- 10. Mitchell Robinson N/A
- 11. End 1Q N/A
- 12. Drummond 1,261 posts
- 13. #OPNation N/A
- 14. #TNATurningPoint 3,965 posts
- 15. Cam Boozer N/A
- 16. Nikes 1,281 posts
- 17. Kevin James 8,612 posts
- 18. Ersson N/A
- 19. LA Knight 4,761 posts
- 20. Josh Hart N/A
You might like
Something went wrong.
Something went wrong.