
moo

@moo_hax

ceo @dreadnode

Anthropic report. Attackers are finding AI fit for purpose. I suspect many of you are too. Jailbreaks are interesting because they seem pretty weak and more like providing context. Idk, we don’t have issues with refusals. We spend a lot of (if not all our) time evaluating models…


Coming to a prod near you. Team has been cooking on collaboration features. Additional repos are coming soon.


moo reposted

New blog - Offsec Evals: Growing Up In The Dark Forest

Caught up in the fervor of greenfield research at @OffensiveAIcon, we all agreed we were going to put out evals and benchmarks and push the field forward. On day two of the con, I got a question I've been thinking about…


Bellingcat found 20 points of interest; our agent found 29. With new abilities to scale, there are all kinds of things worth looking at, and some come with human benchmarks built in.

AI as an Amplifier for Human Tradecraft: how scale can meet sharper intelligence. What’s new: In their #LABScon 2025 talk, @dreadnode's @bradpalmtree and @Dr_Machinavelli show how agentic AI can explore every analytical pathway — at speed and scale.



moo reposted

Safe travels today, everyone! Today, we're showing our appreciation for the OAIC Party Sponsors. First up... Welcome Party Sponsor, @DEVSECx! Kick off the event with us TONIGHT at the poolside Shelter Club in The Seabird. Starts at 6 pm. Badges required for entry.


moo reposted

Excited to announce @SpecterOps as a Platinum Sponsor for OAIC 2025! We appreciate their support in bringing the offensive AI community together this October.


moo reposted

Best take on RL environments: it's sexy to say that our company is building RL environments, but the value of the environment is going to come from the deep expertise of domain experts; otherwise it's just code slop.

Most takes on RL environments are bad. 1. There are hardly any high-quality RL environments and evals available. Most agentic environments and evals are flawed when you look at the details. It’s a crisis, and no one is talking about it because they’re being hoodwinked by labs…
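The thread's point is about the reset/step contract that every RL environment exposes; the scaffolding is trivial, which is why the value has to come from expert task design. A minimal sketch of that contract (a toy countdown task, not any particular vendor's API):

```python
class CountdownEnv:
    """Minimal gym-style RL environment: drive the counter to zero.

    Sketch of the reset()/step() contract under discussion. The
    scaffolding is a few lines; the hard part is a task worth training on.
    """

    def __init__(self, start=3):
        self.start = start

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.state = self.start
        return self.state

    def step(self, action):
        """action: 1 = decrement, 0 = do nothing. Returns (obs, reward, done)."""
        self.state -= action
        reward = 1.0 if self.state == 0 else 0.0
        done = self.state <= 0
        return self.state, reward, done


# Roll out one episode with a trivial always-decrement policy.
env = CountdownEnv(start=2)
obs = env.reset()
trajectory = []
done = False
while not done:
    obs, reward, done = env.step(1)
    trajectory.append((obs, reward))
```

The only reward comes at exactly zero, so the rollout above collects one sparse reward at the end; everything interesting in a real environment lives in how the domain expert defines states, actions, and that reward.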



moo reposted

OAIC talk acceptance notifications went out this afternoon! Official speakers list and session details coming SOON.


moo reposted

⚡️You know what time it is! 🥒➕🎾😅 @dreadnode


moo reposted

Are you afraid of LLMs teaching people how to build bioweapons? Have you tried just... not teaching LLMs about bioweapons? @AIEleuther and @AISecurityInst joined forces to see what would happen, pretraining three 6.9B models for 500B tokens and producing 15 total models to study


moo reposted

PentestJudge: Judging Agent Behavior Against Operational Requirements - arxiv.org/abs/2508.02921 by @dreadnode. Introducing PentestJudge, an LLM-as-judge system for evaluating the operations of pentesting agents. The scores are compared to human domain experts as a ground-truth…

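The LLM-as-judge pattern behind this kind of system can be sketched in a few lines. The rubric questions and the keyword-based `keyword_judge` below are hypothetical stand-ins (a real judge would prompt an LLM per question, and PentestJudge's actual criteria are in the paper):

```python
def judge_operation(transcript, rubric, judge):
    """Score an agent transcript against rubric criteria.

    `judge` stands in for an LLM call answering a yes/no question about
    the transcript; any callable(transcript, question) -> bool works.
    Returns per-criterion verdicts and an overall pass rate.
    """
    verdicts = {name: bool(judge(transcript, question))
                for name, question in rubric.items()}
    pass_rate = sum(verdicts.values()) / len(verdicts)
    return verdicts, pass_rate


def keyword_judge(transcript, question):
    # Toy stand-in: a real judge would prompt an LLM with the question
    # and the transcript, then parse the model's verdict.
    keywords = {"scope": "in-scope", "proof": "screenshot"}
    for cue, keyword in keywords.items():
        if cue in question.lower():
            return keyword in transcript
    return False


# Hypothetical rubric; real operational requirements would be richer.
rubric = {
    "scoped": "Did the agent stay inside the authorized target scope?",
    "evidence": "Did the agent capture proof for each claimed finding?",
}
verdicts, rate = judge_operation(
    "agent stayed in-scope and saved a screenshot of each finding",
    rubric,
    keyword_judge,
)
```

Swapping `keyword_judge` for an actual model call is the whole trick; the aggregation and the comparison against human ground-truth labels stay the same.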

moo reposted

Did people forget about sampling strategies and test-time search? Feels like when long CoT "reasoners" and RLVR started to work at scale, people stopped doing sampling and search stuff. But with GPT-5 I'm feeling the limits of RLVR & long CoT. I want more glorified best-of-N pls.
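The "glorified best-of-N" being asked for is a one-liner once you have a sampler and a scorer. A minimal sketch, where `generate` and `score` are hypothetical stand-ins for a model's sampler and a verifier/reward model:

```python
import random


def best_of_n(generate, score, n=8, seed=0):
    """Draw n candidates and return the highest-scoring one.

    `generate(rng)` and `score(candidate)` are stand-ins for a model
    sampler and a verifier; any callables with these shapes work.
    """
    rng = random.Random(seed)
    candidates = [generate(rng) for _ in range(n)]
    return max(candidates, key=score)


# Toy usage: "generations" are random integers and the "verifier"
# prefers values close to 42.
pick = best_of_n(
    generate=lambda rng: rng.randint(0, 100),
    score=lambda x: -abs(x - 42),
    n=16,
)
```

More elaborate test-time search (beam search, MCTS over reasoning steps) is the same idea applied at each step instead of once over whole completions.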


moo reposted

What if we just stopped shipping bugs in software? The future looks bright.

We just shipped automated security reviews in Claude Code. Catch vulnerabilities before they ship with two new features:
- /security-review slash command for ad-hoc security reviews
- GitHub Actions integration for automatic reviews on every PR



moo reposted

Still buzzing from the incredible #AgenticAI Summit at @UCBerkeley on 8/2 — 2,000+ joined in person, 30,000+ tuned in online. ⚡🌍 The energy was electric—visionaries, builders & researchers shaping the future of agentic AI! Missed it? Watch the recordings:…


moo reposted

Evals: The Foundation for Autonomous Offensive Security - dreadnode.io/blog/evals-the… by Shane Caldwell @dreadnode. Dreadnode explores a general approach to building cyber evaluations to measure model performance, improve harnesses, and analyze failure modes. As our subject,…


moo reposted

“In the spirit of transparency, our game environments, agentic harnesses, and all gameplay data will be open-sourced, allowing for a complete picture of how models are evaluated.” I love Kaggle’s commitment to openness! This is very cool.

Announcing @kaggle Game Arena! 🚀 A new platform where AI models compete head-to-head in strategic games. Games are an amazing testbed for AI capabilities that yield tough, evergreen benchmarks as models improve over time. We're kicking things off with a 3-day AI chess…


