The wait is over! As the leading AI code review tool, CodeRabbit was given early access to OpenAI's GPT-5 model to evaluate the LLM's ability to reason through and find errors in complex codebases! Our evals found GPT-5 performed up to 190% better than other leading models!


As part of our GPT-5 testing, we conducted extensive evals to uncover the model’s technical nuances, capabilities, and use cases around common code review tasks using over 300 carefully selected PRs.


Across the whole dataset, GPT-5 outperformed Opus-4, Sonnet-4, and OpenAI's O3 on a battery of 300 pull requests of varying difficulty and diverse error types, representing a 22%-30% improvement over the other models.


We then tested GPT-5 on the hardest 200 PRs to see how much better it did on particularly hard-to-spot issues and bugs. It found 157 of the 200 bugs, where the other models found between 108 and 117. That represents a 34%-45% improvement!
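The 34%-45% range follows directly from the bug counts above. A minimal sketch of that arithmetic (not CodeRabbit's eval code, just the relative-improvement calculation):

```python
# Bugs found on the 200 hardest PRs, per the thread.
gpt5_found = 157
baseline_counts = {"best other model": 117, "worst other model": 108}

# Relative improvement of GPT-5 over each baseline count.
for label, found in baseline_counts.items():
    improvement = (gpt5_found / found - 1) * 100
    print(f"vs {label} ({found} bugs): {improvement:.1f}% improvement")
```

157 vs 117 gives roughly 34%, and 157 vs 108 gives roughly 45%, matching the quoted range.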


On our 25 hardest PRs from our evaluation dataset, GPT-5 achieved the highest ever overall pass rate (77.3%), representing a 190% improvement over Sonnet-4, 132% over Opus-4, and 76% over O3.
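Working backwards from the quoted improvements, the other models' implied pass rates on those 25 PRs can be estimated. This is a back-of-the-envelope sketch, assuming "X% improvement" means a multiplicative gain over the baseline pass rate (an assumption, not stated in the thread):

```python
gpt5_pass_rate = 77.3  # percent, from the thread

# Quoted relative improvements over each baseline: 190%, 132%, 76%.
improvements = {"Sonnet-4": 1.90, "Opus-4": 1.32, "O3": 0.76}

# Implied baseline pass rate: gpt5 = baseline * (1 + gain).
for model, gain in improvements.items():
    implied_baseline = gpt5_pass_rate / (1 + gain)
    print(f"{model}: ~{implied_baseline:.1f}% implied pass rate")
```

Under that reading, Sonnet-4 lands around 27%, Opus-4 around 33%, and O3 around 44% on the same 25 PRs.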


Our results show that GPT-5 represents a significant improvement in the ability to reason through a codebase and find issues – thanks to some new capabilities.


Check out our in-depth testing process with detailed results and examples in our latest blog. coderabbit.ai/blog/benchmark…


https://www.coderabbit.ai/blog/benchmarking-gpt-5-why-its-a-generational-leap-in-reasoning

So hyped



I’m actually 6'4" so


So, 🐰 is about to roast my PRs more than ever with GPT-5 in the sweetest way possible, huh?


We can be nice or mean, it's up to you!


Will this be turned on by default for reviews?


it'll go live in a few hours!


is this better than Claude? One subscription would be nice.


It’s looking good for GPT-5! It totally depends on your use case, though. Our new blog goes more in depth on the testing and use cases we covered! coderabbit.ai/blog/benchmark…


at least some companies know how to make charts lmao




we are waiting for this to roll out


It'll be live on CodeRabbit soon!


How did the comparison stand with Opus 4.1 and Grok 4? Those are the leading models from competitors and they should be present in the comparison.


We only tested GPT-5 against previous top performers on our tests and ultimately grok 4 didn’t perform up to the same standard so wasn’t used in this test!



sounds like some solid testing there. if those numbers hold up, that could change the game for code reviews. curious to see how it holds up in real-world scenarios.


this is cool!


Seeing GPT-5’s reasoning boost makes me wonder—how will this impact the pace of catching subtle bugs that often slip through tests? 🤔 Would love to see some real-world debugging examples from CodeRabbit!


I wish the evaluation code were publicly available, so other labs could validate it.


What about the smaller models? If a smaller GPT-5 variant is at Sonnet-4 level, that would be a massive price advantage.


that's insane, coderabbit about to make sure the tea doesn't get spilt again?

