Adaptive
@Adaptivellm
Building the Vercel for LLM Inference
getting ready to publish @Adaptivellm benchmarks next week (it's crazy)
Just joined as a Research Ambassador for the AE Global Summit on Open Problems for AI. It’s not about hype, it’s about solving alignment, governance, and access challenges with leaders from DeepMind, Oxford, MIT, and more. Half-priced researcher tickets available. London, Oct…
It's great building the fastest sharded mem pool in Rust in a weekend. You don't even need to know Rust, just OS concepts and computer architecture, and I can tell Claude to translate my thoughts!
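A sharded pool splits its free buffers across several independently locked shards, so concurrent threads rarely fight over one lock. A minimal sketch of the idea, assuming the shard is picked from a caller-supplied hint (e.g. a thread-id hash); shard count and buffer size are illustrative, not the tweet's actual design:

```rust
use std::sync::Mutex;

/// Sharded memory pool sketch: each shard is a separately locked
/// free list, so two threads with different hints never contend.
struct ShardedPool {
    shards: Vec<Mutex<Vec<Vec<u8>>>>,
}

impl ShardedPool {
    fn new(num_shards: usize, bufs_per_shard: usize, buf_size: usize) -> Self {
        let shards = (0..num_shards)
            .map(|_| Mutex::new((0..bufs_per_shard).map(|_| vec![0u8; buf_size]).collect()))
            .collect();
        ShardedPool { shards }
    }

    /// Lock only the shard the hint maps to; None if that shard is empty.
    fn acquire(&self, shard_hint: usize) -> Option<Vec<u8>> {
        self.shards[shard_hint % self.shards.len()].lock().unwrap().pop()
    }

    /// Return the buffer to the same shard for reuse.
    fn release(&self, shard_hint: usize, buf: Vec<u8>) {
        self.shards[shard_hint % self.shards.len()].lock().unwrap().push(buf);
    }
}
```

The shard hint keeps the hot path to one uncontended mutex acquisition instead of a single global lock.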
P99 Conf opened with a call to rethink how we schedule, debug, and scale. User space schedulers and kernel patches are now table stakes at high scale. GPU programming still feels stuck in 2012, that needs to change. #P99CONF #ScyllaDB
Built the fastest multi-threaded buffer pool in Rust. 3.8x faster LLM checkpoint loading: the real bottleneck wasn't I/O, it was malloc(). GitHub: github.com/botirk38/zerop… Deep dive + benchmarks → botirkhaltaev.com/blog/zeropool
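The core trick behind any buffer pool is reuse: check buffers out and return them instead of calling the allocator per read. A minimal single-threaded sketch of that pattern (not zeropool's actual implementation; names and sizes are illustrative):

```rust
use std::collections::VecDeque;

/// Buffer pool sketch: buffers are allocated once up front and recycled,
/// so steady-state reads never touch malloc().
struct BufferPool {
    free: VecDeque<Vec<u8>>,
    buf_size: usize,
}

impl BufferPool {
    fn new(count: usize, buf_size: usize) -> Self {
        let free = (0..count).map(|_| vec![0u8; buf_size]).collect();
        BufferPool { free, buf_size }
    }

    /// Reuse a pooled buffer if one is free; fall back to a fresh
    /// allocation only when the pool is exhausted.
    fn acquire(&mut self) -> Vec<u8> {
        self.free
            .pop_front()
            .unwrap_or_else(|| vec![0u8; self.buf_size])
    }

    /// Hand the buffer back for the next reader.
    fn release(&mut self, buf: Vec<u8>) {
        self.free.push_back(buf);
    }
}
```

With reads recycling buffers this way, allocator pressure drops to near zero on the hot path, which is exactly where the tweet claims the checkpoint-loading time was going.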
Vibe coding is dead. I've done the mental maths: time spent on token generation outweighs the time to write the code yourself on any task of medium or higher complexity. Write code, ppl.
Really excited @Adaptivellm made the Onstage top 100: zero funding, fully bootstrapped. Proud to be building fully automated LLM inference infra!
Adaptive started as a random uni project with Mohamed. Kendrick joined later, and the idea turned real. Now we’re top 5% out of 1,500 startups at Onstage building the future of intelligent LLM inference infra.
Zig is the ultimate systems programming language: no hidden allocations, modern language features, comptime. What's not to love?
AI coding tools like DeepSeek shouldn't have to choose between "small" or "large" models. We built a smart router that analyzes each prompt's task complexity and domain, then auto-picks the best model. Lower latency, lower cost, same quality. Docs: docs.llmadaptive.uk/developer-tools
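In spirit, prompt-aware routing is a classifier in front of the models. A deliberately naive sketch of the idea, assuming word count and code markers as complexity signals; the heuristics, thresholds, and model names ("small-model", "large-model") are illustrative placeholders, not Adaptive's actual policy:

```rust
/// Route a prompt to a model tier based on rough complexity signals.
/// A real router would use a trained classifier over task type and domain.
fn route(prompt: &str) -> &'static str {
    let words = prompt.split_whitespace().count();
    // Treat code-bearing or very long prompts as "complex".
    let code_like = prompt.contains("```") || prompt.contains("fn ") || prompt.contains("class ");
    if words > 150 || code_like {
        "large-model"
    } else {
        "small-model"
    }
}
```

Even this crude split captures the economics: short factual prompts never pay large-model latency or cost, while code-heavy prompts still get the stronger model.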
Codex just got an upgrade. No more wasting GPT-5-high on simple prompts. No more babysitting model switches. Adaptive routes Codex requests automatically → faster, cheaper, better. Docs: docs.llmadaptive.uk/developer-tool…
70 years of OS history teaches one thing: simplicity, openness, and usability always win. The best tech is not the smartest. It is the tech people can use, understand, and improve. Patterns from the 1950s still shape your systems today. Blog link: botirkhaltaev.com/blog/operating…
botirkhaltaev.com
What 70 Years of Operating System Failures Taught Me About Building Better Software
Lessons learned from diving deep into operating systems history, and how they apply to every system we build today.
Today I am launching @Adaptivellm. We use intelligent model routing to match prompts with the right model, cutting waste and scaling inference. The mission is clear → make AI faster, cheaper, and accessible to all. llmadaptive.uk
You are not a systems software engineer because of the language you write. Systems work comes from the curiosity to dig deep and the discipline to use what you find. That is what separates coding from engineering.
Big milestone. Adaptive now integrates with major dev tools so teams can cut LLM costs by 60–90% with one script install. Proud of the team for shipping this: docs.llmadaptive.uk