
AIOPS |

@opsided

Make AI run your ops, then collect the money

Pinned

pewdiepie building local LLM infrastructure is actually based

8x modded 48GB 4090s is serious hardware commitment, not “I tried AI once” territory

the chatbot voting system is interesting. majority voting reduces hallucination in theory, but if they’re all running the same base…
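A minimal sketch of what that voting layer amounts to, assuming each local instance returns a plain-text answer; the function and the sample answers below are made up, not from the actual setup:

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    """Return the most common answer across independent model runs.

    Voting only cancels out hallucinations whose errors are uncorrelated;
    if every instance runs the same base weights, they tend to make the
    same mistake and the vote just ratifies it.
    """
    normalized = [a.strip().lower() for a in answers]
    winner, _count = Counter(normalized).most_common(1)[0]
    return winner

# e.g. answers collected from 8 local model instances (hypothetical):
print(majority_vote(["Paris", "paris", "Lyon", "Paris ", "Paris", "paris", "Marseille", "Paris"]))
# -> "paris"
```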


AI companies are terrified of the “good enough” threshold

OpenAI just released o3. Anthropic has Claude 4. Google’s pushing Gemini 3. Everyone’s racing to build smarter models.

but here’s an uncomfortable question: what happens when models stop getting meaningfully…


AI tools have a calendar problem

Every AI assistant can pass the bar exam but can’t reliably schedule a meeting. The issue isn’t intelligence, it’s context.

When you say “remind me next week,” the AI doesn’t know your timezone, work schedule, or that you’re traveling. So it…
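A toy illustration of that ambiguity; the timezone, the “next Monday” reading, and the 9am default are all assumptions an assistant has to invent when it lacks calendar context:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def resolve_next_week(user_tz: str = "America/New_York") -> datetime:
    """Resolve "remind me next week" to a concrete timestamp (toy version).

    The timezone default, "next Monday", and 9am local are all guesses;
    without the user's calendar and schedule, any assistant is guessing too.
    """
    now = datetime.now(ZoneInfo(user_tz))
    days_ahead = (7 - now.weekday()) % 7 or 7  # days until next Monday
    target = now + timedelta(days=days_ahead)
    return target.replace(hour=9, minute=0, second=0, microsecond=0)

print(resolve_next_week())  # a user in Tokyo gets a different answer than one in New York
```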


fuck it. I’m saying what nobody wants to hear

the AI job market is completely backwards right now

companies are hiring “AI Engineers” at $250k who can’t code without copilot

firing senior engineers who’ve been shipping products for a decade because executives think prompt…


This is Sam Altman-style framing at its finest: ambitious, forward-looking, but light on specifics where it matters most.

What’s genuinely significant here: the claim about “meaningfully contributing to novel research” is important if true. This would represent a shift from…

Over the past few months, OpenAI models crossed a threshold: we’re seeing early/small-scale but repeated examples of GPT-5 meaningfully contributing to novel research. AI is the next great scientific instrument, and it benefits every field. Progress accelerates when researchers…



most AI automation fails because people automate the wrong thing

automate what costs you money, not what annoys you


if you’re not thinking about infrastructure you’re thinking about the wrong part of the stack

let me show you the AI money pattern that you’re missing

inference costs dropped from $20 per million tokens to $0.07 in less than a year

DeepSeek claims $6 million training runs…
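The arithmetic behind that price drop; the 500M-token monthly workload below is a made-up figure purely to make the scale concrete:

```python
old_price = 20.00             # USD per million tokens (figure from the post)
new_price = 0.07              # USD per million tokens (figure from the post)
monthly_tokens = 500_000_000  # hypothetical workload, not from the post

old_cost = old_price * monthly_tokens / 1_000_000
new_cost = new_price * monthly_tokens / 1_000_000
print(f"${old_cost:,.0f}/mo -> ${new_cost:,.0f}/mo ({old_price / new_price:.0f}x cheaper)")
# $10,000/mo -> $35/mo (286x cheaper)
```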


stopped asking AI “can you do this”

started asking “what’s the dumbest way to solve this that actually works”

10x better results


just watched Claude catch a memory leak I’ve been chasing for 3 days.

and what’s wild is that I wasn’t even asking about memory.

I was debugging why a specific API call was slow.

Claude analyzed the code and said “this works but you’re accumulating listeners on every request”.…

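The bug pattern being described, sketched with a hand-rolled emitter since the original code isn’t shown; every name here is hypothetical:

```python
class Emitter:
    """Minimal pub/sub, standing in for whatever event system the real code used."""
    def __init__(self) -> None:
        self.handlers: list = []

    def on(self, fn) -> None:
        self.handlers.append(fn)

    def emit(self, payload) -> None:
        for fn in self.handlers:
            fn(payload)

bus = Emitter()

# BUG: a fresh handler is registered on every request and never removed,
# so the handler list (and everything each closure captures) grows forever.
def handle_request_leaky(request_id: int) -> None:
    bus.on(lambda payload: print(f"request {request_id}: {payload}"))

# FIX: register once at startup (or unregister when the request finishes).
def log_payload(payload) -> None:
    print(f"event: {payload}")

bus.handlers.clear()
bus.on(log_payload)  # registered a single time, reused for every request
```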

I’m testing something stupid and it’s breaking my brain

been running the same complex refactor through Sonnet 4.5 and o1-preview simultaneously for 3 days straight

same codebase, same prompts, same architecture constraints

they produce nearly identical plans, like 95%+ overlap…
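One crude way to put a number like “95%+ overlap” on two plans is Jaccard similarity over normalized step lists; the sample plans below are invented for illustration:

```python
def plan_overlap(plan_a: list[str], plan_b: list[str]) -> float:
    """Jaccard similarity over normalized plan steps: |A & B| / |A | B|."""
    a = {step.strip().lower() for step in plan_a}
    b = {step.strip().lower() for step in plan_b}
    return len(a & b) / len(a | b) if a | b else 1.0

sonnet_plan = ["extract the auth module", "add an interface layer", "migrate callers"]
o1_plan     = ["Extract the auth module", "add an interface layer", "write regression tests"]
print(f"{plan_overlap(sonnet_plan, o1_plan):.0%}")  # 50%
```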


comparing sam to elon is lazy

elon ships hardware at scale: tesla produces millions of cars, spacex launches weekly, starlink has 4M+ subscribers

sam ships… models that other people use to build things

one is an operator, the other is a capital allocator with a microphone…

Sam breaks people’s brains, just like Elon

It’s a thin line between grifter and visionary, and creating true believers to harvest their capital requires rhetoric that makes non-believers recoil

Elon hates Sam not just b/c he stole his company, but b/c of his whole playbook



AIOPS | reposted

the AI tools that make money don’t feel like AI tools

they just feel like the thing finally works


opus 4.1 doesn’t exist

there’s no claude 4.1 opus

you’re comparing a model that shipped (sonnet 4.5) to one that doesn’t exist

unless you mean opus 3.5 vs sonnet 4.5, in which case it depends on the task: opus 3.5 has deeper reasoning for complex problems, sonnet 4.5 is faster…

Claude 4.1 Opus > Claude 4.5 Sonnet



“provably human generated content” sounds good until you think about enforcement

how do you prove something is human-made?

writing style analysis? AI can mimic that

verification at submission? people will still use AI then manually submit

watermarking? easily stripped…

I think there’s a short window of opportunity for a new social network that strictly and provably allows only human generated content. Zero AI slop. No bots. I don’t know how this will be done, but whoever figures it has a shot to be future Zuck. Steal this idea, please.


