
RPX

@RpxDeveloper

Common Lisp / RISC-V / OpenBSD

Enlightening 😃 A fix for MiniMax M2's OpenAI-compatible mode.

OMG I have been using MiniMax M2 wrong; no wonder it is dumber than GLM 4.6 on some tests. So far MiniMax M2 only works well when using Claude Code in Anthropic format (it doesn't work well through OpenAI-compatible providers). In other words, it is gimped in most places that provide it!…
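For anyone wanting to try the Anthropic-format route the tweet describes, pointing Claude Code at a third-party Anthropic-compatible endpoint is just environment configuration. A minimal sketch — the base URL and model name below are assumptions, so verify them against MiniMax's platform docs before use:

```shell
# Claude Code honors these standard environment variables.
# Base URL and model name are assumptions — check MiniMax's docs.
export ANTHROPIC_BASE_URL="https://api.minimax.io/anthropic"
export ANTHROPIC_AUTH_TOKEN="your-minimax-api-key"
export ANTHROPIC_MODEL="MiniMax-M2"

claude   # launches Claude Code against the configured endpoint
```

The point is that the model is served through Anthropic's Messages API shape rather than OpenAI's Chat Completions shape, which is why OpenAI-compatible providers can degrade its behavior.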



RPX reposted

🚀MiniMax Coding Plan — live now! Mini Price, Max Performance
Pick your tier: Starter / Plus / Max
Build smarter, ship faster, pay less — we want you to have it all.
🔗 platform.minimax.io/subscribe/codi…


RPX reposted

As we mentioned before, happy to introduce Mini Agent, a simple yet powerful CLI demo built with MiniMax M2.
💡 Simple: 14 Python files, 3.3K lines, clean & extensible.
⚙️ Powerful: beautiful CLI, native file/bash tools, auto compaction, MCP integration, and Claude Skill support…


RPX reposted

Hope my fellow #OpenBSD developers are having an amazing time in Coimbra, Portugal this week at the #h2k25 #hackathon! 🐡


RPX reposted

Thanks everyone for testing Kimi K2 Thinking and sharing benchmark results! We've noticed that benchmark outcomes can vary across providers. Some third-party endpoints show substantial accuracy drops (e.g., 20+ pp), which has negatively affected scores on reasoning-heavy tasks…


RPX reposted

GLM 4.6 is the real deal


RPX reposted

Kimi K2 Thinking is here! Scale up reasoning with more thinking tokens and tool-call steps. Now live on kimi.com, the Kimi app, and API.


RPX reposted

Great to see GLM-4.6 powering Cerebras Code. This is exactly why we open weights: so teams can combine their own infra and ideas with GLM capabilities, and bring more choices to developers worldwide. Huge welcome to all partners building on GLM. Let’s grow the ecosystem…

Cerebras Code just got an UPGRADE. It's now powered by GLM 4.6.
Pro Plans ($50): 300k ▶️ 1M TPM @ 24M Tokens/day
Max Plans ($200): 400k ▶️ 1.5M TPM @ 120M Tokens/day
Fastest GLM provider on the planet at 1000 tokens/s and at 131K context. Get yours before we run out 👇



RPX reposted

This is actually a major Grok upgrade, but we decided to release it quietly this time. Gave us time to smooth out the rough edges.

GROK JUST GOT A WHOLE LOT SMARTER

xAI quietly patched Grok-4-Fast... and the before/after is wild.

“Reasoning” mode jumped from 77.5% to 94.1% complete responses.
“Non-reasoning”? Up to a staggering 97.9%.

All thanks to one thing: better injected system prompts. No press…



RPX reposted

Amp Free just got a BIG upgrade: ~65% faster, *much* smarter model (ads pay well, we 5x'd ALL rate limits!). Happy weekend coding. Will keep this if our tests and your feedback are all positive. (`amp update` or update Amp extension to use.)


RPX reposted

Introducing MiniMax M2 Plans! API at 8–10% of Claude Sonnet’s cost. Coding Plan: 10% of the cost, 2x the usage limits! Staying true to our vision "Intelligence with everyone", making frontier AI accessible and affordable for all.


RPX reposted

MLX - Kimi K2 Thinking model, 1T parameters, running on 2 M3 Ultras (512GB), with Qwen 3 Image Edit in parallel on one node as well:
Node 1: 350GB
Node 2: 460GB
Let's see the final result of this experiment 🤞🏻


RPX reposted

Grok upgrades

Someone from xAI reached out and asked me to retest grok-4-fast, because they've improved the injected system prompts. Huge improvement!

grok-4-fast-reasoning: 77.5% -> 94.1%
grok-4-fast-non-reasoning: 77.9% -> 97.9%

I really appreciate that xAI takes this topic seriously.



RPX reposted

These numbers are amazing for an open-source model. We're working on bringing this model up for Perplexity users with our own deployment in US data centers.


RPX reposted

I wanted to make all these Steve Jobs puns, but it was getting too weird. Anyway, this is a big one. Basically, Crush lets LLMs do all kinds of really interesting things in parallel, far beyond coding. If you’re playing with infra I’d definitely give this a look.

New in Crush today: Jobs! Crush can now run and manage background processes. Spin up a dozen Xcode builds, start a swarm of Docker containers, and go crazy. The asynchronous world is now at your (LLM’s) fingertips.



Coding Plans for MiniMax:
Starter: $10 / month (Equivalent to Claude Code Max 5x)
Pro: $20 / month (Equivalent to Claude Code Max 20x)
Max: $50 / month (Equivalent to Claude Code Max 20x)


Wow...that's awesome! 😃

New in Crush today: Jobs! Crush can now run and manage background processes. Spin up a dozen Xcode builds, start a swarm of Docker containers, and go crazy. The asynchronous world is now at your (LLM’s) fingertips.


