
LM Studio Developers

@lmstudiodevs

Updates for developers building with @lmstudio SDKs and APIs 👾
npm i @lmstudio/sdk

LM Studio Developers reposted

LM Studio 0.3.31 has shipped! What's new:
🏞️ OCR, VLM performance improvements
🛠️ MiniMax-M2 tool calling support
⚡️ Flash Attention on by default for CUDA
🚂 New CLI command: `lms runtime`
See it in action 👇
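For the tool calling piece, requests go through the local OpenAI-compatible chat completions endpoint. A minimal sketch, assuming the server is running on the default port 1234 and that `minimax/minimax-m2` is the model key on your machine (both are assumptions; check `lms ls` for the exact key):

```bash
# Hypothetical sketch: ask MiniMax-M2 to call a weather tool via the
# OpenAI-compatible endpoint. Assumes the local server is already running.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "minimax/minimax-m2",
    "messages": [{"role": "user", "content": "What is the weather in Tokyo?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
```

When the model decides to use the tool, the response carries a `tool_calls` entry instead of plain text.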


LM Studio Developers reposted

LM Studio queues requests for a single loaded model instance, not allowing true parallelism (at least for now). THE TRICK: If you have sufficient VRAM, you can load the same model multiple times under different names (instances). Simultaneous requests to these instances will…
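A minimal sketch of the trick from the CLI, assuming sufficient VRAM and that your `lms` version supports the `--identifier` flag on `lms load` (verify with `lms load --help`); the model key and identifiers below are placeholders:

```bash
# Load two instances of the same model under different identifiers.
lms load qwen/qwen3-coder-30b --identifier coder-a
lms load qwen/qwen3-coder-30b --identifier coder-b

# Fire two requests at once, one per instance, against the
# OpenAI-compatible server (default port 1234).
for id in coder-a coder-b; do
  curl -s http://localhost:1234/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d "{\"model\": \"$id\", \"messages\": [{\"role\": \"user\", \"content\": \"Say hi\"}]}" &
done
wait
```

Each identifier addresses its own loaded copy of the weights, so the two requests are served independently rather than queued behind one another.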


LM Studio Developers reposted

LM Studio now ships for NVIDIA's DGX Spark! @nvidia DGX Spark is a tiny but mighty Linux ARM box with 128GB of unified memory. Grace Blackwell architecture. CUDA 13. ✨👾


LM Studio Developers reposted

In addition to the venerable chat completions compat API, @lmstudio now supports /v1/responses!
1. Swap out the OpenAI base URL to point to LM Studio
2. Load up gpt-oss
3. Profit

Introducing OpenAI Responses API compatibility! /v1/responses on localhost. Supports stateful responses, custom tool use, and setting reasoning level for local LLMs. 👇🧵

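A minimal sketch of steps 1–2 and a first request, assuming the local server is on the default port 1234 and gpt-oss is loaded under the key `openai/gpt-oss-20b` (the key is an assumption; check `lms ls`). The request shape follows the OpenAI Responses API:

```bash
# Point any Responses API client at LM Studio instead of api.openai.com,
# e.g. export OPENAI_BASE_URL=http://localhost:1234/v1
curl http://localhost:1234/v1/responses \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-oss-20b",
    "input": "Explain what a KV cache is in two sentences.",
    "reasoning": {"effort": "low"}
  }'
```

The `reasoning.effort` field is how the Responses API expresses the reasoning level mentioned in the announcement; exactly which fields are honored locally may depend on your LM Studio version.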



LM Studio Developers reposted

LM Studio 0.3.28 is out now 🛳️
🫰 Easily choose MLX and GGUF variants, or different quantizations of the same model!


LM Studio Developers reposted

You can just run qwen3-coder on a MacBook w/ @lmstudio

qwen3-coder is so shockingly solid in Cline when I run it locally on my 36GB RAM MacBook



LM Studio Developers reposted

Preparing for a packed 2 weeks of updates


LM Studio 0.3.27 build 1 (beta) is available now. New: when loading a model, the selected context length is taken into account in memory guardrail estimates.


LM Studio Developers reposted

LM Studio CLI tool to manage, automate, and script local LLM workflows from the terminal

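A few common `lms` commands as a sketch; subcommand names reflect current docs and may vary by version (run `lms --help` to confirm):

```bash
lms server start                 # start the local OpenAI-compatible server
lms get qwen/qwen3-coder-30b     # download a model
lms load qwen/qwen3-coder-30b    # load it into memory
lms ps                           # list currently loaded models
lms unload --all                 # unload everything
lms log stream                   # follow server logs for scripting/debugging
```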

LM Studio Developers reposted

There's a new open embedding model in town!
lms get google/embedding-gemma-300m
300M parameters, 2048 context length, supports 100+ languages.

Introducing EmbeddingGemma: our new open, state-of-the-art embedding model designed for on-device AI 📱
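Once downloaded and loaded, the model is served over the OpenAI-compatible /v1/embeddings endpoint. A minimal sketch, assuming the default port 1234 and that the loaded model is addressable by its `lms get` key (verify with `lms ls`):

```bash
lms get google/embedding-gemma-300m
lms load google/embedding-gemma-300m

curl http://localhost:1234/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/embedding-gemma-300m",
    "input": "LM Studio runs embedding models locally."
  }'
```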



LM Studio Developers reposted

.@cline now has a new “compact system prompt”, designed for local models. Thoughtful context construction is key when using local models as coding agents. Try it with Qwen3 Coder 30B: lms get qwen/qwen3-coder-30b

For the first time, you can run Cline completely offline. LM Studio + Qwen3 Coder 30B + Cline's new compact prompt system = a local coding environment that works on your laptop. Your code never leaves your machine. No API costs. No internet dependency.
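A sketch of the LM Studio side of that setup, assuming the default server port 1234; on the Cline side you would select LM Studio (or an OpenAI-compatible provider) in the extension settings and point it at this local base URL:

```bash
lms get qwen/qwen3-coder-30b     # download the model
lms load qwen/qwen3-coder-30b    # load it (give it generous context for agentic coding)
lms server start                 # expose http://localhost:1234/v1 for Cline to use
```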


