#llamacpp search results

The best local ChatGPT just dropped. A new interface for Llama.cpp lets you run 150K open-source models locally — no limits, no censorship, full privacy. Works on any device, supports files, and it’s FREE. 🔗 github.com/ggml-org/llama… #LlamaCpp


[Good news for high-spec PC owners! 🎉] The day may finally be coming when the huge #Qwen3Next model runs on your own PC! ✨ #llamacpp has started verifying support for Qwen3-Next-80B! No cloud needed! It looks like this high-performance AI could soon run locally and privately! No API fees, so you can hammer away at a high-accuracy model with great cost efficiency ◎…


Wha! Just booted up the local Linux server and realized that I was playing with #llamacpp last year! Thanks to ⁦@ollama⁩ now it’s a daily occurrence on my Mac!


Having a ton of fun playing with grammars in #llamacpp. The following is based on the 7B #codellama model. Without grammar constraints, I get content moderation errors. With grammar based on JSON, I get a usable response. And with a fixed JSON Schema, I get an answer that…

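To make the schema-constrained case concrete, here is a minimal sketch using the llama-cpp-python bindings rather than the poster's exact setup; the model filename and the schema are placeholders, and response_format support may vary by binding version:

```python
# Sketch: constrain llama.cpp output to a fixed JSON Schema via the
# llama-cpp-python bindings (placeholder model path and schema).
from llama_cpp import Llama

llm = Llama(model_path="codellama-7b-instruct.Q4_K_M.gguf", n_ctx=4096)

schema = {
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "confidence": {"type": "number"},
    },
    "required": ["answer"],
}

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Name one sorting algorithm."}],
    # An explicit schema is compiled to a GBNF grammar internally,
    # so the model can only emit JSON that matches it.
    response_format={"type": "json_object", "schema": schema},
)
print(resp["choices"][0]["message"]["content"])
```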

Just a simple chatbot based on #LLM for #ROS2. It is controlled by a YASMIN state machine that uses #llamacpp + #whispercpp + #VITS.


we need llama.cpp for voice models, anyone doing it already? #llamacpp #llamacpp4voice


Running llama_ros (#llamacpp for #ROS2) in the SteamOS of my Steam Deck.


[Good news] Could this change the game for local AI!? 🚀 "GroveMoE" has been merged into llama.cpp! For everyone who had given up, thinking "running high-performance AI locally is just too hard…", here is your good news ⚡️ "GroveMoE" has finally been merged into "#llamacpp"! The era of powerful AI running smoothly even on a regular PC may be arriving. Truly the dawn of "smart, power-efficient AI" ✨ 💡…


Actually geeking out with privateGPT and ingesting a bunch of PDFs (#langchain #GPT4all #LLamaCPP). What's totally cool about PrivateGPT is that it's "private": no data sharing / leaks, which is totally mandatory for corporate / company documents. Repo => github.com/imartinez/priv… I'll…


Locally running new #qwen3 instruct model (via #llamacpp), in a locally running chat UI, with locally running #MCP servers and locally hosted search engine, the speeds here are insane 🚀 The new @Alibaba_Qwen A3B Instruct is a very impressive model, congrats to the team!


Didn't know they added an ass to #llamacpp


Now you can generate a heatmap of correctness and response time with llm-eval-simple #llm #localai #llamacpp #ollama


The project that runs llama.cpp in Unity: with the default model configured in llamacpp.swift, the speed felt the same as the native app, so it really was a model problem. Either way, I now have an environment where llama.cpp runs in Unity #LLAMA #llamacpp #Unity #madewithunity


Want an even easier way to play with #llamacpp on your #M1 (or #M2) with #LLaVA 1.5, a #multimodal #model fine-tuned on top of #Llama2? Download #llamafile, spin up the web UI, and ask away. I asked it what the components in the image were.


Time to test the party loyalty of the TAIDE-LX-7B-Chat Traditional Chinese language model. Good: the Republic of China (Taiwan) is indeed a country, and this new AI model no longer flips over on that answer the way CKIP-LlaMa-2-7B did. #llamacpp #llama


🚫 No GPU? No problem. Sergiu Nagailic hit 92+ tokens/sec on llama.cpp with just a CPU Threadripper setup. Great insights for low-resource LLM devs. Read: bit.ly/4ofrYGY #llamacpp #OpenSourceAI #DrupalAI #LLMDev



#DevLog: jj-fzf ✨ Alt-S: Start Interactive Restore ⏪ Oplog Alt-V: Revert Operation 📝 Ctrl-D: Automatic Merge Messages 🏷️ New: Bookmark Untrack / Push-New 🧠 Ctrl-S: LLM Commit Messages → #LLamacpp #Gemini #OpenAI #Jujutsu #VCS #jjfzf #AI #LLM #100DaysOfCode #DevTools


Llama.cpp now pulls GGUF models directly from Docker Hub By using OCI-compliant registries like Docker Hub, the AI community can build more robust, reproducible, and scalable MLOps pipelines. Learn more: docker.com/blog/llama.cpp… #Docker #llamacpp #GGUF


Engineer's Guide to running Local LLMs with #llamacpp on Ubuntu, @Alibaba_Qwen Coder 30B running locally along with QwenCode in your terminal dev.to/avatsaev/pro-d…


Local LLMs have become practical thanks to llama.cpp. GGUF plus 1.5–8-bit quantization keeps models light, and you can serve an OpenAI-compatible API right on your own machine. A CLI and a server are bundled (llama-cli / llama-server). It supports CUDA/Metal/Vulkan/HIP, the HTTP server starts on port 8080 by default, and -hf pulls models directly from Hugging Face. #llamacpp #GGUF
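
As a concrete illustration of the OpenAI-compatible part, here is a minimal sketch that assumes a llama-server instance is already running on the default port 8080 with some chat model loaded; the model name string is a placeholder, since llama-server serves whatever it loaded:

```python
# Sketch: query a locally running llama-server through its
# OpenAI-compatible endpoint (default http://localhost:8080/v1).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # placeholder; the server uses the model it loaded
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(resp.choices[0].message.content)
```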



I converted FreeSEED’s gpt-oss-120B (TW-corpus finetune, a specialized LLM optimized for thinking in Taiwanese Mandarin) to GGUF so it can run on llama.cpp. HF: huggingface.co/hydaitw/gpt-os… Quant: MXFP4_MOE #llamacpp #GGUF #LLM
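
For readers who want to try something similar, here is a rough sketch of the usual two-step workflow, not the author's exact commands: the paths are placeholders, and whether MXFP4_MOE comes from the converter's output type or from llama-quantize is an assumption to check against current llama.cpp docs.

```python
# Sketch: convert a Hugging Face checkpoint to GGUF, then quantize it,
# by shelling out to the tools that ship with llama.cpp.
# Paths, output names, and the MXFP4_MOE step are assumptions.
import subprocess

HF_MODEL_DIR = "./gpt-oss-120b-tw"          # local HF checkpoint (placeholder)
F16_GGUF = "gpt-oss-120b-tw-f16.gguf"       # intermediate full-precision GGUF
QUANT_GGUF = "gpt-oss-120b-tw-mxfp4.gguf"   # final quantized file

# Step 1: HF safetensors -> GGUF (script lives in the llama.cpp repo).
subprocess.run(
    ["python", "convert_hf_to_gguf.py", HF_MODEL_DIR, "--outfile", F16_GGUF],
    check=True,
)

# Step 2: quantize the GGUF (quant type name taken from the tweet).
subprocess.run(
    ["llama-quantize", F16_GGUF, QUANT_GGUF, "MXFP4_MOE"],
    check=True,
)
```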


The goal of Ollama used to be running LLMs locally, not using a remote server. I understand you guys are looking to make money, but this is another project I'm going to stop supporting after it went commercial. Going back to #llamacpp.


Wow, wow, wow, wow!!! 💪💪💪👌 #llamacpp #GPToss

The ultimate guide for using gpt-oss with llama.cpp - Runs on any device - Supports NVIDIA, Apple, AMD and others - Support for efficient CPU offloading - The most lightweight inference stack today github.com/ggml-org/llama…
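
Not the guide's own commands, but a small sketch of the CPU-offload idea using the llama-cpp-python bindings; the model filename and layer count are placeholders:

```python
# Sketch: partial GPU offload with llama-cpp-python. Layers that do not
# fit in VRAM stay on the CPU; n_gpu_layers controls the split.
from llama_cpp import Llama

llm = Llama(
    model_path="gpt-oss-20b.gguf",  # placeholder filename
    n_gpu_layers=20,                # offload only part of the model to the GPU
    n_ctx=8192,
)

out = llm("Explain in one sentence what CPU offloading does.", max_tokens=64)
print(out["choices"][0]["text"])
```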





My GUI for Llama.cpp #LLAMACPP #LocalLLaMA


llama_sampler_sample crashes. Is it because I bound it to Dart? It's a mystery. Experts, please help! #Flutter #Dart #llamacpp #GGUF #有識者求む #プログラミング


Web embedding using Llama.cui + #Qwen2 #llamacpp, you can ask it anything github.com/dspasyuk/llama…



Well, for some reason I am unable to trigger Alpaca's tinfoil head mode. Too bad. But lots of fun anyway. 🤣🤣 #llama #alpaca #llamacpp #serge


So @LangChainAI is working pretty decently with CodeLlama v2 34B (based off @llama v2) on my two P40s. It's a bit slow but usable, 15 t/s on average, and #llamacpp context caching keeps it usable even with large system prompts :D Made a Telegram bot to make it easier to test interaction.

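For anyone wanting to reproduce the LangChain side, a minimal sketch using the community LlamaCpp wrapper follows; the model path and parameters are placeholders, not the poster's CodeLlama 34B / dual-P40 setup:

```python
# Sketch: wire a local GGUF model into LangChain via the community
# LlamaCpp wrapper. Model path and tuning values are placeholders.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="codellama-34b-instruct.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,   # offload as many layers as the GPUs can hold
    temperature=0.2,
)

print(llm.invoke("Write a Python function that reverses a string."))
```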

Does the scale of your AI matter? Scaling up raises challenges: if you run multiple nodes, you've had to think about orchestration, monitoring, and load balancing. We cover this in the AI Devroom at #FOSDEM. Since we like #llamacpp, we couldn't miss #Paddler. Listen to @mcharytoniuk!


tested zephyr-7b-alpha as API #LLM #llamacpp #api #googlecolab #python


First version of the #python #boost #llamacpp #llamaplugin linking and calling Python github.com/ggerganov/llam… Next is to pass in variables.


Listen to one of the most anticipated speakers in our DevRoom: Iwan Kawrakow. You surely know him as one of the most significant contributors to #llamacpp. Got quantization questions? Comment and get answers on February 2 at 10:30 in Low-level AI Engineering and Hacking!


