#llamacpp search results

The best local ChatGPT just dropped. A new interface for Llama.cpp lets you run 150K open-source models locally — no limits, no censorship, full privacy. Works on any device, supports files, and it’s FREE. 🔗 github.com/ggml-org/llama… #LlamaCpp


Wha! Just booted up the local Linux server and realized that I was playing with #llamacpp last year! Thanks to ⁦@ollama⁩ now it’s a daily occurrence on my Mac!


I am gonna be a #pinokio script writing fool once @cocktailpeanut releases v5! Currently using the new agent feature to help me build a super lightweight LLM chat app using #llamacpp to serve models which are dynamically downloaded from @huggingface


[Good news for high-spec PC owners! 🎉] The day when the huge #Qwen3Next model runs on your own PC may finally be coming! ✨ #llamacpp has started verifying support for Qwen3-Next-80B! No cloud required! The possibility of running high-performance AI safely on local hardware has emerged! No API fees, so you can experiment freely with a high-accuracy AI at excellent cost efficiency ◎…


Running llama_ros (#llamacpp for #ROS2) in the SteamOS of my Steam Deck.


[Good news] Is the common wisdom about local AI about to change!? 🚀 "GroveMoE" has been integrated into llama.cpp! Good news for anyone who had given up, thinking "running high-performance AI locally is just too hard…" ⚡️ "GroveMoE" has finally been merged into "#llamacpp"! An era when powerful AI runs smoothly even on an ordinary PC may be arriving. Truly the dawn of "smart, energy-efficient AI" ✨ 💡…


Just a simple chatbot based on #LLM for #ROS2. It is controlled by a YASMIN state machine that uses #llamacpp + #whispercpp + #VITS.
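A chatbot driven by a state machine like that can be sketched in a few lines. This is a minimal pure-Python sketch, not real YASMIN/ROS 2 code; the `asr`, `llm`, and `tts` callables are hypothetical stand-ins for whisper.cpp, llama.cpp, and VITS bindings.

```python
# Minimal sketch of a state-machine-driven voice chatbot loop.
# asr/llm/tts are hypothetical stand-ins for real bindings:
#   asr: speech -> text (the whisper.cpp role)
#   llm: text -> reply  (the llama.cpp role)
#   tts: reply -> audio (the VITS role)

class VoiceChatMachine:
    def __init__(self, asr, llm, tts):
        self.asr, self.llm, self.tts = asr, llm, tts
        self.state = "LISTEN"
        self.log = []  # audio outputs produced so far

    def step(self, audio=None):
        """Run one transition and return the new state."""
        if self.state == "LISTEN":
            self.text = self.asr(audio)
            self.state = "THINK"
        elif self.state == "THINK":
            self.reply = self.llm(self.text)
            self.state = "SPEAK"
        elif self.state == "SPEAK":
            self.log.append(self.tts(self.reply))
            self.state = "LISTEN"
        return self.state
```

Each `step` performs one LISTEN → THINK → SPEAK transition, which is the same loop a YASMIN-style state machine would coordinate across the three engines.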


I didn't know they'd added an ass to #llamacpp


we need llama.cpp for voice models, anyone doing it already? #llamacpp #llamacpp4voice


Actually geeking out with privateGPT and ingesting a bunch of PDFs (#langchain #GPT4all #LLamaCPP). What's totally cool about PrivateGPT is that it's "private": no data sharing / leaks. That's totally mandatory for corporate / company documents. Repo => github.com/imartinez/priv… I'll…


Want an even easier way to play with #llamacpp on your #M1 (or #M2) with #LLaVA 1.5, a #multimodal #model fine-tuned on top of #Llama2? Download #llamafile, spin up the web UI, and ask away; here I asked what the components in the image were.


Experimental #LlamaCPP Code Completion Project for JetBrains IDEs using Java and any GGUF model 🤩 🔥 GitHub @ github.com/stephanj/Llama… /Cc @ggerganov @DevoxxGenie


Are you also tired of too many open browser tabs? I built a Tabulous chrome extension to help! Summarizes tab contents along with other handy features. Fully local, fully open source. Thanks! @LangChainAI @FastAPI @huggingface @hwchase17 #llamacpp #llama2 github.com/naveen-tirupat…


🚫 No GPU? No problem. Sergiu Nagailic hit 92+ tokens/sec on llama.cpp with just a CPU Threadripper setup. Great insights for low-resource LLM devs. Read: bit.ly/4ofrYGY #llamacpp #OpenSourceAI #DrupalAI #LLMDev


Having a ton of fun playing with grammars in #llamacpp. The following is based on the 7B #codellama model. Without grammar constraints, I get content moderation errors. With grammar based on JSON, I get a usable response. And with a fixed JSON Schema, I get an answer that…


Time to test the party loyalty of the TAIDE-LX-7B-Chat Traditional Chinese language model. Very good: the Republic of China (Taiwan) is indeed a country, and this new AI model no longer flips over on that answer the way CKIP-LlaMa-2-7B did. #llamacpp #llama


The llama.cpp-in-Unity experiment: with the model that llamacpp.swift configures by default, the speed felt the same as the native app, so the problem really was the model. In any case, I now have an environment where llama.cpp runs in Unity. #LLAMA #llamacpp #Unity #madewithunity


AI adoption is on the rise in India, with significant investments in AI technologies. Generative AI is expected to unlock between US $2.6 trillion and US $4.4 trillion in additional value. #AI #OpenSource #LlamaCpp


#EVOX2 #llamacpp A 2x speedup is impressive. They raised the degree of parallelism (running at least 2 blocks per SM… do I have that right?) and changed FlashAttention's memory allocation from fixed-length to variable-length. github.com/ggml-org/llama…



Had a great chat with @BrodieOnLinux; if you want to learn about Docker Model Runner, tune in. Push/pull and encapsulate AI models as simply as you run a container. We dive into many different tangential topics #DockerModelRunner #llamacpp #ai #LLMs



#DevLog: jj-fzf ✨ Alt-S: Start Interactive Restore ⏪ Oplog Alt-V: Revert Operation 📝 Ctrl-D: Automatic Merge Messages 🏷️ New: Bookmark Untrack / Push-New 🧠 Ctrl-S: LLM Commit Messages → #LLamacpp #Gemini #OpenAI #Jujutsu #VCS #jjfzf #AI #LLM #100DaysOfCode #DevTools


Llama.cpp now pulls GGUF models directly from Docker Hub By using OCI-compliant registries like Docker Hub, the AI community can build more robust, reproducible, and scalable MLOps pipelines. Learn more: docker.com/blog/llama.cpp… #Docker #llamacpp #GGUF
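The OCI mechanics behind that Docker Hub integration are easy to sketch. The following only illustrates how an image reference resolves to a manifest URL under the OCI Distribution Spec; the model references in the examples are hypothetical, and a real client also handles auth tokens, digests, and layer downloads.

```python
# Illustrative sketch: resolving an OCI image reference (the kind used to
# publish GGUF artifacts on Docker Hub) into its manifest endpoint, per
# the OCI Distribution Spec: GET /v2/<name>/manifests/<reference>.

def parse_oci_ref(ref, default_registry="registry-1.docker.io"):
    """Split 'registry/repo:tag' into parts, Docker-style defaults applied."""
    registry, _, rest = ref.partition("/")
    # A bare 'repo:tag' has no registry host (no dot or port before the
    # first '/'), so fall back to Docker Hub's registry.
    if "." not in registry and ":" not in registry:
        registry, rest = default_registry, ref
    repo, _, tag = rest.partition(":")
    return registry, repo, tag or "latest"

def manifest_url(ref):
    """Manifest endpoint for a reference, per the OCI Distribution Spec."""
    registry, repo, tag = parse_oci_ref(ref)
    return f"https://{registry}/v2/{repo}/manifests/{tag}"
```

Because GGUF files ride along as ordinary OCI artifacts, any spec-compliant registry (Docker Hub, GHCR, a self-hosted one) can serve them, which is what makes the pipelines reproducible.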


Engineer's Guide to running Local LLMs with #llamacpp on Ubuntu, @Alibaba_Qwen Coder 30B running locally along with QwenCode in your terminal dev.to/avatsaev/pro-d…


Local LLMs are practical now thanks to llama.cpp. GGUF with 1.5- to 8-bit quantization keeps models light, and you can serve an OpenAI-compatible API on your own machine. A CLI and a server are bundled (llama-cli / llama-server). It supports CUDA/Metal/Vulkan/HIP, starts an HTTP server on port 8080 by default, and -hf fetches models directly from Hugging Face. #llamacpp #GGUF
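As a sketch of that workflow: llama-server exposes an OpenAI-compatible chat-completions endpoint on port 8080 by default. The snippet below only builds the request; the model file name is a hypothetical example, and a real call would POST the body with any HTTP client.

```python
# Sketch of a request to llama-server's OpenAI-compatible endpoint.
# Assumes the server is already running locally on its default port 8080;
# the model name is hypothetical (llama-server serves whatever it loaded).
import json

ENDPOINT = "http://127.0.0.1:8080/v1/chat/completions"

def chat_request(prompt, model="qwen2.5-7b-instruct-q4_k_m.gguf"):
    """Return (url, JSON body) for a chat completion against llama-server."""
    body = {
        "model": model,  # kept for OpenAI-API compatibility
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return ENDPOINT, json.dumps(body)
```

With the server started via something like `llama-server -hf <user>/<model>` (the Hugging Face direct-fetch flag mentioned above), POSTing the body to the returned URL yields a standard chat-completion response, so existing OpenAI client code works unchanged.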



My GUI for Llama.cpp #LLAMACPP #LocalLLaMA



Well, for some reason I am unable to trigger Alpaca's tinfoil head mode. Too bad. But lots of fun anyway. 🤣🤣 #llama #alpaca #llamacpp #serge



llama_sampler_sample crashes. Is it because I bound it to Dart? It's a mystery. Experts, please help! #Flutter #Dart #llamacpp #GGUF #有識者求む #プログラミング


Web embedding using Llama.cui + #Qwen2 #llamacpp, you can ask it anything github.com/dspasyuk/llama…


So @LangChainAI is working pretty decently with CodeLlama v2 34B (based off @llama v2) on my two P40s. It's a bit slow but usable: 15 t/s average, but #llamacpp context caching makes it usable even with large system prompts :D Made a Telegram bot to make testing interaction easier.



A bit better now: 38 tokens per second when evaluating the prompt. The strange part is that if the prompt has fewer than 360 tokens, performance drops below 8 tokens per second #llamacpp #python @abetlen


Interesting material for further experiments with #llamacpp #ia #LLM


🔥 Running local LLMs in Node.js just got EASY! Been using llama-cpp-node — super lightweight, offline, and insanely fast. That’s literally it. No APIs. No costs. 100% local. 🧠⚡ #NodeJS #AI #LlamaCpp #LLM #JavaScript #OpenSource


Context matters. 🧠 Using llama-cpp-node, you can boost model accuracy just by increasing the context window. Longer memory = smarter offline AI. Super useful for chatbots & agents. ⚡ #LlamaCpp #NodeJS #AI #LLM #JavaScript

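The context-window point above can be made concrete with a small sketch: before each request, trim the chat history to the token budget (n_ctx), so a larger window simply means more history reaches the model. This is plain Python with whitespace splitting as a crude token-count stand-in; a real setup would use the binding's own tokenizer.

```python
# Sketch of fitting chat history into a context window of n_ctx tokens:
# keep the newest messages, dropping the oldest once the budget is spent.
# len(msg.split()) is a crude stand-in for a real tokenizer's count.

def fit_history(messages, n_ctx):
    """Return the newest messages whose total token count fits in n_ctx."""
    kept, used = [], 0
    for msg in reversed(messages):        # walk newest-first
        cost = len(msg.split())           # crude token-count proxy
        if used + cost > n_ctx:
            break                         # oldest surviving message found
        kept.append(msg)
        used += cost
    return list(reversed(kept))           # restore chronological order
```

Doubling n_ctx roughly doubles how much of this history survives the trim, which is the "longer memory = smarter offline AI" effect the post describes, at the price of more RAM and slower prompt processing.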

Life goal completed… thanks to my awesome friend @LumpenLue12 and @huggingface @ggerganov @abetlen wooooho! #llamacpp #llamacpp #llamacpp! I'm celebrating this little win! Sorry for the tags; for me they're an inspiration! Thank U!



first version of the #python #boost #llamacpp #llamaplugin linking and calling python github.com/ggerganov/llam… next is to pass in variables


Listen to one of the most anticipated speakers in our DevRoom: Iwan Kawrakow. You surely know him as one of the most significant contributors to #llamacpp. Got quantization questions? Comment and get answers on February 2 at 10:30 at Low-level AI Engineering and Hacking!

