#localllama search results
Local LLaMA learns the thought process! AI evolves into an intellectual partner ✨ With the "Zero Freeze Formula", your AI acquires expert "thinking" — it even learns the reasoning steps for solving hard physics problems! Turn your AI into an advanced partner, too 🚀 #LocalLLaMA #AI進化
LocalLLaMA takes a huge leap! AI now manages your documents in one place! ✨ LocalLLaMA running on your own PC integrates with the VecML file manager, so the AI can handle millions of PDFs and Excel files at once. A RAG system makes information retrieval blazing fast! 🚀 #LocalLLaMA #AI活用
Voice-to-LLM-to-Voice, all running client-side via Transformers.js? Impressive local deployment overcoming browser limitations. Next stop: configurable voice cloning in P5.js sketches? We are decentralizing inference fast. #LocalLLaMA bly.to/Sn78ukR
Wdyt of my Flutter plugin for local LLM inference? Here's a demo of the included example app. Any model in #GGUF format is supported. Sandboxing is enabled, which allows devs to publish it on the App Store. #localllama
I got Meta's Large Language Model (llama2-70B) running locally on the dual RTX 3090 AI PC I just built 🤓 48GB of VRAM lets me run the full 70 billion parameter model (GPT3.5 level) with no limits & no Internet connection required. What should I ask it? #LocalLLM #LocalLlama
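A minimal sketch (not from the post) of how a quantized 70B GGUF might be split across two RTX 3090s using the llama-cpp-python bindings; the model path, quantization level, and split ratios here are assumptions:

```python
# Hypothetical example: load a Q4 GGUF of Llama-2-70B across two GPUs.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-70b.Q4_K_M.gguf",  # assumed local file
    n_gpu_layers=-1,          # offload every layer to the GPUs
    tensor_split=[0.5, 0.5],  # roughly half the layers per 3090
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain quantization in one paragraph."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```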
Local LLM Qwen2.5-Coder-32B-Instruct-Q4_K_M plays Snake game. Sometimes gets stuck and needs a game reset #llm #LocalLLaMA
I'd say we're currently living in the peak of the #localLLAmA era. This is how I do web search on #LobeChat now.
I hope this post was helpful! 👆️ Likes and retweets are much appreciated! 😊 🔗 Check out the "best price-performance GPU" discussion that took off in the Reddit #LocalLLaMA community here: reddit.com/r/LocalLLaMA/c…
Summary of r/localllama top 3 posts: 1️⃣ Polish found most effective language for prompting AI (Euronews study) 2️⃣ 200+ pg guide to Hugging Face training techniques 3️⃣ Questions on HonestAGI's recent silence #LocalLLaMA #AI #PromptEngineering
Here is the link mentioned in the post above 👆️ — check it out if you're interested! 😊 It lets you try the open-source model "Gemma-3-R1-12B-v1"; honest feedback is welcome! huggingface.co/TheDrummer/Gem… Reddit's #LocalLLaMA…
Built an offline AI podcast generator for Android — 11+ voices, multi-character dialogues, fully offline. Would you use this? Feedback pls 👀 #OfflineAI #AI #LocalLlama #AndroidDev #AIPodcast #AIApps #MachineLearning #TechInnovation #VoiceAI #AICommunity
I'll upload a new video today demonstrating a newly supported platform: Linux. My Flutter plugin now supports Linux for running local AI models. Here's my old Surface Pro 4 with 8 GB of RAM. Thus far, 5 out of 6 platforms supported :) #localllama #llamacpp #flutter
Local LLMs for drafts, routed cloud for the hard parts. JustSimpleChat gives you 200+ models (GPT-5 Pro, Claude Opus 4.1, DeepSeek R1, Grok 4) with citations + reasoning traces. One chat picks the best model per step. justsimple.chat #LocalLLaMA #AI #ChatGPT #DeepSeek
Show HN: HoleLLM Pro Distills Llama-3.1-70B → 3 perfect clauses 67% fewer tokens 100% coherent Full Colab Pro notebook $59 lifetime gumroad.com/l/holellmpro #AI #LLM #LocalLLaMA
🔥 Alibaba dropped Qwen3—GPT-4o rival w/ hybrid reasoning, MoE, 119 langs, 36T tokens! I **tamed the 235B-A22B flagship locally** on 4× RTX 3090 (96 GB VRAM, NVLink) → **Q3_K @ 45 t/s**! Open-weights on HF. Who’s quanting it next? 💪 #Qwen3 #LocalLLaMA @QwenLM…
Which model actually wins your prompt? Run 200+ side-by-side: GPT-5, Claude Opus 4.1, DeepSeek R1, Grok 4 Fast. Smart routing, deep research w/ citations, <100ms start. Free tier. Stop tab-hopping. justsimple.chat #ChatGPT #LocalLLaMA #DeepSeek #Claude
What's happening on r/LocalLLaMA? Frozen for 2 days. @kimmonismus @reach_vb @osanseviero @danielhanchen @bartowski1182 #Reddit #LocalLLaMA #AI #subreddit #blackout
🚀 MiniMax launches M1-80k: a 456B-parameter giant (46B active) with a 1M-token context! Performance rivaling Claude Opus, but not yet runnable locally. The community is waiting: "No GGUF, no go!" 🤖 #IA #LocalLLaMA patb.ca/r/2x0
Open vs closed models for domain-specific Q&A - we tested them so you don't have to 📊 Our findings might surprise you 👇 @Propheusai #AI #LocalLLaMA
📊From our Alchemy benchmarking results: Open-source models are rapidly closing the gap — and in some cases, even outperforming — closed-source counterparts on factual and pattern-based retail enterprise queries. As shown, Qwen 3–235B outperforms Claude Sonnet-4 on these tasks.
【A new era!? #MingLiteOmni lands on Hugging Face!】 inclusionAI's highly multimodal AI model "Ming-Lite-Omni" has been released on Hugging Face! 🎉 It's already trending as a "New Model" in the Reddit #LocalLLaMA community, too!…
Meta celebrates 1 billion downloads of its Llama models! But the #LocalLLaMA community remains divided: is it really "open source"? Are the gaps between releases too long? Local AI is gaining ground on the cloud! #IA #OpenSo patb.ca/r/e91
A bit over 70,000 RMB can run the full DeepSeek R1 — pricier than an AMD EPYC DDR5 build, but the upside is that the Mac Studio has a GPU. Waiting for real-world speed benchmarks. #MacStudio #DeepSeek #localllama
Access #AI from python scripts with #LiteLLM - here for #LocalLLaMA all on prem docs.litellm.ai/docs/providers…
(Link card: docs.litellm.ai — "Ollama | liteLLM": LiteLLM supports all models from Ollama)
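A quick hedged sketch of what calling a locally served Ollama model through LiteLLM looks like from a Python script; the model name and local endpoint are assumptions:

```python
# Route a request to a locally served Ollama model via LiteLLM.
from litellm import completion

response = completion(
    model="ollama/llama3",              # assumed locally pulled model
    messages=[{"role": "user", "content": "Summarize RAG in two sentences."}],
    api_base="http://localhost:11434",  # default Ollama endpoint
)
print(response.choices[0].message.content)
```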
Hopefully the #US will never forget the role of the home hobbyist, garage tinkerer, from #HamRadio to #LocalLLama & #Maker movement as the dynamic creative realm that produces many an entrepreneur in #Tech
So, while #AI #RAG is very useful, how do you prepare a dataset for fine-tuning from a PDF document? Discussion from 9/24: discuss.huggingface.co/t/generate-dat… #LocalLlama
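The linked thread is about generating training pairs from a PDF; here is a rough sketch of the extraction half, assuming pypdf and a JSONL output format (file names and chunk size are made up):

```python
# Extract text from a PDF and emit placeholder fine-tuning records as JSONL.
import json
from pypdf import PdfReader

reader = PdfReader("manual.pdf")  # hypothetical source document
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Naive fixed-size chunking; a real pipeline would split on headings/sentences.
chunks = [text[i:i + 2000] for i in range(0, len(text), 2000)]

with open("dataset.jsonl", "w") as f:
    for chunk in chunks:
        record = {
            "instruction": "Answer using only the excerpt below.",
            "input": chunk,
            "output": "",  # to be filled in by a generator model or by hand
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```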
“Today, we're excited to introduce reasoning in Unsloth!” Home GPU #AI unsloth.ai/blog/r1-reason… #LocalLlama
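For context, the linked blog covers reasoning-style (GRPO) fine-tuning on consumer hardware; below is a minimal, hedged sketch of loading a small model in 4-bit with Unsloth and attaching LoRA adapters so it fits on a home GPU (model name and hyperparameters are assumptions, not taken from the blog):

```python
# Load a small model in 4-bit with Unsloth and attach LoRA adapters.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",  # hypothetical model choice
    max_seq_length=2048,
    load_in_4bit=True,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)
# From here, the blog's recipe would plug this model into a GRPO trainer
# together with a prompt dataset and reward functions.
```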
To run your own Local AI Chatbot #LocalLLama, here is the minimum hardware required to work comfortably: Apple Mac Studio M1 Max, 10 cores, 64 GB RAM, 1 TB SSD (used price around $2000). Or wait for the Mac Studio M4 (the M2 is not worth the upgrade from the M1), which is coming out sometime…
#Local🆕Llama Health Secretary urges people to stay home during #SemanaSanta #SemanaSantaEnCasa @NlSalud @comunicacionNL ➡️bit.ly/2x3m5rU
"What the heck is a llama mum!?" #localllama #llama #weim #topdogphoto #dogsofinstagram ift.tt/1iGPnjK