#SmallLLM search results
Home-lab gap: Mac Studio has the bandwidth but no CUDA; DGX Spark has CUDA but not the bandwidth. Both have big unified memory for inference, but 7–14B fine-tuning is still bandwidth-bound. We need Studio-class bandwidth plus CUDA in one box. #SmallLLM #EdgeLLM #LoRA #QLoRA…
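The bandwidth-bound fine-tuning this post refers to typically means QLoRA. A minimal sketch of such a run, assuming Hugging Face transformers + peft + bitsandbytes; the base model and hyperparameters are illustrative assumptions, not from the post:

```python
# Hedged sketch: QLoRA fine-tuning setup for a ~7B model.
# Model name and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                  # keep frozen weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",        # assumed 7B base; any causal LM works
    quantization_config=bnb,
    device_map="auto",
)
lora = LoraConfig(r=16, lora_alpha=32,
                  target_modules=["q_proj", "v_proj"],
                  lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()      # only the low-rank adapters train
```

Even with the base weights quantized, every step still streams all 7B+ parameters through memory, which is why the post calls this bandwidth-bound.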
💡 What if building an AI assistant didn’t need billions or NDAs — just curiosity, a GPU & $100? Discover how NanoChat reshapes the cost-barrier of AI creation 🔍 👉 medium.com/@rogt.x1997/th… #SmallLLM #AITraining #OpenSourceAI
Hugging Face's newest SmolLM3 3B is now live on our WebAI platform and locally runnable inside your users' browsers. #SmallLLM
$100 training run. 1.9B parameters. 38B tokens. Four training stages. Control your AI instead of renting it. ⚙️ #SmallLLM #DIYAI #AIOwnership medium.com/p/the-100-shoc…
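For context (an observation, not from the post): 38B tokens on 1.9B parameters is about 20 tokens per parameter, the ratio the Chinchilla scaling work suggests is roughly compute-optimal.

```python
# Back-of-envelope check of the post's numbers, assuming Chinchilla's
# ~20 tokens-per-parameter rule of thumb for compute-optimal training.
params = 1.9e9
tokens = 38e9
print(tokens / params)  # -> 20.0, matching the Chinchilla-optimal ratio
```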
📽️ Watch this! An AI server running privately on my home network. Not only is this possible (and relatively easy), it's also fast, secure, and incredibly useful! Better than creepy Alexa - cleverer and less eavesdroppy. 😜 #SmallLLM #PrivateAI @OLLAMA
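A minimal sketch of querying a local Ollama server like the one in the video, using only the Python standard library; the model name is an assumption (any model pulled with `ollama pull` works):

```python
# Hedged sketch: calling a private Ollama server on the home network.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=json.dumps({
        "model": "llama3.2",                 # assumed small local model
        "prompt": "Why run an LLM at home?",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```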
Is it possible to separate the reasoning and memory parts of an LLM? If so, we could build personalized LLMs much more cheaply. Think of it as a person who doesn't have much memory but is quite a reasonable guy. #ML #Training #SmallLLM #AI
A fine-tuned small LLM delivers faster, more accurate, and cost-effective user assistance—tailored to your product’s needs. 🚀 zurl.co/c8fUN #AIAssistant #LLM #SmallLLM #FineTuning #UserAssistance #TechInnovation #DigitalTransformation #AI #MachineLearning
SmolLM offers a remarkably small AI at 360 million parameters (the standard is billions). "Can small models achieve impressive results?" It's right 30% of the time, every time. huggingface.co/spaces/Hugging… #SmolLM #SmallLLM #SLM #SmallLanguageModel Info: huggingface.co/blog/smollm
I am hopeful about this new ChatGPT AI angle for LLMs, but I can tell you that I successfully ran TinyLlama 1.1B on a Raspberry Pi 5 at quite a fast speed; the model is only a 638 MB download. #SmallLLM #SLM #TinyLlama #1BitLLM #1BitAI #TinyAI #TinyLLM github.com/jzhang38/TinyL…
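A 638 MB download suggests a ~4-bit GGUF quantization of the 1.1B model. A minimal sketch of loading one with llama-cpp-python, roughly how such a model runs fast on a Pi 5; the file name and settings are assumptions, not from the post:

```python
# Hedged sketch: running a ~4-bit quantized TinyLlama GGUF locally.
# The model path and sampling settings are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(model_path="tinyllama-1.1b-chat-q4_k_m.gguf", n_ctx=2048)
out = llm("Q: Name a tiny LLM. A:", max_tokens=32)
print(out["choices"][0]["text"])
```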
✨ Microsoft's 1-bit era paper (released in Feb) is really a masterpiece. BitNet b1.58 70B was 4.1× faster and delivered 8.9× higher throughput than the corresponding FP16 LLaMA. 📌 It requires almost no multiplications for matrix multiplication and can be highly…
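A toy illustration (not the paper's kernel) of why ternary 1.58-bit weights need almost no multiplications: with weights restricted to {-1, 0, +1}, a matrix-vector product reduces to additions and subtractions.

```python
# Hedged sketch: matvec with ternary weights, as in BitNet b1.58.
# Toy numbers; real kernels pack weights and vectorize the adds.
import numpy as np

W = np.array([[1, -1, 0], [0, 1, 1]])   # ternary weight matrix
x = np.array([0.5, 2.0, -1.0])

y = np.zeros(W.shape[0])
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        if W[i, j] == 1:
            y[i] += x[j]                 # add instead of multiply
        elif W[i, j] == -1:
            y[i] -= x[j]                 # subtract instead of multiply
print(y, W @ x)                          # both give the same result
```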