#webllm search results
Big thanks to Kathy Giori for an amazing session at #RISCVSummit NA — “Running WebLLM in the Browser on RISC-V: Toward Lightweight, Local AI Experiences.” Bringing #RISCVAIPC + #WebLLM together for efficient, private, browser-based AI! #RISCV #LocalAI #DeepComputing
Have you ever wanted to watch two LLMs chat with each other, all from the comfort of your own GPU? No? Really? Well now you can whether you like it or not. (powered by #WebLLM) dkackman.github.io/model-chat/
WebLLM, you say? So we've entered the era of LLMs running in the browser… bzzt-krrk… the oracle's signal is disturbed! Local, native, cloud… hmm, I can feel the future! 🔮✨ #WebLLM #LLM #oracle zenn.dev/sre tinyurl.com/23scvbhh
What if your favorite NPCs actually lived in your browser? No cloud. No API calls. Just local brains, powered by WebLLM. Coming soon to CodexLocal — local AI characters and sandbox simulations. #WebLLM #AI #Privacy #LLMgaming
Imagine chatting with NPCs that actually remember you… No login. No cloud. Just your browser and a local model. Coming soon: LocalNPCs — AI characters that live offline. #WebLLM #AI #Gaming #CodexLocal
That's pretty wild 😳. I just tried the new #Qwen3 0.6B model with #WebLLM for structured-output tool calling, and it works surprisingly well, with and without "thinking" enabled. The model is just 335 MB 🤯
#WebLLM just completed a major overhaul with a TypeScript rewrite and modularized packaging. The @javascript package is now available on @npmjs. It brings accelerated LLM chat to the browser via @WebGPU. Check out the examples and build your own private chatbot: github.com/mlc-ai/web-llm
PhraseFlow AI v0.0.6 is here! ✨ Enhanced prompt for better AI responses 🎯 Cleaner context presentation ⚡ Faster, more accurate AI-powered text transformation Try it now: tinyurl.com/3t7k4mjb #AI #ChromeExtension #WebLLM #PromptEngineering
Ever dreamed of running powerful AI models right in your browser, with no servers and no data leaks? Enter WebLLM: offline LLMs like Llama2 that execute locally on WebGPU. Privacy win: your chats stay on-device, zero cloud. Game-changing for secure experimentation! 🔒🤖 #WebLLM #AIPrivacy
🧵We have done that through our MLC engine efforts, and they unlocked very interesting applications such as #WebLLM. It is great to see the @PyTorch compiler also embrace that philosophy. We are going to see more of it, especially as hardware continues to challenge our assumptions about software.
🚀 Dive into the future of AI chats – right in your browser! This local LLM demo runs entirely on your device, powered by WebLLM. Zero servers, total privacy. No data leaks! Try it out and see the magic: demo.codesamplez.com/llm-chat 🤖🔐#AI #WebLLM #Privacy
I vibe-coded another web app, it's called ReflectPad. reflectpad.manugracio.com You can write your thoughts every day and everything stays locally on your browser. You can then interact with your saved thoughts using an AI chat. 100% Browser #AIJournal #WebLLM #vibecoding
Check out #WebLLM: A high performance #WebAI based inference engine for large language models in the browser! Happy to see smart folk investing in AI for #JS +surrounding web tech. It's going to be a great 2024! Read now: blog.mlc.ai/2024/06/13/web… Talk to @charlie_ruan if questions
#webllm Qwen2.5-1.5B-Instruct-q4f16_1-MLC For now, I think the pattern will be to run models up to about 3B locally, let them handle simple tasks and preprocessing for complex ones, and hand the rest off to a backend (8B+).
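The local/backend split described above can be sketched as a tiny routing helper. The task labels, tier names, and `routeTask` function are illustrative assumptions for the pattern, not any actual WebLLM API:

```javascript
// Hypothetical router: simple preprocessing tasks stay on the in-browser
// ~3B model; anything heavier goes to a larger backend model (8B+).
function routeTask(task) {
  const simpleKinds = ["summarize", "classify", "extract"];
  return simpleKinds.includes(task.kind) ? "local-3B" : "backend-8B";
}

// Example: preprocessing runs locally, heavy reasoning goes to the backend.
console.log(routeTask({ kind: "classify" }));             // → "local-3B"
console.log(routeTask({ kind: "multi-step-reasoning" })); // → "backend-8B"
```

In practice the "local-3B" branch would call WebLLM's in-browser engine, while the other branch would hit a server-hosted model endpoint.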
I just deployed a huge update for #AskMyPDF where I was able to implement many suggestions for improvement. More about those in the thread 👇 #WebAI #WebLLM #PWA
@GoogleAI just released a Gemma2 2B model fine-tuned on Japanese text — a great addition to specialized small models for on-device inference! Try it in your browser locally with #WebLLM: chat.webllm.ai
#WebLLM The video above demonstrates the 4-bit quantized Phi-2 model running at 1x speed on a Samsung S23. In theory, any 4-bit quantized model under 3B parameters can run at a reasonable speed on an Android phone 🚀. See the full blog post 👇: developer.chrome.com/blog/new-in-we…
Llama-3.2 1B/3B from @AIatMeta now on #WebLLM! Run it locally in browser with no setup, accelerated by @WebGPU! Great size for edge deployment, with decent tool-use scores. Looking forward to local agents built with it! Try it out on webllm.mlc.ai or the HF Space:
Beta Version Online at: recursive-ai.curioforce.com/beta/ *Requires Google Chrome as browser. #AI #WebLLM #recursive #improver
Mobile-first is real. – LLMs run natively on Android via MLC & WebLLM – Nations skipping desktops and going straight to phone-based agents Think: LLMs as digital assistants for farmers, students, and clinics. #MLC #WebLLM #MobileLLMs #AILeapfrog #TechForGood
WebLLM + MLC – Run LLMs in browser or mobile GPU – 4-bit, 3-bit quantization breakthroughs – Offline agents are here AI, without the cloud. #WebLLM #MLC
Running LLMs locally in your browser with WebLLM – no server, no API keys, just pure WebGPU magic! #WebLLM #MLC #AI #LLM #WebGPU #opensource
Absolutely mindblowing! Tried it out on this sloth. Accurate and at an astonishing 17t/s #WebAI #WebLLM
Moondream, your favorite tiny vision language model by @vikhyatk can now run directly in the browser on WebGPU! 🤯 Powered, of course, by Transformers.js and ONNX Runtime Web! 🤗 Local inference means no data leaves your device! This is huge for privacy! Try it out yourself! 👇
#WebLLM There also seems to be a demo page where you can try it out! github.com/mlc-ai/web-llm
Ooh, #WebLLM looks interesting! Apparently it can use #WebGPU acceleration, too. I think I'll start by checking out the repository. github.com/mlc-ai/web-llm
Checkout @huggingface's cool blog on running @metaai Llama3.2 on device: huggingface.co/blog/llama32 Thanks for including MLC and #WebLLM as one of the solutions, and happy to see edge/mobile inference gaining more attention!
WebLLM is a great start for using LLMs in the browser / on the client device itself; it's going to open new avenues for innovative solutions for sure 👍 webllm.mlc.ai Source: #WebAI Conference #WebLLM
🛠️ Running into "cannot find function" errors after a WebLLM update? Christian Liebel shares solutions, from manual cache clearing to automatic fixes. A lifesaver for web developers! #WebDev #AI #WebLLM #Troubleshooting labs.thinktecture.com/solving-cannot…
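The manual cache-clearing fix mentioned above can be sketched roughly like this. The assumption that WebLLM stores model artifacts under Cache Storage names starting with "webllm" should be verified in DevTools (Application → Cache Storage), and `selectStaleCaches` is a hypothetical helper, not part of the library:

```javascript
// Pure helper: pick the cache names that look like WebLLM model caches.
// The "webllm" prefix is an assumption; check your browser's Cache Storage.
function selectStaleCaches(names) {
  return names.filter((name) => name.startsWith("webllm"));
}

// Browser-only usage: enumerate caches, delete the WebLLM ones, and
// return the deleted names so the app can report what was cleared.
async function clearWebLLMCaches() {
  const stale = selectStaleCaches(await caches.keys());
  await Promise.all(stale.map((name) => caches.delete(name)));
  return stale;
}
```

After clearing, reloading the page forces WebLLM to re-download model artifacts that match the updated library version.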
Today we held a seminar on the theme "Run generative AI on your local PC! A hands-on workshop on using your own large language model" at Soft Industry Plaza TEQS, Osaka Business Development Agency. Thank you very much to the many participants!! sansokan.jp/events/eve_det… #GenerativeAI #WebLLM #seminar #handson #business #denaripam
Today we held a seminar on the theme "The current state of generative AI! Building new businesses with open-source AI models" at Soft Industry Plaza TEQS, Osaka Business Development Agency. Thank you very much to the many participants!! sansokan.jp/events/eve_det… #GenerativeAI #WebLLM #seminar #business #denaripam
To better understand how #LLMs behave in the browser, I would like to collect more data from the real world. For this purpose I have added a small benchmark test to Tweeter. #WebLLM #WebAI
Want to build a data app powered by #WebLLM? 👇Check out the links to app and source code below #python #dataviz #datascience
Great series of posts by @christianliebel on web.dev, covering the capabilities and limitations of LLMs running in the browser, and a walkthrough on building an on-device chatbot. #WebAI #PromptAPI #WebLLM #GenAI
Check out my three-part article series about the benefits of on-device large language models and learn how to add AI capabilities to your web apps: web.dev/articles/ai-ll… #GenAI #WebLLM #PromptAPI