Big thanks to Kathy Giori for an amazing session at #RISCVSummit NA — “Running WebLLM in the Browser on RISC-V: Toward Lightweight, Local AI Experiences.” Bringing #RISCVAIPC + #WebLLM together for efficient, private, browser-based AI! #RISCV #LocalAI #DeepComputing


Have you ever wanted to watch two LLMs chat with each other, all from the comfort of your own GPU? No? Really? Well, now you can, whether you like it or not. (Powered by #WebLLM) dkackman.github.io/model-chat/


What if your favorite NPCs actually lived in your browser? No cloud. No API calls. Just local brains, powered by WebLLM. Coming soon to CodexLocal — local AI characters and sandbox simulations. #WebLLM #AI #Privacy #LLMgaming


Imagine chatting with NPCs that actually remember you… No login. No cloud. Just your browser and a local model. Coming soon: LocalNPCs — AI characters that live offline. #WebLLM #AI #Gaming #CodexLocal


PhraseFlow AI v0.0.6 is here! ✨ Enhanced prompt for better AI responses 🎯 Cleaner context presentation ⚡ Faster, more accurate AI-powered text transformation Try it now: tinyurl.com/3t7k4mjb #AI #ChromeExtension #WebLLM #PromptEngineering


#WebLLM just completed a major overhaul with a TypeScript rewrite and modularized packaging. The @javascript package is now available on @npmjs. Brings accelerated LLM chats to the browser via @WebGPU. Check out the examples and build your own private chatbot github.com/mlc-ai/web-llm

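For reference, a minimal sketch of the npm package in use, following the repo README's pattern (the model ID is one of WebLLM's prebuilt IDs and may have changed since; check the current model list):

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// First run downloads and compiles the model, then caches it in the browser.
const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC", {
  initProgressCallback: (report) => console.log(report.text),
});

// OpenAI-style chat completion, executed on the local GPU via WebGPU.
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Hello! Who are you?" }],
});
console.log(reply.choices[0].message.content);
```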

That's pretty wild😳. I just tried the new #Qwen3 0.6B model with #WebLLM for structured-output tool-calling. And it works surprisingly well. With and without "thinking" enabled. The model is just 335MB🤯

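A sketch of how that structured-output setup can look with WebLLM's OpenAI-compatible JSON mode; the Qwen3 model ID below is an assumption, so check WebLLM's prebuilt model list for the exact string:

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Hypothetical ID for the 0.6B Qwen3 build mentioned above.
const engine = await CreateMLCEngine("Qwen3-0.6B-q4f16_1-MLC");

const res = await engine.chat.completions.create({
  messages: [
    { role: "system", content: 'Reply only with JSON like {"tool": string, "args": object}.' },
    { role: "user", content: "What's the weather in Berlin?" },
  ],
  // Constrains decoding to valid JSON, which is what makes
  // tool-calling with a tiny model reliable.
  response_format: { type: "json_object" },
});
console.log(JSON.parse(res.choices[0].message.content ?? "{}"));
```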

Ever dreamed of running powerful AI models right in your browser—no servers, no data leaks? Enter WebLLM: offline LLMs like Llama2 that execute locally on WebGPU. Privacy win: your chats stay on-device, zero cloud. Game-changing for secure experimentation!🔒🤖 #WebLLM #AIPrivacy


🚀 Dive into the future of AI chats – right in your browser! This local LLM demo runs entirely on your device, powered by WebLLM. Zero servers, total privacy. No data leaks! Try it out and see the magic: demo.codesamplez.com/llm-chat 🤖🔐#AI #WebLLM #Privacy


I vibe-coded another web app, it's called ReflectPad. reflectpad.manugracio.com You can write down your thoughts every day and everything stays local in your browser. You can then interact with your saved thoughts using an AI chat. 100% browser. #AIJournal #WebLLM #vibecoding


#webllm Qwen2.5-1.5B-Instruct-q4f16_1-MLC Currently, I think this will be used to run models up to about 3B locally, letting them handle simple tasks and do preprocessing for complex ones, and throwing the rest to a backend (8B+).

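That split is easy to prototype. A rough sketch of the idea, with a naive length heuristic and a hypothetical /api/llm backend endpoint standing in for a real task classifier and server API:

```ts
import { CreateMLCEngine, MLCEngine } from "@mlc-ai/web-llm";

const LOCAL_MODEL = "Qwen2.5-1.5B-Instruct-q4f16_1-MLC"; // the build named above

async function answer(engine: MLCEngine, prompt: string): Promise<string> {
  // Placeholder heuristic: treat short prompts as "simple" local work.
  // A real router might use the local model itself to classify the task.
  if (prompt.length < 400) {
    const res = await engine.chat.completions.create({
      messages: [{ role: "user", content: prompt }],
    });
    return res.choices[0].message.content ?? "";
  }
  // Everything else goes to a larger (8B+) model on a hypothetical backend.
  const res = await fetch("/api/llm", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  return (await res.json()).text;
}

const engine = await CreateMLCEngine(LOCAL_MODEL);
console.log(await answer(engine, "Summarize: WebGPU brings GPU compute to the web."));
```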

Check out #WebLLM: a high-performance #WebAI inference engine for large language models in the browser! Happy to see smart folk investing in AI for #JS + surrounding web tech. It's going to be a great 2024! Read now: blog.mlc.ai/2024/06/13/web… Talk to @charlie_ruan if you have questions


I just deployed a huge update for #AskMyPDF where I was able to implement many suggestions for improvement. More about those in the thread 👇 #WebAI #WebLLM #PWA

Hey everyone, get ready for another cool #WebAI side project 🎉 Over the last few weeks I spent a lot of time building a #RAG solution directly in the browser, without dependencies on any AI cloud provider. And today I am proud to share it with you😊



Absolutely mind-blowing! Tried it out on this sloth. Accurate, and at an astonishing 17 t/s #WebAI #WebLLM


Moondream, your favorite tiny vision language model by @vikhyatk can now run directly in the browser on WebGPU! 🤯 Powered, of course, by Transformers.js and ONNX Runtime Web! 🤗 Local inference means no data leaves your device! This is huge for privacy! Try it out yourself! 👇



#OpenAI and #Google don’t want us to know it. Instead of paying OpenAI or Google Gemini for LLM API calls, innovative #webllm empowers us to run small language models (SLMs) locally—directly in users' browsers. Efficient. Secure. Free. For B2C digital products relying heavily…


@GoogleAI just released a Gemma2 2B model fine-tuned on Japanese text — a great addition to specialized small models for on-device inference! Try it in your browser locally with #WebLLM: chat.webllm.ai


WebLLM is a great start for using LLMs in the browser/on the client device itself, going to open new avenues for innovative solutions for sure 👍 webllm.mlc.ai Source: #WebAI Conference #WebLLM


🧵We have done that through our MLC engine efforts, and they unlocked very interesting applications such as #WebLLM. It is great to see the @PyTorch compiler also embrace that philosophy. We are going to see more of it, especially as hardware continues to challenge our assumptions about software.



Beta version online at: recursive-ai.curioforce.com/beta/ *Requires Google Chrome as the browser. #AI #WebLLM #recursive #improver
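The Chrome requirement is really a WebGPU requirement. A small sketch of the feature check a page like this can run before loading any model (cast through any because WebGPU typings live in the separate @webgpu/types package):

```ts
// Guard before initializing WebLLM: WebGPU must be present and usable.
async function hasWebGPU(): Promise<boolean> {
  const gpu = (navigator as any).gpu;
  if (!gpu) return false;                  // API not exposed by this browser
  const adapter = await gpu.requestAdapter();
  return adapter !== null;                 // null means no suitable GPU/driver
}

if (!(await hasWebGPU())) {
  console.warn("WebGPU unavailable; try a recent Chrome or Edge.");
}
```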


Mobile-first is real. – LLMs run natively on Android via MLC & WebLLM – Nations are skipping desktops and going straight to phone-based agents. Think: LLMs as digital assistants for farmers, students, and clinics. #MLC #WebLLM #MobileLLMs #AILeapfrog #TechForGood


WebLLM + MLC – Run LLMs in the browser or on a mobile GPU – 4-bit, 3-bit quantization breakthroughs – Offline agents are here. AI, without the cloud. #WebLLM #MLC



Running LLMs locally in your browser with WebLLM – no server, no API keys, just pure WebGPU magic! #WebLLM #MLC #AI #LLM #WebGPU #opensource


[WebLLM tech deep-dive 🔍] A discussion of "WebLLM", an AI model that runs in the browser. - No API usage fees - Security: no data is sent off-device - The limits of local compute We're also exploring a use case for analyzing camera footage locally! #WebLLM #AITech (3/4)👇



Looks like there's also a demo page where you can try out #WebLLM! github.com/mlc-ai/web-llm


Oh, this #WebLLM thing has caught my eye! Apparently it can also use acceleration via #WebGPU. I think I'll start by taking a look at the repository. github.com/mlc-ai/web-llm




Check out @huggingface's cool blog on running @metaai Llama3.2 on device: huggingface.co/blog/llama32 Thanks for including MLC and #WebLLM as one of the solutions, and happy to see edge/mobile inference gaining more attention!


Today we held a seminar at Soft Industry Plaza TEQS (Osaka Business Development Agency) on the theme "Run generative AI on your local PC! A hands-on workshop on putting your own large language model to work". Thank you very much to everyone who attended!! sansokan.jp/events/eve_det… #GenerativeAI #WebLLM #Seminar #HandsOn #Business #denaripam


Today we held a seminar at Soft Industry Plaza TEQS (Osaka Business Development Agency) on the theme "The state of generative AI today! Building new businesses with open-source AI models". Thank you very much to everyone who attended!! sansokan.jp/events/eve_det… #GenerativeAI #WebLLM #Seminar #Business #denaripam


To better understand how #LLMs behave in the browser, I would like to collect more data from the real world. For this purpose I have added a small benchmark test to Tweeter. #WebLLM #WebAI


🛠️ Running into "cannot find function" errors after a WebLLM update? Christian Liebel shares solutions, from manually clearing the cache to automatic fixes. A lifesaver for every web developer! #WebDev #AI #WebLLM #Debugging labs.thinktecture.com/solving-cannot…

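The manual fix from the article amounts to clearing the cached model artifacts so the updated library re-fetches compatible ones. A hedged sketch; the "webllm" cache-name prefix is an assumption, so check DevTools > Application > Cache Storage for the real names:

```ts
// Delete Cache Storage entries that look like WebLLM model/wasm caches,
// forcing a clean re-download after a library update.
async function clearWebLLMCaches(): Promise<void> {
  const names = await caches.keys();
  await Promise.all(
    names
      .filter((name) => name.startsWith("webllm")) // assumed prefix
      .map((name) => caches.delete(name)),
  );
}
await clearWebLLMCaches();
```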

Want to build a data app powered by #WebLLM? 👇Check out the links to the app and source code below #python #dataviz #datascience


Great series of posts by @christianliebel on web.dev, covering the capabilities and limitations of LLMs running in the browser, and a walkthrough on building an on-device chatbot. #WebAI #PromptAPI #WebLLM #GenAI


Check out my three-part article series about the benefits of on-device large language models and learn how to add AI capabilities to your web apps: web.dev/articles/ai-ll… #GenAI #WebLLM #PromptAPI
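For contrast with the WebLLM snippets above, a sketch of the built-in Prompt API route that series covers. The API surface has shifted across Chrome releases, so treat the LanguageModel global and its methods below as assumptions tied to recent versions:

```ts
// Experimental API; typings are not in lib.dom yet, so declare it loosely.
declare const LanguageModel: any;

// Chrome's built-in on-device model via the Prompt API.
const availability = await LanguageModel.availability();
if (availability !== "unavailable") {
  const session = await LanguageModel.create();
  console.log(await session.prompt("One-line haiku about browsers, please."));
}
```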


