
Marc-André Moreau

@awakecoding

Remote desktop protocol expert, OSS contributor and Microsoft MVP. I love designing products with Rust, C# and PowerShell. Proud to be CTO at Devolutions. 🇨🇦

Marc-André Moreau reposted this post

Run Mistral Large 3 on Ollama's cloud: ollama run mistral-large-3:675b-cloud

Mistral Large 3 debuts as the #1 open source coding model on the @arena leaderboard. We'd love for you to try it! More on coding in a few days... 👀
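For anyone who wants to script against it rather than chat interactively: a minimal PowerShell sketch, assuming the cloud model is reachable through the local Ollama daemon's standard REST API once you're signed in to Ollama's cloud. The prompt is just an illustration.

# Minimal sketch: call the cloud model through the local daemon's REST API.
# Assumes the daemon is running and signed in to Ollama's cloud.
$body = @{
    model  = "mistral-large-3:675b-cloud"
    prompt = "Write a short Rust function that reverses a string."
    stream = $false
} | ConvertTo-Json

$result = Invoke-RestMethod -Method Post -Uri "http://localhost:11434/api/generate" `
    -ContentType "application/json" -Body $body
$result.response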



Marc-André Moreau reposted this post

The Copilot CLI is now available via winget 🪟

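If you want to grab it the same way, a hedged sketch; I haven't verified the exact package ID, so search first and install whatever ID comes back.

# Find the package first; the ID below is a guess, not verified.
winget search "copilot cli"
winget install --id GitHub.CopilotCLI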

What's the simplest zero-config alternative to LocalDB for launching an ASP.NET Kestrel application in WSL from Visual Studio 2026? LocalDB works well on the Windows host, but it's not supported in the Linux guest. I wonder if Visual Studio can launch containers easily?

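The closest thing I can sketch (assumptions flagged in the comments): SQL Server in a Linux container isn't strictly zero-config, but it's one docker run away and reachable from Kestrel in WSL over localhost; SQLite is the truly zero-config option if swapping the EF Core provider is acceptable.

# Spin up SQL Server in a container (password below is a placeholder).
docker run -d --name dev-sql `
    -e "ACCEPT_EULA=Y" `
    -e "MSSQL_SA_PASSWORD=Dev_Passw0rd!" `
    -p 1433:1433 `
    mcr.microsoft.com/mssql/server:2022-latest

# Hypothetical connection string for appsettings.Development.json:
# Server=localhost,1433;Database=MyAppDev;User Id=sa;Password=Dev_Passw0rd!;TrustServerCertificate=True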

I just realized many Claude Code users internally had no idea they could simply resume past chat sessions, and either started from scratch or generated markdown files to "save state". This is automatic and easily discoverable in GitHub Copilot in VSCode.

Advent of Claude Day 4 - Session Management

Accidentally closed your terminal? Laptop died? No problem.

claude --continue → picks up your last conversation instantly
claude --resume → shows a picker to choose any past session

Context preserved. Momentum restored.



Why is it that every time I dare try the Copilot button in Outlook, I *instantly* hit a limitation that makes it useless? Apparently searching in custom folders is not supported, so it can't find the emails I want; they're just invisible to Copilot.


Is there a way in Outlook mobile to *select* emails from search results? I've got lots of emails matching very specific patterns that I'd like to delete, but apparently all I can do is open each one individually.

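The workaround I'd sketch from the desktop instead, assuming the Microsoft.Graph PowerShell module and Mail.ReadWrite consent; the address and subject filter are placeholders.

# Find matching messages and delete them in bulk via Microsoft Graph.
Connect-MgGraph -Scopes "Mail.ReadWrite"
$toDelete = Get-MgUserMessage -UserId "user@example.com" -All `
    -Filter "startsWith(subject,'[nightly report]')"
foreach ($msg in $toDelete) {
    Remove-MgUserMessage -UserId "user@example.com" -MessageId $msg.Id
}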

Claude Code on Windows needs a LOT of work...



Marc-André Moreau reposted this post

Markdown Monster 4.0 is out. Many new features & improvements:

* Integrated LLM Chat interface
* .NET 10 Runtime
* Improved ARM64 support
* Many Mermaid graph improvements
* Support for Font ligatures
* Many small UI improvements

Check it out: markdownmonster.west-wind.com #markdown


I've been using Ollama cloud for a month; I like being able to try some of the latest open-source models without using my local hardware resources. I can also connect it to GitHub Copilot in VSCode through custom models.

Ministral 3 is now also available on Ollama's cloud: 14B: ollama run ministral-3:14b-cloud 8B: ollama run ministral-3:8b-cloud 3B: ollama run ministral-3:3b-cloud ollama.com/library/minist…
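For context on the "custom models" part: editor integrations generally talk to the local daemon's OpenAI-compatible endpoint, so here's a hedged sanity check that a cloud tag answers there. The model name and prompt are just examples.

# Sanity-check a cloud model via the local daemon's OpenAI-compatible API.
$body = @{
    model    = "ministral-3:8b-cloud"
    messages = @(@{ role = "user"; content = "Give me a one-line summary of what you are." })
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post -Uri "http://localhost:11434/v1/chat/completions" `
    -ContentType "application/json" -Body $body |
    ForEach-Object { $_.choices[0].message.content }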



Marc-André Moreau reposted this post

Mistral Large 3 is now available in Microsoft Foundry, delivering frontier-level instruction reliability, long-context comprehension, and multimodal reasoning with full Apache 2.0 openness. 🏢 Built for real enterprise workloads 🤖 Optimized for production assistants, RAG…


Marc-André Moreau reposted this post

Introducing the Mistral 3 family of models: Frontier intelligence at all sizes. Apache 2.0. Details in 🧵


Marc-André Moreau reposted this post

Mistral 3 is now available on Ollama v0.13.1 (currently in pre-release on GitHub).

14B: ollama run ministral-3:14b
8B: ollama run ministral-3:8b
3B: ollama run ministral-3:3b

Please update to the latest Ollama.


Introducing the Mistral 3 family of models: Frontier intelligence at all sizes. Apache 2.0. Details in 🧵



Marc-André Moreau reposted this post

NEW: @MistralAI releases Mistral 3, a family of multimodal models, including three state-of-the-art dense models (3B, 8B, and 14B) and Mistral Large 3 (675B, 41B active). All Apache 2.0! 🤗 Surprisingly, the 3B is small enough to run 100% locally in your browser on WebGPU! 🤯


Ok this is really awesome - Ministral 3B WebGPU with live video inferencing. The demo page literally downloads the model in the browser and it just... works. Try it out! huggingface.co/spaces/mistral…


Marc-André Moreau reposted this post

🎉 Congratulations to the Mistral team on launching the Mistral 3 family! We’re proud to share that @MistralAI, @NVIDIAAIDev, @RedHat_AI, and vLLM worked closely together to deliver full Day-0 support for the entire Mistral 3 lineup. This collaboration enabled: • NVFP4…


Introducing the Mistral 3 family of models: Frontier intelligence at all sizes. Apache 2.0. Details in 🧵




