
Ron Clabo

@dotnetcore

Founder GiftOasis LLC. Tweets about software development: .NET, AI, ASP.NET Core, C# & Lucene.NET. Top 1% on Stack Overflow. I love learning & helping others.

"This is the golden age of AI. This is the best time in human history to be an AI builder," says Andrew Ng. I tend to agree.


Starting in Visual Studio 2026, the IDE is decoupled from the toolchain version. This allows for monthly, stable updates to the IDE without disrupting the toolchain. It's the best of both worlds. I'm gonna love the MORE FREQUENT IDE productivity improvements! #dotnetconf


Microsoft Agent Framework is the next gen AI framework for .NET #dotnetconf


A GitHub AI agent for creating unit tests for legacy C# code is coming. Great use for an AI agent. #dotnetconf

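For a sense of what such an agent would produce, here's a minimal hand-written sketch of a characterization test for legacy code, in plain xUnit. The LegacyInvoiceCalculator class and its rounding behavior are hypothetical stand-ins, not anything the GitHub agent actually emits.

```csharp
using Xunit;

public class LegacyInvoiceCalculatorTests
{
    // Characterization test: pins down the current behavior of the legacy
    // method so later refactoring can be verified against it.
    [Fact]
    public void CalculateTotal_AddsTaxAndRoundsToCents()
    {
        var calculator = new LegacyInvoiceCalculator();

        Assert.Equal(108.00m, calculator.CalculateTotal(100.00m, 0.08m));
        Assert.Equal(0.00m, calculator.CalculateTotal(0.00m, 0.08m));
        Assert.Equal(21.24m, calculator.CalculateTotal(19.99m, 0.0625m));
    }
}

// Hypothetical legacy code under test, included only so the sketch compiles.
public class LegacyInvoiceCalculator
{
    public decimal CalculateTotal(decimal subtotal, decimal taxRate)
        => decimal.Round(subtotal * (1 + taxRate), 2);
}
```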

Ron Clabo reposted

Mix .NET with Java, Python & beyond? Yes, please! 🎉 msft.it/6017tHgLF Aspire Polyglot lets you build multi-language apps without the headaches. Dive into the future of #WebDev 👉 msft.it/6012tHgLA 🔥 Flexible. Fast. Fun.

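A rough idea of what an Aspire polyglot app host looks like, mixing a C# project with a Python service. This is a sketch against the preview Aspire.Hosting.Python integration; treat AddPythonApp, the Projects.Api reference, and the paths as assumptions that may differ in your Aspire version.

```csharp
// Aspire app host Program.cs: orchestrates services written in different languages.
// Assumes the Aspire.Hosting and Aspire.Hosting.Python packages; exact method
// names are based on the current preview APIs and may change.
var builder = DistributedApplication.CreateBuilder(args);

// A regular .NET project in the same solution (the generated Projects.Api is assumed).
var api = builder.AddProject<Projects.Api>("api");

// A Python app living next to the solution; AddPythonApp comes from Aspire.Hosting.Python.
var scorer = builder.AddPythonApp("scorer", "../scorer", "main.py")
                    .WithHttpEndpoint(env: "PORT");

// Let the .NET API discover the Python service by name (illustrative).
api.WithReference(scorer);

builder.Build().Run();
```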

Ron Clabo reposted

Our new Gemini 2.5 Computer Use model is now available in the Gemini API, setting a new standard on multiple benchmarks with lower latency. These are early days, but the model’s ability to interact with the web – like scrolling, filling forms + navigating dropdowns – is an…


Really great commentary on Microsoft's recent in-context learning (ICL) paper. Good insights on how many shots are needed for "few-shot" prompting.

Cool paper from Microsoft. And it's on the very important topic of in-context learning. So what's new? Let's find out:

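Since the paper's question is how many in-context examples ("shots") few-shot prompting really needs, here's a small, self-contained C# sketch of how a few-shot prompt is typically assembled. The ChatTurn record is a stand-in for whatever chat-message type your SDK uses, and the sentiment examples and shotCount cutoff are purely illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Stand-in for an SDK chat message type (role + content pairs).
record ChatTurn(string Role, string Content);

static class FewShotPrompt
{
    // Builds a few-shot prompt: a system instruction, then `shotCount`
    // worked examples as user/assistant turns, then the real query.
    public static List<ChatTurn> Build(
        string instruction,
        IReadOnlyList<(string Input, string Output)> examples,
        string query,
        int shotCount)
    {
        var turns = new List<ChatTurn> { new("system", instruction) };

        // The ICL question in a nutshell: how large does shotCount need
        // to be before accuracy stops improving?
        foreach (var (input, output) in examples.Take(shotCount))
        {
            turns.Add(new("user", input));
            turns.Add(new("assistant", output));
        }

        turns.Add(new("user", query));
        return turns;
    }
}

class Demo
{
    static void Main()
    {
        var examples = new List<(string, string)>
        {
            ("Review: \"Loved it!\"", "positive"),
            ("Review: \"Total waste of money.\"", "negative"),
            ("Review: \"It was fine, nothing special.\"", "neutral"),
        };

        var prompt = FewShotPrompt.Build(
            "Classify the sentiment of each review as positive, negative, or neutral.",
            examples,
            "Review: \"Arrived broken, but support replaced it quickly.\"",
            shotCount: 2);

        foreach (var turn in prompt)
            Console.WriteLine($"{turn.Role}: {turn.Content}");
    }
}
```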


Different AI models have different strengths, weaknesses, biases, and safety profiles. Blending them to achieve an optimal user experience is nontrivial.
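One practical way to blend models is a router that picks a model per request class and falls back when a call fails. A minimal sketch; the IModelClient interface, the TaskKind categories, and the routing rules are all hypothetical, not any particular vendor's API.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical abstraction over any chat model endpoint.
public interface IModelClient
{
    string Name { get; }
    Task<string> CompleteAsync(string prompt);
}

public enum TaskKind { Code, LongContext, Sensitive, General }

// Routes each request to the model whose strengths fit the task,
// and falls back to a default model if the preferred one fails.
public class ModelRouter
{
    private readonly IReadOnlyDictionary<TaskKind, IModelClient> _preferred;
    private readonly IModelClient _fallback;

    public ModelRouter(IReadOnlyDictionary<TaskKind, IModelClient> preferred, IModelClient fallback)
    {
        _preferred = preferred;
        _fallback = fallback;
    }

    public async Task<string> CompleteAsync(TaskKind kind, string prompt)
    {
        var model = _preferred.TryGetValue(kind, out var m) ? m : _fallback;
        try
        {
            return await model.CompleteAsync(prompt);
        }
        catch (Exception)
        {
            // Blend in an availability/safety fallback rather than failing the user.
            return await _fallback.CompleteAsync(prompt);
        }
    }
}
```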


How important is Context Engineering to leveraging AI? Even humans can’t be intelligent without good context. Full stop. Read that again.


“What techniques are you using to customize your AI systems?” Amplify, a VC firm, surveyed 500 software/AI engineers. Here are the results:


Google DeepMind’s research found vector search hits a scalability limit due to fixed embedding dimensions, underperforming BM25 in most cases. Time to explore hybrid approaches.

Google DeepMind Finds a Fundamental Bug in RAG: Embedding Limits Break Retrieval at Scale Google DeepMind's latest research uncovers a fundamental limitation in Retrieval-Augmented Generation (RAG): embedding-based retrieval cannot scale indefinitely due to fixed vector…

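A common hybrid approach: run BM25 (e.g. via Lucene.NET) and vector search separately, then merge the two rankings with reciprocal rank fusion. A minimal, self-contained sketch of just the fusion step; the document IDs are made up, k = 60 is the conventional default, and the two input rankings are assumed to come from your existing lexical and vector indexes.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class HybridSearch
{
    // Reciprocal rank fusion: each ranking contributes 1 / (k + rank) per document,
    // so documents that rank well in either list float to the top of the merged list.
    public static List<(string DocId, double Score)> ReciprocalRankFusion(
        IEnumerable<IReadOnlyList<string>> rankings, int k = 60)
    {
        var scores = new Dictionary<string, double>();

        foreach (var ranking in rankings)
        {
            for (int rank = 0; rank < ranking.Count; rank++)
            {
                var docId = ranking[rank];
                scores[docId] = scores.GetValueOrDefault(docId) + 1.0 / (k + rank + 1);
            }
        }

        return scores.OrderByDescending(kv => kv.Value)
                     .Select(kv => (kv.Key, kv.Value))
                     .ToList();
    }
}

class Demo
{
    static void Main()
    {
        // Hypothetical top results from a BM25 (lexical) index and a vector (embedding) index.
        var bm25Ranking   = new[] { "doc7", "doc2", "doc9", "doc4" };
        var vectorRanking = new[] { "doc2", "doc5", "doc7", "doc1" };

        foreach (var (docId, score) in HybridSearch.ReciprocalRankFusion(new[] { bm25Ranking, vectorRanking }))
            Console.WriteLine($"{docId}: {score:F4}");
    }
}
```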


"In the blink of a cosmic eye, we passed the Turing test. ... And yet the moment passed with little fanfare, or even recognition." - Mustafa Suleyman

