Nina

@_NeuralNerd

AI Master's Student @KSU_CCIS | Exploring language, vision & reasoning | Sharing what I learn along the way 📚

Pinned

Hey, I’m Nina 👋 Learning my way through AI one paper, one project at a time. Sharing what I learn and the process along the way.


Nina reposted

🛠️ Small Language Models are the Future of Agentic AI. But how do you make LLMs 10× smaller but just as smart? ➡️ Knowledge Distillation.

It’s quite a buzzword.
But let’s break it down — the way I’d explain it to myself.

For instance: Google Gemma was trained by Gemini.…
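To make the idea concrete, here’s a rough PyTorch sketch of the standard distillation objective: the student matches the teacher’s softened logits while still learning from the hard labels. The temperature `T` and mixing weight `alpha` are illustrative picks, not values from the thread.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: the student mimics the teacher's softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the hard-label term
    # Hard targets: the student still learns from the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

The teacher runs frozen (no gradients); only the much smaller student gets updated.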


Nina reposted

LLM token prices are collapsing fast, and the collapse is steepest at the top end.

The least "intelligent" models get about 9× cheaper per year, mid-tier models drop about 40× per year, and the most capable models fall about 900× per year.

It was the same with Moore’s Law: the best…


Nina reposted

Memory in AI agents seems like a logical next step after RAG evolved to agentic RAG.

RAG: one-shot read-only
Agentic RAG: read-only via tool calls
Memory in AI agents: read-and-write via tool calls

Obviously, it's a little more complex than this.

I make my case here:…
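A toy contrast of the three patterns, with a plain dict standing in for a real vector store (names here are hypothetical, not from the post):

```python
memory: dict[str, str] = {}

def rag(query: str) -> str:
    """Classic RAG: one retrieval, baked into the prompt, read-only."""
    return memory.get(query, "no hit")

def read_tool(query: str) -> str:
    """Agentic RAG: the model decides when to call this read-only tool."""
    return memory.get(query, "no hit")

def write_tool(key: str, value: str) -> None:
    """Agent memory: the model can also persist facts across turns."""
    memory[key] = value

write_tool("preferred_stack", "PyTorch")  # the agent writes during a session
print(read_tool("preferred_stack"))       # ...and recalls it on a later turn
```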


Nina reposted

there's a new concept I'm seeing emerging in AI Agents (especially coding agents), which I'll call "harness engineering" - applying context engineering principles to how you use an existing agent

Context engineering -> how context (long or short, agentic or not) is passed to an…


Nina reposted

she actually summarized everything you must know from the “AI Engineering” book in 76 minutes. if you don’t have the time to read the book, you need to watch this.

foundational models, evaluation, prompt engineering, RAG, memory, fine-tuning and many more. great starting point.


Nina reposted

🚨 RIP “Prompt Engineering.”

The GAIR team just dropped Context Engineering 2.0 — and it completely reframes how we think about human–AI interaction.

Forget prompts. Forget “few-shot.” Context is the real interface.

Here’s the core idea:

“A person is the sum of their…


Nina reposted

RAG vs. CAG, clearly explained!

RAG is great, but it has a major problem:

Every query hits the vector database. Even for static information that hasn't changed in months.

This is expensive, slow, and unnecessary.

Cache-Augmented Generation (CAG) addresses this issue by…
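The cheapest version of this idea is just a cache in front of retrieval. Real CAG goes further and preloads the static documents into the model's KV cache, but this toy sketch (hypothetical docs, `functools.lru_cache`) shows why repeated queries over static data shouldn't hit the DB:

```python
from functools import lru_cache

STATIC_DOCS = {
    "return policy": "Returns accepted within 30 days.",
    "shipping": "Ships worldwide in 3-5 business days.",
}

@lru_cache(maxsize=1024)
def retrieve(query: str) -> str:
    # Stand-in for an expensive vector-DB lookup over static content.
    print(f"(expensive lookup for {query!r})")
    return STATIC_DOCS.get(query, "no match")

retrieve("shipping")  # first call pays the lookup cost
retrieve("shipping")  # second call is served from the cache, no DB hit
```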


Nina reposted

Stanford just did something wild. They put their entire graduate-level AI course on YouTube. No paywall, no signup. It’s the exact curriculum Stanford charges $7,570 for ❱❱❱❱ watch free now


The 'Workflow Memory' idea is crucial for production agents. It reinforces that optimizing the retrieval component is the battleground! The IC-RALM paper I studied showed huge gains (12% ➡️ 31% on QA) using off-the-shelf BM25, proving that better retrieval, not model modification, is the real shift.
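For anyone curious what "off-the-shelf BM25" looks like in practice, here's a minimal in-context sketch using the rank_bm25 package (corpus and query are made up; the frozen LM only sees the passage through the prompt):

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

corpus = [
    "BM25 is a bag-of-words ranking function based on term frequencies.",
    "In-context RALM prepends retrieved passages to a frozen LM's prompt.",
    "Workflow memory stores successful agent trajectories for reuse.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

query = "how does in-context retrieval augmentation work"
top_doc = bm25.get_top_n(query.lower().split(), corpus, n=1)[0]

# No model modification: retrieval quality alone changes what the LM sees.
prompt = f"Context: {top_doc}\n\nQuestion: {query}\nAnswer:"
print(prompt)
```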

your AI agent only forgets because you let it.

there is a simple technique that everybody needs, but few actually use, and it can improve the agent by 51.1%.

here's how you can use workflow memory:

you ask your agent to train a simple ML model on your custom CSV data.
— it…
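The mechanics are simple enough to sketch: distill a successful run into a compact recipe, then prepend stored recipes to future prompts. A rough toy version (names and format are mine, not from the post):

```python
workflow_memory: list[str] = []

def record_workflow(task: str, steps: list[str]) -> None:
    """After a task succeeds, save the trajectory as a reusable recipe."""
    numbered = "\n".join(f"  {i + 1}. {s}" for i, s in enumerate(steps))
    workflow_memory.append(f"Task: {task}\n{numbered}")

def build_prompt(task: str) -> str:
    """Prepend stored workflows so the agent reuses past solutions."""
    past = "\n\n".join(workflow_memory) or "(none yet)"
    return f"Known workflows:\n{past}\n\nNew task: {task}"

record_workflow(
    "train a simple ML model on a custom CSV",
    ["load the CSV with pandas", "split train/test",
     "fit a baseline model", "report accuracy"],
)
print(build_prompt("train a classifier on sales.csv"))
```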


