
Ishan Gupta

@code_igx

25 🇮🇳, Hustler @RITtigers NY 🇺🇸 | RnD on Quantum AI, Superintelligence & Systems | Ex- @Broadcom @VMware

Ishan Gupta reposted this post

Google literally just made a model that learns from its mistakes.


Ishan Gupta reposted this post

Transformers Can Reprogram Themselves. A New Paper Explains the Mind-Blowing Trick. 1/12 You know that magical feeling when an AI like ChatGPT learns a new skill instantly, just from a few examples in your prompt? It's not magic. And it's not "learning" in the way you think.…


Ishan Gupta reposted this post

Context Engineering Template for AI Agents! A complete system for comprehensive context engineering. Includes documentation, examples, rules, and patterns. 100% open-source.


Ishan Gupta reposted this post

Holy shit. MIT just built an AI that can rewrite its own code to get smarter 🤯 It’s called SEAL (Self-Adapting Language Models). Instead of humans fine-tuning it, SEAL reads new info, rewrites it in its own words, and runs gradient updates on itself, literally performing…


Ishan Gupta reposted this post

Researchers from Meta built a new RAG approach that: - outperforms LLaMA on 16 RAG benchmarks. - has 30.85x faster time-to-first-token. - handles 16x larger context windows. - and it utilizes 2-4x fewer tokens. Here's the core problem with a typical RAG setup that Meta solves:…


Ishan Gupta reposted this post

This paper shows that you can predict actual purchase intent (90% accuracy) by asking an LLM to impersonate a customer with a demographic profile, giving it a product & having it give its impressions, which another AI rates. No fine-tuning or training & beats classic ML methods.

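The two-step pipeline the tweet describes (one LLM role-plays a customer, a second LLM rates the resulting impressions) can be sketched roughly as below. This is a minimal illustration, not the paper's actual protocol: the prompt wording, the 1–5 rating scale, and the `call_llm` stand-in are all assumptions.

```python
# Hedged sketch of persona prompting for purchase-intent prediction.
# `call_llm` stands in for any chat-completion API; everything else
# (prompt text, rating scale) is illustrative.

def persona_prompt(profile: dict, product: str) -> str:
    """Step 1: ask the LLM to react to a product as a specific customer."""
    traits = ", ".join(f"{k}: {v}" for k, v in profile.items())
    return (
        f"You are a customer with this profile: {traits}.\n"
        f"Give your honest impressions of this product:\n{product}"
    )

def judge_prompt(impressions: str) -> str:
    """Step 2: a second model rates purchase intent from the impressions."""
    return (
        "Rate the purchase intent expressed below on a 1-5 scale. "
        "Answer with a single digit.\n\n" + impressions
    )

def parse_rating(reply: str) -> int:
    """Extract the first digit 1-5 from the judge's free-text reply."""
    for ch in reply:
        if ch in "12345":
            return int(ch)
    raise ValueError("no rating found")

def predict_intent(call_llm, profile: dict, product: str) -> int:
    impressions = call_llm(persona_prompt(profile, product))
    return parse_rating(call_llm(judge_prompt(impressions)))
```

No weights are touched anywhere: the "model" of the customer lives entirely in the prompt, which is presumably why the tweet stresses that no fine-tuning or training is needed.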

Ishan Gupta reposted this post

You can just prompt things

This paper shows that you can predict actual purchase intent (90% accuracy) by asking an LLM to impersonate a customer with a demographic profile, giving it a product & having it give its impressions, which another AI rates. No fine-tuning or training & beats classic ML methods.



Ishan Gupta reposted this post

A senior Google engineer just dropped a 424-page doc called Agentic Design Patterns. Every chapter is code-backed and covers the frontier of AI systems: → Prompt chaining, routing, memory → MCP & multi-agent coordination → Guardrails, reasoning, planning This isn’t a blog…


Ishan Gupta reposted this post

Did Stanford just kill LLM fine-tuning? This new paper from Stanford, called Agentic Context Engineering (ACE), proves something wild: you can make models smarter without changing a single weight. Here's how it works: Instead of retraining the model, ACE evolves the context…

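The "evolve the context, not the weights" loop could look something like the sketch below. The ACE paper's actual generator/reflector/curator machinery is collapsed into two caller-supplied stubs here; all names are hypothetical.

```python
# Minimal sketch of context evolution: run a task with the current
# context, and on failure distill a lesson and append it so later
# attempts see it. No model weights change.

def evolve_context(context: list, run_task, reflect, rounds: int = 3):
    """run_task(context) -> (ok, trace); reflect(trace) -> lesson string."""
    for _ in range(rounds):
        ok, trace = run_task(context)
        if ok:
            break
        lesson = reflect(trace)      # distill a reusable lesson from the failure
        if lesson not in context:    # curate: avoid duplicate entries
            context.append(lesson)
    return context
```

The key design point, as the tweet frames it: the accumulated context plays the role that gradient updates play in fine-tuning, so improvement is cheap, inspectable, and reversible.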

Ishan Gupta reposted this post

Great recap of security risks associated with LLM-based agents. The literature keeps growing, but these are key papers worth reading. Analysis of 150+ papers finds that there is a shift from monolithic to planner-executor and multi-agent architectures. Multi-agent security is…


Ishan Gupta reposted this post

Holy shit...Google just built an AI that learns from its own mistakes in real time. New paper dropped on ReasoningBank. The idea is pretty simple but nobody's done it this way before. Instead of just saving chat history or raw logs, it pulls out the actual reasoning patterns,…


Ishan Gupta reposted this post

New paper from @Google is a major memory breakthrough for AI agents. ReasoningBank helps an AI agent improve during use by learning from its wins and mistakes. To succeed in real-world settings, LLM agents must stop making the same mistakes. ReasoningBank memory framework…

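A ReasoningBank-style memory, as described here, stores distilled strategies from both wins and failures rather than raw logs, then retrieves the relevant ones for a new task. The sketch below uses plain keyword overlap for retrieval purely for illustration; the class name and fields are assumptions, not the paper's API.

```python
# Toy memory bank: store reusable strategies, not chat history,
# and rank them by keyword overlap with the incoming task.

class ReasoningBank:
    def __init__(self):
        self.items = []  # each: {"strategy", "outcome", "keywords"}

    def distill(self, task: str, outcome: str, strategy: str):
        """Store the reusable lesson, not the raw trajectory."""
        self.items.append({
            "strategy": strategy,
            "outcome": outcome,  # "success" or "failure" both count
            "keywords": set(task.lower().split()),
        })

    def retrieve(self, task: str, k: int = 2):
        """Return the k most relevant stored strategies for a new task."""
        words = set(task.lower().split())
        ranked = sorted(self.items,
                        key=lambda it: len(it["keywords"] & words),
                        reverse=True)
        return [it["strategy"] for it in ranked[:k]]
```

A real implementation would presumably use embedding similarity instead of word overlap, but the shape of the loop (distill, store, retrieve, reuse) is the same.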

Ishan Gupta reposted this post

What the fuck just happened 🤯 Stanford just made fine-tuning irrelevant with a single paper. It’s called Agentic Context Engineering (ACE) and it proves you can make models smarter without touching a single weight. Instead of retraining, ACE evolves the context itself. The…


Ishan Gupta reposted this post

My brain broke when I read this paper. A tiny 7 million parameter model just beat DeepSeek-R1, Gemini 2.5 Pro, and o3-mini at reasoning on both ARC-AGI 1 and ARC-AGI 2. It's called Tiny Recursive Model (TRM) from Samsung. How can a model 10,000x smaller be smarter? Here's how…

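The recursive-refinement idea behind a model like this can be sketched abstractly: one tiny function applied over and over, each pass updating a latent scratch state and a candidate answer, so "depth" comes from recursion rather than parameter count. The updater below is a toy numeric stand-in, not the actual TRM network.

```python
# Illustrative recursion loop: a small model refines (z, y) repeatedly.

class TinyModel:
    """Toy stand-in: z tracks the current error, y moves halfway
    toward the target each pass."""
    def update_latent(self, x, z, y):
        return x - y            # how far off is the current answer?

    def update_answer(self, z, y):
        return y + 0.5 * z      # nudge the answer toward the target

def recursive_refine(model, x, z, y, steps: int = 16):
    # Compute budget scales with `steps`, not with model size.
    for _ in range(steps):
        z = model.update_latent(x, z, y)
        y = model.update_answer(z, y)
    return y
```

With this toy updater the answer converges geometrically on the input, which mirrors the claimed intuition: many cheap passes of a tiny model can substitute for one expensive pass of a huge one.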

Ishan Gupta reposted this post

In the near future, your Tesla will drop you off at the store entrance and then go find a parking spot. When you’re ready to exit the store, just tap Summon on your phone and the car will come to you.

FSD V14.1 Spends 20 Minutes Looking For Parking Spot at Costco. This video is sped up 35x once we start hunting for a spot, and during that time the car pulls off some really intelligent moves while searching. We did not once pass any empty available spots; the only issue is we didn't…



Ishan Gupta reposted this post

Google did it again! First, they launched ADK, a fully open-source framework to build, orchestrate, evaluate, and deploy production-grade agentic systems. And now, they have made it even more powerful! Google ADK is now fully compatible with all three major AI protocols out there:…


Ishan Gupta reposted this post

You can instantly generate Grok Imagine videos using any simple dark image, skipping the need for a custom image for each video. Just pick a dark image with your preferred aspect ratio, type your prompt, and you're set. It works amazingly well... yes, this is my cool recipe with all videos…


Ishan Gupta reposted this post

Inference optimizations I’d study if I wanted sub-second LLM responses:

Bookmark this.

1. KV-Caching
2. Speculative Decoding
3. FlashAttention
4. PagedAttention
5. Batch Inference
6. Early Exit Decoding
7. Parallel Decoding
8. Mixed Precision Inference
9. Quantized Kernels
10. Tensor…
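Item 1 on the list above, KV-caching, can be sketched in a few lines: at each decode step you append the new token's key/value to a cache and attend over the whole cache, instead of recomputing K and V for the entire prefix. This is a single-head, pure-Python illustration, not a production kernel.

```python
import math

# One single-head decode step with a KV cache.
# q, k_new, v_new are plain lists of floats; cache holds past K and V rows.

def attend_with_cache(q, k_new, v_new, cache):
    cache["K"].append(k_new)   # O(1) append per step instead of
    cache["V"].append(v_new)   # recomputing K, V for the whole prefix
    d = len(q)
    # Scaled dot-product scores of q against every cached key.
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
              for k in cache["K"]]
    # Numerically stable softmax over the scores.
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    tot = sum(w)
    w = [x / tot for x in w]
    # Weighted sum of cached value vectors.
    return [sum(wi * v[j] for wi, v in zip(w, cache["V"]))
            for j in range(d)]
```

Per-step cost drops from recomputing the whole prefix to one append plus one attention pass over the cache, which is the main reason KV-caching is listed first for time-to-first-token and decode latency.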
