
Ishan Gupta

@code_igx

25 🇮🇳, Hustler @RITtigers NY 🇺🇸 | RnD on Quantum AI, Superintelligence & Systems | Ex- @Broadcom @VMware

Ishan Gupta reposted

This paper shows that you can predict actual purchase intent (90% accuracy) by asking an LLM to impersonate a customer with a demographic profile, giving it a product & having it give its impressions, which another AI rates. No fine-tuning or training & beats classic ML methods.

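The pipeline described here is simple enough to sketch. Below is a minimal, hypothetical version of the two-stage idea in Python, assuming an OpenAI-style chat API; the prompts, model name, rating scale, and example profile are illustrative, not the paper's actual setup.

# Minimal sketch of a persona-based purchase-intent pipeline (illustrative, not the paper's exact prompts).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def customer_impression(profile: str, product: str, model: str = "gpt-4o-mini") -> str:
    """Stage 1: the LLM role-plays a customer with the given demographic profile."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": f"You are a customer with this profile: {profile}. "
                                          "React to the product below in a few sentences, as yourself."},
            {"role": "user", "content": product},
        ],
    )
    return resp.choices[0].message.content

def rate_intent(impression: str, model: str = "gpt-4o-mini") -> int:
    """Stage 2: a second model reads the impression and scores purchase intent on a 1-5 scale."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Rate the purchase intent expressed in the text on a 1-5 scale. "
                                          "Answer with the number only."},
            {"role": "user", "content": impression},
        ],
    )
    return int(resp.choices[0].message.content.strip())

profile = "34-year-old urban parent, price-sensitive, shops online weekly"
product = "A foldable electric scooter, 25 km range, $499"
print(rate_intent(customer_impression(profile, product)))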

Ishan Gupta reposted

You can just prompt things

This paper shows that you can predict actual purchase intent (90% accuracy) by asking an LLM to impersonate a customer with a demographic profile, giving it a product & having it give its impressions, which another AI rates. No fine-tuning or training & beats classic ML methods.



Ishan Gupta reposted

A senior Google engineer just dropped a 424-page doc called Agentic Design Patterns. Every chapter is code-backed and covers the frontier of AI systems: → Prompt chaining, routing, memory → MCP & multi-agent coordination → Guardrails, reasoning, planning This isn’t a blog…

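Prompt chaining, the first pattern on that list, is easy to illustrate. Here is a minimal sketch in Python, assuming an OpenAI-style chat API; the prompts and model name are placeholders and are not taken from the doc itself.

# Minimal prompt-chaining sketch (illustrative; not code from the Agentic Design Patterns doc).
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Chain: each step consumes the previous step's output.
outline = ask("Outline, in 3 bullet points, a guide to caching LLM responses.")
draft = ask(f"Expand this outline into a short paragraph:\n{outline}")
summary = ask(f"Summarize the paragraph in one sentence:\n{draft}")
print(summary)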

Ishan Gupta reposted

Did Stanford just kill LLM fine-tuning? This new paper from Stanford, called Agentic Context Engineering (ACE), proves something wild: you can make models smarter without changing a single weight. Here's how it works: Instead of retraining the model, ACE evolves the context…

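A rough sketch of the "evolve the context, not the weights" idea, assuming an OpenAI-style chat API. The paper describes a more structured loop of dedicated roles; the prompts and the simple playbook list below are my own illustration, not the ACE algorithm itself.

# Minimal sketch: a growing playbook of lessons is fed back as context; weights never change.
from openai import OpenAI

client = OpenAI()
playbook: list[str] = []   # persistent context that accumulates across tasks

def solve(task: str) -> str:
    context = "\n".join(playbook) or "(empty)"
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Useful strategies learned so far:\n{context}"},
            {"role": "user", "content": task},
        ],
    )
    return resp.choices[0].message.content

def reflect(task: str, answer: str, feedback: str) -> None:
    """Distill the outcome into a short reusable lesson and append it to the playbook."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content":
                   f"Task: {task}\nAnswer: {answer}\nFeedback: {feedback}\n"
                   "State one short, reusable lesson for similar tasks."}],
    )
    playbook.append(resp.choices[0].message.content.strip())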

Ishan Gupta reposted

Great recap of security risks associated with LLM-based agents. The literature keeps growing, but these are key papers worth reading. Analysis of 150+ papers finds that there is a shift from monolithic to planner-executor and multi-agent architectures. Multi-agent security is…


Ishan Gupta reposted

Holy shit...Google just built an AI that learns from its own mistakes in real time. New paper dropped on ReasoningBank. The idea is pretty simple but nobody's done it this way before. Instead of just saving chat history or raw logs, it pulls out the actual reasoning patterns,…


Ishan Gupta reposted

New paper from @Google is a major memory breakthrough for AI agents. ReasoningBank helps an AI agent improve during use by learning from its wins and mistakes. To succeed in real-world settings, LLM agents must stop making the same mistakes. ReasoningBank memory framework…

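A minimal sketch of the memory idea as described in these posts: store distilled strategies from both successes and failures, then retrieve the relevant ones for a new task. The data layout and the keyword-overlap retrieval below are illustrative stand-ins, not Google's implementation.

# ReasoningBank-style memory sketch: keep distilled strategies, not raw logs.
memory: list[dict] = []   # each item: {"situation": ..., "strategy": ...}

def add_experience(task: str, lesson: str, succeeded: bool) -> None:
    """Store a distilled strategy; failures are kept as warnings rather than discarded."""
    prefix = "Do" if succeeded else "Avoid"
    memory.append({"situation": task, "strategy": f"{prefix}: {lesson}"})

def retrieve(task: str, k: int = 3) -> list[str]:
    """Naive keyword-overlap retrieval; a real system would likely use embeddings."""
    words = set(task.lower().split())
    scored = sorted(memory,
                    key=lambda m: len(words & set(m["situation"].lower().split())),
                    reverse=True)
    return [m["strategy"] for m in scored[:k]]

add_experience("book a flight on the airline site", "filter by date before sorting by price", True)
add_experience("book a flight on the airline site", "submitting without selecting a seat", False)
print(retrieve("book a cheap flight"))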

Ishan Gupta reposted

What the fuck just happened 🤯 Stanford just made fine-tuning irrelevant with a single paper. It’s called Agentic Context Engineering (ACE) and it proves you can make models smarter without touching a single weight. Instead of retraining, ACE evolves the context itself. The…


Ishan Gupta reposted

My brain broke when I read this paper. A tiny 7-million-parameter model just beat DeepSeek-R1, Gemini 2.5 Pro, and o3-mini at reasoning on both ARC-AGI 1 and ARC-AGI 2. It's called Tiny Recursive Model (TRM) from Samsung. How can a model 10,000x smaller be smarter? Here's how…

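For what "recursive" means here, a very rough sketch: one tiny network is applied repeatedly, refining a latent scratchpad and a running answer. The dimensions, update rule, and architecture below are assumptions for illustration only, not Samsung's actual TRM.

# Illustrative recursive-refinement sketch (assumed shapes and update rule, not the real TRM).
import torch
import torch.nn as nn

class TinyRecursiveModel(nn.Module):
    def __init__(self, dim: int = 64, steps: int = 8):
        super().__init__()
        self.steps = steps
        self.update_z = nn.Sequential(nn.Linear(3 * dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.update_y = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = torch.zeros_like(x)   # latent reasoning state
        y = torch.zeros_like(x)   # current answer embedding
        for _ in range(self.steps):   # the same tiny network is reused at every step
            z = self.update_z(torch.cat([x, y, z], dim=-1))
            y = self.update_y(torch.cat([y, z], dim=-1))
        return y

model = TinyRecursiveModel()
print(model(torch.randn(1, 64)).shape)   # torch.Size([1, 64])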

Ishan Gupta reposted

In the near future, your Tesla will drop you off at the store entrance and then go find a parking spot. When you’re ready to exit the store, just tap Summon on your phone and the car will come to you.

FSD V14.1 Spends 20 Minutes Looking For a Parking Spot at Costco. This video is sped up 35x once we start hunting for a spot, and during that time the car pulls off some really intelligent moves while searching. We did not once pass any empty available spots; the only issue is we didn't…



Ishan Gupta reposted

Google did it again! First, they launched ADK, a fully open-source framework to build, orchestrate, evaluate, and deploy production-grade agentic systems. And now, they have made it even more powerful! Google ADK is now fully compatible with all three major AI protocols out there:…


Ishan Gupta reposted

You can instantly generate Grok Imagine videos using any simple dark image, skipping the need for a custom image for each video. Just pick a dark image with your preferred aspect ratio, type your prompt, and you're set. It works amazingly well... yes, this is my cool recipe with all videos…


Ishan Gupta reposted

Inference optimizations I’d study if I wanted sub-second LLM responses. Bookmark this.

1. KV-Caching
2. Speculative Decoding
3. FlashAttention
4. PagedAttention
5. Batch Inference
6. Early Exit Decoding
7. Parallel Decoding
8. Mixed Precision Inference
9. Quantized Kernels
10. Tensor…

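KV-caching, the first item on the list, is the easiest to show concretely. Here is a minimal single-head attention sketch in NumPy: keys and values of past tokens are cached, so each decode step only computes the projections for the new token.

# Minimal KV-caching sketch (illustrative single-head attention in NumPy).
import numpy as np

d = 16
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))
k_cache, v_cache = [], []

def decode_step(x_new: np.ndarray) -> np.ndarray:
    """x_new: embedding of the newest token, shape (d,)."""
    q = x_new @ Wq
    k_cache.append(x_new @ Wk)           # only the new token's K and V are computed
    v_cache.append(x_new @ Wv)
    K = np.stack(k_cache)                # (t, d): all cached keys so far
    V = np.stack(v_cache)
    scores = K @ q / np.sqrt(d)          # attend over every cached position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                   # attention output for the new token

for _ in range(5):                       # 5 decode steps reuse the growing cache
    out = decode_step(np.random.randn(d))
print(out.shape)                         # (16,)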

Ishan Gupta reposted

Absolutely classic @GoogleResearch paper on In-Context-Learning by LLMs. Shows the mechanisms by which LLMs learn in context from examples in the prompt: they can pick up new patterns while answering, yet their stored weights never change. 💡The mechanism they reveal for…


Ishan Gupta reposted

Earth’s gravity is strong enough to make reaching Mars extremely hard, but not impossible

Why Super-Earthlings Might Never Reach the Stars In the rocket equation, the fuel required to reach orbit grows exponentially with gravity. If Earth’s gravity were 15% stronger, space programs would likely be impossible.

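The claim follows from the Tsiolkovsky rocket equation, where the mass ratio m0/mf = exp(delta_v / v_e), so the propellant required grows exponentially with the delta-v that gravity demands. A back-of-the-envelope check in Python, using rough illustrative numbers and the simplification that delta-v to orbit scales linearly with surface gravity:

# Rocket-equation sanity check (rough illustrative numbers, not from the tweet or the paper).
import math

v_e = 4400.0          # exhaust velocity of an efficient chemical engine, m/s (approx.)
dv_earth = 9400.0     # typical delta-v to low Earth orbit, m/s (approx., incl. losses)

for label, dv in [("Earth", dv_earth), ("Earth with ~15% stronger gravity", 1.15 * dv_earth)]:
    ratio = math.exp(dv / v_e)            # m0/mf: launch mass per unit of delivered mass
    print(f"{label}: mass ratio m0/mf ~ {ratio:.1f}")
# The stronger-gravity case demands a noticeably larger mass ratio, and the growth is
# exponential in delta-v, which is the sense in which fuel requirements blow up with gravity.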


Ishan Gupta reposted

A senior Google engineer just dropped a 424-page doc called Agentic Design Patterns. Every chapter is code-backed and covers the frontier of AI systems: → Prompt chaining, routing, memory → MCP & multi-agent coordination → Guardrails, reasoning, planning This isn’t a blog…


Ishan Gupta reposted

You can teach a Transformer to execute a simple algorithm if you provide the exact step by step algorithm during training via CoT tokens. This is interesting, but the point of machine learning should be to *find* the algorithm during training, from input/output pairs only -- not…

A beautiful paper from MIT+Harvard+ @GoogleDeepMind 👏 Explains why Transformers miss multi-digit multiplication and shows a simple bias that fixes it. The researchers trained two small Transformer models on 4-digit-by-4-digit multiplication. One used a special training method…

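To make "providing the algorithm via CoT tokens" concrete, here is an illustrative generator of step-by-step multiplication traces (partial products, then the sum). The exact trace format used in the paper may differ; this only shows the kind of supervision the tweet describes.

# Illustrative chain-of-thought trace for multi-digit multiplication (format is assumed).
def multiplication_cot(a: int, b: int) -> str:
    steps, total = [], 0
    for place, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)
        partial = a * digit * (10 ** place)   # one partial product per digit of b
        total += partial
        steps.append(f"{a} x {digit} x 10^{place} = {partial}")
    steps.append(f"sum = {total}")
    return " ; ".join(steps)

print(multiplication_cot(4321, 5678))
# 4321 x 8 x 10^0 = 34568 ; 4321 x 7 x 10^1 = 302470 ; ... ; sum = 24534638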


Ishan Gupta reposted

Temperature in LLMs, clearly explained! Temperature is a key sampling parameter in LLM inference. Today I'll show you what it means and how it actually works. Let's start by prompting OpenAI GPT-3.5 with a low temperature value twice. We observe that it produces identical…

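What temperature actually does at sampling time is a one-liner: divide the logits by T before the softmax. A minimal sketch with hypothetical logits; low T sharpens the distribution (hence the near-identical completions in the demo), while high T flattens it.

# Temperature scaling sketch (hypothetical next-token logits for illustration).
import numpy as np

def sample_distribution(logits: np.ndarray, temperature: float) -> np.ndarray:
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())      # subtract max for numerical stability
    return exp / exp.sum()

logits = np.array([2.0, 1.0, 0.5, 0.1])
for T in (0.1, 0.7, 1.5):
    print(T, sample_distribution(logits, T).round(3))
# T=0.1 puts nearly all probability on the top token; T=1.5 spreads it across the vocabulary.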
