
FireHacker

@thefirehacker

Founder-AI Researcher. Building BubblSpace & Timecapsule

Pinned

🔥 1+ month of effort and first signs of success! Final product: TimeCapsule-SLM, an open-source deep research tool that runs in the browser with Qwen 3 0.6B (Ollama), offering semantic understanding, insights, and novel idea generation. Privacy-first, local deep research. 👽…


FireHacker reposted

LLM.Q: C++/CUDA native LLM training. If you want to learn about quantization, or how to implement FSDP or activation checkpointing from scratch, it has it all. Project from one of the most cracked people I know, Erik Schultheis and @DAlistarh 1/2

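The repo above covers quantization among other topics. As a rough illustration of the basic idea only (not LLM.Q's actual scheme, which is C++/CUDA and considerably more sophisticated), symmetric per-tensor int8 quantization can be sketched as:

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor int8 quantization: map the largest absolute
    # weight to 127 and round everything else onto that grid.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights; error is at most scale / 2 per element.
    return q.astype(np.float32) * scale
```

Real training-time schemes typically quantize per-channel or per-block and keep high-precision master weights; this sketch only shows the round-trip.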

FireHacker reposted

US and Indian VCs have formed a $1B+ alliance to fund India’s deep tech startups. techcrunch.com/2025/09/01/u-s… Eight US and Indian VC/PE firms — including Accel, Blume, Celesta, Premji, Gaja, Tenacity and others — have formed an unusual coalition to back India’s deep tech startups.


FireHacker reposted

you can checkout any branch you want but you can never merge


FireHacker reposted

….


The Cline CLI (Preview) isn't just Cline in your terminal. It's your scriptable coding agent and a force multiplier for VS Code and JetBrains Cline, allowing you to wield the IDE-level Cline as an orchestrator for Cline subagents. Here's how you can get started 🧵



FireHacker reposted

The entire park is under a genjutsu


FireHacker reposted

Tiny Recursion Model (TRM) results on ARC-AGI: ARC-AGI-1: 40%, $1.76/task; ARC-AGI-2: 6.2%, $2.10/task. Thank you to @jm_alexia for contributing TRM, a well-written, open-source, and thorough piece of research for the community, based on the HRM from @makingAGI


FireHacker reposted

An exciting milestone for AI in science: Our C2S-Scale 27B foundation model, built with @Yale and based on Gemma, generated a novel hypothesis about cancer cellular behavior, which scientists experimentally validated in living cells.  With more preclinical and clinical tests,…


FireHacker reposted

New CIFAR-10 training speed record: 94% in 1.99 seconds on one A100 Previous record: 2.59 seconds (Nov. 10th 2024) New record-holder: Algorithmic discovery engine developed by @hivergeai Changelog: - Muon: Vectorize NS iter and reduce frequency of 'normalize weights' step 1/3

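The changelog item mentions vectorizing Muon's Newton-Schulz (NS) iteration. As background, here is a minimal NumPy sketch of the quintic NS orthogonalization step Muon applies to gradient matrices; the coefficients and the Frobenius-norm pre-scaling follow public Muon write-ups, not this thread, so treat them as assumptions:

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=5, eps=1e-7):
    # Approximately orthogonalize G (push its singular values toward 1)
    # with a quintic Newton-Schulz iteration, as used in the Muon optimizer.
    a, b, c = 3.4445, -4.7750, 2.0315   # quintic coefficients (assumed from Muon write-ups)
    X = G / (np.linalg.norm(G) + eps)   # Frobenius scaling bounds singular values by 1
    transposed = G.shape[0] > G.shape[1]
    if transposed:
        X = X.T                          # iterate on the wide orientation
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * (A @ A)
        X = a * X + B @ X                # applies s -> a*s + b*s^3 + c*s^5 to each singular value
    return X.T if transposed else X
```

After a few steps, singular values land in a band around 1 rather than converging exactly; the write-ups argue that this loose orthogonalization is good enough for the optimizer update.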

FireHacker reposted

What if scaling the context windows of frontier LLMs is much easier than it sounds? We’re excited to share our work on Recursive Language Models (RLMs). A new inference strategy where LLMs can decompose and recursively interact with input prompts of seemingly unbounded length,…

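The tweet describes models that decompose and recurse over very long prompts. As a toy sketch of one way such recursion could look (the `model` callable is a placeholder, not the paper's interface):

```python
def recursive_reduce(text, model, max_len=200, chunk_len=100):
    # If the prompt fits the budget, answer directly; otherwise split it
    # into chunks, process each chunk, and recurse on the concatenation
    # of the per-chunk outputs until everything fits.
    if len(text) <= max_len:
        return model(text)
    chunks = [text[i:i + chunk_len] for i in range(0, len(text), chunk_len)]
    summaries = "".join(model(c) for c in chunks)
    return recursive_reduce(summaries, model, max_len, chunk_len)
```

Termination depends on `model` shrinking its input; a real system would also carry the original question through each recursive call.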

FireHacker reposted

incredible walkthrough of verifiers, the Environments Hub, and how to navigate the modern RL era :)

ever wondered what an RLVR environment is? in 27 min I’ll show you: - what they’re made of - how RLVR differs from RLHF - the performance gain it gives to small models - and a walkthrough of the verifiers specs to define them. by the end you will be able to make your own 👺🦋

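The core RLVR idea mentioned above is scoring rollouts with a programmatic verifier instead of a learned reward model. A toy sketch of such a verifiable reward, where the answer-extraction logic is purely illustrative and not the `verifiers` library's API:

```python
def math_reward(completion: str, answer: str) -> float:
    # Verifiable reward: 1.0 when the model's extracted final answer
    # exactly matches ground truth, 0.0 otherwise. Here "extraction"
    # is just taking the last whitespace-separated token.
    stripped = completion.strip()
    predicted = stripped.split()[-1] if stripped else ""
    return 1.0 if predicted == answer else 0.0
```

Because the signal is computed by code, it is cheap, deterministic, and hard to reward-hack compared to a preference model, which is the contrast with RLHF the video draws.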


FireHacker reposted

🚨 Tech Mahindra is currently developing an indigenous LLM with 1 trillion parameters as a part of IndiaAI Mission: CEO.


FireHacker reposted

Excited to release new repo: nanochat! (it's among the most unhinged I've written). Unlike my earlier similar repo nanoGPT which only covered pretraining, nanochat is a minimal, from scratch, full-stack training/inference pipeline of a simple ChatGPT clone in a single,…


FireHacker reposted

Close enough, welcome back Obito Uchiha


FireHacker reposted

Thank you PM @narendramodi for the great conversation on fostering a broader partnership between @Qualcomm and India in support of the IndiaAI and India Semiconductor Missions, as well as the transition to 6G. We are encouraged by the opportunities to develop an Indian ecosystem…


FireHacker reposted

TWO BILLION RAISED with no (a deleted) product! team is beyond cracked


Today we're sharing the next phase of Reflection. We're building frontier open intelligence accessible to all. We've assembled an extraordinary AI team, built a frontier LLM training stack, and raised $2 billion. Why Open Intelligence Matters Technological and scientific…



FireHacker reposted

SS Rajamouli 🫡 Wishing the maverick movie director a very happy birthday 🎂

From Mr Prabh Deol

FireHacker reposted

Meet LFM2-8B-A1B by @LiquidAI_ - 8B total and 1B active params 🐘 - 5x faster on CPUs and GPUs ⚡️ - Perfect for fast, private, edge 📱/💻/🚗/🤖


Meet LFM2-8B-A1B, our first on-device Mixture-of-Experts (MoE)! 🐘 > LFM2-8B-A1B is the best on-device MoE in terms of both quality and speed. > Performance of a 3B-4B model class, with up to 5x faster inference profile on CPUs and GPUs. > Quantized variants fit comfortably on…

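The "8B total / 1B active" split comes from Mixture-of-Experts routing: each token only runs through a few selected experts, so active parameters are a small fraction of the total. A toy sketch of top-k routing (shapes and routing details are illustrative, not LiquidAI's implementation):

```python
import numpy as np

def moe_forward(x, experts_w, gate_w, k=2):
    # Toy top-k MoE layer: a gate scores all experts, only the k
    # highest-scoring experts run, and their outputs are mixed by a
    # softmax over the selected gate logits. With k=2 of 8 experts,
    # only ~1/4 of the expert weights are active for this token.
    logits = x @ gate_w                      # one score per expert
    top = np.argsort(logits)[-k:]            # indices of the k best experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                     # softmax over selected experts only
    return sum(g * (x @ experts_w[i]) for g, i in zip(gates, top))
```

A useful sanity check on the sparsity claim: perturbing an expert the router did not select leaves the output unchanged, since its weights were never touched.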


FireHacker reposted

Alert 🚨 - You are watching one of the greatest comebacks by the India women's team, from 100/6 to 251/10 👏🏻 - Richa Ghosh and Sneh Rana had a brilliant partnership of 88, with Richa Ghosh's blistering 94 🔥 - What's your take 🤔 #INDvsSA

From Star Sports

FireHacker reposted

Richa Ghosh - The Greatest Finisher of Indian Women's Cricket 🇮🇳

