
FireHacker
@thefirehacker
Founder & AI Researcher. Building BubblSpace & TimeCapsule
You might like
🔥 1+ month of effort and the first signs of success! Final product: TimeCapsule-SLM, an open-source deep research tool that runs in the browser with Qwen 3 0.6B (Ollama), has semantic understanding, provides insights, and generates novel ideas. Privacy-first local deep research. 👽…




LLM.Q: C++/CUDA native LLM training. If you want to learn about quantization, or how to implement FSDP or activation checkpointing from scratch, it has it all. A project from one of the most cracked people I know, Erik Schultheis, and @DAlistarh 1/2
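To give a flavor of the quantization topic the repo covers: below is a minimal sketch of symmetric per-tensor int8 quantization in Python. This is a generic illustration of the technique, not LLM.Q's actual C++/CUDA implementation.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: one scale maps max |w| to 127."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# round-trip error is at most half a quantization step
assert np.abs(w - w_hat).max() <= s / 2 + 1e-6
```

Per-tensor scaling is the simplest scheme; real training stacks typically use per-channel or per-group scales to shrink the worst-case error.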

US and Indian VCs have formed a $1B+ alliance to fund India’s deep tech startups. techcrunch.com/2025/09/01/u-s… Eight US and Indian VC/PE firms — including Accel, Blume, Celesta, Premji, Gaja, Tenacity and others — have formed an unusual coalition to back India’s deep tech startups.
you can checkout any branch you want but you can never merge
….

The Cline CLI (Preview) isn't just Cline in your terminal. It's your scriptable coding agent and a force multiplier for VS Code and JetBrains Cline, allowing you to wield the IDE-level Cline as an orchestrator for Cline subagents. Here's how you can get started 🧵
The entire park is under a genjutsu
Tiny Recursion Model (TRM) results on ARC-AGI:
- ARC-AGI-1: 40%, $1.76/task
- ARC-AGI-2: 6.2%, $2.10/task
Thank you to @jm_alexia for contributing TRM, a well-written, open-source, and thorough piece of research for the community, based on the HRM from @makingAGI

An exciting milestone for AI in science: Our C2S-Scale 27B foundation model, built with @Yale and based on Gemma, generated a novel hypothesis about cancer cellular behavior, which scientists experimentally validated in living cells. With more preclinical and clinical tests,…
New CIFAR-10 training speed record: 94% in 1.99 seconds on one A100
Previous record: 2.59 seconds (Nov. 10, 2024)
New record-holder: algorithmic discovery engine developed by @hivergeai
Changelog:
- Muon: vectorize NS iter and reduce frequency of the 'normalize weights' step
1/3
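For context, "NS iter" refers to the Newton-Schulz iteration Muon uses to orthogonalize its update matrices. Below is a NumPy sketch of the classical cubic variant; Muon itself uses a tuned quintic polynomial, and the exact change discovered here isn't shown, so treat this purely as background.

```python
import numpy as np

def newton_schulz_orthogonalize(g, steps=50):
    """Cubic Newton-Schulz iteration X <- 1.5 X - 0.5 (X X^T) X.
    Converges to the nearest orthogonal matrix when the starting
    singular values lie in (0, sqrt(3))."""
    # Frobenius norm bounds the spectral norm, so this puts all
    # singular values in (0, 1]
    x = g / np.linalg.norm(g)
    for _ in range(steps):
        x = 1.5 * x - 0.5 * (x @ x.T) @ x
    return x

rng = np.random.default_rng(0)
g = rng.standard_normal((4, 4))
o = newton_schulz_orthogonalize(g)
# the result is (numerically) orthogonal
assert np.allclose(o @ o.T, np.eye(4), atol=1e-5)
```

Each iteration is just matrix multiplies, which is why it runs well on GPUs and why vectorizing it across parameters is a plausible speedup.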

What if scaling the context windows of frontier LLMs is much easier than it sounds? We’re excited to share our work on Recursive Language Models (RLMs), a new inference strategy where LLMs can decompose and recursively interact with input prompts of seemingly unbounded length,…
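The paper's actual algorithm isn't in this excerpt, so here is only a toy sketch of the general idea of recursive decomposition of an over-long input, with a stub standing in for the model call; `stub_lm` and `recursive_answer` are illustrative names, not the authors' API.

```python
def stub_lm(prompt):
    """Stand-in for a real LLM call; here it just truncates as a fake summary."""
    return prompt[:40]

def recursive_answer(text, chunk=200, lm=stub_lm):
    """If the input fits the window, call the model directly; otherwise
    split it, recurse on each piece, then recurse on the joined results."""
    if len(text) <= chunk:
        return lm(text)
    parts = [text[i:i + chunk] for i in range(0, len(text), chunk)]
    merged = " ".join(recursive_answer(p, chunk, lm) for p in parts)
    return recursive_answer(merged, chunk, lm)

out = recursive_answer("x" * 10_000)
# every level shrinks the input, so the recursion terminates
assert len(out) <= 40
```

Because each level compresses its input by roughly chunk/summary-length, the depth grows only logarithmically with the prompt size.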

incredible walkthrough of verifiers, the Environments Hub, and how to navigate the modern RL era :)
ever wondered what an RLVR environment is? in 27 min I’ll show you:
- what they’re made of
- how RLVR differs from RLHF
- the performance gain it gives to small models
- and a walkthrough of the verifiers specs to define them.
by the end you will be able to make your own 👺🦋
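The key distinction in the video can be made concrete: RLVR replaces RLHF's learned reward model with a programmatic verifier that checks the rollout against ground truth. A toy sketch follows; the function and its signature are illustrative, not the actual `verifiers` library API.

```python
def math_reward(prompt, completion, answer):
    """RLVR-style reward: 1.0 iff the completion's final line matches the
    verifiable ground-truth answer. No learned reward model involved."""
    final = completion.strip().splitlines()[-1].strip()
    return 1.0 if final == answer else 0.0

# two rollouts for the same prompt, scored against a verifiable answer
rollouts = [
    ("What is 7 * 8?", "7 * 8 = 56\n56", "56"),
    ("What is 7 * 8?", "I think it's 54\n54", "56"),
]
rewards = [math_reward(p, c, a) for p, c, a in rollouts]
assert rewards == [1.0, 0.0]
```

Because the reward is computed, not learned, it cannot be gamed the way a preference model can, which is part of why small models benefit so much.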

Tech Mahindra developing 1-trillion-parameter sovereign LLM under IndiaAI Mission By @debanganaghosh4 moneycontrol.com/news/business/…
Tech Mahindra CEO Mohit Joshi described it as a “significant technical milestone,” noting that with 1 trillion parameters, the IT major’s LLM would rank among the largest AI models currently under...
🚨 Tech Mahindra is currently developing an indigenous LLM with 1 trillion parameters as part of the IndiaAI Mission: CEO.

Excited to release new repo: nanochat! (It's among the most unhinged I've written.) Unlike my earlier similar repo nanoGPT, which only covered pretraining, nanochat is a minimal, from-scratch, full-stack training/inference pipeline for a simple ChatGPT clone in a single,…

Close enough, welcome back Obito Uchiha

Thank you PM @narendramodi for the great conversation on fostering a broader partnership between @Qualcomm and India in support of the IndiaAI and India Semiconductor Missions, as well as the transition to 6G. We are encouraged by the opportunities to develop an Indian ecosystem…


TWO BILLION RAISED with no product! team is beyond cracked

Today we're sharing the next phase of Reflection. We're building frontier open intelligence accessible to all. We've assembled an extraordinary AI team, built a frontier LLM training stack, and raised $2 billion.
Why Open Intelligence Matters
Technological and scientific…
SS Rajamouli 🫡 Wishing the maverick movie director a very happy birthday 🎂
Meet LFM2-8B-A1B by @LiquidAI_
- 8B total and 1B active params 🐘
- 5x faster on CPUs and GPUs ⚡️
- Perfect for fast, private, edge 📱/💻/🚗/🤖


Meet LFM2-8B-A1B, our first on-device Mixture-of-Experts (MoE)! 🐘
> LFM2-8B-A1B is the best on-device MoE in terms of both quality and speed.
> Performance of a 3B-4B model class, with up to 5x faster inference profile on CPUs and GPUs.
> Quantized variants fit comfortably on…
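The "8B total, 1B active" split comes from sparse expert routing: every token activates only a few experts, so total parameters cover all experts while active parameters per token stay small. A generic top-k router sketch in NumPy follows; this illustrates MoE routing in general, not LFM2's actual architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(x, experts_w, router_w, top_k=2):
    """Route one token through only top_k of the experts: the router scores
    every expert, but only the chosen ones run a forward pass."""
    logits = router_w @ x
    top = np.argsort(logits)[-top_k:]      # indices of the highest-scoring experts
    gates = softmax(logits[top])           # renormalize gates over the chosen set
    return sum(g * (experts_w[i] @ x) for g, i in zip(gates, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.standard_normal(d)
experts_w = rng.standard_normal((n_experts, d, d))  # all experts' weights
router_w = rng.standard_normal((n_experts, d))      # router projection
y = moe_forward(x, experts_w, router_w)
assert y.shape == (d,)
```

With `n_experts = 4` and `top_k = 2`, half the expert weights sit idle per token; scale that idea up and you get an 8B-parameter model with a roughly 1B-parameter active path.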

Alert 🚨
- You are watching one of the greatest comebacks by India Women, from 100/6 to 251/10 👏🏻
- Richa Ghosh and Sneh Rana had a brilliant partnership of 88, with Richa Ghosh's blistering 94 🔥
- What's your take 🤔 #INDvsSA
Richa Ghosh - The Greatest Finisher of Indian Women's Cricket 🇮🇳
