#sparseautoencoder search results
To address this, our team at @recombee developed CompresSAE: a lightweight, scalable embedding compression method based on a novel #sparseautoencoder (#SAE). (3/7)
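The thread does not spell out CompresSAE's architecture, but the basic idea of compressing dense embeddings with a sparse autoencoder can be sketched as below. This is a minimal illustration, not the CompresSAE implementation: the dimensions, the top-k activation rule, and every name in the snippet are assumptions.

```python
# Minimal sparse-autoencoder sketch for embedding compression (illustrative only;
# NOT the CompresSAE architecture, which the thread does not describe).
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, embed_dim: int = 768, latent_dim: int = 4096, k: int = 32):
        super().__init__()
        self.encoder = nn.Linear(embed_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, embed_dim)
        self.k = k  # number of latent units kept active per example

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        z = torch.relu(self.encoder(x))
        # Top-k sparsity: keep only the k largest activations, zero out the rest.
        topk = torch.topk(z, self.k, dim=-1)
        sparse = torch.zeros_like(z)
        return sparse.scatter(-1, topk.indices, topk.values)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encode(x))


# Usage: compress a batch of stand-in embeddings and check reconstruction error.
model = SparseAutoencoder()
x = torch.randn(16, 768)
loss = nn.functional.mse_loss(model(x), x)
print(loss.item())
```

The sparse latent code (mostly zeros) is what gets stored or transmitted in place of the dense embedding; reconstruction quality is traded off against the sparsity level k.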
How Sparse is Your Thought? Cracking the Inner Logic of Chain-of-Thought Prompts cognaptus.com/blog/2025-08-0… #Chain-of-Thought #interpretability #sparseautoencoder #LLMreasoning #activationpatching #mechanisticinterpretability
A feature-level causal analysis reveals how Chain-of-Thought prompting restructures large language models' reasoning internals, but only if they're big enough to care.
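The post's hashtags point to activation patching as the causal tool behind that feature-level analysis. The toy example below sketches the general patching recipe on a stand-in module; the model, hook point, and inputs are assumptions and do not reflect the study's actual setup.

```python
# Activation-patching sketch: cache an activation from a "clean" run, splice it
# into a "corrupt" run, and measure how much the output moves. Toy model only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
hook_point = model[1]  # patch the post-ReLU activations

clean_x, corrupt_x = torch.randn(1, 8), torch.randn(1, 8)

# 1. Cache the activation from the clean run.
cache = {}
def save_hook(module, inputs, output):
    cache["act"] = output.detach()

handle = hook_point.register_forward_hook(save_hook)
model(clean_x)
handle.remove()

# 2. Re-run on the corrupt input, overwriting the activation with the cached one.
def patch_hook(module, inputs, output):
    return cache["act"]

handle = hook_point.register_forward_hook(patch_hook)
patched_out = model(corrupt_x)
handle.remove()

corrupt_out = model(corrupt_x)
# The gap between the patched and unpatched outputs estimates the causal
# contribution of that activation site.
print((patched_out - corrupt_out).abs().sum().item())
```

In the interpretability setting the post describes, the same swap is done on sparse-autoencoder features of an LLM's activations rather than on raw hidden states of a toy network.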