
Marc Andreessen and Ben Horowitz say that AI models are hitting a ceiling of capabilities: "we've really slowed down in terms of the amount of improvement... we're increasing GPUs, but we're not getting the intelligence improvements, at all"
The results are in: Trade-offs between accuracy & performance in LLM quantization. After hundreds of thousands of evals and benchmarks from our research team at @neuralmagic, I'm excited to share our findings on LLM quantization—now available as a paper on arXiv:…
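Not from the paper itself, just a minimal sketch of what the accuracy-vs-size trade-off looks like in miniature: symmetric per-tensor int8 weight quantization in plain NumPy, with a toy matrix standing in for an LLM weight layer. All names and shapes here are illustrative, not the recipes evaluated by @neuralmagic.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 copy of the original weights."""
    return q.astype(np.float32) * scale

# Toy weight matrix standing in for one LLM layer (hypothetical size).
w = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# The trade-off in one line each: ~4x smaller storage,
# at the cost of a small reconstruction error in the weights.
print("mean abs error:", np.abs(w - w_hat).mean())
print("bytes fp32:", w.nbytes, "bytes int8:", q.nbytes)
```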




It saddens me how such an imaginative and creative show struggles to get a season 2. This is one of those shows that's gonna be cited as a main inspiration for movies and games 20 years from now; it's peak sci-fi.
Preview of "Scavengers Reign" Season 2. Unfortunately, this new season has not yet been greenlighted. So the creators of the show & Green Street studio produced this concept trailer in-house. Full video >> catsuka.com/news/2024-11-0… cc @josephbennett00 @charleshuettner
"Scavengers Reign" released a year ago today. Fingers crossed for Season 2.

Former Conservative MP (UK) Rory Stewart on GiveDirectly's basic income program that transformed entire communities. "The results were absolutely staggering... The whole place just felt better. Happier."
I may not be convinced that pragmatics is a thing but it sure comes in handy.
Love it. Matches intuitions very well.
👶NEW PAPER🪇 Children are better at learning a second language (L2) than adults. In a new paper (led by the awesome Ionut Constantinescu) we ask: 1. "Do LMs also have a 'Critical Period' (CP) for language acquisition?" and 2. "What can LMs tell us about the CP in humans?"

An outdoor pic of Pilet 5. #raspberrypi #portable #computer

Newspeak House is hosting an election night event, if anyone in London wants a place to be: lu.ma/f0gmn2dy
Fans of The Bitter Lesson may be interested in this talk from 2018 (recently re-discovered) which includes its first public presentation, at 30:40. youtu.be/tUCJ4UsKU2I?si…
Weinberg Symposium 2018: Sutton (YouTube)
I like @OpenAI #SWARM and I indeed wrote an article about it: linkedin.com/pulse/swarming… But I am sadly surprised that the #FOSS GitHub project states on the issue page for each issue

A few ideas from this 2018 paper on scalable neuro-symbolic reasoning are now mainstream 🙂 we 1) used k-NN/MIPS to find the most relevant facts in a KB to answer a query (as in RAG today), and 2) recursively decompose queries into sub-queries (like in CoT, but in embedding…

"Towards Neural Theorem Proving at Scale," Minervini and Bosnjak et al.: arxiv.org/abs/1807.08204
the gap between OAI/Anthropic/Meta/etc. and a large group of companies all over the world you've never cared to know of, in terms of LM pre-training? tiny
RIP Greg Hildebrandt who has left us to join his brother Tim in Middle Earth




Gut feeling: the most common prompts the general public uses LLMs for are simple and in the training set. The "hardest problems" we get annoyed LLMs can't solve around here are not. Therefore: some frontier labs overfit the training set deliberately. And most users love it.
Hottest week for London AI so far 🔥 Dev Day yesterday and AI Tinkerers tonight! london.aitinkerers.org/p/ai-tinkerers… @monzo @tortus_AI @_lucas_godfrey @QuotientAI @samshapley @LukeHarries_ @stephenbtl


Our new AI paper reveals surprising geometric structure in the LLM-learned concepts: 1) They form brain-like "lobes", 2) they form "semantic crystals" much more precise than it first seems, and 3) the concept cloud is more fractal than round:
1/6 New paper! “The Geometry of Concepts: Sparse Autoencoder Feature Structure.” We find that the concept universe of SAE features has interesting structure at three levels: 1) “atomic” small-scale, 2) “brain” intermediate-scale, and 3) “galaxy” large-scale!

My ICLR talk “How Do We Build a General Intelligence?” is now online! youtube.com/watch?v=HEp4TO…