
Marc Andreessen and Ben Horowitz say that AI models are hitting a ceiling of capabilities: "we've really slowed down in terms of the amount of improvement... we're increasing GPUs, but we're not getting the intelligence improvements, at all"
The results are in: Trade-offs between accuracy & performance in LLM quantization After hundreds of thousands of evals and benchmarks from our research team at @neuralmagic, I'm excited to share our findings on LLM quantization—now available as a paper on arXiv:…
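(For context on the trade-off being benchmarked, a minimal sketch of round-to-nearest int8 weight quantization; this is a toy illustration of quantization in general, not the paper's method, and the symmetric per-tensor scheme here is my assumption.)

```python
# Toy symmetric per-tensor int8 quantization (round-to-nearest).
# Shows where the accuracy/performance trade-off comes from:
# ~4x less memory per weight, at the cost of rounding error.
import numpy as np

def quantize(w):
    """Map float32 weights to int8 plus one scale factor."""
    scale = np.abs(w).max() / 127.0              # largest weight -> +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)
q, scale = quantize(w)
err = np.abs(w - dequantize(q, scale)).mean()
print(f"mean abs rounding error: {err:.2e}")
```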




It saddens me how such an imaginative and creative show struggles to get a season 2. This is one of those shows that's gonna be cited as a main inspiration for movies and games 20 years from now; it's peak sci-fi.
Preview of "Scavengers Reign" Season 2. Unfortunately, this new season has not yet been greenlighted. So the creators of the show & Green Street studio produced this concept trailer in-house. Full video >> catsuka.com/news/2024-11-0… cc @josephbennett00 @charleshuettner
Former Conservative MP (UK) Rory Stewart on GiveDirectly's basic income program that transformed entire communities. "The results were absolutely staggering... The whole place just felt better. Happier."
I may not be convinced that pragmatics is a thing but it sure comes in handy.
Love it. Matches intuitions very well.
👶NEW PAPER🪇 Children are better at learning a second language (L2) than adults. In a new paper (led by the awesome Ionut Constantinescu) we ask: 1. "Do LMs also have a 'Critical Period' (CP) for language acquisition?" and 2. "What can LMs tell us about the CP in humans?"

An outdoor pic of Pilet 5. #raspberrypi #portable #computer

Newspeak House is hosting an election night event, if anyone in London wants a place to be: lu.ma/f0gmn2dy
Fans of The Bitter Lesson may be interested in this talk from 2018 (recently re-discovered) which includes its first public presentation, at 30:40. youtu.be/tUCJ4UsKU2I?si…
YouTube: Weinberg Symposium 2018: Sutton
I like @OpenAI #SWARM and I indeed wrote an article about it: linkedin.com/pulse/swarming… But I am sadly surprised that the #FOSS GitHub project states on the issue page for each issue

A few ideas from this 2018 paper on scalable neuro-symbolic reasoning are now mainstream 🙂 We 1) used k-NN/MIPS to find the most relevant facts in a KB to answer a query (as in RAG today), and 2) recursively decomposed queries into sub-queries (like in CoT, but in embedding…

"Towards Neural Theorem Proving at Scale," Minervini and Bosnjak et al.: arxiv.org/abs/1807.08204
the gap between OAI/Anthropic/Meta/etc. and a large group of companies all over the world you've never cared to know of, in terms of LM pre-training? tiny
RIP Greg Hildebrandt, who has left us to join his brother Tim in Middle-earth




Gut feeling: the most common prompts the general public uses LLMs for are simple and in the training set. The "hardest problems" we get annoyed LLMs can't solve around here are not. Therefore: some frontier labs overfit the training set deliberately. And most users love it.
Hottest week for London AI so far 🔥 Dev Day yesterday and AI Tinkerers tonight! london.aitinkerers.org/p/ai-tinkerers… @monzo @tortus_AI @_lucas_godfrey @QuotientAI @samshapley @LukeHarries_ @stephenbtl


Our new AI paper reveals surprising geometric structure in the LLM-learned concepts: 1) They form brain-like "lobes", 2) they form "semantic crystals" much more precise than it first seems, and 3) the concept cloud is more fractal than round:
1/6 New paper! “The Geometry of Concepts: Sparse Autoencoder Feature Structure.” We find that the concept universe of SAE features has interesting structure at three levels: 1) “atomic” small-scale, 2) “brain” intermediate-scale, and 3) “galaxy” large-scale!
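(For readers new to the object being studied: a minimal top-k sparse autoencoder sketch. The toy sizes, random weights, and top-k sparsity variant are my assumptions; each decoder row here plays the role of one "feature" direction whose geometry the paper analyzes.)

```python
# Toy top-k sparse autoencoder: an activation vector is encoded into a
# wide, mostly-zero feature vector; each decoder row is one "concept"
# direction, and the paper studies the geometry of those directions.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_feat, k = 64, 512, 8                  # assumed toy sizes
W_enc = rng.standard_normal((d_model, d_feat)) / np.sqrt(d_model)
W_dec = rng.standard_normal((d_feat, d_model)) / np.sqrt(d_feat)

def encode(x):
    """ReLU features, keeping only the k largest activations."""
    a = np.maximum(x @ W_enc, 0.0)
    thresh = np.partition(a, -k)[-k]             # k-th largest value
    return np.where(a >= thresh, a, 0.0)

x = rng.standard_normal(d_model)                 # a model activation
f = encode(x)
x_hat = f @ W_dec                                # sparse reconstruction
print(f"{(f > 0).sum()} of {d_feat} features active")
```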

My ICLR talk “How Do We Build a General Intelligence?” is now online! youtube.com/watch?v=HEp4TO…
YouTube: How Do We Build a General Intelligence?