
Alberto Cetoli (@[email protected])

@fractalego

Data scientist with an NLProc twist.

Alberto Cetoli (@[email protected]) 님이 재게시함

Marc Andreessen and Ben Horowitz say that AI models are hitting a ceiling of capabilities: "we've really slowed down in terms of the amount of improvement... we're increasing GPUs, but we're not getting the intelligence improvements, at all"


Alberto Cetoli (@[email protected]) 님이 재게시함

The results are in: Trade-offs between accuracy & performance in LLM quantization After hundreds of thousands of evals and benchmarks from our research team at @neuralmagic, I'm excited to share our findings on LLM quantization—now available as a paper on arXiv:…
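For intuition on the accuracy-versus-size trade-off the thread is about, here is a minimal NumPy sketch of round-to-nearest symmetric weight quantization (an illustration only, not the paper's method or Neural Magic's tooling): as the bit-width drops, storage shrinks but reconstruction error grows.

```python
import numpy as np

def quantize(weights: np.ndarray, bits: int):
    """Symmetric round-to-nearest quantization to signed integers."""
    qmax = 2 ** (bits - 1) - 1                  # e.g. 127 for int8
    scale = np.abs(weights).max() / qmax        # per-tensor scale factor
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q.astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Toy "layer" weights; real evaluations run full benchmarks on quantized LLMs.
w = np.random.randn(1024, 1024).astype(np.float32)
for bits in (8, 4, 2):
    q, scale = quantize(w, bits)
    err = np.abs(w - dequantize(q, scale)).mean()
    print(f"{bits}-bit: mean abs error {err:.4f}, ~{bits / 32:.0%} of fp32 storage")
```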


Alberto Cetoli (@[email protected]) 님이 재게시함

It saddens me how such an imaginative and creative show struggles to get a season 2. This is one of those shows that will be cited as a main inspiration for movies and games 20 years from now; it's peak sci-fi.

Preview of "Scavengers Reign" Season 2. Unfortunately, this new season has not yet been greenlighted. So the creators of the show & Green Street studio produced this concept trailer in-house. Full video >> catsuka.com/news/2024-11-0… cc @josephbennett00 @charleshuettner



Alberto Cetoli (@[email protected]) 님이 재게시함

Preview of "Scavengers Reign" Season 2. Unfortunately, this new season has not yet been greenlighted. So the creators of the show & Green Street studio produced this concept trailer in-house. Full video >> catsuka.com/news/2024-11-0… cc @josephbennett00 @charleshuettner

"Scavengers Reign" released a year ago today. Fingers crossed for Season 2.



Alberto Cetoli (@[email protected]) 님이 재게시함

Former Conservative MP (UK) Rory Stewart on GiveDirectly's basic income program that transformed entire communities. "The results were absolutely staggering... The whole place just felt better. Happier."


Alberto Cetoli (@[email protected]) 님이 재게시함

I may not be convinced that pragmatics is a thing, but it sure comes in handy.


Alberto Cetoli (@[email protected]) 님이 재게시함

Love it. Matches intuitions very well.

👶NEW PAPER🪇 Children are better at learning a second language (L2) than adults. In a new paper (led by the awesome Ionut Constantinescu) we ask: 1. "Do LMs also have a 'Critical Period' (CP) for language acquisition?" and 2. "What can LMs tell us about the CP in humans?"



Alberto Cetoli (@[email protected]) 님이 재게시함

a touch that feels nothing


Alberto Cetoli (@[email protected]) 님이 재게시함

An outdoor pic of Pilet 5. #raspberrypi #portable #computer


Alberto Cetoli (@[email protected]) 님이 재게시함

Newspeak House is hosting an election night event, if anyone in London wants a place to be: lu.ma/f0gmn2dy


Alberto Cetoli (@[email protected]) 님이 재게시함

Fans of The Bitter Lesson may be interested in this talk from 2018 (recently re-discovered) which includes its first public presentation, at 30:40. youtu.be/tUCJ4UsKU2I?si…

Weinberg Symposium 2018: Sutton (YouTube)


Alberto Cetoli (@[email protected]) 님이 재게시함

I like @OpenAI #SWARM and I indeed wrote an article about it: linkedin.com/pulse/swarming… But I am sadly surprised that the #FOSS GitHub project states on the issue page for each issue


Alberto Cetoli (@[email protected]) 님이 재게시함

A few ideas from this 2018 paper on scalable neuro-symbolic reasoning are now mainstream 🙂 we 1) used k-NN/MIPS to find the most relevant facts in a KB to answer a query (as in RAG today), and 2) recursively decompose queries into sub-queries (like in CoT, but in embedding…


"Towards Neural Theorem Proving at Scale," Minervini and Bosnjak et al.: arxiv.org/abs/1807.08204



Alberto Cetoli (@[email protected]) 님이 재게시함

the gap between OAI/Anthropic/Meta/etc. and a large group of companies all over the world you've never cared to know of, in terms of LM pre-training? tiny


Alberto Cetoli (@[email protected]) 님이 재게시함

What. The.


Alberto Cetoli (@[email protected]) 님이 재게시함

RIP Greg Hildebrandt who has left us to join his brother Tim in Middle Earth


Alberto Cetoli (@[email protected]) 님이 재게시함

Gut feeling: Most of the prompts the general public uses LLMs for are simple and in the training set. The "hardest problems" we get annoyed LLMs can't solve around here are not. Therefore: some frontier labs overfit the training set deliberately. And most users love it.


Alberto Cetoli (@[email protected]) 님이 재게시함

Hottest week for London AI so far 🔥 Dev Day yesterday and AI Tinkerers tonight! london.aitinkerers.org/p/ai-tinkerers… @monzo @tortus_AI @_lucas_godfrey @QuotientAI @samshapley @LukeHarries_ @stephenbtl


Alberto Cetoli (@[email protected]) 님이 재게시함

Our new AI paper reveals surprising geometric structure in the LLM-learned concepts: 1) They form brain-like "lobes", 2) they form "semantic crystals" much more precise than it first seems, and 3) the concept cloud is more fractal than round:

1/6 New paper! “The Geometry of Concepts: Sparse Autoencoder Feature Structure.” We find that the concept universe of SAE features has interesting structure at three levels: 1) “atomic” small-scale, 2) “brain” intermediate-scale, and 3) “galaxy” large-scale!
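For readers unfamiliar with the setup, the "SAE features" studied in the paper come from sparse autoencoders trained on model activations. Below is a generic, minimal PyTorch sketch of such an autoencoder (an illustration only, not the paper's code); the dimensions and training data are made up.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder with an L1 penalty so only a few features
    fire per activation vector; each decoder row is one 'feature' direction."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model, bias=False)

    def forward(self, x):
        feats = torch.relu(self.encoder(x))      # sparse feature activations
        return self.decoder(feats), feats

# Toy training loop on random "activations"; real SAEs train on LLM residual streams.
sae = SparseAutoencoder(d_model=128, d_features=1024)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
for _ in range(100):
    x = torch.randn(256, 128)
    recon, feats = sae(x)
    loss = ((recon - x) ** 2).mean() + 1e-3 * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The geometry discussed in the paper lives in the learned decoder directions:
feature_directions = sae.decoder.weight.T        # shape (d_features, d_model)
```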



Alberto Cetoli (@[email protected]) 님이 재게시함

My ICLR talk “How Do We Build a General Intelligence?” is now online! youtube.com/watch?v=HEp4TO…


