Gary Marcus
@GaryMarcus
“In the aftermath of GPT-5’s launch … the views of critics like Marcus seem increasingly moderate.” —@newyorker
Three thoughts on what really matters:
1. Fuck cancer
2. Friends are irreplaceable
3. The new "Marcus test" for AI is when AI makes a significant dent on cancer
May that happen sooner, much sooner, rather than later. In memory of my childhood friend Paul.
This place is toxic. For the last seven years I warned you that LLMs and similar approaches would not lead us to AGI. Almost nobody is willing to acknowledge that, even though so many of you gave me endless grief about it at the time. I also warned you, first, that Sam…
Clearly the machine learning community can’t handle the truth. Good to see that @MrEwanMorrison can.
Now that many others are coming out with the truth and jumping ship, for the record it must be accepted that @GaryMarcus was the first to say, and prove, that LLMs are not a pathway to AGI.
Wow. Just wow. @ylecun taking credit for my March 2022 argument that scaling would hit a wall and that LLMs would not bring us to AGI, after he initially attacked me for saying it and continued to promote them right up until ChatGPT ate his lunch, has to be among the most…
Lotta people owe Gary an apology for the grief he got over his prescient Nautilus story that described, years in advance, how we would be here with respect to AI
translation : Deep Learning Hit a Wall™ new techniques needed to go forward; just like my infamous 2022 paper said. go back and read it!
It is one thing for everyone from @sama to @ylecun to @elonmusk and the Twitterverse to have attacked me literally for years for saying that scaling alone would not get us to AGI. Another to pretend now that that never happened — now that I have largely been proven correct.
this is so true. the twitterverse (aside from @wendyweeww, below) is getting the history here exactly wrong, and I honestly don’t know what to do about it. suggestions welcome.
Exactly. Yann hardly faced the challenges Gary faced on this topic, yet it’s as if Yann is “the one who was right all along”
People forgot @GaryMarcus 😂
Genuine question: Why is there a double standard between Ilya and Yann?
It's a relief to see that other researchers are finally seeing the light. We have been blinded by LLMs. We need new methods if we want to truly reach AGI. We cannot become like physics, whose field stagnated for decades after Einstein's big discoveries.
Ilya Sutskever: We are no longer in the age of scaling, we are back to the age of research
Another top AI researcher comes out & claims that LLMs won't reach human-level intelligence (AGI) no matter how much they are scaled up. And yet the US govt has just committed to vast investment in data centre infrastructure, partnering with LLM companies to...scale up LLMs
"We are no longer in the age of scaling, we are back to the age of research." Kaplan scaling laws are flattening at current frontiers, autoregressive transformers exhausted for reasoning/planning/alignment gains. Next jump needs real architectural breakthroughs - test-time…
*exactly* what i have been saying all along. sooner or later all my haters are going to have to realize that the vast majority of what i said here has turned out to be correct.
agreed.
I want to put forward a CRAZY fear: We’re trying to figure out how to go back to doing science *without* creating pesky fiercely independent secure American Scientists. No more Feynmans. No more Watsons. Just business leaders, AI, military contractors, engineers, visa holders.
2025 was supposed to be the year of agents. instead it’s the year of cleaning up their messes.
2025 in AI is not like the years before. Instead, multiple AI experts are rapidly moving towards the positions I have long held. Latest is @ilyasut, who has converged with me on the deficiencies of neural networks relative to humans in generalization and the need for deeper…
The @ilyasut episode
0:00:00 – Explaining model jaggedness
0:09:39 – Emotions and value functions
0:18:49 – What are we scaling?
0:25:13 – Why humans generalize better than models
0:35:45 – Straight-shotting superintelligence
0:46:47 – SSI's model will learn from deployment…