
Gary Marcus

@GaryMarcus

“In the aftermath of GPT-5’s launch … the views of critics like Marcus seem increasingly moderate.” —@newyorker

Pinned

Three thoughts on what really matters:
1. Fuck cancer
2. Friends are irreplaceable
3. The new "Marcus test" for AI is when AI makes a significant dent on cancer

May that happen sooner, much sooner, rather than later. In memory of my childhood friend Paul.


This place is toxic. For the last seven years I warned you that LLMs and similar approaches would not lead us to AGI. Almost nobody is willing to acknowledge that, even though so many of you gave me endless grief about it at the time. I also warned you – first – that Sam…


Clearly the machine learning community can’t handle the truth. Good to see that @MrEwanMorrison can.

Now that many others are coming out with the truth and jumping ship – for the record it must be accepted that @GaryMarcus was the first to say – and prove – that LLMs are not a pathway to AGI.



Wow. Just wow. @ylecun taking credit for my March 2022 argument that scaling would hit a wall and that LLMs would not bring us to AGI--after he initially attacked me for saying it and continued to promote them right up until ChatGPT ate his lunch--has to be among the most…


Gary Marcus reposted

Lotta people owe Gary an apology for the grief he got over his prescient Nautilus story that described, years in advance, how we would be here with respect to AI

translation: Deep Learning Hit a Wall™; new techniques needed to go forward, just like my infamous 2022 paper said. go back and read it!



It is one thing for everyone from @sama to @ylecun to @elonmusk and the Twitterverse to have attacked me literally for years for saying that scaling alone would not get us to AGI. Another to pretend now that that never happened — now that I have largely been proven correct.


this is so true. the twitterverse (aside from @wendyweeww, below) is getting the history here exactly wrong, and I honestly don’t know what to do about it. suggestions welcome.

Exactly. Yann hardly faced the challenges Gary faced on this topic, yet it’s as if Yann is “the one who was right all along”



Gary Marcus reposted

People forgot @GaryMarcus 😂

Genuine question: Why is there a double standard between Ilya and Yann?



Gary Marcus reposted

It's relieving to see that other researchers are finally seeing the light. We have been blindsided by LLMs. We need new methods if we want to truly reach AGI. We cannot become like physics. Their field stagnated for decades after Einstein's big discoveries.

Ilya Sutskever: We are no longer in the age of scaling, we are back to the age of research



Gary Marcus reposted

Another top AI researcher comes out & claims that LLMs won't reach human-level intelligence (AGI) no matter how much they are scaled up. And yet the US govt has just committed to vast investment in data centre infrastructure, partnering with LLM companies to...scale up LLMs


translation: Deep Learning Hit a Wall™; new techniques needed to go forward, just like my infamous 2022 paper said. go back and read it!

"We are no longer in the age of scaling, we are back to the age of research." Kaplan scaling laws are flattening at current frontiers, autoregressive transformers exhausted for reasoning/planning/alignment gains. Next jump needs real architectural breakthroughs - test-time…



*exactly* what i have been saying all along. sooner or later all my haters are going to have to realize that the vast majority of what i said here has turned out to be correct.



agreed.

I want to put forward a CRAZY fear: We’re trying to figure out how to go back to doing science *without* creating pesky fiercely independent secure American Scientists. No more Feynmans. No more Watsons. Just business leaders, AI, military contractors, engineers, visa holders.



2025 was supposed to be the year of agents. instead it’s the year of cleaning up their messes.


2025 in AI is not like the years before. Instead, multiple AI experts are rapidly moving towards the positions I have long held. Latest is @ilyasut, who has converged with me on the deficiencies of neural networks relative to humans in generalization and the need for deeper…

The @ilyasut episode
0:00:00 – Explaining model jaggedness
0:09:39 – Emotions and value functions
0:18:49 – What are we scaling?
0:25:13 – Why humans generalize better than models
0:35:45 – Straight-shotting superintelligence
0:46:47 – SSI's model will learn from deployment…


