Debasish

@drdebmath

Assistant Professor, CSE @IITIOfficial. PhD, BTech @IITGuwahati. Postdoc @UniLUISS @UQO @Carleton_U @uottawa Distributed Algorithms.

Debasish reposted

Galaxy brain resistance: vitalik.eth.limo/general/2025/1…


What bothers me in this video is that the robot has such poor planning ability that it does not lift its foot even 3 more inches when it can clearly see an obstacle. It hits the obstacle, loses balance for 200 cycles (assuming 1 ms cycles), and rebalances. Not even babies do that.

Uneven terrain



It has been annoying forever that none of @ChatGPTapp, @claudeai, or @GeminiApp renders its output correctly in the chat interface when the output contains markdown in some form. The incorrigible case is HTML mixed with markdown.


Debasish reposted

One of my favorite things to tell math grad students (perhaps relevant in the context of questions about the beauty of matrix multiplication) is that “everything true is beautiful.”


When AI actually becomes really helpful in solving mathematics, we will see a version of this that will exceed what Euler, Newton, or Erdos did to their contemporaries.


When you misunderstand the difference between 100 percent and 100x? BTW, for any meaningful impact, they need to increase their capacity by another 1000x.

Nature has many ways of storing carbon dioxide. One is by turning it into a solid form that can lock CO2 away for centuries, but this option takes a lot of time.



Why does the python-pptx package not support equation-mode objects? Or is there some trick?
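
One common workaround, sketched here under the assumption that a rendered image of the equation is acceptable: python-pptx has no native equation object, so the equation can be rendered to a PNG with matplotlib's mathtext and dropped onto the slide with add_picture. The file name eq.png, the example formula, and the placement values are illustrative, not part of any python-pptx API for equations.

# Sketch: render a LaTeX-style equation with matplotlib mathtext,
# then place the image on a slide (python-pptx has no equation object,
# so the picture stands in for one).
import matplotlib.pyplot as plt
from pptx import Presentation
from pptx.util import Inches

# Render the equation to a tightly cropped, transparent PNG.
fig = plt.figure(figsize=(3, 1))
fig.text(0.5, 0.5, r"$e^{i\pi} + 1 = 0$", ha="center", va="center", fontsize=24)
fig.savefig("eq.png", dpi=300, bbox_inches="tight", transparent=True)
plt.close(fig)

# Insert the rendered equation into a blank slide.
prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[6])  # layout 6 = blank in the default template
slide.shapes.add_picture("eq.png", Inches(2), Inches(2), height=Inches(1))
prs.save("equation_demo.pptx")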


At 10x the current rate, all words generated by LLMs would exceed all words spoken by all humans. If we approximate intelligence only as a function of volume, and LLMs are already better than average humans at average tasks, it remains to be seen how the mean moves.


Debasish reposted

Finally had a chance to listen through this pod with Sutton, which was interesting and amusing. As background, Sutton's "The Bitter Lesson" has become a bit of a biblical text in frontier LLM circles. Researchers routinely talk about and ask whether this or that approach or idea…

.@RichardSSutton, father of reinforcement learning, doesn’t think LLMs are bitter-lesson-pilled. My steel man of Richard’s position: we need some new architecture to enable continual (on-the-job) learning. And if we have continual learning, we don't need a special training…



Now with AI video models, anyone can have any alternative ending they want for their favourite TV series. Creative taste becomes more valuable when the generation cost approaches 0.


LLMs are time compressors, as tokens per second increase and the value of each token increases.

