Debasish
@drdebmath
Assistant Professor, CSE @IITIOfficial. PhD, BTech @IITGuwahati. Postdoc @UniLUISS, @UQO, @Carleton_U, @uottawa. Distributed Algorithms.
What bothers me in this video is that the robot has such poor planning ability that it does not even lift its foot 3 more inches when it can clearly see an obstacle. It hits the obstacle, loses balance for 200 control cycles (assuming 1 ms cycles, about 200 ms), and rebalances. Not even babies do that.
It has been so annoying forever that none of @ChatGPTapp, @claudeai, or @GeminiApp renders its own output correctly in the chat interface when that output contains Markdown in some form. The incorrigible case is HTML mixed with Markdown.
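A minimal sketch of why the HTML-plus-Markdown case is hard, using the third-party Python-Markdown package (how the chat UIs' own rendering pipelines are wired is not something this shows): per the original Markdown rules, span syntax inside a block-level HTML element is left untouched, so a renderer must either escape the raw HTML or trust it.

```python
# pip install markdown
import markdown

mixed = "Here is a table:\n\n<table><tr><td>**bold?**</td></tr></table>"
print(markdown.markdown(mixed))
# The raw HTML block passes through untouched, so the **bold?** inside
# the <td> is NOT converted to <strong>. A chat UI must then either
# escape the HTML (user sees literal <table> tags) or render it as-is
# (an injection risk); most escape, which is why mixed output looks broken.
```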
One of my favorite things to tell math grad students (perhaps relevant in the context of questions about the beauty of matrix multiplication) is that “everything true is beautiful.”
When AI actually becomes really helpful in solving mathematics, we will see a version of this that will exceed what Euler, Newton, or Erdős did to their contemporaries.
When you misunderstand the difference between 100 percent and 100x? BTW, for any meaningful impact, they need to increase their capacity by another 1000x.
Why does the python-pptx package not support equation-mode objects? Or is there some trick?
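For what it's worth, equations in .pptx files are stored as OMML (Office Math Markup), which python-pptx simply doesn't model; there is no equation API. The usual trick is to inject raw OMML through the underlying lxml element. A hedged sketch follows; the minimal oMathPara fragment and where PowerPoint expects it to sit are assumptions, and in practice you would copy the OMML out of a file PowerPoint itself saved with the equation you want.

```python
# pip install python-pptx lxml
from pptx import Presentation
from lxml import etree

OMML_NS = "http://schemas.openxmlformats.org/officeDocument/2006/math"

def inject_omml(slide, omml_xml: str):
    """Append a pre-built OMML fragment to the first text frame's paragraph."""
    for shape in slide.shapes:
        if shape.has_text_frame:
            # python-pptx exposes the underlying lxml <a:p> element as ._p,
            # so arbitrary XML can be appended even without an official API.
            p = shape.text_frame.paragraphs[0]._p
            p.append(etree.fromstring(omml_xml))
            return
    raise ValueError("no text frame on slide")

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[5])  # "Title Only" layout

# An (assumed) empty math paragraph; whether PowerPoint renders this
# minimal fragment as an equation needs verifying against a real file.
omml = f'<m:oMathPara xmlns:m="{OMML_NS}"><m:oMath/></m:oMathPara>'
inject_omml(slide, omml)
prs.save("equation_demo.pptx")
```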
At 10x the current rate, all words generated by LLMs would exceed all words spoken by all humans. If we crudely approximate intelligence as a function of volume, and LLMs are already better than the average human at average tasks, it remains to be seen how the mean moves.
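A back-of-envelope check of what that claim implies; every figure below is an assumption, not a sourced number (roughly 16,000 spoken words per person per day is a common linguistics ballpark).

```python
# Fermi sketch: what would "10x current LLM rate > all human speech" imply?
# All numbers are assumptions, not sourced data.
WORDS_PER_PERSON_PER_DAY = 16_000      # rough linguistics ballpark
PEOPLE = 8e9

human_words_per_day = WORDS_PER_PERSON_PER_DAY * PEOPLE   # ~1.3e14
implied_llm_words_per_day = human_words_per_day / 10      # ~1.3e13

print(f"human speech:       {human_words_per_day:.1e} words/day")
print(f"implied LLM output: {implied_llm_words_per_day:.1e} words/day today")
```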
Finally had a chance to listen through this pod with Sutton, which was interesting and amusing. As background, Sutton's "The Bitter Lesson" has become a bit of a biblical text in frontier LLM circles. Researchers routinely talk about and ask whether this or that approach or idea…
.@RichardSSutton, father of reinforcement learning, doesn’t think LLMs are bitter-lesson-pilled. My steel man of Richard’s position: we need some new architecture to enable continual (on-the-job) learning. And if we have continual learning, we don't need a special training…
Now, with AI video models, anyone can make any alternative ending they want for their favourite TV series. Creative taste becomes more valuable as the cost of generation approaches zero.
LLMs are time compressors: as tokens per second increase and the value of each token increases, more work fits into less time.