
Simon Winder

@robotbugs

EE then PhD computational neuroscience, ~15 years machine learning / computer vision at ex @MSFTResearch. Deeply involved in AI, RL, robotics, and art.

Simon Winder reposted

Did you know? When snails hatch, they eat the shells of their eggs. This gives them needed calcium for their own shells.

From DOGS 🐶❤️🐱 CATS

Simon Winder reposted

Round 2: Frank the Tank is back


Simon Winder reposted

George Porter shows how nylon was discovered by chemists, and gives a demonstration of how we can spin nylon threads. [📹 The Royal Institution]

From ViralRush ⚡

Simon Winder reposted

The Visitors (Made with Veo 3). Humans react to alien visitors. This mini project was completed in 8 hours, from start to finish, by one person. The tools I used are listed at the end of the video.


Simon Winder reposted

🚨 China: Massive explosion at the Shandong Youdao Chemical plant in Gaomi city. At least 5 dead, 19 injured, 6 missing. The factory manufactured chlorpyrifos, a neurotoxic organophosphate insecticide that acts as a nerve agent.


Simon Winder reposted

😈 BEWARE: Claude 4 + GitHub MCP will leak your private GitHub repositories, no questions asked. We discovered a new attack on agents using GitHub’s official MCP server, which can be exploited by attackers to access your private repositories. creds to @marco_milanta (1/n) 👇


Pythagorean triples in complex number space: (a+ib)^2 + (c+id)^2 = (e+if)^2.

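A fun way to explore this is to brute-force small Gaussian integers and keep the pairs whose squares sum to another perfect square. A minimal Python sketch (the search radius N and the lookup-table approach are my own illustrative choices):

```python
from itertools import product

# Enumerate Gaussian-integer "Pythagorean triples":
# (a+ib)^2 + (c+id)^2 = (e+if)^2, with all six components integers.

def gsquare(a, b):
    # (a+ib)^2 expanded over the integers: (a^2 - b^2) + (2ab)i
    return (a * a - b * b, 2 * a * b)

N = 4  # search radius for a, b, c, d (demo-sized; adjust freely)

# The left side has modulus at most 4N^2, so any root e+if has |e|, |f| <= 2N.
roots = {}
for e, f in product(range(-2 * N, 2 * N + 1), repeat=2):
    roots.setdefault(gsquare(e, f), (e, f))

for a, b, c, d in product(range(-N, N + 1), repeat=4):
    if (a, b) == (0, 0) or (c, d) == (0, 0):
        continue  # skip degenerate terms
    lhs = tuple(u + v for u, v in zip(gsquare(a, b), gsquare(c, d)))
    if lhs in roots:
        e, f = roots[lhs]
        print(f"({a}{b:+}i)^2 + ({c}{d:+}i)^2 = ({e}{f:+}i)^2")
```

With N=4 this already recovers the ordinary real triples (b=d=f=0 gives 3^2 + 4^2 = 5^2, up to signs) alongside genuinely complex ones; symmetric duplicates are not filtered.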

Simon Winder reposted

This perfectly articulates why "model alignment" is the wrong way to think about AI safety. Safety is not a property of models, it's a property of systems that use models. Asking whether a model is safe is like asking whether a computer chip is safe.


Simon Winder reposted

12/ Despite wartime constraints, Ukraine has rapidly ramped up domestic weapons production. Casper of the 14th Separate UAV Regiment says 80–90% of the drones his unit now uses are made in Ukraine, compared to near-total reliance on foreign models in the early years of the war.


Simon Winder reposted

I know we aren’t supposed to interrupt the group chat discourse with woke nonsense like the habitability of our planet, but possibly the most intense heat wave on record is unfolding from Sub-Saharan Africa to East Asia and it’s getting zero attention.

ABSOLUTE INSANITY. Unbelievably hot nights in CENTRAL ASIA. Thousands of April hottest-night records shattered by the largest margins ever seen in world climatic history (locally 10C+ above monthly records!). Nothing that has happened anywhere compares to what is happening now.



Simon Winder reposted

Sure, if you ignore o3


Simon Winder reposted

Game over. Grok won. If you’re an investor in OpenAI, you can report the tax loss to your accountant this year.


Simon Winder reposted

I instantly knew something was up when it sent this


Simon Winder reposted

A new paper demonstrates that LLMs could "think" in latent space, effectively decoupling internal reasoning from visible context tokens. This breakthrough suggests that even smaller models can achieve remarkable performance without relying on extensive context windows.

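The tweet doesn't name the paper, but the core idea is easy to sketch: spend extra compute by iterating a shared block on a hidden state before decoding, instead of emitting reasoning tokens into the visible context. A toy PyTorch illustration (all module choices here are mine, not the paper's architecture):

```python
import torch
import torch.nn as nn

class LatentReasoner(nn.Module):
    """Toy model that 'thinks' by looping a recurrent block in latent space."""

    def __init__(self, d_model=64, vocab=100):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.block = nn.GRUCell(d_model, d_model)  # shared weights, applied repeatedly
        self.head = nn.Linear(d_model, vocab)

    def forward(self, tokens, n_loops=8):
        h = self.embed(tokens).mean(dim=1)  # crude pooled context state
        z = torch.zeros_like(h)
        for _ in range(n_loops):            # latent "thinking" steps; no tokens emitted
            z = self.block(h, z)
        return self.head(z)                 # logits for the next token

logits = LatentReasoner()(torch.randint(0, 100, (2, 5)), n_loops=16)
print(logits.shape)  # torch.Size([2, 100])
```

The depth of reasoning becomes a runtime knob (n_loops) rather than a function of how many tokens fit in the context window.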

Simon Winder reposted

Given a pretrained model, spline theory tells you how to alter its curvature by changing a single interpretable parameter! Reducing the model's curvature always significantly improves its downstream task performance/robustness. Also works on Transformers! arxiv.org/abs/2502.07783

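I haven't checked which parameter the paper actually exposes, so treat this as a stand-in: a classic single-knob curvature control for ReLU networks is softplus with temperature beta, which recovers ReLU as beta grows and smooths the network (lower curvature) as beta shrinks. Illustrative only, not necessarily the paper's method:

```python
import torch
import torch.nn.functional as F

def smooth_relu(x, beta=10.0):
    # softplus_beta(x) = (1/beta) * log(1 + exp(beta * x)); -> relu(x) as beta -> inf
    return F.softplus(x, beta=beta)

x = torch.linspace(-2, 2, 5)
for beta in (1.0, 10.0, 100.0):
    print(beta, smooth_relu(x, beta))  # approaches torch.relu(x) as beta grows
```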

Simon Winder reposted

The Kalman Filter was once a core topic in EECS curricula. Given its relevance to ML, RL, Ctrl/Robotics, I'm surprised that most researchers don't know much about it - and end up rediscovering it. The Kalman Filter seems messy & complicated, but the intuition behind it is invaluable 1/4

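For anyone who wants the refresher the thread promises, the whole filter is a two-step predict/update loop. A minimal 1-D constant-velocity example in NumPy (the dynamics, noise levels, and tracked signal are all my illustrative choices):

```python
import numpy as np

F = np.array([[1., 1.], [0., 1.]])  # state transition: position += velocity (dt = 1)
H = np.array([[1., 0.]])            # we observe position only
Q = 1e-3 * np.eye(2)                # process noise covariance
R = np.array([[0.25]])              # measurement noise covariance

def kf_step(x, P, z):
    # Predict: propagate the estimate through the dynamics.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend in measurement z, weighted by the Kalman gain.
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros((2, 1)), np.eye(2)  # initial [position, velocity] estimate
rng = np.random.default_rng(0)
for t in range(1, 51):              # noisy observations of a unit-speed target
    x, P = kf_step(x, P, np.array([[t + rng.normal(0, 0.5)]]))
print(x.ravel())                    # converges toward [~50, ~1]
```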

Simon Winder reposted

Live from Paris, at the #ParisAISummit . I gave the opening keynote about AI’s past, present, future and opportunities. “75 years ago, Alan Turing dared humanity to imagine ‘thinking machines’. But perhaps that vision is too narrow and inward-looking. As AI today is poised to…


Simon Winder reposted

🚨 Studies are starting to show what many of us feared: AI use might lead to overreliance and human disempowerment. Below is the SHOCKING conclusion of this particular study [download for future reference]: "1. We surveyed 319 knowledge workers who use GenAI tools (e.g.,…


Simon Winder reposted

Hyperparameter of the day! Have you adjusted your adam_beta2 lately? Ever? You might want to, especially if you're working in RL. Beta2 usually defaults to 0.999 - what does that mean? Interpret it by computing 1/(1-beta2), so with beta2=0.999 this comes out to 1000,…
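
The tweet is truncated, but its rule of thumb is easy to verify: 1/(1 - beta2) is roughly the number of steps Adam's second-moment estimate averages over, so the 0.999 default reacts on a ~1000-step timescale; in RL, where the gradient distribution shifts with the policy, a shorter window is often preferable. A quick sketch (the model and learning rate are placeholder choices):

```python
import torch

for beta2 in (0.999, 0.99, 0.95):
    print(f"beta2={beta2}: ~{1 / (1 - beta2):.0f}-step averaging window")
# beta2=0.999 -> ~1000 steps, beta2=0.99 -> ~100, beta2=0.95 -> ~20

model = torch.nn.Linear(8, 2)  # stand-in model for illustration
opt = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.9, 0.95))
```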


Simon Winder reposted

Large Memory Models for Long-Context Reasoning This paper focuses on improving long-context reasoning with explicit memory mechanisms. It presents LM2, a Transformer-based architecture equipped with a dedicated memory module to enhance long-context reasoning, multi-hop…

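The description is truncated, but the general "dedicated memory module" pattern is easy to sketch: a learned memory bank that tokens read via cross-attention, alongside ordinary self-attention. Below is a generic memory-augmented block in PyTorch; it illustrates the pattern, not LM2's actual design:

```python
import torch
import torch.nn as nn

class MemoryAugmentedBlock(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_mem=32):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(n_mem, d_model) * 0.02)  # learned bank
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mem_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                  # x: (batch, seq, d_model)
        h, _ = self.self_attn(x, x, x)     # ordinary self-attention
        x = self.norm1(x + h)
        mem = self.memory.expand(x.size(0), -1, -1)
        m, _ = self.mem_attn(x, mem, mem)  # tokens read from the memory bank
        return self.norm2(x + m)

out = MemoryAugmentedBlock()(torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 10, 64])
```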
