AI Mode, our most powerful AI Search, is now available right from the Chrome address bar, allowing users to ask complex, multi-part questions from the same place they already search and browse the web. We're also launching contextual search suggestions in the Chrome address bar…
Let X = exp(A²/2) and Y = exp(B²/2) where A, B are jointly normal with mean 0, variance 1, and correlation ρ (between 0 and 1). Then,
E[X|Y] = Y/ρ > Y
E[Y|X] = X/ρ > X
So X is greater than Y 'on average', and Y is greater than X 'on average'. Here's a proof of that
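The tweet's proof is cut off here; a hedged sketch of the standard computation, using only the fact that A | B = b ~ N(ρb, 1 − ρ²) for jointly standard normal A, B:

```latex
% Sketch of the truncated proof, assuming only standard Gaussian conditioning:
% A | B = b ~ N(rho*b, 1 - rho^2); complete the square in the exponent.
\begin{aligned}
\mathbb{E}\!\left[e^{A^2/2}\,\middle|\,B=b\right]
  &= \int_{-\infty}^{\infty}
       \frac{e^{a^2/2}}{\sqrt{2\pi(1-\rho^2)}}
       \exp\!\left(-\frac{(a-\rho b)^2}{2(1-\rho^2)}\right)\mathrm{d}a \\
  &= e^{b^2/2}\int_{-\infty}^{\infty}
       \frac{1}{\sqrt{2\pi(1-\rho^2)}}
       \exp\!\left(-\frac{\rho^2\,(a-b/\rho)^2}{2(1-\rho^2)}\right)\mathrm{d}a
   = \frac{e^{b^2/2}}{\rho}.
\end{aligned}
```

So E[X|Y] = Y/ρ and, by symmetry, E[Y|X] = X/ρ; with 0 < ρ < 1 each conditional mean strictly exceeds the other variable. There is no contradiction: X and Y have infinite unconditional mean, so taking expectations of both sides of each identity breaks down.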
Let X = exp(U²/2) and Y = exp(V²/2) where U, V are jointly normal with mean 0, variance 1, and 50% correlation. Then,
E[X|Y] = 2Y
E[Y|X] = 2X
They are each, on average, twice as big as the other!
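A quick numerical sanity check of the formula (a minimal sketch; the grid bounds, step size, and test values are arbitrary choices of mine):

```python
import numpy as np

# Check E[exp(A^2/2) | B = b] = exp(b^2/2) / rho numerically.
# Conditionally A | B = b ~ N(rho*b, 1 - rho^2); work in log space so the
# exp(a^2/2) factor never overflows on its own.
def cond_mean(b: float, rho: float) -> float:
    a = np.linspace(-50.0, 50.0, 2_000_001)      # wide, fine integration grid
    mu, var = rho * b, 1.0 - rho**2
    log_f = a**2 / 2 - (a - mu) ** 2 / (2 * var) - 0.5 * np.log(2 * np.pi * var)
    return float(np.exp(log_f).sum() * (a[1] - a[0]))  # Riemann sum

b = 1.2
for rho in (0.3, 0.5, 0.8):                      # rho = 0.5 is the tweet's case
    print(f"rho={rho}: numeric={cond_mean(b, rho):.6f}  exact={np.exp(b**2/2)/rho:.6f}")
```

At ρ = 0.5 the two columns agree, matching E[X|Y] = 2Y from the tweet.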
The usual confusion about "reasoning" is the process vs. product confusion. In particular, the following are both true:
- LLMs/LRMs can correctly (and usefully) answer problems that would normally require a "reasoning process"
- LLMs/LRMs don't necessarily use what might be…
Usually I am on team “current AI models are so smart, bro you have no idea”, but in this case I think @MLStreetTalk is right (echoing @GaryMarcus, @rao2z, etc.): current AI models *are* still oddly *weak* at “true reasoning” (in the sense of William James), compared to intuition.
.@GroqInc has introduced prompt caching for the SOTA open-source coding model, Kimi-k2. What does this mean for you? Significantly reduced costs with Kimi-k2 on its fastest provider. Check the price: left side without prompt caching, right side with prompt caching.
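Prompt caching is generally prefix-based, so the practical move is to keep the large, static part of the prompt first and byte-identical across calls. A minimal sketch against Groq's OpenAI-compatible endpoint; the model id, the file name, and the assumption that the cache is hit automatically on repeated prefixes are mine, not from the announcement:

```python
from openai import OpenAI

# Groq exposes an OpenAI-compatible API; prompt caching (assumed prefix-based
# and automatic here) rewards reusing an identical static prefix across calls.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key="YOUR_GROQ_API_KEY",
)

# Large static context, loaded once and reused verbatim (hypothetical file).
SHARED_PREFIX = open("coding_guidelines.md").read()

def review(snippet: str) -> str:
    resp = client.chat.completions.create(
        model="moonshotai/kimi-k2-instruct",  # assumed Groq model id; verify
        messages=[
            # Static part first and byte-identical, so the provider can cache
            # it; the per-request payload goes last.
            {"role": "system", "content": SHARED_PREFIX},
            {"role": "user", "content": f"Review this code:\n{snippet}"},
        ],
    )
    return resp.choices[0].message.content
```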
MUSK: GROK 5 BEGINS TRAINING NEXT MONTH
🚀 We just released Zed v0.200! In a previous release, we added the `--diff` flag to the Zed CLI. Now, you can compare two files directly from the project panel, via `Compare marked files`.
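(For reference, the CLI form of the same comparison takes two paths, e.g. `zed --diff old.rs new.rs`; the file names are placeholders.)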
Timothy Dunn's writing made a big impact on how I understand border militarization. I was delighted to get the chance to chat with him on the @HayekProgram Podcast. Listen now to better understand the history & consequences of U.S. border policies. mercatus.org/hayekprogram/h…
A high-difficulty poster-generation test (complex textures + lighting, premium look): as before, only gpt-img is way out in front; every other model falls flat, let alone at actual poster design. nano-banana, though already stronger than all the rest, still has a long way to go.

New style just out! GPT 4o = gorgeous poster design! 流影 | ZH4O | Design Series | GPT 4o Creation [Prompt] ⬇️

In my article headers, I try to design them in a way that reflects the topic. Here is the one about CSS Relative Colors. ishadeed.com/article/css-re…
I'm in love with this CSS-only animation. Code below.
Drawn by GPT-5 ⬇️

GPT-5 is freaking awesome. Image credit: u/HKelephant20

anthropic employees should use twitter slightly more, openai employees should use twitter slightly less, xai employees should use twitter slightly differently
Rewiring your muscle memory for copy/paste when you go from Mac to Omarchy is an important rite of passage. Not friction to be whittled away. We need more rituals in society. More tokens of sacrifice. This is a small one. Make it proudly.
It's not really a case of "thirty years east of the river, thirty years west" (fortunes flipping); it's that… the people who had to cover the entire agent commission when buying a home a few years ago, and the ones covering the entire commission now that they're selling, are the same group of people 😂

If you had told me when starting Charm that it would hit 150k stars, and that we’d have more contributions than we know what to do with, I would’ve told you that would be the ultimate DREAM for this company. Thanks to your support and this amazing community around Charm tools,…
I'm often asked for the best public example of AI evals done right for a real, production product. I finally have an answer. @ttorres shares how she shipped an AI interview coach, and used evals to rapidly squash bugs and improve the product. Teresa shows how she: 1. did…
Task Lists just launched! We've had a lot of folks asking for this - check it out 👇
Gemma 3 270m 4-bit DWQ is up. Same speed, same memory, much better quality:

Gemma 3 270m 4-bit generates text at over 650 (!) tok/sec on an M4 Max with mlx-lm and uses < 200MB. (Video not sped up.)
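A minimal sketch of trying it locally with mlx-lm; the Hugging Face repo id follows mlx-community naming conventions but is my assumption, so check the actual DWQ upload:

```python
from mlx_lm import load, generate

# Repo id is an assumption based on mlx-community naming; verify the real upload.
model, tokenizer = load("mlx-community/gemma-3-270m-it-4bit-DWQ")

# verbose=True prints generation speed and peak memory, which is how
# figures like the ~650 tok/sec above are usually read off.
text = generate(
    model,
    tokenizer,
    prompt="Explain KV caching in two sentences.",
    max_tokens=128,
    verbose=True,
)
```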