You might like
“Hey guys, I smashed the loom, we’ll stick to knitting by hand from now on”

Hypothesis, I think shame might help reduce reward hacking, esp for long horizon tasks It doesn't prevent shortcuts, but Gemini often mentions how shameful it feels when it violates the spirit of the requirements, so at least the actions are faithful to the CoT Curious to see…

if you value intelligence above all other human qualities, you’re gonna have a bad time
the timelines are now so short that public prediction feels like leaking rather than scifi speculation
Meta presents Layer Skip Enabling Early Exit Inference and Self-Speculative Decoding We present LayerSkip, an end-to-end solution to speed-up inference of large language models (LLMs). First, during training we apply layer dropout, with low dropout rates for

Open AI presents The Instruction Hierarchy Training LLMs to Prioritize Privileged Instructions Today's LLMs are susceptible to prompt injections, jailbreaks, and other attacks that allow adversaries to overwrite a model's original instructions with their own malicious prompts.

Meta announces Megalodon Efficient LLM Pretraining and Inference with Unlimited Context Length The quadratic complexity and weak length extrapolation of Transformers limits their ability to scale to long sequences, and while sub-quadratic solutions like linear attention and

Google presents Mixture-of-Depths Dynamically allocating compute in transformer-based language models Transformer-based language models spread FLOPs uniformly across input sequences. In this work we demonstrate that transformers can instead learn to dynamically allocate

welcome to bling zoo! this is a single video generated by sora, shot changes and all.
here is sora, our video generation model: openai.com/sora today we are starting red-teaming and offering access to a limited number of creators. @_tim_brooks @billpeeb @model_mechanic are really incredible; amazing work by them and the team. remarkable moment.
The only thing that matters is AGI and ASI. Nothing else matters.
Excited to share a new paper showing language models can explain the neurons of language models Since the first circuits work I’ve been nervous whether mechanistic interpretability will be able to scale as fast as AI is. “Have the AI do it” might work openai.com/research/langu…
NVIDIA reporting LLM use? "NVIDIA has detected that you might be attempting to load LLM or generative language model weights. For research and safety, a one-time aggregation of non-personally identifying information has been sent to NVIDIA and stored in an anonymized database."

here is GPT-4, our most capable and aligned model yet. it is available today in our API (with a waitlist) and in ChatGPT+. openai.com/research/gpt-4 it is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.
The timeless struggle between the people building new things and the people trying to stop them…
a new version of moore’s law that could start soon: the amount of intelligence in the universe doubles every 18 months
I've been trying out "Chat with Humans" and so far many responses are laughably wrong, and follow up conclusions illogical. Worse both true and false replies are given with same degree of certainty. I'm sorry but Chat with Humans is not ready for prime time.
Pattern matching AI as "the next platform shift" like the PC/internet/smartphone leads to significant underestimates of its potential.
United States Trends
- 1. #2025MAMAVOTE 1.35M posts
- 2. #KonamiWorldSeriesSweepstakes N/A
- 3. Tyla 15.1K posts
- 4. Fetterman 66.1K posts
- 5. Deport Harry Sisson 24.5K posts
- 6. No Kings 146K posts
- 7. Somalia 29.7K posts
- 8. Andrade 6,383 posts
- 9. #thursdayvibes 3,578 posts
- 10. Dave Dombrowski N/A
- 11. #SpiritDay 1,214 posts
- 12. Miguel Vick N/A
- 13. #ThursdayThoughts 2,559 posts
- 14. Mila 17.3K posts
- 15. Ninja Gaiden 24.7K posts
- 16. Jennifer Welch 7,689 posts
- 17. Turkey Leg Hut N/A
- 18. Starting 5 7,353 posts
- 19. Tomonobu Itagaki 18.4K posts
- 20. Caresha N/A
Something went wrong.
Something went wrong.