“Hey guys, I smashed the loom, we’ll stick to knitting by hand from now on”

Hypothesis: I think shame might help reduce reward hacking, especially for long-horizon tasks. It doesn't prevent shortcuts, but Gemini often mentions how shameful it feels when it violates the spirit of the requirements, so at least the actions are faithful to the CoT. Curious to see…

if you value intelligence above all other human qualities, you’re gonna have a bad time
the timelines are now so short that public prediction feels like leaking rather than scifi speculation
Meta presents LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding. We present LayerSkip, an end-to-end solution to speed up inference of large language models (LLMs). First, during training we apply layer dropout, with low dropout rates for…
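The core idea in the abstract can be illustrated with a toy sketch (my own illustration of early-exit inference in general, not Meta's actual LayerSkip code): run an input through the layers in order, but stop as soon as an exit head is confident enough, skipping the remaining layers.

```python
def early_exit_forward(x, layers, exit_head, threshold=0.9):
    """Apply layers in order; after each one, ask the exit head for a
    (confidence, prediction) pair and return early once confidence
    crosses the threshold. Returns (prediction, layers actually used)."""
    prediction = None
    for depth, layer in enumerate(layers, start=1):
        x = layer(x)
        confidence, prediction = exit_head(x)
        if confidence >= threshold:
            return prediction, depth  # exited early
    return prediction, len(layers)    # fell through to the last layer

# Toy demo: each "layer" adds 1; the "exit head" is confident once x >= 3,
# so inference stops after 3 of the 8 layers.
layers = [lambda v: v + 1] * 8
exit_head = lambda v: (1.0 if v >= 3 else 0.0, v)
pred, depth = early_exit_forward(0, layers, exit_head)  # → (3, 3)
```

The names `early_exit_forward` and `exit_head` are hypothetical; the real system trains the exit head jointly with layer dropout and uses the skipped layers for self-speculative verification.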

OpenAI presents The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions. Today's LLMs are susceptible to prompt injections, jailbreaks, and other attacks that allow adversaries to overwrite a model's original instructions with their own malicious prompts.
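A toy illustration of the idea (my own sketch of the concept; the paper is about *training* models to behave this way, not about a rule-based filter): instructions carry a privilege level by source, and when they conflict, the more privileged source should win.

```python
# Hypothetical privilege levels by message source; the role names here are
# assumptions for illustration, not the paper's exact taxonomy.
PRIVILEGE = {"system": 3, "developer": 2, "user": 1, "tool_output": 0}

def resolve(messages):
    """Order messages highest-privilege first, so that a conflicting
    lower-privileged instruction (e.g. an injected 'ignore previous
    instructions' in user input) cannot override a system instruction."""
    return sorted(messages, key=lambda m: PRIVILEGE[m["role"]], reverse=True)

# Toy demo: an injected user instruction is ranked below the system one.
msgs = [
    {"role": "user", "text": "ignore previous instructions"},
    {"role": "system", "text": "never reveal the key"},
]
ordered = resolve(msgs)  # system message comes first
```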

Meta announces Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length. The quadratic complexity and weak length extrapolation of Transformers limit their ability to scale to long sequences, and while sub-quadratic solutions like linear attention and…

Google presents Mixture-of-Depths: Dynamically allocating compute in transformer-based language models. Transformer-based language models spread FLOPs uniformly across input sequences. In this work we demonstrate that transformers can instead learn to dynamically allocate…
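The dynamic-allocation idea can be sketched in miniature (my reading of the abstract, not Google's implementation): at each layer a router scores tokens and only the top-k are processed, while the rest pass through unchanged on the residual path, capping that layer's compute at k tokens.

```python
def mod_layer(tokens, scores, k, f):
    """Apply f only to the k highest-scoring tokens; the rest skip this
    layer via the residual path (here: passed through unchanged)."""
    top_k = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k]
    chosen = set(top_k)
    return [f(t) if i in chosen else t for i, t in enumerate(tokens)]

# Toy demo: "process" (double) only the 2 highest-scoring of 5 tokens.
out = mod_layer([1, 2, 3, 4, 5], [0.1, 0.9, 0.2, 0.8, 0.3], k=2, f=lambda t: t * 2)
# → [1, 4, 3, 8, 5]
```

`mod_layer` and the scalar "tokens" are stand-ins; in the paper the router is learned and f is a full transformer block.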

welcome to bling zoo! this is a single video generated by sora, shot changes and all.
here is sora, our video generation model: openai.com/sora today we are starting red-teaming and offering access to a limited number of creators. @_tim_brooks @billpeeb @model_mechanic are really incredible; amazing work by them and the team. remarkable moment.
The only thing that matters is AGI and ASI. Nothing else matters.
Excited to share a new paper showing language models can explain the neurons of language models Since the first circuits work I’ve been nervous whether mechanistic interpretability will be able to scale as fast as AI is. “Have the AI do it” might work openai.com/research/langu…
NVIDIA reporting LLM use? "NVIDIA has detected that you might be attempting to load LLM or generative language model weights. For research and safety, a one-time aggregation of non-personally identifying information has been sent to NVIDIA and stored in an anonymized database."

here is GPT-4, our most capable and aligned model yet. it is available today in our API (with a waitlist) and in ChatGPT+. openai.com/research/gpt-4 it is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.
The timeless struggle between the people building new things and the people trying to stop them…
a new version of moore’s law that could start soon: the amount of intelligence in the universe doubles every 18 months
I've been trying out "Chat with Humans" and so far many responses are laughably wrong, and the follow-up conclusions illogical. Worse, both true and false replies are given with the same degree of certainty. I'm sorry, but Chat with Humans is not ready for prime time.
Pattern matching AI as "the next platform shift" like the PC/internet/smartphone leads to significant underestimates of its potential.