
Mira Murati
@miramurati
Now building @thinkymachines. Previously CTO @OpenAI
Tinker is cool. If you're a researcher/developer, Tinker dramatically simplifies LLM post-training. You retain ~90% of the algorithmic creative control (typically the data, the loss function, the training algorithm) while Tinker handles the hard parts that you usually want to touch much less…
Introducing Tinker: a flexible API for fine-tuning language models. Write training loops in Python on your laptop; we'll run them on distributed GPUs. Private beta starts today. We can't wait to see what researchers and developers build with cutting-edge open models!…
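A minimal sketch of what a training loop against an API like this could look like. All names below (`tinker.ServiceClient`, `create_lora_training_client`, `forward_backward`, `optim_step`, `load_batches`) are illustrative assumptions, not confirmed Tinker API:

```python
# Hypothetical sketch of a Tinker-style fine-tuning loop; every name here
# (tinker.ServiceClient, create_lora_training_client, forward_backward,
# optim_step, load_batches) is an illustrative assumption, not confirmed API.
import tinker

service = tinker.ServiceClient()
client = service.create_lora_training_client(
    base_model="meta-llama/Llama-3.2-1B",      # any supported open-weights model
)

for batch in load_batches("my_dataset.jsonl"):  # load_batches: your own helper
    # Run forward + backward remotely; the service owns the distributed GPUs,
    # you own the loop, the data, and the loss.
    client.forward_backward(batch, loss_fn="cross_entropy")
    # Apply one optimizer update with your chosen hyperparameters.
    client.optim_step(learning_rate=1e-4)
```

The division of labor is the point: the loop, data, and loss live in your Python process; the distributed forward/backward and optimizer state live on managed GPUs.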

Very excited to see the Tinker release! @pcmoritz and I had a chance to experiment with the API. It does a nice job of providing flexibility while abstracting away GPU handling. Here's a simple example showing how to generate synthetic data and fine-tune a text-to-SQL model…
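The linked example itself is truncated above; as a stand-in, here is a hedged sketch of the general recipe: prompt a model for question/SQL pairs over a schema, then feed the pairs to a fine-tuning loop like the one sketched earlier. `sample_completion` is an assumed helper wrapping any sampling API, not a specific Tinker call:

```python
# Hypothetical sketch: generate synthetic text-to-SQL training pairs.
# sample_completion() is an assumed helper, not a specific Tinker call.
schema = "CREATE TABLE users (id INT, name TEXT, signup_date DATE);"

def make_example(schema: str) -> dict:
    question = sample_completion(
        f"Schema:\n{schema}\nWrite one natural-language question "
        "answerable with a single SQL query over this table."
    )
    sql = sample_completion(
        f"Schema:\n{schema}\nQuestion: {question}\nWrite the SQL query."
    )
    return {"prompt": f"{schema}\n-- {question}\n", "completion": sql}

train_set = [make_example(schema) for _ in range(1000)]
# train_set then goes through the fine-tuning loop sketched above.
```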

Today we launched Tinker. Tinker brings frontier tools to researchers, offering clean abstractions for writing experiments and training pipelines while handling distributed training complexity. It enables novel research, custom models, and solid baselines. Excited to see what…

Today on Connectionism: establishing the conditions under which LoRA matches full fine-tuning performance, with new experimental results and a grounding in information theory
LoRA makes fine-tuning more accessible, but it's unclear how it compares to full fine-tuning. We find that the performance often matches closely, more often than you might expect. In our latest Connectionism post, we share our experimental results and recommendations for LoRA…
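For readers who want the mechanics behind the comparison: LoRA freezes the base weight W and learns a low-rank update, so the effective weight is W + (α/r)·BA. A minimal PyTorch rendering of that standard construction (from the original LoRA paper, not code from the post):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                          # base weights stay frozen
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, r))         # up-projection, zero init
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)
```

Zero-initializing B means training starts exactly at the base model, and only the r·(d_in + d_out) adapter parameters receive gradients.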

Sharing our second Connectionism research post on Modular Manifolds, a mathematical approach to refining training at each layer of the neural network
Efficient training of neural networks is difficult. Our second Connectionism post introduces Modular Manifolds, a theoretical step toward more stable and performant training by co-designing neural net optimizers with manifold constraints on weight matrices.…
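As a toy illustration of "manifold constraints on weight matrices" (my gloss, not the post's actual algorithm): keep a weight (semi-)orthogonal by retracting onto the Stiefel manifold after each update. The nearest orthogonal matrix in Frobenius norm is the polar factor U·Vᵀ of the SVD:

```python
import torch

@torch.no_grad()
def retract_to_stiefel(W: torch.Tensor) -> torch.Tensor:
    """Nearest (semi-)orthogonal matrix to W in Frobenius norm:
    the polar factor U @ Vh from the SVD W = U S Vh."""
    U, _, Vh = torch.linalg.svd(W, full_matrices=False)
    return U @ Vh

# Toy constrained step: plain gradient descent, then retraction back onto
# the manifold. Illustrative only; the post co-designs the optimizer with
# the constraint rather than bolting a projection onto vanilla SGD.
W = torch.randn(64, 64, requires_grad=True)
x, y = torch.randn(128, 64), torch.randn(128, 64)
loss = ((x @ W - y) ** 2).mean()
loss.backward()
with torch.no_grad():
    W -= 0.1 * W.grad
    W.copy_(retract_to_stiefel(W))
```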

At Thinking Machines, our work includes collaborating with the broader research community. Today we are excited to share that we are building a vLLM team at @thinkymachines to advance open-source vLLM and serve frontier models. If you are interested, please DM me or @barret_zoph!…
A big part of our mission at Thinking Machines is to improve people’s scientific understanding of AI and work with the broader research community. Introducing Connectionism today to share some of our scientific insights.
Today Thinking Machines Lab is launching our research blog, Connectionism. Our first blog post is “Defeating Nondeterminism in LLM Inference.” We believe that science is better when shared. Connectionism will cover topics as varied as our research is: from kernel numerics to…
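One concrete ingredient behind that first post's topic: floating-point addition is not associative, so a parallel reduction that sums in a different order can return a different result for identical inputs. A minimal demonstration:

```python
import random

random.seed(0)
vals = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

# Same numbers, two summation orders. Floating-point addition is not
# associative, so the results generally differ by a small amount; GPU
# kernels whose reduction order varies run-to-run inherit exactly this.
a = sum(vals)
b = sum(sorted(vals))            # any reordering will do

print(a == b)                    # almost certainly False
print(abs(a - b))                # tiny but nonzero
```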

Thinking Machines Lab exists to empower humanity through advancing collaborative general intelligence. We're building multimodal AI that works with how you naturally interact with the world - through conversation, through sight, through the messy way we collaborate. We're…
If you’d like to be part of a team making huge, ambitious bets on multimodality (among other things) & work with Rowan, we’re hiring!
life update: I've joined @thinkymachines lab! We're building the future of human-AI interaction through open science, research+product co-iteration, and with multimodal at the core. If you're interested in joining our fantastic team - reach out! DMs open 😀
Follow us @thinkymachines for more updates over the coming weeks
Today, we are excited to announce Thinking Machines Lab (thinkingmachines.ai), an artificial intelligence research and product company. We are scientists, engineers, and builders behind some of the most widely used AI products and libraries, including ChatGPT,…
I shared the following note with the OpenAI team today.

Advanced Voice is rolling out to all Plus and Team users in the ChatGPT app over the course of the week. While you’ve been patiently waiting, we’ve added Custom Instructions, Memory, five new voices, and improved accents. It can also say “Sorry I’m late” in over 50 languages.
The Safety and Security Committee—a committee established to review critical safety and security issues—has made recommendations across five key areas, which we are adopting. openai.com/index/update-o…
There has been a lot of enthusiasm to try OpenAI o1-preview and o1-mini, and some users hit their rate limits quickly. We reset weekly rate limits for all Plus and Team users so that you can keep experimenting with o1.
We’re hosting an AMA for developers from 10–11 AM PT today. Reply to this thread with any questions and the OpenAI o1 team will answer as many as they can.
curious about RL given the o1 and o1-mini release? interested in the algorithms behind reasoning? reminder that @jachiam0's Spinning Up is still one of the best resources (though slightly out of date) for learning reinforcement learning. spinningup dot openai dot com!
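For a taste of what Spinning Up covers, here is about the smallest possible policy-gradient example: REINFORCE on a two-armed bandit with a softmax policy (a generic textbook sketch, nothing o1-specific):

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.8])   # arm 1 pays more on average
logits = np.zeros(2)                # policy parameters
lr = 0.1

for _ in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()   # softmax policy
    a = rng.choice(2, p=probs)                      # sample an action
    r = rng.normal(true_means[a], 0.1)              # observe a reward
    # Score-function (REINFORCE) gradient of log pi(a): one_hot(a) - probs
    logits += lr * r * (np.eye(2)[a] - probs)       # ascend E[reward]

print(probs)   # should heavily favor the better arm by now
```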

Today we rolled out OpenAI o1-preview and o1-mini to all ChatGPT Plus/Team users & Tier 5 developers in the API. o1 marks the start of a new era in AI, where models are trained to "think" before answering through a private chain of thought. The more time they take to think, the…
the best part of 🍓 is the team