Normal Computing 🧠🌡️

@NormalComputing

We build AI systems that natively reason, so they can partner with us on our most important problems. Join us https://bit.ly/normal-jobs.

Pinned Tweet

We’re excited to announce our preprint "Solving the compute crisis with physics-based ASICs"! Our team at @NormalComputing, together with collaborators at @ARIA_research, @ucsantabarbara, @Penn, @sfiscience, @Cornell, @ARPAE, and @Yale, has posted a new preprint: "Solving the…

Normal Computing 🧠🌡️ reposted

at Normal we recently built an agent that runs for 21 days autonomously

we probably need to optimize it...

thomasahle's tweet image.

Claude Sonnet 4.5 runs autonomously for 30+ hours of coding?! The record for GPT-5-Codex was just 7 hours. What’s Anthropic’s secret sauce?

Yuchenj_UW's tweet image.

Normal Computing 🧠🌡️ reposted

Everyone knows RLHF and RLVR, but do you know the 17 other RLXX methods published in papers and blog posts? Here's a condensed list:


Normal Computing 🧠🌡️ reposted

Diffusion for everything! We share a recipe to start from a pretrained autoregressive VLM and, with very little training compute and some nice annealing tricks, turn it into a SOTA diffusion VLM. Research in diffusion for language is progressing very quickly and in my mind,…

Today we're sharing our first research work exploring diffusion for language models: Autoregressive-to-Diffusion Vision Language Models

We develop a state-of-the-art diffusion vision language model, Autoregressive-to-Diffusion (A2D), by adapting an existing autoregressive vision…

runwayml's tweet image.

Normal Computing 🧠🌡️ reposted

More than $5 million in new funding from several private companies—including @Fidelity and @NormalComputing—will expand Maryland’s Quantum-Thermodynamics Hub, co-led by Nicole Yunger Halpern (@nicoleyh11), and support it for three more years. Read more: go.umd.edu/227t

JointQuICS's tweet image.

Live from #AIInfraSummit: Maxim Khomiakov (@maximkhv) is on the Demo Stage presenting “Normal EDA: AI-Native Verification Without the Rework.” Mission-critical design verification is fragmented and manual. Humans (and LLMs) are struggling to tame the mathematical complexity of…

It’s been an incredible couple of days at #AIInfraSummit so far. If you haven’t yet, swing by before the end of the summit and chat with our team on how AI is transforming chip design and verification. See the demo today at 1:15–1:30pm PDT on the Demo Stage.

The Normal Computing team is ready for #AIInfraSummit!

We're setting up at Booth 725. Come meet with the team this week to learn how we're rethinking chip verification with AI-native tools.

On Sept 11, 1:15–1:30pm PDT, don’t miss Maxim Khomiakov (@maximkhv) on the Demo Stage at #AIInfraSummit presenting “Normal EDA: AI-Native Verification Without the Rework.” If you care about chip verification speed, accuracy, and AI’s role in silicon, this is the session for you!

Heading to Santa Clara next week for #AIInfraSummit? Stop by Booth 725 to meet with our team and see Normal EDA in action. Teams use Normal EDA to generate full, production-grade collateral from specs, accelerating signoff, reducing engineering effort, and surfacing edge cases…

Normal Computing 🧠🌡️ reposted

Read more at arxiv.org/abs/2508.20883

Including scaling LRW up to image generation with Stable Diffusion 3.5 🐱

Really fun work with @MaxAifer @blip_tm @ColesThermoAI


Normal Computing 🧠🌡️ reposted

Thrilled to share that I’ll be at #AIInfraSummit 2025 in two weeks, representing the Normal Computing team! On Sept 11 (1:15–1:30pm PDT), I’ll be on the Demo Stage presenting: “Normal EDA: AI-Native Verification Without the Rework” Verification teams use Normal EDA to generate…

Normal Computing is heading to #AIInfraSummit as an official Event Partner, and we’re bringing our latest breakthroughs in AI-native chip verification with us. Visit us at our booth for a live demo of our unified, physics-aware EDA platform to enable 2x faster time-to-market.…

NormalComputing's tweet image. Normal Computing is heading to #AIInfraSummit as an official Event Partner, and we’re bringing our latest breakthroughs in AI-native chip verification with us.

Visit us at our booth for a live demo of our unified, physics-aware EDA platform to enable 2x faster time-to-market.…


Normal Computing 🧠🌡️ reposted

New paper on arXiv! And I think it's a good'un 😄 Meet the new Lattice Random Walk (LRW) discretisation for SDEs. It’s radically different from traditional methods like Euler-Maruyama (EM) in that each iteration can only move in discrete steps {-δₓ, 0, δₓ}.
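
For readers curious what "each iteration can only move in discrete steps" could look like in code, here is a minimal Python sketch of a generic moment-matching lattice update shown next to Euler-Maruyama. The probability formulas, the validity conditions, and the toy mean-reverting example are illustrative assumptions, not the construction from the paper; see arxiv.org/abs/2508.20883 for the actual LRW scheme.

# Hedged sketch (not the paper's scheme): a moment-matching lattice random walk
# for dX = mu(X) dt + sigma(X) dW, where every update is exactly -dx, 0, or +dx,
# shown alongside the standard Euler-Maruyama update for comparison.
import numpy as np

def euler_maruyama(mu, sigma, x0, dt, n_steps, rng):
    x = x0
    for _ in range(n_steps):
        x += mu(x) * dt + sigma(x) * np.sqrt(dt) * rng.standard_normal()
    return x

def lattice_walk(mu, sigma, x0, dt, dx, n_steps, rng):
    x = x0
    for _ in range(n_steps):
        a = sigma(x) ** 2 * dt / dx ** 2   # matches the diffusion term to first order
        b = mu(x) * dt / dx                # matches the drift term to first order
        p_up, p_down = 0.5 * (a + b), 0.5 * (a - b)
        # Valid probabilities need a <= 1 and |b| <= a; pick dt and dx accordingly.
        x += rng.choice([dx, 0.0, -dx], p=[p_up, 1.0 - p_up - p_down, p_down])
    return x

# Toy mean-reverting example (illustrative parameters only).
rng = np.random.default_rng(0)
mu, sigma = (lambda x: -x), (lambda x: 1.0)
em  = [euler_maruyama(mu, sigma, 1.0, 1e-3, 1000, rng) for _ in range(500)]
lrw = [lattice_walk(mu, sigma, 1.0, 1e-3, 0.05, 1000, rng) for _ in range(500)]
print(np.mean(em), np.mean(lrw))   # both means should land near exp(-1) ≈ 0.37

Matching the first two moments per step is what lets a walk restricted to a lattice reproduce the SDE's drift and diffusion in the small-step limit; the trade-off is the step-size conditions noted in the comments.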

