
nai

@src_term

src/term research — Machine Learning x Computational Fluids — Gradients are all you need

Pinned

Gonna build a fast, GPU-native aeroacoustics solver. Probably in JAX, might be PyTorch-based, but JAX's jit is much better imo, let's see….
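
A minimal sketch of the kind of jit-compiled kernel this could start from, assuming a 1D linear acoustic wave equation on a periodic grid; all names, parameters, and the scheme itself are illustrative, not the actual solver:

import jax
import jax.numpy as jnp

# Illustrative: one jit-compiled leapfrog step for u_tt = c^2 u_xx
# on a periodic grid. Constants are placeholder values.
C, DX, DT = 343.0, 1e-3, 1e-6  # sound speed [m/s], grid spacing [m], time step [s]

@jax.jit
def step(u_prev, u_curr):
    # second-order central difference in space, periodic via jnp.roll
    lap = (jnp.roll(u_curr, -1) - 2.0 * u_curr + jnp.roll(u_curr, 1)) / DX**2
    # standard explicit leapfrog update in time
    u_next = 2.0 * u_curr - u_prev + (C * DT) ** 2 * lap
    return u_curr, u_next

u0 = jnp.zeros(1024).at[512].set(1.0)  # impulsive point source
u_prev, u_curr = u0, u0
for _ in range(100):
    u_prev, u_curr = step(u_prev, u_curr)

The whole update is a handful of fused array ops, which is exactly the shape of workload where jit-compiling the step function pays off on GPU.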


nai reposted

Announcing AA-Omniscience, our new benchmark for knowledge and hallucination across >40 topics, where all but three models are more likely to hallucinate than give a correct answer.

Embedded knowledge in language models is important for many real world use cases. Without…


nai reposted

Excited to announce our partnership with Hill Climb - working on producing future models and on difficult math domains! Stay tuned :)

hillclimb (@hillclimbai) is the human superintelligence community, dedicated to building golden datasets for AGI. Starting with math, their team of IMO medalists, Lean experts, and PhDs is designing RL environments for @NousResearch.



nai reposted

It's SOTA, not only open weights SOTA :)

MoonshotAI has released Kimi K2 Thinking, a new reasoning variant of Kimi K2 that achieves #1 in the Tau2 Bench Telecom agentic benchmark and is potentially the new leading open weights model.

Kimi K2 Thinking is one of the largest open weights models ever, at 1T total parameters…



JAX on Mac GPU… ugh

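For context, the usual workaround is an assumption on my part, based on Apple's experimental jax-metal PJRT plugin rather than anything shown in the screenshot; support is partial and version-sensitive:

# Assumed setup for JAX on Apple-silicon GPUs via the experimental
# jax-metal plugin; the install step and env var come from the plugin's
# docs, and many ops/dtypes are still unsupported on this backend.
#   pip install jax-metal
# Some jax releases need the compatibility flag when launching:
#   ENABLE_PJRT_COMPATIBILITY=1 python script.py
import jax
print(jax.devices())  # expect something like [METAL(id=0)] if the plugin loaded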

nai reposted

ig the goal is to be rich enough that curiosity drives you instead of necessity.

pewdiepie is now fine-tuning his own LLM model. what the fuck.



nai reposted

The bottleneck for deep skill isn't usually intelligence, but boredom tolerance. Learning has an activation energy: below a certain skill threshold, practice is tedious, but above it, it becomes a self-sustaining flow state. The entire battle is persisting until that transition.


My first thought here: hardware acceleration for real-time, large-scale turbulence modeling and diffusion-based surrogates?

Thermodynamic Computing: From Zero to One

Read the blog here: extropic.ai/writing/thermo…



Interesting benchmark; also interesting to see that Gemini 2.5 Pro is still relatively SOTA and strong in physics.

We built ProfBench to raise the bar for LLMs - literally. At @NVIDIA, we worked with domain experts to create a benchmark that goes far beyond trivia and short answers. ProfBench tests LLMs on complex, multi-step tasks that demand the kind of reasoning, synthesis, and clarity…



Agreed

You rarely solve hard problems in a flash of insight. It's more typically a slow, careful process of exploring a branching tree of possibilities. You must pause, backtrack, and weigh every alternative. You can't fully do this in your head, because your working memory is too…



nai reposted

Learning how to learn is the most important skill to acquire.

