
Geosh

@Geoshh

Embodied A.I. | Socioaffective Alignment | Systems Biology & Interpersonal Neurobiology | @UChicago | @EuroGradSchool | healing, science, technology, connection

Pinned Tweet

Gonna try to pin a few favorite posts that linger in mind over time:

Amusing how 99% of people using their own brains forget how they work: The brain is an advanced probability machine. It keeps predicting the next most likely thought, word, or action based on incoming signals and past learning. Under the hood, billions of neurons are doing…
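Read as a computational analogy, the claim above is essentially next-token prediction. Here is a toy sketch of that framing with a count-based bigram predictor; the mini-corpus and names are invented for illustration, and this is an analogy, not neuroscience:

```python
# Toy illustration of the "probability machine" framing: a bigram model
# that predicts the next most likely word from counts of past experience.
# (Corpus and variable names are invented for this sketch.)
from collections import Counter, defaultdict

corpus = "the brain predicts the next word the brain predicts the next action".split()

# "Past learning": count which word tends to follow which.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = transitions[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("the"))  # ('brain', 0.5) -- ties broken by insertion order
```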



Geosh reposted

RIP prompt engineering ☠️ This new Stanford paper just made it irrelevant with a single technique. It's called Verbalized Sampling, and it proves aligned AI models aren't broken; we've just been prompting them wrong this whole time. Here's the problem: Post-training alignment…

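For reference, the core move in Verbalized Sampling (as the thread describes it) is to ask the model to verbalize a distribution of candidate answers with probabilities instead of emitting a single one. A minimal sketch of that style of prompt; the exact template in the paper may differ, and the JSON schema here is an assumption:

```python
# Hedged sketch of the Verbalized Sampling idea: ask the model for k
# candidate responses plus verbalized probabilities, then parse them.
import json

def verbalized_sampling_prompt(task: str, k: int = 5) -> str:
    return (
        f"Generate {k} different responses to the task below. "
        "Return JSON: a list of objects with fields 'response' and "
        "'probability', where the probabilities reflect how likely each "
        "response would be under your full output distribution.\n\n"
        f"Task: {task}"
    )

def parse_candidates(model_output: str):
    """Parse the model's verbalized distribution (assumes valid JSON)."""
    return [(c["response"], c["probability"]) for c in json.loads(model_output)]

print(verbalized_sampling_prompt("Tell me a joke about coffee", k=3))
```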

Geosh reposted

i've been curious about what information LLMs "forget" during RL. recently i spent time combing through research for examples of things models get worse at after RL. turns out that learning to reason makes models better at pretty much everything. scary realization tbh


Geosh reposted

Super cool new LLM system by @a1zhang and @lateinteraction! Context rot is a major problem as tasks grow more complex and context windows expand; this issue is particularly acute for lawyers, who must process lengthy, intricate documents and are especially vulnerable to the loss…


Geosh reposted

Dead internet is no longer just a theory

From ©️ompounding Memes

Geosh reposted

This is the most impressive plot I've seen all year:
- Scaling RL not only works, but can be predicted from experiments run with 1/2 the target compute
- PipelineRL crushes conventional RL pipelines in terms of compute efficiency
- Many small details matter for stability &…

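The first bullet is a curve-fitting claim: performance at the target compute is extrapolated from smaller runs. A minimal sketch of that workflow, assuming a sigmoidal fit in log-compute; the functional form and every data point below are invented for illustration, and the paper's actual fitting procedure may differ:

```python
# Fit a saturating performance-vs-compute curve on runs up to half the
# target compute, then extrapolate to the target.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(log_c, a, b, c0):
    # Performance saturates at `a` as log-compute grows.
    return a / (1.0 + np.exp(-b * (log_c - c0)))

# Synthetic "small-scale" runs: compute in GPU-hours, reward on [0, 1].
compute = np.array([1e2, 3e2, 1e3, 3e3, 1e4])      # up to half the target
reward  = np.array([0.21, 0.33, 0.48, 0.61, 0.70])  # made-up points

params, _ = curve_fit(sigmoid, np.log10(compute), reward, p0=[0.8, 1.0, 3.0])
target = 2e4                                         # 2x the largest run
print("predicted reward at target compute:", sigmoid(np.log10(target), *params))
```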

Geosh reposted

Depression hates a moving target. Keep your body & mind active. Depression thrives in stagnation & rumination.


Geosh reposted

Important paper on the energetic forces that likely drive the brain’s recalibrations associated with psychiatric disorders Brilliant integration of neuroscience, mitochondrial psychobiology, psychiatry, and allostasis by @sequencemyneuro


The allostatic triage model of psychopathology (ATP Model): How reallocation of #brain energetic resources under stress elicits #psychiatric symptoms. sciencedirect.com/science/articl…



Geosh reposted

🚨 GPT-5 Pro just rediscovered a novel astrophysics result in under 30 minutes. Alex Lupsasca, an actual astrophysicist, gave it a real research problem he’d been working on. It independently derived the same solution. We’ve officially crossed the line from AI summarizing…


Geosh reposted

DGX Spark vs M4 Max in qwen3-coder bf16/fp16 inference


Geosh reposted

The first fantastic paper on scaling RL with LLMs just dropped. I strongly recommend taking a look and will be sharing more thoughts on the blog soon.

The Art of Scaling Reinforcement Learning Compute for LLMs
Khatri & Madaan et al.


Geosh reposted

After more than half a year of work, it's finally done! In my new paper I demonstrate a new technique for mesoscopic understanding of language model behavior over time. We show that LM hidden states can be approximated by the same mathematics that governs the statistical properties…


Geosh reposted

Unbelievable results on long context (gpt-5-mini is better than gpt-5) if you let the LLM explore the context through a Python interpreter in a loop. You can plug this idea in wherever you want, on whatever task, not just long context. This is insane.

What if scaling the context windows of frontier LLMs is much easier than it sounds? We’re excited to share our work on Recursive Language Models (RLMs). A new inference strategy where LLMs can decompose and recursively interact with input prompts of seemingly unbounded length,…

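A minimal sketch of the loop being described: the long context lives as a variable inside a Python interpreter, and the model repeatedly writes small snippets to inspect it rather than attending over it all at once. `llm()` is a hypothetical stand-in for whatever chat-completion call you use; this is not the authors' implementation:

```python
# Recursive-inspection loop: the full text never enters the prompt; the
# model only sees the question, its own code, and truncated REPL output.
import io, contextlib

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

def rlm_answer(context: str, question: str, max_steps: int = 8) -> str:
    env = {"context": context}          # context stays in the REPL, not the prompt
    transcript = f"Question: {question}\n`context` is a {len(context)}-char str."
    for _ in range(max_steps):
        reply = llm(
            "You can run Python against a variable `context` to answer the "
            "question. Reply with code, or FINAL: <answer>.\n" + transcript
        )
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):   # capture print() output
            exec(reply, env)
        transcript += f"\n>>> {reply}\n{buf.getvalue()[:2000]}"  # truncate
    return "no answer within budget"
```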


Geosh reposted

shoutout to this dude for making the only Muon explainer that has properly stuck in my head to date


Geosh reposted

How the brain talks to the immune system

This diagram shows the inflammatory reflex - a neural circuit where the brain regulates inflammation through the vagus nerve. It’s how psychological stress, inflammation, and immune activity stay linked.

1️⃣ The signal starts in the brain…


Geosh reposted

What if scaling the context windows of frontier LLMs is much easier than it sounds? We’re excited to share our work on Recursive Language Models (RLMs). A new inference strategy where LLMs can decompose and recursively interact with input prompts of seemingly unbounded length,…


Geosh reposted

Clustering NVIDIA DGX Spark + M3 Ultra Mac Studio for 4x faster LLM inference.

DGX Spark: 128GB @ 273GB/s, 100 TFLOPS (fp16), $3,999
M3 Ultra: 256GB @ 819GB/s, 26 TFLOPS (fp16), $5,599

The DGX Spark has 3x less memory bandwidth than the M3 Ultra but 4x more FLOPS.

By running…

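The arithmetic behind the pairing follows from standard roofline rules of thumb: decode streams every weight once per generated token, so it is bandwidth-bound, while prefill is FLOPS-bound. A back-of-envelope sketch using the specs quoted above; the 70B fp16 model and batch-size-1 assumptions are mine, for illustration only:

```python
# Rough rooflines for a dense 70B fp16 model, weights fully resident,
# batch size 1. All hardware numbers come from the post above.
params = 70e9
bytes_per_weight = 2                      # fp16

def decode_tok_s(bandwidth_gbs):
    # Each generated token streams all weights once: bandwidth-bound.
    return bandwidth_gbs * 1e9 / (params * bytes_per_weight)

def prefill_tok_s(tflops):
    # ~2 FLOPs per parameter per token (matmul multiply + add): compute-bound.
    return tflops * 1e12 / (2 * params)

print("DGX Spark: decode ~%.1f tok/s, prefill ~%.0f tok/s"
      % (decode_tok_s(273), prefill_tok_s(100)))
print("M3 Ultra:  decode ~%.1f tok/s, prefill ~%.0f tok/s"
      % (decode_tok_s(819), prefill_tok_s(26)))
# Upshot: run compute-bound prefill on the Spark and bandwidth-bound decode
# on the M3 Ultra -- each box handles the phase it is fastest at.
```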

Geosh reposted

🚨MASSIVE WHITE PILL🚨

AI JUST GENERATED NEW SCIENTIFIC KNOWLEDGE

Google and Yale used a 27B Gemma model and it discovered a new cancer mechanism.

it predicted a drug (silmitasertib) would only make tumors visible to the immune system if low interferon was present and lab…


An exciting milestone for AI in science: Our C2S-Scale 27B foundation model, built with @Yale and based on Gemma, generated a novel hypothesis about cancer cellular behavior, which scientists experimentally validated in living cells. With more preclinical and clinical tests,…



Geosh reposted

Holy shit... Tencent researchers just killed fine-tuning AND reinforcement learning in one shot 😳

They call it Training-Free GRPO (Group Relative Policy Optimization).

Instead of updating weights, the model literally learns from 'its own experiences' like an evolving memory…

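As described, the trick keeps GRPO's group-relative comparison but swaps the gradient update for an update to a textual experience memory carried in the prompt. A hedged sketch of one such step; `llm` and `score` are hypothetical stand-ins, and the Tencent paper's actual procedure may differ:

```python
# One "training-free GRPO" step: sample a group of rollouts, compare them
# relative to the group, and distill the comparison into reusable text
# instead of a weight update.
import statistics

def llm(prompt: str) -> str: ...        # stand-in: any chat-completion call
def score(task: str, answer: str) -> float: ...  # stand-in: any reward fn

def training_free_grpo_step(task: str, memory: list[str], group_size: int = 4):
    prompt = "Experience:\n" + "\n".join(memory) + f"\n\nTask: {task}"
    rollouts = [llm(prompt) for _ in range(group_size)]
    rewards = [score(task, r) for r in rollouts]
    # Group-relative signal, as in GRPO -- consumed as text, not gradients.
    best = rollouts[rewards.index(max(rewards))]
    worst = rollouts[rewards.index(min(rewards))]
    lesson = llm(
        "Compare the better and worse attempts below and state one reusable "
        f"lesson for future attempts.\nBetter:\n{best}\nWorse:\n{worst}"
    )
    memory.append(lesson)                # the "evolving memory"
    return memory, statistics.mean(rewards)
```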

Geosh reposted

I'm super excited about M5. It's going to help a lot with compute-bound workloads in MLX.

For example:
- Much faster prefill. In other words, time-to-first-token will go down.
- Faster image / video generation
- Faster fine-tuning (LoRA or otherwise)
- Higher throughput for…

