
Gege Lala

@gxCodar

Gege Lala reposted

Spent the last couple of days trying to do a lot with GPT-5 in the ChatGPT web app. Sorry to say I'm giving up on it :( Thinking mode takes way too long for everything, and makes bad choices. Auto mode mainly uses fast mode, which never gets anything right, so it's pointless.


Gege Lala reposted

GPT-5 just refactored my entire codebase in one call. 25 tool invocations. 3,000+ new lines. 12 brand new files. It modularized everything. Broke up monoliths. Cleaned up spaghetti. None of it worked. But boy was it beautiful.


A language model for protein structure? What about physical accuracy? Is it good enough for practical use?

🚨 AI DID IN SECONDS WHAT NATURE NEEDED 500 MILLION YEARS FOR Nature spent half a billion years crafting proteins—AI just did it in months. Meet ESM3, the super-powered AI that designs brand-new proteins from scratch, no evolution required. This could change medicine, biotech,…



Gege Lala reposted

DeepSeek R1 671B on local Machine over 2 tok/sec *without* GPU "The secret trick is to not load anything but kv cache into RAM and let llama.cpp use its default behavior to mmap() the model files off of a fast NVMe SSD. The rest of your system RAM acts as disk cache for the…

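The trick described above hinges on `mmap()`: the model file is mapped into the address space rather than copied into RAM, so the OS pages weights in on demand and keeps hot pages in the disk cache. A minimal Python sketch of that principle (a toy stand-in, not llama.cpp's actual loader):

```python
import mmap
import os
import tempfile

# Write a dummy "model file" to disk (a stand-in for a GGUF shard).
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    f.write(os.urandom(1 << 20))  # 1 MiB of fake weights
    path = f.name

# mmap the file instead of read()-ing it: no upfront copy into process
# memory; bytes are faulted in lazily as they are touched, and the OS
# page cache serves repeated accesses.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as weights:
        first = weights[:16]   # touching a slice pages it in on demand
        last = weights[-16:]

# Sanity check against an eager full read.
with open(path, "rb") as f:
    data = f.read()

assert first == data[:16] and last == data[-16:]
os.unlink(path)
```

The same mechanism is why the rest of system RAM can act as disk cache: untouched pages cost nothing, and recently used pages stay resident.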

Gege Lala reposted

350MB is all you need to get near SoTA TTS! 🤯


NEW: Kokoro 82M - APACHE 2.0 licensed, Text to Speech model, trained on < 100 hours of audio 🔥



Gege Lala reposted

Can desalinated water deliver a future of infinite water?
Yes!
• It's cheap
• It will get even cheaper
• Limited pollution
• Some countries already live off of it

We can transform deserts into paradise. And some countries are already on that path: 🧵

Gege Lala reposted

The protein concentration in our cells is ~ 200 mg / ml. That's a few billion protein molecules per human cell, similar to the number of people in the human societies inhabiting our planet. We know so little about the interactions in the protein societies making up our bodies.
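The "few billion per cell" figure checks out with back-of-envelope arithmetic. The sketch below assumes a typical mammalian cell volume of ~2000 µm³ (2 pL) and an average protein mass of ~50 kDa; both are round-number assumptions, not figures from the post:

```python
# 200 mg/mL at ~50 kDa average mass, in a ~2 pL cell:
AVOGADRO = 6.022e23              # molecules per mole
conc_g_per_l = 200.0             # 200 mg/mL = 200 g/L
protein_mass_g_per_mol = 5.0e4   # ~50 kDa average protein (assumed)
cell_volume_l = 2.0e-12          # ~2000 µm³ = 2 pL (assumed)

molar = conc_g_per_l / protein_mass_g_per_mol        # ≈ 4 mM
proteins_per_cell = molar * AVOGADRO * cell_volume_l
print(f"{proteins_per_cell:.1e}")                    # on the order of 10^9
```

About 4 mM of protein works out to roughly five billion molecules in a single cell, matching the comparison to the human population.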


Gege Lala reposted

Mixtures of engineered bacteria were able to:

- Identify if a number is prime
- Check if a letter in a string is a vowel
- Determine the max number of pieces of a pie obtained from n straight cuts.

Answers are printed by expressing fluorescent proteins in different patterns.
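For comparison, the three computations are one-liners in ordinary software. The pie answer is the lazy caterer sequence: n straight cuts give at most n(n+1)/2 + 1 pieces. A plain-Python sketch (reference implementations, not the bacterial encoding):

```python
def is_prime(n: int) -> bool:
    """Trial division up to sqrt(n)."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def is_vowel(s: str, i: int) -> bool:
    """Is the i-th letter of s a vowel?"""
    return s[i].lower() in "aeiou"

def max_pie_pieces(n: int) -> int:
    """Lazy caterer: maximum pieces from n straight cuts."""
    return n * (n + 1) // 2 + 1

print(is_prime(7), is_vowel("hello", 1), max_pie_pieces(3))  # True True 7
```

The remarkable part of the result is not the computation itself but the substrate: distributed consortia of cells implementing these functions chemically.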

Gege Lala reposted

It's finally here. Q* rings true. Tiny LLMs are as good at math as a frontier model.

By using the same techniques Google used to solve Go (MCTS and backprop), Llama8B gets 96.7% on the math benchmark GSM8K!

That's better than GPT-4, Claude and Gemini, with 200x fewer parameters!
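The MCTS machinery mentioned above boils down, at the selection step, to a scoring rule that trades off a node's mean reward against how little it has been explored. A minimal UCB1 sketch (a generic illustration of the selection rule, not code from the paper behind the post):

```python
import math

def ucb1_pick(wins, visits, c=1.4):
    """Pick the child index maximizing UCB1: mean reward plus an
    exploration bonus that shrinks as a child is visited more."""
    total = sum(visits)
    def score(i):
        if visits[i] == 0:
            return float("inf")   # always try unvisited children first
        return wins[i] / visits[i] + c * math.sqrt(math.log(total) / visits[i])
    return max(range(len(visits)), key=score)

# With equal visit counts, the exploration bonus is equal and the
# child with the best mean reward wins.
wins = [8.0, 5.0, 1.0]
visits = [10, 10, 10]
print(ucb1_pick(wins, visits))  # → 0
```

In tree search over reasoning steps, the same rule decides which partial solution to expand next, which is how a small model can spend extra inference compute where it helps most.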

Gege Lala reposted

In a sufficiently high dimensional landscape, there's no meaningful difference between interpolation and extrapolation
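One way to see this: in high dimension, a fresh sample almost never lands inside even the axis-aligned bounding box of the training set, so "interpolation" in the geometric sense essentially never happens. A small NumPy demo (dimensions and sample counts are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def frac_inside_box(dim, n_train=100, n_test=1000):
    """Fraction of fresh uniform samples falling inside the
    per-coordinate min/max box of the training set."""
    train = rng.random((n_train, dim))
    lo, hi = train.min(axis=0), train.max(axis=0)
    test = rng.random((n_test, dim))
    inside = np.all((test >= lo) & (test <= hi), axis=1)
    return inside.mean()

print(frac_inside_box(2))    # most test points "interpolate"
print(frac_inside_box(500))  # essentially none do
```

Each coordinate is inside its training range with probability just under 1, but raising that to the 500th power drives the joint probability toward zero, which is the intuition behind the quoted claim.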


Gege Lala reposted

The Unreasonable Ineffectiveness of the Deeper Layers We empirically study a simple layer-pruning strategy for popular families of open-weight pretrained LLMs, finding minimal degradation of performance on different question-answering benchmarks until after a large fraction

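An intuition for why deep layers can be pruned: in a residual network, each block only adds an update to the residual stream, so deleting a block perturbs the final representation rather than destroying it. A toy NumPy sketch of that effect (the architecture, sizes, and 0.05 update scale are all invented for the demo, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_layers = 64, 24
# Small-norm residual updates, as in a trained pre-norm transformer.
weights = [0.05 * rng.standard_normal((dim, dim)) / np.sqrt(dim)
           for _ in range(n_layers)]

def forward(x, layers):
    for w in layers:
        x = x + np.tanh(x @ w)   # residual block: identity plus small update
    return x

x = rng.standard_normal(dim)
full = forward(x, weights)
pruned = forward(x, weights[:16] + weights[20:])  # drop 4 deeper blocks

rel_change = np.linalg.norm(full - pruned) / np.linalg.norm(full)
print(f"relative change after pruning 4 layers: {rel_change:.3f}")
```

Because each block's contribution is small relative to the stream it writes into, removing a handful of them shifts the output only modestly, mirroring the paper's finding that QA accuracy degrades little until a large fraction of layers is gone.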

Gege Lala reposted

I'm currently using
- ChatGPT
- Gemini
- Claude
- Copilot
- Cursor
- Cody
- Supermaven
- Codeium
- TabNine
- DevGPT

Why is my code worse than ever?


Gege Lala reposted

🧠 Run 70B LLM Inference on a Single 4GB GPU - with airllm and layered inference 🔥

Layer-wise inference is essentially the "divide and conquer" approach

📌 And this is without using quantization, distillation, pruning or other model compression techniques

📌 The reason large…
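The divide-and-conquer idea is that transformer layers run strictly in sequence, so only one layer's weights need to be resident at a time: load a layer from disk, apply it, free it, load the next. A toy NumPy simulation of that loop (a sketch of the principle, not airllm's actual code; the file layout and layer math are invented):

```python
import numpy as np
import os
import tempfile

rng = np.random.default_rng(0)
dim, n_layers = 32, 8
tmpdir = tempfile.mkdtemp()

# "Shard" the model: one file per layer, kept on disk like a checkpoint.
for i in range(n_layers):
    np.save(os.path.join(tmpdir, f"layer_{i}.npy"),
            rng.standard_normal((dim, dim)) / np.sqrt(dim))

def layerwise_forward(x):
    """Peak weight memory is a single layer, never the whole model."""
    for i in range(n_layers):
        w = np.load(os.path.join(tmpdir, f"layer_{i}.npy"))
        x = np.tanh(x @ w)
        del w                    # layer weights freed before the next load
    return x

x = rng.standard_normal(dim)
out = layerwise_forward(x)
print(out.shape)  # (32,)
```

The trade-off is latency: every token's forward pass re-reads all layers from storage, which is why fast NVMe matters for making this usable at all.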

Gege Lala reposted

A big misconception about blindness is that a blind person only sees pitch black. In reality, blindness is a spectrum. This is a series of examples of how differently visually impaired people see. [📹 Blind on the Move]

From Pubity

Gege Lala reposted

Think of an LLM that can find entities in a given image, describe the image and answers questions about it, without hallucinating ✨ Kosmos-2 released by @Microsoft is a very underrated model that can do that. Code snippet with transformers integration is in the next tweet 👇…


Gege Lala reposted

Today, Quanta published our annual list of the biggest biology discoveries that we covered in 2023. Here’s a look at the list: 🧵


Gege Lala reposted

A new ocean is forming in Africa along a 35-mile crack that opened up in Ethiopia in 2005. The crack, which has been expanding ever since, is a result of three tectonic plates pulling away from each other. It’s thought that Africa’s new ocean will take at least 5 million to 10…


Gege Lala reposted

Zooming around a bacterial cell. It's so cool. I just got the cell imported (including lipids) and rendering correctly with the new rendering engine (it's called Angstrom). 3x performance boost! So much detail (too much?!) #screenshotsunday #gamedev #scicomm #xcode #swiftlang


I reached level 5 with my horse Beauty! #MyHorse nmgam.es/horse_twitter

