
aolwyn

@aolwyncode

adam. I used to play high-level video games but now I dabble in academia. master's candidate, suffering from imposter syndrome.

aolwyn reposted

once a scientific paradigm has been established, improving sota is just a matter of puzzle-solving, and RL on LLMs is very good at solving puzzles

Relevant section in Kuhn



aolwyn reposted

Dear NeurIPS reviewers, please be reminded to delete the GPT prompts next time :)


aolwyn reposted

I'm writing up a blog post about Markdown streaming, because I think it is an interesting topic. I reckon there will be almost a dozen people who agree with me!
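The post isn't linked here, but the core difficulty is easy to sketch: a chunk that arrives mid-syntax (say, an unclosed **bold) can't be rendered until more text arrives. Below is a minimal line-buffering sketch in Python; all names are illustrative, and the actual post may take a different approach.

```python
# Minimal sketch of one Markdown-streaming strategy: buffer incoming
# chunks and only render complete lines, since a partial line may still
# contain unclosed syntax (e.g. "**bol" that a later chunk will finish).
# All names here are illustrative, not from any particular library.

import re

class MarkdownLineStreamer:
    def __init__(self):
        self.buffer = ""

    def feed(self, chunk: str) -> list[str]:
        """Accept a raw chunk; return any lines that are now complete."""
        self.buffer += chunk
        # Everything before the last newline is safe to render now;
        # the trailing partial line stays buffered.
        *complete, self.buffer = self.buffer.split("\n")
        return [self._render(line) for line in complete]

    def flush(self) -> list[str]:
        """Call once the stream ends to render whatever is left."""
        last, self.buffer = self.buffer, ""
        return [self._render(last)] if last else []

    def _render(self, line: str) -> str:
        # Toy inline renderer: bold and inline code only.
        line = re.sub(r"\*\*(.+?)\*\*", r"<b>\1</b>", line)
        line = re.sub(r"`(.+?)`", r"<code>\1</code>", line)
        return line

streamer = MarkdownLineStreamer()
for chunk in ["Hello **wo", "rld**\nand some `co", "de`\n"]:
    for html in streamer.feed(chunk):
        print(html)
for html in streamer.flush():
    print(html)
```

Line-level buffering is the simplest policy; a fuller renderer would also have to hold back multi-line constructs such as fenced code blocks and lists.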


aolwyn reposted

Since it's summer, and more or less internship and tech interview season, I made all 30 chapters of my Machine Learning Q and AI book freely available for the summer: sebastianraschka.com/books/ml-q-and… Hope it’s helpful! Happy reading, and good luck if you are interviewing!


aolwyn reposted

Hypixel SkyBlock Foraging is now available for public testing on the Hypixel Alpha Network! To thank you guys for being so patient, we are giving away 16,400 SkyBlock Gems (approx. $100 value on the Hypixel Network store 👀) to 5 winners! ⭐️ To enter, you must like and retweet…


aolwyn reposted

🚀 Excited to share the most inspiring work I’ve been part of this year: "Learning to Reason without External Rewards" TL;DR: We show that LLMs can learn complex reasoning without access to ground-truth answers, simply by optimizing their own internal sense of confidence. 1/n

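The excerpt doesn't include the paper's objective, so purely as an illustration of the general idea: one plausible way to score a model's "internal sense of confidence" is the negative mean entropy of its own next-token distributions. The function below is an assumption for illustration, not the paper's actual reward.

```python
# Hedged illustration only: one plausible "internal confidence" signal,
# the negative mean token entropy of the model's next-token distributions.
# This is NOT necessarily the paper's exact objective; it just shows how a
# reward can be computed from the model's own outputs, with no gold answer.

import torch
import torch.nn.functional as F

def self_confidence_reward(logits: torch.Tensor) -> torch.Tensor:
    """logits: (seq_len, vocab) for the generated answer tokens.
    Returns a scalar reward that is higher when the model is more certain."""
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # (seq_len,)
    return -entropy.mean()  # low average entropy -> high reward

# A confident (peaked) distribution scores higher than a uniform one:
peaked = torch.full((4, 100), -10.0)
peaked[:, 0] = 10.0
uniform = torch.zeros(4, 100)
print(self_confidence_reward(peaked) > self_confidence_reward(uniform))  # True
```

In a policy-gradient loop, a scalar like this would sit where the ground-truth reward normally goes.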

aolwyn reposted

today my roommate, who was working on research in another field, handed in his resignation and quit research altogether. he reiterated that research is most probably a 24/7 job, where people just work because they love doing that work all day, and the main incentive is always…


aolwyn reposted

1/3 @geoffreyhinton once said that the future depends on some graduate student being suspicious of everything he says (via @lexfridman). He also said that it was impossible to find biologically plausible approaches to backprop that scale well: radical.vc/geoffrey-hinto….


aolwyn reposted

By far the greatest source of anxiety in my life is the voice in my head saying I should be working right now. I should be building/coding/marketing/making videos/researching/experimenting/hunched over my laptop working. Ever since I started this solopreneur thing last year…


aolwyn reposted

holy shit MIT researchers just turned skin cells directly into neurons without stem cell intermediate, 100-fold efficiency boost, and they actually worked when transplanted into mouse brains 1/


aolwyn reposted

The Pirate Bay co-founder Carl Lundstrom has died in a plane crash at age 64


aolwyn reposted

This is one of the wildest git diffs I've ever seen


aolwyn reposted

HOLY SHIT IT'S HAPPENING AI can now write genomes from scratch. Arc Institute and NVIDIA just published Evo-2, the largest AI model for biology, trained on 9.3 trillion DNA base pairs spanning the entire tree of life. it doesn’t just analyze genomes. it creates them 1/


aolwyn reposted

Introducing deep-research - my own open-source implementation of OpenAI's new Deep Research agent. Get the same capability without paying $200. You can even tweak the behavior of the agent with adjustable breadth and depth. Run it for 5 min or 5 hours; it'll auto-adjust.
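The repo's internals aren't shown in the tweet; as a hedged sketch, agents with "breadth and depth" knobs usually implement a recursion like the one below. The helpers web_search and llm are placeholders, not the actual deep-research API.

```python
# Hedged sketch of a breadth/depth-controlled research loop. The helpers
# `web_search` and `llm` are stand-ins for real search/model calls; none
# of these names come from the actual deep-research repo.

def web_search(query: str) -> str:
    # Placeholder: a real agent would call a search API here.
    return f"(results for: {query})"

def llm(prompt: str) -> str:
    # Placeholder: a real agent would call a language model here.
    return f"(model output for: {prompt[:40]}...)"

def deep_research(query: str, breadth: int, depth: int, notes: list[str]) -> None:
    """Explore `breadth` follow-up questions per level, `depth` levels deep."""
    if depth == 0:
        return
    results = web_search(query)
    notes.append(llm(f"Summarize findings for '{query}': {results}"))
    follow_ups = llm(f"List {breadth} follow-up questions given: {results}")
    for question in follow_ups.splitlines()[:breadth]:
        deep_research(question, breadth, depth - 1, notes)

notes: list[str] = []
deep_research("How do LLM research agents plan?", breadth=2, depth=2, notes=notes)
print(llm("Write a report from these notes:\n" + "\n".join(notes)))
```

Call volume grows roughly as breadth to the power of depth, which is presumably how the same code can run for 5 minutes or 5 hours depending on the two settings.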


aolwyn reposted

🔬Research ideation is hard: After the spark of a brilliant initial idea, much work is still needed to further develop it into a well-thought-out project by iteratively expanding and refining the initial idea and grounding it in relevant literature. How can we better support this?


aolwyn reposted

either you accept you're not meant for more and you settle at the level you belong at (many such cases), or you reject that notion and do what is necessary to become more. there is literally nothing else to it

This tweet is no longer available.

aolwyn reposted

How do LLMs learn to reason from data? Are they ~retrieving the answers from parametric knowledge🦜? In our new preprint, we look at the pretraining data and find evidence against this: Procedural knowledge in pretraining drives LLM reasoning ⚙️🔢 🧵⬇️


aolwyn reposted

i understand you guys like lists

LeNet (1989)
LSTM (1997)
Deep Belief Networks (2006)
AlexNet (2012)
Word2Vec (2013)
GAN (2014)
VGG16/19 (2014)
InceptionNet (2014)
VAE (2014)
ResNet (2015)
U-Net (2015)
YOLO v1 (2015)
FastRCNN (2015)
FasterRCNN (2015)
Neural Style Transfer…


aolwyn reposted

these are some computer vision papers that everyone must go through at least once:

1. ResNets: arxiv.org/pdf/1512.03385…
2. YOLO: arxiv.org/abs/1506.02640
3. DeConv: lxu.me/mypapers/dcnn_…
4. GAN: arxiv.org/abs/1406.2661
5. Unet: arxiv.org/abs/1505.04597
6. Focal Loss:…


aolwyn reposted

Grok-3 just proved Riemann's hypothesis. We decided to pause its training to check its proof, and if the proof is correct, training won't be resumed, as the AI is deemed so smart that it becomes a danger to humanity.

