sankalp
@dejavucoder
LLMs and shitposting into crafting ai products and evals. dm open to talk ai engineering/post-training
prompt caching is the most bang for buck optimisation you can do for your LLM based workflows and agents. in this post, i cover tips to hit the prompt cache more consistently and how it works under the hood (probably the first such resource) sankalp.bearblog.dev/how-prompt-cac…
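a minimal sketch (mine, not from the linked post) of the pattern most caching tips boil down to: providers cache on exact prefix matches, so put the big, unchanging part of the prompt first, mark it cacheable, and keep per-request content at the end so it never invalidates the prefix. assumes the Anthropic Python SDK; the model id and the `review_diff` helper are placeholders for illustration.

```python
# minimal sketch, not from the original post: keep the static prefix stable so
# repeated calls hit the prompt cache instead of re-processing the system prompt.
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

STATIC_SYSTEM = "You are a code-review agent. <long, unchanging instructions / docs here>"

def review_diff(diff: str) -> str:
    resp = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model id for illustration
        max_tokens=1024,
        system=[
            {
                "type": "text",
                "text": STATIC_SYSTEM,
                # cache_control marks the end of the reusable prefix; later calls
                # with an identical prefix can be served from the cache.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        # the per-request, changing content goes last so the cached prefix stays intact
        messages=[{"role": "user", "content": diff}],
    )
    return resp.content[0].text
```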
i went through 10-15 blogposts by .@NeelNanda5 and liked them. i would recommend starting with these two, then going through the links shared in each post (to his other posts describing the idea) if you feel interested neelnanda.io/blog/mini-blog… neelnanda.io/blog/mini-blog…
hear me out: i guess if homes could be vibe coded using sonnet 4.5, that's how they would have looked
# Why Training MoEs is So Hard

recently, i have found myself wanting a small, research-focused training repo that i can run small experiments on quickly and easily. these experiments range from trying out new attention architectures (MLA, SWA, NSA, KDA - all pluggable) to…
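as a rough illustration of what "pluggable" attention could mean here (my own sketch, not the repo's actual code), each variant can implement the same forward signature and register under a config name, so swapping MLA/SWA/NSA/KDA becomes a one-line config change. assumes PyTorch; the registry, decorator, and class names are made up for the example.

```python
# sketch of a pluggable-attention registry: every variant shares one interface,
# and the training config just names which one to build.
import torch
import torch.nn as nn
import torch.nn.functional as F

ATTENTION_REGISTRY: dict[str, type[nn.Module]] = {}

def register_attention(name: str):
    """Decorator that adds an attention class to the registry under `name`."""
    def deco(cls):
        ATTENTION_REGISTRY[name] = cls
        return cls
    return deco

@register_attention("sdpa")  # MLA / SWA / NSA / KDA variants would register the same way
class SDPAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (batch, heads, seq, head_dim) for scaled_dot_product_attention
        q, k, v = (z.view(b, t, self.n_heads, -1).transpose(1, 2) for z in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.proj(out.transpose(1, 2).reshape(b, t, d))

def build_attention(name: str, d_model: int, n_heads: int) -> nn.Module:
    """Build the attention module named in the experiment config."""
    return ATTENTION_REGISTRY[name](d_model, n_heads)
```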
claude got his color from amanda's hair
In her first Ask Me Anything, @amandaaskell answers your philosophical questions about AI, discussing morality, identity, consciousness, and more.
Timestamps:
- 0:00 Introduction
- 0:29 Why is there a philosopher at an AI company?
- 1:24 Are philosophers taking AI seriously?
- 3:00…
a core difference between gpt-5.1-codex-max and opus 4.5 is that opus 4.5 is so much more enjoyable to talk to. it's more collaborative and explains its thought process and stuff well. codex sucks at all of these. oai needs to fix this in whatever model they release next
kimi k2 technical report is a work of art given the density of info and really good references. i wish they expanded more on the large-scale agentic data synthesis pipeline for tool-use learning... also i still don't get the infra section lol arxiv.org/pdf/2507.20534
if u prompt the model long enough the model prompts back at you
The boundary between you prompting the model and the model prompting you is going to get blurry in 2026
this video prompted a change in me early this year
this bit from the joe rogan episode with quentin tarantino and roger avary really resonated with me. tarantino shares how he lost sight of his dream in his 20s, and seeing a colleague hit 30 made him realize his mistake. sharing for anyone feeling lost in their 20s:
ask and it will be given to you. seek and you will find. knock and the door will be opened to you.
we should always in some ways ask for more than we can get and in other ways, or maybe the same way, be content with what we have
sometimes it's your sense of your identity that gets in the way of revealing what you want
in japan they refer to opus 4.5 as oppai
my understanding so far is that codex (gpt-5.1-codex-max high) tends to write defensive code. it performs better at system design than opus (a tad better) and has a "senior engineer" vibe: if you ask it for a review, it will label issues P1, P2, and it always answers in nested bullets…
Codex is the master of over-engineering the simplest shit imaginable. luckily you can use Claude Code to keep it under control