
Agentic Lab

@AgenticLab1

Free Skool community (join the movement of mastering agentic coding): https://www.skool.com/the-agentic-lab-6743/about?ref=6be3bb2df7b744df8202baebef624812

Pinned

The creator of the Ralph Wiggum Loop, @GeoffreyHuntley, just declared my video on the loop the official explainer. Here's the run-down and core ideas (see thread)


Agentic Lab reposted

this is one of the best explainer videos for ralph that i have seen. declaring the official explanation m.youtube.com/watch?v=I7azCA…

[Link card: YouTube video "You're Using Ralph Wiggum Loops WRONG"]


Bi-directional prompting when trying to learn or plan anything will change your life


New video explaining the REALITY of Ralph Wiggum (no hype, no confusion): youtu.be/I7azCAgoUHc

[Link card: YouTube video "You're Using Ralph Wiggum Loops WRONG"]


PSA: Don't use Anthropic's Ralph Wiggum plugin. It isn't set up right: it runs everything in one context window, causing compaction and context rot.
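For context: the loop as Geoffrey describes it restarts the agent with a completely fresh context window on every iteration, while the plugin keeps everything in one long session. Here's a minimal sketch of the fresh-context version in Python; the `claude -p` print-mode invocation, prompt file name, and iteration cap are my assumptions, not a prescribed setup:

```python
# Minimal Ralph Wiggum loop sketch: re-run the agent with a FRESH context
# on every iteration instead of one long session (CLI invocation is assumed).
import subprocess
import time
from pathlib import Path

PROMPT_FILE = Path("PROMPT.md")   # the standing task prompt the loop re-feeds
MAX_ITERATIONS = 50               # safety cap so the loop can't spin forever

for i in range(MAX_ITERATIONS):
    prompt = PROMPT_FILE.read_text()
    # Each subprocess call is a brand-new agent run, i.e. a brand-new context
    # window, so nothing accumulates, compacts, or rots between iterations.
    result = subprocess.run(["claude", "-p", prompt], capture_output=True, text=True)
    print(f"--- iteration {i + 1} ---")
    print(result.stdout[-2000:])  # tail of this run's output for the log
    time.sleep(5)                 # brief pause between runs
```

The state that carries over lives in the repo (files, commits, the prompt), not in the model's context.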


Make it a rule in your life to finish what you start. Especially coding projects. The speed of iteration provided by agentic coding is a blessing for those who can stay focused, and a curse for those who can't.


PSA: The night before your Claude usage resets, just run a Ralph Wiggum loop overnight on whatever you want. There's no downside (as long as you turn off the overflow that gets billed as API credits).
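If you want the overnight run to stop on its own by morning, a wall-clock cutoff is enough; the 8-hour window and CLI call below are illustrative assumptions:

```python
# Overnight variant: same fresh-context loop, but it stops at a wall-clock
# deadline instead of an iteration count (the times here are made up).
import subprocess
import time
from datetime import datetime, timedelta
from pathlib import Path

PROMPT_FILE = Path("PROMPT.md")
deadline = datetime.now() + timedelta(hours=8)  # e.g. kick it off at bedtime

iteration = 0
while datetime.now() < deadline:
    iteration += 1
    subprocess.run(["claude", "-p", PROMPT_FILE.read_text()])  # assumed CLI
    time.sleep(10)

print(f"Stopped after {iteration} iterations at {datetime.now():%H:%M}.")
```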


A Ralph Wiggum loop that plugs away at every way a user might use your site (leveraging the Playwright MCP / dev-tools browser), then files well-documented bug reports with steps to reproduce, might just be the way to go if you have a ton of tokens to spare.
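A rough sketch of what each pass of that loop could feed the agent; the prompt wording, the bugs/ directory, the local URL, and the `claude -p` call are all assumptions for illustration:

```python
# Sketch of a bug-hunting Ralph loop: each iteration asks the agent to explore
# one user flow with a browser tool and file a reproducible bug report.
import subprocess
from pathlib import Path

SITE_URL = "http://localhost:3000"  # assumed local dev server
BUG_DIR = Path("bugs")
BUG_DIR.mkdir(exist_ok=True)

PROMPT = f"""
You have access to a Playwright MCP browser.
1. Open {SITE_URL} and exercise one user flow not yet covered in {BUG_DIR}/.
2. If anything breaks (console error, failed request, broken UI), write a new
   markdown file in {BUG_DIR}/ with: a title, exact steps to reproduce,
   expected vs. actual behavior, and any console output.
3. If the flow works, note it in {BUG_DIR}/covered.md and stop.
"""

for i in range(20):  # cap the run at whatever your token budget allows
    subprocess.run(["claude", "-p", PROMPT])
    reports = len(list(BUG_DIR.glob("*.md")))
    print(f"iteration {i + 1}: {reports} bug reports so far")
```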


Hot take: I think it is still good to learn the basics of n8n so you can better understand agentic flows visually


Let's use our brains for a second and come to the conclusion that 1M-context models are not the answer, lol. Attention is finite: the more tokens you put in context, the more diluted it becomes (weaker attention peaks), meaning the model can't attend to the relevant context properly.
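A toy numbers-only illustration of that dilution (not a claim about any specific model): with a softmax over attention scores, one relevant token competes with every other token in the window, so its weight shrinks roughly in proportion to context length. The scores below are invented purely to show the trend:

```python
# Toy softmax example: the weight on a single "relevant" token vs. how many
# distractor tokens share the context. Scores are made up for illustration.
import numpy as np

def weight_on_relevant(n_tokens: int, relevant_score: float = 4.0,
                       distractor_score: float = 2.0) -> float:
    scores = np.full(n_tokens, distractor_score)
    scores[0] = relevant_score            # the one token that actually matters
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax
    return float(weights[0])

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9,} tokens -> weight on the relevant token: {weight_on_relevant(n):.6f}")
```

Same relative scores, a thousand times more tokens, roughly a thousand times less attention on the thing that matters.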

