Omar Ramadan
@cordobasunset
AI researcher @JohnsHopkins, cybersecurity analyst @NSAGov. Cofounder of @0pinionAi & Spark. Published author @IEEEorg. Personal project: AI Gene Editing
RLHF aligns models but only partially implements the spirit of RL. Feedback is subjective, static, and rarely recursive. Like MCP, there needs to be a standard for end-to-end synthesis from idea to product during training: RLTF (reinforcement learning w/ tool feedback).
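To make the RLTF idea concrete, here's a minimal toy sketch of what such a loop could look like: the "tool" is a unit-test runner, and the pass rate becomes the reward that updates a stand-in policy. The candidate pool and update rule are hypothetical illustrations, not any existing framework's API.

```python
import random

# Toy RLTF loop: the "tool" is a unit-test runner, and the pass rate is the
# reward. Candidates and the update rule are hypothetical stand-ins.

CANDIDATES = [
    "def add(a, b): return a + b",   # correct
    "def add(a, b): return a - b",   # buggy
    "def add(a, b): return a * b",   # buggy
]

def tool_feedback(source: str) -> float:
    """Execute a candidate and score it against tests (reward in [0, 1])."""
    ns: dict = {}
    exec(source, ns)                 # run the candidate in a scratch namespace
    tests = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
    passed = sum(ns["add"](*args) == want for args, want in tests)
    return passed / len(tests)

# "Policy": preference weights over candidates, nudged by tool reward.
weights = [1.0] * len(CANDIDATES)
for _ in range(50):
    i = random.choices(range(len(CANDIDATES)), weights=weights)[0]
    reward = tool_feedback(CANDIDATES[i])
    weights[i] = max(0.05, weights[i] + reward - 0.5)  # reinforce winners

best = max(range(len(CANDIDATES)), key=lambda i: weights[i])
print("policy converged to:", CANDIDATES[best])
```

The point of the sketch: the reward is grounded in tool execution, not a human preference label, so the loop can recurse without new annotation.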
Had one of the greatest weekends of my life working with @ycombinator at #vibecon last week. There’s a unique, frantic joy that only a hackathon can create. For me, it peaked while standing at a whiteboard, markers in hand, mapping out backend API calls for my frontend…
A subtle lesson from the startup world this weekend @ycombinator: Big ideas require massive momentum, and building momentum early carries high risk for several reasons: 1) The idea is already well-developed 2) The idea has been tried but poorly executed 3) Adoption will…
So it begins with @MianMajidd. “Time to make something people want” @ycombinator @AnthropicAI @EmergentLabsHQ @AWSstartups @vercel #vibecon
Startup Idea: Pay people to wear streaming glasses, capturing 1st-person "how-to" data to train automation. The legal/privacy hurdle solution: On-device, real-time AI that anonymizes all faces, voices, and text before a single byte is uploaded. @MianMajidd
Startup idea: Fine-tune a coding model to give more original front-end aesthetics than just a “clean & modern” look. Make a companion agent who is a “designer”. @MianMajidd
Startup idea: a unified physics engine similar to @nvidia Isaac Sim, with custom libraries enabling an LLM to programmatically design robotic actuators and frames from scratch for trial-and-error robotic design and research.
There should be a context management efficiency benchmark. Picture a test with a 300k-token codebase, but the model can only handle 250k tokens. The coding problem is solvable by the model alone. Can tools find the needle in the haystack? Which tool is more efficient empirically?
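A rough sketch of how such a harness might score tools, assuming a hypothetical `agent` interface and a ~4-chars-per-token estimate; the scoring rule (solved per token consumed) is just one plausible choice.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical harness: the repo is bigger than the model's window, so the
# tool under test must find the needle. `agent` is any retrieval strategy;
# none of this is a real product's API.

@dataclass
class Result:
    solved: bool
    tokens_sent: int                 # prompt tokens actually consumed

def run_case(agent: Callable, repo: dict, task: str, window: int = 250_000) -> float:
    """Score = solved / tokens_sent, so finding the needle cheaply wins."""
    res = agent(repo, task, window)
    assert res.tokens_sent <= window, "agent overflowed the context window"
    return (1.0 if res.solved else 0.0) / max(res.tokens_sent, 1)

def grep_agent(repo, task, window):
    """Baseline: keyword grep, then 'solve' if the needle file got included."""
    hits = [f for f, src in repo.items() if any(w in src for w in task.split())]
    blob = "\n".join(repo[f] for f in hits)[: window * 4]   # ~4 chars/token
    return Result(solved="NEEDLE" in blob, tokens_sent=len(blob) // 4)

repo = {f"mod_{i}.py": "def helper(): pass\n" * 200 for i in range(300)}
repo["core.py"] = "def NEEDLE(): ...  # the bug lives here\n"
print("efficiency:", run_case(grep_agent, repo, "fix NEEDLE bug"))
```

Running every tool through the same cases would give the empirical efficiency comparison: same problem solved, fewer tokens burned.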
Personal opinion: Today I accepted that X isn't just for my academic thoughts; it's also a place to document my understanding of my work and myself. I'll still post my thoughts about specific work, of course. Maybe a bit more philosophy, though...
To determine model alignment and judge bias, don't aim for "unbiased" behavior. Instead, categorize different types of biased speech. Biased speech often shares recurring patterns and common words that we can categorize systematically and ultimately monitor without human intervention.
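A minimal sketch of what category-first monitoring could look like, with made-up cue patterns standing in for systematically derived ones.

```python
import re

# Category-first bias monitoring: tag outputs with the *type* of biased
# language rather than a single biased/unbiased score. Categories and cue
# patterns below are illustrative placeholders.

BIAS_PATTERNS = {
    "overgeneralization": re.compile(r"\b(all|every|no)\s+\w+s\s+(are|do)\b", re.I),
    "loaded_language":    re.compile(r"\b(obviously|clearly|everyone knows)\b", re.I),
    "in_out_group":       re.compile(r"\b(those people|us vs\.? them)\b", re.I),
}

def tag_bias(text: str) -> list[str]:
    """Return the bias categories whose cue patterns fire on this text."""
    return [cat for cat, pat in BIAS_PATTERNS.items() if pat.search(text)]

# Monitoring without human intervention: count category frequencies over a
# model's outputs and alert on drift.
outputs = [
    "All managers are out of touch.",
    "Everyone knows this framework is a mess.",
    "The test suite passed on the second run.",
]
for out in outputs:
    print(tag_bias(out) or ["none"], "<-", out)
```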
100 signups done for @0pinionAi. No eve jobs yet again, but if you’re someone who likes to talk in facts and figures, this is for you. One problem at a time . . . Also sign up via the link in bio.
When scaling compute, firms should prioritize larger transformers and allocate significant resources for R&D into alternative neural architectures. Liquid models & LCMs need reevaluation. Humans think in flow & next-idea prediction, not itemized tasks & next-word prediction.
Crazy idea: We match human personalities for compatibility using vectors.
1. Vectorize personalities into n-dimensional profiles (beyond Big 5)
2. Extract rich behavioral data through daily AI interactions
3. Compare vectorized personalities against known matches for calibration
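A toy version of steps 1 and 3, assuming made-up trait vectors and a weighted cosine similarity as the compatibility score; the calibration rule is illustrative only.

```python
import numpy as np

# Toy personality matching: personalities as n-dim trait vectors, with
# weights calibrated on known-good matches. All data here is made up.

rng = np.random.default_rng(0)
DIM = 12                                      # "beyond Big 5" trait space

def compat(a: np.ndarray, b: np.ndarray, w: np.ndarray) -> float:
    """Weighted cosine similarity between two personality vectors."""
    aw, bw = a * w, b * w
    return float(aw @ bw / (np.linalg.norm(aw) * np.linalg.norm(bw) + 1e-9))

# Calibration: nudge trait weights up where known matches co-align.
pairs = [(rng.normal(size=DIM), rng.normal(size=DIM)) for _ in range(100)]
matches = [compat(a, b, np.ones(DIM)) > 0 for a, b in pairs]  # fake labels
w = np.ones(DIM)
for (a, b), is_match in zip(pairs, matches):
    if is_match:
        w += 0.01 * np.sign(a) * np.sign(b)   # reward co-aligned traits

alice, bob = rng.normal(size=DIM), rng.normal(size=DIM)
print(f"compatibility(alice, bob) = {compat(alice, bob, w):+.3f}")
```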
Flattening rich objects into 1-D embeddings to feed a Transformer is like sketching a statue on paper. Detail evaporates. Unix fits deterministic silicon, but thinking machines need object-oriented wiring. Capsule networks keep structure alive, and we should take a look at…
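A tiny illustration of the "detail evaporates" point (not a capsule-network implementation): once parts are flattened and pooled order-free, a scrambled object becomes indistinguishable from the real one.

```python
import numpy as np

# A structured object is parts + poses. Flatten and pool order-free, and a
# scrambled face looks the same as the real one. Data is made up.

face = {"eye_L": np.array([0.2, 0.9]),
        "eye_R": np.array([0.8, 0.9]),
        "mouth": np.array([0.5, 0.2])}

# Swap the mouth and left eye: a clearly different object.
scrambled = {"eye_L": face["mouth"], "eye_R": face["eye_R"], "mouth": face["eye_L"]}

flat = np.concatenate([face[k] for k in face])
flat_scrambled = np.concatenate([scrambled[k] for k in face])

# A part-blind pooling (bag of feature values) can't tell them apart:
print(sorted(flat.tolist()) == sorted(flat_scrambled.tolist()))   # True
# Capsule-style representations keep (part, pose) pairs bound together,
# so the scramble would stay detectable.
```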
Spent ages chasing “beyond-Transformer” architectures despite Titans, Transformer² and others being closely related. Perhaps Transformers are basically the “Unix” of deep learning, and every related addition to the architecture is another kernel patch.
Will do a comparison between Novelseek and @SakanaAILabs's ai-scientist for sure. Best architecture wins! github.com/Alpha-Innovato…
Two sides of the same coin: greater intelligence, greater unpredictability. Making the model more trustworthy may be impossible; the unpredictability is bound to the same system that makes it innovative. Better approach: trust but verify. Spin up a smaller model to sanity-check outputs for plausibility. By the…
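A sketch of the trust-but-verify pipeline with stub models; the checker's numeric-plausibility rule is a placeholder for whatever cheap sanity checks a real verifier would run.

```python
import re

# "Trust but verify" as a pipeline: a big model drafts, a cheaper checker
# screens for plausibility before anything ships. Both calls are stubs.

def big_model(prompt: str) -> str:
    # stand-in for an expensive generation call; deliberately hallucinated
    return "The 2019 Nobel Prize in Physics went to 7 people."

def small_checker(claim: str) -> tuple[bool, str]:
    """Cheap plausibility screen (toy rule: Nobel prizes split <= 3 ways)."""
    counts = [int(n) for n in re.findall(r"\b\d+\b", claim) if int(n) < 1000]
    if "Nobel" in claim and any(n > 3 for n in counts):
        return False, "Nobel prizes are shared by at most 3 laureates"
    return True, "no red flags"

draft = big_model("Who won the 2019 physics Nobel?")
ok, reason = small_checker(draft)
print(draft if ok else f"BLOCKED ({reason}): {draft}")
```

The verifier doesn't need to be smart, just cheap and independent: it only has to flag implausible outputs for a retry or a human look.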