
Ebtesam ✈️ VL/HCC
@ebtesamdotpy
AI tools for SE research | CS PhD @GeorgeMasonU @INSPIREDLabGMU | Prev @MSFTResearch
crazy that they called it context window when attention span was right there
As we all know by now, reasoning models often generate longer responses, which raises compute costs. Now, this new paper (arxiv.org/abs/2504.05185) shows that this behavior comes from the RL training process, not from an actual need for long answers for better accuracy. The RL…

vibe coding, where 2 engineers can now create the tech debt of at least 50 engineers
For the confused, it's actually super easy:
- GPT 4.5 is the new Claude 3.6 (aka 3.5)
- Claude 3.7 is the new o3-mini-high
- Claude Code is the new Cursor
- Grok is the new Perplexity
- o1 pro is the 'smartest', except for o3, which backs Deep Research
Obviously. Keep up.
New post re: Devin (the AI SWE). We couldn't find many reviews of people using it for real tasks, so we went MKBHD mode and put Devin through its paces. We documented our findings here. Would love to know if others have had a different experience. answer.ai/posts/2025-01-…

Long overdue, a paper finally exposes the Emperor's New “Threats to Validity” Clothes in empirical software engineering research. Even better, it provides suggestions for improving the state of practice.


Presenting our paper @ESEM_conf soon: Threats to Validity in Software Engineering – hypocritical paper section or essential analysis? Paper #OpenAccess dl.acm.org/doi/10.1145/36…
It's common to add personas in system prompts, assuming this can help LLMs. However, through analyzing 162 roles x 4 LLMs x 2410 questions, we show that adding a persona mostly makes *no* statistically significant difference versus the no-persona setting. If there is a difference, it…
🎙️ What if the way we prompt LLMs might actually hold it back? 🚨 Assigning personas like "helpful assistant" in system prompts might *not* be as helpful as we think! ✨ Check out our work accepted to Findings of @emnlpmeeting ✨ 📜 arxiv.org/abs/2311.10054 🧵 [1/7]
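The persona-vs-no-persona comparison above can be sketched in the chat-message format most LLM APIs accept. This is a minimal illustration of the experimental setup, not the paper's actual harness; no real API is called, and the question is a made-up placeholder:

```python
# Minimal sketch: the two prompt conditions compared in the persona study.
# The question is a hypothetical stand-in; the message structure is the point.
question = "What is the capital of France?"

# Condition 1: a persona assigned via the system prompt.
with_persona = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": question},
]

# Condition 2: the same question with no persona at all.
no_persona = [
    {"role": "user", "content": question},
]

# The study's design: send both variants over the same question set
# and test whether accuracy differs significantly between conditions.
print(with_persona[0]["content"])
print(len(with_persona), len(no_persona))
```

The finding is that, across most roles and models, the first condition does not measurably beat the second.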
If you get frequent urges to go deep into a subject, do not ignore them. Pick a weekend, stop everything else, and give in to the urge. Fresh insights await at the other end.
Is hallucination in LLMs inevitable even with an idealized model architecture and perfect training data? This work argues YES and offers a formal proof. Let's dig in ⤵ 🧵1/n

Instead, evaluation processes should track the diverse notions of extrinsic utility found both in everyday usage of our technology today and in how people might use technology tomorrow.
🚨 Inclusive tech research alert! 🚨 Are you a tech user who identifies as BIPOC (bit.ly/BIPOC_defined)? Or a researcher/practitioner who uses data in your work? Share your experiences in our 20 min. survey→go.gmu.edu/EngagingTheMar… IRBNet #: 1945546-2 #data #tech #trust
Never name a manuscript draft "_FINAL"
Academic research: months of experiments and data analysis that ends up being a few sentences in a paper

I feel like large language model feels a bit reductive when GPT-2 is in the same class as GPT-4. gigantic language models? enormous language models? big ass language models? Nimitz-class language models? better suggestions needed
Happy birthday to Python creator Guido van Rossum. The open source language was named after comedy troupe Monty Python: bit.ly/2B8R7h6 Image v/Midjourney

When I got started with programming, I debugged using printf() statements. Today, I debug with print() statements. The purpose of debugging is to correct your mental model of what your code does, and no tool can do that for you. The best any tool can do is provide visibility…
“The most effective debugging tool is still careful thought, coupled with judiciously placed print statements.” — Brian Kernighan, co-creator of Unix
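The workflow both tweets describe, judiciously placed prints that make program state visible so you can check it against your mental model, can be sketched in a few lines. The `parse_scores` function below is a hypothetical example, not from either tweet:

```python
# Minimal sketch of print-statement debugging: instrument a function
# so each intermediate state is visible, then compare what you see
# against what your mental model predicted. parse_scores is hypothetical.
def parse_scores(line):
    fields = line.split(",")
    print(f"fields={fields!r}")    # visibility: what did split() actually produce?
    scores = [int(f.strip()) for f in fields]
    print(f"scores={scores!r}")    # visibility: did the conversion do what I expect?
    return scores

total = sum(parse_scores("10, 20, 30"))
print(f"total={total}")
```

The prints are the "visibility" the tweet mentions; deciding whether `fields` and `scores` look right is the careful thought Kernighan pairs them with.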
If there was only one scientific practice I could teach to every scientist regardless of stage or field I think it would be: look at the data. Spot check it. Find a few data points and trace them through to see if they make sense. Look at the raw data. Don't just do analyses.
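The spot-checking practice above can be sketched concretely: sample a few raw records, eyeball them, and trace a simple sanity check through every data point before running any analysis. The records below are made-up survey rows used purely for illustration:

```python
import random

# Hypothetical raw records standing in for a real dataset.
records = [
    {"id": 1, "age": 34, "score": 0.82},
    {"id": 2, "age": -5, "score": 0.91},   # the kind of error only spot-checking catches
    {"id": 3, "age": 29, "score": 0.77},
]

# Step 1: look at a random sample of the raw data, not just aggregates.
random.seed(0)
for rec in random.sample(records, 2):
    print(rec)

# Step 2: trace a basic sanity check through every record.
suspect = [r for r in records if not (0 <= r["age"] <= 120)]
print(f"{len(suspect)} record(s) fail the age sanity check: {suspect}")
```

An aggregate like the mean score would look perfectly plausible here; only looking at individual rows reveals the impossible age.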