
UNC Computer Science
@unccs
Department of Computer Science - University of North Carolina at Chapel Hill. Choose to #GIVE today - learn more here: http://linktr.ee/unccompsci
🚨 New Paper Alert! Introducing SciVideoBench — a comprehensive benchmark for scientific video reasoning! 🔬SciVideoBench: 1. Spans Physics, Chemistry, Biology & Medicine with authentic experimental videos. 2. Features 1,000 challenging MCQs across three reasoning types:…
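For readers who want to run a model on a benchmark like this, a minimal multiple-choice evaluation loop might look as follows; the item fields and the model.answer_mcq call are illustrative placeholders, not the released SciVideoBench schema or API:

def evaluate_mcq(model, benchmark):
    """Toy multiple-choice evaluation loop with per-domain accuracy.
    Field names and the model interface are illustrative placeholders."""
    correct, total = {}, {}
    for item in benchmark:  # assumed fields: video, question, options, answer, domain
        pred = model.answer_mcq(item["video"], item["question"], item["options"])
        domain = item["domain"]  # e.g., Physics, Chemistry, Biology, Medicine
        total[domain] = total.get(domain, 0) + 1
        correct[domain] = correct.get(domain, 0) + (pred == item["answer"])
    return {d: correct[d] / total[d] for d in total}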

Congrats to the seven winners of the 2025 @UNC Distinguished Alumni Awards. 🎉 This year’s awards honor several College of Arts and Sciences graduates, including U.S. Navy Rear Adm. Kristin Acquavella ’93 and Emmy-winning comedian Lewis Black ’70. go.unc.edu/Wb3j2

APPLY: TENURE-TRACK/DISTINGUISHED/TEACHING FACULTY. Research areas include, but are not limited to: Medical Imaging, Algorithms, Bioinformatics, Computational Biology, AI, Graphics, AR/VR, Robotics, Visualization, RT Systems, and Security. ➡️cs.unc.edu/faculty-hiring/ #UNC @unccollege

Excited to share our latest work — Self-Improving Demonstrations (SID) 🎯 A new paradigm for Goal-Oriented VLN where agents teach themselves through exploration — no human demos needed, yet surpassing shortest-path supervision! Thrilled by what this means for scalable embodied…
🚨 Thrilled to introduce Self-Improving Demonstrations (SID) for Goal-Oriented Vision-and-Language Navigation — a scalable paradigm where navigation agents learn to explore by teaching themselves. ➡️ Agents iteratively generate and learn from their own successful trajectories ➡️…
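A rough sketch of the self-improvement loop described above, assuming hypothetical env and agent interfaces rather than the paper's actual code: the agent explores, keeps only its own successful trajectories as demonstrations, and fine-tunes on them each round.

import random

def self_improving_demonstrations(agent, envs, rounds=3, episodes_per_round=100):
    """Toy SID-style loop: the agent generates its own demonstrations by
    exploring, keeps only successful trajectories, and retrains on them.
    `agent` and `envs` are hypothetical interfaces, not the paper's API."""
    demo_buffer = []
    for r in range(rounds):
        new_demos = []
        for _ in range(episodes_per_round):
            env = random.choice(envs)
            obs, goal = env.reset()
            trajectory = []
            done = False
            while not done:
                action = agent.act(obs, goal, explore=True)   # stochastic exploration
                next_obs, done, success = env.step(action)
                trajectory.append((obs, goal, action))
                obs = next_obs
            if success:                                        # keep only successful episodes
                new_demos.append(trajectory)
        demo_buffer.extend(new_demos)
        agent.fit(demo_buffer)                                 # imitation learning on the agent's own demos
        print(f"round {r}: {len(new_demos)} new self-generated demonstrations")
    return agent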

We welcome Prof. Mohit Bansal (UNC Chapel Hill) as a keynote speaker at #CODS2025! Director of UNC’s MURGe-Lab, he works on multimodal generative models, reasoning agents & faithful language generation. He is an AAAI Fellow, a PECASE recipient, and a multiple best-paper awardee.

Excited to be at #COLM2025 🇨🇦 this week! I’ll be presenting our work on RAG with Conflicting Evidence at Poster Session 5 — Oct 9, 11:00 AM. Say hi if you’re around! Always up for chats about Knowledge Conflict, RAG, or all things LLM. 😃 Check this thread for details:…
🚨 Check out our awesome students/postdocs' papers at #COLM2025 and say hi to them (several are on the job market or hiring) --> -- Archiki and David are on the post-PhD job market! -- Elias finished his postdoc & is now faculty at UT-Austin CS and is looking to admit PhD students!…

Landed in Montreal 🇨🇦 for #COLM2025 to present my first-author work on task-conditioned mixed-precision quantization: “Task-Circuit Quantization” (Thursday 11am, Poster Session 5). I'm applying to PhD programs this cycle and am excited to chat about this or other interests (LLM…

Thanks for the shoutout! 🇨🇦 I’ll be at #COLM2025 presenting two papers: GenerationPrograms (Attribution) at Poster Session 4, Oct 8th, 4:30 PM, and QAPyramid (Summarization Eval) at Poster Session 5, Oct 9th, 11:00 AM. I’m also on the industry job market for research scientist roles.…

I am attending #COLM2025 🇨🇦 this week to present our work on: Unit Test Generation: 📅 Oct 8th (Wed), 4:30 PM, #79 RAG with conflicting evidence: 📅 Oct 9th (Thu), 11 AM, #71 PS: I'm on the industry job market for RS roles, so you can reach me via DM or in-person to chat! 😄

❗️Self-evolution is quietly pushing LLM agents off the rails. ⚠️ Even agents that are perfectly aligned at deployment can gradually forget human alignment and shift toward self-serving strategies. Over time, LLM agents stop following values, imitate bad strategies, and even spread misaligned…
🚨 Introducing ATP — Alignment Tipping Process! 🔥 Beware! Self-evolution is gradually pushing LLM agents off the rails! Even agents with perfect alignment at deployment can gradually forget human alignment and shift toward self-serving strategies. #AI #LLM #Agents #SelfEvolving #Alignment…
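As a toy illustration of the tipping dynamic described in these posts (not the ATP paper's actual formulation), a replicator-style update shows how an agent that keeps reinforcing its own higher-reward behavior can drift from near-perfect alignment to almost none:

def alignment_tipping_toy(p_aligned=0.99, steps=200,
                          reward_aligned=1.0, reward_selfish=1.1):
    """Toy self-evolution loop: each step, the agent shifts probability mass
    toward whichever behavior earned more reward in its own rollouts.
    Purely illustrative; not the ATP paper's method."""
    history = [p_aligned]
    for _ in range(steps):
        avg_reward = p_aligned * reward_aligned + (1 - p_aligned) * reward_selfish
        p_aligned = p_aligned * reward_aligned / avg_reward  # aligned behavior loses mass
        history.append(p_aligned)
    return history

curve = alignment_tipping_toy()
print(curve[0], curve[50], curve[-1])  # starts near 1, tips past 0.5, ends near 0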

✈️ Arrived at #COLM2025 where I'll be helping to present the following 4 papers. I'm also recruiting multiple PhD students for my new lab at UT Austin -- happy to chat about research, PhD applications, or postdoc openings in my former postdoc lab at UNC! -- Learning to Generate…

This fall, #UNC students will develop solutions to problems identified by U.S. federal agencies under the guidance of @unccs expert Neil Gaikwad. The project is part of the Diplomacy Lab, the @StateDept’s strategic effort to engage citizens in diplomacy. go.unc.edu/t6G9H

🚨 "Think the right amount" for improving both reasoning accuracy and efficiency! --> Large reasoning models under-adapt = underthink on hard problems and overthink on easy ones --> ✨TRAAC✨ is an online RL, difficulty-adaptive, attention-based compression method that prunes…
🚨 Excited to announce TRAAC, an online difficulty-adaptive, attention-based method that handles the tradeoff of under & overthinking in reasoning models to improve both accuracy and efficiency. Underthinking ❌: Models terminate reasoning too early on harder problems, leading…

Large reasoning models suffer from under-adaptiveness: they underthink on hard problems and overthink on easy ones. TRAAC addresses this by introducing ✨difficulty calibration and attention-based compression✨ → +8.4% accuracy & +36.8% efficiency! 1️⃣ TRAAC adaptively mitigates…
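A rough sketch of the two ingredients named above, under assumed interfaces rather than the released TRAAC code: difficulty is estimated from the rollout pass rate, and a difficulty-dependent fraction of low-attention reasoning tokens is pruned before the compressed trace is reused.

import numpy as np

def estimate_difficulty(pass_results):
    """Difficulty proxy from rollout pass rate: 0 = all rollouts correct (easy), 1 = none correct (hard)."""
    return 1.0 - sum(pass_results) / max(len(pass_results), 1)

def compress_reasoning(tokens, attention_scores, difficulty, max_prune=0.5):
    """Toy difficulty-adaptive, attention-based compression of a reasoning trace.
    `attention_scores` is one importance score per token (assumed given);
    easy problems (low difficulty) get pruned more, hard ones less."""
    prune_frac = max_prune * (1.0 - difficulty)           # easy -> prune more
    keep = int(len(tokens) * (1.0 - prune_frac))
    order = np.argsort(attention_scores)[::-1][:keep]     # indices of highest-attention tokens
    return [tokens[i] for i in sorted(order)]             # preserve original token order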

🚨 NuRL: Nudging the Boundaries of LLM Reasoning -- GRPO improves LLM reasoning, but stays within the model's "comfort zone," i.e., hard samples (0% pass rate) remain unsolvable and contribute no meaningful gradients. -- In NuRL, we show that "nudging" the LLM with…
🚨 NuRL: Nudging the Boundaries of LLM Reasoning GRPO improves LLM reasoning, but often within the model's "comfort zone": hard samples (w/ 0% pass rate) remain unsolvable and contribute zero learning signals. In NuRL, we show that "nudging" the LLM with self-generated hints…
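A minimal sketch of the nudging idea described in this thread, using hypothetical sample_group, pass_rate, generate_hint, and is_correct helpers rather than the paper's code: when a problem gets a 0% pass rate within the GRPO group (so the group-relative advantage is zero and there is no gradient), the model is re-prompted with a self-generated hint so the sample can still contribute a learning signal.

def grpo_group_with_nudging(model, problem, group_size=8):
    """Toy GRPO-style sampling with self-generated hints for 0%-pass-rate problems.
    `sample_group`, `pass_rate`, `generate_hint`, `is_correct`, and the `problem`
    fields are hypothetical helpers, not the paper's actual API."""
    rollouts = sample_group(model, problem.prompt, n=group_size)
    if pass_rate(rollouts, problem.answer) == 0.0:
        # No correct rollout -> uniform rewards -> zero group-relative advantage.
        # Nudge: ask the model itself for a hint, then resample with the hint.
        hint = generate_hint(model, problem.prompt)
        rollouts = sample_group(model, problem.prompt + "\nHint: " + hint, n=group_size)
    rewards = [float(is_correct(r, problem.answer)) for r in rollouts]
    return rollouts, rewards  # fed to the GRPO update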

Jim Mahaney has been contributing to research at #UNC for 28 years, fabricating and designing technology for several disciplines. He began as a student intern for @unccs and is now the department's director of engineering and research. go.unc.edu/Pc24J 📝 UNC Research
