clulab

@LabCLU

Computational Language Understanding Lab at University of Arizona

clulab reposted

We (w/ @msurd) will be presenting our Spotlight paper at #ICLR2024 in Vienna next week. Drop by for some amazing intellectual exchanges on data contamination. Paper: arxiv.org/abs/2308.08493 Code: github.com/shahriargolchi… Media: thenewstack.io/how-to-detect-…

Congratulations, @msurd!

I'd like to congratulate my colleague, Mihai Surdeanu @msurd, on his promotion to full Professor at U.Arizona! So glad to see Mihai get the recognition that he deserves for his excellence as both a researcher and teacher. (And he's an amazing colleague too.) Congrats, Mihai! 🏆



By the way, both @robert_nlp and @ShahriarGolchin are on the market! See some of their papers in our earlier tweets.


clulab reposted

🔗 Preprint: arxiv.org/abs/2404.07544 LLMs like GPT-4 or Claude 3 can perform both linear and non-linear regression surprisingly well when given (input, output) examples in their context, without any parameter update. (1/n)
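The setup described in the tweet can be sketched as follows: the (input, output) pairs are serialized into the model's prompt, and the model's completion for a new input serves as the regression prediction, with no parameter updates. The prompt template and helper name below are illustrative assumptions, not the paper's exact format.

```python
def build_regression_prompt(examples, query_x):
    """Format (x, y) pairs as in-context examples, then append a query.

    `examples` is a list of (input, output) float pairs; the returned
    string ends with "Output:" so the model's completion is the prediction.
    """
    lines = [f"Input: {x:.2f}\nOutput: {y:.2f}" for x, y in examples]
    lines.append(f"Input: {query_x:.2f}\nOutput:")
    return "\n".join(lines)

# Toy linear function y = 3x + 1 as in-context examples.
examples = [(x, 3 * x + 1) for x in [0.0, 1.0, 2.0, 4.0]]
prompt = build_regression_prompt(examples, query_x=3.0)
print(prompt)
```

The resulting prompt would then be sent to a model such as GPT-4 or Claude 3; per the preprint's claim, the completion tracks the underlying function surprisingly well for both linear and non-linear targets.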


And one more paper from @robert_nlp and colleagues: LLMs can do complicated regression tasks using only in-context examples: arxiv.org/pdf/2404.07544…

