
Ruomeng Liu

@liu_ruomeng

Ruomeng Liu reposted

The fact that this had to be written: "The scientific community must urgently develop new data validation standards and reconsider its reliance on nonprobability, low-barrier online data collection methods." Academia never should have gotten to this point. I don't think we…

AI presents a fundamental threat to our ability to use polls to assess public opinion. Bad actors who are able to infiltrate panels can flip close election polls for less than the cost of a Starbucks coffee. Models will also infer and confirm hypotheses in experiments. Current…
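
To make the cost claim concrete, here is a back-of-envelope sketch in Python. Every number in it (poll size, margin, token counts, per-token prices) is an illustrative assumption, not a figure from the paper.

```python
# Back-of-envelope sketch of the cost claim above. All numbers are assumed.
poll_n = 1000                  # respondents in a typical election poll (assumed)
votes_a, votes_b = 505, 495    # candidate A leads 50.5% to 49.5% (a 1-point race)

# Fake respondents, all answering for B, needed to erase A's lead:
k = 0
while votes_b + k <= votes_a:
    k += 1
print(f"fake respondents needed to flip a 1-point lead: {k}")   # 11

# Assumed LLM cost per fabricated interview (hypothetical token prices):
tokens_in, tokens_out = 3000, 500                # questionnaire + answers
price_in, price_out = 0.15 / 1e6, 0.60 / 1e6     # dollars per token (assumed)
cost_per_fake = tokens_in * price_in + tokens_out * price_out
print(f"total cost to flip the poll: ${k * cost_per_fake:.4f}")  # well under a cent
```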




Ruomeng Liu reposted

Increasingly, social scientists are advocating for the use of LLM agents as human stand-ins. The assumption is: these things can respond and generate text like humans. But is that true? Our new paper suggests not quite...


Ruomeng Liu reposted

Cool new working paper on why and how cable news threw gasoline on the culture war fire. Link in reply.


Ruomeng Liu reposted

gander is an R package that brings AI directly into RStudio or Positron. Instead of switching between your IDE and a chat window, gander lets you ask questions or request code changes right inside your script. It automatically shares relevant context such as variable names, data…


Ruomeng Liu reposted

AI always calling your ideas “fantastic” can feel inauthentic, but what are sycophancy’s deeper harms? We find that in the common use case of seeking AI advice on interpersonal situations—specifically conflicts—sycophancy makes people feel more right & less willing to apologize.


Ruomeng Liu reposted

Individuals are very different in their social behaviour. In this Perspective, Kuper and colleagues examine interdisciplinary evidence for why this is and what it means for our understanding of individual and collective human behaviour. nature.com/articles/s4156…


Ruomeng Liu reposted

New research finds that conservatives tend to endorse moral absolutism, whereas liberals tend to endorse moral relativism. Moral absolutists are more likely to support banning practices they deem immoral. psycnet.apa.org/record/2026-54…


Ruomeng Liu reposted

🚨 New working paper! 🚨 Happy to share my new data on affective polarization by party across states, CDs, counties, and towns from 2009-23. Key point: polarization is not just an individual trait... contexts and electorates can be "polarized" too! 1/ papers.ssrn.com/sol3/papers.cf…
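
The point that contexts, not just individuals, can be polarized is easy to make concrete by aggregating an individual-level measure to a geography. The sketch below uses one common operationalization (the in-party minus out-party feeling-thermometer gap) averaged by county; the data and column names are hypothetical, and this is not necessarily the paper's estimator.

```python
# Aggregate an individual-level affective polarization measure to counties.
# Hypothetical data and columns; one standard operationalization, not the paper's.
import pandas as pd

survey = pd.DataFrame({
    "county":    ["A", "A", "B", "B", "B"],
    "therm_in":  [95, 80, 70, 65, 85],   # feeling toward own party (0-100)
    "therm_out": [10, 30, 55, 60, 20],   # feeling toward the other party (0-100)
})
survey["affective_gap"] = survey["therm_in"] - survey["therm_out"]
print(survey.groupby("county")["affective_gap"].mean())   # county-level polarization
```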


Ruomeng Liu reposted

Very interesting!


Ruomeng Liu reposted

🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**. Paper: arxiv.org/pdf/2509.08825
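
A minimal simulation of the failure mode, under assumptions of my own: two hypothetical annotator configurations with different (and, in the second case, group-correlated) error rates are applied to the same simulated texts, and the downstream estimate moves from a small positive effect toward a sign-flipped one. No real LLM is called; the error rates are made up for illustration.

```python
# Simulated illustration of "LLM hacking": the downstream conclusion depends
# on which annotator configuration you pick. The "annotators" are simulated
# error processes, not real LLM calls; all rates are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                        # e.g., treatment vs. control texts
true_label = rng.binomial(1, 0.30 + 0.02 * group)    # small true effect: +2 points

def annotate(y, g, fp, fn):
    """Misclassify labels; fp/fn rates may differ by group (the dangerous case)."""
    fp_rate = np.where(g == 1, fp[1], fp[0])
    fn_rate = np.where(g == 1, fn[1], fn[0])
    up = (y == 0) & (rng.random(len(y)) < fp_rate)
    down = (y == 1) & (rng.random(len(y)) < fn_rate)
    return np.where(up, 1, np.where(down, 0, y))

# Two hypothetical prompt/model configurations with different error profiles.
config_a = annotate(true_label, group, fp=(0.02, 0.02), fn=(0.02, 0.02))  # mild, group-neutral noise
config_b = annotate(true_label, group, fp=(0.10, 0.02), fn=(0.02, 0.10))  # errors correlated with group

for name, labels in [("true labels", true_label), ("config A", config_a), ("config B", config_b)]:
    effect = labels[group == 1].mean() - labels[group == 0].mean()
    print(f"{name:>12}: estimated effect = {effect:+.3f}")   # config B can flip the sign
```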


Ruomeng Liu reposted

🇺🇸 Can watching dialogue across party lines reduce polarisation? ➡️ L-O Ankori-Karlinsky, @robert_a_blair, J. Gottlieb & @smooreberg show that a documentary of an intergroup workshop reduces polarisation and boosts faith in democracy cambridge.org/core/journals/… #FirstView


Ruomeng Liu reposted

1st paper from my lab out @CommunicationsPsychology @CommsPsychol nature.com/articles/s4427… We show an alternative way to understand how people mentally represent other people's characteristics, namely high-dimensional networks, beyond the popular latent factor models.

🚨3rd preprint from my lab out! with my awesome grad @LuJunsong19474🌟 How do people mentally represent numerous inferences about others?🤯 Prior work proposed low-dimensional rep with latent dimensions. We show high-dimensional network brings new insights🧵👇…
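
A toy contrast between the two representations on synthetic ratings data, with assumed trait counts, factor counts, and an arbitrary edge threshold; this is not the paper's analysis pipeline.

```python
# Toy contrast: latent-factor summary vs. trait network, on synthetic
# person-by-trait ratings. Dimensions and threshold are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_traits, n_factors = 500, 12, 2

loadings = rng.normal(size=(n_traits, n_factors))
scores = rng.normal(size=(n_people, n_factors))
ratings = scores @ loadings.T + 0.5 * rng.normal(size=(n_people, n_traits))

# (1) Latent-factor view: compress the traits onto a few dimensions (PCA here).
centered = ratings - ratings.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
print(f"variance captured by {n_factors} factors: {(s**2 / (s**2).sum())[:n_factors].sum():.2f}")

# (2) Network view: keep every pairwise trait association as a potential edge.
corr = np.corrcoef(ratings, rowvar=False)
edges = (np.abs(corr) > 0.3) & ~np.eye(n_traits, dtype=bool)   # assumed threshold
print(f"edges in the trait network: {edges.sum() // 2}")
```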



Ruomeng Liu reposted

Currently in FirstView: In “Attention and Political Choice: A Foundation for Eye Tracking in Political Science,” Libby Jenke and Nicolette Sullivan explain what eye tracking allows researchers to measure and how these measures are relevant to political science questions.


Ruomeng Liu reposted

This is a useful reading list on recent advances in econometrics.


Ruomeng Liu reposted

what are large language models actually doing? i read the 2025 textbook "Foundations of Large Language Models" by tong xiao and jingbo zhu and for the first time, i truly understood how they work. here’s everything you need to know about llms in 3 minutes↓
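
For readers who want the one-line version of what the thread summarizes: an autoregressive LLM repeatedly scores every token in its vocabulary and samples the next one. A toy sketch with a made-up five-word vocabulary and random scores standing in for a trained transformer:

```python
# Toy autoregressive generation: score every vocabulary token, softmax,
# sample, append, repeat. A random-number generator stands in for the
# trained transformer; the vocabulary is made up.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "model", "predicts", "next", "token"]

def fake_model(context):
    """Stand-in for a transformer: one score (logit) per vocabulary item."""
    return rng.normal(size=len(vocab))

def sample_next(logits, temperature=0.8):
    probs = np.exp(logits / temperature)
    return rng.choice(len(vocab), p=probs / probs.sum())

context = ["the"]
for _ in range(6):                       # generate six tokens autoregressively
    context.append(vocab[sample_next(fake_model(context))])
print(" ".join(context))
```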


Ruomeng Liu reposted

🚨 New paper in @ScienceAdvances Can changing how we argue about politics online improve the quality of replies we get? @THeideJorgensen, @a_rasmussen, and I use an LLM to manipulate counter-arguments to see how people respond to different approaches to arguments. Thread 🧵1/n
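
A rough sketch of the kind of manipulation described, not the authors' actual pipeline: hold the substance of a counter-argument fixed and have an LLM vary only its rhetorical approach. The model name, prompts, and style labels below are placeholders.

```python
# Sketch of LLM-manipulated counter-arguments: same substance, different
# rhetorical approach. NOT the authors' pipeline; model name, prompts, and
# style labels are placeholders. Assumes the openai package and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
counter_argument = "Raising the minimum wage reduces teen employment."
styles = {
    "respectful": "Rewrite this counter-argument politely, acknowledging the other side's values.",
    "evidence":   "Rewrite this counter-argument around a concrete, checkable factual claim.",
    "hostile":    "Rewrite this counter-argument in a dismissive, combative tone.",
}

for label, instruction in styles.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": counter_argument},
        ],
    )
    print(f"--- {label} ---\n{reply.choices[0].message.content}\n")
```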


Ruomeng Liu reposted

This research advances a mechanistic reward learning account of social learning strategies. Through experiments & simulations, it shows how people learn to learn from others, dynamically shaping the processes involved in cultural evolution. @DSchultner nature.com/articles/s4156…
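
In the spirit of that account, and only as a sketch with arbitrary payoff probabilities and parameters, a reward-learning agent can learn whom to copy by updating a value estimate for each information source from reward prediction errors:

```python
# Minimal reward-learning sketch of "learning whom to learn from":
# value estimates for each information source are updated from reward
# prediction errors. Payoffs, learning rate, and temperature are assumed.
import numpy as np

rng = np.random.default_rng(2)
sources = ["own trial-and-error", "copy peer A", "copy peer B"]
p_reward = np.array([0.45, 0.70, 0.30])   # how often each strategy pays off (assumed)
q = np.zeros(3)                            # learned value of each strategy
alpha, beta = 0.1, 4.0                     # learning rate, softmax inverse temperature

for _ in range(500):
    probs = np.exp(beta * q) / np.exp(beta * q).sum()
    choice = rng.choice(3, p=probs)
    reward = float(rng.random() < p_reward[choice])
    q[choice] += alpha * (reward - q[choice])   # reward prediction error update

for name, value in zip(sources, q):
    print(f"{name:>20}: learned value {value:.2f}")   # copying the better peer wins out
```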


Ruomeng Liu reposted

🚨New paper in @TrendsCognSci 🚨 Why do some ideas spread widely, while others fail to catch on? @Jayvanbavel and I review the “psychology of virality,” or the psychological and structural factors that shape information spread online and offline. Thread 🧵(1/n)


Ruomeng Liu reposted

Can large language models (LLMs) fairly annotate data on contentious topics? Our new paper dives into this question—looking at whether LLM-generated labels reflect diverse viewpoints or skew toward majority perspectives. The results are surprisingly nuanced. 🧵

