(2024/09/23) The Soda Lab is currently seeking PhD students for Fall 2025. If you are interested, please check here for more information!
We are the Soda (Social Data and AI) Lab at the Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington. We study social phenomena through large-scale data and computational tools, aiming to tackle big societal problems.
We focus particularly on human behavior on online platforms: its measurement, understanding, design, and the assessment of its implications. People use mobile devices around the clock to access the internet, read the news, watch videos, search for nearby restaurants, chat with friends, and post on social networking sites. These digital footprints enable us to understand individual and collective human behavior: what people like or dislike, how they feel about various topics, and how they behave and engage online.
We develop new computational methods and tools for understanding, predicting, and changing human behavior on online platforms. One of the challenges posed by online data is the diversity and complexity of the datasets. We explore various types of large-scale data; investigate and compare existing tools to overcome their limitations and apply them appropriately; and develop new measurements, machine learning models, and linguistic methods to understand human behaviors online and, ultimately, solve real-world problems.
Our goal is not limited to solving real-world problems; we also tackle problems in online spaces. We are interested in understanding obstacles to a trusted public space online, developing methodologies to make those obstacles transparent, building frameworks to monitor them at large scale in real time, and making the online public space more credible.
We are located at the Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington, IN, USA. We are a member of Complex Networks and Systems Research (CNetS).
We are part of the research team studying 'BRAIN (Belief Resonance and AI Narratives): Understanding Belief-Narrative Resonance in the Era of Generative AI.' This five-year, $7.5 million project is funded by the U.S. Department of Defense.
Aug 2024: Our work 'Neural embedding of beliefs reveals the role of relative dissonance in human decision-making' (under review) is now available on arXiv!
June 2024: Our work 'X-posing Free Speech: Examining the Impact of Moderation Relaxation on Online Social Networks' was presented at the 8th Workshop on Online Abuse and Harms (WOAH) at NAACL 2024.
May 2024: Our work 'A Survey on Predicting the Factuality and the Bias of News Media' is accepted at ACL 2024 Findings.
Apr 2024: Our work 'What is Twitter, a social network or a news media?' (WWW'10) has reached 10,000 citations!
Mar 2024: Our work 'Rematch: Robust and Efficient Matching of Local Knowledge Graphs to Improve Structural and Semantic Similarity' is accepted at NAACL 2024 Findings.
Mar 2024: Our work 'The impact of toxic trolling comments on anti-vaccine YouTube videos' is accepted for publication in Scientific Reports.
Feb 2024: Our work 'ChatGPT Rates Natural Language Explanation Quality Like Humans: But on Which Scales?' is accepted at LREC-COLING 2024.
Jan 2024: Our work 'Public Perception of Generative AI on Twitter: An Empirical Study Based on Occupation and Usage' has been published in EPJ Data Science.