We are the Soda (Social Data and AI) Lab at the Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington. We study social phenomena through large-scale data and computational tools, aiming to tackle big societal problems.
We focus particularly on human behavior on online platforms: measuring it, understanding it, designing for it, and assessing its implications. We use mobile devices throughout the day to access the internet, read the news, watch videos, search for nearby restaurants, chat with friends, and post on social networking sites. These electronic footprints enable us to understand individual and collective human behavior: what people like or dislike, how they feel about various topics, and how they behave and engage. Understanding human behavior on these online platforms has thus become crucial.
We develop new computational methods and tools for understanding, predicting, and changing human behavior on online platforms. One of the challenges posed by online data is the diversity and complexity of the datasets. We explore various types of large-scale data; investigate and compare existing tools to overcome their limitations and apply them appropriately; and develop new measurements, machine learning models, and linguistic methods to understand human behavior online and, ultimately, solve real-world problems.
Our goal, however, is not limited to solving real-world problems offline; we also tackle those arising in online spaces. We are interested in understanding obstacles to a trusted public space online, developing methodologies to make them transparent, building frameworks to monitor them at large scale in real time, and making the online public space more credible.
We are located at the Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington, IN, USA. We are a member of Complex Networks and Systems Research (CNetS).
(2024/02/17) The Fall 2024 admissions process has closed. Thank you all for your interest in our lab (more info).
Our work 'The impact of toxic trolling comments on anti-vaccine YouTube videos' has been accepted for publication in Scientific Reports.
Feb 2024: Our work on evaluating LLMs' capability to 'rate' the quality of explanations has been accepted at LREC-COLING 2024.
Jan 2024: Our work 'Public Perception of Generative AI on Twitter: An Empirical Study Based on Occupation and Usage' has been published in EPJ Data Science.
Dec 2023: Our work 'Is ChatGPT better than Human Annotators? Potential and Limitations of ChatGPT in Explaining Implicit Hate Speech' (WWW Companion, 2023) has already received 100 citations!
Oct 2023: Our work 'Enhancing Spatio-temporal Traffic Prediction through Urban Human Activity Analysis' has been published in ACM CIKM'23.
Aug 2023: Jisun serves as a workshop chair for the Web Conference 2024, and Haewoon serves as a PhD Symposium chair.
July 2023: Our work 'Can we trust the evaluation on ChatGPT?' has been presented at TrustNLP 2023 (co-located with ACL).
6 Mar 2023: Our studies 'Is ChatGPT better than Human Annotators? Potential and Limitations of ChatGPT in Explaining Implicit Hate Speech' and 'Chain of Explanation: New Prompting Method to Generate Higher Quality Natural Language Explanation for Implicit Hate Speech' were accepted to The Web Conference 2023 Poster track.
2 Feb 2023: Our studies 'Political Honeymoon Effect on Social Media: Characterizing Social Media Reaction to the Changes of Prime Minister in Japan' and 'Wearing Masks Implies Refuting Trump?: Towards Target-specific User Stance Prediction across Events in COVID-19 and US Election 2020' were accepted to ACM WebSci'23.