Welcome to the Soda Lab

We are the Soda (Social Data and AI) Lab at the Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington. We study social phenomena through large-scale data and computational tools, aiming to tackle big societal problems.

We focus particularly on human behavior on online platforms: measuring it, understanding it, designing for it, and assessing its implications. People use mobile devices around the clock to access the internet, read the news, watch videos, search for nearby restaurants, chat with friends, and post on social networking sites. These electronic footprints enable us to understand individual and collective human behavior: what people like or hate, how they feel about various topics, and how they behave and engage. Understanding human behavior on these online platforms has therefore become crucial.

We develop new computational methods and tools for understanding, predicting, and changing human behavior on online platforms. One challenge posed by online data is the diversity and complexity of the datasets. We explore various types of large-scale data; investigate and compare existing tools to overcome their limitations and use them appropriately; and develop new measurements, machine learning models, and linguistic methods to understand human behavior online and, ultimately, solve real-world problems.

Our goal, however, is not limited to solving real-world problems; we also tackle problems in online spaces themselves. We are interested in understanding obstacles to a trusted public space online, developing methodologies to make those obstacles transparent, building frameworks to monitor them at large scale in real time, and making the online public space more credible.

Our lab is located at the Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington, IN, USA, and is a member of Complex Networks and Systems Research (CNetS).

(2024/02/17) The Fall 2024 admissions process has closed. Thank you all for your interest in our lab (more info).

News

Mar 2024

Our work 'Rematch: Robust and Efficient Matching of Local Knowledge Graphs to Improve Structural and Semantic Similarity' has been accepted at NAACL 2024 Findings.

Mar 2024

Our work 'The impact of toxic trolling comments on anti-vaccine YouTube videos' has been accepted for publication in Scientific Reports.

Feb 2024

Our work 'ChatGPT Rates Natural Language Explanation Quality Like Humans: But on Which Scales?' has been accepted at LREC-COLING 2024.

Jan 2024

Our work 'Public Perception of Generative AI on Twitter: An Empirical Study Based on Occupation and Usage' has been published in EPJ Data Science.

Dec 2023

Our work 'Is ChatGPT better than Human Annotators? Potential and Limitations of ChatGPT in Explaining Implicit Hate Speech' (WWW Companion, 2023) has reached 100 citations!

Oct 2023

Our work 'Enhancing Spatio-temporal Traffic Prediction through Urban Human Activity Analysis' has been published in ACM CIKM'23.

Aug 2023

Jisun serves as a workshop chair for the Web Conference 2024 and Haewoon serves as a PhD Symposium chair.

July 2023

Our work 'Can we trust the evaluation on ChatGPT?' was presented at TrustNLP 2023 (co-located with ACL).

Mar 2023

Our studies 'Is ChatGPT better than Human Annotators? Potential and Limitations of ChatGPT in Explaining Implicit Hate Speech' and 'Chain of Explanation: New Prompting Method to Generate Higher Quality Natural Language Explanation for Implicit Hate Speech' have been accepted to The Web Conference 2023 poster track.

... see all News