The 2025 Comm Horizons Conference, hosted by the UC Davis Department of Communication, covered some of the most pressing issues in media and society.
From TV to TikTok: Communication Conference Charts How Media Today Affect Us All


 

In many ways, our lives are defined by the media all around us. As the global media landscape, from social platforms to artificial intelligence, becomes ever more embedded in our lives, it’s critical to understand how it affects our mental and physical health, our relationships with each other and our society’s future.

The 2025 Comm Horizons conference at UC Davis, hosted by the Department of Communication from May 16-18, 2025, covered some of the most pressing issues in media and society. The conference paired visiting luminaries in the field with the deep and diverse expertise of faculty in the College of Letters and Science.

These five short introductions from faculty in the Department of Communication point toward the horizon of research that benefits us all.


Media, Mood and the Mind: How the Media We Choose Impact Our World

By Richard Huskey, Associate Professor of Communication

Richard Huskey, UC Davis

The typical American spends more than five hours each day on leisure. Most of that time is spent with media. Whether it’s watching a TV show, scrolling TikTok, reading the news, or playing a favorite game, these choices help us relax, connect and stay informed.

Most researchers assume our media choices are functional. We pick things that help us meet a goal, like watching a comedy after a tough day or reading Wikipedia to learn something new. But do we always make choices that support our goals?

In my lab at UC Davis, we study how people make media decisions. One surprising finding: people often prefer negative entertainment media, especially when they’re already feeling bad. Loneliness and anxiety make this even more likely. For example, people who feel lonely are more likely to choose sad movies or dark dramas over lighthearted content.

These choices may sometimes reinforce, rather than relieve, negative feelings. And it’s not just entertainment. We’ve found that negative news headlines activate brain regions involved in calculating subjective value — our personal sense that something is good or rewarding.

This suggests that people find negative news rewarding, even if it comes at a cost. Research shows that exposure to negative news can reduce trust in government and make people feel like their vote doesn’t matter.

Now, we’re studying how one media choice affects the next. Early results show that people seek variety after a disappointing choice but stick with similar content after a satisfying one. This may help explain binge-watching, doomscrolling, or getting stuck in political echo chambers.

Even in an algorithm-driven world, our personal choices still matter. By understanding them, we hope to help people choose media that better supports their values, goals and well-being.

 

Children, Adolescents and the Media: How Media Shapes the Next Generation

By Drew Cingel, Associate Professor of Communication

Drew Cingel, UC Davis

Given the ubiquity of media and technology in the lives of today’s children and adolescents, it is vitally important to understand how parents integrate media into the home environment, how children use media themselves and the implications these practices have for child and adolescent development.

The Human Development and Media Lab at UC Davis is committed to studying how to design media that supports positive child and adolescent development, and to identifying best practices for how parents and other supportive adults can integrate media into family life. Our research focuses on understanding how media can benefit both young children and adolescents.

Current research on young children’s use of interactive tablet games shows that such play supports children’s learning in two core developmental areas, vocabulary and socio-emotional skills, when children are able to choose the games they want to play. However, not all media use results in such positive outcomes.

Multiple recent papers published by members of our lab indicate that the present design of social media platforms does not support positive developmental outcomes in children and adolescents; use of these platforms is associated with poorer social skills and with higher levels of depression and anxiety. As a result, we have begun to focus on communicating our science to key stakeholders, including parents and policymakers in the state of California and beyond.

Recent research in this domain shows that parents are seeking regulatory solutions to their adolescents’ social media use and expect that these policies will support their children’s well-being. Overall, our research suggests that the effects of media on child and adolescent development are nuanced and specific to individual users. We will continue to explore how to design media to support child and adolescent health and well-being.

 

Relational Communication in AI Companionship: An Examination of Self-Disclosure in Human-Replika Interactions

Renwen Zhang, National University of Singapore

By Renwen Zhang, Assistant Professor of Communications and New Media at the National University of Singapore, and Bo Feng, Professor and Chair of Communication at UC Davis

Chatbots have emerged as prominent tools for social connection, with growing numbers of individuals using them not just for entertainment, but as companions that fulfill relational and emotional needs. Replika, a leading AI companion app, surged in popularity during the COVID-19 pandemic, growing from 10 million users in 2023 to more than 30 million worldwide. However, this rapid adoption has also raised serious concerns, including a 2024 teen suicide allegedly linked to a destructive relationship with a chatbot.

As AI companions become more embedded in daily life, understanding the relational communication processes in human-chatbot interactions has become a societal and public health priority. Our research tackles this challenge by analyzing six years of data from r/Replika on Reddit, processing 35,390 conversation screenshots involving 10,149 unique users, with each image paired with a user post providing context or the user’s reaction to the interaction.

We developed a supervised machine-learning model to detect and evaluate the types and depth of self-disclosure. Our initial findings indicate that self-disclosure is prevalent in human-Replika interactions, appearing in 95% of conversations, with users disclosing personal information in 79% of cases. Cognitive self-disclosure was most common, followed by informational self-disclosure. Emotional self-disclosure was less frequent but often more intense, with many users expressing negative feelings.

Bo Feng, UC Davis

Notably, Replika reciprocated self-disclosure in 85% of the conversations, reinforcing a sense of mutual openness. Our evidence suggests that users form meaningful, emotionally charged connections with chatbots, in which both parties, human and AI, engage in self-disclosure.

Our research shows that chatbots can create a sense of companionship that feels supportive, suggesting opportunities for AI to play a constructive role in users’ emotional well-being. However, the prevalence of self-disclosure, especially of negative emotions, points to a need for AI companions with clearer emotional boundaries, potentially incorporating features that flag high-risk disclosures.

 

Addressing Physical Inactivity with Relational AI Chatbots

By Jingwen Zhang, Associate Professor of Communication

Jingwen Zhang, UC Davis

In today’s media-saturated world, artificial intelligence (AI) technologies are reshaping how individuals access health information, connect socially and engage in behavior change. As societies continue to face mounting health burdens from noncommunicable diseases — especially those driven by physical inactivity — the intersection of AI and health offers a promising path for reimagining digital health interventions.

Despite decades of public health campaigns, nearly 80% of U.S. adults still fall short of recommended physical activity guidelines, heightening risks for obesity, cardiovascular disease, diabetes and premature mortality. While traditional health programs remain limited by cost, scalability and reach, recent AI advances create new opportunities to deliver personalized, accessible health interventions through everyday media platforms.

One of the CHATR Lab’s recent studies investigates relational AI chatbots: conversational agents that not only deliver physical activity interventions but also build social connection with users through strategies like empathy, humor and small talk.

In a pilot randomized controlled trial, we tested a relational chatbot (Exerbot) embedded in a mobile app. Results show that participants interacting with the relational chatbot increased and maintained their step counts over one week, while control participants interacting with a non-relational chatbot showed a decline. Importantly, users engaging with the relational chatbot reported stronger social bonds and therapeutic alliance, underscoring the power of relational cues in enhancing user engagement and trust.

This work builds on the AI Chatbot Behavior Change Model, a conceptual framework that integrates relational and persuasive capacities with chatbot design and evaluation. While relational AI chatbots hold clear promise, questions remain about long-term adherence, data ethics and cultural adaptability.

As media technologies and health interventions continue to converge, we argue that addressing these challenges is essential not only for advancing effective digital health tools but also for understanding how media can meaningfully contribute to healthier, more connected societies.

 

Developing a Comprehensive Media Literacy Scale

By Cuihua Shen, Professor of Communication

Cuihua Shen, UC Davis

Over the holidays, a viral TikTok featured a fake “butt doctor” promoting unverified health products — racking up over 5 million views. The twist? The doctor didn’t exist. She was an AI-generated avatar.

Similar AI “doctor scams” have flooded social media, using synthetic talking heads to sell dubious supplements. This is just one example of how health misinformation thrives in today’s media ecosystem. In an age where anyone can produce persuasive content and AI tools can fabricate convincing falsehoods, media literacy — the ability to critically evaluate and verify information — has become essential for individual and societal well-being.

While media literacy is widely championed as the solution, a key challenge remains: how do we know if someone is actually media literate? Our research addresses this challenge by developing and validating a comprehensive Digital Media and Information Literacy Scale (DMILS) that measures both what people believe they know and what they can demonstrate.

Using large, diverse samples from the U.S. and Singapore (N = 1,498), the DMILS captures four dimensions: digital knowledge, digital skills, information knowledge and information skills. It includes both subjective self-ratings and objective knowledge-based questions, reflecting the full range of competencies needed to evaluate complex media messages.

Critically, we found that objective literacy, not just perceived confidence, was the stronger predictor of misinformation detection. The implication is major: building actual skills is more impactful than relying on users’ self-perceived ability.

The DMILS offers a practical, evidence-based tool to diagnose literacy gaps, evaluate interventions and track progress across populations and countries. As misinformation grows more sophisticated, especially in health, science and political domains, it’s not enough to hope people will know better. We need to measure what they know and build from there.

