Published: 2025-01-07

To address the rapid evolution of AI in communication and media, Hong Kong Baptist University’s (HKBU) School of Communication and Renmin University of China convened a high-profile conference, ‘Embracing Digital Futures: Exploring the Synergy of AI, Communication, and Future Media’, held at HKBU and Beijing Normal University-Hong Kong Baptist University United International College (UIC) from November 29 to December 1, 2024.

Supported by the Beijing-Hong Kong Universities Alliance, the event gathered leading scholars and experts to explore how AI intersects with communication, journalism, media innovation, and societal change. Presenters and discussants addressed the critical challenges and opportunities presented by these technological transformations.

Academic Optimism vs Public Scepticism: A Juxtaposition of Trust in AI’s Potentials

Two major challenges continue to shadow evolving AI applications and human-machine interaction: trust and security. Much of society worries that AI will, at worst, destroy humanity or, at best, replace human jobs. Addressing these concerns in a fireside chat, Andrea Tapia, Dean of the College of Information Sciences and Technology at Pennsylvania State University, pushed back against them and highlighted the dichotomy between how AI’s impact is perceived by society and by the scientific community.

Andrea Tapia speaking during the fireside chat

“The way that artificial intelligence is researched and discussed by scholars and students is that it will provide us with the discovery of patterns too large for human analysis. It will provide us with personalized and customized responses, and the optimization of large systems that will make human activities easier. However, just outside the university, AI is viewed as something very dangerous and unpredictable – the complete opposite of what researchers think of it”, Tapia observed.

Tapia attributed public misunderstanding of AI’s impact and potential to a technological mythology reinforced by decades of Hollywood and mass-media portrayals of AI as the precursor to a dystopian future and the eventual doom of human existence. One way to bridge the gap between academic optimism and public scepticism about AI, according to Tapia, is to shape the narrative and understanding of AI’s societal impacts. “This is where we need communication scholars,” she said. “We need a cautious approach to AI integration, and the field of communication can help contribute to this discourse on AI trust and safety”.

Bu Zhong speaking during the fireside chat

Bu Zhong, Dean of the School of Communication at HKBU, corroborated this position by highlighting the importance of cultural considerations in the deployment of intervention strategies and in public acceptance of AI technologies. “Differences in cultural contexts can significantly influence public trust and the ethical frameworks surrounding AI. This is why communication research is needed, now more than ever, to address these issues and bridge the growing divide between scholarly confidence and societal concerns about AI”, Zhong remarked. He further emphasized the importance of interdisciplinary collaboration to address emerging challenges in AI applications and ethics.

Celine Song speaking during the fireside chat

Celine Song, Associate Dean of Postgraduate Studies in the School of Communication, HKBU, furthered the discussion on AI’s role in spreading misinformation and how communication scholars can contribute to its detection and to building trust in AI systems. “One approach to addressing this issue is through the detection dimension. We collaborated with colleagues in computer science to develop elaborate coding schemes for systematically identifying and classifying misinformation across diverse platforms and media types,” Song explained. “Another approach involves corrective messaging, where we conduct various experiments to test corrective strategies across different demographic groups.”

How Can Communication Research Bridge the Divide?

In a second fireside chat on the role of the communication discipline in navigating the intricacies of AI research and its impact on society, discussants highlighted the field’s best practices and explored how communication researchers can leverage their work to convey AI’s potential to the public.

Speaking about the challenges of communication in AI education, Yan Yan, Associate Dean at Renmin University of China, said, “Communication discipline has borrowed many theories and findings from other disciplines, and one of the future directions of our research is to make contributions to the whole academia and other disciplines”. Yan further underscored the role of journalism in communicating AI positively to the public, adding that communication schools should consider how to situate journalism among the disciplines and adapt journalism and its education to new media environments.

Marie Hardin (left) and Yan Yan (right) speaking at the fireside chat

Another proposed solution was investment in AI literacy and information literacy as a strategy to increase trust in AI. Marie Hardin, Dean of the Donald P. Bellisario College of Communications at Pennsylvania State University, emphasized the critical role of communication in addressing societal challenges and the threats posed by AI, and the urgent need for interdisciplinary efforts to promote news and information literacy. “Programmes at Penn State focus on helping students and the broader population become more critical consumers of information”, she said.

Echoing Yan’s position on the fragmented state of the communication discipline, Hardin stated that the field is often seen as disconnected and lacking a unified theoretical canon, which makes it less visible in highly specialized university settings. “Despite this, the ability of communication discipline to work across various contexts can be considered as a strength, because effective communication is needed to promote understanding and behaviour change at scale, which requires expertise in cooperation with other disciplines”, Hardin surmised.

Speakers at the conference

Given generative AI’s capacity to generate and analyse vast amounts of data to solve complex problems, the discussants further expanded on the need to preserve data diversity. “There is a risk of generative AI becoming degenerative if it leads to a lack of diversity in the data pool”, Tapia explained. “So, we need to preserve original data and maintain diversity”. The discussants called on university management and leaders of academic institutions to advocate for more interdisciplinary collaboration to ensure that AI research is better communicated and understood.