Published: 2024-05-20

Artificial intelligence (AI) agents and social media apps are now heavily relied on for the creativity and convenience they enable. Interdisciplinary researchers are keen to understand how these new technologies influence people's memory, critical thinking, decision making, perceptions, trust, attitudes, and behaviour.

How much can AI be trusted? How reliable are algorithms? How does social media content construct lifestyles? In what ways can digital content promote nationalism? These questions were addressed in the research projects shared by two scholars from the School of Communication in May 2024. Yupeng Li (Assistant Professor, Department of Interactive Media) and Sheng Zou (Assistant Professor, Department of Journalism) presented their work at the Research Mingle session organized by Hong Kong Baptist University's Research Office.

Sheng Zou (L) and Yupeng Li (R) at the Research Mingle

Understanding the trustworthiness of AI systems and information

Generative AI tools such as OpenAI's ChatGPT and DALL-E and Google's Bard have transformed access to information, optimized learning experiences, and increased productivity. Within minutes, these applications can churn out content that would otherwise take hours of mental labour. But how can users be sure that the outputs of these tools are trustworthy? Yupeng Li addressed this question in his talk entitled "Toward a trustworthy digital future: Pioneering reliable AI techniques for social good".

Li argued that large language models (LLMs) are susceptible to various threats: if malicious datasets are fed into a machine learning system, for example, they can compromise the trustworthiness of the output the AI tool generates. He noted that these attacks can originate within the AI system itself (endogenous threats) or from the information the AI interacts with (exogenous threats). If such threats go unaddressed, AI-generated information may have adverse effects on society. For instance, a compromised AI tool trained on erroneous information can become a source of misinformation for unsuspecting users who believe these tools are practically infallible. Similarly, rumours spread easily on social media when such threats render rumour detection models dysfunctional.
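The data-poisoning threat described above can be illustrated with a deliberately tiny toy model. The sketch below is not Li's actual system or data; it uses a hypothetical bag-of-words "rumour detector" and made-up text to show how injecting a few mislabelled examples into training data flips the model's verdict on a claim.

```python
from collections import Counter

# Toy labelled training set for a word-count rumour detector
# (hypothetical examples, for illustration only).
CLEAN = [
    ("vaccine causes instant mutation", "rumour"),
    ("secret lab leak confirmed anonymously", "rumour"),
    ("health authority publishes trial data", "news"),
    ("hospital reports routine statistics", "news"),
]

# An attacker injects mislabelled copies of rumour-style text
# (data poisoning, one of the endogenous threats Li describes).
POISON = [("vaccine causes instant mutation", "news")] * 3

def train(examples):
    """Count word frequencies per label."""
    counts = {"rumour": Counter(), "news": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def predict(model, text):
    """Label by which class shares more vocabulary with the text."""
    scores = {lbl: sum(c[w] for w in text.split()) for lbl, c in model.items()}
    return max(scores, key=scores.get)

claim = "vaccine causes instant mutation"
print(predict(train(CLEAN), claim))           # rumour
print(predict(train(CLEAN + POISON), claim))  # news -- the poisoned model misclassifies
```

Even this crude classifier shows the asymmetry Li points to: the attacker needs only a handful of poisoned records to outweigh the clean signal, while the user sees no visible change in the tool itself.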

Yupeng Li giving his presentation.

Li is therefore interested in how to design trustworthy AI systems in the presence of endogenous and exogenous threats. "The aim is to enhance the performance of machine learning systems in dynamic and uncertain environments, which can improve the reliability of AI tools in real-world settings", Li said. To investigate this, Li constructed a multisource benchmark for fake news detection, carefully curated from 14 Chinese- and English-language fact-checking sources, including HKBU Fact Check. Experiments testing machine learning models against this newly constructed dataset show that models trained exclusively on a single information source perform worse when confronted with information from diverse sources, while learning to distinguish fake from real news on multisource data improves both the performance and the robustness of the resulting models. "The key takeaway of my sharing is that combating misinformation with trustworthy mechanisms is imperative at this point," Li noted, "So, my research aims to build a trustworthy digital future in the era of generative AI".
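The single-source versus multisource comparison can be sketched in miniature. The example below is not Li's benchmark or his models; it uses two hypothetical fact-check "sources" with different vocabularies and a trivial bag-of-words classifier to show why a model trained on one source can generalise worse than one trained on pooled sources.

```python
from collections import Counter

# Toy labelled items from two hypothetical fact-check sources,
# whose fake items use different vocabulary (illustrative data only).
SOURCE_A = [
    ("miracle cure discovered overnight", "fake"),
    ("shocking secret they hide", "fake"),
    ("city council approves budget", "real"),
    ("university publishes annual report", "real"),
]
SOURCE_B = [
    ("celebrity hoax spreads online", "fake"),
    ("viral rumour debunked as hoax", "fake"),
    ("ministry releases trade figures", "real"),
    ("researchers report survey results", "real"),
]

def train(examples):
    """Count word frequencies per label (a minimal bag-of-words model)."""
    counts = {"fake": Counter(), "real": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def predict(model, text):
    """Label by which class shares more vocabulary with the text."""
    scores = {lbl: sum(c[w] for w in text.split()) for lbl, c in model.items()}
    return max(scores, key=scores.get)

def accuracy(model, examples):
    return sum(predict(model, t) == y for t, y in examples) / len(examples)

single = train(SOURCE_A)            # trained on one source only
multi = train(SOURCE_A + SOURCE_B)  # trained on pooled, multisource data

# The single-source model is evaluated on items whose wording it never saw.
print(accuracy(single, SOURCE_B) <= accuracy(multi, SOURCE_B))  # True
```

The toy version compresses the finding to its core: a model fitted to one source's vocabulary and framing has nothing to say about another source's items, whereas pooled training data covers more of the variation it will meet in the wild.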

Politics and lifestyles through videos as multimodal data

People take to social media platforms like TikTok, Instagram, and Snapchat to share everyday life stories through videos and photos. The multimodal (textual, verbal, visual, and sonic) nature of this content creates a sense of authenticity and relatability that drives engagement and interaction with other users. Beyond their lightness and banality, however, these social media artefacts also serve as channels through which political or ideological constructs are expressed in entertaining formats. Sheng Zou terms this 'ideotainment', a play on the term 'infotainment'.

Sheng Zou giving his presentation.

Zou’s presentation titled “Seeing politics and popular culture through video: Banality, interactivity, and multimodality” explains how politics is intertwined with popular culture and explores how social media videos are used as productive formats to express this interconnection. “It is important for us as researchers to look for and locate the political in the banal,” Zou enthused, “It is the mundane moments in life that constitute most of our aesthetic and political experiences which reveal broader social-cultural structures, and social media videos serve this purpose”.

In one of his studies, on rural vlogs on Douyin, a Chinese video-sharing platform, Zou examined how short videos are used to construct and normalise imageries of rural subjects and lifestyles in China, focusing particularly on the "new peasant" campaign launched by Douyin in 2021. The campaign uses hashtags to promote visions of rural lifestyles and agricultural practices in China. From his analysis of these videos, Zou observed that peasant lifestyles are often romanticised as diligent, skilful, and entrepreneurial, their primitive and peaceful ways cast in sharp contrast to the busy routines of urban life. These videos also pander to the curiosity of urban dwellers and idealise rural prosperity initiatives that encourage people to strive towards a better rural future.

In another study, Zou and a team of US-based researchers explored the articulation of Chinese digital nationalism in Douyin videos about Covid-19 vaccines. They argued that both state-affiliated and non-state accounts on Douyin promote nationalism, but often in different ways: state-affiliated accounts tend to emphasize state-centric narratives, while non-state accounts favour individual-oriented narratives that express bottom-up national pride. "Even when people are sharing mundane experiences, they sometimes incorporate tags and music that express their patriotism and inject little bits of nationalistic sentiment into it", Zou stated.

Cross-section of guests and speakers at the event

Users' engagement with digital media and AI technologies has moved beyond basic information and entertainment to become a veritable vehicle for constructing political, social, and cultural realities. It is therefore imperative that these new technologies be optimized to remain dependable and reliable, for social good and a safer digital future.