
Pema Frick
Master's Student
University of Basel
Curriculum Vitae
Pema Frick is a research associate at the Digital Humanities Lab at the University of Basel, where she works on three different projects. With a background in comparative linguistics, religious studies, and digital humanities, Pema’s work bridges cultural analysis and computational approaches. Her current research, under the supervision of Prof. Dr. Ranjodh Singh Dhaliwal, examines the function of memes as visual strategies in political discourse and social storytelling, integrating perspectives from critical algorithm studies with historical image theory and contemporary digital ethnography. She also provides research support to Prof. Dr. Rosa Lavelle-Hill on ongoing and future research projects. Pema is co-authoring work with Prof. Dr. Moniek Kuijpers on the SNSF-funded subproject on digital social reading practices of young adults. Their forthcoming Cambridge University Press publication will present findings from a Q methodology study that combines quantitative and qualitative research methods.
PhD Project
From TikTok’s For You page to Instagram feeds, algorithms have become central to how we navigate the digital world. But are we navigating freely, or are we being navigated? This talk explores the role of algorithms: how they shape our online experience (through behaviour analysis, mediation, curation, and access to information), and how bias is not simply a “glitch” in algorithmic systems but often a feature embedded in their design and purpose.
Drawing on Dourish, Striphas, and Seaver, as well as other emerging critiques of AI, our presentation demonstrates that algorithms are not neutral tools but sociotechnical systems deeply entangled with culture, ideology, and power. These systems blur the lines between persuasion and coercion, offering environments that are psychologically seductive yet difficult to escape.
Using practical examples, such as social media experiments with different user profiles and explorations of music recommendation systems, we will illustrate how automated systems reinforce existing inequalities and filter what is perceived into a “probabilistic preferred reality”. Rather than giving technical explanations, we want to unpack how critical AI methodologies must be reconsidered and adapted as often and as quickly as the AI systems they examine. Instead of relying on deterministic algorithm analysis, which no longer reflects how most of today’s AI tools work, critique should proceed from the structure of their architectures (Offert & Dhaliwal, 2025). By connecting these theories, we encourage critical reflection on the role of algorithms in shaping our everyday lives and ask whether we can reclaim agency from non-human cultural production environments. But has human agency actually been subdued?
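As a purely illustrative aside (not part of the talk or of any cited work), the feedback-loop mechanism behind such a “probabilistic preferred reality” can be made concrete with a minimal, hypothetical Python sketch: a toy recommender suggests more of whatever a simulated user has already clicked, and the user mostly follows the suggestions, so small initial differences tend to harden into a narrow slice of content. All names and parameters here are invented for illustration.

import random
from collections import Counter

CATEGORIES = ["politics", "music", "sports", "science"]

def run_feedback_loop(steps=200, follow_rate=0.9, seed=0):
    """Simulate a naive rich-get-richer recommendation loop."""
    rng = random.Random(seed)
    clicks = Counter({c: 1 for c in CATEGORIES})  # near-uniform starting interests
    shown = Counter()

    for _ in range(steps):
        # Recommend proportionally to past clicks (preferential reinforcement).
        total = sum(clicks.values())
        weights = [clicks[c] / total for c in CATEGORIES]
        recommended = rng.choices(CATEGORIES, weights=weights, k=1)[0]
        shown[recommended] += 1

        # The simulated user usually follows the recommendation,
        # occasionally exploring another category on their own.
        chosen = recommended if rng.random() < follow_rate else rng.choice(CATEGORIES)
        clicks[chosen] += 1

    return clicks, shown

if __name__ == "__main__":
    clicks, shown = run_feedback_loop()
    print("Clicks per category:       ", dict(clicks))
    print("Recommendations per category:", dict(shown))

Run over a few hundred steps, the click and exposure counts typically drift toward one or two categories even though the simulated user started out almost indifferent, which is the narrowing dynamic the talk discusses in cultural and political terms.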
So the question is: if our cultural logic does not allow for unbiased algorithms, what should we actually focus on, and what do we want to change by critiquing them?