Understanding AI and its effects on society
"will AI companions actually reduce loneliness, or will they only heighten it?"
At a time of growing isolation, loneliness, and vulnerability, commercial AI companions provide digital characters that are marketed to act as our friends, trusted confidantes, and even romantic partners. But what do these apps, which have been downloaded by tens of millions of people, mean for human society and our relationships with one another? What ethical concerns are raised when companies have a financial incentive to keep users continually on their apps? And will AI companions actually reduce loneliness, or will they only heighten it?
Dr Raffaele Ciriello, a senior lecturer at the University of Sydney who researches and writes extensively about AI companion apps, is at the forefront of efforts to understand this new technological and ethical frontier. Join us to explore the rise of AI companions, the effects they have on their users, and the implications they hold for the future of human relationships.
"the EU approved its AI Act to foster AI innovation while safeguarding fundamental citizen rights"
Information Technology is an industry that has historically gone largely unregulated, in part because of its novelty and the speed of its evolution. This lack of regulation has often been framed as a decisive enabler of innovation, and many important industry stakeholders have actively defended a soft approach to regulation. However, some of the major breakthroughs in the field have stirred extensive legal battles, e.g. Apple v. Microsoft and Apple v. Samsung, evidencing the need for regulation to protect innovation and intellectual property. More recently, the speed and magnitude with which breakthroughs have impacted society have prompted governments to introduce technology regulations such as the General Data Protection Regulation (GDPR), the Digital Services Act (DSA) and the Digital Markets Act (DMA), adopted by the European Union (EU) and emulated by other major jurisdictions. In 2024, the EU approved its AI Act to foster AI innovation while safeguarding fundamental citizen rights. In this meet-up, we will discuss the major aspects of the AI Act and the role played by the AI Office, the EU institution responsible for implementing it. In particular, we will discuss the potential impact of systemic risks from general-purpose AI (GPAI) models, such as GPT and Gemini, and how the AI Act attempts to mitigate them.
Alberto Franzin is a technology specialist in the AI Safety unit of the European AI Office. He holds a Master’s degree in Computer Engineering from the University of Padova in Italy and a Ph.D. from the Université libre de Bruxelles (ULB) in Belgium, where he also worked as a postdoctoral researcher. He has worked on various topics in artificial intelligence, optimization, and machine learning, through academic projects and industry collaborations.
"news organisations face growing disintermediation, reduced visibility, and weakening relationships with audiences"
As search engines and platforms increasingly deliver AI-generated answers rather than links, news organisations face growing disintermediation, reduced visibility, and weakening relationships with audiences. This seminar critically examines how developments in search and generative AI summaries are reshaping access to news and redistributing power within the media ecosystem. While these dynamics pose particular challenges for journalism, the talk also explores how public service media such as the BBC are using AI across news and wider media production to play to organisational strengths, mitigate third-party platform dependencies, and uphold public service values in an AI-mediated environment.
Anna Rezk-Parker is a Postdoctoral Research Associate in Computer Science at the University of Glasgow, working on participatory audits of search and information retrieval systems as part of the Responsible AI UK–funded PHAWM project. Her research focuses on AI in media and journalism, search and recommender systems, personalisation, and designing for user agency in automated systems. She completed her PhD in Design Informatics at the University of Edinburgh in collaboration with BBC R&D.
"Instead of asking whether an LLM knows what it is talking about, I suggest we first ask whether it is talking about anything at all."
Large language models (LLMs) can already be made more linguistically responsible: not by turning them into artificial agents or subjects, but by aligning the informational relations that generate their outputs with the semantic ambitions of the text itself. In this talk, I argue that this route to progress has been obscured by the tendency to frame debates about LLMs in terms of whether they possess human-like understanding. Instead of asking whether an LLM knows what it is talking about, I suggest we first ask whether it is talking about anything at all. Drawing on work in the philosophy of language, especially Gareth Evans' account of reference, I focus on language that purports to refer to particulars (specific objects or events), and ask what causal and informational relations must obtain for such reference to succeed. I argue that current LLM architectures generally fail to meet these conditions, even while generating text that presupposes them, resulting in systematic semantic anomalies. We can respond to this either by constraining our uses of LLMs to match their actual semantic capacities, or by enhancing LLM-based systems so that the informational processes producing their outputs support singular reference. I illustrate the second approach by looking at three cases with genuine promise: retrieval-augmented generation (RAG), multimodal imagistic reasoning, and multi-step chain-of-thought reasoning.
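To make the RAG case a little more concrete, here is a minimal, hypothetical sketch (not drawn from the talk) of how a retrieval step can tie generated text to specific, identifiable documents, so that the output stands in an informational relation to particular sources rather than only to statistical patterns. The `Document` class, the toy word-overlap retriever, and the `generate` callback are all illustrative assumptions, not any real system's API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Document:
    """A particular: a specific source text with a stable identifier."""
    doc_id: str
    text: str

def retrieve(query: str, corpus: List[Document], k: int = 2) -> List[Document]:
    """Toy lexical retriever: rank documents by word overlap with the query."""
    def overlap(doc: Document) -> int:
        return len(set(query.lower().split()) & set(doc.text.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def answer_with_provenance(query: str,
                           corpus: List[Document],
                           generate: Callable[[str], str]) -> dict:
    """Condition generation on retrieved documents and report which were used,
    so claims in the answer can be traced back to specific particulars."""
    evidence = retrieve(query, corpus)
    prompt = "\n".join(d.text for d in evidence) + "\n\nQuestion: " + query
    return {"answer": generate(prompt), "sources": [d.doc_id for d in evidence]}

if __name__ == "__main__":
    corpus = [
        Document("minutes-2024-05", "The committee met on 5 May 2024 and approved the budget."),
        Document("minutes-2024-06", "The June meeting was postponed due to low attendance."),
    ]
    echo = lambda prompt: prompt.splitlines()[0]  # stand-in for an actual LLM call
    print(answer_with_provenance("What happened at the May meeting?", corpus, echo))
```

The point of the sketch is only that the retrieved documents, and their identifiers, are part of the causal chain producing the answer, which is the kind of informational relation the talk argues singular reference requires.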
Ron Chrisley is Professor of Cognitive Science and Artificial Intelligence at the University of Sussex, where he is also Director of the Centre for Cognitive Science (COGS). He has been researching at the intersection of AI, cognitive science and philosophy since the 1980s. After receiving his BS in Symbolic Systems from Stanford in 1987, he investigated expert systems and machine learning at NASA, Xerox PARC, the Helsinki University of Technology (as a Fulbright scholar), ATR Labs in Japan, and the University of Birmingham (as a Leverhulme fellow). In 1992 he took up a lectureship in the School of Cognitive and Computing Sciences at the University of Sussex, and he received a DPhil in Philosophy from the University of Oxford in 1997. In 2021 he co-founded the generative voice AI startup Tenyx, which was acquired by Salesforce in September 2024.
"if you have an idea and want to get feedback on the concept and general idea, then I think vibe coding can fast track the process significantly"
Code generation is one of the major applications of Large Language Models (LLMs), the most recent AI breakthrough to draw significant attention from society. Automating and popularising coding has long been an aim of computer science, and software companies and communities have now coined the term “vibe coding” to promote LLM-based code generation. Indeed, vibe coding became the Collins Dictionary word of the year for 2025, which attests to the hype around this approach. However, LLMs raise a number of concerns regarding their training, the aggressive behaviour of their provider companies, and the sociotechnical effects they produce. In this meet-up, we will discuss vibe coding scenarios and experiences in higher education and whether it could be a means for democratising access to coding. Importantly, we will also discuss the sociotechnical risks that should be considered in these scenarios, as well as possible mitigation procedures that could be employed to address potential harms.
Michaela Hruskova, Vassilis Galanos, Simon Powers, and Conor McKeown are Lecturers at the University of Stirling. Michaela is a Lecturer in Entrepreneurship and Vassilis is a Lecturer in Digital Work, both at the Stirling Business School. Simon is a Lecturer in Trustworthy Computer Systems at the School of Computing Science & Mathematics, and Conor is a Lecturer in Digital Media at the School of Communications, Media and Culture. Along with multiple other academics at Stirling, they form the StirAI multidisciplinary research lab (formerly JustAI Lab).
"tools often created with positive or neutral intentions can still reinforce stereotypes, deepen inequalities and create new forms of harassment and exclusion"
In this seminar, I will present the first findings from my ongoing research on how generative AI, including chatbots, deepfakes and virtual assistants, can reproduce and amplify gender-based violence on social media. We will examine how tools often created with positive or neutral intentions can still reinforce stereotypes, deepen inequalities and create new forms of harassment and exclusion. After sharing the initial data, I will open the floor for a collective conversation on how we can imagine and design AI tools that do not reproduce gender-based violence or other forms of oppression, and instead contribute to building fairer, safer and more inclusive digital spaces.
Carolline Querino is a specialist in Gender Mainstreaming, focusing on Artificial Intelligence in social media and Environmental Justice, with over eight years’ experience leading social impact projects across Latin America and Europe. As co-founder of Caju Consultoria Nordestina, she develops intersectional strategies that connect communities, public policies, and climate justice, always centring marginalised groups. Her research explores gender-based violence, misinformation, and the social impact of AI, with an emphasis on ethics and community-driven methodologies. She brings expertise in MEAL frameworks, advocacy, and participatory approaches, having collaborated with universities, grassroots movements, government bodies, and international organisations. She is a Chevening Scholar and holds a Master’s in Gender, Violence and Conflict from the University of Sussex.
"organisations across the world are facing growing challenges in implementing responsible AI at scale"
As adoption of Generative AI solutions increases and AI more broadly comes under greater regulatory scrutiny, organisations across the world are facing growing challenges in implementing responsible AI at scale. This talk will explore the key challenges organisations encounter in adopting responsible AI, including the establishment and operationalisation of key principles and the navigation of evolving regulations. We will examine how organisations can build effective governance frameworks that balance innovation with ethical considerations, and the role of regulators in setting clear guidelines for AI deployment. Additionally, we will discuss the challenges of fostering a culture of responsibility within organisations, particularly in fast-moving AI environments. The session will provide practical insights on how to address these challenges and successfully integrate responsible AI practices at scale.
Wiktoria Kulik is a Responsible AI Manager at Accenture, where she supports clients across multiple sectors in AI governance and responsible use of technology. She previously led Digital Ethics consulting at Sopra Steria, supporting clients in financial services and other sectors in embedding digital ethics solutions in their organisations. Wiktoria is a former policy advisor at the Centre for Data Ethics and Innovation, where she supported the launch of the UK-US Privacy Enhancing Technologies Prize Challenge and conducted research to enable the creation of a trustworthy Smart Data ecosystem in the UK. She was also a member of the Oversight Group that advised the DARE UK programme. She has degrees in Philosophy and Computer Science, with specialisation in Speech and Language Processing, as well as significant experience in the tech sector.
"dominance move is very intuitive but hard to calculate due to its combinatorial nature"
Dominance move (DoM) is a binary quality indicator that can be used in multi-objective and many-objective optimisation to compare two solution sets. DoM is very intuitive but hard to calculate due to its combinatorial nature. In this talk, different mathematical models for computing DoM are presented and analysed, and a computationally fast approximate approach is also discussed. Computational results are promising, and an upper-bound analysis of the approximation ratio would be a useful next step.
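To illustrate why DoM is combinatorial, here is a small, hypothetical brute-force sketch (not the speaker's models) for a minimisation problem: every assignment of points in Q to points in P is enumerated, each point of P moves just far enough to weakly dominate its assigned targets, and the cheapest total movement wins. The L1 move distance is an assumption made for illustration; the exact formulation follows the talk.

```python
import itertools
import numpy as np

def dominance_move_bruteforce(P, Q):
    """Brute-force dominance move (minimisation, L1 movement; illustration only).

    Enumerates every assignment of points in Q to points in P; each point of P
    then moves to the componentwise minimum of itself and its assigned targets,
    which is the cheapest position weakly dominating all of them. The number of
    assignments grows as |P|**|Q|, which is why exact DoM is combinatorially hard.
    """
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    best = float("inf")
    for assignment in itertools.product(range(len(P)), repeat=len(Q)):
        total = 0.0
        for j, p in enumerate(P):
            targets = Q[[k for k, a in enumerate(assignment) if a == j]]
            if len(targets):
                destination = np.minimum(p, targets.min(axis=0))
                total += float(np.sum(p - destination))
        best = min(best, total)
    return best

# Example: P must move a total of 0.5 so that both points of Q are weakly dominated.
P = [[1.0, 2.0], [3.0, 1.0]]
Q = [[1.0, 2.0], [2.5, 1.0]]
print(dominance_move_bruteforce(P, Q))  # -> 0.5
```

The fast approximate approach discussed in the talk avoids exactly this enumeration; the sketch above is only meant to show what the exact indicator asks for.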
Elizabeth Wanner received her B.S. degree in Mathematics in 1994, and her M.Sc. in 2002, both from the Universidade Federal de Minas Gerais, where she also completed her Ph.D. in Electrical Engineering in 2006. Until December 2022, she served as an Associate Professor in the Department of Computer Engineering at the Centro Federal de Educação Tecnológica de Minas Gerais, in Belo Horizonte, Brazil. Since January 2023, she has been a Reader in Computer Science at Aston University, Birmingham, UK. Her research focuses on evolutionary computation, global optimization, constraint handling, and multiobjective optimization.