Residentials in Cambridge
You are expected to attend all four of the week-long residentials in Cambridge, as follows.
22–26 September 2025
5–9 January 2026
27 April – 1 May 2026
7–11 September 2026
In addition to the in-person taught residentials, there will be a number of pre-recorded lectures from a range of guest speakers, as well as live discussion sessions. The live sessions will typically take place weekly or fortnightly on Fridays, during term-time only.
There will also be some online dissertation workshops in Year 2, to help students plan their dissertations and share ideas with fellow students. These are planned for 22 January 2027 and 23 April 2027 (dates tbc).
Year 1
Module 1: The Nature and History of AI
Aims: To provide students with theoretical, academic and practical understanding of how artificial intelligence has been developed, used and understood historically across different traditions, and how it is being applied in society today.
Key areas:
● The technical foundations of AI and the current capabilities and status of the technology
● Current applications of AI across a range of domains and sectors
● The history of AI and its relationship to other disciplines and technologies, including the history of computing and administration
● The nature and measurement of intelligence, and comparisons between human, animal and artificial intelligence
Module 2: Ethical and Societal Challenges
Aims: To provide students with a comprehensive understanding of key ethical and societal challenges raised by AI, through engagement with the contemporary critical literature and case studies.
Key areas:
● Critical discussion of the following themes:
- Privacy
- Fairness and equality
- Safety
- Accountability
- Human dignity and autonomy
● The relationship between the near- and long-term challenges of AI
● Comparison of different global perspectives
Module 3: Governing AI
Aims: To critically engage with a range of practical approaches to navigating the ethical and societal challenges of AI, including those found in policy, regulation, law, ethics principles, and social action.
Key areas:
● Comparison and critical analysis of current AI policy initiatives worldwide
● Overview and critical discussion of different codes of practice and principles for AI ethics, and their implementation
● Critical discussion of methods for ethical impact assessment
● Critical discussion of methods for ethical design
● The role of activism and civil society
Year 2
Module 4: Theories and Methods
Aims: To increase rigour and depth in understanding and analysing the ethical and societal challenges of AI by introducing students to foundational knowledge, theories and methods in established academic disciplines.
Key areas:
● Theories and methods from the following disciplines:
- Philosophical ethics
- The history and philosophy of science
- Literary and cultural studies
- Social and behavioural sciences
- Futures studies and foresight methods
- Critical design studies
Module 5: Dissertation
Aims: To enable students to apply and develop their learning from Modules 1–4 through an innovative, independent research project in an area relevant to the course, with the topic and scope to be agreed with the supervisor.
Assessment
Assignments on the MSt are divided into two components: the essays, taken as a group, and the dissertation.
Students are expected to submit academically rigorous, properly referenced assignments. Guidance on academic writing is offered through the Course Guide and VLE, through wider University resources (including within Colleges), and within the first module.
As students enter the MSt with differing levels of experience in academic writing, they are expected to continue developing these skills independently, as needed, throughout the programme.
The modules are assessed as follows:
● Module 1: 2,000-word essay (8% of the final grade)
● Modules 2, 3 and 4: 4,500-word essay each (each worth 14% of the final grade)
All summative assessment is compulsory. Students will receive continual formative feedback throughout the course, delivered through a variety of strategies and techniques, including evidence of regular reflection.
In the second year (Module 5), students will write a 15,000-word dissertation, which accounts for 50% of the final grade.
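As a quick check, the component weightings sum to the full final grade:
8% + (3 × 14%) + 50% = 8% + 42% + 50% = 100%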
Course Team
Dr Henry Shevlin (PhD, CUNY Graduate Center, 2016; BPhil, Oxford, 2009) is Education Director at the Leverhulme Centre for the Future of Intelligence. His work focuses on issues at the intersection of philosophy of mind, cognitive science, and animal cognition, with a particular emphasis on perception, memory, and desire. Since 2015, he has served as a student committee member of the Association for the Scientific Study of Consciousness.
Dr Garfield Benjamin is an Assistant Teaching Professor at ICE. Garfield is a Science and Technology Studies scholar focused on the social inequalities surrounding AI and related technologies. Their research and teaching are concerned with issues of power, identity, trust, discrimination, privacy, injustice and marginalisation. Garfield's current work builds on queer performativity to unpick the roles and norms embedded within technology discourses. Garfield was previously a Senior Lecturer in Sociology at Solent University, and a Research Officer at the Birmingham Centre for Cyber Security and Privacy. They are committed to high-quality, socially engaged academic activity, and to creating opportunities to model the aim of tackling social inequalities through their own research, teaching and engagement with wider communities.
Dr Jonnie Penn, FRSA, is a historian of information technology, broadcaster, and public speaker. He is an Affiliate at the Berkman Klein Center at Harvard Law School, a Research Fellow at St Edmund's College, University of Cambridge, a New York Times bestselling author, and a Fellow of the Royal Society of Arts. He has held prior fellowships at the MIT Media Lab, Google, and the British National Academy of Writing. He writes and speaks widely about the future of work, data governance, youth and worker empowerment, and sustainable digital technologies.
Dr William Chan is a Teaching Fellow of the Leverhulme Centre for the Future of Intelligence, where he contributes to the MSt in AI Ethics and Society and MPhil in Ethics of AI, Data and Algorithms. Alongside his teaching work, he is a Data Ethics Consultant at Information Governance Services, working with legal professionals to produce industry-facing AI/data ethics education, events, opinions and training materials.
Dr Milena Ivanova is a philosopher of science interested in the relationship between science and art, the role of aesthetic values and creativity in scientific pursuits, and whether automated scientific discoveries can be valued aesthetically. Dr Ivanova studied History and Philosophy of Science at the University of Athens and completed her PhD at the University of Bristol, supported by the British Society for the Philosophy of Science and the Royal Institute of Philosophy. Dr Ivanova is a Bye-Fellow, Director of Studies and Graduate Tutor at Fitzwilliam College, University of Cambridge.
Dr Jędrzej (Jedrek) Niklas is a Teaching Fellow at the Leverhulme Centre for the Future of Intelligence, where he contributes to the MSt in AI Ethics and Society. Jędrzej is a socio-legal scholar whose research focuses on the complex relationship between technology, governance, and social justice. He has written extensively about the proliferation of data technologies in the public sector, the evolution of digital rights, and issues surrounding automated discrimination.
Dr Achim Rosemann teaches on the MSt in AI Ethics and Society and the MPhil in Ethics of AI, Data and Algorithms. He has an interdisciplinary background in anthropology, science and technology studies (STS), and the ethics and governance of AI and emerging technologies more widely. In collaboration with UNESCO’s Bioethics and Ethics of Science and Technology Section, Achim is leading the UKRI-funded pilot project “Strengthening the Role of Civil Society in the Global Governance of AI”.