Aims of the course
- To understand the basic principles and development of Large Language Models (LLMs) within Generative AI and their impact across various sectors.
- To understand the technical architecture of LLMs, including the transformer model and attention mechanisms, and to grasp the significance of dataset size and biases.
- To create, enter, and refine effective prompts for LLMs (using one or more of ChatGPT, Claude, Gemini, and Copilot), appreciating the influence of prompt design on output quality and bias.
- To gain practical insights into more advanced LLM interaction strategies, including prompt tuning and zero-shot and few-shot techniques.
- To appreciate the application of these LLM tools and technologies to real-world problems.
- To consider the future development potential of LLMs, as well as considerations for their responsible use.
Course content overview
This course provides an overview of Large Language Models (LLMs), from their foundational concepts to their advanced applications. It begins with an introduction to the basics and history of LLMs, examining the technical architecture behind these models, including transformers, self-attention mechanisms, and the large datasets used to train them.
The course is designed to engage students through practical exercises, with an emphasis on writing effective prompts for LLMs and tuning their outputs. Participants will learn how to interact with LLMs effectively and customize their outputs for specific needs. This practical training is combined with a look at real-world applications of Generative AI and LLM technology, encouraging critical thinking about the societal implications of AI's advancement.
By combining this overview of LLM technologies with hands-on practice, the course aims to equip students with a solid understanding of LLMs, preparing them to navigate and contribute to the evolving field of artificial intelligence.
Target audience
This course is for learners interested in exploring the cutting-edge technology of Generative AI and Large Language Models, who want to understand how these tools can be used in their own lives and across other domains, as well as those interested in the societal impact of AI technologies and in the capabilities and limitations of LLMs.
It will also be of interest to professionals across industries looking to use AI to enhance decision-making, customer service, and operational efficiency through practical applications of ChatGPT, Claude, Gemini, Copilot, and other LLMs.
Finally, it will appeal to teachers and educators aiming to integrate LLMs into their teaching methods or research projects, enhancing engagement and exploring recent generative AI frontiers.
Schedule (this course is completed entirely online)
Orientation Week: 17-23 February 2025
Teaching Weeks: 24 February-30 March 2025
Feedback Week: 31 March-6 April 2025
Teaching Week 1 - Introduction to Large Language Models (LLMs)
This week will introduce the underlying concepts of Generative AI, focusing on Large Language Models (LLMs). Participants will learn what LLMs are, their history, development, and the basic principles of their operation. This week sets the stage for a deeper exploration of the capabilities, benefits, and limitations of these technologies.
Learning outcomes:
• To understand what Large Language Models are and their role in the Generative AI landscape.
• To identify the key components and principles that enable LLMs to generate text.
• To discuss the historical development and evolution of LLMs.
• To discuss the potential impact of LLMs on various sectors.
Teaching Week 2 - Building blocks of LLMs
This week delves deeper into the technical aspects of LLMs, including large datasets, transformer architecture, self-attention mechanisms, and large parameter counts. Students will learn about pre-training, fine-tuning, reinforcement learning, and the generative capabilities of LLMs, setting a foundation for understanding practical applications.
Learning outcomes:
• To gain a good understanding of the architecture and mechanisms that power LLMs.
• To learn about the importance of large data sets in training LLMs and how biases in these datasets can influence model outputs.
• To understand the concepts of pre-training, fine-tuning, and their importance in developing specialized LLM applications.
• To explore the generative capabilities of LLMs, including text generation and prediction.
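As a brief preview of the material covered this week (no familiarity with the notation is assumed, and the course introduces these ideas gradually), the self-attention mechanism at the heart of the transformer architecture is commonly summarised by the scaled dot-product attention formula:
Attention(Q, K, V) = softmax(Q Kᵀ / √d_k) V
where Q, K, and V are the query, key, and value matrices derived from the input tokens, d_k is the dimension of the keys, and the softmax weights determine how strongly each token attends to every other token.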
Teaching Week 3 - Introduction to Prompt Engineering
This week introduces popular LLMs such as ChatGPT, Claude, Gemini, and Copilot, and focuses on practical prompt engineering: writing prompts that result in effective model responses. Students will complete hands-on activities to create, refine, and evaluate prompts, and will examine the impact of prompt design on output quality and bias.
Learning outcomes:
• To write effective prompts for diverse applications and contexts.
• To evaluate and refine prompts to improve interaction quality with LLMs.
• To understand the impact of prompt design on output bias and methods to mitigate it.
• To gain practical experience through prompt-based exercises and examples.
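As a simple illustration of the kind of refinement practised this week (the example prompts below are written for this outline rather than taken from the course materials), compare a vague prompt with a more specific one:
Vague prompt: "Tell me about climate change."
Refined prompt: "In three bullet points suitable for secondary-school students, summarise the main human causes of climate change and give one everyday example for each."
The refined version specifies the audience, the format, and the scope, which typically leads to a more useful and predictable response.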
Teaching Week 4 - Prompt Tuning and Advanced Interaction Strategies
This week focuses on prompt tuning and advanced strategies for further customizing LLM outputs. Participants will cover prompt tuning techniques, including zero-shot and few-shot learning and chain-of-thought prompting, and apply them to a mini-project. We will also discuss strategies for overcoming limitations and biases through advanced prompt engineering.
Learning outcomes:
• To master prompt tuning techniques for task-specific enhancements.
• To implement advanced prompting strategies to navigate limitations.
• To develop a refined understanding of how prompt structure influences LLM responses.
• To engage in exercises that apply advanced prompt engineering in various contexts.
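To illustrate the techniques named above (the examples below are written for this outline rather than taken from the course materials): a zero-shot prompt states the task directly, a few-shot prompt adds worked examples, and a chain-of-thought prompt asks the model to reason step by step.
Zero-shot: "Classify this review as Positive or Negative: 'The battery died after two days.'"
Few-shot: "Review: 'I loved the camera.' -> Positive. Review: 'The screen cracked within a week.' -> Negative. Review: 'The battery died after two days.' ->"
Chain-of-thought: "Is the following review Positive or Negative? Explain your reasoning step by step before giving a final answer: 'The battery died after two days.'"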
Teaching Week 5 - Practical Applications and Future Trends
This week is dedicated to exploring real-world applications of LLM tools and technologies and the future of LLMs. Participants will review case study examples and engage in small project work that applies their LLM of choice (ChatGPT, Claude, Gemini, Copilot, …) to solve practical problems. This week will encourage students to think critically about the future impact and evolution of LLMs in society.
Learning outcomes:
• To apply LLMs to real-world scenarios.
• To critically assess potential future developments and impacts of LLM technology.
• To complete a small project demonstrating practical LLM applications.
• To participate in discussions on responsible use and long-term considerations of LLMs.
Each week of an online course is roughly equivalent to 2-3 hours of classroom time. In addition, participants should expect to spend roughly 2-3 hours on reading and other course work, although this will vary from person to person.
While our tutor-led online courses have a specific start and end date and follow a weekly schedule (for example, week 1 will cover topic A and week 2 will cover topic B), they are designed to be flexible and would not normally require participants to be online on a specific day of the week or at a specific time of day. Some tutors may schedule times when participants can be online together for web seminars; these will be recorded so that those who are unable to be online at those times can still access the material.
Virtual Learning Environment
Unless otherwise stated, all course material will be posted on the Virtual Learning Environment (VLE) so that it can be accessed at any time throughout the duration of the course. Interaction with your tutor and fellow participants will take place in a variety of ways that allow for both synchronous and asynchronous learning (discussion boards, etc.).
Certificate of participation
A Certificate of Participation will be awarded to participants who contribute constructively to weekly discussions and exercises/assignments for the duration of the course.