FDD3559 Large Language Models for Computer Scientists 7.5 credits

The LLM for Computer Scientists course is designed to provide students with a deep understanding of the principles and practices involved in building Large Language Models (LLMs). Through a comprehensive exploration of four main modules, covering the foundations of LLMs, key LLM architectures, building techniques such as prompting, RAG, and agents, and optimization techniques, the course integrates theoretical insights with practical applications.
Information per course offering
Information for programme students, Spring 2025 (start 17 Mar 2025)
- Course location: KTH Campus
- Duration: 17 Mar 2025 - 2 Jun 2025
- Periods: P4 (7.5 hp)
- Pace of study: 50%
- Application code: 61375
- Form of study: Normal Daytime
- Language of instruction: English
- Course memo: Not published
- Number of places: Places are not limited
- Target group: No information inserted
- Schedule: Not published
- Part of programme: No information inserted
Contact
Course syllabus as PDF
Please note: all information from the Course syllabus is available on this page in an accessible format.
Course syllabus FDD3559 (Spring 2025–)
Content and learning outcomes
Course disposition
The course consists of four modules covering 16 topics, with approximately four key topics per module. Within each module, the following teaching activities are planned: two lectures, a list of assigned reading materials, one to two quizzes, and an assignment. In each lecture, the instructor introduces the theoretical concepts of the key topics in that module. After each lecture, students have a week to read the related materials and practice their knowledge in quizzes and assignments. Students will work in groups of 3-4 to complete a final project. In the final project, each group will choose multiple topics taught in the course and practice them in a case study by designing and implementing a strategy and by analyzing and evaluating the effectiveness of that strategy.
Course contents
This course aims to provide students with a deep understanding of the principles and practices involved in building Large Language Models. The course combines theoretical knowledge with practice through demonstrations in use cases, helping students apply the acquired knowledge in their own research work. The main topics covered in the course include:
1) Module 1: Introduction to LLMs: History and evolution of LLMs; Building blocks of LLMs; Key LLM architectures; Environmental, computational, and ethical considerations
2) Module 2: Foundations of LLMs: LLMs in practice; Prompting; RAG; LangChain and LlamaIndex
3) Module 3: Building LLMs: Prompting with LangChain; Indexes, retrievers, and data preparation; Advanced RAG; Agents (a minimal retrieval-augmented prompting sketch follows this list)
4) Module 4: Fine-Tuning LLMs: Understanding fine-tuning; Low-Rank Adaptation; LLM deployment; Quantization and pruning
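To make the retrieve-then-generate pattern from Modules 2 and 3 concrete, the sketch below shows a minimal, framework-free retrieval-augmented prompting loop in Python. It is illustrative only, not course material: the embed and cosine helpers are a toy bag-of-words stand-in for a real embedding model, the function names are assumptions, and the final LLM call (via LangChain, LlamaIndex, or a raw API) is omitted.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call an embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    # Retrieval-augmented prompt: retrieved context is prepended to the question.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    docs = [
        "LoRA adds trainable low-rank adapters to frozen pretrained weights.",
        "Quantization stores model weights in fewer bits to save memory.",
        "RAG retrieves relevant documents and adds them to the prompt.",
    ]
    print(build_prompt("What does RAG do?", docs))
    # The resulting prompt would then be sent to an LLM, e.g. via LangChain or LlamaIndex.
```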
The course has four modules that are specifically designed to prepare students for the first four intended learning outcomes. In addition, the students must apply their acquired knowledge in the final project, and write up and present the project to demonstrate that the fifth learning outcome is also achieved.
The first two modules focus on basic theoretical knowledge, while the third and fourth modules deepen that knowledge by applying it in practical use cases. At the end of each module, students will be assessed with quizzes and assignments.
Intended learning outcomes
ILO1: Describe the fundamental concepts of key LLM model architectures and their application areas
ILO2: Identify and describe the fundamental building blocks of LLMs, prompting, and RAG
ILO3: Demonstrate the key steps of building LLMs in the LangChain and LlamaIndex frameworks
ILO4: Analyze and evaluate fine-tuning strategies and optimization techniques for LLMs (see the Low-Rank Adaptation sketch after this list)
ILO5: Design and conduct a qualified and quantifiable use case that applies LLM techniques related to the student's own research
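As a pointer toward the fine-tuning and optimization techniques assessed in ILO4, the sketch below illustrates the core idea of Low-Rank Adaptation covered in Module 4: freeze the pretrained weights and train only a small low-rank update. This is a minimal sketch assuming PyTorch and illustrative hyperparameters (r = 8, alpha = 16), not the implementation used in the course.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA wrapper: y = W x + (alpha / r) * B A x, with W frozen (illustrative sketch)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        # A projects down to rank r, B projects back up; B starts at zero so the
        # adapted layer initially behaves exactly like the base layer.
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

# Hypothetical usage: wrap one projection of a pretrained model and train only the adapters.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16.0)
out = layer(torch.randn(4, 768))  # shape: (4, 768)
```

Because B is initialized to zero, training starts from the pretrained behaviour, and only the small adapter matrices A and B receive gradient updates.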
Literature and preparations
Specific prerequisites
Enrolled as a doctoral student in Computer Science
Literature
Examination and completion
If the course is discontinued, students may request to be examined during the following two academic years.
Grading scale
Examination
- LAB1 - Laboratory work, 1.0 credits, grading scale: P, F
- LAB2 - Laboratory work, 1.0 credits, grading scale: P, F
- LAB3 - Laboratory work, 1.0 credits, grading scale: P, F
- LAB4 - Laboratory work, 1.0 credits, grading scale: P, F
- PRO1 - Project, 3.5 credits, grading scale: P, F
Based on recommendation from KTH’s coordinator for disabilities, the examiner will decide how to adapt an examination for students with documented disability.
The examiner may apply another examination format when re-examining individual students.
The course has four modules, and each module consists of an assignment and one quiz. The course is designed so that a later module relies on the completion of previous modules, allowing knowledge to accumulate and build up.
Each student must also complete a final project, including writing a final report and giving a presentation. The project evaluates the learning outcomes through the design and execution of a qualified and quantifiable use case that applies LLM techniques to the student's own research context.
The course is evaluated progressively through multiple components throughout the whole course. The components for the 7.5 credits course are:
• Quiz x 4: Four tests in the form of quizzes that assess the knowledge students have acquired in the modules. One quiz per module. P/F.
• Assignments x 4: Four assignments, each consisting of a set of theoretical questions that test students' understanding of the concepts and a set of programming tasks that test their ability to apply the knowledge in LLM frameworks. One assignment per module. P/F.
• Final Project x 1: The final project requires each student or group to submit a report that applies the subject areas of the course to their own research and to give an oral presentation.
Other requirements for final grade
None
Examiner
Ethical approach
- All members of a group are responsible for the group's work.
- In any assessment, every student shall honestly disclose any help received and sources used.
- In an oral assessment, every student shall be able to present and answer questions about the entire assignment and solution.