Machine Learning (CS)
General information
Degree: Master in Computer Science
Period: September - December
Objectives
Provide knowledge of both theoretical and practical aspects of machine learning. Present the main techniques of machine learning and probabilistic reasoning.
Prerequisites
Linear algebra, probability theory (briefly revised during the course), Boolean algebra, and knowledge of a programming language. For a good introduction to linear algebra see: Gilbert Strang, Introduction to Linear Algebra, Wellesley-Cambridge Press, 2016.
Content
Introduction to machine learning: designing a machine learning system, learning settings and tasks, decision trees, k-nearest-neighbour estimation. Mathematical foundations: linear algebra, probability theory, statistical tests. Maximum likelihood and Bayesian parameter estimation. Probabilistic graphical models: formalization, parameter and structure learning. Discriminative learning: linear discriminant functions, support vector machines, kernel machines. Neural networks: representation learning, deep architectures. Ensemble methods. Unsupervised and reinforcement learning.
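As a taste of the practical side of the course (which includes a Scikit-learn lab, listed under Slides below), the following is a minimal sketch, not official course material, showing two of the techniques listed above, decision trees and k-nearest-neighbour classification, trained and evaluated on a held-out test set with scikit-learn. The dataset and hyperparameters (iris, max_depth=3, k=5) are illustrative choices, not prescribed by the course.

```python
# Illustrative sketch: decision tree and k-NN classifiers with a simple
# held-out evaluation, using scikit-learn (as in the course labs).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Load a small benchmark dataset and split it into training and test sets.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Train and evaluate a decision tree (depth limited for readability).
tree = DecisionTreeClassifier(max_depth=3, random_state=42)
tree.fit(X_train, y_train)
print("Decision tree accuracy:", accuracy_score(y_test, tree.predict(X_test)))

# Train and evaluate a k-nearest-neighbour classifier with k = 5.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("k-NN accuracy:", accuracy_score(y_test, knn.predict(X_test)))
```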
Course Information
Instructor: Andrea Passerini
Email:
Teaching assistant: TBA
Office hours: Arrange by email
Lecture time: Monday 8:30-10:30 (room a101), Wednesday 11:30-13:30 (room b017)
Communications: Please check the Moodle page of the course for news and updates.
Bibliography:
R.O. Duda, P.E. Hart and D.G. Stork, Pattern Classification (2nd edition), Wiley-Interscience, 2001.
D. Koller and N. Friedman, Probabilistic Graphical Models, The MIT Press, 2009.
J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis, Cambridge University Press, 2004.
I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, The MIT Press, 2016 (online version available here).
S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 4th Global ed., Pearson, 2021.
K. Murphy, Probabilistic Machine Learning: An Introduction, The MIT Press, 2021 (online version available here).
Slides:
Introduction [slides] [handouts]
Decision Trees [slides] [handouts]
K-nearest neighbours [slides] [handouts]
Linear algebra [slides] [handouts]
Probability theory [slides] [handouts]
Evaluation [slides] [handouts]
Parameter estimation [slides] [handouts]
Bayesian Networks [slides] [handouts]
Inference in BN [slides] [handouts]
Learning BN [slides] [handouts]
Naive Bayes [slides] [handouts]
Bayesian Network lab [slides] [data] [software]
Linear discriminant functions [slides] [handouts]
Support Vector Machines [slides] [handouts]
Kernel Machines [slides] [handouts]
Scikit-learn lab [repository]
Artificial Neural Networks [slides] [handouts]
Artificial Neural Networks lab [repository]
Ensemble Methods [slides] [handouts]
Unsupervised learning [slides] [handouts]
Unsupervised learning lab [repository]
Reinforcement learning [slides] [handouts]
Reinforcement learning lab [repository]
Videos:
Recorded lectures (from the previous year) are made available on Moodle.
Exams
Modality: Oral examination.