
Machine learning (INFOB3ML)

Completed: 06-01-2025 | 7.5 EC | Universiteit Utrecht

What I Learned

In Machine Learning (INFOB3ML), I delved into advanced machine learning techniques essential for modern AI systems. The course combined theoretical foundations with practical applications through weekly theoretical exercises and programming assignments, and was assessed with two exams. Below is a breakdown of the topics covered:

Supervised Learning Foundations

Linear Regression and Regularization: Mastered linear regression with regularization techniques to model data relationships and prevent overfitting.

Generative View of Linear Regression: Explored the generative perspective on linear regression, seeing how the least-squares objective arises from a probabilistic model of the data.

Support Vector Machines & Kernel Methods: Learned SVMs with kernel methods for robust classification, leveraging non-linear decision boundaries.
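To make the regularization idea concrete, here is a minimal NumPy sketch of ridge regression (my own illustration, not course material). Ridge regression adds an L2 penalty to least squares, which yields the closed-form solution w = (XᵀX + λI)⁻¹Xᵀy:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^(-1) X^T y."""
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Tiny synthetic example: y is generated as 2*x0 - 1*x1 plus a little noise,
# so the fitted weights should land close to [2, -1].
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0]) + 0.01 * rng.normal(size=100)

w = ridge_fit(X, y, lam=0.1)
```

The penalty strength λ trades bias for variance: larger values shrink the weights toward zero, which is exactly the overfitting control discussed above.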


Probabilistic and Bayesian Methods

Uncertainty in Estimates and Predictions: Studied how to quantify uncertainty in predictions using probabilistic approaches.

Bayesian Machine Learning: Core Concepts: Developed skills in Bayesian methods, including prior and posterior distributions.

MAP Estimation and Bayesian Inference: Applied maximum a posteriori (MAP) estimation for refined modeling under uncertainty.

Bayesian Inference: Approximation and Sampling: Tackled advanced inference techniques like approximation and sampling for complex distributions, tested via the first exam.
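As a small illustration of MAP estimation (my own example, not course code): with a Beta prior on a Bernoulli success probability, the posterior is again a Beta distribution, and the MAP estimate is simply its mode:

```python
def map_bernoulli(k, n, alpha=2.0, beta=2.0):
    """MAP estimate of a Bernoulli parameter under a Beta(alpha, beta) prior.

    The posterior is Beta(alpha + k, beta + n - k); its mode, i.e. the MAP
    estimate, is (k + alpha - 1) / (n + alpha + beta - 2).
    """
    return (k + alpha - 1) / (n + alpha + beta - 2)

# 3 successes in 4 trials: the MLE is 0.75, but a Beta(2, 2) prior
# pulls the MAP estimate toward 0.5, i.e. (3+1)/(4+2) = 2/3.
mle = 3 / 4
map_est = map_bernoulli(3, 4, alpha=2.0, beta=2.0)
```

This captures the core idea of the Bayesian material: the prior acts as pseudo-counts that regularize the estimate, most visibly when data is scarce.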


Unsupervised Learning and Clustering

Dimensionality Reduction: PCA & LDA: Mastered principal component analysis (PCA) and linear discriminant analysis (LDA) to project high-dimensional data onto lower-dimensional spaces while preserving the most informative structure.

Clustering: K-Means: Implemented K-Means clustering to uncover patterns in unlabeled data.

Clustering: EM Algorithm for GMM: Applied the Expectation-Maximization algorithm to fit Gaussian Mixture Models, which generalize K-Means with soft assignments and per-cluster covariances.
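The K-Means loop mentioned above can be sketched in a few lines of NumPy (a minimal illustration, not the course implementation): alternate between assigning each point to its nearest centroid and moving each centroid to the mean of its assigned points.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain K-Means: alternate assignment and centroid-update steps."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest centroid for every point.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each centroid moves to the mean of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Two well-separated blobs around (0, 0) and (5, 5):
# K-Means should recover both centres.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, size=(50, 2)),
               rng.normal(5.0, 0.1, size=(50, 2))])
centers, labels = kmeans(X, k=2)
```

The EM algorithm for GMMs follows the same alternating structure, but with probabilistic (soft) assignments in the E-step and covariance updates in the M-step.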


Explainability and Neural Networks

Explainable ML: Introduction & Interpretable Models: Explored the foundations of explainable AI and interpretable models to ensure transparency.

Model-Agnostic Explainability Methods: Studied techniques like SHAP and LIME for interpreting complex models.

Neural Networks: Backpropagation: Learned backpropagation to train neural networks, building a foundation for deep learning.

Introduction to Deep Learning: Gained an overview of deep learning architectures and their applications.
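As a sketch of backpropagation (my own minimal example, not course code), here is a one-hidden-layer network trained on XOR with hand-derived gradients, applying the chain rule layer by layer:

```python
import numpy as np

# XOR: the classic task that a linear model cannot solve,
# but a network with one hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))
    # Backward pass: chain rule, output layer first.
    dp = 2 * (p - y) / len(X)      # dL/dp for the mean-squared error
    dz2 = dp * p * (1 - p)         # through the sigmoid
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0, keepdims=True)
    dh = dz2 @ W2.T                # propagate back into the hidden layer
    dz1 = dh * (1 - h ** 2)        # through the tanh
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0, keepdims=True)
    # Gradient-descent update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

Deep learning frameworks automate exactly this backward pass via automatic differentiation; writing it out once by hand was the point of the backpropagation material.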


Practical Application

The course included four practical assignments implemented in Python, completed alongside weekly theoretical exercises:

Assignment 1: Built and analyzed regression models to apply foundational supervised learning techniques.

Assignment 2: Developed Bayesian models and dimensionality reduction methods.

Assignment 3: Implemented clustering algorithms and SVMs to solve unsupervised and supervised tasks.

Assignment 4: Created explainable models and neural networks, integrating interpretability and advanced techniques.


Through these assignments and exercises, I put the theoretical concepts into practice, and the two exams tested my understanding across the full breadth of the material.