In the course KI2V20001, I explored the basics of machine learning, focusing on core algorithms and their mathematical foundations through lectures and Python-based assignments, assessed via an exam and a series of programming tasks. Below is a breakdown of the topics covered:
Types of Learning: Learned supervised, unsupervised, and other learning paradigms.
Probability Theory: Covered fundamentals such as Hoeffding’s inequality for bounding the gap between in-sample and out-of-sample error.
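The inequality itself is simple enough to evaluate directly. A minimal sketch (my own illustration, not course code): for N i.i.d. samples, Hoeffding bounds the probability that the observed error rate deviates from the true one by more than some tolerance epsilon.

```python
import math

def hoeffding_bound(epsilon, n):
    """Upper bound on P(|in-sample error - true error| > epsilon)
    for n i.i.d. samples, via Hoeffding's inequality."""
    return 2 * math.exp(-2 * epsilon**2 * n)

# The bound tightens exponentially as the sample size grows.
print(hoeffding_bound(0.1, 100))   # ~0.27
print(hoeffding_bound(0.1, 1000))  # ~4e-9
```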
K-Nearest Neighbors: Studied simple distance-based classification.
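The core idea fits in a few lines. A from-scratch sketch (toy data and the choice of k are my own, not the course's):

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (squared Euclidean distance; same ordering as Euclidean)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(pt, x)), label)
        for pt, label in zip(train_X, train_y)
    )
    top_k = [label for _, label in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]

X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(X, y, (0.5, 0.5)))  # "a"
print(knn_predict(X, y, (5.5, 5.5)))  # "b"
```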
Linear Regression: Explored fitting linear models to data.
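For a single feature with a bias term, the least-squares fit can be computed in closed form. A minimal NumPy sketch (illustrative data, not assignment code):

```python
import numpy as np

# Fit y = w0 + w1*x by least squares.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 + 3.0 * x  # noiseless line for illustration

X = np.column_stack([np.ones_like(x), x])   # design matrix with bias column
w = np.linalg.lstsq(X, y, rcond=None)[0]    # least-squares solution
print(w)  # recovers [2., 3.]
```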
Overfitting: Analyzed causes and risks of over-complex models.
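Overfitting in miniature (my own toy example): a degree-5 polynomial passes through all 6 noisy training points exactly, achieving zero training error by memorising the noise, which is precisely what makes over-complex models risky.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 6)
y = x + rng.normal(scale=0.1, size=6)  # underlying truth y = x, plus noise

p5 = np.polyfit(x, y, 5)   # as many parameters as data points
p1 = np.polyfit(x, y, 1)   # simple linear model

print(np.abs(np.polyval(p5, x) - y).max())  # ~0: training data memorised
print(np.abs(np.polyval(p1, x) - y).max())  # > 0: line cannot fit the noise
```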
VC Dimension: Studied model capacity and generalization theory.
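For reference, the VC generalization bound in its commonly stated form (as in standard texts; notation here is the usual one, not taken from the course slides): with probability at least $1-\delta$,

$$ E_{\text{out}}(g) \;\le\; E_{\text{in}}(g) \;+\; \sqrt{\frac{8}{N}\,\ln\frac{4\,m_{\mathcal{H}}(2N)}{\delta}} $$

where $m_{\mathcal{H}}$ is the growth function of the hypothesis set, which is polynomial in $N$ whenever the VC dimension is finite — this is what makes learning with a finite-capacity model feasible.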
Logistic Regression: Learned binary classification trained with gradient descent.
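A from-scratch sketch of the training loop (toy data, learning rate, and iteration count are my own choices, not the assignment's):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 1-D data: class 1 for x > 0, class 0 otherwise.
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
Xb = np.hstack([np.ones((len(X), 1)), X])  # add bias column

w = np.zeros(2)
lr = 0.5
for _ in range(1000):
    p = sigmoid(Xb @ w)
    grad = Xb.T @ (p - y) / len(y)  # gradient of mean cross-entropy loss
    w -= lr * grad

print(sigmoid(Xb @ w).round(2))  # probabilities near [0, 0, 1, 1]
```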
Neural Networks: Examined feed-forward networks and their layered structure.
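The forward pass of such a network is just alternating affine maps and nonlinearities. A minimal sketch with one hidden layer (the ReLU activation and layer sizes are illustrative choices of mine):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def forward(x, W1, b1, W2, b2):
    """Forward pass: affine -> ReLU -> affine."""
    h = relu(W1 @ x + b1)
    return W2 @ h + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # 4 hidden -> 2 outputs

out = forward(np.array([1.0, 0.5, -0.5]), W1, b1, W2, b2)
print(out.shape)  # (2,)
```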
Bias and Variance: Studied trade-offs in model performance.
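The trade-off can be estimated numerically, in the spirit of the classic sin-target example: a constant hypothesis has low variance but high bias, while a line through two random points has lower bias but much higher variance. A Monte-Carlo sketch of my own (target function and test point are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = 0.9
f = lambda x: np.sin(np.pi * x)  # noise-free target for simplicity

preds_const, preds_line = [], []
for _ in range(5000):
    xs = rng.uniform(-1, 1, size=2)   # tiny training set of 2 points
    ys = f(xs)
    preds_const.append(ys.mean())     # constant hypothesis: mean of the ys
    slope = (ys[1] - ys[0]) / (xs[1] - xs[0])
    preds_line.append(ys[0] + slope * (x0 - xs[0]))  # line through both

for name, p in [("constant", preds_const), ("line", preds_line)]:
    p = np.array(p)
    bias2 = (p.mean() - f(x0)) ** 2   # squared bias at x0
    var = p.var()                     # variance over training sets
    print(name, bias2, var)
```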
K-Means Clustering: Learned how to group unlabeled data into clusters.
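A compact sketch of Lloyd's algorithm, the standard alternating scheme for K-means (toy data and initialisation are my own):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Lloyd's algorithm: alternate assigning points to the nearest
    centroid and recomputing centroids as cluster means."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

X = np.array([[0.0, 0], [0, 1], [1, 0], [9, 9], [9, 10], [10, 9]])
labels, _ = kmeans(X, 2)
print(labels)  # the two well-separated groups get different labels
```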
Dataset Splits: Explored train-test-validation splits for evaluation.
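The splitting itself is a one-shuffle-then-slice affair. A small sketch (the 60/20/20 ratios are illustrative, not the course's prescription):

```python
import numpy as np

def split(X, y, train=0.6, val=0.2, seed=0):
    """Shuffle once, then slice indices into train/validation/test parts."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(train * len(X))
    n_val = int(val * len(X))
    tr, va, te = np.split(idx, [n_train, n_train + n_val])
    return (X[tr], y[tr]), (X[va], y[va]), (X[te], y[te])

X = np.arange(20).reshape(10, 2)
y = np.arange(10)
train_set, val_set, test_set = split(X, y)
print(len(train_set[0]), len(val_set[0]), len(test_set[0]))  # 6 2 2
```

The validation slice tunes hyperparameters; the test slice is touched only once, for the final evaluation.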
Assignment 1: K-NN Classifier: Implemented a k-nearest neighbors model to classify data, using basic probability concepts.
Assignment 2: Linear Regression: Built and evaluated a linear model, exploring overfitting and dataset splits.
Assignment 3: Logistic Regression: Developed a classifier with gradient descent, analyzing bias and variance.
Assignment 4: Neural Networks and Clustering: Created a simple neural network and applied K-means clustering to unlabeled data.