Topics and notes

Linear modeling
- Background
- Linear models
- Least squares notes
- Least squares gradient descent algorithm
- Regularization
- Stochastic gradient descent pseudocode
- Stochastic gradient descent (original paper)
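
As a companion to the least-squares and stochastic gradient descent notes above, a minimal NumPy sketch of fitting a linear model by SGD on the squared-error loss; the toy data, step size, and epoch count are illustrative choices rather than anything from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = X @ w_true + noise (illustrative only).
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)

# Stochastic gradient descent on L(w) = (1/n) * sum_i (x_i . w - y_i)^2,
# updating on one example at a time in a shuffled order each epoch.
w = np.zeros(3)
lr = 0.01
for epoch in range(20):
    for i in rng.permutation(len(y)):
        grad = 2 * (X[i] @ w - y[i]) * X[i]   # gradient of (x_i . w - y_i)^2
        w -= lr * grad

print("SGD estimate:   ", w)
print("Closed-form fit:", np.linalg.lstsq(X, y, rcond=None)[0])
```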

Neural networks
- Multilayer perceptrons
- Basic single hidden layer neural network
- Backpropagation
- Approximations by superpositions of sigmoidal functions (Cybenko 1989)
- Approximation Capabilities of Multilayer Feedforward Networks (Hornik 1991)
- The Power of Depth for Feedforward Neural Networks (Eldan and Shamir 2016)
- The expressive power of neural networks: A view from the width (Lu et al. 2017)
- Convolution and single layer neural networks: objective and optimization
- Softmax and cross-entropy loss
- ReLU activation single layer neural networks: objective and optimization
- Multilayer neural network: objective and optimization
- Image localization and segmentation
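
For reference alongside the softmax, cross-entropy, and backpropagation items above, the standard definitions, written for logits z in R^K and a one-hot label y:

```latex
\mathrm{softmax}(z)_k = \frac{e^{z_k}}{\sum_{j=1}^{K} e^{z_j}},
\qquad
\ell(z, y) = -\sum_{k=1}^{K} y_k \log \mathrm{softmax}(z)_k ,
\qquad
\frac{\partial \ell}{\partial z} = \mathrm{softmax}(z) - y .
```

The last expression is the output-layer gradient that backpropagation passes down through the hidden layers.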

Machine learning - running linear models in Python with scikit-learn
- Scikit-learn linear models
- Scikit-learn support vector machines
- SVM in Python with scikit-learn
- Breast cancer training
- Breast cancer test
- Linear data
- Nonlinear data
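
A minimal scikit-learn sketch in the spirit of the linear-model and SVM notes above; the built-in breast cancer dataset stands in for the training and test files listed, and the split and solver settings are illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A linear model and a linear-kernel support vector machine, each on standardized features.
models = [
    ("logistic regression", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("linear SVM", make_pipeline(StandardScaler(), SVC(kernel="linear"))),
]
for name, model in models:
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))
```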

Deep learning - running neural networks in scikit-learn
- Scikit-learn MLPClassifier
- Scikit-learn MLP code
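
In the same spirit, a minimal MLPClassifier sketch for the scikit-learn MLP notes above; the hidden-layer size and iteration budget are illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A single hidden layer of 32 ReLU units on standardized features.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), activation="relu", max_iter=1000, random_state=0),
)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```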

Kernels
- Kernels
- More on kernels
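
Two kernels the notes above typically cover, in the usual notation (c, d, and gamma are hyperparameters):

```latex
k_{\mathrm{poly}}(x, x') = (x^\top x' + c)^d,
\qquad
k_{\mathrm{RBF}}(x, x') = \exp\!\left(-\gamma \,\lVert x - x' \rVert^2\right).
```

Each kernel equals an inner product between feature maps phi(x) and phi(x') in some feature space, so a linear method can be trained using only kernel evaluations, without ever computing the feature map explicitly.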

Multiclass classification
- Multiclass classification
- One-vs-all method
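
For the one-vs-all (one-vs-rest) method above, a minimal scikit-learn sketch; the digits dataset and the linear SVM base classifier are illustrative choices.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)   # 10 classes, one per digit
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# One binary classifier per class; prediction picks the class whose classifier scores highest.
ovr = OneVsRestClassifier(LinearSVC(max_iter=10000))
ovr.fit(X_train, y_train)
print("binary classifiers trained:", len(ovr.estimators_))
print("test accuracy:", ovr.score(X_test, y_test))
```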

Logistic regression
- Logistic regression
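
The model the logistic regression notes cover, in the usual notation: the class probability is a sigmoid of a linear score, and the parameters are fit by minimizing the negative log-likelihood.

```latex
p(y = 1 \mid x) = \sigma(w^\top x + b), \qquad \sigma(t) = \frac{1}{1 + e^{-t}},

\hat{w}, \hat{b} \;=\; \arg\min_{w,\,b} \; -\sum_{i=1}^{n}
\Big[\, y_i \log \sigma(w^\top x_i + b) + (1 - y_i) \log\big(1 - \sigma(w^\top x_i + b)\big) \Big].
```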

Empirical and regularized risk minimization
- Empirical risk minimization
- Regularized risk minimization
- Regularization and overfitting
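
The objective behind the empirical and regularized risk minimization notes, in the usual notation: the average training loss plus a penalty whose weight lambda trades training fit against overfitting.

```latex
\hat{f} \;=\; \arg\min_{f \in \mathcal{F}} \;
\underbrace{\frac{1}{n} \sum_{i=1}^{n} \ell\big(f(x_i), y_i\big)}_{\text{empirical risk}}
\;+\;
\underbrace{\lambda \, \Omega(f)}_{\text{regularizer, e.g. } \lVert w \rVert_2^2 \text{ or } \lVert w \rVert_1}.
```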

Support vector machine
- Support vector machines
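
The soft-margin objective the SVM notes build toward, in the usual notation with labels y_i in {-1, +1}; C trades margin width against hinge-loss violations.

```latex
\min_{w,\,b} \;\; \frac{1}{2} \lVert w \rVert^2
\;+\; C \sum_{i=1}^{n} \max\big(0,\; 1 - y_i (w^\top x_i + b)\big).
```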

Decision trees and random forests
- Decision trees, bagging, boosting, and stacking
- Decision trees (additional notes)
- Ensemble methods (additional notes)
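
A minimal scikit-learn sketch comparing a single tree with bagged and boosted ensembles, matching the topics above; the dataset and ensemble sizes are illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# One tree, a bagged ensemble of trees (random forest), and a boosted ensemble.
models = [
    ("decision tree", DecisionTreeClassifier(random_state=0)),
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("gradient boosting", GradientBoostingClassifier(random_state=0)),
]
for name, model in models:
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))
```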

Feature selection
- Feature selection
- Feature selection (additional notes)
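
A minimal univariate feature-selection sketch for the notes above; keeping the 5 highest-scoring features is an illustrative choice.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif

data = load_breast_cancer()
X, y = data.data, data.target

# Keep the 5 features with the highest ANOVA F-scores against the class label.
selector = SelectKBest(score_func=f_classif, k=5)
X_reduced = selector.fit_transform(X, y)

print("selected features:", list(data.feature_names[selector.get_support()]))
print("shape before/after:", X.shape, "->", X_reduced.shape)
```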

Dimensionality reduction
- Dimensionality reduction
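
A minimal PCA sketch for the dimensionality reduction notes above; projecting onto 2 components is an illustrative choice.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_breast_cancer(return_X_y=True)

# Standardize the 30 features, then project onto the top 2 principal components.
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)

print("reduced shape:", X_2d.shape)
print("variance explained by 2 components:", pca.explained_variance_ratio_.sum())
```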

Clustering
- Clustering
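
A minimal k-means sketch for the clustering notes above, on synthetic data with three groups; the data and the choice k = 3 are illustrative.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic 2-D data drawn around three centers.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

print("cluster sizes:", [int((labels == k).sum()) for k in range(3)])
print("cluster centers:\n", kmeans.cluster_centers_)
```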

Maximum likelihood
- Bayesian learning
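
The point estimates that the maximum-likelihood and Bayesian learning notes contrast, in the usual notation: maximum likelihood maximizes the log-likelihood alone, while the MAP estimate adds a log-prior term; full Bayesian learning keeps the whole posterior over theta rather than a single point.

```latex
\hat{\theta}_{\mathrm{MLE}} = \arg\max_{\theta} \; \sum_{i=1}^{n} \log p(x_i \mid \theta),
\qquad
\hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} \; \left[ \log p(\theta) + \sum_{i=1}^{n} \log p(x_i \mid \theta) \right].
```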

Autoencoders
- Generative models and networks
- Autoencoder
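
scikit-learn has no autoencoder class, but as a rough illustration of the autoencoder idea above, an MLPRegressor trained to reproduce its own input through a narrow middle layer behaves like a small nonlinear autoencoder; the layer sizes and iteration budget here are illustrative.

```python
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

# 8x8 digit images flattened to 64 features, scaled to [0, 1].
X, _ = load_digits(return_X_y=True)
X = MinMaxScaler().fit_transform(X)

# Encoder/decoder-shaped network (64 -> 32 -> 8 -> 32 -> 64) trained to reconstruct its input.
autoencoder = MLPRegressor(hidden_layer_sizes=(32, 8, 32), activation="relu",
                           max_iter=2000, random_state=0)
autoencoder.fit(X, X)

reconstruction = autoencoder.predict(X)
print("mean squared reconstruction error:", ((X - reconstruction) ** 2).mean())
```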