- Classical Models
- Clustering & Dimensionality Reduction
- Data & Features
- Deep Learning
- Fundamentals
- Metrics & Evaluation
- Optimization & Training
- Classification
- Clustering
- Dimensionality Reduction
- Evaluation
- Feature Engineering
- Optimization / Training
- Regression
- Sequence Modeling
A field of study focused on algorithms that learn patterns from data to make predictions or decisions.
Learning a mapping from inputs to outputs using labeled examples (e.g., regression, classification).
Finding structure in unlabeled data, such as grouping similar points or discovering latent factors.
Learning to act via trial and error to maximize cumulative reward in an environment.
A supervised learning approach for predicting continuous numeric targets.
A supervised learning task that assigns inputs to discrete categories.
An unsupervised learning task that groups similar instances without using labels.
A term in fundamentals used in machine learning practice.
A term in data & features used in machine learning practice.
A term in fundamentals used in machine learning practice.
Methods to tune hyperparameters to improve model performance.
A resampling procedure for estimating generalization performance by training/validating on different splits.
A procedure to assess generalization by rotating train/validation splits across folds.
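As a minimal sketch of the cross-validation idea described above (assuming scikit-learn and a synthetic dataset; the model choice and fold count are illustrative, not prescribed by the glossary):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic data stands in for a real dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Score the same model on 5 rotating train/validation splits.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean(), scores.std())
```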
A term in data & features used in machine learning practice.
A term in fundamentals used in machine learning practice.
Techniques that constrain model complexity to reduce overfitting (e.g., L1, L2, dropout, weight decay).
A technique to control model complexity and reduce overfitting by penalizing large parameters.
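A minimal sketch of the penalty-based regularization entries above, assuming scikit-learn's linear models and made-up data; the alpha values are arbitrary illustrations, not recommendations:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = X[:, 0] * 3.0 + rng.normal(scale=0.5, size=200)  # only one informative feature

# L2 (ridge) shrinks all coefficients; L1 (lasso) drives many exactly to zero.
ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
print("nonzero ridge coefs:", np.sum(ridge.coef_ != 0))
print("nonzero lasso coefs:", np.sum(lasso.coef_ != 0))
```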
A term in optimization & training used in machine learning practice.
A linear classifier that models the log-odds of a binary label as a linear function of the features, with the sigmoid (inverse logit) mapping the score to a probability.
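A small NumPy sketch of that log-odds formulation; the weights, bias, and input here are made up to keep the snippet self-contained (in practice they are learned from labeled data):

```python
import numpy as np

def sigmoid(z):
    # Maps a real-valued score (the log-odds) to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative parameters; not fitted values.
w = np.array([0.8, -1.2])
b = 0.1
x = np.array([2.0, 0.5])

log_odds = w @ x + b     # linear score: log(p / (1 - p))
p = sigmoid(log_odds)    # predicted probability of the positive class
print(log_odds, p)
```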
A supervised learning approach for predicting continuous numeric targets.
A term in classical models used in machine learning practice.
An ensemble of decision trees that reduces variance by averaging many randomized trees.
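For instance, a random-forest-style ensemble via scikit-learn on synthetic data (the hyperparameters are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)

# Each tree is trained on a bootstrap sample with randomized feature choices;
# the ensemble averages their votes to reduce variance.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(forest.predict(X[:5]))
```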
An additive model that fits new learners to the residuals of current predictions to reduce errors.
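To make the residual-fitting idea concrete, here is a hand-rolled boosting sketch with shallow regression trees as weak learners, assuming a squared-error loss (scikit-learn trees, synthetic 1-D data):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

prediction = np.full_like(y, y.mean())  # start from a constant model
learning_rate = 0.1
trees = []

for _ in range(100):
    residuals = y - prediction                      # errors of the current ensemble
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)   # add a small correction
    trees.append(tree)

print("final training MSE:", np.mean((y - prediction) ** 2))
```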
A term in classical models used in machine learning practice.
A margin-based classifier that finds a separating hyperplane; can use kernels for nonlinearity.
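As a sketch of the kernel idea, assuming scikit-learn's SVC on a toy dataset that is not linearly separable in the input space (data and parameters are illustrative):

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Two interleaving half-moons cannot be split by a straight line.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# The RBF kernel implicitly maps points to a space where a separating hyperplane exists.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print("training accuracy:", clf.score(X, y))
```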
A term in classical models used in machine learning practice.
A clustering algorithm that partitions data into k clusters by minimizing within-cluster variance.
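A hand-rolled sketch of that assign-then-update (Lloyd's) iteration in NumPy; the `kmeans` helper and the synthetic three-cluster data are illustrative, not a library API:

```python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # initialize from data points
    for _ in range(n_iter):
        # Assignment step: each point joins its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center moves to the mean of its assigned points.
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, size=(50, 2)) for c in (0.0, 5.0, 10.0)])
centers, labels = kmeans(X, k=3)
print(centers)
```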
An unsupervised learning task that groups similar instances without using labels.
A term in clustering & dimensionality reduction used in machine learning practice.
A technique for dimensionality reduction that finds orthogonal directions of maximum variance.
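A minimal NumPy sketch of this principal-component recipe: center the data, take the top right singular vectors as the orthogonal directions of maximum variance, and project (the data here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5)) @ rng.normal(size=(5, 5))  # correlated features

Xc = X - X.mean(axis=0)                 # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:2]                     # top-2 orthogonal directions of maximum variance
Z = Xc @ components.T                   # project onto the reduced 2-D space

explained_variance = (S ** 2) / (len(X) - 1)
print(Z.shape, explained_variance[:2])
```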
A term in clustering & dimensionality reduction used in machine learning practice.
Feature scaling transforms that normalize ranges or distributions to aid model training.
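For example, two common scaling transforms via scikit-learn (the tiny matrix is made up; the choice between scalers is illustrative):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 800.0]])

# Standardization: zero mean, unit variance per feature.
print(StandardScaler().fit_transform(X))

# Min-max scaling: each feature squashed into [0, 1].
print(MinMaxScaler().fit_transform(X))
```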
A term in data & features used in machine learning practice.
Methods to tune hyperparameters to improve model performance.
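One such method, sketched with scikit-learn's grid search on synthetic data (the parameter grid and model are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Exhaustively try each C/gamma pair, scoring every candidate with cross-validation.
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```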
A way to quantify model performance for validation, selection, and comparison.
A threshold-agnostic metric summarizing the tradeoff between true positive rate and false positive rate.
A metric suited for imbalanced data, summarizing precision-recall tradeoffs across thresholds.
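As a sketch of both curve-based summaries (scikit-learn metrics on hand-made labels and scores; the numbers are arbitrary):

```python
from sklearn.metrics import average_precision_score, roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.5, 0.7]

print("ROC AUC:", roc_auc_score(y_true, y_score))            # TPR vs. FPR across thresholds
print("PR AUC:", average_precision_score(y_true, y_score))   # precision vs. recall across thresholds
```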
An objective function that quantifies prediction error during training.
A term in data & features used in machine learning practice.
A term in deep learning used in machine learning practice.
A function approximator composed of layers of linear operations and nonlinear activations.
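A bare-bones forward pass for such a network in NumPy, with one hidden layer; the weights are random placeholders rather than trained values:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

x = rng.normal(size=4)                           # a single 4-dimensional input
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # layer 1: linear map to 8 hidden units
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)    # layer 2: linear map to 3 outputs

h = relu(W1 @ x + b1)     # linear operation followed by a nonlinearity
out = W2 @ h + b2         # output scores (e.g., one per class)
print(out)
```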
A term in deep learning used in machine learning practice.
A deep network that uses convolutions to capture local spatial patterns in images.
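To show how a convolution picks up local spatial patterns, a hand-written "valid" cross-correlation in NumPy; the `conv2d` helper, image, and kernel are illustrative, not a framework API:

```python
import numpy as np

def conv2d(image, kernel):
    # Valid cross-correlation: slide the kernel over every local patch.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.default_rng(0).normal(size=(6, 6))
edge_kernel = np.array([[1.0, -1.0]])    # responds to horizontal intensity changes
print(conv2d(image, edge_kernel).shape)  # (6, 5)
```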
A recurrent neural network architecture designed to model sequences and long-range dependencies.
A neural architecture based on attention mechanisms that models relationships across sequence positions.
A mechanism that lets models focus on the most relevant parts of the input when computing representations.
An attention-based deep learning architecture effective for sequence modeling and long contexts.
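To make the attention mechanism described above concrete, a single-head scaled dot-product attention sketch in NumPy (no masking; the sequence length, dimension, and random inputs are illustrative):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
seq_len, d = 5, 8
Q = rng.normal(size=(seq_len, d))   # queries
K = rng.normal(size=(seq_len, d))   # keys
V = rng.normal(size=(seq_len, d))   # values

# Each position attends to every position, weighted by query-key similarity.
weights = softmax(Q @ K.T / np.sqrt(d))
output = weights @ V
print(weights.shape, output.shape)
```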
A term in deep learning used in machine learning practice.
An objective function that quantifies prediction error during training.
A term in deep learning used in machine learning practice.
A term in optimization & training used in machine learning practice.
An optimization method that iteratively updates parameters in the direction of the negative gradient of the loss.
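A minimal gradient-descent loop for least squares in NumPy; the learning rate, step count, and synthetic data are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)
lr = 0.1
for _ in range(200):
    grad = 2.0 / len(X) * X.T @ (X @ w - y)   # gradient of the mean squared error
    w -= lr * grad                            # step against the gradient
print(w)   # should approach true_w
```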
A term in optimization & training used in machine learning practice.
A technique to control model complexity and reduce overfitting by penalizing large parameters.
A term in optimization & training used in machine learning practice.
An algorithm to compute gradients of parameters efficiently via the chain rule for training neural networks.
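A tiny worked example of that chain-rule computation for a one-hidden-layer network with squared error, in pure NumPy; the shapes, target, and random parameters are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # input
t = 1.0                         # target
W1 = rng.normal(size=(4, 3))    # hidden-layer weights
w2 = rng.normal(size=4)         # output weights

# Forward pass.
z = W1 @ x                      # pre-activation
h = np.tanh(z)                  # hidden activation
y = w2 @ h                      # scalar prediction
loss = 0.5 * (y - t) ** 2

# Backward pass: apply the chain rule layer by layer.
dy = y - t                          # dL/dy
dw2 = dy * h                        # dL/dw2
dh = dy * w2                        # dL/dh
dz = dh * (1.0 - np.tanh(z) ** 2)   # dL/dz through the tanh nonlinearity
dW1 = np.outer(dz, x)               # dL/dW1
print(loss, dW1.shape, dw2.shape)
```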