
Machine Learning Vocab

Metrics

| Term | Meaning |
| --- | --- |
| precision | Of the examples the model labels positive, the fraction that are actually positive: $\frac{TP}{TP + FP}$. |
| recall | Of the examples that are actually positive, the fraction the model labels positive: $\frac{TP}{TP + FN}$. |
| ROC curve | Receiver Operating Characteristic curve: a plot of the true positive rate against the false positive rate as the classification threshold is varied. |
| AUC | Area under the ROC curve. The probability that the model will rank a random positive example higher than a random negative example (see the sketch after this table). |
| Precision-Recall Curve | A plot of precision against recall as the classification threshold is varied. Often more informative than the ROC curve when the classes are heavily imbalanced. |
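
A minimal NumPy sketch of these definitions (the labels, scores, and 0.5 threshold are illustrative assumptions): precision and recall come from true/false positive counts, and AUC is computed directly from the rank interpretation given above.

```python
import numpy as np

# Toy ground-truth labels and model scores (illustrative values only).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
scores = np.array([0.9, 0.4, 0.7, 0.3, 0.2, 0.6, 0.8, 0.1])
y_pred = (scores >= 0.5).astype(int)  # threshold chosen arbitrarily

tp = np.sum((y_pred == 1) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

precision = tp / (tp + fp)  # of the predicted positives, how many are real
recall = tp / (tp + fn)     # of the real positives, how many were found

# AUC via its rank interpretation: the fraction of (positive, negative)
# pairs where the positive example gets the higher score.
pos, neg = scores[y_true == 1], scores[y_true == 0]
auc = np.mean(pos[:, None] > neg[None, :])

print(precision, recall, auc)
```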
Architecture

| Term | Meaning |
| --- | --- |
| softmax | Sends each input through an exponential function and then normalizes the results so they add up to one: $y_i = \frac{e^{\beta x_i}}{\sum_{j=1}^N e^{\beta x_j}}$, where $\beta$ is a hyperparameter. This is typically used in the final layer of a neural network to convert vague "confidences" over mutually exclusive categories into probabilities (see the sketch after this table). It can also be used for embeddings. |
| autoencoder | A network trained to reconstruct its own input, typically through a narrow bottleneck layer; the bottleneck activations form a learned, compressed representation of the data. |
| kernel | The small grid of weights that a convolutional layer slides across its input; also called a filter. |
| stride | The distance between the centers of two adjacent copies of the kernel. |
| dilation | Spacing inserted between the elements of a kernel so that it covers a larger receptive field without adding parameters. |
| width | The number of channels in a layer. |
| depth | The number of layers in a network. |
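
A minimal NumPy sketch of the softmax formula above ($\beta = 1$ is an assumed default; subtracting the max is a standard numerical-stability trick that does not change the result):

```python
import numpy as np

def softmax(x, beta=1.0):
    """Softmax with inverse temperature beta: y_i = exp(beta*x_i) / sum_j exp(beta*x_j)."""
    z = beta * np.asarray(x, dtype=float)
    z = z - z.max()            # avoids overflow without changing the output
    e = np.exp(z)
    return e / e.sum()

logits = [2.0, 1.0, 0.1]       # "confidences" for three mutually exclusive classes
probs = softmax(logits)
print(probs, probs.sum())      # probabilities that sum to 1
```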
Activation Functions

| Term | Meaning |
| --- | --- |
| Sigmoid | $ y = \frac{e^x}{1+e^x} $ |
| Softplus | $ y = \log(1+e^x) $ |
| ReLU | Rectified linear units: $ y = \max(0, x) $ |
| Noisy ReLU | $ y = \max(0, x + \epsilon) $, where $\epsilon$ is zero-mean Gaussian noise added during training |
| Leaky ReLU | $ y = \max(0.01x, x) $ |
| ELU | Exponential linear units: $ y = \begin{cases} x & x \geq 0 \\ a(e^x - 1) & x \lt 0 \end{cases} $, where $a$ is a hyperparameter. (All of these functions appear in the sketch after this table.) |
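
The activation formulas above, written out as NumPy one-liners for reference (a sketch; the noise scale in `noisy_relu` is an arbitrary illustrative choice):

```python
import numpy as np

def sigmoid(x):     return np.exp(x) / (1.0 + np.exp(x))   # equivalently 1 / (1 + e^-x)
def softplus(x):    return np.log1p(np.exp(x))              # log(1 + e^x)
def relu(x):        return np.maximum(0.0, x)
def noisy_relu(x):  return np.maximum(0.0, x + np.random.normal(0.0, 0.1, np.shape(x)))  # 0.1 std is arbitrary
def leaky_relu(x):  return np.maximum(0.01 * x, x)
def elu(x, a=1.0):  return np.where(x >= 0, x, a * (np.exp(x) - 1))

x = np.linspace(-3, 3, 7)
for f in (sigmoid, softplus, relu, noisy_relu, leaky_relu, elu):
    print(f"{f.__name__:>12}", np.round(f(x), 3))
```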
Data Augmentation

| Term | Meaning |
| --- | --- |
| TODO | TODO |
Training

| Term | Meaning |
| --- | --- |
| momentum | While performing gradient descent, we can use "momentum" (an exponential moving average of past gradients, rather than a single batch's gradient) to help speed up convergence, since it makes the steps less noisy (see the SGD sketch after this table). |
| Nesterov momentum | A variant of momentum in which the gradient is evaluated at the "look-ahead" point reached after applying the momentum step, rather than at the current parameters. |
| dampening | A factor that scales down the gradient's contribution to the momentum buffer: with dampening $d$, the buffer update is $v \leftarrow \mu v + (1 - d)g$ (as in PyTorch's SGD). |
| stochastic depth | Dropping entire layers randomly during training. |
| batch normalization | Normalizing each channel to a fixed mean (typically 0) and variance (typically 1). Ideally the statistics would be computed over the entire training set, but this is infeasible with stochastic gradient descent; instead each mini-batch's statistics are used during training, and a moving average of them is used at inference time (see the second sketch after this table). |
| L2 Regularization | Summing the squares of all parameters and adding this to the loss. This incentivizes the network to keep its weights small, even at the cost of slightly higher training loss. In practice it is often implemented directly in the parameter update ("weight decay") rather than by adding a term to the loss during backpropagation. |
| L1 Regularization | Summing the absolute values of all parameters and adding this to the loss. Like L2 regularization, this keeps the weights small, and it additionally encourages sparse (exactly zero) weights. |
| saturated | A unit whose input lies in a flat region of its activation function (e.g. a sigmoid far from zero), so its gradient is nearly zero and it learns very slowly. |
| stochastic gradient descent | Gradient descent where each update uses the gradient of the loss on a single randomly chosen training example rather than on the full training set. |
| mini-batch gradient descent | Gradient descent where each update uses the average gradient over a small random batch of examples; in practice this is usually what "SGD" refers to. |
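
A sketch of a single SGD update with momentum, dampening, Nesterov, and L2 weight decay as described above, applied to a toy quadratic loss (the hyperparameter values are illustrative; the update order follows the common convention also used by PyTorch's SGD):

```python
import numpy as np

def sgd_step(w, grad, buf, lr=0.1, momentum=0.9, dampening=0.0,
             weight_decay=1e-4, nesterov=False):
    """One parameter update; all hyperparameter defaults here are illustrative."""
    g = grad + weight_decay * w                      # L2 regularization applied in the update ("weight decay")
    buf = momentum * buf + (1 - dampening) * g       # exponential moving average of past gradients
    step = g + momentum * buf if nesterov else buf   # Nesterov looks ahead along the momentum direction
    return w - lr * step, buf

# Toy problem: minimize 0.5 * ||w||^2, whose gradient is just w.
w, buf = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(200):
    w, buf = sgd_step(w, grad=w, buf=buf)
print(w)  # converges toward zero
```

And a sketch of the batch-normalization statistics: batch statistics are used during training while a moving average of them is accumulated for inference (the momentum value, 1-D shapes, and the omission of the learned scale/shift are simplifying assumptions):

```python
import numpy as np

class BatchNorm1d:
    """Normalize each channel to mean 0 / variance 1 (learned scale and shift omitted)."""
    def __init__(self, channels, momentum=0.1, eps=1e-5):
        self.running_mean = np.zeros(channels)
        self.running_var = np.ones(channels)
        self.momentum, self.eps = momentum, eps

    def forward(self, x, training=True):
        if training:
            mean, var = x.mean(axis=0), x.var(axis=0)   # this mini-batch's statistics
            # Moving average of the batch statistics, used later at inference time.
            self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean
            self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        else:
            mean, var = self.running_mean, self.running_var
        return (x - mean) / np.sqrt(var + self.eps)

bn = BatchNorm1d(channels=3)
batch = np.random.randn(8, 3) * 5 + 2      # 8 examples, 3 channels, not normalized
out = bn.forward(batch, training=True)
print(out.mean(axis=0).round(3), out.var(axis=0).round(3))  # roughly 0 and 1 per channel
```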
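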
TODO