Commit 1c97683d authored by Nando Farchmin

Fix typo

parent bce20273
Merge request !1: Update math to conform with gitlab markdown
@@ -129,11 +129,11 @@ The empirical regression problem then reads
> A _loss function_ is any function that measures how well a neural network approximates the target values.

Typical loss functions for regression and classification tasks are (a short MSE sketch follows this list):
- mean-square error (MSE, standard $`L^2`$-error)
- weighted $`L^p`$- or $`H^k`$-norms (solutions of PDEs)
- cross-entropy (difference between distributions)
- Kullback-Leibler divergence, Hellinger distance, Wasserstein metrics
- Hinge loss (SVM)
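
To make the first item concrete, here is a minimal sketch of the MSE loss in Python with NumPy. It is not part of the original notes; the helper name `mse_loss` and the toy arrays are illustrative assumptions.

```python
import numpy as np

def mse_loss(predictions: np.ndarray, targets: np.ndarray) -> float:
    """Mean-square error: the empirical L^2 distance between
    network outputs and target values."""
    return float(np.mean((predictions - targets) ** 2))

# Toy example with hypothetical network outputs and targets.
y_pred = np.array([0.9, 2.1, 2.8])
y_true = np.array([1.0, 2.0, 3.0])
print(mse_loss(y_pred, y_true))  # 0.02
```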
To find a minimizer of our loss function $`\mathcal{L}_N`$, we want to use the first-order optimality criterion
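
The diff hunk ends here, but spelled out (with the parameter symbol $`\vartheta`$ assumed, as the surrounding notation is not shown), the first-order optimality criterion states that the gradient of the loss with respect to the network parameters vanishes at a minimizer:

```math
\nabla_\vartheta \mathcal{L}_N(\vartheta) = 0.
```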