
Update math to conform with gitlab markdown

Merged Nando Hegemann requested to merge nn-doc into main
@@ -128,12 +128,6 @@ The empirical regression problem then reads
> **Definition** (loss function):
> A _loss function_ is any function that measures how well a neural network approximates the target values.
Typical loss functions for regression and classification tasks are
- mean-square error (MSE, standard $`L^2`$-error)
- weighted $`L^p`$- or $`H^k`$-norms (solutions of PDEs)
- cross-entropy (difference between distributions)
- Kullback-Leibler divergence, Hellinger distance, Wasserstein metrics
- Hinge loss (SVM)
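As a concrete illustration of the first two list entries, here is a minimal NumPy sketch (NumPy is an assumption, not prescribed by the text) of the MSE and cross-entropy losses; the function names `mse` and `cross_entropy` are hypothetical:

```python
import numpy as np

def mse(y_pred, y_true):
    """Mean-square error: the standard L^2 loss for regression."""
    return np.mean((y_pred - y_true) ** 2)

def cross_entropy(p_pred, p_true, eps=1e-12):
    """Cross-entropy between predicted and target distributions."""
    p_pred = np.clip(p_pred, eps, 1.0)  # avoid log(0)
    return -np.sum(p_true * np.log(p_pred))

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
print(mse(y_pred, y_true))  # 0.02

p_true = np.array([0.0, 1.0, 0.0])  # one-hot classification target
p_pred = np.array([0.1, 0.8, 0.1])  # predicted class probabilities
print(cross_entropy(p_pred, p_true))  # -log(0.8) ≈ 0.223
```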
To find a minimizer of our loss function $`\mathcal{L}_N`$, we want to use the first-order optimality criterion
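The first-order criterion $`\nabla \mathcal{L}_N = 0`$ can be sketched numerically; the setup below (a one-parameter linear model fitted by gradient descent) is a hypothetical example, not the document's method:

```python
import numpy as np

# Hypothetical setup: fit y = w * x by minimizing the empirical MSE loss
# L_N(w) = (1/N) * sum_i (w * x_i - y_i)^2.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x  # data generated with true slope w = 2

def grad(w):
    """Gradient of the loss: L_N'(w) = (2/N) * sum_i (w*x_i - y_i) * x_i."""
    return 2.0 * np.mean((w * x - y) * x)

# Gradient descent until the first-order optimality criterion is (nearly) met.
w, lr = 0.0, 0.05
while abs(grad(w)) > 1e-10:
    w -= lr * grad(w)
print(w)  # ≈ 2.0, where L_N'(w) = 0
```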