From 8a855bd16d1c0a7e22a53ae57892cbbf630134ee Mon Sep 17 00:00:00 2001
From: Nando Farchmin <nando.farchmin@gmail.com>
Date: Fri, 1 Jul 2022 19:26:17 +0200
Subject: [PATCH] Fix typo and list typical loss functions

---
 doc/basics.md | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/doc/basics.md b/doc/basics.md
index a9ec093..1b7bdbe 100644
--- a/doc/basics.md
+++ b/doc/basics.md
@@ -128,6 +128,27 @@ The empirical regression problem then reads
 > **Definition** (loss function):
-> A _loss functions_ is any function, which measures how good a neural network approximates the target values.
+> A _loss function_ is any function that measures how well a neural network approximates the target values.
 
+Typical loss functions for regression and classification tasks are
+  - mean squared error (MSE, the standard $`L^2`$-error)
+  - weighted $`L^p`$- or $`H^k`$-norms (e.g., for solutions of PDEs)
+  - cross-entropy (dissimilarity between probability distributions)
+  - Kullback-Leibler divergence, Hellinger distance, Wasserstein metrics
+  - hinge loss (SVMs)
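+
+As a minimal illustration (the function names below are chosen for this sketch and are not taken from any library), the mean squared error and the cross-entropy can be written in a few lines of NumPy:
+
+```python
+import numpy as np
+
+def mse(y_pred, y_true):
+    """Mean squared error: the standard L^2 loss for regression."""
+    return np.mean((y_pred - y_true) ** 2)
+
+def cross_entropy(p_pred, p_true, eps=1e-12):
+    """Cross-entropy between predicted and target distributions."""
+    # eps guards against log(0) for predictions that are exactly zero
+    return -np.sum(p_true * np.log(p_pred + eps))
+```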
 
 To find a minimizer of our loss function $`\mathcal{L}_N`$, we want to use the first-order optimality criterion
 
-- 
GitLab