diff --git a/doc/basics.md b/doc/basics.md
index 1b7bdbe6825c07f13b184efc79b9f8694c048fd2..a9ec093b08407a12086ee86063bbd1defa1ba6ff 100644
--- a/doc/basics.md
+++ b/doc/basics.md
@@ -128,12 +128,20 @@ The empirical regression problem then reads
 > **Definition** (loss function):
-> A _loss functions_ is any function, which measures how good a neural network approximates the target values.
+> A _loss function_ is any function that measures how well a neural network approximates the target values.
 
-Typical loss functions for regression and classification tasks are
-  - mean-square error (MSE, standard $`L^2`$-error)
-  - weighted $`L^p`$- or $`H^k`$-norms (solutions of PDEs)
-  - cross-entropy (difference between distributions)
-  - Kullback-Leibler divergence, Hellinger distance, Wasserstein metrics
-  - Hinge loss (SVM)
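+For instance, the mean squared error fits this definition; the following is a minimal NumPy sketch (the array values are illustrative only):
+
+```python
+import numpy as np
+
+def mse_loss(predictions, targets):
+    """Mean squared error: the average squared deviation from the targets."""
+    return np.mean((predictions - targets) ** 2)
+
+# illustrative network outputs and target values
+y_hat = np.array([0.9, 2.1, 2.9])
+y = np.array([1.0, 2.0, 3.0])
+print(mse_loss(y_hat, y))  # 0.01
+```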
 
 To find a minimizer of our loss function $`\mathcal{L}_N`$, we want to use the first-order optimality criterion