From 1c97683d54afa2f32c6e4f79eec26716baa2aa24 Mon Sep 17 00:00:00 2001
From: Nando Farchmin <nando.farchmin@gmail.com>
Date: Fri, 1 Jul 2022 19:23:47 +0200
Subject: [PATCH] Fix typo

---
 doc/basics.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/doc/basics.md b/doc/basics.md
index 80b57d3..1b7bdbe 100644
--- a/doc/basics.md
+++ b/doc/basics.md
@@ -129,11 +129,11 @@ The empirical regression problem then reads
 > A _loss functions_ is any function, which measures how good a neural network approximates the target values.
 
 Typical loss functions for regression and classification tasks are
-- mean-square error (MSE, standard $`L^2`$-error)
-- weighted $`L^p`$- or $`H^k`$-norms (solutions of PDEs)
-- cross-entropy (difference between distributions)
-- Kullback-Leibler divergence, Hellinger distance, Wasserstein metrics
-- Hinge loss (SVM)
+  - mean-square error (MSE, standard $`L^2`$-error)
+  - weighted $`L^p`$- or $`H^k`$-norms (solutions of PDEs)
+  - cross-entropy (difference between distributions)
+  - Kullback-Leibler divergence, Hellinger distance, Wasserstein metrics
+  - Hinge loss (SVM)
 
 To find a minimizer of our loss function $`\mathcal{L}_N`$, we want to use
 the first-order optimality criterion
-- 
GitLab