and possible delays due to other operations on the used machine.
\label{tab:computational_time_quadlu}
\end{table}
It should be noted, however, that this is not the total execution time of the
corresponding examples, which are also included in the code
repository~\citep{ludwig_pytorch_gum_uncertainty_propagation_2023}.
Some pre- and post-processing code was needed around the measured core snippets,
but its runtime is irrelevant for the desired comparison, as it is always the same.
Interested readers are referred to the column \enquote*{others} in
table~\ref{tab:computational_time_robustness} in
chapter~\ref{ch:robustness_verification} for the runtime of these pre- and
post-processing steps, which comprise almost identical operations in an otherwise
identical execution environment.
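The separation of the measured core snippets from the surrounding pre- and
post-processing can be illustrated by the following minimal sketch, in which the
timer placement, the network instantiation, and the use of a plain ReLU forward pass
are illustrative placeholders and not the actual benchmark code of the repository.
\begin{lstlisting}[language=Python]
import time

import torch

# Pre-processing: instantiating the layer and drawing inputs is excluded
# from the measurement.
linear_layer = torch.nn.Linear(10, 10)
values = torch.rand(10)

# Only the core snippet between the two perf_counter() calls is timed;
# the ReLU forward pass merely stands in for the measured propagation.
start = time.perf_counter()
propagated = torch.nn.functional.relu(linear_layer(values))
runtime = time.perf_counter() - start

# Post-processing: collecting and reporting the runtime is excluded as well.
print(f"core snippet took {runtime:.2e} s")
\end{lstlisting}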
The following observations apply to all three activation functions.
In general, one notices that for tiny networks with 10 neurons, the nonlinear
parts require more effort than the linear parts.
Already at 2.7e+01 neurons, or 1.0e+01 for QuadLU, this ratio reverses and shifts
further with increasing network size.
For the largest instances, the execution times for the nonlinear components are
already three orders of magnitude smaller than those of the linear components.
Despite QuadLU's apparently more complex, parameter-dependent structure, the
computing times of all three activation functions lie in the same order of
magnitude, with a few exceptions in which they differ from each other by at most
one order of magnitude.
However, these differences are only observed up to a neuron count of 2,500.
The differences for smaller networks can most likely be explained by the overhead
of function calls and size comparisons, which is compensated by the more efficient
calculation of QuadLU's function values for larger vectors and matrices.
Interestingly, no systematic difference can be found between the calculations with
and without uncertainties, although the two cases are handled separately in the
implementations.
In the case of absent uncertainties, care was taken to perform calculations only
for the values themselves.
This is surprising insofar as two fewer multiplications of fully populated matrices
are carried out when the covariance matrices are absent, which should lead to
corresponding runtime savings.
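The distinction between the two cases can be illustrated by the following minimal
sketch of a single linear layer, in which the function
\texttt{propagate\_linear} and its signature are illustrative placeholders and do
not reproduce the actual implementation of the package: when no covariance matrix
is supplied, the sandwich product and thus the two additional multiplications of
fully populated matrices are skipped entirely.
\begin{lstlisting}[language=Python]
import torch

def propagate_linear(values, weight, bias, covariance=None):
    # Value propagation is always carried out.
    propagated_values = weight @ values + bias
    if covariance is None:
        # Absent uncertainties: no covariance-related computations at all.
        return propagated_values, None
    # Populated covariance matrix: the sandwich product adds two
    # multiplications of fully populated matrices.
    propagated_covariance = weight @ covariance @ weight.T
    return propagated_values, propagated_covariance

weight, bias = torch.rand(10, 10), torch.rand(10)
values, covariance = torch.rand(10), torch.eye(10)

propagate_linear(values, weight, bias)              # values only
propagate_linear(values, weight, bias, covariance)  # values and uncertainties
\end{lstlisting}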
\chapter{Robustness Verification}\label{ch:robustness_verification}