GUM-compliant neural network robustness verification - a Masters thesis
Commit 46a4abfe (Verified)
authored 2 years ago by Björn Ludwig
refactor(thesis): evaluate uncertainty propagation runtimes
parent 00b4a991
Showing 1 changed file
src/thesis/Thesis_408230.tex: +33 −0 (33 additions, 0 deletions)
@@ -2856,6 +2856,39 @@ and possible delays due to other operations on the used machine.
\label{tab:computational_time_quadlu}
\end{table}
It should be noted, however, that this is not the total execution time of the required examples, which are also included in the code repository~\citep{ludwig_pytorch_gum_uncertainty_propagation_2023}.
Some pre- and post-processing code was required around the measured code snippets, but its runtime is not relevant for the intended comparison, as it is the same in every case.
Interested readers are referred to the column \enquote*{others} in table~\ref{tab:computational_time_robustness} in chapter~\ref{ch:robustness_verification} for the runtime of these pre- and post-processing steps, which comprise almost identical operations in otherwise the same execution environment.
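As a rough illustration of this measurement approach, the minimal sketch below times only the snippets of interest and leaves the surrounding set-up outside the measured region; it is not taken from the thesis code, and all names, sizes and the choice of ReLU are placeholders.
\begin{verbatim}
import time

import torch


def average_runtime(snippet, repetitions=10):
    """Return the mean wall-clock runtime of snippet() in seconds."""
    start = time.perf_counter()
    for _ in range(repetitions):
        snippet()
    return (time.perf_counter() - start) / repetitions


# Pre-processing around the measured snippets (not part of the reported
# runtimes): build the inputs once.
weights = torch.randn(1000, 1000)
values = torch.randn(1000)

# Only the snippets of interest are timed, so the constant cost of the
# surrounding pre- and post-processing does not enter the comparison.
linear_runtime = average_runtime(lambda: weights @ values)
nonlinear_runtime = average_runtime(lambda: torch.relu(values))
print(f"linear: {linear_runtime:.2e} s, nonlinear: {nonlinear_runtime:.2e} s")
\end{verbatim}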
The following observations apply to all three activation functions.
In general, for tiny networks with 10 neurons the nonlinear parts require more effort than the linear parts.
Already at 2.7e+01 neurons, or at 1.0e+01 for QuadLU, this ratio changes as the network size increases.
For the largest instances, the execution times of the nonlinear components are three orders of magnitude smaller than those of the linear components.
Despite the apparently more complex, parameter-dependent structure of QuadLU, all three activation functions lie in the same order of magnitude with respect to computing time, with a few exceptions, and otherwise differ from each other by at most one order of magnitude.
However, these differences can only be observed up to a neuron count of 2,500.
The differences for smaller networks can most likely be explained by the overhead of function calls and size comparisons, which is compensated by the more efficient evaluation of QuadLU on larger vectors and matrices.
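To illustrate why the elementwise activation ceases to matter for large networks, the sketch below contrasts a vectorised, parameter-dependent QuadLU-style activation with the matrix-vector product of the linear part; the concrete branch formulas and the value of the parameter are assumptions for illustration and may differ from the definition used in this thesis.
\begin{verbatim}
import torch

ALPHA = 0.25  # illustrative value for the QuadLU parameter


def quadlu(x: torch.Tensor, alpha: float = ALPHA) -> torch.Tensor:
    # Piecewise, vectorised evaluation via boolean masks: a zero branch,
    # a quadratic transition of width 2*alpha and a linear branch.
    # These branch formulas are an assumption, not the thesis' definition.
    result = torch.zeros_like(x)
    middle = (x > -alpha) & (x < alpha)
    upper = x >= alpha
    result[middle] = (x[middle] + alpha) ** 2
    result[upper] = 4 * alpha * x[upper]
    return result


weights = torch.randn(2500, 2500)
values = torch.randn(2500)
# The linear part is a dense matrix-vector product (quadratic in the
# number of neurons), while the activation touches each neuron once.
layer_output = quadlu(weights @ values)
\end{verbatim}
The mask construction and size comparisons in such an implementation incur a fixed per-call overhead, which is consistent with differences showing up only for the smaller networks.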
Interestingly, no systematic difference can be found between the calculations with and without uncertainties, although the two cases are handled separately in the implementations.
When no uncertainties are present, care was taken to perform the calculations for the values only.
This is surprising insofar as two fewer multiplications of fully populated matrices are carried out when the covariance matrices are not populated, which should also lead to corresponding runtime savings.
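To spell out the last point: GUM-style linear propagation of a covariance matrix $U_x$ through a layer with weight matrix $W$ takes the form of the sandwich product
\[
  U_z = W \, U_x \, W^{\top},
\]
i.e.\ two multiplications of fully populated matrices per layer, which are skipped entirely when no covariance matrix is supplied; the notation here is generic and not necessarily that used elsewhere in this thesis.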
\chapter{Robustness Verification}
\label{ch:robustness_verification}