Project: ludwig10_masters_thesis / GUM-compliant_neural-network_uncertainty-propagation

Commit 00afcda5 (Verified), authored 2 years ago by Björn Ludwig
Parent: 311a7d73

feat(uncertainties): turn is_symmetric and is_positive_semi_definite into public functions
Showing 1 changed file: src/pytorch_gum_uncertainty_propagation/uncertainties.py (+53 additions, −11 deletions)
"""
Contains utilities to process measurement uncertainties
"""
"""
Contains utilities to process measurement uncertainties
"""
__all__
=
[
"
cov_matrix_from_std_uncertainties
"
,
"
UncertainTensor
"
]
__all__
=
[
"
cov_matrix_from_std_uncertainties
"
,
"
is_positive_semi_definite
"
,
"
is_symmetric
"
,
"
UncertainTensor
"
,
]
from
typing
import
NamedTuple
from
typing
import
NamedTuple
...
@@ -31,24 +36,61 @@ def cov_matrix_from_std_uncertainties(sigma: Tensor) -> Tensor:
...
@@ -31,24 +36,61 @@ def cov_matrix_from_std_uncertainties(sigma: Tensor) -> Tensor:
covariance matrix
covariance matrix
"""
"""
cov_tensor
=
sigma
*
sigma
.
unsqueeze
(
1
)
cov_tensor
=
sigma
*
sigma
.
unsqueeze
(
1
)
assert
_
is_symmetric
(
cov_tensor
)
assert
is_symmetric
(
cov_tensor
)
assert
_
is_positive_semi_definite
(
cov_tensor
)
assert
is_positive_semi_definite
(
cov_tensor
)
return
cov_tensor
return
cov_tensor
def
_is_symmetric
(
matrix
:
Tensor
)
->
Tensor
:
def
is_symmetric
(
matrix
:
Tensor
)
->
Tensor
:
"""
Returns True if matrix is symmetric
"""
"""
Returns True if matrix is symmetric while NaNs are considered equal
Parameters
----------
matrix : Tensor
the matrix under test
Returns
-------
Tensor[bool]
True, if matrix is symmetric considering NaNs as equal, False otherwise
Raises
------
RuntimeError
if matrix is not square
"""
return
torch
.
all
(
torch
.
isnan
(
matrix
[
~
matrix
.
isclose
(
matrix
.
T
)]))
return
torch
.
all
(
torch
.
isnan
(
matrix
[
~
matrix
.
isclose
(
matrix
.
T
)]))
def
_is_positive_semi_definite
(
tensor_under_test
:
Tensor
)
->
Tensor
:
def
is_positive_semi_definite
(
matrix
:
Tensor
)
->
Tensor
:
"""
Returns True if tensor is positive semi-definite
"""
"""
Returns True if tensor is positive semi-definite
if
len
(
tensor_under_test
)
==
1
:
return
tensor_under_test
>=
0
If there are :class:`torch.nan` or :class:`torch.inf` present in the matrix,
eigenvalues
=
torch
.
linalg
.
eigvalsh
(
tensor_under_test
)
unexpected behaviour can occur.
Parameters
----------
matrix : Tensor
the matrix under test
Returns
-------
Tensor[bool]
True, if matrix is positive_semi_definite, False otherwise
Raises
------
RuntimeError
if matrix is not square
"""
if
len
(
matrix
)
==
1
:
if
matrix
.
shape
[
1
]
!=
1
:
torch
.
linalg
.
eigvalsh
(
matrix
)
return
matrix
>=
0
eigenvalues
=
torch
.
linalg
.
eigvalsh
(
matrix
)
return
torch
.
all
(
return
torch
.
all
(
torch
.
logical_or
(
torch
.
logical_or
(
eigenvalues
>=
0
,
eigenvalues
>=
0
,
torch
.
isclose
(
eigenvalues
,
tensor_under_test
.
new_zeros
(
1
),
atol
=
1e-6
),
torch
.
isclose
(
eigenvalues
,
matrix
.
new_zeros
(
1
),
atol
=
1e-6
),
)
)
)
)
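For orientation, here is a minimal usage sketch of the helpers made public by this commit, combined with cov_matrix_from_std_uncertainties from the same module. It assumes the package is installed and importable as pytorch_gum_uncertainty_propagation; the numeric values are illustrative only and not taken from the repository.

    import torch

    from pytorch_gum_uncertainty_propagation.uncertainties import (
        cov_matrix_from_std_uncertainties,
        is_positive_semi_definite,
        is_symmetric,
    )

    # Standard uncertainties of three measurands (illustrative values).
    sigma = torch.tensor([0.1, 0.2, 0.3])

    # cov_matrix_from_std_uncertainties builds sigma * sigma^T, a symmetric,
    # positive semi-definite rank-one matrix, and asserts both properties
    # internally via the two checks.
    cov = cov_matrix_from_std_uncertainties(sigma)

    # After this commit the checks are part of the public API and can be
    # imported directly instead of relying on module-private helpers.
    print(is_symmetric(cov))               # tensor(True)
    print(is_positive_semi_definite(cov))  # tensor(True)

Note that the reworked is_positive_semi_definite calls torch.linalg.eigvalsh on a one-row but multi-column input before returning, which raises the RuntimeError documented for non-square matrices.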