- Dec 16, 2021
  - Jörg Martin authored: The script `evaluate_tabular.py` was renamed to `evaluate_metrics.py`. It now not only prints the results but also stores them as JSON files in an `Experiments/results` folder (which has to be created). These files can be read via a new script, `create_tabular.py`. The JSON files have all been changed so that `std_y` is updated less frequently.
  - Jörg Martin authored: Evaluation is now based on the JSON files in the results folder. `evaluate_tabular.py` has been renamed to `evaluate_metrics.py`. The JSON files have also been updated; it still needs to be checked whether they are now correct for all datasets.
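As a rough sketch of the pipeline described in the two entries above, assuming a hypothetical per-dataset file layout and helper names (neither is confirmed by the log), the storing and reading side might look like this:

```python
import json
import os

RESULTS_FOLDER = 'Experiments/results'  # has to exist / be created first

def store_metrics(data_name, metrics):
    # Hypothetical helper for evaluate_metrics.py: dump the computed
    # metrics for one dataset to a JSON file in the results folder.
    os.makedirs(RESULTS_FOLDER, exist_ok=True)
    with open(os.path.join(RESULTS_FOLDER, f'{data_name}.json'), 'w') as f:
        json.dump(metrics, f, indent=2)

def load_metrics(data_name):
    # Counterpart for a script like create_tabular.py, which reads the
    # stored JSON files back in to build a results table.
    with open(os.path.join(RESULTS_FOLDER, f'{data_name}.json')) as f:
        return json.load(f)
```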
- Dec 15, 2021
  - Jörg Martin authored: Corrected `eiv_prediction_number_of_draws` in the JSON configuration files and updated the intervals at which `std_y` is updated for some of the datasets.
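To illustrate, an excerpt of such a per-dataset configuration could look as follows; apart from `eiv_prediction_number_of_draws` and the notion of `std_y` update intervals, every key and value here is invented:

```python
import json
import os

# Invented example entry: only `eiv_prediction_number_of_draws` and the
# idea of an update interval for `std_y` appear in the log; the value
# and the key `std_y_update_interval` are hypothetical.
example_config = {
    'eiv_prediction_number_of_draws': 100,
    'std_y_update_interval': 20,  # e.g. re-estimate std_y every 20 epochs
}

os.makedirs('Experiments/configurations', exist_ok=True)
with open('Experiments/configurations/example.json', 'w') as f:
    json.dump(example_config, f, indent=2)
```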
- Dec 13, 2021
  - Jörg Martin authored: The folder `Experiments/configurations` now contains JSON files with the configurations for training and evaluation. All training scripts were replaced by two files, `train_eiv.py` and `train_noneiv.py`, which read from this folder. The training configurations that previously lived in the individual training scripts were copied into the JSON files; for `yacht_hydrodynamics` the configuration was updated. The script `evaluate_tabular.py` now also reads from this folder.
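A minimal sketch of how a script like `train_eiv.py` might read such a configuration; the file naming scheme and the command-line interface are assumptions, not taken from the repository:

```python
import argparse
import json
import os

CONFIG_FOLDER = 'Experiments/configurations'

def load_configuration(data_name):
    # Hypothetical naming scheme: one JSON file per dataset, e.g.
    # Experiments/configurations/eiv_yacht_hydrodynamics.json.
    path = os.path.join(CONFIG_FOLDER, f'eiv_{data_name}.json')
    with open(path) as f:
        return json.load(f)

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--data', required=True,
                        help='name of the dataset to train on')
    args = parser.parse_args()
    config = load_configuration(args.data)
    # The actual training call is omitted; the point is only that all
    # hyperparameters now come from the JSON file rather than the script.
    print(f'Loaded configuration for {args.data} with keys {sorted(config)}')
```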
- Dec 10, 2021
  - Jörg Martin authored: Previously, this was done by looking at the average of the predictions over the posterior predictive. This was removed, as it didn't make much sense.
- Dec 09, 2021
  - Jörg Martin authored: Several metrics were included in `evaluate_tabular.py`. To this end the file `coverage_metrices.py` was added, and the processing of a larger number of metrics in `evaluate_tabular.py` was simplified.
  - Jörg Martin authored: The `evaluate_tabular.py` script was simplified to ease the analysis of more quantities. Several coverage quantities and the bias were added to these quantities. However, they perform rather poorly so far.
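In spirit, the bias and a simple coverage fraction could be computed as below; the function names and the exact interval construction are assumptions, with only "coverage" and "bias" named in the log:

```python
import numpy as np

def bias(predictions, targets):
    # Mean signed deviation of the predictions from the targets.
    return np.mean(predictions - targets)

def coverage(pred_mean, pred_std, targets, z=1.96):
    # Fraction of targets inside the central interval
    # pred_mean +/- z * pred_std; z = 1.96 corresponds to a nominal
    # 95% interval under a Gaussian assumption.
    lower = pred_mean - z * pred_std
    upper = pred_mean + z * pred_std
    return np.mean((targets >= lower) & (targets <= upper))
```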
- Dec 07, 2021
  - Jörg Martin authored: Added EiV training scripts for the three datasets and also included bias evaluation in `evaluate_tabular`. The inclusion of some coverage measure is still needed.
- Dec 01, 2021
  - Jörg Martin authored: This covers all regression datasets treated in the MC Dropout and Deep Ensembles papers. Results are comparable or even better. For multivariate datasets, the `decouple_dimensions` keyword in the evaluation scripts can be used to follow the (rather weird) convention of these papers.
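One plausible reading of `decouple_dimensions`, treating each output dimension separately before averaging, is sketched below; the actual convention in the papers and in the code may differ:

```python
import numpy as np

def rmse(predictions, targets, decouple_dimensions=False):
    # predictions, targets: arrays of shape (n_samples, n_dimensions)
    if decouple_dimensions:
        # Hypothetical reading of the papers' convention: one RMSE per
        # output dimension, averaged afterwards.
        per_dim = np.sqrt(np.mean((predictions - targets) ** 2, axis=0))
        return np.mean(per_dim)
    # Standard joint RMSE over all dimensions at once.
    return np.sqrt(np.mean((predictions - targets) ** 2))
```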