- Feb 01, 2022
Jörg Martin authored
- Jan 27, 2022
Jörg Martin authored
Jörg Martin authored
Jörg Martin authored
Was tested on EiV linear and non-EiV linear, but needs a further environment to generate plots.
- Jan 25, 2022
Jörg Martin authored
- Jan 13, 2022
Jörg Martin authored
Jörg Martin authored
Jörg Martin authored
Jörg Martin authored
- Jan 12, 2022
Jörg Martin authored
Jörg Martin authored
Jörg Martin authored
Jörg Martin authored
- Jan 11, 2022
Jörg Martin authored
- Jan 10, 2022
Jörg Martin authored
By this we mean metrics that can only be computed once all seeds are evaluated, so that, in particular, no std can be computed for them. The only metric of this kind at this point is the average of the absolute values of the x-dependent biases ('avg-bias'). These metrics will only be computed for datasets with a dataloader of the type EIVData.repeated_sampling.repeated_sampling.
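A minimal sketch of what such a cross-seed metric could look like; the function name and call signature are illustrative, not the repository's API. The x-dependent bias at a point x is taken as the mean prediction over all seeds minus the ground truth at x, and 'avg-bias' averages its absolute value over all x. Since it pools all seeds into one number, no per-seed std can be attached to it.

```python
def avg_bias(predictions_per_seed, ground_truth):
    """Hypothetical 'avg-bias' sketch.

    predictions_per_seed: list over seeds, each a list of predictions per x.
    ground_truth: list of true values, one per x.
    """
    n_seeds = len(predictions_per_seed)
    n_points = len(ground_truth)
    total = 0.0
    for i in range(n_points):
        # average the predictions of all seeds at this x
        mean_pred = sum(preds[i] for preds in predictions_per_seed) / n_seeds
        # |x-dependent bias| at this x
        total += abs(mean_pred - ground_truth[i])
    return total / n_points
```

For example, two seeds predicting [1.0, 2.0] and [3.0, 2.0] against truths [2.0, 2.0] have seed-averaged predictions [2.0, 2.0], hence an avg-bias of 0.0, even though each individual seed is biased at the first point.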
Jörg Martin authored
Jörg Martin authored
This now allows computing (the average of) an x-dependent bias.
- Jan 07, 2022
Jörg Martin authored
- Jan 06, 2022
Jörg Martin authored
Jörg Martin authored
Jörg Martin authored
Jörg Martin authored
- Jan 05, 2022
Jörg Martin authored
Jörg Martin authored
Jörg Martin authored
For simulated examples the true coverage is evaluated; for all other datasets create_tabular will show None. The coverage computation was revised to use the prediction (the mean under the posterior predictive). Note that the "repeated sampling" assumption is treated indirectly via averaging over the 10 seeds. For simulated data this works decently; for real data one often (but not always, cf. wine) sees a discrepancy.
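A hedged sketch of the idea, with illustrative names and a standard-normal interval width as assumptions: the empirical coverage is the fraction of points whose true value lies inside the interval prediction ± q·std, and the "repeated sampling" assumption is treated indirectly by averaging the per-seed coverages over the seeds.

```python
def empirical_coverage(preds, stds, truths, q=1.96):
    """Fraction of truths inside [pred - q*std, pred + q*std] (one seed)."""
    inside = sum(
        1 for p, s, t in zip(preds, stds, truths)
        if p - q * s <= t <= p + q * s
    )
    return inside / len(truths)

def averaged_coverage(per_seed_preds, per_seed_stds, truths, q=1.96):
    """Average the per-seed coverages, e.g. over 10 seeds."""
    covs = [
        empirical_coverage(p, s, truths, q)
        for p, s in zip(per_seed_preds, per_seed_stds)
    ]
    return sum(covs) / len(covs)
```

For simulated data the resulting number can be compared against the nominal level (about 0.95 for q = 1.96 under Gaussian noise); for real data such a ground-truth comparison is not available, which matches the None shown by create_tabular.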
Jörg Martin authored
Jörg Martin authored
- Jan 04, 2022
Jörg Martin authored
Jörg Martin authored
Jörg Martin authored
- Dec 17, 2021
Jörg Martin authored
Jörg Martin authored
Jörg Martin authored
Jörg Martin authored
Jörg Martin authored
- Dec 16, 2021
Jörg Martin authored
Jörg Martin authored
The script evaluate_tabular.py was renamed to evaluate_metrics. This script now not only prints the results but also stores them in JSON files in an Experiments/results folder (which should be created beforehand). These files can be read via a new script, create_tabular.py. The JSON files have now all been changed to a less frequent update of std_y.
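The store/read round trip described above can be sketched as follows; the file-name pattern and the helper names are assumptions for illustration, not the scripts' actual interface. An evaluate_metrics-style step dumps a metrics dict to a JSON file under Experiments/results, and a create_tabular-style step reads it back.

```python
import json
from pathlib import Path

def store_results(metrics, data_name, folder="Experiments/results"):
    """Dump a metrics dict to a JSON file in the results folder."""
    Path(folder).mkdir(parents=True, exist_ok=True)
    path = Path(folder) / f"metrics_{data_name}.json"
    with open(path, "w") as f:
        json.dump(metrics, f, indent=2)
    return path

def load_results(data_name, folder="Experiments/results"):
    """Read the stored metrics back, e.g. to build a table."""
    with open(Path(folder) / f"metrics_{data_name}.json") as f:
        return json.load(f)
```

Keeping the per-dataset results as plain JSON decouples the (slow) evaluation from the (fast) table generation, so tables can be rebuilt without re-running the metrics.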
Jörg Martin authored
Jörg Martin authored
This is now based on the JSON files in the results folder. evaluate_tabular.py has been renamed to evaluate_metrics, and the JSON files have also been updated. It still needs to be checked whether this is correct for all datasets.
- Dec 15, 2021
Jörg Martin authored