```
pip install EIVPackage/
```
Installing this package makes three modules available to the Python environment: `EIVArchitectures` (for building EiV models), `EIVTrainingRoutines` (containing a general training framework) and `EIVGeneral` (containing a single module needed for repeated sampling).
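A quick way to check that the installation succeeded is to import the three modules from the command line (a minimal sanity check, not part of the original instructions):
```
python -c "import EIVArchitectures, EIVTrainingRoutines, EIVGeneral; print('EIV modules found')"
```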
## Training and pre-trained models
To avoid the need to retrain the models, we provide the pre-trained network parameters for download under the following [link](https://drive.google.com/file/d/1O_5uudTLbvw_bviK1YSTbWJGPXueP_Uf/view?usp=sharing):
```
https://drive.google.com/
file/d/1O_5uudTLbvw_bviK1YSTbWJGPXueP_Uf/view?usp=sharing
```
Clicking "Download" on that page will download a zipped folder `saved_networks_copy.zip`. Copy the contents of the unzipped folder into `Experiments/saved_networks`. The remainder of this section can then be skipped.
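On a Unix-like system this can be done from the command line, for example as follows (a sketch assuming the archive unpacks into a folder named `saved_networks_copy`):
```
unzip saved_networks_copy.zip
cp -r saved_networks_copy/* Experiments/saved_networks/
```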
### General comments and time required
The preprint contains results for four different datasets: data following a noisy Mexican hat, a modulated 5D polynomial (multinomial), a dataset on [wine quality](https://archive.ics.uci.edu/ml/datasets/wine+quality) and the Boston Housing dataset. The source code contains separate training scripts for each dataset and for EiV and non-EiV models. For the Mexican hat example there are two training scripts per model.
While training a single network takes around an hour (for the multinomial) or a couple of minutes (for all other datasets), the scripts below loop over different Deming factors (for the EiV models), random seeds and noise levels (for the Mexican hat and the multinomial), so their execution takes substantially longer. For all datasets except the multinomial this amounts to a computational time of around a day (depending on the available resources); for the multinomial dataset it amounts to a couple of days (around four, again depending on the available resources). The non-EiV scripts run substantially faster, as they do not loop over Deming factors and their algorithm is roughly twice as fast for the settings used here.
### Starting the training
All training scripts, together with the scripts to load the data, are contained in the folder `Experiments`. With the packages from above installed, training can be started by running, from within `Experiments`,
```
python <name-of-training-script>
```
where `<name-of-training-script>` should be replaced with one of the following:
+ *Mexican hat dataset*: `train_eiv_mexican.py` (EiV) and `train_noneiv_mexican.py` (non-EiV). There are also two versions that do not loop over `std_x` and only use 0.07 (used for Figure 1): `train_eiv_mexican_fixed_std_x.py` (EiV) and `train_noneiv_mexican_fixed_std_x.py` (non-EiV).
+ *Multinomial dataset*: `train_eiv_multinomial.py` (EiV) and `train_noneiv_multinomial.py` (non-EiV).
+ *Wine quality dataset*: `train_eiv_wine.py` (EiV) and `train_noneiv_wine.py` (non-EiV).
+ *Boston Housing dataset*: `train_eiv_housing.py` (EiV) and `train_noneiv_housing.py` (non-EiV).
## Evaluation
The trained models are evaluated using the four [Jupyter](https://jupyter.org/) notebooks contained within `Experiments`:
+ `evaluate_mexican.ipynb` for the Mexican hat dataset
+ `evaluate_multinomial.ipynb` for the multinomial dataset (needs around 1h 45min for execution)
+ `evaluate_wine.ipynb` for the wine quality dataset
+ `evaluate_housing.ipynb` for the Boston Housing dataset
To start Jupyter in a browser, run from within `Experiments`
```
jupyter notebook
```
and click, in the browser tab that opens, on the notebook you want to execute. Further instructions are given in the headers of the notebooks.
All notebooks run on the CPU by default. To use the GPU (if available) for the computations in a notebook, set the flag `use_gpu` in its second cell to `True`.
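The notebooks can also be executed non-interactively from the command line via `nbconvert` (a convenience sketch, not part of the original instructions; it assumes `nbconvert` is installed alongside Jupyter):
```
jupyter nbconvert --to notebook --execute evaluate_wine.ipynb --output evaluate_wine_run.ipynb
```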
## Results
All results contained in the preprint are produced by the Jupyter notebooks mentioned in the section *Evaluation*. Plots are displayed in the notebooks and are, in addition, saved within the folder `Experiments/saved_images`. The contents of Table 1 in the preprint, shown below,

![](tabular.pdf)

arise from running `evaluate_mexican.ipynb` (for the *Mexican hat* columns) and `evaluate_multinomial.ipynb` (for the *multinomial* columns) with `std_x` equal to 0.05, 0.07 and 0.10 (see the instructions in the headers of the notebooks).
## Contributing
Will be completed upon publication. The code will be made publicly available in a repository under a BSD-like license.