# SDC:GRIS Inversion Pipeline
This is the SDC pipeline code for performing inversions of GRIS spectropolarimetric data.
The code currently uses the [Very Fast Inversion of the Stokes Vector (VFISV)](https://gitlab.leibniz-kis.de/borrero/vfisv_spec) code v5.0 (the version for spectrograph data)
as the main backend to carry out a Milne-Eddington Stokes inversion for individual spectral lines.
The manual for the original code can be found here: [manual_vfisv](https://gitlab.leibniz-kis.de/borrero/vfisv_spec/-/blob/master/manual_vfisv.pdf)
This implementation of the VFISV code facilitates seamless processing of the data provided by the SDC.
## Implementation
The current implementation of the pipeline is a version of the VFISV code modified to work with GRIS data. The GRIS L1 header and Stokes data are extracted
by a Python core module and sent to a bare VFISV process via an MPI intercommunicator. VFISV performs the inversion,
and the buffer with the inversion results is communicated back to the Python module. The Python module propagates the
keywords from the L1 header, packages the inversion results, and outputs a FITS file (when used as a command-line interface) or
returns an NDarray (when called within a script).
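The keyword-propagation step can be pictured as follows. This is only an illustrative sketch, not the pipeline's actual code; the `propagate_keywords` helper and the list of keywords it keeps are hypothetical.

```python
def propagate_keywords(l1_header, keep=("DATE-OBS", "TELESCOP", "WAVELNTH")):
    """Carry selected L1 keywords into the output header (hypothetical sketch)."""
    out = {k: l1_header[k] for k in keep if k in l1_header}
    # Record that an inversion was applied to the L1 data
    out["HISTORY"] = "Milne-Eddington inversion (VFISV)"
    return out

header = propagate_keywords({"DATE-OBS": "2015-09-19", "EXTRA": 1})
```

Keywords not on the keep-list (here `EXTRA`) are dropped, so the output header stays minimal.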
## Install and usage
You can either use Docker or install it natively.
### 1. Using Docker
Go to the directory where the data files (`level1_split`) are located.
```shell
docker run -it --rm -v $PWD:/home/grisuser ghcr.io/vigeesh/sdc-grisinv
```
This will pull (if necessary) and run the latest image from the container repo.
Within the new shell,
```shell
mpiexec -n 1 vfisv \
--path='data/' \
--id=3 \
--line=15648.514 \
--numproc=20 \
--width=1.8 \
--out='output.fits'
```
### 2. Native installation
#### Requirements
It is highly recommended to install the pipeline in a new `conda` environment.
If you don't have conda installed, check [Miniconda](https://docs.conda.io/en/latest/miniconda.html) for a minimal installer for conda.
To install Miniconda:
```sh
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
```
When prompted, install it in your home path, e.g. `~/conda`.
Conda is cross-language and can install non-Python libraries and tools (e.g. compilers, OpenMPI) in user space.
We recommend using the conda-forge channel for installing the required packages.
> Note: In the conda-forge channel, NumPy is built against a dummy "BLAS" package.
> When a user installs NumPy from conda-forge, that BLAS package then gets
> installed together with the actual library - this defaults to OpenBLAS ...
> (see the [NumPy documentation](https://numpy.org/install/) for more info.)
First, create a conda environment (e.g. named `gris_env`) and install the required packages from conda-forge; then activate the newly created environment.
```sh
conda create -n gris_env -c conda-forge mpi4py numpy scipy gfortran_linux-64 lapack matplotlib
conda activate gris_env
```
#### Installation
You can install the pipeline code directly from the GitLab repo using `pip`:
```sh
pip install git+https://gitlab.leibniz-kis.de/sdc/gris/grisinv.git
```
or clone the repo and install from the local checkout:
```sh
git clone https://gitlab.leibniz-kis.de/sdc/gris/grisinv.git
cd grisinv
pip install .
```
This builds the code and installs the command-line tool (`vfisv`) in your `conda` environment.
Set the `LD_LIBRARY_PATH` so that the MPI libraries provided by conda take precedence:
```shell
export LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH
```
## Help
```sh
Usage: vfisv [OPTIONS]
Options:
-p, --path TEXT Path to the fits files
-d, --id INTEGER Observation ID
-o, --out TEXT Output fits file
-n, --numproc INTEGER Number of processors to run on
-l, --line FLOAT Wavelength of the line (Angstrom)
-w, --width FLOAT Wavelength range +/- (Angstrom)
-h, --help Show this message and exit.
```
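The `--line` and `--width` options together define the spectral window that is inverted, i.e. wavelength points within `line ± width`. A minimal sketch of that selection (illustrative only; the pipeline's internal logic may differ):

```python
def window_indices(wavelengths, line, width):
    # Keep wavelength points within line +/- width (Angstrom)
    return [i for i, w in enumerate(wavelengths)
            if line - width <= w <= line + width]

# Synthetic wavelength grid around the Fe I 15648.514 A line
grid = [15646.0 + 0.04 * i for i in range(100)]
idx = window_indices(grid, line=15648.514, width=1.8)
```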
## Example usage
### Command line utility
To run the VFISV inversion on multiple processors, you can either call `mpiexec` with one process
and set `--numproc` to the required number of processors to run the inversion on:
```sh
mpiexec -n 1 vfisv \
--path='/dat/sdc/gris/20150919/level1_split/' \
--id=3 \
--line=15648.514 \
--numproc=20 \
--width=1.8 \
--out='output.fits'
```
or, call `mpiexec` directly with the required number of processors and the python core will distribute the inversion.
```shell
mpiexec -n 20 vfisv \
--path='/dat/sdc/gris/20150919/level1_split/' \
--id=3 \
--line=15648.514 \
--width=1.8 \
--out='output.fits'
```
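In the multi-process case, the Python core distributes the inversion work over the MPI ranks. The even pixel split below is only an illustrative sketch of such a distribution, not necessarily the scheme the pipeline actually uses:

```python
def split_pixels(n_pixels, n_ranks):
    # Divide pixels as evenly as possible among MPI ranks (illustrative)
    base, extra = divmod(n_pixels, n_ranks)
    counts = [base + (1 if r < extra else 0) for r in range(n_ranks)]
    offsets = [sum(counts[:r]) for r in range(n_ranks)]
    return counts, offsets

# e.g. 103 map pixels over 20 ranks: the first 3 ranks get one extra pixel
counts, offsets = split_pixels(n_pixels=103, n_ranks=20)
```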
To output uncertainties, use the `--errors='<filename>'` option.
If you want to check the line fits, use `--diagnose='<filename>'`.
For observations with multiple maps, the diagnose filename is suffixed with the map number, e.g.
`filename_diagnose_001.fits`.
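The map-number suffixing can be pictured like this (the `with_map_number` helper is hypothetical, not part of the pipeline's API):

```python
from pathlib import Path

def with_map_number(filename, map_number):
    # Insert a zero-padded map number before the extension (hypothetical helper)
    p = Path(filename)
    return str(p.with_name(f"{p.stem}_{map_number:03d}{p.suffix}"))

print(with_map_number("filename_diagnose.fits", 1))  # filename_diagnose_001.fits
```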
A run with all the options looks like:
```shell
mpiexec -n 1 vfisv \
--path='/dat/sdc/gris/20150919/level1_split/' \
--id=3 --line=15648.514 \
--numproc=20 \
--width=1.8 \
--out='test_inversion.fits' \
--preview='test_preview.png' \
--errors='test_errors.fits' \
--diagnose='test_diagnose.fits' \
--log='test_log.txt'
```
### Python interface
The pipeline can also be run from within a Python script. To do so, launch Python via `mpiexec` with one process.
With `example.py`:
```python
from grisinv import vfisv
inversions, fits, header = vfisv('/dat/sdc/gris/20150920/level1_split/',5,15648.514,1.8,20)
```
you can run,
```sh
mpiexec -n 1 python example.py
```
### Displaying results
```shell
ds9 -multiframe -match frame image -lock slice image -lock frame image -single -zoom to fit output.fits
```
or, with python (using `sunpy`)
```python
from sunpy.map import Map
m = Map('output.fits')
# Fitted continuum
m[0].peek()
# Line-of-sight velocity
m[1].peek()
# Inclination
m[2].peek()
# Azimuth
m[3].peek()
# Field strength
m[4].peek()
```