Commit b712fd9c authored by Vigeesh Gangadharan's avatar Vigeesh Gangadharan

Merge branch 'develop' into 'master'

Clean version, add diagnose

Closes #28, #29, #25, #27, and #14

See merge request sdc/grisinv!7
parents da899670 75b5a532
Pipeline #1275 passed in 2 minutes and 30 seconds
@@ -3,7 +3,7 @@
.before_script_linux: &before_script_linux
- apt-get -qq install -y make
-  - conda create -n gris_inv -c conda-forge mpi4py numpy scipy gfortran_linux-64 lapack
+  - conda create -n gris_inv -c conda-forge mpi4py numpy scipy gfortran_linux-64 lapack matplotlib
- conda init bash
- source ~/.bashrc
- conda activate gris_inv
@@ -27,7 +27,7 @@ We recommend you use the conda-forge channel for installing the required packages
First, create a conda environment (e.g. named `gris_env`) and install the required packages from conda-forge. <br>
Next, activate the newly created conda environment.
```sh
-conda create -n gris_env -c conda-forge mpi4py numpy scipy gfortran_linux-64 lapack
+conda create -n gris_env -c conda-forge mpi4py numpy scipy gfortran_linux-64 lapack matplotlib
conda activate gris_env
```
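A quick way to confirm the new environment is usable is to import the Python-level packages (a minimal check, assuming `gris_env` is active; `gfortran_linux-64` and `lapack` are build-time dependencies with no importable module):
```python
# Minimal sanity check for the freshly created conda environment.
import matplotlib
import mpi4py
import numpy
import scipy

print(mpi4py.__version__, numpy.__version__, scipy.__version__, matplotlib.__version__)
```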
@@ -78,7 +78,7 @@ First, set the `LD_LIBRARY_PATH` so that the MPI libraries provided by conda take precedence.
```sh
export LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH
```
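To verify that `mpi4py` now resolves against the conda-provided MPI rather than a system installation, you can print the linked library version (a quick diagnostic, not part of the pipeline):
```python
# Report which MPI implementation mpi4py was linked against.
from mpi4py import MPI

print(MPI.Get_library_version())
```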
-To run the VFISIV inversion on multi-processor and output a Magnetogram, you can either call `mpiexec` with one processor
+To run the VFISV inversion on multiple processors, you can either call `mpiexec` with one processor
and set `--numproc` to the required number of processors to run the inversion on.
@@ -92,7 +92,7 @@ mpiexec -n 1 vfisv \
Or, call `mpiexec` directly with the required number of processors, and the Python core will distribute the inversion.
@@ -101,6 +101,35 @@
```shell
mpiexec -n 20 vfisv \
--path='/dat/sdc/gris/20150919/level1_split/' \
--id=3 \
--line=15648.514 \
--width=1.8 \
--out='output.fits'
```
To output uncertainties, use the `--errors='<filename_errors.fits>'` option.<br>
If you want to check the line fits, use `--diagnose='<filename_diagnose.fits>'`.<br>
For observations with multiple maps, `filename_diagnose` is suffixed with the map number, e.g.
`filename_diagnose_001.fits`.<br>
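For example, the per-map diagnose files from one run can be collected afterwards like this (illustrative only; the `_NNN` suffix follows the naming rule above):
```python
# Gather all per-map diagnose files produced by a single run.
from glob import glob

diagnose_files = sorted(glob('filename_diagnose_*.fits'))
print(diagnose_files)  # e.g. ['filename_diagnose_001.fits', 'filename_diagnose_002.fits']
```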
A run with all the options will look like:
```shell
mpiexec -n 1 vfisv \
--path='/dat/sdc/gris/20150919/level1_split/' \
--id=3 --line=15648.514 \
--numproc=20 \
--width=1.8 \
--out='test_inversion.fits' \
--preview='test_preview.png' \
--errors='test_errors.fits' \
--diagnose='test_diagnose.fits'
```
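The auxiliary outputs are ordinary FITS files and can be inspected, for instance, with `astropy` (a sketch; `astropy` is an assumed extra dependency, not part of the install line above):
```python
# Peek at the structure of the error and diagnose products.
from astropy.io import fits

for name in ('test_errors.fits', 'test_diagnose_001.fits'):
    with fits.open(name) as hdul:
        hdul.info()
```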
### Python interface
The pipeline can also be run from within a Python script. To do so, call Python using `mpiexec` with one processor.
@@ -108,7 +137,7 @@
With `example.py`:
```python
from grisinv import vfisv
-inv, header = vfisv('/dat/sdc/gris/20150920/level1_split/',5,15648.514,1.8,20)
+inversions, fits, header = vfisv('/dat/sdc/gris/20150920/level1_split/',5,15648.514,1.8,20)
```
you can run:
@@ -118,6 +147,23 @@
```sh
mpiexec -n 1 python example.py
```
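What you do with the returned values is up to you; below is a minimal sketch, assuming `inversions` is array-like and `header` is a FITS header (the middle return value is renamed to avoid shadowing `astropy.io.fits`):
```python
# example.py -- run with: mpiexec -n 1 python example.py
from astropy.io import fits
from grisinv import vfisv

# 'fitted' renamed from 'fits' to avoid clashing with astropy.io.fits.
inversions, fitted, header = vfisv('/dat/sdc/gris/20150920/level1_split/', 5, 15648.514, 1.8, 20)

# Assumption: inversions is a NumPy array and header a FITS header.
fits.writeto('example_inversion.fits', inversions, header=header, overwrite=True)
```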
### Displaying results
```shell
ds9 -multiframe -match frame image -lock slice image -lock frame image -single -zoom to fit output.fits
```
Or, with Python (using `sunpy`):
```python
from sunpy.map import Map
m = Map('output.fits')
# Fitted continuum
m[0].peek()
# Line-of-sight velocity
m[1].peek()
# Inclination
m[2].peek()
# Azimuth
m[3].peek()
# Field strength
m[4].peek()
```
@@ -3,7 +3,7 @@ Temporal Information,DATE,DATE-OBS,DATE-BEG,DATE-END,DATE-AVG
Instrument and Processing Info,ORIGIN,OBSRVTRY,TELESCOP,INSTRUME,GRATING,CAMERA,OBSGEO-X,OBSGEO-Y,OBSGEO-Z,AWAVLNTH,AWAVMAX,AWAVMIN,WAVEUNIT
Detector Readout and Data Scaling,OBS_MODE,IMGSYS,XPOSURE,TEXPOSUR,NSUMEXP
Adaptive Optics and Derotator,AOSYSTEM,AO_LOCK,AO_NMODE,ATMOS_R0,ROTANGLE,ROTCODE,ROTTRACK
-L2 Processing,LEVEL,FILENAME,EXTNAME,BTYPE,BUNIT,WAVELNTH,WAVE_STR,CREATOR,VERS_SW,PRSTEP1,PRPROC1,PRPARA1
+L2 Processing,LEVEL,FILENAME,EXTNAME,BTYPE,BUNIT,WAVELNTH,WAVE_STR,WAVERPIX,WAVERVAL,WAVEDELT,WAVESYER,CREATOR,VERS_SW,PRSTEP1,PRPROC1,PRPARA1
Projection and Pointing,WCSNAME,WCSAXES,CTYPE1,CUNIT1,CRPIX1,CRVAL1,CDELT1,CSYER1,CTYPE2,CUNIT2,CRPIX2,CRVAL2,CDELT2,CSYER2,CTYPE3,CUNIT3,CRPIX3,CRVAL3,CDELT3,CSYER3,PC1_1,PC1_2,PC2_1,PC2_2,LONPOLE,LATPOLE,MJDREF,AZIMUT,ELEV_ANG,SLITORIE,SOLAR_L0,SOLAR_P0
Measurement Parameters,GRATANGL,SLIT_WID,STEPSIZE,STEPANGL,SLITCNTR
File Integrity,CHECKSUM,DATASUM
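If the new `WAVERPIX`/`WAVERVAL`/`WAVEDELT` keywords follow the usual FITS reference-pixel convention (an assumption; their semantics are not documented here), the wavelength grid of a product could be rebuilt like this:
```python
# Rebuild the wavelength axis from the (assumed) WAVER* reference keywords.
import numpy as np
from astropy.io import fits

hdr = fits.getheader('output.fits')
# Assumption: WAVERPIX/WAVERVAL/WAVEDELT act like CRPIX/CRVAL/CDELT and the
# spectral dimension is the first FITS axis (NAXIS1).
pixels = np.arange(hdr['NAXIS1']) + 1.0  # FITS pixel indices are 1-based
wavelengths = hdr['WAVERVAL'] + (pixels - hdr['WAVERPIX']) * hdr['WAVEDELT']
```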
@@ -227,7 +227,14 @@ PROGRAM VFISV_SPEC
ENDDO
!Send inverted data to the grandmaster.
-IF (NTASKS == 1) CALL MPI_SEND(INV,LX*LY*16,MPI_REAL,0,4000,intracomm,IERR)
+IF (NTASKS == 1) THEN
+   CALL MPI_SEND(INV,LX*LY*16,MPI_REAL,0,4000,intracomm,IERR)
+   CALL MPI_SEND(SIFIT,LX*LY*NUMW,MPI_REAL,0,6660,intracomm,IERR)
+   CALL MPI_SEND(SQFIT,LX*LY*NUMW,MPI_REAL,0,6661,intracomm,IERR)
+   CALL MPI_SEND(SUFIT,LX*LY*NUMW,MPI_REAL,0,6662,intracomm,IERR)
+   CALL MPI_SEND(SVFIT,LX*LY*NUMW,MPI_REAL,0,6663,intracomm,IERR)
+END IF
!
DEALLOCATE(SI,SQ,SU,SV,INV,ICONT)
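On the Python side, the grandmaster presumably posts matching receives for these tags; a hedged `mpi4py` sketch of what that could look like (the `recv_fitted_stokes` helper, buffer shapes, and dtype are illustrative, not taken from the actual grisinv source):
```python
# Hypothetical receive loop matching the Fortran MPI_SEND tags 6660-6663 above.
import numpy as np
from mpi4py import MPI

def recv_fitted_stokes(comm, lx, ly, numw, source):
    fitted = {}
    for name, tag in (('SIFIT', 6660), ('SQFIT', 6661), ('SUFIT', 6662), ('SVFIT', 6663)):
        buf = np.empty(lx * ly * numw, dtype=np.float32)  # MPI_REAL is 32-bit
        comm.Recv([buf, MPI.REAL], source=source, tag=tag)
        fitted[name] = buf.reshape((lx, ly, numw), order='F')  # Fortran layout assumed
    return fitted
```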