SAGECAL
=======
Read INSTALL for installation. This file gives a brief guide to using SAGECal.
Warning: this file may be obsolete. Use sagecal -h to see up-to-date options.


Step by Step Introduction:
#######################################################################
1a) Calibrate data in the standard way using BBS/CASA or anything else.
Use NDPPP to average the data in your MS down to a few channels (also average in time to about 10 sec). Also flag any spikes in the data.
1b) For subtraction of the ATeam sources (CasA, CygA, ...) from raw data, no initial calibration is necessary. Just run sagecal on the raw data, but it is better to scale the sky model to match the apparent flux of the sources being subtracted.
#######################################################################
2) Sky Model:
2a) Make an image of your MS (using ExCon/casapy).
Use Duchamp to create a mask for the image. Use buildsky to create a sky model (see the README file in the top-level directory). Also create a proper cluster file.
Special options to buildsky: -o 1 (NOTE: not -o 2)

Alternatively, create these files by hand according to the following formats.

2b)Cluster file format:
cluster_id chunk_size source1 source2 ...
e.g.

0 1 P0C1 P0C2
2 3 P11C2 P11C1 P13C1

Note: giving negative values for cluster_id means those clusters are not subtracted from the data.
chunk_size: the number of solutions (chunks) per solve run, enabling hybrid solution intervals. E.g. if -t 120 is used
to select 120 timeslots, cluster 0 (chunk size 1) will find one solution using the full 120 timeslots, while cluster 2 (chunk size 3) will solve for every 120/3=40 timeslots; see the sketch below.
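
As an illustration only, the Python sketch below prints the effective solution interval per cluster for a given -t value ('my_clustering' is just the placeholder file name used elsewhere in this README):

def solution_intervals(cluster_file, t_timeslots):
    # Print how many timeslots each cluster's solution spans for a given -t value.
    with open(cluster_file) as f:
        for line in f:
            if line.startswith('#') or not line.strip():
                continue
            cluster_id, chunk_size = line.split()[:2]
            print(f"cluster {cluster_id}: one solution per "
                  f"{t_timeslots // int(chunk_size)} timeslots")

solution_intervals("my_clustering", 120)
# With the example cluster file above and -t 120:
#   cluster 0: one solution per 120 timeslots
#   cluster 2: one solution per 40 timeslots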

2c)Sky model format:
#name h m s d m s I Q U V spectral_index RM extent_X(rad) extent_Y(rad) pos_angle(rad) freq0

or

#name h m s d m s I Q U V spectral_index1 spectral_index2 spectral_index3 RM extent_X(rad) extent_Y(rad) pos_angle(rad) freq0

e.g.

P1C1 0 12 42.996 85 43 21.514 0.030498 0 0 0 -5.713060 0 0 0 0 115039062.0
P5C1 1 18 5.864 85 58 39.755 0.041839 0 0 0 -6.672879 0 0 0 0 115039062.0
# A Gaussian major,minor 0.1375,0.0917 deg diameter -> radius (rad), PA 43.4772 deg (-> rad)
# Position Angle: "West from North (counter-clockwise)" (0 deg = North, 90 deg = West). 
# Note: PyBDSM and BBS use "North from East (counter-clockwise)" (0 deg = East, 90 deg = North). 
G0  5 34 31.75 22 00 52.86 100 0 0 0 0.00 0 0.0012  0.0008 -2.329615801 130.0e6
# A Disk radius=0.041 deg
D01 23 23 25.67 58 48 58 80 0 0 0 0 0 0.000715 0.000715 0 130e6
# A Ring radius=0.031 deg
R01 23 23 25.416 58 48 57 70 0 0 0 0 0 0.00052 0.00052 0 130e6
# A shapelet ('S3C61MD.fits.modes' file must be in the current directory)
S3C61MD 2 22 49.796414 86 18 55.913266 0.135 0 0 0 -6.6 0 1 1 0.0 115000000.0
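
As a convenience sketch (not part of SAGECal), the Python snippet below splits one sky model line in the single-spectral-index format above into named fields:

def parse_source(line):
    # Field order: name h m s d m s I Q U V spectral_index RM extent_X extent_Y pos_angle freq0
    f = line.split()
    return {
        "name": f[0],
        "ra_hms": (float(f[1]), float(f[2]), float(f[3])),
        "dec_dms": (float(f[4]), float(f[5]), float(f[6])),
        "I": float(f[7]), "Q": float(f[8]), "U": float(f[9]), "V": float(f[10]),
        "spectral_index": float(f[11]), "RM": float(f[12]),
        "extent_X": float(f[13]), "extent_Y": float(f[14]),
        "pos_angle": float(f[15]), "freq0": float(f[16]),
    }

print(parse_source("P1C1 0 12 42.996 85 43 21.514 0.030498 0 0 0 -5.713060 0 0 0 0 115039062.0"))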


Note: Comments starting with a '#' are allowed in both sky model and cluster files.
Note: 3rd order spectral indices are also supported; use the -F 1 option in sagecal.
Note: Spectral indices use the natural logarithm, exp(ln(I0) + p1*ln(f/f0) + p2*ln(f/f0)^2 + ...), so if you have a model with common logarithms, like 10^(log(J0) + q1*log(f/f0) + q2*log(f/f0)^2 + ...), the conversion is

ln(I0) + p1*ln(f/f0) + p2*ln(f/f0)^2 + ... = ln(10)*(log(J0) + q1*log(f/f0) + q2*log(f/f0)^2 + ...)
                                           = ln(10)*(ln(J0)/ln(10) + q1*ln(f/f0)/ln(10) + q2*ln(f/f0)^2/ln(10)^2 + ...)

so

I0=J0
p1=q1
p2=q2/ln(10)
p3=q3/ln(10)^2
...
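
In other words, p_k = q_k/ln(10)^(k-1) (and I0 = J0). A minimal Python sketch of this conversion (the function name is ours, not part of SAGECal):

import math

def common_to_natural(q):
    # q = [q1, q2, q3, ...] from a base-10 model; returns [p1, p2, p3, ...]
    # with p_k = q_k / ln(10)**(k-1), as derived above.
    return [qk / math.log(10) ** k for k, qk in enumerate(q)]

print(common_to_natural([-0.7, 0.05, 0.01]))   # -> [-0.7, 0.0217..., 0.00188...]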


#######################################################################
3)Run sagecal
Optionally: make sure your machine has one or two working NVIDIA GPU cards (or Intel Xeon Phi MICs) to use GPU-accelerated sagecal.
Recommended usage (with GPUs):

sagecal -d my_data.MS -s my_skymodel -c my_clustering -n no.of.threads -t 60 -p my_solutions -e 3 -g 2 -l 10 -m 7 -w 1 -b 1

Choose your solution interval (-t 60) so that it is big enough to get a decent solution, but not so big that the parameters vary too much within it (about 20 minutes per solution is reasonable).

Note: It is also possible to calibrate more than one MS together. See section 4 below.
Note: To fully use GPU acceleration, use the -E 1 option.

Simulations:
With -a 1, only a simulation of the sky model is done.
With -a 1 and -p 'solutions_file', simulation is done with the sky model corrupted with solutions in 'solutions_file'.
With -a 1 and -p 'solutions_file' and -z 'ignore_file', simulation is done with the solutions in the 'solutions_file', but ignoring the cluster ids in the 'ignore_file'.
E.g. if you need to ignore cluster ids '-1', '10' and '999', create a text file containing:

-1
10
999

and use it as the 'ignore_file'.


##########################################################################
4)Distributed calibration

Use mpirun to run sagecal-mpi, example:
 mpirun  -np 11 -hostfile ./machines --map-by node --cpus-per-proc 8 
 --mca yield_when_idle 1 -mca orte_tmpdir_base /scratch/users/sarod 
 /full/path/to/sagecal-mpi -f 'MS*pattern' -A 30 -P 2 -r 5 
 -s sky.txt -c cluster.txt -n 16 -t 1 -e 3 -g 2 -l 10 -m 7 -x 10 -F 1 -j 5

Specific options:
-np 11 : 11 processes: starts 10 slaves + 1 master
./machines : lists the host names of the 11 nodes used (the 1st name is the master, normally the node where you invoke mpirun); see the example hostfile below
/scratch/users/sarod : where MPI stores temporary files (default /tmp)
-f 'MS*pattern' : Search MS names that match this pattern and calibrate all of them together
-A 30 : 30 C-ADMM iterations
-P 2 : polynomial in frequency has 2 terms
-r 5 : regularization factor is 5.0

Note: the number of slaves (set via the -np option) can be lower than the number of MS being calibrated. The program will divide the workload among the available slaves.


The rest of the options are similar to sagecal.
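
For reference, the 'machines' hostfile is just a plain-text list of host names, one per line, with the master first. The names below are placeholders; for -np 11 you would list 11 hosts:

master-node
compute-01
compute-02
...
compute-10

(10 slave nodes + 1 master = 11 host names for -np 11.)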


##########################################################################
5)Solution format
All SAGECal solutions are stored as text files. Lines starting with '#' are comments.
The first non-comment line includes some general information, i.e.
freq(MHz) bandwidth(MHz) time_interval(min) stations clusters effective_clusters

The remaining lines contain the solutions, one column per effective cluster; the first column is just a counter.
Let's say there are K effective clusters and N stations. Then there will be K+1 columns; the first column starts at 0 and increases to 8N-1,
so it can be used to count the row number. This repeats for each time interval.
Rows 0 to 7 hold the solutions for the 1st station, rows 8 to 15 for the 2nd station, and so on.
Each 8 rows of a given column represent the 8 values of a 2x2 Jones matrix. Let's say these are S0,S1,S2,S3,S4,S5,S6 and S7. Then the Jones matrix is [S0+j*S1, S4+j*S5; S2+j*S3, S6+j*S7] (the ';' separates the two rows of the 2x2 matrix).

When a cluster has a chunk size > 1, there will be more than one solution per time interval.
So for this cluster, there will be more than one column in the solution file, the exact number of columns being equal to the chunk size.
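
As a rough sketch of the layout just described (not part of SAGECal), the Python snippet below reads the first time interval of a solution file and rebuilds the 2x2 Jones matrix for each station and effective cluster:

import numpy as np

def read_first_interval(solution_file):
    # Returns an array of shape (n_stations, n_eff_clusters, 2, 2) holding the
    # Jones matrices of the first time interval, assuming the layout above.
    with open(solution_file) as f:
        lines = [l for l in f if l.strip() and not l.startswith('#')]
    # Header: freq(MHz) bandwidth(MHz) time_interval(min) stations clusters effective_clusters
    hdr = lines[0].split()
    n_stations, n_eff = int(float(hdr[3])), int(float(hdr[5]))
    # One time interval = 8*n_stations rows; drop the leading counter column.
    data = np.array([[float(v) for v in l.split()[1:]]
                     for l in lines[1:1 + 8 * n_stations]])
    jones = np.empty((n_stations, n_eff, 2, 2), dtype=complex)
    for st in range(n_stations):
        s = data[8 * st: 8 * st + 8]          # S0..S7 for this station
        jones[st, :, 0, 0] = s[0] + 1j * s[1]
        jones[st, :, 0, 1] = s[4] + 1j * s[5]
        jones[st, :, 1, 0] = s[2] + 1j * s[3]
        jones[st, :, 1, 1] = s[6] + 1j * s[7]
    return jones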