NOTCam array monitoring script documentation
For a detailed understanding of how the analysis works in both modes, one
should look at the scripts that take the images (lintest.run) as well as
the analysis scripts themselves. Here we give a rough idea of how the
different quantities are calculated.
The analysis is done using IDL scripts and many built-in IDL
functions.
Preparation procedures
- First, old NOTCam images were studied to find good areas on the array,
i.e. areas without bad columns etc. These areas can easily be redefined
for the science-grade array when the time comes to exchange the array.
- All values larger than 60000 are reset to zero in order to handle the
wrap-around of negative values (NOTCam BIAS stores data as unsigned
integers). Negative values in the reset-subtracted image occur when the
noise is high and the background is low. In particular, darks taken while
NOTCam suffered from pick-up noise did not give meaningful results
without this special treatment.
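The wrap-around fix described above can be sketched in Python (the actual scripts are IDL; the function name here is hypothetical and the frame is a flat list rather than the full 1024x1024 array):

```python
def fix_wraparound(pixels, threshold=60000):
    """Zero out values above `threshold`: in unsigned data these are
    assumed to be wrapped-around negative values from the reset
    subtraction, not real counts."""
    return [0 if p > threshold else p for p in pixels]

frame = [120, 65535, 340, 64012, 15]
print(fix_wraparound(frame))  # → [120, 0, 340, 0, 15]
```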
Good to know
- Ramp-sampling mode:
  - The first slice of the flat image-cube is used as the flat
  - The second slice of the dark image-cube is used as the dark
  - If the defined area on the flat-field image has a count level
    above 50000, the 8th slice of the flat image-cube is used
    instead, to avoid saturation
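The slice-selection rule for ramp-sampling mode amounts to a simple threshold check; a Python sketch follows (slice numbers are 1-based as in the text, and the function name and the exact quantity checked are assumptions of this illustration):

```python
def pick_flat_slice(area_mean_counts, saturation_guard=50000):
    """Return the flat image-cube slice to use in ramp-sampling mode:
    the first slice normally, the 8th slice when the count level in the
    defined area is above the 50000 ADU saturation guard."""
    return 1 if area_mean_counts <= saturation_guard else 8
```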
GAIN and RON calculations
- GAIN and RON are calculated using all possible combinations of
well-exposed flats and short-exposure-time darks:
- Reset-read-read mode: 0.5s and 1s
- Ramp-sampling mode: 4.2s
- In the end, the MEAN of all results has been taken (per quadrant)
- GAIN and RON are calculated using Janesick's method
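Janesick's method derives the gain from a pair of flats (S1, S2) and a pair of darks (B1, B2): gain = ((S1 + S2) - (B1 + B2)) / (var(S1 - S2) - var(B1 - B2)), and RON = gain * stdev(B1 - B2) / sqrt(2). A minimal Python sketch (not the NOT IDL code; frames are flat lists here, and in practice the calculation is done per quadrant):

```python
from statistics import mean, pvariance

def janesick_gain_ron(f1, f2, d1, d2):
    """Gain [e-/ADU] and read-out noise [e-] via Janesick's
    photon-transfer method, from a flat pair and a dark pair."""
    diff_flat = [a - b for a, b in zip(f1, f2)]
    diff_dark = [a - b for a, b in zip(d1, d2)]
    # signal difference of mean levels vs. variance of the difference frames
    gain = ((mean(f1) + mean(f2)) - (mean(d1) + mean(d2))) / \
           (pvariance(diff_flat) - pvariance(diff_dark))
    # read-out noise from the dark-pair difference (the sqrt(2) removes
    # the doubling of variance caused by differencing two frames)
    ron = gain * (pvariance(diff_dark) ** 0.5) / 2 ** 0.5
    return gain, ron
```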
DEAD pixels
- A pixel is here called dead if it always has zero counts
- Dead pixels are calculated from the darks with the shortest exp time
- A bunch of darks is combined using the REBIN command
- From this image the number of zero-valued pixels is counted; this
value is then divided by the total number of pixels in the array (1024x1024)
- Reset-read-read mode: the first dark image obtained with mdark is not used
- The location of the dead pixels is shown in the dead pixel map
- Old maps are always overwritten
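The dead-pixel fraction can be sketched as follows (a Python stand-in for the IDL scripts: REBIN is replaced here by a plain per-pixel average of the combined darks, and frames are flat lists):

```python
def dead_pixel_fraction(darks, npix=1024 * 1024):
    """Combine a set of short-exposure darks by averaging per pixel,
    then count the pixels that are zero in the combined frame and
    divide by the total number of pixels in the array."""
    combined = [sum(vals) / len(darks) for vals in zip(*darks)]
    n_dead = sum(1 for v in combined if v == 0)
    return n_dead / npix
```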
HOT and COLD pixels
- A pixel is defined as hot if it deviates more than +8 sigma from the mean
and as cold if it deviates more than -8 sigma from the mean
- The number of hot and cold pixels is calculated from the darks which
have the most frequently used exposure time (close to one minute)
- The chosen darks are combined into one image using the REBIN command
- A smoothed image is made using MEDIAN to smooth over 5 pixels in both
directions
- This smoothed image is subtracted from the rebinned one
- The mean level and the standard deviation (SIGCLIP_STDEV) of the new
image are found
- The number of hot and cold pixels is found by counting how many
pixel values lie beyond the given limits
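The steps above can be sketched in Python (not the IDL original: frames are 1-D lists, the 5-pixel 2-D MEDIAN smoothing becomes a 1-D running median, and `clipped_stats` is an assumed stand-in for SIGCLIP_STDEV):

```python
from statistics import mean, median, pstdev

def clipped_stats(values, clip_sigma=3.0, iters=3):
    """Iteratively sigma-clipped mean and stdev (stand-in for
    SIGCLIP_STDEV): discard outliers, then recompute the statistics."""
    vals = list(values)
    for _ in range(iters):
        m, s = mean(vals), pstdev(vals)
        if s == 0:
            break
        vals = [v for v in vals if abs(v - m) <= clip_sigma * s]
    return mean(vals), pstdev(vals)

def count_hot_cold(frame, half_window=2, nsigma=8.0):
    """Median-smooth the combined dark, subtract the smoothed frame,
    and count residuals beyond +/- nsigma from the clipped mean."""
    n = len(frame)
    smoothed = [median(frame[max(0, i - half_window):i + half_window + 1])
                for i in range(n)]
    resid = [a - b for a, b in zip(frame, smoothed)]  # combined minus smoothed
    m, s = clipped_stats(resid)
    hot = sum(1 for r in resid if r > m + nsigma * s)
    cold = sum(1 for r in resid if r < m - nsigma * s)
    return hot, cold
```

The sigma clipping matters: a single strong outlier would otherwise inflate the plain standard deviation enough to hide itself below the 8-sigma threshold.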
Linearity
- From each flat the corresponding dark is subtracted (corresponding
in terms of exptime and obtained just before or after the flat)
- A linear fit is made using the points from the 3 shortest exptimes
- All real points are compared to this fit
- The first time the deviation exceeds a given limit (1% or 3%), that
data point (in ADU) is taken as the corresponding 1% or 3% linearity
limit
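The linearity procedure can be sketched in Python (an illustration, not the IDL code; the function name is hypothetical and the fit is an ordinary least-squares line through the 3 shortest-exptime points):

```python
def linearity_limit(exptimes, counts, limit=0.01, nfit=3):
    """Fit a line through the `nfit` shortest exposure times, then
    return the count level (ADU) at the first point whose relative
    deviation from the fit exceeds `limit` (e.g. 0.01 or 0.03)."""
    xs, ys = exptimes[:nfit], counts[:nfit]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    for t, c in zip(exptimes, counts):
        predicted = slope * t + intercept
        if abs(c - predicted) / predicted > limit:
            return c  # ADU level at the first deviation beyond the limit
    return None  # stayed within the limit over the measured range
```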
Temperature
- The temperature of the array is monitored mainly in order to
understand dark-current fluctuations
- The array temperature is read from all image headers
- The MEAN, MAX and MIN temperatures during the test run are noted down
Dark current
- The number of counts as a function of exptime is shown for the
dark images
Scripts
- The scripts can be found on cassanda