This chapter describes a number of procedures for visualising and assessing the quality of raw data files. These steps need not be part of your data reduction process and do not concern the iterative map-maker.
However, there are reasons you may wish to examine your raw data in greater depth. The most likely motivation is an unexpected result from your reduction, such as higher than expected noise, artefacts in your map, or inconsistent noise across multiple tiles. This chapter will help you get to the bottom of many of these issues.
Since SCUBA-2 data for a given sub-array are broken into multiple 30-second scans by the data acquisition (DA) system, it is useful to concatenate the data into a single file. The Smurf task sc2concat can be used for this operation. The example below combines all of the files associated with Observation 8 for the s8a array into a single file.
sc2concat will automatically filter out any dark or flat-field observations, so that the concatenated file contains only the science data. Be careful when concatenating a very long observation since the output file may be too large to reasonably handle. Fifteen-minute chunks (30 files) should be sufficient.
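A command of the following form could be used; the input file names here are hypothetical, and the parameter names should be checked against the Smurf documentation for your Starlink release:

```shell
# Concatenate all s8a raw files from Observation 8 into a single file.
# File names are illustrative only; adjust to your own data.
sc2concat in='s8a20100311_00008_*' out=s8a_obs8_con
```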
sc2concat applies the flat-field by default (although this can be disabled using the noflat option on the command line).
The flat-field can also be applied manually using the flatfield command.
Here, the output will be a flat-fielded version of each science scan in Observation 8; the file names will be the original input names with _flat appended to them.
As a rule of thumb, you should apply the flat-field to your data before examining it.
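For example, a command of this form (input file names hypothetical) applies the flat-field manually, producing outputs with _flat appended to the input names:

```shell
# Flat-field each raw science file from Observation 8.
# The modification element '*_flat' appends _flat to each input name.
flatfield in='s8a20100311_00008_*' out='*_flat'
```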
There are two Kappa tasks which are extremely useful for examining your data: fitslist and ndftrace, which can be used to view the FITS headers and dimensions of the data.
fitslist: This lists the FITS header information for any file (raw or reduced). This extensive list includes dates & times, source name, scan type, pattern and velocity, size of the map, exposure time, start and end elevation, opacity, and the temperature of the instrument. An example is given below.
If you already know the name of the parameter you want to view, you can use the fitsval command instead.
ndftrace: This displays the attributes of the data structure. It will tell you the units of the data, pixel bounds, dimensions, world co-ordinate bounds and attributes, and axis assignations.
Full details of these commands can be found in the Kappa manual.
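Typical invocations look like the following (the file name is hypothetical):

```shell
# Inspect the headers and structure of a raw SCUBA-2 file.
fitslist s8a20100311_00008_0003          # full FITS header listing
fitsval s8a20100311_00008_0003 ELEND     # a single keyword (end elevation)
ndftrace s8a20100311_00008_0003          # data units, pixel bounds and WCS
```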
To display the scan pattern for your observation:

1. Load the TST file generated by jcmtstate2cat into Topcat.
2. In Topcat, select the scatter-plot option from the menu bar across the top of the window.
3. With the scatter plot displayed, set the X-axis and Y-axis values to DRA and DDEC respectively to display the scan pattern. If you are interested in seeing how any of the variables change over time, set the X axis to either Id or RTS_NUM.
The movement of the telescope throughout a scan (as well as other state information) is stored in the
MORE.SMURF.JCMTSTATE extension of a data file. The Smurf task jcmtstate2cat converts this information into a
simple ASCII tab-separated table.
Multiple files can be supplied to the command using standard shell wild cards. If you have already concatenated your data you can simply input the single concatenated file. It may be useful to view the scan pattern for your observation, particularly for maps taken at high elevations, to ensure the pattern completed successfully.
Run jcmtstate2cat with the -h option to find out more information.
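A command of the following form could be used (file names hypothetical); jcmtstate2cat writes its catalogue to standard output, so redirect it to a file:

```shell
# Convert the JCMTSTATE information from all Observation 8 files
# into a single tab-separated (TST) catalogue for plotting.
jcmtstate2cat s8a20100311_00008_*.sdf > obs8_state.tst
```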
This catalogue can be loaded into Topcat for plotting, making sure to specify the TST format during loading.
Examples of scan patterns displayed with Topcat can be seen in Figure 2.2. Detailed instructions on how to display the scan pattern for your observation are given in Figure 9.1. All of the time-varying header values are available for plotting; these include the azimuth and elevation offsets (DAZ & DEL), the WVM and 225 GHz opacity values, and the instrument temperatures (e.g. SC2_FPUTEMP gives the temperature of the focal plane).
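If you prefer to inspect the catalogue programmatically rather than in Topcat, a minimal sketch of a TST reader is shown below. The exact layout written by jcmtstate2cat is assumed here (a title line, a tab-separated heading row, a dashed separator, data rows, and an [EOD] terminator), and the example table is fabricated for illustration:

```python
def read_tst(text):
    """Parse a simple tab-separated table (TST) into a list of dicts."""
    lines = [ln for ln in text.splitlines() if ln and not ln.startswith("#")]
    # Locate the dashed separator line that follows the column headings.
    sep = next(i for i, ln in enumerate(lines)
               if set(ln.replace("\t", "")) == {"-"})
    names = lines[sep - 1].split("\t")
    rows = []
    for ln in lines[sep + 1:]:
        if ln.strip() == "[EOD]":      # end-of-data marker
            break
        rows.append(dict(zip(names, ln.split("\t"))))
    return rows

# A tiny fabricated example table (values are illustrative only):
example = "\n".join([
    "scan pattern",
    "Id\tDAZ\tDEL",
    "--\t---\t---",
    "1\t0.0\t0.0",
    "2\t1.5\t-0.5",
    "[EOD]",
])

rows = read_tst(example)
daz = [float(r["DAZ"]) for r in rows]   # azimuth offsets (arcsec)
```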
Due to the extreme accelerations at the “turn-around” points of a scan pattern (especially for pongs), the telescope finds it hard to follow the prescribed scan pattern at high elevations. To mitigate this we try to avoid observing sources at very high elevation. If the fitslist elevation keywords (ELSTART and ELEND) indicate that your map was taken at high elevation, you may consider checking the success of the scan pattern. If you find your observation has failed to follow the demanded scan pattern, don’t worry: the data are likely to still be useful. This is especially true for daisy maps, where the high exposure-time central region is usually unaffected.
Use the Starlink application Gaia to visualise the bolometer time-series data (or indeed any SCUBA-2 data file). It is started by simply typing gaia into a terminal.
Loading a file in Gaia produces two windows. The main window (see Figure 9.2) shows a map of bolometer values at a given point in time. The time slice displayed may be changed by scrolling through the time axis. This is done in the second window entitled Display image sections of a cube. The Index of plane slider towards the top of this window may be moved to display different time slices in the main window.
A third window will appear when you click on a bolometer—the Spectral plot (see Figure 9.3). This shows an automatically scaled plot of the raw time stream of data for that given bolometer. It will be overridden when you click on a different bolometer.
A second way to scroll through the time axis is to click and drag the vertical red bar on the Spectral plot window. As you do so, the array shown in the main window will automatically update.
To highlight small variations between bolometers you will need to change the auto cut and (depending on your preference) the colour scheme—both are controlled by buttons on the sidebar.
See the Gaia manual for full details.1
Any raw time-series data can be quickly regridded into sky frame coordinates using the Smurf makemap task in rebin mode. This involves no further processing of the data. The following command produces a map from the raw concatenated data; unlike the iterative mode of makemap described in the next chapter, no configuration file is required.
The output map here is called crl2688_sky.sdf and is shown in Figure 9.4. The pixel scale is left at the default values of 2 arcsec on a side at 450 µm and 4 arcsec at 850 µm (although this can be changed by supplying the pixsize parameter on the command line, where pixsize is given in arcsec).
Note that if method=rebin is omitted, the map-maker will default to the iterative method described in the next chapter.
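A rebinned map could be produced with a command of this form (input file names hypothetical; the output name matches the example above):

```shell
# Regrid the raw data directly on to the sky with no iterative processing.
# pixsize is given in arcsec; 4 arcsec is the 850 µm default.
makemap in='s8a20100311_00008_*' out=crl2688_sky method=rebin pixsize=4
```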
Cleaning raw data is an essential first step towards making a quality final map. The map-maker performs all of these cleaning steps during the pre-processing stage. The commands for manually cleaning your data are given in Appendix A. You can also check out the SMURF SRO Cookbook2 which goes into great depth on the data cleaning options.
The on-sky performance of the array can be assessed using the Smurf command calcnoise. Rather than give an absolute measure, calcnoise should be used as an indicator of array performance and stability. calcnoise cleans the data then calculates the white noise on the array (between 2 and 10 Hz by default).
It will prompt for a configuration file to describe the cleaning steps. The default is the file
/smurf/dimmconfig_calcnoise.lis that is included in Smurf. Two noise measurements are reported in the
terminal: the ‘Effective noise’ and the ‘Effective NEP’.
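An invocation of the following form could be used (file names hypothetical):

```shell
# Measure the white-noise level (2-10 Hz by default) for each sub-array.
# Accept the default configuration file when prompted, or supply your own.
calcnoise in='s8a20100311_00008_*' out=s8a_noise
```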
An output file is created for each sub-array with the NEP map stored in the .MORE.SMURF.NEP extension.
If you have a bright source in the field this will contaminate the signal. In this case you should examine the noise model from the map-maker instead—see Section 3.3 for a description and Section 9.8 for details on how to export it.
By default, the final values of the models fitted by the map-maker are not written out. However, this can be changed by setting exportndf in the configuration file to the list of models that you wish to export. In addition to the models listed in Section 3.3, you can request RES in order to export the RES model (the residual signal remaining after the other models have been removed). If NOI (the estimate of the bolometer noise levels) is exported, it is stored as the VARIANCE component of the RES model; thus, export of RES is implied if NOI is specified.
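As an illustration, a configuration-file fragment requesting export of the residual and noise models might look like the following (a sketch; the exact list syntax should be checked against the map-maker configuration documentation):

```
exportndf = (res,noi)
```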
The exportndf parameter will write out the requested models as NDF files with names based on the first input file that went into the maps for each sub-array. This is first suffixed by _con, indicating that several data files may have been concatenated together. The three-letter code for each model is then appended to the filename (such as _res for the RES model).
The variance and quality for the data are stored as the VARIANCE and QUALITY components within the
residual file NDF.