Chapter 1

 1.1 This cookbook
 1.2 Before you start: computing resources
 1.3 Before you start: Starlink software
 1.4 Options for Reducing your Data

1.1 This cookbook

This cookbook is designed to instruct ACSIS users on the best ways to reduce and visualise their data using Starlink packages.

This guide covers the following:

Style convention
In this cookbook the following styles are followed:

1.2 Before you start: computing resources

Before reducing heterodyne data, consider whether you have sufficient computing resources for your data. Only rough guidelines can be given, because resource usage depends on many factors: the spatial area and resolution, the number of observations being reduced together, the observing and bandwidth modes (see Section 3.3), and the number of subbands. The main factor, however, is the volume of data being processed concurrently.

The ORAC-DR [4] pipeline can demand close to 300 GB of peak storage for the largest (about a square degree) HARP maps, and at least 24 GB of memory during spectral-cube creation. More typical maps of individual targets, say 20 square arcminutes, would only require about 10 GB of storage, and most modern computers would have sufficient memory. The storage requirements can more than double if all intermediate files are retained for diagnostic purposes; normally, intermediate files are removed at the end of each pass through a recipe. Reducing manually permits you to tidy files as you go, but is very time consuming.

Reducing RxA data and HARP stares or small jiggle maps is undemanding of resources.

1.3 Before you start: Starlink software

This manual utilises software from the Starlink collection: Smurf [5], Kappa [8], Gaia [10], ORAC-DR [4], Convert [6], Ccdpack [11], and Picard [12]. Starlink software must be installed on your system, and the Starlink aliases and environment variables must be defined, before attempting any ACSIS data reduction. You can download Starlink from the Starlink webpage.

Below is a brief summary of the packages used in this cookbook and how to initialise them. Note that all the example commands are shown within a UNIX shell.
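As a sketch, in a bash-like shell a Starlink session might be initialised as follows (the installation path /star is an assumption; substitute the location of your own installation):

```shell
# Assumed installation location -- adjust to where Starlink is installed.
export STARLINK_DIR=/star
# Define the Starlink aliases and environment variables.
source $STARLINK_DIR/etc/profile
```

Each package is then set up by typing its name at the shell prompt, as shown in the examples below.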






The Sub-Millimetre User Reduction Facility (Smurf) contains makecube, which converts raw ACSIS time-series data into spectral cubes.
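A minimal session might look like this (the raw-data file names are only illustrative):

```shell
% smurf                                  # set up the Smurf commands
% makecube in='rawdata/a*.sdf' out=mycube   # raw time series -> spectral cube
```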




Kappa is a general-purpose applications package with commands for processing, visualising, and manipulating NDFs.
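For instance, after initialising the package you might inspect a reduced cube (the file name is illustrative):

```shell
% kappa                       # set up the Kappa commands
% stats mycube                # report basic statistics of the NDF
% ndftrace mycube             # report its shape and WCS information
```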




Convert allows the interchange of data files to and from NDF; other supported formats include IRAF, FITS, and ASCII.
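A typical exchange in both directions might be (file names illustrative):

```shell
% convert                                   # set up the Convert commands
% ndf2fits in=mycube out=mycube.fits        # NDF -> FITS
% fits2ndf in=otherdata.fits out=otherdata  # FITS -> NDF
```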




Cupid is a package of commands for identifying and analysing clumps of emission within one-, two-, or three-dimensional data arrays.
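A clump search on a cube might be sketched as follows (the file names are illustrative, and other clump-finding methods are available):

```shell
% cupid                       # set up the Cupid commands
% findclumps in=mycube out=clumps outcat=clumps.FIT method=clumpfind
```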




ORAC-DR [4] is an automated data-reduction pipeline. It uses Smurf and Kappa (along with other Starlink tools) to reduce the raw data automatically, following pre-defined recipes.
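A typical invocation might look like this (the UT date and list file are illustrative):

```shell
% oracdr_acsis 20100301                 # initialise the pipeline for ACSIS data from a UT date
% oracdr -loop file -files mylist.lis   # reduce the observations listed in mylist.lis
```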




Picard uses a pipeline system similar to ORAC-DR's, but for the post-processing of reduced data. While mainly implemented for SCUBA-2 data, there are a few recipes that are heterodyne compatible. See Appendix A for more details and a description of the available recipes.

% picard RECIPE <files>







Gaia is an interactive image and data-cube display and analysis tool. It incorporates tools such as source detection, three-dimensional visualisation, clump visualisation, photometry, and the ability to query and overlay on-line or local catalogues.
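Gaia is normally launched from the shell with the file to display (the file name here is illustrative):

```shell
% gaia mycube &      # open the cube in the Gaia display tool
```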




Hdstrace lets you examine the contents of Starlink data files.

% hdstrace <file>



Splat is a graphical spectral-analysis tool. It can also interact with the Virtual Observatory.
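Like Gaia, Splat can be started with a spectrum to display (the file name is illustrative):

```shell
% splat myspectrum &     # open the spectrum in the Splat analysis tool
```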



1.4 Options for Reducing your Data

You have three options for processing your data:

(1) performing each step manually,
(2) writing your own scripts, or
(3) running the automated pipeline.

The automated pipeline is recommended for new users of JCMT heterodyne data or those unfamiliar with the Starlink software. The pipeline approach works well if your project is suited to using one of the standard recipes, as are most projects. Running the pipeline is probably essential if you have a lot of data to process. To use the science pipeline, skip straight to Chapter 5.

Performing each step by hand allows more fine-tuned control of certain processing and analysis steps, although some fine-tuning is available in the pipeline via recipe parameters (see Section 6.2).

Once you have determined your optimal parameters you can pass them to the pipeline or a script. Chapter 7 and Chapter 8 discuss the manual approach.

While you have the option of running the pipeline yourself, pipeline-reduced files for individual observations and nightly co-adds can be found in the JCMT Science Archive (JSA).

These have been reduced with the ORAC-DR pipeline, using the recipe specified in the MSB or the Legacy Survey recipe. Principal investigators (PIs) and co-investigators (co-Is) can access these data through the JSA before they become public by following the instructions in Section 11.