Chapter 1
Introduction

 1.1 This cookbook
 1.2 Before you start: computing resources
 1.3 Before you start: Starlink software
 1.4 Options for reducing your data

1.1 This cookbook

This cookbook is designed to instruct ACSIS users on the best ways to reduce and visualise their data using Starlink packages.

This guide covers the following:

Style conventions
In this cookbook, the following styles are followed:

1.2 Before you start: computing resources

Before reducing heterodyne data you should consider whether you have sufficient computing resources. Only approximate guidelines can be given, because resource usage depends on many factors: the spatial area and resolution of the maps, the number of observations being reduced together, the observing and bandwidth modes (see Section 3.3), and the number of subbands. The dominant factor, however, is the volume of data being processed concurrently.

The Orac-dr [4] pipeline can demand close to 300 GB of peak storage for the largest (about one degree square) HARP maps, and at least 24 GB of memory during spectral-cube creation. More typically sized maps of individual targets, say 20 square arcminutes, require only about 10 GB of storage, and most modern computers have sufficient memory for them. The storage requirement can more than double if all intermediate files are retained for diagnostic purposes; normally intermediate files are removed at the end of each pass through a recipe. Reducing the data manually lets you tidy files as you go, but is very time consuming.
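
If you are unsure whether your system is up to a given reduction, it is worth checking the available disk space and memory first. A minimal sketch for a Linux system follows; the directory is a placeholder for wherever you plan to run the reduction:

   % df -h /path/to/reduction/dir
   % free -g

Here df -h reports the free disk space and free -g the available memory in gigabytes. The free command is Linux-specific; on macOS, vm_stat gives similar information.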

Reducing Nāmakanui data, HARP stares, small jiggle maps, or older RxA data is undemanding of resources.

1.3 Before you start: Starlink software

This manual utilises software from the Starlink collection: Smurf [5], Kappa [8], Gaia [10], Orac-dr [4], Convert [6], Cupid [1], Ccdpack [11], and Picard [12]. Starlink software must be installed on your system, and the Starlink aliases and environment variables must be defined, before attempting any ACSIS data reduction. You can download Starlink from the Starlink webpage.
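
Defining these normally amounts to sourcing the Starlink start-up script. A minimal sketch follows, assuming a bash shell and an installation under /star (adjust the path to match your own installation):

   % export STARLINK_DIR=/star
   % source $STARLINK_DIR/etc/profile

Users of tcsh should instead set STARLINK_DIR with setenv, then source $STARLINK_DIR/etc/login followed by $STARLINK_DIR/etc/cshrc.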

Below is a brief summary of the packages used in this cookbook and how to initialise them. Note that all the example commands are shown within a UNIX shell.


Packages

Smurf
    Description: The Sub-Millimetre User Reduction Facility (Smurf) contains makecube, which processes raw ACSIS data into spectral cubes.
    Initialise:  % smurf
    Help:        % smurfhelp; documented in SUN/258

Kappa
    Description: A general-purpose applications package with commands for processing, visualising, and manipulating NDFs.
    Initialise:  % kappa
    Help:        % kaphelp; documented in SUN/95

Convert
    Description: Convert allows the interchange of data files to and from NDF format; other supported formats include IRAF, FITS, and ASCII.
    Initialise:  % convert
    Help:        % conhelp; documented in SUN/55

Cupid
    Description: Cupid provides commands for the identification and analysis of clumps of emission within one-, two-, or three-dimensional data arrays.
    Initialise:  % cupid
    Help:        % cupidhelp; documented in SUN/255

Orac-dr
    Description: The Orac-dr data-reduction pipeline [4] performs an automated reduction of the raw data following pre-defined recipes, using Smurf and Kappa along with other Starlink tools.
    Initialise:  % oracdr_acsis
    Help:        documented in SUN/230

Picard
    Description: Picard uses a pipeline system similar to that of Orac-dr, but for the post-processing of reduced data. Although mainly implemented for SCUBA-2 data, a few recipes are compatible with heterodyne data. See Appendix A for more details and a description of the available recipes.
    Run:         % picard RECIPE <files>
    Help:        documented in SUN/265

Tools

Gaia
    Description: Gaia is an interactive image and data-cube display and analysis tool. It incorporates tools such as source detection, three-dimensional visualisation, clump visualisation, photometry, and the ability to query and overlay on-line or local catalogues.
    Open:        % gaia
    Help:        documented in SUN/214 and SC/17

Hdstrace
    Description: This tool lets you examine the contents of Starlink data files.
    Open:        % hdstrace <file>
    Help:        documented in SUN/102

Splat
    Description: Splat is a graphical spectral-analysis tool. It can also interact with the Virtual Observatory.
    Open:        % splat
    Help:        documented in SUN/243

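As a quick illustration of the initialise-then-run pattern shown above, the following starts Kappa, summarises the contents of an NDF, and opens it in Gaia; the file name myfile.sdf is a placeholder for one of your own files:

   % kappa
   % ndftrace myfile.sdf
   % gaia myfile.sdf &
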
1.4 Options for reducing your data

You have three options for processing your data:

(1) performing each step manually,
(2) writing your own scripts, or
(3) running the automated pipeline.

The automated pipeline is recommended for new users of JCMT heterodyne data and for those unfamiliar with the Starlink software. The pipeline approach works well if your project suits one of the standard recipes, as most projects do. Running the pipeline is probably essential if you have a lot of data to process. To use the science pipeline, skip straight to Chapter 5.
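
As an illustration, a typical pipeline session might look like the following sketch, where mylist.lis is a hypothetical text file listing the raw ACSIS files to be reduced, one per line; Chapter 5 gives the full details:

   % oracdr_acsis
   % oracdr -loop file -files mylist.lis -log sf -nodisplay

The -log sf option sends the pipeline log to both the screen and a file, and -nodisplay suppresses the graphical display.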

Performing each step by hand gives you finer control over certain processing and analysis steps, although some fine-tuning is also available in the pipeline via recipe parameters (see Section 6.2).

Once you have determined your optimal parameters, you can pass them to the pipeline or a script. Chapter 7 and Chapter 8 discuss the manual approach.
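
For example, tuned values can be recorded in a recipe-parameter file and passed to the pipeline with its -recpars option. The recipe name and parameter below are purely illustrative; see Section 6.2 for the parameters your recipe accepts:

   [REDUCE_SCIENCE_GRADIENT]
   BASELINE_ORDER = 1

   % oracdr -loop file -files mylist.lis -recpars myparams.ini

Here myparams.ini is a hypothetical file containing the two lines above.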

While you have the option of running the pipeline yourself, pipeline-reduced files for individual observations and nightly co-adds are available from the JCMT Science Archive (JSA).

These have been reduced with the Orac-dr pipeline using the recipe specified in the MSB or the Legacy Survey recipe. Principal investigators (PIs) and co-investigators (co-Is) can access these data through the JSA before they become public by following the instructions in Section 12.