Starlink routines run on HDS (Hierarchical Data System) files, which normally have a .sdf extension (Starlink Data File). HDS files can describe a multitude of formats. The most common HDS format you will encounter is the NDF (N-Dimensional Data Format). This is the standard file format for storing data which represent arrays of numbers, such as spectra, images, etc. The parameter files discussed in Section 2.2 are also HDS files.
Raw data retrieved from the JCMT Science Archive comes in FITS format. For information on converting between FITS and NDF see Appendix B.
A directory called adam is created in your home-space by default when you run Starlink applications. In this directory you will find HDS files for each of the applications that you have run. These files contain the parameters used and the results returned (if appropriate) from the last time you ran a particular application.
You can specify a different location for the adam directory by setting the environment variable ADAM_USER to the path of your alternative location. This is useful when running more than one reduction at the same time, to avoid interference or access clashes.
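For instance, to keep the parameter files of a second reduction separate, you might set (the directory name here is purely illustrative):

```
% setenv ADAM_USER $HOME/adam_run2
```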
To see the ADAM parameters, run Hdstrace on any of the parameter files. For example, to see which parameters were used, and the results returned, from the last time you ran stats, you can type the following from anywhere on your system:
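A sketch of such a command, assuming the default adam directory in your home-space:

```
% hdstrace ~/adam/stats
```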
This will report the following:
You can see stats was run on the data array with ordered statistics as well as the resulting values. Any of the parameters returned by running Hdstrace on an ADAM file can be extracted using the command parget. In the example below, the mean value from the last instance of stats is printed to the screen.
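A minimal sketch of this usage, assuming stats is the most recent run of that application:

```
% parget mean stats
```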
If you need to quit an application, enter !! at a parameter prompt if possible, to ensure a clean exit.
parget is designed to make life easier when passing values between shell scripts. In the C-shell scripting example below, the median value from histat is assigned to the variable med. Note the use of the back quotes.
If the parameter comprises a vector of values these can be stored in a C-shell array. For other scripting languages such as Python, the alternative vector format produced by setting parameter VECTOR to TRUE may be more appropriate. Single elements of a parameter array may also be accessed using the array index in parentheses.
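A short C-shell sketch of both usages (the parameter names median and perval are taken from histat; other details are illustrative):

```
#!/bin/csh
# Capture a single value returned by histat; note the back quotes.
set med = `parget median histat`
echo "Median: $med"

# Capture a vector-valued parameter into a C-shell array,
# then pick out a single element with an index in parentheses-style syntax.
set vals = `parget perval histat`
echo "First percentile value: $vals[1]"
```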
In addition to running hdstrace on the ADAM file, you can find a list of all the parameter names that can be returned with parget in the Kappa manual, under ‘Results Parameters’ for the command in question. Note that these names may differ from the names displayed in a terminal when running the application.
There are two Kappa tasks which are extremely useful for examining your metadata: fitslist and ndftrace, which can be used to view the FITS headers and the properties of the data (such as its dimensions) respectively. A third option is the stand-alone application Hdstrace.
|fitslist||This lists the FITS header information for any NDF (raw or reduced). This extensive list includes dates &
times, source name, observation type, bandwidth, number of channels, receptor information,
exposure time, start and end elevation, and opacity. In the example below, just the object name is requested.
Likewise, if you know the name of the keyword you want to view, you can use the fitsval command instead, for instance:
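A sketch of both commands, assuming a file called cube.sdf and interest in the OBJECT keyword:

```
% fitslist cube | grep OBJECT
% fitsval cube object
```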
|ndftrace||ndftrace displays the attributes of the NDF data structure. This will tell you, for example, the units of
the data, the pixel bounds, dimensions, world co-ordinates (WCS), and axis assignations.
An NDF can contain more than one set of world co-ordinates.
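A minimal example, assuming a file called cube.sdf:

```
% ndftrace cube
```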
|hdstrace||hdstrace lists the name, data type and values of an HDS (Hierarchical Data System) object. The following
example shows the structure of a time-series cube, including the pixel origin of the data structure. To show just
a limited number of lines of values for each component, include the option nlines.
Otherwise, to see all the lines and information that is available in each extension, use nlines=all.
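For instance, to trace the ACSIS extension of a (hypothetical) time-series file timeseries.sdf, limiting each component to a few lines of values:

```
% hdstrace timeseries.more.acsis nlines=3
```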
You can see it descends two levels (into the ACSIS extension) to retrieve the information. Other information available
at this level includes the receptors and the receiver temperatures.
Full details of ndftrace and fitslist can be found in the Kappa manual. Details on Hdstrace can be found in the hdstrace manual.
The .sdf extension on filenames is not required when running Starlink commands.
If you are presented with a data file you may wish to see what commands have been run on it and, in the
case of a co-added cube, which data went into it. Two Kappa commands can help you with this:
|hislist||The Kappa command hislist will return the history records of the NDF.|
|provshow||The Kappa command provshow displays the details of the NDFs that were used in the creation of the given file. It includes both immediate parents and older ancestor NDFs.|
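For example, for a (hypothetical) co-added cube coadd.sdf:

```
% hislist coadd
% provshow coadd
```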
For all Starlink commands you can specify a sub-section of your data on which to run an application. You do this by appending the bounds of the section to be processed to the NDF name. This may be done on the command line or in response to a prompt.
The example below runs stats on a sub-cube within your original cube. This sub-cube is defined by bounds given for each axis. The upper and lower bounds for each axis are separated by a colon, while the axes themselves are separated by a comma. Note that the use of quotes is necessary on a UNIX shell command line, but not in response to a prompt or in many other scripting languages.
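A sketch of such a command (the file name cube.sdf is illustrative); note the quotes around the section on the command line:

```
% stats 'cube(10.5:10.0,0.0:0.25,-25.0:25.0)'
```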
The bounds are given in the co-ordinates of the data (Galactic for Axes 1 and 2 and velocity for Axis 3). You can find the number and names of the axes along with the pixel bounds of your data file by running ndftrace. In the example above the ranges are 10.5 to 10 in Longitude (note the range goes from left to right), 0 to 0.25 in Latitude, and -25 to +25 km/s in velocity. To leave an axis untouched simply include the comma but do not specify any bounds.
To define your bounds in FK5 co-ordinates, use the following format.
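One possible sketch, assuming the current WCS Frame of the file is FK5 and using the centre~extent form of a section (the position and the extents in arcseconds are purely illustrative; the spectral axis is left untouched):

```
% stats 'cube(0:42:00~600,41:12:00~600,)'
```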
To write a section of your data into a new file (called
newcube.sdf in the example below), use ndfcopy with the section bounds appended to the input file name. Here the option
title defines a title for the new cube, which replaces the title derived from the input cube.
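A sketch, with illustrative bounds and title:

```
% ndfcopy 'cube(10.5:10.0,0.0:0.25,-25.0:25.0)' newcube title=subcube
```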
You can also define a region as a number of pixels around a given origin. The example below extracts a 25×25-pixel region around a given position.
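A possible form, using the centre~extent syntax with illustrative pixel centres (the spectral axis is left untouched):

```
% ndfcopy 'cube(120~25,250~25,)' smallcube
```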
If you know the name of the Starlink document you want to view use showme. When run, it launches a new web page or tab displaying the hypertext version of the document.
findme searches Starlink documents for a keyword. When run, it launches a new web page or tab listing the results.
docfind searches the internal list files for keywords. It then searches the document titles. The result is displayed using the UNIX more command.
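For example (sun95, the Kappa manual, and the keywords are illustrative):

```
% showme sun95
% findme heterodyne
% docfind kappa
```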
Run routines with prompts
You can run any routine with the option prompt appended to the command line; you will then be prompted for each parameter in turn.
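For instance, to be prompted for every parameter of stats:

```
% stats prompt
```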