12 Dealing with Files

 12.1 Extracting parts of filenames
 12.2 Process a series of files
 12.3 Filename modification
 12.4 File operators
 12.5 Creating text files
 12.6 Reading lines from a text file
 12.7 Reading tabular data
 12.8 Reading from dynamic text files
 12.9 Discarding text output
 12.10 Obtaining dataset attributes
 12.11 FITS Headers
 12.12 Accessing other objects
 12.13 Defining NDF sections with variables

12.1 Extracting parts of filenames

Occasionally you’ll want to work with parts of a filename, such as the path or the file type. The C-shell provides filename modifiers that select the various portions. A couple are shown in the example below.

       set type = $1:e
       set name = $1:r
       if ( $type == "bdf" ) then
          echo "Use BDF2NDF on a VAX to convert the Interim file $1"
       else if ( $type != "dst" ) then
          hdstrace $name
       else
          hdstrace @'"$1"'
       endif

Suppose the first argument of a script, $1, is a filename called galaxy.bdf. The value of variable type is bdf and name equates to galaxy because of the presence of the filename modifiers :e and :r. The rest of the script uses the file type to control the processing, in this case to provide a listing of the contents of a data file using the Hdstrace utility.

The complete list of modifiers, their meanings, and examples is presented in the table below.

Modifier   Value returned                                        Value for filename
                                                                 /star/bin/kappa/comwest.sdf

:e         Portion of the filename following a full stop;
           if the filename does not contain a full stop,
           it returns a null string                              sdf

:r         Portion of the filename preceding a full stop;
           if there is no full stop present, it returns
           the complete filename                                 comwest

:h         The path of the filename (mnemonic: h for head)       /star/bin/kappa

:t         The tail of the file specification, excluding
           the path                                              comwest.sdf



12.2 Process a series of files

One of the most common things you’ll want to do, having devised a data-processing path, is to apply those operations to a series of data files. For this you need a foreach...end construct.

       convert               # Only need be invoked once per process
       foreach file (*.fit)
          stats $file
       end

This takes all the FITS files in the current directory and computes their statistics using the stats command from Kappa. file is a shell variable; each time round the loop it is assigned the name of the next file in the list between the parentheses. The * is the familiar wildcard which matches any string. Remember that when you want to use the shell variable’s value you prefix it with a $. Thus $file is the filename.
12.2.1 NDFs

Some data formats like the NDF demand that only the file name (i.e. what appears before the last dot) be given in commands. To achieve this you must first strip off the remainder (the file extension or type) with the :r filename modifier.

       foreach file (*.sdf)
          histogram $file:r accept
       end

This processes all the HDS files in the current directory and calculates a histogram for each of them using the histogram command from Kappa. It assumes that the files are NDFs. The :r instructs the shell to remove the file extension (the part of the name following the rightmost full stop). If we didn’t do this, the histogram task would try to find NDFs called SDF within each of the HDS files.

12.2.2 Wildcarded lists of files

You can give a list of files separated by spaces, each of which can include the various UNIX wildcards. Thus the code below would report the name of each NDF and its standard deviation. The NDFs comprise those called ‘Z’ followed by a single character, plus ccd1, ccd2, ccd3, and spot.

       foreach file (Z?.sdf ccd[1-3].sdf spot.sdf)
          echo "NDF:" $file:r"; sigma: "`stats $file:r | grep "Standard deviation"`
       end

echo writes to standard output, so you can write text, including the values of shell variables, to the screen or redirect it to a file. Here the output produced by stats is piped (the | is the pipe) into the UNIX grep utility to search for the string "Standard deviation". The back quotes (` `) cause the enclosed command to be run, and the resulting standard deviation is substituted.
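If you want to experiment with this piping and back-quote substitution without running Kappa, the same pattern can be tried with ordinary UNIX commands. In the sketch below a made-up two-line report stands in for the stats output; it behaves the same way in the C-shell or a POSIX shell.

```shell
# A made-up report stands in for the output of a task such as stats.
# grep picks out the one line we want, and the back quotes substitute
# the result into the echo command.
echo "sigma line: "`printf 'Pixel sum : 120\nStandard deviation : 3.75\n' | grep "Standard deviation"`
# prints: sigma line: Standard deviation : 3.75
```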

You might just want to provide an arbitrary list of NDFs as arguments to a generic script. Suppose you had a script called splotem, and you have made it executable with chmod +x splotem.

       #!/bin/csh
       figaro                 # Only need be invoked once per process
       foreach file ($*)
          if (-e $file) then
             splot $file:r accept
          endif
       end

Notice the -e file-comparison operator. It tests whether the file exists or not. (Section 12.4 has a full list of the file operators.) To plot a series of spectra stored in NDFs, you just invoke it something like this.

       % ./splotem myndf.sdf arc[a-z].sdf hd[0-9]*.sdf

See the glossary for a list of the available wildcards such as the [a-z] in the above example.

12.2.3 Exclude the .sdf for NDFs

In the splotem example from the previous section the list of NDFs on the command line required the inclusion of the .sdf file extension. Having to supply the .sdf for an NDF is abnormal. For reasons of familiarity and ease of use, you probably want your relevant scripts to accept a list of NDF names and to append the file extension automatically before the list is passed to foreach. So let’s modify the previous example to do this.

       #!/bin/csh
       figaro                 # Only need be invoked once per process
  
       #  Append the HDS file extension to the supplied arguments.
       set ndfs
       set i = 1
       while ( $i <= $#argv )
          set ndfs = ($ndfs[*] $argv[$i]".sdf")
          @ i = $i + 1
       end
  
       #  Plot each one-dimensional NDF.
       foreach file ($ndfs[*])
          if (-e $file) then
             splot $file:r accept
          endif
       end

This loops through all the arguments, appending the HDS file extension to each by using a work array ndfs. The set defines a value for a shell variable; don’t forget the spaces around the =. ndfs[*] means all the elements of variable ndfs. The loop adds elements to ndfs, which is initialised without a value. Note the necessary parentheses around the expression ($ndfs[*] $argv[$i]".sdf").

On the command line the wildcards have to be passed verbatim, because otherwise the shell will try to match them against files that don’t have the .sdf file extension. Thus you must protect the wildcards with quotes. It’s a nuisance, but the advantages of wildcards more than compensate.

       % ./splotem myndf ’arc[a-z]’ ’hd[0-9]*’
       % ./noise myndf ’ccd[a-z]’

If you forget to write the quotes, you’ll receive a No match error.
12.2.4 Examine a series of NDFs

A common need is to browse through several datasets, perhaps to locate a specific one, or to determine which are acceptable for further processing. The following presents images of a series of NDFs using the display task of Kappa. The title of each plot tells you which NDF is currently displayed.

       foreach file (*.sdf)
          display $file:r axes style="'title==$file:r'" accept
          sleep 5
       end

sleep pauses the process for a given number of seconds, allowing you to view each image. If this is too inflexible you could add a prompt so the script displays the image once you press the return key.

       set nfiles = `ls *.sdf | wc -w`
       set i = 1
       foreach file (*.sdf)
          display $file:r axes style="'title==$file:r'" accept
  
       # Prompt for the next file unless it is the last.
          if ( $i < $nfiles ) then
             echo -n "Next?"
             set next = $<
  
       # Increment the file counter by one.
             @ i++
          endif
       end

The first line shows a quick way of counting the number of files. It uses ls to expand the wildcard, then the wc command to count the number of words. The back quotes cause the instruction between them to be run and the resulting value to be assigned to variable nfiles.
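The counting idiom can be tried on its own; here a literal list of names stands in for the ls output, so this sketch runs anywhere.

```shell
# wc -w counts the words on its standard input; three filenames
# give a count of three (the count may be padded with spaces on
# some systems).
printf 'a.sdf b.sdf c.sdf\n' | wc -w
```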

You can substitute another visualisation command for display as appropriate. You can also use the graphics database to plot more than one image on the screen or to hardcopy. The script $KAPPA_DIR/multiplot.csh does the latter.

12.3 Filename modification

Thus far the examples have not created a new file. When you want to create an output file, you need a name for it. This could be an explicit name, one derived from the process identification number, one generated by some counter, or one derived from the input filename. Here we deal with all but the trivial first case.

12.3.1 Appending to the input filename

To help identify datasets and to indicate the processing steps used to generate them, their names are often created by appending suffixes to the original file name. This is illustrated below.

       foreach file (*.sdf)
          set ndf = $file:r
          block in=$ndf out=$ndf"_sm" accept
       end

This uses block from Kappa to perform block smoothing on a series of NDFs, creating new NDFs, each of which takes the name of the corresponding input NDF with a _sm suffix. The accept keyword accepts the suggested defaults for parameters that would otherwise be prompted for. We use set to assign the NDF name to variable ndf for clarity.
12.3.2 Appending a counter to the input filename

If a counter is preferred, this example

       set count = 1
       foreach file (*.sdf)
          set ndf = $file:r
          @ count = $count + 1
          block in=$ndf out=smooth$count accept
       end

would behave as the previous one except that the output NDFs would be called smooth1, smooth2 and so on.
12.3.3 Replacing part of the input filename

Whilst appending a suffix after each data-processing stage is feasible, it can generate some long names, which are tedious to handle. Instead you might want to replace part of the input name with a new string. The following creates another shell variable, ndfout, by replacing the string _flat in the input NDF name with _sm. The script pipes the input name into the sed editor, which performs the substitution.

       foreach file (*_flat.sdf)
          set ndf = $file:r
          set ndfout = `echo $ndf | sed 's#_flat#_sm#'`
          block in=$ndf out=$ndfout accept
       end

The # is a delimiter for the strings being substituted; it should be a character that is not present in the strings being altered. Notice the ` ` quotes in the assignment of ndfout. These instruct the shell to execute the expression immediately, rather than treating it as a literal string. This is how you can put values output from UNIX commands and other applications into shell variables.
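You can test the sed substitution in isolation; this minimal sketch with a made-up name runs the same way from any shell.

```shell
# Replace the _flat suffix with _sm; # delimits the two strings,
# which is safe because neither string contains a #.
echo m31_flat | sed 's#_flat#_sm#'
# prints: m31_sm
```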

12.4 File operators

There is a special class of C-shell operator that lets you test the properties of a file. A file operator is used in comparison expressions of the form if (file_operator file) then. A list of file operators is tabulated below.

The most common usage is to test for a file’s existence. The following only runs cleanup if the first argument is an existing file.

   



File operators

  Operator   True if:

  -d         file is a directory
  -e         file exists
  -f         file is ordinary
  -o         you are the owner of the file
  -r         file is readable by you
  -w         file is writable by you
  -x         file is executable by you
  -z         file is empty


       # Check that the file given by the first
       # argument exists before attempting to
       # use it.
       if ( -e $1 ) then
          cleanup $1
       endif

Here are some other examples.

       # Remove any empty directories.
       if ( -d $file && -z $file ) then
          rmdir $file
  
       # Give execute access to text files with a .csh extension.
       else if ( $file:e == "csh" && -f $file ) then
          chmod +x $file
       endif

12.5 Creating text files

A frequent feature of scripts is redirecting the output from tasks to a text file. For instance,

       hdstrace $file:r > $file:r.lis
       fitshead $fits > $$.tmp

directs the output of hdstrace and fitshead to text files. The name of the first is generated from the name of the file whose contents are being listed, so for HDS file cosmic.sdf the trace is stored in cosmic.lis. In the second case, the process identification number forms the name of the text file. You can include this special variable to generate the names of temporary files. (The :r modifier is described in Section 12.1.)

If you intend to write more than once to a file you should first create the file with the touch command, and then append output to the file.

       touch logfile.lis
       foreach file (*.sdf)
          echo "FITS headers for $file:r:"  >> logfile.lis
          fitslist $file:r >> logfile.lis
          echo " " >> logfile.lis
       end

Here we list FITS headers from a series of NDFs to file logfile.lis, with a heading including the dataset name and a blank line between each set of headers. Notice this time we use >> to append. If you try to redirect with > to an existing file, you’ll receive an error message whenever you have the noclobber variable set. >! redirects regardless of noclobber.

There is an alternative—write the text file as part of the script. This is often a better way for longer files. It utilises the cat command to create the file.

     cat >! catpair_par.lis   <<EOF
     ${refndf}.txt
     ${compndf}.TXT
     ${compndf}_match.TXT
     C
     XPOS
     YPOS
     XPOS
     YPOS
     $distance
     `echo $time | awk '{print substr($0,1,5)}'`
     EOF

The above writes the text between the two EOFs to file catpair_par.lis. Note that the second EOF must begin in column 1. You can choose the delimiting word yourself; common ones are EOF and FOO. Remember that the >! demands that the file be written regardless of whether the file already exists.

A handy feature is that you can embed shell variables, such as refndf and distance in the example. You can also include commands between back quotes (` `); these are evaluated before the file is written. However, should you want the special characters $, \, and ` ` to be treated literally, insert a \ before the delimiting word, or a \ before each special character.
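Here is a minimal, self-contained sketch of that evaluation, written with a Bourne-shell assignment so it can be run directly (the C-shell equivalent is set distance = 42). Both the variable and the back-quoted awk command are expanded before the text is written out.

```shell
distance=42      # csh equivalent:  set distance = 42
cat <<EOF
$distance
`echo 14:32:57 | awk '{print substr($0,1,5)}'`
EOF
# prints:
# 42
# 14:32
```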

12.5.1 Writing a script within a script

The last technique might be needed if your script writes another script, say for overnight batch processing, in which you want a command to be evaluated when the second script is run, not the first. You can also write files within this second script, provided you choose different words to delimit the file contents. Here’s an example which combines both of these techniques.

     cat >! /tmp/${user}_batch_${runno}.csh    <<EOF
     #!/bin/csh
  
     # Create the data file.
     cat >! /tmp/${user}_data_${runno}  <<EOD
     $runno
     \`date\`
     `/star/bin/kappa/calc exp="LOG10($C+0.01*$runno)"`
     EOD
  
     <commands to perform the data processing using the data file>
  
     # Remove the temporary script and data files.
     rm /tmp/${user}_batch_${runno}.csh
     rm /tmp/${user}_data_${runno}
  
     exit
     EOF
     chmod +x  /tmp/${user}_batch_${runno}.csh

This excerpt writes a script in the temporary directory. The temporary script’s filename includes our username ($user) and some run number stored in variable runno. The temporary script begins with the standard comment indicating that it’s a C-shell script. The script’s first action is to write a three-line data file. Note the different delimiter, EOD. This data file is created only when the temporary script is run. As we want the time and date information at run time to be written to the data file, the command substitution backquotes are both escaped with a \. In contrast, the final line of the data file is evaluated before the temporary script is written. Finally, the temporary script removes itself and the data file. After making a temporary script, don’t forget to give it execute permission.
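The effect of escaping the back quotes can be demonstrated without the Starlink parts. In this sketch (with a hypothetical filename under /tmp) the escaped expr survives verbatim into the second script and is only evaluated when that script runs.

```shell
# Write a second script; the \` stops the command substitution from
# happening now, so expr is evaluated only when the script is run.
cat <<EOF > /tmp/demo_second.sh
echo \`expr 1 + 2\`
EOF
sh /tmp/demo_second.sh     # prints: 3
rm /tmp/demo_second.sh
```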

12.6 Reading lines from a text file

There is no simple file-reading facility in the C-shell, so we call upon awk again.

       set file = `awk '{print $0}' sources.lis`

Variable file is a space-separated array of the lines in file sources.lis. More useful is to extract a line at a time.

       set text = `awk -v ln=$j '{if (NR==ln) print $0}' sources.lis`

where shell variable j is a positive integer and no more than the number of lines in the file, returns the jth line in variable text.
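A quick way to convince yourself: make a three-line file (the filename here is made up) and pull out the second line.

```shell
# Create a small test file, then extract its second line with awk.
printf 'alpha\nbeta\ngamma\n' > /tmp/sources_demo.lis
awk -v ln=2 '{if (NR==ln) print $0}' /tmp/sources_demo.lis
# prints: beta
```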

12.7 Reading tabular data

When reading data into your script from a text file you will often need to extract columns of data, determine the number of lines extracted, and sometimes determine the number of columns or select columns by heading name. The shell does not offer file-reading commands, so we fall back heavily on our friend awk.

12.7.1 Finding the number of fields

The simple recipe is

       set ncol = `awk '{if (FNR==1) print NF}' fornax.dat`

where FNR is the number of records read so far, and NF is the number of space-separated fields in the current record. If another character delimits the columns, it can be specified by assigning the FS variable before any of the records are read (in a BEGIN pattern), or through the -F command-line option.
      set nlines = `wc fornax.dat`
  
      set ncol = `awk 'BEGIN { FS = ":" }{if (FNR==1) print NF}' fornax.dat`
      set ncol = `awk -F: '{if (FNR==1) print NF}' fornax.dat`

FNR, NF, and FS are called built-in variables.

If there is a header line or some schema before the actual data, you can obtain the field count from the last line instead.

       set ncol = `awk -v nl=$nlines[1] '{if (FNR==nl) print NF}' fornax.dat`

First we obtain the number of lines in the file using wc; the count is stored in $nlines[1]. This shell variable is passed into awk, as variable nl, through the -v option.

If you know the comment line begins with a hash (or can be recognised by some other regular expression) you can do something like this.

       set ncol = `awk -v i=0 '{if ($0 !~ /^#/) i++; if (i==1) print NF}' fornax.dat`

Here we initialise the awk variable i, increment it for each record ($0) that does not match a line starting with #, and print the number of fields only for the first such occurrence.
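Trying this on a two-line file (a made-up name) shows that the comment line is skipped and the field count comes from the first data line.

```shell
# One comment line and one data line of three fields.
printf '# name magnitude colour\nvega 0.03 0.00\n' > /tmp/fornax_demo.dat
awk -v i=0 '{if ($0 !~ /^#/) i++; if (i==1) print NF}' /tmp/fornax_demo.dat
# prints: 3
```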
12.7.2 Extracting columns

For a simple case without any comments.

       set col1 = `awk '{print $1}' fornax.dat`

Variable col1 is an array of the values of the first column. If you want an arbitrary column
       set col$j = `awk -v cn=$j '{print $cn}' fornax.dat`

where shell variable j is a positive integer and no more than the number of columns, returns the jth column in the shell array colj.

If there are comment lines to ignore, say beginning with # or *, the following excludes those from the array of values.

       set col$j = `awk -v cn=$j '$0!~/^[#*]/ {print $cn}' fornax.dat`

or you may want to exclude lines with alphabetic characters.
       set col2 = `awk '$0!~/[A-DF-Za-df-z]/ {print $2}' fornax.dat`

Notice that E and e are omitted to allow for exponents.
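For instance, with a small made-up file containing a heading line and an exponent-style number:

```shell
# The heading line contains letters and is rejected; 1.0e2 survives
# because e is deliberately excluded from the character class.
printf 'Name Mag\n1.0e2 5.5\n2.0 6.5\n' > /tmp/cols_demo.dat
awk '$0!~/[A-DF-Za-df-z]/ {print $2}' /tmp/cols_demo.dat
# prints:
# 5.5
# 6.5
```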
12.7.3 Selecting a range

awk lets you select the lines to extract through boolean expressions, which can involve the column data themselves, or through line numbers via NR.

       set col$j = `awk -v cn=$j '$2 > 0.579 {print $cn}' fornax.dat`
       set col$j = `awk -v cn=$j '$2>0.579 && $2<1.0 {print $cn}' fornax.dat`
       set col4 = `awk 'sqrt($1*$1+$2*$2) > 1 {print $4}' fornax.dat`
       set col2 = `awk 'NR<=5 || NR>10 {print $2}' fornax.dat`
       set col2 = `awk '$0!~/-999/ {print $2}' fornax.dat`
  
       set nrow = $#col2

The first example only includes those values in column 2 that exceed 0.579. The second further restricts the values to be no greater than 1.0. The third case involves the square-root of the sum of the squares of the first two columns. The fourth omits the sixth to tenth rows. The fifth example tests for the absence of a null value, -999.

You can find out how many values were extracted through $#var, such as in the final line above.

You have the standard relational and boolean operators available, as well as ~ and !~ for ‘matches’ and ‘does not match’ respectively. These last two can involve regular expressions, giving powerful selection tools.
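Combining a numerical range with a null-value exclusion looks like this on a small made-up file.

```shell
printf '0.2 10\n0.6 20\n0.9 -999\n1.2 30\n' > /tmp/range_demo.dat
# Keep rows whose first column exceeds 0.5 and whose second column
# is not the null value -999.
awk '$1>0.5 && $2!~/-999/ {print $2}' /tmp/range_demo.dat
# prints:
# 20
# 30
```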

12.7.4 Choosing columns by name

Suppose your text file has a heading line listing the names of the columns.

      set name = Vmag
      set cn = `awk -v col=$name '{if (NR==1) {for(i=1;i<=NF;\
                i++) {if ($i==col) {print i; break}}}}' fornax.dat`

That looks complicated, so let’s go through it step by step. We supply the required column name, name, to the awk variable col through the -v command-line option. For the first record (NR==1), we loop through all the fields (NF of them) starting at the first, and if the current column name ($i) equals the requested name, the column number is printed and we break from the loop. If the field is not present, the result is null. The extra braces associate commands in the same for or if block. Note that unlike the C-shell, in awk a line break can only appear immediately after a semicolon or brace.

The above can be improved upon using the toupper function to avoid case sensitivity.

      set name = Vmag
      set cn = `awk -v col=$name '{if (NR==1) {for(i=1;i<=NF;\
                i++) {if (toupper($i)==toupper(col)) {print i; break}}}}' fornax.dat`

Or you could attempt to match a regular expression.
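Run against a small made-up catalogue, the case-insensitive version finds the column whatever the capitalisation.

```shell
printf 'ID Vmag Bmag\n101 5.5 6.0\n' > /tmp/head_demo.dat
# VMAG still matches the Vmag heading because both sides are
# converted to uppercase before comparison.
awk -v col=VMAG '{if (NR==1) {for(i=1;i<=NF;i++) {if (toupper($i)==toupper(col)) {print i; break}}}}' /tmp/head_demo.dat
# prints: 2
```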

12.8 Reading from dynamic text files

You can also read from a text file created dynamically from within your script.

       ./doubleword < mynovel.txt
  
       myprog <<BAR
       320 512
       $nstars
        `wc -l < brightnesses.txt`
       testimage
       BAR
  
       commentary <<\foo
          The AITCH package offers unrivalled facilities.
          It is also easy to use because of its GUI interface.
  
                     Save $$$ if you buy now.
       foo

Command ./doubleword reads its standard input from the file mynovel.txt. The <<word notation obtains the input data from the script file itself, until there is a line beginning with word. You may also include variables and commands to execute, as the $, \, and ` ` retain their special meaning. If you want these characters to be treated literally, say to prevent substitution, insert a \ before the delimiting word. The command myprog reads from the script, substituting the value of variable nstars in the second line, and the number of lines in file brightnesses.txt in the third line.

The technical term for such files is here documents.

12.9 Discarding text output

The output from some routines is often unwanted in scripts. In these cases redirect the standard output to a null file.

       correlate in1=frame1 in2=frame2 out=framec > /dev/null

Here the text output from the task correlate is disposed of to the /dev/null file. Messages from Starlink tasks, and usually Fortran output to channel 6, go to standard output.

12.10 Obtaining dataset attributes

When writing a data-processing pipeline connecting several applications you will often need to know some attribute of the data file, such as its number of dimensions, its shape, whether or not it may contain bad pixels, a variance array, or a specified extension. The way to access these data is with the ndftrace and parget commands from Kappa. ndftrace inquires the attributes of the data, and parget communicates the information to a shell variable.

12.10.1 Obtaining dataset shape

Suppose that you want to process all the two-dimensional NDFs in a directory. You would write something like this in your script.

       foreach file (*.sdf)
          ndftrace $file:r > /dev/null
          set nodims = `parget ndim ndftrace`
          if ( $nodims == 2 ) then
             <perform the processing of the two-dimensional datasets>
          endif
       end

Note that although it is called ndftrace, this command can determine the properties of foreign data formats through the automatic conversion system (SUN/55, SSN/20). Of course, other formats do not have all the facilities of an NDF.

If you want the dimensions of a FITS file supplied as the first argument you need this ingredient.

       ndftrace $1 > /dev/null
       set dims = `parget dims ndftrace`

Then dims[i] will contain the size of the ith dimension. Similarly

       ndftrace $1 > /dev/null
       set lbnd = `parget lbound ndftrace`
       set ubnd = `parget ubound ndftrace`

will assign the pixel bounds to arrays lbnd and ubnd.
12.10.2 Available attributes

Below is a complete list of the results parameters from ndftrace. If the parameter is an array, it will have one element per dimension of the data array (given by parameter NDIM), except for EXTNAME and EXTTYPE, where there is one element per extension (given by parameter NEXTN). Several of the axis parameters are only set if the ndftrace input keyword fullaxis is set (which is not the default). To obtain, say, the data type of the axis centres of the current dataset, the code would look like this.

       ndftrace fullaxis accept > /dev/null
       set axtype = `parget atype ndftrace`

Name       Array?   Meaning

AEND Yes

The axis upper extents of the NDF. For non-monotonic axes, zero is used. See parameter AMONO. This is not assigned if AXIS is FALSE.

AFORM Yes

The storage forms of the axis centres of the NDF. This is only written when parameter FULLAXIS is TRUE and AXIS is TRUE.

ALABEL Yes

The axis labels of the NDF. This is not assigned if AXIS is FALSE.

AMONO Yes

These are TRUE when the axis centres are monotonic, and FALSE otherwise. This is not assigned if AXIS is FALSE.

ANORM Yes

The axis normalisation flags of the NDF. This is only written when FULLAXIS is TRUE and AXIS is TRUE.

ASTART Yes

The axis lower extents of the NDF. For non-monotonic axes, zero is used. See parameter AMONO. This is not assigned if AXIS is FALSE.

ATYPE Yes

The data types of the axis centres of the NDF. This is only written when FULLAXIS is TRUE and AXIS is TRUE.

AUNITS Yes

The axis units of the NDF. This is not assigned if AXIS is FALSE.

AVARIANCE Yes

Whether or not there are axis variance arrays present in the NDF. This is only written when FULLAXIS is TRUE and AXIS is TRUE.

AXIS

Whether or not the NDF has an axis system.

BAD

If TRUE, the NDF’s data array may contain bad values.

BADBITS

The BADBITS mask. This is only valid when QUALITY is TRUE.

CURRENT

The integer Frame index of the current co-ordinate Frame in the WCS component.

DIMS Yes

The dimensions of the NDF.

EXTNAME Yes

The names of the extensions in the NDF. It is only written when NEXTN is positive.

EXTTYPE Yes

The types of the extensions in the NDF. Their order corresponds to the names in EXTNAME. It is only written when NEXTN is positive.

FDIM Yes

The numbers of axes in each co-ordinate Frame stored in the WCS component of the NDF. The elements in this parameter correspond to those in FDOMAIN and FTITLE. The number of elements in each of these parameters is given by NFRAME.

FDOMAIN Yes

The domain of each co-ordinate Frame stored in the WCS component of the NDF. The elements in this parameter correspond to those in FDIM and FTITLE. The number of elements in each of these parameters is given by NFRAME.

FLABEL Yes

The axis labels from the current WCS Frame of the NDF.

FLBND Yes

The lower bounds of the bounding box enclosing the NDF in the current WCS Frame. The number of elements in this parameter is equal to the number of axes in the current WCS Frame (see FDIM).

FORM

The storage form of the NDF’s data array.

FTITLE Yes

The title of each co-ordinate Frame stored in the WCS component of the NDF. The elements in this parameter correspond to those in FDOMAIN and FDIM. The number of elements in each of these parameters is given by NFRAME.








FUBND Yes

The upper bounds of the bounding box enclosing the NDF in the current WCS Frame. The number of elements in this parameter is equal to the number of axes in the current WCS Frame (see FDIM).

FUNIT Yes

The axis units from the current WCS Frame of the NDF.

HISTORY

Whether or not the NDF contains HISTORY records.

LABEL

The label of the NDF.

LBOUND Yes

The lower bounds of the NDF.

NDIM

The number of dimensions of the NDF.

NEXTN

The number of extensions in the NDF.

NFRAME

The number of WCS domains described by FDIM, FDOMAIN and FTITLE. Set to zero if WCS is FALSE.

QUALITY

Whether or not the NDF contains a QUALITY array.

TITLE

The title of the NDF.

TYPE

The data type of the NDF’s data array.

UBOUND Yes

The upper bounds of the NDF.

UNITS

The units of the NDF.

VARIANCE

Whether or not the NDF contains a VARIANCE array.

WCS

Whether or not the NDF has any WCS co-ordinate Frames, over and above the default GRID, PIXEL and AXIS Frames.

WIDTH Yes

Whether or not there are axis width arrays present in the NDF. This is only written when FULLAXIS is TRUE and AXIS is TRUE.




12.10.3 Does the dataset have variance/quality/axis/history information?

Suppose you have an application which demands that variance information be present, say for optimal extraction of spectra. You could test for the existence of a variance array in your FITS file called dataset.fit like this.

       #  Enable automatic conversion
       convert         # Needs to be invoked only once per process
  
       set file = dataset.fit
       ndftrace $file > /dev/null
       set varpres = `parget variance ndftrace`
       if ( $varpres == "FALSE" ) then
          echo "File $file does not contain variance information"
       else
          <process the dataset>
       endif

The logical results parameters have values TRUE or FALSE. You merely substitute another component, such as quality or axis, in the parget command to test for the presence of these components.
12.10.4 Testing for bad pixels

Imagine you have an application which cannot process bad pixels. You could test whether a dataset might contain bad pixels, and run some pre-processing task to remove them first. This attribute can be inquired via ndftrace. If you need to know whether or not any bad pixels were actually present, run setbad from Kappa first.

       setbad $file
       ndftrace $file > /dev/null
       set badpix = `parget bad ndftrace`
       if ( $badpix == "TRUE" ) then
          <remove the bad pixels>
       else
          goto tidy
       endif
       <perform data processing>
  
       tidy:
       <tidy any temporary files, windows etc.>
       exit

Here we also introduce the goto command—yes there really is one. It is usually reserved for exiting (goto exit), or, as here, moving to a named label. This lets us skip over some code, and move directly to the closedown tidying operations. Notice the colon terminating the label itself, and that it is absent from the goto command.

12.10.5 Testing for a spectral dataset

One recipe for testing for a spectrum is to look at the axis labels (whereas a modern approach might use WCS information). Here is a longer example showing how this might be implemented. Suppose the name of the dataset being probed is stored in variable ndf.

       # Get the full attributes.
       ndftrace $ndf fullaxis accept > /dev/null
  
       # Assign the axis labels and number of dimensions to variables.
       set axlabel = `parget alabel ndftrace`
       set nodims = `parget ndim ndftrace`
  
       # Exit the script when there are too many dimensions to handle.
       if ( $nodims > 2 ) then
          echo Cannot process a $nodims-dimensional dataset.
          goto exit
       endif
  
       # Loop for each dimension or until a spectral axis is detected.
       set i = 1
       set spectrum = FALSE
       while ( $i <= $nodims && $spectrum == FALSE )
  
       # For simplicity the definition of a spectral axis is that
       # the axis label is one of a list of acceptable values.  This
       # test could be made more sophisticated.  The toupper converts the
       # label to uppercase to simplify the comparison.  Note the \ line
       # continuation.
          set uaxlabel = `echo $axlabel[$i] | awk '{print toupper($0)}'`
          if ( $uaxlabel == "WAVELENGTH" || $uaxlabel == "FREQUENCY" || \
               $uaxlabel == "VELOCITY" ) then
  
       # Record that the axis is found and which dimension it is.
             set spectrum = TRUE
             set spaxis = $i
          endif
          @ i++
       end
  
       # Process the spectrum.
       if ( $spectrum == TRUE ) then
  
       # Rotate the dataset to make the spectral axis along the first
       # dimension.
           if ( $spaxis == 2 ) then
              irot90 $ndf $ndf"_rot" accept

        # Fit the continuum.
              sfit spectrum=$ndf"_rot" order=2 output=$ndf"_fit" accept
           else
              sfit spectrum=$ndf order=2 output=$ndf"_fit" accept
           endif
        endif
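
The case-folding trick used in the test above can be tried on its own. This sketch needs only the standard echo and awk commands, not any Starlink software:

```csh
set axlabel = Wavelength

# Convert the label to uppercase via awk's toupper, as in the loop above.
set uaxlabel = `echo $axlabel | awk '{print toupper($0)}'`
echo $uaxlabel
```

This prints WAVELENGTH, ready for comparison against the list of acceptable labels.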

12.11 FITS Headers

Associated with FITS files and many NDFs is header information stored in 80-character ‘cards’. It is possible to use these ancillary data in your script. Each non-comment header has a keyword, by which you can reference it; a value; and usually a comment. Kappa from V0.10 has a few commands for processing FITS header information, described in the following sections.
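
For illustration only, a header card packs these fields into its 80 characters in a fixed format: the keyword in columns 1–8, an equals sign, the value, and an optional comment after a slash. The values below are hypothetical:

```
AIRMASS =                1.062 / Effective mean airmass
FILTER  = 'B       '           / Waveband
END
```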

12.11.1 Testing for the existence of a FITS header value

Suppose that you wanted to determine whether an NDF called image123 contains an AIRMASS keyword in its FITS headers (stored in the FITS extension).

       set airpres = `fitsexist image123 airmass`
       if ( $airpres == "TRUE" ) then
          <access AIRMASS FITS header>
       endif

Variable airpres would be assigned "TRUE" when the AIRMASS card was present, and "FALSE" otherwise. Remember that the backquotes (` `) cause the enclosed command to be executed.

12.11.2 Reading a FITS header value

Once we know the named header exists, we can then assign its value to a shell variable.

       set airpres = `fitsexist image123 airmass`
       if ( $airpres == "TRUE" ) then
          set airmass = `fitsval image123 airmass`
          echo "The airmass for image123 is $airmass."
       endif

12.11.3 Writing or modifying a FITS header value

We can also write new headers at specified locations (the default being just before the END card), or revise the value and/or comment of existing headers. As we know the header AIRMASS exists in image123, the following revises its value and comment. It also writes a new header called FILTER immediately preceding the AIRMASS card, assigning it the value B and the comment Waveband.

       fitswrite image123 airmass value=1.062 comment=\"Corrected airmass\"
       fitswrite image123 filter position=airmass value=B comment=Waveband

As we want the double-quote (") metacharacters to be treated literally, each is preceded by a backslash.
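
You can see the effect of this escaping with echo; nothing here is Starlink-specific:

```csh
# The backslashes deliver literal double quotes to the command.
echo \"Corrected airmass\"
```

This prints "Corrected airmass", quotes included; without the backslashes the shell would strip them before the command ever saw them.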

12.12 Accessing other objects

You can manipulate data objects in HDS files, such as components of an NDF’s extension. There are several Starlink applications for this purpose, including the FIGARO commands copobj, creobj, delobj, renobj, and setobj; and the Kappa commands setext and erase.

For example, if you wanted to obtain the value of the EPOCH object from an extension called IRAS_ASTROMETRY in an NDF called lmc, you could do it like this.

       set year = `setext lmc xname=iras_astrometry option=get \
                   cname=epoch noloop`

The noloop prevents prompting for another extension-editing operation. The single backslash is the line continuation.

12.13 Defining NDF sections with variables

If you want to define a subset or superset of a dataset, most Starlink applications recognise NDF sections (see SUN/95’s chapter called “NDF Sections”) appended after the name.

A naïve approach might expect the following to work

       set lbnd = 50
       set ubnd = 120
       linplot $KAPPA_DIR/spectrum"($lbnd:$ubnd)"
       display $KAPPA_DIR/comwest"($lbnd:$ubnd,$lbnd:$ubnd)"

however, these commands generate the error "Bad : modifier in $ ($)." That is because the shell interprets the :$ following the variable substitution as a filename modifier (see Section 12.1).

Instead here are some recipes that work.

       set lbnd = 50
       set ubnd = 120
       set lrange = "101:150"
  
       linplot $KAPPA_DIR/spectrum"($lbnd":"$ubnd)"
       stats abc"(-20:99,~$ubnd)"
       display $KAPPA_DIR/comwest"($lbnd":"$ubnd",$lbnd":"$ubnd")"
       histogram hale-bopp.fit'('$lbnd':'$ubnd','$lbnd':'$ubnd')'
       ndfcopy $file1.imh"("$lbnd":"$ubnd","$lrange")" $work"1"
       splot hd102456'('$ubnd~60')'

An easy-to-remember formula is to enclose the parentheses and colons in quotes.
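
You can convince yourself of this with echo, which shows the string an application would receive; no Starlink software is needed:

```csh
set lbnd = 50
set ubnd = 120

# The quote closed after $lbnd stops the shell from treating the
# following colon as a filename modifier.
echo spectrum"($lbnd":"$ubnd)"
```

This prints spectrum(50:120), exactly the section string passed to linplot in the recipe above.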