### 8 Executing a Starlink Application

Running Starlink tasks from a script is much the same as running them interactively from the shell prompt: the commands are the same. The difference is that in a script you should supply values on the command line (directly or indirectly) for parameters for which you would normally be prompted. You may need to rehearse the commands interactively to learn which parameter values are needed. Although positional parameters involve less typing, it is prudent to give full parameter names in scripts: positions might change, and parameter names are easier to follow. CURSA is an exception; for this package you should list the answers to prompts in a file, as described in Section 12.8.

The script must recognise the package commands; the options for enabling this are described below. Then you can run Starlink applications from the C-shell script by just issuing the commands as you would at the shell prompt. You do not prefix them with any special character, such as the % used throughout this manual.

If you already have the commands defined in your current shell, you can source your script so that it runs in that shell, rather than in a child process derived from it. For instance,

% source myscript test

will run the script called myscript with argument test using the current shell environment; any package definitions currently defined will be known to your script. This method is only suitable for quick one-off jobs, as it relies on the command definitions already being present.

The recommended way is to invoke the startup scripts, such as kappa and ccdpack, within the script itself. The script will take a little longer to run because of these extra scripts, but it will be self-contained. To prevent the package startup messages appearing, you can temporarily redefine echo as shown here.

alias echo "echo > /dev/null"
kappa
ccdpack
unalias echo
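Putting these pieces together, a self-contained script might look like the following sketch (myndf is a hypothetical NDF name, not one from this manual):

```shell
#!/bin/csh
# Define the KAPPA commands, hiding the startup message.
alias echo "echo > /dev/null"
kappa
unalias echo

# The KAPPA commands are now recognised in this script.
# myndf is a hypothetical NDF in the current directory.
stats myndf
```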

In traditional UNIX style there is a third option: you could add the various directories containing the executables to your PATH environment variable. However, this will not pick up the synonym commands.

setenv PATH $PATH:/home/user1/dro/bin:/home/user2/drmoan/noddy

As most of the examples in this document are script excerpts, and for reasons of brevity, most do not define the package commands explicitly.

#### 8.1 Parameter files and the graphics database

If you simultaneously run more than one shell script executing Starlink applications, or run such a script in the background while you continue an interactive session, you may notice some strange behaviour with parameters. Starlink applications use files in the directory $ADAM_USER to store parameter values. If you don’t tell your script or interactive session where this is located, tasks will all use the same directory. To prevent the parameter files being shared, use the following tip.

#!/bin/csh
mkdir /user1/dro/vela/junk_$$
setenv ADAM_USER /user1/dro/vela/junk_$$

<main body of the script>

\rm -r /user1/dro/vela/junk_$$
# end of script

This creates a temporary directory (/user1/dro/vela/junk_$$) and redefines $ADAM_USER to point to it. Both exist only while the script runs. The $$ substitutes the process identification number and so makes a unique name. The backslash in \rm overrides any alias rm. If you are executing graphics tasks which use the graphics database, you may also need to redefine $AGI_USER to another directory. Usually, it is satisfactory to equate $AGI_USER to the $ADAM_USER directory.
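A minimal sketch of that arrangement: both variables point at the same temporary directory, so the parameter files and the graphics database are kept private to the script.

```shell
# Point the parameter files and the graphics database at the
# same per-process scratch directory.
setenv ADAM_USER /user1/dro/vela/junk_$$
setenv AGI_USER $ADAM_USER
```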

#### 8.2 How to test whether or not a Starlink task has failed

In a typical script involving Starlink software, you will invoke several applications. Should any of them fail, you normally do not want the script to continue, unless an error is sometimes expected and your shell script can take appropriate action. Either way, you want a test of whether the application has succeeded.

If you set the ADAM_EXIT environment variable to 1 in your script before calling Starlink applications, the status shell variable after each task will indicate whether or not the task has failed: 1 means failure and 0 success.

    .    .    .
stats allsky > /dev/null
echo $status
1
stats $KAPPA_DIR/comwest > /dev/null
echo $status
0

The NDF allsky is absent from the current directory, so stats fails, as reflected in the value of status, whereas $KAPPA_DIR/comwest does exist.

Here’s an example in action.

normalize in1=$ndfgen in2=$ndfin out=! device=! > /dev/null
if ( $status == 1 ) then
   echo "normalize failed comparing $ndfgen and $ndfin."
   goto tidy
else
   set offset = `parget offset normalize`
   set scale = `parget slope normalize`
endif
    .    .    .
tidy:
\rm ${ndfgen}.sdf
The script first switches on the ADAM_EXIT facility. A little later it creates an NDF, represented by $ndfgen, and then compares it with the input NDF $ndfin using normalize. If the task fails, the script issues an error message and moves to a block of code, normally near the end of the script, where various clean-up operations occur; in this case it removes the generated NDF.
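The same pattern generalises to any task: test $status after each application and branch to the tidy section on failure. A sketch, where mydata is a hypothetical NDF name:

```shell
#!/bin/csh
# Switch on the ADAM_EXIT facility so $status reflects task failure.
setenv ADAM_EXIT 1
kappa

# mydata is a hypothetical NDF.
stats mydata > /dev/null
if ( $status == 1 ) then
   echo "stats failed on mydata."
   goto tidy
endif

# <further processing>

tidy:
exit
```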