Chapter 15
The Data System

 15.1 HDS — Hierarchical data system
  15.1.1 Symbolic names and Include files
  15.1.2 Creating objects
  15.1.3 Writing and reading objects
  15.1.4 Accessing objects by mapping
  15.1.5 Mapping character data
  15.1.6 Copying and deleting objects
  15.1.7 Subsets of objects
  15.1.8 Temporary objects
  15.1.9 Enquiries
  15.1.10 Packaged routines
 15.2 REF — References to HDS objects
  15.2.1 Uses
 15.3 NDF — Extensible n-dimensional data format
  15.3.1 Relationship with HDS
  15.3.2 Data format
  15.3.3 Routines
  15.3.4 Example program

In this chapter, a bottom-up approach is adopted — the low-level packages are described first, followed by the higher-level ones.

15.1 HDS — Hierarchical data system

15.1.1 Symbolic names and Include files

Symbolic names should be used for important constant values in HDS programs to make them clearer and to insulate them from possible future changes in their values. These symbolic names are defined by Fortran ‘include’ files as shown in the examples below. The following include files are available:

SAE_PAR — This file is not actually part of HDS, but it defines the global symbolic constant SAI__OK (the value of the status return indicating success) and will be required by nearly all routines which call HDS. It should normally be included as a matter of course.1
DAT_PAR — Defines various symbolic constants for HDS. These should be used whenever the associated value is required (typically when program variables are declared).
DAT_ERR — Defines symbolic names for the error status values returned by the DAT_ and HDS_ routines.
CMP_ERR — Defines symbolic names for the additional error status values returned by the CMP_ routines.

The symbolic names of these include files may be used directly on VAX/VMS systems, but on Unix systems an explicit directory specification for the file is normally also required, and the file name should appear in lower case. Thus, to include the DAT_PAR file on a VAX/VMS system, the following code would be used:

      INCLUDE ’DAT_PAR’
whereas on a Unix system, the following code is required:

      INCLUDE ’/star/include/dat_par’

(/star/include is a standard directory containing include files on Starlink machines running Unix.)

If it is necessary to test for specific error conditions, the appropriate include file and symbolic names should be used in your program. Here is an example of how to use these symbols (DAT__OBJNF is the symbolic name, defined in DAT_ERR, for the ‘object not found’ status):

*  Find a structure component.
      CALL DAT_FIND(NLOC, ’DATA_ARRAY’, LOC, STATUS)
*  Check the status value returned.
      IF (STATUS .EQ. SAI__OK) THEN
         <normal action>
      ELSE IF (STATUS .EQ. DAT__OBJNF) THEN
         <take appropriate action for object not found>
      ELSE
         <action on other errors>
      END IF

15.1.2 Creating objects

To fix ideas, look at the example data structure in Fig 15.1. This is actually one form of NDF structure, but for the purposes of this chapter it will be treated as if it were simply an arbitrary HDS structure, i.e. we will use HDS routines rather than NDF routines to process it. The following notation is used to describe each object:

                      NAME[(dimensions)] <TYPE> [value]

Note that scalar objects have no dimensions and that each level down the hierarchy is indented.

       DATASET <NDF>  
          DATA_ARRAY(512,1024)     <_UBYTE>     0,0,0,1,2,3,255,3,...  
          LABEL                    <_CHAR*80>   ’This is the data label’  
          AXIS(2) <AXIS>  
             AXIS <AXIS>  
                DATA_ARRAY(512)    <_REAL>      0.5,1.5,2.5,...  
                LABEL              <_CHAR*30>   ’Axis 1’  
             AXIS <AXIS>  
                DATA_ARRAY(1024)   <_REAL>      5,10,15.1,20.3,...  
                LABEL              <_CHAR*10>   ’Axis 2’

Figure 15.1: A simple NDF structure.

This example exhibits several of the most important properties of HDS data objects: each object has a name and a type; structures may be nested to arbitrary depth; arrays of structures (such as AXIS(2)) are permitted; and the actual data values are held in primitive objects at the bottom of the hierarchy.

The following code will create the structure in Fig 15.1:

      INTEGER DIMS(2)
      DATA DIMS /512, 1024/
*  Create a container file with a top level scalar object of type NDF.
      CALL HDS_NEW(’dataset’, ’DATASET’, ’NDF’, 0, 0, NLOC, STATUS)
*  Create components in the top level object.
      CALL DAT_NEW(NLOC, ’DATA_ARRAY’, ’_UBYTE’, 2, DIMS, STATUS)
      CALL DAT_NEWC(NLOC, ’LABEL’, 80, 0, 0, STATUS)
      CALL DAT_NEW(NLOC, ’AXIS’, ’AXIS’, 1, 2, STATUS)
*  Create components in the AXIS structure...
*  Get a locator to the AXIS component.
      CALL DAT_FIND(NLOC, ’AXIS’, ALOC, STATUS)
*  Get a locator to the array cell AXIS(1).
      CALL DAT_CELL(ALOC, 1, 1, CELL, STATUS)
*  Create internal components within AXIS(1) using the CELL locator.
      CALL DAT_NEW(CELL, ’DATA_ARRAY’, ’_REAL’, 1, 512, STATUS)
      CALL DAT_NEWC(CELL, ’LABEL’, 30, 0, 0, STATUS)
*  Annul the cell locator.
      CALL DAT_ANNUL(CELL, STATUS)
*  Do the same for AXIS(2).
      CALL DAT_CELL(ALOC, 1, 2, CELL, STATUS)
      CALL DAT_NEW(CELL, ’DATA_ARRAY’, ’_REAL’, 1, 1024, STATUS)
      CALL DAT_NEWC(CELL, ’LABEL’, 10, 0, 0, STATUS)
      CALL DAT_ANNUL(CELL, STATUS)
      CALL DAT_ANNUL(ALOC, STATUS)
*  Access objects which have been created.
*  Tidy up.
      CALL HDS_CLOSE(NLOC, STATUS)

Here are some notes on particular aspects of this example:

DAT__SZLOC — This is one of the constants mentioned in Section 15.1.1, which is defined in the include file DAT_PAR and specifies the length in characters of all HDS locators. Similar constants, DAT__SZNAM and DAT__SZTYP, specify the maximum lengths of object names and types.
STATUS — HDS routines conform to Starlink error handling conventions and use inherited status checking.
HDS_NEW — A container file called ‘dataset’ is created (HDS provides the default file extension of ‘.SDF’). A scalar structure called DATASET with a type of NDF is created within this file, and a locator, NLOC, is associated with this structure. It is usually convenient, although not essential, to make the top-level object name match the container file name, as here.
DAT_NEW and DAT_NEWC — These routines create new objects within an existing structure — they are not equivalent to HDS_NEW because they refer not to a container file, but only to a higher-level structure. Two variants are used simply because the character-string length has to be specified when creating a character object, and it is normally most convenient to provide this via an additional integer argument. However, DAT_NEW may be used to create new objects of any type, including character objects. In this case the character-string length would be provided via the type specification, e.g. ‘_CHAR*15’ (a character-string length of one is assumed if ‘_CHAR’ is specified alone).
DAT_FIND — After an object has been created, it is necessary to associate a locator with it before values can be inserted; this routine performs that function.
DAT_CELL — There are several routines for accessing components of objects. This one obtains a locator to a scalar object (structure or primitive) within a non-scalar object like a vector.
HDS_CLOSE — This is used to close the container file and to annul the locator passed to it.

15.1.3 Writing and reading objects

Having created a structure, the next step will usually be to put some values into it. This can be done by using the DAT_PUT and DAT_PUTC routines. For example, the main data array in the above example could be filled with values as follows:

      BYTE IMVALS(512, 1024)
*  Put data from array IMVALS into the object DATA_ARRAY (DLOC is a locator
*  to that object, obtained previously with DAT_FIND;  DIMS holds its
*  dimensions).
      CALL DAT_PUT(DLOC, ’_UBYTE’, 2, DIMS, IMVALS, STATUS)
*  Put data from character constant to the object LABEL (locator LOC).
      CALL DAT_PUTC(LOC, 0, 0, ’This is the data label’, STATUS)

Because this sort of activity occurs quite often, packaged access routines (see Section 15.1.10) have been provided for the programmer.

A complementary set of routines also exists for getting data from objects back into program arrays or variables; these are the DAT_GET routines. Again, packaged versions exist and are often handy in reducing the number of subroutine calls required.

15.1.4 Accessing objects by mapping

Another technique for accessing the data values stored in primitive HDS objects is termed ‘mapping’.2 An important advantage is that it removes a size restriction imposed by having to declare fixed size program arrays to hold data. This simplifies software, so that a single routine can handle objects of arbitrary size without recourse to accessing subsets.

HDS provides mapped access to primitive objects via the DAT_MAP routines. Essentially, DAT_MAP will return a pointer to a region of the computer’s memory in which the object’s values are stored. This pointer can then be passed to another routine using the VAX Fortran ‘%VAL’ facility.3 An example will illustrate this:

*  Map the DATA_ARRAY component of the NDF structure as a vector of type
*  _REAL (even though the object is actually a 512 x 1024 array whose
*  elements are of type _UBYTE).
      CALL DAT_FIND(NLOC, ’DATA_ARRAY’, LOC, STATUS)
      CALL DAT_MAPV(LOC, ’_REAL’, ’UPDATE’, PNTR, EL, STATUS)
*  Pass the "array" to a subroutine.
      CALL SUB(%VAL(PNTR), EL)
*  Unmap the object and annul the locator.
      CALL DAT_UNMAP(LOC, STATUS)
      CALL DAT_ANNUL(LOC, STATUS)

*  Routine which takes the LOG of all values in a REAL array.
      SUBROUTINE SUB(A, N)
      INTEGER N
      REAL A(N)
      INTEGER I
      DO 1 I = 1, N
         A(I) = LOG(A(I))
 1    CONTINUE
      END

This example illustrates two features of HDS which we haven’t yet mentioned:

Vectorisation
— It is possible to force HDS to regard objects as vectors, irrespective of their true dimensionality. This facility was useful in the above example as it made the subroutine SUB much more general, in that it can be applied to any numeric primitive object.
Automatic type conversion
— A program can specify the data type it wishes to work with, and it will function correctly even if the data are stored as a different type. HDS will (if necessary) automatically convert the data to the type required by the program.4 This useful feature can greatly simplify programming — simple programs can handle all data types. Automatic conversion works on reading, writing and mapping.

Note that once a primitive has been mapped, the associated locator cannot be used to access further data until the original object is unmapped.

15.1.5 Mapping character data

Although the above example used a numeric type of ‘_REAL’ to access the data, HDS allows any primitive type to be specified as an access type, including ‘_CHAR’. It gives you a choice about how to determine the length of the character strings it will map. You may either specify the length you want explicitly, e.g:

      CALL DAT_MAPV(LOC, ’_CHAR*30’, ’READ’, PNTR, EL, STATUS)
(in which case HDS would map an array of character strings with each element containing 30 characters) or you may leave HDS to decide on the length required by omitting the length specification, thus:

      CALL DAT_MAPV(LOC, ’_CHAR’, ’READ’, PNTR, EL, STATUS)
In the latter case, HDS will determine the number of characters actually required to format the object’s values without loss of information. It uses decimal strings for numerical values and the values ‘TRUE’ and ‘FALSE’ to represent logical values as character strings. If the object is already of character type, then its actual length will be used directly. The routine DAT_MAPC also operates in this manner.

You should consult SUN/92 for details of how to use this facility on different machines. It is one of the areas where it is very difficult to produce a mechanism which works properly on all machines.

15.1.6 Copying and deleting objects

HDS can also copy and delete objects. Routines DAT_COPY and DAT_ERASE will recursively copy and erase all levels of the hierarchy below that specified in the subroutine call:

*  Copy the AXIS structure to component AXISCOPY of the structure located
*  by OLOC (which must have been previously defined).
      CALL DAT_COPY(ALOC, OLOC, ’AXISCOPY’, STATUS)
*  Erase the original AXIS structure.
      CALL DAT_ANNUL(ALOC, STATUS)
      CALL DAT_ERASE(NLOC, ’AXIS’, STATUS)

Note that the locator to the AXIS object has been annulled before attempting to delete it. This whole operation can also be done using DAT_MOVE:

      CALL DAT_MOVE(ALOC, OLOC, ’AXISCOPY’, STATUS)

15.1.7 Subsets of objects

The routine DAT_CELL accesses a single element of an array. An example was shown in Section 15.1.2. The routine DAT_SLICE accesses a subset of an arbitrarily dimensioned object. This subset can then be treated as if it were an object in its own right. For example:

      INTEGER LOWER(2), UPPER(2)
      DATA LOWER / 100, 100 /
      DATA UPPER / 200, 200 /
*  Get a locator to the subset DATA_ARRAY(100:200,100:200).
      CALL DAT_SLICE(LOC, 2, LOWER, UPPER, SLOC, STATUS)
*  Map the subset as a vector.
      CALL DAT_MAPV(SLOC, ’_REAL’, ’READ’, PNTR, EL, STATUS)

In contrast to DAT_SLICE, routine DAT_ALTER makes a permanent change to a non-scalar object. The object can be made larger or smaller, but only in the last dimension. This function is entirely dynamic, i.e. it can be done at any time, provided the object is not mapped for access. Note that DAT_ALTER works on both primitives and structures. It is important to realise that the number of dimensions cannot be changed by DAT_ALTER.
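As an illustration, the following sketch (which assumes that LOC locates the 512 x 1024 DATA_ARRAY of Fig 15.1 and that the object is not currently mapped) extends the last dimension of the array:

      INTEGER NEWDIM(2)
      DATA NEWDIM / 512, 2048 /
*  Alter the shape of the object from 512 x 1024 to 512 x 2048.  Only the
*  last dimension may be changed in this way.
      CALL DAT_ALTER(LOC, 2, NEWDIM, STATUS)

Any new elements created by such an extension must, of course, be given values before they are read.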

15.1.8 Temporary objects

Temporary objects of any type and shape may be created by using the DAT_TEMP routine. This returns a locator to the newly created object, and this may then be manipulated just as if it were an ordinary object (in fact a temporary container file is created with a unique name to hold all such objects, and this is deleted when HDS_STOP is executed at the end of the program). This is often useful for providing workspace for algorithms which may have to deal with large arrays.
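For example, a temporary _DOUBLE vector of NELM elements might be obtained, mapped and released along the following lines (CRUNCH is an invented routine standing for whatever algorithm needs the workspace):

*  Create a temporary 1-dimensional _DOUBLE object to use as workspace.
      CALL DAT_TEMP(’_DOUBLE’, 1, NELM, WLOC, STATUS)
*  Map it and pass it to the routine which needs the workspace.
      CALL DAT_MAPV(WLOC, ’_DOUBLE’, ’WRITE’, PNTR, EL, STATUS)
      CALL CRUNCH(%VAL(PNTR), EL, STATUS)
*  Annul the locator when the workspace is finished with.
      CALL DAT_ANNUL(WLOC, STATUS)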

15.1.9 Enquiries

One of the most important properties of HDS is that its data files are self-describing. This means that each object carries with it information describing all its attributes (not just its value), and these attributes can be obtained by means of enquiry routines. An example will illustrate:

*  Enquire the names and types of up to MAXCMP components...
*  First get the total number of components.
      CALL DAT_NCOMP(LOC, NCOMP, STATUS)
*  Now index through the structure’s components, obtaining locators and the
*  required information.
      DO 1 I = 1, MIN(NCOMP,MAXCMP)
*  Get a locator to the I’th component.
         CALL DAT_INDEX(LOC, I, CLOC, STATUS)
*  Obtain its name and type.
         CALL DAT_NAME(CLOC, NAME, STATUS)
         CALL DAT_TYPE(CLOC, TYPE, STATUS)
*  Is it primitive?
         CALL DAT_PRIM(CLOC, PRIM, STATUS)
         CALL DAT_ANNUL(CLOC, STATUS)
 1    CONTINUE

Here, DAT_INDEX is used to get locators to objects about which (in principle) we know nothing. This is just like listing the files in a directory, except that the order in which the components are stored in an HDS structure is arbitrary (so they won’t necessarily be accessed in alphabetical order).

15.1.10 Packaged routines

HDS includes families of routines which provide a more convenient method of accessing objects than the basic routines. For instance, members of the family DAT_PUT write values of specific type and dimensionality, and the DAT_GET routines read similar values. Thus DAT_PUT0I will write a single INTEGER value to a scalar primitive, and DAT_GET1R will read the value of a vector primitive and store it in a REAL program array. There are no DAT_GET2x routines; all dimensionalities higher than one are handled by DAT_GETNx and DAT_PUTNx.
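For instance, the LABEL component of Fig 15.1 could be written and read back with single calls (assuming LOC locates the LABEL object and DLAB is a suitable character variable):

*  Write a scalar character value.
      CALL DAT_PUT0C(LOC, ’This is the data label’, STATUS)
*  Read it back into the character variable DLAB.
      CALL DAT_GET0C(LOC, DLAB, STATUS)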

Another family of routines is the CMP_ family. These access named components of a structure at the ‘current level’. This usually involves:

— obtaining a locator to the component (DAT_FIND);
— reading or writing its values (one of the DAT_GET or DAT_PUT routines);
— annulling the locator (DAT_ANNUL).

The CMP_ routines package this sort of operation, replacing three or so subroutine calls with one. The naming scheme is based on the associated DAT_ routines. An example is shown below.

      INTEGER DIMS(2), ADIMS(2)
      REAL IMVALS(512, 1024)
      CHARACTER*80 DLAB
      DATA DIMS / 512, 1024 /
*  Get REAL values from the DATA_ARRAY component.
      CALL CMP_GETNR(NLOC, ’DATA_ARRAY’, 2, DIMS, IMVALS, ADIMS, STATUS)
*  Get a character string from the LABEL component and store it in DLAB.
      CALL CMP_GET0C(NLOC, ’LABEL’, DLAB, STATUS)

15.2 REF — References to HDS objects

Reference objects are HDS objects which store references to other HDS data objects. They act as pointers to data, rather than storing data themselves.

The REF package allows reference objects to be created and written, and it allows locators to referenced objects to be obtained. The routines are listed in Section 21.2.3.

The referenced object may be defined as internal, in which case it is assumed to be within the same container file as the reference object itself, even if the reference object is copied to another container file. In that case the reference must point to an object which has the same pathname within the new file as it had in the old one. References which are not internal will point to a named container file.

Reference objects may be copied and erased using DAT_COPY and DAT_ERASE. Care must be taken when copying reference objects or referenced objects, otherwise the reference may no longer point to the referenced object.

Referenced objects must exist at the time the reference is made or used.
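As a sketch of how a reference might be created, the routine REF_CRPUT both creates a reference object and writes a reference into it (the component name AXIS1 and the locators SLOC and OBJLOC are illustrative; the final logical argument indicates whether the reference is internal):

*  Create component AXIS1 of the structure located by SLOC as a reference
*  object, and put into it a reference to the object located by OBJLOC.
      CALL REF_CRPUT(SLOC, ’AXIS1’, OBJLOC, .FALSE., STATUS)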

15.2.1 Uses

Two main uses for this package are foreseen:

— allowing one data structure to point to information held elsewhere (for example, a catalogue object might contain references to the datasets which it describes);
— avoiding duplication of a large object which logically forms part of several other objects.

As an example of the second use, suppose that a large object is logically required to form part of a number of other objects. To avoid duplicating the common object, the others may contain a reference to it. For example:

   Name                type                  Comments  
DATA                DATA_SETS  
  .SET1             SPECTRUM  
     .AXIS1         _REAL(1024)          Actual axis data  
     .DATA_ARRAY    _REAL(1024)  
  .SET2             SPECTRUM  
     .AXIS1         REFERENCE_OBJ        Reference to DATA.SET1.AXIS1  
     .DATA_ARRAY    _REAL(1024)  
  .SET3             SPECTRUM  
     .AXIS1         REFERENCE_OBJ        Reference to DATA.SET1.AXIS1  
     .DATA_ARRAY    _REAL(1024)

Then a piece of code which handles structures of type SPECTRUM (which would normally contain the axis data in .AXIS1, as SET1 does) could be modified as follows to handle an .AXIS1 component containing either the actual axis data or a reference to the object which does contain it.

*  LOC1 is a locator associated with a SPECTRUM object; obtain locator to AXIS data.
      CALL DAT_FIND(LOC1, ’AXIS1’, LOC2, STATUS)
*  Modification to allow AXIS1 to be a reference object; check type of object.
      CALL DAT_TYPE(LOC2, TYPE, STATUS)
      IF (TYPE .EQ. ’REFERENCE_OBJ’) THEN
         CALL REF_GET(LOC2, ’READ’, LOC3, STATUS)
         CALL DAT_ANNUL(LOC2, STATUS)
         LOC2 = LOC3
      END IF
*  End of modification.  LOC2 now locates the axis information wherever it is.

This code has been packaged into the subroutine REF_FIND which can be used instead of DAT_FIND in cases where the component requested may be a reference object.

When a locator which has been obtained in this way is finished with, it should be annulled using REF_ANNUL rather than DAT_ANNUL. This is so that, if the locator was obtained via a reference, the HDS_OPEN for the container file may be matched by an HDS_CLOSE. Note that this should only be done when any other locators derived from the locator to the referenced object are also finished with.

15.3 NDF — Extensible n-dimensional data format

The programmer’s manual (SUN/33) for NDF runs to 199 pages. It is impossible to produce a useful summary of it, and pointless to reproduce the whole of it here. Instead, we give you an overview of how NDFs are related to HDS, what is in an NDF, and the sort of functions provided by the NDF routines. A realistic, fully commented example program is also given, which should at least give you an idea of how the routines are used. The routines are listed in Section 21.2.1.

15.3.1 Relationship with HDS

The NDF file format is based on HDS, and NDF data structures are stored in HDS container files. However, this does not necessarily mean that all applications which can read HDS files can also handle data stored in NDF format.

To understand why, you must appreciate that HDS provides only a rather low-level set of facilities for storing and handling astronomical data. These include the ability to store primitive data objects (such as arrays of numbers, character strings, etc.) in a convenient and self-describing way within container files. However, the most important aspect of HDS is its ability to group these primitive objects together to build larger, more complex structures. In this respect, HDS can be regarded as a construction kit which higher-level software can use to build even more sophisticated data formats.

The NDF is a higher-level data format which has been built in this way out of the more primitive facilities provided by HDS. Thus, in HDS terms, an NDF is a data structure constructed according to a particular set of conventions to facilitate the storage of typical astronomical data (such as spectra, images, or similar objects of higher dimensionality).

While HDS can be used to access such structures, it does not contain any of the interpretive knowledge needed to assign astronomical meanings to the various components of an NDF, whose details can become quite complicated. In practice, therefore, it is cumbersome to process NDF data structures using HDS directly. Instead, the NDF access routines are provided. These ‘know’ how NDF data structures are built, so they can hide the details from writers of astronomical applications. This results in a subroutine library which deals in higher-level concepts more closely related to the work which typical astronomical applications need to perform, and which emphasises the data concepts which an NDF is designed to represent, rather than the details of its implementation.

15.3.2 Data format

The simplest way of regarding an NDF is to view it as a collection of those items which might typically be required in an astronomical image or spectrum. The main part is an N-dimensional array of data (where N is 1 for a spectrum, 2 for an image, etc.), but this may also be accompanied by a number of other items which are conveniently categorised as follows:

Character components:      TITLE     — NDF title
                           LABEL     — Data label
                           UNITS     — Data units

Array components:          DATA      — Data pixel values
                           VARIANCE  — Pixel variance estimates
                           QUALITY   — Pixel quality values

Miscellaneous components:  AXIS      — Coordinate axes
                           HISTORY   — Processing history5

Extensions:                EXTENSION — Provides extensibility

The names of these components are significant, since they are used by the NDF access routines to identify the component(s) to which certain operations should be applied.6 The following describes the purpose and interpretation of each component in slightly more detail.

Character components:

TITLE — This is a character string whose value is intended for general use as a heading for such things as graphical output; e.g. ‘M51 in good seeing’.
LABEL — This is a character string whose value is intended to be used on the axes of graphs to describe the quantity held in the NDF’s data component; e.g. ‘Surface brightness’.
UNITS — This is a character string whose value describes the physical units of the quantity stored in the NDF’s data component; e.g. ‘J/(m^2 Angstrom)’.

Array components:

DATA — This is an N-dimensional array of pixel values representing the spectrum, image, etc. stored in the NDF. This is the only NDF component which must always be present; all the others are optional.
VARIANCE — This is an array of the same shape and size as the data array, representing the measurement errors or uncertainties associated with the individual data values. If present, these are always stored as variance estimates for each pixel.
QUALITY — This is an array of the same shape and size as the data array which holds a set of unsigned byte values. These are used to assign additional ‘quality’ attributes to each pixel (for instance, whether it is part of the active area of a detector). Quality values may be used to influence the way in which the NDF’s data and variance components are processed, both by general-purpose software and by specialised applications.

Miscellaneous components:

AXIS — This represents a group of axis components which may be used to describe the shape and position of the NDF’s pixels in a rectangular coordinate system. The physical units and a label for each axis of this coordinate system may also be stored. (Note that the ability to associate extensions with an NDF’s axis coordinate system, although described in SGP/38, is not yet available via the NDF access routines.)
HISTORY — This may be used to keep a record of the processing history. If present, this component should be updated by any applications which modify the data structure. Support for this component is not yet provided by the NDF access routines.

Extensions:

EXTENSION — These are user-defined HDS structures associated with the NDF, and are used to give the data format flexibility by allowing it to be extended. Their formulation is not covered by the NDF definition, but a few simple routines are provided for accessing and manipulating named extensions, and for reading and writing the values of components stored within them.
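As an illustration of extension handling, the NDF_ routines allow code along the following lines (the extension name FIGARO and its component MAGIC are invented for this example):

*  See whether the NDF has a FIGARO extension.
      CALL NDF_XSTAT(INDF, ’FIGARO’, THERE, STATUS)
*  If so, obtain a locator to it and read the value of its MAGIC component.
      IF (THERE) THEN
         CALL NDF_XLOC(INDF, ’FIGARO’, ’READ’, XLOC, STATUS)
         CALL CMP_GET0R(XLOC, ’MAGIC’, VALUE, STATUS)
         CALL DAT_ANNUL(XLOC, STATUS)
      END IF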

15.3.3 Routines

The NDF access routines are listed in Section 21.2.1. They perform the following types of operation on NDF data structures:

— obtaining access to existing NDFs, and creating new ones, via the ADAM parameter system;
— mapping array components for reading, writing or updating;
— making enquiries about attributes such as shape, size and numeric type;
— matching the pixel-index bounds and numeric types of several NDFs;
— handling bad-pixel flags;
— accessing and manipulating extensions.

Programming support for these routines, including on-line help, is also provided by the Starlink language-sensitive editor STARLSE.

15.3.4 Example program

The following application adds two NDF data structures pixel-by-pixel. It is a fairly sophisticated ‘add’ application which will handle both the data and variance components, as well as coping with NDFs of any shape and data type.

      SUBROUTINE ADD( STATUS )
*  Name:  
*     ADD  
*  Purpose:  
*     Add two NDF data structures.  
*  Description:  
*     This routine adds two NDF data structures pixel-by-pixel to produce a new NDF.  
*  ADAM Parameters:  
*     IN1 = NDF (Read)  
*        First NDF to be added.  
*     IN2 = NDF (Read)  
*        Second NDF to be added.  
*     OUT = NDF (Write)  
*        Output NDF to contain the sum of the two input NDFs.  
*     TITLE = LITERAL (Read)  
*        Value for the title of the output NDF.  A null value will cause  
*        the title of the NDF supplied for parameter IN1 to be used instead.  
*  Type Definitions:  
      IMPLICIT NONE              ! No implicit typing  
*  Global Constants:  
      INCLUDE ’SAE_PAR’          ! Standard SAE constants  
      INCLUDE ’NDF_PAR’          ! NDF_ public constants  
*  Status:  
      INTEGER STATUS             ! Global status  
*  Local Variables:  
      CHARACTER*(13) COMP        ! NDF component list  
      CHARACTER*(NDF__SZFTP) DTYPE ! Type for output components  
      CHARACTER*(NDF__SZTYP) ITYPE ! Numeric type for processing  
      INTEGER EL                 ! Number of mapped elements  
      INTEGER IERR               ! Position of first error (dummy)  
      INTEGER NDF1               ! Identifier for 1st NDF (input)  
      INTEGER NDF2               ! Identifier for 2nd NDF (input)  
      INTEGER NDF3               ! Identifier for 3rd NDF (output)  
      INTEGER NERR               ! Number of errors  
      INTEGER PNTR1(2)           ! Pointers to 1st NDF mapped arrays  
      INTEGER PNTR2(2)           ! Pointers to 2nd NDF mapped arrays  
      INTEGER PNTR3(2)           ! Pointers to 3rd NDF mapped arrays  
      LOGICAL BAD                ! Need to check for bad pixels?  
      LOGICAL VAR1               ! Variance component in 1st input NDF?  
      LOGICAL VAR2               ! Variance component in 2nd input NDF?  
*  Check inherited global status.
      IF (STATUS .NE. SAI__OK) RETURN
*  Begin an NDF context.
      CALL NDF_BEGIN
*  Obtain identifiers for the two input NDFs.  These involve calls to the
*  parameter system and may be resolved from the interface file, the command
*  line, or by prompting the user.
      CALL NDF_ASSOC(’IN1’, ’READ’, NDF1, STATUS)
      CALL NDF_ASSOC(’IN2’, ’READ’, NDF2, STATUS)
*  Trim their pixel-index bounds to match.  This selects the largest common
*  set of pixels from the two input arrays.
      CALL NDF_MBND(’TRIM’, NDF1, NDF2, STATUS)
*  Create a new output NDF based on the first input NDF.  Propagate the axis
*  and quality components, which are not changed.  This program does not
*  support the units component.
      CALL NDF_PROP(NDF1, ’Axis,Quality’, ’OUT’, NDF3, STATUS)  
*  See if a variance component is available in both input NDFs and generate  
*  an appropriate list of input components to be processed.  
      CALL NDF_STATE(NDF1, ’Variance’, VAR1, STATUS)  
      CALL NDF_STATE(NDF2, ’Variance’, VAR2, STATUS)  
      IF (VAR1.AND.VAR2) THEN
         COMP = ’Data,Variance’
      ELSE
         COMP = ’Data’
      END IF
*  Determine which numeric type to use to process the input arrays and set an
*  appropriate type for the corresponding output arrays.  This program supports
*  integer, real and double-precision arithmetic.  ITYPE says what type should
*  be used for the processing;  DTYPE is the type needed for the output data
*  (identified by NDF3) and so is passed on to NDF_STYPE.
      CALL NDF_MTYPE(’_INTEGER,_REAL,_DOUBLE’,
     :                NDF1, NDF2, COMP, ITYPE, DTYPE, STATUS)
      CALL NDF_STYPE(DTYPE, NDF3, COMP, STATUS)
*  Map the input and output arrays.  Note that the identifier NDF3 produced by
*  NDF_PROP is used for the output data, which must be in WRITE access mode.
      CALL NDF_MAP(NDF1, COMP, ITYPE, ’READ’, PNTR1, EL, STATUS)
      CALL NDF_MAP(NDF2, COMP, ITYPE, ’READ’, PNTR2, EL, STATUS)
      CALL NDF_MAP(NDF3, COMP, ITYPE, ’WRITE’, PNTR3, EL, STATUS)
*  Merge the bad pixel flag values for the input data arrays to see if checks
*  for bad pixels are needed.  The first argument ‘.TRUE.’ says that this
*  application can handle bad pixels (if it were .FALSE. and bad pixels were
*  present the STATUS would be set to an error value).  The fifth argument
*  ‘.FALSE.’ says not to check whether there actually are any bad pixels present.
      CALL NDF_MBAD(.TRUE., NDF1, NDF2, ’Data’, .FALSE., BAD, STATUS)
*  Check that everything has succeeded so far.
      IF (STATUS .EQ. SAI__OK) THEN
*  Select the appropriate routine for the data type being processed and add the data  
*  arrays.  Note that the arithmetic is done by one of the VEC_ routines in PRIMDAT  
*  which handle bad pixels and any arithmetic errors, such as overflow, for you.  
         IF (ITYPE.EQ.’_INTEGER’) THEN  
            CALL VEC_ADDI(BAD, EL, %VAL(PNTR1(1)),  
     :                     %VAL(PNTR2(1)), %VAL(PNTR3(1)),  
     :                     IERR, NERR, STATUS)  
         ELSE IF (ITYPE.EQ.’_REAL’) THEN  
            CALL VEC_ADDR(BAD, EL, %VAL(PNTR1(1)),  
     :                     %VAL(PNTR2(1)), %VAL(PNTR3(1)),  
     :                     IERR, NERR, STATUS)  
         ELSE IF (ITYPE.EQ.’_DOUBLE’) THEN
            CALL VEC_ADDD(BAD, EL, %VAL(PNTR1(1)),  
     :                     %VAL(PNTR2(1)), %VAL(PNTR3(1)),  
     :                     IERR, NERR, STATUS)  
         END IF  
*  Flush any messages resulting from numerical errors.
         IF (STATUS .NE. SAI__OK) CALL ERR_FLUSH(STATUS)
      END IF
*  See if there may be bad pixels in the output data array and set the output  
*  bad pixel flag value accordingly.  NERR is the number of errors detected by  
*  the VEC_ADDx routine.  
      BAD = BAD .OR. (NERR.NE.0)  
      CALL NDF_SBAD(BAD, NDF3, ’Data’, STATUS)  
*  If variance arrays are also to be processed (i.e. added), then see if bad  
*  pixels may be present in the variance arrays.  
      IF (VAR1.AND.VAR2) THEN  
         CALL NDF_MBAD(.TRUE., NDF1, NDF2, ’Variance’, .FALSE., BAD,  
     :                  STATUS)  
*  Select the appropriate routine to add the variance arrays.  
         IF (STATUS.EQ.SAI__OK) THEN  
            IF (ITYPE.EQ.’_INTEGER’) THEN  
               CALL VEC_ADDI(BAD, EL, %VAL(PNTR1(2)),  
     :                        %VAL(PNTR2(2)), %VAL(PNTR3(2)),  
     :                        IERR, NERR, STATUS)  
            ELSE IF (ITYPE.EQ.’_REAL’) THEN  
               CALL VEC_ADDR(BAD, EL, %VAL(PNTR1(2)),  
     :                        %VAL(PNTR2(2)), %VAL(PNTR3(2)),  
     :                        IERR, NERR, STATUS)  
            ELSE IF (ITYPE.EQ.’_DOUBLE’) THEN  
               CALL VEC_ADDD(BAD, EL, %VAL(PNTR1(2)),  
     :                        %VAL(PNTR2(2)), %VAL(PNTR3(2)),  
     :                        IERR, NERR, STATUS)  
            END IF  
*  Flush any messages resulting from numerical errors.
            IF (STATUS .NE. SAI__OK) CALL ERR_FLUSH(STATUS)
         END IF
*  See if bad pixels may be present in the output variance array and set the  
*  bad pixel flag accordingly.  
         BAD = BAD .OR. (NERR.NE.0)  
         CALL NDF_SBAD(BAD, NDF3, ’Variance’, STATUS)  
      END IF  
*  Obtain a new title for the output NDF, by way of the parameter system.  
      CALL NDF_CINP(’TITLE’, NDF3, ’Title’, STATUS)  
*  End the NDF context.
      CALL NDF_END(STATUS)
*  If an error occurred, then report context information.
      IF (STATUS .NE. SAI__OK) THEN
         CALL ERR_REP(’ADD_ERR’,
     :   ’ADD: Error adding two NDF data structures.’, STATUS)
      END IF

      END

The following is a possible interface file for the above application:

   interface ADD
      parameter IN1                 # First input NDF
         position 1
         prompt   ’First input NDF’
      endparameter
      parameter IN2                 # Second input NDF
         position 2
         prompt   ’Second input NDF’
      endparameter
      parameter OUT                 # Output NDF
         position 3
         prompt   ’Output NDF’
      endparameter
      parameter TITLE               # Title for output NDF
         type     ’LITERAL’
         prompt   ’Title for output NDF’
         vpath    ’DEFAULT’
         default  !
      endparameter
   endinterface

1Due to an historical anomaly on VAX/VMS systems, the file SAE_PAR also contains definitions for the DAT__ symbolic constants which should properly reside in DAT_PAR. To prevent multiple definitions occurring, DAT_PAR is therefore an empty file on VAX/VMS systems. Thus, if SAE_PAR has been included, the further inclusion of DAT_PAR is optional (but only on VAX/VMS). Its inclusion is recommended, however, because this allows the same software to be used on other systems without change.

2This terminology derives from the facility originally provided by VAX/VMS for mapping the contents of files into the computer’s memory, so that they appear as if they are arrays of numbers directly accessible to a program. Although HDS still exploits this technique when appropriate, other techniques are also used internally so that HDS no longer depends on the use of file mapping (which some operating systems do not provide). The terminology remains in use, however.

3This VAX extension to Fortran 77 is also supported by the other implementations of Fortran for which HDS is available.

4This will work even if the object was originally created on a different computer which formats its numbers differently.

5The history component is not fully supported by the present version of the NDF access routines.

6Note that the name ‘DATA’ used by the NDF_ routines to refer to an NDF’s data component differs from the actual name of the HDS object in which it is stored, which is ‘DATA_ARRAY’.