The ECHMENU monolith main menu options are described in the following text. Many of these options are also available as tasks which can be accessed from the shell. Where a task is available, its name is given in the section heading.

In each of the following sections a description is given of the task, its purpose, the parameters it uses, and the reduction file objects it accesses. Parameters used by the tasks are described in detail under Parameters.

0:echhelp – HELP

Provides entry into the HELP library for browsing. Upon entry the first topic displayed will depend upon the context and will usually be appropriate to the next default option at the time.

To leave the HELP browser type CTRL-Z, or press carriage return until the ECHMENU menu re-appears.

The stand-alone HELP browser echhelp can be invoked from the shell or the hypertext version of the text can be accessed using

  % echwww
1: ech_locate – Start a Reduction

This is usually the first option selected and will cause the following operations to be performed:


Reduction File Objects:

2:ech_trace – Trace the Orders

The tracing of the paths of the orders across the data frame is often a source of difficulty as it is fairly easy for blemishes in the frame to fatally deflect order tracing algorithms from the actual path of the order. echomop provides a variety of options to help combat these problems. echomop order tracing first locates the positions of the orders at the centre of the frame, and estimates the average order slope. It uses this information to predict the existence of any partial orders at the top/bottom of the frame which may have been missed by the examination of the central columns during order location.

Tracing then proceeds outwards from the centre of each order. At each step outwards a variable size sampling box is used to gather a set of averages for the rows near the expected order centre. The centre of this data is then evaluated by one of the following methods:

The trace algorithm will loop increasing its sampling box size automatically when it fails to find a good centre. The sample box can increase up to a size governed by the measured average order separation.

When a set of centres has been obtained for an order, a polynomial is fitted to their coordinates. The degree is selectable. For ideal data, these polynomials will represent an accurate reflection of the path of the order across the frame. For real data it is usually helpful to refine these polynomials by clipping the most deviant points, and re-fitting. Options are provided to do this automatically or manually.
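The clip-and-refit scheme can be sketched as follows. This is an illustrative Python fragment, not the package's own code; the function name, clip threshold and iteration limit are invented for the example:

```python
import numpy as np

def fit_trace(x, y, degree=4, clip_sigma=3.0, max_iter=10):
    """Fit a polynomial to order-centre samples, iteratively
    clipping the most deviant points and re-fitting."""
    keep = np.ones(len(x), dtype=bool)
    for _ in range(max_iter):
        coeffs = np.polyfit(x[keep], y[keep], degree)
        resid = y - np.polyval(coeffs, x)
        sigma = np.std(resid[keep])
        new_keep = np.abs(resid) < clip_sigma * sigma
        if np.array_equal(new_keep, keep):
            break  # no further points rejected; fit has converged
        keep = new_keep
    return coeffs, keep
```

A blemish which deflects a run of centre samples shows up as a group of deviant points; clipping them and re-fitting recovers the underlying order path.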

When dealing with distorted data it is often necessary to use a high degree polynomial to accurately fit the order traces. This in turn can lead to problems at the edges of the frame when the order is often faint.

Typically the polynomial will ‘run away’ from the required path. The simplest solution is of course to re-fit with a lower order polynomial, however, this may not be satisfactory if the high degree is necessary to obtain a good fit over the rest of the order.

In these circumstances, and others where one or more orders' polynomials have ‘run away’, echomop provides an automatic consistency checker. The consistency checking task works by fitting polynomials to order-number and Y-coordinate at a selection of positions across the frame. The predicted order centres from both sets of polynomials are then compared with each other and the mean and sigma of the differences are calculated. The ‘worst’ order is then corrected by re-calculating its trace polynomial using the remaining orders (excluding its own contribution). This process is repeated until the mean deviation between the polynomials falls below a tunable threshold value. The consistency checker will also cope with the ‘bad’ polynomials which can result when partial orders have been automatically fitted.
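One pass of this cross-order check can be illustrated schematically (a Python sketch only; the function name, the quadratic fitting degree and the array layout `centres[order, column]` are all assumptions for the example):

```python
import numpy as np

def fix_worst_order(centres):
    """centres[m, k]: traced Y-centre of order m at sample column k.
    Identify the order deviating most from a cross-order fit, and
    re-predict its centres from the remaining orders."""
    n_ord, n_col = centres.shape
    m = np.arange(n_ord, dtype=float)
    # fit order-number vs Y-centre at each sampled column
    pred = np.empty_like(centres)
    for k in range(n_col):
        pred[:, k] = np.polyval(np.polyfit(m, centres[:, k], 2), m)
    worst = int(np.argmax(np.mean(np.abs(centres - pred), axis=1)))
    # re-predict the worst order, excluding its own contribution
    others = np.delete(m, worst)
    fixed = np.array([
        np.polyval(np.polyfit(others, np.delete(centres[:, k], worst), 2),
                   m[worst])
        for k in range(n_col)])
    return worst, fixed
```

Repeating this until the mean deviation falls below a threshold mirrors the iterative behaviour described above.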

Viewing traced paths

The tracing of the échelle order paths is central to the entire extraction process and care should be taken to ensure the best traces possible. echomop provides a large number of processing alternatives to help ensure this can be done. Most of these provide information such as RMS deviations etc., when run. In general however, the best way of evaluating the success or failure of the tracing process is to visually examine the paths of the trace fitted polynomials. Three methods of viewing the traced paths are provided.

For a single order, a more detailed examination of the relation of a trace polynomial to the points it fits can be obtained using the V (view) command in the task ech_fitord/ECHMENU Option 3.


Reduction File Objects:

3:ech_fitord – Clip Trace Polynomials


Figure 2: A typical plot of the order trace deviations as shown during the trace clipping stage (Option 3). The points represent the deviation of the order from the fitted polynomial/spline at a set of sample column positions.

This routine performs automatic or manual clipping of points from the set of samples representing the path of an order across the frame. A variety of methods of manually clipping points is available, and the degree of the polynomial used may also be altered. The relationship of the fit to the trace samples, and the deviations, may be examined graphically.


Figure 3: A plot of the fitted order trace overlaid on the trace data. This plot was made using the option “V” in ECHMENU Option 3.

Orders may be repeatedly re-fitted once they have been initially traced. In general an automatic fit, clipping to a maximum deviation of a third of a pixel, will yield good trace polynomials. The routine may also be used to remove sets of points which are distorting a fit, for example a run of bad centre estimates caused by a defective column or row.

This routine should be used manually on single orders when they are clearly not correctly fitted. This will usually be seen by viewing the paths of the fitted order traces using ech_trplt. In many cases it will be possible to get a good trace fit by clipping a set of obviously bad samples, and re-fitting (perhaps with a lower degree of polynomial).

The other main option for coping with such problems is to rely on the trace consistency checking routine. This will produce a set of consistent trace fits in most cases. If a particular order is beyond recovery and you do not wish to extract it at all, then it may be disabled by clipping away all of its trace samples with this routine.

The figure above shows an example of the deviations plot after automatic clipping has been performed on an order.

If you set TRC_INTERACT=YES then a multi-option menu will be displayed. The options are:


Reduction File Objects:

4:ech_spatial – Determine Dekker/Object Limits

The determination of the position of the object data within the slit proceeds by first locating the slit jaws. To do this either an ARC frame or (ideally) a flat-field frame may be used. The profile is calculated along the slit and the edges are then located by determining the points where the profile intensity drops below a tunable threshold. For problem cases the dekker positions may also be indicated manually on a graph of the ARC/flat-field profile.

Once the dekker limits have been determined, the object profile is measured. The object is sampled by averaging the profile over all orders (using the central columns of the frame only). The median intensity of the profile inside the dekker limits is then calculated and used to set an expected sky threshold. The profile is then examined by stepping outwards from its peak until the profile intensity falls to the expected sky threshold.

Masks are then created in which each pixel along the slit is flagged as sky or object. You may also interactively edit these masks and flag particular sections of the profile as sky or object. Only pixels flagged as ‘object’ will contribute to an extraction. The profile editing therefore provides a simple (if tedious) mechanism for producing spatially resolved spectra: each spatial increment in turn is flagged as the only object pixel in the profile, and extracted.
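The limit-finding logic can be sketched as follows (illustrative Python only; the function name and the 10% dekker threshold are assumptions, and echomop's own thresholds are tunable parameters):

```python
import numpy as np

def object_limits(profile, dekker_frac=0.1):
    """Locate dekker edges (where the slit profile drops below a
    fraction of its peak) and object limits (stepping out from the
    peak until the profile falls to the median 'sky' level)."""
    thresh = dekker_frac * profile.max()
    inside = np.where(profile > thresh)[0]
    d_lo, d_hi = inside[0], inside[-1]
    # expected sky threshold: median intensity inside the dekker
    sky_level = np.median(profile[d_lo:d_hi + 1])
    peak = d_lo + int(np.argmax(profile[d_lo:d_hi + 1]))
    o_lo = o_hi = peak
    while o_lo > d_lo and profile[o_lo - 1] > sky_level:
        o_lo -= 1
    while o_hi < d_hi and profile[o_hi + 1] > sky_level:
        o_hi += 1
    return (d_lo, d_hi), (o_lo, o_hi)
```

Pixels between the object limits would be flagged ‘object’, the remaining in-dekker pixels ‘sky’, and pixels outside the dekker ignored.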

The default behaviour is to average all the orders together thus generating a composite profile. In certain circumstances it may be necessary to derive a separate profile for each order (for instance for multi-fibre spectra). To select this option the parameter TUNE_USE_NXF=1 must be set.

This option must be used before any extraction of the orders can take place. It consists of two steps:

In each case a plot is produced on the graphics device (specified using the SOFT parameter), showing the status of pixels in relation to their position relative to the path of the order across the frame.


Figure 4: The plot used for dekker limit setting. In interactive use the cursor is moved to the desired X-position and then either l (for lower) or u (for upper) key is pressed to set a limit. If the scale of the graph is too small then it can be extended by pressing l beyond the left-hand vertical axis, or u beyond the right-hand vertical axis.

The Figure above shows an example of the dekker plot. The regions indicated by a solid line are inside the dekker. Pixels in positions corresponding to the dot/dash line are outside the dekker and will not be used during processing.


Figure 5: The type of plot used for object limit setting. The cursor is placed at the X-position of the region of interest and then one of the o (object), s (sky), i (ignore), b (both), keys pressed. Dekker limits may also be changed using the u and l keys.

The Figure above shows an example of the object limits plot. The regions indicated by a solid line are ‘object’ pixels and will contribute to the extraction. Regions shown by the dashed line are ‘sky’ pixels which will be used to calculate the sky model. Pixels in the dot/dash region are outside the dekker limits and will not be used during processing.

The status of each pixel, as set by this option, determines what part it plays when the extraction takes place: whether the pixel is part of the sky, part of the object, outside the slit, or to be ignored completely by the extraction routine. If the parameter PFL_INTERACT=YES is set then you are also provided with the opportunity to edit these settings on a profile plot.

In addition, the limits may be specified by using parameters, which will override the values calculated by the modules.

After running Option 4, the post-trace cosmic-ray locator may be run if required (Option 17). It cannot be used before as it uses the object limits derived in Option 4.


Reduction File Objects:

5:ech_ffield – Model Flat-field Balance Factors

The ‘balance factors’ are the per-pixel values which are multiplied into the raw data to perform the photometric correction required to correct for differing pixel-to-pixel responses of some detectors (mostly CCDs).

echomop will use a flat-field frame if one is available. The flat-field frame should be produced using a continuum lamp exposure with the instrument in an identical configuration to that used for the object exposure (to ensure that any wavelength dependent behaviour of pixel response is taken into account). The exposure should be of sufficient duration to attain high counts in the brightest parts of the image.

echomop fits functions in two directions; along the traces, and along the image columns. The degree of polynomials fitted is tunable, but the (low) default degree will normally be perfectly reasonable. Each flat-field pixel in an order is then used to calculate a ‘balance’ factor. This is a number close to 1 which represents the factor by which a given pixel exceeds its expected value (predicted by the polynomial).
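The balance-factor idea can be sketched for a single order as follows (a Python illustration, not the package's own code; here the factor is formed as model/data so that multiplying the raw data by it flattens the response — the exact convention and the two-directional fitting are simplified away):

```python
import numpy as np

def balance_factors(flat_row, degree=3):
    """Fit a low-degree polynomial along one flat-field order and
    form per-pixel balance factors (model / data)."""
    x = np.arange(len(flat_row), dtype=float)
    model = np.polyval(np.polyfit(x, flat_row, degree), x)
    return model / flat_row
```

A pixel whose response is 5% high receives a balance factor close to 1/1.05, so the corrected data follow the smooth lamp continuum.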

Note that this technique requires that the flat-field orders vary slowly and smoothly both along and across each order.

If required, many flat-field frames (with identical instrument configuration) may be co-added prior to ECHOMOP, and the high S/N flat field used by ECHOMOP. At present no special facilities are provided for calculating the actual error on such a co-added flat field; the expected error (derived from root-N statistics) is used to calculate the error on the balance factors unless appropriate variances are provided in the flat-field frame error array.

Other modes of operation are triggered by setting the parameter FLTFIT. If the parameter is set to NONE then no modelling in the X-direction will be performed. If the parameter is set to MEAN, then no polynomials are used and the balance factors are calculated using the local mean value based on a 5-pixel sample. This will normally be used when the flat field at the dekker limits cannot be modelled because its intensity changes too rapidly on a scale of 1 pixel due to under-sampling of the profile.

The full set of modelling options is:

If you produce your own balance-factor frame, then this may be used by echomop; the parameter TUNE_PREBAL should then be set to YES. In this case no modelling takes place and the balance factors are simply copied from the frame supplied. This should be used if the echomop models do not generate an appropriate flat-field correction. In cases where no flat-field frame is available the parameter TUNE_NOFLAT=YES can be specified; alternatively, you can reply NONE when prompted for the name of the flat-field frame. In either case, the balance factors will be set to unity.


Reduction File Objects:

6:ech_sky – Model Sky Background

The sky intensity is modelled at each increment along the order. The degree of polynomial fitted is adjustable; by default it is set to zero to obtain the ‘average’ sky behaviour.

The use of polynomials or splines of higher degree is advisable when there is a significant gradient to the sky intensity along the slit, as the polynomials are used to predict the sky intensity at each object pixel in the order independently. Note that the meaning of ‘increment’ differs between regular and 2-D distortion-corrected extractions. For a simple extraction an increment is a single-pixel column. For a 2-D extraction each increment is a scrunched wavelength-scale unit, thus ensuring the accurate modelling of distorted bright emission lines in the sky.
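The per-increment prediction can be sketched as follows (illustrative Python; the function name and masks are invented, and degree 0 reproduces the default ‘average’ sky behaviour):

```python
import numpy as np

def sky_at_object(slit, sky_mask, obj_mask, degree=1):
    """Fit a polynomial to the sky pixels along the slit in one
    increment, and predict the sky under each object pixel."""
    y = np.arange(len(slit), dtype=float)
    coeffs = np.polyfit(y[sky_mask], slit[sky_mask], degree)
    return np.polyval(coeffs, y[obj_mask])
```

With a degree-1 fit, a linear sky gradient along the slit is predicted exactly at the object pixels, which a plain average would get wrong.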

It is also possible to model the sky intensity in the wavelength direction using polynomials. In this case there are parameters available to define the threshold for possible sky lines which will be excluded from the fit (TUNE_SKYLINW and TUNE_SKYLTHR). When a wavelength-dependent model is used it is also possible to request an extra simulation step which allows the accurate evaluation of the errors on the fitted model (using a Monte-Carlo simulation). This procedure can improve the variances used during an ‘optimal’ extraction, particularly in cases where the object is only fractionally brighter than the sky. The simulation is enabled using the hidden parameter TUNE_SKYSIM=YES.

The determination of which pixels are sky is done using the masks set by the profiling task or ECHMENU option. These masks can be freely edited to cope with any special requirements as to which regions of the slit are to be used for the sky. This facility is of particular use when processing frames where ‘periscopes’ have been used to add in ‘sky’ regions when observing an extended source. In such cases, echomop currently provides no special treatment and the periscope sky-pixel positions will have to be edited into the sky mask using the task ech_spatial/ECHMENU Option 4.

It is also possible to use a separate sky frame by flagging all pixels as sky, modelling the sky, and then resetting the requisite object pixels (using Option 4.2) before extracting using the object frame.

In cases where there is significant contamination of the background due to scattered light, it is possible to use a global model of the background intensity instead. ech_mdlbck (Option 22) performs this process and should be used instead of the sky modelling option (the two processes are mutually exclusive).


Reduction File Objects:

7:ech_profile – Model Object Profile

The object profile model is constructed by subsampling the profile and may be an all order average, or independently calculated for each order (enabled by setting TUNE_USE_NXF=1). There are also facilities for modelling profiles which vary slowly with wavelength by fitting polynomials in the wavelength direction (set TUNE_OBJPOLY>0).

The degree of subsampling is controlled using the parameter TUNE_PFLSSAMP which sets the number of subsamples across the spatial profile.


Reduction File Objects:

8:ech_extrct – Extract Object and Arc Order Spectra

The extraction of both object and arc orders proceeds in parallel to ensure that the same weights are used in both cases. There are three possible weighting schemes implemented currently. All methods maintain variances and allow individual pixels to be excluded from the extraction process by referring to the object frame quality array. Simple extraction weights all object pixels equally and is much less computationally demanding than the other methods. The object intensity is calculated by summing all the object pixels in each column for each order.

Profile weighted extraction weights each pixel by a factor P(i, j)², where P(i, j) is the calculated normalised profile at spatial offset j (sub-sampled) from the trace centre and i is the column number.

Optimally weighted (or Variance weighted) extraction weights each pixel by the product of the calculated profile P(i, j) and an estimate of the uncertainty of the pixel intensity.

This estimate is based on the calculated variance following the scheme described by Horne in An Optimal Extraction Algorithm for CCD spectroscopy (P.A.S.P. 1986), modified to cope with profile subsampling associated with sloping and/or distorted orders.
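For one increment, the Horne estimator combines the pixels as flux = Σ M·P·D/V divided by Σ M·P²/V, where D is the data, V the variance, P the normalised profile and M a good-pixel mask. A minimal Python sketch (the sub-sampling refinements mentioned above are omitted):

```python
import numpy as np

def optimal_extract(data, var, profile, mask):
    """Optimally weighted extraction of one increment, after
    Horne (1986): flux = sum(M*P*D/V) / sum(M*P^2/V)."""
    w = mask * profile / var           # per-pixel weight M*P/V
    return np.sum(w * data) / np.sum(w * profile)
```

The normalisation by Σ M·P²/V keeps the estimate unbiased even when some pixels are masked out, e.g. by the quality array.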

In addition, the rejection of cosmic-ray-contaminated pixels has been made available as a separate function in echomop as the package is not dedicated to CCD-only data reduction. The original cosmic-ray rejection described by Horne has also been retained and can be enabled using the parameter TUNE_CRCLEAN, although the dedicated CR module seems to perform better in most cases. Optimally weighted extraction has been shown to improve S/N in the extracted spectra by factors corresponding to up to 20% increases in exposure time, and its use is therefore to be encouraged in most cases. The provision of sky variance modelling helps to ensure that the optimal extraction can still perform ‘optimally’ even with very low S/N data.

NOTE: Option 19—Quick-look extraction is provided primarily for at-the-telescope use to permit the observer to quickly check that decent data are being obtained. Quick-look does not use the sky model or flat-field model and should not be used to produce spectra for further analysis.


Reduction File Objects:

9:ech_linloc – Locate Arc Line Candidates

This option is used to locate arc line features for later identification. It consists of two steps:


Figure 6: The “average” line-profile for an arc. This was produced using ECHMENU Option 9.

The FWHM is evaluated by co-adding all possible arc line features in the arc frame, averaging the resulting profile, and calculating its FWHM. This value is used to scale the Gaussians which are fitted to each arc line in order to obtain an estimate of its centre position (in X).

Possible arc lines are denoted by any region of an order in which 5 consecutive pixels (P1-5) obey the following relation:

P1 < P2 < P3 > P4 > P5

and are amenable to a Gaussian fit using the calculated FWHM.

This ensures that even very faint features are put forward for possible identification (useful when there are no bright lines in an entire order).
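The five-pixel candidate test itself is simple enough to sketch directly (illustrative Python; the subsequent Gaussian fitting step is not shown):

```python
import numpy as np

def line_candidates(spectrum):
    """Return indices i where five consecutive pixels satisfy
    P1 < P2 < P3 > P4 > P5, i.e. candidate arc-line peaks at P3."""
    s = np.asarray(spectrum, dtype=float)
    idx = []
    for i in range(2, len(s) - 2):
        if s[i - 2] < s[i - 1] < s[i] > s[i + 1] > s[i + 2]:
            idx.append(i)
    return idx
```

Because no minimum intensity is demanded, even very faint peaks pass the test, as noted above.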


Reduction File Objects:

10:ech_idwave – Wavelength Calibrate

The wavelength calibration is done using a reference feature list, usually provided by an ARC-lamp exposure. The routine allows any candidate features to be identified and used as ‘knowns’ for the calibration (as position/intensity pairs). These features may then be manually identified using a reference lamp atlas. Facilities are provided for adding/deleting lines and altering the degree of polynomial fit performed.

An automatic line-identifier is included which operates by searching the supplied line list for multi-line ‘features’. You may optimise the search by constraining the space to be searched in terms of permissible wavelength and/or dispersions (in Angstroms/pixel).

In addition, the program will automatically constrain the search range further if it can determine the central order number and wavelength (by looking in the data frame header).

As soon as three orders have been successfully calibrated, the search range for the remaining orders is re-evaluated to take this into account. In general, the automatic method will be most useful when you are unsure of the exact wavelength range covered. When the level of doubt is such that the wavelength scale may decrease from left-to-right across the frame, the software may be instructed to check automatically for this ‘reversed’ condition. Set parameter TUNE_REVCHK to YES to check for a reversed arc; the parameter defaults to NO to avoid wasting CPU time. If the wavelength scale is reversed then you should use FIGARO IREVX to flip all the relevant images, and then re-start the reduction.

Finally the software is flexible as to the vertical orientation of the orders, i.e., higher wavelength orders may be at the top or bottom of the frame (for échelle data). Calibration may be performed using either 1 or 2 (before and after) ARC frames at present. See Arc Frames for details.

Options are presented in a menu form and selected by typing a one or two character string, followed by carriage return. The following options are supported:


Figure 7: A typical plot during interactive line-identification.

The Figure above shows an example of the plot displayed during interactive line-identification. The following points should be noted:


Reduction File Objects:

11:ech_blaze – Fit and Apply Ripple Correction

This option consists of two steps as follows:

If flux calibration is not being performed it is sometimes desirable to remove the ‘blaze’ function from the extracted spectrum to assist in fitting line profiles etc. during data analysis.

A task is provided for this purpose which operates by fitting curves to the flat-field orders. The curves can be polynomials, splines or simple fits based on local-median values. The fits may be automatically or interactively clipped and the resulting blaze spectrum is normalised such that its median intensity is unity.

The normalised blaze is then divided into the extracted spectrum. It is important to remember that this operation is performed upon the ‘extracted’ spectrum.
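The fit-normalise-divide sequence can be sketched as follows (Python illustration; the polynomial variant only — echomop also offers spline and local-median fits with clipping, and the function name is invented):

```python
import numpy as np

def blaze_correct(spectrum, flat_order, degree=5):
    """Fit a polynomial to the extracted flat-field order,
    normalise it to unit median, and divide it into the
    extracted spectrum."""
    x = np.arange(len(flat_order), dtype=float)
    blaze = np.polyval(np.polyfit(x, flat_order, degree), x)
    blaze /= np.median(blaze)          # median intensity -> unity
    return spectrum / blaze, blaze
```

Because the blaze is normalised to unit median, the corrected spectrum keeps roughly its original count level while the order-shape ripple is removed.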

After a blaze function has been applied to the extracted order all its values may be reset to unity to ensure that the order(s) cannot be re-flattened in error (TUNE_BLZRSET=YES). If the blaze is to be re-applied then the correct procedure is to first re-extract the order(s) concerned and then re-fit the blaze.


Reduction File Objects:

12:ech_scrunch – Scrunch

This option is used to scrunch the extracted order spectra (and arc order spectra) into a (usually) linear wavelength scale.

The scrunching of spectra into a linear wavelength scale provides exactly the same facilities as the FIGARO SCRUNCH program, except that it works on an order-by-order basis.

echomop provides both global (bin size constant for all orders) and per-order scrunching options. The global option would normally be used when it is necessary to co-add the extracted orders from multiple data frames, and a standard bin-size is required.

Scrunching results in both a 2-D array of scrunched individual orders, and a merged 1-D array of the whole wavelength range. A utility (Option 21) is provided to assist in the co-adding of spectra from many frames together. This option assumes that the first frame in the reduction has been scrunched with the required wavelength scale. It then reads a list of additional reduction database names (or EXTOBJ result file names) from an ASCII file called NAMES.LIS. The extracted spectra from each of these reduction files are then scrunched to the same scale and co-added into the scrunched spectra in the current reduction file. The type of weighting during addition is controlled using the parameter TUNE_MRGWGHT.
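Rebinning onto a linear wavelength scale can be sketched as follows (a simplified, count-conserving Python stand-in for the FIGARO-style scrunch; the function name and bin-edge convention are assumptions):

```python
import numpy as np

def scrunch(wave, flux, nbins):
    """Rebin a spectrum onto a linear wavelength scale,
    conserving total counts, by interpolating the cumulative
    flux at the new bin edges."""
    # input bin edges: midpoints between samples, extended at the ends
    edges_in = np.concatenate((
        [wave[0] - (wave[1] - wave[0]) / 2],
        (wave[:-1] + wave[1:]) / 2,
        [wave[-1] + (wave[-1] - wave[-2]) / 2]))
    cum = np.concatenate(([0.0], np.cumsum(flux)))
    edges_out = np.linspace(edges_in[0], edges_in[-1], nbins + 1)
    cum_out = np.interp(edges_out, edges_in, cum)
    new_wave = (edges_out[:-1] + edges_out[1:]) / 2
    return new_wave, np.diff(cum_out)
```

Using the same output edges for every order corresponds to the global (constant bin size) option, which makes later co-addition of frames straightforward.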


Reduction File Objects:

13:ech_ext2d – 2-D Distortion Correction

Detectors such as the IPCS often cause major geometric distortions in the image created using them. echomop provides a mechanism for modelling such distortion, and using that model to provide corrections during the extraction process. It is also possible to generate a ‘corrected’ version of each order, for visual examination, or processing by other (single spectra) software.

The distortion model uses a coordinate system based on X = calibrated wavelength at trace, Y = pixel offset from trace, and is thus performed independently for each order in turn.

The ARC frame is used to locate the positions of each identified arc line at a variety of offsets from the trace centre. The difference between its wavelength (as identified) and that predicted by the wavelength polynomial for its observed position is then calculated. These differences are modelled using a Chebyshev polynomial.

Once a 2-D fit has been obtained, it is refined by either manual or automatic clipping of deviant points. When done manually the positions of all the points being fitted (i.e., arc line centres) may be plotted in a highly exaggerated form, in which systematic distortions of sub-pixel magnitude are readily apparent.

As the wavelength scale produced by the distortion fitting leads inevitably to some re-binning when the extraction takes place, it is normal to extract into a scrunched wavelength scale (e.g., constant bin size) and this is the default behaviour of the 2-D extraction task/ECHMENU Option 13.

This option is used to perform a full 2-D distortion-corrected extraction and is provided for cases where the distortion of the frame is significant. The option consists of four steps as follows:

Distortion correction is done on a per-order basis, each order having its own distortions mapped independently. The distortion is modelled by using a tie-point data set composed of the positions and wavelengths of all identified arc lines in the order. Thus, a wavelength calibration is a prerequisite to the distortion-corrected extraction operation. A 2-D Chebyshev polynomial is then fitted to the wavelength deviations of each arc-line pixel, relative to the wavelength at the trace/arc line intersection. The polynomial is used to generate delta-wavelength values at pairs of (X, Y-offset) coordinates, i.e.:

delta-wavelength = 2dPoly( X-pixel, Y-offset from trace )

for all X- and all Y-offsets within the dekker.

This map of wavelength delta values is then used to drive a 2-D scrunch of the order into a form where each column (X=nn) corresponds to a consistent wavelength increment.

The final step is to extract the data from this re-binned form. The extraction algorithm used is identical to the 1-D case from this point on.
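The tie-point fit can be sketched as follows (illustrative Python using numpy's Chebyshev helpers; the function name, degrees, and the assumption that coordinates are pre-scaled to [-1, 1] are all invented for the example):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def fit_dwave(x, yoff, dwave, degx=3, degy=2):
    """Fit a 2-D Chebyshev polynomial to the wavelength deviations
    of identified arc-line pixels, returning a callable that gives
    delta-wavelength at any (X, Y-offset from trace)."""
    A = C.chebvander2d(x, yoff, [degx, degy])
    coef, *_ = np.linalg.lstsq(A, dwave, rcond=None)
    return lambda xx, yy: C.chebvander2d(
        np.atleast_1d(xx), np.atleast_1d(yy), [degx, degy]) @ coef
```

Evaluating the returned function over all (X, Y-offset) pairs within the dekker yields the delta-wavelength map used to drive the 2-D scrunch.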


Reduction File Objects:

14:ech_result – Write Results File

This option provides three output formats for data reduced within echomop.

The supported formats are NDF, ASCII, and DIPSO stack. Many other file formats can be accessed by use of the Starlink utility CONVERT. Where applicable to the data format, errors will be included. For example, DIPSO stacks cannot handle error data; NDFs can.

Object or arc spectra data may be output. Data for any of: extracted orders, scrunched orders, or merged spectra may be used. A single order may be selected for output using the task ech_single/ECHMENU Option 24 otherwise all-order data are output.


15:ech_trplt – Plot Order Traces

This option simply plots a graph with the same dimensions as the raw data frames. The graph shows the paths of the traced order polynomials across the frame. The option should be used after Option 2 or 3 to check that the traces are appropriate.

The Figure below shows an example of the order trace paths plot.

Figure 8: A typical set of order paths as plotted using ECHMENU Option 15. The distortions at the order extremities are due to the IPCS detector used.


16:ech_trcsis – Check Trace Consistency

This option may be used to check the consistency of the order traces with each other. The task predicts the path of each trace by fitting a function to the positions of the other orders at each X-position.

The order whose trace deviates the most from the prediction is flagged as the ‘worst’ and the option to update the path using the predictions is offered. This will allow the easy correction of common tracing problems which can occur at the frame edges and with very faint or partial orders.

In general it will only be effective when there are more than half a dozen orders in the frame. The degree of variation and the consistency threshold may be set using the parameters TUNE_CNSDEV and TUNE_TRCNS.


Reduction File Objects:

17:ech_decos2 – Post-trace Cosmic-Ray Locate

This utility option should be run immediately after ech_spatial/ECHMENU Option 4 has been used to define the dekker and object limits.

It uses information about the order paths and the spatial profile in order to do a more effective cosmic-ray location. The sky and object pixels are processed separately in two passes.

Each order in turn is processed by evaluating the degree to which its pixels exceed their expected intensities (based upon profile and total intensity in the increment).

A cumulative distribution function is then constructed and clipped at a pre-determined sigma level. Clipping and re-fitting continues until a Kolmogorov-Smirnov test indicates convergence or until the number of clipped points falls to one per iteration. Located cosmic-ray pixels are flagged in the quality array. Both this and the pre-trace locator are automatically followed by a routine to do sky-line checking. This routine can restore any pixels it judges to be possible sky-line pixels (rather than cosmic-ray hits).

If there are many frames of the same object available then it is possible to use coincidence checking to enhance cosmic-ray detection. The script decos_many will take a list of input frames, perform this checking, and flag the cosmic-ray pixels in each frame. To use it type:

  % $ECHOMOP_EXEC/decos_many
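The core idea of flagging pixels which exceed their profile-predicted intensity can be sketched as follows. This is a heavily simplified Python stand-in: it uses a single-pass threshold in place of the clipped-CDF/Kolmogorov-Smirnov scheme described above, and every name in it is invented:

```python
import numpy as np

def flag_cosmic_rays(data, profile, nsigma=5.0):
    """Flag pixels exceeding the intensity expected from the
    spatial profile and the counts in each increment
    (rows = slit position, columns = increments)."""
    # robust per-increment scaling so a hit does not bias the model
    scale = np.median(data / profile[:, None], axis=0)
    resid = data - profile[:, None] * scale[None, :]
    sigma = np.std(resid)
    return resid > nsigma * sigma      # positive outliers only
```

Only positive outliers are flagged, since cosmic-ray hits add counts; the package's own scheme additionally iterates the clipping to convergence.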


Reduction File Objects:

18:ech_decimg – Image Cosmic-Ray Pixels

This option uses the quality array to determine which pixels have been flagged as contaminated by cosmic-ray hits. It then takes the original object frame and makes a copy in which all the hit pixels are replaced by zero values. This frame should then be blinked with the original to visually assess the success of the cosmic-ray location process.


19:ech_qextr – Quick-look Extraction

This option allows quick extraction of an order (or all orders) once ech_spatial/ECHMENU Option 4 has been completed (object- and sky-pixel selection). The extraction method used is a simple sum of pixels in the dekker, and the sky subtraction is done by calculating the average value over all sky pixels in the increment. No flat-field balance factors are used. This option should only be used to get a quick look at the data; the spectra produced should not be used for further analysis, as much better results will be obtained by using Option 8 for a full extraction.
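For one increment, the whole quick-look scheme reduces to a sum over object pixels with the mean sky subtracted (a Python sketch; the function name and mask conventions are invented):

```python
import numpy as np

def quick_extract(increment, obj_mask, sky_mask):
    """Quick-look extraction of one increment: sum the object
    pixels after subtracting the mean sky level. No balance
    factors, no weighting."""
    sky = np.mean(increment[sky_mask])
    return np.sum(increment[obj_mask] - sky)
```

The simplicity is the point: it is fast enough for at-the-telescope checks, at the cost of ignoring the sky and flat-field models.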


Reduction File Objects:

20:ech_wvcsis – Check Wavelength Scales

This option performs a function analogous to that performed by Option 16 (order traces), but operating upon the wavelength fits.

It is thus used after Option 10 has been used to calculate the wavelength scales.

The wavelength consistency check is confined to those areas beyond the range within which lines have been identified. It therefore only corrects the very ends of the orders' wavelength scales.

These are the regions where problems are most likely to occur, as the polynomial fits can become unstable when a large number of coefficients has been used and there are no fitted points for a substantial fraction of the order (e.g., the first 20%).


Reduction File Objects:

21:ech_mulmrg – Merge Multiple Spectra

This utility option is provided to assist in co-adding spectra from many frames. This option assumes that the first frame in the reduction has been scrunched onto the required wavelength scale.

It then reads a list of additional reduction database names from an ASCII file called NAMES.LIS. The extracted spectra from each of these reduction files are then scrunched to the same scale and co-added into the scrunched spectra in the current reduction file.

The parameter TUNE_MRGWGHT controls the type of weighting used during addition.
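Once all spectra share one wavelength scale, the co-addition step reduces to a weighted mean per wavelength bin. The sketch below is hypothetical and uses uniform weights; TUNE_MRGWGHT selects among the weighting schemes inside echomop itself.

```python
def coadd(spectra, weights):
    """Weighted co-add of spectra already scrunched onto a common
    wavelength scale: each output bin is the weighted mean of the
    corresponding input bins."""
    total_w = sum(weights)
    return [sum(w * s[i] for s, w in zip(spectra, weights)) / total_w
            for i in range(len(spectra[0]))]

# Three exposures of the same object on a common four-bin scale.
spectra = [[10.0, 12.0, 11.0, 9.0],
           [11.0, 13.0, 10.0, 9.0],
           [10.0, 11.0, 12.0, 9.0]]
merged = coadd(spectra, weights=[1.0, 1.0, 1.0])
```

With uniform weights this is a straight mean; down-weighting noisier exposures simply changes the `weights` list.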


Reduction File Objects:

22:ech_mdlbck – Model Scattered Light

This option is used in place of Option 6 (Model sky) in cases where there is severe scattered-light contamination. It works by fitting independent polynomials/splines to each image column (actually only the inter-order pixels). Once the column fits have been done, the results are used as input to a second round of fits which proceeds parallel to the order traces.

The final fitted values are saved in the sky model arrays.
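The first pass (per-column fits to inter-order pixels only) can be illustrated as below. This is a simplified, hypothetical sketch: a straight-line least-squares fit stands in for the polynomial/spline fits echomop actually uses.

```python
def fit_column(rows, values):
    """Least-squares straight-line fit v = a + b*row to the
    inter-order samples of one image column; a stand-in for the
    per-column polynomial/spline fits used to model scattered
    light across the orders."""
    n = len(rows)
    mr = sum(rows) / n
    mv = sum(values) / n
    b = sum((r - mr) * (v - mv) for r, v in zip(rows, values)) \
        / sum((r - mr) ** 2 for r in rows)
    a = mv - b * mr
    return lambda row: a + b * row

# Inter-order row positions in one column, and the background
# counts measured there (rising linearly up the column).
inter_order_rows = [2, 7, 12, 17]
background       = [4.0, 6.0, 8.0, 10.0]
model = fit_column(inter_order_rows, background)
```

The fitted model can then be evaluated at the rows *inside* the orders, where no direct background measurement is possible; the second pass smooths these per-column estimates along the order direction.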

This process is very CPU-intensive and should not be used unless it is needed.


Reduction File Objects:

23:ech_tuner – Adjust Tuning Parameters

This option simply provides a centralised mechanism for viewing and editing the values of all the tuning parameters known to the system.

Most of the time it is more convenient to use the -option syntax from the main menu, as this only lists parameters used by the current default option.

In Option 23, any parameters used by the current default option are flagged with an asterisk.

If, when used, a tuning parameter has a non-default value, an informational message is displayed.

24:ech_single – Set Single-order Processing

Allows the selection of a single order for all tasks which operate on an order-by-order basis. For example, to re-fit the order trace for order 3, leaving all other orders unchanged, you would first use this option to select order number 3, and then invoke the ech_fitord task/ECHMENU Option 3.

Note that any options which operate on all orders at once (e.g., trace consistency checking) will still operate correctly when a single order is selected, as they ignore the selection and use all the orders anyway.

Note that when using the individual tasks the strategy for selecting single-order/all-order operation is different. Individual function tasks which can operate on single orders all utilise the parameter IDX_NUM_ORDERS, which you should set to the number of the order to process, or to zero to indicate that all orders are to be processed in turn. For example:

  % ech_trace idx_num_orders=4

would just trace order number 4.

25:ech_all – Set All-order Processing

Selects automatic looping through all available orders for all tasks which operate on an order-by-order basis. This is the default mode of operation.

26:ech_disable – Disable an Order

This option disables an order from any further processing. The mechanism for doing this is to remove the order trace. If you need to re-enable an order then it should be re-traced by using Option 24 (select single order) and then Option 2 (trace an order).


27:ech_plot – Run Plot Utility

Many of the temporary results arrays stored in the reduction database can be of assistance when tracking down problems during a reduction. All of these can be graphically examined using the ech_plot task/ECHMENU Option 27.

This utility prompts for object names for the Y-axis (and, optionally, the X-axis, separated by a comma). If a null object name is given then that axis will be automatically generated using monotonically increasing integer values.

The normal usage is to supply only the name of the Y-axis object and leave the X-axis to be auto-generated. An exception is when plotting wavelength objects along the X-axis.

Note that unless you are interested only in the first order of a multi-order array, the array indices at which to start plotting must be supplied.

e.g.: OBJ would denote the first order's extracted object spectrum. OBJ[1,4] would denote the fourth order's extracted object spectrum. ARC[100,13,2] would denote the region of the second arc frame's thirteenth order starting at X-sample 100.

Note that in this last case, unless the N(umber) of samples to plot has been set to less than the array X-dimension, some samples from the fourteenth order will also be plotted.

Also provided are the following facilities, most of which are selected by typing a single character followed by carriage-return.

28:ech_menu – Display Full Menu

This option selects the display of the full menu of options available in the ECHMENU top-level menu. By default, only the utility options and the currently most likely selections appear on the menu.

29:ech_system – Do System Command

This option allows the execution of one or more system level commands without leaving and re-starting the ECHMENU shell task.

The command is prompted for with:

  - System_$ /''/ >

You should then enter a command. If more than one command is required, the csh command should be given to initiate a fully independent process.

This process must be terminated by a CTRL-D in order to return to the ECHMENU shell task. This option can be used to perform system commands like ls, and also FIGARO commands such as IMAGE. However, echomop commands which change parameter values will not operate properly because the monolith already has the echomop parameter file open.

System commands can also be entered directly at the Option: prompt by making the first character a $. Thus:

  $ ls

would execute the system ls (directory-listing) command, and return to the main menu.

30:ech_genflat – Output Flattened-field


Figure 9: A flattened-field. This was produced using ECHMENU Option 30.

This option was introduced at echomop version 3.2-0. The flat-field balance factors produced by ech_ffield are written to an image which can then be inspected, for example using KAPPA DISPLAY.


31:ech_exit – Exit ECHMENU

This option exits the ECHMENU shell task. It is also possible to select this option by typing either EXIT, QUIT, a single ‘E’ or ‘Q’, or 99, followed by carriage return.