4 The Basics of Echelle Data Reduction

 4.1 Image Preparation
 4.2 Order Location
 4.3 Order Tracing
 4.4 Slit Definition
 4.5 Flat Fielding
 4.6 Background Handling
 4.7 Extraction of Spectra
 4.8 Wavelength Calibration
 4.9 Finishing Reduction
 4.10 Handling Cosmic Rays

Figure 2 is a flow chart which maps out the main steps in the échelle data reduction process. ECHMENU is a menu-driven program in which each of the natural steps in the reduction process is handled by a task. This is the top-level menu presented by ECHMENU:

    0. HELP/HYPER (ASCII or hypertext help).
    1. Start a reduction.                 16. Check trace consistency.
    2. Trace orders.                      17. Post-trace Cosmic Ray locate.
    3. Clip fitted traces.                18. Image cosmic ray pixels.
    4. Determine dekker/object extent.    19. Quick-look Extraction.
    5. Model flat field.                  20. Check wavelength scales.
    6. Model sky.                         21. Scrunch and merge multiple.
    7. Model object profile.              22. Model scattered light.
    8. Extract orders 1-D.                23. ADJUST tuning parameters.
    9. Locate arc line candidates.        24. Set single-order operations.
   10. Identify features.                 25. Set all-order operations.
   11. Flatten order shape.               26. DISABLE an order.
   12. Scrunch to linear scale.           27. PLOT reduction arrays.
   13. Model/Extract orders 2-D.          28. Full MENU.
   14. Save reduced data.                 29. System ($) commands.
   15. Plot order traces.                 30. Output balance-factor frame.
                                          31. EXIT (alias Q/E/QUIT/EXIT/99).



Figure 2: Outline of the échelle data reduction process.


In this section each of the major steps is outlined and some of the important and perhaps tricky considerations are mentioned. Most of the steps are similar to those which would be used for extraction of a single-order spectrum, plus a few additional elements to deal with the multiple orders. The handling of cosmic-ray removal is described separately at the end of this section as there are several different approaches to the problem.

4.1 Image Preparation

Most échelle data will be obtained using a CCD to record the image. Careful preparation of the CCD data prior to attempting extraction of spectra is essential. In this Guide the basic procedures are outlined (very!) briefly. Those points relevant to échelle data reduction are included.

The basic steps in the preparation are:

Generate bias frame
Typically done by finding the median of several frames taken at the telescope.
Generate flat-field frame
Again, done by finding the median of several frames taken at the telescope. Before taking the median of the frames, the median bias frame and a zero-offset constant are subtracted (see below).
Subtract bias frame
The median bias frame is subtracted from each arc frame and each object frame.
Subtract zero-offset
A selected area of the CCD overscan region is used to find the ‘zero-level’ of the camera electronics. This is usually a number in the range 20–200 ADU. The median value of the selected region should be used (to avoid cosmic-ray contamination). This constant value is then subtracted from every pixel in the CCD image.
Crop images
Regions—such as the overscan—are not used during the extraction process and should be removed (the same as cropping a photograph) as they may confuse some of the algorithms in the échelle data reduction engine. Very often CCD images will have ‘rough’ edges, i.e. non-useable data, which should also be trimmed off at this point.

Depending on your source data and choice of reduction software you may need to:

Rotate and orient the image
All the major packages constrain the orientation of the échelle orders in some way. The images may have to be adjusted to meet these constraints. In general, if the échelle orders in your data run roughly horizontally (parallel to the X-axis) and the wavelength increases from left to right, you should be all right. In other situations, you may need to use some image utilities to re-orient the data. This may involve rotating and perhaps reflecting the images.

You may have data which contain ‘dead columns’ or few-pixel hot-spots. Handling of these is discussed in the documentation for the CCD data preparation packages.
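
As an illustration only, the preparation steps above might be sketched in Python with NumPy as below. The array names, overscan columns and trim limits are assumptions for this example and must be replaced with values appropriate to your detector; in practice you would normally use one of the CCD preparation packages described in the next subsection.

    import numpy as np

    def prepare_frame(raw, bias_stack, overscan_cols=(2050, 2100),
                      trim_rows=(0, 2048), trim_cols=(0, 2048)):
        """Bias-subtract, zero-offset-correct and crop one CCD frame (sketch)."""
        master_bias = np.median(bias_stack, axis=0)       # median of several bias frames
        frame = raw - master_bias                         # subtract bias structure
        zero = np.median(frame[:, overscan_cols[0]:overscan_cols[1]])
        frame = frame - zero                              # subtract overscan zero-level
        frame = frame[trim_rows[0]:trim_rows[1], trim_cols[0]:trim_cols[1]]
        # Re-orient if necessary so that orders run roughly horizontally with
        # wavelength increasing to the right, e.g.:
        #   frame = np.rot90(frame)     # orders run vertically
        #   frame = np.fliplr(frame)    # wavelength runs right to left
        return frame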

Once the arc and object images have been prepared in this way, the échelle data reduction process can begin.

4.1.1 Software for CCD Data Preparation

There are quite a few different packages for preparing CCD data. All these packages offer similar functionality. You’ll probably find that it’s easiest to use the preparation package which complements the échelle reduction software you choose, e.g., noao.imred.ccdred for IRAF doecslit. There are two popular Starlink packages which you might use, FIGARO and CCDPACK. CCDPACK includes some tools for conveniently managing the preparation of many frames and supports error propagation.

4.2 Order Location

Order location is simply the process of finding the approximate position of each of the orders in an échelle image. You can select which of the located orders should then be extracted; this saves time if some of the orders contain no useful data. Location is achieved by taking a slice across the dispersion direction which, when plotted, appears similar to the graph in Figure 3.

This example is a section across an IUE échelle image. You can see that about 60 orders are present in this case. In practice, the section across the image will use data from several columns (in the case of a roughly horizontal dispersion), rather than a single column.
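
A minimal sketch of the location step is given below, assuming the prepared frame is held in a NumPy array with the orders running roughly along rows; the column range, detection threshold and minimum order separation are illustrative tuning values only.

    import numpy as np
    from scipy.signal import find_peaks

    def locate_orders(frame, half_width=5, nsigma=3.0, min_sep=5):
        """Return the approximate row position of each order in `frame`."""
        xc = frame.shape[1] // 2
        # Median of a few columns near the frame centre: a cross-dispersion cut
        cut = np.median(frame[:, xc - half_width:xc + half_width + 1], axis=1)
        # Each peak above the threshold marks one order
        threshold = np.median(cut) + nsigma * np.std(cut)
        rows, _ = find_peaks(cut, height=threshold, distance=min_sep)
        return rows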




Figure 3: Order Location: cross-dispersion section through an IUE échelle spectrum.


4.3 Order Tracing

Once the required orders have been selected the next step is to determine the path of each of them across the image. This process is order tracing. Typically the tracing procedure will involve sampling each order in steps along the dispersion direction. The reduction program will attempt to estimate the centre of the order at each sample point. Once an order has been traced in this way you will have the option to fit a curve to the sample data. If all is well, the curve should represent the true path of the order across the frame. In practice, some of the sample points may have to be ignored to get a good fit. This is particularly the case if the trace frame is strongly contaminated with cosmic rays, if the orders are not very bright, or if the spectrum being traced contains strong absorption features.
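
The tracing procedure might be sketched as follows (one direction only, with illustrative step, width and clipping values): a centroid is estimated at each sample point and a polynomial is fitted with a single pass of outlier rejection.

    import numpy as np

    def trace_order(frame, y_guess, half_width=5, step=16, degree=4, clip=3.0):
        """Trace one order from the frame centre to the right-hand edge."""
        ny, nx = frame.shape
        xs, ys = [], []
        y = float(y_guess)
        for x in range(nx // 2, nx, step):
            lo = max(int(y) - half_width, 0)
            hi = min(int(y) + half_width + 1, ny)
            profile = frame[lo:hi, x]
            if profile.sum() <= 0:
                continue                                   # no usable signal here
            y = np.average(np.arange(lo, hi), weights=profile)   # order centroid
            xs.append(x)
            ys.append(y)
        xs, ys = np.array(xs), np.array(ys)
        coeffs = np.polyfit(xs, ys, degree)
        resid = ys - np.polyval(coeffs, xs)
        keep = np.abs(resid) <= clip * max(np.std(resid), 1e-3)  # reject deviant samples
        return np.polyfit(xs[keep], ys[keep], degree)             # re-fit on clean samples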

If many images are taken with the same spectrograph configuration it may well be possible to use a single trace frame for all the extractions.

Figure 4 shows a polynomial (solid line) fitted to the sample points (dots) of a single échelle order.




Figure 4: Order Tracing: Fitting a polynomial to an order trace in ECHMENU.


4.4 Slit Definition




Figure 5: Slit Definition: Selecting the object and background channels in ECHMENU.


In each order of an échelle spectrogram an image of the spectrograph slit is projected onto the final image. The physical length of the slit is determined by the dekker. In a flat-field frame the entire slit should be illuminated, and so the length of the projected slit image (in the cross-dispersion, or spatial, direction) will be limited by the dekker setting. (The dekker setting should be made sufficiently small that adjacent orders don’t overlap in the spatial direction.)

Using a flat-field frame we can determine which pixels on the detector lie inside the projected slit and which pixels lie outside the slit. A reduction program will use the previously determined order traces to build a cross-dispersion profile of each order in the frame. These profiles can then be used to decide where the ‘software’ dekker limits are.

Once the ‘software’ dekker has been set, the object frame is inspected in a similar way, this time to choose which pixels should contribute to the signal from the object and which (if any) are sky (or ‘local’) background. Figure 5 shows a typical plot during object and background definition using the ECHMENU program. This procedure leads to a pair of pixel-selecting masks—sometimes called channels—one marking the object, one marking the sky background. If there is a sky signal present in the spectrum it is advisable (if possible) to select pixels on both sides of the object spectrum to contribute to the sky signal. Refer to the next subsection for more information on handling of the background signal.
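
As a rough sketch of profiling and channel selection, one might average the cross-dispersion profile of an order over all columns and then threshold it. The fractions used below are illustrative; a real package offers interactive control of the limits, as in Figure 5.

    import numpy as np

    def order_profile(frame, trace_coeffs, half_width=10):
        """Mean cross-dispersion profile of one order, following its trace."""
        ny, nx = frame.shape
        offsets = np.arange(-half_width, half_width + 1)
        profile = np.zeros(offsets.size)
        for x in range(nx):
            yc = int(round(np.polyval(trace_coeffs, x)))
            rows = yc + offsets
            ok = (rows >= 0) & (rows < ny)
            profile[ok] += frame[rows[ok], x]
        return profile / nx

    def define_channels(profile, dekker_frac=0.05, object_frac=0.3):
        """Crude object/background channel masks from a cross-dispersion profile."""
        inside_dekker = profile > dekker_frac * profile.max()   # illuminated slit
        obj = profile > object_frac * profile.max()             # bright core: object
        sky = inside_dekker & ~obj                               # faint wings: background
        return obj, sky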

In the above procedures it may be satisfactory to use dekker/object profiles determined using data from all the orders in the spectrum. This will depend on the nature of the object spectrum and the chosen extraction method. Basically, if the spatial profiles are similar in all the orders then a single set of profiles can be used. If there are significant order-to-order variations in the profile then some or all of the orders will have to be profiled separately. To be useful, the optimal extraction method requires an accurate cross-dispersion profile.

4.5 Flat Fielding

The flat-fielding of échelle data is handled in different ways by the major reduction packages. The resulting spectra should, however, be essentially the same.

The flat-fielding process removes pixel-to-pixel variations in the response of the detector and any interference fringes (due to either the detector electrode structure or internal reflections in thinned detectors).

Statistical extraction methods (such as optimal extraction) require that the flat field be normalised to remove the colour of the lamp used.

IRAF and MIDAS both provide tasks for the preparation of normalised flat-field frames (respectively, noao.imred.echelle.apflatten and flat/echelle). These frames can be viewed in the same way as any other image. ECHOMOP can use such a normalised flat-field frame, or the normalised apertures can be computed by ECHOMOP and stored within the data reduction structure.

Normalised flat-fields are generated by fitting a polynomial to the shape of each order along the dispersion direction and, in some cases, fitting polynomials to the profile in the spatial direction as well. Pixels in any inter-order gaps—where there will be no signal—are set to a value of one in the normalised flat-field.
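
A much-simplified sketch of how a normalised flat-field might be built is given below: for each offset from the order trace, the flat-field signal along the dispersion direction is fitted with a polynomial and divided out, leaving the pixel-to-pixel response; inter-order pixels are left at one. The polynomial degree and extraction width are illustrative.

    import numpy as np

    def normalise_flat(flat, traces, half_width=10, degree=7):
        """Build a normalised flat-field frame from per-order polynomial fits."""
        ny, nx = flat.shape
        norm = np.ones_like(flat, dtype=float)           # inter-order pixels stay at 1
        xs = np.arange(nx)
        for coeffs in traces:                            # one trace polynomial per order
            centre = np.polyval(coeffs, xs).astype(int)
            for off in range(-half_width, half_width + 1):
                rows = np.clip(centre + off, 0, ny - 1)
                vals = flat[rows, xs].astype(float)      # flat signal along this offset
                smooth = np.polyval(np.polyfit(xs, vals, degree), xs)
                norm[rows, xs] = vals / np.where(smooth > 0, smooth, 1.0)
        return norm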

4.6 Background Handling

The background signal in an échelle image consists of:

   the bias structure of the CCD;
   the zero-level offset of the camera electronics;
   the dark current of the detector;
   light scattered within the spectrograph;
   any sky background signal.

The first two contributions are removed in the CCD data preparation phase. The signal due to dark current is usually small for short (of the order of minutes) exposures in cryogenically-cooled CCD systems.

There are two approaches to determining the background signal level in an échelle image (three if you include not bothering with any background subtraction). These are: use the sky pixels as determined previously in the ‘slit definition’ step, or use a surface fitted to the inter-order background over the whole image. In many cases the first method will be adequate; however, sometimes a suitable background channel cannot be defined. An example of this is an image in which the signal from the object channel of one or more orders has spread out into the inter-order area, perhaps to the extent that some of the orders overlap in the spatial direction. In such a case it may be better to construct a global model for the background over the whole image, rather than trying to use inter-order background channels which are contaminated with light from the object or from an adjacent order. Figure 3 shows how the short-wavelength orders (left-hand side) of an IUE spectrum start to overlap—the inter-order background has clearly risen in this region.

Even if no sky background signal is present in the échelle image it may still be acceptable to use background channel masks as defined in the ‘slit definition’ step. In this case the channels should be selected to lie in the inter-order gaps and so sample the scattered light.

The fitting of a single surface to the background over the whole image is a computationally intensive process and so should be avoided except for those cases where no useable local background can be determined.
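
If a global background model is needed, a greatly simplified version might fit a low-order polynomial to the inter-order pixels of each column, as sketched below. Here `order_mask` (True inside any order) is assumed to come from the slit-definition step, and the polynomial degree is illustrative.

    import numpy as np

    def model_background(frame, order_mask, degree=3):
        """Column-by-column fit to the inter-order (scattered) light."""
        ny, nx = frame.shape
        background = np.zeros_like(frame, dtype=float)
        rows = np.arange(ny)
        for x in range(nx):
            free = ~order_mask[:, x]                    # pixels outside every order
            coeffs = np.polyfit(rows[free], frame[free, x], degree)
            background[:, x] = np.polyval(coeffs, rows)
        return background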

4.7 Extraction of Spectra

Having produced a set of models for our data we can proceed to the extraction of the spectral information in the image.

In previous steps we have produced models of:

   the path (trace) of each order across the frame;
   the cross-dispersion extent of the dekker and of the object and background channels;
   the normalised flat-field (detector response);
   the background (scattered light and/or sky) signal.

There are several approaches to the extraction of the spectral data. The most commonly used are optimal extraction and linear extraction (the latter sometimes called ‘normal’ or ‘simple’ extraction).

Linear extraction is simply the integration of all pixels selected in the profiling step with equal weighting. The corresponding signal in the background channel is subtracted. The disadvantage of this method (as compared to optimal extraction) is that no attempt is made to allow for the fact that pixels at the edges of the order profile contain a smaller part of the signal than those in the middle of the order profile. These pixels will consequently have a smaller signal-to-noise ratio and should carry reduced weighting for the ‘best’ possible extraction. Linear extraction is less computationally demanding than weighted extraction methods and is useful for checking data quickly, or in situations where it is not possible to prepare the data for optimal extraction (e.g., you don’t have the CCD readout noise details).
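
A sketch of linear extraction for a single order is shown below; `obj_offsets` and `sky_offsets` are assumed to be integer NumPy arrays of pixel offsets (relative to the trace centre) chosen in the slit-definition step.

    import numpy as np

    def linear_extract(frame, trace_coeffs, obj_offsets, sky_offsets):
        """Equal-weight extraction of one order with simple sky subtraction."""
        ny, nx = frame.shape
        spectrum = np.zeros(nx)
        for x in range(nx):
            yc = int(round(np.polyval(trace_coeffs, x)))
            obj = frame[np.clip(yc + obj_offsets, 0, ny - 1), x]
            sky = frame[np.clip(yc + sky_offsets, 0, ny - 1), x]
            # Mean sky level scaled to the number of object pixels
            spectrum[x] = obj.sum() - sky.mean() * obj.size
        return spectrum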

Optimal extraction is designed—in theory at least—to achieve the best possible signal-to-noise ratio with CCD spectral data. The method uses the Poisson statistics of photons, information about the CCD signal processing electronics transfer function, and the modelled profile for the object to weight contributions to the signal. To use the optimal extraction method you will need to know the readout noise and gain for the CCD camera used to obtain the spectra. The main limitation of optimal extraction algorithms is the requirement that the spatial profile of the object is a smooth function of wavelength. This means that optimal extraction is unlikely to be useful if spatial (cross-dispersion) resolution is required and/or the spatial profile of the object varies rapidly with wavelength, as for objects with spatially-extended emission-line regions.

Optimal extraction only gives significant improvements over linear extraction at low signal-to-noise levels. However, it has the advantage that the profile models can be used to reject cosmic rays which are incident upon the object or background channels.
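
For comparison, one common formulation of the profile-weighted (optimal) estimate for a single wavelength bin is sketched below; `profile` is the normalised spatial profile (summing to one) from the profile-modelling step, the readout noise is in electrons and the gain in electrons per ADU. This is an illustration of the weighting scheme only, not the exact algorithm of any particular package.

    import numpy as np

    def optimal_extract_bin(data, sky, profile, rdnoise, gain):
        """Variance-weighted extraction of one cross-dispersion cut of an order."""
        counts = data - sky                               # sky-subtracted pixel values (ADU)
        var = (rdnoise / gain) ** 2 + np.abs(data) / gain # per-pixel variance (ADU^2)
        weights = profile / var
        flux = np.sum(weights * counts) / np.sum(weights * profile)
        flux_var = 1.0 / np.sum(profile ** 2 / var)       # variance of the estimate
        return flux, flux_var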

There are other extraction weighting-schemes available; refer to §5 for information.

The extraction process is applied to both arc spectra and any standard-star spectra, as well as to object spectra.

4.8 Wavelength Calibration

From the preceding extraction step we have a set of data frames sometimes called collapsed échelle spectra. For each object or arc CCD frame we have a three-dimensional dataset: order, sample number and intensity. (You may also have variance information for each sample of each order.) Sample number is simply an index for each integration bin along the order (e.g., the X-axis in Figure 6). The next step in the reduction process is to attempt to determine the relationship between wavelength and sample number for these data.

The basic steps here are:

   locate candidate arc lines in each extracted order of the arc spectrum;
   identify the lines by matching them against a reference line list for the lamp used;
   fit a relation between wavelength and sample number for each order (or for the whole échellogram);
   apply the resulting wavelength scale to the object spectra.



Figure 6: Line Identification: typical plot during interactive fitting with ECHMENU.


The wavelength-sample relation can be fitted separately for each order (1-D solution) or a model for the whole échellogram can be built (2-D solution). The success of the latter technique will depend to some extent on how many lines you can identify and where they lie in the spectrum.

Whichever reduction software you choose to use, you should find that a list of spectral feature wavelengths for common arc reference lamps is available on-line (refer to the package documentation for details). You may also be able to lay your hands on a hardcopy of a ‘mapped’ comparison spectrum for your selected arc lamp, perhaps obtained using the same spectrograph as your data. For example, UCLES Spectrum of the Thorium-Argon Hollow-Cathode Lamp should be available at most UK Starlink sites (a UES version is also available). This document also gives the free spectral range and wavelength coverage for each order of the UCLES, which can be used to estimate the wavelengths in other orders once you have identified features for your first order—the same trick can be used for other instruments if you have similar data available. Some people prefer the ESO arc-line atlas in which the line wavelengths are more clearly printed: An Atlas of the Thorium-Argon Spectrum for the ESO Echelle Spectrograph in the λλ 3400–9000Å Region.

Ideally each order to be fitted should contain at least three or four identifiable spectral lines, preferably with one close to each end of the order and one or more in the middle. For some orders it may be useful to refer to the object and/or reference star spectra to look for strong features of known, or approximately known, wavelength. These can be used to help you ‘home-in’ on other features in the arc spectrum for that order. When a fit is made to these features you will be advised of the goodness-of-fit, usually in the form of a plot of deviation-from-fit against line position, or RMS deviation values for each line. You will be able to adjust the fit parameters and reject any lines which seem so deviant that they have probably been mis-identified, then re-fit the data.
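
The per-order fit might be sketched as below, assuming NumPy arrays of measured line positions (in sample-number units) and their identified laboratory wavelengths; the polynomial degree and rejection threshold are illustrative.

    import numpy as np

    def fit_wavelength_scale(samples, wavelengths, degree=3, clip=3.0):
        """Fit wavelength as a polynomial in sample number, with one rejection pass."""
        coeffs = np.polyfit(samples, wavelengths, degree)
        resid = wavelengths - np.polyval(coeffs, samples)
        rms = np.sqrt(np.mean(resid ** 2))
        keep = np.abs(resid) < clip * rms                 # drop probable mis-identifications
        coeffs = np.polyfit(samples[keep], wavelengths[keep], degree)
        resid = wavelengths[keep] - np.polyval(coeffs, samples[keep])
        return coeffs, np.sqrt(np.mean(resid ** 2))       # final fit and its RMS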

Figure 6 shows a plot of a single order during interactive line-identification using ECHMENU.

If attempting a 2-D solution to the wavelength relation for your data you should fit at least three or four orders individually (1-D) before trying a 2-D fit. In a similar manner to the 1-D fits, you’ll get the best result if you use an order at each end of the échellogram and one or more from the middle.

Once you have a complete wavelength-calibrated comparison spectrum you can ‘copy’ the wavelength scale onto your object spectra. It may be useful to calibrate two arcs which bracket the object exposure in time. This will show any time-dependent variation in the wavelength scale. If there is some change (and it is reasonably small) you can take a time-weighted mean of the two bracketing wavelength scales and use this for the object spectrum. A method for applying this technique with ECHOMOP-reduced data is given in the Echelle Data Reduction Cookbook (SC/3).
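
The time-weighted mean of two bracketing wavelength scales is straightforward to form; a sketch (with illustrative argument names) is given below.

    import numpy as np

    def bracketed_scale(wave_before, wave_after, t_before, t_object, t_after):
        """Interpolate two arc wavelength scales to the time of the object exposure."""
        w = (t_object - t_before) / (t_after - t_before)  # 0 at first arc, 1 at second
        return (1.0 - w) * np.asarray(wave_before) + w * np.asarray(wave_after)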

4.9 Finishing Reduction

It may be the case that a set of wavelength-calibrated, individual-order spectra is suitable for your scientific purposes. At this point, you can also perform flux calibration or correct for the blaze function (a grating-dependent variation in the brightness along orders). You might also want to combine the individual orders into a single spectrum and/or re-bin the data onto a fixed-step wavelength scale.

4.9.1 Blaze Correction

The per-order normalised flat-field models generated earlier can be divided into the extracted order spectra to remove the blaze function of the échelle. This can aid the process of fitting line profiles, as instrumental effects are removed.
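
In outline the correction is a simple division, guarded against the very low values of the blaze model near the order ends; the threshold below is illustrative.

    import numpy as np

    def blaze_correct(spectrum, blaze, floor_frac=0.05):
        """Divide one extracted order by its (smooth) blaze model."""
        spectrum = np.asarray(spectrum, dtype=float)
        blaze = np.asarray(blaze, dtype=float)
        good = blaze > floor_frac * blaze.max()          # avoid dividing by noise
        out = np.full_like(spectrum, np.nan)
        out[good] = spectrum[good] / blaze[good]
        return out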

Figure 7 shows a plot of a single-order spectrum and a blaze-corrected version of the same order.




Figure 7: Blaze correction: the top spectrum is the blaze-corrected version of the lower spectrum. Note that the flux values in the uncorrected order have been scaled and shifted for this plot.


4.9.2 Flux Calibration

An alternative to applying the blaze correction is to fully flux-calibrate the data. As mentioned earlier, there may not be reference standards of sufficiently small band-pass size to enable a useful flux calibration to be applied. In échelle spectra a velocity difference between the reference standard and the object can lead to an effective change in the band-pass wavelength which invalidates the calibration—particularly where strong features are present in the spectrum.

You will probably have to apply a correction for the differing air masses through which the object and the standard star were observed.

The flux calibration process is conceptually straightforward: the object and standard star spectra are summed in the same pass bands as the reference tables. The correction factors can then be calculated by comparing the standard star spectrum with the reference tables.
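
A sketch of the factor calculation is given below, assuming the standard-star spectrum has already been wavelength calibrated and the reference table provides pass-band centres, widths and fluxes; corrections for exposure time and air mass are omitted for brevity.

    import numpy as np

    def calibration_factors(wave, counts, band_centres, band_widths, ref_flux):
        """Counts-to-flux factors from a standard star, one per reference pass band."""
        factors = []
        for centre, width, flux in zip(band_centres, band_widths, ref_flux):
            in_band = (wave >= centre - width / 2.0) & (wave <= centre + width / 2.0)
            factors.append(flux / np.sum(counts[in_band]))   # flux per observed count
        return np.array(factors)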

4.9.3 Re-binning Spectra—Scrunching

Re-binning the spectra onto a common, fixed-bin wavelength scale allows spectra from separate exposures to be co-added. There are many options for the re-binning of the data to a fixed wavelength scale. Two basic options are: bin to a fixed wavelength interval, or bin to a fixed velocity interval. The latter is equivalent to using a logarithmic wavelength scale.

Scrunching is equivalent to applying a filter to the data. You might want to investigate the possibility of applying different weighting schemes during the binning process. For example, FIGARO SCRUNCH offers both simple linear interpolation and a quadratic option.
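
A very simple re-binning sketch is given below; it uses plain interpolation, whereas the scrunching tasks in the reduction packages offer properly flux-conserving (and higher-order) schemes, which should be preferred for real data.

    import numpy as np

    def scrunch(wave, flux, w_start, w_end, nbins, log_scale=False):
        """Re-bin one order onto a fixed wavelength (or fixed velocity) grid."""
        if log_scale:
            new_wave = np.geomspace(w_start, w_end, nbins)   # constant velocity step
        else:
            new_wave = np.linspace(w_start, w_end, nbins)    # constant wavelength step
        # `wave` must be monotonically increasing for np.interp
        return new_wave, np.interp(new_wave, wave, flux)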

4.9.4 Combining Orders—Merging

Once individual échelle orders have been blaze-corrected and scrunched to a fixed-bin wavelength scale they can be combined into a single spectrum. This process might also involve combining spectra from different exposures to overcome dropouts due to cosmic-ray hits, bad pixels, etc.

Although there are plenty of utilities available for splicing together spectra, the best option here is to use one of the merging utilities included in an échelle data reduction package. This will allow you to apply a weighting strategy in the regions where the wavelength coverage of orders overlaps; e.g., ignore data where one order is much fainter than the other, use flux-weighted mean etc.
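
A sketch of a flux-weighted merge is shown below; each order is assumed to have been scrunched already, and is interpolated onto a common grid with NaN outside its wavelength coverage. This illustrates only one of the possible weighting strategies.

    import numpy as np

    def merge_orders(orders, grid):
        """Merge (wavelength, flux) pairs for several orders onto one grid."""
        fluxes = np.full((len(orders), grid.size), np.nan)
        for i, (wave, flux) in enumerate(orders):
            fluxes[i] = np.interp(grid, wave, flux, left=np.nan, right=np.nan)
        data = np.where(np.isnan(fluxes), 0.0, fluxes)
        weights = np.clip(data, 0.0, None)                # brighter data carry more weight
        with np.errstate(invalid="ignore", divide="ignore"):
            merged = np.sum(weights * data, axis=0) / np.sum(weights, axis=0)
        return merged                                     # NaN where no order contributes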

4.10 Handling Cosmic Rays

In the previous descriptions cosmic rays have been mentioned only occasionally. The successful reduction of échelle data requires careful attention to be paid to the location and handling of cosmic-ray hits in the data. There are several strategies for the detection of cosmic rays; here are some of them:

Inspection by eye
This is the simplest method—display an image and you will probably be able to see some cosmic-ray hits. This is less useful for object frames than, say, dark frames as real data can appear similar to a cosmic-ray hit.
Median Filtering
Two median filters are applied to each image; one along rows, one along columns. These are then divided into the original image and a histogram of the result is produced. If there are lots of cosmic-ray hits in the image then a clear cut-off point in the histogram will be visible. Pixels above the threshold can be flagged as cosmic-ray hits.
Profile Modelling
This technique can be applied in several ways. Most commonly it is implemented as part of the optimal extraction of spectra. Essentially, by constructing a profile of the ‘real’ science data, unexpected—i.e., statistically unlikely—cosmic-ray hits can be found, even when they fall on the spectrum itself. This method can require a large amount of processing time for a large échelle image.
Comparison of ‘Identical’ frames
This is another simple method. It can be used where you have several frames of the same object taken in the same configuration. A median image can be generated and pixels which deviate strongly from the median are probably cosmic-ray hits. This method works best when the images are all of the same exposure time.
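
The frame-comparison approach in particular is easy to sketch: the per-pixel median of a stack of ‘identical’ frames is formed and strongly deviant pixels are flagged. The threshold used below is illustrative.

    import numpy as np

    def flag_cosmic_rays(frames, nsigma=5.0):
        """Boolean cosmic-ray mask for each frame in a stack of matched exposures."""
        stack = np.asarray(frames, dtype=float)
        median = np.median(stack, axis=0)                     # per-pixel median image
        scatter = 1.4826 * np.median(np.abs(stack - median), axis=0)  # robust sigma
        return stack > median + nsigma * np.maximum(scatter, 1.0)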

Depending on the particular frame involved it may be possible to interpolate across a cosmic-ray hit. The alternative is to simply flag the cosmic-ray pixels found so that they are not used in the spectrum extraction.

You should ensure that cosmic rays in your CCD bias, dark, and flat-field frames are removed prior to attempting reduction—a median filter is suitable. It is particularly important that cosmic rays do not severely degrade the frame used for order tracing—otherwise the whole reduction will be unsuccessful.

Once slit-definition is complete any bright pixels lying outside the slits are almost certainly cosmic-ray hits.

Some implementations of the optimal spectrum extraction method allow you to select whether profile-based cosmic-ray rejection is applied during the extraction process (as is the norm) or post-extraction (e.g., ECHOMOP).

Whichever cosmic-ray removal/flagging strategy you choose to adopt, it is wise to check the results by displaying the original image with detected events flagged—sometimes bright sky lines can be mistakenly flagged as cosmic-ray events.