Last modified: 29 February 2024

URL: https://cxc.cfa.harvard.edu/ciao/guides/gspec_hrcsletg.html

High Resolution Spectroscopy
LETG/HRC-S Observations

Analysis Guide


It is possible to analyze any Chandra gratings dataset almost straight out of the box (only response files need to be created). However, to get scientifically accurate results, there are a number of data processing questions that should be considered. When Chandra data goes through Standard Data Processing (SDP, or the pipeline), the most recently available calibration is applied to it. Since this calibration is continuously being improved, one should check whether there are currently newer files available. Similarly, some science decisions are made during SDP; every user has the option to reprocess the data with different parameters.

For gratings observations, there is the additional question of whether the source spectrum extracted during SDP corresponds to the source of interest to the user, and if so, whether the source location was correctly determined.

The chandra_repro reprocessing script runs all the data processing threads for gratings data. Everything in the data reprocessing steps of this guide is automated by the script. Refer to "ahelp chandra_repro" for more information.
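
For example, a minimal invocation, run from the directory containing the downloaded observation (the ObsID directory name here is purely illustrative), might look like:

  unix% punlearn chandra_repro
  unix% chandra_repro indir=./1234 outdir=./1234/repro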

Using TGCat, the Chandra Gratings Catalog and Archive: If the user is analyzing archival data, and furthermore is satisfied with the source position and the choices made for SDP, essentially all steps described here can be avoided by using the gratings catalog, TGCat, and downloading the pre-computed spectral products (i.e., spectra and response files). The user can then proceed to the "Simple spectral analysis" threads.


This analysis guide gives an overview of a subset of the CIAO threads pertaining to LETG/HRC-S data analysis. They take the user through the analysis of a single source from a single observation. By following the threads, the user can determine for an LETG/HRC-S dataset:

  1. whether the location of the extracted source was correct,
  2. whether the data should be reprocessed and filtered before starting the data analysis stage,
  3. how to create response files for the gratings spectrum, and
  4. how to begin a simple spectral analysis.

The referenced threads should be run in the order in which they are presented below.



Thread: Correcting a Misplaced Zero-order Source Position

When tgdetect, the CIAO tool for locating the zero-order position of an observed source, is run in the pipeline, it uses the observer-specified target coordinates from the file header to locate the center of the box in which to search for zero-order sources.

However, if tgdetect is run as a stand-alone tool with the zero-order position parameters, zo_pos_x and zo_pos_y, left at their default values, the tool uses hard-wired numbers to locate the center of the box in which it searches for zero-order sources. If the source lies outside of the default search area (1800 square pixels for HRC), the tool will not locate it, regardless of how prominent it is.

This thread shows how to re-run tgdetect with the correct source positions for the search.
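
A minimal sketch of such a re-run, assuming the zero order actually lies near sky coordinates (x, y) = (16500, 16500) and using illustrative file names (see "ahelp tgdetect" for the full parameter list):

  unix% punlearn tgdetect
  unix% tgdetect infile=hrcf01234N001_evt1.fits outfile=hrc_src1a.fits \
          zo_pos_x=16500 zo_pos_y=16500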



Thread: New Observation-Specific HRC Bad Pixel File

The HRC-S bad pixel files are used both to define the valid coordinate regions in the detectors and to identify bad (hot) pixels. Observation-specific bad pixel files are generated from calibration data products by applying the appropriate degap corrections and selecting all time-dependent bad pixel regions in the calibration data that are appropriate to the time of the observation.

You should create a new HRC bad pixel file if hrc_process_events was run with degap corrections different from those used in standard processing, or if you have identified new bad pixel regions that are not contained in the CALDB bad pixel list.

As explained at the end of the thread, it is necessary to reprocess the data with hrc_process_events in order to apply the new bad pixel file.
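
Once a new bad pixel file has been created (see "ahelp hrc_build_badpix"), a hedged sketch of reapplying it looks like the following; the file names are illustrative, and any non-default degap settings used previously should be repeated in the same call:

  unix% punlearn hrc_process_events
  unix% hrc_process_events infile=hrcf01234_000N001_evt1.fits \
          outfile=hrc_new_evt1.fits badpixfile=my_new_bpix1.fits \
          acaofffile=pcadf052345678N001_asol1.fits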



Thread: Computing Average HRC Dead Time Corrections

The HRC dead time correction factors are determined as a function of time and so vary during the observation. The complete dead time information is recorded in a deadtime factor (dtf1) file, and the average value is recorded in the DTCOR header keyword of the event file.

There was a bug in HRC standard processing versions 7.6.4 through 7.6.7 which led to the use of incorrect good time intervals (GTIs) in the calculation of DTCOR in the dtfstats file, and hence of the LIVETIME and EXPOSURE values. Users whose datasets were processed with these software versions should follow this thread to verify the dead time corrections in their data.

Users might also need to make a new DTF file in order to time-filter the event list in a manner different from that used in standard processing, particularly if the deadtime factors in the dtf1 file have been flagged as variable in the standard deadtime statistics (std_dtfstat1) file.
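
A sketch of recomputing the average dead time correction over the good time intervals of the filtered event file (the file names are illustrative; check "ahelp hrc_dtfstats" for the exact parameter names):

  unix% punlearn hrc_dtfstats
  unix% hrc_dtfstats infile=hrcf01234_000N001_dtf1.fits outfile=hrc_dtfstats.fits \
          gtifile="hrcf01234N001_evt2.fits[GTI]"
  unix% dmkeypar hrc_dtfstats.fits DTCOR echo+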



Thread: Setting the Observation-specific Bad Pixel Files

Although the majority of the calibration files are now contained within the Chandra Calibration Database (CALDB), the observation-specific bad pixel list must be set by the user. This file will be used by many of the CIAO tools, such as mkarf, mkgarf, and mkinstmap. Setting the bad pixel file ensures that the most accurately known bad pixel list for any observation will consistently be used in the data processing.

It is very important that you know what files are set in your ardlib.par. If you do not set the bad pixel file for your observation, the software will use a generic detector bad pixel file from the CALDB; pixels that are flagged as bad in a specific observation will not get filtered out when using this map. The criteria for a pixel to be flagged are described in the badpix dictionary entry.

Remember to "punlearn" or delete your ardlib.par file after completing analysis of this dataset to ensure that the proper bad-pixel maps are used the next time that ardlib.par is referenced by a tool.



Thread: LETG/HRC-S Grating Spectra

This thread begins with a level=1 event file, which might have been subject to the reprocessing threads outlined above. The "Setting the Observation-specific Bad Pixel Files" thread, however, should be run for all data. At this point, the user might wish to apply other changes to the reprocessing options.

After reprocessing, the source position is determined using the tgdetect tool. If the "Correcting a Misplaced Zero-order Source Position" thread has already been run, this step will already have been performed.

The thread then takes the user through the process of identifying the image regions along the directions of the gratings arms (using tg_create_mask), and of assigning gratings events to spectral orders (using tg_resolve_events). An HRC-S background filter is applied with dmcopy to eliminate about 25% of the background within the near-0th-order region with no X-ray losses. dmcopy is used again to apply grade and status filters, creating a new level=2 event file. At this point, the user might also want to apply other filters, e.g., a time filter.

Spectra are then created using the tgextract tool.
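
In outline, and with illustrative file names, the extraction chain looks roughly like the sketch below; the background, grade, and status filter expressions are deliberately left as placeholders, since the exact values are given in the thread itself.

  unix% tg_create_mask infile=hrc_evt1.fits outfile=hrc_reg1a.fits \
          input_pos_tab=hrc_src1a.fits grating_obs=header_value
  unix% tg_resolve_events infile=hrc_evt1.fits outfile=hrc_evt1a.fits \
          regionfile=hrc_reg1a.fits acaofffile=pcad_asol1.fits osipfile=NONE
  unix% dmcopy "hrc_evt1a.fits[EVENTS][status=<background filter>]" hrc_flt_evt1a.fits
  unix% dmcopy "hrc_flt_evt1a.fits[EVENTS][<grade and status filter>]" hrc_evt2.fits
  unix% tgextract infile=hrc_evt2.fits outfile=hrc_pha2.fits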



Thread: Higher-order Responses for HRC-S/LETG Spectra

Because the HRC-S lacks the energy resolution needed to sort the overlapping diffraction orders, the LETG spectrum file (pha2.fits) contains only two rows (the negative and positive arms), each containing all the spectral orders. While the overlapping orders cannot be separated, CIAO can make response files (gRMFs and gARFs) for the higher orders so that they can be included in modeling and fitting.

This thread describes how to estimate the importance of higher orders when analyzing LETG/HRC-S spectra, so that one can at least determine the minimum number of orders with which to start. The number of orders which will be modeled dictates how many gRMFs and gARFs are created in the next two sections of this guide.



Thread: HRC-S Grating RMFs

This thread shows how to use the mkgrmf tool to generate a grating RMF (gRMF) appropriate for spectral analysis of grating observations. The tool can be used either to create a standard gRMF with the most up-to-date calibration or to calculate a gRMF using non-standard grids.

For LETG/HRC-S observations, users must make gRMFs for at least the positive and negative first orders of the LEG (i.e., two gRMFs in total). Additional gRMFs are created for the higher orders to be modeled, determined after reading the Higher-order Responses for HRC-S/LETG Spectra thread.
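
A hedged sketch for the LEG +1 order, assuming the central HRC-S plate (HRC-S2) and illustrative file names; repeat with order=-1 for the negative arm, and see "ahelp mkgrmf" for the full parameter list:

  unix% punlearn mkgrmf
  unix% mkgrmf grating_arm=LEG order=1 outfile=leg_p1.rmf srcid=1 \
          detsubsys=HRC-S2 threshold=1e-06 obsfile=hrc_evt2.fits \
          regionfile=hrc_pha2.fits wvgrid_arf=compute wvgrid_chan=compute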



Thread: LETG/HRC-S Grating ARFs

This thread shows how to use the fullgarf script to create a grating ARF for a particular order and grating arm of an observation. While the mkgarf tool will create a grating ARF for an individual chip given an aspect histogram, this script creates ARFs for each chip, generating aspect histograms as necessary, and then combines the individual ARFs into one for the full array.

For LETG/HRC-S observations, users will typically make ARFs for at least the positive and negative first orders of the LEG (i.e., two gratings ARFs in total). Additional gARFs are created for the higher orders to be modeled, determined after reading the Higher-order Responses for HRC-S/LETG Spectra thread.
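
A rough sketch for one order, using the energy grid of the matching gRMF; all file names are illustrative, the row number assumes the positive-arm spectrum is the second row of the PHA2 file, and the parameter names should be confirmed with "ahelp fullgarf":

  unix% punlearn fullgarf
  unix% fullgarf phafile=hrc_pha2.fits pharow=2 evtfile=hrc_evt2.fits \
          asol=pcad_asol1.fits engrid="grid(leg_p1.rmf[cols ENERG_LO,ENERG_HI])" \
          dtffile=hrcf01234_000N001_dtf1.fits badpix=CALDB rootname=hrc_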



Thread: Grouping a Grating Spectrum

In order to use Gaussian statistics to fit a model to a dataset, it is often necessary to "group" the data (i.e., combine channels until each bin contains enough counts) before fitting. Since it is not possible to group all the rows in a PHA2 spectrum file at once, the individual spectra first need to be split out of the file with dmtype2split. The dmgroup tool is then used to perform the desired grouping.

Note that this thread is only relevant to users analyzing their data with XSPEC. Those using Sherpa or ISIS can group their data during spectral analysis.
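
For example, once a single spectrum has been split out of the PHA2 file, it can be grouped to a minimum of 10 counts per bin; a minimal sketch with illustrative file names:

  unix% punlearn dmgroup
  unix% dmgroup infile=leg_p1.pha outfile=leg_p1_grp10.pha grouptype=NUM_CTS \
          grouptypeval=10 binspec="" xcolumn=channel ycolumn=counts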



Thread: Fitting Multiple Orders of HRC-S/LETG Data

After the pairs of gRMF and gARF response files have been created, the desired orders of the spectra can be modeled and fit in Sherpa. This thread shows how to load the data and response files, define a source model, and set the fit statistic and optimization method. After the fit is run, the chi-squared contribution per bin and confidence intervals for the fit parameters are calculated.
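
As a rough sketch of a Sherpa session (the file names, the choice of three orders, and the absorbed blackbody model are purely illustrative), the order-specific responses can be attached to a single spectrum with load_multi_rmfs and load_multi_arfs:

  sherpa> load_pha("leg_p1_grp10.pha")
  sherpa> load_multi_rmfs(["leg_p1.rmf", "leg_p2.rmf", "leg_p3.rmf"], [1, 2, 3])
  sherpa> load_multi_arfs(["leg_p1.arf", "leg_p2.arf", "leg_p3.arf"], [1, 2, 3])
  sherpa> set_source(xsphabs.abs1 * xsbbody.bb1)
  sherpa> set_stat("chi2gehrels")
  sherpa> set_method("levmar")
  sherpa> fit()
  sherpa> plot_fit_delchi()
  sherpa> conf()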