Last modified: 13 June 2014

URL: http://cxc.harvard.edu/csc/proc/source.html

Source Pipeline


The source pipeline is run for each source candidate produced by the detect pipeline. It uses the source and background regions (except where "modified source and background regions" is indicated), rather than the full-field data used in earlier steps of catalog processing.

Note on Errors

All values calculated in the source pipeline, except for spatial quantities (position and size), are reported with two-sided confidence limits.

Measure the source and PSF size; flag extended sources

An image and exposure map are created for each detection. The PSF at the location is simulated using SAOTrace, which is equivalent to the publicly-available Chandra Ray Tracer (ChaRT). The PSF simulation is run with a ray density - the number of rays generated at the entrance aperture - of 0.2 rays/mm².

An example of the data products is shown in Figure 1.

[ACIS counts image, exposure map, and PSF image]

Figure 1. Counts image, exposure map, and PSF image of an ACIS detection.

A Mexican-Hat optimization method is used to measure the apparent source and PSF size, based on the raw extent of a source, i.e. the extent of a source before subtraction of overlapping source regions. This is a refinement of the wavdetect results; the method is described in detail in the Measuring Detected Source Extent Using Mexican-Hat Optimization memo.
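The memo describes the actual optimization in detail; as a rough illustration of the underlying idea, correlating an image with Mexican-Hat (Ricker) wavelets of varying scale and picking the best-responding scale can be sketched as below. The function names, grid choices, and the scale normalization are illustrative assumptions, not the pipeline's implementation:

```python
import numpy as np

def mexican_hat(shape, sigma):
    """2-D Mexican-Hat (Ricker) wavelet centred on an image grid."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    r2 = ((x - cx) ** 2 + (y - cy) ** 2) / sigma ** 2
    return (2.0 - r2) * np.exp(-r2 / 2.0)

def apparent_size(image, scales):
    """Return the wavelet scale with the largest scale-normalized
    correlation against the image -- a crude proxy for source size.
    The 1/s**2 normalization makes the response of a Gaussian blob
    of width sigma peak at scale s = sigma."""
    responses = [np.sum(image * mexican_hat(image.shape, s)) / s ** 2
                 for s in scales]
    return scales[int(np.argmax(responses))]
```

For a centred Gaussian blob of width sigma = 3 pixels, the normalized response peaks at scale 3, so `apparent_size` recovers the blob width from a small grid of trial scales.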

The pipeline does not attempt to detect extent on scales greater than 30". For sources between 1" and 30" in extent, if the source contains a large number of counts and a large fraction of the flux is extended, the true extent of the source is estimated (since the PSF-convolved extent in the presence of observational noise is then distinguishable from a pure PSF). If there are only a few counts in the extended part, however, it is not possible to determine whether the source is statistically different from a point source. Additionally, for sources far off axis, it is not possible to detect small-scale extent (i.e., extent << PSF size).
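The point-versus-extended decision can be caricatured as a significance test on the difference between the measured source size and the PSF size. The threshold, the quadrature error combination, and the function below are hypothetical illustrations chosen for this sketch, not the pipeline's actual criteria:

```python
import math

def is_extended(src_size, src_err, psf_size, psf_err,
                n_sigma=3.0, max_extent=30.0):
    """Illustrative extent flag: the measured size must exceed the PSF
    size by more than n_sigma combined (quadrature) errors, and must lie
    below the 30 arcsec ceiling on the scales the pipeline probes.
    All sizes and errors are in arcsec; n_sigma is an assumed threshold."""
    if src_size > max_extent:
        return False  # beyond the largest scale searched for extent
    err = math.hypot(src_err, psf_err)
    return (src_size - psf_size) > n_sigma * err
```

A well-measured 5" source against a 2" PSF would be flagged extended; a source only 0.1" larger than the PSF, with 0.5" errors, would not.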

Create light curves and look for variability

For each source, the events across all chips are reduced to a common set of good time intervals (GTIs). Then the time-resolved fraction of aperture area is calculated, i.e., how much exposure the source "lost" by dithering off the chip or across a bad column.
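Reducing per-chip GTIs to a common set amounts to interval intersection. A minimal sketch, assuming each GTI list is a sorted sequence of non-overlapping (start, stop) pairs in seconds:

```python
def intersect_gtis(a, b):
    """Intersect two good-time-interval lists, each a sorted list of
    non-overlapping (start, stop) pairs, returning the overlap."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        start = max(a[i][0], b[j][0])
        stop = min(a[i][1], b[j][1])
        if start < stop:
            out.append((start, stop))
        # advance whichever interval ends first
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out
```

Folding this pairwise over all chips yields the common GTIs; e.g. intersecting [(0, 10), (20, 30)] with [(5, 25)] gives [(5, 10), (20, 25)].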

[Plot of dither: fraction of aperture area vs time offsets [s]]

Figure 2. Plot of dither: fraction of aperture area vs time offsets [s].

Several variability tests are run on the data, taking the dither information into account:

[Gregory-Loredo light curve (top) compared to the same light curve with dither removed (bottom)]

Figure 3. Gregory-Loredo light curve (top) compared to the same light curve with dither removed (bottom).

The Gregory-Loredo light curve (lc3.fits) is included in the data distribution.
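Removing the dither signature from a light curve, as in Figure 3, amounts to dividing the binned counts by the time-resolved aperture-area fraction. A simplified sketch, assuming uniform exposure per bin and masking bins with no aperture coverage (the function and its interface are illustrative, not the pipeline's):

```python
import numpy as np

def dither_corrected_rate(counts, exposure, area_frac):
    """Correct binned count rates for the time-varying fraction of the
    source aperture that fell on the detector (dither losses).
    counts: counts per time bin; exposure: seconds per bin (assumed
    uniform); area_frac: aperture-area fraction per bin, 0..1."""
    counts = np.asarray(counts, dtype=float)
    frac = np.asarray(area_frac, dtype=float)
    rate = np.full_like(counts, np.nan)
    good = frac > 0
    # bins with zero coverage carry no rate information and stay NaN
    rate[good] = counts[good] / (exposure * frac[good])
    return rate
```

A bin with half its aperture off the chip thus has its raw rate doubled, recovering a flat light curve for a constant source.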

Perform spectral fitting to get flux

For each source, a PI spectrum and the corresponding ARF and RMF calibration files are created; these files are included in the distributed data products.

If the spectrum has at least 150 net counts in the 0.5-7.0 keV range, two spectral models are fit to the data: a black body model and a power law. Corrections for the PSF aperture fraction, livetime, and ARF are applied when fitting the models.

The free parameters in the power law fit are the total integrated flux, the total neutral hydrogen absorbing column, and the power law photon index. In the black body fit, the free parameters are the total integrated flux, the total neutral hydrogen absorbing column, and the black body temperature. The initial value of the hydrogen column density (nH) is taken from Colden, the CXC's Galactic neutral hydrogen density calculator. Note that spectral fit parameters may be unreliable for sources at large off-axis angles, where background levels can be high; a background-fitting approach will be considered for future releases of the catalog. For more information on spectral fit parameters in the Chandra Source Catalog, refer to the Spectral Properties page.
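The pipeline's fits are forward-folded through the aperture, livetime, and ARF corrections described above. As a much simpler illustration of the power-law model itself, the photon index Gamma in f(E) = A * E**(-Gamma) can be recovered from flux measurements by a linear fit in log-log space; this sketch ignores absorption and the instrument response entirely:

```python
import numpy as np

def fit_photon_index(energies, fluxes):
    """Least-squares estimate of (Gamma, A) for a pure power law
    f(E) = A * E**(-Gamma), fitted linearly in log-log space.
    Illustrative only: real catalog fits fold the model through the
    ARF/RMF and include neutral-hydrogen absorption."""
    x = np.log(np.asarray(energies, dtype=float))
    y = np.log(np.asarray(fluxes, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    return -slope, np.exp(intercept)  # photon index, normalization
```

Given noiseless fluxes drawn from A = 2.0, Gamma = 1.8, the fit recovers both parameters exactly.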

Calculate additional source properties

A number of additional source properties are calculated in the source pipeline. These are described in detail in the Column Descriptions section.

Output data products

The pipeline produces a set of data products for each detection, including the counts image, exposure map, and simulated PSF image, the PI spectrum with its associated ARF and RMF, and the Gregory-Loredo light curve (lc3.fits) described above.

Post-pipeline Tasks

After this pipeline runs, sources are flagged for removal if:

When a source is removed, it is not included in the database and no further processing is done.