Catalog Processing


The CSC is created by processing each Chandra dataset with a series of automated data analysis pipelines. Collectively, the pipelines are known as "Level 3 Processing" and the data products reflect that in their filenames—e.g. the event file suffix is evt3.fits. For more on this nomenclature, see Chandra Standard Data Processing, which also describes the Level 1 and 2 Chandra data products.

The pipelines run in order:
Observation Selection » Pre-Calibrate/Pre-Detect Pipeline » Fine Astrometry Pipeline » Calibrate Pipeline » ComboDet Pipeline » Source Validation Pipeline » MLE Pipeline Run 1 » Rebundle » MLE Pipeline Run 2 » Stacker Pipeline » Master Match Pipeline » Source Properties Pipeline

[Updated]Observation Selection

The Observation Selection page describes which observation intervals (OBIs) are chosen for catalog processing.

Each observation interval is assigned to a 'stack' such that all co-aligned observations (pointings within 1 arcminute) are in the same stack and can be processed as a group. A stack may therefore contain one or more observation intervals.
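
As a rough illustration of how such grouping could be done (not the operational code), the following Python sketch uses astropy to assign hypothetical OBI pointings to stacks with a simple friends-of-friends criterion:

  # Illustrative sketch: group observation intervals into stacks when their
  # pointings lie within 1 arcminute of one another.
  import numpy as np
  from astropy.coordinates import SkyCoord
  import astropy.units as u

  def group_into_stacks(ra_deg, dec_deg, radius=1.0 * u.arcmin):
      """Friends-of-friends grouping of OBI pointings within `radius`."""
      coords = SkyCoord(ra=ra_deg * u.deg, dec=dec_deg * u.deg)
      n = len(coords)
      stack_id = np.full(n, -1, dtype=int)
      current = 0
      for i in range(n):
          if stack_id[i] >= 0:
              continue
          # start a new stack and grow it with all co-aligned, unassigned OBIs
          members = [i]
          stack_id[i] = current
          while members:
              j = members.pop()
              sep = coords[j].separation(coords)
              for k in np.where((sep < radius) & (stack_id < 0))[0]:
                  stack_id[k] = current
                  members.append(k)
          current += 1
      return stack_id

  # Hypothetical pointings: the first two OBIs are co-aligned, the third is not.
  print(group_into_stacks(np.array([150.000, 150.010, 151.500]),
                          np.array([2.200, 2.205, 2.300])))
  # -> [0 0 1]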

[New]Pre-Calibrate/Pre-Detect Pipeline

The Pre-Calibrate pipeline is run for each OBI that is a member of a stack containing more than one OBI.

The Pre-Detect step runs the wavdetect program with conservative parameter settings to identify bright point sources suitable for astrometrically matching the observations that make up each stack.

  • Reprocess the selected datasets with the same CALDB.
  • Run wavdetect to create a bright-source list.
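
The following sketch shows what a conservative wavdetect run might look like when scripted through the CIAO Python interface; the file names, wavelet scales, and threshold are placeholders rather than the settings used in catalog processing:

  # Illustrative sketch of the Pre-Detect step: a conservative wavdetect run
  # to pick out bright sources for cross-matching between OBIs.
  from ciao_contrib.runtool import wavdetect  # CIAO Python scripting interface

  wavdetect.punlearn()
  wavdetect(infile="obi_band_img3.fits",        # binned image for one OBI/band (placeholder)
            outfile="obi_predetect_src3.fits",  # bright-source list (placeholder)
            scellfile="obi_scell3.fits",
            imagefile="obi_recon3.fits",
            defnbkgfile="obi_nbkg3.fits",
            scales="2 4 8",                     # small set of wavelet scales (illustrative)
            sigthresh=1e-7,                     # conservative threshold -> bright sources only
            clobber=True)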

[New]Fine Astrometry Pipeline

The Fine Astrometry pipeline computes the astrometric corrections needed to align each observation in a stack to a common astrometric frame. It is run on the observations that went through the Pre-Calibrate/Pre-Detect pipeline.

  • Calculate astrometric translations for each observation (usually less than 1 pixel).
  • Update aspect solution files to be consistent with correction.
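
A minimal sketch of the translation calculation, assuming hypothetical per-OBI bright-source lists with RA/DEC columns, might look like this:

  # Illustrative sketch: cross-match the bright-source lists of two co-aligned
  # OBIs and take the median offset as the astrometric translation.
  import numpy as np
  from astropy.table import Table
  from astropy.coordinates import SkyCoord
  import astropy.units as u

  ref = Table.read("obi1_predetect_src3.fits")   # reference OBI source list (placeholder)
  obs = Table.read("obi2_predetect_src3.fits")   # OBI to be corrected (placeholder)

  c_ref = SkyCoord(ref["RA"] * u.deg, ref["DEC"] * u.deg)
  c_obs = SkyCoord(obs["RA"] * u.deg, obs["DEC"] * u.deg)

  # nearest-neighbour match, keeping only close pairs
  idx, sep, _ = c_obs.match_to_catalog_sky(c_ref)
  good = sep < 3 * u.arcsec

  dra = (c_ref[idx].ra - c_obs.ra).to(u.arcsec)
  ddec = (c_ref[idx].dec - c_obs.dec).to(u.arcsec)
  # median translation to apply via the aspect solution (typically < 1 pixel);
  # a full treatment would work in a tangent-plane projection (cos(Dec) factor)
  print(f"shift: dRA = {np.median(dra[good]):.3f}, dDec = {np.median(ddec[good]):.3f}")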

[Updated]Calibrate Pipeline

The Calibrate pipeline is run for each OBI chosen by the observation selection process.

  • Reprocess the selected datasets with the same CALDB.
  • Identify and remove background flares.
  • Create data products for use in the other pipelines.
  • Create background maps.
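
The flare-identification step above amounts to cleaning a background light curve; a toy sketch of the idea (not the CIAO light-curve cleaning tools used in production) is:

  # Illustrative sketch of background-flare removal: bin the background light
  # curve and iteratively sigma-clip bins whose rate deviates from the mean,
  # keeping the surviving bins as good-time intervals.
  import numpy as np

  def clean_lightcurve(times, bin_size=260.0, nsigma=3.0, max_iter=10):
      """Return (bin_edges, keep_mask) for bins judged free of flares."""
      edges = np.arange(times.min(), times.max() + bin_size, bin_size)
      rate = np.histogram(times, bins=edges)[0] / bin_size
      keep = np.ones(rate.size, dtype=bool)
      for _ in range(max_iter):
          mean, std = rate[keep].mean(), rate[keep].std()
          new_keep = np.abs(rate - mean) < nsigma * std
          if np.array_equal(new_keep, keep):
              break
          keep = new_keep
      return edges, keep

  # Hypothetical event times (seconds) with a flare injected near t = 5000 s
  rng = np.random.default_rng(1)
  times = np.sort(np.concatenate([rng.uniform(0, 10000, 2000),
                                  rng.uniform(5000, 5500, 800)]))
  edges, keep = clean_lightcurve(times)
  print(f"{keep.sum()} of {keep.size} bins kept as good time")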

[Updated]ComboDet Pipeline

The ComboDet (combine and detect) pipeline takes the calibrated OBIs from the Calibrate pipeline and, for each stack, creates combined data products and identifies candidate source detections.

  • Reproject observations to common tangent plane for stack.
  • Detect faint source candidates with wavdetect.
  • Combine wavdetect detections at different scales.
  • Detect extended source candidates with mkvtbkg.
  • Calculate the source and background regions.
  • Calculate the limiting sensitivity.
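
The reprojection step amounts to expressing each OBI's positions in the pixel system of a single tangent-plane WCS for the stack. A minimal sketch with astropy, using a hypothetical stack centre and positions (the operational processing uses the CIAO reprojection tools), is:

  # Illustrative sketch: reproject sky positions from different OBIs onto a
  # common tangent plane defined for the stack.
  import numpy as np
  from astropy.wcs import WCS

  def stack_wcs(ra0, dec0, pixscale_arcsec=0.492):
      """Tangent-plane WCS centred on (ra0, dec0) with an ACIS-like pixel scale."""
      w = WCS(naxis=2)
      w.wcs.ctype = ["RA---TAN", "DEC--TAN"]
      w.wcs.crval = [ra0, dec0]
      w.wcs.crpix = [4096.5, 4096.5]
      w.wcs.cdelt = [-pixscale_arcsec / 3600.0, pixscale_arcsec / 3600.0]
      return w

  w = stack_wcs(150.005, 2.202)            # hypothetical stack centre
  ra = np.array([150.000, 150.010])        # positions from different OBIs (deg)
  dec = np.array([2.200, 2.205])
  x, y = w.wcs_world2pix(ra, dec, 1)       # common stack pixel coordinates
  print(x, y)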

[New]Source Validation Pipeline

The Source Validation pipeline is run to reconcile the source lists.

  • Define 'bundles' of source detections which are overlapping or nearly so.
  • Flag detection problems (pileup, etc.).
  • Perform QA to inspect, add, remove, modify, or flag sources as needed.
  • Perform QA to inspect, add, or modify convex-hull extended sources.
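
A minimal sketch of the bundling idea, treating each detection as a circular region and grouping overlapping regions with union-find (positions and radii are hypothetical):

  # Illustrative sketch of detection bundling: detections whose regions
  # overlap, or nearly overlap, are placed in the same bundle.
  import numpy as np

  def bundle(x, y, r, pad=1.0):
      """Union-find grouping of detections whose regions overlap (within `pad`)."""
      n = len(x)
      parent = list(range(n))

      def find(i):
          while parent[i] != i:
              parent[i] = parent[parent[i]]
              i = parent[i]
          return i

      for i in range(n):
          for j in range(i + 1, n):
              d = np.hypot(x[i] - x[j], y[i] - y[j])
              if d <= r[i] + r[j] + pad:
                  parent[find(i)] = find(j)

      return np.array([find(i) for i in range(n)])

  x = np.array([10.0, 12.0, 50.0])
  y = np.array([10.0, 11.0, 50.0])
  r = np.array([3.0, 2.5, 4.0])
  print(bundle(x, y, r))   # first two detections share a bundle, the third is alone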

[New]MLE Pipeline Run 1

The MLE (Maximum Likelihood Estimator) pipeline takes the candidate sources in each bundle and assesses them using a source region significantly larger than the PSF, updating the source positions and evaluating their likelihood values.

  • Create ray trace PSF models in each energy band for each candidate source in the bundle.
  • Create background model for source neighbourhood using the adaptively smoothed backgrounds.
  • Perform maximum likelihood simultaneous fit for combined data in source bundle to derive best fit source positions, possible extent, and corresponding model likelihood.
  • Classify detection as true, marginal or false based on likelihood thresholds from simulations.
  • For true or marginal detections, compute MCMC confidence intervals for parameters.
  • Generate per-detection data products.
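
A toy sketch of the fitting step, using a circular Gaussian in place of the ray-trace PSF and a flat background, minimising the Cash statistic and comparing against a background-only fit (all values hypothetical; the real pipeline fits the combined data for a bundle with calibrated PSF and background models):

  # Illustrative sketch of a maximum-likelihood (Cash statistic) source fit.
  import numpy as np
  from scipy.optimize import minimize

  ny = nx = 32
  yy, xx = np.mgrid[0:ny, 0:nx]

  def model(p):
      x0, y0, amp, bkg = p
      psf = np.exp(-0.5 * ((xx - x0) ** 2 + (yy - y0) ** 2) / 2.0 ** 2)
      return amp * psf + bkg

  def cash(p, counts):
      m = np.clip(model(p), 1e-9, None)            # guard against non-positive model
      return 2.0 * np.sum(m - counts * np.log(m))  # Cash statistic (up to a constant)

  # simulate hypothetical counts for a faint source on a flat background
  rng = np.random.default_rng(7)
  counts = rng.poisson(model([16.3, 15.6, 2.0, 0.1]))

  fit = minimize(cash, x0=[16.0, 16.0, 1.0, 0.2], args=(counts,),
                 method="Nelder-Mead")
  null = minimize(lambda p: cash([0.0, 0.0, 0.0, p[0]], counts), x0=[0.2],
                  method="Nelder-Mead")
  x0_fit, y0_fit, amp_fit, bkg_fit = fit.x
  delta_c = null.fun - fit.fun                     # larger -> more significant detection
  print(f"position ({x0_fit:.2f}, {y0_fit:.2f}), delta Cash = {delta_c:.1f}")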

[New]Rebundle

The Rebundle step checks the new source positions and recalculates the assignment of sources to bundles.

[New]MLE Pipeline Run 2 (Recenter)

The MLE (Maximum Likelihood Estimator) pipeline takes the candidate sources in each reassigned bundle and assesses them, using smaller source regions. The source positions are further updated. The steps are the same as for the first run.

After the run, QA is performed to inspect and adjust bundle positions where needed.

[New]Stacker Pipeline

The Stacker pipeline creates a merged detection list for an observation stack.

  • Combine outputs for all MLE pipelines in stack.
  • Generate per-stack detection list.
  • Perform QA to reject or flag problem sources.
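
A minimal sketch of the merge, assuming hypothetical per-bundle MLE output files and a 'likelihood' column:

  # Illustrative sketch: stack the per-bundle MLE output tables into a single
  # per-stack detection list.  File and column names are placeholders.
  from glob import glob
  from astropy.table import Table, vstack

  tables = [Table.read(f) for f in sorted(glob("stack_*_bundle_*_mle.fits"))]
  detections = vstack(tables, metadata_conflicts="silent")
  detections.sort("likelihood")                    # hypothetical column name
  detections.reverse()                             # most significant first
  detections.write("stack_detections.fits", overwrite=True)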

[Updated]Master Match Pipeline

The Master Match pipeline reconciles detections of the same source in different stacks. The method is similar to that used in Release 1.

  • Identify overlapping stacks that were not grouped together because they are either (a) from different instruments or (b) pointed more than 1 arcminute apart.
  • Taking different PSF sizes into account, identify detections of the same astronomical source in different stacks and define 'master sources'.
  • Assign 2CXO catalog names to master sources.
  • Perform manual QA to handle 'too hard' match cases.
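
A minimal sketch of such cross-stack matching, where the match radius scales with the quadrature sum of the (PSF-dependent) positional uncertainties of the two detections; coordinates and errors are hypothetical:

  # Illustrative sketch: pair up detections from two stacks when their
  # separation is small compared with their combined positional uncertainty.
  import numpy as np
  from astropy.coordinates import SkyCoord
  import astropy.units as u

  def match_stacks(coords_a, err_a, coords_b, err_b, nsigma=3.0):
      """Return index pairs (i, j) of detections judged to be the same source."""
      pairs = []
      for i, (c, e) in enumerate(zip(coords_a, err_a)):
          sep = c.separation(coords_b)
          radius = nsigma * np.hypot(e.to_value(u.arcsec),
                                     err_b.to_value(u.arcsec)) * u.arcsec
          for j in np.where(sep < radius)[0]:
              pairs.append((i, int(j)))
      return pairs

  a = SkyCoord([150.0000, 150.0100] * u.deg, [2.2000, 2.2050] * u.deg)
  ea = [0.5, 2.0] * u.arcsec                   # per-detection positional errors
  b = SkyCoord([150.0001, 150.0500] * u.deg, [2.2001, 2.2500] * u.deg)
  eb = [0.6, 1.5] * u.arcsec
  print(match_stacks(a, ea, b, eb))            # -> [(0, 0)]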

[Updated]Source Properties Pipeline

The Source Properties pipeline is run for each master source and energy band.

  • Rebundle sources (again).
  • Calculate aperture photometry properties (flux PDFs) per observation and stack.
  • Calculate remaining source properties per observation and stack.
  • Group observations which are consistent with each other into 'blocks'.
  • Calculate source properties per block.
  • Calculate master average flux properties.
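
A toy sketch of the aperture-photometry PDF calculation: given counts in the source and background apertures, marginalise over the background rate to obtain the posterior PDF of the net source counts (flat priors, hypothetical numbers; the real pipeline also folds in exposure, effective area, and the energy response to produce flux PDFs):

  # Illustrative sketch of a Bayesian aperture-photometry probability density.
  import numpy as np
  from scipy.stats import poisson

  n_src, n_bkg = 12, 40          # counts in source / background apertures (hypothetical)
  area_ratio = 0.1               # source-aperture area / background-aperture area
  exposure = 10_000.0            # seconds

  s_grid = np.linspace(0, 40, 401)       # net source counts in the source aperture
  b_grid = np.linspace(1e-3, 120, 600)   # background counts in the background aperture

  # joint likelihood on the (source, background) grid, then marginalise over b
  S, B = np.meshgrid(s_grid, b_grid, indexing="ij")
  like = poisson.pmf(n_src, S + area_ratio * B) * poisson.pmf(n_bkg, B)
  pdf = like.sum(axis=1)
  pdf /= pdf.sum() * (s_grid[1] - s_grid[0])   # normalised PDF of net source counts

  rate_mode = s_grid[np.argmax(pdf)] / exposure
  print(f"mode of net source rate: {rate_mode:.2e} count/s")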

[New]Convex Hull Pipeline

The convex hull pipeline will be run for each band and ensemble to complete the analysis of highly extended sources. It is still under development.

  • Perform quality assurance on the per-stack convex hull regions.
  • Run master match algorithm to match sources across stacks within an ensemble, creating master sources.
  • Assign 2CXO source name to master source and add to catalog.
  • Calculate certain source properties (flux, likelihood, positions).
  • Variability and extent properties are not calculated for convex hull sources.
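
A minimal sketch of forming a convex-hull region from positions attributed to an extended source (hypothetical tangent-plane positions):

  # Illustrative sketch: compute the convex hull polygon that encloses the
  # positions assigned to one extended source.
  import numpy as np
  from scipy.spatial import ConvexHull

  rng = np.random.default_rng(3)
  # hypothetical positions (tangent-plane x, y in arcsec) for one extended source
  xy = rng.normal(loc=0.0, scale=30.0, size=(500, 2))

  hull = ConvexHull(xy)
  polygon = xy[hull.vertices]           # hull vertices in counter-clockwise order
  print(f"{len(polygon)} vertices, area = {hull.volume:.1f} arcsec^2")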

[New]Limiting Sensitivity Pipeline

The limiting sensitivity pipeline will calculate the sensitivity in each band for each location covered by the catalog. This will be provided to the community as a separate data product. The pipeline is under development.
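
A toy sketch of a limiting-sensitivity estimate at one location: find the smallest mean source counts that would exceed a background-dependent detection threshold, then convert to flux with the exposure and a hypothetical energy-conversion factor:

  # Illustrative sketch of a single-position limiting-sensitivity estimate.
  import numpy as np
  from scipy.stats import poisson

  def limiting_flux(bkg_counts, exposure_s, ecf=1.0e-11, false_prob=1.0e-6):
      """Minimum detectable flux (erg/cm^2/s) at a position.

      bkg_counts : expected background counts in the detect cell
      ecf        : erg/cm^2 per source count (hypothetical conversion factor)
      """
      # detection threshold: counts with chance probability < false_prob
      # under the background-only hypothesis
      threshold = int(poisson.isf(false_prob, bkg_counts))
      # smallest mean source counts giving >= 50% chance of exceeding it
      src = np.arange(0.1, 200.0, 0.1)
      detect_prob = poisson.sf(threshold, src + bkg_counts)
      min_counts = src[np.searchsorted(detect_prob, 0.5)]
      return min_counts / exposure_s * ecf

  print(f"{limiting_flux(bkg_counts=3.0, exposure_s=30_000.0):.2e} erg/cm^2/s")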


Last modified: 22 November 2017