Introduction to Fitting PHA Spectra
Sherpa Threads (CIAO 4.10 Sherpa v1)
The basic steps used in fitting spectral data are illustrated in this thread. The data used herein were created by running the Creating ACIS Spectra for Pointlike Sources thread.
There are many options and variables that may affect how this process is applied to your data; for a more detailed explanation of the steps, see the following threads:
- Fitting Spectral Data: Fitting PHA Data with Multi-Component Source Models
- Fitting Spectral Data: Simultaneously Fitting Source and Background Spectra
Before fitting ACIS data sets with restricted pulse-height ranges, please read the CIAO caveat page "Spectral analyses of ACIS data with a limited pulse-height range."
Last Update: 11 Apr 2018 - updated for CIAO 4.10, outputs updated
- Load the Spectrum & Instrument Responses
- Filter the Data & Subtract the Background
- Defining the Source Model
- Examining Fit Results
- Scripting It
Load the Spectrum & Instrument Responses
First, load the spectrum file:
sherpa> load_pha("3c273.pi")
WARNING: systematic errors were not found in file '3c273.pi'
statistical errors were found in file '3c273.pi' but not used; to use them, re-read with use_errors=True
read ARF file 3c273.arf
read RMF file 3c273.rmf
WARNING: systematic errors were not found in file '3c273_bg.pi'
statistical errors were found in file '3c273_bg.pi' but not used; to use them, re-read with use_errors=True
read background file 3c273_bg.pi
Since the RESPFILE, ANCRFILE, and BACKFILE keywords are set in the spectrum file header, the response files (RMF and ARF) and the background file are read in automatically as well. If the default dataset ID of "1" is used, it does not need to be included explicitly in the load function; only the data filename is required in this case.
Sherpa issued warnings about systematic and statistical errors, which were not loaded. The statistical errors are calculated according to the fit statistic chosen with set_stat in the Sherpa session. The standard treatment of systematic errors supplied with load_syserror is to add the array of systematic errors in quadrature to the statistical errors. Advanced methods that account for non-linear calibration uncertainties, described in Lee et al. (2011), are available within the pyblocxs Bayesian functions; however, they require calibration products that are not available at this time.
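As a plain-Python sketch (not Sherpa code) of the standard treatment described above, adding systematic errors in quadrature to statistical errors looks like this; the arrays are invented illustrative values:

```python
import math

def combine_errors(staterror, syserror):
    """Combine per-bin statistical and systematic errors in quadrature."""
    return [math.sqrt(s * s + y * y) for s, y in zip(staterror, syserror)]

stat = [3.0, 4.0, 5.0]   # hypothetical statistical errors per bin
sys_ = [4.0, 3.0, 12.0]  # hypothetical systematic errors per bin
print(combine_errors(stat, sys_))  # -> [5.0, 5.0, 13.0]
```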
If Sherpa does not read the background and response files automatically, they can be loaded manually with the load_bkg, load_arf, and load_rmf commands. The show_all command summarizes the current session, including the data, background, and responses that have been loaded:
sherpa> show_all()

Data Set: 1
Filter: 0.1248-12.4100 Energy (keV)
Bkg Scale: 0.134921
Noticed Channels: 1-1024
name           = 3c273.pi
channel        = Float64
counts         = Float64
staterror      = None
syserror       = None
bin_lo         = None
bin_hi         = None
grouping       = Int16
quality        = Int16
exposure       = 38564.6089269
backscal       = 2.52643646989e-06
areascal       = 1.0
grouped        = True
subtracted     = False
units          = energy
rate           = True
plot_fac       = 0
response_ids   =
background_ids =

RMF Data Set: 1:1
name     = 3c273.rmf
detchans = 1024
energ_lo = Float64
energ_hi = Float64
n_grp    = UInt64
f_chan   = UInt32
n_chan   = UInt32
matrix   = Float64
offset   = 1
e_min    = Float64
e_max    = Float64
ethresh  = 1e-10

ARF Data Set: 1:1
name     = 3c273.arf
energ_lo = Float64
energ_hi = Float64
specresp = Float64
bin_lo   = None
bin_hi   = None
exposure = 38564.1414549
ethresh  = 1e-10

Background Data Set: 1:1
Filter: 0.1248-12.4100 Energy (keV)
Noticed Channels: 1-1024
name           = 3c273_bg.pi
channel        = Float64
counts         = Float64
staterror      = None
syserror       = None
bin_lo         = None
bin_hi         = None
grouping       = Int16
quality        = Int16
exposure       = 38564.6089269
backscal       = 1.87253514146e-05
areascal       = 1.0
grouped        = True
subtracted     = False
units          = energy
rate           = True
plot_fac       = 0
response_ids   =
background_ids =

Background RMF Data Set: 1:1
name     = 3c273.rmf
detchans = 1024
energ_lo = Float64
energ_hi = Float64
n_grp    = UInt64
f_chan   = UInt32
n_chan   = UInt32
matrix   = Float64
offset   = 1
e_min    = Float64
e_max    = Float64
ethresh  = 1e-10

Background ARF Data Set: 1:1
name     = 3c273.arf
energ_lo = Float64
energ_hi = Float64
specresp = Float64
bin_lo   = None
bin_hi   = None
exposure = 38564.1414549
ethresh  = 1e-10

sherpa> data_sum = calc_data_sum(id=1)  # total counts (or values) in the data
sherpa> print(data_sum)
736.0
sherpa> data_cnt_rate = calc_data_sum()/get_exposure(id=1)  # calculating count rate in cts/sec
sherpa> print(data_cnt_rate)
0.0190848557908
sherpa> bkg_sum = calc_data_sum(bkg_id=1)  # total counts (or values) in the background data
sherpa> print(bkg_sum)
216.0
sherpa> bkg_cnt_rate = calc_data_sum(bkg_id=1)/get_exposure(bkg_id=1)  # calculating background count rate in cts/sec
sherpa> print(bkg_cnt_rate)
0.00560099028644
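The count-rate values in the transcript above are just the total counts divided by the exposure time; a minimal sketch using the numbers from the session:

```python
# Values taken from the session output above
data_sum = 736.0          # total source counts
bkg_sum = 216.0           # total background counts
exposure = 38564.6089269  # EXPOSURE keyword, seconds

data_cnt_rate = data_sum / exposure  # ~0.0191 cts/s
bkg_cnt_rate = bkg_sum / exposure    # ~0.0056 cts/s
print(data_cnt_rate, bkg_cnt_rate)
```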
Plot the data with the plot_data command:

sherpa> plot_data()
The data are plotted in energy space—as seen in Figure 1—since the instrument model provides the information necessary to compute the predicted counts for each bin. In general, the units of the x-axis are determined by the value in the units field of the data, which may be accessed with 'print(get_data().units)' or show_filter, and modified with set_analysis.
Figure 1: Plot of source spectrum
Filter the Data & Subtract the Background
The CIAO 'Why' topic on Choosing an Energy Filter contains information on selecting an energy range for spectral modeling. We can use the Sherpa ignore or notice functions to select the energy range between 0.1 and 6.0 keV; these functions are applied to all data sets. The related functions notice_id()/ignore_id() require the source data set ID as the first argument; the second argument defines the lower energy of the range and the third the upper energy. These are useful when multiple data sets require different filters. (The notice_id filter is automatically applied to the associated background data when the background data set ID (bkg_id) parameter is not used, as in the example in this thread. A different filter for the background may be set by issuing the notice_id or ignore_id command with the bkg_id entered as the fourth argument.) The data between 0.1 and 6.0 keV are noticed with either of the following:

sherpa> notice(0.1, 6.0)

or

sherpa> ignore(None, 0.1)
sherpa> ignore(6.0, None)
At this point, we also opt to subtract the background data:

sherpa> subtract()
Figure 2 shows the resulting plot.
Figure 2: Source spectrum, filtered and background-subtracted
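What subtract() does per bin can be sketched in plain Python (an illustration, not the Sherpa implementation): the background counts are rescaled by the ratio of the source and background BACKSCAL values (and exposures, equal here) before subtraction. The BACKSCAL and EXPOSURE values below come from the show_all() output:

```python
# BACKSCAL and EXPOSURE values from the show_all() output for 3c273.pi
src_backscal = 2.52643646989e-06
bkg_backscal = 1.87253514146e-05
src_exposure = bkg_exposure = 38564.6089269  # same observation

# this ratio is the factor reported as "Bkg Scale" by show_all()
scale = (src_backscal / bkg_backscal) * (src_exposure / bkg_exposure)
print(scale)  # ~0.134921

def subtract_background(src_counts, bkg_counts, scale):
    """Return background-subtracted counts, bin by bin."""
    return [s - scale * b for s, b in zip(src_counts, bkg_counts)]

print(subtract_background([10.0, 20.0], [2.0, 4.0], scale))
```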
The axis scaling for all plots created in the current Sherpa session may be changed to log by calling set_xlog and set_ylog with no arguments (and changed back to linear with set_xlinear/set_ylinear):
sherpa> set_xlog() sherpa> set_ylog()
To set the plot axis scaling for a specific type of plot, e.g., model, data, or fit plots, the set_xlog/set_ylog or set_xlinear/set_ylinear commands should be called with the appropriate argument, either "data", "model", "source", "fit", or "delchi"—similar to those accepted by the generic Sherpa plot function.
sherpa> p = get_data_plot_prefs() sherpa> p["xlog"] = True sherpa> p["ylog"] = True
To learn how to change the default axis scale from linear to logarithmic so that these commands do not have to be run in each Sherpa session, see this Sherpa FAQ.
Defining the Source Model
Before fitting the data, it is necessary to define a model that characterizes the source. All models available in Sherpa, or only models belonging to a specific category, may be returned at the Sherpa prompt by calling the list_models function accordingly:
sherpa> list_models() # all models, same as 'list_models("all")' sherpa> list_models("xspec") # all xspec models sherpa> list_models("2d") # Sherpa 2D analytic models
Here, we use a source model composed of two model components:
- powlaw1d — a one-dimensional power-law.
- xsphabs — an XSpec photoelectric absorption model.
We define an expression that is the product of these two components, set the hydrogen column density (nH) to the known Galactic value for the source, and freeze the parameter so that it is not allowed to vary in the fit:

sherpa> set_source(xsphabs.abs1 * powlaw1d.p1)
sherpa> abs1.nh = 0.07
sherpa> freeze(abs1.nh)
The current source model definition may be displayed:
sherpa> show_model()
Model: 1
apply_rmf(apply_arf((38564.6089269 * (xsphabs.abs1 * powlaw1d.p1))))
   Param        Type          Value          Min          Max      Units
   -----        ----          -----          ---          ---      -----
   abs1.nh      frozen         0.07            0       100000 10^22 atoms / cm^2
   p1.gamma     thawed            1          -10           10
   p1.ref       frozen            1 -3.40282e+38  3.40282e+38
   p1.ampl      thawed            1            0  3.40282e+38
Note that Sherpa and XSpec absorption models must be multiplied by a model component that has a normalization or amplitude parameter, such as powlaw1d; they cannot be used on their own in the source expression. It may be necessary to modify the parameter values, since the Sherpa guess functionality does not apply to absorption models. However, we can use the guess command to set initial parameter values and ranges for the power-law model component (parameter values are not guessed automatically in Sherpa 4.10). To have Sherpa query for initial parameter values when a model is established, set 'paramprompt(True)' (it is 'False' by default).
sherpa> guess(p1)
sherpa> show_model()
Model: 1
apply_rmf(apply_arf((38564.6089269 * (xsphabs.abs1 * powlaw1d.p1))))
   Param        Type          Value          Min          Max      Units
   -----        ----          -----          ---          ---      -----
   abs1.nh      frozen         0.07            0       100000 10^22 atoms / cm^2
   p1.gamma     thawed            1          -10           10
   p1.ref       frozen            1 -3.40282e+38  3.40282e+38
   p1.ampl      thawed  0.000148802  1.48802e-06    0.0148802
The guess command makes an initial estimate of the parameter values to help convergence, but it is always a good idea to check that the resulting range of values (the soft limits) is sensible for the data being fit. Initial parameter values can also be entered with set_par, which is more appropriate for complex models, since guess is a simple heuristic and can make the parameter space too narrow for the search; set_par should be used in scripts.
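For intuition, here is a toy sketch of the source expression shown by show_model(). The powlaw1d term follows the Sherpa form ampl * (E/ref)**(-gamma); the absorption factor uses an invented placeholder cross-section (the real xsphabs tabulates photoelectric cross-sections), and the RMF redistribution step is omitted:

```python
import math

def powlaw1d(e, gamma=2.15146, ref=1.0, ampl=0.000224449):
    """Sherpa-style power law: ampl * (e / ref) ** (-gamma)."""
    return ampl * (e / ref) ** (-gamma)

def absorption(e, nh=0.07, sigma0=0.01):
    """Placeholder absorption factor exp(-nH * sigma(E)).

    sigma0 and the rough E**-3 cross-section scaling are invented
    for illustration only.
    """
    return math.exp(-nh * sigma0 / e ** 3)

def folded_counts(energies, arf, exposure=38564.6089269):
    """Predicted counts per bin (RMF redistribution step omitted)."""
    return [exposure * a * absorption(e) * powlaw1d(e)
            for e, a in zip(energies, arf)]

# at the reference energy the unabsorbed power law returns ampl itself
print(powlaw1d(1.0))  # -> 0.000224449
```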
Now we are ready to run the fit, using the Sherpa default fit statistic (chi2gehrels) and optimization method (levmar). (The available fit statistics and optimization methods may be returned with the list_stats and list_methods commands, and they may be changed with set_method and set_stat.)
sherpa> fit()
Dataset               = 1
Method                = levmar
Statistic             = chi2gehrels
Initial fit statistic = 866.653
Final fit statistic   = 38.7871 at function evaluation 19
Data points           = 46
Degrees of freedom    = 44
Probability [Q-value] = 0.694074
Reduced statistic     = 0.881525
Change in statistic   = 827.865
   p1.gamma       2.15146
   p1.ampl        0.000224449
The fit information returned by the fit command includes the chi2gehrels statistic value, the goodness of fit and reduced χ2, and the best-fit values of the photon index and amplitude. The calc_stat_info function and the associated get_stat_info command may be used to return the goodness-of-fit statistics without re-running the fit:
sherpa> calc_stat_info()
Dataset               = 1
Statistic             = chi2gehrels
Fit statistic value   = 38.7871
Data points           = 46
Degrees of freedom    = 44
Probability [Q-value] = 0.694074
Reduced statistic     = 0.881525

sherpa> goodness = get_stat_info()
sherpa> print(goodness)
name      = Dataset
ids       =
bkg_ids   = None
statname  = chi2gehrels
statval   = 38.78708134475345
numpoints = 46
dof       = 44
qval      = 0.694073632832
rstat     = 0.881524576017124
The calc_stat_info command is appropriate for accessing the fit statistics at the Sherpa prompt, where the information is printed to the screen, whereas get_stat_info is more useful for parsing this information within a script. Note that get_fit_result is available to access the full information returned by the fit, which is also useful when working on a script.
The best-fit model, the data, and the residuals may be plotted in the same window with the plot_fit_delchi command:

sherpa> plot_fit_delchi()
which creates Figure 3. The residuals are plotted in units of sigma, i.e. (data - model)/error.
Figure 3: Fit and sigma residuals
Examining Fit Results
Goodness of fit
sherpa> show_fit()
Optimization Method: LevMar
name    = levmar
ftol    = 1.19209289551e-07
xtol    = 1.19209289551e-07
gtol    = 1.19209289551e-07
maxfev  = None
epsfcn  = 1.19209289551e-07
factor  = 100.0
verbose = 0

Statistic: Chi2Gehrels
Chi Squared with Gehrels variance.

The variance is estimated from the number of counts in each bin, but unlike `Chi2DataVar`, the Gaussian approximation is not used. This makes it more suitable for use with low-count data.

The standard deviation for each bin is calculated using the approximation from [1]:

    sigma(i,S) = 1 + sqrt(N(i,S) + 0.75)

where the higher-order terms have been dropped. This is accurate to approximately one percent. For data where the background has not been subtracted, the error term is:

    sigma(i) = sigma(i,S)

whereas with background subtraction,

    sigma(i)^2 = sigma(i,S)^2 + [A(S)/A(B)]^2 sigma(i,B)^2

Notes
-----
The accuracy of the error term when the background has been subtracted has not been determined. A preferable approach to background subtraction is to model the background as well as the source signal.

References
----------
[1] "Confidence limits for small numbers of events in astrophysical data", Gehrels, N. 1986, ApJ, vol 303, p. 336-346. http://adsabs.harvard.edu/abs/1986ApJ...303..336G

Fit:
Dataset               = 1
Method                = levmar
Statistic             = chi2gehrels
Initial fit statistic = 866.653
Final fit statistic   = 38.7871 at function evaluation 19
Data points           = 46
Degrees of freedom    = 44
Probability [Q-value] = 0.694074
Reduced statistic     = 0.881525
Change in statistic   = 827.865
   p1.gamma       2.15146
   p1.ampl        0.000224449

# retrieve a single value with get_fit_results():
sherpa> print(get_fit_results().qval)
0.694073632832
sherpa> print(get_fit_results().rstat)
0.881524576017124
The number of bins in the fit (Data points), the number of degrees of freedom, i.e. the number of bins minus the number of free parameters, and the final fit statistic value are reported. If the chosen statistic is one of the χ2 statistics, as in this example, the reduced statistic (i.e. the statistic value divided by the number of degrees of freedom) and the probability (Q-value) are included as well.
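A short sketch of the quantities reported here: the chi2gehrels per-bin error follows the Gehrels approximation sigma = 1 + sqrt(N + 0.75), and the reduced statistic is simply the statistic divided by the degrees of freedom:

```python
import math

def gehrels_sigma(counts):
    """Gehrels (1986) approximation: sigma = 1 + sqrt(N + 0.75)."""
    return 1.0 + math.sqrt(counts + 0.75)

def reduced_statistic(statval, npoints, nfree):
    """Fit statistic divided by the degrees of freedom (npoints - nfree)."""
    return statval / (npoints - nfree)

print(gehrels_sigma(0))  # ~1.866: even an empty bin has a nonzero error
print(reduced_statistic(38.7871, 46, 2))  # ~0.881525, as in the fit output
```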
The calc_chisqr command calculates the statistic contribution per bin:
sherpa> calc_chisqr()
array([  8.20512388e+00,   6.00189930e+00,   1.15902485e+00,   2.48861986e-01,
         1.63111464e-01,   4.42165503e-03,   7.62832579e-01,   3.77756824e-01,
         7.76767705e-01,   8.04308532e-02,   6.25018235e-01,   2.30584312e+00,
         2.06659867e-01,   3.39180690e-02,   9.78404119e-01,   4.01422895e-01,
         1.31203945e-02,   1.26768071e+00,   1.71957682e+00,   2.10258058e-01,
         8.73425302e-02,   1.39908429e-01,   5.99334565e-01,   5.18994619e-02,
         1.96554734e+00,   1.07215327e-01,   1.04520789e+00,   4.26463654e-01,
         3.17161811e-01,   6.24069585e-02,   1.62813093e-01,   1.22619894e+00,
         5.82217015e-01,   3.28840614e-05,   8.85576055e-01,   1.11908875e+00,
         1.60916375e-01,   5.94580127e-02,   2.65393408e-01,   2.17006437e+00,
         2.13649397e-01,   1.92127636e-01,   1.30694441e-01,   4.03522159e-01,
         9.03427393e-02,   7.80364698e-01])
The covariance() command—which may be shortened to covar()—computes covariance matrices and provides an estimate of confidence intervals for the thawed parameters; also see the related command conf():
sherpa> covar()
Dataset              = 1
Confidence Method    = covariance
Iterative Fit Method = None
Fitting Method       = levmar
Statistic            = chi2gehrels
covariance 1-sigma (68.2689%) bounds:
   Param            Best-Fit  Lower Bound  Upper Bound
   -----            --------  -----------  -----------
   p1.gamma          2.15146   -0.0800324    0.0800324
   p1.ampl       0.000224449 -1.48112e-05  1.48112e-05

sherpa> conf()
p1.ampl lower bound:  -1.48112e-05
p1.ampl upper bound:  1.48112e-05
p1.gamma lower bound: -0.0794073
p1.gamma upper bound: 0.0806575
Dataset              = 1
Confidence Method    = confidence
Iterative Fit Method = None
Fitting Method       = levmar
Statistic            = chi2gehrels
confidence 1-sigma (68.2689%) bounds:
   Param            Best-Fit  Lower Bound  Upper Bound
   -----            --------  -----------  -----------
   p1.gamma          2.15146   -0.0794073    0.0806575
   p1.ampl       0.000224449 -1.48112e-05  1.48112e-05
The output is the best-fit parameter value with positive and negative error estimates.
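How covariance-based bounds arise can be sketched simply: the symmetric 1-sigma error on each parameter is the square root of the corresponding diagonal element of the covariance matrix. The diagonal below is hypothetical, chosen to reproduce the covar() bounds above:

```python
import math

def one_sigma_bounds(cov_diag):
    """Symmetric 1-sigma errors: sqrt of the covariance-matrix diagonal."""
    return [math.sqrt(v) for v in cov_diag]

# hypothetical diagonal, chosen to reproduce the covar() bounds for
# p1.gamma (0.0800324) and p1.ampl (1.48112e-05)
diag = [0.0800324 ** 2, 1.48112e-05 ** 2]
errs = one_sigma_bounds(diag)
print(errs)  # both values match the covar() bounds above
```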
Flux and Counts

The calc_photon_flux and calc_energy_flux commands integrate the source model over the entire noticed energy range, or over the range given by their optional lower and upper energy arguments:
sherpa> calc_photon_flux()
0.00046866917841107038
sherpa> calc_photon_flux(2., 10.)
7.3178851987805592e-05
sherpa> calc_energy_flux()
9.6448691196738444e-13
sherpa> calc_energy_flux(2., 10.)
4.5907144409599051e-13
The calc_data_sum, calc_model_sum, and calc_source_sum commands sum, respectively, the observed counts, the convolved model counts, and the unconvolved source model over the noticed range or a specified energy range:

sherpa> calc_data_sum()
706.85714092017133
sherpa> calc_data_sum(2., 10.)
306.2301570173737
sherpa> calc_model_sum()
639.76888323681487
sherpa> calc_model_sum(2., 10.)
272.59887094765571
sherpa> calc_source_sum()
0.046866918188503075
sherpa> calc_source_sum(2., 10.)
0.0073178851705369228
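As a cross-check (plain Python, not Sherpa), the 2-10 keV photon and energy fluxes of the unabsorbed best-fit power law can be computed analytically; because absorption is ignored here, the results come out slightly above the Sherpa values in the transcript:

```python
KEV_TO_ERG = 1.60218e-9  # 1 keV in erg

def powerlaw_photon_flux(ampl, gamma, elo, ehi):
    """Integral of ampl * E**(-gamma) dE, in photons / cm^2 / s."""
    p = 1.0 - gamma
    return ampl / p * (ehi ** p - elo ** p)

def powerlaw_energy_flux(ampl, gamma, elo, ehi):
    """Integral of ampl * E**(1-gamma) dE, converted to erg / cm^2 / s."""
    p = 2.0 - gamma
    return KEV_TO_ERG * ampl / p * (ehi ** p - elo ** p)

# best-fit values from the fit above
pflux = powerlaw_photon_flux(0.000224449, 2.15146, 2.0, 10.0)
eflux = powerlaw_energy_flux(0.000224449, 2.15146, 2.0, 10.0)
print(pflux)  # ~7.4e-05, just above the absorbed 7.32e-05 above
print(eflux)  # ~4.6e-13, just above the absorbed 4.59e-13 above
```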
Scripting It

The file fit.py is a Python script which performs the primary commands used above; it can be executed by typing exec(open("fit.py").read()) on the Sherpa command line.
The Sherpa script command may be used to save everything typed on the command line in a Sherpa session:
sherpa> script(filename="sherpa.log", clobber=False)
(Note that restoring a Sherpa session from such a file could be problematic since it may include syntax errors, unwanted fitting trials, et cetera.)
The CXC is committed to helping Sherpa users transition to new syntax as smoothly as possible. If you have existing Sherpa 3.x scripts or save files, submit them to us via the CXC Helpdesk and we will provide the CIAO/Sherpa 4.10 syntax to you.
History

14 Nov 2007  rewritten for CIAO 4.0 Beta 3
29 Apr 2008  show_all command is available in CIAO 4.0
09 Dec 2008  figures moved inline with text
09 Dec 2008  updated for Sherpa 4.1
16 Feb 2009  example of guess functionality added
29 Apr 2009  new script command is available with CIAO 4.1.2
15 Dec 2009  updated for CIAO 4.2
09 Jul 2010  updated for CIAO 4.2 Sherpa v2: S-Lang version of thread removed
15 Dec 2010  updated for Sherpa in CIAO 4.3: use of log_scale replaced with set_xlog/set_ylog; list_models is available with new argument options; new functions calc_stat_info and get_stat_info return goodness-of-fit information
15 Dec 2011  reviewed for CIAO 4.4 (no changes)
13 Dec 2012  updated for CIAO 4.5: background data may now be filtered separately from associated source data using the new bkg_id argument of the notice_id/ignore_id commands
04 Jun 2013  added a paragraph on statistical and systematic errors to the section "Load the Spectrum & Instrument Responses"; made small edits to the text
03 Dec 2013  reviewed for CIAO 4.6
06 Apr 2015  updated for CIAO 4.7, no content change
01 Dec 2015  updated for CIAO 4.8, outputs updated
01 Dec 2016  updated for CIAO 4.9, updated for Python 3 compatibility
11 Apr 2018  updated for CIAO 4.10, outputs updated