Hi,
I just realized that the effects of even moderate pile-up
fractions could have rather serious consequences for spectral fitting.
In particular, I am fitting a point source with a rate of about 0.1 cts/s,
so it should be about 5-10% piled up, as far as I can tell (I assume
all the pile-up estimates are for standard, dithered observations).
Thinking about what sort of effect this would have on a power-law
spectrum, even a few percent of photons migrating from low energies to
high energies can make a significant difference, even if the
effective area of the instrument were flat with energy. The drop-off in
effective area at higher energies makes things even worse, since every
perceived high-energy photon counts all the more in the fit. It seems
to me, however, that since the effect is well understood (a small,
fixed probability of each photon's energy being added to that of the
next one), some iterative process could perhaps be devised to correct
for it, at least in cases of modest pile-up. I was wondering if
anybody had worked on this and, if so, whether any solutions were
arrived at. Or at least some sensible way to estimate how much
a simple absorbed power-law spectrum would be changed by the effects
of pile-up for a given count rate, nH, and photon index.
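For what it's worth, the migration I have in mind can be illustrated with a quick Monte Carlo sketch. This is only a toy forward model, not a fit or a correction scheme, and the photon index, pile-up fraction, and energy band below are illustrative assumptions, not values from any real observation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not from any real dataset):
photon_index = 1.7        # power-law photon index Gamma
pileup_frac = 0.07        # probability a photon merges with the next one
n_photons = 200_000
e_min, e_max = 0.5, 10.0  # energy band in keV

def sample_powerlaw(n, gamma, e_min, e_max, rng):
    """Draw photon energies from dN/dE ~ E^-gamma via inverse transform."""
    u = rng.random(n)
    a = 1.0 - gamma
    return (e_min**a + u * (e_max**a - e_min**a)) ** (1.0 / a)

def pile_up(energies, frac, rng):
    """With probability `frac`, add each photon's energy to the next
    photon's and record them as a single event."""
    out = []
    i = 0
    while i < len(energies):
        if i + 1 < len(energies) and rng.random() < frac:
            out.append(energies[i] + energies[i + 1])
            i += 2
        else:
            out.append(energies[i])
            i += 1
    return np.array(out)

e_true = sample_powerlaw(n_photons, photon_index, e_min, e_max, rng)
e_obs = pile_up(e_true, pileup_frac, rng)

# Pile-up both removes soft events and creates spurious hard ones,
# so the hard-band (>2 keV) fraction of events goes up.
hard_true = np.mean(e_true > 2.0)
hard_obs = np.mean(e_obs > 2.0)
print(f"hard fraction: true={hard_true:.3f}, piled={hard_obs:.3f}")
```

Even at these modest numbers the hard-band fraction shifts noticeably, which is the sense in which a flat-spectrum fit would be biased toward a harder photon index.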
Cheers,
Mallory
This archive was generated by hypermail 2b29 : Tue Feb 12 2013 - 01:00:10 EST