Report of the January 31, 2002 CUC meeting

The thirteenth meeting of the CUC took place on January 31, 2002 at CfA. In attendance were Drs. K. Arnaud, M. Arnaud, Y-H Chu, M. Donahue, P. Henry, J. Hughes, J. Mohr, J. Turner, D. Kniffen (NASA HQ), A. Tennant (MSFC), B. Wilkes (CfA) and H. Tananbaum (CfA); the meeting was chaired by S. Kulkarni. A. Cool joined the meeting via telecon. K. Koyama was unable to attend the meeting.


As usual, the committee enjoyed Roger Brissenden's presentation of the status of the Observatory. We were happy to learn that the Observatory continues to function without any significant problems. Owing to solar activity (this being a double-maximum cycle), the observing efficiency of Chandra, which had been near 70% (close to the theoretical maximum), has hovered around 60%. This will stretch the current cycle, and we understand the upcoming cycle will likely start later than November 2002.

Roger (at our request) gave a detailed analysis of the time lag between an observation and the user's access to the data. The mean lag is 7 days, with turnarounds as fast as one day (TOOs) and maximum delays of 13 days. We thank Roger and his crew for this outstanding work.

[1] Occasionally a few datasets are delivered well beyond the mean lag of 7 days. We suggest that each OBSID have a built-in alarm so that the appropriate scientists are informed when an OBSID has been unduly delayed.
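As a purely illustrative sketch of such an alarm (the record format, OBSID values, and 13-day threshold here are hypothetical, the threshold merely echoing the maximum delay reported above), a periodic script could scan the processing records and flag overdue datasets:

```python
from datetime import date, timedelta

# Hypothetical record format: (obsid, observation_date, delivered).
# The 13-day threshold echoes the maximum delay reported above.
ALARM_THRESHOLD = timedelta(days=13)

def overdue_obsids(records, today):
    """Return OBSIDs whose data are still undelivered past the threshold."""
    return [obsid for obsid, obs_date, delivered in records
            if not delivered and (today - obs_date) > ALARM_THRESHOLD]

records = [
    (700123, date(2002, 1, 10), False),  # 21 days old, undelivered -> alarm
    (700124, date(2002, 1, 25), False),  # 6 days old -> still within norm
    (700125, date(2002, 1, 5),  True),   # already delivered -> no alarm
]
print(overdue_obsids(records, today=date(2002, 1, 31)))  # -> [700123]
```

The output of such a scan could then be mailed to the appropriate scientists, which is all the recommendation asks for.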


Again at our request, we were given a summary of the Chandra press releases. We note with some pleasure the particularly wide coverage accorded to Daniel Wang's mosaic of the central regions of the Milky Way (the data required for this image mosaic resulted from a proposal in the "large project" category). We were thrilled to hear that over the past 12 months two Space Science Updates were based on Chandra discoveries.

We understand that the Director keeps a keen eye out for results that could lead to interesting press releases. While the CXC regularly informs users that it is interested in their results, not all PIs are well informed about which results could be newsworthy.

[2] We recommend that more channels be opened by which either the Director or the CXC Outreach office can learn of potentially interesting results. Possibilities include charging CUC members to take some responsibility for identifying potentially appropriate science results and sensitizing PIs of successful large proposals to this issue.


Last Fall, the nearby neutron star RXJ 1856.5-3754 was observed for nearly half a million seconds under the auspices of the DD program. The data were made public and several groups analyzed the data. This was a bold investment of Chandra time in an important target and area of focus. The lack of any pulsations and spectral features, while disappointing, serves to emphasize the bizarre nature of the nearest neutron star.

The DD program appears to have been responsive to a wide range of unanticipated (or at least not proposed) TOOs.


The Chandra mission was approved initially for five years (PRIME phase). This phase will end after cycle 5. NASA HQ has approved in principle the extension of the mission for an additional five years. In Fall of this year, NASA HQ will start negotiating with CXC and the GTO teams a contract for operation beyond the prime phase.

The original contract clearly states the fraction of time allocated to the instrument teams (this is the origin of the GTO allocation).

However, the document (as we understand it) is subject to interpretation about the GTO allocation beyond prime phase. Don Kniffen, representing NASA HQ, sought our opinion on this matter.

Clearly the CUC cannot (and will not) comment on the legal issues. The instrument teams have intimate knowledge of the instruments and provide service to the CXC under a contract from HQ. It is not entirely clear how the continuation or termination of GTO beyond prime phase would affect the instrument teams and Chandra operations. The CUC attempted to begin addressing this question by using CXC statistics provided by Fred Seward to examine GTO versus GO success in AO-3. The statistics indicate that GTO proposals won targets with approximately three times higher success rates than GO proposals. This suggests that the fraction of Chandra time going to the instrument teams might not change much if GTO were terminated at the end of prime phase. Additionally, there is broad agreement that the fiscal and scientific fitness of the instrument teams must be maintained as the Chandra mission moves beyond prime phase.

The CUC agreed to gather additional information before the next meeting regarding the effects of terminating GTO at the end of prime phase. A recommendation on GTO beyond prime phase will be issued following that discussion.


From time to time, there has been discussion (especially at NASA HQ) of a reduced proprietary period. To this end, Kulkarni made the following proposal: the default proprietary period would be set at 6 months (instead of the current 12-month period), but the proposer would be free to request, upon proper justification, a proprietary period of up to 12 months. The rationale for this new rule is that the Chandra mission is now mature and that, unlike other missions (e.g. HST), Chandra typically observes a much smaller number of targets, fewer than a thousand per year. Thus a shorter proprietary period would increase the amount of data in the archives (and thereby also allow results obtained in the current AO to play a role in the optimization of observations in the next AO).

The proposed motion did not find much support among CUC members. It was argued that users from smaller colleges (with heavier teaching loads) and young researchers (who do not have the luxury of a large standing research team) would be particularly disadvantaged by a shorter proprietary period. Furthermore, proper analysis of many datasets is still limited by the significant calibration uncertainties (see below).


Nominally, the large proposals are allocated 20% of the GO time and the regular proposals 75% of the time. The remaining 5% can be allocated to either category. Starting in AO-3, the procedure to allocate this 5% is as follows. The merging panel first considers the large proposals and rank-orders them. The chairs of the regular (subject-based) panels bring their "gray area" proposals (i.e. the best one or two proposals which were not selected for observation because of a lack of available observing time) to the merging panel. The merging panel then weighs the large proposal (usually only one) that lies in the gray area against the basket of gray-area proposals from the subject-based panels. A merging panel vote then determines whether the large proposal or the group of regular proposals is selected for the remaining 5% of observing time.

[3] We concur and reaffirm the current practice.


The current approach to handling GTO target allocation was also discussed. Starting with AO-3, GTO teams are no longer allowed to reserve targets that GOs also find interesting. GTO teams submit confidential target lists before the GO proposal deadline. After the deadline, a search is carried out for targets (and instruments) that both GTO and GO proposals have in common. GTO teams are given the option of dropping the conflicted targets from their lists or competing head to head with GOs for those targets. GTO teams are given a week to submit proposals for those conflicted targets.

As mentioned above, Fred Seward provided statistics on this first round of GTO versus GO competition. The success rate of the GTO proposals is difficult to compare to the GO proposal success rate, because of the small numbers. However, an analysis of the target success rate is possible, because the number of conflicted targets is much higher. An analysis of the statistics indicates that the GTO success rate in winning a conflicted target is three times higher than the GO success rate in winning a conflicted target. Possible explanations for this large difference in success include: (1) the current process of target selection favors GTO proposals and (2) GTO teams are much stronger, on average, than GO teams.

Comparing the scientific strength of GO and GTO teams is clearly beyond our means, so the CUC discussed components of the current target selection process, which might favor GTOs. It was suggested that the fact that a panel that accepts a GTO proposal is not "charged" with any time for that observation may bias panels toward GTO proposals. It was pointed out that self selection by GTO teams (i.e. GTO teams only choose to write proposals for those conflicted targets which they presumably believe they have a high chance of winning) would enhance GTO success rates.

[4] We examined possible changes to this process. Having GTO teams write proposals for all their targets might provide the most equitable treatment for GOs and GTOs. However, this seems wasteful, given that a large fraction of the GTO targets will be unconflicted and automatically granted time. However, review panels do receive GO proposals from GTO team leaders in addition to the GTO proposals, and these GO proposals from GTO team leaders are treated no differently from the bulk of GO proposals. Moreover, it simply is not necessary for the review panels to know which proposals are GTO and which are GO, because determining whether to charge the time in a given proposal to the GO or GTO pool can all be handled afterwards. Therefore, we recommend that the GTO proposals be handled in the same way as GO proposals. Review panels will simply rank GO and GTO proposals with the expectation that a certain number of the proposals are indeed GTO proposals. After the panel rankings are complete, the bookkeeping to determine whether the time for a given observation is charged to the GO pool or a particular GTO pool will be carried out by the CXC.


Many CUC members would like to see a link between observations in the archive and publications which used those data. Belinda Wilkes discussed implementing these links. We recognize that there is significant human labor required to create the list of publications, but Belinda suggested that perhaps some of this information could be required fields in future Chandra proposals.

[5] The CUC wholeheartedly supports the suggestion by B. Wilkes of having all future Chandra proposals require the PI to list previous Chandra observations they have been granted together with a list of their publications that use those data.

Wilkes raised the prospect of allowing multi-year proposals in Cycle 5. The CUC did not have time to discuss the proposal or to understand the full scope of the proposed option. However, a number of observations would benefit from this option, e.g. astrometry, timing, and monitoring. We would like to hear further details at the next CUC meeting.

ChaSer and WebChaSer:

We were pleased to have the web version of ChaSer available for review. We understand and agree with the CXC strategy to build a web interface and a Java interface to the same services, as each interface serves the needs of certain audiences. We would like to emphasize that the CUC is most interested in the web browser version of the interface. We hope that the ChaSer interface fulfills the needs of the internal users and the users who need advanced access. We also hope that the CXC keeps the CUC up to date on ChaSer improvements and long-term plans, especially plans regarding access to the Level 3 products and coordination with analysis software. We expect that such development should be prioritized in context with other software development projects relevant to users, and data analysis in particular. We applaud the addition of a name resolver and a radial coordinate search to WebChaSer, and we hope that the new version is released very soon. We encourage the CXC to ask the CUC members to review new versions of the interface (and if there are specific new features we should test, tell us what they are; such a request is more targeted than simply asking us to check it out).

[6] Our main suggestions regarding the archive interfaces are: to add the capability of searching by target category, to add NED to the choice of target resolvers, and to add a "suggestions for improvements" type of link to the interfaces. An "advanced" version of the WebChaSer interface should allow searches which query all parameters associated with the data headers and the proposal inputs. We also request that ChaSer provide a web service that would allow retrievals and basic data searches via alternative interfaces, such as the HEASARC interface and StarView. Such services are not difficult to implement given the proper contacts and coordination, and are a basic capability expected of a 21st-century archive center. (Niall Gaffney is the Java developer for StarView.) Finally, the CUC asks that direct data access (i.e. a direct data deposit to the users' disk) be made available through WebChaSer. If both interfaces access the same backend archive services, it does not seem impossible to open an ftp connection from those services, regardless of whether the Java interface or WebChaSer is used to make the request.


Larry David summarized the inflight calibration program from launch to AO-3. The plate focus, boresight, optical axis, and PSF are all considered stable and no calibration observations have been made since AO-2. The ACIS and HRC filter transmissions are monitored using observations of Vega (for UV) and Betelgeuse (for red leak). The low energy ACIS response is tracked by raster scans of E0102 and LETG observations of PKS2155. The HRC QE versus energy is being more precisely measured using LETG observations of HZ43 and PKS2155 and the gain monitored by raster scans of AR Lac. The absolute QE of ACIS and HRC are monitored using observations of Cas A and G21.5. The grating dispersion relation and line spread function are tracked using observations of Capella and the QE versus energy using 3C273 and PKS2155 with the addition of HZ43 for the LETG/HRC-S combination. The total integration time for all AO-3 calibration observations is 880 ksec.

The calibration group believes that a few other observations are necessary. Three additional E0102 positions (24 ksec total) on I3 node 0 would provide better measurements of the effects of radiation damage at the position where it was most severe. A 100 ksec observation of 3C273 or Her X-1 would provide a better measurement of the wings of the PSF. A 40 ksec HRC-I integration on the Vela remnant would map the low energy QE uniformity on an arcminute scale. Finally, an additional 50 ksec of HETG/ACIS-S observations of 3C273 at different SIM_Z offsets would provide better QE versus wavelength measurements at pointing positions used by a number of guest observers.

Dick Edgar presented various aspects of the ACIS calibration. A few minor problems have turned up in the released S3 -120 response. Examination of the E0102 calibration observations shows that a +16 eV zero-point gain shift is required to fit the oxygen emission lines. The current gain relation does not include a jump at the Si edge, and this, in combination with sparse sampling, causes spurious features in high-S/N spectra in the 1-2 keV region. Both problems should be fixed in new FEFs, hopefully included in the planned March CALDB release. The same release should also include FEFs for S3 at -110. The response for S1 has been improved; the new version is better than the current release for HETG order sorting but is still not good enough for imaging spectroscopy.

An attempt to generate FEFs for non-CTI corrected data from the FI chips was not successful. Work has now started on generating FEFs for CTI-corrected data with the aim of a possible release in the summer. However, there are node-to-node variations in the E0102 observations that are not understood at present. [Following the completion of the report we became aware that the node-to-node variations in E0102 are now understood to be in PI space and not the instrument-produced PHA.]

The spectrum of the ACIS particle background has been determined using event histogram data accumulated when HRC-S is in focus and ACIS is screened from cosmic X-rays and its calibration source. The resulting spectrum is consistent with that obtained from the dark moon observation (incidentally showing that the ROSAT detection of X-rays from the dark moon was actually geocoronal). A memo with details is available from the calibration section of the web site.

Chandra ACIS-S3 and XMM-Newton EPIC spectra of G21.5, E0102, and MS1054.4-0321 (a high redshift cluster) have been compared in collaboration with the XMM-Newton project. The relative fluxes are in excellent agreement, with differences at the 5% level or lower. The spectral shapes are broadly similar, with the largest discrepancy being in the column measured to G21.5. The ACIS-S3 value is about 2 x 10^21 cm^-2 larger than that for XMM-Newton EPIC.

Terry Gaetz reported on progress on the calibration of the on-axis PSF. Observations of 3C273 (ACIS-S3), LMC X-1 (ACIS-I), and AR Lac (HRC-I) have been compared with ground calibration results. The core is well understood but there are still residual uncertainties in the wings probably due to ground calibration systematics. An additional 100 ksec ACIS-I observation of 3C273 would calibrate the PSF wings more accurately and improve the ability to measure dust scattering halos and low surface brightness emission around bright point sources.

Herman Marshall described the state of the HETGS calibration. The main issue is the QE versus energy for the ACIS chips. The ratio of BI/FI data shows that systematic errors are <10% above 1 keV but rise to around 20% at 0.6 keV. The plan is to correct the BI QE and publish the corrections. Comparing MEG and HEG data has led to small corrections in the MEG efficiencies which will be tested using the observations of PKS2155 and 3C273.

Hank Donnelly presented recent work on the HRC imaging calibration. The HRC team continues to make incremental improvements. The only major issue is in the QE uniformity at low energies. An observation of the Vela remnant has been proposed to characterize this.

Jeremy Drake reported on the status of the LETG calibration. The ground calibration left more uncertainties in the calibration for the LETG than the HETGS. The CXC team has taken a conservative approach and not adopted the technique advocated by the LETG team at SRON of modifying the efficiencies based on observations. The two calibrations differ by <15% when averaged over wavelength bands; however, there are larger differences over small wavelength ranges.

The dispersion relation for the LETG+ACIS-S observations of Capella indicates a different dispersion relation for the ACIS-S and HRC-S detectors. This may mean an error in the current pixel size for either or both detectors.

The line response function of the LETG is well described by a beta model, I(lambda) = [1 + (lambda/lambda_0)^2]^beta, with beta = -2.5 +/- 0.2.
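For illustration, the quoted beta-model profile is easy to evaluate numerically; a minimal sketch follows (the core width lambda_0 used below is a placeholder value, not a number from the report):

```python
def letg_lrf(delta_lambda, lambda0, beta=-2.5):
    """Beta-model line response: I = (1 + (delta_lambda/lambda0)**2)**beta.

    With the reported best-fit beta = -2.5, the profile is normalized to 1
    at line center and its wings fall off as |delta_lambda|**(2*beta).
    lambda0 sets the core width (placeholder value, illustrative only).
    """
    return (1.0 + (delta_lambda / lambda0) ** 2) ** beta

print(letg_lrf(0.0, lambda0=0.05))   # -> 1.0 at line center
print(letg_lrf(0.05, lambda0=0.05))  # one core width out: 2**-2.5, ~0.18
```

This makes explicit that the stated exponent controls only how steeply the wings decay, while lambda_0 carries the core width.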

Committee comments on the calibration presentation:

We thank the calibration group for their presentations and the hard work being done to make Chandra observations as scientifically productive as possible. We consider the calibration, especially of ACIS, the single most important unresolved issue facing the CXC.

[7] We were concerned about the science justification for additional calibration observations. It was not always clear to us whether the proposed observations were necessary for scientific results or would just allow some aspect of the system to be measured more precisely.

[8] We were very pleased to hear that a variation of the PSU CTI-corrector is now implemented within CIAO. We note that it would be useful to release this even if the calibration information is not yet available because it can be used to make narrow energy-band images in ACIS-I. We also note that the PSU CTI-corrector works on both parallel and serial CTI and hope that the CXC version will also do so.

[9] We appreciate the tremendous amount of work going into generating the responses for the ACIS chips. However, we do wonder whether resources are being used in an optimum manner and are concerned that the timescale to generate new responses is approaching that on which the response itself changes due to further radiation damage. Fitting multiple gaussians to simulation output is intrinsically unstable and requires a lot of manual intervention to give usable results. Alternative methods would include using a more physically motivated model or building the response directly from the simulations.

[10] We would like to see some thought given to making calibration information available in tabular form on the web. The spirit of this request is that some products are of immediate value to the community, and putting out such products would save an interested user from re-reducing the calibration data. A case in point is the PSF wings, which are not in CALDB but are a by-product of Gaetz's analysis. Thus an ASCII table of the data would be a quick solution. Longer-term fixes include adding to CALDB and/or the proposed web interface to the SAOSAC ray trace.

[11] Andrea Prestwich presented plans for a new version of the POG. This version will include many examples, primarily as a teaching aid for new researchers. We applaud Andrea's enthusiasm and the hard work she is already putting into the effort. We understand that the new POG will be web based. We have one request: the POG should not change substantially from AO to AO, except of course to update performance and other numbers.