Husson gave an introduction to this pilot project (Appendix 8). Its overall goal is to identify and (help) eliminate blunders in the software and/or processing of individual analysis groups that want to contribute to official ILRS products. Specifically, a test procedure is in the making that will give a pass/fail judgement on an analyst’s treatment of a particular test dataset of observations and on the quality of the results.
To this end, four different solution types have been defined (A-D), with varying degrees of freedom for the analyst. To judge the characteristics and quality of each solution handed in for evaluation, five types of criteria are proposed: range corrections, orbit solutions, EOP solutions, station coordinate solutions, and residuals. So far, seven analysis groups have provided solutions.
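The pass/fail idea can be sketched as a tolerance check per criterion. The following is a minimal illustration only: the criterion names and tolerance values are assumptions for the sketch, not the thresholds actually adopted by the project.

```python
# Hypothetical sketch of a per-criterion pass/fail evaluation.
# Criterion names and tolerances are illustrative assumptions (in metres).
TOLERANCES = {
    "range_correction": 0.001,   # assumed 1 mm tolerance
    "station_height": 0.01,      # assumed 1 cm tolerance
}

def evaluate(solution, reference, tolerances=TOLERANCES):
    """Return a pass/fail flag per criterion by comparing an analyst's
    values against the (arbitrary) reference solution."""
    verdict = {}
    for name, tol in tolerances.items():
        verdict[name] = abs(solution[name] - reference[name]) <= tol
    return verdict
```

A solution would then pass overall only if every criterion passes.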
Husson gave a flavor of the current status of the contributions by comparing them, taking the JCET solution as an (arbitrary) reference (cf. Appendix 8). The orbit solutions are rather diverse: the products coming out of the “A” computations (direct integration) show along-track differences that may build up to several mm, but also to several meters (similarity of software does not necessarily play a decisive role here). In the “B” solutions, where the orbit is fitted, along-track orbit differences typically reach the level of several decimeters. A similar degree of inconsistency is observed for the “C” orbits (solving for other parameters as well; computation model still prescribed), with the exception of the IAAK results, which tend to reach several meters. The “D” results (free computation model) are diverse, with differences building up to decimeters (GFZ, GEOS) or meters (ASI, DGFI).
The range corrections are typically very consistent: differences are about 0.1 mm at most. The differences in the residuals reflect orbit differences to a large extent, and show patterns similar to those reported for the orbits. As for station coordinates, this comparison addressed the vertical component only. The “C” results may show differences of up to 5 cm, whereas the “D” results are consistent to 1 cm (with the exception of the DGFI results, which apply a different range-bias treatment with corresponding effects on the station heights). The comparison of the EOP results identified a misinterpretation of the a priori values in the SINEX files generated by JCET (action item for the analysts).
Müller reported on the status of activities for the DGFI contributions to this benchmarking project (Appendix 9). At the moment DGFI is unable to contribute “A”, “B” or “C” solutions, since the prescribed computation model is not yet fully implemented in the DGFI software. This holds for ocean tides and loading, acceleration modeling, the C2,1 and S2,1 terms of the gravity field, the model for solar radiation pressure, and geocenter motion. The LOD representation is also still an issue. DGFI is in the process of including proper representations of these model elements, and expects to “deliver” within a few weeks.
Pavlis reported on his comparisons of contributions to the benchmark project (Appendix 10). When comparing the x/y/z components of the orbit solutions by ASI and JCET, good agreement was observed for the A/B/C solutions, but differences of up to 50 cm were found for the “D” type. The comparison of the JCET solutions with the GEOS solutions yielded a discrepancy of 200 cm for the “A” solutions, and values up to about 10 cm for the B/C/D solutions. The comparison with the GFZ results showed differences of up to 100 cm for the “A” orbits, and about 10 cm for the B/C/D results.
As an alternative, Pavlis also looked at the differences in the radial, cross-track and along-track directions (for both position and velocity components). This basically confirmed the problems identified in the x/y/z comparison (ASI “D” and GEOS “A”). A third option is to compare in terms of Keplerian elements; this brought to light a consistent 28 mm offset in the semi-major axis of the solutions provided by NERC.
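The decomposition into radial, cross-track and along-track components follows directly from the reference orbit's position and velocity vectors. A minimal sketch of that standard transformation (the function name and the sample vectors are illustrative, not taken from Pavlis's software):

```python
import numpy as np

def rtn_difference(r_ref, v_ref, r_cmp):
    """Express the position difference (r_cmp - r_ref) in the
    radial / along-track / cross-track frame of the reference orbit."""
    r_hat = r_ref / np.linalg.norm(r_ref)      # radial: along the position vector
    c_hat = np.cross(r_ref, v_ref)
    c_hat = c_hat / np.linalg.norm(c_hat)      # cross-track: along the orbit normal
    a_hat = np.cross(c_hat, r_hat)             # along-track: completes the triad
    d = r_cmp - r_ref
    return np.array([d @ r_hat, d @ a_hat, d @ c_hat])
```

The same triad applied to velocity differences gives the velocity components mentioned above. This frame is what makes a semi-major-axis offset visible: it accumulates as a secular along-track drift rather than being spread over x/y/z.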