Table 1 reports the down time statistics for the period 2002-03-12 to 2002-09-17, where time lost reported as 1 hr is counted as 1 hr. Faults were reported in a total of 49 End-Of-Night reports, with an average of 55 min lost per fault. Of these, 27 reported no time lost, 15 reported 1 hr lost, and 7 reported 2 or more hours lost.
| Nights included | Time lost | Nights | Percentage* | Last period |
|---|---|---|---|---|
| All nights | 45 hr | 188 | 3.0% | 1.9% |
| Scheduled observing nights** | 30 hr | 137.5 | 2.0% | 1.2% |
| Technical nights | 15 hr | 32.5 | 5.8% | |
| Visitor instruments | 0 hr | 18 | 0% | |

*) Assuming an average of 8 hr per night.
**) Excluding technical nights and visitor instruments.
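For illustration, the percentages in Table 1 follow from dividing the time lost by the available time, using the average night length of 8 hr quoted in the table footnote. The short Python sketch below shows this arithmetic for the all-nights and technical-nights rows, together with the average time lost per fault; it is purely illustrative and not part of the fault reporting system.

```python
# Illustrative sketch: deriving the Table 1 percentages and the average
# time lost per fault. Assumes an average night length of 8 hr, as stated
# in the footnote to Table 1.

AVERAGE_NIGHT_LENGTH_HR = 8.0

def down_time_percentage(time_lost_hr, nights):
    """Time lost as a percentage of the total available time."""
    available_hr = nights * AVERAGE_NIGHT_LENGTH_HR
    return 100.0 * time_lost_hr / available_hr

# Values taken from Table 1.
print(f"All nights:       {down_time_percentage(45, 188):.1f}%")   # ~3.0%
print(f"Technical nights: {down_time_percentage(15, 32.5):.1f}%")  # ~5.8%

# 49 End-Of-Night reports mentioned faults; 45 hr were lost in total.
average_min_per_fault = 45 / 49 * 60
print(f"Average time lost per fault: {average_min_per_fault:.0f} min")  # ~55 min
```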
This compares to a down time of 1.9% over all nights (1.2% on scheduled observing nights) in the period 2001-09-13 to 2002-03-12. In that period 48 faults were reported, of which 35 reported no time lost, 8 reported 1 hr lost, and 5 reported 2 or more hours lost. The main difference between the current and the previous reporting period is the time lost due to the problems with the primary mirror radial supports (see below).
Listed below are those faults for which 2 or more hours were lost. In principle, repetitive errors that lose little time individually but occur frequently would also be included here.
(Technical night.) The encoder of the rotator failed because of lightning during a snow storm. The encoder was replaced, which, due to its position, is a very time-consuming operation.
(Technical night.) The multiplexer between the instrument and the computer died, and the FAPOL software had been wiped off the disk. Both problems were probably also caused by the lightning of the previous night. As this was a mix of problems, it took some time to sort out the real causes. In the end, the FAPOL software was re-installed and the multiplexer was replaced. The original multiplexer was later repaired.
(Technical night.) Various problems occurred when restarting observations after the re-aluminised mirrors were installed on the telescope. The problems included a broken optical fiber between NOTCam and the computer, computer problems, and problems with the drying device for the compressed air of the mirror support system, which needed cleaning; in addition, the lack of a pointing model slowed down the observations.
(Technical night.) Time was lost because the observer had problems with the catalogue with blank fields on the TCS. Detailed instructions exist in the manual (also available on the web). Furthermore, the support astronomer was not contacted at the time; had this been done, the amount of time lost would normally have been much more limited.
(Technical night.) For unknown reasons the serial port on the computer on which the ALFOSC UIF is running was not configured properly. After reconfiguring the port things worked fine.
During the afternoon it was found that 4 of the 10 lower radial supports of the primary mirror had come unglued. This was detected during altitude tests with the new TCS, but it is not clear whether these tests caused the problem. However, before the re-aluminised primary mirror was reinstalled, some rubber blocks which physically limited the vertical movement of the primary mirror were removed. It is believed that this may have allowed the mirror to move upward too much with respect to the radial supports. Such a movement would create a large torque on the invar pads that connect the radial supports to the mirror, which could have caused them to break loose.
During the afternoon and the start of the night, preparations were made to glue the invar pads back onto the mirror, after which they were left to dry.
Tests during the day had shown that the glue had not dried completely, and experience has shown that glue typically takes three times longer to dry at the observatory than under normal conditions. To be absolutely sure that the pads would stay glued, the drying time was extended to 24 hr, compared to the normal 8 hr.
In the meantime, the rubber blocks that limit the movement of the mirror have been reinstalled. Detailed checks have not shown any problems with either the newly glued pads or the other pads. The sheets of glue recovered from the pads that came loose are being examined to determine whether deterioration of the glue might have contributed to the pads coming loose, which would imply that the other pads might fail soon as well.