Modern automated fluorescence microscopes are complex devices, dedicated to producing accurate representations of objects at a resolution exceeding the resolving power of the human visual system by up to about 1000-fold. The fidelity and efficiency of this imaging process critically depend on the accurate ensemble performance of various optical and electronic components. Because the quality of the final image depends not only on the performance of the microscope but also on properties of the specimen, small performance flaws are often difficult to spot directly in any microscope image.
The use of the microscope response to sub-resolution light sources is a well-established method for assessing instrument performance. Displaying and analyzing the images, extracting the relevant properties, comparing measurements to theoretical values, and monitoring changes in images and parameters over time, however, are laborious tasks. Automating these tasks dramatically reduces the workload and allows the method to be applied frequently.
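The comparison of measured to theoretical values can be illustrated with a short sketch. The formula used here, lateral FWHM ≈ 0.51 λ/NA for a widefield point spread function, is a common rule of thumb and not taken from the article; the function names and the pass/fail threshold are likewise illustrative assumptions.

```python
def theoretical_lateral_fwhm(wavelength_nm: float, na: float) -> float:
    """Approximate lateral FWHM of a widefield PSF.

    Uses the common rule of thumb FWHM ~ 0.51 * lambda / NA;
    the article's macro stores precomputed reference values instead.
    """
    return 0.51 * wavelength_nm / na

def performance_ratio(measured_fwhm_nm: float, wavelength_nm: float, na: float) -> float:
    """Ratio of measured to theoretical FWHM; values near 1.0 suggest
    nominal performance, markedly larger values hint at a flaw."""
    return measured_fwhm_nm / theoretical_lateral_fwhm(wavelength_nm, na)

# Example: a 250 nm measured FWHM at 520 nm emission with a 1.4 NA objective
ratio = performance_ratio(250.0, 520.0, 1.4)
```

A simple threshold on such a ratio (e.g. flagging values above ~1.2) is one way to turn the weekly measurement into an automatic pass/fail check.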
With the aim of distributing a procedure for standardized equipment quality assessment within a community of partner sites, we chose the public-domain image-processing software ImageJ and assembled a set of semi-automatic image-processing routines. The macro generates orthogonal projections from bead images along the lateral and axial dimensions, which are displayed using a customized look-up table to color-code intensities. A Gaussian curve is fitted to the intensity profile of a fluorescent bead image, and full-width-at-half-maximum (FWHM) values are extracted. FWHM values of ideal bead images for various optical conditions were generated using the 'PSF Generator' or the HuygensPro software and stored within the macro. These values are displayed alongside the measured FWHM values to provide a reference for the relative performance of the equipment (Fig. 1).
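The core of such an analysis, fitting a Gaussian to a bead intensity profile and converting the fitted width to an FWHM, can be sketched in a few lines. This is not the distributed ImageJ macro itself but a minimal Python equivalent, assuming a one-dimensional profile and using the exact relation FWHM = 2·√(2 ln 2)·σ for a Gaussian.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    """Gaussian peak on a constant background."""
    return amp * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) + offset

def fwhm_from_profile(x, intensity):
    """Fit a Gaussian to an intensity profile and return its FWHM
    (same units as x). FWHM = 2*sqrt(2*ln 2) * sigma for a Gaussian."""
    p0 = [intensity.max() - intensity.min(),   # amplitude guess
          x[np.argmax(intensity)],             # center guess
          (x[-1] - x[0]) / 10.0,               # width guess
          intensity.min()]                     # background guess
    popt, _ = curve_fit(gaussian, x, intensity, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])

# Synthetic bead profile with a known 200 nm FWHM
x = np.linspace(-1000.0, 1000.0, 201)  # position in nm
true_sigma = 200.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
profile = gaussian(x, 1.0, 0.0, true_sigma, 0.05)
measured_fwhm = fwhm_from_profile(x, profile)
```

On real bead images the profile would be extracted along a line through the bead center in the orthogonal projections; noise and sampling then limit the precision of the fitted FWHM.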
Over a period of several months, we analyzed the performance of four heavily used upright and inverted microscope setups. Our systematic study revealed that objectives mainly suffered from careless use of automated microscope stages, particularly when different stage inserts and sample holders are used.
Imaging & Microscopy, Issue 4, 2012 (available as free epaper or PDF download)
Front lenses of objectives are indeed very sensitive to scratches and impacts, and can easily be irreversibly damaged. This implies that at sites where microscopes are heavily used by customers with widely differing experimental questions and professional skills, such as in microscopy facilities, the performance of objective lenses must be assessed regularly. We found that checking performance at weekly intervals was a good compromise between workload and reaction time for spotting performance flaws on systems that are used for more than 1500 hours per year.
The routine assessment also enabled us to immediately spot a number of performance shortcomings not originating from damaged objective lenses, such as a defective beam splitter (Fig. 3), vibrations generated by a broken camera fan, uneven illumination of the back focal plane in a spinning-disc confocal microscope, and fidelity flaws of a piezo stage.
The macro has been distributed to several scientific institutions in Europe and in the US, and has been well received as a tool in a number of imaging centers. A site list is available upon request.
Other procedures for testing microscope performance have been presented earlier, for example the generation of Sectioned Imaging Property (SIP) charts. The SIP-chart procedure reveals information about the axial resolving power of a microscope at each position in the field of view and about the uniformity of field illumination, alongside several other very useful parameters. It does not, however, allow assessment of the lateral resolving power. The preparation of suitable samples for SIP charts is quite demanding, so we relied on kind gifts from the Brakenhoff lab. The analysis of the images is web-based and requires uploading large sets of image data over the internet. The SIP-chart method is suited to assessing sectioning systems, but not conventional microscope performance. The routine presented here complements the SIP-chart method: it uses easy-to-prepare samples and reveals information about the apparent lateral and axial resolving power. The macro runs on any desktop computer, and the method applies to both sectioning and non-sectioning systems. It does not, however, provide information about the axial resolving power of a microscope over its entire field of view, or about the uniformity of field illumination.
The macro is freely available from the authors upon request. We appreciate any feedback on its performance.
Friedrich Miescher Institute
Tel: +41 61 697 3012
Fax: +41 61 697 3976