With this blog, we continue a short series on comparing camera performance. Some of this can be done by comparing manufacturer-provided specification sheets. Usually, once you have narrowed down your selection, you will want to evaluate the cameras yourself and perform measurements that allow a quantified comparison. This can be a challenging task, because the measurements must be relevant to your application.

Here are just some parameters that quantify camera performance (please leave a comment with any other suggestions):

- **MTF** - An indicator of the camera's ability to resolve fine detail
- **Sensitivity** - An indicator of low-light performance
- **Quantum Efficiency** - Number of electrons created per incident photon
- **Dark Current** - Thermally generated signal in the absence of light (electrons per pixel per second)
- **(Non)Linearity** - Percentage deviation from the ideal linear response of an image sensor
- **Responsivity** - Image output voltage per incident light (volt/lux)
- **Full Well Capacity** - The maximum number of electrons per pixel
- **Dynamic Range** - An indicator of the ability to image over a wide range of illumination levels

On September 12th, we talked about MTF; now we will discuss sensitivity.

In practice, often the quantum efficiency (QE) is used to evaluate the sensitivity of an image sensor or camera. The QE is the probability that a photon generates an electron in a sensor at a given wavelength.

In a way it describes the expected response of the imager to light, in electrons. However, this signal is only reasonably detectable if it exceeds the noise level! Therefore, the QE by itself is an insufficient measure of sensitivity.

Alternatively, the EMVA 1288 standard offers a formal definition of "absolute sensitivity threshold," which is the mean number of photons required so that the Signal to Noise Ratio (SNR) equals one. In practice, this definition is not straightforward to use, as it requires detailed data not only on the quantum efficiency but also on the conversion gain and the illumination spectrum at hand.
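To make the SNR = 1 definition concrete, here is a minimal sketch of the threshold calculation. It assumes a simplified noise model with only photon shot noise and read noise (dark current and quantization noise neglected); setting SNR = η·μp / √(σd² + η·μp) = 1 and solving for μp gives the expression below. The numbers in the example are illustrative, not from any sensor in this blog.

```python
import math

def absolute_sensitivity_threshold(qe, read_noise_e):
    """Mean number of photons at which SNR = 1, assuming only
    shot noise and read noise (in electrons RMS) contribute.

    Derived from: qe * mu_p / sqrt(read_noise_e**2 + qe * mu_p) = 1
    """
    return (0.5 + math.sqrt(read_noise_e**2 + 0.25)) / qe

# Illustrative values: QE = 0.55, read noise = 3 e- RMS
threshold_photons = absolute_sensitivity_threshold(0.55, 3.0)
print(round(threshold_photons, 1))  # -> 6.4 photons
```

Note that for read noise well above 1 e-, the threshold is approximately (read noise + 0.5) / QE, which makes the dependence on the QE/read-noise ratio discussed below explicit.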

As indicated, the QE governs the sensor's response to light, while the read noise sets the lower noise limit and therefore the minimum required signal. As a consequence, the ratio of QE to read noise is a useful measure for comparing sensitivity: it allows a quick comparison of imagers using only the datasheet numbers for QE and read noise.

In a previous blog comparing CCD vs. CMOS image sensors we showed results from measurements on a few different sensors.

**Figure 1. QE versus wavelength for various sensors**

CCD 1 and CCD 2 are Interline Transfer CCDs; CMOS 1 and CMOS 1-b are CMOS sensors with a Global Shutter.

**Figure 2. QE/Read Noise (Sensitivity) vs. Wavelength**

QE and read noise require an accurate measurement setup, so this method of comparison is mainly useful with data sheet numbers. A more practical and useful measure is the ratio of camera response to noise floor. The "camera response" to light is simply the average output level upon exposure to a certain illumination strength and integration time. Note that the same illumination strength per pixel must be maintained across cameras; if the pixel sizes differ, this requires a change of optics or light source intensity. Furthermore, to be exact, the black level at the same integration time must be subtracted from the images.

The noise floor RMS (DN) can be determined by capturing two dark images at a certain integration time, subtracting them, and dividing the standard deviation of the difference by √2. The subtraction takes out the spatial noise; the √2 factor compensates for the doubling of the temporal noise variance caused by the subtraction.
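The two-dark-frame procedure above can be sketched in a few lines of NumPy. Since real dark frames are not available here, this example simulates them: both frames share the same fixed-pattern (spatial) component and each carries independent temporal noise of 2.0 DN RMS, so the recovered noise floor should come out near 2.0 DN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated pair of dark frames: identical fixed-pattern noise,
# plus independent temporal noise of 2.0 DN RMS in each frame
fixed_pattern = rng.normal(100.0, 5.0, size=(480, 640))
dark1 = fixed_pattern + rng.normal(0.0, 2.0, size=fixed_pattern.shape)
dark2 = fixed_pattern + rng.normal(0.0, 2.0, size=fixed_pattern.shape)

# Subtracting removes the common spatial noise; the difference image
# has sqrt(2) times the single-frame temporal noise, so divide by
# sqrt(2) to normalize back to a per-frame noise floor in DN
noise_floor_dn = float(np.std(dark1 - dark2)) / np.sqrt(2)
print(f"Noise floor: {noise_floor_dn:.2f} DN")  # close to 2.00 DN
```

With real camera data, `dark1` and `dark2` would simply be two consecutive captures with the lens capped, taken at the integration time of interest.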