Low light imaging requires special techniques to bring out the faintest details, which may lie at or even below the noise floor of the sensor and camera. Whether you’re an astronomer imaging faint nebulae or a life science researcher doing low light fluorescence imaging, chemiluminescence or spectroscopy, many of the issues are the same.
Since light is quantum in nature, the arrival of photons is a series of random, discrete events. The uncertainty in a light signal is termed “shot noise”; it follows a Poisson distribution, so the noise varies as the square root of the signal. If you double the signal level, the noise increases by a factor of about 1.4 (the square root of 2). If you increase the signal level by a factor of 4, the noise only goes up by a factor of 2, doubling the signal to noise ratio (SNR) of the image.
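This square-root behavior is easy to demonstrate numerically. The sketch below uses plain NumPy with illustrative signal levels (not data from any particular camera) to draw Poisson-distributed photon counts at two mean levels and measure the resulting noise and SNR:

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000
results = {}

# Photon arrivals follow Poisson statistics, so the noise (standard
# deviation) should grow as the square root of the mean signal.
for mean_signal in (100, 400):
    photons = rng.poisson(mean_signal, n_samples)
    snr = photons.mean() / photons.std()
    results[mean_signal] = snr
    print(f"signal {mean_signal}: noise ~ {photons.std():.1f}, SNR ~ {snr:.1f}")
```

Quadrupling the mean signal roughly doubles the measured SNR, matching the square-root rule described above.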
If the subject being photographed is stationary relative to the sensor, multiple images can be combined to increase the signal to noise ratio of the combined image. Astronomers do this regularly to capture images with hours of exposure on a single subject. Combining 4 images of the same subject doubles the SNR, combining 9 images increases the SNR by a factor of 3, 16 images by a factor of 4, etc.
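A quick simulation shows the same sqrt(N) improvement from stacking. This is a sketch using synthetic Poisson frames with an illustrative signal level, not actual camera data:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_SIGNAL = 25.0        # mean photons per pixel per frame (illustrative)
FRAME_SHAPE = (64, 64)

def stacked_snr(n_frames):
    """Average n_frames synthetic Poisson frames and measure the SNR."""
    frames = rng.poisson(TRUE_SIGNAL, (n_frames, *FRAME_SHAPE))
    combined = frames.mean(axis=0)
    return combined.mean() / combined.std()

# Combining N frames should improve the SNR by a factor of sqrt(N):
# 4 frames double it, 9 triple it, 16 quadruple it.
for n in (1, 4, 9, 16):
    print(f"{n:2d} frames: SNR ~ {stacked_snr(n):.1f}")
```

A single frame here has an SNR of about 5; averaging 16 frames brings it to about 20, a factor of 4, just as the text describes.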
[Images: QSI 516 — average of 20 images; average of 50 images; average of 100 images]
The images above of a standard optical chart were captured with a QSI 516 cooled CCD camera in very low light. The first image shows a single exposure. The very low signal to noise ratio makes the image look grainy. Subsequent images show the result of combining 20, 50 and 100 images. The improvement in image quality (and the signal to noise ratio in the image) is clearly visible. The combination of 100 images has a signal to noise ratio ten times greater than the single image.
There are several factors to consider in selecting the best sensor for your imaging application. Does your application require the highest possible sensitivity? Fastest frame rates? Widest dynamic range? Highest resolution? Widest field of view (FOV)? Some combination of these factors and a few others, including price, will determine the best sensor to meet your specific requirements.
Highest Sensitivity – Quantum Efficiency and Spectral Response
- The sensitivity of digital image sensors varies with wavelength.
- Full frame sensors tend to have higher sensitivity toward the red end of the spectrum.
- Full frame sensors are better for Near Infrared (NIR) applications from 700 to 1100nm.
- Interline transfer sensors have peak QE toward the blue end of the spectrum, with superior response in the ultraviolet below 300nm.
In the image below the red and green lines represent the Quantum Efficiency (QE) of Full-Frame sensors. Note that non-anti-blooming full-frame sensors (red line) have the highest QE.
“Gain” on a CCD camera is a common source of confusion. Some common questions include, “Is gain like ISO on a DSLR?” Kind of. “Is it important to have the correct Gain setting?” Yes, if you want to maximize the dynamic range of your images. “What Gain setting should I use?” If the gain is selectable, the answer depends primarily on whether you’re binning the pixels larger than 1×1.
What is “Gain”?
Gain on a CCD camera represents the conversion factor from electrons (e-) into digital counts, or Analog-Digital Units (ADUs). Gain is expressed as the number of electrons that get converted into a single digital count, or electrons per ADU (e-/ADU). Gain values are selected by the camera manufacturer to maximize the dynamic range available from any given sensor. The idea is to take the charge (e-) from any given (possibly binned) pixel and convert it into a number that fits into a digital pixel value. For a 16-bit camera, that’s a number from 0 to 65,535 (FFFF in hexadecimal). In most CCDs, the Horizontal Shift Register (HSR) and Output Register (OR) have a higher capacity than an individual pixel, so when binning larger than 1×1 you can maximize the dynamic range by using a different Gain setting than you would with 1×1 binning.
How Are Optimal Gain Values Determined?
As an example, the QSI 640 and RS 4.2 utilize a Truesense (formerly Kodak) KAI-04022 sensor. Truesense specifies the capacity of a single pixel as 40,000 electrons (e-). The actual capacity of most pixels is somewhat higher.
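The arithmetic can be sketched as follows. The 40,000 e- well capacity comes from the Truesense specification quoted above; the 100,000 e- binned capacity is a hypothetical register-capacity value used purely for illustration:

```python
FULL_SCALE = 2**16 - 1  # 65,535 ADU for a 16-bit camera

def optimal_gain(well_capacity_e):
    """Gain (e-/ADU) that maps a given charge capacity onto full scale."""
    return well_capacity_e / FULL_SCALE

# Single-pixel capacity of the KAI-04022, per the spec quoted above.
print(f"1x1 binning: {optimal_gain(40_000):.2f} e-/ADU")

# When binning 2x2, the charge from four pixels is summed, so the usable
# capacity is limited by the output register rather than a single pixel.
# 100,000 e- here is a hypothetical register capacity for illustration.
print(f"2x2 binning: {optimal_gain(100_000):.2f} e-/ADU")
```

With these numbers, a gain of roughly 0.61 e-/ADU spreads the single-pixel well across the full 16-bit range, while binned readout calls for a higher gain setting to avoid clipping the summed charge.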
With warm summer nights approaching, we’ve again seen an increasing number of questions from our astrophotography customers about the benefits of cooling the sensor in a CCD camera. “How cold should I go?” “What effect will another 5C or 10C drop have on noise?”
First, thermal (or dark) current builds up in a CCD at a predictable rate whether the sensor is being exposed to light or not. CCD imagers exploit this fact to remove the thermal signal that builds up in their light frames by subtracting a dark frame (or frames).
Thermal current can be measured by subtracting a bias frame from a dark frame. The difference between the dark frame and the bias frame is a result of the thermal current (and noise). Here’s a graph showing Mean Thermal Current at various temperatures and exposure times. This particular data was acquired with a QSI 516, which has a Kodak KAF-1603ME sensor. Most other popular sensors have similar properties. Some terms, such as ADU, are defined at the bottom of this article.
And here’s a reduced set of mean thermal data from a QSI 583, which has a Kodak KAF-8300 sensor:
The key thing to note is that the mean thermal current (Dark – Bias) grows very nearly linearly over time and that the thermal current is cut in half approximately every 6C (6.3C for the KAF-1603ME, 5.8C for the KAF-8300).
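Both behaviors can be captured in a small model. Everything here is a sketch: the reference rate and reference temperature are hypothetical illustration values, with only the ~6.3C doubling interval taken from the measurements discussed above:

```python
def thermal_signal(temp_c, exposure_s, ref_rate=1.0, ref_temp_c=-10.0,
                   doubling_c=6.3):
    """Mean thermal signal (e-/pixel) accumulated during an exposure.

    ref_rate is the dark-current rate (e-/pixel/s) at ref_temp_c; both
    are hypothetical illustration values, not measured QSI 516 data.
    The rate halves for every doubling_c drop in sensor temperature,
    and the accumulated signal grows linearly with exposure time.
    """
    rate = ref_rate * 2 ** ((temp_c - ref_temp_c) / doubling_c)
    return rate * exposure_s

# Cooling from -10C to -22.6C spans two doubling intervals, so the
# thermal signal for the same exposure drops to roughly a quarter.
print(thermal_signal(-10.0, 60))
print(thermal_signal(-22.6, 60))
```

Doubling the exposure time doubles the accumulated thermal signal (the linear part), while each additional 6.3C of cooling halves it (the exponential part).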
Selecting the best sensor for any given application requires matching your imaging goals to the capabilities of the sensor and camera. One key factor in that decision is the architecture of the sensor.
There are two dominant architectures for CCD image sensors today, “full-frame” and “interline transfer.” Each offers advantages depending on your imaging goals and application.
To start out I want to clarify the phrase “full frame” which has two common uses when discussing digital sensors. In a DSLR, a “full frame” sensor refers to a sensor the size of a 35mm film frame (36 x 24mm). A “full-frame” CCD refers to a CCD architecture where the entire image area is active, also referred to as having a fill factor of 100%.
At the highest level, full-frame sensors have higher Quantum Efficiency (QE) while interline transfer sensors offer high frame rates and very fast shutter speeds using a built-in electronic shutter. Full-frame sensors require a mechanical shutter for timing the exposure, which typically limits the maximum frame rate. Below is a graph showing the QE of several Kodak full-frame and interline transfer sensors. The red, green and light blue lines are all full-frame sensors while the dark blue line is the QE of two popular interline transfer sensors.
Scientific imaging places different requirements on a camera and the image processing workflow. Capturing Light discusses applications, techniques and issues encountered by scientific imagers. The examples will often involve QSI cameras, but the information will usually be applicable to any user of a scientific camera.