Strobe Illumination Module

Christie Nel edited this page Sep 27, 2019 · 6 revisions

Requirements Overview

We require a strobe illumination system that can be used with a camera connected to a Raspberry Pi via the CSI-2 interface. An additional microprocessor may be used to control the strobe timing. Of particular interest is the Arducam range of camera boards, some of which offer a hardware strobe output from the camera sensor.

Strobe Illumination Concept

Strobe illumination is a method of capturing high-speed motion. Instead of using a high-speed camera, a strong light illuminates the subject for a fraction of a second in an otherwise unlit environment. The camera is thus exposed to light only for this brief moment, which is equivalent to using a professional camera with an exposure time equal to the strobe pulse duration. Since LEDs and strobes can be switched on and off almost instantaneously, extremely short effective exposure times are achievable. The more sensitive the camera and the more powerful the light, the shorter the exposure time can be, and thus the faster the motion that can be captured.
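The effective-exposure argument above can be made concrete with a small calculation. This is a minimal Python sketch, with illustrative numbers rather than values from any specific sensor, of how far a moving subject blurs across pixels during the strobe pulse:

```python
# Sketch: how short must the strobe pulse be to freeze motion?
# All numbers below are illustrative assumptions, not datasheet values.

def motion_blur_px(speed_mm_s, pulse_s, magnification, pixel_um):
    """Blur in pixels caused by subject motion during the strobe pulse."""
    blur_um = speed_mm_s * 1000.0 * pulse_s * magnification  # um moved on the sensor
    return blur_um / pixel_um

# A subject moving at 100 mm/s, imaged at 1x onto 3 um pixels,
# lit by a 100 us strobe pulse:
blur = motion_blur_px(speed_mm_s=100, pulse_s=100e-6, magnification=1.0, pixel_um=3.0)
print(f"{blur:.2f} px")  # ~3.33 px of blur
```

Halving the pulse width halves the blur, which is the sense in which the strobe pulse, not the camera's shutter, sets the effective exposure time.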

Camera Selection

The following factors should be considered:

  • Global shutter or rolling shutter
  • Strobe signal output
  • Pixel size / sensor size / sensitivity
  • Colour or mono
  • CMOS vs CCD
  • Optics

Rolling Shutter

The more common and cheaper cameras use a rolling shutter. Each line in turn is cleared/zeroed, collects light (exposure) for a set period, and is then read back, so at any moment each line is at a different stage of light integration. A good explanation can be found here, under "exposure time". If strobe illumination is used in an otherwise dark environment, the result is an image that is part lit and part dark. There are ways to make strobe lighting work with rolling shutters:

  • Some camera boards, such as the Arducam B0031 with OV5642 sensor, feature a global reset input (FREX). This clears all lines simultaneously and starts exposure of the whole image. A strobe can then be fired during this frame and the image collected with the rolling shutter during the next frame. This requires an external circuit to manage the global reset input and strobe timing, and costs two frames per image: one for exposure, the next for reading back.
  • Long exposure times. Allow the rolling shutter to clear an entire frame, fire the strobe, then collect the image during the next rolling readout. Similarly, this requires two frames per image. The camera exposure time setting has to be at least the time required to read back an entire frame plus the illumination time of the strobe.
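The long-exposure constraint above can be sketched as a small calculation. The frame readout time and strobe duration here are assumed figures for illustration, not from a specific sensor:

```python
# Sketch of the long-exposure rolling-shutter method: the exposure
# setting must cover one full readout plus the strobe pulse, and each
# captured image costs two frame periods. Figures are assumptions.

def min_exposure_s(readout_s, strobe_s):
    """Minimum camera exposure: full-frame readout plus strobe pulse."""
    return readout_s + strobe_s

def image_rate_hz(frame_rate_hz):
    """Two frames per image: one to expose, one to read back."""
    return frame_rate_hz / 2.0

# e.g. a 30 fps sensor with ~33 ms readout and a 1 ms strobe:
print(f"{min_exposure_s(0.033, 0.001) * 1000:.1f} ms minimum exposure")
print(f"{image_rate_hz(30.0):.0f} images per second")
```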

Global Shutter

A camera with global shutter will clear the entire image, then expose the entire image and then read back the entire image. This means that firing a strobe in a dark environment at any time will produce a single, crisp image and can run at the full frame-rate of the sensor. The only requirement here is that the strobe is fired for the required time somewhere between the start and end of each frame. Examples of global shutter cameras are Arducam B0162 and Arducam B0165, both with 1MP OV9281 sensor.
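As a sketch of the global-shutter timing requirement, assuming hypothetical exposure window times rather than any real camera API:

```python
# Sketch: with a global shutter, the only constraint is that the whole
# strobe pulse lands inside the exposure window of the frame.
# This helper and its time values are illustrative assumptions.

def strobe_fits(exposure_start_s, exposure_end_s, strobe_start_s, strobe_s):
    """True if the whole strobe pulse falls inside the exposure window."""
    return (strobe_start_s >= exposure_start_s and
            strobe_start_s + strobe_s <= exposure_end_s)

# A 5 ms exposure window with a 1 ms strobe fired 2 ms in:
print(strobe_fits(0.0, 0.005, 0.002, 0.001))  # True
```

Because no second frame is needed for readback, the image rate equals the sensor's full frame rate.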

Strobe Signal Output

An output is required to signal when the strobe must illuminate and for how long; its timing and synchronization with the camera sensor are crucial. Some camera boards, such as the Arducam B0031 with OV5642 sensor and the Arducam B0162 with OV9281 sensor, offer a hardware strobe output from the sensor itself, which illuminates the strobe once during each frame. Alternatively, the Raspberry Pi can be configured to output a strobe signal on a GPIO pin, but this relies on the operating system and its ability to service the camera interrupt. The characteristics of this signal can be measured with a logic analyzer to judge its limitations and suitability. An additional microprocessor can be used to control the timing of the strobe as required, based either on a hardware strobe output from a sensor or on the interrupt-driven Raspberry Pi output. The microprocessor may run an algorithm to clean up the Raspberry Pi-derived signal and produce more consistent strobe timing.
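One way the clean-up algorithm might work is to lock onto the average frame period from the jittery GPIO edges and re-time the strobe from that estimate. A minimal sketch of the smoothing step, using made-up edge timestamps and no real GPIO API:

```python
# Sketch: the Raspberry Pi GPIO edges arrive with OS scheduling jitter,
# so a microprocessor can estimate the true frame period by smoothing
# the observed edge-to-edge intervals. Timestamps below are invented.

def smoothed_period(edge_times_s, alpha=0.1):
    """Exponentially smoothed frame-period estimate from edge timestamps."""
    period = edge_times_s[1] - edge_times_s[0]
    for prev, cur in zip(edge_times_s[1:], edge_times_s[2:]):
        period += alpha * ((cur - prev) - period)
    return period

# Edges nominally 33.3 ms apart, with a few hundred us of jitter:
edges = [0.0000, 0.0334, 0.0665, 0.1001, 0.1332, 0.1667]
print(f"{smoothed_period(edges) * 1000:.2f} ms")
```

The microprocessor would then fire the strobe at multiples of the smoothed period rather than directly on each noisy edge.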

Pixel Size / Sensor Size / Sensitivity

The camera sensor determines how much light it collects in a given exposure time. Generally, the bigger the sensor and/or pixel size, the more light can be collected in a fixed time, so the more sensitive the sensor will be and the sharper the image when capturing fast motion. Since the physical space between pixels does not collect light, packing more pixels into the same area reduces a sensor's sensitivity. The ideal camera for this application will therefore likely be lower resolution, with stronger optics required to compensate.
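The pixel-area argument can be made concrete with a rough calculation; the pitch and fill-factor values below are assumptions for illustration, not measurements of any particular sensor:

```python
# Rough relative-sensitivity sketch: light collected per pixel scales
# with the light-sensitive area (pixel pitch squared times fill factor).
# All numbers are illustrative assumptions.

def relative_sensitivity(pitch_um, fill_factor):
    """Approximate light-collecting area per pixel, in um^2."""
    return pitch_um ** 2 * fill_factor

big = relative_sensitivity(pitch_um=3.0, fill_factor=0.7)    # lower-res sensor
small = relative_sensitivity(pitch_um=1.4, fill_factor=0.5)  # higher-res sensor
print(f"{big / small:.1f}x more light per pixel")  # ~6.4x
```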

Colour or Mono

For argument's sake, a colour sensor pixel effectively consists of three pixels, which increases the surface area of the dead space between the light-collecting areas (as described above). Each of the three colours further collects only a narrow frequency range of the light energy. These two factors mean that colour sensors are generally less sensitive than mono sensors.
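A back-of-envelope version of this argument, with both factors as assumed values rather than measured ones:

```python
# Illustrative only: a colour filter passes roughly one third of the
# visible band, and the extra pixel structure lowers the fill factor.
# Both factors below are assumptions to show the shape of the argument.

def colour_vs_mono(band_fraction=1/3, fill_penalty=0.8):
    """Approximate colour-pixel sensitivity relative to a mono pixel."""
    return band_fraction * fill_penalty

print(f"colour pixel collects ~{colour_vs_mono():.0%} of what a mono pixel does")
```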

CMOS vs CCD

CMOS and CCD are two different image sensor technologies. CCDs traditionally produce cleaner, lower-noise images, but CMOS sensors are cheaper and far more common.

Optics

We will be using microscope objectives. The regular, short focal length and wide field of view optics that come with, or are compatible with, most budget Raspberry Pi cameras will be inadequate in the final product. However, they are good for development and testing on a desk.