Radiant metrics

Guide: collection of calibration data

General comments

In order to collect a data frame, one should fix the camera position and orientation with respect to the coding screen and record at least one complete deck of patterns. Any movement of the camera or the screen, or any lens adjustment during the recording, may invalidate the previously collected data or trigger the creation of a new data frame. A calibration dataset consists of one or more data frames. Calibration algorithms generally require at least three data frames to run; however, a metrological-quality calibration of a typical perspective camera with a well-qualified uncertainty estimate may require 20 or more. Once collected, a dataset may be used to calibrate the camera by any method implemented in the backend.
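For concreteness, this structure can be modeled roughly as follows. This is a minimal Python sketch under the assumptions stated above; the names DataFrame and CalibrationDataset are purely illustrative and not part of any Radiant metrics API.

    from dataclasses import dataclass, field

    @dataclass
    class DataFrame:
        # One complete deck of patterns recorded at a fixed camera pose;
        # any camera, screen, or lens change starts a new data frame.
        images: list

    @dataclass
    class CalibrationDataset:
        frames: list = field(default_factory=list)

        def check_size(self, metrological: bool = False) -> None:
            # Most algorithms need at least 3 data frames; a metrological-quality
            # calibration with a qualified uncertainty estimate may need 20+.
            minimum = 20 if metrological else 3
            if len(self.frames) < minimum:
                raise ValueError(f"need at least {minimum} data frames, "
                                 f"got {len(self.frames)}")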

Laboratory space

The illumination in the lab should not interfere with the shooting (for example, some fluorescent or LED lamps may cause “flickering” in the recorded video). For the most demanding applications, isolate the lab from any external illumination so that the coding screen remains the only light source. Make sure that there are no blinking LEDs or moving objects in the camera’s field of view (swaying trees, shadows of walking people, rotating fans). If possible, avoid capturing reflections of light sources in the screen; reflections of nearby people or objects are usually not a problem. Try to isolate the camera and the screen from vibrations: set the camera on a stable tripod and, if possible, use a remote control to start and stop recording so as not to shake the camera.
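As a quick diagnostic for lamp flicker, one can plot the mean brightness of each recorded frame over time; strong periodic variation suggests the illumination is interfering with the recording. A minimal sketch using OpenCV, where the clip name test_clip.mp4 is a hypothetical placeholder:

    import cv2
    import numpy as np

    def mean_brightness_trace(video_path: str) -> np.ndarray:
        # Per-frame mean brightness of the clip.
        cap = cv2.VideoCapture(video_path)
        means = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            means.append(float(gray.mean()))
        cap.release()
        return np.asarray(means)

    trace = mean_brightness_trace("test_clip.mp4")  # hypothetical file name
    print(trace.std() / trace.mean())  # noticeable relative variation -> flicker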

Camera and lens

If possible, turn off any automatic lens adjustments and image “enhancers” in the camera during the data collection. Set the lens (zoom ratio, focus distance, depth of field) according to the intended camera application. If possible, set the zoom, focus, and aperture of the lens in advance and keep them fixed during the dataset collection. Use manual or automatic exposure adjustment to reduce over- and under-exposure effects. Try to shoot the images or video at the native sensor resolution. Save the images or video files in the highest-quality format that is supported by the camera and accepted by the backend.
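One simple way to check an exposure setting is to measure the fraction of clipped pixels in a test shot; values noticeably above zero at either end of the histogram indicate under- or over-exposure. A sketch for an 8-bit grayscale image (the 1 % threshold is an arbitrary illustration, not a backend requirement):

    import numpy as np

    def clipped_fractions(gray: np.ndarray) -> tuple[float, float]:
        # Fractions of under- and over-exposed pixels in an 8-bit image.
        return float((gray <= 0).mean()), float((gray >= 255).mean())

    gray = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # placeholder image
    under, over = clipped_fractions(gray)
    if max(under, over) > 0.01:
        print("adjust the exposure")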

Camera positions

The space observed by the camera is known as the “viewing frustum”. Its angular size is determined by the lens focal length and the sensor size, while the focus distance and the aperture setting define its near and far boundaries. Objects inside the viewing frustum appear in the image and are sufficiently sharp.
[Figure: the viewing frustum]
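The angular size of the frustum along one sensor axis follows from the pinhole relation FOV = 2·arctan(s / 2f), where s is the sensor size and f the focal length. A small sketch (the 35 mm lens and full-frame sensor values are only an example):

    import math

    def field_of_view_deg(focal_length_mm: float, sensor_size_mm: float) -> float:
        # Angular size of the viewing frustum along one sensor axis.
        return math.degrees(2.0 * math.atan(sensor_size_mm / (2.0 * focal_length_mm)))

    # Example: a 35 mm lens on a full-frame (36 x 24 mm) sensor.
    print(field_of_view_deg(35.0, 36.0))  # horizontal FOV, ~54.4 deg
    print(field_of_view_deg(35.0, 24.0))  # vertical FOV, ~37.8 deg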
For each new data frame, move and rotate the camera on the tripod and make sure it remains stable and fixed during the shooting. The marker in the coding patterns must remain visible from every camera position. For the best calibration, the collected data frames should span multiple camera positions and orientations with respect to the coding screen. Equivalently, the coding screen should appear in the viewing frustum at different distances and angles so as to cover as many image pixels as possible. Some examples of valid camera positions are shown below.
[Figure: examples of valid camera positions (coding screen, camera, scene) and the resulting images]
In combination, the data frames in a high-quality dataset must span a wide range of screen angles, and most camera pixels should be covered by at least two data frames.
[Figure: all screen positions with respect to the camera, and the coverage of camera pixels by the data frames (no coverage, coverage by one data frame, coverage by two data frames)]
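The two-frame coverage criterion can be checked numerically once per-frame validity masks are available from decoding. A sketch, assuming one boolean H x W mask per data frame (the random masks below only stand in for real decoding output):

    import numpy as np

    def coverage_counts(valid_masks):
        # Per-pixel count of data frames that decoded this camera pixel.
        return np.sum(np.stack(valid_masks).astype(np.uint16), axis=0)

    masks = [np.random.rand(480, 640) > 0.5 for _ in range(5)]  # placeholder masks
    counts = coverage_counts(masks)
    # Fraction of camera pixels covered by at least two data frames;
    # for a high-quality dataset this should be close to 1.
    print((counts >= 2).mean())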

Inspection of data frames

It makes sense to first collect one data frame with the screen at some “typical” distance and process it in order to verify the procedure. The inspection of data frames is based on data plots; in the best case, typical estimated decoding uncertainties are of the order of 0.5-1.0 pixels on the screen. In particular, verify the following (a quick-look plotting sketch follows the list):
- The decoded x- and y-coordinates appear as smooth gradients.
- There are no spurious “holes” in the validity plots.
- There are no steps or stripes in the estimated decoding error maps.
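A quick-look plot of these quantities might be arranged as follows. A minimal matplotlib sketch; the synthetic maps at the bottom only stand in for real decoding output:

    import matplotlib.pyplot as plt
    import numpy as np

    def inspect_frame(x_map, y_map, valid, err_map):
        # One data frame at a glance: coordinates, validity mask, error map.
        fig, axes = plt.subplots(1, 4, figsize=(16, 4))
        titles = ("decoded x", "decoded y", "validity", "decoding error [px]")
        for ax, img, title in zip(axes, (x_map, y_map, valid, err_map), titles):
            im = ax.imshow(img)
            ax.set_title(title)
            fig.colorbar(im, ax=ax)
        plt.show()

    h, w = 480, 640
    yy, xx = np.mgrid[0:h, 0:w]  # synthetic smooth gradients
    inspect_frame(xx, yy, np.ones((h, w)), np.zeros((h, w)))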

Accompanying documentation (a recommendation)

The dataset-specific parameters may be stored in free form in the comment box of the project. Record the specific camera model and the lens model in the project description. Record all the relevant camera recording parameters (shooting mode, AE settings, white balance, video format, frame rate). Record the lens parameters insofar as they can be inferred from the dials or from the camera menus. Roughly outline the camera positions with respect to the coding screen: the approximate closest and furthest camera distances, and the range from the “leftmost” to the “rightmost” and from the “top” to the “bottom” positions.
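A free-form comment along the following lines covers the points above (all values purely illustrative):

    Camera: <camera model>; lens: <lens model>
    Recording: manual exposure, daylight white balance, MP4, 25 fps, 3840 x 2160
    Lens: zoom fixed at 35 mm, focus ~1.5 m, aperture f/8
    Positions: distances ~0.8-2.5 m; leftmost to rightmost ~±40 deg,
    top to bottom ~±25 deg around the screen normal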