Guide: collection of calibration data
General comments
In order to collect a data frame, fix the camera position and orientation with respect to the coding screen and record at least one complete deck of patterns. Any movement of the camera or the screen, or any lens adjustment during the recording, may invalidate the previously collected data or require starting a new data frame. A calibration dataset includes one or more data frames. Generally, calibration algorithms require at least three data frames to run. However, a metrological-quality calibration of a typical perspective camera with a well-qualified uncertainty estimate may require 20 or more data frames. Once collected, a dataset may be used to calibrate the camera by any method implemented in the backend.
Laboratory space
The illumination in the lab should not interfere with the shooting (for example, some fluorescent or LED lamps may cause “flickering” in the recorded video). For the most demanding applications, isolate the lab from any external illumination so that the coding screen remains the only light source. Make sure that there are no blinking LEDs or moving objects in the camera’s field of view (swinging trees, shadows of walking people, rotating ventilators). If possible, avoid capturing reflections of light sources in the screen. Reflections of nearby people or objects are usually not a problem. Try to isolate the camera and the screen from vibrations. Set the camera on a stable tripod. If possible, use a remote control to start and stop recording in order to avoid shaking the camera.
Camera and lens
If possible, turn off any automatic lens adjustments and image “enhancers” in the camera during the data collection. Set the lens (zoom ratio, focus distance, depth of field) based on the intended camera application. If possible, set the zoom, the focus, and the aperture of the lens in advance and keep them fixed during the dataset collection. Use manual or automatic exposure adjustment to reduce over- and under-exposure effects. Try to shoot the images or video at the native sensor resolution. Save the images or video files in the highest-quality format that is supported by the camera and accepted by the backend.
Camera positions
The space observed by the camera is known as the “viewing frustum”. Its angular size is defined by the focal length and the sensor size, while the focus distance and the aperture settings define its near and far boundaries. Objects inside the viewing frustum appear in the image and are sufficiently sharp.
For each new data frame, move and rotate the camera on the tripod and make sure it remains stable and fixed during the shooting. The marker in the coding patterns must remain visible at all camera positions. For the best calibration, the collected data frames should span multiple camera positions and orientations with respect to the coding screen. Alternatively, the coding screen must be positioned in the viewing frustum at different distances and angles so as to cover as many image pixels as possible. Some examples of valid camera positions are shown below.



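The frustum geometry described above follows the standard pinhole relation: the angular size of the frustum along one sensor axis is fov = 2·atan(s / 2f), where s is the sensor size and f is the focal length. A minimal sketch (the function name and the full-frame example values are illustrative, not from this guide):

```python
import math

def field_of_view_deg(sensor_size_mm: float, focal_length_mm: float) -> float:
    """Angular size (degrees) of the viewing frustum along one sensor axis,
    from the pinhole relation fov = 2 * atan(s / (2 * f))."""
    return math.degrees(2.0 * math.atan(sensor_size_mm / (2.0 * focal_length_mm)))

# Illustrative example: a full-frame sensor (36 x 24 mm) with a 50 mm lens.
h_fov = field_of_view_deg(36.0, 50.0)  # horizontal angular size, ~39.6 degrees
v_fov = field_of_view_deg(24.0, 50.0)  # vertical angular size, ~27.0 degrees
```

This only gives the angular extent; as noted above, the near and far boundaries depend on the focus distance and the aperture setting.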
In combination, the data frames in a high-quality dataset must span a large range of screen angles. Most camera pixels should be covered by at least two data frames.
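The coverage requirement can be illustrated by accumulating a per-pixel count of how many data frames cover each pixel and reporting the fraction covered at least twice. Representing each frame's coverage as an axis-aligned rectangle is a simplifying assumption for this sketch; a real pipeline would use the detected screen region in each image:

```python
def coverage_fraction(width, height, frame_rects, min_count=2):
    """Fraction of image pixels covered by at least min_count data frames.

    frame_rects: per-frame coverage as (x0, y0, x1, y1) pixel rectangles,
    a hypothetical simplification of the detected screen region.
    """
    counts = [[0] * width for _ in range(height)]
    for (x0, y0, x1, y1) in frame_rects:
        for y in range(y0, y1):
            for x in range(x0, x1):
                counts[y][x] += 1
    covered = sum(1 for row in counts for c in row if c >= min_count)
    return covered / (width * height)

# Example: two frames, the second covering only the left half of the image;
# only the left half reaches the two-frame threshold.
frac = coverage_fraction(4, 4, [(0, 0, 4, 4), (0, 0, 2, 4)])  # 0.5
```

A low fraction suggests collecting additional data frames with the screen placed over the uncovered image regions.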