Multi-modal sensory data plays an important role in many computer vision and robotics tasks. One popular multi-modal pair is cameras and laser scanners. To overlay and jointly use the data from both modalities, it is necessary to calibrate the sensors, i.e., to obtain the spatial relation between them. Computing such a calibration is challenging, as the two sensors provide quite different data: cameras yield color or brightness information, while laser scanners yield 3-D points. However, several laser scanners additionally provide reflectances, and these turn out to make calibration to a camera feasible. To this end, we first estimate a rough alignment of the coordinate systems of both modalities. Then, we use the laser scanner reflectances to compute a virtual image of the scene. Stereo calibration on the virtual image and the camera image is then used to compute a refined, high-accuracy calibration. It is encouraging that the accuracies in our experiments are comparable to those of camera-camera stereo setups and outperform those of other target-based calibration approaches. This shows that the proposed algorithm reliably integrates the point cloud with the intensity image. As an example application, we use the calibration results to obtain ground-truth distance images for range cameras. Furthermore, we utilize this data to investigate the accuracy of the Microsoft Kinect V2 time-of-flight camera and the Intel RealSense R200 structured light camera.
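The virtual image is the linchpin of this pipeline. As a rough illustration of the idea, the following minimal sketch renders a reflectance-attributed point cloud into a virtual pinhole camera using a simple z-buffer. The function name, the intrinsic matrix `K`, and the image size are hypothetical; the paper's exact rendering procedure is not reproduced here.

```python
import numpy as np

def render_virtual_image(points, reflectances, K, width, height):
    """Project a reflectance-attributed point cloud (already expressed in
    the virtual camera's frame) into a virtual pinhole image.
    points: (N, 3) array, reflectances: (N,) array, K: 3x3 intrinsics."""
    # Keep only points in front of the virtual camera.
    mask = points[:, 2] > 0
    pts, refl = points[mask], reflectances[mask]

    # Pinhole projection: u = fx*x/z + cx, v = fy*y/z + cy.
    u = (K[0, 0] * pts[:, 0] / pts[:, 2] + K[0, 2]).astype(int)
    v = (K[1, 1] * pts[:, 1] / pts[:, 2] + K[1, 2]).astype(int)

    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v = u[inside], v[inside]
    z, refl = pts[inside, 2], refl[inside]

    image = np.zeros((height, width), dtype=np.float32)
    depth = np.full((height, width), np.inf, dtype=np.float32)

    # Z-buffer: at each pixel, keep the reflectance of the closest point.
    for ui, vi, zi, ri in zip(u, v, z, refl):
        if zi < depth[vi, ui]:
            depth[vi, ui] = zi
            image[vi, ui] = ri
    return image, depth
```

The z-buffer approximates occlusion handling: each pixel shows the reflectance of the nearest point along its ray, so foreground geometry hides what lies behind it in the rendered view.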
Finding the spatial relation between a laser scanner and a 2-D or 2.5-D camera is crucial for sensor data fusion. Knowing this relation enables a multitude of applications, for example coloring the point cloud, generating textured meshes, or creating high-accuracy ground truth for range cameras. The method proposed in this work has been specifically designed for generating reference distances for range camera evaluation. Nonetheless, the approach is not limited to this application and can also be used to calibrate a common 2-D camera to a laser scanner.

Range cameras find widespread use, for example in the field of robotics, in space, in logistics automation, or in augmented reality devices like the Google Tango phones. The major problem with these sensors is their limited accuracy. This gives rise to thorough camera evaluations with respect to accuracy and other individual camera characteristics that influence the range measurements. Several studies investigating the accuracies and error characteristics of range cameras have been presented in the past: range cameras have been analyzed with respect to their applicability to robotics, detailed studies on the Kinect V2 have been presented, and multiple time-of-flight cameras have been evaluated with respect to different error sources. Wasenmüller and Stricker compare the structured light Kinect V1 camera to the time-of-flight-based Kinect V2 camera.

Quantitative evaluation of range cameras requires scenes with ground-truth distance measurements. Three methods to acquire such ground truth have been stated: it can be

- computed from a calibration pattern and known camera intrinsic parameters (see the sketch after this list),
- computed from a calibration pattern as seen from a second high-resolution camera with known intrinsic parameters and known spatial relation to the evaluated camera, or
- measured with an additional, highly accurate 3-D sensor (e.g., a laser scanner) with known spatial relation to the evaluated camera.
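The first of these options can be made concrete with a few lines of OpenCV. This is a minimal sketch, not the implementation from any of the cited studies: the board geometry (9x6 inner corners, 40 mm squares) and the function name `corner_distances` are illustrative assumptions.

```python
import cv2
import numpy as np

# Hypothetical board: 9x6 inner corners, 40 mm squares.
PATTERN = (9, 6)
SQUARE = 0.040  # meters

# 3-D corner coordinates in the board frame (the z = 0 plane).
obj_pts = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

def corner_distances(gray, K, dist_coeffs):
    """Reference camera-to-corner distances from a checkerboard image
    and known intrinsics (the first method listed above)."""
    ok, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not ok:
        raise RuntimeError("calibration pattern not found")
    # Pose of the board in the camera frame.
    _, rvec, tvec = cv2.solvePnP(obj_pts, corners, K, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)
    cam_pts = obj_pts @ R.T + tvec.reshape(1, 3)
    # Euclidean distance of each corner from the camera center.
    return np.linalg.norm(cam_pts, axis=1)
```

Note that this yields reference distances only at the detected corners, all of which lie on the board plane.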
The first two approaches have limited information value, as they typically provide reference distances only for planar regions. Moreover, the accuracy of the ground truth quickly degrades as the distance between camera and calibration pattern increases. A laser scanner mitigates both issues: laser scanners typically provide high-accuracy point clouds of a scene at larger operating ranges than camera-based solutions, and this distance information can be obtained for arbitrary, not necessarily planar, surfaces. However, to leverage laser scanner point clouds for range camera evaluation, it is necessary to calibrate the laser scanner to the camera.

In this paper we propose a method for solving this task. Starting from a scene that shows multiple calibration patterns, e.g., checkerboards, we show how stereo calibration methods can be used to obtain the rotation and translation between the sensors. First, a virtual image of the point cloud has to be generated; it is important that this image shows all calibration patterns without occlusions. We aim at calculating the spatial relation based on a single point cloud/camera image pair, as acquiring densely sampled point clouds can take up to several minutes.
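Once the virtual image has been rendered, the refinement step can be phrased as an ordinary two-camera calibration problem. The sketch below uses OpenCV's stereo calibration with a single checkerboard pair for brevity; the function name and parameters are hypothetical, the scene in the paper contains several patterns, and the intrinsics of both views are assumed known and held fixed.

```python
import cv2
import numpy as np

def extrinsics_from_stereo(cam_img, virt_img, K_cam, d_cam, K_virt, d_virt,
                           pattern=(9, 6), square=0.040):
    """Estimate the rotation R and translation t between the real camera
    and the virtual (laser scanner) view via standard stereo calibration.
    cam_img: 8-bit camera image; virt_img: 8-bit rendered virtual image."""
    obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    ok1, c_cam = cv2.findChessboardCorners(cam_img, pattern)
    ok2, c_virt = cv2.findChessboardCorners(virt_img, pattern)
    if not (ok1 and ok2):
        raise RuntimeError("pattern not detected in both images")

    # Fix the known intrinsics; optimize only the relative pose.
    size = cam_img.shape[:2][::-1]
    ret, _, _, _, _, R, t, _, _ = cv2.stereoCalibrate(
        [obj], [c_cam], [c_virt], K_cam, d_cam, K_virt, d_virt,
        size, flags=cv2.CALIB_FIX_INTRINSIC)
    return R, t
```

With R and t in hand, every laser scanner point can be transformed into the camera frame and projected into the image, which is what enables colored point clouds and per-pixel ground-truth distance images.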