This study proposes a method for matching RGB images with point clouds obtained from the depth images of objects. The system is based on transforming 3D digital data of objects into RGB images acquired with a Kinect camera. After the RGB image and the point cloud are obtained from the sensor, a calibration and conversion process between the RGB and depth images is described. The voxels assigned to the point cloud formed from the depth data are converted into pixel values of the RGB image; in this conversion, three-dimensional point cloud data are mapped to two-dimensional pixel values. The Point Cloud Library (PCL) is used to acquire the data for the conversion process. The application demonstrates how the point cloud data of an object are transferred onto the RGB image of the same object. As an application, several objects are modeled in 3D with the proposed method, which assigns depth information to RGB images.
Journal Type: International