CycleGAN-Based Depth Completion for Autonomous Vehicles 


Vol. 47, No. 5, pp. 781-788, May 2022
DOI: 10.7840/kics.2022.47.5.781


  Abstract

Depth completion is a challenging task that supports scene understanding and environment perception in autonomous vehicles. Existing methods consider multi-modal inputs, such as RGB images and sparse LiDAR depth maps, to exploit the complementary characteristics of the two sensors. However, traditional autoencoder approaches have shown limitations in representing the data in a low-dimensional space. Moreover, depth discontinuities arise when fusing the camera and LiDAR images because of light sensitivity in the RGB image. In this study, we adapt CycleGAN, which focuses on learning the distribution of the data rather than pixel intensities, to reconstruct sparse depth maps into dense ones. We also use semantic segmentation as an additional input to mitigate the depth-discontinuity problem. Our framework is trained and evaluated on the KITTI benchmark using synchronized data capturing various road scenes. The experimental results show that the proposed framework achieves competitive performance and efficiency in the depth completion task.
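The setup described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the generator names (`G_sd`, `G_ds`), input sizes, and the identity-like stand-in networks are all assumptions, chosen only to show how the multi-modal inputs (RGB, sparse LiDAR depth, semantic segmentation) are fused and how a CycleGAN-style cycle-consistency term is formed between the sparse and dense depth domains.

```python
import numpy as np

# Hypothetical sketch of the CycleGAN-style depth-completion setup.
# G_sd maps the fused (RGB, sparse depth, segmentation) input to a dense
# depth map; G_ds maps dense depth back toward the sparse domain. Real
# generators are CNNs; the stand-ins here only illustrate tensor flow.

H, W = 64, 208  # small crop for illustration; KITTI frames are larger

rgb = np.random.rand(3, H, W).astype(np.float32)        # camera image
sparse = np.zeros((1, H, W), dtype=np.float32)          # projected LiDAR depth
mask = np.random.rand(H, W) < 0.05                      # ~5% valid LiDAR points
sparse[0][mask] = np.random.rand(mask.sum()) * 80.0     # depths in meters
seg = np.random.randint(0, 19, (1, H, W)).astype(np.float32)  # class labels

# Multi-modal fusion: channel-wise concatenation of the three inputs
x = np.concatenate([rgb, sparse, seg], axis=0)          # shape (5, H, W)

def G_sd(inp):
    """Stand-in generator: fused input -> dense depth (mean over channels)."""
    return inp.mean(axis=0, keepdims=True)

def G_ds(dense):
    """Stand-in inverse generator: dense depth -> sparse domain (re-masked)."""
    return dense * mask[None]

dense = G_sd(x)    # predicted dense depth map, shape (1, H, W)
recon = G_ds(dense)  # cycled back to the sparse domain

# Cycle-consistency loss: L1 between the LiDAR input and its reconstruction,
# evaluated only where LiDAR returns exist
l_cycle = np.abs(sparse - recon)[0][mask].mean()
```

In the full CycleGAN objective this cycle term would be combined with adversarial losses from two discriminators, one per domain, so that the generator learns the distribution of dense depth maps rather than matching per-pixel values alone.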



  Cite this article

[IEEE Style]

M. Nguyen and M. Yoo, "CycleGAN-Based Depth Completion for Autonomous Vehicles," The Journal of Korean Institute of Communications and Information Sciences, vol. 47, no. 5, pp. 781-788, 2022. DOI: 10.7840/kics.2022.47.5.781.

[ACM Style]

Minh-Tri Nguyen and Myungsik Yoo. 2022. CycleGAN-Based Depth Completion for Autonomous Vehicles. The Journal of Korean Institute of Communications and Information Sciences, 47, 5, (2022), 781-788. DOI: 10.7840/kics.2022.47.5.781.

[KICS Style]

Minh-Tri Nguyen and Myungsik Yoo, "CycleGAN-Based Depth Completion for Autonomous Vehicles," The Journal of Korean Institute of Communications and Information Sciences, vol. 47, no. 5, pp. 781-788, 5. 2022. (https://doi.org/10.7840/kics.2022.47.5.781)