@article{M779A2E3E,
  title    = "Research on BEV Segmentation for Autonomous Driving",
  journal  = "The Journal of Korean Institute of Communications and Information Sciences",
  year     = "2024",
  issn     = "1226-4717",
  doi      = "10.7840/kics.2024.49.10.1345",
  author   = "Woomin Jun and Sungjin Lee",
  keywords = "Autonomous Driving, BEV, Segmentation",
  abstract = "In autonomous driving technology, creating a 2D bird's eye view (BEV) map of the 3D environment surrounding the ego vehicle facilitates vehicle steering and speed control. In particular, BEV segmentation technology, which accurately represents the objects in the surrounding road environment, including their positions and sizes, in real time on a 2D map, is essential for safe driving. This study addresses the optimization of a BEV segmentation model that operates in real time in autonomous driving embedded environments and achieves high accuracy with a small footprint. To improve accuracy, various backbones were employed in the image encoder used for BEV segmentation, leading to a combination of techniques that outperformed the model's previous mIoU performance. To reduce model size and operation time, quantization was applied. Experimental results achieved a mean Intersection over Union (mIoU) of 44.9, a 17.8% improvement in mIoU over existing technologies. In addition, through quantization, the proposed enhanced-accuracy model achieved a 3.6% reduction in latency and an approximately 50% reduction in model size. By implementing this technique on an NVIDIA AGX Orin-based on-device system and analyzing performance in relation to power supply, it was found that a sufficient power supply plays a crucial role in reducing latency."
}