
Depth-Aware Stitching Framework for Omnidirectional Vision

Abstract

Omnidirectional vision systems provide a comprehensive 360-degree panoramic view, enabling broad environmental awareness in a range of fields, including advanced driver assistance systems (ADAS) and virtual reality (VR). Existing omnidirectional stitching methods rely on a single specialized 360-degree camera. However, hardware constraints such as the need for high mounting heights and the presence of blind spots make it difficult to adapt these methods to vehicles of varying sizes and geometries. The challenges include limited generalizability due to reliance on predefined stitching regions for fixed camera arrays, performance degradation from distance parallax that produces large depth differences, and the absence of suitable datasets with ground truth for multi-camera omnidirectional systems. To overcome these challenges, we propose a novel omnidirectional stitching framework and a stitching dataset tailored to varying-distance scenarios with multiple cameras. The framework, designated OmniStitch, incorporates a Stitching Region Maximization (SRM) module for automated adaptation to diverse vehicles equipped with multiple cameras and a Depth-Aware Stitching (DAS) module to address depth discrepancies caused by distance parallax between cameras. Furthermore, we have created and released an omnidirectional stitching dataset, designated GV360, which provides ground-truth images that preserve the perspective of the 360-degree field of view (FOV). GV360 was designed explicitly to support vehicle-agnostic systems. Extensive evaluations on this dataset demonstrate that our framework outperforms state-of-the-art stitching models, particularly in handling varying parallax.


Table of Contents

1. Introduction
2. Related Work
2.1 Unidirectional Stitching
2.2 Omnidirectional Stitching
2.3 Image Stitching Datasets
3. Proposed Method
3.1 Overview
3.2 Stitching Region Maximization Module
3.3 Depth-Aware Stitching Module
3.4 Training Loss
4. GV360 Dataset
4.1 GV360 Dataset Construction
5. Experiments
5.1 Implementation Details
5.2 Baselines
5.3 Quantitative Results
5.4 Qualitative Results
5.5 Ablation Study
5.6 Inference Time and Efficiency
6. Conclusion
References
