Abstract
The ability of automated vehicles (AVs) to determine the position of objects in three-dimensional space plays a key role in motion planning. Implementing algorithms that solve this problem is particularly difficult for systems that rely only on monocular cameras, for which depth estimation is a non-trivial task. Nevertheless, such systems are widespread due to their relatively low cost and ease of use. In this paper, we propose a method to determine the position of vehicles (the most common type of object in urban scenes) in the form of oriented bounding boxes in bird’s-eye view, based on an image obtained from a single monocular camera. The method consists of two steps. In the first step, a projection of the visible boundary of the vehicle onto the bird’s-eye view is computed from 2D obstacle detections and roadway segmentation in the image. The resulting projection is assumed to represent noisy measurements of two orthogonal sides of the vehicle. In the second step, an oriented bounding box is constructed around the obtained projection. For this stage, we propose a new bounding-box construction algorithm based on the L-shape model assumption. The algorithm was evaluated on a real-world dataset prepared for this purpose. The proposed L-shape algorithm outperformed the best of the compared algorithms by 2.7% in terms of the Jaccard coefficient (Intersection over Union, IoU).
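
For illustration only (this is not the algorithm proposed in the paper), the sketch below shows one common, generic way to fit an oriented bounding box to 2D bird’s-eye-view boundary points: candidate headings are scanned and the heading that yields the tightest (minimum-area) axis-aligned box in the rotated frame is kept. In search-based L-shape fitting, other scoring criteria (e.g., point-to-edge closeness or edge variance) are often used instead of area; the function and variable names here are purely illustrative.

import numpy as np

def fit_oriented_box(points, angle_step_deg=1.0):
    """Fit an oriented rectangle to Nx2 points by brute-force search over headings."""
    best = None
    for theta in np.deg2rad(np.arange(0.0, 90.0, angle_step_deg)):
        c, s = np.cos(theta), np.sin(theta)
        # Rotate the points into the candidate box frame (rotation by -theta).
        rot = points @ np.array([[c, s], [-s, c]]).T
        mins, maxs = rot.min(axis=0), rot.max(axis=0)
        area = np.prod(maxs - mins)  # score: a smaller box means a tighter fit
        if best is None or area < best[0]:
            best = (area, theta, mins, maxs)
    area, theta, mins, maxs = best
    # Box corners in the box frame, then rotated back to the world frame.
    corners_box = np.array([[mins[0], mins[1]], [maxs[0], mins[1]],
                            [maxs[0], maxs[1]], [mins[0], maxs[1]]])
    c, s = np.cos(theta), np.sin(theta)
    return corners_box @ np.array([[c, -s], [s, c]]).T

# Example: noisy points along two orthogonal sides of a vehicle (an "L" shape).
rng = np.random.default_rng(0)
side_a = np.stack([np.linspace(0.0, 4.0, 40), np.zeros(40)], axis=1)
side_b = np.stack([np.zeros(20), np.linspace(0.0, 1.8, 20)], axis=1)
pts = np.vstack([side_a, side_b]) + 0.03 * rng.standard_normal((60, 2))
print(fit_oriented_box(pts))  # four corners of the fitted oriented box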