Paper Translation: Sensor Fusion for Automobile Applications

Sensor Fusion for Automobile Applications

Personnel: Y. Fang (I. Masaki, B.K.P. Horn)

Sponsorship: Intelligent Transportation Research Center at MIT's MTL

Introduction

To increase the safety and efficiency of transportation systems, many automobile applications need to detect detailed obstacle information. Highway environment interpretation is important in intelligent transportation systems (ITS); it is expected to provide 3D segmentation information for the current road situation, i.e., the X, Y positions of objects in the images and their distance Z. The need for dynamic scene processing in real time places high requirements on the sensors in intelligent transportation systems. In a complicated driving environment, a single sensor is typically not enough to meet all of these requirements because of limitations in reliability, weather, and ambient lighting. Radar provides high distance resolution but is limited in horizontal resolution. A binocular vision system can provide better horizontal resolution, but the miscorrespondence problem makes it hard to detect accurate and robust Z distance information; furthermore, video cameras do not behave well in bad weather. Instead of developing a specialized imaging radar to meet the high ITS requirements, a sensor fusion system is composed of several low-cost, low-performance sensors, i.e., radar and stereo cameras, which takes advantage of the benefits of both. Typical 2D segmentation algorithms for vision systems are challenged by noisy static backgrounds and by variation in object position and size, which leads to false segmentations or segmentation errors. Typical tracking algorithms cannot remove the errors of the initial static segmentation, since there are significant changes between successive video frames. In order to provide accurate 3D segmentation information, we should not simply associate the distance information from the radar with the 2D segmentation information from the video camera; rather, the performance of each sensor in the fusion system is expected to be better than when that sensor is used alone.

Algorithm

Our fusion system introduces the distance information into the 2D segmentation process to improve its target segmentation performance. The relationship between the distance of an object and its stereo disparity is used to separate the original edge maps of the stereo images into several distance-based edge layers, in which we then detect whether there is an object, and where it is, by segmenting clustered image pixels with similar ranges.
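As a rough illustration of this layering step, the following Python sketch bins edge pixels by the standard stereo relation Z = f * B / d (depth from focal length, baseline, and disparity). The function name, calibration values, and layer boundaries are not taken from the report; they are assumptions chosen only to make the idea concrete, and the code assumes rectified stereo images with a precomputed disparity map.

    import numpy as np

    def split_edges_into_distance_layers(edge_map, disparity, focal_px, baseline_m, z_bins):
        # edge_map   : HxW boolean edge map of the reference image
        # disparity  : HxW disparity map in pixels (0 where no match was found)
        # focal_px   : focal length in pixels (assumed calibration value)
        # baseline_m : stereo baseline in metres (assumed calibration value)
        # z_bins     : list of (z_near, z_far) pairs, one per distance layer
        valid = edge_map & (disparity > 0)
        with np.errstate(divide="ignore"):
            depth = np.where(disparity > 0, focal_px * baseline_m / disparity, np.inf)
        # One binary edge layer per distance range.
        return [valid & (depth >= z0) & (depth < z1) for (z0, z1) in z_bins]

    # Illustrative layer boundaries in metres.
    example_bins = [(0, 10), (10, 20), (20, 40), (40, 80)]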

To guarantee robustness, a special morphological closing operation is introduced to delineate the vertical edges of candidate objects. We first dilate the edges to elongate them, so that the boundaries of target objects become longer than the noisy edges; an erosion operation then gets rid of the short edges. Since the longest vertical edges typically lie on object boundaries, the new distance-range-based segmentation method can detect targets with high accuracy and robustness, especially for vehicles in highway driving scenarios.
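The dilate-then-erode step can be sketched as follows, assuming OpenCV is available. The vertical kernel height and the minimum kept edge length are illustrative parameters, not values from the report; the point is only that closing with a vertical structuring element merges broken vertical edges, after which components that remain short are discarded as noise.

    import cv2
    import numpy as np

    def keep_long_vertical_edges(edge_layer, close_height=9, min_height=25):
        # edge_layer   : HxW binary edge layer for one distance range
        # close_height : height of the vertical structuring element (illustrative)
        # min_height   : minimum vertical extent of an edge segment that is kept
        img = (edge_layer > 0).astype(np.uint8) * 255
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, close_height))
        closed = cv2.erode(cv2.dilate(img, kernel), kernel)  # closing: dilate, then erode
        # Drop connected components whose vertical extent is still short.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(closed)
        kept = np.zeros_like(closed)
        for i in range(1, n):  # label 0 is the background
            if stats[i, cv2.CC_STAT_HEIGHT] >= min_height:
                kept[labels == i] = 255
        return kept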

For urban driving situations, heavy background noise such as trees usually causes miscorrespondences, leading to edge-separation errors. The false boundary edge lines in the background area can be even longer than the boundary edge lines of the objects, so it is hard to eliminate false bounding boxes in background areas without also eliminating foreground objects, and the noisy background adds difficulty in segmenting objects of different sizes. To enhance the segmentation performance, a background removal procedure is proposed. Without loss of generality, objects beyond some distance range are treated as background; pixels with small disparity therefore represent the background.
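A minimal sketch of this background removal, assuming a dense disparity map is available: pixels whose disparity falls below a threshold are treated as distant background and their edges are discarded. The threshold value here is illustrative; in practice it would follow from the chosen maximum foreground distance via d_min = f * B / Z_max.

    import numpy as np

    def remove_background_edges(edge_map, disparity, min_disparity=4.0):
        # Small disparity means large distance, so such pixels are treated as
        # background (trees, buildings, sky) and their edge pixels are dropped.
        # min_disparity is an illustrative threshold (see d_min = f*B/Z_max).
        return edge_map & (disparity >= min_disparity)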

Sometimes there is ambiguity in assigning an edge pixel to a distance layer, and without further information it is hard to decide among the multiple choices. Some algorithms simply pick one at random, which can be wrong in many situations. Typically, to avoid losing potential foreground pixels, such edge pixels are assigned to all of their candidate distance layers, and the edge-length filters suppress the resulting ambiguity noise. However, when the background noise is serious, the algorithm keeps only the edge pixels without multiple choices. Eliminating background pixels in this way also loses significant pixels of the target objects, making the segmented region smaller than its real size.
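One way to express the two assignment policies in code is sketched below. The candidate-layer interface is hypothetical (the report does not specify how the stereo matcher exposes its alternatives), but it captures the choice between voting an ambiguous edge pixel into every plausible layer and, under heavy background noise, keeping only unambiguous pixels.

    def assign_edges_to_layers(edge_pixels, candidates, n_layers, strict=False):
        # edge_pixels : iterable of (row, col) edge coordinates
        # candidates  : dict mapping (row, col) -> list of plausible layer indices
        #               (hypothetical interface to the stereo matcher)
        # strict      : when background noise is heavy, keep only pixels with a
        #               single unambiguous candidate layer
        layers = [set() for _ in range(n_layers)]
        for px in edge_pixels:
            choices = candidates.get(px, [])
            if strict and len(choices) != 1:
                continue  # discard ambiguous pixels in strict mode
            for i in choices:  # otherwise vote into every candidate layer
                layers[i].add(px)
        return layers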

Thus, motion-based segmentation region expansion is needed to compensate for this performance degradation. The original segmentation result is used as a set of initial object segmentation seeds, from which larger segmentation bounding boxes are expanded; the enlarging process is controlled by the similarity between a segmentation seed box and the surrounding edge pixels. With such region-growing operations, the accurate target sizes are captured. The proposed depth/motion-based segmentation procedure successfully removes the im
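The seed-box expansion can be sketched as a simple loop that grows the box in small steps and stops when the edge pixels in the newly covered ring no longer resemble the seed. Approximating "similarity" by disparity closeness is an assumption; the report only states that the growth is controlled by the similarity between the seed box and the surrounding edge pixels, and all parameter values below are illustrative.

    import numpy as np

    def grow_seed_box(box, edge_map, disparity, seed_disparity, tol=1.0, step=2, max_iter=20):
        # box            : (x0, y0, x1, y1) seed box from the static segmentation
        # seed_disparity : median disparity inside the seed box
        # tol, step      : similarity tolerance and growth step (illustrative)
        h, w = edge_map.shape
        x0, y0, x1, y1 = box
        for _ in range(max_iter):
            nx0, ny0 = max(0, x0 - step), max(0, y0 - step)
            nx1, ny1 = min(w, x1 + step), min(h, y1 + step)
            ring = np.zeros_like(edge_map, dtype=bool)
            ring[ny0:ny1, nx0:nx1] = True
            ring[y0:y1, x0:x1] = False      # ring = new area just outside the current box
            ring_edges = edge_map & ring
            if not ring_edges.any():
                break
            # Grow only while the newly covered edge pixels stay similar to the seed.
            if abs(np.median(disparity[ring_edges]) - seed_disparity) > tol:
                break
            x0, y0, x1, y1 = nx0, ny0, nx1, ny1
        return (x0, y0, x1, y1)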
