Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86294


Title: Enhancing Feature Alignment for 3D Object Detection, Recognition, and Position Estimation using Deep Learning
Author: Lai, Hsiang-Te
Contributor: Institute of Software Engineering
Keywords: deep learning; convolutional neural network; object detection; feature alignment; pose estimation; YOLO
Date: 2021-07-28
Uploaded: 2021-12-07 12:28:26 (UTC+8)
Publisher: National Central University
    摘要: 近年來,大量研究人力的投入,使得卷積神經網路 (convolution neural network, CNN) 在物件偵測與辨識的技術漸趨成熟。在3D物件偵測的應用上,卷積神經網路也協助了許多自動化任務,像是自駕車、工廠的機械手臂自動化生產技術等。在這些3D的應用任務中,目標物件的精確三維空間資訊是最重要的;但目前卷積神經網路在三維空間方位估計的精準度,還有改善的空間,因此在本研究中,我們藉由感興趣區域卷積 (RoI convolution) 的協助,並加入原始深度資料的再淬鍊,以迴歸出更準確的物體類別、3D位置、3D尺寸、及3D旋轉角度。
    本研究的網路模式是從實驗室所發展的9DoF SE-YOLO偵測網路繼續修改而來,稱為 9DoF ADM-YOLO;主要改進的部份有:i.加入對齊偵測模組 (align detection module, ADM),使得網路能調整錨框之大小及尺寸,並精確地擷取框內的特徵,使得最終迴歸能獲得更準確的結果;ii.加入原始深度資料分支,該分支從原始深度影像擷取特徵,能夠保留較準確的空間資訊,使得空間方面推論更為精確。
    在實驗中,我們使用 NVIDIA “墜落物件” (Falling Things) 資料集做測試;該資料集中每組影像包含 RGB 彩色影像與 D 深度影像,一共20類物件,每一類物件包含1,000組影像,共20,000組影像;其中90%組做為訓練樣本,其餘為測試樣本。原始9DoF SE-YOLO物件偵測辨識系統的 mAP 為 93.59%;經過一連串的分析與改進後,最終的 9DoF ADM-YOLO,以 416×416 影像解析度進行測試,其平均執行速度為每秒33張影像,而其 mAP 達到 96.84%;相較於原架構在空間推論的結果,其3D位置估計有14%的提升和改進,3D尺寸估計提升了4%,而3D旋轉角度估計有20%提升。
    ;In recent years, many researchers have been devoting in the studying of convolution neural networks (CNNs), such that the development of CNNs for object detection and recognition is gradually matured. CNN techniques have applied on many kinds of automated tasks, such as autonomous car, autonomous production, etc. In these 3D application tasks, precise three-dimensional spatial information of target objects is the most important. However, CNNs still need to be improved in respect of three-dimensional spatial estimation. Thus, in this study, we develop a 3D detection CNN to get more accurate estimation on 3D object’s class, 3D position, 3D size, and 3D rotation angle by using RoI convolution and extra lower features of the original depth information.
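To make the 9-DoF output concrete, here is a minimal PyTorch sketch of a regression head that predicts the four quantities named above. It assumes (the abstract does not spell this out) that the nine degrees of freedom are 3 position + 3 size + 3 rotation components; all layer names and sizes are illustrative.

    import torch
    import torch.nn as nn

    class NineDoFHead(nn.Module):
        """Sketch of a 9-DoF regression head: per detection, class scores
        plus 3 position + 3 size + 3 rotation values (9 DoF in total).
        Layer names and sizes are illustrative, not taken from the thesis."""
        def __init__(self, in_features, num_classes=20):
            super().__init__()
            self.cls = nn.Linear(in_features, num_classes)  # object class scores
            self.pos = nn.Linear(in_features, 3)            # 3D position (x, y, z)
            self.size = nn.Linear(in_features, 3)           # 3D size (w, h, l)
            self.rot = nn.Linear(in_features, 3)            # 3D rotation angles

        def forward(self, feats):
            return self.cls(feats), self.pos(feats), self.size(feats), self.rot(feats)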
The proposed model, named 9DoF ADM-YOLO, is modified from the 9DoF SE-YOLO detection network previously developed in our laboratory. The key improvements are (i) an align detection module (ADM), which lets the single-stage detector adjust the size of its anchor boxes and then extract precisely aligned features inside those boxes, so that the final regression produces more accurate results, and (ii) a raw-depth branch, which extracts low-level features directly from the raw depth image and thus preserves more precise spatial information for spatial inference; both ideas are sketched below.
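Only the abstract of the thesis is available here, so the following is a rough PyTorch sketch of the two improvements under stated assumptions: torchvision's roi_align stands in for the RoI convolution described above, and the module names (AlignDetectionModule, DepthBranch) and all layer sizes are hypothetical.

    import torch
    import torch.nn as nn
    from torchvision.ops import roi_align

    class AlignDetectionModule(nn.Module):
        """Sketch of the ADM idea: given refined anchor boxes, re-sample
        well-aligned features inside each box before the final regression."""
        def __init__(self, channels):
            super().__init__()
            self.post = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

        def forward(self, fmap, boxes):
            # fmap: (N, C, H, W) backbone features; boxes: list of (K, 4)
            # refined anchor boxes per image, as (x1, y1, x2, y2) in
            # input-image coordinates; spatial_scale maps 416x416 input
            # coordinates onto the feature map.
            rois = roi_align(fmap, boxes, output_size=(7, 7),
                             spatial_scale=fmap.shape[-1] / 416.0,
                             sampling_ratio=2, aligned=True)
            return self.post(rois)  # (total K, C, 7, 7) aligned per-box features

    class DepthBranch(nn.Module):
        """Sketch of the raw-depth branch: a shallow CNN over the raw depth
        image whose output is fused with the RGB backbone features, e.g.
        torch.cat([rgb_feats, depth_feats], dim=1)."""
        def __init__(self, out_channels):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, out_channels, 3, stride=2, padding=1), nn.ReLU(inplace=True))

        def forward(self, depth):   # depth: (N, 1, H, W) raw depth image
            return self.net(depth)  # (N, out_channels, H/4, W/4)

The design intuition, as described in the abstract, is that roi_align-style sampling avoids the feature misalignment of coarse anchor grids, while the depth branch keeps spatial cues that would be diluted by deep RGB feature extraction.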
In the experiments, we used the "Falling Things" dataset released by NVIDIA to validate the proposed model. Each image pair in the dataset consists of an RGB color image and a depth image; there are 20 object classes with 1,000 image pairs each, 20,000 pairs in total, of which 90% were used for training and the rest for testing. The mAP of the original 9DoF SE-YOLO detector is 93.59%. After a series of analyses and modifications, the final 9DoF ADM-YOLO reaches an mAP of 96.84% at an average speed of 33 frames per second on 416×416 images. Compared with the spatial inference of the original architecture, it improves 3D position estimation by 14%, 3D size estimation by 4%, and 3D rotation angle estimation by 20%.
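As a small how-to, a per-class 90/10 split of the 20,000 RGB-D pairs could look like the sketch below; the function name and the (rgb_path, depth_path) listing format are assumptions for illustration, not the dataset's actual tooling.

    import random

    def split_falling_things(pairs_by_class, train_ratio=0.9, seed=0):
        """Per-class 90/10 split of RGB-D image pairs, as described above:
        20 classes x 1,000 pairs = 20,000 pairs, i.e. 18,000 train / 2,000 test.
        pairs_by_class maps class name -> list of (rgb_path, depth_path)."""
        rng = random.Random(seed)
        train, test = [], []
        for cls, pairs in pairs_by_class.items():
            pairs = list(pairs)           # copy before shuffling
            rng.shuffle(pairs)
            cut = int(len(pairs) * train_ratio)
            train += [(cls, p) for p in pairs[:cut]]
            test += [(cls, p) for p in pairs[cut:]]
        return train, test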
Appears in Collections: [Institute of Software Engineering] Master's and Doctoral Theses

Files in This Item:

    File          Description    Size    Format
    index.html                   0Kb     HTML


All items in NCUIR are protected by the original copyright.
