
Are 100+ Open-Source Visual SLAM Systems Enough for You?


Preface:

I have recently been looking into visual SLAM algorithms and found this article's summary excellent, so I am sharing it with anyone who needs it. Thanks to the original author for the compilation.

Overview:

 1. This article is compiled from my GitHub repository (which includes open-source SLAM systems and recent papers):

Visual_SLAM_Related_Research (github.com)

 2. This article loosely divides the systems into the following 7 categories (naturally, quite a few works resist clean classification, e.g. a dynamic, semantic, dense-mapping VISLAM +_+):

  • I. Geometric SLAM
  • II. Semantic / Deep SLAM
  • III. Multi-Landmarks / Object SLAM
  • IV. Sensor Fusion
  • V. Dynamic SLAM
  • VI. Mapping
  • VII. Optimization

3. Since the repository has been maintained since March 2019 (and made public in March 2020), the code below, apart from the classic frameworks, is mostly from 2019 onward. I also focus mainly on VO, object-level SLAM, and multi-landmark SLAM, so this collection is incomplete and cannot cover all visual SLAM research. If you know of good systems, please share them in an issue or in the comments.

I. Geometric SLAM (23 entries)

This category covers traditional geometric SLAM based on feature points, direct methods, or semi-direct methods.

1. PTAM

  • Paper: Klein G, Murray D. Parallel tracking and mapping for small AR workspaces[C]//Mixed and Augmented Reality, 2007. ISMAR 2007. 6th IEEE and ACM International Symposium on. IEEE, 2007: 225-234.
  • Code: github.com/Oxford-PTAM/
  • Project page: robots.ox.ac.uk/~gk/PTA
  • Other research by the authors: robots.ox.ac.uk/~gk/pub

2. S-PTAM (stereo PTAM)

  • Paper: Taihú Pire, Thomas Fischer, Gastón Castro, Pablo De Cristóforis, Javier Civera and Julio Jacobo Berlles. S-PTAM: Stereo Parallel Tracking and Mapping. Robotics and Autonomous Systems, 2017.
  • Code: github.com/lrse/sptam
  • Other papers by the authors: Castro G, Nitsche M A, Pire T, et al. Efficient on-board Stereo SLAM through constrained-covisibility strategies[J]. Robotics and Autonomous Systems, 2019.

3. MonoSLAM

  • Paper: Davison A J, Reid I D, Molton N D, et al. MonoSLAM: Real-time single camera SLAM[J]. IEEE transactions on pattern analysis and machine intelligence, 2007, 29(6): 1052-1067.
  • Code: github.com/hanmekim/Sce

4. ORB-SLAM2

  • Paper: Mur-Artal R, Tardós J D. Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
  • Code: github.com/raulmur/ORB_
  • Other papers by the authors:
    • Monocular semi-dense mapping: Mur-Artal R, Tardós J D. Probabilistic Semi-Dense Mapping from Highly Accurate Feature-Based Monocular SLAM[C]//Robotics: Science and Systems. 2015.
    • VIORB: Mur-Artal R, Tardós J D. Visual-inertial monocular SLAM with map reuse[J]. IEEE Robotics and Automation Letters, 2017, 2(2): 796-803.
    • Multi-map: Elvira R, Tardós J D, Montiel J M M. ORBSLAM-Atlas: a robust and accurate multi-map system[J]. arXiv preprint arXiv:1908.11585, 2019.

Items 5, 6, 7, and 8 below are all from the TUM Computer Vision Group (official homepage).

5. DSO

  • Paper: Engel J, Koltun V, Cremers D. Direct sparse odometry[J]. IEEE transactions on pattern analysis and machine intelligence, 2017, 40(3): 611-625.
  • Code: github.com/JakobEngel/d
  • Stereo DSO: Wang R, Schworer M, Cremers D. Stereo DSO: Large-scale direct sparse visual odometry with stereo cameras[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 3903-3911.
  • VI-DSO: Von Stumberg L, Usenko V, Cremers D. Direct sparse visual-inertial odometry using dynamic marginalization[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 2510-2517.

6. LDSO

  • Xiang Gao's work adding loop closure to DSO
  • Paper: Gao X, Wang R, Demmel N, et al. LDSO: Direct sparse odometry with loop closure[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 2198-2204.
  • Code: github.com/tum-vision/L

7. LSD-SLAM

  • Paper: Engel J, Schöps T, Cremers D. LSD-SLAM: Large-scale direct monocular SLAM[C]//European conference on computer vision. Springer, Cham, 2014: 834-849.
  • Code: github.com/tum-vision/l

8. DVO-SLAM

  • Paper: Kerl C, Sturm J, Cremers D. Dense visual SLAM for RGB-D cameras[C]//2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2013: 2100-2106.
  • Code 1: github.com/tum-vision/d
  • Code 2: github.com/tum-vision/d
  • Other papers:
    • Kerl C, Sturm J, Cremers D. Robust odometry estimation for RGB-D cameras[C]//2013 IEEE international conference on robotics and automation. IEEE, 2013: 3748-3754.
    • Steinbrücker F, Sturm J, Cremers D. Real-time visual odometry from dense RGB-D images[C]//2011 IEEE international conference on computer vision workshops (ICCV Workshops). IEEE, 2011: 719-722.

9. SVO

  • Robotics and Perception Group, University of Zurich
  • Paper: Forster C, Pizzoli M, Scaramuzza D. SVO: Fast semi-direct monocular visual odometry[C]//2014 IEEE international conference on robotics and automation (ICRA). IEEE, 2014: 15-22.
  • Code: github.com/uzh-rpg/rpg_
  • Forster C, Zhang Z, Gassner M, et al. SVO: Semidirect visual odometry for monocular and multicamera systems[J]. IEEE Transactions on Robotics, 2016, 33(2): 249-265.

10. DSM

  • Paper: Zubizarreta J, Aguinaga I, Montiel J M M. Direct sparse mapping[J]. arXiv preprint arXiv:1904.06577, 2019.
  • Code: github.com/jzubizarreta ; Video

11. openvslam

  • Paper: Sumikura S, Shibuya M, Sakurada K. OpenVSLAM: A Versatile Visual SLAM Framework[C]//Proceedings of the 27th ACM International Conference on Multimedia. 2019: 2292-2295.
  • Code: github.com/xdspacelab/o ; documentation

12. se2lam (visual odometry for ground-vehicle pose estimation)

  • Paper: Zheng F, Liu Y H. Visual-Odometric Localization and Mapping for Ground Vehicles Using SE (2)-XYZ Constraints[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 3556-3562.
  • Code: github.com/izhengfan/se
  • Another work by the same author
    • Paper: Zheng F, Tang H, Liu Y H. Odometry-vision-based ground vehicle motion estimation with se (2)-constrained se (3) poses[J]. IEEE transactions on cybernetics, 2018, 49(7): 2652-2663.

13. GraphSfM (graph-based parallel large-scale SfM)

  • Paper: Chen Y, Shen S, Chen Y, et al. Graph-Based Parallel Large Scale Structure from Motion[J]. arXiv preprint arXiv:1912.10659, 2019.
  • Code: github.com/AIBluefisher

14. LCSD_SLAM (loosely-coupled semi-direct monocular SLAM)

  • Paper: Lee S H, Civera J. Loosely-Coupled semi-direct monocular SLAM[J]. IEEE Robotics and Automation Letters, 2018, 4(2): 399-406.
  • Code: github.com/sunghoon031/ ; Google Scholar ; demo video
  • Another paper by the author on monocular scale (code available): Lee S H, de Croon G. Stability-based scale estimation for monocular SLAM[J]. IEEE Robotics and Automation Letters, 2018, 3(2): 780-787.

15. RESLAM (edge-based SLAM)

  • Paper: Schenk F, Fraundorfer F. RESLAM: A real-time robust edge-based SLAM system[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 154-160.
  • Code: github.com/fabianschenk ; project homepage

16. scale_optimization (extending monocular DSO to stereo)

  • Paper: Mo J, Sattar J. Extending Monocular Visual Odometry to Stereo Camera System by Scale Optimization[C]. International Conference on Intelligent Robots and Systems (IROS), 2019.
  • Code: github.com/jiawei-mo/sc

17. BAD-SLAM (direct RGB-D SLAM)

  • Paper: Schops T, Sattler T, Pollefeys M. BAD SLAM: Bundle Adjusted Direct RGB-D SLAM[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 134-144.
  • Code: github.com/ETH3D/badsla

18. GSLAM (a general framework integrating ORB-SLAM2, DSO, and SVO)

  • Paper: Zhao Y, Xu S, Bu S, et al. GSLAM: A general SLAM framework and benchmark[C]//Proceedings of the IEEE International Conference on Computer Vision. 2019: 1110-1120.
  • Code: github.com/zdzhaoyong/G

19. ARM-VO (monocular VO running on ARM CPUs)

  • Paper: Nejad Z Z, Ahmadabadian A H. ARM-VO: an efficient monocular visual odometry for ground vehicles on ARM CPUs[J]. Machine Vision and Applications, 2019: 1-10.
  • Code: github.com/zanazakaryai

20. cvo-rgbd (direct RGB-D VO)

  • Paper: Ghaffari M, Clark W, Bloch A, et al. Continuous Direct Sparse Visual Odometry from RGB-D Images[J]. arXiv preprint arXiv:1904.02266, 2019.
  • Code: github.com/MaaniGhaffar

21. Map2DFusion (UAV image mosaicing with monocular SLAM)

  • Paper: Bu S, Zhao Y, Wan G, et al. Map2DFusion: Real-time incremental UAV image mosaicing based on monocular slam[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4564-4571.
  • Code: github.com/zdzhaoyong/M

22. CCM-SLAM (collaborative multi-robot monocular SLAM)

  • Paper: Schmuck P, Chli M. CCM-SLAM: Robust and efficient centralized collaborative monocular simultaneous localization and mapping for robotic teams[J]. Journal of Field Robotics, 2019, 36(4): 763-781.
  • Code: github.com/VIS4ROB-lab/ | Video

23. ORB-SLAM3

  • Paper: Carlos Campos, Richard Elvira, et al. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM[J]. arXiv preprint arXiv:2007.11898, 2020.
  • Code: github.com/UZ-SLAMLab/O | Video

II. Semantic / Deep SLAM (16 entries)

Work combining SLAM with deep learning currently falls into two main directions: incorporating semantic information into mapping, pose estimation, and other stages, or completing some step of SLAM end-to-end (e.g. VO or loop closure). I have not followed the latter closely; additions via issues are likewise welcome.

24. MaskFusion

  • Paper: Runz M, Buffier M, Agapito L. Maskfusion: Real-time recognition, tracking and reconstruction of multiple moving objects[C]//2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2018: 10-20.
  • Code: github.com/martinruenz/

25. SemanticFusion

  • Paper: McCormac J, Handa A, Davison A, et al. Semanticfusion: Dense 3d semantic mapping with convolutional neural networks[C]//2017 IEEE International Conference on Robotics and automation (ICRA). IEEE, 2017: 4628-4635.
  • Code: github.com/seaun163/sem

26. semantic_3d_mapping

  • Paper: Yang S, Huang Y, Scherer S. Semantic 3D occupancy mapping through efficient high order CRFs[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017: 590-597.
  • Code: github.com/shichaoy/sem

27. Kimera (an open-source library for real-time metric-semantic localization and mapping)

  • Paper: Rosinol A, Abate M, Chang Y, et al. Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping[J]. arXiv preprint arXiv:1910.02490, 2019.
  • Code: github.com/MIT-SPARK/Ki ; demo video

28. NeuroSLAM (brain-inspired SLAM)

  • Paper: Yu F, Shang J, Hu Y, et al. NeuroSLAM: a brain-inspired SLAM system for 3D environments[J]. Biological Cybernetics, 2019: 1-31.
  • Code: github.com/cognav/Neuro
  • The fourth author is the author of RatSLAM; the paper also compares more than ten brain-inspired SLAM systems.

29. gradSLAM (dense SLAM with automatic differentiation)

  • Paper: Jatavallabhula K M, Iyer G, Paull L. gradSLAM: Dense SLAM meets Automatic Differentiation[J]. arXiv preprint arXiv:1910.10672, 2019.
  • Code (expected April 2020): github.com/montrealrobo ; project homepage, demo video

30. Semantic mapping with ORB-SLAM2 + object detection/segmentation

  • github.com/floatlazer/s
  • github.com/qixuxiang/or
  • github.com/Ewenwan/ORB_

31. SIVO (semantics-assisted feature selection)

  • Paper: Ganti P, Waslander S. Network Uncertainty Informed Semantic Feature Selection for Visual SLAM[C]//2019 16th Conference on Computer and Robot Vision (CRV). IEEE, 2019: 121-128.
  • Code: github.com/navganti/SIV

32. FILD (incremental loop closure detection with proximity graphs)

  • Paper: Shan An, Guangfu Che, Fangru Zhou, Xianglong Liu, Xin Ma, Yu Chen. Fast and Incremental Loop Closure Detection using Proximity Graphs. pp. 378-385, The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019)
  • Code: github.com/AnshanTJU/FI

33. object-detection-sptam (object detection combined with stereo SLAM)

  • Paper: Pire T, Corti J, Grinblat G. Online Object Detection and Localization on Stereo Visual SLAM System[J]. Journal of Intelligent & Robotic Systems, 2019: 1-10.
  • Code: github.com/CIFASIS/obje

34. Map Slammer (monocular depth estimation + SLAM)

  • Paper: Torres-Camara J M, Escalona F, Gomez-Donoso F, et al. Map Slammer: Densifying Scattered KSLAM 3D Maps with Estimated Depth[C]//Iberian Robotics conference. Springer, Cham, 2019: 563-574.
  • Code: github.com/jmtc7/mapSla

35. NOLBO (probabilistic SLAM with a variational observation model)

  • Paper: Yu H, Lee B. Not Only Look But Observe: Variational Observation Model of Scene-Level 3D Multi-Object Understanding for Probabilistic SLAM[J]. arXiv preprint arXiv:1907.09760, 2019.
  • Code: github.com/bogus2000/NO

36. GCNv2_SLAM (SLAM with learned GCN keypoints and descriptors)

  • Paper: Tang J, Ericson L, Folkesson J, et al. GCNv2: Efficient correspondence prediction for real-time SLAM[J]. IEEE Robotics and Automation Letters, 2019, 4(4): 3505-3512.
  • Code: github.com/jiexiong2016 ; Video

37. semantic_suma (LiDAR semantic mapping)

  • Paper: Chen X, Milioto A, Palazzolo E, et al. SuMa++: Efficient LiDAR-based semantic SLAM[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 4530-4537.
  • Code: github.com/PRBonn/seman ; Video

38. Neural-SLAM (active neural SLAM)

  • Paper: Chaplot D S, Gandhi D, Gupta S, et al. Learning to explore using active neural slam[C]. ICLR 2020.
  • Code: github.com/devendrachap

39. TartanVO: a generalizable learning-based VO

  • Paper: Wang W, Hu Y, Scherer S. TartanVO: A Generalizable Learning-based VO[J]. arXiv preprint arXiv:2011.00359, 2020.
  • Code: github.com/castacks/tar
  • Dataset: IROS 2020, TartanAir: A Dataset to Push the Limits of Visual SLAM; dataset page

III. Multi-Landmarks / Object SLAM (15 entries)

Point, line, and plane multi-landmark SLAM and object-level SLAM could in principle be filed under Geometric SLAM and Semantic SLAM, but I am particularly interested in this direction (it is also my graduate research topic), so it gets its own category. Open-source systems here are relatively few, but very interesting.
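To give a flavor of what line-feature residuals look like in the point-line systems below, here is a sketch of the point-to-line distance that is commonly minimized when comparing projected line endpoints against a detected 2D line. All coordinates are hypothetical, and this is only the geometric core, not any specific system's formulation.

```python
import math

def point_to_line_distance(p, a, b):
    """Perpendicular distance from 2D point p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    # |cross((b - a), (p - a))| / |b - a|
    num = abs((bx - ax) * (py - ay) - (by - ay) * (px - ax))
    den = math.hypot(bx - ax, by - ay)
    return num / den

# Distance from (1, 1) to the x-axis (line through (0,0) and (2,0)) -> 1.0
d = point_to_line_distance((1.0, 1.0), (0.0, 0.0), (2.0, 0.0))
```

In practice such residuals are summed over both projected endpoints of each 3D line and jointly optimized with point reprojection errors.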

40. PL-SVO (point-line SVO)

  • Paper: Gomez-Ojeda R, Briales J, Gonzalez-Jimenez J. PL-SVO: Semi-direct Monocular Visual Odometry by combining points and line segments[C]//Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016: 4211-4216.
  • Code: github.com/rubengooj/pl

41. stvo-pl (stereo point-line VO)

  • Paper: Gomez-Ojeda R, Gonzalez-Jimenez J. Robust stereo visual odometry through a probabilistic combination of points and line segments[C]//2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016: 2521-2526.
  • Code: github.com/rubengooj/st

42. PL-SLAM (point-line SLAM)

  • Paper: Gomez-Ojeda R, Zuñiga-Noël D, Moreno F A, et al. PL-SLAM: a Stereo SLAM System through the Combination of Points and Line Segments[J]. arXiv preprint arXiv:1705.09479, 2017.
  • Gomez-Ojeda R, Moreno F A, Zuñiga-Noël D, et al. PL-SLAM: a stereo SLAM system through the combination of points and line segments[J]. IEEE Transactions on Robotics, 2019, 35(3): 734-746.

43. PL-VIO

  • Paper: He Y, Zhao J, Guo Y, et al. PL-VIO: Tightly-coupled monocular visual–inertial odometry using point and line features[J]. Sensors, 2018, 18(4): 1159.
  • Code: github.com/HeYijia/PL-V
  • VINS + line segments: github.com/Jichao-Peng/

44. lld-slam (a learnable line segment descriptor for SLAM)

  • Paper: Vakhitov A, Lempitsky V. Learnable line segment descriptor for visual SLAM[J]. IEEE Access, 2019, 7: 39923-39934.
  • Code: github.com/alexandervak ; Video

There is plenty more work combining points and lines; within China, for example:

+ Danping Zou's group at Shanghai Jiao Tong University: Zou D, Wu Y, Pei L, et al. StructVIO: visual-inertial odometry with structural regularity of man-made environments[J]. IEEE Transactions on Robotics, 2019, 35(4): 999-1013.

+ Zhejiang University: Zuo X, Xie X, Liu Y, et al. Robust visual SLAM with point and line features[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017: 1775-1782.

45. PlaneSLAM

  • Paper: Wietrzykowski J. On the representation of planes for efficient graph-based slam with high-level features[J]. Journal of Automation Mobile Robotics and Intelligent Systems, 2016, 10.
  • Code: github.com/LRMPUT/Plane
  • Another open-source repository by the author, for which no corresponding paper was found: github.com/LRMPUT/PUTSL

46. Eigen-Factors (plane alignment with eigen-factors)

  • Paper: Ferrer G. Eigen-Factors: Plane Estimation for Multi-Frame and Time-Continuous Point Cloud Alignment[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 1278-1284.
  • Code: gitlab.com/gferrer/eige ; demo video

47. PlaneLoc

  • Paper: Wietrzykowski J, Skrzypczyński P. PlaneLoc: Probabilistic global localization in 3-D using local planar features[J]. Robotics and Autonomous Systems, 2019, 113: 160-173.

48. Pop-up SLAM

  • Paper: Yang S, Song Y, Kaess M, et al. Pop-up slam: Semantic monocular plane slam for low-texture environments[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 1222-1229.
  • Code: github.com/shichaoy/pop

49. Object SLAM

  • Paper: Mu B, Liu S Y, Paull L, et al. Slam with objects using a nonparametric pose graph[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4602-4609.
  • Code: github.com/BeipengMu/ob ; Video

50. voxblox-plusplus (object-level voxel mapping)

  • Paper: Grinvald M, Furrer F, Novkovic T, et al. Volumetric instance-aware semantic mapping and 3D object discovery[J]. IEEE Robotics and Automation Letters, 2019, 4(3): 3037-3044.
  • Code: github.com/ethz-asl/vox

51. Cube SLAM

  • Paper: Yang S, Scherer S. Cubeslam: Monocular 3-d object slam[J]. IEEE Transactions on Robotics, 2019, 35(4): 925-938.
  • Code: github.com/shichaoy/cub
  • Yes, this is the work that got me into the field. I read this paper (then a preprint) in November 2018 and began studying object-level SLAM; my notes and summary on Cube SLAM: link.
  • There are also many interesting object-level SLAM works that are not open source:
    • Ok K, Liu K, Frey K, et al. Robust Object-based SLAM for High-speed Autonomous Navigation[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 669-675.
    • Li J, Meger D, Dudek G. Semantic Mapping for View-Invariant Relocalization[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 7108-7115.
    • Nicholson L, Milford M, Sünderhauf N. Quadricslam: Dual quadrics from object detections as landmarks in object-oriented slam[J]. IEEE Robotics and Automation Letters, 2018, 4(1): 1-8.

52. VPS-SLAM (planar semantic SLAM)

  • Paper: Bavle H, De La Puente P, How J, et al. VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems[J]. IEEE Access, 2020.
  • Code: bitbucket.org/hridaybav

53. Structure-SLAM-PointLine (monocular point-line SLAM for indoor environments)

  • Paper: Li Y, Brasch N, Wang Y, et al. Structure-SLAM: Low-Drift Monocular SLAM in Indoor Environments[J]. IEEE Robotics and Automation Letters, 2020, 5(4): 6583-6590.
  • Code: github.com/yanyan-li/St

54. PL-VINS

  • Paper: Fu Q, Wang J, Yu H, et al. PL-VINS: Real-Time Monocular Visual-Inertial SLAM with Point and Line[J]. arXiv preprint arXiv:2009.07462, 2020.
  • Code: github.com/cnqiangfu/PL

IV. Sensor Fusion (19 entries)

For sensor fusion I have only followed visual + inertial combinations; other sensors such as LiDAR and GPS get less attention here.
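To illustrate the inertial side of the visual-inertial systems below, here is a deliberately simplified 1D sketch of IMU preintegration between two camera frames. It assumes the accelerometer samples are already bias- and gravity-compensated, which real VIO systems must estimate online; the sample values and time step are hypothetical.

```python
def preintegrate_1d(accels, dt):
    """Integrate compensated 1D accelerometer samples between two frames.

    Returns (delta_v, delta_p): the velocity and position change accumulated
    over the interval, independent of the absolute starting state.
    """
    dv, dp = 0.0, 0.0
    for a in accels:
        # Update position first, using the velocity at the start of this step
        dp += dv * dt + 0.5 * a * dt * dt
        dv += a * dt
    return dv, dp

# Four samples of 1 m/s^2 at 0.5 s each (2 s total of constant acceleration):
dv, dp = preintegrate_1d([1.0, 1.0, 1.0, 1.0], 0.5)
# -> dv = 2.0 m/s, dp = 2.0 m  (matches v = a*t and p = a*t^2/2)
```

Preintegration methods such as CPI below refine this idea so that the integrated quantities need not be recomputed when the estimated biases change during optimization.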

55. msckf_vio

  • Paper: Sun K, Mohta K, Pfrommer B, et al. Robust stereo visual inertial odometry for fast autonomous flight[J]. IEEE Robotics and Automation Letters, 2018, 3(2): 965-972.
  • Code: github.com/KumarRobotic ; Video

56. rovio

  • Paper: Bloesch M, Omari S, Hutter M, et al. Robust visual inertial odometry using a direct EKF-based approach[C]//2015 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE, 2015: 298-304.
  • Code: github.com/ethz-asl/rov ; Video

57. R-VIO

  • Paper: Huai Z, Huang G. Robocentric visual-inertial odometry[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 6319-6326.
  • Code: github.com/rpng/R-VIO ; Video

58. okvis

  • Paper: Leutenegger S, Lynen S, Bosse M, et al. Keyframe-based visual–inertial odometry using nonlinear optimization[J]. The International Journal of Robotics Research, 2015, 34(3): 314-334.
  • Code: github.com/ethz-asl/okv

59. VIORB

  • Paper: Mur-Artal R, Tardós J D. Visual-inertial monocular SLAM with map reuse[J]. IEEE Robotics and Automation Letters, 2017, 2(2): 796-803.
  • Code: github.com/jingpang/Lea (VIORB itself was never open-sourced; this is a reimplementation by Jing Wang)
  • VI-ORB-SLAM2: github.com/YoujieXia/VI

60. VINS-mono

  • Paper: Qin T, Li P, Shen S. Vins-mono: A robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.
  • Code: github.com/HKUST-Aerial
  • Stereo version, VINS-Fusion: github.com/HKUST-Aerial
  • Mobile version, VINS-Mobile: github.com/HKUST-Aerial

61. VINS-RGBD

  • Paper: Shan Z, Li R, Schwertfeger S. RGBD-Inertial Trajectory Estimation and Mapping for Ground Robots[J]. Sensors, 2019, 19(10): 2251.
  • Code: github.com/STAR-Center/ ; Video

62. Open-VINS

  • Paper: Geneva P, Eckenhoff K, Lee W, et al. Openvins: A research platform for visual-inertial estimation[C]//IROS 2019 Workshop on Visual-Inertial Navigation: Challenges and Applications, Macau, China. IROS 2019.
  • Code: github.com/rpng/open_vi

63. versavis (a versatile visual-inertial sensor suite)

  • Paper: Tschopp F, Riner M, Fehr M, et al. VersaVIS—An Open Versatile Multi-Camera Visual-Inertial Sensor Suite[J]. Sensors, 2020, 20(5): 1439.
  • Code: github.com/ethz-asl/ver

64. CPI (closed-form preintegration for visual-inertial fusion)

  • Paper: Eckenhoff K, Geneva P, Huang G. Closed-form preintegration methods for graph-based visual–inertial navigation[J]. The International Journal of Robotics Research, 2018.
  • Code: github.com/rpng/cpi ; Video

65. TUM Basalt

  • Paper: Usenko V, Demmel N, Schubert D, et al. Visual-inertial mapping with non-linear factor recovery[J]. IEEE Robotics and Automation Letters, 2019.
  • Code: github.com/VladyslavUse ; Video; Project Page

66. Limo: LiDAR-monocular visual odometry

  • Paper: Graeter J, Wilczynski A, Lauer M. Limo: Lidar-monocular visual odometry[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 7872-7879.
  • Code: github.com/johannes-gra ; Video

67. LARVIO(多狀态限制卡爾曼濾波的單目 VIO)

  • 論文:Qiu X, Zhang H, Fu W, et al. Monocular Visual-Inertial Odometry with an Unbiased Linear System Model and Robust Feature Tracking Front-End[J]. Sensors, 2019, 19(8): 1941.
  • 代碼:github.com/PetWorm/LARV
  • 北航邱笑晨博士的一項工作

68. vig-init (rapid visual-inertial initialization with vertical edges)

  • Paper: Li J, Bao H, Zhang G. Rapid and Robust Monocular Visual-Inertial Initialization with Gravity Estimation via Vertical Edges[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 6230-6236.
  • Code: github.com/zju3dv/vig-i

69. vilib (a VIO front-end library)

  • Paper: Nagy B, Foehn P, Scaramuzza D. Faster than FAST: GPU-Accelerated Frontend for High-Speed VIO[J]. arXiv preprint arXiv:2003.13493, 2020.
  • Code: github.com/uzh-rpg/vili

70. Kimera-VIO

  • Paper: A. Rosinol, M. Abate, Y. Chang, L. Carlone. Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping. IEEE Intl. Conf. on Robotics and Automation (ICRA), 2020.
  • Code: github.com/MIT-SPARK/Ki

71. maplab (a visual-inertial mapping framework)

  • Paper: Schneider T, Dymczyk M, Fehr M, et al. maplab: An open framework for research in visual-inertial mapping and localization[J]. IEEE Robotics and Automation Letters, 2018, 3(3): 1418-1425.
  • Code: github.com/ethz-asl/map
  • Multi-session mapping, map merging, visual-inertial batch optimization, and loop closure

72. lili-om:固态雷達慣性裡程計與建圖

  • 論文:Li K, Li M, Hanebeck U D. Towards high-performance solid-state-lidar-inertial odometry and mapping[J]. arXiv preprint arXiv:2010.13150, 2020.
  • 代碼:github.com/KIT-ISAS/lil

73. CamVox: LiDAR-assisted visual SLAM

  • Paper: ZHU, Yuewen, et al. CamVox: A Low-cost and Accurate Lidar-assisted Visual SLAM System. arXiv preprint arXiv:2011.11357, 2020.
  • Code: github.com/ISEE-Technol

V. Dynamic SLAM (8 entries)

Dynamic SLAM is also a topic well worth studying, but it is hard to classify cleanly: much of the work uses semantic information or targets 3D reconstruction. The collection here is relatively small; additions via issues are welcome.

74. DynamicSemanticMapping (dynamic semantic mapping)

  • Paper: Kochanov D, Ošep A, Stückler J, et al. Scene flow propagation for semantic mapping and object discovery in dynamic street scenes[C]//Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016: 1785-1792.
  • Code: github.com/ganlumomo/Dy ; wiki

75. DS-SLAM (dynamic semantic SLAM)

  • Paper: Yu C, Liu Z, Liu X J, et al. DS-SLAM: A semantic visual SLAM towards dynamic environments[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 1168-1174.
  • Code: github.com/ivipsourceco

76. Co-Fusion (real-time segmentation and tracking of multiple objects)

  • Paper: Rünz M, Agapito L. Co-fusion: Real-time segmentation, tracking and fusion of multiple objects[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 4471-4478.
  • Code: github.com/martinruenz/ ; Video

77. DynamicFusion

  • Paper: Newcombe R A, Fox D, Seitz S M. Dynamicfusion: Reconstruction and tracking of non-rigid scenes in real-time[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015: 343-352.
  • Code: github.com/mihaibujanca

78. ReFusion (3D reconstruction in dynamic scenes using residuals)

  • Paper: Palazzolo E, Behley J, Lottes P, et al. ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals[J]. arXiv preprint arXiv:1905.02082, 2019.
  • Code: github.com/PRBonn/refus ; Video

79. DynSLAM (large-scale outdoor dense reconstruction)

  • Paper: Bârsan I A, Liu P, Pollefeys M, et al. Robust dense mapping for large-scale dynamic environments[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 7510-7517.
  • Code: github.com/AndreiBarsan
  • The author's thesis: Barsan I A. Simultaneous localization and mapping in dynamic scenes[D]. ETH Zurich, Department of Computer Science, 2017.

80. VDO-SLAM (dynamic-object-aware SLAM)

  • Paper: Zhang J, Henein M, Mahony R, et al. VDO-SLAM: A Visual Dynamic Object-aware SLAM System[J]. arXiv preprint arXiv:2005.11052, 2020. (under review at IJRR)
    • Related papers
      • IROS 2020 Robust Ego and Object 6-DoF Motion Estimation and Tracking
      • ICRA 2020 Dynamic SLAM: The Need For Speed
  • Code: github.com/halajun/VDO_ | video

VI. Mapping (21 entries)

Mapping work goes in two directions: using geometric information for dense reconstruction, and exploiting semantic information to achieve impressive semantic reconstructions. 3D reconstruction and SfM are huge topics in their own right with plenty of open-source code, so the collection below is likely incomplete.
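Many of the RGB-D systems below (KinectFusion, InfiniTAM, BundleFusion) fuse depth maps into a truncated signed distance function (TSDF). Here is a minimal sketch of the per-voxel weighted-average update at the heart of that approach; the field names, truncation distance, and weight cap are hypothetical choices, not taken from any particular system.

```python
def tsdf_update(voxel, sdf_obs, w_obs=1.0, trunc=0.1, w_max=100.0):
    """Fuse one signed-distance observation into a voxel {'d': value, 'w': weight}
    using the KinectFusion-style weighted running average."""
    d = max(-trunc, min(trunc, sdf_obs))          # truncate the observation
    w = voxel['w']
    voxel['d'] = (w * voxel['d'] + w_obs * d) / (w + w_obs)
    voxel['w'] = min(w + w_obs, w_max)            # cap the weight to stay adaptive
    return voxel

v = {'d': 0.0, 'w': 0.0}
tsdf_update(v, 0.05)   # first observation dominates: d -> 0.05
tsdf_update(v, 0.03)   # running average of the two: d -> 0.04
```

The reconstructed surface is then the zero level set of the fused field, typically extracted with marching cubes or ray casting.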

81. InfiniTAM (cross-platform real-time CPU reconstruction)

  • Paper: Prisacariu V A, Kähler O, Golodetz S, et al. Infinitam v3: A framework for large-scale 3d reconstruction with loop closure[J]. arXiv preprint arXiv:1708.00783, 2017.
  • Code: github.com/victorprad/I ; project page

82. BundleFusion

  • Paper: Dai A, Nießner M, Zollhöfer M, et al. Bundlefusion: Real-time globally consistent 3d reconstruction using on-the-fly surface reintegration[J]. ACM Transactions on Graphics (TOG), 2017, 36(4): 76a.
  • Code: github.com/niessner/Bun ; project page

83. KinectFusion

  • Paper: Newcombe R A, Izadi S, Hilliges O, et al. KinectFusion: Real-time dense surface mapping and tracking[C]//2011 10th IEEE International Symposium on Mixed and Augmented Reality. IEEE, 2011: 127-136.
  • Code: github.com/chrdiller/Ki

84. ElasticFusion

  • Paper: Whelan T, Salas-Moreno R F, Glocker B, et al. ElasticFusion: Real-time dense SLAM and light source estimation[J]. The International Journal of Robotics Research, 2016, 35(14): 1697-1716.
  • Code: github.com/mp3guy/Elast

85. Kintinuous

  • From the same team as ElasticFusion; Stefan Leutenegger, Imperial College London (Google Scholar)
  • Paper: Whelan T, Kaess M, Johannsson H, et al. Real-time large-scale dense RGB-D SLAM with volumetric fusion[J]. The International Journal of Robotics Research, 2015, 34(4-5): 598-626.
  • Code: github.com/mp3guy/Kinti

86. ElasticReconstruction

  • Paper: Choi S, Zhou Q Y, Koltun V. Robust reconstruction of indoor scenes[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 5556-5565.
  • Code: github.com/qianyizh/Ela ; author homepage

87. FlashFusion

  • Paper: Han L, Fang L. FlashFusion: Real-time Globally Consistent Dense 3D Reconstruction using CPU Computing[C]. RSS, 2018.
  • Code (never released): github.com/lhanaf/Flash ; Project Page

88. RTAB-Map (LiDAR and visual dense reconstruction)

  • Paper: Labbé M, Michaud F. RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation[J]. Journal of Field Robotics, 2019, 36(2): 416-446.
  • Code: github.com/introlab/rta ; Video ; project page

89. RobustPCLReconstruction (outdoor dense reconstruction)

  • Paper: Lan Z, Yew Z J, Lee G H. Robust Point Cloud Based Reconstruction of Large-Scale Outdoor Scenes[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 9690-9698.
  • Code: github.com/ziquan111/Ro ; Video

90. plane-opt-rgbd (indoor planar reconstruction)

  • Paper: Wang C, Guo X. Efficient Plane-Based Optimization of Geometry and Texture for Indoor RGB-D Reconstruction[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2019: 49-53.
  • Code: github.com/chaowang15/p

91. DenseSurfelMapping (dense surfel mapping)

  • Paper: Wang K, Gao F, Shen S. Real-time scalable dense surfel mapping[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 6919-6925.

92. surfelmeshing (mesh reconstruction)

  • Paper: Schöps T, Sattler T, Pollefeys M. Surfelmeshing: Online surfel-based mesh reconstruction[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
  • Code: github.com/puzzlepaint/

93. DPPTAM (monocular dense reconstruction)

  • Paper: Concha Belenguer A, Civera Sancho J. DPPTAM: Dense piecewise planar tracking and mapping from a monocular sequence[C]//Proc. IEEE/RSJ Int. Conf. Intell. Rob. Syst. 2015 (ART-2015-92153).
  • Code: github.com/alejocb/dppt
  • Related research: superpixel-based monocular SLAM: Using Superpixels in Monocular SLAM, ICRA 2014; Google Scholar

94. VI-MEAN (monocular visual-inertial dense reconstruction)

  • Paper: Yang Z, Gao F, Shen S. Real-time monocular dense mapping on aerial robots using visual-inertial fusion[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 4552-4559.
  • Code: github.com/dvorak0/VI-M ; Video

95. REMODE (monocular probabilistic dense reconstruction)

  • Paper: Pizzoli M, Forster C, Scaramuzza D. REMODE: Probabilistic, monocular dense reconstruction in real time[C]//2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014: 2609-2616.
  • Original open-source code: github.com/uzh-rpg/rpg_
  • Version combined with ORB-SLAM2: github.com/ayushgaud/OR

96. DeepFactors (real-time probabilistic monocular dense SLAM)

  • Dyson Robotics Lab, Imperial College London
  • Paper: Czarnowski J, Laidlow T, Clark R, et al. DeepFactors: Real-Time Probabilistic Dense Monocular SLAM[J]. arXiv preprint arXiv:2001.05049, 2020.
  • Code: github.com/jczarnowski/ (not yet released)
  • Other papers: Bloesch M, Czarnowski J, Clark R, et al. CodeSLAM—learning a compact, optimisable representation for dense visual SLAM[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 2560-2568.

97. probabilistic_mapping (monocular probabilistic dense reconstruction)

  • Shaojie Shen's group at HKUST
  • Paper: Ling Y, Wang K, Shen S. Probabilistic dense reconstruction from a moving camera[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 6364-6371.
  • Code: github.com/ygling2008/p
  • The code for another dense-mapping paper was never released (Github): Ling Y, Shen S. Real-time dense mapping for online processing and navigation[J]. Journal of Field Robotics, 2019, 36(5): 1004-1036.

98. ORB-SLAM2 monocular semi-dense mapping

  • Paper: Mur-Artal R, Tardós J D. Probabilistic Semi-Dense Mapping from Highly Accurate Feature-Based Monocular SLAM[C]//Robotics: Science and Systems. 2015.
  • Code (not officially open-sourced; a reimplementation by He Yijia): github.com/HeYijia/ORB_
  • Semi-dense mapping with line segments added
    • Paper: He S, Qin X, Zhang Z, et al. Incremental 3d line segment extraction from semi-dense slam[C]//2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018: 1658-1663.
    • Code: github.com/shidahe/semi
    • A follow-up by the author that uses this to guide remote grasping: github.com/atlas-jj/ORB

99. Voxgraph (SDF voxel mapping)

  • Paper: Reijgwart V, Millane A, Oleynikova H, et al. Voxgraph: Globally Consistent, Volumetric Mapping Using Signed Distance Function Submaps[J]. IEEE Robotics and Automation Letters, 2019, 5(1): 227-234.

100. SegMap (3D segment-based mapping)

  • Paper: Dubé R, Cramariuc A, Dugas D, et al. SegMap: 3d segment mapping using data-driven descriptors[J]. arXiv preprint arXiv:1804.09557, 2018.
  • Code: github.com/ethz-asl/seg

101. OpenREALM: a real-time mapping framework for UAVs

  • Paper: Kern A, Bobbe M, Khedar Y, et al. OpenREALM: Real-time Mapping for Unmanned Aerial Vehicles[J]. arXiv preprint arXiv:2009.10492, 2020.
  • Code: github.com/laxnpander/O

VII. Optimization (6 entries)

Optimization is arguably the hardest part of SLAM +_+. Most of us simply use off-the-shelf factor-graph and graph-optimization solutions, and genuine innovation here is difficult; see also the getting-started guide by 三川小哥.

102. Back-end optimization libraries

  • GTSAM: github.com/borglab/gtsa ; website
  • g2o: github.com/RainerKuemme
  • ceres: ceres-solver.org/
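All three libraries solve nonlinear least-squares problems over factor graphs. As a toy illustration of the underlying normal equations, here is a hypothetical 1D pose graph with a prior, two odometry constraints, and one slightly inconsistent loop closure; because the residuals are linear, a single Gauss-Newton step solves it exactly. This is only a sketch of the idea, not how any of these libraries are actually structured internally.

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system A x = b."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Each constraint is (Jacobian row J, measurement z) meaning J @ x should equal z:
# prior x0 = 0; odometry x1 - x0 = 1 and x2 - x1 = 1; loop closure x2 - x0 = 2.1
constraints = [([1, 0, 0], 0.0), ([-1, 1, 0], 1.0), ([0, -1, 1], 1.0), ([-1, 0, 1], 2.1)]
A = [[0.0] * 3 for _ in range(3)]
b = [0.0] * 3
for J, z in constraints:               # accumulate normal equations J^T J x = J^T z
    for i in range(3):
        b[i] += J[i] * z
        for j in range(3):
            A[i][j] += J[i] * J[j]
x = solve_linear(A, b)                 # x ≈ [0.0, 1.033, 2.067]
```

The estimate spreads the 0.1 m loop-closure disagreement across the trajectory instead of trusting either measurement alone, which is exactly what pose-graph optimization does at scale, with robust kernels and sparse solvers added.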

103. ICE-BA

  • Paper: Liu H, Chen M, Zhang G, et al. Ice-ba: Incremental, consistent and efficient bundle adjustment for visual-inertial slam[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 1974-1982.
  • Code: github.com/baidu/ICE-BA

104. minisam (a factor-graph least-squares optimization framework)

  • Paper: Dong J, Lv Z. miniSAM: A Flexible Factor Graph Non-linear Least Squares Optimization Framework[J]. arXiv preprint arXiv:1909.00903, 2019.
  • Code: github.com/dongjing3309 ; documentation

105. SA-SHAGO (graph optimization over heterogeneous geometric primitives)

  • Paper: Aloise I, Della Corte B, Nardi F, et al. Systematic Handling of Heterogeneous Geometric Primitives in Graph-SLAM Optimization[J]. IEEE Robotics and Automation Letters, 2019, 4(3): 2738-2745.
  • Code: srrg.gitlab.io/sashago-

106. MH-iSAM2 (a multi-hypothesis SLAM optimizer)

  • Paper: Hsiao M, Kaess M. MH-iSAM2: Multi-hypothesis iSAM using Bayes Tree and Hypo-tree[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 1274-1280.
  • Code: bitbucket.org/rpl_cmu/m

107. MOLA (a modular optimization framework for localization and mapping)

  • Paper: Blanco-Claraco J L. A Modular Optimization Framework for Localization and Mapping[J]. Proc. of Robotics: Science and Systems (RSS), Freiburg im Breisgau, Germany, 2019, 2.

First published: March 31, 2020

Email:[email protected]

Changelog:

2020.03.31 +1: TUM Basalt, filed under Sensor Fusion;

2020.04.04 +1: Limo, filed under Sensor Fusion;

2020.04.04 +1: DynSLAM, filed under Dynamic SLAM;

2020.04.25 +2: Map2DFusion and CCM-SLAM, filed under Geometric SLAM;

2020.04.25 +2: LARVIO (Beihang) and vig-init (ZJU), filed under Sensor Fusion;

2020.04.25 +2: GCNv2_SLAM and semantic_suma, filed under Semantic/Deep SLAM;

2020.05.24 +2: vilib and Kimera-VIO, filed under Sensor Fusion;

2020.05.24 +1: Voxgraph, filed under Mapping;

2020.05.24 +1: Neural-SLAM, filed under Semantic/Deep SLAM;

2020.05.24 +1: VPS-SLAM, filed under Multi-Landmarks/Object SLAM;

2020.06.27 +1: maplab, filed under Sensor Fusion;

2020.06.27 +1: SegMap, filed under Mapping;

2020.07.27 +1: ORB-SLAM3, filed under Geometric SLAM (the 100th open-source entry in this article is reserved for ORB-SLAM3);

2020.08.27 +1: Structure-SLAM-PointLine, filed under Multi-Landmarks/Object SLAM;

2020.08.27 +1: VDO-SLAM, filed under Dynamic SLAM;

2021.02.14 +1: OpenREALM, filed under Mapping;

2021.02.14 +1: PL-VINS, filed under Multi-Landmarks/Object SLAM;

2021.02.14 +2: lili-om and CamVox, filed under Sensor Fusion;

2021.02.14 +1: TartanVO, filed under Semantic / Deep SLAM.
