In this blog post I want to record my study and understanding of the OriginBot camera driver and visualization code; my notes are written as comments inside the code files.
The documentation provides two ways to drive the camera: one, once launched, displays the live image and the results of the human body detection algorithm in a web page; the other simply publishes image data on a topic after launch.
Launching so the camera can be viewed in a browser
The documentation explains this clearly; launch with the following command:
ros2 launch originbot_bringup camera_websoket_display.launch.py
Once it is running, open http://IP:8000 in a browser.
The code this command ultimately executes is originbot.originbot_bringup.launch.camera_websoket_display.launch.py, whose contents are as follows:
```python
import os

from launch import LaunchDescription
from launch_ros.actions import Node
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from ament_index_python import get_package_share_directory
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration


def generate_launch_description():
    mipi_cam_device_arg = DeclareLaunchArgument(
        'device',
        default_value='GC4663',
        description='mipi camera device')

    # This is the node that actually starts the camera; it ultimately runs
    # mipi_cam.launch.py, which is explained separately below.
    mipi_node = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(
                get_package_share_directory('mipi_cam'),
                'launch/mipi_cam.launch.py')),
        launch_arguments={
            'mipi_image_width': '960',
            'mipi_image_height': '544',
            'mipi_io_method': 'shared_mem',
            'mipi_video_device': LaunchConfiguration('device')
        }.items()
    )

    # nv12 -> jpeg
    # This pulls in TogetheROS.Bot's image codec module to improve
    # performance; for details see:
    # https://developer.horizon.cc/documents_tros/quick_demo/hobot_codec
    jpeg_codec_node = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(
                get_package_share_directory('hobot_codec'),
                'launch/hobot_codec_encode.launch.py')),
        launch_arguments={
            'codec_in_mode': 'shared_mem',
            'codec_out_mode': 'ros',
            'codec_sub_topic': '/hbmem_img',
            'codec_pub_topic': '/image'
        }.items()
    )

    # web
    # This starts the web part; behind it is actually an Nginx static server.
    # It subscribes to image to display pictures, and to smart_topic to get
    # the human body detection results.
    # This ultimately runs websocket.launch.py, explained in detail below.
    web_smart_topic_arg = DeclareLaunchArgument(
        'smart_topic',
        default_value='/hobot_mono2d_body_detection',
        description='websocket smart topic')
    web_node = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(
                get_package_share_directory('websocket'),
                'launch/websocket.launch.py')),
        launch_arguments={
            'websocket_image_topic': '/image',
            'websocket_smart_topic': LaunchConfiguration('smart_topic')
        }.items()
    )

    # mono2d body detection
    # TogetheROS.Bot's human body detection feature. It subscribes to image
    # data on /image_raw or /hbmem_img to run detection, then publishes the
    # results to hobot_mono2d_body_detection.
    # I used this module in https://www.guyuehome.com/45835, which also has a
    # fairly detailed introduction worth reading.
    # Source code and official docs: https://developer.horizon.cc/documents_tros/quick_demo/hobot_codec
    mono2d_body_pub_topic_arg = DeclareLaunchArgument(
        'mono2d_body_pub_topic',
        default_value='/hobot_mono2d_body_detection',
        description='mono2d body ai message publish topic')
    mono2d_body_det_node = Node(
        package='mono2d_body_detection',
        executable='mono2d_body_detection',
        output='screen',
        parameters=[
            {"ai_msg_pub_topic_name": LaunchConfiguration(
                'mono2d_body_pub_topic')}
        ],
        arguments=['--ros-args', '--log-level', 'warn']
    )

    return LaunchDescription([
        mipi_cam_device_arg,
        # image publish
        mipi_node,
        # image codec
        jpeg_codec_node,
        # body detection
        mono2d_body_pub_topic_arg,
        mono2d_body_det_node,
        # web display
        web_smart_topic_arg,
        web_node
    ])
```
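The jpeg_codec_node above exists because mipi_cam outputs raw nv12 frames, which are large: nv12 stores a full-resolution Y (luma) plane plus a 2×2-subsampled, interleaved UV plane, so one frame takes width × height × 3/2 bytes. A quick sketch of the arithmetic (my own illustration, not code from the project):

```python
def nv12_frame_size(width: int, height: int) -> int:
    """Bytes per nv12 frame: full-res Y plane + half-size interleaved UV plane."""
    y_size = width * height        # one byte per pixel for luma
    uv_size = width * height // 2  # UV subsampled 2x2, two planes interleaved
    return y_size + uv_size

# At the 960x544 resolution used in this launch file:
print(nv12_frame_size(960, 544))  # 783360 bytes, roughly 0.75 MB per frame
```

At 30 fps that is over 20 MB/s of raw image data, which is why the pipeline encodes to JPEG (and uses shared memory for the raw leg) before anything reaches the websocket server.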
The code above includes mipi_cam.launch.py and websocket.launch.py; let's look at each in turn.
Here is the content of originbot.mipi_cam.launch.mipi_cam.launch.py:
```python
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        DeclareLaunchArgument(
            'mipi_camera_calibration_file_path',
            default_value='/userdata/dev_ws/src/origineye/mipi_cam/config/SC132GS_calibration.yaml',
            description='mipi camera calibration file path'),
        DeclareLaunchArgument(
            'mipi_out_format',
            default_value='nv12',
            description='mipi camera out format'),
        DeclareLaunchArgument(
            'mipi_image_width',
            default_value='1088',
            description='mipi camera out image width'),
        DeclareLaunchArgument(
            'mipi_image_height',
            default_value='1280',
            description='mipi camera out image height'),
        DeclareLaunchArgument(
            'mipi_io_method',
            default_value='shared_mem',
            description='mipi camera out io_method'),
        DeclareLaunchArgument(
            'mipi_video_device',
            default_value='F37',
            description='mipi camera device'),
        # Start the image publishing package
        Node(
            package='mipi_cam',
            executable='mipi_cam',
            output='screen',
            parameters=[
                {"camera_calibration_file_path": LaunchConfiguration(
                    'mipi_camera_calibration_file_path')},
                {"out_format": LaunchConfiguration('mipi_out_format')},
                {"image_width": LaunchConfiguration('mipi_image_width')},
                {"image_height": LaunchConfiguration('mipi_image_height')},
                {"io_method": LaunchConfiguration('mipi_io_method')},
                {"video_device": LaunchConfiguration('mipi_video_device')},
                {"rotate_degree": 90},
            ],
            arguments=['--ros-args', '--log-level', 'error']
        )
    ])
```
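Note how the defaults declared here (1088 × 1280, device F37) differ from what camera_websoket_display.launch.py passes in (960 × 544, device GC4663): a value supplied via launch_arguments overrides the declared default. Conceptually the resolution behaves like a dict merge — a minimal sketch of the idea, not the actual launch internals:

```python
def resolve_launch_arguments(declared_defaults: dict, passed_arguments: dict) -> dict:
    """Mimic how launch resolves DeclareLaunchArgument values:
    anything passed via launch_arguments wins over the default."""
    resolved = dict(declared_defaults)
    resolved.update(passed_arguments)
    return resolved

# Defaults declared in mipi_cam.launch.py
defaults = {
    'mipi_image_width': '1088',
    'mipi_image_height': '1280',
    'mipi_video_device': 'F37',
}
# Overrides passed by camera_websoket_display.launch.py
overrides = {
    'mipi_image_width': '960',
    'mipi_image_height': '544',
    'mipi_video_device': 'GC4663',
}
print(resolve_launch_arguments(defaults, overrides))
```

So when launched through camera_websoket_display.launch.py, the camera actually runs at 960 × 544 with the GC4663 sensor; the defaults only apply when mipi_cam.launch.py is launched directly.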
This code is also quite simple, mostly argument declarations. But anyone who has used OriginBot for a while will remember that after the camera starts, the robot publishes image data on a topic called /image_raw, and that topic is not mentioned anywhere here.
That part lives in originbot.mipi_cam.src.mipi_cam_node.cpp, around line 236.
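A likely explanation, judging only from the launch files above (an assumption I have not verified against mipi_cam_node.cpp): which topic mipi_cam publishes on depends on io_method. With 'shared_mem', as configured here, frames go out over zero-copy shared memory on /hbmem_img, which is why hobot_codec subscribes to /hbmem_img; with 'ros', the node would publish ordinary messages on /image_raw instead. As a tiny sketch of that reading:

```python
def mipi_cam_pub_topic(io_method: str) -> str:
    """Hypothetical helper mirroring my reading of mipi_cam's behavior:
    'shared_mem' -> zero-copy frames on /hbmem_img,
    anything else (e.g. 'ros') -> sensor_msgs images on /image_raw."""
    return '/hbmem_img' if io_method == 'shared_mem' else '/image_raw'

print(mipi_cam_pub_topic('shared_mem'))
print(mipi_cam_pub_topic('ros'))
```

That would explain why /image_raw never appears in this launch file: the browser pipeline takes the shared-memory path end to end.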
The full article, "OriginBot Source Code Study: Camera Driver" (OriginBot源碼學習之攝像頭驅動), is available on Guyuehome (古月居).