Sensors & Effectors/Camera

[Active IR Stereo Camera][Intel Realsense][ROS Melodic] Start the realsense camera node

jstar0525 2021. 8. 19. 11:44

https://github.com/IntelRealSense/realsense-ros/blob/development/README.md#installation-instructions

 


https://dev.intelrealsense.com/docs/ros-wrapper

 


 

 

Test Environment

HW : Jetson AGX Xavier with JetPack 4.5.1 (Ubuntu 18.04)

ROS : ROS Melodic

 

 

 

Usage Instructions


Start the camera node

To start the camera node in ROS:

$ roslaunch realsense2_camera rs_camera.launch

This will stream all camera sensors and publish on the appropriate ROS topics.

Other stream resolutions and frame rates can optionally be provided as parameters to the 'rs_camera.launch' file.
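For example, the color and depth stream profiles can be requested at launch (the parameter names below appear in the parameter dump further down; the exact resolutions are only an illustration and must be supported by your camera):

```shell
# Request color at 1280x720 @ 30 fps and depth at 848x480 @ 30 fps.
# If the device does not support a requested combination, the wrapper
# falls back to that stream's default profile.
roslaunch realsense2_camera rs_camera.launch \
    color_width:=1280 color_height:=720 color_fps:=30 \
    depth_width:=848 depth_height:=480 depth_fps:=30
```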

 

SUMMARY

========

 

PARAMETERS

 * /camera/realsense2_camera/accel_fps: -1

 * /camera/realsense2_camera/accel_frame_id: camera_accel_frame

 * /camera/realsense2_camera/accel_optical_frame_id: camera_accel_opti...

 * /camera/realsense2_camera/align_depth: False

 * /camera/realsense2_camera/aligned_depth_to_color_frame_id: camera_aligned_de...

 * /camera/realsense2_camera/aligned_depth_to_fisheye1_frame_id: camera_aligned_de...

 * /camera/realsense2_camera/aligned_depth_to_fisheye2_frame_id: camera_aligned_de...

 * /camera/realsense2_camera/aligned_depth_to_fisheye_frame_id: camera_aligned_de...

 * /camera/realsense2_camera/aligned_depth_to_infra1_frame_id: camera_aligned_de...

 * /camera/realsense2_camera/aligned_depth_to_infra2_frame_id: camera_aligned_de...

 * /camera/realsense2_camera/allow_no_texture_points: False

 * /camera/realsense2_camera/base_frame_id: camera_link

 * /camera/realsense2_camera/calib_odom_file: 

 * /camera/realsense2_camera/clip_distance: -2.0

 * /camera/realsense2_camera/color_fps: -1

 * /camera/realsense2_camera/color_frame_id: camera_color_frame

 * /camera/realsense2_camera/color_height: -1

 * /camera/realsense2_camera/color_optical_frame_id: camera_color_opti...

 * /camera/realsense2_camera/color_width: -1

 * /camera/realsense2_camera/confidence_fps: -1

 * /camera/realsense2_camera/confidence_height: -1

 * /camera/realsense2_camera/confidence_width: -1

 * /camera/realsense2_camera/depth_fps: -1

 * /camera/realsense2_camera/depth_frame_id: camera_depth_frame

 * /camera/realsense2_camera/depth_height: -1

 * /camera/realsense2_camera/depth_optical_frame_id: camera_depth_opti...

 * /camera/realsense2_camera/depth_width: -1

 * /camera/realsense2_camera/device_type: 

 * /camera/realsense2_camera/enable_accel: False

 * /camera/realsense2_camera/enable_color: True

 * /camera/realsense2_camera/enable_confidence: True

 * /camera/realsense2_camera/enable_depth: True

 * /camera/realsense2_camera/enable_fisheye1: False

 * /camera/realsense2_camera/enable_fisheye2: False

 * /camera/realsense2_camera/enable_fisheye: False

 * /camera/realsense2_camera/enable_gyro: False

 * /camera/realsense2_camera/enable_infra1: False

 * /camera/realsense2_camera/enable_infra2: False

 * /camera/realsense2_camera/enable_infra: False

 * /camera/realsense2_camera/enable_pointcloud: False

 * /camera/realsense2_camera/enable_pose: False

 * /camera/realsense2_camera/enable_sync: False

 * /camera/realsense2_camera/filters: 

 * /camera/realsense2_camera/fisheye1_frame_id: camera_fisheye1_f...

 * /camera/realsense2_camera/fisheye1_optical_frame_id: camera_fisheye1_o...

 * /camera/realsense2_camera/fisheye2_frame_id: camera_fisheye2_f...

 * /camera/realsense2_camera/fisheye2_optical_frame_id: camera_fisheye2_o...

 * /camera/realsense2_camera/fisheye_fps: -1

 * /camera/realsense2_camera/fisheye_frame_id: camera_fisheye_frame

 * /camera/realsense2_camera/fisheye_height: -1

 * /camera/realsense2_camera/fisheye_optical_frame_id: camera_fisheye_op...

 * /camera/realsense2_camera/fisheye_width: -1

 * /camera/realsense2_camera/gyro_fps: -1

 * /camera/realsense2_camera/gyro_frame_id: camera_gyro_frame

 * /camera/realsense2_camera/gyro_optical_frame_id: camera_gyro_optic...

 * /camera/realsense2_camera/imu_optical_frame_id: camera_imu_optica...

 * /camera/realsense2_camera/infra1_frame_id: camera_infra1_frame

 * /camera/realsense2_camera/infra1_optical_frame_id: camera_infra1_opt...

 * /camera/realsense2_camera/infra2_frame_id: camera_infra2_frame

 * /camera/realsense2_camera/infra2_optical_frame_id: camera_infra2_opt...

 * /camera/realsense2_camera/infra_fps: 30

 * /camera/realsense2_camera/infra_height: 480

 * /camera/realsense2_camera/infra_rgb: False

 * /camera/realsense2_camera/infra_width: 848

 * /camera/realsense2_camera/initial_reset: False

 * /camera/realsense2_camera/json_file_path: 

 * /camera/realsense2_camera/linear_accel_cov: 0.01

 * /camera/realsense2_camera/odom_frame_id: camera_odom_frame

 * /camera/realsense2_camera/ordered_pc: False

 * /camera/realsense2_camera/pointcloud_texture_index: 0

 * /camera/realsense2_camera/pointcloud_texture_stream: RS2_STREAM_COLOR

 * /camera/realsense2_camera/pose_frame_id: camera_pose_frame

 * /camera/realsense2_camera/pose_optical_frame_id: camera_pose_optic...

 * /camera/realsense2_camera/publish_odom_tf: True

 * /camera/realsense2_camera/publish_tf: True

 * /camera/realsense2_camera/rosbag_filename: 

 * /camera/realsense2_camera/serial_no: 

 * /camera/realsense2_camera/stereo_module/exposure/1: 7500

 * /camera/realsense2_camera/stereo_module/exposure/2: 1

 * /camera/realsense2_camera/stereo_module/gain/1: 16

 * /camera/realsense2_camera/stereo_module/gain/2: 16

 * /camera/realsense2_camera/tf_publish_rate: 0.0

 * /camera/realsense2_camera/topic_odom_in: odom_in

 * /camera/realsense2_camera/unite_imu_method: 

 * /camera/realsense2_camera/usb_port_id: 

 * /rosdistro: melodic

 * /rosversion: 1.14.11

 

NODES

  /camera/

    realsense2_camera (nodelet/nodelet)

    realsense2_camera_manager (nodelet/nodelet)

 

 

 

$ rqt_graph


 

Published Topics

The published topics differ according to the device and parameters. After running the above command with a D455 attached, the following topics will be available (this is a partial list; for the full list, run rostopic list):

$ rostopic list
  • /camera/color/camera_info
  • /camera/color/image_raw
  • /camera/color/image_raw/compressed
  • /camera/color/image_raw/compressed/parameter_descriptions
  • /camera/color/image_raw/compressed/parameter_updates
  • /camera/color/image_raw/compressedDepth
  • /camera/color/image_raw/compressedDepth/parameter_descriptions
  • /camera/color/image_raw/compressedDepth/parameter_updates
  • /camera/color/image_raw/theora
  • /camera/color/image_raw/theora/parameter_descriptions
  • /camera/color/image_raw/theora/parameter_updates
  • /camera/depth/camera_info
  • /camera/depth/image_rect_raw
  • /camera/depth/image_rect_raw/compressed
  • /camera/depth/image_rect_raw/compressed/parameter_descriptions
  • /camera/depth/image_rect_raw/compressed/parameter_updates
  • /camera/depth/image_rect_raw/compressedDepth
  • /camera/depth/image_rect_raw/compressedDepth/parameter_descriptions
  • /camera/depth/image_rect_raw/compressedDepth/parameter_updates
  • /camera/depth/image_rect_raw/theora
  • /camera/depth/image_rect_raw/theora/parameter_descriptions
  • /camera/depth/image_rect_raw/theora/parameter_updates
  • /camera/extrinsics/depth_to_color
  • /camera/motion_module/parameter_descriptions
  • /camera/motion_module/parameter_updates
  • /camera/realsense2_camera_manager/bond
  • /camera/rgb_camera/auto_exposure_roi/parameter_descriptions
  • /camera/rgb_camera/auto_exposure_roi/parameter_updates
  • /camera/rgb_camera/parameter_descriptions
  • /camera/rgb_camera/parameter_updates
  • /camera/stereo_module/auto_exposure_roi/parameter_descriptions
  • /camera/stereo_module/auto_exposure_roi/parameter_updates
  • /camera/stereo_module/parameter_descriptions
  • /camera/stereo_module/parameter_updates
  • /diagnostics
  • /rosout
  • /rosout_agg
  • /tf
  • /tf_static
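A quick sanity check that frames are really flowing (the topic names are taken from the list above) is to measure the publish rate and inspect a camera_info message:

```shell
# Publish rate of the color stream; should be close to the configured
# color_fps (Ctrl-C to stop).
rostopic hz /camera/color/image_raw

# Print a single camera_info message to check resolution and intrinsics.
rostopic echo -n 1 /camera/color/camera_info
```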

 

Using an L515 device, the list differs a little by adding a 4-bit confidence grade (published as a mono8 image):
  • /camera/confidence/camera_info
  • /camera/confidence/image_rect_raw
It also replaces the 2 infrared topics with the single available one:
  • /camera/infra/camera_info
  • /camera/infra/image_raw

 

The "/camera" prefix is the default and can be changed; check the rs_multiple_devices.launch file for an example. If using a D435 or D415, the gyro and accel topics won't be available. Likewise, other topics will be available when using a T265 (see below).
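As a sketch of how the prefix is changed, rs_camera.launch exposes a camera argument that sets it, which is how multiple devices can run side by side; the serial numbers below are placeholders for your own devices:

```shell
# Terminal 1: first device, published under the /cam_1 prefix
roslaunch realsense2_camera rs_camera.launch camera:=cam_1 serial_no:=<serial_1>

# Terminal 2: second device, published under the /cam_2 prefix
roslaunch realsense2_camera rs_camera.launch camera:=cam_2 serial_no:=<serial_2>
```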

 

 

 

Available services:

  • reset : causes a hardware reset of the device. Usage: rosservice call /camera/realsense2_camera/reset
  • enable : starts/stops all streaming sensors. Usage example: rosservice call /camera/enable False
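For example, streaming can be paused and resumed without restarting the node:

```shell
# Stop all streaming sensors ...
rosservice call /camera/enable false
# ... and start them again.
rosservice call /camera/enable true
```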

 

 

Launch parameters

The following parameters are provided by the wrapper:

  • serial_no: will attach to the device with the given serial number. By default, attaches to an available RealSense device at random.
  • usb_port_id: will attach to the device on the given USB port (e.g. 4-1, 4-2). By default, the USB port is ignored when choosing a device.
  • device_type: will attach to a device whose name matches the given device_type regular expression. By default, the device type is ignored. For example, device_type:=d435 will match both d435 and d435i, while device_type:=d435(?!i) will match d435 but not d435i.
  • rosbag_filename: will publish topics from the given rosbag file.
  • initial_reset: on occasion the device is not closed properly or, due to firmware issues, needs to be reset. If set to true, the device will reset prior to usage.
  • align_depth: if set to true, will publish additional topics for the "aligned depth to color" image: /camera/aligned_depth_to_color/image_raw and /camera/aligned_depth_to_color/camera_info.
    The pointcloud, if enabled, will be built from the aligned_depth_to_color image.
  • filters: any of the following options, separated by commas:
  • colorizer: will color the depth image. An RGB image will be published on the depth topic instead of the 16-bit depth values.
  • pointcloud: will add a pointcloud topic /camera/depth/color/points.
    • The texture of the pointcloud can be modified in rqt_reconfigure (see below) or using the parameters: pointcloud_texture_stream and pointcloud_texture_index. Run rqt_reconfigure to see available values for these parameters.
    • The depth FOV and the texture FOV are not identical. By default, the pointcloud is limited to the section of the depth image covered by the texture. You can get a pointcloud of the full depth image, with the regions beyond the texture colored with zeros, by setting allow_no_texture_points to true.
    • pointcloud is of an unordered format by default. This can be changed by setting ordered_pc to true.
  • hdr_merge: allows the depth image to be created by merging the information from 2 consecutive frames taken with different exposure and gain values. Exposure and gain for each sequence can be set at runtime by first selecting the sequence id via the rqt_reconfigure stereo_module/sequence_id parameter and then modifying stereo_module/gain and stereo_module/exposure.
    To view the effect on the infrared image for each sequence id, use the sequence_id_filter/sequence_id parameter.
    To initialize these parameters at startup, use the following parameters:
    stereo_module/exposure/1, stereo_module/gain/1, stereo_module/exposure/2, stereo_module/gain/2
    * For an in-depth review of the subject, please read the accompanying white paper.
  • The following filters are described in detail at: https://github.com/IntelRealSense/librealsense/blob/master/doc/post-processing-filters.md
    • disparity - convert depth to disparity before applying other filters and back.
    • spatial - filter the depth image spatially.
    • temporal - filter the depth image temporally.
    • hole_filling - apply hole-filling filter.
    • decimation - reduces depth scene complexity.
  • enable_sync: gathers the closest frames from the different sensors (infrared, color and depth) so that they are sent out with the same timestamp. This happens automatically when filters such as pointcloud are enabled.
  • <stream_type>_width, <stream_type>_height, <stream_type>_fps: <stream_type> can be any of infra, color, fisheye, depth, gyro, accel, pose, confidence. Sets the required stream format of the device. If the specified combination of parameters is not supported by the device, the stream will fall back to its default format. Setting a value to 0 chooses the first format in the inner list (i.e. consistent between runs but not defined).
    *Note: for gyro, accel and pose, only the _fps option is meaningful.
  • enable_<stream_name>: Choose whether to enable a specified stream or not. Default is true for images and false for orientation streams. <stream_name> can be any of infra1, infra2, color, depth, fisheye, fisheye1, fisheye2, gyro, accel, pose, confidence.
  • tf_prefix: by default, all frame IDs share the same prefix, camera_. This parameter allows changing it per camera.
  • <stream_name>_frame_id, <stream_name>_optical_frame_id, aligned_depth_to_<stream_name>_frame_id: Specify the different frame_id for the different frames. Especially important when using multiple cameras.
  • base_frame_id: defines the frame_id that all static transformations refer to.
  • odom_frame_id: defines the origin coordinate system in the ROS convention (X-forward, Y-left, Z-up). The pose topic reports poses relative to that system.
  • All the rest of the frame_ids can be found in the template launch file: nodelet.launch.xml
  • unite_imu_method: the D435i and T265 cameras have built-in IMU components that produce 2 unrelated streams: gyro, which reports angular velocity, and accel, which reports linear acceleration, each with its own frequency. By default, 2 corresponding topics are published, each filling only the relevant fields of the sensor_msgs::Imu message. Setting unite_imu_method creates a new topic, imu, that replaces the default gyro and accel topics. The imu topic is published at the rate of the gyro, with all fields of the Imu message filled out.
    • linear_interpolation: every gyro message is paired with an accel message interpolated to the gyro's timestamp.
    • copy: every gyro message is paired with the most recent accel message.
  • clip_distance: removes from the depth image all values above the given distance (in meters). Disabled when given a negative value (the default).
  • linear_accel_cov, angular_velocity_cov: sets the variance reported for the IMU readings. For the T265, these values are modified by the device's internal confidence value.
  • hold_back_imu_for_frames: image processing takes time, so there is a gap between the moment an image arrives at the wrapper and the moment it is published to the ROS environment. During this time, IMU messages keep arriving, which can cause an image with an earlier timestamp to be published after an IMU message with a later timestamp. If that is a problem, setting hold_back_imu_for_frames to true holds the IMU messages back while an image is being processed and then publishes them all in a burst, so the order of publication matches the order of arrival. In either case, the timestamp in each message's header reflects the time of its origin.
  • topic_odom_in: for the T265, wheel odometry information can be added through this topic. The code uses only the twist.linear field of the message.
  • calib_odom_file: for the T265 to accept odometry input, it must be given a configuration file. Explanations can be found here. The calibration is done in the ROS coordinate system.
  • publish_tf: boolean; whether to publish TF at all. Defaults to True.
  • tf_publish_rate: double; positive values mean dynamic transform publication at the specified rate, while all other values mean static transform publication. Defaults to 0.
  • publish_odom_tf: if True (the default), publishes the TF from odom_frame to pose_frame.
  • infra_rgb: when set to True (default: False), configures the infrared camera to stream in RGB (color) mode, enabling the use of an RGB image in the same frame as the depth image and potentially avoiding frame-transformation errors. When this feature is required, you must additionally set enable_infra:=true for the infrared stream to be enabled.
    • NOTE: the configuration required for enable_infra is independent of enable_depth.
    • NOTE: to enable the infrared stream, set enable_infra:=true, NOT enable_infra1:=true nor enable_infra2:=true.
    • NOTE: this feature is only supported by RealSense sensors whose infrared cameras provide RGB streams, which can be checked by observing the output of rs-enumerate-devices.
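Putting several of these parameters together, a typical launch for RGB-D work on a D435i might look like the sketch below (all parameter names come from the list above; the values are only illustrative):

```shell
# Aligned depth, filtered pointcloud, and a unified IMU topic,
# with depth clipped beyond 4 m.
roslaunch realsense2_camera rs_camera.launch \
    align_depth:=true \
    filters:=pointcloud,spatial,temporal \
    enable_gyro:=true enable_accel:=true \
    unite_imu_method:=linear_interpolation \
    clip_distance:=4.0
```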