
Tutorial: Running the ug_stereomatcher ROS package

Introduction

The ug_stereomatcher ROS package comprises the University of Glasgow GPU stereo matcher and nodes to compute point clouds from disparity maps in full-resolution and foveated modes. The two operational modes are:

  • Full resolution: computes disparity maps over an image pyramid at full resolution in 10 seconds on 16MP RGB images
  • Foveated resolution: computes foveated disparity maps for each level of the pyramid in 3 seconds on 16MP RGB images, with a fixed fovea size of 615 by 407 pixels (More info about foveated imaging)

The operational mode is set on the ROS parameter server. The foveated parameter can take either 0 for full-resolution mode or 1 for foveated mode.
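For example, the mode can be set from the command line before launching. This is a minimal sketch that assumes foveated is read from the global namespace; if the launch files below place it in a node's private namespace, prefix the name accordingly:

rosparam set foveated 1    # 1 = foveated mode; use 0 for full resolution
rosparam get foveated      # verify the value stored on the parameter server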

In order to run the stereo matching and point cloud computation nodes, a pair of stereo images must be published on the topics:

together with camera calibration parameters for each camera/image on the topics:

By default, the camera calibration parameters in ug_stereomatcher are located in ug_stereomatcher/calibrations/. Paths to the calibration parameters are defined on the parameter server with the following names:

  • camera_info_url_left
  • camera_info_url_right
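These paths can be overridden from the command line. This is a hypothetical sketch; the file names below are placeholders rather than files shipped with the package, so point them at your actual calibration files:

rosparam set camera_info_url_left /path/to/ug_stereomatcher/calibrations/left_camera.yaml
rosparam set camera_info_url_right /path/to/ug_stereomatcher/calibrations/right_camera.yaml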

Publishing images

publish_images.cpp is an example of how to publish a pair of stereo images and camera calibration files on the topics described above. This node reads the image list provided in input_images_models.xml. This XML file must contain a list of the image pairs used to generate the disparity maps, and should look similar to the example below:

<?xml version="1.0"?>
<opencv_storage>
<!-- List images by pairs, e.g.
	 left1.tif
	 right1.tif
	 left2.tif
	 right2.tif
	 ...
	 leftn.tif
	 rightn.tif
 -->
<images>
/path/to/image/1_Left.tif
/path/to/image/1_Right.tif
/path/to/image/2_Left.tif
/path/to/image/2_Right.tif
</images>
</opencv_storage>

NOTE: Paths to images should be absolute!
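Since the node reads these paths directly, a quick sanity check before launching can save a failed run. This sketch is not part of the package; the grep pattern assumes the absolute paths and .tif extension used in the example above, and paths containing spaces would need different handling:

for f in $(grep -o '/[^<]*\.tif' input_images_models.xml); do
  [ -f "$f" ] || echo "missing: $f"    # report any listed image that does not exist
done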

Launch files

ug_stereomatcher already provides an example launch file using publish_images.cpp for each operational mode. These can be found in ug_stereomatcher/launch/. A brief description of each launch file is given below.
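Before the per-file descriptions, here is a rough sketch of the parameters such a launch file sets. Node declarations are omitted and the calibration file names are placeholders; consult the actual files in ug_stereomatcher/launch/ for the authoritative contents:

<launch>
  <!-- 0 = full resolution, 1 = foveated -->
  <param name="foveated" value="0" />
  <!-- only meaningful in foveated mode; valid values are 0 to 6 -->
  <param name="fovLevel" value="0" />
  <!-- calibration paths; file names below are placeholders -->
  <param name="camera_info_url_left" value="$(find ug_stereomatcher)/calibrations/left_camera.yaml" />
  <param name="camera_info_url_right" value="$(find ug_stereomatcher)/calibrations/right_camera.yaml" />
</launch>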

stereo_nodes.launch

stereo_nodes.launch - launches the full-resolution stereo matcher, point cloud and image-publishing nodes. Disparity maps are published to the topics

and a coloured point cloud attached to the left camera reference frame is published to the topic:

stereo_nodes_foveated.launch

stereo_nodes_foveated.launch - launches the foveated stereo matcher, point cloud and image-publishing nodes. To select the desired fovea level, define fovLevel on the parameter server; it can take values from 0 to 6. Foveated disparity maps are published to the topics:

The foveated disparity map topics contain a stack of disparity maps for the last 7 image pyramid levels. For instance, a 16MP foveated disparity map, either vertical or horizontal, is a 1-channel image of 615 by 2849 pixels. Each level occupies 407 rows, so level k spans rows k*407 to (k+1)*407 - 1; 407 multiplied by 7 equals 2849. A foveated coloured point cloud attached to the left camera reference frame is published to the topic:

A session with ug_stereomatcher

In the terminal, one normally starts either of the above launch files. In this example, the full-resolution mode is used. It is assumed that the image list in input_images_models.xml has been defined previously.

roslaunch ug_stereomatcher stereo_nodes.launch

The above will start the stereo matcher, point cloud computation and image-publishing nodes. It will also set foveated and fovLevel accordingly and define the paths to the calibration parameters and the image list XML. If everything went well and all nodes are running, a matching and point cloud computation can be started by typing in a new terminal:

rostopic pub -1 acquire_images ug_stereomatcher/CamerasSync '{timeStamp: now, data: full}'

The above command publishes on the topic acquire_images [ug_stereomatcher/CamerasSync] the current ROS timestamp for stereo synchronisation and the type of capture (full or preview). In this package, the preview mode is irrelevant.
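The setup and the output can be verified from another terminal. The topic names are not reproduced on this page, so rostopic list is the authoritative source; the rostopic hz argument below is a placeholder:

rosparam get foveated           # should report the selected operational mode
rosparam get fovLevel           # foveated mode only
rostopic list                   # shows all topics, including the disparity and point cloud ones
rostopic hz <disparity_topic>   # replace with a topic name taken from the list above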

After matching and generating the point cloud, one should get something like the image below (this snapshot is from the CloPeMa robot at the University of Glasgow). To visualise the point cloud yourself, start rviz and add a plugin that listens to the point cloud topic (More info here).
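A minimal way to do this from a sourced terminal, assuming the cloud is published as a sensor_msgs/PointCloud2 message (check the actual type with rostopic type):

rosrun rviz rviz    # then add a PointCloud2 display and point it at the point cloud topic

In rviz, set the fixed frame to the left camera frame so the cloud appears in the reference frame it is attached to.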
