Running the object pickup pipeline
Keywords: grasp, grasping, pickup, manipulation, tabletop
Tutorial Level: ADVANCED
Robot bring-up and set up
Nothing special here; just bring up the robot as you normally would with prX.launch.
Position the robot so that it's looking at the objects on the table. Make sure the objects are within arm's reach. Leave some space between the robot and the table, though, so that the robot can go through its calibration routine.
IMPORTANT: the robot must be calibrated (laser <-> stereo <-> arm). Talk to Vijay if unsure about which robots are calibrated.
Arm navigation and Perception
Start the file object_grasping.launch from arm_navigation_pr2:
roscd arm_navigation_pr2
roslaunch object_grasping.launch
This will start the planning pipeline: collision maps, FK and IK, planning, move_arm, etc. Note that in the launch files called from here you can change various topics and parameters, such as the padding for the collision map.
This should also start the perception pipeline, using both laser and narrow stereo.
Narrow Stereo
Note that we are using narrow_stereo_textured so if you want to see points in rviz you need to be listening on /narrow_stereo_textured/points.
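If you want to verify from code that points are actually arriving (rostopic hz /narrow_stereo_textured/points also works), here is a minimal sketch of a subscriber. It assumes the topic carries sensor_msgs/PointCloud messages; if your driver publishes a different type, adjust accordingly.

#!/usr/bin/env python
# Minimal sanity check: print how many points arrive on the narrow stereo topic.
# Assumes the topic type is sensor_msgs/PointCloud; adjust if yours differs.
import rospy
from sensor_msgs.msg import PointCloud

def callback(msg):
    rospy.loginfo("Got %d points on /narrow_stereo_textured/points", len(msg.points))

if __name__ == '__main__':
    rospy.init_node('narrow_stereo_check')
    rospy.Subscriber('/narrow_stereo_textured/points', PointCloud, callback)
    rospy.spin()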
If the projector is not started, you can start it with reconfigure_gui:
roscd dynamic_reconfigure
scripts/reconfigure_gui
Select /camera_synchronizer_node from the drop-down list, then set projector_mode to ProjectorOn. Also make sure that narrow_stereo_trig_mode is set to WithProjector. You should be able to see a nice point cloud in rviz.
You will need to see the table well for anything to work; if you can't see the table in stereo, put a sheet of paper (preferably A3 or larger) under your objects.
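If you prefer to check or change the projector settings without the GUI, the sketch below uses the dynamic_reconfigure Python client. This is only a sketch: it assumes your dynamic_reconfigure version ships the Python client, and the integer enum value behind ProjectorOn is not documented here, so read it off reconfigure_gui before setting it programmatically.

#!/usr/bin/env python
# Sketch: inspect (and optionally change) the camera synchronizer settings from code.
# Assumes the dynamic_reconfigure Python client is available; the enum value for
# ProjectorOn is an assumption you should confirm in reconfigure_gui first.
import rospy
from dynamic_reconfigure.client import Client

if __name__ == '__main__':
    rospy.init_node('projector_check')
    client = Client('camera_synchronizer_node', timeout=10)
    config = client.get_configuration()
    rospy.loginfo("projector_mode=%s narrow_stereo_trig_mode=%s",
                  config['projector_mode'], config['narrow_stereo_trig_mode'])
    # To turn the projector on, uncomment and fill in the integer that
    # reconfigure_gui shows for ProjectorOn (hypothetical placeholder below):
    # client.update_configuration({'projector_mode': PROJECTOR_ON_VALUE})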
Tabletop object detector
This provides the object detection services. Start it with
roscd tabletop_object_detector
roslaunch launch/tabletop_node.launch
The tabletop_node has two modes of operation:
- continuous, where it runs the detection on every frame it receives and publishes the results to a topic. The default topic name is tabletop_detector_models
- service based (default), where it only runs detection when it gets a service request and gives back the result. The default service name is object_detection
You can set the operation mode in the launch file.
The information in both cases comes out as model_database/DatabaseModelPose.msg:
int32 model_id
geometry_msgs/PoseStamped pose
The model_id is the unique identifier for the detected model in the model database, and the pose is its location. By default, tabletop_node publishes model poses in the same frame and with the same stamp as the incoming point clouds.
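As a quick illustration of consuming these results in continuous mode, the sketch below subscribes to the default topic and prints each detection. It assumes the topic carries individual model_database/DatabaseModelPose messages as described above; if your version wraps the results in an array message, adjust the callback, and in service mode you would call the object_detection service instead.

#!/usr/bin/env python
# Sketch for continuous mode: print every detected model published by tabletop_node.
# Assumes the topic carries individual model_database/DatabaseModelPose messages;
# check the actual type with `rostopic type tabletop_detector_models`.
import rospy
from model_database.msg import DatabaseModelPose

def detection_callback(msg):
    p = msg.pose.pose.position
    rospy.loginfo("model %d at (%.3f, %.3f, %.3f) in frame %s",
                  msg.model_id, p.x, p.y, p.z, msg.pose.header.frame_id)

if __name__ == '__main__':
    rospy.init_node('detection_listener')
    rospy.Subscriber('tabletop_detector_models', DatabaseModelPose, detection_callback)
    rospy.spin()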
The tabletop node can also publish a number of debug markers; by default, the topic is /tabletop_detector_markers. You can enable or disable the kinds of markers it publishes by editing the launch file.
Note that for each cluster of points, the tabletop_node will fit a model, but only publish as a result those fits that exceed a quality threshold. If you look at the model fit markers it publishes, those shown in blue are the "good ones" that get published or returned as results; the ones shown in red are known to be bad so are not published back.
Interpolated IK server
This generates a path from pre-grasp to grasp by using interpolated IK. Start it with:
roscd fingertip_reactive_grasp
src/fingertip_reactive_grasp/interpolated_ik_motion_planner.py r
Object pickup
The object pickup node ties the pieces together:
- calls the detection service to get a list of models
- manages the collision map
- asks the user which object should be picked up
- gets the list of grasps from the database
- uses move arm to execute the grasps
It has a new and improved console-based interface.
Start it with:
roscd object_pickup
roslaunch launch/object_pickup_node.launch
Then, do the following:
- hit 'o' to open the gripper. All grasps will fail if the gripper starts out closed
- if the arm is in the way, hit 'm' to send it to the side. Note that move arm often reports failure to get there, even though the arm does move out of the way
- hit 'd' to detect objects. Make sure rviz is listening for Marker on the /collision_model_markers topic. You should see the table and your objects show up as green outlines. If the object is not being detected properly, try moving it around a bit and hit 'd' again.
- hit 's' to generate the stable collision map. This must be done AFTER the objects have been detected. If you are displaying the collision map in rviz, you should see no collision points on/around the detected objects
- hit 'p' to pick up an object, then select the object you want to pick up. Make sure rviz is listening to Markers on the /object_pickup_markers topic. You will see the grasps show up, and change colors as they are tried out:
- blue: grasp has not been tested yet
- red: grasp currently being tested
- yellow: initial IK check for pre-grasp has failed
- light blue: generation of interpolated IK trajectory from pre-grasp to grasp has failed
- orange: all checks OK, but move arm reported failure to reach the pre-grasp
- green: grasp has been successfully executed
- this is also the order of operations for each grasp. If any of these steps fails, the grasp is aborted and the next one is tried; a minimal sketch of this sequence is shown at the end of this section.
To try again, you can just start from the beginning of this list; no need to kill object_pickup and restart it.
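For reference, here is a minimal sketch of that per-grasp sequence. It is pseudocode in spirit: the stage functions are hypothetical placeholders standing in for the pre-grasp IK check, interpolated IK planning, move_arm call, and grasp execution, not the actual object_pickup API.

#!/usr/bin/env python
# Sketch of the per-grasp sequence described above. The stage functions are
# hypothetical placeholders (they just return True/False), not the object_pickup
# API; the point is the order of checks and the abort on the first failure.

def pregrasp_ik_check(grasp):        # yellow on failure
    return True

def interpolated_ik_to_grasp(grasp): # light blue on failure
    return True

def move_arm_to_pregrasp(grasp):     # orange on failure
    return True

def execute_grasp(grasp):            # green on success
    return True

def try_grasps(grasps):
    stages = [pregrasp_ik_check, interpolated_ik_to_grasp,
              move_arm_to_pregrasp, execute_grasp]
    for grasp in grasps:
        # all() short-circuits, so the grasp is abandoned at the first failed stage
        if all(stage(grasp) for stage in stages):
            return grasp  # first grasp that passes every stage
    return None           # every grasp failed; detect again and retry

if __name__ == '__main__':
    print(try_grasps(['grasp_0', 'grasp_1']))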