Wednesday, August 19, 2015

Team Applied Robotics APC 2015 Summary


After participating in the inaugural Amazon Picking Challenge held alongside the IEEE ICRA 2015 conference in Seattle, we would like to describe our system in more detail. The system was designed and implemented over a six-month period, in our free time and on a shoestring budget, and delivered satisfactory results: a 10th-place finish out of 28 participating teams from all over the world.


One of the first design decisions we made was to use a vacuum gripper, as a mechanical gripper would have been too large to grab objects placed at the back of the shelf. An additional benefit was that compliant gripping was easy to achieve.


The robot arm we used was a UR5. Because we had access to the same model of robot for testing as would be supplied at the APC, we did not have to ship the actual robot. We did, however, need to ensure the robot base and gripper could be mounted and transported easily. We designed a base that could be taken apart completely and reassembled quickly. The gripper and camera bracket were likewise mounted onto the end of the robot arm with just a couple of bolts. After assembly, only two things needed to be calibrated: the position of the camera with respect to the robot arm and the position of the robot with respect to the cabinet. With a clear procedure for set-up and calibration, the system was up and running within an hour.
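As an illustration, the two calibrated transforms can be published as fixed frames in ROS tf so that every module shares the same coordinate conventions. The sketch below uses hypothetical frame names and offsets; the actual values came out of our calibration procedure.

```cpp
// Minimal sketch: broadcasting the two calibrated transforms in ROS tf.
// Frame names and numeric offsets are illustrative placeholders.
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "calibration_frames");
  ros::NodeHandle nh;
  tf::TransformBroadcaster br;

  // Camera pose relative to the robot flange (from the camera calibration).
  tf::Transform camera(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.05, 0.00, 0.08));
  // Cabinet pose relative to the robot base (from the shelf calibration).
  tf::Transform cabinet(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.90, 0.00, 0.00));

  ros::Rate rate(10.0);
  while (nh.ok())
  {
    ros::Time now = ros::Time::now();
    br.sendTransform(tf::StampedTransform(camera, now, "ee_link", "camera_link"));
    br.sendTransform(tf::StampedTransform(cabinet, now, "base_link", "cabinet"));
    rate.sleep();
  }
  return 0;
}
```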

The software architecture was based on ROS, with separate modules for strategy, perception, motion planning and the interface to the UR controller. The motion planner used a heuristic approach to create Cartesian paths between the pick-up position (as determined by the perception module) and the drop-off position. URDF models of the robot and the shelf were used to check the Cartesian paths for collisions and reachability. Once a collision-free path had been found, the robot controller executed the linear segments.
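A minimal sketch of such a heuristic planner is shown below: it linearly interpolates between two poses and accepts the segment only if every intermediate pose passes a validity check. The isValid stub merely stands in for the URDF-based collision and reachability check; all names and step counts are illustrative, not our actual implementation.

```cpp
// Sketch of a heuristic Cartesian planner: sample a straight line between
// two poses and keep the path only if every sample is valid.
#include <vector>

struct Pose { double x, y, z, roll, pitch, yaw; };

// Stand-in for the URDF-based collision/reachability check.
bool isValid(const Pose& p)
{
  return p.z > 0.0;  // placeholder: e.g. stay above the table surface
}

bool planLinearPath(const Pose& from, const Pose& to, int steps,
                    std::vector<Pose>& path)
{
  path.clear();
  for (int i = 0; i <= steps; ++i)
  {
    const double t = static_cast<double>(i) / steps;
    Pose p;
    p.x = from.x + t * (to.x - from.x);
    p.y = from.y + t * (to.y - from.y);
    p.z = from.z + t * (to.z - from.z);
    // Naive Euler interpolation; fine for the small orientation changes
    // between pick and drop-off poses, otherwise a quaternion slerp is safer.
    p.roll  = from.roll  + t * (to.roll  - from.roll);
    p.pitch = from.pitch + t * (to.pitch - from.pitch);
    p.yaw   = from.yaw   + t * (to.yaw   - from.yaw);
    if (!isValid(p))  // reject the whole segment on the first invalid pose
      return false;
    path.push_back(p);
  }
  return true;
}
```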


To acquire depth information, we used a Kinect V2 sensor mounted on the robot arm. This gave us sufficient resolution, although reflection artifacts inherent to the time-of-flight technique meant the point clouds needed careful filtering before further processing.
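The sketch below shows the kind of PCL filtering this involves: clipping the cloud to the working depth range and then removing sparse outliers, which is where multi-path reflections typically show up. The parameter values are illustrative, not the ones we used.

```cpp
// Minimal sketch of point cloud filtering with PCL: depth clipping
// followed by statistical outlier removal. All thresholds are examples.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/passthrough.h>
#include <pcl/filters/statistical_outlier_removal.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
filterCloud(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& input)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr clipped(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr cleaned(new pcl::PointCloud<pcl::PointXYZ>);

  // Keep only points within the shelf's depth range.
  pcl::PassThrough<pcl::PointXYZ> pass;
  pass.setInputCloud(input);
  pass.setFilterFieldName("z");
  pass.setFilterLimits(0.5, 1.5);
  pass.filter(*clipped);

  // Drop isolated points, typical of multi-path reflection noise.
  pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
  sor.setInputCloud(clipped);
  sor.setMeanK(50);             // neighbours considered per point
  sor.setStddevMulThresh(1.0);  // distance threshold in std deviations
  sor.filter(*cleaned);

  return cleaned;
}
```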



For distinguishing objects from the shelf, recognising them and determining their poses, we took a very pragmatic approach. Because the pose of the cabinet was known, we could easily crop the required bin out of the point cloud. We then used basic PCL functionality to segment the objects, assuming that objects were not touching. Although this held for the majority of the bins, we knew that picking all objects would be unlikely. For each object segmented from the shelf, we determined its dimensions and matched them against those of the target object. If there was a match, grasp poses were determined by approximating the object with a bounding box. Using this approach, we were able to correctly pick approximately 80% of the objects in realistic shelf set-ups.
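A sketch of this segmentation and matching step, assuming PCL's Euclidean clustering and an axis-aligned bounding box (cluster thresholds and size tolerances are illustrative):

```cpp
// Sketch: cluster the bin contents (objects assumed not to touch) and
// check each cluster's bounding box against the target object dimensions.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/common.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <vector>
#include <cmath>

bool matchesTarget(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& bin_cloud,
                   float target_dx, float target_dy, float target_dz,
                   float tolerance)
{
  // Euclidean clustering: each cluster is assumed to be one object.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(bin_cloud);

  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.01);  // a 1 cm gap separates objects
  ec.setMinClusterSize(200);     // ignore leftover noise blobs
  ec.setSearchMethod(tree);
  ec.setInputCloud(bin_cloud);
  ec.extract(clusters);

  for (size_t i = 0; i < clusters.size(); ++i)
  {
    // Collect the cluster's points and compute its bounding box.
    pcl::PointCloud<pcl::PointXYZ> object;
    for (size_t j = 0; j < clusters[i].indices.size(); ++j)
      object.push_back((*bin_cloud)[clusters[i].indices[j]]);

    pcl::PointXYZ min_pt, max_pt;
    pcl::getMinMax3D(object, min_pt, max_pt);

    // Compare box dimensions with the known target dimensions.
    if (std::fabs((max_pt.x - min_pt.x) - target_dx) < tolerance &&
        std::fabs((max_pt.y - min_pt.y) - target_dy) < tolerance &&
        std::fabs((max_pt.z - min_pt.z) - target_dz) < tolerance)
      return true;
  }
  return false;
}
```

In the real system a matching cluster would also yield the grasp poses, derived from the faces of the same bounding box.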