Simulating Humanoid Robots in the Cloud: the testing behind the world's biggest competition
- Track: Testing and Automation devroom
- Room: UA2.220 (Guillissen)
- Day: Saturday
- Start: 15:35
- End: 16:20
June 2015: a raceway in California hosted the biggest full-size humanoid robot competition in history, with 26 teams from all over the world: the DARPA Robotics Challenge. The goal of the contest was to push the limits of robotics towards assisting humans in responding to natural and man-made disasters. Two years earlier, the first stage of the contest, the Virtual Robotics Challenge (VRC), took place; it replicated the same set of tasks proposed for the challenge finals, but used cloud-based robot simulation instead of real robots.
The Open Source Robotics Foundation (OSRF), thanks to its open source robotics simulator Gazebo and the ROS (Robot Operating System) framework, was selected to run this virtual contest. Managing the software infrastructure, from the simulator to machine provisioning, was a challenge in itself, and testing played a key role.
How was the testing of a robotics contest in the cloud done? What did we learn about testing software from organizing the VRC? How did using open source software help to organize the VRC?
The talk will review the testing practices that were designed and implemented during the development of the software infrastructure used for the Virtual Robotics Challenge. The scope of the techniques applied ranges from automated testing of the VRC software pieces (the Gazebo simulator, DRCSim with its DRC-specific ROS wrappers and materials, and the Cloudsim web provisioning tool) to the manual testing plan. Interesting points to explore in the talk:
- The root of all the testing done: our continuous integration (CI) setup, Jenkins, and how GPU testing is done on it for different platforms (a minimal illustrative sketch follows this list).
- The importance of moving continuous integration to run on production machines (the real challenge machines) as soon as possible.
- How different cloud providers were tested, and when and how problems appeared. The experience is worth sharing, since the problems were not trivial.
- How the “release early, release often” philosophy was applied during the development cycle to keep users in the testing loop, and how nightly builds played a fundamental role in this.
- How the ideal test suite does not exist, and how real production tests with real users can be the best of your test suites. How that was scheduled in the VRC.
- How exceptional cases, like identifying an exploit hours before the competition, were managed without invalidating the testing done before.
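As a flavour of the CI point in the first bullet above, here is a minimal, hypothetical sketch (not the actual OSRF scripts) of the kind of Python helper a Jenkins job could invoke on each platform to decide whether a worker can run GPU-dependent simulation tests and then launch them. The `glxinfo` check, the `BUILD_DIR` variable and the `gazebo_gpu` ctest label are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Hypothetical Jenkins helper: run GPU-dependent simulator tests only on
workers that actually expose an accelerated X display (illustrative sketch)."""

import os
import subprocess
import sys

# Assumed location of the CMake build tree on the Jenkins worker.
BUILD_DIR = os.environ.get("BUILD_DIR", "build")


def has_accelerated_display() -> bool:
    """Return True if an X display with direct rendering is available."""
    if "DISPLAY" not in os.environ:
        return False
    try:
        out = subprocess.run(["glxinfo"], capture_output=True, text=True, check=True)
    except (OSError, subprocess.CalledProcessError):
        return False
    return "direct rendering: Yes" in out.stdout


def main() -> int:
    if not has_accelerated_display():
        # Let Jenkins record the build as skipped rather than failing hard.
        print("No accelerated display found; skipping GPU test suite.")
        return 0
    # Run only the tests labelled as GPU-dependent (label is an assumption).
    result = subprocess.run(
        ["ctest", "--output-on-failure", "-L", "gazebo_gpu"],
        cwd=BUILD_DIR,
    )
    return result.returncode


if __name__ == "__main__":
    sys.exit(main())
```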
The goal is to provide the audience with first-hand feedback and conclusions about software testing and testing decisions in a large, real-world open source robotics event.
Speakers
Jose Luis Rivero