PEMRA 2012, Poznan workshop

1st International Workshop on Perception for Mobile Robots Autonomy – PEMRA 2012

 


September 27, 2012 (Thursday)

08.00 – 09.00    —  Registration of participants
09.00 – 09.20    —  Opening
09.20 – 10.20    —  Plenary lecture
10.20 – 10.40    —  Coffee break
10.40 – 12.20    —  Session I – Localization and mapping

12.20 – 13.00    —  Lunch
13.00 – 14.40    —  Session II – 3D perception

14.40 – 15.00    —  Coffee break
15.00 – 16.00    —  Tour of research laboratories
19.00 – 22.00    —  Banquet

September 28, 2012 (Friday)

09.00 – 10.00   —  Plenary lecture
10.00 – 10.20   —  Coffee break
10.20 – 12.00   —  Session I – Multi-sensor systems

12.00 – 12.40   —  Lunch
12.40 – 14.20   —  Session II – Sensors and applications

14.20 – 14.40   —  Coffee break
14.40 – 15.00   —  Closing session
18.00 — Researchers’ Night

Technical program

Plenary lecture (Thursday)

VISUAL SLAM: ALGORITHMS, CHALLENGES AND APPLICATIONS
Juan D. Tardós

Session I – Localization and mapping (Thursday)

1. Absolute localization for low capability robots in structured environments using barcode landmarks (pdf)
Duarte Dias and Rodrigo Ventura
2. Interactive mapping using range sensor data under localization uncertainty (pdf)
Pedro Vieira and Rodrigo Ventura
3. On augmenting the visual SLAM with direct orientation measurement using the 5-point algorithm (pdf)
Adam Schmidt, Marek Kraft, Michał Fularz and Zuzanna Domagała
4. Simultaneous localization and mapping for tracked wheel robots combining monocular and stereo vision (pdf)
Filipe Jesus and Rodrigo Ventura

Session II – 3D perception (Thursday)

1. Comparative assessment of point feature detectors in the context of robot navigation (pdf)
Adam Schmidt, Marek Kraft, Michał Fularz and Zuzanna Domagała
2. Toward Rich Geometric Map for SLAM: Online Detection of Planes in 2D LIDAR (pdf)
Cyrille Berger
3. Autonomous Person Following with 3D LIDAR in Outdoor Environment (pdf)
Karsten Bohlmann, Andreas Beck-Greinwald, Andreas Zell, Sebastian Buck and Henrik Marks
4. Understanding 3D shapes being in motion (pdf)
Janusz Będkowski
 

Plenary lecture (Friday)

TOWARDS 3D SEMANTIC PERCEPTION
Andreas Nüchter

Session I – Multi-sensor systems (Friday)

1. Enhancing Sensor Capabilities of Walking Robots Through Cooperative Exploration with Aerial Robot (pdf)
Georg Heppner, Arne Roennau and Ruediger Dillmann
2. Sensory System Calibration Method for a Walking Robot (pdf)
Przemysław Łabęcki and Dominik Belter
3. A Unified Approach to Extrinsic Calibration Between a Camera and a Laser Range Finder Using Point-plane Constraints (pdf)
Edmond So, Filippo Basso, Alberto Pretto and Emanuele Menegatti
4. The Influence of Drive Unit on Measurement Error of Ultrasonic Sensor in Multi-Rotor Flying Robot (pdf)
Stanisław Gardecki, Jarosław Gośliński and Wojciech Giernacki

Session II – Sensors and applications (Friday)

1. Tactile Sensing for Ground Classification (pdf)
Krzysztof Walas
2. Assisted Teleoperation of Quadcopters Using Obstacle Avoidance (pdf)
João Mendes and Rodrigo Ventura
3. Target Tracking by a Mobile Camera in Obstacle Cluttered Environment
Carlos-Alberto Díaz-Hernández, José-Luis Muñoz-Lozano and Juan López-Coronado
4. The Registration System for the Evaluation of Indoor Visual SLAM and Odometry Algorithms (pdf)
Adam Schmidt, Marek Kraft, Michał Fularz and Zuzanna Domagała

Presentation Guidelines

Each oral presentation slot is 20 minutes long: about 15 minutes for the talk and about 5 minutes for discussion (questions), with an additional 5 minutes to switch presenters. A PC with Windows/Linux will be available, but presenters may also use their own computers.

Two plenary talks are planned, one opening each day of the workshop. These presentations will be kindly given by:

Juan D. Tardós, Universidad de Zaragoza

Visual SLAM: Algorithms, Challenges and Applications

Abstract:

Recent research shows that computing medium-scale visual maps of rigid scenes in real time is feasible. Current maps, composed of a sparse set of points, are adequate for accurate camera localization, but quite poor for performing high-level tasks such as robot navigation, object manipulation or human-computer interaction. Furthermore, most available techniques are computationally demanding, and thus unable to map large environments.

In this talk we will present algorithms developed at the University of Zaragoza for large-scale visual mapping with monocular and stereo cameras, and efficient place recognition techniques for reliable loop detection.  We will also discuss several applications and the challenges they pose for visual SLAM: most robotics and human interaction applications require scene understanding and object recognition techniques to boost the semantic contents of the maps; novel medical applications will require the ability to map non-rigid scenes.

 

Andreas Nüchter

Towards 3D Semantic Perception (pdf)

Abstract:

Intelligent autonomous action in ordinary environments calls for maps. 3D geometry is generally required for planning motions in complex scenarios and for self-localization with six degrees of freedom (6 DoF: x, y, z positions and roll, yaw, pitch angles). Meaning, in addition to geometry, becomes inevitable if a system is supposed to interact with its environment in a goal-directed way. A semantic stance enables systems to reason about objects; it helps disambiguate or round off sensor data; and knowledge becomes reviewable and communicable.

The talk describes an approach and two integrated robotic systems for semantic 3D mapping. The prime sensors are 3D laser scanners. Individual scans are registered into a coherent 3D geometry map by 6D SLAM, our bundle adjustment framework for laser scanner data. Coarse scene features (e.g., walls and floors in a building) are determined by semantic labeling. More delicate objects are then detected and localized by a trained classifier or by matching Google 3D Warehouse models. Inpainting methods are used to reconstruct occlusions. In the end, the semantic maps can be visualized for human inspection. We sketch the overall architecture of the approach, explain the respective steps and their underlying algorithms, give examples based on implementations on working robots, and discuss the findings.

SECRETARIAT

PEMRA 2012 Secretariat
www.pemra2012.put.poznan.pl
pemra@put.poznan.pl
T: +48 616652365
F: +48 616652563