Workshop @ RO-MAN’17

EMSHRI’17 – Third International Workshop on Evaluation Methods Standardization for Human-Robot Interaction

An international workshop held in conjunction with the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2017).


Social robots can take on several roles, such as home care robots (e.g. for seniors), mediators (e.g. for persons with autism spectrum disorders), and companions (e.g. for children alone at home). When a new application or behavior is created for a robot, researchers need to validate it. They obviously need to validate technical aspects: has the robot correctly executed its tasks, has it correctly moved its actuators, and so on. But they also need to validate psychological aspects. The literature shows that the interaction between a robot and a human is complex: robots, through their presence and their capability to act on our environment, influence people. The literature also shows that humans tend to anthropomorphize robots and can reject a robot, for example if it does not respect particular social norms. Evaluating an application on a robot is therefore difficult, because current methodologies do not easily capture how humans experience the interaction. Common objectives in HRI are to maximize well-being, to build robots that are acceptable, and to build robots that can efficiently help people. We need to understand the relationship between robots and humans: which social skills matter, what impact robots have, which roles a robot can and cannot fulfill, and so on. To learn about robots and about the interaction, we need to study humans while they interact with robots.

People who create robot applications are typically computer scientists or roboticists. They are often not experts in evaluating human-robot interactions and their effects. As such, input from psychologists, ethologists, sociologists, philosophers, anthropologists, and ergonomists (a non-exhaustive list), who are specialists in analyzing human behaviors and attitudes, is invaluable. These disciplines use different methodologies, most of which are readily applicable to Human-Robot Interaction studies. For example, Human-Robot Interactions are mainly evaluated in controlled environments, such as laboratory settings. Although such evaluations bring knowledge, they do not help with evaluating Human-Robot Interactions in natural contexts. We also note that the literature includes studies performed without such specialists, which may contain methodological errors or biases. We therefore believe it is necessary to standardize Human-Robot Interaction evaluation methods.

Target Audience

This workshop focuses on the interaction and relationship between a human and a robot, not on the evaluation of the robot itself. Our main objective is to define new evaluation methods that make HRI research reproducible. This call is open to all experts in Human-Robot Interaction studies: psychologists, ethologists, ergonomists, sociologists, philosophers, anthropologists, computer scientists, and roboticists.

Topics of interest include, but are not limited to:

  • Application to HRI evaluation of ethology, ergonomics, psychology, sociology, philosophy, anthropology, …
  • New approaches to evaluate Human-Robot Interaction
  • Human-Human interaction studies
  • Human-Animal interaction studies
  • Human-Computer Interaction studies
  • Human-Robot Interaction studies
  • Communication studies
  • Good practices in the evaluation of Human-Robot Interaction
  • Human factors
  • User-experience
  • Evaluation methods
  • Evaluation metrics
We invite extended abstracts and short papers of 2 to 6 pages, submitted as PDF in IEEE format (choose “Templates for Transactions”).

All papers must be submitted via the EasyChair submission site.

Important Dates:

Submission deadline: July 2nd, 2017 (extended from June 18th, 2017)

Acceptance notification: July 13th, 2017

Camera-ready submission: August 5th, 2017 (extended from July 31st, 2017)

Workshop event: August 28th, 2017

09:00 – 10:30 Céline Jost – Introduction and feedback about previous EMSHRI workshops
Dimitris Chrysostomou – Feedback about a related workshop series
Nigel Crook – Title to be defined
10:30 – 10:50 Coffee Break
10:50 – 12:20 Papers session
12:20 – 14:00 Lunch Break
14:00 – 15:30 Small groups will work on the following topics:

A. Determining the pros and cons of existing evaluation methods when applied to Human-Robot Interaction.

B. Providing the state of the art of protocols that can be replicated.

C. Providing the state of the art of questionnaires that can be reused.

D. Listing the evaluation criteria needed to evaluate Human-Robot Interaction and to ensure valid, replicable results (e.g. studies conducted in the real world, experimental settings…).

E. Establishing rules about statistical analyses: what conditions are required to ensure the validity of results?

15:30 – 15:50 Coffee Break
15:50 – 17:00 Interactive session conclusion
A representative from each group will present a summary of their group’s discussions.
17:00 – 17:20 Closing remarks
Workshop conclusion and discussion about the international book outline and writing organization.
Organizers

Tony Belpaeme, Plymouth University, United Kingdom
Cindy Bethel, Mississippi State University, USA
Dimitrios Chrysostomou, Aalborg University, Denmark
Nigel Crook, Oxford Brookes University, United Kingdom
Marine Grandgeorge, University of Rennes I, France
Céline Jost, Paris 8 University, France
Brigitte Le Pévédic, University of South Brittany, France
Nicole Mirnig, University of Salzburg, Austria