EMSHRI’17 – Third International Workshop on Evaluation Methods Standardization for Human-Robot Interaction
An international workshop held in conjunction with the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2017).
People who create robot applications are typically computer scientists or roboticists, who are often not experts in evaluating human-robot interactions and their effects. As such, input from psychologists, ethologists, sociologists, philosophers, anthropologists, and ergonomists (a non-exhaustive list), who are specialists in analyzing human behaviors and attitudes, is invaluable. These disciplines use different methodologies, but all are to a large extent readily applicable to Human-Robot Interaction studies. For example, Human-Robot Interactions are mainly evaluated in controlled environments, such as laboratory settings. While these types of evaluations produce knowledge, they do not help with evaluating Human-Robot Interactions in natural contexts. We also note that the existing literature includes studies performed without such specialists, which may contain methodological errors or biases. Therefore, we believe it is necessary to standardize Human-Robot Interaction evaluation methods.
This workshop focuses solely on the interaction and relationship between a human and a robot, not on the evaluation of the robot itself. Our main objective is to define new evaluation methods that make HRI research reproducible. This call is open to all experts in Human-Robot Interaction studies: psychologists, ethologists, ergonomists, sociologists, philosophers, anthropologists, computer scientists, and roboticists.
Topics of interest include, but are not limited to:
- Application to HRI evaluation of ethology, ergonomics, psychology, sociology, philosophy, anthropology, …
- New approaches to evaluate Human-Robot Interaction
- Human-Human interaction studies
- Human-Animal interaction studies
- Human-Computer Interaction studies
- Human-Robot Interaction studies
- Communication studies
- Good practices in the evaluation of Human-Robot Interaction
- Human factors
- Evaluation methods
- Evaluation metrics
All papers must be submitted via the EasyChair submission site.
Paper submission deadline: July 2nd, 2017 (extended from June 18th, 2017)
Notification of acceptance: July 13th, 2017
Camera-ready deadline: August 5th, 2017 (extended from July 31st, 2017)
Workshop event: August 28th, 2017
|09:00 – 10:30||Introduction and invited talks|
Céline Jost – Introduction and feedback about previous EMSHRI workshops
Dimitris Chrysostomou – Feedback about a related workshop series
Nigel Crook – Title to be defined
|10:30 – 10:50||Coffee Break|
|10:50 – 12:20||Papers session|
|12:20 – 14:00||Lunch Break|
|14:00 – 15:30||Interactive session|
Small groups will work on the following topics:
A. Determining the pros and cons of existing evaluation methods when applied to Human-Robot Interaction.
B. Providing the state of the art of protocols that can be replicated.
C. Providing the state of the art of questionnaires that can be reused.
D. Listing evaluation criteria needed to evaluate Human-Robot Interaction and to ensure valid, replicable results (e.g. studies conducted in the real world, experimental settings…).
E. Establishing rules for statistical analyses: what conditions are required to ensure the validity of results?
|15:30 – 15:50||Coffee Break|
|15:50 – 17:00||Interactive session conclusion|
A representative from each group will present a summary of the group's discussions.
|17:00 – 17:20||Closing remarks|
Workshop conclusion and discussion about the international book outline and writing organization.
Cindy Bethel, Mississippi State University, USA
Dimitrios Chrysostomou, Aalborg University, Denmark
Nigel Crook, Oxford Brookes University, United Kingdom
Marine Grandgeorge, University of Rennes I, France
Céline Jost, Paris 8 University, France
Brigitte Le Pévédic, University of South Brittany, France
Nicole Mirnig, University of Salzburg, Austria