List of Sessions:
Assertions in ethics of environmental robotics
Ensuring the Safety of AI and Robotics: An Interdisciplinary Approach to Global Standards and Governance
Investigating Skills and Ethical Challenges in Industry: Towards a Human-Centred Approach to Workplace Design
New Technology, Cybernetic Avatars and Their Economic Impact
Regulating AI-innovation in a turbulent geopolitical era
Standards for Robotics in 2025
Tall tales and real myths: collective imagining of a living robotics lexicon
The Future of Human Enhancement: Policy and Law for Cybernetic Avatars
Assertions in ethics of environmental robotics
Organizers:
Avgi Stavrou, PhD Autonomous and Interactive Multi-agent Systems
avgi.stavrou@bristol.ac.uk
Matimba Swana, PhD Robotic Nanosystems and Bioethics
matimba.swana@bristol.ac.uk
Ella Maule, PhD Community Robotics
ella.maule@bristol.ac.uk
Khulud Alharthi, PhD Swarm Robotics
khulud.alharthi@bristol.ac.uk
Description:
Issues caused by climate change, such as rising temperatures, extreme weather events, and declining food security, have driven an increased deployment of autonomous systems. These systems range from simple devices equipped with sensors that monitor soil conditions to more complex technologies such as crop-monitoring drones and AI-powered wildlife tracking systems.
However, environmental and agricultural robots present challenging ethical considerations that extend beyond traditional robotics ethics frameworks. This is because they interact not only with humans but also with complex ecosystems, raising questions about ecological footprint, biodiversity preservation, and sustainable development. To address this, the robotics community should explore environmental robots that balance human-centered and interspecies design thinking.
This workshop will bring together interdisciplinary expertise across the fields of robotics, environmental science, ethics, policy, sociology, and philosophy to foster debate that shifts the focus beyond human-robot interactions and considers the wider needs of the planet, ecosystems and environment. The workshop will achieve this by inviting speakers to present provocations that stimulate thinking, curiosity and exploration.
As active members of the PROTEAS group (protecting the earth with autonomous systems), we have organised both national and international events and participated in conversations with experts across diverse sectors. These interactions have underscored the need for a space that encourages discussion across diverse disciplines. We therefore believe this conference provides the perfect forum for such dialogue.
The workshop follows a provocation-based format designed to engage and inspire participants, moving from provocative statements to substantive debate. Two to four expert speakers will each give a short presentation on a contention, offering an opinion on and justification for it, for example "this house believes non-human centered robotics is key to global sustainability". This should spark discussion on the broader implications, challenges and/or benefits of designing robotics with a focus on the entire ecosystem rather than just human interaction. The outcome of the session is to help shape the research agenda by identifying and prioritizing research requirements for environmental and agricultural robots.
Speaker information:
Dr Richard Mawle, University of the West of England
richard2.mawle@uwe.ac.uk
Richard is a Programme Leader for BA (Hons) Product Design and a Senior Lecturer in Product Design Engineering at the University of the West of England (UWE), based in the College of Arts, Technology and Environment (CATE). Most recently his research has been focused on Robotics and AI Ethics in the context of Sustainable Product Design and Circular Economy.
Peter Winter, University of Bristol
peter.winter@bristol.ac.uk
Dr Peter Winter is a sociologist in the field of Science and Technology Studies (STS) specialising in the analysis of complex sociotechnical systems, particularly sociotechnical systems involving Artificial Intelligence (AI) applications. He is particularly interested in the challenges of developing, integrating and implementing AI technologies for use in real-world contexts (e.g. the role of regulation, trust and transparency) as well as its effects on professional practice and organisational structure.
Prof Thomas Scott, University of Bristol, TBC
T.B.Scott@bristol.ac.uk
Thomas Scott is a professor of Nuclear Materials and a spin-out executive director. His research is based around the ageing, corrosion and characterisation of radioactive materials in engineered and environmental systems, and has resulted in over 60 published papers and 3 patents. He is the academic lead for the Sellafield UK Centre of Expertise for Uranium and Reactive Metals. Working with Sellafield, he has successfully developed and deployed two novel radiation detection technologies in the past 5 years: the Advanced Airborne Radiation Monitoring (AARM) system, which won the 2014 RAEng ERA award for innovation, and an extreme-dose radiation detection system made from diamond, which provides inspiration for the onward development of diamond-based nuclear energy harvesters and power cells.
Workshop Outcomes:
We will summarise the key priorities identified during the session and write a report outlining the next steps for developing the research agenda and how participants can stay involved. The workshop insights will be compiled into a paper. This publication will provide a reference for researchers, industry practitioners, and policymakers, highlighting advancements in ethical environmental robotics, particularly in human-robot interaction and maintenance. We will also share the findings on the PROTEAS website and through our networks, helping us disseminate our findings to a broader audience beyond the academic world. The workshop addresses a gap in robotics ethics by focusing on environmental applications, aiming to guide future ethical research and development in environmental robotics.
Ensuring the Safety of AI and Robotics: An Interdisciplinary Approach to Global Standards and Governance
Organizers:
Keio University Moonshot Research and Development Project “Realization of a Society that can Use Cybernetic Avatars Safely and Securely.”
Fumio Shimpo, Professor,
Faculty of Policy Management, Keio University, shimpo@sfc.keio.ac.jp
Kyoko Yoshinaga (*corresponding organizer), Project Associate Professor,
Graduate School of Media and Governance, Keio University, kyokoy@sfc.keio.ac.jp
Description:
As advancements in AI and robotics accelerate, ensuring their safety has become a complex and pressing challenge. While these technologies have the potential to revolutionize industries, improve efficiency, and address global issues, they also present significant risks that require multidimensional and balanced approaches. Ensuring safety involves technical, ethical, legal, and societal considerations, making it a multifaceted issue.
Efforts to address these challenges are underway at both the international and national levels. Global organizations such as the G7, G20, OECD-GPAI, and the United Nations have been pivotal in developing principles and frameworks for the safe development and deployment of AI. These principles emphasize transparency, fairness, accountability and respect for human rights. Moreover, the International Organization for Standardization (ISO) is actively working to establish common technical standards for AI and robotics, ensuring interoperability, reliability, and safety across borders. These efforts aim to build trust and foster global cooperation.
At the national level, many countries are creating regulations and establishing AI Safety Institutes to address these issues. However, the concept of "safety" in AI and robotics varies among nations, reflecting differences in cultural values, economic priorities, technological readiness, and context. Therefore, achieving safety requires more than technological solutions or regulations; it requires a holistic, interdisciplinary approach.
To effectively address the safety challenges of AI and robotics, it is essential to foster interdisciplinary discussions. By bringing together experts from fields such as computer science, engineering, ethics, law, sociology, anthropology, philosophy, psychology, history, and economics, it becomes possible to address the issue holistically. These discussions help to clarify the limits of technical safeguards, identify the risks, and explore innovative approaches to governance.
This session aims to foster interdisciplinary collaboration by bringing together experts from diverse fields on an international scale. It is open to all participants who submit a 2-5 page paper on a related topic by 15 February 2025 and whose paper is accepted. (For further details, please refer to the "Instructions for Authors" page.) Participants will present their perspectives on how to ensure the safety of AI and robotics and engage in vibrant discussions to explore practical solutions. Key questions include: How can we implement effective technical safeguards? What regulatory models best promote safety while encouraging innovation? How can global standards, such as those developed by ISO/IEC, be integrated into national frameworks?
By fostering mutual understanding and generating actionable insights, this session aims to advance the conversation on AI and robotics safety. The ultimate goal is to develop effective, practical approaches that address the risks of these technologies while maximizing their potential to benefit humanity.
Investigating Skills and Ethical Challenges in Industry: Towards a Human-Centred Approach to Workplace Design
Organizers:
Maryam Bathaei Javareshk,
Cranfield University, United Kingdom
Iveta Eimontaite, Cranfield University, United Kingdom
Sarah Fletcher, Cranfield University, United Kingdom
Description:
Understanding human needs and designing workplaces to address these needs are critical for enhancing workforce productivity and sustainability. As industries increasingly adopt automation and robotics, it becomes essential to consider not only the technical challenges but also the ethical implications and the changes in skill requirements these innovations bring, both to the workforce and to society at large. Ethical concerns extend beyond integrating technology into operations; they involve ensuring that robots are implemented in a way that safeguards the workforce's well-being and mitigates potential long-term societal impacts.
With robots increasingly being integrated into manual tasks in advanced manufacturing, human factors must be prioritised throughout the design and implementation stages to ensure acceptance and compatibility with human operators. To this end, current EU initiatives such as AI-PRISM and CONVERGING focus on integrating human-centred design into AI-based automation solutions. Incorporating human-centred design into these projects ensures better alignment with operators' requirements while addressing ethical challenges related to participation, skill retention, and upskilling, as well as considering the broader societal impacts of these changes.
The special session proposed here will explore the emerging challenges of skill changes faced by manufacturing employees, from operators to managers, who are or will be working with or alongside robots across different industrial settings. The session will open with a discussion of participatory design workshops conducted within the AI-PRISM and CONVERGING projects. These participatory workshops were designed for operators to share their experiences, concerns, and suggestions regarding AI and robotic integration into their daily tasks, and thus to further drive technology development and integration. The discussion of these workshops' findings will be followed by four or five presentations from invited speakers on the topics of robot ethics and changing skills in manufacturing.
The session will end with a group activity in which attendees discuss example scenarios, capturing their opinions on the changing skills in each scenario and attempting to address the ethical issues raised. Attendees will also be guided to consider how ethical concerns can be addressed and to outline how various stakeholders might meet these needs and challenges in the future.
New Technology, Cybernetic Avatars and Their Economic Impact
Organizer:
Tatsuma Wada, Keio University, Japan
twada@keio.jp
Description:
This session will explore the impact of emerging technologies, particularly cybernetic avatars (CA), on our economy and daily lives. Specifically, we will examine whether CA can enhance human happiness, whether they might replace human labor across a wide range of jobs, and how market activities, such as stock market transactions using CA, should be regulated. These questions are critical to address, as CA technology is expected to emerge in the near future. However, empirically testing related hypotheses remains highly challenging, given that CA has not yet been fully implemented. Despite this limitation, we aim to infer the potential societal impacts of CA using existing data and newly collected samples during this session.
The confirmed papers and abstracts to be presented are as follows.
- Exploring the Impact of Cybernetic Avatars and Head-Mounted Displays on Various Aspects of Well-Being, by Shinichi Yamaguchi et al. (to be presented by Shinichi Yamaguchi, International University of Japan). The paper is available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4982172
This study aims to clarify the relationship between the use of cybernetic avatars (CA) and head-mounted displays (HMD) and people's well-being, focusing on how these emerging technologies are perceived in terms of well-being. The results show that while CA use alone is not significantly related to higher well-being, combining CA with HMD leads to significantly higher well-being across multiple dimensions, including long-term life satisfaction.
- Calculating Exposure Score with Japanese Databases, by Hideyuki Kawashima, Keio University
Michael Webb proposed a method for calculating an exposure score to AI, software, and robotics. This method calculates the relevance between a US patent dataset and an occupational database called O-NET. In this study, we show the results of calculating exposure scores using a patent database in Japan and an occupational database called JobTag, which is a Japanese version of O-NET. (A toy illustration of such a score appears after this list.)
- Circuit Breakers for Stock Markets with Cybernetic Avatars, by Tatsuma Wada, Keio University
Many stock exchanges implement circuit breakers to prevent excessive price fluctuations that could lead to panic in the market. However, the optimal threshold for these circuit breakers remains uncertain, and the potential impact on the stock market in their absence is not fully understood. In this study, we use time series data to analyze counterfactual stock price scenarios under varying circuit breaker thresholds. (A toy illustration of a threshold rule also appears after this list.)
- Automation Bias in the AI Act: On the legal implications of attempting to de-bias human oversight of AI, by Johann Laux and Hannah Ruschemeier (to be presented by Johann Laux, Oxford University)
- Anthropomorphism in Human-Computer Interaction, by Wen Hai, Osaka University, et al. (to be presented by Sotaro Katsumata, Osaka University)
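To make the exposure-score idea concrete, here is a minimal sketch of how an occupation-level score could be computed from text overlap between patent descriptions and occupational task descriptions. The overlap measure, stop-word list, and all example data are illustrative assumptions made for this programme note; Webb's actual method, and the pipeline used with Japanese patents and JobTag, are more sophisticated.

    # Illustrative sketch only: a crude text-overlap "exposure score".
    # Dataset contents, the stop-word list, and the scoring rule are
    # assumptions made for this example, not the study's actual method.
    STOP = {"a", "an", "and", "for", "in", "of", "the", "to", "with"}

    def tokens(text):
        # Lowercase word set with punctuation stripped and stop words removed.
        return {w.strip(".,;:()").lower() for w in text.split()} - STOP

    def overlap(task, patent):
        # Share of a task's words that also appear in a patent description.
        t, p = tokens(task), tokens(patent)
        return len(t & p) / len(t) if t else 0.0

    def exposure_score(task_descriptions, patent_texts):
        # For each task, take its best-matching patent; average over tasks.
        return sum(max(overlap(t, p) for p in patent_texts)
                   for t in task_descriptions) / len(task_descriptions)

    # Toy stand-ins for patent abstracts and occupational task descriptions.
    patents = [
        "autonomous robot arm for sorting packages in a warehouse",
        "machine learning system for detecting defects in manufactured parts",
    ]
    occupations = {
        "warehouse operative": ["sort and move packages", "load delivery vehicles"],
        "graphic designer": ["design logos and page layouts for clients"],
    }
    for name, tasks in occupations.items():
        print(name, round(exposure_score(tasks, patents), 2))

Run as-is, the toy example assigns a higher score to the warehouse occupation than to the designer, which is the qualitative pattern such a score is meant to capture.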
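Similarly, the circuit-breaker comparison can be illustrated with a toy halt rule. The rule below (freeze the price for the rest of the session once the drawdown from the opening price reaches the threshold) and the price path are assumptions for illustration only; the study's counterfactual time-series model is not reproduced here.

    # Toy circuit-breaker rule: once the drawdown from the opening price
    # reaches `threshold`, trading halts and the price is frozen for the
    # rest of the session. Both the rule and the path are illustrative.
    def apply_circuit_breaker(prices, threshold):
        open_price = prices[0]
        out, halted = [], False
        for p in prices:
            if halted:
                out.append(out[-1])  # price frozen after the halt
                continue
            out.append(p)
            if (open_price - p) / open_price >= threshold:
                halted = True
        return out

    # Hypothetical intraday path containing a sharp fall and partial recovery.
    path = [100, 99, 97, 93, 88, 85, 90, 92]
    for threshold in (0.05, 0.10, 0.20):
        print(threshold, apply_circuit_breaker(path, threshold))

Comparing the resulting counterfactual paths across thresholds mirrors, at a cartoon level, the kind of question the study asks with real time-series data.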
Regulating AI-innovation in a turbulent geopolitical era
Organizers:
Roeland De Bruin (Utrecht University), The Netherlands
Lesley Brooks (University of Twente), The Netherlands
Description:
Whilst the ink of the European Union's Artificial Intelligence Regulation (AI-act) is still drying, foreseen and unforeseen challenges for both innovators and consumers of AI-technology are already coming to the fore. From an internal AI-act perspective, one of the two envisaged cornerstones for the enforcement of its rules has been removed from the regulatory agenda: the AI-liability Directive is no longer a priority for the Union legislators. Meanwhile, many of the norms and standards that are crucial for filling in the often vague provisions of the AI-act are still awaited. As the AI Office and the national authorities prepare to provide guidance on their strict public enforcement powers, the addressees of the AI Act, mainly innovators, are seeking ways to comply that remain viable. From an external perspective, the geopolitical cards are being reshuffled. Regulation of AI appears to no longer be a priority in the United States, and uncertainty is growing regarding the trustworthiness of US-based AI-providers in terms of the availability and performance of AI-related functionality, data analysis, processing and storage facilities, and related infrastructures, not to mention the envisaged uses of AI-related content, information and data.
Against this background, the purpose of the Special Session is twofold. Firstly, we will investigate to what extent AI regulations such as the EU AI-act are still serving their purposes in terms of stimulating innovation on the one hand and protecting citizens on the other. From an innovator's perspective, regulatory factors can be assessed such as legal certainty, the stringency of the rules in relation to their aims, and the flexibility both to adapt norms to changing technological and societal circumstances and to allow innovators to fit the rules to their specific businesses. From a consumer's perspective, the rules can be evaluated in terms of which risks they mitigate or create for citizens, and the trust they may foster in terms of safety, ethics, protection and security, as well as human rights. Secondly, we seek to provide recommendations for AI-regulators to improve the regimes with a keen eye to protecting citizens and stimulating AI-innovation at the same time.
Outline of the session (approximately 2-2.5 hours):
- Introduction and opening statement (10 minutes)
- 4 – 6 contributions (15 minutes each + 5 minutes Q&A)
- Panel discussion (20 minutes) + Group discussion
- Closing statement (10 minutes)
Contribution submission plan:
- Deadline for abstract (paper or presentation): 25 April
- Review and notification: 9 May
- Contribution submission deadline (max 8 pages; or 20 sheets): 23 May
Standards for Robotics in 2025
Organizers:
Mohammad Osman Tokhi, School of Engineering, London South Bank University, UK
Sarah R. Fletcher, Centre for Robotics and Assembly, Cranfield University, United Kingdom
Description:
Rapid advances in robotics and sensors have changed the world in recent years, bringing new technologies that have transformed many aspects of everyday life. This has naturally driven concerns about the potential impacts of these numerous changes and, therefore, about what regulation is needed to ensure safety and performance are controlled. As standards are developed or updated whenever advances in knowledge and innovation create a need for new guidance, the rapid rise of new robotic systems has led to a multitude of new and/or revised standards for robotics in recent years.
This special session aims to bring together key contributions that will reflect progress in the standardisation of robotics and present a snapshot of the current landscape in 2025. We invite authors to submit papers on any relevant topic including:
• An individual standard or a set of standards for robotics
• The standards related to a specific topic area or context of robots
• The general current state or landscape of robot standards
Submissions:
• We invite authors to submit papers that are no more than 12 pages long, including figures, but excluding references
• All papers for the workshop must be formatted according to the conference instruction found at: https://clawar.org/icres2025/instructions-to-authors/
• Deadline for abstract submission: April 14th (500 words; please state the title of the workshop)
• Notification of paper acceptance will be issued by April 21st
• Deadline for full camera-ready papers: May 16th
• At least one author of each accepted paper must attend the workshop
Tall tales and real myths: collective imagining of a living robotics lexicon
Organizers:
Kit Kuksenok, PhD
kit.kuksenok@gmail.com
Suet Lee
University of Konstanz, Germany
suet.lee@uni-konstanz.de
Matimba Swana
University of Bristol, UK
matimba.swana@bristol.ac.uk
Description:
Storytelling is a powerful tool for sharing community narratives, simplifying complex issues, and influencing policymakers [1]. When it comes to telling stories about robots, what terms are used, by whom, and why? The technical terminology in robotics synthesises academic research, industry practices, and standardisation efforts [2]. As technologies evolve, their design, implementation, and use are influenced by both technological capabilities and society [3]. The terms used to describe, understand, and enact technology are also constructed by storytelling for wider audiences. For example, using the term "autonomous" stresses a robot's independence, which influences expectations of responsibility and accountability. All technical terms present both a colloquial face and a formal face [4], sometimes placing "accurate" and "evocative" uses in tension or disagreement. The more a term's technical and colloquial uses diverge, the more influence storytelling can have.
Policy articulation and implementation, as well as political will, are informed by the understanding and expectations of both policymakers and the body politic.
This workshop aims to build bridges between the multiplicity of technical and colloquial uses to empower collaborative storytelling about the role and development of robots. We draw on design fiction theories and techniques to bridge imagination and reality. The proposed workshop invites people from different backgrounds to collectively develop a free and open-source collective dictionary of the terminology needed for robust discussion of robotics ethics and standards today. The resulting Living Robotics Lexicon is a dynamic document that aims to capture the multiple narratives at play in different disciplinary and societal milieus by sharing stories and their readings. The narrative approach spans use cases in reality and in fiction, empirical observations of emotional responses to terms, and other texts brought into a shared space of meaning-making. This approach helps workshop participants share their respective understandings of the impact of robotics on society while developing a more robust shared lexicon. Rather than constituting challenges, the different interpretations of each term become essential in building bridges.
The lexicon begins as a list of terms and concepts submitted through an open call starting in May 2025 and remaining open until the start of the workshop, which we curate as part of facilitation. Then, with workshop participants, each term is expanded to explore its technological and narrative capacity, with the possibility of expanding the lexicon further. We propose either a series of three interactive facilitated sessions or an exhibition with three sets of interactive artifacts to engage participants. Each of the three sessions or sections focuses on a different temporality of storytelling about robotics: past, near-future, and far-future. In either case, the facilitation protocol is shared as part of the dictionary upon the event's completion.
The workshop will capture narratives of robotics and how participants interact with different terms. Participants are encouraged to explore each term, associating benefits, challenges, ethical dilemmas, and misunderstandings to facilitate deeper reflection. We aim to provide a space for creative and critical thinking that results in a free, open-source resource that can continue to evolve over time and be shared with ever-wider constituencies. Creating the Living Robotics Lexicon can aid clearer communication, more impactful advocacy, and engaging storytelling. A shared understanding promotes a more inclusive approach to the development and integration of robotic technologies.
[1] Davidson, B. Storytelling and evidence-based policy: lessons from the grey literature. Palgrave Commun 3, 17093 (2017).
https://doi.org/10.1057/palcomms.2017.93
[2] Guizzo, E., & Ackerman, E. (2023) Robotics glossary. Robots created by IEEE Spectrum. https://robotsguide.com/learn/robotics-glossary [Accessed January 13, 2025]
[3] Kudina, O., van de Poel, I. A sociotechnical system perspective on AI. Minds & Machines 34, 21 (2024). https://doi.org/10.1007/s11023-024-09680-2
[4] Agre, Philip E. “Toward a critical technical practice: Lessons learned in trying to reform AI.” Social science, technical systems, and cooperative work. Psychology Press, 2014. 131-157.
The Future of Human Enhancement: Policy and Law for Cybernetic Avatars
Organizers:
Kyoto University Moonshot Research and Development Project
Masahiro SOGABE, Professor,
Graduate School of Law, Kyoto University
sogabe@law.kyoto-u.ac.jp
Description:
This session addresses the challenges of implementing cybernetic avatars (CAs) in society from multiple perspectives. CAs refer to technologies that significantly enhance human physical, cognitive, and sensory capabilities. While robotics plays a crucial role, CAs encompass a broader range of technologies. In the “Society 5.0” era, where physical and virtual worlds converge, CAs go beyond virtual avatars to enable new possibilities for physical activities in the real world. For instance, individuals restricted by illness, disabilities, or other limitations can use CAs to expand opportunities for self-fulfillment by engaging in work or other social activities. Additionally, CAs can enable humans to acquire abilities beyond natural limits.
For CAs to achieve widespread acceptance, it is necessary to address not only technical issues but also a variety of legal and ethical challenges. However, discussions on these matters remain insufficient, necessitating urgent attention. This session will explore these issues from three key perspectives:
- Legal-Policy Measures for Social Acceptance
After reviewing the concept of CAs, this section examines the legal and policy measures required to ensure societal acceptance of CAs, particularly those using robotics. Cyborgs, individuals whose abilities are enhanced through robotics, are often prone to social rejection due to their differences. Promoting understanding and acceptance of cyborgs requires various government interventions. For example, public awareness campaigns, anti-discrimination policies, and regulations to facilitate integration into workplaces and public spaces are essential. This section proposes specific legal frameworks to support these initiatives.
- Human Rights in the Use of Enhancement Technologies
This section examines the human rights issues that may arise in the use of enhancement technologies. Recent developments in drugs, procedures, and devices provide new possibilities for leading a more satisfying life, whereas the use of these technologies may cause discrimination against enhanced individuals due to their extraordinary appearances, abilities or performances. Of great importance is the establishment of new methods that protect the equal rights of enhanced individuals in education, work, and social life while appropriately evaluating the activities and achievements of both enhanced and non-enhanced individuals. It is also essential to protect the right to make a personal choice about whether or not to pursue enhancement, particularly where enhancement is coerced.
- Metaverse-Based Trials
While real-world applications of CAs remain a future challenge, their use in the metaverse is already growing, especially in entertainment and industry. This section examines the potential and challenges of metaverse-based trials, which have been tested in countries such as Colombia and China. These trials demonstrate benefits, such as accessibility and transparency, but also raise issues, including reliance on technology providers for due process and challenges to traditional courtroom procedures. Metaverse trials can enhance transparency by storing data for retrospective analysis but simultaneously raise concerns about privacy and data security. This section analyzes these opportunities and challenges through a comparative legal perspective.
By addressing these three perspectives, this session aims to highlight pathways for responsibly integrating CAs into society while ensuring technological, legal, and ethical considerations are met.