Prof. Raja Chatila
Sorbonne University, France
Webpage
Title of presentation: Robust, Safe and Explainable Intelligent and Autonomous Systems. A Red Herring or the Path to Trustworthiness?
Abstract
As Machine Learning-based AI (MLAI) systems, which statistically process data to make decisions and predict outcomes, have come into widespread use in almost all sectors, from healthcare to warfare, the need to ensure they "do the right thing" and provide reliable results has become of primary importance. Indeed, deploying unproven systems in critical applications (and even in non-critical ones) is irresponsible, and therefore unethical, and should not be acceptable. Hence an entire research stream has emerged to address the black-box limitations that characterize such systems. With millions of parameters computed from data by optimization processes, with new systems built from various off-the-shelf components without solid verification and validation, and with no causal links between inputs and outputs, what does it mean, concretely, to make MLAI systems robust, safe and explainable? Is this a reachable objective at all? And will it lead to trustworthy Intelligent and Autonomous Systems?
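As a purely illustrative aside (not part of the abstract), the sketch below gives one concrete face of the black-box problem the speaker raises: a model whose internals are nothing but weight matrices can, at best, be probed from the outside, for example with a local perturbation analysis. The stand-in model, the ten input features, and the probe itself are assumptions chosen only for illustration.

```python
# Purely illustrative sketch (not from the talk): a stand-in for a trained MLAI
# system whose internals are nothing but weight matrices, probed post hoc by
# perturbing each input feature and observing how the output moves.
# The model, the 10 features, and the probe are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Opaque "model": no explicit rules, no causal model, just numbers.
W1 = rng.normal(size=(10, 64))
W2 = rng.normal(size=(64, 1))

def black_box(x: np.ndarray) -> np.ndarray:
    return np.tanh(x @ W1) @ W2

x0 = rng.normal(size=(1, 10))          # one input whose output we want "explained"
baseline = black_box(x0)

# Local perturbation probe: sensitivity of the output to each feature around x0.
eps = 1e-3
sensitivity = np.zeros(10)
for i in range(10):
    x_pert = x0.copy()
    x_pert[0, i] += eps
    sensitivity[i] = ((black_box(x_pert) - baseline) / eps).item()

# Ranked list of locally influential features: a numerical summary of behaviour
# near x0, not a causal account of what the system does in general.
for i in np.argsort(-np.abs(sensitivity)):
    print(f"feature {i}: local sensitivity {sensitivity[i]:+.4f}")
```

Such a probe yields a ranked list of locally influential features, which is one common reading of "explanation"; it says nothing about robustness, safety, or causal structure, which is precisely the gap the abstract questions.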
Short Bio
Raja Chatila, IEEE Fellow, is Professor of Artificial Intelligence, Robotics and Ethics at Sorbonne University in Paris, France. He is director of the SMART Laboratory of Excellence on Human-Machine Interactions and former director of the Institute of Intelligent Systems and Robotics. Over his career he has contributed to several areas of Artificial Intelligence and of autonomous and interactive Robotics, and has published about 160 papers. His research interests currently focus on human-robot interaction, machine learning and ethics.
He was President of the IEEE Robotics and Automation Society in 2014-2015. He is chair of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, member of the High-Level Expert Group on AI with the European Commission, and member of the Commission on the Ethics of Research on Digital Science and Technology (CERNA) in France.
Selmer Bringsjord
Rensselaer AI & Reasoning (RAIR) Lab
Department of Cognitive Science
Department of Computer Science
Lally School of Management
RPI
Troy NY 12180 USA
Webpage
Alexander Bringsjord
PwC
Stamford CT 06901 USA
Title of presentation: Paternalistic Taxation to Address the Massive Mess Machine Learning is Fast Making
We (I) describe a serious problem; (II) report the (laudable) desire on the part of many to solve it; and (III) defend the position that the only available way to solve the problem is via paternalistic international taxation at the corporate level.
I: The technologized world is fast sliding into a royal mess made by statistical, data-driven machine learning, which we refer to simply as ‘ML.’ The basic idea behind ML is dirt simple: Start with a machine M (e.g., an artificial neural network) bereft of any such thing as the sort of propositional/declarative knowledge that distinguishes human intelligence and reasoning (e.g., you know that p, for many p, where p is a declarative proposition; M doesn’t know that p, for any declarative proposition p*), since M is solely numerical in nature. Feed M large amounts of data, in such a way that it can, in the future, approximately compute some function F. Next, in that future, when M receives an element x of the domain of F, it will hopefully be able to spit out the corresponding value F(x). Facial recognition, perception in self-driving vehicles, shallow machine translation of natural language, shallow text-generation systems, recommender systems, online advertising systems in browsers, primitive NLU/NLG systems such as Alexa, health-care data analysis, and so on — the vast majority of the “AI” in these domains is just this simple ML. Unfortunately, there are a few rather disturbing problems; here are three from among a myriad (human disemployment being one we leave aside**): (P1) M must receive, and increasingly in our world does in fact consume, massive amounts of private data about human beings. (P2) Because the internals of M are entirely numerical, whereas human science, mathematics, law, legislation, ethics, and standards are propositional/declarative in nature, humans in these spheres have no real idea what M did, is doing, and will do. (P3) M can be used to create, and for nefarious purposes use, and increasingly can itself be, deceptive simulacra: fake personas, fake tweets, fake emails, etc. Courtesy of the trio (P1)–(P3), ML is fast making a mess of our world. Note that when we say ‘fast,’ there is no hyperbole. Keep in mind, for instance, that China has declared openly that it intends to reign supreme among nations in AI by 2030 — and since avowedly its chief strategic advantage is that its vast population is essentially a proprietary data-generating engine, the PRC, clearly, is simply seeking to pursue AI as ML, and hence the cancer that is the trio (P1)–(P3) promises to spread like wildfire across its land.
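To make the function-approximation picture above concrete, here is a minimal sketch, with an arbitrary target F and arbitrary sizes chosen only for illustration (none of it taken from the authors): a handful of numeric parameters is tuned by gradient descent on (x, F(x)) pairs so that the resulting machine approximately computes F on fresh inputs, with nothing propositional anywhere inside it.

```python
# Minimal, purely illustrative sketch of the ML recipe described above: a small,
# purely numerical machine M is fed (x, F(x)) pairs and tuned by optimization so
# that it approximately computes F on new inputs. The target F (here sin) and all
# sizes are arbitrary choices for illustration, not anything from the authors.
import numpy as np

rng = np.random.default_rng(0)
F = np.sin                                   # the function M should approximate

# Training data: samples of F's domain and the corresponding values F(x).
x = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = F(x)

# M's entire "knowledge": arrays of numbers (weights and biases), nothing more.
W1, b1 = rng.normal(0.0, 1.0, (1, 32)), np.zeros(32)
W2, b2 = rng.normal(0.0, 1.0, (32, 1)), np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass: a purely numerical computation, no propositional content.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                           # gradient seed of the mean-squared error
    # Backward pass: gradients of the loss with respect to every parameter.
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    # Optimization step: nudge the numbers to shrink the error.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# In the future, given a new x, M spits out an approximation of F(x).
x_new = np.array([[0.5]])
approx = np.tanh(x_new @ W1 + b1) @ W2 + b2
print(approx.item(), F(x_new).item())        # approximately, not exactly, equal
```

Deployed systems differ mainly in scale (millions or billions of such parameters and far more data), not in kind; the internals remain purely numerical, which is the crux of (P2).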
II: Some of those who have enough sense to see what is happening wish to control the situation, with regulation, ethics, standards, and AI itself. For example, the new president of the European Commission, Ursula von der Leyen, has recently declared that in her first 100 days in office, she will produce legislation “for a coordinated European approach on the human and ethical implications of AI.” Tim Cook, Apple CEO, has long called for such legislation in the U.S., which currently has none. In Brussels recently (Jan 20 2020), the new CEO of Alphabet (Google’s parent) admitted “There is no question in my mind that AI needs to be regulated.” When the CEO of ML-based Facebook, Mark Zuckerberg, testified before the U.S. Congress, he repeatedly said that the answer to rooting out fake profiles and hidden trolls and hate speech on his platform is to develop AI up to this task — but such AI, by definition, would be well beyond the ML variety.
III: The only way to solve the problem is via paternalistic taxation, to simultaneously disincentivize the use of ML (in favor of logic-based AI*) and obtain the massive amount of money it will take to develop the regulations, ethics, standards, and technology needed to clean up the current mess and forestall a bigger one. While the use of paternalistic taxation at the individual level (so-called “sin” taxes) is of questionable value (for reasons we canvass); while high taxes at the individual level for such purposes as engineering so-called “egalitarian” societies run afoul of a class of theorems whose first members were introduced by Mirrlees; and while from a pure productivity/innovation point of view there perhaps shouldn’t be any corporate taxation at all, nonetheless, alas, rational paternalism dictates that severe and serious international taxation on the corporate purveyors of ML is the only option.
References
*Bringsjord, S., Govindarajulu, N.S., Banerjee, S., & Hummel, J. (2018) “Do Machine-Learning Machines Learn?” in Müller, V., ed., Philosophy and Theory of Artificial Intelligence 2017 (Berlin, Germany: Springer SAPERE), pp. 136–157, Vol. 44 in the book series. http://kryten.mm.rpi.edu/SB_NSG_SB_JH_DoMachine-LearningMachinesLearn_preprint.pdf
**Bringsjord, S. & Bringsjord, A. (2017) “The Singularity Business: Toward a Realistic, Fine-grained Economics for an AI-Infused World” in Powers, T., ed., Philosophy and Computing: Essays in Epistemology, Philosophy of Mind, Logic, and Ethics (Cham, Switzerland: Springer), pp. 99–119. The edited book in which this paper appears is Volume 128 in Springer's Philosophical Studies Series, Editors-in-Chief: L. Floridi and M. Taddeo. http://kryten.mm.rpi.edu/The_Singularity_Business.pdf
Extended Biography available on his personal website: link
Yeh-Liang Hsu, PhD
- Y. Z. Hsu Yuan Ze Chair Professor, Mechanical Engineering Department
- Director, Gerontechnology Research Center, Yuan Ze University, Taiwan
- Editor-in-Chief, Gerontechnology
- webpage
Title of presentation: Robots that look after grandma? A gerontechnology point of view
Abstract
Facing widespread population aging, people naturally turn to technology for positive solutions that maximize the efficiency and effectiveness of the workforce and resources available for the care of older adults. “Gerontechnology” is about designing for people. As defined by the International Society for Gerontechnology (ISG),
“Gerontechnology: designing technology and environment for independent living and social participation of older persons in good health, comfort, and safety.”
Gerontechnology research is only valuable if it can be turned into real products for daily use. After decades of development, many research projects and technological products aim to help older adults and their caregivers. However, few of these products, including robots, have been widely adopted for the care of older adults.
Throughout history, in novels, stage drama, and movies, humans have entertained the fantasy of building “artificial people” – robots – to work as servants for human beings. Various robots have indeed been developed to care for older adults. However, how close are they to being used in everyday life at home? What are the critical factors for acceptance by older adults and caregivers? What is the proper form for a robot to be used at home? Will there ever be “robots that look after grandma?” This speech will attempt to address the issue of “robot@home” from the gerontechnology point of view.
Short Bio
Professor Yeh-Liang Hsu received his bachelor’s degree in mechanical engineering from National Taiwan University and his PhD from Stanford University in 1992. He then became a professor at Yuan Ze University, Taiwan, where he has held many important roles, including Secretary General and Dean of Academic Affairs.
Professor Hsu turned his research interest in design toward gerontechnology and established the Gerontechnology Research Center in 2003, the pioneering research institute in this field in Taiwan. He has published many papers, books, and patents in gerontechnology. Professor Hsu has been actively involved in the International Society for Gerontechnology (ISG). He chaired the 9th World Conference of Gerontechnology and currently serves as Editor-in-Chief of “Gerontechnology” and IT Director of the ISG.
In 2016, Professor Hsu founded Seda GTech Co. Ltd. Working with eight young co-founders who were his students, he has been turning gerontechnology research into real products for daily use by older adults and caregivers.
Prof. YAMADA, Yoji
- Professor, Department of Mechanical Science and Engineering, Graduate School of Engineering, Nagoya University
- webpage
Title of presentation: Social Implementation of Service Robots Based upon Safety Guidelines
Abstract
After a brief introduction of his achievements, the presenter will start his talk by stressing the significance of designing a social system that helps manufacturers develop safe service robots, which perform useful tasks for humans or for equipment in frequent contact with them. He proposed incorporating such a system into a national project to develop service robots with manufacturers. He will show how a social system comprising regulation, standardization, and certification was constructed in Japan, including the establishment of a certification scheme. The people involved in this social system were interconnected by the international safety standard ISO 13482.
Then, the presenter will show a guideline that he played a key role in compiling. The objective of "A Guideline for Ensuring The Safety of Personal Care Robots and Robot System" is to ensure the safety of such robots and robot systems by specifying the conditions to be complied with in the design, field-testing, sales, and operation phases for some types of personal care robots. The guideline was complied with in a subsequent national project on the social implementation of service robots.
Short Bio
Yoji Yamada received his doctoral degree from the Tokyo Institute of Technology.
He was with the Toyota Technological Institute, Nagoya, Japan, from 1983, and became an associate professor in the Institute's Graduate School in 1993. In the meantime, he joined the Center for Design Research at Stanford University in 1992.
In 2004, he joined the National Institute of Advanced Industrial Science and Technology (AIST) as leader of the Safety Intelligence Research Group in the Intelligent Systems Research Institute of AIST. In 2009, he moved to the Department of Mechanical Science and Engineering, Graduate School of Engineering, Nagoya University, as a professor.
His current research interests include safety and intelligence technology in human-machine systems, and assistive robotics. He is the convener of ISO/TC 199/WG 12, "Human machine interaction". He was awarded a METI Award in 2013 and a Prime Minister's Award in 2015.