SAN JOSE, Calif. — Today, the IEEE kicked off a broad initiative to make ethics part of the design process for systems using artificial intelligence. The effort, in the works for more than a year, aims to spark conversations that lead to consensus-driven actions and has already generated three standards efforts.
The organization published a 138-page report today that outlines a smorgasbord of issues at the intersection of AI technology and human values. They range from how to identify and handle privacy concerns around personal information to how to define and audit human responsibility for autonomous weapon systems.
The report raises a laundry list of provocative questions, such as whether mixed-reality systems could be used for mind control or therapy. It also provides some candidate recommendations and suggests a process for getting community feedback on them.
“The point is to empower people working on this technology to take ethics into account,” said Raja Chatila, chair of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems.
The effort seeks input not only from engineers but also from businesspeople, lawyers, economists, and philosophers. “The answers will come from society — not only the experts but the system users and other stakeholders,” said Chatila, who is director of research at the French National Center for Scientific Research and teaches AI and robotics at Pierre and Marie Curie University in Paris.
To read the rest of this article, visit EBN sister site EE Times.