Ethical Black Box
Alan Winfield and Marina Jirotka first discussed the possibility of an ethical black box for robots in a 2017 paper. It is designed as a mechanism to address the potential future ubiquity of social robots and concerns over the damage they might cause when they malfunction.
What is the Ethical Black Box for social robots?
New heights of machine autonomy create fear as well as optimism. The Ethical Black Box (EBB) is an innovative idea that can enhance safety in automated systems whilst also advancing public trust.
- the EBB is analogous to the flight data recorders used in aviation;
- it continuously records sensor and relevant internal status data and can be extended in scope to also capture the AI decision-making process and environmental factors occurring before an adverse incident (a minimal sketch of such a rolling record follows this list);
- just as black boxes in aviation can be drawn on to provide crucial evidence following an accident, the EBB can also be used in incidents involving robots;
- information provided by the EBB can help us to understand why a robot behaved in the way it did and then make recommendations for changes to prevent similar incidents or limit the potential damage caused;
- the black box therefore promises to significantly advance the safety of robots, and can also foster the societal acceptability of these innovations. The presence of the black box, and of associated professional groups making use of it in the course of an investigation, would demonstrate that the robots and their developers are responsible and accountable for their behaviours;
- this process also provides transparency so that members of the public can see these processes of responsibility and accountability in action.
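
To make the idea of continuous recording more concrete, the sketch below shows one way a rolling, flight-recorder-style log might be structured. It is a minimal illustration in Python; the class name EthicalBlackBox, the record fields (sensor_data, internal_status, decision, decision_inputs) and the buffer capacity are assumptions for illustration only, not the specification proposed by Winfield and Jirotka.

```python
# A minimal, illustrative sketch of an EBB as a rolling data recorder.
# Field names, capacity and methods are assumptions, not the EBB specification.
import time
from collections import deque
from dataclasses import dataclass, asdict
from typing import Any, Dict, List


@dataclass
class EBBRecord:
    """One timestamped snapshot of what the robot sensed, decided and did."""
    timestamp: float
    sensor_data: Dict[str, Any]      # e.g. {"proximity_cm": 42.0, "battery_pct": 87}
    internal_status: Dict[str, Any]  # e.g. {"mode": "navigate", "error_flags": []}
    decision: str                    # label for the action the controller chose
    decision_inputs: Dict[str, Any]  # the factors the decision was based on


class EthicalBlackBox:
    """Continuously records recent snapshots, like a flight data recorder."""

    def __init__(self, capacity: int = 10_000):
        # A bounded buffer: the oldest records are overwritten once capacity is reached.
        self._buffer = deque(maxlen=capacity)

    def record(self, sensor_data, internal_status, decision, decision_inputs):
        """Append one snapshot; intended to be called on every control cycle."""
        self._buffer.append(EBBRecord(
            timestamp=time.time(),
            sensor_data=sensor_data,
            internal_status=internal_status,
            decision=decision,
            decision_inputs=decision_inputs,
        ))

    def dump_window(self, incident_time: float, seconds_before: float = 60.0) -> List[dict]:
        """Return the records leading up to an incident, for investigators to examine."""
        return [asdict(r) for r in self._buffer
                if incident_time - seconds_before <= r.timestamp <= incident_time]
```

In practice the record schema, storage medium and tamper resistance would be defined by the EBB specification and the robot in question; the point here is simply the rolling window of sensor, status and decision data that an investigator could later extract.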
RoboTIPS and the Ethical Black Box
The RoboTIPS project team will develop and test the Ethical Black Box in a range of social robots.
- the scope of the sampling an EBB might do, for example the AI decision-making process and environmental factors occurring in situ, will form part of the research;
- when an incident occurs, we will explore what sorts of (human) sense-making of what the robot tells us are possible and what levels of reconstruction can be supported; how this is done as a practical accomplishment; and how it works as an instantiation of responsibility in practice, including the ways it may be embedded into everyday institutional processes;
- the inclusion of a natural language explainer system will enable the robot to explain its own behaviour; eventually we want the robot’s users to be able to ask the ethical robot ‘why did you just do that?’ or even ‘what would you do if…?’ and the robot to give a simple, intelligible explanation (a hedged sketch of such an explainer follows this list);
- the research will not necessarily take the robot's account of what it did at face value. In aviation it is the investigation, not the black box data per se, which concludes why an air incident occurred. It is anticipated this will also be true for incidents involving robots, where an investigation will draw upon black-box data amongst other sources of evidence to determine the reason for an incident;
- hence, alongside the technical parameters of what to record within a robot black box, the research will also consider how the interpretation of those recordings fits into the conduct of an investigation.
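
As one hedged illustration of the explainer idea raised above, the sketch below builds on the hypothetical EthicalBlackBox class shown earlier and renders the most recently logged decision into a plain-language answer to ‘why did you just do that?’. The template and field names are assumptions for illustration, not the project's design, and, as noted above, such an explanation would supplement rather than replace an investigation.

```python
import time

def explain_last_decision(ebb: "EthicalBlackBox") -> str:
    """Answer 'why did you just do that?' from the most recent EBB record."""
    recent = ebb.dump_window(incident_time=time.time(), seconds_before=5.0)
    if not recent:
        return "I have no recent record of my behaviour."
    last = recent[-1]
    # Render the logged decision and the factors behind it into plain language.
    reasons = ", ".join(f"{key} was {value}" for key, value in last["decision_inputs"].items())
    return f"I chose to {last['decision']} because {reasons}."
```

For example, given a logged record with decision "stop" and decision_inputs {"proximity_cm": 12.0}, this sketch would return "I chose to stop because proximity_cm was 12.0." An investigator would treat such an explanation as one source of evidence alongside the raw recordings, not as the final word on why the incident occurred.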