If robots are to be trusted, especially when interacting with humans, then they will need to be more than just safe. This paper explores the potential of robots capable of modelling, and therefore predicting, the consequences of both their own actions and the actions of other dynamic actors in their environment. We show that, with the addition of an 'ethical' action selection mechanism, a robot can sometimes choose actions that compromise its own safety in order to prevent a second robot from coming to harm. An implementation with e-puck mobile robots provides a proof of principle by showing that a simple robot can, in real time, model and act upon the consequences of both its own and another robot's actions. We argue that this work moves us towards robots that are ethical, as well as safe. © 2014 Springer International Publishing.
Winfield, A. F., Blum, C., & Liu, W. (2014). Towards an ethical robot: Internal models, consequences and ethical action selection. Lecture Notes in Artificial Intelligence, 8717 LNAI, 85-96. https://doi.org/10.1007/978-3-319-10401-0_8
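The selection mechanism the abstract describes can be sketched roughly as follows: the robot internally simulates each candidate action, predicts the outcome for itself and for the other robot, and an ethical layer prefers actions that keep the other robot safe, even at a cost to the robot's own safety. This is a minimal illustrative sketch under those assumptions; the names, `Outcome` structure, and scoring weights are hypothetical, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    # Predicted consequences from the robot's internal model (assumed form)
    self_unsafe: bool   # this robot is predicted to come to harm
    other_unsafe: bool  # the other robot is predicted to come to harm

def ethical_score(o: Outcome) -> int:
    # Lower is better; harm to the other robot is weighted to dominate
    # harm to self (illustrative weights, not from the paper)
    return 10 * o.other_unsafe + 1 * o.self_unsafe

def select_action(predictions: dict) -> str:
    # Pick the action whose predicted consequences score best ethically
    return min(predictions, key=lambda a: ethical_score(predictions[a]))

# Hypothetical scenario: standing still lets the other robot come to harm;
# intercepting protects it but risks the selecting robot's own safety.
predictions = {
    "stand_still": Outcome(self_unsafe=False, other_unsafe=True),
    "intercept":   Outcome(self_unsafe=True,  other_unsafe=False),
}
print(select_action(predictions))  # "intercept"
```

With these weights the robot chooses "intercept", compromising its own safety to prevent the other robot from coming to harm, mirroring the behaviour reported in the paper.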