The field of robotics has been advancing rapidly in the past few years, with prominent scientists saying that artificial intelligence is on the horizon. But, asks Paul Gorby, how important is it that robots feel human emotions?
It’s true that ethical issues continue to surround the creation of robots. Recent engineering advances mean that we will soon be able to program robots to feel something similar to what we call pain, which raises the question of whether it is wrong to build them in such a way.
 
A gut response to the question of whether we should build robots that can feel pain is one of rejection. After all, why would we design them in such a way that they would suffer? Isn’t that the very definition of cruelty? 
 
If reducing suffering is an ethical good, then surely, given the choice between building a robot that can feel pain and one that cannot, choosing the former would be sadistic. 
 
This intuitive response is one that I share, but it is also one that I want to argue against. The right thing isn’t always immediately obvious. 
 
The biggest problem with building robots that are incapable of experiencing pain is that, by definition, they would be incapable of experiencing either sympathy or empathy for human beings.
 
If we don’t know what it is like to suffer in any way, we cannot care for those who do, simply because we have no frame of reference for what they are feeling. 
 
This is not just a problem for robots but for human beings as well: if I have never been the victim of some form of harassment, it is harder for me to know just how awful it is to be subjected to it. 
 
However, since pain is something that we all experience at some point, I know what it means to suffer in general terms. 
 
Because of this, when I learn about someone suffering due to harassment, I can, however imperfectly, sympathise with them and recognise their experience as a bad thing. This is not always the case even for humans, but it is at least a possibility.
 
The problem with building robots that cannot feel pain is that they will have no way to understand that suffering is a bad thing: something it is good to help others avoid, and to avoid inflicting on others. 
 
There seems to be no way around this problem. If we tried to program robots to recognise specific situations as bad, we would have to program them for every single conceivable way in which human beings are known to suffer. 
 
Setting aside the issue of whether such a thing is even possible, it would mean that a select group of people would have to decide what constitutes suffering and how severe one type of suffering is relative to another. 
 
This would be problematic since the engineers designing robots – who are predominantly relatively wealthy white men – would inevitably be biased by their own life experiences. 
 
Nor is all of this a pointless thought experiment; it will have very real implications in the near future. 
 
There are already robots being designed to assist, and in some cases even entirely replace, doctors and lawyers. 
 
Most people, when given the choice, would prefer for their doctor or lawyer to be able to offer sympathy, to understand their situation and comfort them while providing whatever medical or legal assistance they came for. 
 
Studies have shown that when doctors take the time to comfort their patients and tend to them on an emotional level, the patients recover more quickly. 
 
A medical robot may be perfectly efficient at treating whatever physical ailment the patient is suffering from, but it cannot help them deal with their fear of an upcoming surgery. It cannot gently break the news to a waiting family that a loved one has died. 
 
These are not optional extras, but fundamental aspects of medical care that cannot be neglected. 
 
Perhaps it would be cruel to force robots to feel pain and suffering, but that does not change the fact that, at least in some cases, those robots will need to feel something.