Wednesday, December 6, 2017

Ethical Behaviourism in the Age of the Robot




[Thanks to the Singularity Bros podcast for inspiring me to write this post. It was a conversation I had with the hosts of this podcast that prompted me to further elaborate on the idea of ethical behaviourism.]

I’ve always been something of a behaviourist at heart. That’s not to say that I deny conscious experience, or that I think that external behavioural patterns are constitutive of mental states. On the contrary, I think that conscious experience is real and important, and that inner mental states have some ontological independence from external behavioural patterns. But I am a behaviourist when it comes to our ethical duties to others. I believe that when we formulate the principles that determine the appropriateness of our conduct toward other beings, we have to ground those principles in epistemically accessible behavioural states.

I think this is an intuitively sensible view, and I am always somewhat shocked to find that others disagree with it. But disagree they do, particularly when I apply this perspective to debates about the ethical and social status of robots. Since these others are, in most cases, rational and intelligent people — people for whom I have the utmost respect — I have to consider the possibility that my view on this is completely wrongheaded.

And so, as part of my general effort to educate myself in public, I thought I would use this blogpost to explain my stance and why I think it is sensible. I’m trying to work things out for myself in this post and I’d be happy to receive critical feedback. I’ll start by further clarifying the distinction between what I call ‘mental’ and ‘ethical’ behaviourism. I’ll then consider how ethical behaviourism applies to the emerging debate about the ethical and social consequences of robots. Then, finally, I’ll consider two major criticisms of ethical behaviourism that emerge from this debate.


1. Mental vs Ethical Behaviourism

Mental behaviourism was popular in psychology and philosophy in the early-to-mid twentieth century. Behaviourist psychologists like John Watson and BF Skinner revolutionised our understanding of human and animal behaviour, particularly through their experiments on learning and behavioural change. Their behaviourism was largely methodological in nature. They worried about the scientific propriety of psychologists postulating unobservable inner mental states to explain why humans act the way they do. They felt that psychologists should concern themselves strictly with measurable, observable behavioural patterns.

As a methodological stance, this had much to recommend it, particularly before the advent of modern cognitive neuroscience. And one could argue that even with the help of the investigative techniques of modern cognitive neuroscience, psychology is still essentially behaviouristic in its methods (insofar as it focuses on external, observable, measurable phenomena). Furthermore, methodological behaviourism is what underlies the classic Turing Test for machine intelligence. But behaviourism became more than a mere methodological posture in the hands of the philosophers. It became an entire theory of mind. Logical behaviourists, like Gilbert Ryle, claimed that descriptions of mental states were really just abbreviations for a set of behaviours. So a statement like ‘I believe X’ is just a shorthand way of saying ‘I will assert X in context Y’, ‘I will do action A in pursuit of X in context Z’ and so on. The mental could be reduced to the behavioural.

This is what I have in mind when I use the term ‘mental behaviourism’: the view that reduces the mental — the world of intentions, beliefs, desires, hopes, fears, pleasure, and pain — to the behavioural. As such, I think it is pretty implausible. It stretches common sense to believe that mental states are actually behavioural, and it is probably impossible to satisfactorily translate a description of a mental state into a set of behaviours.

Despite this, I think ethical behaviourism is pretty plausible and commonsensical. So what’s the difference? One difference is that I think of ethical behaviourism as essentially an application of methodological behaviourism to the ethical domain. To me, ethical behaviourism says that the epistemic ground or warrant for believing that we have certain duties and responsibilities toward other entities lies in their observable behavioural relations and reactions to us (and the world around them), not in their inner mental states or capacities.

It is important to note that this is an epistemic principle, not a metaphysical one. Adopting a stance of ethical behaviourism does not mean giving up the belief in the existence of inner mental states, nor the belief that those inner mental states provide the ultimate metaphysical warrant for our ethical principles. Take consciousness/sentience as an example. Many people believe that conscious awareness is the most important thing in the world. They think that the reason we should respect other humans and animals, and why we have certain ethical duties toward them, is because they are consciously aware. An ethical behaviourist can accept this position. They can agree that conscious awareness provides the ultimate metaphysical warrant for our duties to animals and humans. They simply modify this slightly by arguing that our epistemic warrant for believing in the existence of this metaphysical property derives from an entity’s observable behavioural patterns. After all, we can never directly gain epistemic access to their inner mental states; we can only infer them from what they do. It is the practical unavoidability of this inference that motivates ethical behaviourism.

It is also important to note that ‘behaviour’ needs to be interpreted broadly here. It is not limited to external physical behaviours (e.g. the movement of limbs and lips); it includes all directly observable patterns and functions, such as the operation of the brain. This might seem contradictory, but it’s not. Brain states are directly observable and recordable; mental states are not. Even in cognitive neuroscience, no one thinks that observations of the brain are directly equivalent to observations of mental states like beliefs and desires. Rather, they infer correlations between those brain patterns and postulated mental states. What’s more, they ultimately verify those correlations through other behavioural measures. So when a neuroscientist tells us that a particular pattern of brain activity correlates with the mental state of pleasure, they usually work this out by asking someone in a brain scanner what they are feeling when this pattern of activity is observed.


2. Ethical Behaviourism and Robots
Ethical behaviourism has consequences. One of the most important concerns comparative claims to moral status. If you are an ethical behaviourist and you’re asked whether an entity (X) has certain rights and duties, you will determine this by comparing its behavioural patterns to those of another entity (Y) that you think already possesses those rights and duties. If the two are behaviourally indistinguishable, you’ll tend to think that X has those rights and duties too. The only thing that might upset this conclusion is if you are not particularly confident in the belief that those behavioural patterns justify the ascription of rights to Y. In that case, you might use the behavioural equivalence between X and Y to reevaluate the epistemic grounding for your ethical principles. Put more formally:

The Comparative Principle of EB: If an entity X displays or exhibits all the behavioural patterns (P1…Pn) that we believe ground or justify our ascription of rights and duties to entity Y, then we must either (a) ascribe the same rights and duties to X or (b) reevaluate our use of P1…Pn to ground our ethical duties to Y.
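For anyone who prefers a schematic rendering, the principle can be glossed in quasi-formal notation roughly as follows (this is just my own gloss: ‘Grounds’, ‘Exhibits’, ‘Ascribe’ and ‘Reevaluate’ are placeholder predicates, and R stands for the relevant bundle of rights and duties):

\forall X, Y : \big[\mathrm{Grounds}(P_1 \ldots P_n, \mathrm{Ascribe}(R, Y)) \wedge \mathrm{Exhibits}(X, P_1 \ldots P_n)\big] \rightarrow \big[\mathrm{Ascribe}(R, X) \vee \mathrm{Reevaluate}(P_1 \ldots P_n)\big]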

Again, I think this is a sensible principle, but it has significant implications, particularly when it comes to debates about the ethical status and significance of robots. To put it bluntly, it maintains that if there is behavioural equivalence between a robot and some other entity to whom we already owe ethical duties (where the equivalence relates specifically to the patterns that epistemically ground our duties to that other entity), then we probably owe the same duties to the robot.

To make this more concrete, suppose we all agree that we owe ethical duties to certain animals due to their capacity to feel pain. The ethical behaviourist will argue that the epistemic ground for this belief lies not in the unobservable mental state of pain, but rather in the observable behavioural repertoire of the animal, i.e. because it yelps or cries out when it is hurt, and because it recoils from certain pain-inducing objects in the world. Then, applying the comparative principle, it would follow that if a robot exhibits the same behavioural patterns, we owe it a similar set of duties. Of course, we could reject this if we decide to reevaluate the epistemic grounding for our belief that we owe animals certain duties, but this reevaluation will, if we follow ethical behaviourism, simply result in our identifying another set of behavioural patterns that a robot might also be able to emulate.

This has some important repercussions. It means that we ought to take our ethical duties towards robots much more seriously: we may easily neglect or overlook ways in which we violate those duties. Indeed, I think it may mean that we have to approach the creation of robots in the same way that we approach the creation of other entities of moral concern. It also means that robots could be a greater source of value in our lives than we currently realise. If our interactions with robots are behaviourally indistinguishable from our interactions with humans, and if we think those interactions with humans provide value in our lives, then it is also possible for robots to provide similar value. I’ve defended this idea elsewhere, arguing that robotic ‘offspring’ could provide the same sort of value as human offspring, and that it is possible to have valuable friendships with robots.

But isn’t this completely absurd? Doesn’t it shake the foundations of common sense?


3. Objections to Ethical Behaviourism
Let me say a few things that might make it seem less absurd. First, I’m not the only one who argues for something along these lines. David Gunkel and Mark Coeckelbergh have both argued for a ‘relational turn’ in our approach to both animal and machine ethics. This approach advocates that we move away from thinking about the ontological properties of animals/machines and focus more on how they relate to us and how we relate to them. That said, there are probably some important differences between my position and theirs. They tend to avoid making strong normative arguments about the moral standing of animals/machines, and they would probably see my view as being much closer to the traditional approach that they criticise. After all, my view still focuses on ontological properties, but simply argues that we cannot gain direct epistemic access to them.

Second, note that the behavioural equivalence between robots and other entities to whom we owe moral duties really matters on this view. They must be equivalent with respect to all the behavioural patterns that are relevant to the epistemic grounding of our moral duties. And, remember, this could include internal functional patterns as well as external ones. This means that the threshold for the application of the comparative principle could be quite high (though, for reasons I am exploring in a draft paper, I think it may not be that high). Furthermore, as robots become more behaviourally equivalent to animals and humans, we could continue to reevaluate which behavioural patterns really count (think about the shifting behavioural boundaries for establishing machine ‘intelligence’ over the years).

This may blunt some of the seeming absurdity, but it doesn’t engage with the more obvious criticisms of the idea. The most obvious is that ethical behaviourism is just wrong. We don’t actually derive the epistemic warrant for our ethical beliefs from the behavioural patterns of the entities with whom we interact. There are other epistemic sources for these beliefs.

For example, someone might argue that we derive the epistemic warrant for our belief in the rights and duties of other humans and animals from the fact that we are made from the same ‘stuff’ (i.e. biological, organic material). This ‘material equivalence’ gives us reason for thinking that they will share similar mental states like pleasure and pain, and hence reason for thinking that they have sufficient moral status. Since robots will not be made from the same kind of stuff, we will not have the same confidence in accepting their moral status.

It’s possible to be unkind about this argument and accuse its proponents of believing that there is some moral magic to being made out of flesh and bone. But we shouldn’t be too unkind. Why matter gives rise to consciousness and mentality is still essentially mysterious, and it’s possible that there is something about our biological constitution that makes this possible in a way that an artificial constitution would not. I personally don’t buy this. I believe in mind-body functionalism. According to this view, the physical substrate does not matter when it comes to instantiating a conscious mind. This would mean that ‘material equivalence’ should not be the epistemic grounding for our ethical beliefs. But it actually doesn’t matter whether you accept functionalism or not. I think the mere fact that there is uncertainty and plausible disagreement about the relevance of biological material to moral status is enough to undercut this as a potential epistemic source for our moral beliefs.

Another argument along these lines might focus on shared origins: that one reason for thinking that we owe animals and other humans moral duties is because they came into being through a similar causal process to us, i.e. by evolution and biological development. Robots would come into being in a very different way, i.e. through computer programming and engineering. This might be a relevant difference and give us less epistemic warrant for thinking that robots would have similar rights and duties.

There are, however, several problems with this. First, with advances in gene-editing technology, it’s already the case that animals are brought into being through something akin to programming and engineering, and it’s quite possible that in the near future humans will be too. Will this cause them to lose moral status? Second, it’s not clear that the differences are all that pronounced anyway. Many biologists conceive of evolution and biological development as a type of informational programming and engineering. The only difference is that there is no conscious human designer. Finally, it’s not obvious why origins should be ethically relevant. We usually try to avoid passing moral judgment on someone because of where they came from, focusing instead on how they behave and act toward us. Why should it be any different with machines?

This brings me to what I think might be the most serious objection to ethical behaviourism. One critical difference between humans/animals and robots has to do with how robots are owned and controlled, and this gives rise to two related objections: (i) the deception objection and (ii) the ulterior motive objection.

The deception objection argues that because robots will be owned and controlled by corporations, with commercial objectives, those corporations will have every reason to program the robot to behave in a way that deceives you into thinking that you have some morally significant relationship with them. The ‘hired actor’ analogy is often used to flesh this out. Imagine if your life were actually a variant on the Truman Show: everyone else in it was just an actor hired to play the part of your family and friends. If you found this out, it would significantly undercut the epistemic foundations for your relationships with them. But, so the argument goes, this is exactly what will happen in the case of robots. They will be akin to hired actors: artificial constructs designed to play the part of our friends and companions (and so on).

I’m not sure what to make of this objection. It’s true that if I found out that all my friends were actors, it would require a significant reevaluation of my relationship to them. But it wouldn’t change the fact that they have a basic moral status and that I owe them some ethical duties. There are different gradations or levels of seriousness to our moral relationships with other beings. Removing someone from one level does not mean removing them from all. So I might stop being friends with these actors, but that’s a separate issue from their basic moral status. That could be true for robots too. Furthermore, I have to find out about the deception in order for it to have any effect. As long as everyone consistently and repeatedly behaves towards me in a particular way, I have no reason to doubt their sincerity. If robots consistently and repeatedly behave toward us in a way that makes them indistinguishable from other objects of moral concern, then I think we will have no reason to believe that they are being deceptive.

Of course, it’s hard to make sense of the deception objection in the abstract because usually people are deceptive for a particular reason. This is where the ulterior motive objection comes into play. Sometimes people have ulterior motives for relating to us in a particular way, and when we find out about them it disturbs the epistemic foundations of our relationships with them. Think about the ingratiating con artist and how finding out about their fraud can quickly change a relationship from love to hate. One claim that is made about robots is that they will always have an ulterior motive underlying their relationships to us. They will be owned and controlled by corporations and will ultimately serve the profit motives of those corporations. Thus, there will always be some divided loyalty and potential for betrayal. We will always have some reason to be suspicious about them and to worry that they are not acting in our interests. (Something along these lines seems to motivate some of Joanna Bryson’s opposition to the creation of person-like robots).

I think this is a serious concern and a reason to be very wary about entering into relationships with robots. But let me say a few things in response. First, I don’t think this objection upsets the main commitments of ethical behaviourism. Divided loyalties and the possibility of betrayal are already a constant feature of our relationships with humans (and animals), but this doesn’t negate the fact that they have some moral status. Second, ulterior motives do not always have to undermine an ethically valuable relationship. We can live with complex motivations. People enter into intimate relationships for a multiplicity of reasons, not all of them shared explicitly with their partners. This doesn’t have to undermine the relationship. And third, the ownership and control of robots (and, more importantly, the fact that they will be designed to serve corporate commercial interests) is not some fixed, Platonic truth about them. Property rights are social and legal constructs, and we could decide to negate them in the case of robots (as we have done in the case of humans in the past). Indeed, the very fact that robots could have significant ethical status in our lives might give us reason to do that.

All that said, the very fact that companies might use ethical behaviourism to their advantage when creating robots suggests that people who defend it (like me, in this post) have a responsibility to be aware of and mitigate the risks of misuse.


4. Conclusion
That’s all I’m going to say for now. As I mentioned above, ethical behaviourism is something that I intuit to be correct, but which most people I encounter disagree with. This post was a first attempt to reason through my intuitions. It could be that I am completely wrong-headed on this and that there are devastating objections to my position that I have not thought through. I’d be happy to hear about them in the comments (or via email).




