If animals do in fact have rights, there must be some criterion for when an AI would have them too. We should not deny this possibility, for the same reason we should not deny the possibility of programming an AGI: the universality of computation.
All criteria so far posited for assigning animal rights either:
1. Fail to exclude simple robots and computers currently in existence.
2. Don’t explain the difference between animals and people.
3. Fail to include some conscious artificial intelligences or alien lifeforms.
4. Fail to include animals.
Therefore, and correspondingly, either:
A. Most robots and computers currently in existence have animal rights.
B. Animals are in fact people so human rights and animal rights are actually just the same thing.
C. Non-biological or extra-terrestrial persons do not have rights.
D. Animals don’t have rights.
I assert that A-C are unacceptable implications and so some new criterion is required to avoid accepting D.
Here are the typical arguments that have the consequences laid out above.
1. Criteria that attribute rights to any entity that responds by moving away or deploying a defensive mechanism include robots programmed to respond in exactly that way. Most animal rights activists would not advocate for the rights of Furbies.
Slightly stronger behaviorist claims, such as that anything that responds the way an animal or human in pain does has rights, also fail. These cannot discriminate between an animal and an animatronic robot that is programmed to respond to a certain stimulus in a way that perfectly simulates the response of an animal.
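To see how little is required to satisfy a behavioral criterion, consider a minimal sketch (the stimuli and responses here are invented for illustration). Any behavioral test this program passes cannot, by itself, distinguish it from an animal producing the same outputs.

```python
# A trivial stimulus-response table: no inner life required.
PAIN_RESPONSES = {
    "pinch": "recoil and vocalize",
    "heat": "withdraw limb",
    "upside_down": "cry out repeatedly",  # the Furby case
}

def react(stimulus: str) -> str:
    """Return a canned 'pain behavior' for a known stimulus."""
    return PAIN_RESPONSES.get(stimulus, "no response")

for s in ["pinch", "heat", "upside_down"]:
    print(f"{s} -> {react(s)}")
```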
2. One criterion often posited as being required for AI to achieve human-level rights is the ability to be creative. The animal rights activist might also accept this as the requirement for animal rights and claim that animals are creative as well. This cannot be dismissed outright, of course. But it is difficult to explain the behavior of animals if they have the same creative capacity as ourselves. Having less creative capacity would mean that the animal has less processing power, less memory, or both. But it would then have to be explained why that processing power and memory are enough to enable animals to exhibit the wide variety of behaviors they do, and yet yield a lower level of creativity than a toddler’s.
Another rough criterion sometimes used is that anything with a central nervous system should be afforded the same rights. This would not differentiate between people and any given type of animal. The problem with this criterion is that it does not explain what is important about having a central nervous system. Does a robot’s CPU count? Does a dead animal count? They would according to this criterion, so (unless one is willing to bite these bullets) it is insufficient. Perhaps it might be mended by requiring that the nervous system have neurons that are firing. But this would exclude conscious robots that do not have neurons (failure 3). If widened to firing neurons or active wiring, we fall into category 1 again, because this would include all computers.
3. Biological theories, for instance those that attribute rights based on the presence of hormones or neurons, fail to ascribe rights to persons in the form of conscious robots or aliens with different physiologies. Failing to attribute rights to people who are not made of the same stuff as us is simply bigotry.
4. Due to the reasoning in 2, creativity is an example of a criterion that does not include animals.
I do not imagine the above categories to be exhaustive. There is of course still room for some other explanation of which entities are afforded animal rights. As far as I am aware, however, none avoids the pitfalls above.
What about consciousness? Consciousness can often explain why it is wrong to do certain things to people. It is immoral for a surgeon to operate on a conscious patient when an anesthetic could be used, or for a coroner to conduct an autopsy on a living person.
One contingent problem with the criterion of consciousness is that we do not understand consciousness. It is not possible to apply a criterion when one does not know whether or not it has been met. However, let us assume that proponents of this theory are correct when they attribute consciousness to animals.
Consciousness is necessary for the attribution of rights but not sufficient. If a rock is conscious, as the panpsychists would have it, it would not appear to have any right not to be kicked or broken up into pieces. This is because it does not have wants, pain receptors, or any sense organs at all. Whatever consciousness it might have is extremely limited.
An animal is unlike the panpsychist’s rock in many respects. It has pain receptors and a central nervous system capable of processing inputs from receptors into reactions. We must accept that animals have a far more complex form of consciousness than rocks, if both are indeed conscious. It seems dubious that animals, if conscious, would not feel pain. If consciousness is to serve any purpose at all, it must at least synthesize information about harm and benefit to the organism. However, we must be careful in thinking about what this pain would be like in a non-human organism, just as we should be careful not to impute too much of our own experience onto a panpsychist’s rock. When a Furby is stuck upside-down and complains, we should not be tempted to imagine that it is suffering. This is because it does not have any interpretive capabilities. If animals also do not interpret their pain, but instead have an automatic response like the Furby’s, then they too are not suffering.
We shouldn’t be too quick to dismiss the idea that animals do not interpret their experiences. We do not have a good explanation for why they would differ from robots in this respect. In attempting to defend the idea that robots must be different from animals, people usually appeal to the fact that animals are biological or to the very complex reactions animals show to events around them. Neither succeeds as an explanation. Relying on the fact that animals are biological rejects the possibility that an animal could be programmed into a computer. Complexity of reaction is also contingent, because we are capable of programming robots to react to events in very complex ways, and we are getting ever better at it.
We have a criterion capable of separating all humans from animals and all current computers: the creativity of individuals (to be distinguished from the sort of creativity inherent in natural selection). No non-human animal, and no current computer, is capable of this. Seemingly novel results of computer programs are in fact just implications of the programmer’s inputs, and are therefore products of her creativity (see the sketch below). Animals are only capable of attaining knowledge through their genes or through mimicry. They cannot create new knowledge.
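To illustrate (a minimal sketch; the word lists and template are invented for the example), here is a program whose outputs may surprise even its author, yet every output is fully determined by the inputs she chose in advance:

```python
import random

SUBJECTS = ["the moon", "a river", "memory"]
VERBS = ["devours", "mirrors", "forgets"]
OBJECTS = ["the horizon", "its own shadow", "yesterday"]

def compose(seed: int) -> str:
    # "Creative-looking" output, fully determined by the
    # programmer's word lists, template, and seed.
    rng = random.Random(seed)
    return f"{rng.choice(SUBJECTS)} {rng.choice(VERBS)} {rng.choice(OBJECTS)}"

print(compose(42))
print(compose(42))  # same seed, same "poem": the novelty lives in the inputs
```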
There are plenty of criteria that could separate humans from other animals that would be arbitrary, such as that we are “featherless bipeds.” Being a featherless biped does not actually explain why we have moral value, and it seems clear that we should not confer moral value on Diogenes’ plucked chicken. Creativity succeeds in picking out morally valuable entities by explaining how moral value and suffering are created.
Contrary to popular belief, pain is not inherently bad. When one works out, she may feel pain or discomfort that, were she to feel it upon waking in the middle of the night, might prompt her to call an ambulance. Pain can therefore be interpreted as a positive. Pain also appears to be neutral in the moment of its occurrence, for instance in the instant after one’s finger has been pricked, before one starts thinking about what has happened. I suspect this is also part of what happens when a child falls and looks around at the faces of adults before deciding whether to cry. This is not to say that all suffering is created by our conscious minds. Much of the suffering we experience consciously is the result of unconscious activity percolating up into our awareness.
It is difficult in the realm of phenomenology to make conclusive arguments. If someone wishes to deny that pain can be a positive or neutral experience, there is not much I can do but point to experiences in which I have felt pain in those ways. In my experience, however, I have come across only one counterexample to this theory of pain: a person who claimed that although pain may be reinterpreted as a positive with our creative capacities, it is inherently negative. On this view, animals have no choice but to experience pain as a negative because they lack our ability to reinterpret it.
One advantage of my theory of pain is that it better explains when pain is bad, namely when it causes suffering. Suffering is a state of imagining a state of affairs different from the actual one and wishing things were that way. Being unable to move from one state of affairs to the other is experienced as bad almost by definition, since the experiencer imagines that what she wishes would come about would be better. It is difficult to see where the badness comes in on the opposing theory. The receptors and neurons that fire due to pain are not much different from those that fire due to pleasure, so a reductionist theory cannot account for the difference. I don’t think the statement “pain is bad” has a good explanatory theory behind it (and it also runs into counterexamples), while “suffering is bad” does.
Most people accept the moral difference between pain and suffering and therefore insist that animals do suffer. It is hard to see why this would be the case. What advantage would imagining an alternative state of affairs confer if one could not create new ways of reaching that state, or new explanations of why one was not already in it? There may be a preprogrammed set of alternative states of affairs, paired with preprogrammed ways of moving toward them from a recognized present situation (notice that this would again include many machine learning algorithms; see the sketch below). But without the capability of coming up with new solutions, this more complex programming would have no advantage over inborn solutions to specific problems that run automatically whenever those problems are encountered.
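As a rough illustration (a minimal sketch; the situations and plans are invented), such preprogrammed “imagining” reduces to a lookup table, with the intermediate desired state doing no extra work:

```python
# For each recognized situation: a fixed desired state and a fixed plan.
PLANS = {
    "hungry": ("fed", ["search for food", "eat"]),
    "cold": ("warm", ["move toward sun", "huddle"]),
}

def respond(situation: str) -> list[str]:
    """Look up the preprogrammed plan for a situation, if any."""
    if situation not in PLANS:
        return []  # no inborn solution, and no capacity to invent one
    _desired_state, actions = PLANS[situation]  # the "imagined" state is inert
    return actions

print(respond("hungry"))   # ['search for food', 'eat']
print(respond("trapped"))  # [] -- novel problems go unsolved
```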
We don’t fully understand suffering because we don’t fully understand consciousness. However, what we do understand about consciousness is not compatible with the theory that animals are conscious. One may decide to err on the side of caution in case what we think we know turns out to be wrong. I have simply aimed to show that there is reason to doubt that animals have inherent moral value, and that avoiding killing animals for meat for the sake of their suffering may be a mistake.