I am continuing to look at Sean Carroll's criticisms of Sam Harris' view on the scientific possibility of moral truth.
(See Sean Carroll: You Can't Derive 'Ought' from 'Is')
The next argument that Carroll uses happens to be the same argument that I use when defending desirism against brain-state theories of value.
Brain-state theories of value hold that value consists in putting the brain into a particular state. Different theories identify different states as the ultimate bearer of this value. Jeremy Bentham argued that it was pleasure and the absence of pain. John Stuart Mill suggested it was happiness. Harris has asserted that the "well-being of conscious creatures" consists in being in a certain state of consciousness (or one of several states, each of which may serve as its own isolated 'peak' of value).
Against this, Carroll wrote:
Imagine that you are able to quantify precisely some particular mental state that corresponds to a high level of well-being; the exact configuration of neuronal activity in which someone is healthy, in love, and enjoying a hot-fudge sundae. . . . Now imagine that we achieve it by drugging a person so that they are unconscious, and then manipulating their central nervous system at a neuron-by-neuron level until they share exactly the mental state of the conscious person in those conditions. Is that an equal moral good...?
Well, one could argue that an unconscious person cannot possibly be in the same mental state as a conscious person.
My arguments along the same lines ask about putting people in an experience machine that puts their brains in a particular state, or about putting the brain in a loop where it constantly cycles through some five-minute script of great joy without the memory of having done so 10,000 times before.
In another example (serving a different purpose but still useful here), I have asked about a parent choosing between falsely believing that their child is healthy and happy while the child is being tortured, and falsely believing that the child is being tortured while the child is, in fact, healthy and happy. The caveat is that the brain state created as a result of one's choice will not include the memory of being asked and answering the question.
The arguments differ slightly, but they point to the same result. A lot of people just don't seem to value brain states.
All brain-state theories are vulnerable to this type of objection.
However, once again Carroll falls victim to thinking that defeating Sam Harris is the same as proving that 'ought' cannot be derived from 'is'.
Desirism is not a brain-state theory and is immune to these objections.
A desire that P is fulfilled by any state of affairs S where P is true in S.
The surgeon, in this case, will not only have to create the correct brain states "on a neuron-by-neuron level", but will also need to create an external state of affairs S so as to make it the case that P is true in S.
It is not enough to create a brain that believes that one's child is healthy and happy. One will also have to create a state of affairs in which the child is, as a matter of fact, healthy and happy.
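The distinction between a desire being fulfilled and a desire merely being believed fulfilled can be sketched in a few lines of Python. This is only an illustrative model: the names `Desire`, `StateOfAffairs`, and the example facts are my own scaffolding, not anything drawn from the desirist literature.

```python
from dataclasses import dataclass

# A "state of affairs" S is modeled, crudely, as the set of facts true in it.
StateOfAffairs = frozenset


@dataclass
class Desire:
    """A desire that P, where P is a proposition (here, a named fact)."""
    p: str

    def fulfilled_in(self, s: StateOfAffairs) -> bool:
        # A desire that P is fulfilled in S if and only if P is true in S.
        return self.p in s


desire = Desire(p="my child is healthy and happy")

# Case 1: the parent believes P, and P is true in the world.
world_good = StateOfAffairs({"my child is healthy and happy"})

# Case 2: the surgeon induces the very same belief (the same brain state),
# but P is false in the world.
world_bad = StateOfAffairs({"my child is being tortured"})

# The brain state is identical in both cases; only the external state of
# affairs differs, and only the first one fulfills the desire.
print(desire.fulfilled_in(world_good))  # True
print(desire.fulfilled_in(world_bad))   # False
```

The point the sketch makes is that fulfillment is a relation between the desire's proposition and the world, not a property of the brain alone, so no amount of neuron-by-neuron manipulation can secure it by itself.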
We can approach this issue from another direction that gives us the same conclusion.
It is absurd, at best, to think that animals evolved to have one and only one overriding interest: establishing and maintaining a particular brain state. It is much more reasonable to expect that animals came to be concerned with states of affairs in the real world more than with the states of affairs between their ears.
However, these are problems with brain-state theories of value, not with the possibility of morality-as-science. It is a mistake to confuse the two.
If we make everyone happy by means of drugs or hypnosis or direct electronic stimulation of their pleasure centers, have we achieved moral perfection?
No, we have not.
Does the answer of "no" prove that we cannot derive 'ought' from 'is'?
No, it does not.