Here is the latest update on my moral theory work, for those keen on following it in depth. This post is deliberately long, so those not so keen can skip this one. It assembles notes I've been sitting on for a while for lack of time to write them up.
On my last trip to St. Louis I debated a fellow atheist on my own goal theory of moral values vs. the desire utilitarianism of Alonzo Fyfe. That debate was on "Is Happiness the Goal of Morality?" and I was up against Mike McKay, who was stumping for Fyfe. The video is available on YouTube. I also posted my opening statement on Facebook. McKay put his opening statement up as well. Now Ben is working on an argument map for analyzing and continuing the debate.
Here I will fill in some of the blanks left open at the end of that debate, and others explored by Ben in our consultations as he develops the argument map. The following will therefore assume you have watched or listened to the Carrier-McKay debate, or at least read both opening statements. Note that I had earlier exchanges on many of the same issues directly with Fyfe (see the index at Common Sense Atheism). And the most detailed formal defense of my thesis is in my chapter "Moral Facts Naturally Exist (and Science Could Find Them)" in The End of Christianity, pp. 333-64, ed. John Loftus (Prometheus 2011), with even more detail, though less formally presented, in Sense and Goodness without God, Part V, pp. 291-348 (AuthorHouse 2005).
Seven Unclosed Threads
I detected and noted seven unclosed threads at the end of our debate (left unresolved largely because they didn't come up in Q&A or didn't get discussed enough within the time available):
(1a) The Problem of Infinite Regress. The competing views were GT (Goal Theory) and DU (Desire Utilitarianism), and I argued GT is a subset of DU, and thus they are not at odds, but rather GT is what you end up with when you perfect DU, whereas DU is in effect "unfinished." For example, one key reason to prefer GT over DU is that our desire to be satisfied ends the infinite regress. Without GT, DU is stuck in that regress, incapable of deciding which values should be held supreme, i.e. which should override others. The question we must ask is: what criterion identifies one desire as "more important / more desirable" than another in any given situation? And what is it that all such overriding desires have in common? The desire for the most satisfactory outcome. Hence GT.
(2a) There Is Always a Reason to Prefer One Desire to Another. Thus, for example, when McKay says near the end of our debate that he wants his family to be well for no reason, I believe that's untrue. He has a reason: it makes him happy, contented, satisfied to know they are or will be happy, and it satisfies him to want those things for his family; if it didn't, he wouldn't want it. In other words, having this desire satisfies him more than not having that desire, or even having the opposite desire. There can be no other explanation for why he would prefer that desire to another.
(3a) Satisfaction Maximization Is the Only Thing We Desire for Itself Alone. So when McKay says my criterion of satisfaction is just synonymous with "the sum total value," he is making my point for me. Satisfaction maximization is the one thing that all supreme values share. It's how we decide what values to keep, and what desires to let override other desires; what desires, in other words, to put in that set of supreme desires. "It satisfies us more to put that desire in that box of supreme values than not to." Or more exactly, "it would satisfy us more, were we drawing logically valid conclusions from what we know, and if we knew all we'd need to know to align our conclusions with the true facts of the world."
(4a) Deception Doesn't Work. As to the idea floated near the end of the debate, of deception-as-means: pretending to be good is of no value to ultimate satisfaction because (a) you can't "pretend" to enjoy the satisfactions that come from experiencing and feeling compassion, for example (only by actually being compassionate can one achieve that state), likewise other virtues (see Sense and Goodness without God, pp. 300-02, 316-23); and (b) the statistical reliability of the behavior (and thus all its benefit) is reduced when that behavior is a constant labor and a fragile commitment, rather than habitual and automatic (i.e. pretenders have a lower probability of maintaining the behavior required to enjoy sustained benefits than genuine good folk have, hence even in respect to external returns on investment it's safer and less of a risk to actually be good than to pretend to be).
Thus for both external and internal reasons, pretense doesn't work. Therefore, "one ought to pretend to be moral" is false. It does not maximize our ultimate goals. It may work better than being profligately immoral, but actually cultivating the virtues of compassion, reasonableness and integrity works far better still. Therefore "one ought to develop in oneself the virtues of compassion, reasonableness, and integrity" is an objective fact about ourselves and the world.
(5a) Why GT Perfects DU. So what McKay never answered in the debate is this: How do we decide what values go into the box of supreme, overriding desires? There must be some standard S that all those values hold in common. Once you decide on what S is, then you must prove that values conforming to S really are supreme overriding desires, i.e. that we really ought to treat them so; if you cannot connect those two, then S is arbitrary and produces a false value system no rational person will have any actual reason to obey (and thus no "true" moral system can result). But if you connect them up, then you end up appealing to some S that is a supreme desire state, that which we desire above all other things, which one will find is satisfaction. (And I already explained in my opening statement why identifying a supreme desire state entails true moral propositions.)
Because one must ask this: why ought we pursue the values in your box of supreme values? If you cannot give an answer, and one that genuinely compels agreement, then your value system is false, or in no meaningful sense "true" (it is not what we "ought" to value); but if you can give such an answer, it will be my answer. In other words: why do we put S values into that box? Because it satisfies us more to do so than not to do so. There must be some true sense in which we ought to put S values into that supreme values box. And our greater satisfaction in doing so is the only available reason.
(6a) Why There Are Always Reasons for Desires. McKay claimed near the end that "some desires do not have reasons" for them, but that isn't true. We always have conflicting desires (actual or potential), so how do we decide which desires will and ought to prevail? The answer is a greater state of satisfaction: you will be more satisfied choosing one way rather than the other. Otherwise you are just choosing at random, from which no "true" moral system can derive (because then no choice can ever be wrong, even in principle).
Thus, "it will satisfy us most to pursue that desire over any other" is the ultimate reason for every desire we ultimately pursue, and likewise for every desire we ought to ultimately pursue: the former are the desires we happen to have, while the latter are the desires we would have if we knew better; the former desires can be based on errors of reasoning and false beliefs, while the latter are by definition the desires that are not. Thus the ultimate reason for all desires, that "it will satisfy us most to pursue that desire over any other," can be false: you might believe pursuing that desire will satisfy you most, when in fact it will not, and some other desire will. Thus empirical facts trump private subjective belief. And that's why we need science and scientific methods involved in ascertaining true moral facts. (See my earlier blog on the ontology of this moral theory.)
Thus, "it will satisfy us most to pursue that desire over any other" is the ultimate reason for every desire we ultimately pursue, and likewise for every desire we ought to ultimately pursue: the former are the desires we happen to have, while the latter are the desires we would have if we knew better; the former desires can be based on errors of reasoning and false beliefs, while the latter are by definition the desires that are not. Thus the ultimate reason for all desires, that "it will satisfy us most to pursue that desire over any other," can be false: you might believe pursuing that desire will satisfy you most, when in fact it will not, and some other desire will. Thus empirical facts trump private subjective belief. And that's why we need science and scientific methods involved in ascertaining true moral facts. (See my earlier blog on the ontology of this moral theory.)
(7a) We Mustn't Confuse Criteria of Truth with Criteria of Effective Persuasion. Finally, there is a confusion I think McKay never escaped, despite my trying to disentangle it in Q&A: we must distinguish how we determine which values are true from how we motivate or persuade people to adopt those values. Those are not the same thing (and I do think I made this point well enough in Q&A). Because we are not perfectly rational, the latter is not the same process as the former.
It seemed to me that McKay confused these two a lot. That true desires are those that are rationally informed is a standard for determining which values are true, not a recipe for changing people's behavior--by analogy with ordinary facts: that a correct conclusion is one that is rational and informed is a standard for truth, not (necessarily) what will actually convince people to change their beliefs. As psychology has shown, many people do not respond rationally to correct information and thus do not change their beliefs when faced with decisive refutation. But that in no way changes the fact of what the correct (i.e. true) belief is: a true conclusion is one that follows with logical validity from true premises, regardless of who accepts that or how you convince people of that. Thus "people will never be rational" is an irrelevant objection to a correct standard of truth, because the truth is still the truth, whether people are rational or not; and as in all facts, so in moral facts.
So, for example, when someone asks the question "But what would you say to a ruthless dictator?" the answer is, if he is irrational or insane, that that question is moot, since there is no statement you can make that will cause him to become a moral person (which is why generally we kill them). Whereas a rational and sane person would not be a "ruthless dictator" in the first place. They would already recognize there are much better lives to be had. It's the same thing as asking "But what would you say to a paranoid schizophrenic?" in matters of ordinary nonmoral belief (such as whether it was a fact that the CIA was out to get her): the answer is, there simply isn't anything you can say that will convince them of the truth, because they're crazy--i.e., not functioning rationally. That you can't convince a crazy person of the truth doesn't make that truth all of a sudden not true.
Thus the issue of what's true and the issue of how you convince people of that are simply not the same issue. Regarding the latter (how we convince people) our objective overall must be to make and keep as many people as we can as rational, sane, and informed as possible. While regarding the former (finding out what's true), we must simply already be in that state. This is no different in any other field of scientific inquiry.
Ten Threads More
Those were the seven unclosed threads I saw, which I hope now to have tied off and finished. But Ben found ten other unclosed threads, many quite interesting, and to those I responded as follows, originally for Ben's use in building out his argument map, and now written up for this blog:
(1b) Total Knowledge vs. Limited Knowledge. In our debate there was a digression on the murkiness of war and whether one could ever be in a situation in which it would be genuinely right for both sides to continue fighting, i.e. with both sides acting perfectly morally with all the same correct information. I considered that a red herring; one should stick with the example I started with, slavery, because I use it as an explicit example for analysis in my chapter on GT in The End of Christianity. But war would succumb to the same analysis. The result is that two sides acting on true moral facts with the same information would stop fighting. Therefore in war, one side is always in some way wrong (or both sides are)--which means either morally wrong, or wrong about the facts.
In the slavery case, the short of it is that there are objective facts about being a slavemaster which, if you knew them, you would not desire more than the alternative (I gave a pretty good example of the alternative in the debate Q&A). I have no idea how McKay, or DU generally, answers this question. If you supremely desire owning slaves, then on DU you ought to own slaves. In other words, why should the slavemaster give a shit about what the slaves want? This is the problem of fantasies (moral systems you wish were true but that are not) vs. facts (moral systems that are in fact true--i.e. that describe how the slavemaster ought in actual fact behave). A true moral system is one that makes true statements about how the slavemaster ought to behave; all other moral systems are false--by definition, because they don't make true statements about how we ought to behave, but only statements about how we want someone to behave. (On this point see my responses to Garren Hochstetler's attempt to get around this argument.)
So there has to be a reason not to put that value (for keeping slaves) in the box of "supreme values" if slavery is to be immoral in any true sense. And that reason has to be true for the slavemaster, or else slavery is not immoral for that slavemaster in any true sense. McKay might say the reason "not" to put that value in the box of supreme values is that you should put in that box instead a value for what the slaves want, but that just gets us back to the same question: why are we obligated to put that value in the box? That is, in what sense is it "true" that we "ought" to do so? Once you start answering that question, you'll find similar outcomes for any war scenario as well.
So for this digression, on the murkiness of war, the key issue should be the true facts of the case, not what our beliefs are. Just as we can be in the dark about any other fact, we can be in the dark about the moral fact of a situation. So we can act morally within the constraints of our limited knowledge, but if we had unlimited knowledge we would recognize the truth is different. And discovering that would be the goal of anyone aiming to prove one side wrong in a war, or to test whether they should be waging war themselves. On this see my TEC chapter, pp. 424-26, notes 28, 34, and 35.
In Q&A I gave an analogy from surgery and knowledge thereof, and the difference between doing the best you know while recognizing there are things you don't know, such that if you knew them you'd be doing something different. There is a difference between the "best way to perform a surgery that I know" and the "best way to perform a surgery" full stop, which you might not yet know, but you still know you would replace the one with the other as soon as you know it's better. Ditto the war scenario.
On all moral systems we will be in this state often, so it is not a defect of any. So the only difference there can be between DU and GT here is what would pan out in the total knowledge situation, i.e. what is the truth that we can in principle discover, not the limited knowledge we have so far obtained. And in that total knowledge situation, there can never be a just war in which both sides are in the right. In the limited knowledge situation, both sides can be warranted in believing they are right, and thus not be culpable, but still even then one of them must actually be wrong and just unaware of it, and would want to be aware of it if there were any means to be.
Because in any total knowledge scenario, negotiation will always be the moral option, not war, except perhaps in certain extremely bizarre lifeboat scenarios. But even in an insoluble lifeboat scenario, there will almost always be better choices to make than trial by combat, e.g. casting lots or objectively determining or negotiating best outcomes and abiding by the results. Anything else would usually be irrational, and willful irrationality is immoral, for the reasons I gave in the debate, and more formally in the associated footnote in my TEC chapter (pp. 426-27 n. 36). And in the rarest and most extremely bizarre lifeboat scenario, in which, by some sort of Rube Goldbergesque device, trial by combat actually is the most rational option, you will have described a scenario so enormously different from any we will ever find ourselves in that the example will be completely irrelevant to human life. At best it might make for an entertaining novel.
(2b) The Magic Pill Challenge. The "magic pill" objection, ever popular in metaethical debate, also came up in Q&A, and I think my responses might not have been sufficiently clear. The gist of the question is this: what if there is a pill that, if you take it, a thousand people suffer and die horribly but you get a thousand years of a fully satisfying life and no downside, and (this is key) the pill also erases all your memory and knowledge of those other thousand people, so you never even know the cost of your good outcome. I answered, in effect, that you'd have to be evil to knowingly take such a pill, thus by definition it would always be immoral to do so. Thus the desire for greater satisfaction, in that moment, would prevent you taking the pill, because though you would forget later, you know now what the cost is, and it would not satisfy you to pursue your own happiness at that cost. Indeed, anyone who had already cultivated their compassion in order to reap the internal and external benefits to their own life satisfaction, as one ought to do (because it maximizes life satisfaction to do so, i.e. all else being equal, it is more satisfying to live in that state than not), would be incapable of choosing the pill.
My first point was not that the pill makes you evil, but that you would have to be evil to take it, so either you are evil (and therefore it's immoral to take it) or you are not evil (and you won't, therefore, take it; because, not being evil, you cannot, any more than you can choose to believe something you know to be false). You could, of course, take it out of irrationality (e.g. mistakenly thinking it's good), but that creates a different dichotomy: either you are irrational (and therefore incapable of recognizing true conclusions, therefore what choice you make cannot be informative of what's true, just as a schizophrenic's talking to faeries in no way demonstrates faeries exist) or you must willfully choose to be irrational so as to disregard the feelings of others (which is evil, per above). Here by the term "evil" I mean simply malignant (thus I imply no circularity: see Sense and Goodness without God, V.2.2.7, pp. 337-39). Malignancy ("being evil") is then immoral on GT because having a malignant personality does not maximize life satisfaction, but in fact greatly depresses it (in comparative terms). As I explained with the slavemaster example.
Only after we recognize this do we get to the issue of weighing long-term effects: to take the pill is to be evil (first) but also, in result, to reward an evil person, which was the other point I made: your future self, by having taken the pill, must therefore be a malignant (uncaring) person, whatever you may think you are before taking the pill. And yet if you are not evil, you will not choose to reward an evil person (much less do so by killing a thousand people); you will prefer instead the quantity of satisfaction that comes from remaining good (i.e. your alternate life without taking the pill) because you will find that more satisfying when fully informed--which, incidentally, you can only be when you don't take the pill, since the pill by definition erases knowledge and thus makes you uninformed, and not merely uninformed, but willfully uninformed (a state in which it is logically impossible to make the right choice, since the truth only follows from true facts, not their absence). Hence the ultimate moral truth of the matter follows from the true facts of the case, not your ignorance of those facts; and when your ignorance is willful, you have no excuse.
Still, one must not confuse a future "you" without memory of doing evil, with a future "you" as you are. Since you can only have made the choice if you are evil, that future you would be evil--not have become it, but already been it. Or you are and will be irrational, and thus make the wrong choice by being illogical or uninformed, but then the choice is by definition incorrect, since a true conclusion cannot follow from an irrational thought process, so your choice in that case is no longer informative of what the correct choice was.
In both cases the logic that follows can be observed in the analysis of Gary Drescher, who gives a very similar model, from Game Theory, for why we would prefer even what has no reward, using a very different but also famous "magic box" example (see the later chapters of his book Good and Real).
This is easily tested. Just ask anyone which person they would rather be right now: (A) a mass murderer without a memory of it, or (B) someone with reasonably accurate memories of who they are and what they've done. Would it disturb them to find out they were (A)? Yes. So why would they choose to be that person? Indeed, when would they ever? The very fact that the scenario requires erasure of your memory gives away the game: you know it's factually evil, so much so you can only live with it by concealing the crime even from yourself (which amounts to insanity: delusionally disbelieving what you've done). I'd rather have my memory intact and know what I've done and cope with the consequences, than go insane. But more to the point, it is logically impossible for a "good" me to enjoy the fruits of the pill, because only an "evil" me (a mass murderer) would ever be doing so. Hence the question when facing the magic pill is whether to create and reward a happy evil person (whom you would not want to be) or to remain a struggling good person (whom you would always, when aware of all the facts, prefer to be). Again, the truth follows from the true facts of the case, not your ignorance. Only ignorance follows from ignorance.
That's why the question is, what sort of person would you rather be right now? (A) or (B)? Which would it satisfy you more to be? If (B), then you will be more satisfied being (B) every single moment. Because the same question, asked at any moment, gets the same answer, all your moments added up give you no greater satisfaction in (A) than in (B). And this extends even after death. By the same logic: would you rather be (A) or cease to exist? I would rather not exist (because I would not wish that "evil me" on the world). Again, and I've used this example before, the dilemmas faced by the character Angel in his eponymous series are morally on target, whereby a certain action can convert him from a good version of himself to an evil version of himself, and his good self would always prefer to be killed than take that action and become his evil self.
To make this dilemma more visceral: if some just persons come to kill you to avenge those thousand people you killed, would you honestly disagree with their right to kill you? You could not rationally do so (because you would agree with them--indeed, you might even ignorantly be helping them find the culprit). So why would you want to live for a thousand years in a state of knowing you could be a mass murderer who deserves to die? The only way you can rationally not live in that state is to know you will never have chosen (A), and the only way you can know you will never have chosen (A) is if you never choose (A). Therefore your life satisfaction always depends on never choosing (A). You can avoid this outcome only by being irrational, but you should never want to be irrational, for the reasons I outlined in the debate and in TEC (again: pp. 426-27 n. 36), therefore "to be irrational" can never even be a proper object of desire, and thus fails to obtain as a moral choice even in DU (and therefore even on DU it can never be a correct decision), and likewise also in GT. It follows that no state that depends on being irrational can be a correct choice on DU or GT either (except in games and play in which no moral harm is at issue, but even then one must still be rationally enough aware of the circumstances to always know the difference).
Thus there is no issue here of preferring one moment to a thousand years. The issue is one of comparing the satisfaction differential of all compared moments, as perceived by a rational person (whom you must always want to be, and thus whose conclusions you ought always agree are correct). In every moment of the thousand years, a rational person will still find state (B) more satisfying than state (A) and thus will always agree, if they are in state (A), they are not living the most satisfying life they could be. Conversely, in every moment of an always-(B)-chooser, they will know they are living a more satisfying life than were they an (A)-chooser. Thus life (B) is always more satisfying than (A), regardless of any physical or temporal differences between them. This conclusion may seem weird, but that's because the scenario is weird. Indeed, what would be weird is if such a weird scenario evoked a correct answer that wasn't weird.
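To make the summation step in that comparison explicit (my own notation, not anything from the debate): write s_A(t) and s_B(t) for the satisfaction a rational, informed evaluator would assign to living as (A) or (B) at moment t. The argument is simply that a per-moment dominance survives addition over any span of moments, however long:

```latex
s_B(t) > s_A(t) \ \text{for every moment } t
\quad\Longrightarrow\quad
\sum_{t=1}^{T} s_B(t) \;>\; \sum_{t=1}^{T} s_A(t) \quad \text{for any duration } T.
```

No length of the (A) life can outweigh a comparison it loses at every single moment.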
Thus something Ben once said to me about an amped up version of the "magic pill" example was spot on--in which one massively alters every conceivable thing about yourself and the universe so as to produce a "get-out-of-every-conceivable-negative-outcome" scenario. It's the reason why all these "magic pill" examples are lame: if you have to completely change reality to get a different result, then that actually proves my point. My conclusions follow from reality as it is, and even these "magic pill"-proposing objectors agree with me on that. In fact they agree with me so inescapably that the only way they can escape my conclusion is by completely changing reality. (And I do believe I said something to that effect in Q&A.)
In a separate case McKay gave a seemingly unrelated example of two options, choosing to help science cure a disease for no recognition and not living to see the benefits of your cure, or screwing scientific progress over just for glory and reward. But this is just a more realistic reiteration of the magic pill scenario: as in that case, which person would you be more satisfied being right now, (A) or (B)? (Assuming those are really the only two options; they almost never are.) If you would always prefer to be (B), then you need to be sure you will always choose (B). The only way to be sure you will always choose (B) is to always choose (B). Ergo (B) is always more satisfying. Only in this case, instead of mass murder vs. quasi-immortality, the options are ignorantly screwing science over vs. helping science but seeing little of its effects personally. In fact McKay is actually proving my point by claiming he would be more satisfied doing the latter. Indeed. That's GT.
(3b) What If Rape Is Fun? One thing that came up in McKay's responses was something like, "What if rape is fun?" On that very question, see my remarks in The Christian Delusion, pp. 100-01, ed. John Loftus (Prometheus 2010). You can also adapt what I said about the "good" option for a slavemaster in Q&A and in TEC, to a "reformed" rapist scenario: compare the two lives; which would they have preferred? Once rational and informed, they will always prefer the latter. Related are my remarks on psychopaths in Sense and Goodness without God (index) and the sources cited in TEC (pp. 323 and 427 n. 41).
The rapist "thrill seeker" scenario is actually answered by a good episode of Law & Order: Criminal Intent (Blink): in the end a psychopath breaks down, realizing his fearless thrill-seeking was going to destroy the woman he loved, and that in consequence he couldn't experience the joys of a real loving relationship with a woman at all, and thus was better off in jail--a tale reminding us to consider not just the more evident consequences of a behavior or disposition, but all the things it deprives you of as well. Ben similarly proposed a scifi example of the character named Fran in an ep of Stargate Atlantis (Be All My Sins Remember'd), in which she was engineered to be happy suiciding herself to destroy a whole population. Of course that's a perfect reason never to make one of those. See my remark on AI in the endnote in the TEC (p. 428 n. 44). There I gave the Dark Star and 2010 films as exemplifying a similar problem.
Nevertheless, objectively speaking, Fran will be missing out on a lot (temporally and qualitatively) and thus can be objectively described as defective--e.g. if she could choose between being Fran and being Unfran, an otherwise exactly identical being who saves a civilization of people instead of killing them and is just as happy doing so, which then would she choose? This brings us to the law of unintended consequences: if she would really see no difference between those two options, then she must lack compassion, which is a state of character that has a whole galaxy of deprivation consequences on her happiness and which cannot fail to mess up her behavior in every other life endeavor. Thus Fran, as conceived in that ep, is logically impossible, unless her compassion and other virtues were intact, in which case she would prefer to be Unfran, when rational and informed (and thus to convince her of this, you need to make her rational and informed). An example of just this (though programming overrides free will in the end) is the Philip K. Dick story that was made into the movie Impostor.
(4b) Why Would Morality Evolve to Maximize Happiness? McKay asked at some point in the debate why on GT should morality just happen to optimize happiness. On GT the behavior that optimizes happiness is the definition of what is moral, so his question is analytically meaningless. Possibly here McKay was confusing different definitions of happiness (see point (6b) below), replacing satisfaction maximization with "joy" or "pleasure" or something, even though I specifically warned against that mistake in my opening.
But McKay also confused the fact that our desires are products of evolution with the claim (which I never made) that we ought to behave as we evolved to (a claim easily refuted by pointing out how self-defeating that ultimately becomes). On this point one should read Volume 1 of Moral Psychology, ed. Walter Sinnott-Armstrong.
In summary:
On the one hand, evolution used happiness to modulate cooperative behavior as a survival strategy (thus we have been trending toward "cooperation-causes-happiness" pairings of action and result for millions of years now), so I really don't see why McKay should find it surprising that evolution has made us more moral. The Sinnott-Armstrong book excellently lays this out. For example, the sexual selection of moral virtues hypothesis is compelling. There is a whole chapter on that: women prefer mates who have certain moral virtues; the end result is reproductively obvious. Even though it is not decisive, e.g. female preferences can be overridden by rape and the male dominance of unwilling women, one only needs the slightest differential (e.g. just 1% is enough) for a huge effect over tens of thousands of years. And women, in the aggregate, are smarter than rapists. Contrary to recent claims, compared to the intelligent-selection effects of women's decision-making, rape is simply reproductively disadvantageous in the long run. And evolution is all about the long run. Which is precisely why rape is no longer our standard mating strategy, and hasn't been for thousands of years. It is always exceptional, and therefore consistently beaten. And even apart from this sexual selection effect, the evolutionary contract-theory argument is solid, not only having abundant confirmation in brain science and evo-devo systems modeling, but in actually reflecting biological adaptation toward Game Theoretic behavior.
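To illustrate how little that differential needs to be, here is a toy calculation (my own, with made-up starting numbers, assuming a constant 1% per-generation reproductive advantage and discrete generations):

```python
# Toy illustration (not from the post): how a mere 1% per-generation reproductive
# advantage compounds over evolutionary time, under deliberately crude assumptions
# (constant selection differential, discrete non-overlapping generations).

advantage = 1.01                     # carriers leave 1% more offspring per generation
years_per_generation = 25
generations = 1000                   # ~25,000 years
freq_favored, freq_other = 0.5, 0.5  # hypothetical starting frequencies

for _ in range(generations):
    freq_favored *= advantage
    total = freq_favored + freq_other
    freq_favored, freq_other = freq_favored / total, freq_other / total

print(f"After {generations * years_per_generation} years, carriers are "
      f"{freq_favored:.4%} of the population")
# Prints roughly 99.995%: the favored trait has gone from half the population
# to essentially all of it.
```

Even on these crude assumptions, a trait carrying only a 1% advantage goes from half the population to nearly all of it in about a thousand generations.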
Which gets us to the other hand: Game Theory entails that a system of agents will always be better off with certain kinds of cooperation and tit-for-tat; therefore evolution will always trend that way, once the relevant strategy gets started (e.g. social animals) and the necessary cognitive architecture gets built (mirror neurons, language, causal reasoning, etc.). In fact the latter probably evolved partly in an arms race to exploit the advantages of the former: i.e. if you want to evolve toward the best Game Theoretic solution (maximizing cooperation and minimizing getting duped), you will want to evolve the cognitive architecture we have; is it a coincidence, then, that that's exactly what we did? Pursuing the most effective survival strategies has indeed pushed us toward more cognitive awareness of other people, better causal reasoning, language, and other instruments of enhancing cooperation and combating deception.
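For readers who want to see that Game Theory point concretely, here is a minimal sketch (my own illustration, using the standard Prisoner's Dilemma payoffs; nothing in it comes from the debate) showing that once reciprocal cooperators are common in a population, they out-earn an unconditional defector over repeated interactions:

```python
# Minimal iterated Prisoner's Dilemma sketch: reciprocal cooperation ("tit-for-tat")
# vs. unconditional defection, in a population where reciprocators are common.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(their_history):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not their_history else their_history[-1]

def always_defect(their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    """Total scores for two strategies over an iterated game."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Population: 9 reciprocators and 1 defector; everyone meets everyone else once.
population = [('tit-for-tat', tit_for_tat)] * 9 + [('defector', always_defect)]
scores = [0] * len(population)
for i in range(len(population)):
    for j in range(i + 1, len(population)):
        gain_i, gain_j = play(population[i][1], population[j][1])
        scores[i] += gain_i
        scores[j] += gain_j

for (name, _), score in zip(population, scores):
    print(name, score)
# Each tit-for-tat player ends with 4999 points; the lone defector with 1836.
```

The defector wins each first encounter but is then locked into mutual defection, while the reciprocators keep harvesting the cooperation payoff from one another; that asymmetry is what evolution trends toward, and what we can deliberately exploit.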
There is another pathway (a hive mind), but that never leads to moral cognition, so it isn't relevant to our debate. Hives are just one strategy for adaptive social animals. But when you are already starting with an advanced individualized brain, hives are no longer an accessible evolutionary path. Additionally, the most intelligent animals tend to be (not exclusively, but in very high proportion) predators. Predation requires the kind of cognitive machinery that hives generally are much less efficient at producing. Thus moral animals will more likely evolve from social predators, grafting Game-Theory-exploiting systems onto units already serving predation (such as reasoning and outwitting deception). Which explains quite a lot about why humans are the way they are.
More to the point, since Game Theory entails that a system of agents will always be better off with certain kinds of cooperation and tit-for-tat, we can actually jump ahead of evolution now. Evolution meanders down that path, it doesn't perfect everything right away, or sometimes ever. But once we see where it's going, we can just go straight there--just as we do in vision (telescopes, microscopes, glasses) or locomotion (cars, planes, the wheel) or hand-to-hand combat, e.g. weapons extend the power of our arms by exploiting the laws of mechanics (leverage, wedge physics); helmets extend the function of our skulls as an evolved defense against brain injury; etc. Just as we can see evolution's trajectory and skip ahead in all other areas, so we can in the area of enhanced social cooperation and rationality.
So one might then ask whether we have also evolved to like bad things, for instance sugar. Certainly. But the "evolved to like bad things" concept is analogous to the "evolved to make bad decisions" concept--e.g. see my Skepticon III talk, and the first two chapters of The Christian Delusion: we evolved to make serious cognitive mistakes in ascertaining knowledge, yet correct knowledge is vital to our survival. It makes no sense to say "we evolved to make those mistakes, therefore we should just believe them" when in fact the consequences are self-defeating or even fatal. In other words, just because our screwed-up processes increase differential reproductive success (DRS) doesn't mean they maximize differential reproductive success.
The proof of this is in earthquake building codes: evolution doesn't teach animals to make houses to withstand earthquakes; we had to develop a way of thinking correctly that bypassed our errors in order to reach that incredible maximization of DRS. So, too, CPR (cardiopulmonary resuscitation): you're not likely to "evolve" a knowledge of CPR, yet a knowledge of it increases DRS. Likewise the technology of morality, which is just the same thing: a technology for maximizing our ability to exploit the best scenarios achievable according to Game Theory, which bypasses our evolved error-making. Even though evolution has moved us in that direction already, just as in all these other areas, it still has a lot farther to go, and technology is what we use to fill that gap.
I think the primitive vengeance code destroying the Middle East is proof enough of that: following our evolved moral sense fucks us over often enough that we ought to be smart enough now to realize we need to run a software patch on that, just as we did in the case of logic, mathematics, and scientific method (as I explained in the debate). But it does not follow that we did not still evolve many useful tricks for reasoning and thinking, and many useful systems for enhancing cooperation and welfare. We just aren't perfectly designed. Which is why now, being intelligent designers, we can do much better. And thus we build technologies to perfect our imperfections, of which "morality," like "language" or "science," is one.
(5b) The Role of Rationality in Defining Truth. Related to that point is where again I think McKay confuses criteria of truth (what makes a proposition about the facts of the world true) with criteria of persuasion (what convinces someone that a proposition is true). Because the same goes for what causes someone to behave differently. Humans don't function well in terms of converting beliefs into behavior, due to the physical separation and poor neural connections between the action-generating centers of the brain and the reasoning centers of the brain (the one having evolved so much later than the other: read Kluge); so it is not enough to convince the rational brain of a proposition in order to generate the corresponding behavior, because behavior is generated elsewhere in the brain. But that's a defect of our information processing, and such defects don't have anything to do with what is actually true, i.e. just because we are programmed by evolution to think fallaciously (e.g. we are innately prone to confirmation bias) does not make fallacious reasoning correct (e.g. confirmation bias is still erroneous, regardless of who ever realizes this fact or whether anyone does anything about it).
Hence when I say moral facts follow from what a rationally informed person would conclude, I am not saying one has to be perfectly rational to be moral, but that moral facts are what rationally follows from the facts of the world (whether we or anyone ever recognize this). That is, just as the true facts of physics are "what we would conclude if we were adequately informed and reasoned from that information with logical validity" so the true facts of morality are "what we would conclude if we were adequately informed and reasoned from that information with logical validity." Getting people to believe such conclusions is a completely unrelated problem; likewise getting people to behave consistently with their beliefs.
Likewise the question "what if our evil desires are simply overwhelming," which essentially sets up a fork between sanity and insanity. There are certainly insane people who can't conform their desires or behavior to rational beliefs, even when they have them; or who are incapable of forming rational beliefs, regardless of what technologies and "software patches" we equip them with. Hence my point in my TEC chapter: it is no defect of any moral theory that madmen cannot be persuaded of it. Thus it can be no defect of GT, any more than of DU. This again falls under the same category, of the difference between what is true and what will actually persuade people (as I already discussed in section (7a) above)--that difference entails that madmen can never be a counter-example to any theory of what is true, and therefore this objection becomes a non-starter.
There is a similar point to make regarding the fact that "perfect knowledge entails knowing all loopholes" (comparable to the supermagical pill case I discussed in section (2b) above), i.e. someone with perfect knowledge (like a god, say) could navigate all social consequences so as to lie and cheat etc. precisely when nothing bad would result. But I would argue that anyone actually possessed of that knowledge would be morally right to exploit all those loopholes, it's just there aren't actually as many such loopholes to exploit as we think (see Sense and Goodness without God V.2.1.4, p. 323). Because (as noted earlier here) compassion is necessary for happiness maximization, and you can't hide the consequences of your actions from yourself--not really even with magic (per the above discussion of magic pills).
Of course, even apart from that, we would know that an omniscient neighbor could and would exploit all loopholes, so our behavior would change accordingly, which would create an arms race scenario that is unlikely to go well for any libertine omniscient neighbor--which an omniscient person would in turn know. So how would such a person prevent the extreme restrictions on their liberty and happiness that would result from the extreme self-protective measures taken by everyone else? By being more consistently moral. Or ending up dead. It's obvious which is the better choice.
There is a partial exploration of this scenario in an episode of Fringe (The Plateau), where someone became so intelligent they could predict everything that would happen and thus could always make the perfect choice to have any outcome they wanted, even by extremely bizarre chains of causation--to which the solution was, society had to kill him (although he went insane before they got to him), an example of "it's not going to go well for the omniscient." But that scenario differs from the relevant case in one key respect: this person clearly lacked compassion, and this lack was (even as depicted) destructive of their happiness; whereas if they had cultivated happiness-causing compassion, they would have used their power in a way we would call properly moral, since they wouldn't even want to exploit "loopholes" that would hurt innocent people, because the hurting of other innocent people always causes direct harm to themselves, via their conscience. They would instead exploit loopholes to do a great deal of rewarding good. They would then have a much more satisfying life.
(6b) But Isn't the Pursuit of Happiness at Odds with Morality? The question kept being asked why should we want happiness above all things? Maybe there's something we want more? (And indeed, for DU to not logically collapse automatically into GT, it is necessary to believe there is such a thing.) This lies at the heart of what I think is illogical about McKay's position: What could you possibly want more than happiness? Because once you identified a happiness-overriding desire, in what way would that not then be what makes you happy? This is why I used the satisfaction criterion, to avoid the equivocation fallacy that seems to be confusing him: he means by "happiness" different things in different places, for instance often confusing it with "joy" or "pleasure" or something like that. It can consist in that, but not solely in that. Satisfaction is always more rewarding, and indeed we have a desire for joy and pleasure precisely because these things are satisfying states to us, although not the only ones. They are therefore subordinate to, and only varieties of, satisfaction maximization, and not the sole or always overriding variety.
Thus when we rephrase the matter, McKay's objection dissolves: a mother aiming to sacrifice herself to save her daughter (TEC p. 350) will always choose the option that most satisfies her. The only time she will not is (1) when she does so uninformed (realizing after the fact, say, that she would have been happier the other way around), but that's precisely why the truth follows from what would have been the informed decision (and her acknowledging this after the fact proves the point that it was indeed the correct decision and her actual decision was in error--not necessarily a moral error, but certainly at least an error of ignorance, since had she known the actual consequences she would have chosen differently, thereby demonstrating what the actually correct decision was); or (2) when she does so irrationally (deducing how to behave by a logically invalid inference from the facts), but no correct decision can ever result from an irrational decision-making process (so her decision in that case is by definition incorrect, unless she lights upon the correct decision purely by accident, as when a fallacious argument just "by accident" produces a true conclusion, but then of course her decision won't have been incorrect--she then will be more satisfied as a result of it than she would have been otherwise).
It makes no sense to object to this. To object to it is like saying we desire most above all things to be less satisfied with our lives than we could be by choosing differently. No one but the irrational and the insane ever has such a desire, and no one ever should have such a desire (i.e. there is no provably true proposition of the form "you ought to prefer being dissatisfied with your life than being satisfied with it"). Therefore, our greatest overriding desire is always to be more satisfied with our lives overall. All that then remains (in order to know what is actually "true" about how we ought to behave) is to align that desire with the true facts of the world (i.e. to know what the facts are, and draw conclusions from those facts in a logically valid way).
(7b) What About Desiring Results That We Ourselves Will Never See or Enjoy? McKay gave an example of desiring his kids' future college education and claimed this desire didn't benefit his happiness in any way; indeed, he might not even live to see the fruit of his labors toward it. But that's not a correct analysis. In that scenario we have two desires to choose from: to desire (a) our children's future college educations, or (b) not. Which desire will satisfy us the more to have, and to act on, and to fulfill? If we desire (a) more, then by definition (a) satisfies us more; whereas if (b) would satisfy us more, we would want (b) more, and thus even DU would entail we should want (b) and not (a); there can therefore be no logically valid objection to GT here. Any DU theorist must accept the exact same conclusion as GT produces in this case.
The reality is that the having of a desire itself brings us pleasure and satisfaction. It is not all about seeing the outcome. It is about presently knowing what that outcome will be, or could be with our help. We are thus motivated by that satisfaction, and not just the seeing of the results. Similarly one can enjoy exercise in and of itself, and likewise enjoy the dedication you feel to it, even if you know you will die soon and thus it definitely won't benefit you in the long run (despite that being what exercise is usually undertaken to do). In matters of the fates of your loved ones, the same occurs--as well as the satisfaction of knowing the outcome will occur (or at least more likely will).
(8b) What If We Only Have Dissatisfying Choices? Another issue regards outcomes that cannot be achieved, i.e. sometimes you just can't maximize your satisfaction or must live with a dissatisfying result. But in every case the correct act is the one that produces the greater satisfaction of all available choices. Unavailable outcomes are irrelevant. When deciding between all options, say A, B, and C, the one that is, of those three, the most satisfying is the morally correct option to take. The question is then simply to determine which of those that is, which requires knowing empirical facts about the world (regarding what all the probable consequences will really be) and yourself (regarding what really will be more satisfying to you). That's GT.
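By way of a rough illustration of that decision rule (a minimal sketch of my own; the option names and satisfaction scores below are invented purely for the example, not taken from the debate): the unachievable outcome simply drops out, and the correct act is whichever of the remaining options scores highest.

```python
# Minimal sketch of the decision rule above: among the options actually available,
# pick whichever is expected to be most satisfying. All names and numbers are
# hypothetical placeholders for what would in reality be empirical estimates.

def best_available_option(expected_satisfaction, available):
    """Return the available option with the highest expected satisfaction."""
    candidates = {opt: s for opt, s in expected_satisfaction.items() if opt in available}
    return max(candidates, key=candidates.get)

expected_satisfaction = {"A": 0.2, "B": 0.7, "C": 0.5, "D": 0.9}  # D would be best, but...
available = {"A", "B", "C"}                                       # ...D cannot be achieved

print(best_available_option(expected_satisfaction, available))    # -> "B"
```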
(9b) Avoiding Circular Rebuttals. Several times McKay used the common fallacy of assuming a conclusion a priori and then judging GT by the fact that it doesn't produce that conclusion. On why that's a fallacy see my TEC chapter (p. 350, with pp. 343-47). There are many "question-begging" objections to moral theories like this, which are perversely popular among philosophers, who of all people ought to know better. McKay repeatedly assumes a priori that a decision he describes is the correct one, then finds GT doesn't entail it, and then concludes GT is false. That's circular reasoning. If GT is true, and if it is true that GT really doesn't entail that "expected" result, then McKay is simply wrong (that, for instance, self-sacrifice is moral). So he can't then use his erroneous belief (e.g. that self-sacrifice is moral) to demonstrate GT is false.
It's already a false premise anyway that GT doesn't sometimes warrant moral self-sacrifice (see Sense and Goodness without God V.2.3.1, pp. 341-42). I would add that there are also Game Theoretic considerations, e.g. a world in which no one was willing to die for some better outcome is a world in which better outcomes will always be suppressed by threats of death, and such a world cannot be lived in with any satisfaction. You may therefore be faced with a choice between, in your own action, realizing a state of the world that no one can be happy in, not even you, or dying to realize a state of the world at least someone can be happy in, even if it won't be you. If you are rational and informed, you will die very happy at that thought indeed. But it does not thereby follow that self-sacrifice is always morally right, and we ought for that very reason not to expect it of people (at least in most cases), which seems a more obviously correct conclusion--and is indeed the conclusion GT leads us to.
(10b) Wouldn't GT Destroy Individuality? Finally, McKay argues that GT would destroy individuality by requiring everyone to behave exactly the same. But this confuses covering laws with instantiations. See my TEC chapter (pp. 355-56) for a complete reply to this kind of objection. Individuality is simply never at risk here. Morality does not consist in doing exactly the same things. As long as all the things we do are moral, then even if we follow the same moral rules, ample individual variation will result. In TEC I give the example of the covering law that we ought to look after our own financial and material security by procuring employment (or otherwise earning our keep by trade in labor, so far as we are able). This in no way entails everyone should pursue exactly the same job. There are infinitely many ways to fulfill the same moral goal.
Additionally, even apart from that conclusion, if it were more satisfying to be unique, then on GT it would be a true moral fact that we ought to be unique. Hence GT would not threaten but in fact command uniqueness. Whereas if it were more satisfying not to be unique, then we ought not be unique. Hence GT would then be correct that we shouldn't be. Whereas if there is no difference which is the more satisfying, then there is no moral fact of the matter whether we are or are not unique. Either way there is no logically possible objection to GT here.
McKay's Post-Debate Questions
In addition to those ten points, some time after the debate McKay asked me seven questions, which I shall answer here:
(1c) McKay: Do you think things like condemnation and punishment can be useful in making it less likely that people act immorally (by your definition of immoral)?
Carrier: Yes. Game Theory entails that a well-structured tit-for-tat system maximizes our life satisfaction. Punishment is therefore a technology of happiness, which must be tuned empirically so as not to be self-defeating (e.g. by costing too much or having too many unwanted side effects). Thus science needs to get busy on that.
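For a concrete picture of what tit-for-tat means here, the following is a toy simulation (my own illustration, using the conventional textbook payoff values, not any of the empirical literature I have in mind): a population of conditional cooperators, who answer cooperation with cooperation and defection with defection, ends up far more satisfied than a population of unconditional defectors.

```python
# Toy iterated prisoner's dilemma. Payoffs are the conventional hypothetical values:
# mutual cooperation 3 each, mutual defection 1 each, lone defector 5, lone cooperator 0.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):    # cooperate first, then mirror the opponent's last move
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):  # an unconditionally selfish strategy
    return "D"

def play(strategy_a, strategy_b, rounds=20):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))       # (60, 60): mutual conditional cooperation
print(play(always_defect, always_defect))   # (20, 20): a society of pure defectors
print(play(tit_for_tat, always_defect))     # (19, 24): the defector exploits once, then stalls
```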
(2c) McKay: From a self-interested perspective, shouldn't the rest of society use the same type of condemnation and punishment against selfish desires that benefit the individual but hurt everyone else in society (if we discover that such selfish desires do exist)?
Carrier: Yes. Unless doing so is self-defeating. And, again, whether it is or not is an empirical question, and therefore the job of science to answer. For example, some selfish desires (e.g. ambition, curiosity, liberty, self-sufficiency) are actually necessary for the good of society, and only become malignant when immoderate. For instance, by owning a home I am "hurting" everyone else by preventing them from living on my land. But such exclusive zones of control are socially and psychologically necessary, and everyone in society accepts them and ought to, so long as we all have reasonable access to them or shared ground-rules for acquiring them. Only if I started owning all the land would society start to have no reason to continue tolerating me.
Conversely, some things that would benefit society actually harm the individual, negating any reason the individual could rationally have to support them. For example, forced collectivization of all goods and land--in other words, total deprivation of private property. Since society is composed of individuals, there is no sense in which society could ever rationally support what no individual would. Thus one must avoid the fallacy of conflating "society" with what is actually just some special interest group within society seeking to exploit the remainder. On some of the complexities of how these two realities pan out, see for example my blog series on Factual Politics, and the referenced works therein.
(3c) McKay: What is the evidence that everything which benefits the individual also benefits other people?
Carrier: It would be a straw man fallacy to say that GT entails "everything" that benefits an individual benefits other people. GT entails no such thing. GT only entails that the most satisfying option for an individual is the one option all other rational people will agree that individual ought to take. It is therefore the only morally correct option. It just so happens that that option usually also benefits other people, as unmitigated selfishness is self-defeating to the individual, not only internally (e.g. by depriving them of the joys and satisfactions of a more compassionate life) but externally as well (e.g. by turning society against them and their interests).
What, then, is the evidence that the individual's most satisfying options will typically happen to benefit others as well? One example of such evidence is Game Theory, and all its empirical confirmations since its formal demonstration. Yet more evidence comes from the empirical studies of happiness and life satisfaction and their connection to certain prosocial behaviors and to our having modulated certain psychological traits (magnifying some and minimizing others). For example, it is in your own best interests to cultivate compassion, because of its internal and external benefits. This will then produce inevitable benefits to others, because your compassion then compels you to act compassionately.
But ultimately there are going to be some things that an individual ought not do for the public good, simply because it is unconscionable to expect them to. Thus even social conscience recognizes (or ought to recognize) that every individual's life satisfaction is a proper moral goal for that individual. Society wouldn't even deserve to exist otherwise: if it does not benefit the individual, the individual would be better off without it. And since every "individual" is among the "others" relative to any other individual (in other words, since society is just a collection of individuals), the requirement that society ought to benefit the individual entails that the individual ought to benefit others (directly or indirectly).
(4c) McKay: Why do you think evolution is irrelevant?
Carrier: I did not say evolution is completely irrelevant (since obviously all our needs and abilities are derived from it), but that it is not normative. What evolution has made us into is ultimately irrelevant when deciding how we ought to behave, in precisely the same sense that how evolution has caused us to think is ultimately irrelevant for determining what actually is the correct way to think (e.g. logic, mathematics, scientific method, and all other techniques of error correction, which did not evolve, but that we invented to correct for all of evolution's mistakes in constructing our brains and senses).
Knowing the ways evolution has made us is not irrelevant to correcting its mistakes, certainly. But evolution can't tell us what those mistakes are. Only a software-patched reasoning and scientific method can. And though, in what evolution has done, we can find evidence for what we ought to do, only a software-patched reasoning and scientific method can ascertain which things evolution has done are good things to enhance or bad things to correct for. Therefore we cannot argue "evolution made us thus, therefore we ought to behave thus." That would obviously be false in matters of epistemology; and so, for all the same reasons, it is also false in matters of moral philosophy. (See TEC, pp. 427-28 n. 43.)
(5c) McKay: You said in your debate that you can just ignore hypotheticals (like the pill question). But your whole moral system is based around an impossible hypothetical, someone being fully informed and rational. Can you explain why some hypotheticals don't deserve consideration, while others are crucially important?
Carrier: This goes to the heart of your misunderstanding, and I have addressed it several times in this blog post above.
First, I did not say I can ignore all hypotheticals, but that we can ignore hypotheticals that involve rearranging reality into something that doesn't exist and never happens. When you create completely implausible scenarios, or indeed scenarios that change the very laws of physics, you can only derive conclusions relevant to worlds we don't live in, and situations we will never be in. Since we don't live in those worlds, those conclusions are irrelevant to us. Magic pills simply don't matter. Because they don't exist. And probably never will.
Second, in GT moral facts are not dependent on unrealistic hypotheticals, but reality-based hypotheticals, in fact not just hypotheticals, but proper empirical hypotheses. When you say "I think doing x will bring me the most satisfying outcome" you are stating a hypothesis. That hypothesis can be false. How do you determine if it is false? By ascertaining whether doing x really will bring you the most satisfying outcome. But ascertaining that requires knowing something about what all the actual consequences of x are, as well as the consequences of the alternatives to it. And among those consequences are your actual satisfaction states--rather than just your beliefs (i.e. hypotheses) about what your satisfaction states will be, because those beliefs can be false, and therefore also require tests or some other conditions warranting your belief that they are true.
It's exactly the same as for any other empirical hypothesis in science, e.g. "the earth is round." How do you know that's true? Only if it is a conclusion drawn with logical validity from premises that you are warranted in believing are true. Of course you can always be wrong. Thus all knowledge is probabilistic. As with the rest of science and all our knowledge of the world, so for morality. Thus the key is to reduce the likelihood of error as much as is reasonably possible. When you do that, such as by applying good methods, then your beliefs regarding how satisfying certain outcomes will actually be relative to each other, and your beliefs regarding what the outcomes themselves will actually be, will be warranted, and thus so will your conclusions regarding what is morally right in any given circumstance. Exactly as in any other empirical science.
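To put that in terms of a toy calculation (my own generic illustration; the numbers are invented purely for the example): each piece of evidence, handled by a valid method, updates the probability of your hypothesis, so good methods shrink the likelihood of error without ever reducing it to zero.

```python
# Minimal Bayesian update illustrating "all knowledge is probabilistic."
# The prior and likelihoods are hypothetical numbers chosen only for the example.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

p = 0.5  # start agnostic about a hypothesis, e.g. "doing x is the most satisfying option"
for _ in range(3):  # three independent observations, each four times likelier if H is true
    p = bayes_update(p, 0.8, 0.2)

print(round(p, 3))  # 0.985: a warranted, highly probable belief--still not a certainty
```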
But if, after the fact, you discover your warranted beliefs were in fact false, you will conclude that you did the wrong thing, that it was in fact morally wrong, and you just didn't know that at the time (and thus, typically, we will forgive you--but we won't go around saying that it is morally right to do that once we know it is not). Analogously, if you discovered facts confirming the earth is actually flat and all the evidence it was round was (let's say) forged or an optical illusion, you would agree that your previous belief that it was round was false, even though you didn't know that at the time. In other words, "the earth is round" was false even when you were warranted in believing it was true. Likewise, "I ought to do x" was false even when you were warranted in believing it was true.
It follows that moral truth (not moral warrant, but moral truth), just like all other empirical truths, follows from deriving conclusions with logical validity from true premises. It is therefore necessarily the case that the moral thing to do is what a rational, correctly informed person would conclude. Accordingly, when we make moral decisions, we ought to strive to be as near to that sort of person as we can be; indeed, warranted moral beliefs exist only when we are as near to that sort of person as we can get ourselves to be in the given circumstances. But always, regardless of what we are warranted in believing, the truth is something else: it is what a perfectly rational and informed person would conclude in our shoes. And it is precisely because that is so that warranted belief can only be what we conclude when we strive to be as near to that ideal as possible (otherwise it wouldn't be warranted belief), just as a scientist strives to be as epistemically rational and informed as possible in any other inquiry, whether astronomy or chemistry or anything else. Even though he can never be perfectly so, he knows his belief is more likely to be correct the closer he comes to that ideal, and so he knows he must get as close to that ideal as he can. (In fact his probability of error will quite simply consist of the gap between that ideal and how far he fell from it.)
Frankly, I shouldn't have to have explained this. It should be obvious that true scientific facts are conclusions derived with logical validity from true premises. Why would it be any different for morality?
(6c) McKay: I see one of our two main differences as being over whether fully informed and rational people will always act for the good of others.
Carrier: I never said fully informed and rational people will "always" act "for the good of others." This looks like another fallacy of circular argument: you seem to be assuming a priori that what is moral is what is done "for the good of others." I do not assume that. Indeed, I think it's obviously false. As in Marxism: sacrificing all your property and income "for the good of others" is in fact immoral. Conversely, reasonably respecting an individual's privacy and personal property is moral, which entails accepting that an individual who acts reasonably to protect their privacy and personal property is behaving morally, too.
We will react negatively to someone who does this in a fashion that harms other people. (Our negative response is one of two major reasons why it is not in an individual's best interests to do that; the other is that it is in an individual's best interests to be compassionate, because of the internal states compassion produces, and those internal states will then harm them directly when they knowingly harm others, so it is again not in their best interests to do so.) But we do not expect them to protect their own property and privacy "for the good of others," as if their own property and privacy "must" benefit anyone but themselves.
It is morally proper to have and care for your own stuff. It doesn't have to benefit anyone else. It's when you start hoarding stuff beyond your needs, while others thus become deprived by your avarice, that you become an uncaring nuisance, which will then have those negative internal and external consequences upon yourself. And then such behavior is no longer in your best interest. Only the ignorant or irrational do not realize this, i.e. only those who are not making decisions based on conclusions drawn with logical validity from true premises.
GT explains why this is the morally correct view of the matter, drawing on ample scientific evidence already gathered, and making further testable scientific hypotheses that have a high prior probability of being confirmed, owing to the evidence of similar past cases and prior confirmed hypotheses (like Game Theory).
(7c) McKay: The other [of those two disagreements] might be over whether we should morally condemn people who cause harm through actions in their own self-interest. If you agree that we rationally should react to such an action in the same way that we react to immoral actions, then that part of our dispute is merely terminological.
Carrier: I don't even see a terminological dispute. Acting in my own self-interest entails making alliances with my neighbors to defend our interests from attacks by others. One such mode of defense (among the cheapest, in fact) is public condemnation. I don't know why you would think I had said anything to the contrary. Indeed, GT entails two warrants for such behavior: first, we must use condemnation as a tool to defend our interests (e.g. it is in our interest to protect minorities from exploitation, because a system that does not do that won't protect us when we become the minority); and second, cultivating compassion is psychologically and socially in our interest (e.g. science has already documented that it makes for a more satisfying life), and compassion directly causes us to condemn behavior that unnecessarily harms others, simply because such behavior causes us pain, too.
And on GT, moral condemnation would be premised on both facts: when we morally condemn, we are in effect saying that the immoral actor is actually acting against his own interests and therefore behaving irrationally or ignorantly (or both), first by disregarding how he will be adversely affected by the social system that his own acts only reinforce (not only through our retaliation, though that as well), and second by depressing the quality of his life by suppressing his experience of, for example, empathy.
The Final Question
That completes the questions I received from McKay. Last but not least, there is another objection I've heard from other persons, one of whom I quote anonymously here (with some truncation of grammar), followed by my response:
Question: "I do not believe all goals seek changes in happiness, and not all that we want is some conscious experience. My goal to help my mom is not a goal to get the good feelings from helping my mom, it is to actualize the state of the world where my mother is helped."
To which I replied: That can't be true, and is a common example of people not being good at psychoanalyzing themselves. Because you simply must have a reason to want to actualize that state of the world, and that reason must be psychologically motivating--so motivating, in fact, that it overrides other desires (like to be lazy or selfish or to help someone else). Otherwise you wouldn't do it (and thus it wouldn't really be a goal of yours).
When we analyze what motivating reason that could possibly be, we will always find that it's a desire for a conscious experience (more correctly, a conscious state, that of satisfaction, which consists partly in the absence of perturbative feelings, e.g. of having failed one's mother). The same will follow for any other goal you actually have. All the scientific literature on emotion and motivation confirms this. And from that empirical fact, GT necessarily follows--as I formally demonstrate in TEC (pp. 359-64).
The analysis is always the same. If we are moral because being so is enjoyable (we "like" being good, and like seeing or thinking about its results), then we are serving our self-interest through pleasure-seeking. If we are moral because our conscience steers us clear of doing what we will be ashamed of or feel guilty for, then we are serving our self-interest through pain-avoidance. Likewise if we are seeking or avoiding the external consequences of either behavior (making friends and earning favors vs. making enemies and creating a hostile environment for ourselves). And yet that constitutes a complete description of a moral person: one whose conscience prevents them from doing harm, and who enjoys doing good, thus making them a good person. That they are doing this in order to produce better outcomes for themselves than would otherwise obtain is simply why being moral and being rational are synonymous.
11 comments:
I wrote a paper on defining well-being recently and I have to admit your analysis of morality here kinda steals my thunder a bit. I fully understand your theory, and that's why it's giving me a headache. I do feel like there is still use for my definition of well-being despite "satisfaction" potentially replacing it as the highest moral imperative. I'm not sure what that use would be though. And I still feel a bit of skepticism about whether satisfaction would actually trump well-being (I can't quite articulate this yet).
Last night while thinking about this I realized well-being could be more useful when it comes to assessing what you should do for another creature (in the real world, with limited knowledge). It's simply easier to look at how your actions might affect their well-being than look at how it might affect their "satisfaction."
Maybe I could get you to read my definition of well-being to see your thoughts. It's only 5 pages and is rather easy to follow imo :D
Yet more logorrhea and moralistic drivel from Richard the Preacher.
As though the wolf were ever "morally" wrong for proudly wringing the neck of the lamb.
(Not counting, of course, when the lamb invents a mystery "true" world with which to wage nihilistic revenge against the actual world we know. The moralist, too, is an "immoral" warrior.)
How is it that the pagans, in their divination of suffering and becoming and life, were so fundamentally healthy and yet Christianity has made us so ill that even 2000 years later some are still spouting this moralistic nonsense about the wrongness of suffering, and hence ultimately of life itself?
I'm really digging Goal Theory, but I have two questions:
1.) For someone who is generally concerned with being compassionate, what moral rules should we use to guide our action, if any? What reasoning would we use to solve moral issues like gay marriage or abortion? Should we use utilitarianism, and say that we should do whatever maximizes the happiness for everyone?
2.) Do we have any moral obligations to entities that are not capable of being moral agents because they lack the intelligence (psychopaths, nonhuman animals, trees)? If so, how would we derive them?
I suspect one way to make progress with these questions is to consider a 3rd actor observer with his own premises and thoughts of what "ought" to be done in addition to just the doer and the person to whom something is done.
>>one way to make progress with these questions is to consider a 3rd actor observer with his own premises and thoughts of what "ought" to be done
We should give him a name too, maybe something like "God"? Or "Richard"?
solon, your tenacity is comical. What motivates you to attack Dr. Carrier so vehemently?
Offtopic: Your skepticon 4 video is up at HamboneProductions's YouTube Channel.
I don't even know what Solon is talking about anymore. It's so disconnected from anything my blog actually says that it's basically unintelligible in the present context.
chancecosm said... It's simply easier to look at how your actions might affect their well-being than look at how it might affect their "satisfaction."
It's possible many translation models can be useful in different contexts. By analogy, it's easier to talk biochemistry without framing everything in terms of reductive quantum mechanics, even though that's all biochemistry is. Thus perhaps it's easier to talk about well-being than satisfaction, even though the one reduces to the other. That's a matter of pragmatics and applications, and I am only addressing foundations here.
As to the matter of application, though, my only worry would be that the culturally charged notion of "well-being" may lead you to class states as "well-being" that are less satisfying than other states, and in practice an informed agent will always prefer the latter. In effect, they will then have no use for your "well-being" because they can get something better, something more satisfying.
So either you are just defining well-being in terms of greater net satisfaction (in which case they are synonyms, your definition just being an expansion or elaboration of what is already meant by satisfaction) or you are not talking about the ultimate goal of human action anymore and thus are no longer talking about morality (because if there is some Y that one ought to do instead of X, X is no longer a moral imperative, Y is, as a moral imperative is by definition an imperative that supersedes all other imperatives).
Peter said... For someone who is generally concerned with being compassionate, what moral rules should we use to guide our action, if any? What reasoning would we use to solve moral issues like gay marriage or abortion? Should we use utilitarianism, and say that we should do whatever maximizes the happiness for everyone?
On these sorts of questions, read Sense and Goodness without God, in particular noting how all moral systems are actually unified and inseparable. Utilitarianism, Kantian deontology, and virtue ethics are all the same thing. Always it's the consequences that make the difference in merit for any action (utilitarianism), but among those consequences are the net consequences to yourself in the form of what sort of person you become in acting a certain way (Kantian deontology), and the only reliable way to gain the benefits of right action is to habituate and normalize moral dispositions so that they are actually enjoyable and automatic (virtue ethics). Or read another way, it's the consequences of the sort of person you are that matters overall and that you realize in your actions (consequences = utilitarianism; sort of person that you are = virtue ethics; that you realize in your actions = Kantianism).
They can't be separated. A utilitarian cannot rationally ignore the consequences of what sort of person he becomes when he acts (Kant) or what sort of person he becomes overall (virtue ethics), as those are real and serious consequences. Likewise a Kantian cannot even derive a categorical imperative without reference to consequences (as in deciding what rules can be universalized, one can only select based on the consequences of all the options) or virtues (as the decisions you make that determine what sort of person you develop into are themselves the proper objects of categorical imperatives, e.g. you would will it to be a universal law that one ought to cultivate a compassionate character in oneself).
The best way to think of a proper system is to use a top-down hierarchy of (1) virtue ethics, (2) with virtues identified according to the total consequences of the actions they produce and the effects they produce in you, (3) keeping in mind such consequences as how your behavior influences others or what the consequences would be if everyone acted that way (Game Theory has now replaced Kant in that respect, as being far more scientific, and thus Game Theory is now far more important to understand and study than Kant ever was).
Peter said... Do we have any moral obligations to entities that are not capable of being moral agents because they lack the intelligence (psychopaths, nonhuman animals, trees)? If so, how would we derive them?
The answer is yes, but not necessarily the same kinds of moral obligations. Overall life satisfaction depends on being compassionate, and compassion compels you not to enjoy or want pointless torment to exist, no matter what or who is experiencing it. It would cause you pain, and thus diminish your life satisfaction, to be a cruel or wholly indifferent person. But destroying an animal, for example, is not destroying a person (an animal is indifferent to when it dies, because it does not become anything and has no awareness of being something). Thus eating animals is fine as long as you aren't torturing them.
Trees can't experience torment, so that's not an issue. But how you treat trees has consequences to animals and people that you must consider. For example, it is a scientific fact that the density of healthy public trees in a neighborhood is inversely proportional to the crime rate (and causality has been established, thus increasing trees reduces crime; decreasing trees increases crime). As crime reduction is necessary to human satisfaction (everyone's, including yours), reasonable tree protection and development is a moral imperative (and removing trees unnecessarily is immoral--once one is aware of why). The relationship to crime happens to channel through the relationship between tree health and density and human happiness (you actually enjoy living in such neighborhoods more) and thus the life satisfaction effect of trees extends well beyond its role in modulating crime. But like many things, the cost-benefit ratio can make concern over tree density a low priority item in the scale of moral imperatives (e.g. needlessly cutting down a tree is not equivalent to murder).
How we treat psychopaths, meanwhile, is going to be more a matter of protecting society from them than anything to do with the psychopaths themselves. Torturing them for fun is wrong. Even merely retributive justice is pointless and thus irrational. But depending on the circumstances you may have to cause them considerable pain or loss for the greater good, whenever doing so is the lesser evil. This becomes a matter of weighing relative risks and costs. But a hardened psychopath experiences no significant compassion, and thus our compassion for them will be much more limited, closer to compassion for a shark or a rabid dog. Pointless torment, bad. Killing them, not so much. Provided they remain an incurable threat.