Thursday, February 23, 2017

The Journey to Graduate School Officially Begins

186 days until the first class . . .

And the process for admission officially began today. When I received my admittance letter in 2016, I asked for a one-year deferment so that I could make some more money. Now, that year is coming to an end. I just got my letter announcing my admission into the graduate program for 2017.

So, now begins all of the bureaucracy and red tape for becoming an actual student.

...you will receive more information about fall orientation meetings and workshops, registration, housing in the Boulder area, etc.

Well, this is what I wanted.

In other news, I promised to create a document on the foundations of desirism by mid-March. That document is progressing. The second part of the document is going down a path other than the one I originally intended, though, actually, I think it makes more sense.

The first 15 or so sections of the document came from my blog postings, "Desirism Book". They start with the story of Alph, alone on a planet with one desire - a desire to gather stones. It then describes in various ways how value is a relationship between a state of affairs and a desire. Things like: a desire is a propositional attitude that can be expressed in the form, "Agent desires that P", where "P" is a proposition. A desire that P provides Agent with a motivating reason to realize any state of affairs S in which "P" is true. That which is useful in bringing about S has instrumental value, and that which results from realizing S but is not aimed for is an unintended consequence.
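For anyone who thinks better in code, here is a toy sketch of that core definition. Everything in it (the names, the idea of modeling a state of affairs as the set of propositions true in it) is my own illustration, not part of the theory's official machinery:

```python
from dataclasses import dataclass

@dataclass
class Desire:
    agent: str
    proposition: str  # the "P" in "Agent desires that P"

def has_value_relative_to(desire: Desire, state_of_affairs: set) -> bool:
    # A state of affairs has value relative to a desire that P
    # exactly when "P" is true in that state of affairs.
    return desire.proposition in state_of_affairs

alph = Desire(agent="Alph", proposition="the stones are gathered")
print(has_value_relative_to(alph, {"the stones are gathered"}))   # True
print(has_value_relative_to(alph, {"the stones are scattered"}))  # False
```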

It then applies this concept of value to Robert Nozick's experience machine, G.E. Moore's "beautiful world", J.S. Mill's Socrates dissatisfied versus a fool satisfied, pushpin versus poetry, and similar questions that moral philosophers like to talk about.

Then I introduce a second person, Bett, and explain how Alph's desire to gather stones counts as a reason to give Bett a desire to scatter stones.

Then the discussion goes from there to introduce the fact that it is possible to mold desires using reward and punishment - including praise and condemnation.

Here is where that booklet took an unintended course. I made the society more complex and introduced the fact that it is reasonable in such a society to use reward and punishment to promote aversions to such things as lying, theft, assault, rape, and murder. These are aversions that people generally have many and strong reasons to create. I then went on to explain how this accounts for moral obligation, prohibition, and non-obligatory permission. I have written a section on how praise and condemnation are included in the meaning of moral terms.

The next section I will write has to do with supererogatory actions and the concept of an "excuse".

With those two sections written, I should then have a brief account of how - building up from premises that no materialist should have any trouble with - we can get a materialist account of morality that works in the real world that science has revealed to us.

I will then be posting that document in the desirism group on Facebook, asking folks to help me to edit it and improve it so that it serves as a useful introduction to desirism. It should be a document that people can hand out to others and say, "If you want a simple introduction to the theory, this is it."



Tuesday, February 21, 2017

Emotion and Morality

188 days until the first class.

Sometimes, I have my doubts about my ability to do this whole philosophy thing and whether I can get into graduate school.

On Friday, when I was at the university, I overheard one professor say to another that, unless an applicant shows absolutely no aptitude, as long as the student can afford it, they should go ahead and let that person into the Master's program and see what they can do.

Is that my status? I show at least some aptitude and can afford to pay for it?

Well . . . whatever . . . things are as they are. I will give it a shot and see what comes of it.

I have started my paper for this class I am somewhat auditing. An aspect of the class is concerned with the question of whether moral conclusions are based on emotion or reason. I wish to defend the idea that emotion is not a reliable foundation for moral beliefs; it is a reliable guide to the prejudices of one's age, but not all of those prejudices are moral. However, emotion (or desire) is essential for action, and emotions (desires) are, to some degree, malleable. Consequently, it is important to ask, "What emotions (desires) should we have?" This, of course, gets to the heart of desirism.

The first part of the course spoke about emotions as molded by evolution. This is true, to some extent. I have already sent the professor an email arguing that this gives us reason to deny that there are external reasons. If external reasons did exist, we would have to postulate a strange coincidence between the perception of external reasons and that which will produce reproductive success. The best explanation we have suggests that we have evolved internal reasons that tend to motivate us to engage in behavior that is useful in an evolutionary sense. However, evolution has given us malleable brains that allow us to adapt to a wide variety of environments, and this malleability suggests that we have some influence over the desires that other people have.

The second part of the course has had to do with neuroscience and the different parts of the brain that "light up" when people make moral judgments. Here, too, I have already sent the professor an email that says that we can study the parts of the brain that "light up" as people address questions such as the age of the earth and the possibility that humans evolved from simpler forms of life. However, the former would not make us astronomers and the latter would not make us evolutionary biologists. There are differences between investigating beliefs about X and investigating X.

I do not know what she thought about those answers. She has not made any comments yet.

I am wondering what she would say about receiving an actual paper.

Monday, February 20, 2017

Some Thoughts on Moral Luck

189 days until classes start.

I have not been posting at the frequency that I should. I need to be writing more - even if it means reading less. Though writing is not the absence of learning. I have learned that trying to explain my ideas forces me to make more sense of them myself. I get more from my writing than anybody else does, I think.

I fear that I have been having some doubts about my ability to do this philosophy thing. It's standard self-doubt, something that I am certain many people experience. I just have to keep plugging away and see what comes of it. If I am unable to do a decent job at this, at least I have not been so foolish as to think I could do a decent job at something on which the lives of others depend, like being President of the United States.

Imagine having an incompetent self-important person in that position. The consequences could be catastrophic.

I recently posted another paper on the Desirism forum on Facebook. This one concerns the moral failings of the Bernie Sanders presidential campaign. Even though the election is long past, this topic still comes up, particularly among people who think that Sanders would have defeated Trump and brought into existence a new golden age of global peace, harmony, and understanding.

The paper does not concern the question of who would have won the election. These types of questions are outside of my area of expertise. Instead, it concerns a question that I have addressed a few times in this blog - the moral failings of the Sanders campaign.

There were three that particularly concerned me.

(a) Sanders' preference for political ideology over scientific fact, particularly in the areas of nuclear power, genetically modified foods, alternative medicine, and fracking.

(b) Sanders' use of an "us versus them" political message to rally a group of supporters against a "them" whom he branded, as a group, as the enemies of "us". Trump used this technique targeting immigrants. Sanders used this technique targeting billionaires.

(c) Sanders' total disregard for the well-being of the global poor.

As I mentioned, I discussed this issue in some previous posts. However, in the paper I posted, I put more work into finding sources and producing complete arguments. You are invited to find the posting on the Desirism Facebook group if you want to know the details of the argument.

In the meantime, I have had an opportunity to attend the Phil 5100 class at the university twice and to see three other presentations.

In one of these class lectures, I acquired a new and different perspective on the issue of moral luck.

Moral luck concerns the fact that a person can be condemned and punished for things that are not their fault. The example used in class concerned two friends who shared some drinks and tried to drive home. Both were intoxicated. One of them left the road and crashed into a tree. The other left the road and crashed into a young child, killing the child. We declare that the second person deserves more punishment, even though both agents are equally culpable. No difference can be found in their character - in their desires - yet one deserves more punishment than the other.

I have long thought that this was a problem. My view was that we should determine what the normal amount of harm is by taking into account the cases in which there was extreme harm and the cases in which there was no harm at all, average the harms, and punish each person according to this average harm.

However, in this class I realized that all of the effort to average these cases is unnecessary if we only punish people for harms done.

Let us assume that there are 10 such accidents, and the amount of harm they inflict is: 0, 10, 2, 0, 0, 1, 9, 5, 0, and 4, for a total of 31 units of harm. We could add this all together, come up with the number "3.1 units of harm per accident", and punish everybody we catch according to this average risk of harm.

However, if we punish each person for harm done we get effectively the same result.

The "0" values represent people who made it home safely and do not get caught. I am assuming that they will get away with their crimes. The others got caught - some of them inflicting minor damage, and a couple inflicting extensive damage. Punish each person according to the damage done and, in the end, one would inflict punishment according to an average of 2.1 units of harm. The difference is that, instead of each person being punished as if they had inflicted 2.1 units of harm (including those who did not get caught), this represents the average punishment which, in some cases, is 0 and in others is quite high.

This option saves society the effort of determining what an average harm is. Yet, one still inflicts an average punishment that is proportional to the average harm, even though some of the perpetrators (those who made it home without incident) do not get punished at all.

Another benefit of this type of system rests in the fact that there may be ways of reducing the risk of harming others or the amount of harm inflicted. Such a system invites people to search for and follow those procedures. Doing so would reduce the chance or the amount of harm they inflict on others, and would reduce their level of punishment. We may not be able to consciously identify those factors, but that does not mean that they are not out there to be considered.

We allow for luck in a number of areas. An example of moral luck with respect to credit can be seen in the case of two soldiers who rush out of their shelter to assault an enemy stronghold. One of them, in our example, gets shot right away, while the other manages to get up to the stronghold, throw in a couple of grenades, and neutralize it. The former becomes just another casualty. The latter becomes a hero. Yet, in this example, there is no difference between the individuals that determined these different outcomes. It is just that one of them happened to get in the way of a bullet, and the other did not.

And we allow for luck in a number of cases of actions that are neither heroic nor blameworthy, but which are morally permissible. Some people buy a lottery ticket and enjoy a great reward. Most others get nothing for their effort. Here, too, the rewards are not based on any difference in moral character, but we allow the differences in luck to stand nonetheless.

Concerning the other presentations I attended, I fear two of those presentations did me little good. One of them concerned the existence of sets and, I am afraid to say, the discussion went past me. It is as if the speaker was speaking a foreign language. Another presentation on corporate responsibility was given by a speaker whose accent was difficult for me to understand. This made his argument difficult to follow.

The third presentation, on the other hand, concerned two different conceptions of free will. I have been trying to find time to write up some comments on that presentation, and hope to at least post something here in the next day or two.

I have spent much of today working on my paper on the basics of desirism. I am up to 15,000 words at the moment, and it is still growing. It will likely be 25,000 words when I finish; then it will go up on the desirism forum. According to my self-imposed deadline, I need this submitted by the middle of March. I should be treating this like a school assignment with a deadline.


Monday, February 13, 2017

The Value of an Interest

196 days until the start of class.

Nervousness abounds.

In my continuing work in the one class I am "auditing", I continue to have reservations about the idea that neuroscience or evolutionary theory can debunk moral claims in the ways in which some people claim. I am still looking at Joshua Greene's work, “The Secret Joke of Kant’s Soul,” and responses to it.

A frequent frustration in philosophy occurs when philosophers are involved in a dispute focused on some specific premise or conclusion where the parties at least seem to agree on the truth of one of the relevant premises. Then somebody comes along and questions that mutually agreed-upon premise, throwing the whole discussion into chaos. It is enough to drive a person to scream and run from the room.

Yet, that is what I am going to do here.

In discussing trolley cases, researchers seem to agree that whether a person harms another in an “up close and personal” way (e.g., by physically pushing that person onto the tracks in front of an oncoming trolley) or remotely (by pulling a switch that opens a trap door that drops the person onto the tracks) is morally irrelevant. This does not represent a morally significant difference. However, there seem to be a number of people who would not push an individual in front of a runaway trolley to prevent it from running over five others, but who would drop that person onto the tracks through a remotely operated trap door.

Honestly, I do not think that this is a morally relevant difference, and those who see it as a difference are making a mistake. However, the way that Greene handles this mistake does not seem to work.

Greene explains the difference between the two in terms of an emotional reaction that, itself, has an evolutionary source.

The rationale for distinguishing between personal and impersonal forms of harm is largely evolutionary. “Up close and personal” violence has been around for a very long time, reaching far back into our primate lineage (Wrangham & Peterson, 1996). Given that personal violence is evolutionarily ancient, predating our recently evolved human capacities for complex abstract reasoning, it should come as no surprise if we have innate responses to personal violence that are powerful but rather primitive.

However, we can say the same thing about our aversion to individual pain. I tend to pay far more attention to my own pain – finding it much more important to avoid or to relieve my own pain than an equal pain suffered by any other person. This stronger reaction to my own pain than that of any other person is “evolutionarily ancient” and “predating our recently evolved human capacities for abstract reasoning.” Yet, I am permitted to treat this as morally relevant. I have a moral permission to give my own pain a priority over the pain of any other person.

Similarly, a parent’s interest in the well-being of their child – or our interest in the well-being of children generally – probably has an evolutionary component. The physical features of children (and puppies, and kittens) likely arouse in most people protective sentiments that urge us to put the interests of children above those of adults. Yet, we are comfortable with the idea of arguing that whether the interests in question are those of children – and, in particular, one’s own children – is morally relevant.

If this evolutionary account is at all relevant, it seems that we should dismiss the distinction between our own pain and the pain of others, and the distinction between the well-being of children (and, in particular, of our own children) over the well-being of others as well. On the other hand, if our own pain and the well-being of (our own) children retains its moral relevance in spite of this evolutionary explanation, then providing an evolutionary account of the distinction between up-close and personal harm versus remote harm should not debunk that moral sentiment either.

There are consequentialists who would, in fact, argue that we should treat all three of these cases the same. A person ought not to consider their pain more important than anybody else’s, and ought not to put the interests of children (even their own) above those of any other person. This is the type of consequentialism I wrote about in my previous communication – the type that implies that any interest other than an interest in general utility is a temptation to do evil.

At the same time, we cannot argue that all interests where we can provide an evolutionary account must be respected. Perhaps we can give an evolutionary explanation for a disposition to favor those who “look like us” (for example, with respect to skin color) since they are likely to share more of our genes, or a genetic disposition for males to be less concerned about consent in seeking sex. This would not argue for the moral permissibility of racism or rape. If there is a moral difference to be found here, the fact that we can tell an evolutionary story about a sentiment neither supports nor debunks the moral relevance of that sentiment.

I tend to think that the secret formula concerns the tendency of an interest to fulfill or thwart other interests. But that's just me.

Friday, February 10, 2017

The Impossibility of Consequentialism

198 days until the first class.

Work has gotten exceptionally busy these days, and I am coming to resent its ability to cut into my ability to spend time studying moral philosophy.

I have been able to keep up with my readings . . . and I have continued to send comments to the professor. (And I have continued to fear that this is a poor way to do this.)

Nonetheless, in my most recent comments to the professor I opted to give a slightly detailed argument on the impossibility of consequentialism.

We are all deontologists.

I am sorry for the length of this. Even with this, I fear that I cover some things far too lightly.
 
Joshua Greene, in “The Secret Joke of Kant’s Soul,” argues that deontology is merely a confabulation – a make-believe explanation – that attempts to account for moral judgments that are actually the product of evolved sentiments. Evolution has made us so that we are disposed to object to - for example - “up close and personal” assault. The deontological claim that this violates some right or duty or some aspect of human dignity is a made-up explanation that tries to justify these evolved sentiments. But, in reality, they are nothing more than evolved sentiments.

Greene provided an analogy whereby a friend – who goes on multiple dates – offers a number of reasons, such as sense of humor, for preferring some individuals over others. However, hypothetically, we notice that all of the people she likes are exceptionally tall (above 6’ 4”), and those she does not like are shorter. Since height is a better predictor of whom she likes or dislikes, we draw the conclusion that she is really judging these people on the basis of height. The other qualities she brings up – such as sense of humor – are mere confabulations.

Of course, he must assume that there is no correlation between a sense of humor and height.
 
Yet, as I see it, consequentialism cannot exist without at least a little deontology.

According to Greene, consequentialism involves the cognitive portions of the brain as the individual goes through the effort of evaluating the consequences of various actions. But what does one do with this answer? For example, let us assume that an agent goes through a cognitive process to determine the effects of various actions on the overall number of paperclips in the universe. Even after he computes that one action will produce more paperclips than the other, he still has to care about how many paperclips there are in the universe before this conclusion has any significance.

Admittedly, I am assuming that internalism with regard to reasons for action is true.
 
Now, let us invent an agent who cares about how much overall utility he creates. The more utility he creates, the more he cares. In this case, the agent has an option to do something that will produce 104 units of total utility. Let us further assume, for this agent, producing 104 units of utility has an importance of 4. I use this number only for illustrative purposes. The only thing that matters for the sake of this example is that higher numbers represent greater importance to the agent, and lower numbers represent less importance.

In this example, the agent cares about more than just overall utility. Our agent also has an aversion to personal pain. The more severe the pain, or the longer it lasts, the more important it is to that agent to avoid that pain.

Now, let us consider a couple of cases.
 
Case 1: Let us imagine that the 104 units of utility that the agent will produce has the following distribution: 105 units for everybody else, and -1 unit for the agent's pain. In this case, producing the utility has an importance of 4 while avoiding the pain, let us assume, has an importance of 1. Finding utility to be more important, our agent chooses to bring about utility.
 
Case 2: In this case, the action will also produce 104 units of utility. However, its distribution consists of 109 units of overall utility and -5 units due to the agent’s pain. The agent, in this case, assigns a value of 5 to avoiding this much pain. It is very important to him. It is so important that the agent will sacrifice the opportunity to create 104 units of utility.
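The bookkeeping in these two cases can be captured in a few lines of Python. The function and its name are mine, invented for illustration; the numbers are the ones stipulated above:

```python
# A minimal sketch of the two cases, assuming (as stipulated) that the
# agent simply acts on whichever interest is more important to him.

def chooses_utility(utility_importance: float, pain_importance: float) -> bool:
    """True if the agent produces the utility; False if the
    aversion to personal pain wins out."""
    return utility_importance > pain_importance

# Case 1: producing 104 net units (105 for others, -1 of pain for the
# agent) has importance 4; avoiding the mild pain has importance 1.
print(chooses_utility(4, 1))  # True  -> the agent produces the utility

# Case 2: still 104 net units (109 for others, -5 of pain), but avoiding
# this much pain has importance 5.
print(chooses_utility(4, 5))  # False -> the agent sacrifices the utility
```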

In the second case, how are we to judge this person who sacrificed overall utility for the sake of this competing interest?

The consequentialist response seems to require that we understand his aversion to pain as a temptation to do evil. Without it, he would have given his service to realizing the greater overall utility. However, the aversion to personal pain motivated him in this case to sacrifice this greater good for something that was personally important to him.

In fact, many of our interests other than the interest in overall utility will turn out to be temptations to do evil. With any other interest, we are likely to encounter situations where the importance of this interest will be greater than the importance of the utility one can create. Utility will find itself outweighed most often in cases where the increase in utility is small, but it can also happen where a particularly strong interest goes up against a larger amount of utility.

In contrast, the deontologist will tell us that it is perfectly acceptable to sacrifice overall utility under some circumstances - that other values have a greater priority. We may not subject a person to a great deal of pain, even if it would bring about some small increase in overall utility.

I think I can make this clearer by applying this to some of the "moral dilemmas" that Greene refers to, where deontological thinking seems to override consequentialist thinking.

But, first, I wish to look at another sort of case.

At the end of the movie "Mad Max", Max handcuffs a man by his ankle to an overturned vehicle about to explode. He then gives the man a hacksaw. He tells the man that it would take him about ten minutes to cut through the handcuffs, but five minutes to cut through his leg.

It would be useful to have some empirical research to back this up, but I suspect that many people (like the villain in the movie) would be reluctant to cut through their leg, even to save their own life. It would simply be very difficult to do. A person who finds it difficult to cut through his own ankle even to save his own life would generally find it even more difficult to cut through his ankle for the sake of overall utility. Overall utility just is not important enough to most agents.

Now, I would like to compare this to some of the moral dilemmas that Greene mentions in his studies.

For example, there is the case of the mother who is reluctant to suffocate her child to keep the child from crying and drawing the attention of a murderous gang. The "pain" of suffocating one's own child would be like the pain of cutting off one's own foot. In fact, for many, it would be worse. Cutting through one's ankle would be easy by comparison. This is a situation like Case 2 above where an interest in something other than overall utility outweighs the interest in overall utility, motivating the agent to sacrifice overall utility for some other end.

Both types of pain can be explained by appeal to the same types of evolutionary forces. Greene wrote:

The rationale for distinguishing between personal and impersonal forms of harm is largely evolutionary. “Up close and personal” violence has been around for a very long time, reaching far back into our primate lineage (Wrangham & Peterson, 1996). Given that personal violence is evolutionarily ancient, predating our recently evolved human capacities for complex abstract reasoning, it should come as no surprise if we have innate responses to personal violence that are powerful but rather primitive. (P. 43)

The aversion to pain, or to cutting off one's own limb, or to suffocating one's own child is open to the same type of explanation.
 
However, Greene goes further and says that this is something more than a simple desire or aversion. Instead, he claims to be explaining a "moral sense" that something is good - or bad - to do. In the case of "up close and personal" battery, he wrote:

Nature doesn’t leave it to our powers of reasoning to figure out that ingesting fat and protein is conducive to our survival. Rather, it makes us hungry and gives us an intuitive sense that things like meat and fruit will satisfy our hunger. Nature doesn’t leave it to us to figure out that fellow humans are more suitable mates than baboons. Instead, it endows us with a psychology that makes certain humans strike us as appealing sexual partners, and makes baboons seem frightfully unappealing in this regard. And, finally, Nature doesn’t leave it to us to figure out that saving a drowning child is a good thing to do. Instead, it endows us with a powerful “moral sense” that compels us to engage in this sort of behavior (under the right circumstances). (P. 60)

Insofar as a "moral sense" that something is good or bad to do is different from a simple desire or aversion, Greene actually needs to do a little more work to give us an evolutionary explanation for this moral sense. In the same way that nature can motivate our behavior with a mere desire to eat without a "moral sense" that eating is a good thing to do, and with a simple desire to have sex without a "moral sense" that having sex is a good thing to do, it can motivate us to avoid suffocating our own children, to avoid committing battery against another person, and to rescue a drowning child without a "moral sense" that these are good things to do.

However, I do not think that this would be necessary. Instead, Greene can give up his idea that we have evolved some type of moral sense and simply acknowledge that we have evolved to have certain preferences, and that those preferences might, in some circumstances, outweigh an agent's concern for overall utility. At that point, we must either brand all such interests as temptations to do evil, or acknowledge that there is a moral permission to pursue interests other than an interest in overall utility. There is a point at which any of us who value things other than overall utility will sacrifice overall utility for one of these other goods.

Since all of us have an interest in at least one thing other than overall utility, and since none of us think that morality requires that we view that interest as a temptation to do evil, it follows that we are all - at some point - deontologists. Sometimes we can sacrifice overall utility for the sake of something else that we value.

Wednesday, February 01, 2017

Belief, Justification, and the Coming War with China

208 days until classes start.

If we live that long.

I have been spending the last day contemplating the proposition that a war between the United States and China is highly likely.

This, in turn, got me to thinking about beliefs and the justifications for beliefs.

Do you know that the vast majority of complex propositions that people claim to know are false? Consider, for example, religious beliefs. There is a wide variety of beliefs about the nature of a God, or even about whether God exists at all. Furthermore, many of these beliefs are mutually contradictory. Not only is there a large number of different religions, there is a wide variety of beliefs within any religion. Consequently, at best, only a small handful of people can have true beliefs. The vast majority of people must have false beliefs – regardless of how certain their beliefs are.

Many atheists mock theists on the grounds that, “With all of the various religions out there, isn’t it amazing that you got the correct religion and that everybody else is wrong?”

Of course, if you simply add atheism to this set of beliefs, we can make the same claim.

However, this also applies to secular matters. It applies to beliefs about the nature of morality. Of all of the various ideas out there – none of which is actually held by more than a small number of people – “Isn’t it amazing that you managed to pick the correct one?”

So, in considering the proposition that a war with China seems likely, I consider it more likely that I would not be able to make a correct judgment as to what is likely. This is one of those complex beliefs that would benefit from a lot of specialized knowledge that I do not have.

Still, let’s consider the evidence I do have for this belief.

If the Trump administration establishes a blockade of the islands in the South China Sea that China claims as its sovereign property, then war is almost certain. So, the probability of war is to be determined by the probability that the Trump Administration will establish a blockade around those islands.

Let’s examine the evidence for this claim.

If the United States were to establish such a blockade, the Chinese people would force the Chinese government to stand up to the American bully. The government must either challenge the blockade or appear weak and unfit in the eyes of the Chinese people. It would not want to appear weak. Therefore, it will challenge the blockade.

When China challenges the blockade, the American government will either have to use violence to enforce the blockade (attacking the ships or airplanes that attempt to run it), or back down. The United States will almost certainly not back down. Therefore, the United States will almost certainly shoot at those ships and planes.

When the US shoots at those ships and planes, the Chinese will shoot back. And the war begins.

Each of these steps is highly likely – virtually certain. Therefore, if the US sets up a blockade, it is virtually certain that the US will be at war with China.
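As a back-of-the-envelope illustration (the 0.95 figures are made up for the sketch), even a chain of steps that are each "virtually certain" compounds to something a bit less certain:

```python
# Hypothetical probabilities for each "virtually certain" step in the
# chain from blockade to war.

p_china_challenges  = 0.95  # China challenges the blockade
p_us_shoots         = 0.95  # the US enforces rather than backs down
p_china_shoots_back = 0.95  # China returns fire

p_war_given_blockade = p_china_challenges * p_us_shoots * p_china_shoots_back
print(round(p_war_given_blockade, 2))  # 0.86 - still high, but not certain
```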

So, what are the odds that the United States will set up a blockade?

President Trump and Secretary of State Tillerson have already spoken positively about setting up a blockade. (See "Trump Vows to Stop China Taking South Sea Islands".) There is a distinction between talking about something that will cause a war and doing it. However, talking about it has no value unless others believe that one will actually do it (otherwise, they ignore the talk as irrelevant).

However, such an attitude can lead to action – which leads to war – in two ways.

The first way is bungling incompetence. That is to say, the Trump administration announces a blockade under the assumption that the Chinese will not dare to challenge them and will back down. I consider this to be a stupid assumption – the people of China themselves will demand that their government stand up to the American bullies or will insist on replacing them with somebody who will.

The second way is if the Trump Administration wants a war. If it did, this would be an easy way to start one short of naked aggression. But why would the Trump administration want a war with China? The main reason is to bolster support for the President. If one wants to improve a leader’s popularity, one proven and effective way of doing so is to start a war. History shows us that leaders tend to be overconfident to the point of delusion in thinking, “They can’t stand up against us. We’ll be home by Christmas.”

Given the way Trump has handled his executive orders and other decisions to date, the first option seems likely. And given Trump’s verbal cruelty, his eagerness to use the courts against those he has not liked in the past, the fact that he prefers bullying to negotiation and compromise, and the fact that he has created an image of himself as a person of strength, it seems reasonable to believe that he will be quick to do something that will start a war and will not back down from that outcome.

By accident or by intention, there is reason to believe that Trump is likely to establish a blockade of the South China Sea islands and, from there, the laws of nature (including human nature) dictate that war is the necessary outcome.

But, then again, a great many people have an attitude of certainty about such conclusions, even though only a small percentage of these types of conclusions can be true. That means each such person is almost always wrong. So I can draw some comfort from the fact that, though this chain of events appears likely to me, it is almost certainly wrong.

I hope.