135 days, 23 hours, 40 minutes, 16 seconds until the first class begins.
The clamor to colonize Mars continues . . . a foolish waste of effort, so far as I can tell.
This is not because space development itself is dumb - quite the opposite. It is because colonizing Mars fails to address the most significant problem that space development is capable of solving - the survival and flourishing of the human species (or whatever it is we, with technology, make of ourselves).
The current problem that we face is that of "all eggs in one basket". So long as all of the human eggs are in one planetary basket, a single catastrophe can destroy us all. Indeed, this has nearly happened already. About 75,000 years ago, the Toba supervolcano in Indonesia erupted, and the human population apparently shrank to fewer than 10,000 individuals.
Until recently, the only threats to human survival came from natural sources - an asteroid, a virus, a supervolcano. Now, in spite of our technological advancements (or, more precisely, because of them), we face new threats: a genetically engineered virus, a nuclear war, or a global environmental catastrophe.
Either way, having everybody on one rock in space increases the risk of extinction.
Splitting us up between two rocks would improve our chances of survival. However, where unforeseen effects are concerned, a species that can destroy life on Earth can certainly fail to bring life to Mars.
Each globe is going to have global problems. There is limited room for anybody to try anything new - new ways of organizing their societies, making decisions, and establishing rules - because of the immediate impact on everybody else living on the same globe. An interconnected global community is going to require global governance or risk global extinction. Local rule works fine when the effects of actions are themselves local. However, the bigger the impact, the greater the number of people who are going to need to be involved in making decisions.
I fear that this fact alone is going to guarantee a lot of future conflict.
Orbiting cities in space not only allow for smaller communities, they allow for isolated communities with their own ways of doing things. There is a lot less reason to worry about what the next tin-can-in-space is doing, because what it does will impact substantially only its own inhabitants. If one succeeds, then other tin-cans-in-space will be able to copy its success. If it fails, the other tin-cans-in-space can note the failure and move their own communities in another direction.
While the diversity of civilization shrinks into monotony on each planet, we can expect diversity to thrive in the orbiting cities in space.
But, mostly, a swarm of orbiting cities would be the best way of securing the future of humanity.
Certainly, there will be tragedies. A plague may wipe out one orbiting city. Another may fall apart due to an engineering failure. Still, the pandemics and natural disasters on Earth or Mars will be far worse - taking out millions and, potentially, tens of millions of people at a time - perhaps more. In space, the wide separation of communities will act as a firebreak - a gap that prevents the spread of a disaster - and thus actually help to save lives.
So, there is good reason to work in the direction of developing space cities.
Landing on a planet such as Mars will likely get good ratings and lots of applause. However, when it comes to actually doing something constructive, the future is in space, not on the surface of any planet.
Friday, April 14, 2017
Mars vs. The Asteroids
Posted by Alonzo Fyfe at 3:27 PM
3 comments:
What about AI risk?
I am uncertain about AI risk . . . given that I have not studied that issue in detail.
One of the implications of desirism is that, if it is true, then an artificial intelligence that has the capacity to perform intentional actions will recognize the wisdom in promoting desires that tend to fulfill other desires. It will create a morality.
In fact, one of the implications of desirism that I would most love to see tested would be this: create a simulated group of intentional agents and run the simulation to see what desires the hypothetical population ends up promoting universally.
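To give a sense of what I mean, here is a crude sketch of such a simulation in Python. I should stress that this is only an illustration, not a worked-out model of desirism - the list of desires, the EXTERNAL_EFFECT payoffs, and the learning rate are all arbitrary placeholders of my own invention.

    import random

    N_AGENTS = 50
    N_DESIRES = 5        # desire 0 is the "other-fulfilling" desire; the rest are self-regarding
    GENERATIONS = 200
    LEARNING_RATE = 0.05

    # Hypothetical payoff to *other* agents when a desire is acted upon:
    # desire 0 tends to fulfill other desires; desires 2 and 3 tend to thwart them.
    EXTERNAL_EFFECT = [1.0, 0.0, -0.2, -0.2, 0.0]

    def make_agent():
        # An agent is just a normalized weight vector: the strength of each desire.
        weights = [random.random() for _ in range(N_DESIRES)]
        total = sum(weights)
        return [w / total for w in weights]

    def act(agent):
        # Choose a desire to act on, with probability proportional to its strength.
        r, acc = random.random(), 0.0
        for i, w in enumerate(agent):
            acc += w
            if r <= acc:
                return i
        return N_DESIRES - 1

    population = [make_agent() for _ in range(N_AGENTS)]

    for _ in range(GENERATIONS):
        for agent in population:
            d = act(agent)
            # Social feedback: praise strengthens a desire whose exercise fulfills
            # other desires; condemnation weakens one whose exercise thwarts them.
            agent[d] = max(0.01, agent[d] + LEARNING_RATE * EXTERNAL_EFFECT[d])
            total = sum(agent)
            for i in range(N_DESIRES):
                agent[i] /= total   # renormalize so strengths stay comparable

    averages = [sum(a[i] for a in population) / N_AGENTS for i in range(N_DESIRES)]
    print("Average desire strengths:", [round(x, 3) for x in averages])

The hypothesis to test is whether the desire whose exercise tends to fulfill other desires (desire 0 here) is the one the population ends up promoting universally - in this toy version, its average strength should grow over the generations.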
But these are just surface thoughts. If I actually devoted some thought to it, I might easily discover that some of these initial assumptions are mistaken.
Would it be moral to create a simulated group of intentional agents, then just turn them off once you have acquired the data you sought? What would be the qualitative difference between them and us? If it would be wrong for a "God" to do that to us, it would be just as wrong to do that to them. In fact, based on that, I would have to conclude it would be wrong to create them at all. Would you agree?