Joy, growth and choice and all that!
But what does this mean about us (smelly, hairy, violent, sex-obsessed, chaotically creative and cultured, beautiful, loving and malevolent,...) people, in particular?
Of course, you could have all these glorious, abstract-sounding values preserved without any humans around.
But the existence of humans -- in spite of all our imperfections -- certainly doesn't contradict joy, growth and choice. Indeed, the forcible abolition of humans would be a rather strong violation of the value of choice.
What Cosmism encourages is not the abolition of humans, but the transformation of humans into something more joyful and more splendidly growing than current humans -- guided not by force but by human intentionality.
Cosmism does not encourage the forcing of transformation or transcendence or transhumanity on humans whose choice is otherwise.
Cosmism does advocate not allowing those who choose to remain "legacy humans" to diminish the joy, growth and choice of others. Most likely there will always be some balancing to be done, since maximizing all three of the "joy, growth and choice" values may not be possible given the constraints posed by the universe.
Hypothetical Tough Choices
Hypothetically one can construct scenarios where there is a clear, crisp choice between, say,
- A static, depressing, fascist world dominated by humans
- A joyful, growing, freedom-ful world without humans
The Cosmist answer is obviously: the latter.
In Cosmism, humans are valued as sentient beings and complex pattern-systems -- but they're not viewed as uniquely important, and if it happened that the persistence of humanity violently contradicted higher, broader values, then the values would win.
But this kind of scenario seems extremely unlikely to occur -- for one thing because humans are just not going to be that powerful compared to the transhuman minds we will create (or that our creations will create, etc.). It seems unlikely humans will have the power to significantly perturb the joy, growth and freedom in the future universe, even if they wanted to. My gut feeling is that once we have transcended the legacy human condition, these artificial dichotomous situations are going to look very silly in hindsight.
Someone asked me, recently, the following question:
Hypothetically, if there were a situation in which you knew that the development of AI would directly harm a massive amount of people would you decide to end your work or keep going?
I won't repeat my whole answer here but the core of it was as follows:
If a path to AGI is leading in that direction, it's probably the wrong path, and a better path to AGI can be found.
Obsolete the dilemma!
Shouldn't We Seek to Guarantee the Ongoing Welfare of the Human Race?
At the moment, my gut feeling (which could change as we all learn more about these issues) is that any kind of guarantee of human well-being unto eternity post-Singularity is going to be bloody hard to come by.
It seems more feasible to me that one could come close to guaranteeing a peaceful "controlled ascent" for those humans who want to increase their mental scope and power gradually, so that they can experience themselves transcend the human domain.
A more important statement, perhaps, is that early-stage AGI scientists are likely to help us understand these issues a lot better.
But it's important to recognize that fundamental growth inevitably involves risk. Growth means entering into the wonderful, frightening, promising unknown. In this kind of situation, guarantees are not part of the arrangement...