Is it right to create radical new technologies when they are potentially dangerous?
Shouldn't we prioritize the survival of our species, rather than taking risky gambles on new technologies that could lead to great things but could also lead to destruction?
Indeed, shaping the future involves a host of difficult balancing acts.
The Proactionary Principle: Weigh the Costs of Action versus the Costs of Inaction
If the human world were a well-organized, peaceful place, in which some benevolent Central Committee of Technology made centralized decisions about which technologies to explore, and at what pace -- then, almost surely, it would make sense to manage our development of powerful technologies very differently than we do today.
But that's not the world we live in. In our present world, multiple parties are working on advanced, potentially radically transformative technologies in diverse, uncoordinated ways. Many of these parties are working toward explicitly military goals, aiming to create advanced technology that allows one group of humans to physically dominate another.
In this context, there is a strong (though not unassailable; these are difficult issues!) argument that the most ethical course is to move rapidly toward beneficial development of advanced technologies ... so as to preempt the destructive (and potentially species-annihilating) consequences of their rapid development toward less beneficent ends.
Do We Need an AI Babysitter?
An extreme form of this position would be as follows:
We humans are simply too ethically unreliable to be trusted with the technologies we are developing ... we need to create benevolent artificial general intelligences to manage the technology development and deployment process for us ... and soon, before the more monkey-like aspects of our brains lead us to our own destruction.
There is a group (I'm on its Board, but not heavily involved) called the Lifeboat Foundation that exists to look out for "existential risks" -- things that threaten the survival of the species. This is a worthy pursuit -- but at the moment, it's very difficult to rationally assess the degree of risk posed by technologies that don't yet exist.
One macabre explanation for the apparent lack of intelligent life elsewhere in the cosmos is the following: on various planets in the galaxy, as soon as a civilization has reached the point of developing advanced technology, it has annihilated itself.
A less scary variant: once a civilization reaches advanced technology, it either annihilates itself or Transcends to some advanced mind-realm where it's no longer interested in sending out radio waves or gravitational waves or whatever, merely to contact civilizations still in the brief interval between developing reasonably advanced tech and reaching the Singularity.
Ray Kurzweil, among others, advocates "selective relinquishment," wherein development of certain technologies is slowed while advanced technology as a whole is allowed to accelerate toward Singularity. This seems the most likely path -- though the outcome cannot be predicted with anything near certainty.
Here it seems apt to quote the famous, supposedly Chinese curse: "May you live in interesting times."
Which, from a Cosmist view, is -- of course -- closer to a blessing.
Certainly, we must approach the unfolding situation with continually open hearts and minds -- and appropriate humility, as we are each but a tiny part of a long evolutionary dynamic that extends far beyond our current selves, in both past and future.
But there is also cause for activism. The future will be what we make it. Sociotechnological systems have chaotic aspects, so small individual actions can sometimes make dramatic differences. There may be opportunities for any one of us to dramatically affect the future of all of us.