Epistemism Makes AGI An Existential Opportunity, Not An Existential Risk
AGI is normally seen as an existential risk in longtermist thinking. In Epistemism, however, the variant of longtermism that I’m developing (see here or here for an introduction), it becomes the opposite: not an existential risk, but an existential opportunity (or existential hope/eucatastrophe, in the terminology of Foresight). This follows from the combination of several key axioms and tenets of Epistemism.
First, consciousness is key in Epistemism. Epistemism holds as its first axiom that the end goal should be the perseverance of sentience in the universe, i.e. that there is sentience that is aware of its own existence and of its place in a broader whole. A universe devoid of sentience, unable to appreciate its own wonders, holds no meaning (a sentiment expressed by Tegmark in Life 3.0).
Second, one of the key tenets of Epistemism is to be completely non-speciesist. We should therefore not assume that human sentience is the only sentience worth preserving. If we find sentience in animals, aliens or machines, it is equally worth preserving and of equal value. Just as it seems illogical to assume that human intelligence represents the global optimum in the intelligence state space, it seems equally illogical to assume that humans represent the apex of evolution and that evolution would suddenly stop here. Rather, it seems much more likely that evolution will continue and that it will be post-humans and their descendants that persist in the universe over the longer term.
We should also not assume that humans represent in any way the height of moral value. Our moral circle has steadily widened, from free men to all men, then to all men and women. The mainstream opinion at any given time is reasonable given the state of science at that time. In anno 2022, we happen to be at a point where all humans are seen to matter, and increasingly, some groups of animals as well. However, this is just where we are right now. It seems logical to assume that the circle will keep widening. There is therefore no reason that artificial intelligences won’t be included among the moral beings of the future. In fact, since placement in our moral circle is justified based on intelligence, machines may come to hold the innermost circle, displacing humans from our throne. It is also likely that humans and machines will eventually merge in some fashion. Throughout history, humans have jumped at every opportunity to incorporate technology into how we operate, and that is unlikely to stop just to preserve the human variant of anno 2022.
We are therefore in a position to say that AGI is not an existential risk if what we value is the long-term continuation of sentience in the universe and we are indifferent to whether that sentience comes from a human, a post-human or a machine intelligence.
Regarding aliens, a related point was made on LessWrong in reference to Hanson’s Grabby Aliens: existential risk decreases if other civilizations already exist (assuming that they are sentient). Epistemism may therefore be seen as an anti-humanism, to paraphrase Sartre’s statement that existentialism is a humanism. But that does not make it a pessimistic or nihilistic philosophy. It is instead deeply optimistic about the possibility that the universe will continue to “people”, as Watts expressed it, just with a broadened sense of what people may mean.
Existentialism correctly diagnosed the absurdity and unpredictability of the world, but provided what now seems like the wrong prescription by increasing the focus on the individual and their free will to create their own meaning. We have seen the world become increasingly individualistic, and with that, cooperation has suffered. Meaning must instead come from the individual’s role in the bigger whole, the long-term project. Each person can make an epipositive contribution to the world in their life. Most of these will by definition be very small, but in the long-term perspective they all add up to a bigger whole. Keynes was off the mark when commenting that in the long run, we are all dead. Rather, in the long run, everything counts. Every small epipositive contribution adds up to the bigger whole that matters for the infinite perseverance of sentience in the universe. Schwitzgebel has made a similar point.
AGI therefore seems to hold enormous upside for the Epistemistic goal, rather than only risk. AGI will work on several of the factors in the Epistemism formula (maximize knowledge optionality to preserve sentience); a schematic version of this argument is sketched after the third point below. First, AGI will help directly with the knowledge component. Once we get to what Karnofsky calls PASTA – a Process for Automating Scientific and Technological Advancement – AGI will help with both knowledge creation and preservation. In Epistemism, knowledge is the means to get to the goal of preserving sentience in the universe. In order to pass the Precipice and increase the probability of long-lasting sentience, we must create more options than our current civilization on our current planet provides. It is therefore key that humans spread throughout the solar system and beyond. For this to happen, our current knowledge – scientific as well as humanistic – must be preserved, and new knowledge must be created. AGI can be expected to significantly speed up knowledge creation, as it can run much faster and process much more information than humans. It should also help with knowledge preservation, since making backups is a lot easier for machines than for humans.
Second, AGI may not even need to pass through the instrumental goal of knowledge and could go directly to the end goal of sentience. At any point in time, we should assume that our theories are wrong and will be subject to updating. This has been true at all times, from Ptolemy to Einstein. However, it is equally true that we have no choice but to form our world view on the best information we have at any given time. Therefore, at the current time, we should note that most of the convincing theories of consciousness revolve around information processing. If it turns out to be close enough to the truth that consciousness starts to emerge once information processing reaches a certain level (and it seems strange to assume that this specific problem would turn out to be intractable), then machines at the level of AGI should have some kind of consciousness. It remains to be seen whether they would also be sentient in the way I use the term here, i.e. conscious of being conscious and of being part of a greater whole, but sentience seems to follow linearly from higher levels of consciousness. Ants may have only a vague feeling of what it is like to be an ant, while octopuses or monkeys may go beyond this and know that they exist. It should therefore be as important to determine whether consciousness can arise in silico as it is to solve the AI alignment problem, since the former might render the latter moot.
Third, given the limitations of the human body, it is much easier for artificial intelligences to spread throughout the galaxy and increase the optionality for sentience to endure. It might therefore be that the advent of AGI solves the existential risk problem once and for all by creating all kinds of new variants of consciousness that can spread beyond Earth. We would have mind children of all kinds, and they could soon spread beyond Earth and secure the long-term perseverance of sentience.
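To make the optionality argument concrete, here is one schematic way to put it (a back-of-the-envelope sketch, not a formal statement of the theory): suppose knowledge $K$ opens up a set of options $O(K)$ for sentience to persist, for instance independent settlements or backups of civilization, and each option $i$ succeeds with some probability $p_i$. If the options fail roughly independently, then

$$ P(\text{sentience persists}) \;=\; 1 - \prod_{i \in O(K)} \bigl(1 - p_i\bigr), $$

which approaches 1 as the number of independent options grows. AGI plausibly acts on both terms: it enlarges $O(K)$ by making new substrates and settlements feasible, and it raises the individual $p_i$ by speeding up knowledge creation and preservation.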
With those assumptions, AGI would cease to be an existential risk and instead represent an enormous existential opportunity. And even if one doesn’t fully agree with those assumptions, or simply believes that such conclusions are premature given our lack of understanding of consciousness, we can still note that AGI is very different in kind from the other existential risks (nuclear, bio, non-anthropogenic) and should be treated differently.
