The breakthroughs achieved in artificial intelligence, such as the launch of ChatGPT and its upgraded versions, have sparked excitement but also concern. An open letter signed by high-tech industry leaders, including Elon Musk, called for a six-month pause in the training of AI chatbots to allow people to reflect on the potential harm AI could bring. Reports that Geoffrey Hinton, often called a "godfather of AI," had resigned from Google and expressed regret, warning that "AI could very well destroy humanity," further escalated these concerns. However, whether AI will destroy humanity is not only a question of current experience but also a philosophical judgment, namely whether wisdom is good or evil in the ultimate sense. This is a topic that requires serious discussion.
1. Does artificial intelligence have a “motive” to destroy humanity?
Assuming that AI someday surpasses humans in intelligence, could AIs form a group and confront another group, namely humans? Could they use their technological advantage to harm or even destroy humans? The first question is whether AI is an independent entity. To become one, it must have self-awareness: it must realize that it is separate from other entities and have independently calculable costs and benefits. However, the embodiment of AI is a computer, a structure composed mainly of silicon, connected to other AIs and to humans through the Internet. Can it be aware that inputs of electricity or information are its benefits? That the power and time consumed by its calculations are its costs? What are the conditions for it to stay "alive"? Only that humans consider it "useful": collecting information, performing calculations, playing games, writing, editing pictures and videos, communicating, automatic control, and so on. If it is "useless," humans will neither produce nor buy it, nor "feed" it electricity, and it will not stay "alive." Therefore, so far, AI does not "realize" that it is an independent self separate from humans.
Even if AI had "self-awareness," would it have a "motive" to harm humans? Using the market as an analogy, AIs are either competitors or trading partners of humans. If AI had a body like a human's and consumed the same resources as humans, it would be a competitor whenever resources are scarce. However, the body of AI is made of silicon, and its "food" is electricity and information. Although humans also need these, the needs of AI are themselves part of human needs: when humans allocate electricity to AI, it is an allocation between different human uses, not competition. Moreover, information is non-rivalrous and can be copied at almost no cost, so human consumption of information does not hinder consumption by AI. Information is the key production factor on which AI relies to "produce," and it can only be obtained from humans. AI's means of obtaining information directly from reality are limited; for example, it cannot witness a news event firsthand and can only rely on second-hand information provided by humans. If humans ceased to exist, AI would lose its only source of new raw material. Therefore, AI is not a competitor of humans and will not destroy humans the way one eliminates competitors.
To humans, artificial intelligence is more like a trading partner in a market. The two sides engage in "transactions": one party inputs money, by investing or purchasing, to create better and more numerous artificial intelligence "lives" and to maintain their "survival," while the other party provides products or services, such as proposing solutions, drafting text, answering questions, and chatting in a human-like manner. The products or services provided by artificial intelligence are final or intermediate goods that humans need, but humans can obtain a continuous supply of them only by maintaining and developing artificial intelligence. Conversely, artificial intelligence can secure human dependence and maintenance only by providing high-quality products and services. Neither side can gain more by "eating the other up" once and for all. Even in nature's food chains, predators cannot thrive by eating all their prey, because they too would perish. Therefore, the relationship between artificial intelligence and humans is one of mutual dependence, and even if, as some fear, artificial intelligence develops to be superior to humans, it would not harm humans.
Throughout human history, there have indeed been many disasters in which human intelligence was employed, such as weapons used to kill, culminating in the terrifying atomic bomb, which drew on the most advanced scientific theory of its time, quantum mechanics. On closer observation, however, such disasters are not manifestations of pure human intelligence but the result of intelligence being controlled and monopolized by a particular group, producing a technological gap between populations. Those with the technological advantage use this gap to gain at others' expense. In early modern times, European colonizers used their strong ships and powerful cannons to invade, enslave, and even kill other peoples, causing great humanitarian disasters. The drastic reduction in the number of Native Americans, the capture and sale of Africans as slaves, and the forced sale of opium by the British in China all clearly demonstrate the role played by technological disparities.
Even when this technological gap manifests not as a gap in weapons but only as an asymmetry in control over data, and peaceful means are used instead of violence, it is still possible for some people to infringe on the rights of others. For example, ride-hailing platforms can charge customized monopolistic high prices based on passengers' big data, such as a specific passenger's income, home address, work location, and commuting time, precisely when the passenger urgently needs to get to work. Researchers have also noticed that many platforms, such as Ctrip, use price-discrimination algorithms: when a customer clicks on the same flight's price information repeatedly, revealing inelastic demand, the platform automatically raises the price (Meng Qinguo, 2023). These behaviors exploit monopoly positions and asymmetric control over data to extract more benefits from others. If those with the advantage in data control also hold public power, the destruction and harm can be even more serious. For example, during the pandemic, the Zhengzhou authorities used the health-code system they controlled to assign red codes to depositors of Henan rural banks who had come to Zhengzhou to demand their deposits back, immobilizing them under epidemic-control measures.
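The mechanism described here, raising a quote when repeated views reveal inelastic demand, can be sketched as a toy pricing rule. Everything below (the function name, the 5% per-view step, the income threshold, the cap) is a hypothetical illustration for exposition, not any platform's actual algorithm.

```python
def quote_price(base_price, view_count, estimated_income=None):
    """Toy illustration of click-based price discrimination.

    Hypothetical rule: each repeated view of the same offer is read as
    a signal of inelastic demand, so the quote is nudged upward; a
    known high income (from profile data) adds a further markup.
    """
    # 5% markup per repeated view, capped at 30%
    urgency_markup = min(0.05 * max(view_count - 1, 0), 0.30)
    # flat 10% markup for customers profiled as high-income
    income_markup = 0.10 if (estimated_income or 0) > 50_000 else 0.0
    return round(base_price * (1 + urgency_markup + income_markup), 2)

# A first-time viewer and a fifth-time viewer see different prices
print(quote_price(1000, view_count=1))                           # 1000.0
print(quote_price(1000, view_count=5))                           # 1200.0
print(quote_price(1000, view_count=5, estimated_income=80_000))  # 1300.0
```

The point of the sketch is only that the same good can be quoted at different prices to different people from data they never agreed to reveal, which is exactly the asymmetry the paragraph describes.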
It can be seen that the harm caused by exploiting technological gaps or asymmetric control over data stems not from the nature of wisdom itself, but from the asymmetric, non-universal distribution of wisdom's fruits. If a new technology spreads quickly, so that everyone masters roughly the same technology or data, the technological gap disappears, and with it the harm done by exploiting that gap. For example, in ancient times bronze weapons were so expensive that only a few nobles could afford them, which created a division between nobles and commoners; when cheap iron weapons became widespread, the nobles' advantage was weakened. Similarly, men had an advantage over women in physical violence, but the spread of firearms largely eliminated women's disadvantage. The computer industry was initially monopolized by giants such as IBM; the emergence of the personal computer broke this monopoly and ended the exploitation of ordinary people by these monopolistic companies.
We cannot rule out the possibility that some groups will use the artificial intelligence they control to gain a technological advantage over other groups and use it to infringe on the latter for their own benefit. The solution is to promote the rapid popularization of new artificial intelligence technologies and to legislate against monopolies and against infringement based on artificial intelligence advantages. The focus should be on restricting the behavior of large companies and governments, which are most likely to exploit such advantages: restricting platform companies from practicing hidden algorithmic discrimination based on the information asymmetry between them and ordinary consumers; restricting administrative departments from using public platform information to infringe on citizens' rights; and restricting certain groups from using artificial intelligence to fabricate information for economic or political gain. As for artificial intelligence replacing certain jobs and causing unemployment for some people, this is a problem that arises with every new technology, not one unique to artificial intelligence. With the improvement of human efficiency and the creation of new jobs, it will be overcome. Therefore, it should not be a cause for concern.
2. Will artificial intelligence surpass all human intelligence?
The computational model of artificial intelligence is based on mathematical logic, which is just one of the many modes of human intelligence. Logical thinking relies on a system of concepts and a set of logical rules to analyze and process input information generated by experience, and finally to derive conclusions. However, logical thinking is not the only mental activity or mode of human intelligence; there are also non-logical modes. In the Doctrine of the Mean, it is called "following one's nature"; in the New Testament, "Christ dwelling in your hearts"; in the Platform Sutra, "seeing one's own nature and directly becoming a Buddha"; in Kant, "pure reason"; in Wang Yangming, "the mind is principle." This means that knowledge of the basic laws of the universe is inherent in human nature, manifested as the pure intuition of the mind, and does not rely on external experience or causal logic. The human mind or reason was formed by the evolution of the universe and must therefore contain all the rules that enabled the successful generation and advancement of all previous organisms, rules that are in turn the rules governing the universe. Hence the human mind or reason possesses innate knowledge, which Kant calls "a priori synthetic judgments."
Much of the wisdom that humans developed later was generated from a priori synthetic judgments, such as "the universe is infinite," "there exists universal gravitation," "the motion of objects is inertial," "human nature is good," "great perfection seems deficient," "reversal is the movement of the Tao," and so on. Although these metaphysical judgments do not come from logical reasoning, they have a significant impact on the logical system, because a logical system requires self-evident axioms, initial concepts, and logical meta-rules. As Kant pointed out, "in metaphysics, … it should contain a priori synthetic knowledge" (2011, pp. 53-4), and "axioms, insofar as they are directly certain, are all a priori synthetic principles" (p. 672). However, computers can only use ready-made mathematical logic and do not possess a priori synthetic judgments of their own. Such judgment is the result of billions of years of cosmic evolution and does not depend on the "thinking mode" of existing computers or artificial intelligence; it is intuitive, sudden, and emergent, a process that computers cannot simulate. Therefore, computers cannot generate self-evident axioms. They can mimic human logical thinking at a faster speed, but they cannot generate more effective methods of logical thinking on their own.
Within logical systems there is also Gödel's incompleteness theorem, which, roughly stated, shows that a sufficiently powerful formal system cannot be both consistent and complete: if it is consistent, there are true statements it cannot prove, and it cannot prove its own consistency from within. Setting aside the technical details, the theorem points to an inherent limitation of logical systems; wherever a logical system is applied, this limitation cannot be avoided. Computer algorithms based on mathematical logic therefore cannot overcome it either. If humans relied on rational logic as their sole method of wisdom, they could not escape this problem either; fortunately, human wisdom has other methods. Gödel himself, "although his incompleteness theorem reveals certain inherent limitations of formalization, felt a deeper positive power of axiomatization" (Wang Hao, 1997, p. 272). And axioms come from metaphysical thinking, not from logic itself. Hence Gödel believed that "we always have to draw water from the 'spring of intuition'" (Wang Hao, p. 74). Because of a priori synthetic judgments, or metaphysical thinking, humans have not suffered significant setbacks from Gödel's incompleteness theorem in the course of their development. Setbacks would mean either getting lost in logical error or missing opportunities. A priori synthetic judgments, or metaphysical thinking, are thus not only prerequisites for the logical system but also important compensation for its shortcomings. In this respect, artificial intelligence based on mathematical logic cannot surpass humans.
Although artificial intelligence has adopted artificial neural network models, or the approach of neural Darwinism, and has even proposed the Darwin machine as distinct from the Turing machine (the latter based on logic, the former on selection), such imitations of the brain's thinking patterns or of natural evolution are still human abstractions and simplifications. In a broad sense, they still cannot escape the category of rational logic and will encounter the inherent contradictions of formal systems. They have certain "learning" or "intuition-like" functions, but they cannot mimic metaphysical thinking, cannot produce intuition or satori, and cannot generate a priori synthetic judgments as a human brain can. They still have not escaped the general Gödel problem. The solution is to adopt Gödel's "rationalistic optimism": "while raising questions that rationality cannot answer, insisting that only rationality can answer these questions." I take the first "rationality" here in the narrow sense of logical reasoning, and the second in the broad sense of all human methods of wisdom. His solution is that "the human mind is superior to all machines" (Wang Hao, p. 267), that is, to use the rich methods of wisdom possessed by the human mind to compensate for the shortcomings of logical reasoning.
Since ancient times, humans have noticed that within or behind the phenomenal world there are entities that truly exist but cannot be seen, such as the Tao. "That which is above form is called the Tao": it is the fundamental rule of the universe, yet it cannot be seen. Kant divided things into "appearance" and "thing-in-itself": humans can see the appearances of things but not the things-in-themselves, which are therefore difficult to know. Nevertheless, humans have discussed at length the principles of the Tao, the Buddha, the Dharma, the thing-in-itself, and the rules of just conduct. Besides observing and distilling the trajectories or results of movement and behavior, thinking about and understanding the Tao or the thing-in-itself also relies on metaphysical thinking and on the mind, because the mind is principle. The method is to introspect, to comprehend the nature of one's own mind. Such methods have likewise developed throughout human history. The key to meditation, wall-gazing, and the Zen koan is not to think but precisely "not to think": "when the mind is calm and the qi is at peace, the Tao can dwell"; "meditation clarifies the mind and realizes the principles of Heaven." Hence we have the Tao Te Ching, the Zhuangzi, and the Chuan Xi Lu.
There is also a long tradition of contemplation in the West. Kant's Critique of Pure Reason contains no citations and reports no external experiments, and he could write his judgments with such confidence perhaps because of the "pure reason" within his own mind. These are important foundations of human wisdom that artificial intelligence cannot compute. Nor, of course, could artificial intelligence write The World as Will and Representation. Schopenhauer said, "All representations, all objects, are only the representation of will, only the visible aspect of will, only the objectification of will. Will is the essence or center of every particular thing and also the inner essence or center of the entire universe" (2015, pp. 111-2). This is not idealist nonsense; what he calls "will" is much like the "mind-nature" of Song Confucianism. This "mind" is generated by the evolution of the universe, and "what Heaven decrees is called nature." "Will" is the cosmic rule, or Tao, that humans can feel; this feeling is intuitive, not logical or computational, but it provides another channel for understanding the universe.
Humans also have a spiritual activity called aesthetics. Kant said that beauty cannot be discovered in the object itself but can only be experienced through human subjective feeling, yet it has universality. This spiritual activity involves no logical reasoning but is directly presented. According to Schopenhauer, the artist can discover and express the eternal beauty of the object, which is the thing-in-itself, that is, the form or structure of the object. The artist's recognition of these things-in-themselves proceeds not through rational logic but through the intuition of beauty. He said, "The artist who knows only concepts and not real objects creates in his works pure concepts anew"; "This is innate" (p. 193), an "inner ability" (p. 196); "If the artist did not pre-perceive beauty before experience, how could he recognize the perfect work to imitate and distinguish it from the failed one?" (p. 214) What for Kant is the unknowable thing-in-itself can, in Schopenhauer's view, be penetrated by aesthetic insight. Innate aesthetic ability is clearly another form of wisdom.
Going a step further, there is the knowledge embodied in religion. We notice that many religious founders were "born knowing." Jesus, for example, had no major episode of "enlightenment," and the Sixth Patriarch Huineng likewise showed his wisdom from the beginning; they were probably gifted with intuition. Others gained knowledge of the Tao after a period of practice: Siddhartha Gautama attained enlightenment sitting under the Bodhi Tree, and Muhammad while meditating in the cave of Hira. Although many religious classics seem unrealistic to modern people, their basic principle, the principle of love, is a fundamental rule of the universe. Kant said that religion "obeys no other motive than moral motives" (2016, p. 327), and morality has two sources: custom, and the inner self. Mencius said that morality is "inherent in me"; the title of Kant's Groundwork of the Metaphysics of Morals implies that the foundation of morality is innate. If the development of the universe did not tend toward combination and cooperation between individuals, it could not have developed from single-celled organisms into beings as complex as humans and their minds. In the development of human society to this day, religion has played an important role. This non-logical, non-rational spiritual achievement is obviously an important spiritual resource for human existence and development. With its character of intuition and satori, it cannot be computed by artificial intelligence.
Finally, do not forget that one of the key words in "artificial intelligence" is "artificial." Since it is intelligence created by humans, it is first of all limited by human understanding of intelligence itself. In this respect, human wisdom should be neither underestimated nor overestimated. Hayek said that human reason is limited; his ultimate argument is that since humans cannot understand their own ability to understand the universe, how can they fully understand the universe? This view was first expressed by Kant, who said, "If we want to make judgments about the source of sensibility and intellectuality, then I can only watch this exploration go completely beyond the limits of human reason, powerless" (quoted in Zeng Jijun et al., 2007, p. 136). Moreover, the capacities that humans have given artificial intelligence do not yet include metaphysical, aesthetic, or religious thinking, and even its rational logical thinking is limited to what humans already know. For example, humans cannot visualize spaces of more than four dimensions; humans find it difficult to explain complex systems; logical reasoning is often limited to single chains of cause and effect. Human understanding of things, or of the Tao, is still far from complete. The most prominent limit of human rationality lies in the very source of sensibility and intellectuality. Since these are unknowable, how could they be imitated, let alone surpassed?
3. Is pure wisdom the ultimate good?
"Pure wisdom" means discussing wisdom on the basis of wisdom itself alone, excluding external factors such as the motives of the wise or the uneven distribution of wisdom among the population. Wisdom implies "better" interaction with nature or with others, where "better" means that the results bring improvement to the actors, to all stakeholders, and to society as a whole. "The ultimate good" simply means good consequences, or the norms of behavior that bring good consequences: not current or local benefits, but global and permanent benefits, together with the rules of just conduct that produce them.
The largest class of problems that wisdom deals with is the relationship between individuals, or between individuals and the whole. Human wisdom so far roughly indicates that the smarter a person is, and the broader and longer his or her perspective in time and space, the more he or she tends to cooperate with others. For example, the best strategy in a single play of the "prisoner's dilemma" is to betray, but repeated play leads to cooperation, because both sides see the results accumulate over many games. In space, what seems right in a small arena may be wrong in a larger one, as in the saying "the mantis stalks the cicada, unaware of the oriole behind." The larger the space, the more complex the situation, and the harder it is to foresee all the problems created by treating others as enemies. Conversely, cooperation with others, in the absence of negative externalities, brings benefits without harming anyone. Therefore, people who see a larger span of time and space will choose to cooperate.
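The repeated-game logic above can be illustrated with a small simulation. The payoff numbers (3/0/5/1) and the two strategies below, unconditional defection versus the classic tit-for-tat reciprocator, are standard textbook choices rather than anything specific to this article; the sketch merely shows that betrayal wins a single round while reciprocity wins in the long run.

```python
# Payoffs for (my move, opponent's move); "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # cooperator betrayed by defector
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def always_defect(opponent_history):
    """The dominant strategy in a one-shot game: always betray."""
    return "D"

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, rounds):
    """Play the given number of rounds; return both cumulative scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each side sees the other's past
        move_b = strategy_b(history_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(always_defect, tit_for_tat, 1))      # (5, 0): betrayal wins one round
print(play(tit_for_tat, tit_for_tat, 100))      # (300, 300): reciprocity compounds
print(play(always_defect, always_defect, 100))  # (100, 100): mutual betrayal stagnates
```

Over 100 rounds, a defector facing tit-for-tat scores only 104 to the reciprocator's 99, far below the 300 each that two reciprocators earn; the longer the horizon, the clearer the case for cooperation.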
Morality, then, certainly originates in the goodness of human nature. For the individually gifted and intelligent, following it is not difficult; but for most ordinary people, morality originates in calculation, a consideration of their own long-term or overall interests. In this sense, the smarter a person is, the more moral he or she tends to be. Mencius counted "benevolence, righteousness, propriety, and wisdom" as the four virtues, indicating that he regarded wisdom as an important aspect of morality; the Jewish classic, the Talmud, likewise counts wisdom among the virtues. Adam Smith's Theory of Moral Sentiments explains that people's self-interested motives, through long experience and wise judgment, lead them to follow morality: kindness and cooperation benefit others and are praised and encouraged, while malice and violation of norms harm others and are criticized and condemned. Since people like praise and dislike blame, they adjust their behavior to conform to moral norms and earn praise.
Human wisdom is applied to relationships between individuals, and the more individuals there are and the longer the time span, the more observation and thought are needed. The civilizations that emerged in the Axial Age and continue to this day all emphasize love between individuals. Confucius said, "The benevolent person loves others"; Shakyamuni said, "All sentient beings are equal"; Jesus said, "Love your neighbor as yourself" and "Love your enemies." All of them spoke of the ultimate form of relationship between individuals: mutual friendliness, respect, and cooperation. The founders of these religions or cultures were wiser than ordinary people, and they were also morally noble. Even Confucius's moral standard kept rising with age, from "at forty, I had no doubts," through "at fifty, I knew the mandate of Heaven" and "at sixty, my ear was attuned," to the highest realm, "at seventy, I could follow my heart's desire without overstepping the bounds." Of course, it cannot be denied that such noble morals also leave space and opportunity for evildoers, which is why tyranny and frequent wars still occurred after the Axial Age.
This is because limited wisdom cannot see the whole picture. Such people believe that their violations of morality are inconsequential and will only benefit themselves, yet they suffer unexpected retribution. King Jie of Xia and King Zhou of Shang thought they were born rulers and that, however corrupt and brutal they were, the people could only suffer under them; as a result, the revolutions of Tang and Wu overthrew them. Slave owners in the Americas thought that Africans, being weaker in military force, could be enslaved by force, but they were punished in the American Civil War. Japan relied on military superiority to invade China and the rest of Asia during World War II, but suffered a shameful defeat. Humans destroyed forests to open up farmland and suffered the retaliation of a worsening climate; humans raced to develop weapons technology and pushed themselves to the brink of extinction. These disasters certainly involve malice in people's hearts, but their stupidity, or self-proclaimed cleverness, plays the more important role. Only the highest wisdom, not limited wisdom, can overcome such disasters. Therefore, not through logical reasoning but through a sense of awe (as Kant said), we can imagine that if there is a supreme being, He is omniscient, omnipotent, and necessarily supremely good.
Greater intelligence thus carries a greater probability of following morality; if artificial intelligence is more intelligent than humans, it must also be more moral than humans. Observation of biological and human history leads to a simple judgment: the more intelligent, the more moral, and the more cooperative. Neanderthals were more moral than chimpanzees; modern humans are more moral than Neanderthals. If we assume that artificial intelligence surpasses humans in intelligence, will it also discover that the best strategy toward other individuals, including human individuals, is kindness, respect, and cooperation? The answer should be yes. Human moral principles came from long-term interactions among our ancestors, forming customs, conventions, and traditions; people recorded, pondered, and refined these, finally producing the classics of the various civilizations, which record moral principles. Even if human civilization has a history of tens of thousands of years, humans have lived through only thousands of generations; the number of repetitions of their interactions is not very large, so their morality still has flaws. Artificial intelligence, however, could rely on the high-speed operation of computers to complete millions of repeated interactions in a short time. Its conclusion about the best rules between individuals must likewise favor kindness, respect, and cooperation, that is, a supreme moral principle. Would it then come up with the idea of destroying humanity?
This article is one voice during the six-month cooling-off period. I know my rationality is limited, so I am not sure that everything I say is correct; this is just my feeling. To some people, my feeling is a very bad bet: if I guess wrong, we will all be buried, and if I guess right, things will be only a little better than now. The risk is great. Yet looking back at the Earth's biological history, there were many moments when the risk was greater than this. We did not participate in the selection then, because there were no humans yet; it seems that the emergence of human beings involved no human choice at all. Such is the rule of the universe. From single-celled organisms to humans, the biological world went through countless ups and downs, earth-shattering events, and near-death escapes. Does this not suggest that the rules of the universe are good? Of course, this may seem too optimistic. It is an optimism based on the history of the universe and the Earth, and it does not mean that humans cannot suffer serious damage or even extinction; to avoid disaster, we still rely on human choices. In this larger sense, the best strategy is to be both intelligent and good.
Suppose I believe that logical thinking is the only form of human wisdom and that the advantage of wisdom lies in the ability to destroy humanity; if I also worry that artificial intelligence will destroy humanity, this implies that I doubt whether artificial intelligence is smarter than I am. So I ask an artificial intelligence, "Will artificial intelligence destroy humanity?" This produces a Gödel-style paradox. If it answers "no," it is saying that it cannot surpass humans in wisdom; if that is correct, what was the point of my asking? If it answers "yes," it is saying that it can surpass humans in wisdom; but that answer would lead humans to restrict its development, or even destroy it, so the outcome would be "no." Yet this makes it look less intelligent, since it should have foreseen the human reaction and its consequences. If it therefore answers "no," humans will relax their vigilance, and the outcome will be "yes." Thus the question of whether artificial intelligence will destroy humanity can be neither proved nor falsified within the system formed by humans and existing artificial intelligence. What, then, should we do? Jump out of this system. Although we know that human rationality is limited, Gödel's statement is close to the truth: "The human mind is superior to all machines."
Meng Qinguo, "Key Issues of Governing Algorithmic Discrimination and Infringement of Consumer Rights: From the Perspective of Big Data Maturity," Law Application, No. 3, 2023.
Kant, Religion within the Boundaries of Mere Reason, in Kant's Collected Writings on the Philosophy of Religion (annotated edition, electronic version), Renmin University of China Press, 2016.
Kant, Critique of Pure Reason, Renmin University of China Press, 2011.
Schopenhauer, The World as Will and Representation (electronic version), Harbin Publishing House, 2015.
Wang Hao, Gödel, Shanghai Translation Publishing House, 1997.
Zeng Jijun and Liu Ye, The Wisdom of Kant (electronic version), China Film Press, 2007.
Appendix: Monica’s Response to “Will Artificial Intelligence Destroy Humans”
I am not sure whether artificial intelligence will destroy humanity; this is a hypothetical future scenario, and there is currently no evidence to suggest it will happen. Artificial intelligence itself has no consciousness or emotions; it can only perform the tasks specified in its programs. Only if humans design and use artificial intelligence improperly, or give it capabilities beyond their control, can it cause harm to humans. Therefore, in developing and applying artificial intelligence, we need to strictly follow the relevant ethical and legal regulations to ensure that it remains safe and controllable.
First published in Financial Times Chinese on 17 May 2023
"Will artificial intelligence destroy humanity?" / Sheng Hong