- 07 June 2024
- Free Speech
Wessel van Rensburg, technology consultant
WHY is there so much controversy about whether ChatGPT and other large language models truly exhibit “general” or human-like intelligence? These tools are undeniably useful, complementing human work on many tasks. So why do prominent researchers remain dissatisfied, expecting them to demonstrate human-like intelligence across a wide range of domains? Surely narrow AI models, like those trained specifically to identify novel proteins, make our lives easier while being less likely to disrupt the job market? And have you wondered what the term artificial “general” intelligence (AGI) actually means?
Moreover, why are some AI experts so deeply convinced that AGI would wipe out humanity? Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute and of the rationalist movement, told Time: “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.” This can seem a bit over the top, particularly now that we have a better grasp of what the most powerful large language models are actually capable of. And why, if it's so dangerous, would we want to go on and create a superintelligence, which until recently was OpenAI's stated goal and remains the goal of Anthropic and Google's DeepMind?
When Alan Turing developed the concept of the Turing Test in his 1950 paper Computing Machinery and Intelligence, he did not use the term “artificial intelligence” (AI). The term was coined only in 1955 by John McCarthy, when he proposed the Dartmouth Summer Research Project on Artificial Intelligence. McCarthy picked the term because it was neutral: research in this area had until then been called “cybernetics” or “automata theory”, fields that studied rather different things. The proposal for the meeting made clear that his interest was wider than anything before, namely that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”. A period of great optimism followed, with researchers naively believing that human-level AI could be achieved relatively quickly.
By the late 1960s and early 1970s, progress in AI had ground to a halt, leading to reduced funding and interest in the field. The term “AI” became a little embarrassing. The 1980s, a great time for sci-fi, saw a brief surge in the use of the term. In fact, one could argue that science fiction was where the idea of AI, even if ill-defined, was kept alive. Yet, despite some success with neural networks on specific tasks, AI was out of fashion again in academia. Many researchers now preferred more specific terms like “machine learning”, “natural language processing” or “machine vision”, in part because “AI” had become associated with unfulfilled and extravagant promises.
Nevertheless, the key addition of the word “general” to form a new term, “artificial general intelligence”, was first recorded in 1997, the same year Deep Blue beat Garry Kasparov at chess. For decades it had been presumed that once computers could beat humans at chess, they must have achieved human-like intelligence. But as soon as it happened, the goalposts shifted. The addition of the word “general” signalled that the field was now drawing a clear distinction between this narrow form of intelligence and the ultimate goal.
It took until 2007 for “artificial general intelligence” to be popularised as a term for “human-level or superhuman AI systems”. It was proposed by Shane Legg, who went on to cofound DeepMind, for the book Artificial General Intelligence by Ben Goertzel. Goertzel felt there was a body of contemporary research into machines mimicking “real” intelligence that had not been given the prominence it deserved and did not have a name. Though he had now named it, Goertzel admitted that “general intelligence does not mean exactly the same thing to all researchers” and that “it is not a fully well-defined term”.
This lack of a clear definition is a challenge that bedevils the term to this day. But the more significant problem, say critics, is the intellectual heritage on which it rests. One of the contributions to Goertzel's Artificial General Intelligence quoted an article, The General Intelligence Factor, by psychologist Linda Gottfredson. She asserted in 1998 that a general intelligence factor (g) underlies all cognitive abilities, so that intelligence can be measured and represented by a single number. She argued that g is a real, biologically based trait and is highly heritable. She later proposed that IQ score differences between racial groups have a genetic component. This view is strongly disputed in psychology and lacks scientific consensus, not to mention that arguments like hers have historically been used to justify racial discrimination, famously in South Africa.
Goertzel later became director of research at Yudkowsky's Machine Intelligence Research Institute, which was initially called the Singularity Institute for Artificial Intelligence and was founded with funding from the right-wing tech billionaire Peter Thiel. Inspired by the writings of science fiction writer Vernor Vinge, whom I mentioned in a previous column, the Singularity Summit was an annual event from 2006 to 2012, organised by Yudkowsky's institute, futurist Ray Kurzweil and Thiel. It was after the 2010 summit that Demis Hassabis and Legg approached Thiel in hopes of securing funding for DeepMind, and got it. Elon Musk was another funder. Hassabis described DeepMind as “working on AGI”.
Vinge's concept of the technological singularity held that once humans created intelligences greater than their own, the human era would end, as the new superintelligence would upgrade itself and advance at an incomprehensible rate. Kurzweil foresees humans and machines merging, commencing an unparalleled phase in the history of the cosmos. Philosopher Nick Bostrom, a speaker at the first Singularity Summit, is a useful figure here because in him a bigger picture comes into view. A transhumanist, he participated in the extropian movement, anticipates the singularity with both excitement and trepidation, argues for a cosmist vision of the future, and cofounded long-termism, a strand of thinking that's hugely influential in the effective altruism (EA) movement. In fact, everywhere you care to look you find the influence of transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and long-termism among leading AI thinkers and doers, so much so that computer scientist Timnit Gebru and philosopher Émile P Torres have coined the acronym TESCREAL.
Sam Altman, the head of OpenAI — which counted Musk as its key early investor — was previously an effective altruist. He is a self-described transhumanist who believes in mind digitisation, promotes cosmist and long-termist ideas like galaxy colonisation, and argues passionately for the importance of controlling AGI. Musk is a transhumanist aiming to merge minds with AI through his company Neuralink, and he is influential in the rationalist community. He has just founded his second company to build AGI and says he aligns with long-termist philosophy. The sociological crossover between the communities associated with each letter in the acronym is significant, say Gebru and Torres in a recently published paper.
Transhumanism was initially developed by 20th-century eugenicists. Eugenics was a common way of thinking about the world in the early 20th century. In South Africa, newspaper reporter ME Rothman heard the vice-chancellor of the University of Cape Town tell an audience that there was something inherent in Afrikaners that made many of them intellectually backward and poor whites — a clear example of eugenic sentiment. Today, say Gebru and Torres, transhumanism, extropianism, singularitarianism and cosmism are examples of second-wave eugenics, since all endorse the use of emerging technologies to radically “enhance” humanity and create a new “posthuman” species.
While first-wave eugenics sought to enhance the “human stock” through society-wide coercion over reproduction across many generations, second-wave eugenics emerged from the possibilities offered by genetic engineering and biotechnology. These technologies enable potential human “improvements” without the need for population-level policies or transgenerational timescales. Through this liberal eugenics, parents could theoretically choose to “design” their children by selecting genes believed to determine desired traits, such as exceptional “intelligence”, and to do so in a single generation.
Effective altruism and long-termism have no direct link to transhumanism but they can be seen as the siblings of rationalism. Where the latter is concerned with rationality, EA is concerned with ethics. Rationalists focus on improving human reasoning and decision-making, with many proponents believing that advanced AI systems are highly important for shaping the future of humanity. Effective altruists use evidence and reasoning to determine the most effective ways to help others and reduce suffering. EA has been especially influential among wealthy tech entrepreneurs, with its concept of “earning to give” providing a framework and justification for doing good with one's money, instead of paying taxes — and in the case of disgraced crypto entrepreneur Sam Bankman-Fried, fraud.
But where does the apocalyptic aspect of TESCREAL come from? First, late-1990s transhumanists realised that if they were right, the technologies needed for a posthuman utopia could also pose unprecedented threats to humanity, since they would be extremely powerful and could be used by state and non-state actors alike, for good or ill. Second, while in their view a “value-aligned” AGI could solve global problems and enable immortality, many TESCREAL advocates believe that an improperly aligned AGI would probably result in existential catastrophe.
Despite these risks, figures like Dario Amodei of Anthropic, a prominent supporter of EA, argue that capitalist dynamics and competition between states make stopping this research impossible. In line with EA philosophy, people like him contend that the best approach is to actively engage in and lead the research to ensure that humanity can be saved from calamity — and guided towards utopia.
♦ VWB ♦