Is AI eroding our humanity or just holding a mirror to it?
Let’s face it – humans are deeply imperfect. Aside from any physical characteristics, many of us cheat, lie, steal, and are ‘immoral’, intolerant and thoughtless; yet many of us can also be kind, clever, creative, inspiring and full of goodness. The list on both sides of the mirror is endless. Now that humans have created AI – a powerful advancement of our technological skills – the worry amongst many is that we have just created a monster which replicates only our worst qualities.
But is this really the case? Given that language-based AI is drawing its knowledge from existing information, perhaps what we are seeing is simply a reflection of ourselves. The famous saying goes ‘we are an anagram of our parents’ – so could it be that those who are feeling an existential angst on this issue are fundamentally concerned that AI is the metaphorical anagram of us? Given that concern then, perhaps the troubling question of whether AI is eroding our humanity contains within it a deeper, more complex inquiry:
What does it mean to be a human?
Posted 18 December 2023
Being human has been explored within philosophy, art and science for centuries, and now that we are living in an age of advanced technology, the question has become even more complex. What makes us uniquely human, as opposed to an ‘advanced’ animal, is up for debate, but human vs machine is surely a no-contest ‘win’ for us. Why? Because of consciousness.
Scientists and thinkers are still unable to fully explain where consciousness, or the sense of ‘me’, resides – is it in the brain, the nervous system, the soul? Interestingly, how we understand the brain has often been compared to technology: the ancient Greeks compared its function to a catapult; the neuroscientist Charles Sherrington said it was like a telegraph system; Leibniz compared it to a mill; Freud to a hydraulic system; and in recent years it has been compared to a computer system (Searle). What would the analogy be now that we have AI in the mix?
Despite best efforts, a conscious machine programme is yet to be built, and so far only ‘weak AI’, such as knowledge-based systems, has been achieved. Models like ChatGPT, which essentially learn patterns and averages from a broad base of information (albeit very cleverly), are not ‘intelligent’ in the way that we would define it as humans. Much current work in AI is concerned with creating AGI – Artificial General Intelligence – which, if it succeeds, would create machines that could match human intelligence. However, given that so much of human intelligence is tied up with consciousness, and that this remains a relative mystery to scientists, conscious machines may be a long way off yet. Nor is this a problem specific to machines; it is a special case of the familiar ‘other minds’ problem (Searle, 2004).
Gary Marcus is emeritus professor of psychology and neural science at New York University and is known for his criticism of the ‘hype’ around AI. His research explores the intersection of cognitive psychology, neuroscience and AI, and he believes that there are many potential risks to its use in real-world situations, due to the inheritance (and perpetuation) of bias. Marcus concedes that AI does not produce misinformation ‘on purpose’, because it cannot possibly possess values of its own – however, what AI does do extremely well, in his opinion, is lie. AI is able to produce grammatically correct language which looks and feels convincing to us, but Marcus maintains that it is essentially ‘autocorrect on steroids’, and he worries that younger scientists coming into the world of technology are being side-tracked by technical wizardry instead of focussing on how AI can make the world a better place. Earlier this year, he and fellow researchers signed an open letter calling for a ‘moratorium on the training of AI systems more powerful than GPT-4’ until proper safeguards can be implemented.
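To see what ‘autocorrect on steroids’ means in practice, here is a deliberately tiny sketch of the underlying idea: predicting the next word purely from how often words have followed one another in past text. Real language models are vastly more sophisticated, but the principle of pattern-based prediction is the same. (The example is written in Python with an invented toy corpus – it is illustrative only, not how any production system is built.)

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the 'broad base of information' a model learns from.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict(word):
    """Return the statistically most common next word - no meaning involved."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict("the"))  # 'cat' - chosen purely because it most often follows 'the'
```

The point of the sketch is Marcus’s: output produced this way can look fluent and plausible while ‘understanding’ nothing at all.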
In an interview alongside Noam Chomsky[1], Marcus opposed the view that cognition could emerge from AI, making the point that until it can understand basic things like gravity and emotional nuance, it will never pose any sort of ‘threat’ to humanity. He also raised the interesting theme of ‘innateness’[2] in relation to AI, saying that the blank slate of machine learning means it will never truly understand language, and therefore never be able to deliver accurate information with true meaning. Whether we humans are born with ideas, concepts or knowledge, or whether we acquire those things, is probably outwith the boundaries of this discussion, but it is certainly worth considering in any comparison between an evolving AI and our own concept of humanity.
A little bit of history…

As with many things, the philosophical seeds of AI were sown in Antiquity, with automata and machine creatures such as the ‘automatic servant’ of Philo of Byzantium (3rd century BCE) and the life-sized humanoid figure made for King Mu of Zhou by Yan Shi, a skilled artificer in ancient China (c. 400 BCE).[3] These examples of sophisticated mechanical ingenuity are obviously not AI as we know it, but it is interesting to note such early deployments of technology within daily life.
The term "artificial intelligence" was coined by John McCarthy in 1955. McCarthy, along with Marvin Minsky, Nathaniel Rochester, and Claude Shannon, organized the Dartmouth Conference in 1956 – and this is now considered by many to represent the birth of AI as a formal field of study. Allan Turing, commonly called the father of computing had already laid the theoretical groundwork for AI with his concept of a universal machine and the famous Turing Test [4]. In his 1936 paper, published in the Proceedings of the London Mathematical Society, ‘On Computable Numbers’, he introduced the concept of a theoretical computing machine. This, together with Turing’s other work remains among the most influential writing on computer science to date.
Despite initial enthusiasm, in the 1970s and 80s the AI field faced funding setbacks, and this period is now commonly referred to as the ‘AI winter’ – although work on the development of expert systems, which used predefined rules to simulate human expertise, was ongoing. Since the 1990s there have been substantial advances in machine learning, especially in neural networks and statistical methods, with practical applications emerging in areas like data mining, speech recognition and computer vision. Progress in AI appears to have accelerated since the 2010s, with breakthroughs in deep learning, increases in computing power and large datasets all helping to revolutionise AI as we know it today.
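The ‘predefined rules’ of those expert systems are worth a brief illustration. Here is a minimal sketch, assuming an invented toy diagnosis example – the rules and facts below are hypothetical, not taken from any real system – of how hand-written if-then rules stand in for an expert’s judgement:

```python
# A minimal expert-system sketch: hand-written if-then rules applied to known facts.
# The rules and facts are invented for illustration only.
rules = [
    (lambda facts: facts["fever"] and facts["rash"], "suspect measles"),
    (lambda facts: facts["fever"] and not facts["rash"], "suspect flu"),
]

patient = {"fever": True, "rash": False}

for condition, conclusion in rules:
    if condition(patient):
        print(conclusion)  # prints 'suspect flu' - derived from fixed rules, not learning
```

Unlike the statistical prediction sketched earlier, nothing here is learned from data: every rule must be written, and maintained, by a human expert – which is partly why the approach hit its limits.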
As 2023 draws to a close, we are seeing AI become enmeshed within many domains, including healthcare, finance, autonomous vehicles, language processing, smart technologies and even religion[5]. And whilst some people think AI is aiding progress (particularly in medicine), ethical concerns continue to permeate all discussions of AI. There is a key acknowledgement from many that just because we can create and develop AI does not necessarily mean that we should. In fact, in a fascinating debate at the Oxford Union in 2021, entitled ‘This House Believes that AI will never be Ethical’, an AI called Megatron was invited to present the case both for and against itself.
Megatron gave two main opposing points:
- If humans did not want to risk serious harm from AI, they should get rid of it altogether: ‘AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans.’
- That a ‘conscious AI’ embedded into the human brain could be the best form of artificial intelligence, creating something ‘better than the best humans’.
Whilst this second statement may seem extreme, giving Megatron the benefit of the doubt, perhaps it was saying that a combination of the best bits of both humans and AI would create the ‘perfect’ intelligence. However, we must surely ask whether this is something that is desirable. Regardless of any of the more prosaic arguments around AI – would we really want to live forever? Transhumanists[6] might argue that technology such as AI could be utilised to put an end to illness and disease, giving us the chance to find solutions to global problems. Computer scientist Ray Kurzweil famously theorised that human and artificial intelligences would merge by 2045 in ‘the Singularity’, solving all societal woes. Unsurprisingly, transhumanism has been met with a lot of opposition, with many agreeing with political theorist Francis Fukuyama’s assessment that it is ‘the world’s most dangerous idea’.
Research into creating conscious machines has been underway for some time now, and Michael Wooldridge, Professor of Computer Science at Oxford University, has insightfully observed that although AI is not about creating life, it feels like creating life – and in this way, it taps into one of our greatest desires as humans. It could be said, therefore, that the very creation of the world ‘ex nihilo’ (from nothing) – whether through the big bang, creation theory or any other origin story – begs the question of whether we are ‘playing God’ (or at least ‘parent’) when we create AI.
In her paper ‘In Whose Image? Artificial Intelligence and the Imago Dei’ (2012), Noreen Herzfeld, Professor of Science and Religion at St. John’s University and the College of St. Benedict, Collegeville, Minnesota, says we are standing in the middle, ‘looking out toward God and toward our own creation in the computer’. Herzfeld notes that early AI was built around the sort of logical intelligence we associate with rule-based games such as chess[7]; but this sort of knowledge, whilst potentially exceeding a human’s grasp of logic, confines the machine within fixed parameters. It is interesting that AI was initially developed around this sort of rigid problem-solving, perhaps cementing the notion of it being ‘other’ to humanity; but as Matthias Scheutz (Professor of Cognitive and Computer Science at Tufts University) points out, ‘robots need to be able to say no’ for them to be truly intelligent, and so far, this has not happened.
Criticism of AI is often centred on the fact that it inherits bias from the existing wealth of information online, which in turn perpetuates all sorts of potential prejudice, discrimination and unfair treatment within society. Many have been deeply concerned about the potential risks which AI poses, and in April 2021 the European Commission proposed the first EU regulatory framework for AI. In brief, it proposed that AI systems be analysed and classified according to the risk they pose to users, with different risk levels requiring more or less regulation. In June 2023, MEPs adopted Parliament’s negotiating position on the AI Act, and talks are now in progress with EU countries in the Council on the final form of the law, with the aim of reaching an agreement by the end of the year. Other parts of the world have varying degrees of legislation and/or guidelines in place, with the UK seen as a relative ‘outlier’ in having comparatively little regulation.
Amongst the general concerns from opponents of AI are security risks, environmental impact, threats to employment, loss of connection in ‘real life’ relationships[8] and the undermining of creativity – perhaps one of our most ‘human’ traits.
Earlier this year, German artist Boris Eldagsen won a category of the Sony World Photography Awards with an AI-generated image, and in 2022 Jason Allen’s work, Théâtre D’opéra Spatial, won first place in the Colorado State Fair’s contest for emerging digital artists. Both pieces sparked a backlash from the art world and beyond, with some accusing the artists of cheating – although it might be worth pointing out that they were engaging with the topic of AI as well as utilising it. Similarly, in an exhibition called ‘Uncanny Valley: Being Human in the Age of AI’ at the de Young Museum in San Francisco, a piece by Trevor Paglen entitled They Took the Faces From the Accused and the Dead showed a grid of 3,240 ‘mug shot’ faces which had been used to ‘train’ facial recognition systems. These images had been used without the subjects’ consent, and through this work Paglen was presenting his belief that data is being ‘weaponised’.
It seems, then, that if we are to think about AI in the round, we must also attempt definitions of humanity, intelligence, consciousness, the self and creativity – not an easy task, particularly as all of these issues have branching sub-issues with no definitive answers.
However, here at the Thinking Zone, if we have inspired you to find out more or draw your own conclusions on any of the issues raised then we have done our (human) job! AI will no doubt continue to evolve, as humans evolve – but will it still be a mirror to humanity in years to come, or will one of the outcomes which Megatron suggested become a reality?
Which one of Megatron’s outcomes would you choose?
- We should get rid of AI completely.
- ‘Conscious AI’ should be embedded into our brains to create ‘better than best’ humans.
[1] ‘Debunking the great AI lie’, Noam Chomsky, Gary Marcus and Jeremy Kahn (YouTube)
[2] ‘Innateness and Language’, Stanford Encyclopedia of Philosophy
[4] ‘The Turing Test’, Stanford Encyclopedia of Philosophy
[5] ‘God and Robots: Will AI transform religion?’, BBC News: https://www.youtube.com/watch?v=JE85PTDXARM&t=11s
[6] ‘Transhumanism’, Encyclopaedia Britannica
[7] ‘Deep Blue’, IBM supercomputer, Encyclopaedia Britannica