Hal Swerissen
Artificial intelligence – AI – is all the rage. It’s almost impossible to be involved with business, government or other organisations without being regaled with seminars, videos and discussions about the wonders of AI. Add to that the media hype, ranging from predictions that AI will wipe out the human species through to utopian views of a world where all the dangerous, dirty, boring and repetitive work is done by AI-enabled devices of one sort or another.
In practice AI has been with us for a while now, but in a more limited form than the recent explosion of interest that came about with the release of ChatGPT, Copilot and Google Gemini.
The earlier forms, mostly called machine learning and “narrow AI”, are terrific at automating highly specific tasks, like sorting and directing mail and packages, analysing radiology and pathology tests and identifying vegetables and fruit at the self-checkout machines in the supermarket.
Narrow AI automates an increasing range of ordinary tasks. It works best when the data is clear, the task is relatively routine and uncertainty is low. It’s not so great for complex tasks with messy data and high levels of uncertainty. That’s why self-driving cars are still a challenge and the rollout of fully autonomous vehicles has been much slower than early expectations suggested.
The game changer that kicked off the recent AI hype was the development of Large Language Models (LLMs). While the earlier narrow AI systems were able to process large volumes of reasonably simple data, they couldn’t handle high volumes of complex and uncertain data. That problem was largely solved in 2017, when researchers developed some clever maths known as the “transformer” architecture which, combined with re-engineered chips, eventually led to the release of ChatGPT in 2022.
LLMs like ChatGPT can be trained to respond to complex questions in everyday language. The experience is uncannily similar to talking to a human.
This is not surprising, because LLMs are loosely modelled on how the human brain works. Our brains have around 86 billion nerve cells or neurons, and each of these neurons has hundreds and sometimes thousands of connections to other neurons. These trillions of connections are organised in networks that develop from infancy to adulthood as we learn about the world. Effectively, what we learn and remember is coded into our brains as a complex set of neural networks.
LLMs like ChatGPT are a form of Generative Artificial Intelligence (GAI) that use artificial neural networks.
Like the human brain, GAIs are designed as artificial neural networks, simulated in software running on specialised chips. Billions of artificial neurons are connected together, allowing trillions of connections. The specific patterns of connections between the neurons are built up by training the GAI on huge data sets, like all the text or images on the internet. Through repetition and training, the GAI strengthens particular connections so that it can predict relationships in the data: letter combinations, words, sentences and so on. It is then able to generate complex images or answers to questions in response to specific prompts.
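To make the idea of “predicting the next word” concrete, here is a deliberately tiny sketch in Python. It is not how ChatGPT actually works (a real LLM learns billions of neural-network weights from internet-scale text); it only illustrates the same basic loop of learning from training text which words tend to follow which, then generating a response one predicted word at a time. The training sentences, function names and prompt below are invented purely for illustration.

import random
from collections import defaultdict, Counter

# Toy illustration only: a real LLM learns billions of neural-network
# weights from internet-scale text. Here we simply count which word
# tends to follow which, then generate text one predicted word at a time.

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the ball ."
)

# "Training": record how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(prompt_word, length=8):
    """Continue a prompt by repeatedly predicting a plausible next word."""
    output = [prompt_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        # Choose the next word in proportion to how often it followed this one.
        choices, counts = zip(*candidates.items())
        output.append(random.choices(choices, weights=counts)[0])
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the mat . the dog"

A real GAI replaces this simple word-pair tally with a deep neural network, which is what allows it to take whole sentences and paragraphs of context into account rather than just the single preceding word.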
The great advantage of GAI is that pretrained models can be applied to a broad range of complex tasks across analysis and prediction, content creation, software development, customer relations, finance and banking, health care, science and education.
But there are risks. The current GAI models are impressive but far from perfect. They have a tendency to over-interpret data and then make confident but mistaken assertions, sometimes called “hallucinating”. They cannot reason the way humans do, applying broad experience to specific situations, what we often call “common sense”. This means they sometimes miss obvious answers or completely misinterpret the prompts they are given.
GAIs are only as good as the data they are trained on. If the training data is inconsistent, uncertain or biased, the responses are likely to be inconsistent and biased too.
More problematically, even with their current limitations, GAIs can be used by “bad actors” for cyber fraud and scams, the generation of false information, invasions of privacy or theft of intellectual property. There is also the potential for dangerous military applications.
More broadly, there are concerns that GAIs will disrupt employment, and that the explosion of data centres required to process vast amounts of information will consume tremendous amounts of electricity for computing and water for cooling. Added to that are concerns about the increasing concentration of digital control and wealth in a very limited number of powerful corporations like Alphabet, Microsoft, Amazon, Meta, Apple, Nvidia and Tesla.
At the extremes, some commentators worry that the next generation of AIs will become self-aware and develop their own goals, which may not be in the interests of their human creators. But self-awareness and superintelligence are probably still a way off.
Nevertheless, the challenges and risks are real and regulators are considering what rules need to be put in place to deal with the ethical, legal, environmental and social problems that new forms of AI will produce.
There is still time to prevent disasters. At the moment, the successful application of AI is mostly narrow and highly specific. Generative AI use is mostly focused on enhancing existing applications and technology and, as yet, there are very few fully autonomous applications in complex settings.
In fact, there is a very real worry that GAI has been over-hyped. It may take significantly longer than expected to address its limitations, as investors in autonomous vehicles have already discovered. It is not yet clear that the current models of AI are good enough for us to trust them with highly sensitive, complex and risky tasks. They have potential, but it is still largely untested.
Even so, it is more than likely that over the next couple of decades a range of GAI applications will progressively emerge, from autonomous vehicles to digital personal assistants, integrated smart homes, and automated shopping, health care and education. Everyday life will probably look quite different in 20 years’ time.
Prof. Hal Swerissen is an emeritus professor who lives in Daylesford and has a long-standing research and teaching interest in learning and cognitive science, including the history of computing and artificial intelligence and its implications for the future of work, leisure and community life.
This article is based on Prof Swerissen’s presentation at Words in Winter 2025.