Artificial Intelligence - To fear or embrace?
- Paul McRae
- Jun 19, 2023
- 4 min read
Updated: Jul 24, 2023
The future is here
How many sci-fi movies have you watched, whether recently or in your younger years, where a computer, robot or other piece of technology wrests control away from the humans and proceeds to wreak havoc on its creators, doing things its own way without fear or favour, before either ending in disaster and destruction or being foiled in the nick of time? 2001: A Space Odyssey, The Terminator series or I, Robot, anyone?
While watching these admittedly great, but ultimately Science Fiction movies, did you ever think they were anything other than entertainment?

Back in the real world last week, something unusual happened. Geoffrey Hinton, a vice-president and engineering fellow at Google and one of the pioneers of modern AI, quit his role after voicing concerns at the “speed of change” currently taking place within the AI arena.
That he went on record to express his concerns was not out of the ordinary. In fact, this happens regularly in business and soundbites such as this could be seen as positive rather than negative – there’s no such thing as bad publicity, right? But to then go several steps further and leave his role due to the gravity of his concerns is a much rarer – and far more significant – occurrence and should serve as an indicator for all of us to sit up and take notice.
Pace of change
Hinton’s concerns centred on two things. The first was the pace of the advancements being made with AI while it is still being developed – and fully understood. Recent articles from various news outlets have described several examples of the extraordinary capabilities of Large Language Models (LLMs) and the Generative Pre-trained Transformer (GPT, as in ChatGPT). For those not familiar with ChatGPT, it is a supremely advanced chatbot: it receives a request, analyses it, and then responds by drawing on the vast amount of data it has at its disposal, via the enormous capability of the supercomputers that have been built to power it.
Certain features – and uses by its newfound fans and consumers – are confined to age-old edge-gaining acts such as cheating. In recent months, students have dictated their requirements (the need to produce an essay, for example) to the bot and sat back in awe at the results. Simply set the parameters – subject, style, period and so on – and, voilà, the chatbot would compose a paper, often to a higher standard than the student. Write and debug a computer program? Executed. Answer test questions? Pass. Can it compose a song? This answer will be music to your ears.
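To illustrate just how little effort those “parameters” take, here is a minimal Python sketch of how a request like the student’s essay might be assembled for a chat-completion API such as OpenAI’s. The function name, model name and prompt wording are illustrative assumptions, not taken from any real coursework:

```python
# Sketch: turning subject/style/period parameters into a chat-completion
# request payload. Model name and prompt wording are illustrative only.

def build_essay_request(subject: str, style: str, period: str) -> dict:
    """Assemble the kind of JSON payload a chat-completion API expects."""
    prompt = (
        f"Write an essay on {subject}, "
        f"in the style of {style}, covering the {period} period."
    )
    return {
        "model": "gpt-3.5-turbo",  # assumed model name for illustration
        "messages": [
            {"role": "system", "content": "You are a helpful essay writer."},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_essay_request(
    "the Industrial Revolution", "an academic historian", "Victorian"
)
# A real client would now POST this payload to the API and read back the essay.
print(payload["messages"][1]["content"])
```

That is the whole of the student’s “work”: three parameters and a request. Everything else is done by the model.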
However serious, dishonest and potentially unlawful these uses are, other uses and capabilities may be viewed as more sinister – if those capabilities were fully understood and could therefore be controlled.
Regulation
Which brings us to Hinton’s second concern: lack of regulation.
At present there appears to be no regulatory body for these rapidly developing technologies, no control over what they can and can’t be used for. No consequences if someone benefits academically or financially. No limits to the damage that can potentially be done, reputationally or legally. As with all things new and developing, the legal and regulatory system tends to play catch-up rather than be out in front and ahead of the game, armed with policy and a process for punishment should you stray outwith the set guidelines.
A bit like the new electric scooters that every teenager seems to have these days, which fly past you on the pavement at 30mph – face mask planted over the teen’s face, texting their mates with one hand while trying to avoid mowing you down with the other, no helmet in sight. It seems dangerous, and like it shouldn’t be allowed, doesn’t it? Perhaps it wouldn’t be, if the law had caught up and attached a set of newly created rules to their usage. But, like AI, LLMs and GPT, they are a new product, and governance is, at the time of writing, still to be defined and introduced.
Back to GPT, and another consideration we should all perhaps have: the effect it may have on people’s jobs. Could it replace certain tasks carried out by certain professions? Could it answer phones and hold a coherent two-way conversation? Could it provide advice on a range of products? Could it produce a sales pitch? Early examples suggest that it is capable, even if not yet to a standard fully realised or developed enough for permanent, enterprise-level usage.
HAL
Maybe we shouldn’t be too concerned. Maybe ChatGPT and other products of this kind will be of great benefit, removing the need for monotonous tasks and freeing up time for professionals to engage in other, more high-skilled activities – just as self-service checkouts have replaced many human-operated tills at supermarkets, for example.
Maybe any worries about a lack of control will also dissipate through a swift establishment of governance. Maybe it won’t advance as quickly, or as powerfully, as we are anticipating.
But from what we have already seen, and from Hinton’s and others’ comments and concerns, there doesn’t seem to be much that it cannot do. As for HAL, the rogue computer in 2001: A Space Odyssey, the phrase “I’m sorry, Dave, I’m afraid I can’t do that” does not seem to figure much in its vocabulary.