Has OpenAI invented the digital equivalent of the Oracle?
Director at The Power of Twelve and Digital Jersey Technology consultant Rachel Harker shares her thoughts on ChatGPT.
I can honestly say that this is the most hyped technology product I’ve ever seen. A casual observer engaging with the breathless commentary might think that OpenAI has invented the digital equivalent of the Oracle and that we can instantly forget all knowledge and lay down our keyboards for good.
Steady on. It’s less Oracle and more digital Magic 8 Ball. As Dame Wendy Hall told the Today programme: ‘It is the state of the art in this technology. It’s natural language and it reads well, but you cannot guarantee its accuracy at all. You can’t trust it to tell you the truth.’
ChatGPT is what’s called a large language model, built using ‘deep learning algorithms’ (a snazzy catch-all for systems comprising a huge number of co-ordinating mathematical widgets). Engineers tune the widgets so that the model holds knowledge and outputs text, by training it on vast amounts of pre-written text. Its output seems almost magical (it really does, you can try it here: https://chat.openai.com).
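To give a feel for the idea of ‘learning from text’, here is a deliberately tiny toy sketch, not OpenAI’s actual method: instead of deep learning, it just counts which word follows which in a sample of text, then predicts the most common follower. Real language models do something vastly more sophisticated, but the principle of learning patterns from training text and then generating the most likely continuation is the same. The corpus and function names here are my own invention for illustration.

```python
from collections import defaultdict, Counter

# Toy "training text" (an assumption for illustration only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": record how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# In this toy corpus "cat" follows "the" more often than "mat" or "fish",
# so the model confidently predicts "cat" - whether or not that is "true".
print(predict_next("the"))
```

Note that the model has no notion of truth at all: it only knows what tended to follow what in its training text, which is exactly why the quality of that text matters so much.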
OpenAI have achieved an extraordinary feat by cleverly architecting an enormous model and training it on every digital word of English they can get their hands on. But while it feels magical, we need to remember it’s not.
The largest source of English digital text is, of course, the internet. You’re no doubt aware that stuff written on the internet isn’t always 100% true – propaganda, conspiracy theories and hate speech all thrive alongside a huge amount of conflicting information and stuff which is just plain wrong. Given the need for vast amounts of training text, all of this will have gone in and become part of ChatGPT’s wisdom.
Now, because this is a big model trained on not just big but ginormous data, no-one (including the engineers who built it) knows what it knows. Or, more importantly, what it thinks it knows to be true but is actually wrong about. They also don’t know how it actually works, because to a large extent it built itself. Which basically means it’s down to trial and error to see how well it performs.
The dangers of its inaccuracy are amplified by just how good it is at producing text. It writes well and with confidence, has no awareness of its limitations, and is both plausible and compelling (some wag has already dubbed it mansplaining-as-a-service). Worse still, humans have a psychological tic which makes us tend to over-trust computers. It’s the reason that otherwise completely sensible people find themselves driving over cliffs at the instruction of a satnav. Unfortunately, we are naturally predisposed to fall for BS output by chatbot con artists.
At the moment this is all very entertaining and low stakes, but before long OpenAI will issue a developer kit and an API, and people will start building all sorts of products and services on top of it. That’s the point when we techies will need to start being really clear-eyed about its limitations. So as you marvel at the creative possibilities ChatGPT presents, don’t forget to think about what will happen when it smoothly lies to your users.