View from the Rock by Rock de Vocht: Artificial Intelligence – More Artificial than Intelligent? Or Really the End of the World?
Hello – I am Rock de Vocht – welcome to my series of thought pieces and thank you for visiting! With topics ranging from deep tech to semantics, and maybe even a little something about music, there’s a lot to talk about! Sometimes these will be simple thoughts from me – others are conversations with young Rock.
To help my younger self I have been going back in time as an older, more statesman-like Rock and explaining to a younger me some pretty complicated topics that I just couldn’t get my head around when I was more of a pebble than a Rock. It’s just some fun - an opportunity to talk about different ideas, obscure notions and things that float around my head late at night. Feel free to dive in with your own ideas – of course feel free to disagree, I am sure that I or Little Rock will have an opinion!
This is the second of my Big Rock/Little Rock conversations – and this one is a biggie. Today we’re going to talk about Artificial Intelligence – what is it? How does it work? Is it really that intelligent? That sort of thing. So are you ready? Hasta la vista, baby…
Little Rock: Hey Old Rock – looking kind of creaky. What’s new?
Rock: Hey – You’ll see how creaky I am when I catch you!
Little Rock: You’ll have to catch me first, old timer – boom. Only kidding. I really liked our conversation about taxonomies – I learnt a lot. Can I run something by you?
Rock: Go on…
Little Rock: So I saw a film the other day about a big man who went back in time and stole someone’s clothes, boots and motorcycle…
Rock: Hahaha – I love that that’s the bit you remember. I think you’re talking about The Terminator. Great film, I remember watching that when I was your age, but what’s that got to do with our conversation?
Little Rock: Well – it kind of scared me. It was about how all of the computers and robots in the world woke up and became – like us, like humans – except I don’t think that they were humans as it was based on this ‘Artificial Intelligence’ that everyone is talking about at the moment. Honestly? I don’t really get it. What is it? How can it come alive?
Rock: Wow – OK. This is a big one. Where to start…I mean firstly, let’s be clear: artificial intelligence and the world of The Terminator and Skynet are very far removed. I can understand why you have linked them together – but that’s not what AI is.
Little Rock: What’s Skynet?
Rock: Ah – sorry. That was a film reference, don’t you remember? Anyway, let’s take it back to basics. There are a lot of companies working with what we call ‘Artificial Intelligence’ – or AI – right now. There is also AGI – Artificial General Intelligence.
Little Rock: Okay, so what is Artificial Intelligence, and what’s the difference between an AI and an AGI?
Rock: We use the word ‘artificial’ to distinguish the intelligence of a machine from the intelligence of something like us, or of other intelligent creatures that are alive.
AI (as opposed to AGI) nowadays refers to a machine (an artificial thing) doing some task intelligently, like we do – for instance playing chess really, really well. The ‘general’ in AGI refers to a machine being able to do the things we can do: many, many tasks as opposed to just one specific task – just about anything a human can do.
Our opinion of AI? We like AI, but aren’t sure about AGI. AI can help people with repetitive and dumb tasks.
Little Rock: Great, so something that is man-made – and tries to replicate a higher level of intelligence using logic, evaluation and deduction?
Rock: Yes…but let’s take a slight detour and talk about Large Language Models (LLMs) - because that's probably what we think of as AI today.
Little Rock: LLMs? You’re about to get sciency aren’t you?!
Rock: Yes – of course. That’s what I do, man. So Large Language Models – or LLMs – are basically a knowledge base. In computer science a knowledge base is a way of storing and organising information. Right?
Little Rock: Right. Why is that important?
Rock: OK – so with AI, the representation needs a way of retrieving information – because its function is to respond to and answer our questions and requests.
Little Rock: So like a database?
Rock: Yes – a database with lots and lots of information fed into it. You then have a representation – which could be anything, but for now let’s think of it as a search box on a computer: something you can type questions and requests into. You ask your representation a question and it gives you an answer based on what is in the database – simple, right? But if the information is not in the data/knowledge base, how can it answer the question? It needs a way of learning new information.
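To make that concrete, here’s a toy sketch in Python – purely my own illustration, nothing like how a real product is built – of a knowledge base that can only answer what it already contains:

```python
# A toy knowledge base as a plain lookup table (illustrative only).
# It can only answer questions whose answers were stored in advance.
knowledge_base = {
    "capital of France": "Paris",
    "boiling point of water": "100 degrees Celsius at sea level",
}

def answer(question: str) -> str:
    # Plain retrieval: if the fact isn't stored, there is no answer.
    return knowledge_base.get(question, "I don't know - that isn't in my knowledge base.")

print(answer("capital of France"))    # -> Paris
print(answer("capital of Atlantis"))  # -> I don't know - that isn't in my knowledge base.
```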
Little Rock: How would it get that new information? I mean – can you give it more information to update your knowledge base?
Rock: That’s difficult to answer simply. Large Language Models are very difficult to update at the moment.
Little Rock: Why?
Rock: When asked how much GPT-4 cost to create, the CEO of OpenAI seemed to indicate (he didn’t want to reveal the exact figure) that it cost more than US$100 million to train once. On top of that, the hardware cost of running OpenAI’s services is estimated at around US$700,000 per day. Then there is the time to train it. We’re not sure how long that takes – it can probably be shortened by throwing more hardware at it – but it would not be negligible.
Little Rock: So what happens if the information in the system isn’t correct?
Rock: Let’s stick with one thing at a time. The representations – the AIs that are being created – are very good at retrieving information that is in the knowledge base. These are what we call statistical knowledge bases. But they can be problematic, because if the exact information is not in the knowledge base, the AI will create or extrapolate an answer.
Little Rock: OK. I don’t think I fully understand. Are you saying that if the information is not in the knowledge base that the AI will make it up?
Rock: Yes and no… and it also depends. It depends on the knowledge base the AI is using, and then on how it has been programmed to calculate what the answer could be – and that is decided on a statistical basis.
Little Rock: i.e. if there is more red in the knowledge base than green and you ask the AI what colour the sky is – it would say red?
Rock: Wow, Little Rock – you have quite a good way of interpreting things. It’s not quite right, but almost. The difference is this: suppose you train a large language model on a database containing just very specific information about people who live in different cities: John lives in New York, Robert lives in New York, Bobby lives in Chicago…
Little Rock: Go on…
Rock: Then you ask it a completely different question, like: where does Sergei live? It doesn’t know who Sergei is, but because it has seen New York twice and Chicago once, there’s roughly a two-in-three chance of the answer being New York. So it will tell you New York.
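If you like, here’s that idea as a toy Python sketch – I stress this is a cartoon of the statistics, not how an LLM actually works inside:

```python
# A toy cartoon of statistical guessing (nothing like a real LLM's internals).
from collections import Counter

training_facts = [
    ("John", "New York"),
    ("Robert", "New York"),
    ("Bobby", "Chicago"),
]

def guess_city(person: str) -> str:
    # If we've seen this person in training, answer from memory.
    for name, city in training_facts:
        if name == person:
            return city
    # Otherwise fall back on the statistically most common city.
    counts = Counter(city for _, city in training_facts)
    return counts.most_common(1)[0][0]

print(guess_city("Bobby"))   # -> Chicago (it was in the training data)
print(guess_city("Sergei"))  # -> New York (a pure statistical guess)
```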
Little Rock: So even though it doesn't have the information it makes a wild guess?
Rock: Because it can't update the knowledge base and has no extra information, it is compelled to give you an answer.
It needs to find a solution. That is its function.
It is not quite the same as lying, but yeah – if it can’t actually find a solution, it makes one up. BUT there’s so much more to artificial intelligence than just a large language model.
Little Rock: OK – so I am getting lost. Are we talking about AI or AGI? I am assuming at this stage that both AIs and AGIs use an LLM?
Rock: No. An LLM is a good example of an AI. An AGI needs to be much more than just an LLM. An AGI would need an episodic memory (like a human being, it would need to remember the past) and a generalisation system. Think of an LLM as a very sophisticated memory. Humans have that too. However, when we come across a situation we have never seen before (i.e. we don’t have memories to deal with it), we stop, generalise and apply our vast knowledge of things to deal with the situation. An LLM cannot do this.
Little Rock: So what makes an AI/ AGI more than just a large language model?
Rock: Wow, little Rock. You are really keeping me on my toes today. Human beings have a function called episodic memory: wherever we can, we update our knowledge base. For us that’s natural. You remember conversations we had yesterday; a large language model, beyond a certain input buffer size, does not.
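Here’s a toy sketch of that buffer limit, with a made-up window size of three messages – real models count tokens rather than whole messages, but the effect is the same:

```python
# A toy fixed-size input buffer (hypothetical size: 3 messages).
# The model only "sees" the most recent messages; older ones fall away.
CONTEXT_WINDOW = 3
conversation: list[str] = []

def visible_context(new_message: str) -> list[str]:
    conversation.append(new_message)
    # Only the most recent messages fit in the buffer the model can read.
    return conversation[-CONTEXT_WINDOW:]

visible_context("Yesterday we talked about taxonomies.")
visible_context("Today we talk about AI.")
visible_context("And about AGI.")
print(visible_context("What did we discuss yesterday?"))
# The first message has fallen outside the window - the model can't recall it.
```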
Little Rock: So it can’t do that. Definitely?
Rock: We also have the ability to generalise, and a large language model might seem like it can in certain cases. One of the key differences is how we face unknowns: your memory says you do not know the answer, but you stop and evaluate the situation, and might think ‘oh, this is something I recognise – even though I don’t know the exact answer, I can deduce that it might be this or that’. A large language model cannot do that; an AGI would be able to.
Little Rock: Huh? So to summarise – an AI is a representation that retrieves information as programmed from a Large Language Model – or it IS a large language model? But AGIs are basically what everyone is working towards?
Rock: SimSage is not trying to create an AGI. AGI remains a lofty goal – some experts have said it’ll be here in ten or so years. Most companies, like SimSage, focus on the practical.
Little Rock: Right so if I am finally understanding it correctly, it’s the AGI that when it doesn’t get its information from the main database – i.e. if it’s not there …it makes it up…
…Just like a human being….
…just like the T-800…
…we’re all going to die aren’t we…
Rock: Don’t be so dramatic, Little Rock. We’re all going to die sometime… but I’m sure we’ll cover that in a different conversation.
I know it sounds quite dangerous. And in reality there is a problem: with an AGI there could come a point where we would not be able to work out whether the system is telling the truth...
...that is a problem.
Little Rock: So if we follow the logic through – the logical conclusion of creating AGI is creating something equivalent to a human being. The T-800.
Rock: You’re obsessing, Little Rock… but I suppose so. We are a very, very long way off from that, though, and even if we did get to the point of creating a perfect replica of you, you would probably still be able to tell the difference physically. Telling the difference would be problematic, mind you, because the replica would give the right responses, making it seem like a human being with emotions and everything. Even so, it will probably always be alien, in that it could never be the exact equivalent or replica of you.
Little Rock: Why are you bringing aliens into it?
Rock: I am just saying that it would feel alien talking to it, because to be real it would have to have the same interconnectedness of brain, feeling, senses and sensations that you can feel – which it can’t. So without that it would probably seem quite alien compared to what we are, but it might still be an intelligence.
Little Rock: OK – now you’re crashing my brain.
Rock: Sorry. Forget robots and aliens for now. What I am saying is that, as things stand, you should not rely on a large language model for absolute truth, as the best model we have at the moment is at best 60% accurate.
Little Rock: So the truth as provided by an LLM is measured at just 60%?
Rock: Yup - it means 40% of the time it will give you the wrong answer.
Little Rock: Okay that still sounds quite dangerous to me. So what's an example of like the worst outcome that could come out of how it's currently used?
Rock: There are many examples of this, but the developers put a lot of effort into preventing the model from giving you dangerous answers. A more benign example might be asking it for a pie recipe and telling it to include certain ingredients, such as arsenic, beans, chicken, leek and potatoes. It might actually give you a recipe using those ingredients…
Little Rock: Oh. That’s not really great is it?
Rock: Okay, it’s not a good idea, but you know you’re being very facetious by giving it those ingredients. However, there might be some bad actors who could misuse it. The point is that it wouldn’t be able to identify that that particular ingredient is dangerous. It trusts that you are giving it the correct ingredients, and it will therefore do what is logical with them and make an arsenic pie.
On the other hand, if you asked it for the ingredients for making arsenic it will probably be programmed not to tell you even if it did know. But there are probably ways around that.
Little Rock: Go on…
Rock: Naughty – I am not going to tell you that!
Little Rock: So all in all I think I get it. But you still haven’t put my mind at rest about the end of days - like a Terminator scenario.
Rock: OK, Little Rock – I don’t want to keep you up at night, but it is safer to never say never, because you never know what the future will bring. HOWEVER – at the moment there is no fully functioning AGI, and the current models are simply missing many of the crucial components that make up a human being – if making a human being is what you were trying to do.
There are lots of conversations about applying filters and about the appropriate ethics around AGI – I mean, if the current models are drawing on what is out there, there is a chance they will start spouting hate speech and all sorts. And that’s without trying to create a human/robot.
Little Rock: What I am understanding is that it does not have generalisation capability, and it does not have the episodic memory, but if you were able to put all those components together somehow and make it work…
Rock: Yes…you would be a heck of a lot closer to an AGI. It doesn’t sound like much to put together – but it is more complicated than it sounds. That said, there is so much research going on…
Little Rock: No. Trust me, it does sound complicated. I don’t know if you saw it, but there was an article about Sam Altman a couple of weeks ago about how he is now presenting himself as the saviour of humanity from AI, even though he’s actually behind a lot of the AI systems. He also says that the entities that crack AGI will make more profit than we’ve ever seen. Doesn’t that make it almost inevitable that somebody will put so much money into making this that they will come up with something? Or will the world see sense and put in place enough safeguards to protect us from any bad outcomes?
Rock: I don’t think safeguards will ever be put in place – that does not seem to be in our nature. Technology races ahead of ethics, whether or not a bad AGI could be stopped. What I will say is that since the industrial revolution we do seem to have had an exponential increase in knowledge, so it’s not impossible to imagine that we will create an AGI. BUT there have been long AI winters as well. Neural networks, for instance, have been around since their inception in the 1950s/60s, but we had to wait until the mid-2000s and Geoffrey Hinton’s breakthrough to actually start creating what we have today. It’s not at all impossible that we might hit another winter soon, and there might be another few decades before AGI appears – or it might never appear.
In the meantime, the technology we use – that SimSage uses - is very different to what we have been talking about here.
Little Rock: So is it here to stay?
Rock: It depends what you are talking about. Is the technology here to stay? Sure – and I think we need a follow-up conversation about that: how it is being used in business and so on. But suffice to say AI is very much here and embedded in everyday life.
It’s not a fad – it is a commercial reality that businesses are using, and we will get to a stage where businesses and people that don’t use it won’t be able to compete.
Like many other things there is hype around it – but it depends how you are using it. We at SimSage have a very specific way of using it to help businesses improve performance and reduce risks – other businesses use it differently. And if we use it well – and carefully – I believe it can be a force for good and enhance our lives greatly. Eventually it could even run on a phone – if you have hardware that can run the networks fast enough, and enough storage. Imagine how many simple tasks it could help you with!
Little Rock: What, like the AI representation / robot fetching me a bottle of Coke or a beer from the fridge?
Rock: Beer?! I don’t think so, young man! However, if you did want to do that, you would still need a quite complex language model. It would start with something like: ‘Okay, what’s the high-level plan for grabbing a bottle from the fridge? I need to move to the fridge. I need to open the fridge. I need to take the bottle from the fridge.’ This can already be done, but the large language model required is complex, and at the moment the movement side of robotics is still quite expensive. There’s something called ‘Moravec’s Paradox’, which is quite interesting here: it describes how easy it is for these systems to beat people at complex games like chess, yet how very hard it is for them to pick up a pencil. Paradoxically, the things we find easy – like picking up a pencil – would probably take a machine as many years to learn as they took us in childhood.
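Just to show how deceptively simple that high-level plan looks, here’s a toy sketch – the plan itself is trivial to write down; it’s executing each physical step that hides years of hard robotics:

```python
# A purely illustrative "fetch a drink" plan. Listing the steps is the easy
# part; in real robotics, reliably executing each one is the hard part -
# which is Moravec's Paradox in a nutshell.
def plan_fetch(item: str) -> list[str]:
    return [
        "navigate to the fridge",
        "open the fridge door",
        f"locate the {item}",
        f"grasp the {item}",
        "close the fridge door",
        "navigate back to the person",
        f"hand over the {item}",
    ]

for step in plan_fetch("bottle of Coke"):
    print(step)
```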
Little Rock: Yeah yeah yeah okay. Now that makes sense. So what would be an everyday example of when and how we encounter an AI?
Rock: Alexa is probably the best example: ‘Alexa, buy me a picture frame’, or ‘Alexa, call my best friend’, or ‘Alexa, play my favourite song’. But we do encounter it in other settings too. It depends how they are programmed, but chatbots on a website, for example, often hook into FAQs, as does the way we use a search function – SimSage’s search of course uses a very sophisticated AI-powered search, but you would only use that if you were a business.
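As a toy illustration of the simplest kind of website chatbot – hooking a question into an FAQ by word overlap (real products, SimSage’s included, are far more sophisticated than this):

```python
# A toy FAQ chatbot (illustrative only): pick the stored question with the
# most words in common with the user's question and return its answer.
faqs = {
    "how do i reset my password": "Click 'Forgot password' on the login page.",
    "what are your opening hours": "We are open 9am to 5pm, Monday to Friday.",
}

def best_answer(question: str) -> str:
    words = set(question.lower().replace("?", "").split())
    # Score each stored question by simple word overlap.
    best = max(faqs, key=lambda q: len(words & set(q.split())))
    return faqs[best]

print(best_answer("How can I reset my password?"))
# -> Click 'Forgot password' on the login page.
```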
Little Rock: So what have I learnt? That AI is different to AGI and how. I’ve learnt what a Large Language Model is and that AI isn’t all bad and could really help us… and maybe make our lives better?...And that the Terminator won’t come back from the future and get me…yet. Would you say that’s about it?
Rock: More or less – but with a bit more seasoning and no arsenic.
Little Rock: Thank you big R.
Rock: Thank you little buddy…and Little Rock…I’ll be back.
About the Author:
Rock de Vocht is Chief Scientific Officer at SimSage and has been working for a long time...
He has over 40 years' experience in IT as a Software Developer, Software Architect, mentor and development team lead in a wide range of industry sectors. Rock holds Bachelor of Science (with Honours in Computer Science) and Master of Science (in Computer Science) degrees from the University of Auckland, New Zealand.