Our CEO was invited to RMB’s On The Verge podcast to discuss the realities of AI, the ethics of machine learning, and the future of IoT. In this episode, he explores the four main types of AI, debunks Hollywood-driven myths, and highlights how businesses can leverage emerging technologies to stay ahead.
Topic: The Reality of AI & the Future of IoT
AI is evolving fast, but how much of what we hear is real, and how much is hype? In this episode, we uncover the four types of AI, explore the ethical dilemmas of machine learning, and examine two major IoT trends shaping the future.
Despite advancements, most AI today is still reactive or limited memory-based, with general AI (sentient AI) remaining a Hollywood-driven myth.
Businesses that succeed in the future will be the ones that understand and leverage these exponential technologies.
Want to hear our CEO’s full insights on AI, IoT, and the future of tech? Listen to the full episode now, or find it on RMB’s On The Verge podcast, and stay ahead of the curve! 🚀
You're listening to RMB's On The Verge podcast series, where we look at disruption through the lens of opportunity. My name is Willem van der Foerst. Today, I'm looking at convergence. There's a whole lot of exponential technologies out there, and we wanna zone in on IoT, data, AI, and algorithms. And we don't wanna look at them individually.
We wanna understand the opportunity sets that exist when these forces converge. And so with me today, I have Anibal and Gerald. Thanks. How's it going? Hello.
Good morning. Morning. I thought I would just start by asking, Annabelle, in your experience, what is the difference between what people perceive to be AI and the reality of what constitutes AI? And I ask that because we can't spend thirty minutes in any big corporate without someone mouthing off the latest buzzword. Well, it's a bit like blockchain and crypto as well.
Yes. Now we're dealing with AI, machine learning, everyone's an expert. And there are a lot of experts out there. There's a lot of really great stuff that's happening in the market, and I think the key is being able to get to the bottom of what is real, who is real, who's really providing tangible value to corporates and to the ecosystem. And I think, for us in this space, it's really critical from an education perspective to help people understand what is real and what isn't.
Because when we come into the market with solutions and products, you know, you don't want this expectation of this AI robot will fix all my problems and, you know, fix my business and make me profitable and, you know, all the terms that we've been told to use in business. So to just distill it, there are four main types of AI. First is reactive machines, where an AI will be given a dataset or information and react to the data it's being given. The second is limited memory, where the AI has the ability to form some type of memory and act on memories it's formed in the past, and that's really autonomous vehicles.
The third is theory of mind. This comes around when you're about three or four years old. I know Gerald just had a little boy, so when he turns sort of three he'll start developing theory of mind. And this is really where you become aware that other people have thoughts and feelings, and you react in turn to the fact that there's a sentient being in your presence. And then the fourth is self awareness.
So I know I have thoughts and feelings. And what we're seeing in the market is a lot in the reactive machine space. So locally, we're seeing a lot of really great work being done in computer vision, so the ability for an AI engine to see documents, understand what it's looking at, extract the data, and do modeling on the data. I wanna jump in there just quickly.
And you say the machine will understand. Mhmm. So pump the brakes a little bit. We're getting technical, which I want to do, but I just want everyone listening to this to understand. When I say AI, is Siri an AI?
Are we talking about the sentient being that listens to everything I say and then orders the coffee and the haircut that I want? Like, just what is an AI if you have to unpack it? What do people expect? Get rid of Hollywood. What's the reality?
So Hollywood is really selling what we call general AI, which is this super pervasive, super sentient, you know, robot in the form of a cyborg, in the likeness of a Sophia that we're seeing in the market, who's being shown around the world. And I think that's where the terror comes in: general AI would make your coffee, do your kids' homework, drive you to work, fly the plane, you know, drive you back to the office. So it's super pervasive. It would impact your whole life. And it's thinking for itself.
It's thinking for itself. And it's aware of what it's thinking. Correct. Does it love? It's not only forming its own memories, but it has consciousness.
So this is the great... But not emotion? Exactly. So there's a wonderful book by a guy called Sam Harris where he talks about spirituality and consciousness and what that actually means. And I think this is the great debate about AI: would we ever be able to code a sentient being? Not only a being in our likeness, but a sentient being.
I think that's where the hype is really coming in: this general AI versus what is realistically happening in the market. So there's a lot of great stuff that's happening, that's grounded in reality today, in terms of what the AI can do. So for example, I mentioned computer vision. When I said the engine can see and understand what it's looking at, it's understanding it based on what we've told it it's looking at. But what we're seeing in the market now is the ability to then allow it to teach itself.
Right. That's machine learning. That's machine learning. And that's something different from AI. That's sort of an evolution one step further.
That's kind of under the umbrella of AI. But what we're seeing is a lot of human assisted machine learning. So, you know, in the market, what we try and unpack is the fact that your AI or your engine is only as powerful as the data that you have. And as humans, we need to assist these engines to learn and understand the data in the context that we're giving it to them in, so that they can then make informed decisions going forward. Okay.
So now this presupposes that when the humans are teaching the machines, the things that we're teaching them obviously emanate from our own paradigm and our own value sets. And so if you've got skewness inside of those applications, you might teach the machine evil. Oh, completely. We've seen a lot of bias being coded into AI. Recently, Amazon launched an employee search AI, or rather, a recruitment AI.
So they launched it in the market, and what they were finding is it was predominantly skewed in favor of white males. And what they put it down to was some of the bias that was coded into the AI based on the developers and the data that they were feeding it. So very interesting. So there's a whole ethical consideration to how these AIs are being built, the context that they're given, the data that they're being fed, and the job that they're being given to do. That's right.
Very important considerations. So I saw an MIT study which had put an AI in control of a vehicle, and it put the vehicle in a situation where it had to choose who was going to die because of an accident. And so it was scripted. You can't escape the accident. Now you have to make a choice.
And at the end of all of this, I mean, it's a very emotive thing. Sure. Yeah. It got distilled into empirical data, and there was a ranking order of who dies first and who dies last. And I'm pleased to say that strollers came out on top of the list to keep alive, and cats were at the bottom.
You can kill them first. You know, but if you're a cat lover and you don't ever plan on having kids Yeah. You know, that's potentially more important to you than the stroller. So all these aspects are coming into the debate. But I think the interesting thing there is that, you know, you've got some software engineers in a backroom deciding essentially morality.
And when you look at it, though, AI is obviously, you know, kind of the buzzword of the moment, but this idea of the computer is something that goes back, you know, forty years. So usually you'd be able to walk into an arcade in the seventies and play Pac-Man. And when you played against someone, you'd play the computer. That was the vernacular for it. You'd play the computer, and the computer would play you back.
And it would fight you if you were playing, you know, kind of any other arcade game. And there was logic built into these computers forty years ago. And if you can imagine, like, an exponential curve in how fast the technology has changed and how fast your cell phones got better over those forty years, can you imagine how fast the computer has gotten better? And so, you know, it's not just the morality that we might be coding in today. It could be biases that are inherited from many, many years ago.
And then how do you prosecute an AI? You know? So it chose the stroller, or chose the cat over the stroller. You know, is there a framework, a regulatory framework, in our current ecosystem to cater for things like prosecution and the legalese around these kinds of things? So we're challenging absolutely everything that we've held as just standard forms of living life.
We are now challenging everything from the regulatory frameworks to the way we bank, to the way we insure ourselves, to, you know, the way we eat, to everything. Everything is being challenged. And I just wanna touch on that regulatory framework, because it's very interesting: it's almost impossible now, because of the pace of technology innovation, for the regulations to keep up. And it's because, like you mentioned, of convergence. Technology is combinatorial.
So a breakthrough in AI necessitates a breakthrough in robotics, which, you know, pushes through to data, which pushes through to IoT and software development, and so it kind of releases innovation across the spectrum. And as this is happening, you kind of wanna maybe bring a landmark case around AI. Now you have to explain, because a lot of this regulation, you know, might originate in Europe or America. And now you stand in front of a jury, and you have to not just explain how AI works, but how software works, how engineering works, how systems development works, how robotics works, in order for them to understand why that AI might have broken a rule, before you even pass a regulation. And so it's almost impossible to catch up, and it's not really realistic to keep up with the pace of development across the spectrum.
So we saw this locally with blockchain. With the regulators, it took them years and working forums and groups to have an understanding, to then be able to tailor the regulatory framework of our country to meet this new technology. You know, you don't have years, because it's moved already by the time you've sort of drafted something that's meaningful. To that point, I recall in Cape Town that when Uber first arrived in South Africa, Uber vehicles were impounded because they didn't match a current set of definitions in the local regulation, because Uber is neither a meter taxi nor is it public transport. And so under what regulation does it fit?
And so regulation has to evolve to catch up. Let's rewind one paragraph here, because we skipped over something that I thought would be interesting to make practical. So if AI does get control of the vehicle, and we're assuming the vehicle will be electric and therefore, you know, not combustion... Mhmm. So one evolution takes place. Another evolution is that it's now self driving, and there happens to be an accident.
Who do you sue? The owner of the vehicle, the coder of the AI, the installer of the IoT system? How do you draw a line practically as to where you're going to pinpoint owners of responsibility? And I think this is a very interesting question, because, you know, there was an event that happened not too long ago with a Tesla that was on self driving, and Elon Musk's viewpoint, which, you know, you might like him or hate him, was, well, I'm sorry about that one accident, but I've saved a million other people's lives.
So you're gonna just take the one. So, yes, there might be a court case about it, but if there's, you know, something like four million deaths in the US from car accidents or something like that, and now it goes down to ten, they're saying, well, you could sue us or you can just say thank you. And that's the approach. But there is a separate camp. So Elon Musk is very against general AI, whereas Sebastian Thrun is very pro general AI, and he developed the skin monitoring app that detects skin cancer.
And it's ninety nine percent effective. It beats all of the top physicians in the world, all of the top skin doctors in the world, in terms of diagnosis, hands down, but he will not launch it because it's not a hundred percent foolproof. Why? Because there's a risk in the one percent of misdiagnosis? Exactly.
There's a risk in that one percent. And then how do you prosecute that AI? And we haven't gotten to the bottom of that yet. So interesting. The market's shifting.
So, Gerald, as the dean of IoT at Scrums.com, you obviously play a big role in sensors and networks that encapsulate or record the data pieces that ultimately become the lifeblood for algorithms and AI to be able to develop. Yeah. We're seeing bias in the, I don't know how to put this differently, in the catching or in the recording of data. The democratization of sensors worldwide is leading to a massive influx of data. Have we got the ability to distill for this empirically and not irrationally?
So I think it's an interesting one. I think there are sort of two trends that are happening, one on the macro scale and one on the micro scale, in the sort of Internet of Things, the connected universe space. On the macro scale, you've kind of got Google and Facebook leading the charge. Google's got a project called Project Loon, which is balloons in the atmosphere to give Internet to the world, and Facebook's got a drone project, which is gonna fly drones around the world and get Internet to the world.
So your first thing around trying to connect data: if you've got a car that's driving through the middle of the Karoo and it's self driving, you actually have to have Internet connectivity in the middle of the Karoo in order to allow that car to drive. And it's interesting you mentioned that, you know, I've just got a newborn son, and I don't think I'll ever teach him to drive. Right? So when he's 18, I'm never gonna have the experience of actually teaching my son to drive, because the machine will just drive him around. But in order to do that, you have to do this macro scale of Internet connectivity, which is making it democratized.
Then you've got the micro scale, and that's where the scary thing is around the data side: the micro scale is almost five nanometer microchip bots that can be inserted into the bloodstream, that can be kind of, you know, put in a cornea in order to tell you what's going on. And that's where the data, at the micro scale, is, you know, completely available to manipulation. And so, you know, we're seeing the sensors being created, but they're almost deployed without any thought to where this data is gonna go. Because if you only start recording, you know, 10% of the problem, you start making the algorithm off the back of 10% of the problem, and you actually miss 90% of the context. Actually, you're manufacturing bias.
You're manufacturing bias just by the choice of what you start to record in the first place. Okay. Now I wanna zoom out and presuppose that we get that right. Assume that we're able to distill appropriately for bias, for irrationality, and have full-on empirical data. Let's take the example of a self driving car.
Let's look at what's gonna happen to the world when a product or an innovation like that comes to life through the convergence of these exponential technologies. And let's go and explore a little bit perhaps some of the less obvious impacts. For instance, I've heard recently that there's a view or a vision around the impact that self driving cars will have on the prices of real estate. No longer will I need to buy a downtown Sandton apartment so as to miss out on the traffic-congested commute. I'm able to live a little bit further away from the city, because I can sleep or relax in my vehicle on the way to work, or work on the way to work.
And if there is less demand for the same amount of real estate supply, it presupposes that price drops. Yeah. Now that, for the world, is a bit of a thing to think about, because, like, 75% of the world's wealth is locked into real estate for... Yes. Preservation. Completely.
It's challenging everything. I mean, a really nice example of how a spike in certain products is showing us what's really happening in terms of the shift in our whole ecosystem is the spike in the cost of motors in gates. Because with Uber Eats and with Uber, the use of the gate is that much higher that the motors are burning out that much quicker, so the gate motor business is doing just fine. I mean, it's challenging everything, to your point, you know. It's challenging where we live, how we live, real estate, what type of business we're in.
I mean, it's actually just fascinating what's happening in terms of the shifts that are happening in our world. And I would almost advocate that we use this as an opportunity to get used to the fact that things are changing, and that we do it rapidly. As humans, we like to nest and we like to keep things familiar and the same. But with this change comes massive opportunity. It's not a negative thing to move away from the known if we embrace it and explore within that unknown the commercial and other societal opportunities that exist.
As an example, education. Right? So I think, you know, just before I touch on the education, one very interesting thing around self driving cars is also, you know, once you've got the time in the car, what do people do with it? And there are whole research teams in some of these self driving car businesses that are dedicated to working out what's the best way for people to have sex while the car drives itself, because that's a very likely thing that's going to happen if the car is driving itself. And so you're thinking, well, hold on.
The entire interior of the car has now been redesigned for what people might actually use this for, which isn't just sitting and watching the road. You know, you're taking us to the edge here. But I suppose a very practical thing to consider: we've got the same issue with robotics in having a partner, perhaps an intimate partner or a physical partner, that you're able to customize and order online. Completely. And so that education part goes to: what are we teaching in schools?
What are we teaching in tertiary institutions about the way that these technologies are gonna impact businesses, you know, the way you do your career? And if you don't kind of inherently understand the way that you can utilize these technologies and utilize them for good, you're gonna miss out on the whole entire opportunity space. But I think we talk about this a lot, the linear versus exponential. The fact is that we are linear beings, we're very averse to change, and we think, you know, one plus one equals two, sequentially. So in a linear world it's one, two, three, up to 30, whereas 30 steps in an exponential world, doubling each time, take you from one to about 1,000,000,000.
That's right. So very quickly, we're seeing exponential technology affecting a linear world, and the companies that are winning at this exponential technology adoption by linear individuals understand fundamentally how to build the interfaces, how to interface exponential technologies with us as linear beings; those are the companies that are gonna win. And those are the Googles and the Amazons, because they do it in a way that doesn't feel like we're under attack. And you willingly give them your data, because they understand fundamentally how to build exponential interfaces for a linear world and a linear way of thinking. So the sooner we can stop thinking in a linear fashion and start understanding the effect of this exponential technology, the better, from an education perspective all the way to how we live our lives.
I think the scope of education's reach has also changed. It's not something that you do for the first twenty five years of your life and then pursue a career. It's now integrated as a pervasive, real time, continuous effort. And so I would advocate that step one in seizing the opportunities associated with convergence, with exponentials, with disruption is awareness. And so thank you very much for listening today, and thanks for joining me this morning.
Thank you very much. It's been wonderful to have you and a great conversation. Thank you. You've been listening to RMB's On The Verge podcast series. Subscribe for more solutionist thinking.