ExpertsConnect EP. 29: Understanding Artificial Intelligence with Jan Werth (Series III)

April 16, 2021
ExpertsConnect
Artificial Intelligence
Deep Learning
Machine Learning
Computer Science
Engineering
Summary

Welcome to the grand finale of our Understanding AI three-part series! You may watch Series I here and Series II here.

In this episode of ExpertsConnect, Jan Werth (PhD), Lead Data Scientist at PHYTEC, dives deeper into how artificial intelligence (AI) is disrupting and transforming major industries. He explains how he connects embedded systems with artificial intelligence and discusses how to deploy embedded AI on the edge. Furthermore, Dr Werth shares his perspectives on the battle for AI supremacy: China vs America vs Europe, and also highlights the challenges of AI in this era. Finally, Dr Werth shares his predictions on how AI will change the world by 2050.


SHOW NOTES:

How is AI disrupting and transforming major industries? [0:50] | Jan explains the following. We had a talk at Philips, I think it was from Microsoft or Google, and we had a lot of medical doctors there. And there was a question: at some point, will they provide algorithms to replace doctors? And they kept saying no, no, of course not, it's just for aiding. Of course, they mostly said that for political reasons, because you don't want the doctors' lobby against you. But there is truth in it as well, because humans are very capable of doing a lot of stuff. Even Elon Musk built an almost fully automated production line, I think it was 80% automated or something, and they noticed it wasn't working; the human is so much more versatile in some respects that they had to tune the automation down. So it's more about freeing your employees to give them tasks where they can shine as humans, right? But of course, you will see big disruptions. Take the transport system: taxis, Ubers, long-distance truck drivers, trains, all this kind of stuff. I can imagine that being completely changed in the near future. And medicine, of course, but there it's more of a plus side, because, at least in Germany, we need doctors. So if you're a doctor and unemployed, just come to Germany, we definitely need doctors here; maybe not in Berlin, they have enough, but in the countryside they don't. The idea is that you can have an algorithm beside a doctor to help him assess a disease more quickly and more precisely, and give him the benefit of what I said before: the knowledge of previous cases, which a new doctor doesn't have. So give him a machine at his side that says: hey, I've seen a million of those cases in my database, so I can tell you it's most likely this disease, and now, human, do your own research. So it's more of an aid. But again, a lot of things will change. I just read a story about call centres; I hadn't thought of them, actually. With the revolution in natural language processing, we now have much better language bots, and in a couple of years we will most likely see no humans, or far fewer humans, in call centres.

The old lady and young lady dilemma [6:26] | Dr Werth explains the following. First of all, it's called a dilemma, and dilemma is a Greek word for a situation where there is no positive outcome. That is built into the premise: as soon as you frame the question that way, I cannot answer it in a positive way. But there are, of course, ways to answer positively, because a machine will most likely never come into that situation in the first place. Why does it happen? As a good driver, you should drive aware: see the situation, not drive too fast, notice that the road is a bit slippery and slow down. A well-designed machine will not make the mistakes of driving a bit too fast, dozing off, or lighting a cigarette. So the chance of the machine getting into a situation where there is suddenly no positive outcome is much lower. On the other hand, people will ask: so what do you program in, to kill the old lady or the young lady? (I altered that a bit.) But the point is, AI is not programmed that way; it learns from experience, the same as we humans do. Tesla, for example, records data on every drive you make. I think not every recording is analysed, only the critical situations. But the system learns from the past, and especially from actual real-life situations, and it has seen far more driving than we ever will. We have our own set of knowledge and experience, and in that moment, we as humans don't deliberate. You won't think: wait a second, this is grandma and she has this societal impact, and this is a young lady. You will act in the moment, with gut feeling, with instinct, and with the knowledge of situations you've seen before. Your gut knows that if you brake too hard your car will spin and maybe you will die, and most likely you will do the reaction that saves you, not the person in front of you; you don't want to harm yourself. And that's exactly what the algorithm will do: it will act from previous experience, but most likely it will do it much better, brake earlier, and make it safer.

How to connect embedded systems with AI? [15:08] | Jan mentions the following. As I said at the beginning, a switch has to be made, and actually, in 2019, Google created the first development boards. It's becoming more and more the norm; more people recognise that that's the way to go. I think that's because we now understand the data-based, artificial-intelligence way of programming, the problems and how to solve them, much better. A couple of years back, people asked: how can I optimise this algorithm? How can I get better accuracy? Now it's the next step: the challenge is to bring it onto an embedded system, to make it viable for an industrial use case, with lower power consumption, fewer latencies, and without sending everything over the internet, because we don't need the bandwidth. Autonomous systems that you can just place somewhere to make things smarter, like retrofitting; we have a partner who focuses just on retrofitting, which is very interesting. So it's the next step. The first step is to understand the problem and run it on your desktop. The next step is to make it small, lightweight, and low-power, and put it on an industrial device to steer something; a sketch of that shrinking step follows below. That is the connection between the two.
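To make that "make it small, make it lightweight" step concrete, here is a minimal sketch of one common route from a desktop model to an embedded-friendly one, using TensorFlow Lite post-training quantization. The model file names are hypothetical placeholders, not anything PHYTEC-specific.

```python
import tensorflow as tf

# Load a model trained on the desktop (file name is a hypothetical placeholder).
model = tf.keras.models.load_model("door_monitor_model.h5")

# Convert to TensorFlow Lite and apply post-training quantization,
# which shrinks the model and speeds up inference on embedded targets.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting flatbuffer is what gets copied onto the embedded device.
with open("door_monitor_model.tflite", "wb") as f:
    f.write(tflite_model)
```

Quantization typically cuts model size roughly fourfold by storing weights as 8-bit integers instead of 32-bit floats, which is exactly the small, lightweight, low-power trade-off Jan describes.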

How do you use embedded AI on the edge? [16:54] | Jan explains the following. An edge device simply means a device at the end, where the data is collected; your mobile phone is an edge device, and sensors can be edge devices if they're directly attached to an embedded system. The idea is that you calculate your results where the data is created. A good example: you have a train and you want to check whether the doors are working, so you want to do predictive maintenance on the doors. You could measure the current in the doors, for example, and see that something is changing, maybe the doors are deteriorating. So you could put in a sensor with a model that analyses the environment (i.e., the temperature, current, etc.) to predict the state of the door, and either send that collectively at some point to the main PC on the train, or, when the train comes into the depot, just send a kilobyte-sized message: by the way, this door is fine, or this door will break down in x days. This reduces the need for bandwidth; you don't have to send a massive amount of data, especially if you're doing video analysis. If you had to send a video stream upstream, analyse it, and then send something back down just to act, the cost would be tremendous, because you have to send it over some carrier and be connected to the internet, which in rural areas might be quite difficult. And then there are latencies: when you have to send something out, you have bottlenecks at every step of the way. So if you need very fast response times, it's good to have everything combined in one system, especially with embedded devices, where the RAM and memory are directly connected to a shared CPU and maybe neural accelerators. With no bottleneck between the memory and the computing units, it can be extremely fast; the only limiting factors are the amount of memory to hold your model and algorithms and, of course, the calculation units. If you have a very complex deep neural network, it can happen that it runs super slowly, but more and more neural processors are coming to the market. We see a huge change here: back in 2019 we had maybe a handful of vendors, Google and some others, doing embedded devices specifically for AI. Of course you have Nvidia, but that's more GPUs; vendors specifically dedicated to embedded AI were just a handful, and now I think we have over 100.
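As an illustration of the train-door example, here is a sketch of what the on-device side could look like with the lightweight TensorFlow Lite runtime. The sensor-reading function, the numbers, and the file name are hypothetical; the point is that the heavy data stays local and only a tiny verdict leaves the device.

```python
import numpy as np
import tflite_runtime.interpreter as tflite  # slim runtime meant for edge devices

# Load the quantized model produced in the previous step (hypothetical file).
interpreter = tflite.Interpreter(model_path="door_monitor_model.tflite")
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

def read_door_sensors() -> np.ndarray:
    # Hypothetical stand-in for sampling motor current (A) and temperature (°C).
    return np.array([[0.42, 21.5]], dtype=np.float32)

# Inference happens where the data is created; no video stream goes upstream.
interpreter.set_tensor(input_index, read_door_sensors())
interpreter.invoke()
days_to_failure = float(interpreter.get_tensor(output_index)[0][0])

# Only this kilobyte-scale summary would ever need to be transmitted.
print(f"door OK, predicted failure in ~{days_to_failure:.0f} days")
```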

Does Europe have a chance against the US and China when it comes to AI development? [20:29] | Dr Werth shares the following views. Yes, of course, although it's difficult, of course. First of all, it's a matter of budget. If you look at the money put into startup companies, I think the United States is at something like 15 billion US dollars pushing AI startups; China has about half of that, seven or eight billion; and Europe is at three or four billion. So there's a steep decline. But that is only one thing, and the question is always: who's inventing, and who's using? It sounds stupid, but there's a small difference: I'm an electrical engineer, not a computer scientist, so I'm not inventing the newest deep learning networks. The United States, for example, comes up with a lot of novel algorithms, but we can adapt them and use them too, maybe always a bit behind; that's the point. But I see a lot of good use and really stable algorithms also coming from Germany; one of the top translation websites, DeepL, comes from Cologne in Germany. And if you look at autonomous-driving algorithms from Mercedes-Benz, VW, etc., they do a great job. Of course they started much later; at first it was all taken as a joke, and now it's: oh my goodness, this is not a joke. They have put a lot of money into it, they have top engineers, and the quality of the algorithms is really good. The research as well: if you look at research papers, the picture is a bit different. China, they say, produces the most peer-reviewed papers, but the quality is actually much better in Europe. In output, Europe would actually take second place, ahead of the United States, which publishes less than Europe; and quality-wise Europe is, if not ahead, at least better than China. So not all hope is gone. Of course, just by manpower, China has millions of electrical engineers. And the United States has a different mindset; they're explorers at their core, so it's easy for them to found one startup after another: fail, doesn't matter, next one. Europe is slower: let's see what happens, is it good, test it again. But that also means that when Europe, or Germany, brings something to the market, it's good. So I don't see that hope is gone at all; I think it's an equal race.

What are the challenges of AI in this era? [24:05] | Jan states the following. One big thing is natural language processing. As I said before, a lot of research is going into it, but it's very difficult, because you have a combination of the dictionary words, the grammar, and the sentence structure: a very complex topic. How to put them in the right order, to get the context of a sentence, the meaning of a sentence, that will be one of the challenging things. And here we actually see that quantum computing might play a huge role: people have started to use quantum computing for natural language processing, or mix the two approaches, and they're seeing very good results. So I'm looking forward to seeing quantum computing working on natural language processing and other problems along the way. Another problem is underrepresentation. Google found out that models work in training but sometimes don't work that well in the real world. I could go into detail, but maybe better not to. I remember when that news first came out, the reaction was: oh my goodness, artificial intelligence is not working, face recognition is completely unstable. But I had seen the same thing a couple of years back with adversarial attacks: you want to identify a face, and then you put a crafted pattern somewhere, and instead of a face the system tells you it's a dog. That's called an adversarial attack, and back in the day people said: it's over, you can't use AI if it's so easily tricked, no chance. But what it actually meant was that people said: okay, it's not flawless, so we have to work on it further. So they made it more stable.
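For readers unfamiliar with the trick Jan refers to, here is a minimal sketch of the classic fast gradient sign method (FGSM), one well-known way to build such an adversarial pattern. The model and "image" below are stand-ins; a real attack would start from an actual photo and the classifier under attack.

```python
import tensorflow as tf

# Stand-ins: a public ImageNet classifier and a random tensor in place of a photo.
model = tf.keras.applications.MobileNetV2(weights="imagenet")
image = tf.random.uniform((1, 224, 224, 3))   # hypothetical input image
label = tf.one_hot([208], 1000)               # arbitrary "true" class

loss_fn = tf.keras.losses.CategoricalCrossentropy()

with tf.GradientTape() as tape:
    tape.watch(image)
    loss = loss_fn(label, model(image))

# FGSM: nudge every pixel slightly in the direction that increases the loss.
# The perturbation is nearly invisible, yet it can flip the predicted class.
gradient = tape.gradient(loss, image)
adversarial = tf.clip_by_value(image + 0.01 * tf.sign(gradient), 0.0, 1.0)
```

The fix Jan describes, making models "more stable," commonly includes training on such perturbed examples so the network learns to ignore them.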

Should there be higher standards with respect to transparency, privacy, fairness, and accountability with respect to the public use of AI? [26:58] | Jan explains the following. Yeah, maybe you heard it: Google actually fired their top AI ethics researchers, and there was a controversy about that, because it was about natural language processing again; I keep talking about that without even being in the field that much. There were several points. First of all, we have to think about the carbon footprint: they pointed out that training a huge neural network, especially in natural language processing, takes a lot of energy, so it has a huge carbon footprint, and there's also just the amount of money that goes in there. So access is of course not equal. The democratisation of AI was the first step that actually brought us to today. Before Azure and AWS, we were not really able to train a significant model, because we just didn't have the computation power; you would have had to buy a workstation for something like $10,000, and you probably wouldn't do that. Nowadays we have AWS and Azure, and we just rent a high-powered machine for a couple of hours. That was the democratisation of the whole process. But the next step is that the really sophisticated models, the ones for the next era of AI, need a lot of money, I think.

And, of course, with democratisation, who's participating? Who owns those algorithms? It's the big tech companies: Microsoft, Google, Amazon, and Facebook, and to be honest, that was also the point the ethics researchers made. There was another point, especially for natural language processing: a huge amount of data has to be gathered, and the problem is that there is so much of it you cannot just analyse it all; you have to take it on good faith that it should be fine. And what they said, and these were the finer points, is that you have to gather the data from the internet, but on the internet we tend to speak differently: we are much rougher, even racist; we write things we would never say to somebody's face. Yet that becomes the standard a natural language processing network is trained on, meaning the network sees a racist comment as just normal, as how we speak. There is also a bias towards the United States and England, towards English-speaking countries, because a country like Portugal is maybe not as well represented in written text as the United States. So you get bias towards a country, and you also risk making the language bland: if you build huge models on one language, the language that represents us in Google and elsewhere, then language suddenly becomes less versatile. There were some other issues too. But the point is that people are thinking about this and know the problem, and there are many sets of AI principles: the Beijing principles, the Montreal Declaration, the G20 and OECD principles. Germany probably has its own principles too; maybe even individual cities have their own AI principles, I don't know. Everybody comes up with principles saying we have to do something about this. And it's mostly about global regulation: we can't do this on our own, we have to do it in the bigger picture. And we have to have transparency: how is your algorithm actually working? If something goes wrong, you have to figure out where it goes wrong. Can we recreate this error? Can we fix it? So it's not just the classic black box a lot of people talk about, which is actually mostly not the case. And then: who's responsible? It's not the algorithm; the algorithm is just an algorithm. You can't say it was the machine; no, it was a human, a human programmed it. This is also a tip for the people in development: if you code something, you are responsible for what happens. If your boss tells you he wants the algorithm to be biased against, say, women, and you do it because you're pressured by your boss, you are responsible, not your boss, even if you have it as a written email; maybe in court it's different, I'm not a lawyer, but I'm just saying you're responsible. The same thing goes for AI.
And who's responsible if you write your code with the best intentions and the standards change over time? Maybe in a year people will say that was the worst way to program, but at the time you didn't know better, so you're fine; you have to meet the standard of your time and try the best you can. But the other main key point is banning the AI arms race; maybe that is the biggest thing all of those AI principles have in common: we don't want an AI arms race. Yet in reality, look at Hong Kong, look at Syria: autonomous weapons are already there. It's a Pandora's box that is really hard to close. I know there are a lot of good efforts and good intentions, and I hope they come through and it will not end catastrophically.

What are your predictions on how AI will change the world in 2050? [34:01] | I have to repeat myself from the embedded boards panel, because a similar question was asked there: I can't answer that, because we humans tend to think in linear terms. You think: I know what happened yesterday, I know what happened a month ago. But this is not a linear development. If you look at the development over the timeline, it's maybe not exponential, but it's at least some nonlinear function. So it's really difficult to say. What I do expect is that more general AI will be there: understanding context, in time and in the spatial domain. This is a table, and there is something on the table, so there is a reason why it's on the table, this kind of stuff, and also direction in time; and then combining all of that to get a more general AI. But I don't want to say never, because a lot of people ask: will AI overturn the human race? The Asilomar AI principles actually state: prepare for all possibilities. It would not be good to say this will never happen, so we don't have to think about it. Think about the worst case, prepare for the worst case, and hopefully it doesn't happen. So I will not say AI will never be more intelligent than humans; I would say I don't know. Just think: how would the world look if AI took over, and how can we think of alternatives? So don't push it aside. Think open, think broadly, and just be part of the community. Maybe that's a good final sentence.

Notable Quotes from Dr Jan Werth

“[…] with reducing certain types of labour, there might be a chance, and there might be the idea: okay, now the human has a certain quality which a machine cannot have, and if we can bring that quality into society, it enriches society. It's not about the amount of time you spend at work, but more about what type of human you are and what you can bring to society to lift it up, because maybe classic labour can be done more and more by machines. But again, we have to tackle this, because it also can go wrong.” [5:36]

“[…] the democratisation of AI was the first step that actually brought us to today. Before Azure and AWS, we were not really able to train a significant model, because we just didn't have the computation power; you would have had to buy a workstation for something like $10,000, and you probably wouldn't do that. Nowadays we have AWS and Azure, and we just rent a high-powered machine for a couple of hours.” [28:12]

“Just think of it: how would the world look if AI took over, and how can we think of alternatives? So don't push it aside.” [36:00]

"Think open, think broadly. And just be part of the community. [36:16]

CONNECT with Jan:

LinkedIn: https://www.linkedin.com/in/jan-werth/

Medium: https://janwerth.medium.com/

YouTube: https://www.youtube.com/channel/UClH75cIszZQuyyUOSIszwOw 

Contact PHYTEC:

https://www.phytec.de/en/startseite

https://www.phytec.de/en/leistungen/kuenstliche-intelligenz/

https://www.phytec.de/en/unternehmen/schulungen-trainings/ki-schulung/

FOLLOW Kadian:

Instagram: https://www.instagram.com/kadiandavisowusu/

Linktree: https://linktr.ee/kadiandavisowusu

If you have been empowered by the information shared on this podcast, then please clap using the clap emoticon. Please feel free to ask any questions or to share your comments using the comment emoticon.


Created by

Kadian Davis-Owusu

Kadian has a background in Computer Science and pursued her PhD and post-doctoral studies in the fields of Design for Social Interaction and Design for Health. She has taught a number of interaction design courses at the university level including the University of the West Indies, the University of the Commonwealth Caribbean (UCC) in Jamaica, and the Delft University of Technology in The Netherlands. Kadian also serves as the Founder and Lead UX Designer for TeachSomebody and is the host of the ExpertsConnect video podcast. In this function, Kadian serves to bridge the learning gap by delivering high-quality content tailored to meet your learning needs. Moreover, through expert collaboration, top-quality experts are equipped with a unique channel to create public awareness and establish thought leadership in their related domains. Additionally, she lectures on ICT-related courses at Fontys University of Applied Sciences.


© Copyright 2024, The BoesK Partnership