ExpertsConnect EP. 27: Understanding Artificial Intelligence with Jan Werth (Series I)


“Artificial intelligence enables computers and machines to mimic the perception, learning, problem-solving, and decision-making capabilities of the human mind.” – IBM Cloud Education

Artificial intelligence (AI) has become ingrained in our everyday lives, whether we're finding podcast or movie recommendations, receiving live traffic updates, getting driving directions, sending emails, navigating social media, or using our smart home devices. However, while AI can be beneficial (e.g., it improves healthcare, saves time and money, and increases productivity), it also poses a number of threats (e.g., ethical challenges). In this three-part series, Understanding Artificial Intelligence, we will dive into a number of AI concepts, discuss the timeline of artificial intelligence, explore various AI fields and applications, dive into its challenges, and consider the future of AI.

In this episode of ExpertsConnect, Jan Werth (PhD), Lead Data Scientist at PHYTEC, explains the notion of machine learning. He dives deeper into the differences between artificial intelligence, machine learning, and deep learning. Later, Dr Werth discusses some of the tools one would use to start a career in machine learning.


Jan's journey to AI [0:48] | Jan states the following. I actually did quite terribly in school. So I failed school, but I had the chance to become a professional cabinet maker, and I spent the first three years of my professional life on construction sites. Then I noticed it's not the way of life I intended for the rest of my life. So, luckily, I was able to start studying electrical engineering. I focused on signal processing there, which was perfect, because later in my studies I noticed it's actually one of the most interesting fields ever. In the beginning, I focused mostly on healthcare applications, mostly at my university in Treia, where there was a professor who was a really great mentor. He was working on the retina implant, and it was just fascinating! Then I had the chance to finish my master's thesis research in Eindhoven, on automated pneumonia detection. It was a really nice algorithm, but it was really classic signal processing, like wavelet analysis, etc. Still, it was well conducted, with really nice results, so they told me I should apply for a PhD position. So I applied for a PhD with the Eindhoven University of Technology, Philips Research, and Maxima Medisch Centrum. That was really interesting because my topic was just fantastic: automated preterm infant sleep analysis. And it really quickly got to the limits of classic signal processing. We noticed it was not working on preterm infant sleep; it was too difficult. So we said, okay, we have to try other methods, and that was the first time I actually looked at machine learning, and then later also at deep learning. So now I have over 10 years of experience in signal processing; around 2013 or so I started with machine learning, and close after that came deep learning.
Maybe I'll explain the difference a bit later, but the two are quite similar, and luckily I came from signal processing, so that made the transition really easy. After I finished my PhD, I moved back to Germany due to family reasons, to Mainz, a beautiful city; I could talk for an hour about Mainz, you should come and visit. And I found a position, interestingly, at an electronics producer, PHYTEC, one of the hidden champions in the business for some 26 to 30 years now. They're doing an amazing job at creating embedded devices for industry. At PHYTEC, I realized I could close the gap between electronics and AI. I could also bring a lot to the company by teaching AI, and especially by teaching our customers how to solve their problems using machine learning and artificial intelligence.

What exactly is machine learning? [4:39] | Jan mentions the following. It's a huge topic, and you could dissect it into classic machine learning and deep learning. But generally, you can see machine learning as a statistical method: algorithms that learn to recognise patterns. And the thing is, you don't do that by hand; the machine learns by example. You give it examples, and it starts to learn and understand from those previous examples. Maybe a good example would be the weather. Say I want to know whether it's going to rain tomorrow or not. You can think: okay, what's the weather today? Where do I live? Am I close to the sea? What's the pressure, what's the temperature, what's the time of year? We call those features. So we can come up with those features, and then you could programme it by hand: okay, if the weather was nice, and if it's July, and if it hasn't rained for three days in a row, most likely it won't rain tomorrow. And you might have a good prediction there. But we know forecasts got better over the years, and here's the thing: the more complex the problem gets, the more difficult it is to do that by hand. That's where machine learning comes into play. The good thing is, you can just bundle up a lot of data, and you can also bundle up a lot of those features where you maybe have some idea of the problem. But there might be so many features that it's nearly impossible to see by hand how they combine. One feature might be obvious, like, okay, this is a super good feature. But maybe another feature, combined with two or three others, is even better than that one. And this combination you probably wouldn't see as a human; at best you have a gut feeling that there's something else going on.
You can't really put that gut feeling into values by hand, but a combination of those features can capture it. A super simple example would be: I measure my temperature, and if it's over 37 degrees Celsius, I'm sick; if it's under, I'm fine. This is simple; you don't need any machine learning for that. But especially with the human body, as soon as you go a bit deeper and ask, okay, I feel sick, but are there other indicators, you very quickly come to a point where you have so many features interacting with each other that you cannot do it by hand. That's why the precision of the outcome kept getting better and better: the algorithms have become more sophisticated and more stable, and they give more reliable results with machine learning. And this idea of learning from previous data and deriving these dependencies between features, that is machine learning.
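Jan's fever example can be sketched in a few lines of plain Python. This is a hypothetical illustration (not from the episode): instead of hard-coding the 37 °C rule, we "learn" the decision threshold from labelled examples, which is the simplest possible instance of a machine learning from examples rather than by hand.

```python
def predict_rule(temp_c):
    """Hand-written rule: sick if temperature exceeds 37 degrees Celsius."""
    return temp_c > 37.0

def learn_threshold(samples):
    """Learn the threshold that best separates labelled examples.

    samples: list of (temperature_celsius, is_sick) pairs.
    Tries the midpoint between every adjacent pair of sorted temperatures
    and keeps the one with the fewest misclassifications
    (a one-dimensional decision stump).
    """
    temps = sorted(t for t, _ in samples)
    candidates = [(a + b) / 2 for a, b in zip(temps, temps[1:])]

    def errors(th):
        return sum((t > th) != sick for t, sick in samples)

    return min(candidates, key=errors)

# Labelled examples: (temperature, was the person sick?)
data = [(36.4, False), (36.8, False), (37.1, False),
        (37.9, True), (38.4, True), (39.0, True)]
threshold = learn_threshold(data)
print(threshold)  # 37.5: the midpoint between 37.1 and 37.9
```

With one feature, learning the rule by hand is easy; the point of the sketch is that the same "minimise errors on examples" idea still works when there are dozens of interacting features and no human can eyeball the boundary.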

What's the difference between Deep Learning, Machine Learning, and Artificial Intelligence? [9:34] | Jan explains as follows. Mostly, we talk about artificial intelligence, and this is interesting wording, because it's a huge bubble which contains several other terms. Artificial intelligence actually describes more or less everything, and it can incorporate much more than just machine learning; general computer science is also part of it. Then machine learning is part of that. And machine learning, in turn, has a subfield called deep learning. So deep learning is machine learning, but machine learning is not only deep learning; it's layered. If you look at the image here, you can see different bubbles. In the middle you see the machine learning part, and inside it the deep learning part. Natural language processing, cognitive computing, big data, etc. are also parts of artificial intelligence, but being part of AI doesn't mean something is only artificial intelligence. So there's a lot going on around these terms. Artificial intelligence just describes the whole mechanic: you get data, you pre-process your data, you use some algorithm to understand your data better and extract some knowledge from it, and then you use that to steer some process. Classic machine learning came first, with hand-crafted algorithms, and at some point people said, okay, we want to solve more complex problems. That's where deep learning comes in: self-learning systems, and people misunderstand that quite a lot. When somebody says a self-learning system exists, they think: oh, perfect, I buy a camera with a system somewhere, it records something, and then it learns from that on its own. That is actually not correct.
Let's say you can programme that, but that's something completely different, a special topic. Self-learning here generally means that the algorithm in itself learns to check and balance itself. What it does is: you feed it data, and as I said, it learns from previous examples. So I give it examples and tell it: by the way, now we want to look at cats and dogs. Hey, this is a picture of a cat; this is a picture of a dog. And then the algorithm learns. Now, what is the self-learning part about that? That's the difference between classic machine learning and deep learning: in deep learning, we have so-called backpropagation, meaning we do a run through the whole system, and at the end we compare what the system thinks it detected with the truth. It's like: oh, I think with 70% probability it's a dog. But it's actually a cat. Sorry, no, you're wrong. And then it's not just, okay, I'm wrong, I have to start over; instead, all the parameters in this huge network are adjusted to produce better results the next time. That is the self-learning part. You don't have to tweak those parameters by hand, which is what you had to do beforehand with classic machine learning, and that actually gives you a huge headache.
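The check-and-balance loop Jan describes (predict, compare the prediction with the true label, push corrections back into the parameters) can be sketched with a single artificial neuron in plain Python. This is a hypothetical toy, not the cat/dog network from the episode; real deep learning frameworks like TensorFlow or PyTorch automate exactly this update for millions of parameters.

```python
import math

def sigmoid(z):
    """Squash a score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

# Toy labelled examples: one input feature, label 1 ("dog") or 0 ("cat").
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]

w, b = 0.0, 0.0   # parameters the system adjusts itself
lr = 0.5          # learning rate: how big each correction step is

for epoch in range(200):
    for x, y in data:
        p = sigmoid(w * x + b)   # forward pass: "I think it's p% a dog"
        error = p - y            # compare prediction with the true label
        w -= lr * error * x      # backward pass: nudge each parameter
        b -= lr * error          # in the direction that reduces the error

print(sigmoid(w * 2.0 + b) > 0.5)   # True: x = 2.0 is now classified as "dog"
```

The key point is the last two lines of the loop: the programmer never sets `w` or `b` by hand; the comparison between prediction and label drives every adjustment, which is the "self-learning" Jan refers to.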

What tools do I need to know about if I wanted to start a career in classic machine learning or deep learning? [18:57] | Back in my day, we mostly worked with MATLAB. We started off with MATLAB, and I had to make the switch to Python. Nowadays, luckily, Python is more available and broadly used. So Python, first of all, is the tool to go to. Also, if you're working with Windows, I would highly recommend changing to Linux as a baseline, because once you get started with Linux, all the machine learning stuff is much easier to set up because of the libraries. Then, of course, you choose one of the libraries you want to go for: either TensorFlow or PyTorch, the two main ones. There are others out there, MXNet and so on, but PyTorch and TensorFlow are the main libraries; just choose one. Back in the day, let's say 2015/2016, there were huge differences between PyTorch and TensorFlow, but if you're starting now, you're just in time, because with TensorFlow 2 they are more or less interchangeable; even the code is really similar nowadays. Maybe also make yourself familiar with one of the big data-centre services like Azure or AWS. Of course you can run stuff on your laptop; even a basic classic machine learning algorithm will run on a laptop. But I wouldn't recommend it, because you have to iterate a lot, and if each run goes a bit faster, the development process as a whole is faster. And one thing you should not forget about is classic machine learning. A lot of people now think, oh, it's all deep learning. True, deep learning is maybe more trendy, but classic machine learning definitely has its place.
And especially, as I said before, if you have less data, classic machine learning algorithms can and will most often outperform deep learning networks, up to a certain point. Deep learning methods really scale well with a lot of data, meaning if you know you will get more data, you can also start with deep learning, because the more data you get, the better the model will become. That is not the case with classic machine learning; there you kind of stagnate at some point. But if you have less data, first go with classic machine learning. Also think about later computation constraints; I'm coming from the embedded world now. If you're thinking of putting something on an embedded device, the raw calculation power needed just for running a deep learning network (not training, just inference) is usually quite different from that of a classic machine learning model. So if you have that in mind, say you need a really small device, maybe battery-operated, a classic machine learning algorithm might actually be more suitable than deep learning. So don't disregard classic machine learning at all. And if you want to look into classic machine learning, the library is called scikit-learn. That's the library for classic machine learning, although you can actually do some of it with TensorFlow too, I think.
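To illustrate how little code and data a classic method needs, here is a hypothetical k-nearest-neighbours classifier written in plain Python. In practice you would use scikit-learn's `KNeighborsClassifier` (two lines: `fit`, then `predict`); the standard library is used here only to keep the sketch self-contained, and the weather-like feature names are invented for the example.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    train: list of (feature_vector, label) pairs.
    query: feature vector to classify.
    """
    def sq_dist(a, b):
        # Squared Euclidean distance; the square root is not needed for ranking.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    nearest = sorted(train, key=lambda pair: sq_dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Tiny training set: (temperature, humidity) -> "rain" / "dry"
train = [((18, 90), "rain"), ((17, 85), "rain"), ((16, 95), "rain"),
         ((25, 40), "dry"),  ((27, 35), "dry"),  ((24, 45), "dry")]
print(knn_predict(train, (19, 80)))  # rain
```

With only six examples, a method like this gives sensible answers immediately, whereas a deep network would have nothing to learn from; that is the small-data advantage Jan describes. On an embedded device, this whole classifier is a handful of arithmetic operations per prediction.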

So as I said before, we have PyTorch, we have TensorFlow, MXNet, and others; those are the libraries. Then we have AWS, Azure, Google Colab, and Kaggle; these are all platforms where you can run your models on more sophisticated machines, so they run a bit faster. And then, to actually write the code, I use Spyder, Jupyter Notebook, and Visual Studio Code. They are, to be honest, quite different. Spyder and Visual Studio Code are really similar, but for integration into larger systems, Visual Studio Code is a bit handier because you can integrate things better. For quickly trying things out and sharing with your colleagues, Jupyter Notebooks are really nice, because you can add really nice descriptions of your code, you can put in graphics, etc., and you can run small parts of your code to see if they're working. So they all have their benefits and their downsides. Again, just choose one and see where you best fit; I think over time you will end up using all of them at some point anyhow.


Here are the links to all AI tools referenced by Jan

Programming languages 

Operating Systems 

Python Libraries 

Cloud Service Providers 

Python IDE and Code Editors

CONNECT with Jan:




Contact PHYTEC:

FOLLOW Kadian:



If you have been empowered by the information shared on this podcast, then please clap using the clap emoticon. Please feel free to ask any questions or to share your comments using the comment emoticon.

Published By
Kadian Davis-Owusu (PhD)
Kadian has a background in Computer Science and pursued her PhD and post-doctoral studies in the fields of Design for Social Interaction and Design for Health. She has taught a number of interaction design courses at the university level including the University of the West Indies, the University of the Commonwealth Caribbean (UCC) in Jamaica, and the Delft University of Technology in The Netherlands. Kadian also serves as the Founder and Lead UX Designer for TeachSomebody and is the host of the ExpertsConnect video podcast. In this function, Kadian serves to bridge the learning gap by delivering high-quality content tailored to meet your learning needs. Moreover, through expert collaboration, top-quality experts are equipped with a unique channel to create public awareness and establish thought leadership in their related domains.