This is a transcript from the AI and the Future of Work podcast episode featuring Christopher Nguyen, serial entrepreneur, AI professor, and CEO of Aitomatic, who discusses human-first vs. data-first approaches to machine learning.

Dan Turchin (00:19):
Good morning, good evening, depending on where you’re listening. Welcome back to AI and the Future of Work. Thanks again for making this one of the most downloaded podcasts about the future of work. If you enjoy what we do, please like, comment, and share in your favorite podcast app, and we’ll keep sharing great conversations like the one we have for today. I’m your host, Dan Turchin, advisor at InsightFinder, the system of intelligence for IT operations, and CEO of PeopleReign, the AI platform for IT and HR employee service. We spend a lot of time on this podcast talking about data: how we get it, where it comes from, how it’s used to build AI models, and what happens when it’s inherently biased and leads to poor automated decisions. Today we get to explore a different question: what if our approach is completely flawed?

Dan Turchin (01:12):
Maybe, just maybe, the answer isn’t more data. Maybe AI can be human-first rather than data-first. Perhaps we can instead capture what small numbers of experts do to solve problems and automate decisions based on that. It’s a radical idea, but for certain types of problems, like fixing industrial equipment, it just could work. Aitomatic is challenging traditional ideas about the role of data and AI in solving problems in the IoT, or Internet of Things, space, although the underlying principles apply equally well to a broad set of domains, everything from, say, aquaculture to carbon dioxide capture. Christopher Nguyen, Aitomatic CEO, is a Renaissance man and a serial entrepreneur. He’s as comfortable talking politics as he is talking neuromorphic computing. Christopher and a quite impressive team of AI engineers and research scientists founded Aitomatic last April, following the successful exit of Arimo to Panasonic a few years back. Christopher received his doctorate and master’s degrees in EE from Stanford and his undergrad from Berkeley. Among many other accomplishments, Christopher also founded and taught at the Department of Electrical and Computer Engineering at the Hong Kong University of Science and Technology. If you don’t already, follow Christopher on Twitter at @pentagoniac. Thanks to our mutual friend Tess Hau from Tess Ventures for the introduction. And without further ado, it really is my pleasure to welcome Christopher to the podcast. Christopher, why don’t we get started by having you share a little bit more about your background and how you got into this space?

Christopher Nguyen (03:03):
Thanks, Dan. Pleasure to be here. Well, you’ve covered quite a bit, so maybe I’ll just start at the end. The journey that I’m on could be dated back maybe 10 years. The most recent part of it: after I’d left Google, I updated my LinkedIn and said that I wanted to work on something in big data slash machine learning. This was 2011, 2012. Big data was still a term that people were just starting to grapple with, let alone machine learning, so it didn’t make any sense to anyone. But I did a podcast earlier, around 2014, where I said the reason for big data is machine learning. And again, that didn’t quite make a lot of sense to people coming from the big data side. But of course, now it’s an obvious fact.

Christopher Nguyen (03:54):
So I started a company to do the machine learning, or what people now refer to as the AI layer, on top of big data storage, big data processing, and the application layer. Long story short, we were acquired by Panasonic back in 2017, and by then we had built what I liked to refer to at the time as a bunch of geeks with algorithms. Panasonic was about to celebrate its hundredth anniversary in 2018; it was founded by Konosuke Matsushita in 1918. And we did not know this at the time, but the reason for their approaching us was initially as a customer. We were very excited about Panasonic and companies like it as potential customers.

Christopher Nguyen (04:54):
But what Panasonic did was set up a bakeoff between us and a few other well-known companies to solve some of their industrial challenges, specifically industrial AI. What they were really looking for was a team and a technology that could help Panasonic be part of that transition into the second century of the company. The initiative can be thought of as software-to-hardware, but really it was moving up the stack, from a components player to somebody that can control the user experience. Fast forward: after we became part of Panasonic, we remained our own unit, and so I helped with a lot of the global AI effort across the divisions. People may not know that Panasonic today is really an industrial giant, not so much in consumer electronics, but a lot in the industries.

Christopher Nguyen (05:56):
If you drive a Tesla, or you’ve seen a Tesla, the batteries in there are manufactured by Panasonic. If you fly, and you look at the LCD screen and enjoy the wifi, 70% of that market share is Panasonic Avionics. On and on: the cold-chain supply chain that gets fish from the ocean to your dining table, the technology there is provided by Panasonic, and automotive, and on and on. The genesis of Aitomatic came out of our initial failure, as part of Panasonic, applying our machine learning algorithms to problems like industrial predictive maintenance, energy optimization, and so on. We quickly ran into a very interesting problem. We’ll expand on it, but I would say essentially there is not enough data for machine learning in that context.

Christopher Nguyen (07:00):
And I can talk more about why that is, but it took us a while, six months to a year of beating our heads against the wall, before we realized that there was in fact a lot of asset to be leveraged to help solve the industrial AI problem. It just didn’t come from data. It turns out it comes from the domain expertise of the 250,000 people that are already part of Panasonic. And so we developed something that today we call knowledge-first AI. We left Panasonic and launched the company back in April 2021, and we just completed our first year of operation.

Dan Turchin (07:43):
Let’s build on that theme. I teased in the opener about human-first versus what I’m calling data-first AI. Why is that human-first approach better suited for the problems that you’re solving?

Christopher Nguyen (07:59):
Right. So one way to look at it is just to think about machine learning. You could say: given a sufficient amount of data and a sufficient amount of compute, we could use machine learning algorithms to essentially discover virtually any pattern in the data. Put a very crude, grossly simplified way: machine learning can solve almost anything given enough data. But whether there is enough data is a big if. And it turns out, and I’ve thought about this, in a lot of industries, any company that has a significant physical dimension is quite different from the world that I came from, that most folks in Silicon Valley come from: the Googles, the Facebooks, the Twitters of the world that are digital-first companies.

Christopher Nguyen (08:54):
There’s a huge world out there. We still drive our cars, we still eat our lunches. We’re still very much atomic beings, not just digital beings. And when it comes to industries that have a large physical component, we’re talking about industrial AI, manufacturing, and so on, it turns out the question of having the right kind of data, and a sufficient amount of that data, is a big challenge. Machine learning fails at these problems. I can give you a particular example of that, but there’s a pattern out there, and it’s not just us that have seen it. We’re lucky in the sense that we were actually inside the belly of the beast, but a lot of vendors outside that try to sell to manufacturers and other industrial companies find that the machine learning algorithms alone will not work to solve these business-relevant problems.

Dan Turchin (09:56):
So let’s take a specific example, maybe in the manufacturing space. Let’s say a very sophisticated piece of HVAC equipment. And there are, relatively speaking, like you’ve educated me about, a small number of quote-unquote experts on how to fix that HVAC equipment. And what you want to do is capture the total knowledge of all the experts about how to fix the particular problems that the HVAC equipment has, and codify that. It’s a non-trivial problem. How do you go about capturing what’s in this small, not trivially small, but relatively small, number of experts’ heads, and codifying it in a way that you could train models to recommend fixes for problems?

Christopher Nguyen (10:43):
Well, let’s talk about the motivation. Why do you want to do such a thing in the first place? Take the example you just raised: HVAC equipment. There’s a field called cold chain, the cold supply chain, that we worked on quite a bit when we were part of Panasonic. Let’s talk about the predictive maintenance use case within that. In the past, we had what’s called reactive maintenance: something breaks, a sensor goes off, and then somebody goes out and says, okay, let me take a look. Oh, it’s the compressor. And then there is preventive maintenance, which is an improvement, because reactive maintenance tends to be very expensive. It’s not the equipment you worry about, it’s the downtime. The downtime can be tremendously expensive, and in life-critical situations, I don’t know how you measure the loss of lives.

Christopher Nguyen (11:36):
So preventive maintenance essentially says: okay, every six months, let’s go through and just replace this set of components, so that the chance of their aging and failing is very low. The holy grail is predictive maintenance, which is fixing things before they catastrophically fail, but only those components that need it. So you need to be able to answer the question: can you tell me the probability of this compressor failing over the next two months? To be very precise, what that is is failure prediction. Because a lot of the people who have implemented predictive maintenance out there, or claim to have done so, have not actually done failure prediction. What they have done is anomaly detection. And anomaly detection can be done with machine learning, with massive amounts of unlabeled data.

Christopher Nguyen (12:30):
All that is, is that you take in as much temporal, or time series, data as you can, and over time you discover certain patterns where you would say, well, these are normal things, I’ve seen them before. And one day you wake up and the sensors are sending a series of signals that are abnormal. They’re not necessarily alarms, but they’re patterns you haven’t seen before. Then you trigger an alert. You say, okay, I’m seeing something strange. That is useful, but it is not actually failure prediction. All it says is something looks different, not even that something is wrong. It could be that the power source is a little different today. So what happens is, machine learning ends there. In order to then map that to potential failures, machine learning has to enter what’s called supervised learning.
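To make the distinction concrete, the unsupervised step Christopher describes can be sketched in a few lines: learn a baseline from unlabeled sensor history, then flag readings that don’t fit the learned pattern. This is a toy illustration, not anything from a real deployment; the sensor values and the three-sigma threshold are invented for the example.

```python
import statistics

def fit_baseline(normal_readings):
    """Learn what 'normal' looks like from unlabeled historical data."""
    return statistics.fmean(normal_readings), statistics.pstdev(normal_readings)

def is_anomalous(reading, baseline, sigmas=3.0):
    """Flag readings that deviate from the learned pattern.

    Note what this does and does not say: it reports 'something looks
    different', not which component is failing, or even that anything
    is wrong at all.
    """
    mean, std = baseline
    return abs(reading - mean) > sigmas * std

# Unlabeled time-series history from one sensor (illustrative values)
history = [101.2, 100.8, 101.5, 99.9, 100.4, 101.1, 100.6, 100.9]
baseline = fit_baseline(history)

print(is_anomalous(100.7, baseline))  # a typical reading -> False
print(is_anomalous(115.0, baseline))  # an unseen pattern -> True
```

Mapping that “something looks different” alert to a specific failure is precisely where the unlabeled-data approach stops and either labeled data or human expertise has to take over.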

Christopher Nguyen (13:32):
In other words, there must be a lot of labels of past failures of the same type, so that I can say: okay, given this, I can learn that. But the problem is that equipment is generally designed not to fail a lot. And even when you have past failures, the service personnel who go out there to replace things tend to replace a lot of things. So if you look at the log records, it’s not just the compressor, it’s also the filter, it’s also the temperature sensor, and so on. When you look at those records, they’re not useful labels for machine learning. And yet when you give the same data to a human expert who knows that kind of system, they will look at the sensors and say: based on this reading, based on this pressure being too high and the temperature being too low, I think you should look at the compressor.
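That heuristic, high pressure together with low temperature points to the compressor, is exactly the kind of expert knowledge that can be written down directly, with no failure labels required. A toy sketch, with sensor names and thresholds invented purely for illustration, not real service values:

```python
def expert_compressor_check(pressure_psi, temp_c,
                            pressure_limit=350.0, temp_limit=-5.0):
    """Toy encoding of one expert rule: 'pressure too high and
    temperature too low -> look at the compressor'.
    The thresholds are illustrative, not real service values."""
    if pressure_psi > pressure_limit and temp_c < temp_limit:
        return "inspect compressor"
    return "no compressor-specific signal"

print(expert_compressor_check(pressure_psi=380.0, temp_c=-8.0))
print(expert_compressor_check(pressure_psi=300.0, temp_c=2.0))
```

A real knowledge-first system would combine many such rules, with probabilities attached, rather than a single hard threshold, but the core move is the same: the label-free heuristic comes from the expert, not from historical failure data.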

Christopher Nguyen (14:28):
So that’s the original motivation, when we realized we needed to talk to these experts. Because all we can generate with anomaly detection is the signal that something looks different. It still requires a human expert, with their domain expertise, with 30 years of experience, to look at the data at that time and then say: based on this, I believe the probability of failure A is this, and of failure B is that. It took us a while longer to essentially accept defeat and say, okay, all right, we keep having to go through this manual process of asking the three experts that are available in all of the country of Japan, so why not start to figure out how to codify whatever procedures they go through? And it was when we started doing that, that we began to succeed as an industrial AI unit.

Dan Turchin (15:27):
So academically, this approach makes sense, and I’m certain it does from the perspective of building a business. But if I’m one of the three experts, my reaction is, I want to fiercely protect that tribal knowledge, because I don’t want to make it easy for you to automate me out of a job. How do you respond to that?

Christopher Nguyen (15:49):
Well, these three experts are having to service all of Japan, so they’re actually overworked. But you touched on something very interesting. There was a talk I gave at the, I think it was 2018, O’Reilly AI Conference in London, and one of the questions was: how do people feel about being automated away? Because there was talk at the time, even now, about masses of people, in manufacturing, for example, being automated away by AI. And it turns out, when we look across the landscape, most of the time it is about scaling the few human experts who do not have enough muscles and brains and time to go solve problems at scale. That turns out to be the much bigger value creation for smart companies like Panasonic, that know what the pain points of labor are.

Christopher Nguyen (16:44):
It’s not so much about removing 10 relatively lowly paid manufacturing line people from their assembly line. So the use cases that resonate for this kind of thing are very much about finding the cases where really only the human expert can solve the problem, but you have so much deployment out there. And in the US, we may have talked about this in one of our earlier conversations, in the last 40 years everybody’s been encouraged to go to college. Nobody wants to do the trade work anymore. But it turns out trade work, maintenance, plumbing, tooling, and so on, pays really, really well. The world is short of human expertise. And so what we’re doing here with this knowledge-first AI is applying that expertise at scale to big, interesting physical problems that machine learning with data alone cannot solve.

Dan Turchin (17:52):
So when we think about future opportunities, problems that can be solved with AI and machine learning, what would you say is the biggest constraint on innovation? Is it compute power? Is it storage? Is it the limitations of how fast we can process signals? Is it human ingenuity? What is it that’s holding us back right now?

Christopher Nguyen (18:18):
Oh, I’ve been in technology for a fairly long time. I’m dating myself when I say I worked on XNS, the conversion to TCP/IP, back at Berkeley. I’ve seen a lot of these cycles. And the correct answer to your question is: it depends. The bottleneck, or the long pole if you’re doing things right, sort of rotates every few years, every 10 years or so. In computing, we’ve had the storage bottleneck, the networking and communication bottleneck, and the compute or CPU bottleneck. Right now, where we are with respect to machine learning and AI: for a very long time it was a computing bottleneck, even though we didn’t really realize it. And then in 2012 the Google Brain team published this work, essentially saying, we’re just applying this deep learning algorithm, which has been known for a very long time, to the very large compute resource at Google’s disposal. And suddenly the same algorithm performed wonderfully, in such a way that it accelerated right past all of the other machine learning approaches.

Christopher Nguyen (19:42):
Today it may seem like we are also compute-limited. But increasingly people are seeing that there are a few institutions, a few players like a Google or Facebook or OpenAI, that say: we’re going to train extremely large-scale models. But the next breakthrough, a lot of us believe, is going to come from somebody being smart about the way of doing something. In the more generic sense I use the word algorithm, but I don’t just mean the machine learning algorithm itself: for example, approaches, or algorithms, or ways to codify human knowledge. I think that kind of thing will allow us to make the next major leapfrog in terms of this broader field that we call AI, artificial intelligence, that is not just machine learning.

Dan Turchin (20:36):
So I often get on my soapbox on this show, and I talk about the importance of practicing responsible AI. And as someone who’s hired many AI engineers, and I know you talk a lot publicly about the field of AI engineering: what do you think is our responsibility, as, let’s say, vendors, or anyone who’s putting AI-related technology out into the public domain? What kind of responsibility do we have to train AI engineers to be responsible?

Christopher Nguyen (21:08):
Yeah, strangely, that turns out to be a controversial question within the field. I regret that there are people in our field who say that is not the problem of the people designing or practicing these things. It’s kind of like: guns don’t kill people, people kill people. I do think there’s a shared responsibility. And I think that is much more than academic for people like us, because we do work on life-critical systems. One of the things we do is automotive, both the in-cabin experience as well as automotive cybersecurity. Over the next few years, cars are going to be less like traditional cars and more like computers on wheels. They’re going to get attacked. Cyberattacks are a very, very real thing.

Christopher Nguyen (21:58):
As we’re seeing during these times, there’s a real war going on, a physical war, and people realize, okay, well, war is real. So I think the number one thing is the first thing that companies like a Panasonic have always been concerned with, even before the recent wave of AI: when you build a car, you’ve got to think through, okay, there are going to be humans driving in this thing, and safety features, and so on. The responsibility on AI practitioners, or algorithm designers and researchers and so on, I think is not any less. You have to think about the impact of these things, how you go about these things. There’s a lot of debate about it, but the way I think about it philosophically, my mental model, is the following: we focus too much on intent. What matters is impact. Because by incorrectly focusing on intent, we give people the out of saying, if I didn’t mean to do that, I’m okay. But I think if we’re held accountable to an appropriate degree for the impact of our work, then we’ll begin to have the right systems and processes for measuring these things, and even regulating them.

Dan Turchin (23:29):
You and I started talking about a recent guest that was on the podcast, Gordon Wilson, the CEO of Rain Neuromorphics. And you ended up drawing for me, on the back of a napkin, a diagram of how we could potentially recreate the human brain in silicon. It led me to be curious to ask you this question: what gets you most excited in terms of research themes, potentially achieving some kind of breakthrough that could dramatically accelerate the field of AI? Maybe similar to some of Hinton’s research a decade or two ago. What’s the next breakthrough that will be achieved in academia?

Christopher Nguyen (24:14):
Do you mean in AI, or do you mean in terms of human impact?

Dan Turchin (24:19):
A lot of it comes back to AI. I was thinking more about, you know, you were telling me about memristors and some things we could potentially do with some fundamental breakthroughs. What’s the next fundamental breakthrough? I was thinking about it like back propagation, something that ends up being a foundational technology that unlocked deep learning. What’s the next equivalent of something like back propagation?

Christopher Nguyen (24:44):
Right. Well, you touched on what we talked about, starting from the memristor as one possible such device. But there’s a whole area roughly labeled neuromorphic, or let’s call it human-brain-inspired computing. One thing to realize is that we started with the CPU, which is a von Neumann machine. It’s actually quite an unnatural invention, if you will, because the compute power we had was so little that we had to impose a lot of human knowledge, the programming, from the outside. It’s a non-learning computer. So we have this arrangement of compute separated from memory, and they come together to do some processing, and so on. And then we transitioned to this age of the GPU, which happened to be, I would say, an accidentally useful computing unit.

Christopher Nguyen (25:47):
Because what a GPU does is, it doesn’t generically do all the compute that a CPU does. It does the same kind of calculation, but across large arrays, very quickly, which is a very useful operation for machine learning. And people get to be more efficient with FPGAs and so on. But the way I think about it is that all we’re doing is simulating the equivalent of a neuron. And I want to be careful, because a lot of people say we’re oversimplifying. We’re really not building the human brain, but we’re inspired by it. The neuron units that we’re using today are incredibly inefficient, if you think about it, compared to the human neuron, or compared to the basic unit that we want. All the neuron really does, in this sense, is sum up signals from a number of inputs multiplied by some weights, and then it has a single output coming out: trigger if it’s over a certain threshold, don’t trigger if it isn’t.
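The basic unit Christopher describes, a weighted sum of inputs followed by a threshold, is just a few operations. A minimal sketch, where the weights and threshold values are arbitrary illustration values:

```python
def artificial_neuron(inputs, weights, threshold):
    """Sum each input multiplied by its weight; 'fire' (output 1)
    only if the total crosses the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# 1.0*0.8 + 0.5*0.4 + 0.0*0.9 = 1.0, which crosses 0.9 -> fires
print(artificial_neuron([1.0, 0.5, 0.0], [0.8, 0.4, 0.9], threshold=0.9))  # 1
print(artificial_neuron([1.0, 0.5, 0.0], [0.8, 0.4, 0.9], threshold=1.5))  # 0
```

The neuromorphic argument is that simulating even this tiny multiply-accumulate-and-threshold step costs many transistor operations on a conventional chip, which is what makes a single physical device that does it in one shot so attractive.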

Christopher Nguyen (26:52):
That operation can be made vastly more efficient. You could use, for example, a single transistor to do it, or you could use an optical device, or the memristor we mentioned. So the idea is going from simulating the neuron equivalent to a single device doing it very efficiently. I think we’ve learned over the last hundred years that anytime you multiply compute power by a factor of 10, amazing things happen that you and I can’t sit here and predict in terms of use cases. So I’m pretty excited about, let’s say, the compute cost of AI today going down by a factor of a thousand. Who knows what kind of intelligence that would unlock?

Dan Turchin (27:47):
So let’s say it’s imminent, maybe in the next decade. What’s one example of behavior that will be commonplace in daily life, enabled by that breakthrough, that today would just seem like science fiction?

Christopher Nguyen (28:02):
I think probably the very obvious one is this: what is AI? It’s prediction, it’s anticipation. All the compute we’ve had up to this point is generally looking back and then doing analysis on that. But decision making, the modeling of the future, is still very much the domain of the human brain. Of course, we’re at the transition where computers are starting to be able to be relied on for this. In fact, that’s my business in predictive maintenance: is this compressor likely to fail within the next two weeks, and with what probability? So I think that’s one thing computers will get much better at. They will be able to model the future much better, and they will be able to predict likely outcomes.
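As a deliberately oversimplified illustration of that kind of question, the simplest textbook failure model assumes a constant hazard rate, so the probability of failing within a window follows an exponential distribution. Real predictive maintenance uses far richer models; the MTBF figure here is invented for the example:

```python
import math

def prob_fail_within(days, mtbf_days):
    """P(failure within `days`) under a constant-hazard (exponential)
    model, where mtbf_days is the mean time between failures."""
    return 1.0 - math.exp(-days / mtbf_days)

p = prob_fail_within(days=14, mtbf_days=365)
print(f"P(compressor fails within 2 weeks) ~= {p:.1%}")  # about 3.8%
```

The hard part, and the place where expert knowledge enters, is estimating a realistic failure model for a specific compressor from its sensor readings, rather than assuming one fixed MTBF.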

Christopher Nguyen (28:55):
80% sure this is what’s going to happen, and 70% it’ll be this case. And depending on how we build them, maybe they’ll be doing so in a way that’s much more rational than the way the human brain does it. I mean, people don’t realize how flawed, how primitive, the human brain is. I don’t like to think in terms of building the equivalent of human intelligence; it’s just another track of smart machines. So I think the ability for machines to anticipate our needs, our wants, our desires, farther into the future and more accurately, is going to enable a very interesting set of applications.

Dan Turchin (29:38):
So, Christopher, time flew by. I’ve got to get you off the hot seat, but I’ve got to ask you one last question. It somehow came up in conversation that you made a significant contribution to Unicode a while back, and you can share what that was. But my question for you is: given all of your many accomplishments as a researcher, a technologist, an entrepreneur, a professor, to this point, what do you look back on and say, you know, that’s my proudest achievement?

Christopher Nguyen (30:06):
That’s hard to say. But referring to the conversation where I talked about Unicode: that was a remark where I shared an observation, and in fact I shared the same observation back in Cambridge, at Harvard, a few days ago. The most impactful thing of your work will not be whatever you think it is today. I was referring to my work on Unicode, the Vietnamese part, the implementation of it. I did that work during my PhD, which was on semiconductor devices. And it was a hobby. I loved working on it. I wrote Vietnamese keyboard software, I helped define the VIQR and VISCII standards, and so on. But that was my hobby.

Christopher Nguyen (31:00):
That was my passion. It consumed most of my time. <Laugh> I just spoke to my former PhD advisor to remind him that I was only working on his work maybe 20 to 35% of the time. That’s an example where, now that I look back, the Unicode work is far more impactful than whatever I did with quarter-micron BiCMOS devices. And most recently, I was on a trip to visit the Kennedy School at Harvard, where they have a new initiative called the Global Vietnam War Studies Initiative, and I’m hoping to sponsor that. And I reminded the participants, the people who spearhead it, and I’m a supporter of it, that the impact is not something we can forecast. Their goal, and I think it’s quite relevant to what’s happening in Russia and Ukraine today, is to promote, through interviews, truth-telling and healing from the wounds of that war. And I said, I fully support that, but I believe the impact of it will be much greater than anything any one of us around the table can predict today. So that’s how I think about the meaning of impact.

Dan Turchin (32:16):
So, Christopher, we’re at time, but please share with our audience: where can they learn more about your work and about the work of Aitomatic?

Christopher Nguyen (32:25):
Yes. Our website is probably the easiest place to go. It’s Aitomatic, like “automatic,” but starting with “AI.”

Dan Turchin (32:35):
Excellent. Well, I look forward to having you back for another episode. There’s so much that we didn’t have a chance to unpack, but this is a great starting point. Do you mind coming back at some point?

Christopher Nguyen (32:44):

Dan Turchin (32:46):
Good stuff. Well, this was really a pleasure. Thanks for hanging out, Christopher. Appreciate it.

Christopher Nguyen (32:50):
All right. Thanks, Dan.

Dan Turchin (32:51):
Well, that’s all the time we have for this week on AI and the Future of Work. I’m your host, Dan Turchin, signing off, but we’ll be back next week with another fascinating guest.