This is a transcript from the AI and the Future of Work podcast episode featuring Dr. Eric Daimler, Obama’s AI authority, professor, and serial entrepreneur, who discusses how technology influences public policy.

Dan Turchin (00:21):
Good morning, good afternoon, or good evening, depending on where you’re listening. Welcome back to AI and the Future of Work. Thanks again for making this one of the most downloaded podcasts about the future of work. If you enjoy what we do, please like, comment, rate, and share in your favorite podcast app, and we’ll keep sharing great conversations like the one we have for today. I’m your host, Dan Turchin, advisor at InsightFinder, the system of intelligence for IT operations, and CEO of PeopleReign, the AI platform for IT and HR employee service. You’ve heard me say frequently that all AI is a data problem: how and where we get it, and how we store and access it, is more important than the models we build or the algorithms we use to make decisions. AI only becomes dangerous when we don’t understand, or don’t take the time to properly introspect, the data. Today’s guest is no stranger to the challenges of using data to develop high-integrity AI systems.

Dan Turchin (01:23):
He’s an AI pioneer, a frequent speaker on topics related to AI policy, and a serial entrepreneur whose current company, Conexus, uses categorical algebra to improve data interoperability. Before co-founding Conexus in 2018, Eric was a professor at Carnegie Mellon and a White House Innovation Fellow. He’s been an investor and a director on numerous tech- and AI-related boards, including Perpetuum, Skilled Science, and Wellways Medical. Without further ado: Eric Daimler, welcome to AI and the Future of Work. Let’s get started by having you share a little bit more about your background and how you got into this space.

Dr. Eric Daimler (02:05):
Yeah. Dan, thank you for the kind introduction, and it’s great to be here. I am, I guess, if people know me, known as the AI authority during a time in the Obama administration. It was a terrific privilege to serve the American people and serve that administration at a time when AI was really coming into wider public awareness. We were fortunate enough to have the resources, and have the support of the president, to look around the executive branch and help coordinate the efforts of the government. We coordinated the research initiatives built up over the decades, and we worked to coordinate what we even meant by AI, what we meant by robotics, and wrote some nice policy papers that helped bring people together and provide a reference point for the conversations that we continue to see today.

Dr. Eric Daimler (03:14):
One of the wonderful parts about that job is that it’s now continuing. We now have an AI office inside what was once known as the science advisory group of the White House, run by some very competent people: people I respect, people I had worked with at the time in the Obama administration. And the whole job’s been elevated; the science advisor <laugh> is now, informally, a cabinet-level position to the president. I hope to go back to that public service someday. I am in a fortunate position in that I have been in and around AI in a lot of different capacities besides spending time in Washington. I have also been an AI researcher, as you mentioned, an entrepreneur, and an investor as part of a well-known fund on Sand Hill Road. So I have a rare combination of perspectives, having seen this technology, and even how we define the technology, evolve over the last generation.

Dan Turchin (04:24):
Talk to us a little bit about the founding vision for Conexus.

Dr. Eric Daimler (04:27):
You know, Conexus is something that I saw during my time working with the White House, where I had spent, as I said, a lot of time working in and around AI in different expressions. From that very privileged position, seeing the scale and the scope and the breadth of interactions, from governments, in both defense and non-defense applications, to the largest organizations, I began to see where the implementations of AI were being limited. People wanted the magic of AI that they see in the popular press, from deep learning’s fantastic expression in the game Go, to the autonomous vehicles that we saw back in the Grand Challenge sponsored by DARPA in the 2000s.

Dr. Eric Daimler (05:21):
You know, companies wanted that: give me some of that magic, whether it’s flash or value. Boards are pressuring their executives to provide some commercial value. And I saw the disappointment that was beginning to be experienced by many of them. So, we know that data’s been growing for quite some time. Everybody’s aware of that proverbial exponential, really quadratic, growth in data. What’s less known is that there is also growth in data sources. Think of the Internet of Things, all the sensors we have around us: the number of sources is also growing quadratically. And if you have a quadratic explosion of data and a quadratic explosion of data sources, the result is that data relationships are unimaginably large.
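
To make the scaling argument concrete, here is a minimal Python sketch (all numbers invented for illustration, not drawn from the conversation): if both data volume and the number of data sources grow quadratically, the potential relationships between sources grow roughly as the square of the source count.

```python
# Illustrative sketch of the scaling point: if data volume and data sources
# each grow quadratically in time, the potential relationships BETWEEN
# sources grow roughly quartically. All numbers are made up for illustration.

for t in (1, 2, 4, 8):
    volume = t ** 2                                 # data volume, quadratic
    sources = t ** 2                                # data sources, quadratic
    relationships = sources * (sources - 1) // 2    # potential source pairs
    print(f"t={t}: volume x{volume}, sources x{sources}, "
          f"relationships x{relationships}")

# Relationships explode far faster than either input, which is why going
# from millions and billions to trillions forces new abstractions.
```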

Dr. Eric Daimler (06:27):
You have to think about these completely differently. When you go from millions and billions to trillions, there comes a point where you need abstractions. The technology we see in much of probabilistic AI is just not up for the job. And I’m not of the religious bent that thinks we just need more probabilistic AI and we’ll wait for that to develop. I am a little more pragmatic, thinking that we need some combination of deterministic AI and probabilistic AI. So that led me to look into some research that we were funding in the government. There was a discovery in math, in this domain called category theory, or categorical algebra, which is a sort of meta-math: a discovery that applied it to databases.
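
The research Daimler is referring to is in the spirit of what the literature calls functorial data migration: a schema is treated as a category and a database instance as a functor out of it. The toy Python sketch below is a heavy simplification under those assumptions; the schema, the tables, and the `delta` helper are all hypothetical and say nothing about Conexus’s actual implementation.

```python
# Toy sketch of functorial data migration. A schema mapping F sends each
# target object (table) to a source object; "pulling back" an instance
# along F (the delta functor, restricted here to objects only) re-expresses
# source data under the target schema. Real categorical machinery also
# tracks morphisms (foreign keys) and proves the mapping consistent.

source_instance = {  # instance: object -> rows
    "Driver": [{"id": 1, "city": "SF"}, {"id": 2, "city": "Oakland"}],
    "Vehicle": [{"plate": "7ABC123", "driver_id": 1}],
}

F = {"Operator": "Driver", "Car": "Vehicle"}  # target object -> source object

def delta(mapping, instance):
    """Pull back an instance along a schema mapping."""
    return {target: instance[source] for target, source in mapping.items()}

target_instance = delta(F, source_instance)
print(target_instance["Operator"])  # Driver rows, now under the target schema
```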

Dr. Eric Daimler (07:31):
Through that discovery, the whole world opens up. If you have a meta-math that allows for this connection at a provable level, the level of the laws of nature, of math, then you begin to get purchase over the AI implementations that all your data scientists and data engineers struggle against. So I’ll give you a story. You may think of Uber as a company with an effectively infinite balance sheet, and you may think of it as having some very smart people, and that’s all true. <laugh> What is less known is that despite that profile, they grew up with a non-optimal IT infrastructure. So even a company with an effectively infinite balance sheet to fund an ideal IT infrastructure grew up with something that, like many organizations, adhered to the design of the business more than to some theoretical perfection. In Uber’s case, that was an IT infrastructure that developed city by city, or jurisdiction by jurisdiction.

Dr. Eric Daimler (08:48):
The result of that is that when they needed to respect a privacy lattice, say, driver’s licenses having a different privacy profile than license plates in some jurisdictions, or when they wanted to, say, predict driver supply or rider demand because the Oscars or the Super Bowl or some such event was coming, they had to do that analysis jurisdiction by jurisdiction, city by city, not state by state, let alone for the whole world. And this created friction, both in time and in accuracy. It was possible, but it was a complication. Like a lot of businesses, they just couldn’t bring this data to bear at the rate of their data scientists’ intuition. Conexus then worked with Uber. Uber looked around the world: how do we solve this problem? They found that the existing tools, like RDF and OWL, on which many webpages are based, don’t solve the problem.

Dr. Eric Daimler (09:56):
Uber looked deeper. They said, this problem needs to be solved in math. They found category theory, and then asked, well, who are the leaders in category theory? Uber found Conexus. We happened to be 40 miles north of them, which was fortunate. Conexus then worked with Uber to develop a solution that brought together 300,000 databases, a lot of databases, 300,000, to be able to answer these questions at the rate of their data scientists’ intuition. So this is AI implementation in action. Before you apply any of these fancy, sexy algorithms, you have to do your data engineering. You have to not just clean your data, not just disambiguate your data, and not just do entity resolution on your data. You have to bring these databases together, whether they’re in Oracle, or in SAP, or they’re tabular data or graph data. You have to bring all this data together in order to do complete analysis. The result for Uber, to have them tell it, is the velocity with which they can now do their analysis. They save low tens of millions a year in both the efficiency and the accuracy of their business decisions. So that’s the motivation and the benefit of Conexus. And it applies to much of what we’re gonna talk about: fulfilling the promise of AI.
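
One small, concrete example of the data engineering he lists: entity resolution, deciding whether records in two databases refer to the same real-world thing. A toy sketch with invented records and an invented matching rule:

```python
# Toy entity resolution: do these two records, from two city-specific
# databases, describe the same driver? Records and rule are invented.

record_sf = {"name": "J. Smith", "license": "D1234567", "city": "SF"}
record_oak = {"name": "John Smith", "license": "D1234567", "city": "Oakland"}

def same_entity(a: dict, b: dict) -> bool:
    # A strong shared key (license number) dominates fuzzy fields like name.
    return a["license"] == b["license"]

if same_entity(record_sf, record_oak):
    merged = {**record_oak, **record_sf,
              "cities": [record_sf["city"], record_oak["city"]]}
    print(merged)  # one driver, now visible across jurisdictions
```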

Dan Turchin (11:25):
So to my simple mind, what you just described, bringing together 300,000 databases at Uber, seems more like a compute and/or storage challenge than a math or stats or algorithm challenge. Where am I going wrong there?

Dr. Eric Daimler (11:43):
It’s a great question. It’s because of the scale. At various times you’re absolutely right to look upon any of our results as being constrained by storage or memory or compute or algorithms. I once spent time in computational linguistics, where we would periodically apply flashy new algorithms to a particular language problem. We would reach a result, and reach the end of the results we could get with that algorithm, and then we would have to go back to understand language more; we’d maybe go up and down the stack to semiotics or semantics to understand what we were doing. And then some new technology would come around, and back and forth. That’s what I mean by blockers; there are blockers either way. Today, the blocker is not in compute and it’s not in storage, when the largest repository of data for AI is capturing some vast, vast percentage of the English-language corpus.

Dr. Eric Daimler (12:46):
You know, there’s a funny thing that occasionally gets into the press about the degree to which the algorithms there can predict human language. I take a different view: I’d be a little surprised if they didn’t predict human language, because you have most of the English-language corpus caught in this whole system. So the limitation isn’t storage, the limitation isn’t memory, the limitation isn’t the algorithms. The limitation is: how are we actually doing this for ordinary people? Because every other company can’t completely bring together everything they’ve ever known into one big database in a way that is then dependable and accessible for analysis. The easy way to describe this is that the data integration problem is combinatorial. So it’s not easily addressed with the traditional mathematics of relational algebra and relational databases. It requires something different; otherwise it’s computationally intractable.
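
One way to read the “combinatorial” claim: integrating every source directly against every other source requires on the order of n² mappings, while integrating each source once against a shared, provably consistent schema requires only n. A hypothetical back-of-the-envelope comparison:

```python
# Hypothetical comparison of integration strategies. Counts only the
# mappings to be written and maintained; real costs also include
# verifying that the mappings are mutually consistent.

def point_to_point(n: int) -> int:
    return n * (n - 1) // 2  # every pair of databases mapped directly

def shared_schema(n: int) -> int:
    return n                 # each database mapped once, to a common schema

n = 300_000  # the figure cited for Uber in this conversation
print(f"point-to-point: {point_to_point(n):,} mappings")  # ~45 billion
print(f"shared schema:  {shared_schema(n):,} mappings")   # 300,000
```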

Dan Turchin (13:52):
So, like you said, we’re looking at quadratic functions both in the volume of data being generated, which is well known, and also, to your point, in the volume of data sources. Unlocking the vision that a lot of us, and a lot of our guests on this show, have for AI seems to hinge on that. Is that future constrained by how we manage the data? Is it constrained by engineering ingenuity? What are the constraints on being able to unlock the future that we all share as advocates for the value of AI?

Dr. Eric Daimler (14:28):
Yeah, I think the controversial view today, which is becoming more widely known, is that the constraint today is in the math. You know, another way of asking this question is: where else do we see the math breaking? The examples today are in quantum computing and in smart contracts. My firm isn’t the only firm using categorical algebra, category theory; it’s just the leading firm applying it to databases. But if we look at quantum computing, for example, they have to apply category theorists and type theorists to their work on quantum compilers. Because we as humans can’t use Excel, you know, or can’t use Python; we can’t otherwise produce a quantum compiler that we could understand. We can’t understand the results from a quantum computer without category theory, without categorical algebra, without type theory.

Dr. Eric Daimler (15:30):
So that’s another expression where this is happening today, already. Another one is smart contracts. Ethereum and other smart contract platforms on a blockchain also use category theory and type theory. They’re really the future. Category theory is gonna be the way we analyze complex systems, compositional systems, and it’s gonna wipe the slate clean over the next decade or two. That’s still gonna be the limitation. Anybody that’s clinging to relational algebra and all that stuff based on calculus, that’s gonna look more and more like Latin: still interesting, intellectually useful in less and less frequent circumstances.

Dan Turchin (16:16):
So let’s say we manage to tame the data problem. Then the most optimistic among us would say, you know, we’ll be able to apply automated decision-making to almost everything. Let’s unpack that thesis a bit, wearing maybe your policy hat. Is it a good thing for us to think about kind of the unbridled possibilities of how AI might be used, or is there a dark side we need to consider as well?

Dr. Eric Daimler (16:43):
Yeah, it’s a great question. And it’s one that I frequently got asked by congressional representatives during my time in Washington. You know, there is a Hollywood dystopia that’s easy to imagine, and in some cases, in some places in China or other authoritarian regimes, you’re living much of that AI dystopia. There’s also an AI utopia available, and we live that in certain circumstances: the power that we have available in our pockets, in our phones, has a lot of expressions of learning algorithms in it, and people can say that it benefits us, sometimes even in life-saving applications. Where we find ourselves as a society over the next decade or two will be based on the degree to which we as a society participate in this conversation about where on that continuum we are, between the dystopia and the utopia.

Dr. Eric Daimler (17:44):
What I fear probably more than anything is that people will not understand the technology, and so will either ignore it to their detriment and kind of be at the effect of it, or just reject it out of hand. You know, we have 18 million programmers in the world, and those people are not malevolent as a class; that’s gonna be my working theory. They all operate out of their own sense of right and wrong and their own core incentives, often quarterly, working in conjunction with product managers. They need feedback from society about how these algorithms should be implemented in the world. The classic example here, about learning algorithms and about automation, comes to us from autonomous vehicles. Those are great ways for us to have a conversation about where we generally want automation to take place.

Dr. Eric Daimler (18:52):
Because automation can happen in a lot of different ways. Automation has happened, for example, on equities exchanges. You don’t see the floor of the New York Stock Exchange filled with as many people today, because a lot of that work is automated. There were not a lot of tears shed for the jobs lost by floor traders on the New York Stock Exchange. But that’s a little bit like Disney World now, right? It’s a tourist attraction; it’s no longer the majority platform for equity trading. That sort of job loss is gonna take place in other places, and it’s gonna be abrupt. That’s the shocker. So, automated vehicles: you have a car driving down the road seeing an image. It interprets it, possibly as a crosswalk, that may then have a shadow.

Dr. Eric Daimler (19:42):
Is the shadow a tumbleweed? Is it a person? Then, does the car do nothing? Does it slow down? Does it stop? Does it ask for a driver intervention? That’s the point we’re talking about here: when is the automation linked, and when do we want to have, I’ll call it, a circuit breaker? The circuit breaker would say, hey, I need human intervention now; I need a human to come in. We see this right now in some cars that require some sort of haptic feedback on the steering wheel, either a shake or just a touch. That is gonna be useful in more and more circumstances, because those 18 million programmers around the world, if they can link automation, they probably will. <laugh> Right? It’s just cool. We engineers like building stuff and we like connecting things. So we will. But we wanna be registering that against our human values and finding where we want to catch ourselves as a society, and have some degree of human oversight. So that answers a sort of regulatory question. It also answers a way that business people can prepare themselves in their deployments: to think about where they might wanna put a stop, whether it’s an internal or external observation, before another automation sequence takes hold.
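
Daimler’s “circuit breaker” maps naturally onto a confidence threshold in an automation pipeline. A minimal sketch, with hypothetical labels, thresholds, and a made-up perception result rather than any real vehicle stack:

```python
# Minimal sketch of a human-in-the-loop "circuit breaker" for an automated
# decision. Labels, thresholds, and the decision rule are all hypothetical.
from dataclasses import dataclass

@dataclass
class Perception:
    label: str          # e.g. "shadow", "tumbleweed", "pedestrian"
    confidence: float   # model's confidence in the label, 0.0 to 1.0

def decide(p: Perception, act_at: float = 0.90, escalate_at: float = 0.60) -> str:
    """Act autonomously only when confident; otherwise degrade gracefully
    or trip the circuit breaker and hand control to a human."""
    if p.confidence >= act_at:
        return "stop" if p.label == "pedestrian" else "proceed"
    if p.confidence >= escalate_at:
        return "slow_down"                   # cautious, still automated
    return "request_human_intervention"      # the circuit breaker trips

print(decide(Perception("shadow", 0.95)))       # proceed
print(decide(Perception("pedestrian", 0.70)))   # slow_down
print(decide(Perception("pedestrian", 0.40)))   # request_human_intervention
```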

Dan Turchin (21:06):
So you’re at this fascinating intersection: you’re an AI entrepreneur, you’ve been an academic in the past, and obviously you understand category theory and some of what you’ve been discussing. And you’ve also been in the halls of power influencing policy. Here’s a trickier one to answer: who is responsible when AI makes a bad decision? It’s getting more and more complicated by the day.

Dr. Eric Daimler (21:31):
Yeah, who’s responsible when AI makes a bad decision? It’s a great question, and it is a question we as a society will have to answer. It’s funny, I guess it’s strange to me, how much we are not confronting that question today. You know, Mercedes, as a major car manufacturer, has essentially made the decision that their cars will protect the driver as a bias point, relative to pedestrians on the street. That’s gonna be their bias when things go wrong in that car. What Mercedes says is that they will now assume the liability up to a speed limit. So they have this autonomous capability in a controlled circumstance up to, I think, something on the order of 30 or 35 miles per hour.

Dr. Eric Daimler (22:26):
Mercedes will take responsibility for any accidents, or any collisions, that take place at that point. We as a society will need to determine where that liability lies, and that needs to register in a lot of different ways. We need to construct awareness about these things, and perhaps some new regulations around them. You know, we now have noise on electric vehicles that were otherwise quiet not so long ago. Those are the sorts of modifications we’ve had. I expect lights to be the next addition to autonomous vehicles: we’re gonna have lights, in different ways, that signal who’s in control of the car and how aware the car is, how confident the car is, of its circumstances. Those are, you know, off-the-cuff predictions about the future. But that’s all a conversation that we need to have as a society about what makes sense for us in different circumstances. That’s how we need to be engaging in this conversation.

Dan Turchin (23:36):
So, in 2018 I had an opportunity to go to Capitol Hill to brief the AI house committee, sorry, the House committee on AI, which was nascent at the time. And you were in and around the White House even before that. I know from my experiences in these briefings that the awareness of Congresspeople at the time, at least, was immature. And, to their credit, everyone I talked to was eager to learn what was going on in Silicon Valley and, you know, our vision for the future. But I’d say their understanding of data and algorithms and automated decision-making, let alone the ethics of AI, was immature. From the time that you spent in and around the White House, what would you say is the appropriate role for government to play in AI policymaking?

Dr. Eric Daimler (24:32):
So I think there was a real reckoning after the terrible rollout of healthcare.gov, where the realization quickly came into focus that we did not have enough technology experts in government. It would not have been okay at that time to walk into a cabinet meeting and raise your hand saying, oh, you know, I don’t understand this economics thing, or to come in and raise your hand and say, you know, I don’t quite understand this national defense thing. But it would have been completely okay to come into a cabinet meeting and raise your hand and say, you know, I don’t understand this technology thing. Today, that would be regarded as irresponsible, perhaps even incompetent. That was a shift. When I was in the government in 2016, I was really hired as a result of this new sensibility, to bring people like me into government.

Dr. Eric Daimler (25:40):
You know, I think I was one of the first PhDs in my little cohort. There were other PhDs in other groups: there was a fantastic professor from Princeton, there was an expert in computer security, there were people in other domains. But I was one of the first, if not the first, in my particular expertise around AI. And I would still find myself in rooms of maybe 30 people where I was the only one with a technical undergraduate degree, let alone a technical graduate degree. Now, that doesn’t mean that other people weren’t particularly knowledgeable about technology, but as a kind of proxy for the understanding of the tech: 30 lawyers and one technologist often will not make for the best conversation or the best outcome. That is changing.

Dr. Eric Daimler (26:35):
You know, more people like me have been hired, some really fantastic people, and as we said at the start of this conversation, the whole job got elevated since I was there. That’s wonderful to see. So I think we’re gonna have more intelligent conversations. There have been other expressions in other governments. Europe’s additions to the GDPR come to mind. New York City’s city council proposals around automation, introduced in December of last year, were laughable, despite some very smart people being in that administration and working in the technology office. The city council introduced this one plan just to require employers to disclose whether or not your resume screening went through an automated tool.

Dr. Eric Daimler (27:32):
<laugh> I didn’t care for that, because it was just so much window dressing. Just because it uses one particular algorithm versus another, it can give you a false sense of security to say, oh, a human looked at it, as if humans aren’t presenting their own biases around these things. And how do we disclose in a way that is useful for people? That wasn’t mentioned. On the opposite side of the Atlantic Ocean, the European Union introduced a modification to GDPR in December of last year. They now require companies to have provable pseudonymization of data. What’s interesting about that is I don’t think they even know how to do that. I don’t even know how. <laugh> I think it was a good idea in theory.
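
For what it’s worth, one standard technique for pseudonymization, not necessarily what the GDPR text formally requires (which, as Daimler notes, is underspecified), is keyed tokenization: replace each identifier with an HMAC under a secret key, so the mapping stays consistent for analysis but is reversible only with the key. A minimal sketch:

```python
# Minimal sketch of keyed pseudonymization. The same identifier always maps
# to the same token (so joins and analysis still work), but re-identification
# requires the secret key. One common approach, offered only as illustration.
import hashlib
import hmac

SECRET_KEY = b"store-me-in-a-vault-and-rotate-me"  # hypothetical key handling

def pseudonymize(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"driver_license": "D1234567", "trips": 42}
safe = {**record, "driver_license": pseudonymize(record["driver_license"])}
print(safe)  # identifier replaced; repeated runs yield the same token
```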

Dr. Eric Daimler (28:27):
And so, you know, obviously some very smart people are advising the government, or at least are available to advise the government. But the end result is that somebody had a good idea, it gets written into law, and I don’t think it’s even gonna be tested, in terms of how they expect companies to implement that regulation, until some entity other than Google gets a billion-dollar fine. That’s the problem with regulation today. But it is getting better. It is getting better, though not as fast as the 18 million programmers around the world and their product managers are implementing the tech. It’s getting slowly better. And I hope to contribute again to our government’s intelligence in forming and implementing regulation.

Dan Turchin (29:14):
You make an excellent point that we’re getting really good at scrutinizing AI-based decision-making, and perhaps we scrutinize less the inherent biases and errors in human decision-making. Along those lines, on this show we talk a lot about what it means to practice responsible AI, and I talk about kind of three core tenets. AI-based decisions should be transparent: kind of like in the example you gave about resume screening, you should know when a decision is being made using an algorithm. They should be predictable: the same inputs should reliably generate the same output, so kind of high-integrity algorithms. And they should be configurable: if it’s determined that the AI is behaving poorly, there should be clear levers and knobs to bring it into line with what you’d expect. So that’s kind of the working definition I use, but I’d love to get your perspective: what does it even mean to, quote, practice responsible AI?
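
Dan’s “predictable” tenet can be enforced as an automated check: run the same input through the system several times and assert the output never varies. A minimal sketch, where `screen_resume` is a hypothetical stand-in for any scoring model:

```python
# Minimal determinism check for the "predictable" tenet: identical inputs
# must reliably produce identical outputs. `screen_resume` is a made-up
# stand-in; any randomness it uses is seeded from the input alone.
import random
import zlib

def screen_resume(text: str) -> float:
    rng = random.Random(zlib.crc32(text.encode()))  # input-derived seed
    return round(rng.random(), 4)                   # toy score

def assert_predictable(fn, inputs, runs: int = 5) -> None:
    for x in inputs:
        outputs = {fn(x) for _ in range(runs)}
        assert len(outputs) == 1, f"non-deterministic output for {x!r}"

assert_predictable(screen_resume, ["resume A", "resume B"])
print("predictable: identical inputs gave identical outputs")
```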

Dr. Eric Daimler (30:10):
You know, I think we can, as a society, work to help ourselves distinguish what we care about. There may be some algorithms that demand a sort of public transparency so that we can audit them. The easy way, I think, starts with this: separate the data from the data model. The reason I say that is because we often conflate the two in the public discourse. We’re concerned about bias: biased data this, biased data that. Well, that’s true, but let’s separate out the bias in the data (historically, CEOs look like this, good candidates look like this, bad actors look like this) from the data model. Once we do that, then we can look at the data model and, to your point, Dan, we can just ask: what did we intend the data model to do?

Dr. Eric Daimler (31:05):
<laugh> What did we intend it to produce, and does it produce that? So that’s an easy way, if we make that accessible to outside observation, to demonstrate whether or not these algorithms do what was intended. And then we can analyze their degree of bias, and everything has bias, right? That’s kind of the weird thing about this. I have a model of a sailboat in my home that my wife’s grandfather built. We had somebody doing a repair on this model sailboat, and I was asking the person doing the repair about some of the detailing. What he said is: look, unless this sailboat is a one-to-one replica, right?

Dr. Eric Daimler (31:57):
Unless it was actually life-size, it manifestly reflects some bias of the model builder. Whoever built this model decided they were gonna abstract away the window detailing or something; it doesn’t have the weather stripping. You’re gonna abstract something away. Well, that’s the same for a mathematical model. You’re gonna abstract away something; any particular model reflects the bias of its creator. We want to say: fine, but what is the bias? Let’s just figure out what the bias is, so we can all be clear where the bias is, because it will always have one. So: separate out the data from the data model. And then the last thing I’ll say about responsible AI is that there may be some algorithms that we wanna keep secret, and that can be fine.
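
His “separate the data from the data model” can be made operational: audit the historical data for disparity, then audit the model’s predictions on the same records, and compare. A minimal sketch with invented records and an invented toy model:

```python
# Sketch of two separate audits: bias in the DATA vs. bias added by the
# DATA MODEL. Records, groups, and the toy model are invented.

records = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 0}, {"group": "B", "hired": 1},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]

def rates(rows):
    by_group = {}
    for r in rows:
        by_group.setdefault(r["group"], []).append(r["hired"])
    return {g: sum(v) / len(v) for g, v in by_group.items()}

print("data bias: ", rates(records))  # historical hire rates per group

def toy_model(r: dict) -> int:        # stand-in for the trained model
    return 1 if r["group"] == "A" else 0

preds = [{"group": r["group"], "hired": toy_model(r)} for r in records]
print("model bias:", rates(preds))
# If the model's disparity exceeds the data's, the model ADDED bias beyond
# what it inherited; that is the "did it do what we intended?" question.
```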

Dr. Eric Daimler (32:43):
We keep algorithms secret right now in credit scoring and FICO scores. FICO score algorithms are black boxes: we don’t actually know what goes on inside. We have a pretty good idea, but we don’t know, and we as a society are okay with that. But those exercise this dynamic, called zero-knowledge proofs, where we stick in some data and we get out some data, and we can do that an effectively infinite number of times, so that we can get comfortable with the characteristics of the box even though we don’t know what’s in the box. That might also be a dynamic worthy of wider deployment. Those are all the sorts of dynamics that I want us as a society to talk about, so that we can find the appropriate touch points for the implementations of these digital automation systems.
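
The “stick in data, get out data” dynamic he describes (closer in spirit to black-box property testing than to cryptographic zero-knowledge proofs, though he borrows that name) can be sketched as probing: hold everything fixed, vary one input, and characterize the box from outputs alone. The scoring function below is invented, not a real FICO model:

```python
# Sketch of characterizing a black-box scorer purely from input/output
# pairs, without opening the box. The scorer is a made-up stand-in.

def black_box_score(income: float, missed_payments: int) -> float:
    return max(300.0, min(850.0, 500 + income / 1_000 - 60 * missed_payments))

def probe(base: dict, field: str, values) -> list:
    """Hold all inputs fixed, vary one field, record the outputs."""
    return [(v, black_box_score(**{**base, field: v})) for v in values]

base = {"income": 50_000, "missed_payments": 1}
for v, score in probe(base, "missed_payments", range(5)):
    print(f"missed_payments={v} -> score={score}")
# Monotonically falling scores show the box penalizes missed payments,
# a property verified without ever seeing inside it.
```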

Dan Turchin (33:38):
So, you’re talking to a lot of students and people who are maybe young in their careers and aspiring to careers that involve AI. What are some of the skills that you think are innately human, that will never be automated by AI and machine learning?

Dr. Eric Daimler (33:56):
I don’t know about never being automated. The easy answer to that one, the one you’ll often hear repeated by those in the AI world, is the skill, we’ll say, of empathy, of human relations. Those are the things that will never be automated. I’ll give a more general answer, which is just to be mindful of what is human and what can be automated, because that’s a better window, I think, for career planning, because we don’t know the timing of the replacement of any of these technologies. Ultimately, if you’re looking far out, empathy will be the last to go, <laugh> we’ll say, or at least faking empathy will be the last to go. But way before then, there will be displacement.

Dr. Eric Daimler (34:49):
There’ll be iterations that people will wanna adjust to. And in that, I strongly support people reorienting their math awareness, their math education. We might say the more math the better, but if I were to choose for my children’s education, or for my career, I would replace geometry, trigonometry, and especially calculus with category theory, categorical algebra, and with probability and statistics. Category theory is really the math of the 21st century. It provides the leap from the logic that defined our previous generation to the composability that will define the next. The second point I’ll make is that there is an emerging skill, taught in some ways to engineers, and in some ways lawyers get it, accountants get it, which is being mindful of one’s precise desire.

Dr. Eric Daimler (36:05):
And, you know, there’s a lot of knowledge that we have in our heads, implicit knowledge; we will benefit by training ourselves in how to express it. The nice way to say that is showing our work. <laugh> We need to learn to show our work more. That’s a skill, instead of just having it in our heads. And that can sometimes be in conflict with having empathy, because empathy is in me and you talking, right, Dan? We can just be talking and listening; we want to exchange information. We also want to be really clear about our thoughts, and what we wanna have written down and expressed as an algorithm. So those are two skills: one, be explicit about our thoughts, and the other, move on to the math of the 21st century, category theory.
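
The “composability” he is pointing to has a concrete flavor even outside category theory proper: build large transformations by composing small ones, and rely on laws (associativity, identity) instead of re-checking every pipeline by hand. A toy sketch of that compositional style:

```python
# Toy sketch of the compositional style category theory emphasizes:
# pipelines are compositions of small arrows, and associativity lets us
# regroup them freely without changing their meaning.
from functools import reduce

def compose(*fs):
    """Compose functions left to right: compose(f, g)(x) == g(f(x))."""
    return lambda x: reduce(lambda acc, f: f(acc), fs, x)

clean = str.strip
normalize = str.lower
tokenize = str.split

pipeline = compose(clean, normalize, tokenize)
print(pipeline("  The Math Of The 21st Century  "))
# compose(compose(clean, normalize), tokenize) and
# compose(clean, compose(normalize, tokenize)) behave identically:
# that associativity law is what makes large systems safely composable.
```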

Dan Turchin (36:55):
Eric, I’m gonna make my kids listen to this and rewind the part about the importance of doing your math homework. <laugh> Thank you. Let’s see, I’ve gotta get you off the hot seat, but there’s one question I’d love to get your perspective on. Let’s say it’s 2032, and we’re having a version of this conversation looking back on the past decade. What’s one workplace behavior that will be common in 2032 that today we’d think is just science fiction?

Dr. Eric Daimler (37:25):
Yeah. I wrote a chapter in a book about what’s gonna be happening over the coming decades. It’s in a book called Aftershock, and I can revisit a part of that, which is that what we’re gonna be doing in the future is collecting modules, identifying modules, and then redeploying modules in different contexts. You know, often people will say that the future is here, it’s just not widely distributed. The metaphor I use right now is e-commerce, which might sound like a funny place to start when describing what the future’s gonna look like. But e-commerce right now is, I think, going through a change where people are realizing that in some ways it’s kind of a bad business; the margins are pretty small. But it’s allowing this wide explosion of expressions where I don’t have to manufacture my stuff, I don’t have to do customer analytics, I don’t have to do advertising, I don’t have to do fulfillment. I have Shopify take care of a lot. I can create a different direct-to-consumer business just by reconfiguring a whole bunch of other modules. These are not tech businesses. We’re still calling a lot of these businesses tech businesses; really, they’re tech-enabled businesses. I think the redeployment of patterns in different contexts will be the skill of the 2030s.

Dan Turchin (38:50):
Well, we will see how you did. <laugh> We’ll have you back in the future. <laugh> And I’ll go ahead and post a link to Aftershock in the show notes as well. Very interesting. Gosh, that’s all the time we have. I feel like we were just getting started, and we went way off script, but that was a lot of fun. Thanks for hanging out, Eric.

Dr. Eric Daimler (39:09):
It was a good time, Dan.

Dan Turchin (39:10):
Yeah, you bet. That’s it for this week: another fascinating discussion with the great Dr. Eric Daimler. I will provide links to all the topics we discussed in the show notes. Eric, where can our guests learn more about your work, and maybe the work of Conexus?

Dr. Eric Daimler (39:27):
Conexus.com, and I’m on all the usual social channels under Eric Daimler. I think on Twitter I’m E-I-D, and LinkedIn is a good place to connect with me professionally.

Dan Turchin (39:36):
Well, I hope for all of our sakes that you’re back in the halls of power soon; maybe we’ll meet up there. Certainly interesting topics. And we’ve got a lot of growing up to do as a society in terms of interacting with machines.

Dr. Eric Daimler (39:50):
Fun conversation. I hope to have more people involved.

Dan Turchin (39:52):
That’s it for this week. I’m your host, Dan Turchin, and this is AI and the Future of Work. We’ll be back next week with another fascinating guest.