This is a transcript from the AI and the Future of Work podcast episode featuring Dan Turchin interviewing Rob May, CTO of Dianthus

Dan Turchin (00:17):
Good morning, good afternoon, or good evening, depending on where you’re listening. Welcome back to AI and the Future of Work. Thanks again for making this one of the most downloaded podcasts about the future of work. If you enjoy what we do, please like, comment, and share in your favorite podcast app, and we’ll keep sharing great conversations. This is your host, Dan Turchin, advisor to InsightFinder, the system of intelligence for IT operations, and CEO of PeopleReign, the AI platform for IT and HR employee service. We’ve published over a hundred episodes, and I gotta say, I’ve been looking forward to this one for at least the last 50. Rob May is what I’ll call an AI provocateur and the author of the always entertaining Inside AI weekly email newsletter, as well as the Investing in AI podcast. Rob’s unique takes on AI, technology, and culture are always insightful.

Dan Turchin (01:13):
His recent list of annual predictions for 2022 spans topics from AI on the org chart to AI regulation to the rise of non-neural nets. Go read them if you haven’t yet, and subscribe to the newsletter. Rob’s a venture partner at PJC. He’s also the CTO and co-founder of Dianthus. Previously, Rob co-founded the conversational AI company Talla, and co-founded Backupify before it was acquired by Datto in 2014. To be honest, I have no idea what we’re gonna discuss today, but I assure you it will be entertaining and we’ll all leave a little bit smarter. If you’re not already following Rob on Twitter, he’s @robmay. Without further ado, it is really my pleasure to welcome Rob to the podcast. Welcome to the show, and let’s get started by having you share a little bit more about your background.

Rob May (02:09):
Yeah, thanks for having me, Dan. So my background: I trained as a hardware engineer, so I used to do FPGA and ASIC design, and that’s how I got my start. I went to the University of Kentucky, and after I graduated I went to work for Harris Corporation in Melbourne, Florida. Had a great time down there, but to accelerate my career I got into sales engineering and business development, and eventually went to work for a startup that went under during the financial crash. We were doing well, we were hitting all our milestones, but the economy was falling apart, and so we couldn’t raise any more money. When that company shut down, I decided to start something myself, and that company was called Backupify.

Rob May (03:04):
We did backup for cloud computing applications. We built that company to 10 million in revenue and sold it, had a really nice exit there. Probably should have held onto it, because looking back now, eight years later, we were the number one player in the space at the time, and the number four player is now a unicorn. So we probably sold that too early. Then I started doing a lot of angel investing and also started a company in the AI space called Talla that built support chatbots. Talla was not as successful: good technology, wrong go-to-market. We ended up merging that in with another company, so the technology is still going, still doing something. In the interim, I had built an investment portfolio of about 70 angel investments, mostly AI and robotics focused.

Rob May (04:06):
And so I got pulled into a venture capital firm as a general partner, and that was PJC. It used to be called Point Judith Capital, after Point Judith in Rhode Island. I loved it. I thought I would end my career there, but after two years I was doing some work on our thesis about machine learning and e-commerce, and I came to the conclusion that a lot of the value in the e-commerce space was not going to accrue to machine learning tools. During that research phase, I came across Thrasio and Perch and some of those Amazon FBA roll-up companies, and basically decided: well, what if we became this new kind of company that built machine learning software for e-commerce companies, but instead of selling it to those companies, we acquired these small e-commerce companies with debt, let them use our software, grew them faster, et cetera.

Rob May (05:08):
And so that became Dianthus. I initially put the team together, PJC seeded Dianthus, et cetera. And I just fell in love with it and decided to leave and be the full-time founder and CTO, which was not my expectation, to go back to the operating side. But, you know, when you’re a glutton for punishment, I guess you’re stuck with it. So that’s what I’m doing now. I’m still a venture partner at PJC, and I still do a lot of investing on the AI side, both personally and through them. I still write the newsletter and do the podcast, but my full-time day job is focused on Dianthus.

Dan Turchin (05:48):
Every week when I read your newsletter, there always seems to be some kind of contrarian take on the state of AI. So to start the conversation: what do you think is holding back AI adoption? Is it technology? Is it tech talent? Is it market demand? Is it something else? Where would you go with that?

Rob May (06:12):
It’s a couple of things, right? One is, there’s a very large gap between newsworthy AI and production-level AI. You read these wild things that come out of DeepMind or OpenAI, and most of the time, not always, they’re things that are nowhere near production ready. Obviously there are counterexamples; GPT-3 is something you can legitimately use in your application now. But what happens a lot of the time is they publish this fancy new thing and it sounds incredible, but nobody can replicate it. You can’t get access to the code, you don’t have the team or the processing power to do it. And so people get these ideas to do grandiose AI projects rather than things that could actually matter day to day.

Rob May (07:11):
And the grandiose ones that get people excited aren’t actually feasible yet. So that’s one problem. The second problem is that putting AI into products doesn’t always work if you think of it just as an add-on. The example I would give you is the late 1990s, when the web came around. People already had a graphic design process, and they used it for flyers and documents and catalogs and things like that. If you were a big company that had that process (and I actually saw this), people would use the graphic design process to make a web page, and they would give that piece of paper to an HTML developer and say, hey, can you turn this piece of paper into the website? Well, that’s not really what you wanna do, because the web’s different, right?

Rob May (08:05):
You can have something that’s much longer than a sheet of paper; it can scroll down for an infinite amount of time. You can have dynamic links, you can update the information regularly. You need a new process that accounts for that information architecture, for that new paradigm, and that’s where we are with AI. For a lot of the things you could teach AI to do, the thing that’s holding them back is a lack of labeled data. To capture a labeled data set, you would need people who are doing a task somewhere to do that task differently, so that they label data as they’re doing it. And people don’t wanna change how they work if they can avoid it; they particularly don’t wanna do something extra. And in many cases you need so much labeled data that, if you were gonna do it as a company, like we’re doing this at Dianthus, right?

Rob May (08:56):
Sometimes we’re collecting data for stuff that we expect we’ll have to collect for one to two years before we’ll be able to show any results from the data collection. So it’s a combination of those workflow factors, cultural issues, and people’s fear and misunderstanding of AI; I think all those things are holding it back. And there are a few practical issues too. It’s getting easier and easier to do AI, in terms of being able to execute on a model, because you have TensorFlow or whatever, but still not everybody can do it. More and more of the secret sauce needs to be abstracted away into tools and packages, so that an engineer who knows a little bit about machine learning, but doesn’t know the mathematics of neural networks or how to tune hyperparameters, can actually go make something work.

Dan Turchin (09:54):
So you and I have been in and around AI for the enterprise for a while, and I’d contend that what’s constraining adoption of AI in the enterprise is poor data quality and poor data quantity. Unlike, let’s say, a DeepMind or an Amazon or a Microsoft, Twitter, Facebook, et cetera, we contend with a small data problem, not a big data problem, right? What’s your perspective on how we get enterprises to adopt AI by overcoming those data challenges?

Rob May (10:28):
Yeah, that’s a great point. I don’t know that executives are sitting there saying, oh, we don’t have the data, let’s not try this, necessarily. I think the way it manifests itself is in a lot of failed proofs of concept. IBM Watson took a lot of crap, and granted, maybe they overpromised and oversold some things here or there. But when you dig into a lot of those projects, and I have some firsthand knowledge of some of them, a lot of times it was the recipient, the company that was trying to buy the project, that realized as they got in there that they didn’t have the data to do what they wanted to do, right?

Rob May (11:12):
Or there wasn’t enough of the data. Or maybe it was formatted lots of different ways, and so the expense to bring it together in a clean, easy format was gonna be too much. So there were a lot of issues there. It turns out this is a really, really hard problem to solve. It’s actually easier to solve on the machine side, where you’re trying to build machine learning models off of data that’s generated by other machines for other use cases. When you try to do it with something like CRM data, you’d think, oh, that’s customer data, it should be fairly standardized. No, it looks different in almost every company, right? Slightly different. Humans enter it into the CRM, and they wanna do it in different ways.

Rob May (12:05):
They wanna set up their CRM in different ways. It turns out, if you’ve ever even tried to merge a CRM when you’ve acquired another company or something like that, man. Imagine having to do that at scale, across millions or tens of millions or hundreds of millions of records, to train models. This problem of data formatting and data quality is a really big, hard problem to solve. There’s a handful of people in the world who really like that kind of stuff, but there’s really not very many, and they’re hard to find. So yeah, I think it’s a big problem. Some people are trying some interesting machine learning approaches to it, and I don’t know how successful they’ll be. And the other thing that’s interesting is what you said about small data AI.

Rob May (13:00):
There is some research on small data AI, and there will be more small data AI projects long term. There hasn’t been a lot of incentive to do that initially, because the companies that have had the resources to drive AI play to their strengths: business 101 is you play to your assets and your strategic advantage. For Google, Facebook, Amazon, Apple, data’s a big advantage, so they’re incentivized to explore big data strategies, not small data strategies that would help the rest of us. But it’s conceivable, given what we know about how a brain works, that small data AI is possible under certain circumstances. If you have a toddler, you don’t have to show them 10,000 examples of a coffee cup for them to understand what a coffee cup is. It’s one of the areas where I think we’ve shown the least progress compared to what I would like to see. But I’m very hopeful for where the world’s going, and in the next decade, at least, I think we’ll make big progress on small data AI. In terms of how the enterprise adoption problem gets solved, though, I’m not sure which piece you bite off first, right?

Rob May (14:16):
You’ve gotta solve the data problem. You’ve got some culture and workflow issues. You’ve got a general misunderstanding sometimes, going back to my printed-out-paper web analogy: people don’t really understand what’s a good problem for machine learning and what’s not, and where they should look to apply it. So you have a lot of different problems to solve, and people are chipping away at pieces of them everywhere. And so I just hope it gets better and better and better. As some companies become really good at being AI-first companies, they will start to lead the way, I think, for everybody else.

Dan Turchin (15:00):
It’s hard to have the conversation about AI and data without talking about what it means to responsibly use that data. To what extent do you think these ideas, or regulatory frameworks about the ethical use of data, are constraining, or will constrain, advances in AI technology?

Rob May (15:25):
Yeah, that’s a good question. If I understand your question correctly, about rules around data: one of the big arguments has been that China’s gonna surpass the US in AI because China doesn’t care about data privacy, and they’re just gonna record everything and have all the data sets. It’s true to some extent, but I think we’re still so much at the beginning of understanding what data we need, and how much we need for different types of problems, that the number of problems where we’re gonna say China has an advantage because of their lack of privacy laws is gonna be relatively small, at least on the really important stuff. And then what’s most interesting to me on the data side is really starting to get edge data, right?

Rob May (16:24):
Putting AI on edge modules, vehicles out in the world, robots that are doing things, new types of sensors being developed, and starting to figure out, with those data sets: what kinds of things can you predict? What kinds of things can you automate? What kinds of things can you do in the world that you couldn’t have done previously? And so while I definitely have worries (the issues about bias are very real, and the issues about data quality, in a lot of different ways, are very real), I’m not sure they’re the most serious issues around AI at the moment. I’ll give an example that I wrote about in the newsletter a couple years ago: just having access to all this data changes some of the options that are available to algorithms.

Rob May (17:22):
To give you an example: you and I both use Waze, and Waze learns that every day I leave at 8:00 to get to work at 8:30 (whenever COVID is over and we go back to the office, right). You normally leave at 8:15 and get there at 8:45. Those are standard commutes for both of us. What happens if one day I happen to be a little bit early and get in the car at 7:57, and you get in the car at 8:17? Does Waze say, well, Rob’s actually got a little bit of extra time, I’m gonna route him a slightly slower way because he can take it, and Dan’s gonna be late otherwise, so I’m gonna send Dan the faster way? What does the algorithm start to optimize for?

Rob May (18:14):
Does it keep optimizing for each of our individual commutes? Or is it a utilitarian kind of thing: hey, I just wanna make sure the greatest number of people get to work on time, based on what I know about them and when they should be at work? And then that feeds back into user behavior, right? Somebody learns that you get the best routes if you leave later than your normal start time, or something like that. So you get all these weird issues. Or maybe it determines a different thing: maybe you’re worth more to Google from an advertising perspective, so you get the best routes. As these systems start to have more data and influence more lives, the choices they can make around how they optimize the outcomes, and the ethical issues that can arise from that, can blow your mind.
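The routing trade-off described here can be sketched in a few lines of Python. This is a toy model, not Waze’s actual algorithm; the departure times, route durations, and congestion penalty are all invented for illustration. A "utilitarian" planner tries every route assignment and picks the one minimizing total lateness across both drivers.

```python
# Toy model of the routing trade-off (NOT Waze's algorithm; all numbers invented).
# Times are minutes past midnight.
from itertools import product

drivers = {"rob": {"depart": 7 * 60 + 57, "due": 8 * 60 + 30},
           "dan": {"depart": 8 * 60 + 17, "due": 8 * 60 + 45}}

def route_time(route, fast_load):
    """The fast route takes 28 min, but slows by 6 min if both drivers use it."""
    if route == "fast":
        return 28 + (6 if fast_load == 2 else 0)
    return 32  # the slow route is uncongested

def lateness(name, route, fast_load):
    arrive = drivers[name]["depart"] + route_time(route, fast_load)
    return max(0, arrive - drivers[name]["due"])

# Utilitarian planner: enumerate every assignment, minimize *total* lateness.
best_total, best_assign = None, None
for assign in product(["fast", "slow"], repeat=2):
    fast_load = assign.count("fast")
    total = sum(lateness(n, r, fast_load) for n, r in zip(drivers, assign))
    if best_total is None or total < best_total:
        best_total, best_assign = total, dict(zip(drivers, assign))

print(best_total, best_assign)  # → 0 {'rob': 'slow', 'dan': 'fast'}
```

With these invented numbers, the planner sends the driver with slack (Rob, who left three minutes early) down the slower route so that both arrive on time: optimal in aggregate, but Rob never agreed to subsidize Dan’s commute, which is exactly the ethical wrinkle being raised.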

Dan Turchin (19:04):
What’s incredible to me is that we’re entrusting some of these decisions, which could have global social impact, cultural impact, even lives at stake, to junior developers with undergrad degrees writing algorithms at Google or Microsoft. What considerations, or what responsibility, does big tech have to rein in how the owners of these algorithms can manipulate data, often with unintended consequences?

Rob May (19:46):
Yeah, I’m not sure they’re ready for it. Another example I’ve given in talks sometimes: for a long, long time, if you were a civil engineering major, you were required at most degree programs to take an ethics course, because you can’t say, let’s skimp on this concrete to save a little bit of money; if you’re building a bridge or building whatever, people could die. But I was an electrical engineering major, and to my knowledge, most double-E and most computer science majors are not required to take an ethics course.

Rob May (20:21):
And so you have these complicated issues now, in some ways novel ethical issues, that society’s gonna have to face. And we don’t have people who even have, number one, a grounding in the different ways you can ethically make a decision. There are multiple perspectives to take, multiple ways to think about what it means to be ethical. It’s not always a clear right-or-wrong answer, and it depends on how much you value different points of view: things like religion, things like social constructs, et cetera. But we can’t even have the conversation, because people aren’t necessarily qualified in most cases, and they don’t even see it as part of their job. That’s where I think it’s dangerous. And that’s where I’d like to see the big companies lean in a little bit: do some broader training around ethics. Not just what they’re doing around data and bias and all that kind of stuff, but really laying a framework for how you make ethical decisions when you put intelligent machines out in the world. I think that’s a big, big area where I’d love to see them lean in and take things more seriously.

Dan Turchin (21:41):
We’ve explored the topic of AI explainability on this show quite a bit, including with Krishna Gade, the CEO of a company called Fiddler, an AI explainability platform. And we’ve explored this idea: if anyone using AI in their technologies is required to make sure the outputs, or the automated decisions, are explainable, then presumably there’s a higher ethical standard that will be built into the development methodologies. How far does AI explainability go toward addressing some of these ethical concerns?

Rob May (22:19):
Well, it definitely helps. But even explaining why you made a decision, you could still run into different problems. Let’s say you had a credit scoring algorithm and you don’t even include race in the algorithm. That doesn’t mean the machine might not figure something out about, say, people who live in a certain zip code. It basically picks up on the fact that there’s a bunch of people who live in public housing there, and maybe it’s biased against them for a certain reason. Or it may pick up on certain ethnically charged names, or all kinds of weird things. That’s what’s weird about training these models on data: sometimes you don’t know.
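The zip-code effect described here is a proxy variable problem, and a tiny simulation shows it. This is a made-up scenario with invented numbers: the protected attribute is never given to the "model", yet because it correlates with zip code, a rule keyed only on zip code reproduces the disparity anyway.

```python
# Hypothetical simulation of proxy bias: "group" is NEVER used as a feature,
# but it correlates with zip code, so a zip-based rule rediscovers it anyway.
import random
random.seed(0)

def make_applicant():
    group = random.choice("AB")  # protected attribute, excluded from features
    # Invented geography: 90% of group A lives in zip 02101, 90% of B in 02102.
    in_primary = random.random() < 0.9
    zipc = "02101" if (group == "A") == in_primary else "02102"
    income = random.gauss(70 if group == "A" else 60, 10)
    return {"zip": zipc, "income": income, "group": group}

applicants = [make_applicant() for _ in range(10_000)]

# A naive "model": approve anyone from a zip code with high average income.
avg_income = {z: sum(a["income"] for a in applicants if a["zip"] == z) /
                 sum(1 for a in applicants if a["zip"] == z)
              for z in ("02101", "02102")}

def approve(a):
    return avg_income[a["zip"]] > 65

rate = {g: sum(approve(a) for a in applicants if a["group"] == g) /
           sum(1 for a in applicants if a["group"] == g)
        for g in "AB"}
print(rate)  # roughly {'A': 0.9, 'B': 0.1}: a big gap, with race never a feature
```

Note that an explainability tool would truthfully report "decision based on zip code", which is why explanations alone don’t settle the fairness question.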

Rob May (23:13):
So while I definitely think it can help, I’m not sure how far explainability can go. I mean, look, there are very few people in the world who would say, yes, I’m racist, but there are a lot of people who frequently accuse lots of other people of being racist. We see this all the time publicly: some celebrity says something that someone feels is inappropriate, somebody points out that it’s racist, they initially say it’s not racist, or that’s not how they meant it, or whatever, and eventually they just end up apologizing. My point being, humans don’t always agree on something like that. There are certain things we would say are definitely racist, and certain things there’s disagreement about, like the big disagreement about having the Washington Redskins as a name, or something like that.

Rob May (24:12):
And so when you look at those things, you can’t always get humans to agree, and there’s a lot of human argument about it, and we’re supposedly the standard for high intelligence. And sometimes we can’t explain it. It’s like, why did you say that thing that was offensive? And you’re like, well, I kind of don’t really know, or I didn’t intend to do that, or that wasn’t my intention. So how well can the machines do? My hope is that someday they do better than us, but it’s hard to see. It seems like there’ll be a lot of missteps before we get there.

Dan Turchin (24:42):
So, talking about this complicated relationship between humans and machines: I’d say the equivalent of a Luddite in 2022 is someone who believes in this bot apocalypse, that the bots are out to take jobs and the sky’s falling. I contend that that’s being on the wrong side of innovation. So fast forward 10, 20 years: what does that relationship look like between humans and machines?

Rob May (25:08):
You know, it’s interesting. I do believe that technology has always displaced sectors of the economy, and that will still happen. I also think that new jobs will pop up. And look, humans are always looking for status against one another. Let me give you an example: we used to all ride horses, right? Having a horse was not a big deal. Now, I don’t know if you know any horse people, but it’s a hobby you can only get into if you’re very well off. And that happens with a lot of technology. We used to make our clothes by hand; technology made it easier to make clothes, and suddenly we could all have nice, well-stitched-together, stylish clothes.

Rob May (26:03):
Now it’s a show of status if you can have a handcrafted suit, or whatever. And so these things come back around in different and unique ways that aren’t always predictable. So, is it feasible that someday machines are better than humans at everything? Yeah, it’s definitely feasible. It’ll probably happen at some point; I don’t know if it’ll happen in my lifetime, it’s definitely a long, long way off. And even if it happens, I’m still not convinced there will be no jobs for humans. I think there’ll be some people who just wanna work because they like it. Look, I play piano as a hobby; I do it entirely for fun. Some people do it as a job. There are things that cross over, I’m sure.

Rob May (26:50):
Some people love writing code; I’m sure there are some people who will write code even if machines can do it all. So it’s gonna be really interesting. I agree with your point, I’m not worried that we’re gonna have this rapid transition where suddenly there are no jobs for humans and we don’t know what to do, but I do think we have to be smart about constantly retraining. There’s a really interesting company that I invested in called AdeptID, A-D-E-P-T, and what they do is use machine learning models to match people to jobs, matching job codes together in ways that might not be obvious. Everybody’s like, oh, you can’t be a coal miner anymore, you should be a programmer. Look, programming isn’t for everybody.

Rob May (27:40):
Particularly for people who like to work with tools and work with their hands, sitting behind a screen all day looking into a blue light is gonna make their eyes glaze over. But it might turn out that if you were a good CVS cashier, you might be a good flight attendant; they’re both about being personable, about interacting with the public. Maybe if you were a good welder, you might be good at, I don’t know, something like being a phlebotomist, something where you’re good with your hands, you can draw people’s blood, you can do whatever. So they do those kinds of transitions, which previously would’ve been really difficult, because you’d have to know so much about so many jobs. By putting all that onto an algorithm, it allows you to do this at a scale you couldn’t before. So they’re having a lot of success, and I think you’re gonna see a lot more tools and approaches like that down the road. But yeah, I’m not really worried about the bot apocalypse in the near term.

Dan Turchin (28:41):
So let’s say in, call it, maybe 40 or 50 years, we achieve AGI, artificial general intelligence, and let’s say the bots are sentient. If we get to that point, which is certainly debatable, what does it mean to be human? When we’re surrounded by thinking machines, what are the things that remain an innate part of the human experience?

Rob May (29:03):
Yeah, probably suffering, right? Some people still wanna do it. I mean, it’s interesting, because you have the Elon Musk approach: hey, Neuralink, we’re gonna augment ourselves, we’re gonna become like the bots. And I also think the human experience is gonna split into two pieces, where you’re gonna have a lot of people who prefer to live in the metaverse, which is where I think the robots are gonna spend most of their time, because it’s gonna be way more interesting than here in terms of what is possible. But I think there’s still a physical experience for some people, that they get, that’s hard to explain, from,

Rob May (29:52):
I don’t know, from nature, from being around other humans. It depends a lot on the person; people are different in what they like, and I think you’re gonna see a broad continuum of experiences. You’re already starting to see some of this. I wouldn’t say it’s Luddites, but there’s a lot of rejectionism, where people are just like, hey, I don’t need the next thing, I don’t like this about technology. The last couple of waves, web one and web two, have not been about personalization; some aspects of them, like Netflix, have been, but largely they’ve been technologies that apply broadly. And now technology is applying to itself, and through machine learning and AI is making technology itself easier to create.

Rob May (30:42):
And as AI starts to allow personalization at scale in a whole bunch of areas, you’re gonna see this very broad continuum of how people interact with technology: some people continue to reject a whole lot of it, some people embrace as much of it as they can, as deep as it can go, and everywhere in between, people figure out the parts of their lives that they want to apply AI to and the parts that they don’t. But I’m not even sure now, pre-technology, how I would define what it means to be human. So I don’t have a good answer for you in that day and age, either.

Dan Turchin (31:19):
Maybe in another podcast we’ll explore that topic a little more; thanks for indulging me, this just became a philosophy podcast. Well, Rob, I gotta get you off the hot seat in a few minutes, but not before one last question. You mentioned you’ve invested in somewhere north of a hundred companies, and I know from reading the newsletter that you see a lot of interesting, innovative applications of AI. What’s the thing that sticks out in your mind as kind of the AI moonshot that you just couldn’t stop thinking about, whether or not you invested? Something that came across your desk where you said: this is gonna change the world.

Rob May (31:55):
Yeah, that’s really easy. It’s a company called Mythic. The first time I met the CEO, Mike Henry, we were at a dinner together on the west coast; I’m based in Boston, but I was out in San Francisco for some meetings. This was probably early 2016. And I’m a hardware engineer by training, so I was talking to Mike about what they were doing, and he was telling me how they built this analog AI circuit. Just to back up for your listeners: there are actually lots of different ways to do compute, and most people don’t realize that. An abacus is a computer, right? And there are lots of ways to do it with electronics. You can use analog computing, where you just use voltages and resistances and different things like that.

Rob May (32:50):
You can do spiking neuromorphic computing; there’s a whole bunch of ways. But 98% of the computers you use, and the circuits in them, are probably digital. That’s been the standard for a long time. We’ve been tied to this von Neumann architecture in the way we think about compute, and all of our tools and programming and computer chips are sort of built around that. So I meet Mike Henry, and he’s telling me about this analog approach to neural networks, and I remember the first thing I said to him was: look, sorry, dude, I’m an electrical engineer; you can’t do that, that’s impossible.

Rob May (33:29):
So when he realized I actually kind of understood what he was talking about, he got really excited and started going into the details, and I was like, oh, actually that’s pretty brilliant, the way you guys figured it out. Their analog compute model is interesting because analog compute uses way less power and is much, much faster. You take the type of compute power you would have in a typical Nvidia server card, and you put it in a $30, one-milliwatt package or whatever; I don’t know what their specs are right now. And what’s interesting about it is not just that it has a lot of applications for running AI at the edge, more cost-effectively, at low power and everything else. What’s interesting is when that paradigm, when chips like Mythic’s (and they have some competitors as well) get out there in the world, and people start writing for those chips, and you have five, six, seven years of people who have been in grad school or come out of grad school and worked on them, and these chips’ capabilities are their main way of thinking about the world.

Rob May (34:41):
They’re gonna come up with ideas for these chips, ways to extend them, that we didn’t ever think about. And to me that’s really, really exciting. It’s a lot like when the web first came out: people thought of the web as, hey, this is our online brochure about our company. They didn’t think about it as actually building tools and software and interactions and communities; all of that came later, as people were more web native. The same thing’s gonna happen with these different forms of compute that AI is driving. As people work on them, it’s just gonna unlock so many things that we can’t even imagine right now. And I just find it super, super exciting to think about what some of those things could be.
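The analog trick described above can be sketched numerically. In analog in-memory compute, neural-net weights are stored as conductances and inputs arrive as voltages; Ohm’s law and Kirchhoff’s current law then perform an entire multiply-accumulate in one physical step, which is where the power and speed win comes from. The snippet below just simulates that arithmetic in Python; the weight and input values are arbitrary, and real parts like Mythic’s handle signs and precision differently (a negative conductance doesn’t exist physically, so real chips use differential pairs of cells).

```python
# Simulating the analog multiply-accumulate: on a crossbar array, the current
# on output line i is I_i = sum_j G[i][j] * V[j] (Ohm's law + Kirchhoff's law).
# A digital chip would loop over every multiply; the analog array does the
# whole weighted sum at once in physics.
conductances = [[0.2, -0.5, 0.1],   # "weights" stored in the array (arbitrary)
                [0.7,  0.3, -0.2]]
voltages = [1.0, 0.5, 2.0]          # input activations encoded as voltages

def analog_matvec(G, V):
    # One list element per output wire: the summed current on that wire.
    return [sum(g * v for g, v in zip(row, V)) for row in G]

currents = analog_matvec(conductances, voltages)
print([round(i, 6) for i in currents])  # → [0.15, 0.45]
```

The interesting part is that the "computation" in such hardware is free in time and nearly free in energy relative to a digital loop, which is what makes milliwatt-scale edge inference plausible.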

Dan Turchin (35:27):
Well, Rob, we managed to discuss almost zero of the topics we planned to discuss <laugh>, and this has been a fascinating discussion, so we’ll definitely have to have you back for a sequel. What do you think about that? Yeah, happy to do it anytime. Maybe we do it in person next time; post-COVID, all the better. The sooner, the better. Well, Rob, this has been so much fun. Thanks for hanging out; I really enjoyed spending time with you and look forward to the next conversation. All right, thanks for having me, Dan. This is your host, Dan Turchin, of AI and the Future of Work, signing off this week, but we’re back next week with another fascinating guest.