This is a transcript from the AI and the Future of Work podcast episode featuring Gordon Wilson, CEO and co-founder of Rain Neuromorphics, who shares how to re-create a carbon-based brain on a silicon chip.

Dan Turchin (00:21):
Good morning, good afternoon, or good evening, depending on where you’re listening. Welcome back to AI and the Future of Work. Thanks again for making this one of the most downloaded podcasts about the future of work. If you enjoy what we do, please like, comment, and share in your favorite podcast app, and we’ll keep sharing amazing conversations like the one we have planned for today. I’m your host, Dan Turchin, advisor at Insight Finder, the system of intelligence for IT operations, and CEO of PeopleReign, the AI platform for IT and HR employee service. AI models live in the cloud and make automated decisions delivered by software via apps, in the form of dialogue, predictions, and recommendations. From listening to this podcast, you might believe that’s where AI starts and ends. What we haven’t discussed yet is hardware architectures that could expand the scope of what AI can do.

Dan Turchin (01:16):
For example, the future of augmented reality and virtual reality requires vast amounts of computing power, lots of bandwidth, lots of storage, and increasingly at the edge of the network. And that just can’t be provided with today’s devices and networks. Just maybe there’s a better way. Well, today’s guest believes that carbon-based human brains can be replicated with silicon. If he’s successful, it’s not overly dramatic to say our very notion of what it means to be human could change forever. An open question for all of us, of course, is whether or not that’s a good thing, but whatever you believe, it’s important to prepare for that future, because new AI-first chip architectures are inevitable. Gordon Wilson founded Rain Neuromorphics in 2017 to disrupt computing as we know it. Since then, he and the team graduated in the S18 Y Combinator batch and have raised venture funding from an exceptional list of investors, including Sam Altman, who’s the CEO of OpenAI. Thanks to Rob May for the introduction to Gordon, and without further ado, it’s my pleasure to welcome Gordon Wilson from Rain Neuromorphics to the podcast. Gordon, let’s kick things off by having you share a little bit more about your background and how you got into this space.

Gordon Wilson (02:41):
Thank you so much, Dan. It’s a pleasure to be here. I’m so glad to have the opportunity to share our story. And yeah, happy to kick off with a little bit of background on myself and how I got involved with Rain. So, you know, I come from a family of entrepreneurs, where it was always sort of the expectation that I would strike out and do something on my own. And that was always something that I found very compelling: the idea of building a project and building an organization that could stand on its own and solve really interesting problems. And I also grew up in a family that was really defined by a love of science fiction that we all shared. You know, we had this massive bookshelf filled with Hugo Award-winning novels, from Rendezvous with Rama to Hyperion.

Gordon Wilson (03:30):
And that love of sci-fi was definitely something that sparked my young imagination and has, you know, kept me motivated and inspired since then. But this project, Rain Neuromorphics, was started in the summer of 2017, when I met my co-founders Jack Kendall, my CTO, and Juan Nino, our chief scientific advisor, at the University of Florida. I was studying math at the time, and Jack was a really interesting, interdisciplinary student himself: he had studied physics and chemical engineering, was working in a materials science laboratory, but was also independently studying neuroscience and artificial intelligence. He had also started a student organization at UF that was intended to teach, to broad audiences, the fundamentals of Python and machine learning. And it was through that organization that we met; we were sequential presidents of the student group. Jack had been working on really thinking about this problem in a very interdisciplinary way.

Gordon Wilson (04:34):
And that is really native to the way we understand the big problem around AI and computation: you really need to tackle it and approach it from many angles simultaneously to be able to improve as much as we need to improve in order to fully realize the potential of artificial intelligence. And, you know, the bottom-line issue is that AI is really expensive when you compare it to the brain. Our brain has 86 billion neurons and half a quadrillion synapses that run in a parallel fashion, fits inside of a five-inch cube, roughly, by volume, and has a 20-watt power footprint. You know, it is extraordinarily efficient, and it is still the most complex and biggest network that we have ever known, artificial or biological. And this combination of extraordinary scale with simultaneously achieving extraordinary efficiency has always been kind of our north star of what we want to achieve at Rain. Jack had developed a few core technologies at Florida, and it was basically there that we started the company. That was four and a half years ago. The technology has evolved significantly, but at the end of the day, you know, the guiding light for our technology, for our innovation, and for the roadmap that we’re building is to recapitulate the brain in its entirety and to enable independent, autonomous machines to have all of those features of intelligence that we see in humans today.
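To make those figures concrete, here is a back-of-envelope sketch in Python built only from the numbers quoted above; the average firing rate is an assumption added purely for illustration, not something stated in the episode.

```python
# Back-of-envelope on the brain figures quoted above.
neurons  = 86e9     # 86 billion neurons
synapses = 0.5e15   # "half a quadrillion" synapses
power_w  = 20.0     # ~20-watt power footprint

avg_rate_hz = 1.0   # ASSUMPTION: ~1 Hz average firing rate, illustrative only

synaptic_events_per_sec = synapses * avg_rate_hz
events_per_joule = synaptic_events_per_sec / power_w
print(f"~{events_per_joule:.1e} synaptic events per joule")  # ~2.5e13 at these numbers
```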

Dan Turchin (06:11):
What are the fundamental technological impediments to achieving that vision?

Gordon Wilson (06:17):
Yeah. So there really are two technological pillars that have now formed the backbone of our roadmap, and these reflect the core problems that we sought to solve. Those two pillars mirror the two technologies that enabled the deep learning revolution that began 10 years ago, in 2012. Most people are familiar that these technologies enabled AI, but they don’t necessarily know the whole history. The two technologies that came together were, first, a learning algorithm, a robust way to train neural networks. This was backpropagation, invented in 1985 by Geoff Hinton. And second, a scaling architecture, a hardware platform where you can connect more and more of these pieces together to support bigger and bigger neural networks. That hardware architecture is the GPU, the graphics processing unit, which first came to market in 1999. So backprop in 1985, GPUs in ’99.

Gordon Wilson (07:17):
It wasn’t until 2012 that they came together. And what made 2012 so special, with the AlexNet breakthrough, was that they demonstrated you could take multiple GPUs, build a bigger neural network, and train it with backpropagation. And it beat the vision benchmark at the time by like 11 points. Since then, every single major breakthrough that we’ve seen in the digital AI world has utilized those two technologies together, and it’s really been about just putting more and more compute, more GPUs, behind it. AlphaGo, AlphaFold, GPT-3, from natural language to CNNs to vision: it’s all been driven by this learning algorithm and scaling architecture combination. But the problem with that is it’s really expensive. It’s about a million times more expensive than for our brain to do the same types of neural computations. And so AI is limited in terms of how big we can scale it and where we can put it.
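For listeners who want to see mechanically what “a learning algorithm, a robust way to train neural networks” means, here is a minimal, textbook backpropagation loop for a two-layer network in NumPy. The data, sizes, and learning rate are arbitrary illustrative choices, not anything specific to Rain.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))           # toy inputs
y = rng.normal(size=(32, 1))           # toy targets
W1 = rng.normal(0, 0.5, size=(4, 8))   # input -> hidden weights
W2 = rng.normal(0, 0.5, size=(8, 1))   # hidden -> output weights

for step in range(500):
    # Forward pass
    h = np.tanh(X @ W1)
    pred = h @ W2
    # Backward pass: propagate the error gradient layer by layer
    err = pred - y                              # dL/dpred for 0.5 * MSE
    gW2 = h.T @ err
    gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2))   # tanh derivative
    # First-order gradient step
    W1 -= 1e-3 * gW1
    W2 -= 1e-3 * gW2
```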

Gordon Wilson (08:18):
So what we have done is build the analogous technologies for a new kind of neuromorphic, physics-based platform for artificial intelligence. The biggest way to distinguish between the neuromorphic way and the preceding digital way is this: digital AI models are neural simulations. They don’t exist physically; they are abstractions written in software that run many layers above the hardware itself. Neuromorphic hardware, as we’re building it, consists of neural circuits. They are physical instantiations of neurons and synapses; everything has a physical element. And so our technologies are the learning algorithm and the scaling architecture. That learning algorithm is called equilibrium propagation. We developed, with Yoshua Bengio, the Turing Award winner who’s also one of the godfathers of deep learning, a circuits model that allowed us to translate this algorithm into the physical world. And it really is the first algorithm to robustly train end-to-end analog neural networks.
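As a rough illustration of the two-phase idea behind equilibrium propagation, here is a toy sketch loosely following the published Scellier-Bengio formulation: a free phase lets the network settle, a weakly clamped phase nudges the output toward the target, and the weight update contrasts co-activations between the two phases. The network sizes, simplified dynamics, and constants here are assumptions for illustration, not Rain’s circuits model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layered network with symmetric weights (assumed sizes).
n_in, n_h, n_out = 4, 8, 2
W1 = rng.normal(0, 0.5, size=(n_in, n_h))
W2 = rng.normal(0, 0.5, size=(n_h, n_out))

rho = lambda s: np.clip(s, 0.0, 1.0)  # hard-sigmoid activation

def relax(x, y, beta, steps=60, dt=0.1):
    """Settle hidden/output states toward a fixed point of the energy dynamics.
    beta = 0 is the free phase; beta > 0 weakly clamps the output toward y."""
    h, o = np.zeros(n_h), np.zeros(n_out)
    for _ in range(steps):
        dh = -h + rho(x) @ W1 + rho(o) @ W2.T
        do = -o + rho(h) @ W2 + beta * (y - o)
        h, o = h + dt * dh, o + dt * do
    return h, o

def eqprop_step(x, y, beta=0.5, lr=0.05):
    global W1, W2
    h0, o0 = relax(x, y, beta=0.0)    # phase 1: free relaxation
    h1, o1 = relax(x, y, beta=beta)   # phase 2: output nudged toward target
    # Contrastive update: difference of co-activations between the two phases.
    W1 += (lr / beta) * (np.outer(rho(x), rho(h1)) - np.outer(rho(x), rho(h0)))
    W2 += (lr / beta) * (np.outer(rho(h1), rho(o1)) - np.outer(rho(h0), rho(o0)))
    return o0  # free-phase prediction
```

Note the appeal for analog hardware: both phases are just the physical system settling, and the update is purely local to each synapse.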

Gordon Wilson (09:19):
So this is one of the major challenges we overcame. And then the scaling architecture we call the NPU, or the neuromorphic processing unit. It basically answers the question: how do you fit a ton of neurons and a ton of synapses onto a single chip? And the answer is compacting them as closely as you can and connecting them with a sparse pattern, a pattern that mirrors the connectivity pattern of the brain. So these are the two biggest problems that we identified. You need a scaling architecture and you need a learning algorithm, and these have formed the backbone of the technology at Rain.
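The “pack lots of neurons, keep connections sparse, keep paths short” intuition can be checked with a standard small-world graph. This sketch uses networkx with assumed sizes purely for illustration; it is a generic model of sparse connectivity, not Rain’s actual wiring pattern.

```python
import networkx as nx

# Assumed, illustrative sizes: 1,000 "neurons", each wired to 10 neighbors,
# with 10% of edges randomly rewired (a Watts-Strogatz small-world graph).
n, k, p = 1000, 10, 0.1
sparse_net = nx.watts_strogatz_graph(n, k, p, seed=0)

all_to_all_edges = n * (n - 1) // 2
print("sparse edges:    ", sparse_net.number_of_edges())   # 5,000
print("all-to-all edges:", all_to_all_edges)                # 499,500
# Despite ~100x fewer connections, any neuron reaches any other in a few hops.
print("avg path length: %.2f" % nx.average_shortest_path_length(sparse_net))
```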

Dan Turchin (09:54):
So Rain is, I think, safely in the category of startup that we would refer to as a moonshot: high risk, but high reward. It’s a put-a-person-on-the-moon kind of opportunity if it works. What’s the most memorable reaction that you heard from a VC, or maybe a prospective investor, who didn’t really get the science behind neuromorphic computing?

Gordon Wilson (10:19):
Oh, man, there have been so many. And I mean, it’s been really a funny journey for us, because, you know, Jack and I started this company when we were both 25 years old, and we’re now 30. So not only were we coming with a very unorthodox approach to building hardware, but we were also just not the people that investors expected to be building this company. I mean, we now realize that was why we could come with an unorthodox approach: because we were from outside of Silicon Valley, and we didn’t have all of the biases of decades of Moore’s law computing baked into the way we understood hardware. But nonetheless, we’ve had some very interesting reactions. So, I mean, there’s a funny one that happened just recently. We recently closed a Series A, and we had been pitching a fund that had invested in a lot of different companies and had looked at a lot of AI accelerators, and they were clearly compelled by our technology.

Gordon Wilson (11:16):
And they thought that the tech was really interesting, but they forgot to remove us from the CC on an email that they were sending internally. And they said some comment that was like, this technology is so interesting, I just can’t get over the fact that they only have bachelor’s degrees and they’ve never come from a major semiconductor company. Anyway, we didn’t respond directly to that, but we were just like, really? Four years in and they still are discounting us for this reason? So anyway, that’s happened a lot to us. And I mean, over the years there have been countless different versions of this, where we really are thinking pretty far into the future and folks just don’t necessarily buy it at face value. But, you know, one thing that has been extremely validating is that while there have been so many people that have dismissed us because our ideas were either too out there or we were too young, whenever people really pause, take the time to get to know us, and take a closer look, they seem to walk away convinced.

Gordon Wilson (12:16):
So, you know, it’s now a badge of honor we like to wear.

Dan Turchin (12:20):
You alluded to some of what makes the human brain such a miracle, such an absolute miracle of biology, of chemistry, of science. To a cynic who says you can’t possibly capture that sophistication, that at some point you just have to appreciate that the human brain is magic, you and your co-founder say, no, we can actually deconstruct what the brain’s doing and replicate it, with a combination of physical models and AI, and do it in silicon. How do you defend that claim, given that we’re not there yet, though obviously you’re making progress?

Gordon Wilson (13:03):
I mean, as other people have said in the past, any sufficiently complex technology or science is indistinguishable from magic. So, you know, it’s only natural that the most complex information processor we know, in this case designed by millennia of evolution, and all that it’s capable of, seems almost impossible to recreate in a silicon substrate. But, you know, we are firm believers that our brain is a computer, one that has evolved over a long period of time to process information and help aid the evolutionary trajectory of life. And it is something that can be understood and broken down, and we can take the right clues from the brain. So we aren’t necessarily trying to recreate the brain in every specific detail.

Gordon Wilson (14:05):
And actually, this is an important point that distinguishes us from a lot of the neuromorphic projects that preceded our company. There are a lot of folks who have been really focused on mimicking, say, the exact dynamics of a neuron: making sure that the exact way a neuron spikes is almost perfectly replicated, and asking how to build the analogous sodium and potassium ion channels so the spike happens exactly right. This is what most of the neuromorphic world has been focused on. And we don’t actually think that unit-level detail matters as much. You need neurons, and you need synapses, which are used as both memory and processing, but it’s more about how you connect a huge number of neurons together. How do you scale a neural network to such a massive size that it can support really complex problems?

Gordon Wilson (15:00):
And that’s a clue that the brain clearly gives us, you know, through sparsity: by not connecting everything to everything else, but by having these very well-connected patches, so you can minimize the total number of connections but still pack lots and lots of neurons together, where information doesn’t need to jump too many steps to travel. And then of course the other clue was, how does the brain learn? What is the algorithm that allows synapses to know whether they need to become stronger or weaker in order for the whole system to become smarter? And that was where this algorithm came in, which is also a very brain-like algorithm. In fact, the original paper was about bridging neuroscience and deep learning; this could be the algorithm that the brain is using. In any case, to get back to the core of your original question, we’re not trying to mimic every piece of the brain, and it’s not about copying it piece for piece; it’s about taking the right clues.

Gordon Wilson (16:01):
And I think when you combine those correct clues with decades of progress in semiconductor engineering, you can build really powerful products, as we are doing. And I think you can even roadmap all the way to building a full brain. When we build something that’s artificial, will things like consciousness emerge, these things we consider so magical and ethereal and unique to human life, or to biological life? I don’t know, and I’m not sure. But I’m certain that we can build a far more efficient substrate of computation by taking the right clues from the brain.

Dan Turchin (16:40):
So we have models in other AI domains, like autonomous driving, where we’ve got a staged process, and we can talk about the progression of the field based on going from, let’s say, level zero, no automation, to level five, full autonomous driving. Right? Is there a corollary in neuromorphic computing? Like, what are the early tasks that will indicate that we’re making the right progress?

Gordon Wilson (17:06):
Yeah. So on our roadmap for our processors, you know, we’re starting by implementing our algorithm for fairly small models. We’ll have to start small, and then we’ll grow the chips and the models bigger and bigger. So the roadmap will begin with very, very efficient implementations of fairly simple types of machine learning, and that has always been kind of step one of neuromorphic engineering: let’s make it dramatically more efficient so we can put things in interesting places. So we’ll start by, you know, maybe making a simple computer vision task so efficient that it can go in an untethered device. But then, as the roadmap continues, we add scale to that efficiency, and we start to approach far larger, far more complex problems. And I think the inflection points that I find the most interesting are ones where we can start to combine complex training and inference on untethered devices.

Gordon Wilson (18:12):
You know, at what point can we take GPT-3, a continuously learning model of GPT-3 that can adapt its language to the user it’s interacting with, and put that into a robot? When can we take complex robotic control that is also adaptive, learning from a complex environment or adapting to the degradation of the machine’s joints, and put that into a robot? I think those are some of the really compelling inflection points that I get excited about in autonomy. It ends up kind of reflecting the autonomy roadmap, and unsurprisingly, many of our target early customers are folks building autonomous devices.

Dan Turchin (18:57):
So I would think that even once the chips are capable of replicating some of the rudimentary capabilities of the brain, in order for them to be successful, and you used the term untethered, or at the edge of the network, I would think there’s a parallel set of technological challenges related to storage of data, because the learning algorithms presumably need to be collecting all the sensor data. How do you plan to address the storage needs of an edge device?

Gordon Wilson (19:32):
Right. So a few different ways. For one, you know, you don’t necessarily need to store the entire model on the device for it to be able to continuously fine-tune itself and keep learning. You can stream it: store a bit locally and then have more data come in, one batch after the next. But I think the bigger and more fundamental solution to this problem is that we really need to have an algorithm that can learn with fewer examples, which is, again, something that humans have already achieved. The data efficiency of human learning, you know, is extraordinary. We have single-shot learning, two-shot learning. We can generalize and take one lesson and apply it to a broader field of concepts, whereas to do the same with regular deep learning today may take 10,000 examples, for instance.

Gordon Wilson (20:22):
So we actually have a path towards more efficient learning in the neuromorphic domain as well. And this comes from exploiting natural dualities in the physical universe. So this might get a little technical, but it’ll be an Easter egg for maybe some of the engineers that are listening. Backpropagation is a first-order learning algorithm: as it’s learning, it’s taking a step down this gradient, and every time it takes a step on the gradient, it gets a little bit better. There are second-order learning algorithms where it’s more like sliding down the gradient, and we’re nearly certain that the brain is using a second-order algorithm. We’d like to put that into AI, but second-order algorithms are super expensive, and the reason why is that inverse operations on GPUs are just really long and really expensive. In the physical world, in the analog world, in the neuromorphic world, you can actually just flip voltages and currents and perform a natural inverse operation on the same piece of hardware, because voltage and current are duals of each other. So much of the work that we do is about identifying these relationships in physics, in the physical world, and exploiting them so that the physics kind of does the math for us. Anyway, this, we believe, is a path towards far more data-efficient learning, and it’s something that can only be achieved when you’re doing your learning in the neuromorphic domain.
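To see why second-order methods are expensive on digital hardware, compare the two update rules on a toy quadratic loss. The Newton step below requires solving a linear system, the kind of inverse operation Wilson describes; the sizes and conditioning are illustrative assumptions.

```python
import numpy as np

# Toy quadratic loss L(w) = 0.5 * w^T H w - b^T w, so grad = H w - b,
# and the Hessian is H. Sizes and conditioning are arbitrary choices.
rng = np.random.default_rng(0)
n = 500
H = rng.normal(size=(n, n))
H = H @ H.T + n * np.eye(n)     # symmetric positive-definite Hessian
b = rng.normal(size=n)
w = np.zeros(n)

grad = H @ w - b

# First-order step (backprop-style): cheap once the gradient is known.
w_first = w - 1e-4 * grad

# Second-order (Newton) step: requires the inverse operation H^{-1} grad,
# roughly O(n^3) on digital hardware. This is the cost Wilson says an analog
# substrate can sidestep by physically realizing the inverse.
w_second = w - np.linalg.solve(H, grad)   # lands at the optimum in one step here
```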

Dan Turchin (21:48):
So that got a little bit abstract, a little bit wonky, so let’s take it down a level. What are some of the early use cases? What kinds of problems are you targeting?

Gordon Wilson (21:57):
Yeah, so, I mean, initially we’re looking at, you know, being kind of a co-processor as part of a much larger model. So we might be plugged into a very large natural language model but only do, say, pre-processing and dimensionality reduction for the first layer of that model. That’s one use case we’re looking at now, where we’re not tackling the entire end-to-end neural network yet, but just dramatically improving the efficiency of one piece. And then beyond that, you know, we really are looking at a lot of problems in the robotics and autonomy space. We want to put simple, and even complex, vision and perception in these devices and these machines, and there still really isn’t a good hardware platform to support, you know, complex multi-sensory perception in robotics. So we see that as another kind of entry point, a use-case entry point, for us. But we’re really targeting this whole area around the heavy edge, where we don’t need to go to the ultra-low-power devices, because there are challenges there, and we don’t need to go to the massively deployed data centers, but kind of in between, where you have an energy limit but you still want high performance on some fairly complex models. That’s our sweet spot.
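As a software stand-in for what “pre-processing and dimensionality reduction for the first layer” of a large model could look like, here is a simple random-projection sketch. The dimensions and the projection itself are assumptions for illustration, not Rain’s actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw input (e.g., a flattened sensor frame); sizes are assumed.
x = rng.normal(size=4096)

# A fixed random projection standing in for the co-processor's
# dimensionality-reduction stage; the large host model would only
# ever see the reduced 256-dim vector.
P = rng.normal(size=(256, 4096)) / np.sqrt(4096)
x_reduced = P @ x
print(x_reduced.shape)  # (256,)
```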

Dan Turchin (23:18):
So we’re all familiar, I’d say most of us are familiar, with Elon Musk and Neuralink, and I’d say the general public perception is that chip implants in our brains are kind of creepy. Take the counterpoint: what are some of the things that implants could be useful for, enabled by companies like Rain, that aren’t creepy?

Gordon Wilson (23:40):
Yeah. Great question. And, you know, I should say we aren’t initially targeting BCIs, but there’s obviously a natural complement between a brain-computer interface and a neuromorphic chip. And, you know, I have a few friends who are building companies like Neuralink, and I think I should just give a shout-out to Paradromics. I think they’re doing incredible work, and Matt Angle is doing a great job there. But I will speak to a set of use cases I think are extremely compelling. It starts with restoring people’s capacity to experience the world as they were born to, or maybe a capacity they were born without: people that get injured, that lose a leg, that lose an eye.

Gordon Wilson (24:24):
You know, if we can build a brain-computer interface that connects directly into your nervous system, we can restore vision, we can restore people’s ability to walk. You know, we can reconstruct spines and give people a new lease on life. And I think that, to me, merits all of this work on its own. And of course there are very interesting and bigger questions about the longer-term consequences of a society where that is possible, but I think, you know, I would like to live in a world where someone who is paralyzed has another chance to walk.

Dan Turchin (24:56):
Love that example. Every couple of years, we hear about innovations for athletes, whether it’s, you know, the Paralympics or the bionic swimsuit that makes you swim faster because it’s more aerodynamic, et cetera. And we always have to confront these kinds of philosophical issues about the point where we replace enough of a human with some form of technology. At what point is a human still a human, once we’re augmented by these, you know, digital or technological extensions of ourselves?

Gordon Wilson (25:38):
Yeah, that’s a really interesting question. And, you know, I don’t know if there will be an exact point, or tipping point, where we realize, oh, we’re no longer human. It might be a sort of Ship of Theseus kind of problem: as each piece is replaced, we slowly move down this gradient. But I don’t know, I think that as long as it’s the same continuous stream of consciousness, it will still be human, as it was a human consciousness that it started with. And, you know, one could have robotic arms and robotic legs and robotic eyes and silicon cochleas for their ears. But as long as it’s part of that continuum that started with the human, that still is human. I think that there will be machines that might approximate, you know, something that feels human in the future.

Gordon Wilson (26:30):
And that will raise another interesting question of what happens when they’re really passing that full Turing test. But, you know, I’m fairly bullish and excited about the possibility for human augmentation and sort of restoring people’s capacity to interact with the world, because I just think it’ll be able to extend longevity and give people better quality of life. And yeah, I think there is so much good that it brings us that it’s worth ultimately bringing society to a place where we have to ask those questions. It may not be easy for us to tackle them all together, but I think we’ll be able to handle it.

Dan Turchin (27:14):
Might the inverse ever occur, where you’d essentially build a robot and augment the robot, let’s say, with a cryogenically frozen cranium from a human corpse? Or, you know, it’s very hard to get robots to do certain grasping tasks, and maybe you could affix a human arm to a robot. Is that the inverse of starting with the human and making it robotic? Could that play out as well?

Gordon Wilson (27:44):
I think that’s less likely than the former. I mean, to take the example of the robotic arm: today, robotic arms aren’t as good as humans, but I think that once we have a bit of algorithmic development and better compute to run those arms, they will far exceed, you know, the dexterity and control of a human. But to start from the machine and work backwards? I’m immediately brought to Bicentennial Man; the Robin Williams film was great. I mean, it’s possible, I suppose. And, you know, I think if we build desires into machines such that those machines look at us and want to be human, that would be a very interesting world. And perhaps, again as in Bicentennial Man, granting death is sort of this license to life. I think it’s less likely, though; I think we’re more likely to start human and work the other way. And I think that is one of the big goals of BCI, certainly from Elon Musk’s perspective.

Dan Turchin (28:48):
Where are we at with respect to battery or energy storage technology? I imagine these new chips will have massive requirements for energy. What are you and the team working on to make these chips energy efficient?

Gordon Wilson (29:03):
Yeah, so actually one of the things that makes the neuromorphic hardware we’re building so great is that we are dramatically reducing the power footprint compared to the equivalent hardware for digital AI. And, you know, we just recognize that you need to both speed up the processing and reduce the power footprint in order to get efficiency gains that come anywhere close to human intelligence. So the bottom-line answer to this problem is that we are building a far more energy-efficient substrate that is just going to use far less power. But it will still require, you know, more power than a human brain would for the equivalent task, at least initially. So, you know, I know of different types of battery technologies on the horizon that hopefully will intersect with our roadmap and be able to help support long-term storage, so we can have, you know, a several-hundred- or several-thousand-watt power footprint on an untethered device. But we are gonna continue pushing our hardware to be as power-efficient and as energy-efficient as possible, so we hopefully won’t require that to fully realize our roadmap.

Dan Turchin (30:13):
So a simple sensor, you know, taking the temperature in a building or counting traffic, a kind of dumb sensor, or a wearable on a watch or embedded in clothes, that may not be the target for a chip that Rain is building. But something that has a little bit more capability might be close to what, with efficient power management, you could achieve in that kind of device.

Gordon Wilson (30:45):
Absolutely. Yeah. So, you know, I really think there’s going to be a whole ecosystem that ultimately supports the full artificial nervous system. We need artificial eyes, so we need silicon retinas. We need artificial ears, silicon cochleas. We need pressure sensing. We need things taking in that information at the edge of that system. But we also need something to be the interconnect, the nerves, and the brain, the hub for all the compute. We are really solving the brain problem. We are hoping to support the hub, that central computing for all of this neural information, where there are a lot of other companies building those silicon cochleas, those silicon retinas, ultra-low-power processors to go next to those sensors to even further reduce the information that has to be transmitted. So I see them as the complement to what we’re achieving, where we’re really about the brain.

Dan Turchin (31:41):
We could take this conversation a million different directions, and I’ve gotta resist the urge; we might have to continue this in another conversation, if you wouldn’t mind. But you’re not getting off the hot seat without answering one last question for me. And that’s: look, you’re a science fiction buff, I’m a science fiction buff, probably a lot of our listeners are. If Rain is wildly, when, I should say, Rain is wildly successful, what’s one science fiction plot that you hope to be able to make reality using Rain technology? You go first.

Gordon Wilson (32:21):
Oh man. That’s a great question. You know, to not be overly sentimental, I’ll choose films, as they’re probably most common: things like Bicentennial Man, things like A.I. Artificial Intelligence from Spielberg. I mean, they all have a bit of a dystopian edge, because it all comes down to, you know, robots feeling so human that we end up forming emotional bonds with them. But, you know, I think in the long term we will see artificial systems and robots as things that we care for and things that are part of our communities, and we’ll still have human-centric communities, and that will be the foundation of everything that we build. But I would like to see those types of plots at least possible. I would like to see it possible to build machines that can interact with the world, understand the world, and empathize with humans. That sounds a little bit eerie, and I think it probably will be scary for some folks, but I think it would be a better world when we can reach that.

Dan Turchin (33:30):
I wanna dig up Isaac Asimov and ask him what he thinks. We might have to settle for Steven Spielberg. Maybe when you come back on the show, we’ll invite Steven Spielberg as well and get his perspective.

Gordon Wilson (33:42):
Great, perfect.

Dan Turchin (33:45):
Good stuff. Hey Gordon, we stuck to none of the script, and I think we had a better conversation as a result. This was a lot of fun, and I’m gonna insist you come back and have another version of this later. Would that be all right?

Gordon Wilson (33:59):
Yeah, no, absolutely. And sorry for getting into more of the wonky stuff with the technical things. You know, I can go in a lot of different directions and levels of abstraction, but hopefully there is enough there for you to work with to make something good.

Dan Turchin (34:10):
Just brilliant. If our listeners wanna learn more about Rain and about neuromorphic computing, where would you point them?

Gordon Wilson (34:16):
To our website: www.rain.ai.

Dan Turchin (34:22):
Fantastic. All right. Again, thanks to Rob May for the introduction to Gordon. This was a fascinating conversation, and this is your host, Dan Turchin, of AI and the Future of Work, signing off for this week. We’re back next week with another fascinating guest.