This is a transcript from the AI and the Future of Work podcast episode featuring Dipanwita ("D") Das, Founder and CEO of Sorcero, who discusses how AI improves lives by helping life sciences experts accelerate medical research.
Dan Turchin (00:17):
Good morning, good afternoon, or good evening, depending on where you're listening. Welcome back to AI and the Future of Work. Thanks again for making this one of the most downloaded podcasts about the future of work. If you enjoy what we do, please like, comment, and share in your favorite podcast app, and we'll keep sharing great conversations like the one we have today. I'm your host, Dan Turchin, advisor at InsightFinder, the system of intelligence for IT operations, and CEO of PeopleReign, the AI platform for IT and HR employee service. One of the most obvious ways AI is used to help humans is by making it easier to find and process information. First, there was the Yahoo directory, going back many years. Then there was Google keyword search. And when I got into enterprise search in the early two thousands, Google-like information retrieval was available for enterprises from vendors like Infoseek and Verity.
Dan Turchin (01:14):
Those were some of the early ones. Even the best keyword search systems today suffer from two problems. The first is you need to know which keywords are in the documents you want. And second, there's not really any intelligence applied to the documents retrieved, so typically you get back lists of links. The subfield of AI that we've talked about a little bit on this show is called NLP, or natural language processing. It makes it possible to ask for what you need using plain English, or of course the language of your choice, and you get back answers rather than links, based on a linguistic or semantic understanding by the AI. It's a classic example of where AI is augmenting humans, certainly not replacing them. Today we get to speak with an expert in applying NLP to the field of medical research.
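The contrast Dan draws, links from keyword matching versus answers from a semantic understanding, can be sketched in a few lines. This is a toy illustration only: real NLP systems use learned embeddings rather than a hand-built synonym table, and the documents and synonyms here are invented.

```python
# Toy contrast between keyword retrieval (returns document links) and a
# semantic-style lookup (returns an answer sentence). Illustrative only.

DOCS = {
    "doc1": "Aspirin dosage for adults is typically 325 mg every four hours.",
    "doc2": "Ibuprofen can reduce fever and mild pain.",
}

# Hand-built concept table standing in for a real semantic model.
SYNONYMS = {"dose": "dosage", "amount": "dosage", "grown-ups": "adults"}

def keyword_search(query):
    """Classic retrieval: return IDs of documents containing every query word."""
    terms = query.lower().split()
    return [doc_id for doc_id, text in DOCS.items()
            if all(t in text.lower() for t in terms)]

def semantic_search(query):
    """Normalize query words to shared concepts, then return the best
    matching sentence as an answer rather than a list of links."""
    terms = [SYNONYMS.get(t, t) for t in query.lower().split()]
    best_id, best_score = None, 0
    for doc_id, text in DOCS.items():
        score = sum(t in text.lower() for t in terms)
        if score > best_score:
            best_id, best_score = doc_id, score
    return DOCS[best_id] if best_id else None

# "dose" never appears verbatim, so keyword search finds nothing,
# while the concept-normalized search still surfaces an answer.
print(keyword_search("aspirin dose"))   # []
print(semantic_search("aspirin dose"))  # the aspirin sentence
```

The point is only the shape of the difference: the first function returns pointers, the second returns content, because it matches concepts rather than exact strings.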
Dan Turchin (02:02):
We'll learn how this approach makes it possible for medical experts to get value out of esoteric journals and other unstructured data sources that contain most of what's actually needed to advance their fields. D started Sorcero in April 2018 to make technical content more accessible to experts in STEM fields. Since then, the company's raised about 16 million from leading VCs and built an amazing team of technologists. Before starting the company, D founded 42 Strategies to help social impact organizations with digital transformation initiatives. D has also been active in organizations helping women advance their careers into leadership roles, a topic that listeners know I'm passionate about. Without further ado, it's my pleasure to welcome D to the podcast. D, let's get started by having you share maybe a little bit more about your background and how you got into the space.
Dipanwita Das (02:59):
Sure. First and foremost, thank you, Dan, for having me on your podcast. I am honored, and I am always excited to tell the story of what we're building at Sorcero and, more importantly, why. My name is Dipanwita Das; I go by D. I'm the CEO and one of the three co-founders of Sorcero. My background is at the intersection of public policy and technology. For many, many years, I have worked on large-scale data collection and analysis platforms to help adults in the workplace keep up with their field, make better and more effective decisions at scale, and ultimately be able to communicate what they need to communicate in order to move the field itself forward. A great example of this is, of course, being able to communicate effectively some of the harmful effects of tobacco in order to move legislation forward. That brought me to a very deep understanding of the difficulties of communicating science at scale, and a much deeper appreciation of what our customers and users and colleagues in life sciences are trying to do: make better, more nuanced decisions based on vast amounts of complex medical and scientific content, make them faster, and make sure that everybody that information impacts, patients and physicians and insurers, is able to access it in a timely manner.
Dan Turchin (04:27):
Talk to us about the founding vision for Sorcero. You've done so many interesting things in your career. How did you go down the rabbit hole here?
Dipanwita Das (04:35):
I did indeed. I've been reflecting a lot on what I've done before and how I came to it. It's a question I'm asked often, and I like to boil it down to really three things. First and foremost, I am deeply passionate about building systems infrastructure. Whether you talk about organizational infrastructure or technical infrastructure, it really gets me excited, because infrastructure allows you to build things on it. As a single person, or a single organization, you cannot solve all of the problems that need to be solved, but if you are able to build consistent, standardized infrastructure, other people can build on top of it and start solving their problems. That's one thing that has led me over and over again back to entrepreneurship, but even more so back to platforms. Number two, doing something that leaves a legacy of good. You know, building a company, building a product is a lot of hard work and a lot of people, and Sorcero's team, for one, is putting an incredible amount of love and care and dedication into building something great.
Dipanwita Das (05:39):
I wanna make sure that we can be proud of it, and make sure that it is moving the needle forward for us in improving patient outcomes through better decision making. And third is, are we supporting something that can truly scale? One of the things that's extremely exciting about public policy, as dry as it sometimes may sound, is the scale of its impact. And so for me, I wanna make sure that what I'm doing can scale in the right hands and with the right kind of focus and planning. So those are the three things that really tie together everything I've done in the past and what we're trying to do with Sorcero. And come April 2018, when I was at dinner with one of our co-founders, Walter Bender, what really got us excited was, first, the potential: the market needed a platform, it needed infrastructure that was fit to purpose for medical and scientific content. Second, the regulations and the push to have more evidence-based medicine meant that the market needed to buy and adopt and scale these tools. So it's opportunistic, of course. And then number three was the technology itself. You referred to NLP; we call it language intelligence. We marry heuristic AI, statistical AI, and linguistics altogether, and transfer learning models, among other things that came out in 2018 in a big way, enable us to do that.
Dan Turchin (07:08):
I love that you talked specifically about wanting to do work that matters, going back to the founding of the company in 2018. Talk to us about either a doctor or a user that you're most proud of, and how do they use the platform?
Dipanwita Das (07:23):
Sure. So I'll take a step back and talk about the problem we solve and then how we solve it. Our users are usually MD/PhDs at pharma companies. They fall under medical and scientific affairs, which is the fast-emerging third pillar within the scientific and life sciences community, tasked with advancing scientific engagement itself. It's very interesting, but their work, and the impact of their work, is in whether the therapeutic area they're focused on, or the target patient population they're focused on, is being better treated through an appropriate application of their products and therapies. That's really what they're trying to measure. So they are measuring scientific impact, which is extremely exciting. Number two, they're dealing with both internal and external data, and these are some of the data types we deal with: for example, clinical literature at scale, not just what's publicly available, but also full-text articles that are coming from all sorts of places.
Dipanwita Das (08:26):
Healthcare data is growing at a 36% CAGR, and I'm gonna leave it there for a moment, to understand that this data needs to be analyzed. It's not a choice. It's not a nice-to-have. So our users are dealing with a deluge of external data as well as internal data. It could be coming from insights, it could be coming from CRMs, it could be coming from their field medical teams, and they have to bring this all together in a complex, heterogeneous space and then say: hey, HCP treating a cancer patient, you should consider this therapy and not the other one, because their biomarkers show us that therapy one will work better than therapy two. In the end, that's what matters. And I'll give you a personal story, actually, rather than one of our customers, to really bring this home.
Dipanwita Das (09:18):
One of my colleagues, one of their parents, has recently been diagnosed with metastatic breast cancer. And it's a tough, tough diagnosis to deal with. They were prescribed a course of treatment. Now, our analysis using our software showed that that was actually maybe not the best treatment for them, but the alternative was much more expensive in the short term and cheaper in the long term. This kind of economic data is called health economics and outcomes research data; it's very interesting, and it's in the clinical literature. So we were able to use our platform to give them the information they needed to make a case to their care team to change the treatment. And they were able to do that, because the data that people can extract from our platform's analysis allows them to make that kind of economics-based case for a differential treatment. That's why Sorcero exists: to enable this kind of transformation in somebody's life, coming off of an extremely distressing diagnosis.
Dan Turchin (10:28):
I love that you keep coming back to better patient outcomes. And so if I'm understanding the problem space correctly, the risk, for example, would be potentially using limited information to draw incorrect or incomplete conclusions about things like how to treat a cancer patient. Is that overdramatizing the impact?
Dipanwita Das (10:51):
No, actually not. You know, there are some very interesting numbers here, but even the best-in-class physician is in no place to keep up with the rapid evolution of science, and they are more and more relying on the creators of therapies to tell them both how to treat a particular kind of patient better and how to use their therapies better. So that one is just the quality. The other part is the timeliness of it. How fast is the latest research getting into the hands of practitioners? How fast is it going from access to information, or access to knowledge, to the usability of knowledge? With Sorcero we aim to solve for both. One is not just, can you access it? But really more important: can I use it now? Do I understand why it matters? Do I understand why this particular change in a therapy really matters for the kind of patients I treat? That's the part that is really important. It's quality and timeliness.
Dan Turchin (11:52):
So you referenced the 36% CAGR, compound annual growth rate, in data. Clearly it's not a problem that is at human scale, in terms of being able to process 36% more information year over year.
Dipanwita Das (12:05):
<Laugh> Not at all.
Dan Turchin (12:05):
So it makes sense. Obviously a good fit for technology and for AI. Talk us through how you turn the data into something useful, a little bit more about the technology.
Dipanwita Das (12:15):
Sure. So our platform has essentially three broad pieces to it. First and foremost is our ingestion framework, which allows us to bring in content in any format, from any source. So we're a format- and source-agnostic ingestion framework, straight through into a data lake. This is important because ingestion isn't an act of extraction. It doesn't destroy the document: if a document is what we're ingesting from, it retains all of that richness. Once it's in the knowledge fabric, we can then begin to enrich it, usually using a marriage of different kinds of deep learning and language models, as well as ontologies, to bring in that accuracy. And then that heterogeneous data pool is the basis of our analysis. So that brings us to the third part of the platform. You've got your ingestion; you've got your enrichment, applying either the models or the ontologies to get that data into something usable.
Dipanwita Das (13:18):
And then our third block is how people use this data. On the business user side, they use it in one of three ways. They use it for situational awareness and competitive intelligence, really seeing how the field that they're working on moves forward, and that falls under the bucket of our monitoring. The second is something we are actually launching as a new product next week. I'm just gonna tease this here; I'm not gonna give too much information, I don't wanna take away from it. But our application gives our users, for the very first time, a view of the full product life cycle and the total impact of their scientific engagement, end to end, which has just never been done before. So our users are now able to see: am I succeeding in moving the needle forward?
Dipanwita Das (14:05):
How am I succeeding? How is it working? What are the messages that are working better? That kind of analysis. And then the third is pharmacovigilance: is my product performing as it should? This is both responding to new regulations, but it's also that the burden of monitoring has gone up radically, where the authorities are demanding much more real-time and at-scale monitoring of clinical literature that has just never been done before. So that's what the system does. It allows our users 10x more productivity. It reduces monitoring time by the human by 92% and also performs at a 96.1% net positive, so basically the accuracy is best in class. So our users are able to make more nuanced decisions. And that's really what I wanna talk about here for a moment. It isn't about automating, because automating can only be done for very simplistic tasks compared to what our people do. Our users are MD/PhDs. It's helping them make better, more nuanced decisions at scale and speed, in keeping with the regulatory needs of the markets they serve.
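The three blocks D describes, ingestion that preserves the source document, enrichment against an ontology, and an analysis layer on top, can be sketched roughly as follows. Every name, the miniature ontology, and the example documents here are hypothetical; Sorcero's actual architecture is not public, and this is only an illustration of the shape of such a pipeline.

```python
# Minimal sketch of a three-stage ingest -> enrich -> analyze pipeline.
# The ontology and documents are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Document:
    source: str
    raw: str                      # original text is kept, never destroyed
    concepts: list = field(default_factory=list)

# Tiny stand-in for a medical ontology: surface term -> concept type.
ONTOLOGY = {"nausea": "symptom", "50 mg": "dosage", "imatinib": "drug"}

def ingest(source, text):
    """Stage 1: wrap content from any source without altering it."""
    return Document(source=source, raw=text)

def enrich(doc):
    """Stage 2: attach ontology concepts found in the raw text."""
    for term, kind in ONTOLOGY.items():
        if term in doc.raw.lower():
            doc.concepts.append((term, kind))
    return doc

def analyze(docs, concept_kind):
    """Stage 3: answer a question over the enriched, heterogeneous pool."""
    return [d.source for d in docs
            if any(kind == concept_kind for _, kind in d.concepts)]

docs = [enrich(ingest("journal-A", "Imatinib at 50 mg reduced nausea.")),
        enrich(ingest("crm-note", "Physician asked about trial enrollment."))]
print(analyze(docs, "dosage"))  # ['journal-A']
```

Note that `ingest` keeps the raw text intact and `enrich` only adds annotations alongside it, mirroring the point that ingestion is not destructive extraction.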
Dan Turchin (15:25):
So solving the problem of ingestion makes sense; that's fairly well documented, how you do that. Enrichment is a more interesting challenge, particularly when we're talking about unstructured data. Anyone who's been around NLP knows that's the hard thing, and the more you understand the domain, the more valuable your enrichment is. You mentioned using ontologies and some of the other lexical techniques that you use. Talk us through more about how you apply, call it, structure to an unstructured document using domain awareness.
Dipanwita Das (16:00):
Sure. One of my favorite topics, and again, I will stop where I think I'm treading too much into the proprietary side. But the way to think about how Sorcero represents the domain, and then finally the user, is in five layers. It's a pyramid. Start at the bottom, where we first identify the language. In this case, the way we look at language is: is it English, is it not English, to begin with. Next, what is the domain? In this case, is it oncology or dermatology? Next, what is the subdomain? Oncology isn't good enough; is it medical oncology or another kind of oncology? What kind of oncology are you looking for, down to as precise as possible. Next is the context of the user itself. For example, am I looking for symptoms? Am I looking for dosage? What am I looking for in this bucket of data, or am I looking for all of the above and the interplay between them? So it's looking purely at concepts, not just at words, and that's really important; there's a lot of disambiguation that has to happen there.
Dipanwita Das (17:07):
So, you know, a misspelled word, a misspelled term. And then last, but certainly not least, you represent the company, the customer, through their taxonomies, through their classifications, through their proprietary language, or whatever they may have created. In some cases, our customers have already been working on ontologies, so we bring those in. These five layers, applied one after the other in a hierarchical method, and that actually is one of our patents in this space, allow us to bring accuracy not just at scale but, much more importantly, in a small data corpus. For our market, it's not the massiveness that matters; it's accuracy and precision in just the right place. And that's probably been the trickiest part of what we do: how do you bring an anti-fragile, AI-powered system into, you know, working with small data corpora?
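The five-layer pyramid D outlines (language, then domain, then subdomain, then user context, then the customer's own taxonomy) amounts to a hierarchical cascade in which each layer can condition on the labels produced by the layers below it. A minimal sketch of that idea, with stub classifiers standing in for the real models and ontologies, all of them invented for illustration:

```python
# Sketch of a hierarchical, layer-by-layer classification cascade.
# The layer names follow the five layers described in the interview;
# the stub classifiers are purely illustrative.

LAYERS = ["language", "domain", "subdomain", "context", "customer_taxonomy"]

def classify(text, classifiers):
    """Apply one classifier per layer, bottom up; each later layer may use
    the labels accumulated by earlier layers to disambiguate."""
    labels = {}
    for layer in LAYERS:
        labels[layer] = classifiers[layer](text, labels)
    return labels

# Stub classifiers: real systems would use trained models and ontologies.
classifiers = {
    "language": lambda t, l: "en",
    "domain": lambda t, l: "oncology" if "tumor" in t else "other",
    "subdomain": lambda t, l: ("medical oncology"
                               if l["domain"] == "oncology" and "chemo" in t
                               else "unknown"),
    "context": lambda t, l: "dosage" if "mg" in t else "general",
    "customer_taxonomy": lambda t, l: "class-A",  # customer-specific label
}

print(classify("tumor response to chemo at 20 mg", classifiers))
```

The design point the cascade illustrates is that the subdomain classifier only fires once the domain layer has narrowed the space, which is how a hierarchical scheme can stay precise on a small corpus.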
Dan Turchin (18:05):
We talk a lot in this program about what it means to practice responsible AI, and I tend to kind of casually use three parameters. AI that's responsible is transparent: I know when an automated decision is being made. It's predictable: the same inputs will reliably generate the same outputs. And it's configurable: if the automated decision is deemed to be incorrect, there's no black box; there's a set of levers to turn to bring it back in line with expectations. And the kind of related question that I'm always interested to get answered from guests, to kind of assess, call it, the level of responsibility of the AI, is: what if something goes wrong? What happens if a Sorcero model is performing poorly? And, I guess, the follow-up to that is: how do you monitor the accuracy of the model's output?
Dipanwita Das (19:04):
Great question. So again, I'll take a step back, because I've been talking about AI ethics and all of these things quite a bit. First and foremost, I wanna make the point that a lot of the issues we have with bias come from the raw data itself and not knowing what is encoded in that data. So anything you build on top of it is inherently broken; it's garbage in, garbage out. So certainly the attempts we make to identify that bias from the outset, in whatever way we can as AI practitioners, are really essential. I would almost call it an ontology of bias. So if you're dealing with life sciences or clinical information, what are the commonly known biases that may appear in this content? Those biases are commonly known. There may be other things, but we have a place to start.
Dipanwita Das (20:01):
So the first part of responsible AI is having some way of identifying the bias within the data that you are working on. That's one. Number two, one of the four parts of our system is what we call a validation workflow. It's not supervised learning; it is exactly what you described. Our users are subject matter experts, and they can tell, when they look at a work screen, whether something is on or off. So we have two layers of SMEs. We have SMEs within the company, who are at the same expertise level as our users and who do that check on a regular basis. So we have computer monitoring, human monitoring, and of course there's transparency and auditability. And then we also have the users monitoring. So we've got lots of layers to make sure that information isn't wrong. That's number one. Number two, we have stayed away from the point of care and the kind of black-box system that can come with it.
Dipanwita Das (21:00):
Sometimes when you're making a recommendation, it will immediately touch the patient. We believe that the role of a subject matter expert is essential, because only they can tell if something is wrong with a recommendation or a piece of analysis of clinical literature; no patient can tell that something is wrong. So those are choices we have made. So three important things: one is identifying the bias in the core data; number two is making sure there's a validation loop; and number three is staying away as much as possible from the end recipient of that decision and putting subject matter experts in every step. Those are the three things that we have done to make sure that we're responsible.
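The validation workflow D describes, model output checked by internal subject matter experts before it reaches an end user, can be sketched as a simple human-in-the-loop review queue. The class, the confidence threshold, and the example items below are all hypothetical; this only illustrates the layered-review pattern, not Sorcero's implementation.

```python
# Sketch of a human-in-the-loop validation workflow: low-confidence model
# output is held for SME review; everything released stays auditable.

from collections import deque

class ValidationQueue:
    def __init__(self, confidence_threshold=0.9):
        self.threshold = confidence_threshold
        self.pending = deque()   # items awaiting SME review
        self.released = []       # items cleared for users

    def submit(self, item, confidence):
        """Low-confidence model output always goes to an SME;
        high-confidence output is released directly."""
        if confidence >= self.threshold:
            self.released.append(item)
        else:
            self.pending.append(item)

    def sme_review(self, approve):
        """An internal SME approves or rejects the next pending item."""
        item = self.pending.popleft()
        if approve(item):
            self.released.append(item)
        return item

q = ValidationQueue()
q.submit("adverse-event tag: headache", confidence=0.97)
q.submit("adverse-event tag: fatigue", confidence=0.55)
q.sme_review(approve=lambda item: True)
print(q.released)  # both items, the second only after SME approval
```

The key property is that nothing below the confidence threshold ever reaches `released` without a human decision, which is the "validation loop" layer in the three-part scheme above.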
Dan Turchin (21:44):
So here's a harder one I'd love to get your perspective on: what responsibility does Sorcero have if you're surfacing content that maybe is of questionable integrity? It's one thing to provide the rigor that you just described to the process of how you enrich it, how you retrieve it, that sort of thing. But do you ever think about the responsibility that you have for, let's say, whether the underlying content has been peer reviewed, or the integrity of that content? I could imagine there could be a set of ethical issues that would be indirectly your obligation, but nonetheless, you've provided that content to the expert. How do you think about that second-level ethical challenge?
Dipanwita Das (22:37):
So the great part of what we do is our experts tell us what sources of content they care about. And our users are using peer-reviewed content; they don't take into account what is not peer reviewed. So within their ecosystem, the literature, the content that's coming in from the external world, is peer reviewed, or it is something like social media, where they're trying to figure out what's being talked about. So it's not informing a medical decision; it may be informing a business decision. So when it comes to what kind of content we deal with, Sorcero does not work in a space where the content we are analyzing doesn't have a supervision layer built in already. So whether it's congress intelligence or, you know, literature from PubMed, if it's pre-print, we know what the sources are, and we actually have the source linked right back in our system.
Dipanwita Das (23:35):
So anytime you enter the system, you can go back to where everything came from. It's there. So one is just a software linkage, but the other is the choice of content, and differentiating when it's a business decision. So if I'm looking at, you know, an insight from a conversation between a physician and a field team member about a product or a drug, we know what the context of that is, which means we know how to mediate. So that's how we stay away from that kind of liability. It's not the same for our space.
Dan Turchin (24:07):
So you're developing these expert systems. And one of the concerns that immediately comes up in the medical community, when we have these kinds of conversations about the incursion of AI into their specialized domain, is: at what point could a patient go directly to Sorcero and bypass the physician? Will we ever automate away the need for the professional?
Dipanwita Das (24:31):
Absolutely not. And I cannot be more emphatic about that. It's a core part of Sorcero. There's a symbolic formula we use in everything, which says humans plus AI is greater than either alone. Medicine, which is an ever-changing space, is hard enough for a human being with 20 years of school to keep up with. On the other hand, many of the AI-powered systems in the world today struggle to differentiate between one sense of the word "elbow" and another. So I certainly do not believe that in the very near term we will be replacing the physician anytime soon, but much more important, I don't think that's the right goal. I just think that's the wrong goal for us to strive towards. I'd much rather have a physician that is armed with the latest and the greatest, and able to bring the best of their expertise to bear every time they treat a patient. It does not in any way excite me to think that we're automating the doctor away. That's not the right goal.
Dan Turchin (25:44):
So D, you're telling me you're not signing up to do your own appendectomy anytime soon, huh?
Dipanwita Das (25:48):
Good God, no. I barely want to, you know, trim my own nails, much less do my own appendectomy.
Dan Turchin (25:59):
I love it. So polish your crystal ball. It's 10 years from now, and you've paved the planet with Sorcero, and it's the standard set of tools for the medical community. How's my life different as a patient?
Dipanwita Das (26:16):
I hope, and I am actually pretty certain, that if we are truly able to touch every medical affairs professional in the world today who's interacting with healthcare providers on a continuous basis, then your doctor is better prepared. That's the real thing to measure: is your doctor better prepared? Is your first diagnosis more correct than the first diagnosis sometimes tends to be? Are you getting the right medication straight out of the gate? Are you being given the right dosage? And are you being treated without bias? If we're able to identify the biases that may be ingrained in the core, the base literature, are you being treated without bias? Those are the things that are really, really important. And we should be able to measure that simply from the same dashboards that our customers are using. If they're able to move their science forward, we are gonna see it in the literature, and we'll know that we've made that impact.
Dan Turchin (27:20):
I mentioned in the opener that one of the things I liked about your background is that you've spent a lot of time working on projects that have social impact, specifically looking at women having access to STEM careers. I'm gonna have my two daughters, who are 12 and 14, listen to this, and I want you to tell them why they should feel like they're entering a world of work that provides equal access to women.
Dipanwita Das (27:47):
I am both the daughter and the granddaughter of women scientists. My grandmother was a geophysicist and a mathematician, and one of the best in India in her time. She would've been a hundred this year, just to give you a sense of her era. And my mother is a solid-state physicist. The only way to win is to persist. That's the best piece of advice I ever got from them: do not allow perceived or real sexism, whatever it is, to ever push you out. Because if you persist, you are there. And at the end of the day, if you're not at the table, you're on the menu. So the only way I think women in STEM and women at work are going to make sure they have their space at that table is to be immovable. It's a rough world out there. But the thing about women is that they are far more resilient and able to take a lot more and keep moving forward. So that's what you should count on. So to your daughters: persist.
Dan Turchin (29:13):
Have you ever felt in your career disadvantaged because of your gender?
Dipanwita Das (29:17):
Oh yeah, like a million times, in so many ways. You know, growing up a girl, the boys are out there; the girls are at home, having to be protected and safe. It starts from being kept away from extracurricular activities. Mind you, I grew up in Calcutta and went to college in Delhi, a very different environment than the one I inhabit today. But nonetheless, you know, I grew up in a world where women's safety is a critical problem, and women's safety being such a problem keeps women and girls away from work and school and extracurriculars, because they want them to be safe. So that's one. Number two is just a society that tends to be more patriarchal: oh, you know, she could get married, right? So there's that. Again, I mentioned my privilege of being a daughter and a granddaughter of women scientists, which means I certainly had a much more supportive family, but that's one. Today, when I go to raise money, today, when I go and hire people, there is a little bit of gender bias, a little bit of youth bias; both of those things can come into play against you. But at the end of the day, you have to persist. And you know, it is statistically shown that women-run companies have a 35% better return on investment than men-run companies. So at the end of the day, the data will tell. It's not gonna be easy, but the data will tell.
Dan Turchin (30:39):
What do you do if you're a woman and you encounter implicit bias? It's so easy to just turn around and pursue a field that you're expected to pursue. I love the term persist, but make it real: what does that mean when you actually encounter implicit, let alone explicit, bias?
Dipanwita Das (30:58):
There are so many examples I can think of. Well, let me put it in a language that we all understand. Sorcero is venture funded. Once a year, every year, without exception, I have to go raise a round of capital. That means that I'm talking to potentially hundreds of people. And these are some of the comments that I have heard: "I want to invest in this company. This is a great market. I wasn't expecting to find something like this, but I'm afraid that you won't be able to lead this to scale." Not something that I think you would hear very often otherwise. Number two:
Dipanwita Das (31:55):
"So how do you think you are gonna work with all of these men around you? Do you think they're gonna listen to you?" Again, these are comments that have not only been made to me; they've been made to many other incredible female founders, and CEOs of companies, when they go out and raise money. And when you encounter it, you say what you will say, always with grace, and then just put them on a blacklist. And that's it. Because at the end of the day, there are many, many incredible people out there, women and men, who will support you. As a woman, I would say index on that, and start to just ignore and leave behind the people who are still struggling with the concept of gender equity.
Dan Turchin (32:49):
Unbelievable. Just unbelievable, those examples, in 2022. I wanted you to share those examples because, unfortunately, I know they're real. And not to point the finger here, but I know the venture capital industry is perfectly designed to perpetuate itself <laugh>, and it's clearly, to your point, a very patriarchal culture. But it takes strong women leaders like you to demonstrate why that amazing metric that you cited is real. And so we...
Dipanwita Das (33:18):
And we have so many incredible female venture capitalists, not just backing Sorcero, but in the community. One of the things I'm excited about is to see that change as well, not just in the VC community, but in the community in which we work itself, the community of startups and technology, where women are often left behind, not because they're not good enough, but for very many other reasons. So women in tech, women in AI, women in life sciences, women in STEM. As long as there are strong women who are immovable, really immovable, we'll be fine.
Dan Turchin (33:55):
Persist. You said it best: persist. I hope a lot of women and girls out there are listening to you, and we should all be rooting for you to succeed, because it's a great story. I love your energy. I love the problem that you're solving. It's not just about doing work that matters; you've built an amazing team around a problem that needs to be solved, and you really deserve to be very successful solving it. So thanks for coming on the show and hanging out. This has been really fun.
Dipanwita Das (34:23):
Thank you, Dan. Again, it was my pleasure, and you are right: Sorcero has an incredible team, and I'm honored to work with them on our chosen problem every day.
Dan Turchin (34:32):
D, before I let you go, where can the audience learn more about you and the work that you're doing at Sorcero?
Dipanwita Das (34:37):
Sorcero.com, S-O-R-C-E-R-O; think of magic and us in the same thought. Follow us on LinkedIn; we have some great posts and blogs and webinars coming up, and that'll be great. And then you could always reach out to us at info@sorcero.com if you wanna chat with any of us on the team. And I hope you will reach out.
Dan Turchin (35:03):
Good stuff. D, before we close, I was reminded, when you were sharing your last set of remarks, of one of my favorite quotes from Malala. I'm sure you've heard it, but I'll share it with the audience: "I raise my voice, not so that I can shout, but so that those without a voice can be heard. We cannot all succeed when half of us are held back." That's always inspired me, something I've shared with my kids in the past. But you're the embodiment of what Malala was talking about.
Dipanwita Das (35:27):
Wow. I'm really flattered, and I only hope that I'm able to demonstrate that kind of bravery. She is brave, and I hope many of us have the strength in ourselves to be as brave when called upon.
Dan Turchin (35:42):
Absolutely. Well, like I said, we're all rooting for you. Thanks for coming and hanging out, and best of luck to you and the team.
Dipanwita Das (35:48):
Thank you, Dan. It’s been my pleasure.
Dan Turchin (35:50):
All right, well, this is your host, Dan Turchin, of AI and the Future of Work, signing off for this week. But we're back next week with another fascinating guest.