This is a transcript from the AI and the Future of Work podcast episode featuring Harish Batlapenumarthy, co-founder of Emtropy Labs, who discusses the future of supervised machine learning to improve customer service.

Speaker 1 (00:18):
Good morning, good afternoon, or good evening, depending on where you’re listening. Welcome back to AI and the Future of Work. Thanks again for making this one of the most downloaded podcasts about the future of work. If you enjoy what we do, please like, comment, and share in your favorite podcast app, and we’ll keep sharing great conversations like the one we have today. I’m your host, Dan Turchin, advisor to InsightFinder, the system of intelligence for IT operations, and CEO of PeopleReign, the AI platform for IT and HR employee service. Now, you’ve heard me say multiple times: what can be predicted is better left to machines. One of the best applications of that statement is using data from past customer service interactions to understand how best to improve future interactions with the same customer. A few years ago, this use of machine learning seemed like science fiction.

Speaker 1 (01:11):
Today, it’s nearly essential for every product and service company to avoid falling behind the competition. Data can be used to answer questions like: who’s most likely to buy again, who’s likely to churn, which additional products to recommend, and what features to recommend to improve adoption. Today’s guest is an expert in the field. Harish Batlapenumarthy co-founded Emtropy Labs two years back to bring AI-driven insights to customer success teams so they can deliver better customer experiences. Harish and the team realized the best way to understand the voice of the customer is by actually analyzing the voice of the customer. Before starting Emtropy Labs, Harish spent the previous eight years founding and growing other AI-first companies, including Nemo’s product studio, after doing time at Oracle as a product manager. Harish studied backpropagation in neural nets all the way back in the nineties at IIT Bombay, when the field was new, before receiving his MBA from MIT in 2013. Without further ado, Harish, welcome to AI and the Future of Work. Let’s get started by having you share a bit more about your background.

Speaker 2 (02:23):
Thanks, Dan. First thing, I really appreciate this platform, and my team and I are both thankful for this. Yeah, you mostly covered the background; I’ll try to keep it brief. Originally from India, I immigrated to the US a while back, more than half my life now. Trained as a mechanical engineer, came here for an operations research grad program, worked at Oracle as a product manager. So my career has been mostly around building, deploying, and selling complex enterprise software applications. That’s what I understand really well, and that’s what I’ve been building most of my career.

Speaker 1 (03:02):
Describe the founding vision for Emtropy.

Speaker 2 (03:05):
Well, so like you mentioned, I’ve been dabbling in various ventures for the past few years. I started with trying to automate meeting minutes a while ago, when this was not easy to do. Now you have so many startups doing this. This was when Nuance was the standard speech transcription company. We tried to do something new at MIT; we had a couple of PhD students, and we took it some distance. Google was actually interested in <laugh> acquiring us. But it is a hard problem to solve. Then, five, six years back, I tried spend approval, again a context problem; glad to see some new startups trying to go after this as well. The way we came into the vision of Emtropy was that I’ve always believed that workplace culture really trumps anything else that people go through, and big enterprises, especially fast-scaling enterprises, struggle with this a lot.

Speaker 2 (04:06):
And given that there’s so much data in the system today, we thought about how we could analyze this data and extract trends and patterns around group cultures in such a way that group and individual performance could really go up. That was a very broad, large-scale vision with which we started this company. It’s a very hard problem to solve, and we realized that very quickly. Then we started looking at whether there are specific groups in an enterprise where we could really make this more applicable, which is where we narrowed down to post-sales customer interactions. The beauty of this space is that the data is very clean, very high-quality data. The signal-to-noise ratio is actually very high. And that’s how we ended up where we are today.

Speaker 1 (04:58):
So I shared a few examples of how customer data can be used to generate AI-driven insights. Talk us through that in more detail. Did I get it right? Or what are some of the other kinds of KPIs that data from Emtropy can be used to improve?

Speaker 2 (05:15):
Yeah, that’s a great start. Voice of customer is what most companies have been interested in, and there have been many consulting gigs, consulting solutions, many companies going after it. The most famous, most popular approach is to review Amazon reviews or any other online reviews and try to figure out what the customers are feeling, what they’re saying. The angle that we are taking is inside out, not outside in. This is more about the voice of the agents, or the voice of the employees interacting with the customers, while keeping in perspective what the customer is feeling, right? So that is our angle. And in the process, we are trying to help these teams really mine loads and loads of data, extract those needles from the haystack, and help them really move the needle for their teams.

Speaker 1 (06:07):
So it seems like the use case that has been very popular early on is using transcriptions from SDRs, sales development reps, or sales reps, to understand how to better coach them. You’re looking at it from the perspective of the CSM, the customer service management organization. How applicable are those technologies that are now being used on the sales side to what you’re doing with CSMs?

Speaker 2 (06:35):
Very applicable. I think broadly, horizontally, they’re more or less very similar. Whatever Gong or Chorus has been trying to do there is more or less what we are trying to do on the customer service side. The use cases differ in the sense of what kind of criteria you are looking for, what you are trying to extract in a sales world versus a post-sales world, right? Interestingly, post-sales interactions, that is customer support or service, and now increasingly success, account for almost 70% of all interactions the customer has with the company. Right? So being able to mine that and look for specific criteria that really help these teams deliver outstanding customer experiences, that is what we are looking for.

Speaker 1 (07:20):
So you mentioned data that optimizes signal to noise. And one of the problems I know I’ve faced in solving various problems at various companies is that oftentimes the signal-to-noise ratio is such that there are a lot of false positives. So for example, if you over-rely on sentiment analysis, it can be very misleading, because certain trigger words a customer uses may not be enough to indicate their actual sentiment, or their actual level of satisfaction with a product or service. Are there any tricks you can share with us about how you mine across the variety of signals to optimize for fewer false positives?

Speaker 2 (08:07):
Yeah, that’s a great question. <Laugh> And as you say, it’s a really tricky part to solve, right? In fact, we were able to publish a blog post on how we are attacking the customer sentiment problem, as opposed to what the traditional Googles and Amazons have already provided through their APIs. The good thing about our approach is that it’s very specific to this particular use case of how customer service folks interact with their customers. So when we are trying to mine for, an example here is customer sentiment, we take a supervised learning approach. We are trying to figure out the broad areas where there could be a negative or positive sentiment being expressed. We create tons and tons of training data, and we use that to build these models, right?
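
A minimal sketch of the kind of supervised sentiment pipeline described here, assuming a generic text-classification setup; the tiny dataset, labels, and model choice are illustrative placeholders, not Emtropy’s actual stack:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled support utterances (in practice: "tons and tons" of them).
utterances = [
    "Thanks, that fixed it right away!",
    "This is the third time I am reporting the same bug.",
    "I appreciate the quick turnaround.",
    "I want a refund, this never worked.",
]
labels = ["positive", "negative", "positive", "negative"]

# Domain-specific training data is the point: generic sentiment APIs can
# miss support-specific phrasing like "third time I am reporting".
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(utterances, labels)

print(model.predict(["I keep having to chase you for updates"]))
```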

Speaker 2 (08:53):
And then we test this across multiple industry datasets, right? That is essentially how you keep going forward and try to get as generic as possible. But having said that, there are scenarios, there are circumstances we have discovered where building a predictive model like this, and generalizing it across multiple industries, multiple customers, is really hard. One classic example is building a predictive CSAT model. So you have all the CSAT responses that come back, but the response rates are usually very low, maybe five to 10, 15%. And so companies are actually interested to see, for the other 85% of interactions where nobody responds, what’s happening there? How are they really feeling? So we’ve ventured into this territory and built some really powerful models to be able to predict that. But what we also learned is that the practices, the products, and the processes are so different from company to company that building a very generic model for a problem like this is next to impossible, right? So you’ll have to build a custom model, but where you can bring value to the table is how quickly you can get something like that off the ground while customizing it for that particular customer.
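
A hypothetical sketch of that predictive-CSAT idea, assuming a standard classifier trained on the minority of surveyed tickets; the feature names and synthetic data are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic per-ticket features, e.g. reply count, hours to resolve, reopens.
features = rng.random((1000, 3))
responded = rng.random(1000) < 0.15              # ~15% survey response rate
satisfied = (features[:, 1] < 0.5).astype(int)   # stand-in ground truth

# Train only on tickets that actually got a survey response...
model = GradientBoostingClassifier().fit(features[responded], satisfied[responded])

# ...then backfill a predicted CSAT for the ~85% that were never answered.
predicted = model.predict_proba(features[~responded])[:, 1]
print(f"predicted satisfaction for {len(predicted)} unsurveyed tickets")
```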

Speaker 1 (10:06):
All AI is a data problem, and all supervised machine learning is a labeling problem. Talk us through how and where, what are the data sources that you use, and then how do you scale the labeling process?

Speaker 2 (10:19):
So the data sources are the customer service interaction data. This is primarily data from all kinds of ticketing systems and phone systems that these teams use; it could be Zendesk, Salesforce, Amazon Connect, what have you. We can take all this data, and the nice thing about this data, as opposed to highly noisy data like email and Slack that we were trying to go after earlier, is that it is not as noisy, right? It’s very cleanly parsed, and you have the kind of data that you need. But once you have that, labeling is definitely a huge, huge undertaking. We have dedicated internal teams; we do not outsource this aspect. We have trained these teams extensively on how to think about labeling, and we take it on ourselves. We also have some tricks where our users label the data as well; they curate the data in some ways to make it even more authentic. That’s how we approach this.

Speaker 1 (11:13):
So you’re giving them an interface to tag some kind of attributes of the conversation.

Speaker 2 (11:18):
Yeah. So we do the first cut, and then we let the users override that when they want to override, and we make it very, very easy for them to do that. So we do get a lot of high-quality overrides from them.
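
One way that first-cut-plus-override loop could be represented; the record layout and field names below are my assumption, not Emtropy’s schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SentimentTag:
    interaction_id: str
    model_label: str                  # the model's first cut
    model_confidence: float
    user_label: Optional[str] = None  # set only when a user overrides

    @property
    def final_label(self) -> str:
        return self.user_label or self.model_label

tags = [
    SentimentTag("t-100", "negative", 0.91),
    SentimentTag("t-101", "positive", 0.58, user_label="negative"),
]

# Overridden rows are exactly the examples the next training run needs most.
corrections = [t for t in tags if t.user_label and t.user_label != t.model_label]
print(f"{len(corrections)} high-quality override(s) queued for retraining")
```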

Speaker 1 (11:30):
So you mentioned the challenge of relying on CSAT scores, whether it’s NPS or otherwise, because the response rate tends to be pathetically low, usually in the low single digits. Are there any ways you gamify the process, any tricks that you’re doing to increase the response rate?

Speaker 2 (11:50):
No, we are not focused on that; that’s probably the next phase of our journey as a company. Right now we are highly focused on being an analytics company, a predictive analytics kind of company, but we are getting into the workflow zone as well. So at some level, gamification will help to improve the response rate. But even if you did, let’s say there are companies with response rates of 40%, even 30, 35%, the question is, how much of that can you trust, right? Can you really trust that when a customer says this agent really screwed it up, it’s because the agent actually screwed it up, or is it because they did not get the discount they wanted? When a customer says, I’m going to escalate this, I’m going to take you to court, I’m going to sue you, are they really going to sue you? Those things are not that easy. It’s not that you don’t want to believe what the customer says; it’s a very valuable signal. But what we believe is that there is more signal in the actual interaction that took place than in a point-in-time survey, which people may respond to a day later, or even five days later. So there’s a lot of value in the actual interaction that happens, and we are mining the interaction data to get all of those signals.

Speaker 1 (12:57):
And CSAT is often artificially skewed by the fact that if you respond, you’re likely polarized, either because you had a very positive or very negative experience. So it doesn’t necessarily represent a true assessment of the health of your customer population.

Speaker 2 (13:15):
Yeah. I think that’s where some people in the industry have slowly graduated to the NPS score. Now NPS, of course, has a lot of pros and its own cons; there are people who don’t like it as well. So we’ve evaluated all of this, and where we are moving towards is what we call the customer effort score. Because whether you like CSAT, whether you like NPS, whatever you like, across the board, across the industry, everybody agrees that what you want to lower is the customer effort, right? That is a great predictor of customer happiness. And how would you do that? How do you even figure out how much effort they have to put in to get a problem resolved, across multiple channels, over time? That is a good problem to solve, and we are more focused on that than trying to replicate CSAT or anything else.
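
To make the idea concrete, here is a toy effort score computed over interaction signals; the signals and weights are invented for illustration, not Emtropy’s formula:

```python
def effort_score(contacts: int, channel_switches: int,
                 reopens: int, days_open: float) -> float:
    """Higher = more customer effort = worse experience."""
    return (1.0 * contacts            # every extra touch costs the customer
            + 2.0 * channel_switches  # repeating context on a new channel
            + 3.0 * reopens           # "resolved" but the problem came back
            + 0.5 * days_open)        # time the issue stayed unresolved

# One issue: four contacts, one hop from chat to phone, one reopen, six days.
print(effort_score(contacts=4, channel_switches=1, reopens=1, days_open=6.0))
```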

Speaker 1 (14:06):
So one of the challenges of building predictive models based on some of the data sources that you shared is that bias tends to creep into the datasets in unexpected ways. And I’ll give you an example. Say you want to build a model predicting the customer churn rate, and most of your samples are taken from a representative population of the customer base, which, let’s say for a tech product, is likely to skew male. Hypothetically speaking, it’s probably true that males may be overrepresented, and underrepresented minorities may be underrepresented in the sample. And so you end up with less accurate predictions when it comes to underrepresented minorities or females. How do you mitigate the impact of biased data leading to poor-quality decisions?

Speaker 2 (15:06):
Yeah, I think there are two angles there. One is, there is a set of predictive models we could build where, for want of a better word, they are just more sophisticated regression models, right? So there is no supervised learning there; it’s all unsupervised, you’re just building regression models. There, the bias could creep in because the data has the bias, right? Not because you are labeling it differently, but when the data is biased, you’ve got to live with it. It is what it is. So if I send out a hundred customer surveys and 15 respond, and out of that 15, 11 are male and only four are female, and I train my model based off that data, yeah, it is going to be biased towards male responses. There’s not much you could do there, right? Where you could lessen the bias is when you’re doing the supervised approach.

Speaker 2 (15:52):
Right? So our biggest use case right now is actually not even the CSAT automation; the biggest use case, where we get most customers, is where we try to automate their audit process, the QA audit automation that we are going after, right? So as agents talk to customers, there’s a process where you sit down and listen to the entire call, and you evaluate the entire transcript, or even the email and chat transcripts, to see how well they’ve interacted with the customer. That is a very tedious and laborious process, which is what we’ve completely automated right now. So if you were to take that and build a supervised learning model approach to that, yeah, you’re sitting down, you’re labeling, there is bias creeping in, and it’s very hard to undo that bias.
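
A sketch of what automated QA auditing can look like; the rubric items are typical audit criteria I’m assuming, and the keyword checks are a stand-in for the trained per-criterion models a real system would use:

```python
RUBRIC = {
    "greeted_customer":     lambda t: t.startswith("hi") or "thank you for contacting" in t,
    "acknowledged_issue":   lambda t: "i understand" in t or "sorry" in t,
    "confirmed_resolution": lambda t: "is there anything else" in t,
}

def audit(transcript: str) -> dict:
    """Score one transcript against every rubric item, like a human auditor."""
    text = transcript.lower()
    results = {name: check(text) for name, check in RUBRIC.items()}
    results["score"] = sum(results.values()) / len(RUBRIC)
    return results

print(audit("Hi! I understand the export is failing. Fixed now. "
            "Is there anything else I can help with?"))
```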

Speaker 2 (16:36):
So what we offer our customers is two different levels of models. One is where you’re doing a very high-level regression model, where the only bias the model has is the bias that the data has, and we are not introducing any bias. But when you go to the supervised approach, there is a risk of bias being introduced from our side. We try to negate that as much as we can by having a very diverse set of labelers and also by allowing our users to override whatever we have found. That is one way we are trying to attack it. But yeah, it’s definitely a hard problem.

Speaker 1 (17:09):
That’s a good approach, intentionally hiring a diverse set of labelers. We talk about three principles of responsible AI. AI should be transparent: when it’s being used, it should be apparent to whoever it’s impacting. It should be explainable. And it should be configurable, so no black boxes; to the extent it makes bad decisions, there should be instrumentation that allows for updating the parameters to make better decisions moving forward. What is your perspective on what it means to practice responsible AI? And is there anything that you do intentionally as an organization to ensure that AI is practiced ethically or responsibly?

Speaker 2 (17:57):
Yeah. Responsible AI is absolutely super important. In fact, it’s not even a question of ethics; it’s not even a question of a soft thing. It’s actually a business question, because if you’re not able to prove how a decision has been made, you won’t get business, your customers won’t trust you, and you’re not making money. It’s just good business sense to be as transparent as you can, so they trust you. And people are smart enough now to know that nothing is a hundred percent accurate; even humans are not a hundred percent accurate. We all have our own biases as well. So I think they’re okay, as long as you’re being transparent. Even if you’re 75%, 80% accurate, what the world is looking for, at least in our particular use case, is how directionally correct

Speaker 2 (18:43):
we are, rather than whether we are correct on every instance, right? This is not like a bot making a pizza delivery, where if it gets the address wrong, the pizza goes to somebody else. In our use case, thankfully, we have to be directionally correct, which is what we really focus on. And the way we try to deliver that to our customers is we try to incorporate explainability into all of our UI and all of our models. It’s hard to do; we cannot do it for every model right now, and like you said, sometimes it’s a black box that even we don’t understand, but we are building in that explainability part. The other piece is about when the model deviates from what it should have been, right? So we diligently maintain a list of all those deviations, and every week our users get a report of where things went wrong and how we are trying to catch up to what went wrong. That’s how we try to practice transparency.
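
The deviation log plus weekly report could be as simple as this sketch; the record structure and field names are my assumption, while the practice of logging deviations and reporting them weekly is what’s described above:

```python
from collections import Counter
from datetime import date

# Each entry records one case where the model's call disagreed with the
# eventual ground truth.
deviations = [
    {"week": date(2022, 5, 2), "model": "positive", "actual": "negative"},
    {"week": date(2022, 5, 2), "model": "negative", "actual": "neutral"},
    {"week": date(2022, 5, 9), "model": "positive", "actual": "negative"},
]

# The weekly report: how often the model went wrong, week by week.
for week, count in sorted(Counter(d["week"] for d in deviations).items()):
    print(f"week of {week}: {count} deviation(s) flagged for review")
```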

Speaker 1 (19:34):
One of the important questions that every vendor who’s using data to make automated decisions should have to ask and answer is: what could go wrong? Let me tee this one up with two hypothetical examples for Emtropy. Let’s say an automated decision predicts that a whole category of customers is less likely to churn, and they end up churning; obviously there’s business risk. Or potentially, let’s say you’re transcribing CSM conversations, and we know that transcription quality tends to be poor for people who speak softly or for people with accents. That could lead to performance reviews with lower scores, or that imply poorer performance, for females, people with accents, people who speak softly, or anyone who doesn’t speak the kind of English that transcription works best for. So, two hypotheticals where the impact of a bad decision could be significant. What do you think? Are those right? Or are there other ways where, if something went wrong, the impact on your customer could be significant?

Speaker 2 (20:50):
Yeah, this is definitely a valid concern, right? That’s also why, when you say responsible AI, it’s not just about being responsible in how we show our models, or the output of our models, to the customers. It’s also about how we even deploy our models, right? So internally we have a really high bar before we deploy anything into production; we test to at least 75, 80% accuracy, depending on how you define the accuracy for that model. We do extensive internal testing, then we open it up to our users and ask them to test, and they do the testing. But you’re right that this is a hard part to solve, because every supervised learning model has been trained on the past, but new data is continuously coming in. New skills are being released,

Speaker 2 (21:34):
new people are being hired, new customers are being onboarded, and the way they talk, everything could drastically change, right? So model drift is a reality, and it is up to us to be on top of that and continuously retrain the models, which is why AI in the enterprise, at the application layer, is a hard problem. It’s not going to be easy, especially when you depend so much on conversation data, which changes so drastically; there’s also a cultural component there. The risk is definitely there; there’s no way to eliminate it entirely. It is up to the vendors, to us, to be on top of what’s changing. It’s not our customer’s job; they are doing what they’re doing. We are selling the tool to them with a particular premise, a particular hypothesis. It’s on us to stay as close to that hypothesis as possible, and when we veer away from it too much, it’s also on us to tell them: hey, this is what’s happening with this model, it’s not working out, so you’ll have to think carefully about the decisions you’re going to make based off of this model.
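
One common way to watch for that drift, sketched as a two-sample Kolmogorov-Smirnov test on the model’s score distribution; the threshold and synthetic data are illustrative assumptions, not Emtropy’s monitoring stack:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
scores_at_deploy = rng.beta(2, 5, 5000)  # model scores on launch-era traffic
scores_this_week = rng.beta(3, 4, 5000)  # new agents, new products, new tone

# A large shift between the two score distributions signals drift.
stat, p_value = ks_2samp(scores_at_deploy, scores_this_week)
if stat > 0.2 or p_value < 0.01:
    print(f"drift detected (KS={stat:.2f}); retrain and notify users")
```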

Speaker 1 (22:37):
So we’re talking about ways to help the CSM, the customer service management agent, deliver better customer experiences. But let’s fast forward, call it five or 10 years: these models get so accurate that maybe there won’t be a need for a CSM. Defend that statement. Is there a time in the future when the role of a CSM will go away, because we’ve gotten so good at predicting behavior that we can automate around the need for a CSM?

Speaker 2 (23:08):
Yeah, I mean, <laugh> I guess people would probably want to be in a state like that, but I don’t see that coming. It’s not a good way to say this, but we need support, right? No matter what you do, even the best-designed products, like Apple’s, need support. The day the world would not need a CSM is the day when a product is so perfect that when something is wrong with it, operationally or even from an FAQ standpoint, it can just correct itself, or it can update its users so seamlessly that no human is involved at that stage; that’s when you may not need backend support for that product. And that is a good dream to have, but I don’t think we’re anywhere close to that.

Speaker 2 (23:58):
This is going to stay; this is going to exist. And as you’ve seen over the last five, 10 years, people have tried to completely automate the human support agent out of the equation, the whole bot thing that kind of exploded, but that hasn’t succeeded as much, right? So many of these companies have pivoted; they’re doing various other things. So the way we see and understand the industry, humans are here to stay. We will all need the support. Some of the easier things may get automated, but in the next five, 10 years, I don’t see that happening totally. Even with GPT-4, GPT-5, even the most advanced models being ready, I don’t think this is going to happen that easily.

Speaker 1 (24:37):
So you’ve been in and around both the opportunities and the challenges of AI for, gosh, I think about a decade. So I’d love to get your perspective: let’s say in another decade, say we’re having a version of this conversation in 2032, what’s one behavior that’s commonplace at work that today would seem like science fiction, independent of what Emtropy is doing, just in terms of culture at work?

Speaker 2 (25:06):
Yeah, this is a tricky question. There are two flavors to how I’d like to answer this. One, of course, in a funny way: no matter what, a decade from now, two decades from now, even if we go to Mars, I think Microsoft Excel will stay. I don’t think that’s going anywhere. <laugh> We’ve all been living through that; it’s going to stay. But what I want is different from what I think is going to happen 10 years from now, and I’ll say what I want first. Having lived through this whole enterprise application space, we are nowhere close to the ideal state, right? It’s still really hard for a company to adopt a new enterprise application, to use it nicely, to very easily collect data, and to have enough actionable insights from all of this data to make their life easy.

Speaker 2 (25:59):
Even today, people have to touch multiple systems. The whole BI evolution that was supposed to make our life easy never really took off. And famously, if you’ve been around for a long time: Larry said this back in like 2002, that the whole world is about disaggregation and aggregation, so three years from now there’s only going to be Oracle and SAP. And people at Siebel and PeopleSoft were pissed off, but he made that happen; it was just Oracle and SAP. But then, as the cloud revolution took off, you had so many others, like Salesforce and everybody else, coming in as well. So the enterprise application layer, I think, keeps going through these cycles without any true major innovation that completely disrupts it. Other than cloud, which happened in the backend, we haven’t seen a major disruption here, right?

Speaker 2 (26:46):
And I do wish for that. I do wish for a space where scalability is super duper easy, where you have the same software that can be used by a very small startup and a large enterprise. That is impossible today, but I really wish we could get there; where actionable insights are very, very easy to generate from the data, so people are super efficient in their work. That is not true today, it was not true 20 years back, not true 30 years back. I hope it is true 10 years from now. That is a hope, though. What I think is going to happen is we’ll probably see more and more investment into these 3D, Zoom-like kinds of things. This conversation that you are having will probably be between some 3D versions of us in some kind of metaverse. I think that’s more likely to happen. Less useful for any company to improve their operations, but more fun. I think that’s probably more likely to happen.

Speaker 1 (27:44):
A note: this podcast is not sponsored by Microsoft. And if you’re <laugh> out there developing an alternative to Excel, come on the program and you can challenge Harish. Good stuff. Well, Harish, that’s about all the time we have for today. Where can our audience learn more about you and Emtropy?

Speaker 2 (28:03):
Well, we are on the web at https://Emtropylabs.hk/. We are on LinkedIn, and we are on Twitter at Emtropy Labs. We’ll be glad to follow up. We’ll of course love to have some inbound customer leads. <Laugh> So we’d really appreciate that. Thanks again, Dan. This was great talking to you.

Speaker 1 (28:24):
Thanks for coming by and hanging out. I really, really enjoyed this one. Gosh, that’s a wrap for this week. This is your host, Dan Turchin, of AI and the Future of Work, and we’ll be back next week with another fascinating guest.