PeopleReign CEO joins Aithos podcast to discuss responsible AI, human-machine partnership, and why we’re getting workplace AI wrong
In a candid conversation on the inaugural episode of Aithos, PeopleReign CEO Dan Turchin cuts through the noise surrounding workplace AI to deliver a message that’s equal parts challenge and reassurance: we’re spending too much time worrying about the “intelligence” in AI, and not nearly enough thinking about the “artificial” part.
Hosted by Aashna Jain, the podcast’s first episode takes an ambitious approach—exploring not just what AI can do, but what it should do in the workplace. And Turchin, who also hosts the podcast AI and the Future of Work (nearly 400 episodes and counting, with a community of about a million listeners), doesn’t hold back.
The Problem: Anthropomorphizing Our Way Into Anxiety
“We spend more time and more energy—certainly in Silicon Valley where I’m from—perseverating on the intelligence part when we should be focused more on the artificial part,” Turchin explains. “If instead we thought about AI as being a proxy for math and statistics at scale, then I think we’d cultivate a healthy relationship with artificial intelligence.”
The issue? Organizations are giving AI systems names, birthdays, photos—treating them like employees rather than tools. This misguided humanization, Turchin argues, is exactly what’s fueling workplace anxiety about job displacement and creating unrealistic expectations about what AI can and should do.
His alternative? Think of AI as augmented intelligence—something that complements human capabilities rather than competing with them.
A Framework for Responsible AI: Equal Time on What Could Go Wrong
For every hour spent designing AI systems, Turchin advocates spending at least an hour focused on responsible use. That means:
- Understanding the complete data chain of custody
- Anticipating latent bias in training data
- Building automated accuracy checks
- Ensuring AI decisions remain explainable
- Planning for unexpected outcomes
“AI is perfectly designed to replicate human bias,” he notes bluntly. “If you could send an entity out to memorize the internet, you can imagine there’s a lot that is good and accurate, but there’s a lot that’s pretty scary in those dark recesses.”
This framework isn’t theoretical. It’s the foundation PeopleReign uses when developing AI-powered employee service automation that organizations actually trust.
The Job Displacement Myth: History Doesn’t Repeat, But It Rhymes
On the perennial “AI will take all our jobs” concern, Turchin offers historical perspective: “Every time humanity has proven to be pretty resilient. We’ve created new industries that have created new jobs that are better for humans, higher paying, safer.”
He points to the printing press, steam engine, automobile, computers, and internet—each a disruptive force that ultimately expanded human opportunity rather than contracting it. The key is focusing on what AI can automate: the “dull, dirty, and dangerous” tasks that don’t exemplify the best of what humans can do.
“If we use artificial intelligence to augment or supplement all the things that maybe humans shouldn’t do in the first place, what it’s going to do is free us up to do more jobs that maybe we haven’t envisioned yet,” Turchin explains.
Partnership, Not Replacement: What the Future Actually Looks Like
Asked what the human-machine partnership actually means in practice, Turchin challenges listeners to think beyond “words in a text box on a monitor.”
Future AI partnerships might include:
- Smart prosthetic limbs that restore lost capabilities
- AI-powered exoskeletons helping manual laborers work longer and safer
- Diagnostic tools that augment healthcare providers’ pattern recognition
- Systems that free employees from repetitive tasks to focus on creative problem-solving
“When AI is there as a partner to augment where humans are naturally deficient—that’s what a partnership with machines should look like,” he says.
Accountability Is Never Artificial
When the discussion turns to who’s responsible when AI makes wrong decisions, Turchin is unequivocal: “The answer is always the human.”
Humans write the algorithms, generate the data, and decide when to deploy AI. “Just like I said, don’t assign a birth certificate or a name or use a human pronoun to describe an AI. Similarly, we need to get really comfortable using the language of ‘only’—the human is accountable.”
This clarity on accountability isn’t just philosophical. It’s essential for organizations implementing AI in critical workplace functions like employee service, HR automation, and operational decision-making.
The Environmental Question: Breakthroughs on the Horizon
Addressing concerns about AI’s environmental impact—the significant water and electricity consumption of data centers—Turchin acknowledges the challenge while offering a data-driven perspective.
“I don’t think large language models are the ones that we’ll be using as we consume AI services in five years, maybe not even three years,” he predicts, pointing to emerging breakthroughs in quantum computing, neuro-symbolic architectures, and chip design that could make AI orders of magnitude more efficient.
What “Team Human” Means in Practice
Throughout the conversation, Turchin returns to what he calls “Team Human”—a philosophy that puts human agency and wellbeing at the center of every AI decision.
“Wake up every day and be enthusiastic about the future,” he urges. “As long as we commit to being on Team Human and thinking about what’s best for us and how we derive purpose and meaning in life and think about ways to engage technology in ways that will augment your happiness, help you find meaning, and not feel threatened by it—it’s a super empowering way of looking at the economy and jobs.”
Why This Matters for Organizations Today
The insights from this conversation have immediate implications for any organization implementing AI:
- Stop humanizing your AI tools. They’re systems, not employees.
- Invest equally in risk and opportunity. For every hour on AI capabilities, spend an hour on responsible use.
- Focus on augmentation, not replacement. Automate only the tasks that don’t showcase human strengths.
- Maintain human accountability. No matter how sophisticated the system, humans make the final call.
- Think beyond chatbots. The future of workplace AI is more nuanced and embedded than text-based interactions.
As organizations continue grappling with how to deploy AI effectively and ethically, Turchin’s perspective offers an alternative to both the dystopian fears and utopian hype that dominate most AI conversations.
The question isn’t whether AI belongs in the workplace—it’s already here. The question is whether we’re building a relationship with these tools that serves humans, not the other way around.