The Hidden Risks of AI: John Baldino on Bias, Empathy, and Smarter Adoption


In the latest episode of GTM Innovators, I had the pleasure of hosting John Baldino, President of Humareso and co-host of the But First Coffee Podcast. Our conversation tackled some of the most critical topics organizations must consider as they rush to embrace AI in their go-to-market and talent strategies.

While the promise of AI is exhilarating, John provides a grounded and necessary perspective on the challenges we must confront before blindly integrating these technologies. Let’s dive into a few of the major themes from our conversation and share why every business leader should be thinking critically about how they vet the AI tools shaping their future.

The Danger of Entrenchment: Bias in AI

One of the most striking points John raised was the hidden danger of “entrenchment” in AI models. As organizations feed AI platforms their data, preferences, and styles, those models don’t just get “smarter”; they often become more biased, reinforcing existing blind spots rather than helping overcome them.

“AI doesn’t have empathy. It’s all logic. What enables us to break down the biased walls is empathy, and that’s still a uniquely human strength,” John explained.

When businesses rely too heavily on AI-driven decision-making, especially in nuanced fields like HR or customer experience, they risk hard-coding their own unexamined assumptions into their systems. This can have serious downstream effects, from exclusionary hiring practices to poor customer engagement strategies. Without active, human oversight, AI becomes a mirror for our unconscious biases, not a bridge past them.

Beyond Automation: The Role of Empathy and Talent Development

Tied closely to the topic of bias is the risk of overreliance on AI for talent management and employee development. John cautioned that while AI can efficiently organize data and streamline workflows, it simply cannot replicate the depth of human interaction required to truly engage, develop, and mentor talent.

We discussed how easy it is to let AI summarize performance reviews or generate feedback, but pointed out a fundamental problem: great leadership isn’t about slick summaries; it’s about presence, empathy, and genuine human connection. Only humans can coach, mentor, and truly “see” the person behind the work product.

This isn’t a call to abandon AI, but a reminder: AI should assist, not replace, the crucial human elements of leadership and development.

Building a Foundation of Trust: Security and Governance

Security and trust also featured heavily in our conversation. With so many organizations sprinting toward AI adoption, few are taking the time to set up proper vetting processes around data security, governance, and model transparency.

“It shouldn’t be any different for AI adoption; we need to have confidence in those systems just like we do in SOC 2 compliance,” John emphasized.

Without the equivalent of a “SOC 2 for AI,” companies risk exposing sensitive information, making poor business decisions, and even harming their brand reputation. Vetting isn’t just about functionality; it’s about trust, governance, and protecting the people whose data and futures are being shaped by these tools.

Practical Steps: How to Vet AI Tools Before You Leap

John shared a practical first principle for approaching AI adoption: always start by defining the problem.

“Start with a simple question: What problem are we actually trying to solve? Not every solution needs AI,” he advised.

This may seem basic, but it’s a critical discipline that many organizations overlook in their excitement. Adopting AI because it’s trendy, or because a vendor promises massive ROI, is a recipe for disappointment and risk. Instead, leaders must be clear-eyed about:

  • The exact business need they’re trying to address
  • Whether AI is the right (or best) tool for the job
  • What data is required, and how it will be protected
  • How outputs will be reviewed and validated by human experts

Starting with these questions ensures that AI serves the business’s mission, not the other way around.

Why This Matters Now

The speed of AI development is only accelerating. As John pointed out, many models are “learning” at a rate that humans can’t possibly match. That’s exactly why now, more than ever, it’s crucial for businesses to slow down just enough to vet their AI tools carefully, intentionally, and ethically.

Bringing empathy, trust, and critical thinking into your AI adoption strategy isn’t just about mitigating risk; it’s about positioning your company to thrive in a future where technology and humanity must work hand in hand.

Want to hear even more powerful insights from John Baldino?

Catch the full conversation on the GTM Innovators Podcast where we dive deeper into:

  • Practical frameworks for responsible AI adoption
  • The future of leadership development in an AI-augmented world
  • Why slowing down can actually accelerate your long-term success

[Click here to watch or listen to the full episode.]
