In this episode of GTM Innovators, host Kyle James sits down with John Baldino, President of Humareso and co-host of the But First Coffee Podcast, for a candid conversation on the future of AI in the workplace. This discussion explores how organizations must balance innovation with responsibility when adopting AI technologies. John shares his insights on building trust, maintaining ethical standards, and preserving the human element as AI continues to reshape the way we work. From recruitment to performance management, this episode offers practical advice for leaders navigating the rapidly evolving landscape of technology and talent.
Subscribe to the GTM Innovators Series on the following platforms:
- SoundCloud: https://soundcloud.com/research-859405782
- YouTube: https://www.youtube.com/playlist?list=PLsoV6fwX4cpGR2Hg98rc1e1k2-cOqCbhb
- Spotify: https://open.spotify.com/show/1gvDzcl0jxpPIfu9WYu6U4
- iHeartRadio: https://iheart.com/podcast/258127960/
- Pandora: https://www.pandora.com/podcast/gtm-innovators/PC:1001097038
- iTunes: https://podcasts.apple.com/us/podcast/gtm-innovators/id1790738579
- Amazon: https://music.amazon.com/podcasts/1b000615-31cc-49dd-a5d8-f80d5098bf2d/gtm-innovators
Transcript:
Kyle James 00:00
Welcome, everybody, to another episode of GTM Innovators by 3Sixty Insights. I’m your host, Kyle James, and today we’re diving into a responsible look at some of the crazy technology coming at us. I don’t know, let’s call it vetting the future. You know, a little bit of an HR spin, where we’re talking about, you know, vetting people for the job. Because, let’s be honest, AI is coming for all of our jobs in one way or another, whether it’s helping us or, hopefully not, replacing us. But there’s a lot of fear on that, and it’s worth a conversation. And with me here today, I’ve got John Baldino. John, welcome to the show.
John Baldino 00:44
Thanks, Kyle. I promise I’m not coming for your job. You can rest assured of that.
Kyle James 00:50
Well, I appreciate that. And John, you are the president of, I’m going to mess this up, Humareso. Did I get it right this time? That’s right, Humareso, yes. And you also co-host the But First Coffee Podcast. So with this kind of format of having conversations, you’re probably much more of an expert than I am, to be honest with you.
John Baldino 01:09
I will at least say I’m comfortable. How’s that? I don’t know.
Kyle James 01:13
I love that. Well, let’s have a comfortable conversation. Awesome. And you’re an HR guru and a member of our 3Sixty Insights Executive Advisory Council, and working on some of the research previews that I do, we were kind of sharing that, and I thought you had some really insightful comments coming back on some of the research I’m doing and exploring around AI. About, hey, let’s talk a little bit about the security, the ethical, the brand, the compliance concerns of all this stuff we see with AI adoption. I’m like, hold on a second. Yes, let’s have that conversation. And I was like, hey, you want to jump on a show and talk about it? So I’m super, super excited that you took the bait and had this conversation with me. And with that, you know, I’m a comic book guy, so what is your origin story? Tell us how you came to the point of setting up and founding Humareso, and that’ll dive us into this AI and technology stuff.
John Baldino 02:16
Sure, sure. I really would like a superhero origin story, though. I mean, I would love that. So I am from Philadelphia, and I’ve had the privilege of being involved in HR, leadership development, and technology-related work for a very long time, and that’s been fun. That’s the overall origin story, but specific to Humareso: I started the organization as an HR consulting firm almost 13 years ago now. And really the premise behind it was, how do we get organizations the resources, information, and access points necessary for them to be competitive with other organizations in their industry, regardless of size? So the SMB, right, the small businesses, to mid-market, to enterprise. If you’re providing a service as your way of earning money, if you’re creating a product, if you’re doing technology-related work, those who are using it don’t necessarily know how big each company is that they’re looking at when deciding which resource they’re going to go with. They just want the right things: good quality, predictable results, cost-effectiveness. For what I’m paying, am I getting something good? So when I started the organization, I thought, you know what, there are a lot of small business and mid-market companies competing with enterprise-level organizations that don’t have the resources at their disposal. Small businesses aren’t going to hire a Chief Human Resources Officer at that level, right, to help manage talent and have an organizational strategy that makes sense in comparison to, say, a larger organization that has those resources. So that was the premise behind it. And all these years later, it turns out I was right, and so it’s been a good time supporting up and down that spectrum, small business all the way up to enterprise.
Kyle James 04:21
So your superpower, then, is that you level the playing field.
John Baldino 04:25
Oh, that’s a good one. Yeah, yes, that is my superpower. It’s funny, though, because, you know, again, just as a person, right, we know some of these truths. If I work for an organization that’s 10 people, 1,000 people, 100,000 people, I’m still a person. You still have to see me. You still have to understand the competencies I’m bringing to the table. I would hope that you’re able to then help me advance in those competencies and learn new things, whether that’s by knowledge or skill practice, right, or aptitudes. I would want that to be true anywhere in these organizations, right? So that’s an HR person’s job, yeah.
Kyle James 05:10
Yeah. And, well, that kind of sets us up for this conversation today, right? Because you’ve had that front-row seat to how this technology is shifting and how it’s affecting the workplace, and how do you take these things that the big companies could just afford and scale them out? But also, you know, how AI and, you know, LMSs and all this other stuff kind of help level the playing field. So how would you describe that evolution? How has that evolution affected what you do over the last, you know, eight-plus years, and how are you seeing that play out with even the crazier technology that’s coming at us now with AI and whatnot?
John Baldino 05:53
Yeah, I mean, I think there are a couple of things. One, certainly in the last 12 months alone, 12 to 16, this whole AI component of things has really pushed technology conversations forward differently. I see it impacting, I think at the onset, areas of recruitment. We see adoption already starting to happen pretty quickly there, whether that be resume screening or the way in which job ads are even crafted and put together, right, leaning into AI, even from, like, a ChatGPT or Copilot or some platform like that, that organizations are adopting to say, here’s our current job ad, make it read better for a particular tech role that we’re looking to hire for, or a custodial role because we’re struggling, or whatever it might be. So I think that adoption is already rampant, and in a positive way. I think, though, from a concern standpoint, because it’s moving so quickly, one of the things that we’re spending time on is: do you know what it is that you’re really adding to the mix, and what it is that it’s really doing to help you do your job better? If you just think it’s going to be plug and play and solve all your problems, I don’t think we’re there, and I don’t know that we’re ever fully going to get there. We need to have human eyes on some of this. But sometimes, and this may be shocking, sometimes, as people, we can be lazy, and so we just want to apply something because it’s there and it’s easy to do, and we don’t necessarily do as much of the vetting that you were talking about at the onset, right, of these tools. And I think we’re trying to help bring that to the conversation. Nice.
Kyle James 08:00
So with that, I imagine, you know, a large role for HR orgs in companies a lot of times is the education and training piece, right? I heard some stat last week that AI is getting twice as smart every six months now in what it’s capable of doing. And, like, us lazy humans, you know, we do things as easily and fast as possible. We don’t change at that rate, period. So how are you seeing that training and education and support, while also vetting, play out with companies? Do you force them to slow down? Or what are you seeing there?
John Baldino 08:39
Well, I think it’s an interesting word that you use, that, you know, it’s getting smarter every six months. Really, my preference would be to change the word smarter to: it’s learning at a very rapid pace every six months, because the things it may be learning may actually not make it smarter. It actually may make it more entrenched, might be the word that I use. And so if you use some of these OpenAI-type platforms, the platform is going to get wiser to how you prefer things, the kinds of information that you are putting into its program, the kinds of outputs that seem to please you more than others, right? When it asks you, do you want me to reword this? Do you want me to...? And you do this at a certain clip or pace, and it sees the things that you settle in on. It’s learning about you. Yeah, and so that may not be to your benefit. It actually could be, again, entrenching you in a bias, in a frame of reference that isn’t as holistic as you’d like it to be, that’s not as inclusive as you would want it to be. And that, to add to your question, should be mildly terrifying to some, because when you use that information then to input back into other systems, and specifically for what we’re talking about today, those systems that are applicant tracking systems, human capital management systems, HRIS, learning management systems, any other technology modules or tech suites that you might be using, that’s going to make it really difficult to undo what needs to be undone if there’s a problem, because we’ve allowed it to sort of become a spider web through all of these platforms.
Kyle James 10:47
That’s... I’ve never thought of it that way. I’m gonna step back and repeat some of what I heard you say, the way I interpret it, because I would love to dive deeper here. It’s the bias comment, right? You’re right: the way that we’re feeding and training these things, these models are becoming biased toward the kind of feedback we want. Well, we all know every single human is biased, by the way we were raised, the things we like, our preferences. And what we need more of is to kind of transcend that and come together across everything. It’s like that empathy element, right? But AI doesn’t have empathy. It’s all logic. So what enables us to break down the biased walls is the empathy. Like, I never thought of it that way, but that’s a whole other set of challenges being introduced that we don’t have an idea where to even start with.
John Baldino 11:44
And we don’t have people who are necessarily very compassionate to begin with, or considered empathetic to begin with. And I’m not saying that because, you know, I think everybody’s terrible in HR. That is ridiculous, right? That is not meant to be, you know, a sweeping statement about everybody that’s involved in this particular discipline. But I would say it’s a sliding scale, yeah. If you really don’t have anyone who is trying to be considerate, big word, considerate, right, then it’s tough, right, to have this tech partner in AI try to fix that for you. It can craft language, because you can input, say, make this sound more compassionate. You can do that. You can spit out language that does it, but that language is two-dimensional. Yes, the language is not really going to be... We talk about talent engagement. If you think that what’s necessary for true talent engagement is a really nicely worded email, I think we’re missing it, if that’s what we’re counting on. It still comes back to that human interaction. It still comes back to basics around rapport development and communication strategies, right, that are multi-channeled, not just written words you can cut and paste from a platform. But how do I sit with somebody? Like this: I’m not some AI image. I’m really responding, Kyle, and we did not rehearse this for hours before we got on here, right? Like, we kind of came up with some ideas, and, yeah, let’s just see where it goes. Let’s see where it goes, right?
Kyle James 13:41
I didn’t know we were going to be talking about AI bias, but hey, that’s a super fascinating, interesting angle.
John Baldino 13:45
But this is what I... So it’s going to lean into a bias. And by the way, bias does not necessarily always have to be sort of illegal or negative, true. It could just be... it can have a bias around the fact that I lack empathy, yeah, and so the way in which it integrates is going to be from that vantage point, yeah.
Kyle James 14:07
Yeah, it’s fascinating. And that doesn’t necessarily help us, because, you know, in just some of the conversations I’ve had with it, it’s like, I wish you would challenge me a little bit more. Why are you always trying to please me? Every idea I throw at it is a great, wonderful thing. And sometimes you’re like, no, I need you to say that a little more firmly, because there are times in business, even in writing, when you need to get across a point and you need to be a little bit more pungent. And AI seems to, you know, try to smooth everything over.
John Baldino 14:38
Yes, and remember, too, that’s a wonderful way to put it, because AI’s goal in some of these things is toward resolution or solution, right? That’s what it’s designed for. The logic of it wants to get there for you. So sometimes it may choose the path of least resistance there.
Kyle James 14:59
Yeah, that’s super fair. So, all right, let’s talk. Let’s take a step back and go bigger here, right?
John Baldino 15:06
Like, I love that. Your gears are turning. This is so fun to watch.
Kyle James 15:09
No, this is how I do it. Like, where do you take it? What does it make me think? How do I interpret it? More so, with this, companies are adopting these things, and we’re kind of seeing this as we’re doing here. But how do you see, or how do you think, what are they jumping the gun on in this adoption? What are they not thinking deeper through? I think we’re talking through a lot of it, but how do they really think about some of the, like we said earlier, ethics, the compliance, the security concerns, the human concerns of it? What are some, maybe even a story or two, that you’ve seen out there where companies aren’t doing their due diligence here?
John Baldino 15:51
Well, I think the easiest place for me to start with this, and again, I know I shouldn’t say to start but to continue, because you’re right, some of the things we’ve said already answer this question. But I think security is a concern, a relative level of security. When you think about SOC 2 as an example, and you think about why those types of parameters are in place, it’s really about trust. It’s about trust in systems. The same thing has to hold true for AI adoption. What’s the level of trust that we can have in what’s being adopted? What hoops do we need to make sure it jumps through in order to have confidence in it? If you’re working with an organization that requires you to go through a SOC 2 audit: hey, before we align and partner with you in that client-customer type relationship, we need you to go through the SOC 2 audit or assessment so we can have confidence that our data is going to be used and housed in a way that works and meets a certain parameter. It shouldn’t be any different for AI adoption when it comes to those things, those frameworks. We need to have confidence in those things. So to specifically answer your question: I don’t think we have enough structure in place to have confidence in the same way. And I think companies need to take a moment to ask, generally speaking, many of the same things that they might consider when it comes to SOC 2 compliance. What would you ask for? What do you want to know? What do you want to know about storage? What do you want to know about workflow? What do you want to know about what happens with the data after the fact? Those kinds of things. Okay, why aren’t we asking that with some of the tools we’re using in AI? Yeah.
Kyle James 17:53
I’m curious now, because what you’re talking about is processes and procedures, and it seems like a lot of that can be, I hate this word in this example, but templatized, because it’s pretty universal. Maybe it’s different from industry to industry, but there are those kinds of things that you need to check off, confirm, have had internal conversations about before doing any of that. But I’m curious: have y’all started standardizing any of those processes and working with customers and businesses on, like, hey, here’s your checklist, or here’s the foundation you need to have in place before you jump into any of this?
John Baldino 18:31
Yeah, sort of. That’s a terrible answer, isn’t it?
Kyle James 18:36
That’s probably the most honest answer. It’s changing so fast.
John Baldino 18:41
It’s changing so fast, that’s exactly right. Like, it’s changing so quickly that it’s hard to say with 100% certainty that what we have as templates are going to work today, tomorrow, next week. I would say, though, that maybe more than a template at the moment, we’ve got to have some structural confidence and conversation around, and I’m going to be HR-specific here, you know, using AI for performance feedback as one area, which we know happens. When you’re using that, is it going to help you collate and correlate data inputs? You’ve got an annual appraisal to do. You have a year’s worth of emails, five-minute check-ins, Slack messages that speak to this performance, and you take those things and put them into AI to help spit something out. I love that. I think there’s wisdom in that, because it’s helping you to sort of find themes. And in this instance, and I know I’m going to be talking out of both sides of my mouth now, there’s the potential for removing certain types of bias. Recency bias: I’m not just going off of what Kyle did for me in the last month. I’m hearkening back to things from 10 months ago if I’m really looking at a true annual appraisal. That’s where, okay, great, that piece of AI makes sense for performance. But if we’re making decisions based on this performance feedback, career advancement, competency development, vertical reassignment, capacity, all of these things, are we comfortable leaving them to AI to tell us? I don’t think AI has enough input to know the answers to those things, but we may anecdotally take what it gives us in the collation of information and let that give us a trajectory for responding to a real human being and their future advancement within the organization, their future development within the organization.
We’ve got to really be able to interpret the output from the AI platform with more of a critical eye than I think we may set ourselves up to, because we’ll take what it spits out and be like, yes, I’m done already. That’s not the right approach. It’s got to be looked at critically.
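[Editor's note: the collation step John describes — pulling recurring themes out of a year of feedback notes — would in practice be handed to an LLM. As a toy illustration of the idea only, here is a minimal keyword-tally sketch; all names, notes, and theme buckets are invented for the example.]

```python
from collections import Counter
import re

# Invented sample: a year of short feedback notes about one employee.
NOTES = [
    "Great follow-up with the customer on the billing issue.",
    "Missed the Tuesday deadline again; communication was late.",
    "Customer praised the clear communication in the demo.",
    "Deadline slipped on the Q3 report.",
    "Strong customer rapport during onboarding calls.",
]

# Crude theme buckets a reviewer might care about (an assumption, not a standard).
THEMES = {
    "customer": "customer interaction",
    "communication": "communication",
    "deadline": "timeliness",
}

def tally_themes(notes):
    """Count how often each theme keyword appears across all notes,
    surfacing patterns beyond the most recent month (countering recency bias)."""
    counts = Counter()
    for note in notes:
        for word in re.findall(r"[a-z]+", note.lower()):
            if word in THEMES:
                counts[THEMES[word]] += 1
    return counts
```

The point of the sketch is John's division of labor: the tool finds the themes across ten months of material, but a person still decides what the themes mean for advancement or development.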
Kyle James 21:24
Yeah, well, I think what I’m hearing you say is really two pieces, but it’s all one thing. It’s like high-quality data, right? Where do you get all of this high quality? Because, great, I’ve got a year’s worth of somebody’s emails. That doesn’t mean all those emails are great. How do we cull the crud and focus on the good stuff? But also, yes, that’s the qualitative, lots of data. Or wait, I may have them backwards, flip that: the quantitative is all the data. The qualitative is maybe the human element, like, I have quality interactions with this person. I can also see what’s going on here, besides just the mass of communication they have. How do they treat each other? How are they competent in their role? How do they respond to customers? You need both, and it’s still the same challenge either way: how do you get enough quality data?
John Baldino 22:18
Yeah, and I think that’s the personalization part. AI can’t take the place of developed managerial skills, which should be developed managerial skills, right? Because I need to be able to have good conversation with you. I have to have performance-related conversation, corrective-related conversation, coaching-related conversation, yeah. And if I think I’m just going to take this, you know, report that gets spit out and say, here, read this, we are missing the boat on real talent development. We might have baseline talent coverage: I can say I have this piece of paper, so to speak, that I can put into your personnel file to prove that I gave you something that shows I moderately care about your connection here at the organization. As opposed to: I want to help Kyle get really good at customer interaction, as an example. If that’s your responsibility, I need to spend time with you. Kyle, let’s chat. Maybe 15 minutes into our chat, I’m like, hey, you know what? There are times where you go to talk to someone and you just look down for a while. When you’re dealing with customers, sometimes that can be perceived as, you know, you don’t care, or you’re not getting enough sleep, yeah, or something, yes. Is AI going to put that on the form? It’s not observing you doing what you just did. That’s what we have to remember. It’s okay to be both/and. It should not take the place of, yeah.
Kyle James 24:16
Let me take this a step further, because one of the big things I worry about with this stuff is, you know, you and I have both been in the business for decades. We have the scars on our backs of getting us to the point we are, and the learned experiences of getting there, right? And now you bring in these AI models that can do certain things as well as we can, some things better. Now, what I worry about is, great, we can take these things and become superhuman, comic book example again, and do more, faster. But the reason we’re able to do that is because we know what great is, because we’ve had to work our way through that, right? You can’t take a 22-year-old fresh out of college, give them these things, and expect the same result. It’s just not possible. And I don’t ever see it becoming possible, because you need to go through those reps of getting slapped on the wrist because you were not making eye contact, and learning how to make eye contact, and the only way to do it is just over and over and over, having dozens, hundreds, maybe thousands of customer conversations to get good at it. And that’s what I worry about: as we become more superhuman, do we spend less time training the future on how to become us, because we’ve got this other thing filling in the gap for us? Like, have you thought about that angle? I don’t know how to respond to it.
John Baldino 25:36
The potential is absolutely there for what you just described, 100%. And I’m saying potential because I don’t want to paint, again, with too broad of a brush. Do I believe that there are already organizations that suffer from what you’ve just described? Yes. I know there are. I know there are. And I’ll piggyback to say, on this tired topic, and I recognize what I’m about to say: the pandemic, and the distancing time that got inserted into life cycles. We were not physically with one another. We had varying degrees of video technology integration into things. As a result of that, I think there are nuanced pieces of human interaction that have either become stymied or underdeveloped because they weren’t practiced. And now that we are post-pandemic and we still have distributed workforces, we have hybrid schedules that, again, minimize the amount of time that people may physically be with one another, there are some of those components that you’re describing that may be dormant, if not undeveloped at all. And if you look at AI as being the trainer for those individuals, okay, I say with a question tone, yeah, can you do an AI coaching model? You sure can. Is Sally Robot going to be as effective as John Baldino? I don’t think so, yet. Maybe one day Sally will be as effective as I am, but I want to be able to sit, because I’m going to read the room a little bit, and I’m going to say to Kyle: buddy, I just want you to know I think you’re doing a tremendous job, and I know that we’re getting together because there have been some issues, but I do want to start by saying I don’t think these issues are insurmountable.
I think there’s an opportunity for you to run with these opportunities toward success, more so than you’ve been encouraged to over the past few months, but it’s coming from a vantage point of success, not of lack. And so, with your permission, I really hope that, you know, this conversation for us is one of trust, one of encouragement, and I’m excited to ask you if you would run with me for the next few months as we work to develop these things deeper. I mean, that sounds a whole lot better than something Sally AI may say to somebody right now. Yeah, yeah. And okay, I think that was off the top of my head, by the way.
Kyle James 28:35
It’s great. It’s great. So, let’s get a little bit more practical or actionable for people. Where do you decide to draw the line between automation and personalization in human interaction? Do you have a balancing act or framework?
John Baldino 28:52
Anything data oriented, I’m okay with. Take data, put it through some AI tools, let it help you organize. From a skill-set standpoint, that, to me, leans into AI helping us learn how to organize better. Here’s a whole bunch of data. If I sat with it myself for two hours, would I have come up with what AI came up with in 20 seconds? Maybe not, yeah. But now that I’m looking at all this, I’m so grateful for this tool being able to put all of that together for me. I think it’s okay to use it for support in messaging as a result of something that you’ve already put in as an input. So, as a result of, say, reviewing various pieces of information: here are the things that stand out to me about Kyle’s performance. I’ve got a general paragraph that I put together about some of these things. I may put that into ChatGPT or OpenAI or whatever it is you’re using, yeah, and say, help me to organize this for a 30-minute conversation, be specific. And so it will take that and say, okay, based upon these main talking points you’ve given me, here’s an outline that might be helpful for you to keep in mind in a 30-minute performance conversation. I’m okay with that, yeah, because it’s taking the data you’ve given it and just organizing it. You’re not asking it to infer anything from the data. That, I think, is always safe to do. I think where we have to be thoughtful about that line is letting it take the place of the leadership messaging, for instance, that you would normally have, like what I just used with you as an example. If I ask my platform to give me the words to now say to Kyle, yeah, I don’t think it’s going to give you as eloquent or compassionate a tone in the messaging as I hope I just gave to you in that example. Yeah.
Kyle James 30:59
I guess what I heard you say is, really, we still need to be driving these things. We still have to be in control. We do not need to put these things in control to make decisions and do whatever they want. Like, you still need that managerial sign-off: okay, this is fine, this is good; no, this needs to be tweaked. And I think it’s probably responsible to say that’s always going to be... I hope that’s always the case.
John Baldino 31:24
I don’t know that I buy that everyone feels that way. I think, and now we’re getting to the good stuff, there are moments of laziness, yes, that as humans we’ll have, and we’ll feel like this thing is doing this stuff for me. Yeah, and it might feel like it’s doing some of it, but it’s not doing all of it. That’s why I lean into the data part. Listen, I don’t want to sound like the glass is half empty with AI. I don’t feel that way. I just think we have to be smart about it. When I think about what excites me, for instance, about AI when we’re talking about this: take that data, let it review systems. If it’s able to flag things like burnout: hey, you know, based upon some trends on sick days, that kind of stuff, you may not notice this, but you’re up an average of this much across the company compared to last year. Or, hey, you may want to take a look at compensation for this department, because there may be some pay inequity going on in terms of the pace of raises from a percentage standpoint, or things like that. That, to me, helps me be more set up for strategy, yeah, because I have real data that I’m working from. But to your point, I still need to get in there and say: as a result of looking at this, what is it telling me? What do I know about the organization that also matters here? What future realities should I be thoughtful of? A great example is the tariffs. If I’m involved in an organization where raw materials or overseas materials matter, then the AI can tell me, you need to give everyone a 6% raise, and I also know the reality is I’m facing a, you know, $6,000-per-cargo-container increase. I’ve got to make both of those work. AI doesn’t...
Kyle James 33:41
It doesn’t know. Welcome to complex systems where it only has half the data. Yeah, that’s right, that’s right. Yeah, no, that makes a lot of sense. What have we not talked about that you want to talk about on the subject? Like, you know, we’ve talked at a high level about some of the security, some of the ethics, some of the concerns, some of the excitement. But what are you seeing out there right now that either has you most excited or most, you know, scared?
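[Editor's note: the burnout-trend flag John describes can be mocked up in a few lines. This is only a toy sketch of the idea; the names, numbers, and the 1.5x threshold are all invented, and a real system would be far more careful about privacy and statistical noise.]

```python
from statistics import mean

# Invented sample: sick days per quarter, last year vs. this year, per person.
SICK_DAYS = {
    "Avery": {"last_year": [1, 0, 1, 1], "this_year": [2, 3, 3, 4]},
    "Blake": {"last_year": [2, 1, 2, 1], "this_year": [1, 2, 1, 2]},
}

def burnout_flags(records, ratio=1.5):
    """Flag anyone whose average sick-day usage rose by more than `ratio`x.

    This is a trend alert for a human to investigate, not a verdict --
    the 'AI surfaces it, a person interprets it' split John argues for.
    """
    flagged = []
    for name, years in records.items():
        prev, curr = mean(years["last_year"]), mean(years["this_year"])
        if prev > 0 and curr / prev > ratio:
            flagged.append(name)
    return flagged
```

As John notes with the tariff example, the flag itself is only half the picture; the human still has to weigh what the organization knows that the data does not.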
John Baldino 34:07
How about if I do a little bit of a giveaway? Maybe that’ll help some folks. Absolutely, let’s go. I’d love to give some thought out there as to what questions to ask if you really want to look at AI opportunities, rather than just saying, yeah, we have AI, we’re hip, right, we’re relevant because we’re using some things. I want to encourage people to really do an honest audit around when and why to adopt AI. Like, what should I think about? I think the first question to ask is: what problem are we trying to solve? Just be very simple about it. As I look at what the problem is, let me define it so I can know if a tool like AI is even an option for answering it. It doesn’t mean, by the way, that AI is the only option for answering it. But is it one of them, and what is the right type of tool that I could use? If I am going to adopt AI as a result of answering that question, how does that model work? What is it based on? What’s the philosophy behind the setup there? And, you know, as I said to you a few minutes ago about security: what data is it using? What data will it need access to? How much of that should be protected or not? Those kinds of initial questions, I think, help you to be honest. Then you can get into questions of, say, post-usage: what risks do we need to be thoughtful of? Who’s actually going to read the results to know if we see success or failure? Where are those definitions coming from? What will we do with the output that we’re given? And are we comfortable having some quality control there to ensure that what gets spit out can be reviewed in case it’s wrong, so we know to go back a few steps and re-input some things with the right parameters? I think those questions might be really helpful for some folks listening who haven’t yet known how to fully embrace the tool of AI, yeah.
Kyle James 36:31
No, that’s great. I think that’s a universal thing, right? Before you jump in, what is it, the old carpenter’s rule: measure twice, cut once. Or my favorite framework: start with the why, the who, and the what, and figure all that out before you just jump into, oh, how can we use AI to do this? Well, maybe AI is not the answer. That’s great advice, and it’s universal to everything everybody’s doing.
John Baldino 37:01
I would like to believe that. But you know, we live in a world of quick response, and that’s part of the problem too. We’re so blessed with immediacy in so many things that if something isn’t quick enough, it doesn’t feel relevant anymore; it must not work well anymore. And so where can we go and find the thing that’s faster? Faster does not mean better all the time. It doesn’t.
Kyle James 37:28
Yeah. And these things have gotten so fast that you have time now. You go, oh gosh, it’s going to take AI three minutes, 30 seconds, to solve your problem? If you have an hour to do it, spend 45 minutes figuring out exactly what you need and why you need to do it first. Yes, yeah, that’s great.
John Baldino 37:50
Like we still have value in this. I know we talked about the human part, but let me just mention one other thing that we haven’t really said much about, which is accountability. Just because AI gives me a lot of data points, I still need to think critically about how I’m going to introduce accountability as a result of those data points. What is the story the data is telling us today, and what’s the story we want it to tell down the road? I have a responsibility in that in-between. To talk about accountability: those who are listening who are old school, like I am, get your RACI model out, right? Who’s responsible, who’s accountable, who’s going to support that, who’s informed? You’ll want those frameworks in place. AI doesn’t replace the frameworks for accomplishing what we need to accomplish. It may be culling data a lot quicker, but we still need that human piece of accountability, and then the explanation and introduction of that accountability. You’re still setting a tone based on the values of the organization. That’s a personal component. An algorithm is not going to replace that, and it should not. Yeah, that’s great.
Kyle James 39:13
So I want to be mindful of your time here, John, but I’ve got to ask: you have been podcasting a lot longer than I have, on your But First Coffee podcast. What other tips or tricks do you have for other podcasters? And then, leading into that, I’d love to hear how people can support and help you.
John Baldino 39:31
Thank you very much for that. And I will start by saying that it’s actually Jackie and I on But First Coffee.
Kyle James 39:38
It’s both of you. There’s a great energy there. I’ve watched a couple of episodes. If you want to get your dance on and start the morning right, join them.
John Baldino 39:46
We bring the energy, that’s for sure. And it’s so obvious that neither of us is AI generated, because AI would be much smarter and more put together than the two of us are. The thing with podcast and webcast type opportunities, in my estimation, Kyle, just like us today, is that there are so many opportunities for us to introduce consideration. That’s a word I’m finding us leaning into a lot more. Not everything is black and white for everybody. Some things are, I know, but not everything, and these nuanced pieces remind us that we’re people. And so in podcasting and webcasting, while the information is there, like what you and I have just talked about today, I hope people still hear it through a lens of humanity. We don’t have it all figured out. Please keep the smile on your face, because you’re going to have to laugh through some of this, because it’s just off. Be critical in your thinking of it as well. In podcasting and webcasting, I’ve learned that people have an appetite for that. They want to hear good things, but they want to hear them in a context that’s inviting, that welcomes them into the conversation, not one that makes them feel like there’s so much more to learn and they won’t open their mouths until they have it all under their belts. You’re going to wait a lifetime. We’re never going to have all the answers. Start to engage now, and be in an environment that encourages that.
Kyle James 41:23
I love that. I’ve kind of always told people my role in this is, I’m the idiot, okay? Asking the question everybody wants to ask but is afraid to. And if I can do that for people, great, because I’m okay doing that. I’m super curious and I want to know. So yeah, we encourage everybody: it’s okay to not know things, and it’s okay to ask questions.
John Baldino 41:44
100%. And if you can set the tone on your podcast and webcast to show that it’s okay to ask curious questions, hopefully that’s an example for those who are listening.
Kyle James 41:58
Awesome. Well, John, thank you so much for joining us. This has been an absolute pleasure. We’ll have to have you back on again sometime. I’m sure we will. And with that, is there anything else you want to leave with people? How to connect and how to find you online, besides checking out your podcast, absolutely. Where are you on LinkedIn? Talk about the company’s website a little bit. How can they find you, and what other services do you offer to companies of all shapes and sizes?
John Baldino 42:23
Thank you, Kyle, I appreciate that. So yes, please search me on LinkedIn. “John Baldino HR” will take you right to me. Believe it or not, there are a lot of John Baldinos, so “John Baldino HR” will get you right to me. Humareso.com is the company site. I think it’s a beautiful site, though there’s my bias. There’s lots of information about the kinds of services we provide on the tech side, on the consultation side, on the HRO side. We’re really supporting organizations, genuinely, of all shapes and sizes, all industries, primarily US based with a little international, but the bulk is US based, across the board geographically in the US. And there’s really very little that we don’t do in the HR space. So if we can be helpful, please reach out.
Kyle James 43:16
Love it. Love it. Let John help you. As you can tell, he’s a pleasure to have a conversation with, so there’s no reason not to. And for everybody out there, if you enjoyed this episode, go out there and leave five-star reviews, maybe even six-star reviews if you’re super happy; we won’t turn those down. To everybody, thank you for tuning in. Always a pleasure to do these for you. And as I always like to tell you, keep growing, everybody.