3Sixty Insights #HRTechChat: Informing Artificial Intelligence

For this episode of the #HRTechChat video podcast, AbilityMap CEO and Co-Founder Mike Erlin and Mike Bollinger, vice president of strategic initiatives at Cornerstone OnDemand, joined me to discuss a crucially important area of focus: ensuring, at this still-early stage of the technology's development, that we inform artificial intelligence with the best human-centric data possible. After all, most of us would like to think that the behavior of AI, as it eventually grows to an exceptionally high level of sophistication and begins to take over higher-level decision-making, will continue to reflect what we hold dear as "humanness."

Another way of putting this is to say human empathy is critical to the development of AI. And, in an irony that was lost on none of us, when Erlin’s wife — unaware we were recording — entered the frame of his video camera several minutes into the episode and stopped to say hello, she inadvertently broke the ice for us. From there, the conversation gathered momentum in the organic way we all hoped it would….

Both Bollinger and Erlin are vendor-side members of our Global Executive Advisory Council and repeat guests on the podcast. The episode you're reading about here has its origins in an unrecorded conversation the three of us had several weeks ago. It all began when Bollinger alerted us to "Bias in AI: Are People the Problem or the Solution?" by John Sumser, principal analyst for HRExaminer. The article acknowledges two camps and their diverging viewpoints on the development of AI. "One group says people are the problem; the other sees them as the solution," according to Sumser, who also writes, "All tools contain embedded biases. Bias can be introduced long before the data is examined and at other parts of the process."

We commenced this episode by agreeing with Sumser. The way forward, in our opinion, is to flood AI with as much human perspective as possible. The alternative, for developers to work overtime attempting to keep AI devoid of human bias, may be the wrong way to go and may well be impossible. This is my own inference from Sumser's article. That approach is counterproductive if we wish to avoid the generally dystopian future AI has the potential to produce should we fail, right now, to shepherd it in a direction that humans would recognize as desirable.

This does not mean a direction that humans necessarily would set on their own, by the way. And, yes, there are implications for the future of work specifically. Erlin made great points here. In the world of work, when we test for cultural fit and soft skills, the best candidate for a role can often be nothing like what we might have predicted. What manager anywhere would guess that a former daycare worker would be the best fit for a role in debt collection, for example? I might be getting the details slightly wrong, but something like this is a finding that modern psychometrics have produced.

Imagine a future of work where AI lacks this perspective, drawing instead solely on conventional decision-making metrics such as credentials and past work experience. That's where we're headed: a future where the AI for talent acquisition, for example, will have been developed with data that deprives it entirely of the ability to unearth delightfully unintended, unexpected relevance. In an additional twist, that is a particularly human outcome that mere humans would never reach on their own.

Erlin expounded further on the idea. Incorporating quantitative evidence of human bias, think inherent human preferences, into the referenceable data sets available to AI generates higher-quality, human-centric choices for humanity, now and in the future, he suggests. I agree. Feeding this type of information to AI is a continual, never-ending process, and the AI should then provide us suggested courses of action. Furthermore, we must think deeply about the questions we ask AI to answer. For example, rather than ask, "How can we reduce crime?" we should consider asking, "How do we create an enriching community?" lest AI return answers that only exacerbate human suffering or frustration.
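
To make the idea concrete, here is a minimal sketch, in Python, of what blending quantitative preference data with conventional hiring signals might look like. Every field name, weight, and number here is an illustrative assumption of mine, not AbilityMap's or any other vendor's actual model.

def score_candidate(candidate: dict, weights: dict) -> float:
    """Blend whichever signals the weights name into one score."""
    return sum(weights[k] * candidate[k] for k in weights)

# All signals pre-normalized to a 0-to-1 scale for the illustration.
candidate = {
    "credentials": 0.4,
    "experience_norm": 0.3,
    "listening_preference": 0.9,
    "empathy_preference": 0.95,
}

conventional_only = {"credentials": 0.5, "experience_norm": 0.5}
blended = {"credentials": 0.25, "experience_norm": 0.25,
           "listening_preference": 0.25, "empathy_preference": 0.25}

print(score_candidate(candidate, conventional_only))  # 0.35: looks weak
print(score_candidate(candidate, blended))            # ~0.64: surfaces the fit

The design point is simply that a candidate who looks weak on credentials alone can surface as a strong fit once preference signals carry weight, which is exactly the daycare-worker-in-debt-collection finding described above.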

In a tangent worth mentioning, Bollinger made a great point with regard to human bias. At the outset of our discussion, I noted that Erlin's time zone was "down under" and asked whether this term was unique to the northern hemisphere. Bollinger chimed in to say this was, in fact, a really good example of how bias doesn't have to be negative or positive; it can be neutral, too. There is nothing positive or negative about seeing Australia as being down under, but seeing it that way is a northern-hemisphere bias nonetheless. Consider also Waze, the GPS-powered driving application for smartphones. There is no option in Waze for the scenic route; the best you can do is opt for toll-free routes. It's an illustrative example of developers' failure to consider or anticipate human preference.

This episode goes deep, and this introduction falls far short of the extent of our conversation. I highly encourage readers to view it; anyone looking for a good primer on AI should check it out. And go here to view my previous conversation with Bollinger, and here and here for my previous conversations with Erlin. In all three of those chats from earlier this year, we tackle similarly heady issues of import to HCM professionals.

Our #HRTechChat Series is also available as a podcast on the following platforms:

See a service missing that you use? Let our team know by emailing research@3SixtyInsights.com.

Transcript

Brent Skinner 00:00
Well, hello, everyone, and welcome to the latest episode of #HRTechChat. I'm very excited and pleased to have with us as guests today Mike Erlin, who is CEO and co-founder of AbilityMap. He's joining us from the land down under, Australia. Do they still call it the land down under? I think so.

Mike Bollinger 00:23
That’s an American bias.

Brent Skinner 00:30
Yeah, let's keep that nice illusion, that nice illusion there, as you'll see in a moment. And we also have Mike Bollinger, who is VP of strategic initiatives at Cornerstone OnDemand, and my former colleague at that company. Welcome to you both.

Mike Bollinger 00:50
Thank you. Thank you, very happy to be here. Excited, actually.

Brent Skinner 00:53
Yeah, yeah, me too. And by the way, folks, as we have two Mikes here, I will be referring to Mike Bollinger simply as Bollinger. And to keep things straight, we'll refer to Mike Erlin by his email prefix, which is Merlin. So we'll refer to him as Merlin.

Mike Bollinger 01:12
Well, there's a little bit of magic in Mr. Merlin, too, just saying.

Brent Skinner 01:18
Good one, good one. So, we want to talk today about artificial intelligence and how it relates to human capital management. We've been talking about it in several episodes of the podcast over the course of the year, and I've written about it a bit on the blog for 3Sixty Insights. And Bollinger, Merlin, and I had a very, very interesting conversation a little while ago, before our Septembers got busy, around bias in artificial intelligence and human inputs, and just various opinions around that. And, Mr. Bollinger, I'd like to yield the floor to you for a moment here. There's a very interesting article by John Sumser that kind of got our juices flowing, if you want to maybe clue folks in on that.

Mike Bollinger 02:14
Sure. Before I do that, one thing: I did make a little bit of a joke about bias and, you know, down under, and so on. But the idea behind that is very, very true. We all have biases; every human has a bias, and in many cases it's a good thing. One of the things that we know is that if you've ever seen a map of the world from the Australian point of view, with Australia on top, it has a very different perspective. We know, because of the explorers and the history of the past three or four centuries, that there's a north bias to our maps. So the idea of down under is a bias all by itself. I've always kind of talked about bias, thought about bias, and we talk about bias a lot in HR. But it was really my friend John Sumser, and I'm sure he's going to be just fine with my referencing this, who wrote an article called "Bias in AI: Are People the Problem or the Solution?" I had a conversation with him about it, and John's been talking about this for a while. What he really talked about in this article, and I encourage you to go look at it on HRExaminer, hrexaminer.com, is that there are two groups of people when it comes to tech and HR. The tech group thinks and believes, and to some degree I agree with that, that assessment tools can help eliminate hiring bias, that they can help redact particular kinds of search information that might trigger bias, or that matching tools can help create diversity, and so on. And the human group, the people who err on the other side, are concerned and believe, potentially, that vendors who create these kinds of assessment and delivery tools have implicit bias built into the tool itself, long before the data ever arrives; in other words, into the models themselves. So what that means is that if machine learning finds patterns in data, that's just fine. But if you have bias in AI, are the patterns it's trying to identify biased in the pure nature of looking for those patterns? It's an ethics question that I think is going to face HR more and more, and John terms it the noisiest of AI ethics questions. But how does it really happen? Is it going to explode further, and what is it that humans are going to do in the pushback, like the old movie Minority Report? So I think today we should talk a little bit about the tools, which are very useful, the potential bias that goes into those tools, and maybe the human's place in that equation.
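
Editor's aside: the redaction Bollinger describes here, blind screening, is easy to picture in code. Below is a minimal Python sketch; the field names are hypothetical, not any vendor's actual schema.

# Fields that commonly trigger bias, removed before screening.
BIAS_TRIGGERING_FIELDS = {"name", "gender", "age", "photo_url", "address"}

def redact(record: dict) -> dict:
    """Return a copy of the record without bias-triggering fields."""
    return {k: v for k, v in record.items() if k not in BIAS_TRIGGERING_FIELDS}

candidate = {
    "name": "Jane Doe",
    "gender": "F",
    "age": 52,
    "skills": ["negotiation", "active listening"],
    "experience_years": 14,
}

print(redact(candidate))
# {'skills': ['negotiation', 'active listening'], 'experience_years': 14}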

Brent Skinner 05:10
Agreed. You know what's interesting to me in that whole description? There's this idea that there are two camps; we have sort of this either/or viewpoint, or attitude, or debate, and I'm wondering whether we need to have it. On the one hand, this needs to be discussed, but I'm just wondering if part of the premise may be just a little bit off-kilter, not quite where we need to be discussing it from, from my viewpoint. You know, you have human input no matter what, right? We have human input in the development of the models. And we have these models that are, at least, purported by their creators to be unbiased, or seemingly endeavoring to eliminate bias, because they're not completely...

Mike Bollinger 06:10
That's a better way to say it, yes. Yeah.

Brent Skinner 06:13
But then we have the other camp that says, and I'm kind of repeating what you're saying, but I just want to put it a little differently, we have this other side that's saying: okay, no, we need to make sure that it's very human, that we have very, very human inputs. This is what I'm getting from this, so that what develops does not result in what we perceive to be, or what feels like, an inhuman outcome.

Mike Erlin 06:42
Well, let me try and add to this. Mike, really good job teeing up John's article, too. But let me introduce another concept, and I'll ask, when we're talking about bias, just out of curiosity: do you guys generally interpret that to be a negative bias or a positive bias? Is bias negative, or is bias positive?

Mike Bollinger 07:09
So that's why I brought up the map example, right? Yes, it's neither positive nor negative; it just is. But what I think we should be considering is that it's not necessarily about eliminating bias, which a lot of the tools really purport to do. Rather, it's about identifying certain patterns and allowing us as humans to make good decisions. That means we need to make judgments, and we don't buy the output of the machines verbatim. And therein lies the slippery slope.

Mike Erlin 07:48
So maybe, for the conversation, we flip it around a little bit and say, instead of tech or human bias, that we recognize there is a risk of tech bias, and there is human preference. And that human preference is something we want to understand and, frankly, embrace, because it brings the human aspect to the technology's application. And in that application, we're trying to understand how to remove technology bias, which has no aspect of...

Mike Bollinger 08:31
I buy that to some degree. I think where we need to be, though, is at the front end; I think that's the point of John's article. We need to be at the front end, so that as we create models, we're mindful of our own bias in those models. You know the old joke about projects, right? Or models? They're all wrong, but some are more useful than others. So that's an important part of this overall approach, don't you think?

Mike Erlin 09:05
Yeah. I mean, well, John talks about, you know, setting the algorithms and how those have their inherent bias. He then talks about the sourcing of data and the cleaning of data afterwards. Hi, Lee! Welcome, you're on Zoom; you're on an HR Tech Chat. This will be going out to thousands of people, so say hi. Mike Bollinger! Oh my gosh, I love the silver-fox look on you. This is great. Very cool. Okay. Brent, this is my beautiful wife, Lee. Hello, HR Tech Chat, this is my wife, Lee. My dog Trooper's right there.

Brent Skinner 09:56
Wonderful, nice to meet you. Have a really good meeting! Sorry I interrupted. No problem. Yes, we will.

Mike Bollinger 10:11
So, if we're talking about the front end of the model, one of the things you need to have, and Mike, you've been through this with the work that you're doing, is an identified set of guidance, if you will, or governance. And not that I cite Google as the absolute bastion of all things beautiful and lovely, but they do have principles that they try to create, and that they articulate, around being beneficial. I'm looking at them now: avoid creating or reinforcing bias, which is an important part of it; be safe; be accountable; incorporate privacy design; and be available with those principles. So if you set some governance at the beginning, around what it is you're attempting to do, then you become a better deliverer, if you will, of a model that can at least identify and articulate the bias going in. As a human using that tool, I then know what I'm probably going to get, and I can account for that. It creates a better outcome, I believe, if you go into it with those principles. Does that make sense?

Mike Erlin 11:23
Well, yeah. And that goes to what we talked about in our first conversation, which is, you know, it's about asking the right questions, right? So before you even apply an algorithm, are you asking the right questions? Maybe I'll try to get a little more concrete here; let's just use hiring as a first talking point. We've learned over time. Originally, when we started, we thought: remove all the subjectivity and bias of hiring managers and of what people define for the job, right? And I think John called it out. One of the tools that we have is the ability to take a group of people who are high performers in the job and find out what makes them tick in common, from a human-capability standpoint. And as we started doing that, really in 2016, we started to observe: wow, it's kind of important how you select who the high performers are, that you embrace people of diverse backgrounds, and that you make sure the metrics defining your high performers are quantitative to the business. Digging into all that, there's quite a bit in there. But even once we got to the high performers, what we started learning more and more is that the biased view of what leaders want in a business is an aspirational view of where the business needs to go. So we quantify that as well, right? And then you take research from somewhere, research from Josh, as an example, or, Bollinger, your stuff, and we look at what the research findings are. What we give our clients the ability to do is look at what they want, look at what they have, and then look at the research, so they can make an informed human decision about what they need. That's one of our best-practice ways right now. Now, once you've established that, call it a success profile, which is what people know it by, then the ability to apply AI, advanced computational algorithms, in a way that applies the minimum amount of bias to the quantitative evaluation of a person, that's critical. Because we then compare them; we do both on the same framework, if you will, which allows us to compare people. But I think, in that article, Bollinger talked about the tech view and the human view. I think there's a blended view, at least right now, that has to incorporate and respect and embrace the human bias in the decisions that humans make, based maybe on actual quantitative evidence of high performers. But recognize: did you do that right? And then look at the research and make a judgment call, because that's why humans are in these positions. We're supposed to make these calls, to determine what you want in the model. But always, when you're evaluating people, and I think that's where you have to embrace their preferences, which is a different story we'll come to, do that extremely well: reliably, consistently, without bias. Sorry, that was really long. Did that make any sense?
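
Editor's aside: for readers who want Erlin's "success profile" approach in concrete terms, here is a minimal sketch under stated assumptions. The capability names, scores, and similarity measure are all illustrative; this is not AbilityMap's actual algorithm.

from statistics import mean

# Capability scores (0 to 1) for validated high performers in the role.
high_performers = [
    {"listening": 0.90, "resilience": 0.70, "empathy": 0.80},
    {"listening": 0.85, "resilience": 0.75, "empathy": 0.90},
    {"listening": 0.80, "resilience": 0.80, "empathy": 0.85},
]

def build_success_profile(people: list) -> dict:
    """Average each capability across the high-performer group."""
    return {k: mean(p[k] for p in people) for k in people[0]}

def fit_score(candidate: dict, profile: dict) -> float:
    """1 minus the mean absolute gap to the profile; 1.0 is a perfect match."""
    return 1 - mean(abs(candidate[k] - profile[k]) for k in profile)

profile = build_success_profile(high_performers)
candidate = {"listening": 0.88, "resilience": 0.60, "empathy": 0.90}
print(round(fit_score(candidate, profile), 3))  # 0.923: candidate vs. profile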

Mike Bollinger 14:52
It does. And you need to recognize, I know Brent's ready to jump in here, um, you need to recognize, yeah, that you can't take what the machine is giving you verbatim; you have to understand it. And so, the notion of predictive kinds of analytics: they don't predict the thing you should do; they predict the potential outcomes of the things that you could do. You still need to account for that. If you take what's coming at you verbatim, well, the machine says that, so that's what I need, that's creating the slippery slope that I don't think any of us as HR professionals want to have.

Mike Erlin 15:35
Yeah, well, on that point, too: one of the other articles tied to John's talks about whether we're looking at correlations, which has risks. That was the article where, you know, you put in historical data, and low income becomes a driver for, sort of, the predicted likelihood of repeat crime incidents. So, is there a correlation, or is there a causal scoring model? And a causal scoring model is a whole different bar. Today, I won't say that our system is a causal scoring model; I can't. What we're showing is a correlation based on these parameters, and I think our parameters are very solid; we've taken a lot of time on that. But when I go in and talk to organizations, and it typically happens in recruitment, they want to push a button and say, okay, so I should hire. And I just go: guys, no. We're bringing quantitative, quite objective evidence, based on all the stuff that we do, as another critical factor alongside work experience, education, performance reviews, referrals, whatever it is. It's giving you insight into areas of human capability that we haven't had. But my goodness, that is not a yes/no decision. And I think a lot of people look for that, and it's dangerous right now. That's...
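
Editor's aside: Erlin's correlation-versus-causation caution is easy to demonstrate. Two signals can correlate strongly because a hidden third factor drives both, which is exactly why a correlation-based score should never be read as a causal yes/no hiring answer. A small synthetic illustration, in Python:

import random

random.seed(0)

# A hidden factor drives both signals; neither causes the other.
confounder = [random.gauss(0, 1) for _ in range(1000)]
signal_a = [c + random.gauss(0, 0.3) for c in confounder]
signal_b = [c + random.gauss(0, 0.3) for c in confounder]

def pearson(x: list, y: list) -> float:
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Roughly 0.9: strongly correlated, yet a causal reading would be wrong.
print(round(pearson(signal_a, signal_b), 2))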

Brent Skinner 17:05
That's huge, actually. Yeah, I think there's actually a real inclination, or one might sort of manifest or surface, where it will be like: okay, let's just look at this psychometric data, we'll call it, instead of all the other conventional, traditional data, the information we've always considered in making a decision on who to hire. Right? I was talking with somebody recently about this. For so long we've looked at, let's call it the traditional, conventional data, the old stuff, and we'd be rid of it if we had this better insight into soft skills, or into how people fit within a culture, right? We look at what's called eligibility data, which is kind of like our credentials: where have we been, have we worked at the right places, do we have enough of a career arc to enter into this new position, if we're looking at the resume? And then maybe performance data: how did they perform? And I think the problem with that is the assumption that past performance and eligibility data are definitely an indicator of future high performance. And they're not, not solely, right? What's really interesting here is that... oh, go ahead, go ahead.

Mike Erlin 18:52
No, no, no, you go. I'm just gonna raise my hand so I don't forget. I just don't know how to raise my hand.

Brent Skinner 18:58
I may have already forgotten what I was about to say. But, um, I want to go back to what Bollinger said around Google's guidelines, I suppose; principles, yes, principles. And that's really interesting. Even that can be a slippery slope, because you have to be careful about preconceived notions, as I'm going to call them, as opposed to biases, just for the purposes of this conversation, right? You would want to make sure that you've aggregated as many potential preconceived notions, or biases, human biases, around every one of those principles as possible: perspectives on those principles. I'm talking about a sort of theoretical AI world: we just throw it all into the hopper and mix it up in the AI. The more data you have from the broader cross-section of humans, right, the more perspectives it will represent going in, and the greater the potential that the AI will be able to sort it out and make sense of it in a way that we haven't been able to through our sort of analog back-and-forth with each other. Even in social media, we're just kind of communicating with each other on a human level.

Mike Bollinger 20:29
So I'm going to use a simple, quick example, and then, Mike, this is going to help you; you're going to pile on from what I'm going to say. It goes to your eligibility question, I guarantee it. I always cite this simple example: I have a niece who is an exceptional author, a well-regarded author who writes all the time, writes for newspapers and so on, and is exceptional at it. She took a gig, a role, as an RPA chatbot voice, and her job was to de-genderize the voice of the chatbot. She knew nothing about technology, but she was exceptional at that part of the process, and she earned a good living in a startup by doing that. On eligibility, we tend to take these paths of: I've got this skill, I've got that skill, I can do these things. When natural language processing was completely outside the technology purview she landed in. So one of the dangers of using AI to be prescriptive and predictive is that you miss those kinds of things. And I'm going to pause, because I know Mr. Erlin is gonna want to talk about that.

Mike Erlin 21:51
Well, I think I'll come to that, because that's identifying her innate, inherent talent for a specific role, absent the fact that she may never have done it before, which is absolutely critical, particularly in Australia, because even though we're on the top of the world, our borders are closed...

Mike Bollinger 22:19
She landed in tech, and had no background in tech.

Mike Erlin 22:21
I mean, that's a beautiful story. So let me come back to that, because, particularly with that segue, I think we need to look at this in terms of setting up the question. This ties to something we talked about a long time back, when we started talking about the future of work; it really started picking up about four years ago. What we said is that as our jobs get taken over by technology, humans are going to have to step up to a higher level of analysis and thinking. Okay, I think at this stage in AI, and again, one of those articles talked about the governance of the questions that you're asking and the tools that you're applying, and whether it'll be legislated, which takes longer, whether it'll be done by communities in a responsible way, or whether it has to be owned by the people who are applying it in their roles, I think we have to understand that we have a responsibility to think about these things. It's not going to be perfect, but we have to do our best, because we're in this period of transition. So higher-level skills around how you apply it, and being open and aware of that as a new component, are critical. Now, the second component, which goes to your niece, is the piece that is missing, I firmly believe: embedding the human preference in the AI. I'll give you a perfect example. Most systems, and Cornerstone, where we all worked at one time, is a wonderful system, are subjective, biased, and observed in the measures they apply across the full talent spectrum. Okay? Performance review: you have self-assessment, you have manager assessment, and you have 180/360 potential.

Mike Bollinger 24:19
I know, the rater effect, right? The performance rater effect. Yeah, yeah.

Mike Erlin 24:26
That's right. Look, I may have taken a job, and I have the skills to do it. But what is everybody screaming about, particularly down in Australia right now? We're all worried about the great resignation boom, right? Well, why would that happen? Because people aren't engaged in their work. They don't align with it. They don't value it. None of that is in our systems right now. The only way you get that is by diving deeply into the inherent preferences of individuals, to understand where we really care, where our passions lie, and thereby where our skills lie at an inherent level. Because we can build skills for anything; we're humans, right? That's what we go to school for. It doesn't mean we like it, right? So if we don't understand that foundation, let alone have it in our human capital management systems, then when we start applying AI and those inherent human preferences, or biases, what I care about, aren't considered, we're going to be developing people in areas where we don't get them into the flow, right? They're not digging what they're doing, they're not passionate about it, and they're going to leave.

Mike Bollinger 25:44
The great resignation is the great search for meaning. Yeah.

Brent Skinner 25:50
You're absolutely right about that. And, Merlin, you talk a lot about making sure people do what they dig, right?

Mike Bollinger 26:01
He's leaving. Oh, he...

Brent Skinner 26:02
...doesn't take this. So, you talk about what people dig; I've heard you use that term many times, and I love it. You said just now that what people inherently are good at is what they dig. And I think you're right, but I would put it a little differently. I would say that if you dig what you do, you may not be the best at it, but that might not be the most important thing. If you really like what you do, then you're going to contribute something. Let's talk about rock and roll for a second, okay? Kurt Cobain, right? The guy was not a really good guitar player, but I'm sure he really dug playing guitar. I don't know that he'd ever have become a shredder, you know what I mean? This is for the Gen Xers in the crowd. But you see what I mean? So I think that can go all the way to engineers. And obviously, you need some of that rifle-shot, pure talent, like the story you were telling, Bollinger, of de-genderizing the voice and her ability to do that. That's a real, true, rifle-shot talent that you need. And I don't want to go off on too much of a tangent, but I think of neurodiversity, right? I think of folks on the autistic spectrum, and they often provide a very valuable, rifle-shot capability to a team. Right? So I think we're talking about those two things.

Mike Erlin 27:53
You're right. But here's what happens; I take your point. When I say people dig it, that's great, but there are different levels of human preference, behavioral preference, that I have, right? So, I don't know, Mike, about your niece, but let's just say listening. We focus on human capabilities; we don't deal with the technical ones, right? But let's say that listening, supporting diversity, and understanding the needs of others were recognized as being absolutely critical when you're having to de-genderize a voice. Let's just say those were identified, right? And let's say that I did an Ability Imprint, which is our tool to evaluate this, and those three came up as my strongest inherent capabilities, preference capabilities, okay? Of everything in there, they were my strongest. But what has to be done then is to compare my level of preference, to create a proficiency level, relative to the population that's being looked at. Because Mike's niece could have come in, and she is absolutely super passionate about all the behaviors that go to how you care for somebody and understand their needs; her listening skills are highly attuned, because she looks at how people react, she asks questions, she plays stuff back to clarify. She's at the top of her game. So even though what I dig may be those same three skills, and they're my tallest, at a level compared to the population that's going for that job, Bollinger's niece is across the chart; she's at the top.
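
Editor's aside: the comparison Erlin is describing, a preference score only becoming a proficiency level once it is ranked against the population actually competing for the role, amounts to a percentile rank. A minimal sketch, with illustrative numbers:

def percentile_rank(score: float, population: list) -> float:
    """Share of the population scoring at or below `score`, as a percentage."""
    return 100 * sum(1 for s in population if s <= score) / len(population)

# "Listening" preference scores across the applicant pool for one role.
pool = [0.55, 0.62, 0.70, 0.74, 0.78, 0.81, 0.85, 0.90, 0.93, 0.97]

print(percentile_rank(0.78, pool))  # 50.0: my strongest skill, mid-pack here
print(percentile_rank(0.97, pool))  # 100.0: the niece, top of the chart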

Mike Bollinger 29:52
And my real point to that story is that if we had used any kind of a classical tool, that synergy would never have happened, because she's still a writer; that was just a gig, right? It was through a connection to a connection to a connection that that synergy occurred. She did that work, and then she went right back to what she does naturally. Unknowingly so. So there's a privacy aspect to this, there's an interest-and-influence aspect, and there's a bias aspect. And, you talked about psychometric data; there's also biometric data that has bias in it as well. So we're starting to see those things come together, to the point where there's genuine concern in industry and in government that is starting to develop a life of its own. You know, we know about the European interventions in terms of privacy and AI, and the federal government in the US just came out and said we need a Bill of Rights for AI, right? And they're taking input on that, and so on. I think what we're seeing, through the advent of advances in technology, is a real recognition, at least on the part of many people, that you can't take it wholeheartedly, and that others need to be thoughtful about the outputs in a way that yields a good outcome, not a predetermined outcome. And that's really where I'm going: how can it be a good outcome rather than a predetermined outcome, back to our recruiting analysis?

Brent Skinner 31:39
Right. And I want to just add here that I'm so glad you brought that up, Bollinger, because I was just about to say: how much of what we're discussing tonight, the depth to which we're thinking about this, how many builders of these AI-based tech tools are putting that sort of thinking into them right now? I'm going to guess maybe not that much. I don't know. I mean, it feels like we're at a point where these kinds of conversations, really the impetus for this HR Tech Chat, are what's needed: we need to start making sure that this breadth and dynamism of discussion feeds the AI.

Mike Bollinger 32:39
Now, I believe very firmly that the creators of the tech, and the tremendous amount of intelligence that goes into it, are very well-intentioned, okay? They have a fervor and a belief that the work they're doing will yield a better outcome. What I'm suggesting is, and potentially you'll see government intervention, the Bill of Rights, all the privacy stuff, and the AI legislation in Europe, which is very interesting legislation, and I encourage the listeners to go look at it, I think what you're going to see is an expectation that you are able to articulate the bias that went into the creating of that tool: what it is and what it is not. If we can do that in a way that is meaningful to the buyer and user of that toolset, we're all ahead of the game, because what those toolsets yield are good outcomes in terms of patterns and so on. We just need to be wary and mindful that we can't take it wholeheartedly. Not everybody is well-intentioned, and we're gonna...

Mike Erlin 33:46
We're gonna see, with all this Facebook Papers stuff, you know, algorithms, how they're used. I mean, you have to think that legislation's coming down, and it's going to be like it always comes down: it's going to be a blunt hammer that breaks a lot of stuff until it gets refined. And you know, the three of us talked about this; I think that's the reason we brought this up. And it's great, we've got Sumser doing articles, people are talking about it, but our community, the HR community, we have a responsibility to take a leadership role in this, soon. Because it'll be applied, in many cases with no ill intention, but if we don't step back, if we don't think about these things, the beast will get loose. And if you look at it, one of the reasons I think we're seeing more about this is just the time since all this stuff started going down. I mean, you remember Cambridge Analytica, when we were back at Cornerstone and I was pitching that, you know, at the Cornerstone events or whatever; that's when we started seeing it. And I think what's happened is that more companies, like AbilityMap and many others, are starting to get datasets to work with; you know, we're on our fourth round of reviews and revisions of our algorithms. And when you start getting data you can work with, you can start asking the questions differently. You can start identifying things that you're doing well, or haven't done yet, as well as things that you need to change. And we're just in that process. And there's a personal responsibility that I think everyone in this industry should hold, not...

Mike Bollinger 35:36
Not just governance. That, I agree with.

Brent Skinner 35:39
Yeah. The alternative, you know, is not very desirable. I think back to the article, the one you shared with us, Bollinger, around, I think it was either Amazon or UPS drivers. And the AI was, there was some...

Mike Bollinger 36:02
...sort of dinging going on. They were getting dinged because they were making too many turns, when there was a safety aspect to it. So it literally was their bonus: the route that they were taking, which they were taking for good reason, was actually impacting their bonus, because the machine thought the route should be different.

Brent Skinner 36:21
That's exactly right. And that is an example of HCM, or the HCM apparatus, trusting the AI maybe a little too much.

Mike Bollinger 36:30
With the best of intentions; it was about safety.

Brent Skinner 36:33
Exactly, exactly.

Mike Erlin 36:35
Here's one. A friend of mine is an incredibly seasoned, contemporary executive leader, a CEO and EVP at big companies in Australia. I rate her super, super highly, okay? She left a job whereby she was CEO of a very large consumer organization; let's just say their customers were consumers. She then moved to a very large financial institution, where she ran a division. And she had an opportunity after that to go be CEO of a really cool lifestyle company. I'm trying to keep it a little vague, so people don't know the case. She was, in my opinion, an incredible candidate for this company, for where it's at and its global expansion needs, and they would have been fortunate to have her in any capacity, let alone as CEO. She knew one of the board members or something; that's how she found it. She went through the interview process, and she didn't get it. I had lunch with her, and I said, what happened? She goes, I've got to tell you, Mike: I found out that basically they had run an algorithm, a technology, over all my LinkedIn and all my Facebook posts, and I got flagged for using harsh language and being unprofessional. I said, really? Why do you think that happened? And she said, well, in both of the last two roles that I had, I was active on social media so that I could hear my customers at all socio-demographic levels across society. And so I would often like, or comment on, things that people were saying, things that would never be said the same way in the types of environments I sit in. But these were the customers, and they were saying things about their experiences, or their views on what they need, and they might have used rough language or spelled things incorrectly. So when the technology went out and looked at all my responses, it showed that I engaged with people with alternative views and, maybe, backgrounds that scored poorly. But they were our customers. And she goes, that's why I got knocked out, and they didn't go any further to look at it. I just found that to be an incredibly fascinating example, one more akin to the roles that we're in. But what a shame. Not only did the organization, and all of its customers, lose one of the best people that could have led that company, but so much weight was put on something they didn't understand. Frankly, the fact that she was doing this shows how incredibly connected that leader is to her market. And...

Mike Bollinger 39:46
The real key takeaway from that story is that they used it as a single-factor decision. What you should do instead is create a tapestry, by which you use that as one input for a judgment call. When we cede the high ground to one or two single-factor kinds of decisions, that becomes problematic. But social voice is a big part of this. I'll give you one quick example. I know of a company who monitored the social voice of a subset of individuals, mostly LinkedIn posts, Facebook posts, and Twitter posts, and they were looking for salespeople; they were hunting for salespeople. So what they did was monitor and look for salespeople when another software company was at Club, and they knew they were at Club, so they were looking for tweets, LinkedIn posts, and Facebook posts from Hawaii during a certain timeframe. They identified those people, they recruited those people, and it worked. So that's an example of using it in a positive way, for the outcome of the company and the individuals. The point is, you don't cede the high ground to a single- or dual-factor decision; it has to be a wider tapestry.
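
Editor's aside: Bollinger's "tapestry" point reduces to a simple design choice: never let one signal decide alone; weigh it among many. A minimal sketch, with hypothetical factors and weights (the social-media flag echoes the executive story Erlin just told):

def decision_score(signals: dict, weights: dict) -> float:
    """Weighted blend of many signals; no single factor decides alone."""
    return sum(weights[k] * signals[k] for k in weights)

executive = {
    "track_record": 0.95,
    "domain_fit": 0.90,
    "customer_connection": 0.95,
    "social_language_flag": 0.20,  # the one signal that knocked her out
}

single_factor = {"social_language_flag": 1.0}
tapestry = {"track_record": 0.35, "domain_fit": 0.25,
            "customer_connection": 0.25, "social_language_flag": 0.15}

print(decision_score(executive, single_factor))       # 0.2: rejected outright
print(round(decision_score(executive, tapestry), 3))  # 0.825: strong candidate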

Mike Erlin 41:04
We go back to: what's the question that you're asking? And do you understand what's going to come up, and the bias that it has, so that you can make an informed, higher-level, future-of-work decision?

Mike Bollinger 41:20
Correct.

Brent Skinner 41:21
Absolutely. We don’t know enough. I mean, we don’t know enough.

Mike Bollinger 41:27
Your great minds working on this. There’s some fascinating stuff. Brent, when I should do is, I think there’s a really good AI curriculum from Stanford, I think it is, and it has all these little things to read underneath it. Maybe we can put this into the follow up here. It’s public domain into us, I understand. Because I think there are ramifications globally. But it might be for those that are people that are interested, there’s a ton of little snippets in there. And it’s what they’re trying to teach the future students of AI, they’re trying to frame for them, or context before they even got multiple doors.

Brent Skinner 42:10
I’d love that, please send that to me, because I definitely want to include that in the introduction to this. Just that that example that you shared Merlin around. And then the one you shared volunteer that just shows you know that there’s so much we don’t know around what matters and what doesn’t, you know, and it I mean, the nuances are just they seem to be endless, you know, in the one example that, that Merlin shared, that was a really bad idea to wait that so highly in terms of crawling their social media activity, whereas in the example, you shared Bolger was the exact right thing to do. So it’s, it’s very, you know, but you’re right, you know, it needs to be a constellation of factors considered and, and then, and, you know, as humans, we kind of absorb it all in and then we kind of divine, sort of a decision, right, we arrive at a decision, you know, and it’s,

Mike Bollinger 43:09
it’s a nice example, even in Mike’s example, that’s a good idea to go out and look at that and use that, right. That’s not a bad idea. If anyone’s interested in that, by the way, you should go figure out how Disney goes and recruits but that’s another story. But it’s a good idea. It was just used badly. Yeah,

Mike Erlin 43:30
the results for us badly the results actually, the results actually illustrated this into bit one of this individual’s many strengths.

Brent Skinner 43:38
Yeah, exactly. Yeah. Well, you know, I'm looking at the time, and we've been talking for a while here. This has been super interesting. Any final thoughts? Or another dozen...

Mike Bollinger 43:50
rabbit holes we can dive down?

Mike Erlin 43:55
Brent, summarize the key takeaways, mate. What are the key takeaways we got from this?

Brent Skinner 44:01
Well, I think one key takeaway is that we need guiding principles as we go about this. We need as much data, and this is my pet theory, as much data as possible, from as many human perspectives as possible, to make this as palatable to humans as possible. We need to take into account human preference. We need to understand that biases can be positive or negative, or even neutral. And we need to make sure that we're not trusting the AI too much, and that we're also looking to inform AI so that it gives us guidance as opposed to, you know, black-or-white answers, with the understanding that we'll be continually teaching the AI itself. This is a virtuous circle; it will continue and continue. There's no endgame where we'll be able to stop doing this because the AI is perfect, because it never will be. Never will...

Mike Erlin 45:10
...be. I think what I'm taking away is, you know, this reinforcement: understand the questions you're asking, and interrogate both them and the resulting data from multiple dimensions, so that you understand it. Because I think we're abdicating, and sorry, this is a generalization, obviously, but there's a risk of abdicating decision authority that we really can't afford right now.

Mike Bollinger 45:42
Yeah. And to pile on that last point one more time: it's tempting, as humans, to do so. We have to fight that temptation.

Brent Skinner 45:54
Yeah, yeah. We want to defer to someone else, and that someone else is becoming the AI, as opposed to a human being. And that's, that's...

Mike Bollinger 46:04
Waze. It's a new story. So...

Mike Erlin 46:07
Yeah, we say Waze gets us there fastest, and we start the conga line through the neighborhoods.

Mike Bollinger 46:18
More than once.

Brent Skinner 46:22
Well, thank you, gentlemen. This has been fantastic. Thanks so much. Thank you.
