3Sixty Insights #HRTechChat with retrain.ai and Seyfarth Shaw LLP

Isabelle Bichler is co-founder and chief operating officer of retrain.ai. An employment litigation attorney, Robert T. Szyba is a partner at Seyfarth Shaw LLP. Both are well-qualified to discuss the at once inescapable and intriguing trends at the intersection of AI and human capital management, and they joined us as my guests for this episode of the #HRTechChat video podcast.

retrain.ai is the creator of AI-based self-evolving ontologies that unearth the relationships at the intersection of an organization’s existing and future roles, its people, and their hard, soft and transferable skills. During the chat, Bichler provided an impassioned, detailed explanation of why this is so important, and why the development of responsible AI in this area is essential to helping leaders act equitably as they plan more efficient, more targeted external and internal hiring, with implications as well for learning and performance management.

That we’re even having this conversation is evidence that we are finally here: AI has evolved to the point that it is now a bona fide benefit to HCM. And, right on cue, AI for the workforce has become the focus of an inchoate but quickly gathering regulatory framework.

That the latter has promptly followed the former is unsurprising. Fraught with the potential for misuse both intentional and unintentional, AI is an emerging technology that also holds much promise for the world of work. Regulators are still wrestling with how to approach AI effectively. There is always the chance, Szyba cautioned during the podcast, that an early, reflexive, inaccurately or only partially informed flurry of laws governing its use in the workplace could stifle innovation in the field and have the opposite of the intended effect on AI’s impact on people.

For example, take the new AI Audit Law that will affect employers in New York City starting in January 2023, regulating their use of AI in screening job candidates or employees up for promotion. Reading it, those needing to comply might find themselves legitimately unclear on just how to do so. Bichler, Szyba and I will be co-presenting a webinar exploring this law on June 8 at 10 a.m. ET. You can register here.

You could say AI and the future of work are inextricable. There’s no stopping where we’re going with AI in HCM, and we humans must, therefore, embrace and learn as much about AI as we humanly can. With this episode, we do our best, the three of us, to help us all scale the learning curve just a little bit more, and I highly recommend that readers listen in….

Our #HRTechChat Series is also available as a podcast on the following platforms:

See a service missing that you use? Let our team know by emailing research@3SixtyInsights.com.

Transcript:

Brent Skinner 00:00
Well, hello, everybody, and welcome to the latest episode of the #HRTechChat video podcast. I’m very excited today to be discussing artificial intelligence, the ethics behind it, and also some of the regulatory framework that’s developing around it. It’s kind of a new frontier, and it’s something that employers really need to be cognizant of. My two guests today are Rob Szyba, who is a partner at Seyfarth Shaw, and Isabelle Bichler, who is co-founder and COO at retrain.ai, a vendor of artificial intelligence for developing self-evolving ontologies of your workforce’s skills and the roles you have at your organization. So welcome to you both.

Isabelle Bichler 01:01
Thank you. Happy to be here. Thanks for having us.

Brent Skinner 01:04
Yeah, absolutely. Why don’t I give you both an opportunity here to introduce yourselves and give us a little bit of background on why you’re so interested in this topic and what qualifies you to discuss it. Isabelle, how about you go first?

Isabelle Bichler 01:22
Sure, sure. So I’m Isabelle. As you said, I’m the COO and co-founder of retrain.ai. Just to recap: we’re an AI-powered platform for managing talent into the future of work, with an end-to-end application for talent management, talent acquisition, learning and development, and talent insights. We’re built on intelligence comprised of billions of data points from labor market data, with global compliance and bias prevention built in. And when I mention bias prevention, that’s what I want to talk about today: what are the biases that are created, what is the AI, and how is responsible AI here to help with all that. A quick bit of background about me: I’m an attorney, and I’m also graduating now from the Stern School of Business at NYU, from the graduate program in risk management. My research is about the risks around diversity and inclusion, and specifically about responsible AI tools to prevent biases. So that’s about me. Thank you.

Brent Skinner 02:39
Yeah. Very interesting. Rob.

Robert Szyba 02:43
Absolutely, and again, thanks for having me. My name is Rob Szyba. I’m a partner at Seyfarth, and I focus on advising employers on various federal, state and local requirements in hiring, employee relations, and terminations of employment. I also defend employers in lawsuits that challenge various employment practices. In my practice, my focus has been on various concepts relating to discrimination and bias, disparate treatment and disparate impact on employees, both under federal regulatory and statutory frameworks and at the state and municipal level. So I’ve had quite a few different angles on the types of biases that we’re talking about, including this new frontier of AI and how it falls into these frameworks and into a lot of these concepts. So, happy to discuss.

Brent Skinner 03:37
It’s a very fast-developing area, and I’d say we’re probably still in the early stages, or maybe we’re not anymore. The regulatory framework always takes a little while to catch up. I’m really looking forward to hearing your insights here. Why don’t we start with a broad question around AI: why is it being adopted so much, specifically in human capital management, as an HR technology? What’s driving this, Isabelle?

Isabelle Bichler 04:19
Sure. So yes, we definitely see an increase of more than 55% from last year in the adoption of AI and automation. It’s been around; it’s not new. But of course COVID-19 accelerated all these adoption processes tremendously. And mainly, the reason is to optimize and reduce cost, which is what AI and automation are doing. Specifically in human resources, you see it used for optimization of talent. And this is really what we at retrain.ai are doing: we’re optimizing different categories such as talent hiring, talent acquisition, talent management, and learning and development, and informing HR teams with a lot of data points from labor markets so they understand what’s going on in their domain and base their decisions upon data. So that’s the main driver. Another driver is the increasing diversity, equity and inclusion efforts. We see these across the board, across industries; specifically, you see a very significant rise in finance. Companies are putting more effort into these AI initiatives: we see a 58% increase in spending on these efforts. So that’s the second driver. Another thing, also correlated with what’s going on with COVID, is the challenges that HR teams have right now in terms of the labor market. There’s a tight labor market, in terms of a shortage of talent. And you see it everywhere; you see specific industries that are more under pressure, such as healthcare, for example, where 580,000 nurses are in demand, and also software developers and data scientists (talking about AI), and also truck drivers. So that’s one thing; it’s called the war for talent. The second thing, and I’m sure you’re very well familiar with it, Brent, is the great resignation. There’s a quit rate of 4% per month, and it’s just growing. People are leaving in droves. So these are the challenges that HR teams and companies have.

Taking all this into consideration, you see the rise of adoption of AI to help HR and also promote DE&I efforts. And now the question arises: is this AI safe? Is it trustworthy?

Brent Skinner 06:55
Yeah, that’s a really good question, and I want to dive into that, because there’s a lot of interesting stuff I’ve been hearing, a lot of it from you folks. But first I just want to hit on that great resignation piece of the puzzle for a moment, and also how it fits in with the idea of a self-evolving ontology. Maybe you could delve into that specifically a little bit, so that we understand exactly what it means when we’re applying AI in this context.

Isabelle Bichler 07:36
So for us, we look at everything by skills. We’re looking at a person as an aggregation of skills, and an occupation or role as an aggregation of skills. Working at this atomic level gives us a better understanding of, first of all, what’s going on in the market: What are the occupations on the rise? What skills are declining and emerging? How is automation actually affecting that? There are a lot of skills that are now obsolete, that you’re not using anymore, and on the other hand there are a lot of new skills coming in. So first, we want to understand what’s going on in the market. That’s what our technology does: it ingests billions of data points from different sources, such as job boards to understand demand, online profiles such as LinkedIn, for example, and courses; educational content also gives you an understanding of the trend. So we can predict which skills will be in demand, which are the skills of tomorrow, and help organizations prepare: understand first what inventory of skills they currently have, what they should have and enhance, and in which specific areas of their organization they need to actually add skills and be prepared for the future of work. So that’s the first thing. And then we provide the use-case applications for the day-to-day work that HR does: for hiring, for training, for managing their talent. Basically, what we see is that the skills gap, as we call it, is widening due to automation and AI. So if all of us want to be relevant in the next five years, we need to significantly upskill and reskill ourselves. That’s where we come in and help organizations understand in which areas, whether it’s healthcare, finance, manufacturing or retail, they need to upskill and reskill their talent.
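Isabelle’s “aggregation of skills” framing can be sketched in a few lines. This is a hypothetical illustration of the idea, not retrain.ai’s actual implementation; the roles, skills, and function names are invented for the example.

```python
# Sketch of the "everything is a set of skills" idea: both people and
# roles are represented as skill sets, so a skill gap is a set
# difference and a match score is the overlap with the role's needs.

def skill_gap(person_skills: set, role_skills: set) -> set:
    """Skills the role requires that the person does not yet have."""
    return role_skills - person_skills

def match_score(person_skills: set, role_skills: set) -> float:
    """Fraction of the role's required skills the person already has."""
    if not role_skills:
        return 1.0
    return len(person_skills & role_skills) / len(role_skills)

data_analyst = {"sql", "statistics", "dashboards"}
data_scientist_role = {"sql", "statistics", "python", "machine learning"}

print(match_score(data_analyst, data_scientist_role))  # 0.5
print(skill_gap(data_analyst, data_scientist_role))    # the reskilling target
```

In a real ontology the skill sets would be inferred and weighted from labor-market data rather than hand-written, but the gap-and-match arithmetic stays the same shape.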

Brent Skinner 09:29
Yeah, yeah. And for the employer, the benefit is probably more than twofold, but what I can think of right off the top of my head is, for one, because of the tight talent market right now, it’s good to know what you actually have inside the organization already that could be developed through career pathing and training and that sort of thing. Right? And at the same time, you’re combating that great resignation to a degree, right? Because you’re giving your existing internal talent a rationale to stay: okay, now my employer understands me better. And what I find to be so fascinating about this is that the application of AI actually makes the workplace feel more human, because that ultimately is a human feeling, or a human desire: belonging, feeling like my employer cares about me, or at least understands me. So it’s a human result.

Isabelle Bichler 10:41
So we use AI basically not to displace people but to enhance them. That’s the power we give to HR: now we understand your skill set, and we can actually offer you many more opportunities that we have but didn’t know you could be relevant for. And you’ll see that one of the top reasons people leave companies (talking about the great resignation) is actually not being able to see how they can develop and progress: what career paths are waiting for them within the company. So now, when we understand the probability of a person progressing along a specific trajectory in their career, we can see how it aligns with the company’s strategy, actually connect the dots, and offer the person the different opportunities within the company that fit: training opportunities, positions, mentors to help them upskill and reskill, and also full-time positions. And so we decrease this quit rate, and of course we bring a lot of value to these employees, who become more engaged and more productive.

Brent Skinner 11:44
Yeah, yeah, super interesting. So I have two questions for you, Rob. One is sort of a broad question. I’m just curious around the diversity, equity and inclusion piece, because this has occurred to me, and I’ve kind of asked around, and I’m not sure what the answer is yet. Is there a kind of regulatory or compliance facet to DE&I that’s developing vis-à-vis AI, or just employment law in general?

Robert Szyba 12:20
Well, that’s a great question. The DE&I concept has evolved and stands on the shoulders of a number of other initiatives and developments over the years. Certainly going back to the civil rights movement, 50, 60, 70 years ago, there have been efforts to reduce disparate impacts on and treatment of employees. The DE&I concept, like I said, has been developing over a number of years, and a lot of the regulatory framework that exists is meant to support it; the current efforts are sort of evolutions of previous efforts, previous initiatives, previous solutions that were found. So in terms of a focus on this, I’ll give you a great example: the US Equal Employment Opportunity Commission, the EEOC, certainly looks holistically at organizations and employers at all levels, and looks for disparate impacts. So, for example, policies or processes that are used, different business practices, that might have a disparate impact on a certain group more so than others. Now, the policy may be neutral on its face. It may be a very benign policy, but in practice, when it’s implemented, there’s a certain disadvantage to a certain group. So that’s one quick example of an agency at the federal level that looks at these issues from a much broader perspective, and certainly looks to address them through the already existing framework. I mean, the laws that exist in most jurisdictions are both broad enough and flexible enough that they can be utilized conceptually for new technologies as they are implemented. So, just to bring it back to the AI front: the same thing I said about a policy or practice that appears benign but has a disparate impact on a certain group applies here.

If, hypothetically, there was an AI solution that was functioning in a way that excluded members of a certain race or a certain demographic, based on a legally protected category, the framework exists to be able to address that. You don’t really need to create new laws to fully regulate this, because those laws are flexible enough to apply in this context.

Brent Skinner 14:56
Interesting. And I guess what that prompts in my mind is the flip side: kind of parlaying with what Isabelle was saying previously, AI can actually help an organization ensure that it does not run afoul of this type of framework, right? If it’s implemented in a way that ensures DE&I, so that there’s comparable representation of various groups within the organization. That’s what’s super interesting to me. We’re not going to get to it quite yet, but I do want to get to this idea of responsible AI and all that, though we’re kind of getting right into it now. AI can be a double-edged sword, and let’s make sure we use the right edge of the sword, right? Now, Rob, I know there was some recent legislation, a law that’s going to go into effect, I think, in early 2003. It’s gonna be 2023.

Robert Szyba 16:12
If you’re taking us back 20 years.

Brent Skinner 16:15
Right, yeah. AI was so big at the time. But anyway, there are some laws coming into effect in New York around hiring practices with AI. Can you delve into that a little bit?

Robert Szyba 16:32
Yeah, absolutely. And if it’s okay, I’d like to take a step back real quick to something you said about responsible AI as a solution for some of the disparate impacts I’ve been talking about. What’s interesting, from my vantage point, is that in the grand scheme of things, AI is relatively early in the way it’s implemented in the workplace, and the possibilities are vast; there’s a lot of opportunity. But the problem is, the first solution we see is probably not going to be the perfect or most optimized solution that’s ever going to exist. It’s a work in progress, and from my vantage point we’re still at a relatively early stage in development. So I can see, theoretically, something being developed in the near future that is a great solution and is well intentioned, but operates in a way that maybe doesn’t address all the DE&I issues as adequately as we would like. And that’s okay. That’s going to be an opportunity to learn and improve things. And, kind of bringing it back to your specific question: New York City did pass a law this year that requires an audit of whatever AI tool is being used in the HR context. It also penalizes employers that don’t perform the audit, and it gives employees the option to ask the employer to use a non-AI process for their candidacy for employment. So the simple version of that law is basically that it’s an audit requirement, plus giving employees the opportunity to opt out of AI. Now, what’s interesting to me is that the law was passed a couple of months ago and goes into effect in January 2023.

But at the moment, in order to perform the types of audits we’re talking about, you need use of the tool and enough data to be able to meaningfully analyze and assess how these AI functions are performing. They may be performing optimally; they may have room for improvement. And that’s okay. The issue I’m seeing is that the law was passed at a relatively early point in the process of regulating the use of AI. My concern is that it’s going to stifle development, because we don’t really have a set standard for what is an acceptable benchmark, or an error rate, or what data set we’re supposed to be analyzing. And candidly, we don’t really have a lot of insight from what the law says to be able to figure those things out. My worry is that over the course of the next six, eight, however many months it takes to get this law implemented, employers are going to be put in a crunch: you have to comply with this law, but we haven’t really figured out what the standards are. What are you supposed to do?

Isabelle Bichler 19:53
It’s silent about it. It doesn’t say anything. There are no criteria.

Brent Skinner 19:58
How do I know if I’ve complied? That’s a good question. And I have a very granular question about this law: is the requirement simply to conduct the audit and say, okay, here’s the audit? Or are there consequences if the audit shows certain things? Or do certain findings from the audit open up the employer to other regulations that may apply? What is the requirement? Is it just the audit, or...

Robert Szyba 20:32
All great questions. The law is silent on just about all of them.

Isabelle Bichler 20:36
Wow. It just says that you have to have an independent audit, whatever that means. It’s not really defined.

Robert Szyba 20:45
And that’s a little bit of the crunch I was talking about putting employers in. You’re supposed to do an audit, but we don’t know who’s supposed to do it, what criteria they’re supposed to use, what data they’re supposed to analyze, or what results they’re trying to achieve in order to receive a passing score. So where does that leave you?
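For context on the benchmark question Rob raises: one long-standing yardstick in US employment selection, which predates this law and is not mandated by it, is the EEOC’s “four-fifths rule,” under which a group’s selection rate below 80% of the highest group’s rate is treated as evidence of adverse impact. A minimal sketch of that check, with invented numbers:

```python
# Sketch of a four-fifths-rule adverse-impact check, one of the
# benchmarks an AI-tool audit might plausibly borrow from.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the EEOC four-fifths rule of thumb)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical screening results from an AI tool:
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_check(outcomes))
# group_b's rate (0.30) is 60% of group_a's (0.50), so it is flagged
```

Whether this particular rule, some error-rate bound, or something else entirely will satisfy the New York City law is exactly the open question discussed above.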

Brent Skinner 21:03
Yeah.

Isabelle Bichler 21:04
So I think, actually, it’s true: what the regulator has done here is advance too fast, without really thinking it through. So we need to put content into that. And as a tech company that is really advanced here, we can help; we’re at the forefront of responsible AI, and we’re actually on the board of the innovation program at the World Economic Forum to create a certification for responsible AI, because, as Rob just said, there is no benchmark. You can see that the EU, the European Commission, has actually already started that work, and they’re very advanced about it in Europe; there’s a proposed law, put forward in April 2021. So it’s there, and there’s a standard. There are also other laws now: in Illinois, about video interviews, and in Colorado, and now there’s a bill in California. You see a rise in these kinds of regulatory measures. But again, I think there are some benchmarks, some measures to take. And I understand your concern, Rob, but I think the problems HR teams are faced with are so big that they’re definitely willing to try to use AI. By the way, this law is only talking about AI used for promotions and hiring; retention, training, development, and management of employees are a different thing. And I agree that there is a concern, but we can fill it in with a lot of different criteria that are already in practice, the best practices. Still, there’s definitely a need for more defined criteria for audits.

Brent Skinner 23:03
If I may, looking at this at first blush, it seems to me that the greatest risk to the organization is in the aspect of the law that gives employees the opportunity to opt out of the AI. Right? And I can understand that at first pass: AI sounds kind of scary until you look into it more, and then, depending on your perspective, it looks scarier or maybe less scary, with responsible AI, for instance, being the less scary. But it’s interesting to me, because to me this is almost a self-defeating regulation. If you look at a solution such as a self-evolving ontology informed by responsible AI, you’re only ensuring that your employees are going to be happier. And so it’s almost a sad thing to see that a lot of people who may not understand how it works would decide, right at the outset, to opt out. By the same token, it seems to me that this is an opportunity for vendors who have responsible AI, AI that isn’t going to exclude people, that’s actually going to promote inclusivity and greater harmony and greater positive employee sentiment. It’s an opportunity for them to say, hey, look, use us, as the regulatory framework catches up here, understandably so. You know, I think it’s tough to talk about AI without mentioning Elon Musk at some point, so I will make this obligatory Elon Musk mention, but just very briefly: I remember viewing a YouTube video of an interview with him from a few years back where he said we have to get out ahead of AI; we need to understand it and put some sort of regulatory framework around it.

And maybe that was an impetus for industry to start thinking about what we need to do to be ethical with our AI. So maybe it’s kind of a circular thing, where they feed off each other. Those are some of my thoughts.

Robert Szyba 25:40
Yeah, I can kind of chime in there. There is a tremendous amount of thought, research and study that’s been conducted in the AI space; these concepts are not necessarily novel. And there is a lot of effort being made to optimize AI for a lot of different uses, not just the employment context. But as it relates here, there’s a tremendous amount of opportunity for development and for really utilizing AI in a very positive and meaningful way. Going back to the concern I shared earlier about regulation: I’m not opposed to regulation, necessarily, and perhaps there is some framework that we need to work within. From my vantage point, however, I think there’s a little bit of a disconnect, and a lot of opportunity. I think everybody would benefit from a little more collaboration between some of the folks who are regulating and some of the folks who are really deep in this space. Because I think everybody agrees, or maybe I’m being a little optimistic about this, but I think everybody would agree with and get behind the concept of responsible AI. Nobody wants to sit there and develop AI that’s harmful to humans, to the population, and to workers. I think everybody’s well intentioned. The issue is that, in order to meaningfully regulate and at the same time allow for positive development, there has to be a very thoughtful and deliberate process toward that goal. My worry about some of the regulations that pop up is that we sort of smack down development before it really has a chance to grow into its natural state. So from my perspective, I would love to see a little bit more collaboration and idea sharing, in whatever form it takes, but with a little more structure.

A structure that’s designed to promote the development of responsible AI, not just stop it before something bad happens.

Brent Skinner 27:53
It sounds like, and I would agree with you, you’re describing sort of a protocol, like a framework: a guided open-forum framework for opening the lines of communication between various stakeholders, whether creators of AI or employment groups and these sorts of things, to help shape those regulations and move them in the right direction.

Robert Szyba 28:27
Yeah, generally I would say I can’t think of any instance where open communication has been a detriment to the development of any area. So I think the more we can work together, and the more we can understand each other’s perspectives, chances are the better the end result we’ll have. Efforts like what Isabelle is doing, and what retrain.ai is doing, I think are extremely important to move that forward. Whether it’s in the AI space or other areas of the law that I’ve seen, the more folks are able to come together, share ideas, develop, and listen to each other’s concerns, the better the result you normally get. The concern I have is that the more isolated we are, and the less we do that, the more we’re likely to run into friction between different stakeholders.

Brent Skinner 29:19
Yeah, totally agree. Isabelle, maybe this is a good point for two things that I think fit right into this, in terms of people understanding AI a little bit better. If we can move the needle at all with this podcast, that’d be wonderful, wouldn’t it? What’s the difference, specifically, between a black-box solution and a white-box solution? If you could explain that, and also the one other thing I think fits in: how can an AI be responsible AI? What does it mean for an AI to be responsible?

Isabelle Bichler 29:55
So let me maybe explain a little bit about responsible AI. And just to chime in on what Rob said, I do agree that we need collaboration. It’s not just us and the regulators; there are, for example, tech companies and the ones at the forefront of these technologies, and also academia and all the NGOs that are trying to tackle this problem. There are many. And yes, we are partnering with different stakeholders to really understand what it means and how we solve this huge problem. Because AI is everywhere. It really is felt in every aspect of our lives. It’s not just about HR, right? It’s about lending, and underwriting when you’re given a premium for your insurance. Whatever you do in your daily life involves AI now, and it’s going to continue to rise. So how do we really make sure this AI is used well and responsibly? So, responsible AI: most of the criteria, the benchmarks you can see, talk about five dimensions for evaluation. The first one is the explainability of the AI: you understand what input is going into the model and what the output is. Why did you use these specific variables? And how did you get this output? That is the first pillar, we could say. You want to ensure that everybody understands: the vendor (sometimes we see vendors that really have problems explaining the recommendations their system is providing), and of course the customers of this technology and the users. So they all understand, and it’s explainable. People also call it transparent; the two terms are used kind of interchangeably. The second thing is really data quality and compliance with privacy. Of course, this touches a lot of different points about privacy and security: complying with them, and using the data with the right consent, in compliance with privacy rights and human rights.
Next is robustness of the data. It depends on the sample data you’re using, both for your training data sets and for the data sets you’re really applying the model to for your customer. The larger the sample size of the data you’re using, the fewer mistakes and the more accurate the results; that’s the robustness of the data set, so it provides accurate and granular results. Accountability is something that is now being discussed, meaning that organizations and vendors operating AI systems are accountable for their proper functioning. They’re running audits themselves, and that’s what the law is telling you: you have to take responsibility for it, and you need to make sure it is audited. There are many different methodologies to audit AI, but I’m just laying out the different pillars. The last one is really the fairness of the algorithms, the models you’re using; that’s the way to mitigate these biases. Neural networks and deep learning, for example, are technologies that use billions of data points in their inputs, and that’s why we call them black boxes: it’s harder to explain what inputs were put into the model and what the outputs are. That’s the problem with this kind of black box. A white box, of course, is the opposite. It really touches on all the different pillars I’ve just described, so it checks the box on them: it’s transparent, explainable, robust and unbiased, with fairness algorithms that are tested periodically, and so forth. That’s the technology we use, for example, because it’s based on the ontology, on the skills, so you see exactly the skills that are put into the models. Just to give an example to explain the technicalities:
If you're looking at a person, some technologies could look at past performance. The case with Amazon, for example, is widely known: they were looking for the ideal type of software developer. I'm not going to go into full detail, but they created this ideal type of top performer, and based on that they fed and trained the machine to detect those kinds of top performers. The problem is the model was based on white males, so that's the output you got from that bias. It's a biased model; it used variables that are not supposed to be there. If instead you base it on skills, just looking at the different skills, and you also test constantly against different samples to see whether there are differences versus the benchmark, whether that's the general population or the distribution of a specific industry, you're able to detect biases and reduce them.
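Editor's note: to make the bias-testing idea concrete, here is a minimal, hypothetical sketch (not retrain.ai's actual methodology) of one common approach: comparing selection rates across demographic groups and flagging adverse impact with the conventional four-fifths threshold, similar in spirit to the impact ratios used in bias audits. The group names and decision data are invented for illustration.

```python
# Illustrative sketch: flag adverse impact by comparing each group's
# selection rate against the most-selected group's rate.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 screening decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate.
    Ratios below ~0.8 are a conventional red flag (the four-fifths rule)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical screening decisions (1 = advanced to interview)
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 selected -> rate 0.75
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3/8 selected -> rate 0.375
}
ratios = impact_ratios(decisions)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # ['group_b'], since 0.375 / 0.75 = 0.5 is below 0.8
```

A real audit would of course use far larger samples and statistical tests, per the robustness point above, but the core idea of testing model outputs against a benchmark distribution is the same.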

Brent Skinner 35:22
Yeah, that's a great example. We've had a few podcasts here on #HRTechChat where we've discussed AI from a more philosophical level. I'm not a philosophy person, I don't have any schooling in that, but it's interesting to talk and think about. One of the things you mentioned is having as much data as possible informing the AI, and that increases the potential for it to be as unbiased as possible. That's come up from a slightly different angle in previous conversations, where we talked about how you inform AI so that it's most reflective of the human experience: you include as broad a spectrum of human sentiment as possible. That's interesting to me. So the whole idea is more data, more data, more data?

Isabelle Bichler 36:29
Sure. If you just use the sample data of one company by itself, it's not going to be enough, and some technologies are doing that. We're saying no: aggregate many companies and public sources. Then you have the right sample size, both for your training purposes and for the usage of the technology itself.

Brent Skinner 36:53
Yeah. I'm just looking at the time, and we could probably talk about this all day, but we like to keep the podcast to about 35 minutes, so let's start to conclude here. Rob, any additional thoughts from your side of the aisle, on the legal aspects of this?

Robert Szyba 37:17
Sure, I'll say this: I think we're in an exciting time for AI, because we're seeing a lot of possibilities, and they're at our fingertips. We're on the verge of a lot of great developments, I think, and a lot of novel and great uses for this stuff. The only reservation and concern I have is that the less we do things like this, have a dialogue and talk about the different issues and aspects of it, the more it raises the potential for a lot of the friction I mentioned earlier. Just hearing Isabelle talk now about aggregating data, the types of inputs that result in optimal solutions, analyses, and optimization of AI: this is exactly what I was referring to when I said earlier that New York doesn't have enough standards, or really enough development, in its law. My concern is that we regulate AI and stifle development before it gets off the ground, before we can actually realize some of the potential. So I look forward to seeing what some of the capabilities are and what we can do with AI to enhance hiring, training, and the employment experience in ways that are positive for both employers and employees. But we'll see where we wind up.

Brent Skinner 38:42
Yeah, exactly. It's a critical time in the development of AI. It's also an opportunity to get out ahead of it now and to start developing it with intentionality, with a broad respect for the human condition, so that it becomes a complement to our existence, which is what we're all striving for. Isabelle, any closing thoughts?

Isabelle Bichler 39:16
Yes, I would say responsible AI should be a part of every AI strategy right now, and on every top management agenda. When they're talking about AI, they should also be thinking about responsible AI, ethical AI. And we're here to help.

Brent Skinner 39:36
Yeah. Well, thank you both for joining me today for this episode. I just want to do a quick call-out: we're going to be doing a webinar on June 8 at 10am Eastern, where we'll dive into the details and parameters of the new New York City law that we've been discussing today, and what you need to know now to begin preparing. So be on the lookout for that, folks; there'll be a registration for that soon. In the meantime, again, thank you both so much. Really important conversation.

Isabelle Bichler 40:15
Thank you. Thank you for having me.

Robert Szyba 40:17
Absolutely. Thanks for having us.
