Ben Mones, CEO and Founder of Fama, joins Dylan Teggart for this episode of #HRTechChat to discuss the evolving landscape of AI in HR technology and talent assessment. With nearly a decade of experience leading Fama, a company dedicated to providing behavioral insights through digital footprints, Ben shares his journey from launching Fama via an accelerator program to becoming a leader in the field. He recounts the company’s origin story, inspired by an incident of workplace misconduct, and discusses Fama’s role in helping employers identify potential risks through online behavior analysis.
Ben and Dylan explore the broader trends shaping HR tech, including the increasing sophistication of AI procurement processes, the impact of societal movements on hiring practices, and the challenges of integrating AI into decision-making frameworks. Ben also highlights Fama’s approach to empowering HR professionals with actionable insights while maintaining ethical and legal standards.
This conversation sheds light on the interplay between AI, workplace culture, and the future of hiring, making it a must-listen for HR leaders and technology enthusiasts alike.
Our #HRTechChat Series is also available as a podcast on the following platforms:
- Apple iTunes: https://podcasts.apple.com/us/podcast/3sixty-insights/id1555291436?itsct=podcast_box&itscg=30200
- Spotify: https://open.spotify.com/show/7hXSgrFWXwvJqPbnKunztv
- SoundCloud: https://soundcloud.com/user-965220588
- YouTube: https://www.youtube.com/channel/UC6aDtrRPYoCIrnkeOYZRpAQ/
- iHeartRadio: https://www.iheart.com/podcast/269-3sixty-insights-78293678/
- Amazon Music & Audible: https://music.amazon.com/podcasts/7e7d809b-cb6c-4160-bacf-d0bb90e22f7c
- Pandora: https://www.pandora.com/podcast/3sixty-insights/PC:1000613511
- PlayerFM: https://player.fm/series/3sixty-insights
- Instagram: https://www.instagram.com/3sixtyinsights/
- Pocket Casts: https://pca.st/iaopd0r7
- blubrry: https://blubrry.com/3sixtyinsights/
- Castbox: https://castbox.fm/channel/id4113025
- ivoox: https://www.ivoox.com/en/perfil-3sixty-insights_a8_listener_25914379_1.html
- Podchaser: https://www.podchaser.com/podcasts/3sixty-insights-1718242
- Deezer: https://www.deezer.com/us/show/3029712
- Listen Notes: https://www.listennotes.com/podcasts/3sixty-insights-3sixty-insights-DgeU6AW42hn/
- TUNE IN: https://tunein.com/podcasts/Business–Economics-Podcasts/3Sixty-Insights-TechChat-p1529087/
- PodBean: https://www.podbean.com/podcast-detail/9nugf-1e0a27/3Sixty-Insights-Podcast
- Audacy: https://www.audacy.com/podcast/3sixty-insights-hrtechchat-1ccab
- podimo: https://podimo.com/dk/shows/3sixty-insights
- rephonic: https://rephonic.com/podcasts/3sixty-insights
See a service missing that you use? Let our team know by emailing research@3SixtyInsights.com.
Transcript:
Dylan Teggart 00:00
Hi everyone. This is Dylan Teggart. I’m here from 3Sixty Insights, speaking with Ben Mones, the CEO of Fama. Ben, thank you for joining me.
Ben Mones 00:23
Thank you, Dylan, for having me. Man, excited for today’s conversation.
Dylan Teggart 00:32
Likewise. And do you mind telling people a little bit about yourself? It’s the first time we’re speaking, so give me a little background on how you got where you are and what led you to being the CEO of Fama.
Ben Mones 00:55
Yeah, sure. I’m Ben Mones. I’m the CEO and founder of Fama. Who am I? Gosh, I’m a dad first, an amateur at-home chef, and a CEO. I guess you could say a few different things, but yeah, my background is enterprise software. I’ve been doing startups pretty much for my whole career. I hired a guy who had looked great on paper at one of my early startups, and he ended up doing something really bad to one of our employees at the company, and we saw signal of what happened and the early indicators of risk via this guy’s online presence. Had we seen who this guy was online, we never would have hired him. So, you know, really, that founding story still drives a lot of who we are at Fama today, where what we do is help you go from shortlist to finalists on people that you’re hiring. And we think the secrets are buried, the treasure hidden in plain sight, whatever analogy you want to use, in who we are online, who we are in text, image and video. That can tell us things about the sort of risk, per my story, that someone brings to the table, the Big Five traits, the psychological traits that somebody has. So much can really be gleaned through who somebody is online. And that’s really what we do: we provide companies a window into that world of insight so that they can find the right fit for their organization and increase quality of hire. So that’s what we do.
Dylan Teggart 02:22
Nice. Well, thank you again for joining me. And just kind of getting into how Fama got off the ground: how did you start it? How did you build out the business, and how did you get to where you are today in terms of the trajectory of the company?
Ben Mones 02:41
Yeah. So we were lucky to go through an accelerator out here in Los Angeles, where I’m at, called Amplify. And, you know, for those that are unfamiliar with accelerators, it’s similar to, call it, Y Combinator or Techstars, some of the more famous ones that are out there. But, you know, really, I had a ton of belief in the why, right? You know, coming out of experiencing the pain that we were solving for, I knew what we do at Fama was a problem that needed to be solved. But as a first-time founder, I didn’t, you know, know that much, if anything at all, about what it took to start a company. So Amplify actually provided all of the infrastructure, the introductions to investors, the coaching on everything from team development to fundraising emails to even strategy, and introduced us to some customers along the way. So, yeah, we actually started Fama with just a PowerPoint presentation. We raised 150k for 10% of the company, which was, people always say, oh, I’m giving up so much of my company. Well, my mindset is, you’re giving up a percentage of zero at the time, which is what that company is worth, and my math is good enough to know that that’s still zero. So yeah, in any event, we started with Amplify here in LA, and a few milestones later we raised a seed round, a few milestones later a Series A, then a B, and, you know, 10 years later, this January is actually going to mark the 10-year anniversary of Fama. So we’ve been at it for a while.
Dylan Teggart 04:23
Wow, congrats. And I feel like, as you started doing this years ago, I’m assuming you hit some skepticism. You know, people were probably unsure how you were going to collect all this data. How was it received, kind of, from the get-go? Or did people always kind of understand the premise? And how was the concept conceived initially?
Ben Mones 04:48
Yeah, you know, in the early days, first, there are segments of doubt, right? You have your own self-doubt about whether or not, you know, your company is going to work. But two, to your point, you have market doubt, right? Of other people who are either going to buy it, going to invest in it, you know, going to be a part of the product experience one way or another, who, of course, doubt it. And, you know, I’ll speak to the sort of market doubt first. In the early days, there wasn’t this sort of understanding of our role in the analog world and what that meant for online identity, right? You know, this was also prior to so many of the societal changes, I think, over the past decade that have elevated the importance of things like reducing intolerance in the workplace and enforcing gender equity. What I mean by that is, you know, things like the Me Too movement in 2016 really brought to the fore questions around, how are we minimizing harassment in the workplace? How are we ensuring we’re not bringing people in that are acting misogynistic or hateful towards other people, that are driving others to leave, impacting our brand and impacting our culture negatively? And similarly, I think the racial justice movement of 2020 also brought a lot of focus to, you know, these sorts of issues in the workplace that companies had cared about for a long period of time, but consumers started caring about those sorts of issues too. You know, consumers started making purchasing decisions based on some of these more value-oriented buying criteria, right? And that flowed all the way down, not just into vendor supply chains and messaging and the little, you know, black squares people posted on LinkedIn.
Those are the kind of, you know, real quick, in-the-moment ways to show that you care. The other thing I think we’ve seen is that companies really did begin shifting and saying, okay, you know, these are the sorts of issues we care about when it comes to who we hire and who we bring in, in other words, satisfying the demands of, you know, their customers and their employees, who are now asking different questions about the people that they either worked with or bought from. So, you know, that was one dimension. I think also, too, you know, I remember when I started, people used to be like, yeah, well, social media, my kids are on social media, what does that have to do with hiring anybody? And, you know, that was prior to, I think, such a dramatic explosion in the use of social media that happened post-pandemic, where so many of us spent time in our homes and found solace in online relationships, online friendships. So I think over that 10-year period, too, who we are online and our digital identities have dramatically expanded. You know, we are creating, I think, billions and billions of pieces of content every week, every single day, every moment. You know, I think almost every second, something like seven megabytes of data we’re creating right now in this world that we’re in. So you have this kind of proliferation and spread of companies that really began caring about issues that we track, things related to a person’s behavior, how they’re going to act around fellow employees or customers. And you have the really significant, I would say, explosion of online social media usage and online identity more generally. So we’ve been a company that’s benefited, I think, from a lot of those types of, you know, tailwinds and changes in society overall. But yeah, for me too, you know, it’s easy to get all caught up in doubt.
You know, if something isn’t working, it isn’t working, and what’s that definition of insanity, doing something stupid twice, whatever it might be. You know, for us, we had to kind of put the blinders on and keep executing, keep believing, because our own belief in what we were doing never wavered, not for one moment. And I think the market, you know, came around in that period too.
Dylan Teggart 08:50
So, long, long answer. No, but it makes a lot of sense, because I think no one would deny now that what people post online is something that people notice. It’s no longer obscure to be on a message board or to just post publicly about your life or about your opinions, and it seems like more and more employers are expanding that. You know, it’s not just a drug test anymore, it’s a bit of a social test as well, kind of, you know, touching on what you mentioned earlier, which is cultural fit, which you almost don’t want to find out, you know, two months in; you want to find it out before the person starts the job. And as that’s been the case more and more, I think, as just kind of a standard policy for companies, whether or not they have the tools for it, they’re interested in it, right? And as that’s developed, how do you think it’s changed how you’ve approached the market? And where do you think Fama falls into, you know, the use of AI tools in HR tech for this specific subject?
Ben Mones 10:03
Yeah, sure. I think overall, we have been in a bit of a spin cycle, especially in recent years, as it relates to AI, the use of AI in the enterprise, particularly the use of AI in HR tech, you know, applications overall. Because you’re spot on, right? You know, companies, just like with our solution, have begun shifting and asking themselves, all right, well, my workforce is 50% Gen Z and Millennial; by 2030, you know, that number is going to be 75%. So how am I adapting the tools of my toolkit to reflect changes in the workplace, right? I think that’s happening across the board, across a wide range of functions in and out of the kind of, you know, human capital management experience when it comes to, you know, software. So, you know, I think a lot of the tools, and part of this is Fama finding its voice and its own footing with our users, right? You know, this has been a big question, not just of, like, what is the right feature set. Because, of course, there’s an answer to your question that’s like, oh, well, we just built features; as Fama has been adopted and grown, more and more companies have come on board, and they’ve had different feature requests. I want this button to be green instead of blue, I want this number of data sources, I want this insights report, right? But I don’t think that’s really what you’re asking, right? Those are features that any company, you know, would put together. I think the bigger question is, what’s the spirit of this technology, and how are users engaging with the technology to move the needle for their own business, right? And I think a lot of AI tech that’s coming out now replaces what I would call the sort of decision-making framework inside of an organization. Now, I’ll give you an example. Take, like, conversational applicant tracking systems, which were a huge thing at HR Tech this year, an ATS.
In other words, it can talk to a candidate on your behalf, ask the right questions, engage them, and replace that previous recruiter sifting through, you know, job applications from Indeed, doing that initial candidate screening of deciding who gets an interview and who doesn’t, right? And I think that fear of adopting AI, you know, particularly AI-based technology, in this case in the world of HR tech, comes from users being fearful about the fact that decisions that they make are going to be replaced by a machine. And if those decisions are replaced by a machine, you know, it’s going to make a poor decision, it’s going to make a biased decision, or a hallucinated decision, right? And so I think in a lot of cases, that sort of AI decision-making replacement tech is running into some challenges, and you’re seeing, you know, AI resume creators, I saw this example a few weeks ago, AI resume creators that are now, like, going to battle with the AI resume filtering tech of the conversational ATS. So in other words, anytime you outright replace the decision-making framework, there’s going to be a companion technology on the other side of the equation that’s going to try to do the same thing and flip the sort of narrative, flip the problem statement, if you will, on its head. So where we focused our technology is to say, well, how do we take a step back? And not, you know, a step back in how we think about the strategy, but a step back in how we use technology to make decisions. Meaning, how do you bring somebody to the precipice of action? How do you fill in variables that companies are already using in decisions that they’re making about hiring in new, novel and unique ways? And how do you use AI to do that, right? And so back to our example at Fama.
We talk a lot like, hey, companies are, you know, super focused, like we just said, on finding somebody who’s got the right sort of, you know, behavior track, if you will, around employees and around customers. Well, instead of asking questions in the interview, or, you know, trying to say, well, how were you raised, or, like, interview questions, right, that can kind of assess a person’s, you know, prospective behavior, we can look at who somebody is online. Instead of, you know, trying to have someone do a self-assessment to get a Big Five characteristic, you can run that just by analyzing their online language, whether it’s a LinkedIn post or a TikTok video that they’re creating. So our approach has been, how do we, like, maximize the decision-making authority, capability, experience and expertise, to empower the HR user so that they can actually not just win one round with the AI from the candidate, but how do we put them in a position where they’re winning the long-term battle, right, where they are fueling their own expertise with new and novel technology? So it’s not just that their company stays a step ahead, but they stay a step ahead too, right? That’s where we see the excitement and the engagement: companies look at technology in ways that, you know, don’t just benefit their boss or their team or their company, but that benefit the users themselves, right, that make them more intelligent, make them more essential and make them more, you know, intrinsic to the success of their company. So we’ve been really thoughtful about how you deploy the technology, and not just, like, scoring people or saying Jane’s an 88 and John’s an 81 and trust us, we know. We’ve tried to be a little bit more thoughtful, I would say, about where and how we introduce data.
Dylan Teggart 15:38
Yeah, it’s an interesting thing you mentioned, which I’ve thought about as well, with the AI-versus-AI kind of battle that you set up, because people want to remain competitive while they’re applying for a job. Can’t blame them for that, you know, they’re just playing the game. But eventually they’re going to learn how to play the game really well, and they’re going to have AI software that counters the AI screening software. One effect I’ve also seen, and whether or not this works, is people putting micro keyword bricks of text in their resumes that the naked eye can’t see, but a machine scan, or, you know, an AI scan, 100% picks up. So it’s amazing the things people can come up with to counter these technologies. And in that sense, when someone is applying to a job and they get run through Fama, how are you kind of staying ahead of the curve with the challenges and the new ways people are going to come up with to circumvent this? And what are you seeing people doing now that this has become a bit more ubiquitous, and how’s Fama kind of dealing with that?
Ben Mones 16:55
Totally. Big question. I would say we operate, from a technology standpoint, in a slightly different framework of qualitative decisions. And what I mean by that is, you’re right about companies changing. I’ve seen that exact resume, you know, they put it in, like, white text, right? Micro text. It’s not even visible on a piece of paper, but a machine picks it up. Which, absolutely, you should expect in any open market, that both sides are going to try to compete to get the upper hand, especially in a highly competitive market like a labor market. So it makes sense, and in the spirit of the conversation we’re having, it’s a feature, not a bug, of the overall experience, right, for both consumers and technology providers. So, you know, I think where that leaves employers, though, is kind of back to square one. It’s like, well, the AI is talking to the AI. Now what do we do? Well, your boss, your manager, your CHRO, your VP of talent acquisition, they still want you to fill all these roles. Hiring managers want you to. So now, okay, do we go back to, you know, paper clips and SharePoint and all the other, you know, manual ways to work? I don’t think we’re quite there. But, you know, for Fama, when I say we operate in a sort of different technology realm, it’s that you can’t game your behavior over time. And what I mean by that is that we operate in the sort of world of moral psychology, right, of, like, why people, you know, do bad things from time to time or act in certain ways. And, you know, really, what we are looking at is a person’s behavior when the camera isn’t on, when the interview isn’t happening, when you aren’t downloading the technology to get through this one stage gate of an interview process.
You know, we’re looking at the things that people are saying and doing over a period of time, establishing patterns. Again, if you put a score on somebody, that’s where this gets really complicated and also incorrect, right? If I was to say, oh, well, we have a scoring tech, we have some anti-AI, you know, measurement to figure out if they’re juking it or not. For us, what we’ve designed is a way to bring a user again into their existing decision-making, into the same strategies they’re running, but finding the variables in new and unique ways. What I mean by that is, like, instead of saying, oh, you care about how someone would act towards other people who look different or might have a different gender or different sexuality, whatever it might be, and instead of asking interview questions about, you know, how you feel about those issues, which nobody would ever ask, of course, we’re looking at whether people are saying hateful things online or threatening things online, or posting things that might normalize an aberrant point of view. And that’s what I mean by sort of the moral psychology: people who say something hateful, hurtful, violent have normalized that, right? If you’re saying it publicly, you know all the repercussions. Like you were saying, this is a known thing; you can google “racist tweet fired” and see the Google Trends history. Between 2015 and now, it’s way up and to the right, because there’s plenty of examples of these sorts of things happening. And folks, by the way, candidates, sign consent forms; they know that these checks are coming. So before this check is even run, you sign a consent form and say, yes, I consent to this sort of check; it’s happening. So you would think that most people, if they’ve said crazy things on the internet, will go and delete their stuff, right, or make it private, right?
Because all we’re doing is surfacing and finding those kinds of needles in the haystack, if you will. Hey, I want to know if Ben has said anything harassing, anything negative about our CEO; if Ben, you know, posted about illegal drug use online, I want to know that kind of stuff. Again, the same sorts of things you assess for in a typical interview process. What we do is we’ll source and identify that for you, if it exists in a person’s digital identity. And what we’re trying to do is find where people have normalized certain types of behavior that are unacceptable to your organization. And that’s where we do it, you know, really, on the risk side.
Dylan Teggart 21:01
And from a talent acquisition perspective, and, you know, talent assessment technology, run me through a typical user experience. Say I apply for a job, a mid-level job, stakes are not super high, I’m not going to be doing anything that’s really making or breaking the company, and I’ve signed this consent form. You know, the employer is using Fama. What would happen to my candidate profile as it moves through the system?
Ben Mones 21:35
Sure. Yeah. So typically, this is run right around the time a group of finalists is chosen or a conditional offer has already gone out, when other types of background checks are run. So they might be running a crim check, a drug test, employment verification, education verification, whatever. A Fama check gets triggered by your background screening provider, by us directly, or out of your applicant tracking system. So a candidate basically reaches a certain stage, like conditional offer, which triggers a background check; we collect the background check and return the results, and then see if there is anything in your digital background that mapped to one of the things that the customer cares about. And every customer has a slightly different take, right? Clients have different things that they care about. You know, some are much more conservative than others; others take a little bit more of a freewheeling approach: we care less about, you know, A, B and C, and we really only want to know about D. In those cases, you know, if there is a hit that comes back, we go out, Fama finds your web presence; the candidate doesn’t have to do anything. We don’t ask for a username, we don’t ask for a password. We go out and filter through your entire web presence, and if anything the company has identified as a potential risk to them appears in your digital background, we share that with them via a report, largely API- or PDF-based. And if the customer, I mean the enterprise, decides that they, you know, want to potentially have a conversation with you about it, they have to have a conversation with you. You can’t take action on a Fama report without something called a pre-adverse action letter. Essentially, what that means is that you’ll receive a copy of your report, and the company will say, hey, we’re not moving forward, you know, with your candidacy because of X, Y, Z.
You have five to seven business days to contest and tell us why; maybe it was a one-time thing, or something like that. Anyway, the candidate gets an opportunity to contest the results and explain the why and what for, add context behind it, etc. So all of it, very high level, results in a conversation between the company and the candidate: hey, we found this out there; either A, delete it, or B, because of this, we’re not moving forward, you know, with you in the process. So, a few different parameters, but a much more, like, qualitative conversation than
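[Editor’s note: the screening and pre-adverse-action flow Ben describes can be sketched roughly as follows. This is an illustrative sketch of the FCRA-style process only, not Fama’s actual implementation; every name and field here is hypothetical.]

```python
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    candidate: str
    flags: list  # findings that matched this employer's configured risk categories

def run_screening(candidate: str, employer_flags: set, web_findings: dict) -> ScreeningResult:
    """Compare public web findings against the categories this employer
    screens for (which, as Ben notes, vary per customer)."""
    hits = [finding for category, finding in web_findings.items()
            if category in employer_flags]
    return ScreeningResult(candidate, hits)

def adverse_action_flow(result: ScreeningResult, employer_decides_against: bool) -> str:
    """A report with hits cannot end a candidacy outright: the candidate
    first receives a pre-adverse action notice and a ~5-7 business day
    window to contest or add context."""
    if not result.flags:
        return "proceed"   # nothing found: hiring continues normally
    if not employer_decides_against:
        return "proceed"   # employer reviewed the report and moved on anyway
    return "pre_adverse_action_notice_sent"
```

The point of the sketch is the ordering: the report alone never terminates the process; it only opens the conversation Ben describes.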
Dylan Teggart 23:56
you might think. Interesting. And I guess, you know, if we think back 20 years ago, let’s say, it would be somewhat unthinkable that you would post as much as we do online. A, it wasn’t as practical, and B, I think maybe it was a cultural thing; people just weren’t as open to sharing things as we are now, for the most part. Obviously, a lot of people are still very, very private. And as those goalposts kind of move, is it up to the business to decide what is deemed acceptable and what falls below just the new norm, you could say, of that’s just what people do?
Ben Mones 24:46
Great question, and 100%. I mean, companies operate as part of what’s called the Fair Credit Reporting Act, which, as I’m sure most of your listeners know, is the sort of legislative framework that governs what companies can use when it comes to gathering public data about people for use in a hiring decision. So there are, I would say, compliance frameworks, GDPR, FCRA, PIPEDA in Canada, CCPA in California, where I’m at. So there’s a lot of, I would say, legislative frameworks that kind of govern, to a degree, the new normal, right? Like the right to be forgotten, the privacy controls, the data security controls, right, that all sort of govern and measure and determine how to go about it. But the why and the what, the why is really the company. The company gets to decide: here’s why we’re making a decision. A company in the private sector can totally make that call; even a public sector organization can make a decision on who they want to hire based on their own internal set of standards. But, yeah, you know, when it comes to, like, managing the flag types we flag for, we’ve recently added, like, a trolling flag at Fama. You would never have a trolling flag 20 years ago, to your point, because trolling wasn’t really a thing. But now companies want to know, is this person I’m hiring harassing a bunch of people on the internet, trolling them, etc.? Because that’s a bad look for our company in terms of the sort of, you know, mindset we want. So we’ve made other changes too, like when cannabis became increasingly legalized, which also happened in the past 10 years. It was always, you know, legal in a couple of states, but cannabis is now much more recreational. And I think I even heard something on the radio that it’s used more often on a daily basis than alcohol in this country. So we used to treat all drugs the same.
Now we have cannabis and other drugs split out, because a lot of companies don’t care about cannabis but still want to know about cocaine or fentanyl or, you know, heroin, that sort of thing. So I would say, in the direction of your question, it has always been up to the customer, and will be up to the customer, to decide what risk looks like, because that varies so greatly by company. However, as a service provider, we are evolving the technology to reflect changes in how the software is used. Like, we didn’t have, for example, video coverage when we started in 2015, because video on social media was, like, minimal. It was YouTube, right? And now it’s TikTok, it’s Reels; it’s the number one form of content. So we adapt our technology, we adapt the way it works, to fit changes in the current landscape. And various compliance frameworks drive how this technology is deployed too, and will continue to as well. Yeah.
Dylan Teggart 27:38
Because it seems like one of those things where the lines are going to keep expanding outward. It’s almost like, I think, you know, if you look back at what was acceptable in the 1950s, what we do now would be seen as incredibly lewd, or whatever.
Ben Mones 27:58
Even 2005, you know, compared to now, 20 years ago, totally different story, for sure.
Dylan Teggart 28:03
And for someone who, you know, going back to how people are going to remain competitive, I kind of wonder. People already have, like, their finstas, or whatever you want to call it, their alternative account that is maybe just for friends and private. Do you think this could, for professional people, maybe lead to, and this is a broader trend, of course, a case where people have their private persona, the private account that only their friends can see, and then maybe a public account for, you know, everyone else, if that’s even necessary for someone like that? Because they don’t want information like that out in the open; these are their professional lives, and they don’t want it to impact how they work. But even then, if they’re posting hateful remarks on a private account, would that still be flagged? Or if they’re posting under a fake name? Because, going back to the AI resume thing, people are going to, of course, try to circumvent it. What are the ways you’re seeing that happening, and how are you counteracting it, if at all?
Ben Mones 29:13
Yeah, well, that’s sort of what I was getting at earlier, which is, like, you know, you can’t essentially get around this, because it looks at behavior over time. And most people, to your point, aren’t acting hateful online, and there’s no reason for them to be private. But most people are private anyway on the internet, right? Still, a huge portion of people post things publicly. But to answer one point you raised: if anything is private, we’re never going to filter it, never going to screen for it, not now, not in the future, not in the past. We think it’s fine, you know, to keep certain stuff private. Because, I’ll tell you, with my friends who I’m really close with, that I’ve known for 25 years, do I say things to them that I wouldn’t say to my colleagues at work? 100%. Do I say things to them that I wouldn’t say to my wife, wouldn’t say to my kids, to my parents? 100%, right? There is a certain message and a certain self that you have with different people. That’s just the way we are as humans, no doubt about it. And what we’re checking for at Fama are those that don’t have those lines, or don’t have that division, right? It’s the idea and concept that, I think it’s okay to act hateful towards other people because of what their race is, and I’m public about it, like, it doesn’t matter to me, right? Most people who truly have problematic or aberrant behavior online don’t think there’s anything wrong with their behavior. In other words, there isn’t that logical step that you’re describing, which is, like, oh, I’m going to be really private with all the bad stuff and really public with all the good stuff. That’s how most people are today. You know, that’s how the majority of us interact. You don’t say the same things to your, you know, your boss that you’re going to say to your buddy that you’ve known since you were 10 years old.
And you guys went through everything together from when you were teens to now, and you’re sharing all those stories from those days. Great, have those friends. But what we’re saying is, if somebody who is getting to know you for the first time sees that you’re acting misogynistic on the internet, and you’re a remote company, so they’ve never met you in person before, that makes people feel uncomfortable. That makes people feel less engaged in the job that they’re in and, to a degree, not productive and not contributing the way that they were before. So that’s really what we’re talking about here. To some extent, is it possible that someone would post things privately and keep their public image untarnished? Absolutely. But again, we’re not looking for those that are rationally drawing that line. It’s the folks that don’t think there’s anything wrong with problematic behavior, and that’s what they bring into the workplace. That’s how they treat their fellow colleague, that’s how they treat a customer, right? And so that’s what we’re trying to find. It’s a simple behavioral intelligence kind of thing; that’s how to think about it.
Dylan Teggart 32:06
Yeah, that makes a lot of sense, given the fact that you may never meet the people you work with in person, and especially given what other litmus tests you have. And speaking of broader trends, or getting into broader trends: you were at HR Tech, and it sounds like AI was a huge talking point there. It’s everywhere; it seems like it’s getting into everything we do. So what are the big trends you see, the latest of that, but also in general, and how do they tie into what you do?
Ben Mones 32:50
Yeah, sure. I think there’s a lot of investment right now in artificial intelligence, of course, but setting aside the bot-versus-bot thing and the trends happening at the company-creation and adoption level, one of the big things we’re seeing is that purchasing processes at enterprises are now changing to include AI-specific audits and questions. In other words, companies have realized, here at the end of 2024, that not all AI is created equal, and that the salesperson’s promise, or the marketing website’s promise, doesn’t always work out the way the signer on that deal might have thought it would. So one of the big trends we’re seeing is this AI procurement that’s now happening, where companies are asking: How are your models trained? Have you run an independent bias audit? Talk to me about the makeup of the engineering team that’s building this technology. What is the source data it’s trained on? How is that data labeled? What was the inter-rater agreement on the golden dataset your company created? What are the precision, accuracy, and recall of these models? We’re seeing a huge change in how companies buy AI-based software. It’s almost like the beginnings, again, of what happened maybe eight years ago, when SOC 2 compliance and data security became a huge business: companies started asking questions around data breaches, and a cottage industry of third-party consultancies and software spun up to support that process. I think we’re starting to see that now with AI technology too, because companies ask completely different questions of us as we get through those final stages of vendor onboarding than they did six years ago, and it’s been a really interesting trend that I think a lot of people aren’t talking about: the nuts and bolts of how companies are buying AI, the questions being asked, and, at times, people asking you to explain a black-box AI to them when you, or they, may not have the capability to explain in exact detail the why, the what for, and how it all comes together. So that’s just been a trend I’ve observed that I found super interesting: the changes in the procurement processes for some of these AI providers.
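[Editor's note: a minimal, illustrative sketch of two of the metrics Ben mentions buyers now asking AI vendors about. This is not Fama's actual evaluation code; the data, function names, and two-annotator setup are all hypothetical. Precision/recall measure a model against ground-truth labels, and Cohen's kappa measures how well the two human annotators who built the "golden" dataset agreed with each other, corrected for chance.]

```python
# Hypothetical example: evaluating a binary "flag / don't flag" model
# against a small labeled golden set, plus inter-rater agreement.

def precision_recall(predicted, actual):
    """Precision and recall for binary flags (1 = flagged)."""
    tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
    fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(1 for x, y in zip(rater_a, rater_b) if x == y) / n
    p_a1 = sum(rater_a) / n  # how often rater A says "flag"
    p_b1 = sum(rater_b) / n  # how often rater B says "flag"
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Toy golden set: model's flags vs. ground-truth labels
model = [1, 1, 0, 0, 1, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1, 1, 0]
p, r = precision_recall(model, truth)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75

# Two annotators labeling the same eight items
rater_a = [1, 1, 0, 0, 1, 0, 1, 0]
rater_b = [1, 0, 0, 0, 1, 0, 1, 0]
print(f"kappa={cohens_kappa(rater_a, rater_b):.2f}")  # kappa=0.75
```

These are exactly the kinds of numbers an AI procurement questionnaire asks for: a kappa near 1.0 means the labeled training data is trustworthy, while low precision or recall means the vendor's model over- or under-flags relative to that golden set.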
Dylan Teggart 35:34
And how do you find that has affected the way you’re pitching Fama to people, given that this is something people are looking for?
Ben Mones 35:43
I think one of the things you sort of have to identify is that the use of AI, while exciting, is also scary to some of these companies. In other words, we’ve promoted the fact that we have a human QA on every check. It used to be, when we started the company, that you’d try to put on a little more varnish, a little more lipstick when it comes to dressing up your technology, even if you have a guy behind a paper wall riding a bicycle to keep the lights on, which is totally cool, because a lot of tech companies are like that, and we were like that in the early days. Obfuscate, make it seem like the AI is there, the technology is there, right? Try to seem bigger than you are. That was something we did as an early-stage startup, but we’re more mature now, and it’s a totally different story. Now we’re talking about the humans in the workflow, the human QA that we do that reduces hallucinations and increases quality, right? Because at the end of the day, what these customers want from technology is not how it comes together. They don’t care, to use that old adage, how the sausage gets made. They just want accurate data that moves the needle on the decisions they’re making. And so we’ve found that focusing on some of those non-technical components can be a bigger selling point through some of those sales motions.

Dylan Teggart 37:18
It’s interesting, I like the way you put that: companies want accurate data to inform the decisions they’re making. It really is just as simple as that, and I think sometimes that gets a little too diluted along the way. Well, Ben, this has been a great conversation, and thank you so much for joining me today. If people want to reach out, would you mind letting them know where they can get in touch with you?
Ben Mones 37:41
Yeah, check us out at fama.io; that is the best way to get in touch with us as a company. Again, that’s F-A-M-A dot io, and you can check out a bunch of resources on our site and learn a ton about what we do, how we do it, these sorts of things around AI, plus a whole other world of content to dive into. And of course, you can check me out on LinkedIn too, where I post from time to time.
Dylan Teggart 38:07
Nice. And is there anything you can tell us about for 2025?

Ben Mones 38:15
Yeah, you know, I think for us, we’re still in the early innings of what we’re doing here at Fama. We’ve got a lot of unique technology planned, but more generally, I would just say that who we are online, who we are in text, image, and video, can tell us so much about who we are and how we behave, and we’re going to do our best to unlock that next year and take our insights to the next level.

Dylan Teggart 38:42
Awesome. Well, thank you for joining me, and thanks everyone for tuning in.
Ben Mones 38:47
Appreciate it. Thank you, everyone. Bye. Thanks.