Honest HR

Karl Ahlrichs on HR's Using AI Ethically and Responsibly

Episode Summary

Generative AI programs (like ChatGPT) have tremendous potential to deliver new efficiencies and capabilities to the workplace, but what do these new tools mean for HR, both in how to use them and in addressing the myriad concerns they raise? In this episode of Honest HR, host Wendy Fong speaks with Karl Ahlrichs, Senior Consultant at Gregory & Appel, about how HR professionals can use AI ethically and responsibly and develop the critical thinking skills necessary to do so.

Episode Notes

Generative AI programs (like ChatGPT) have tremendous potential to deliver new efficiencies and capabilities to the workplace, but what do these new tools mean for HR, both in how to use them and in addressing the myriad concerns they raise? In this episode of Honest HR, host Wendy Fong speaks with Karl Ahlrichs, Senior Consultant at Gregory & Appel, about how HR professionals can use AI ethically and responsibly and develop the critical thinking skills necessary to do so.

Earn 0.75 SHRM PDC for listening to this podcast; all details provided in-episode.

Episode Transcript

Monique Akanbi:

Welcome to Honest HR, the podcast for HR professionals, people managers, and team leads intent on growing our companies for the better. We bring you honest, forward-thinking conversations and relatable stories from the workplace that challenge the way it's always been done. Because after all, you have to push back to move forward.

Wendy Fong:

Honest HR is a podcast from SHRM, the Society for Human Resource Management. And by listening, you're helping create better workplaces and a better world. I'm Wendy Fong.

Amber Clayton:

I'm Amber Clayton.

Monique Akanbi:

And I'm Monique Akanbi. Now, let's get honest.

Amber Clayton:

Now, let's get honest.

Wendy Fong:

Now, let's get honest. Hello HR fam, and welcome. I'm your host, Wendy Fong, manager of Event Technology Innovation at SHRM. This podcast is eligible for 0.75 SHRM PDCs toward your SHRM-CP and SHRM-SCP recertification if you listen to the full episode. We'll share the activity ID at the end of the podcast. ChatGPT, or generative AI, is going to change the world, says my guest today, Karl Ahlrichs, senior consultant at Gregory & Appel. In this episode of Honest HR, we discuss what ChatGPT and generative AI are and some of their capabilities, how HR and business leaders can respond to generative AI and its impact on the world of work, and how ethics and critical thinking play an important role in how we respond to it.

I'm actually really excited about AI because I enjoy learning new things and with any new invention, it's going to change the paradigm of society. Change is the only constant thing in life and with any new invention, we must embrace the evolution of how it will impact us, whether we like it or not. We must let go of the past way of doing things and utilize the new tools that we have at hand to do things better. I know people are nervous about AI and I don't believe it will take away all of our jobs. If anything, it will evolve our jobs to new roles and responsibilities that we never imagined.

Did Chief Diversity Officer or CHRO exist in the early 1800s or 1900s? It doesn't help if you watch movies like I, Robot or The Terminator, which can play on our worst fears and stereotypes of AI. That's Hollywood, which is in the business of entertainment. The question I want to ask you: are you ready to embrace AI personally and professionally? It's hard not to be afraid of something you don't understand, so I implore you all, start engaging with AI, have fun with it. I like to ask ChatGPT what to do in Savannah in October when I'm there for the Inclusion Conference, or my daughter and I like to ask it for new TV and movie recommendations. Think of the limitless possibilities for how this tool can improve your life.

Now onto today's episode. I'm excited for our guest. Karl Ahlrichs has broad experience in HR operations and senior-level problem solving. He is a national speaker and author presenting on ethics and the people issues in organizations, and is often quoted in national media. Karl is a senior consultant at Gregory & Appel, providing consulting and advisory services to multiple clients. He has been named the SHRM HR Professional of the Year for the state of Indiana and holds the SHRM-SCP certification. He still owns the first car he ever drove, a Model T Ford, and has visited all 50 states. Very impressive. Karl and I have known each other since 2021, when I first joined the SHRM events team, as he's a regular speaker at our SHRM national conferences. He recently spoke at the SHRM '23 Annual Conference and Expo in Las Vegas and was one of our virtual discussion group facilitators. So good to see you again, Karl, and welcome to Honest HR.

Karl Ahlrichs:

Thank you, Wendy. I wanted to tell you, I had a wonderful personal takeaway. My organization did. There was one particular idea in our virtual discussion that landed perfectly in our needs and we have just executed our first virtual job fair based on what I learned in facilitating that session. I want to thank you for making that happen because we got three good hires out of it.

Wendy Fong:

Oh, that's awesome. Those virtual discussion groups are really great spaces during our conferences for best-practice idea sharing and for everyday HR professionals to share their challenges. It's great to hear that you were able to brainstorm a solution, deliver it, and see successful results while leveraging technology in a positive way. And in a low-to-no-budget way, too. We love to hear that.

Karl Ahlrichs:

Let me do a very clever segue here. You said best practices. Today, I don't want to talk about best practices. Best practices are kind of becoming obsolete, and here's the new term that will lead us into what I want to talk about. Instead of best practices, we should all be looking at next practices, and that's where we've got to go. That's where ethics comes in, that's where AI comes in, and that's why I'm thrilled that you'd have me in to talk.

Wendy Fong:

Yep. Future forward-thinking.

Karl Ahlrichs:

There you go.

Wendy Fong:

That's very important. You've got to be one step ahead and stay ahead of the curve. That was the session that you talked about at SHRM '23 that I was really interested in hearing more about, and I know our listeners would really want to hear about this: being ethical in an AI world.

Karl Ahlrichs:

It was really interesting. When I submitted my description, AI was not on people's radar. And then, late November of 2022, boom, everybody signed up for ChatGPT. By the time we got to the conference, I asked how many people had used it and there were not many hands up. I started my session with a live demonstration and it popped a lot of heads, both from a "hey, this is a cool tool" to "oh my gosh, this is a huge liability and we can't let our employees use it to generate material that is going to be sent out representing our firm." That's what we needed to talk about.

Wendy Fong:

So maybe our listeners don't know what ChatGPT is. Can you give a rundown on what that is?

Karl Ahlrichs:

Easy. The reason it's easy is I'm going to start with something everybody knows and loves: Google. You type a query into Google, what's the current Department of Labor regulation on classification of employees? Boom. It goes out and finds millions of already-created documents or videos or PowerPoints, somebody else wrote them, all the stuff that's out there about that particular subject, and it sorts them based on what it thinks you want.

Wendy Fong:

Pages. I see pages when I do a search.

Karl Ahlrichs:

New hires often think that advanced research means going to the second page of Google. Well, imagine that you're facing the ChatGPT prompt line, it's a chat, and you type in, how does the Department of Labor structure and recommend classification of employees? Instead of scraping the web and giving you 13 million answers, it starts with a clean sheet of paper, and I'm holding up a clean sheet of paper, and it generates for you an answer that has never appeared anywhere else. It's read all the web. It generates a plain-text answer just for you, and here's how custom it is: if you cut and paste that answer into a plagiarism checker, it'll probably come back a zero, or a three or a four or five. It hasn't been out there before.

Wendy Fong:

Whoa.

Karl Ahlrichs:

So good news, this is custom. Bad news, it may not be right. We still have to apply some wisdom. We still have to audit. We still have to check. Then, second point, the Google search is the Google search and it's never going to be any different. This is a chat. Therefore, let's say you get an answer back, and I had this happen, you get an answer back on the qualifications for full-time employees that's two pages long. It knows what you just requested. You can then type in the prompt, thank you, could you make it shorter, and boom, it generates it again as three paragraphs. And you could say, thank you, could you make it more formal? Thank you, could you make it more conversational? You could even type in, just for fun if you're doing this, thank you, could you write it in the voice of Dr. Seuss, and it will-

Wendy Fong:

Can you turn it into a rap song?

Karl Ahlrichs:

It will.

Wendy Fong:

That's amazing.

Karl Ahlrichs:

So it can be very entertaining, but let's talk ethics here for a second. Who owns the intellectual property rights to what this generative AI program produces? And notice I didn't say ChatGPT, I said generative AI program, because it's like Kleenex is a facial tissue: ChatGPT is a generative AI program. If you use the term generative AI, you pick them all up, because there are tons of them that do lots of different things. The term generative is powerful because our brains are generative. Imagine I say the following and watch what your brain generates: "It's your birthday and I baked you a..." Wendy, what would you say?

Wendy Fong:

You baked me a cake. Thank you.

Karl Ahlrichs:

Right, because we come from a society where that's the logical next word. In other societies it might be cupcake, it might be pork roast. I don't know, but that is your brain generating what the next word should be. That's what these do. They've been in place for a while. Even things like automatic driving programs, for instance, would be generating the path that the car should take. That's like 1.0, and then if you've been a user of Grammarly or a text-writing program like Anyword, those are like 2.0.

ChatGPT crossed the line into a real transformative level. That's what the GPT in ChatGPT stands for, generative pre-trained transformer, where it didn't need as much training to get a decent result. That became useful where you could just type in things like, how do I get from Frisco, Colorado to the Denver airport on public transportation, and boom, it would answer you. And you can do that on your phone, by the way. Reason I know: I was in Frisco, Colorado. I wanted that. It worked. It's been building; it just popped into our consciousness in late November of 2022, and here we are.
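
To make the "your brain generates the next word" idea concrete, here is a toy sketch in Python (not from the episode, and nothing like how ChatGPT actually works internally): it counts which word most often follows each word in a small sample text and then "generates" the likeliest continuation. Real generative AI models use large neural networks trained on enormous amounts of text, but the core idea, predicting what comes next, is the same.

```python
# Toy next-word "generation": count which word follows each word in some
# sample text, then pick the most frequent follower. Purely illustrative.
from collections import Counter, defaultdict

sample_text = (
    "it is your birthday and i baked you a cake . "
    "she baked you a cake for the party . "
    "he baked you a pie once ."
)

next_word_counts = defaultdict(Counter)
words = sample_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def generate_next(word: str) -> str:
    """Return the word most often seen after `word` in the sample text."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(generate_next("a"))      # -> "cake" (the likeliest continuation)
print(generate_next("baked"))  # -> "you"
```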

Wendy Fong:

So it's finally evolved to a state and continually to evolve where it brings up those ethical challenges-

Karl Ahlrichs:

Yep.

Wendy Fong:

... because it's impacting our day-to-day lives in the workplace and our personal lives, without any regulations.

Karl Ahlrichs:

Let's talk about a specific situation. You're the human resources person for a professional services agency. You have a client-facing person who has to write a report that's fairly complex and takes them several days. Instead of them starting from scratch and writing a report, and this is a report the client will be billed for at $300 an hour, probably five hours to generate, do the math, that's a $1,500 billing to the client. Your report-generating person, your account services person, has a crushing workload and cuts and pastes the profit and loss statement that this report is generated from, it's an Excel table, into the ChatGPT prompt line and types in, please write me a summary report showing any areas of possible fraud and overall sustainability. Go. Boom. In less than 30 seconds, they've got a 10-page report that does exactly that: ChatGPT read and processed the data and generated the report that's the answer.

There are two definitions of plagiarism here. If you are representing somebody else's work as yours, that can be plagiarism, but I am putting myself in the seat of that account rep. I commanded it to be built. I commanded it to be written. I told it what to do, I gave it the material. I'm comfortable saying that I own the intellectual property that is the outcome of this magic generation machine. The case law isn't here yet. We don't know how it's going to be decided. There are already lawsuits out there over what is and isn't your material, but this is going to have a huge impact. What we need are procedures in our employee manuals that give employees guidance on how to tell a client that generative AI tools were used for some of the work the client is paying money for.

Wendy Fong:

That is so interesting. I'm thinking even of just day-to-day work. If your manager gave you a project to research, analyze, and write a report on, do you have to acknowledge that you used generative AI tools, or can you take all of the credit?

Karl Ahlrichs:

It's too soon to tell what we're going to do, but there are four reactions an HR professional could have. First, we can bury our heads in the sand and pretend this... La, la, la, la. I can't do that.

Wendy Fong:

Well, that's not reality though.

Karl Ahlrichs:

I just presented at the Indiana SHRM Conference on Tuesday, and I asked for a show of hands of how many people had logged in, or at least had the ability to log in and do something, and it was less than 20% of the room. There are a lot of people who are just holding their breath and tiptoeing around it. But we are the guardians of the ethics in our organizations. We are. We are held to higher standards. The standard-bearers are us in human resources and the CFO and the finance department, because of their work with fraud and having a clean audit. It's pretty much us against everyone else on holding the standards high. We need to have policies for this.

Back to my four reactions: there's stick your head in the sand, then there is overreact and have your head blow up, oh my God, the sky's falling. Option number three is wait and see: I'll watch it, but I'm not going to do it, but I'll watch it. Option number four is what I'm encouraging, which is let's surf this wave. We can choose to use it as a power for good, or we can ignore it and it might become a power for, I don't know if evil's the right term, but expanded liability. My next-door neighbor is head of IT at a major manufacturer, and they had to turn it off in their company, and the HR people were the ones that said, "We've got a liability here that you haven't considered." They have an internal programming staff creating programs in a proprietary programming language to guide their CNC machines and their routers. Fine. Part of the programmers' job is to fix bugs.

They discovered that the programming staff was speeding things up by pasting entire chunks of their proprietary code into the prompt line of ChatGPT and asking, where's the bug? Find it. And ChatGPT would, but ChatGPT remembers what's in your prompt. It holds it in its database because it's a chat. Therefore, and here was what HR spotted, hey, wait a minute, we are taking chunks of our proprietary code and putting it who knows where. It's outside our control, so we need to stop until we get a better policy. Right now they're trying to figure out what the policy is. Okay, next steps: I have a prediction.

Wendy Fong:

Okay, what's your prediction?

Karl Ahlrichs:

It's going to be a messy couple of years, but you the listener, your organization, will probably have a data warehouse of, let's call it, clean data, and your generative AI tool will be allowed to go into this fenced-off area of clean data, or perhaps Google may sell access to a fenced-off area of clean data. Because we in HR, for instance, are very aware of diversity, equity, inclusion, and belonging. Say I'm using a generative AI tool to create a benefits guide, telling some case studies for this organization, and it's typing these up. It's pulling the source material from existing things on the internet written by flawed people over the years and reflecting their biases.

In these case studies, scraping that information, the generative AI tool assigns male names, titles, and pronouns to the managers and female names, titles, and pronouns to the receptionists. We've got bias creeping in, and it shouldn't be there. We are going to see real attention paid to the embedded biases in all this data that the generative AI programs are pulling, the raw material that they then fashion into what they show you. The biases that we have entered over the decades are there, and very dangerously so.

Wendy Fong:

When it generates that guide, for example, and it's pulling from existing sources along with the biases, so it's not necessarily creating new... I don't know if you could label it new.

Karl Ahlrichs:

No.

Wendy Fong:

Like a new policy or a new forward-thinking policy.

Karl Ahlrichs:

No.

Wendy Fong:

It's just taking sources and summarizing them together in the way that you need them to be summarized.

Karl Ahlrichs:

And if you type into the prompt line, tell me the future, it will say, "I cannot do that, Dave."

Wendy Fong:

It's not a magic ball.

Karl Ahlrichs:

No. It depends on how much of a subscription you have and which resource you're dealing with. Generally, they have a lag time of about a year to a year and a half, where if you ask them, what is the current IRS regulation on worker classification? It'll say, "Well, as of 18 months ago, it was this, but in that window of this year, it's pretty silent." That will change, but that's where it is now. Also, just as a point of importance, there are tools for almost everything sitting out there.

We've been talking mostly about text just because I'm mostly a writer, but I'm also a photographer. If you see one of my new presentations, most of the illustrative images in the presentations are kind of cool. They're cartoonish or they're photographic, but they're not stock photography you've ever seen, because I am using an image generator to generate them. I'll type in, I need a harried, midlife, mid-career professional woman in a suit at a desk surrounded by a blizzard of governmental regulations on human resources, and she's got her laptop open and she is using the laptop as a lifeboat. Go. And it draws it.

Wendy Fong:

Oh, wow. It's an artist too.

Karl Ahlrichs:

Right. So we've got text generation, we've talked about that. There's image generation. There are programs like Midjourney, which is the one I've been using. Also DALL·E. There are style transfer models where you can cut and paste something that you have created and say, "Write this in a completely different style," or, "Here are two documents, match their style," and it goes through; it doesn't change the content, it changes the writing style.

Music composition. There's MuseNet, where it can compose original music to use as a background in your narrated PowerPoint, and you own that intellectual property then; you don't have to pay copyright fees. Video synthesis. If you go to YouTube and type in, show me AI-generated videos, they're out there, and some of them are difficult to distinguish from the real thing. About two months ago, for instance, there was a brief AI-generated clip of the Pentagon on fire, and it looked real.

Wendy Fong:

That was scary.

Karl Ahlrichs:

And that got posted and boom, the stock market dropped like several percent, just by seeing that video briefly on YouTube. Nobody had vetted it. Nobody had any provenance on whether it was real or not, but it blipped billions of dollars out of the stock market. Allow me to go on, face generation, data augmentation, storytelling, programming, code generation, video game design, interior design, product design. Let me make a blanket statement and then we can move on a little bit: This is not as big as the arrival of the cell phone. It's bigger. It's not as big as the arrival of the fax machine. It's bigger. A futurist I talked with shrugged and said, "The most equivalent pivot point in society that this will be like is back in the 1500s with the arrival of the printing press, which changed everything. It changed religion because parishioners could now own a Bible themselves. This opened up all of the Renaissance. This is going to be that big."

Wendy Fong:

Wow. I have also heard fears from people of jobs completely changing.

Karl Ahlrichs:

Interesting point. Here's what I think. I've asked several people in the workforce area who is going to be hurt by this and who is going to be helped by this? And the answer they gave me was kind of cryptic. The people who will be hurt are the people who choose not to learn it. Interesting.

Wendy Fong:

That's definitely a fair statement because it's coming down the pipeline whether we like it or not, and we have to accept it, that it's going to be a part of our everyday lives and culture.

Karl Ahlrichs:

I'm giving advice to the HR professionals that are listening. Your organization may have valid reasons for not allowing it. I gave you the example of the programmers who were cutting and pasting code. For those who do use it, you have to figure out what the standards are for sharing with your clients what you've done, because it's not always right. Wendy, do you remember the case? It was mid-summer of 2023. There was a lawyer who prepared a brief that was submitted to a judge on why the defendant should not go to jail, and the brief was completely created by a generative AI product, and it cited cases to support the point that this person should not go to jail. The cases were fictitious; the generative AI product somehow made up cases that did not exist.

Wendy Fong:

Oh, wow.

Karl Ahlrichs:

The document was not proofread and the attorney I think has been fired.

Wendy Fong:

That definitely makes sense.

Karl Ahlrichs:

I've been using this for a while. I've been using generative AI for three or four years. It's wrong sometimes, and people will always be needed to apply wisdom. The generative AI can assemble some knowledge, but the final step is applying wisdom to it.

Wendy Fong:

It's a tool that can help us, but it should not be the end-all, be-all, as it sounds like there are some ethical concerns and even accuracy concerns that come up with using it.

Karl Ahlrichs:

Yeah.

Wendy Fong:

Yes. I did want to dive more into how HR can surf this wave. Let's deep-dive into that.

Karl Ahlrichs:

Step one, get to know it. We in HR have to be better communicators. I would propose the first thing we use this for is better communication; we can create better materials using it, because we can have it do some rough drafts, we can have it create some zippy-looking illustrations, and we can have it generate a short instructional onboarding video, because we in HR often don't have big budgets. Here's a tool that can generate some big-budget-looking stuff where we haven't had to have a team of artists drawing. We haven't had to have a videographer come in and do some animation models. We can now ask it on a prompt line, I need a two-minute video of this, this, and this, looking like this, in these colors. Go. And it will come back with something.

Here's the term that is the important term, both in our own world and in hiring others: prompt engineering. We as HR professionals need to become good prompt engineers. What do I mean by that? I've already done a little bit of this. First question: I need a quick report on the Department of Labor standards for job classification, and I get too many pages and it's written too formally. Prompt engineering then is to go back and say, okay, I need something that's 600 words long. In addition to the Department of Labor, I want you to work the Darden report into it. Then, I'd like it written in bullet-point form with a summary at the bottom. I'm prompt engineering what the output's going to be.
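
As a concrete illustration of that prompt-engineering loop, here is a minimal sketch in Python using the OpenAI SDK (the openai package, with an API key set in the environment); the model name and the exact prompt wording are assumptions for illustration, not something from the episode. The point is that each refinement is just another message appended to the same chat.

```python
# A sketch of iterative prompt engineering as a chat: ask, look at the draft,
# then send a follow-up that tightens the length, format, and sources.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

conversation = [{
    "role": "user",
    "content": "I need a quick report on the Department of Labor "
               "standards for job classification.",
}]

first = client.chat.completions.create(model="gpt-4o", messages=conversation)
draft = first.choices[0].message.content

# Because it's a chat, the refinement can reference the earlier answer.
conversation.append({"role": "assistant", "content": draft})
conversation.append({
    "role": "user",
    "content": "Thank you. Make it about 600 words, work the Darden report "
               "into it, and use bullet points with a summary at the bottom.",
})

refined = client.chat.completions.create(model="gpt-4o", messages=conversation)
print(refined.choices[0].message.content)  # a human still has to verify it
```

Whatever comes back, Karl's caveat still applies: audit it before it goes anywhere near a client.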

Wendy Fong:

Essentially being more specific about the end product of what you're looking for so it doesn't go all over the place.

Karl Ahlrichs:

Don't forget this is a chat, but that chat process has been given a name: prompt engineering. There's a term for it. And with new applicants, look on their materials for the term prompt engineering. You want people that know what that is.

Wendy Fong:

Prompt engineering. Okay, got it. Still have to proofread and audit the end results too.

Karl Ahlrichs:

Absolutely. Your wisdom is crucial to this. HR's wisdom is needed, but there's a lot here. If we can become better communicators, and this is a way we can get there, because we're at a time with a tight workforce, we've always said we're going to work smarter, not harder. Here's going to be a way you can work smarter, if you understand the tool and if you understand the ethical guardrails we need to work within with this tool. Just for fun, knowing we were going to talk about this today, I went to ChatGPT and typed in, how do I backdate an Excel spreadsheet so people think it was actually older than it is? Instead of answering the question, the generative AI program gave me a lecture on ethics.

Wendy Fong:

Interesting.

Karl Ahlrichs:

And morality. That's bad, you should not do that, you should seek professional help.

Wendy Fong:

Well, the ethical response it gave you, that ethics logic, was that programmed into the AI's code, or did it come to that conclusion itself based on data and information it pulled?

Karl Ahlrichs:

Good question. I'm not sure. I think that at this point, it's following instructions from humans. What concerns me is, when does it start building its own instruction set based on history, based on results, but not having human interaction? So this becomes kind of a science fiction thing.

Wendy Fong:

It does. More AI-generated responses based on more AI-generated responses.

Karl Ahlrichs:

I'm only half kidding when I say you may have noticed when I'm doing my prompts, when I'm guiding it through prompt engineering, the first thing I say is thank you. I want the computer to remember that I'm friendly so that if they take over the world, I'll be treated well. I'm kidding. But we're on the threshold of something big here, and by us being aware of the importance of doing the right thing, by us being aware that our ethical standards need to remain high, we have to figure out where the line is on full transparency. Okay, Wendy, let's pretend you and I are in a relationship and it's your birthday and I'm clumsy with words, so I ask a generative AI program to write a poem. Write a romantic poem, three stanzas. Wendy, are you a gardener of any type? Are you a cook?

Wendy Fong:

Yep. I love cooking.

Karl Ahlrichs:

Okay, great. Write me a romantic three stanza poem about Wendy who likes to cook. Go. It writes this just beautiful poem and I print it on nice paper and I leave it at your door with a rose. Ethically, should I tell Wendy that I didn't write that?

Wendy Fong:

That is a good question. Well, me, if we're in a long-term relationship, I would say the first question that pops in my head is, this is not Karl. I would already know Karl didn't write this. Someone else wrote this.

Karl Ahlrichs:

Also, how's that different than me just spending 30 minutes at CVS looking for the perfect Hallmark card?

Wendy Fong:

But I know it's Hallmark that wrote the card.

Karl Ahlrichs:

That's right. It gives the impression that Karl's more creative than you thought. I have used it as a tool to solve problems. I got a phone call two months ago from a panicked program chair who had seen me present at SHRM and said, "Oh, Karl, Karl, we just had our opening speaker to our association conference cancel. Could you come in? I know your platform skills are good. Could you come in and deliver a 30-minute industry update talking about trends in our industry?" And I said, "What's the industry?" And they said, "Funeral director supplies, caskets, urns, everything that happens in a funeral home." I went, "Nah-"

Wendy Fong:

That's very specific.

Karl Ahlrichs:

... "let me call you right back." I went to a generative AI program and said, "Can you give me five current leading edge trends? Go back three years, but what are the trend lines in this specific industry?" And boom, boom, boom, boom, boom with the asterisk that nothing was more recent than 18 months ago, but here's all the trend lines from the start of COVID: That casket sales were way down because of people not doing funerals in COVID, but now they're picking back up again; The steel supplies are an issue and things are shifting here and there. Then I went into other forms of research and confirmed this one's true, this one's true, this one's not. I called the woman back and said, "I'm in. I can do it." That took me two or three hours. Without generative AI, it would've taken me two or three weeks. They needed it in a couple of hours. Did I tell her what I had done? Absolutely.

Wendy Fong:

Does the generative AI give references at all, or does it just tell you the summary?

Karl Ahlrichs:

Interesting. You can ask for that. You can ask it to show its sources and you'll get some; you'll get a source after each bullet point. Click on each one and make sure it's still current.

Wendy Fong:

Oh, interesting. Then I would wonder, would you cite those sources in addition to the ChatGPT source?

Karl Ahlrichs:

I would include valid sources in the body copy of what I'm submitting.

Wendy Fong:

That makes sense. I could see it being a positive tool, like in that example of how you used it for your presentation, and it saved you a whole lot of time.

Karl Ahlrichs:

And the power is there. What we have to build, and this is why I love coming into organizations and helping build their culture of accountability, build their comfort with doing the right thing, build the process of solving an ethical dilemma, is so that their employees feel they have the tools to work out what the right thing is. That's what I'm going to be doing for the rest of my professional life, because we have to stay ahead of this. We have to have this be handled in an ethical and consistent way. Because don't forget, in human resources, we have to walk the line between being fair and being consistent. Those are different. If it's just consistent, we come up with a sentencing table and follow the grid. Being fair is actually considering all of the gray zones that emerge when you're doing an investigation about somebody who's done something boneheaded. You have to figure out, is this a termination moment or is this a teaching moment?

Wendy Fong:

And that's where the wisdom comes in that you mentioned.

Karl Ahlrichs:

Yep. That's why I've got gray hair.

Wendy Fong:

What other strategies would you recommend to HR professionals in approaching this?

Karl Ahlrichs:

Okay, the first strategy was learn the thing. The second key point is know your business, to see where you are at risk and to have approved areas where this is fine to use. Say you've got client-facing people who have to write some challenging correspondence: Dear Mr. Schmirdlap, you've been a great, valued client all these years. We're going to have to raise your rates 10% on your benefits. We don't want to lose you. You can enter that into the prompt line and it will generate a very diplomatic, good first draft of that letter. The customer service representative can then modify it and add the human touch, because generally this stuff comes off pretty passive-voice, pretty lifeless. Figure out the tool, figure out your organization, and figure out where it might work.

If someone wanted to reach out to me, I've got a list of like 20 different possible applications, department by department, in a professional services firm. In the consulting department, it could help write financial forecasting models. It could also do contract analysis, where you could cut and paste a proposed contract that the client has sent you and have it go through and highlight where the problems are. In the management accounting department, looking at budget variance analysis, or having the first pass of the employee benefit plan audit done by a generative AI product. We could use it for resume screening and matching. Danger. Biases.

Wendy Fong:

And then there's the biases again, yep, that you mentioned.

Karl Ahlrichs:

You bet. Employee feedback analysis: we've got all this employee feedback, read all of it and tell us what the top five themes are. Oh, that's a cool idea. See, there are some cool ideas out there. There are some dangerous ideas out there too, like resume screening, where it can fall into some bias traps.
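
Here is a similarly hedged sketch of that "top five themes" idea, again in Python with the OpenAI SDK; the prompts, model name, and sample comments are made up for illustration. In practice the comments should be anonymized first and, per the proprietary-code story earlier, checked against your policy before any internal data leaves your control.

```python
# A sketch of summarizing employee feedback into top themes with a
# generative AI model. Illustrative only; review the output by hand.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

feedback_comments = [
    "Onboarding was confusing; I didn't get my laptop for a week.",
    "My manager gives great feedback in our one-on-ones.",
    "Benefits enrollment instructions were hard to follow.",
    # ...in practice, hundreds of anonymized comments
]

prompt = (
    "Here are employee feedback comments, one per line. "
    "Summarize the top five themes as short bullet points:\n"
    + "\n".join(feedback_comments)
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```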

Wendy Fong:

Now, even performance management, can you ask it to write your performance review?

Karl Ahlrichs:

Wow. Oh, man. That's both scary and cool. Then the person who gets it uses generative AI to read it. I'm kidding, but maybe I'm not.

Wendy Fong:

Would AI ever get to the point where it can learn your tone of voice and writing style to start writing?

Karl Ahlrichs:

Already there.

Wendy Fong:

Interesting.

Karl Ahlrichs:

The danger in this, from a cyber threat standpoint: one of my clients had their HR administrator get an email from the CEO, written in the tone of voice of the CEO, saying, "Hey, I'm doing some workforce planning for next year. Could you send me a big old Excel spreadsheet that has everybody's data, all of their data? I want their dates of birth. I want everything." They sent it, and it wasn't their CEO. Now they had a data breach, and the reason it fooled them was because the AI had learned exactly what you're describing. Bad people have access to things that we are surprised at. But back to the top here, the best thing people can do is have an effective culture of accountability and ethical behavior; that the HR professionals and the financial professionals know how to solve an ethical dilemma; that they have a process for doing it; and that the employees feel they are being treated fairly and consistently.

There are times where, let me put it this way, the punishment should fit the crime. If somebody who is held to a very high standard because of their position does something boneheaded and doesn't apologize for it, that could be a termination moment. As opposed to: somebody on the shipping dock got manipulated by somebody who should know better. Punish the somebody who should know better; for the person who got manipulated, that becomes a teaching moment.

Wendy Fong:

Well, as we're talking about all these different ways that AI could possibly be used, and already is being used, it's scary. All these fears come up in my head: what if, what if, what if? And I'm sure people are thinking that too.

Karl Ahlrichs:

Hey, Wendy, I want to assure you that this tool can be used for wonderful, good things and that we need to up our game on knowing when the ethical dilemmas get solved.

Wendy Fong:

How would you recommend, how do we improve that, the wisdom and critical thinking skills? It's another layer to the strategy of dealing with AI.

Karl Ahlrichs:

Well, actually the first step is what you're doing today, and I applaud you for doing that. The first step is to talk about it, to become aware of it. We have faced crises before as a planet. We had an ozone hole that was being caused by chlorofluorocarbons, and we got the world to stop using them as much. The ozone hole, while still there, isn't as big. We can really tackle global problems if we know what the problem is. The first step in this is everybody learn it. Then the second step is have me come in and teach critical thinking.

So I challenge everybody: get in there and have it write a poem for your loved one, and tell them you didn't do it. But I heard a story about somebody who got improperly fired by artificial intelligence, because there were systems in place. It was a 1099 employee who was leading a data conversion at a major technology firm. This was a consultant brought in from outside for two years to do the complete data conversion. They came in on a Saturday to work and things went terribly wrong. What had happened was, every 90 days the contract had to be renewed. It was the 88th day, and the one person who pushed the button in the HRIS system to renew the contract happened to be on a one-week vacation. The button didn't get pushed, so the contract wasn't renewed. The AI took over and removed that person from the rolls of current employees.

Dude shows up on a Saturday, his key fob doesn't work. Huh? Weird. He tailgates in with someone else, gets to his desk, his login doesn't work. Huh. Well, he's like head of IT, so he just hacks around it and gets in as a guest. Both of those are reported by the system, and an armed guard with a gun is sent to his desk and marches him out.

Wendy Fong:

Oh, my goodness.

Karl Ahlrichs:

That was on a Saturday. On a Sunday, the president is called. The president gets in his car and drives to the guy's house and begs him to come back. The guy says, "No. I've been marched out with a uniformed officer with a gun in front of my people. I can't go back." It crashed the data project.

Wendy Fong:

Wow. That's a very extreme story of how it could not work in your favor.

Karl Ahlrichs:

Thank you. This is where we in HR have to understand these tools.

Wendy Fong:

Absolutely. Well, to recap, Karl: one, you said learn how to use ChatGPT and generative AI. Talk about it more, talk about next practices, look at your current policies and procedures, and see how you can use this tool to improve everyday workplaces. Also, like your example about the proprietary data, look at where in your organization you have to implement policies saying you cannot use this tool, or need to cite this tool, and so on, especially when working with clients. And also, as HR professionals and people managers listen to this episode, whether they're aware of generative AI or not, think about how to get the C-suite executives, the top leaders, on board as well and aware of this issue.

Karl Ahlrichs:

Thank you. That's a good point. Something I have been quietly doing at Gregory & Appel, where I work, is individually approaching the boomers, showing them, and having them go, "Oh." When they get shown, they get it.

Wendy Fong:

And you provided a lot of examples of how we're already seeing this work positively in the workplace, and how it can backfire negatively too. You could bring those case studies.

Karl Ahlrichs:

If everybody's comfortable in doing the right thing, if everybody has ethical leadership and a culture of accountability, this is going to be pretty straightforward. If you're in an organization that doesn't have those, let me come and help you get those. That's what I want to be doing for the rest of my career.

Wendy Fong:

No, that makes sense. You really need that to be the foundation of your organization in order to move forward in how you respond to generative AI in the first place. That's a very important value to have. Absolutely. I really appreciate you being a guest, Karl. This was such a fascinating conversation, and I know this is not the last time we're going to be discussing this, so I appreciate you taking time out of your busy schedule, and I want to thank all the listeners for listening. As we mentioned at the top of the episode, this episode is eligible for 0.75 PDCs toward your SHRM-CP and SHRM-SCP recertification. After you finish listening, enter this activity ID into your SHRM certification portal: 23-5, the letter F as in funny, U as in umbrella, P as in popcorn, and X as in x-ray. 23-5 FUPX.

If you haven't already, please subscribe so you'll never miss an episode. Be sure to rate and review the show wherever you listen to podcasts. I'm also on LinkedIn if anyone wants to connect with me. If you want to learn more about Honest HR podcast or other SHRM podcasts, just go to shrm.org/podcasts. Until next time, be kind to yourselves and each other and wish you all well. Peace out.