Mark Stelzner is the Founder and Managing Principal of IA, an HR consulting firm. In this episode, Mark discusses what people aren’t talking about when it comes to AI; why organizations should let their employees safely explore AI in the workplace; and how basic AI guidelines and data governance can be transformational.
This conversation took place at the HR Tech 2024 conference in Las Vegas.
[0:00] Introduction
[2:47] What are we missing when it comes to AI?
[11:41] How can organizations safely explore AI?
[22:49] What role does data governance play in an organization’s AI rollout?
[32:26] Closing
Connect with Mark:
Connect with David:
Podcast Manager, Karissa Harris:
Production by Affogato Media
Resources:
Announcer: 0:01
The world of business is more complex than ever. The world of human resources and compensation is also getting more complex. Welcome to the HR Data Labs podcast, your direct source for the latest trends from experts inside and outside the world of human resources. Listen as we explore the impact that compensation strategy, data and people analytics can have on your organization. This podcast is sponsored by Salary.com, your source for data, technology and consulting for compensation and beyond. Now here are your hosts, David Turetsky and Dwight Brown.
David Turetsky: 0:38
Hello and welcome to the HR Data Labs podcast. I'm your host David Turetsky, we are live recording from the HR Technology show here in Las Vegas, Nevada, the beautiful Mandalay Bay exposition center. Today, I have the absolute pleasure again to talk to Mark Stelzner of AI, sorry, IA!
Mark Stelzner: 1:00
Well, that was prescient!
David Turetsky: 1:02
Exactly! Well, really, I mean, could you rebrand and just change the letters around?
Mark Stelzner: 1:07
I'm intelligently artificial. So, I'll own that branding. Good to see you my friend, how you doing?
David Turetsky: 1:13
I'm okay! How are you?
Mark Stelzner: 1:14
I'm well!
David Turetsky: 1:15
You look well!
Mark Stelzner: 1:15
Thank you! Well, you know, looks can deceive us sometimes. So
David Turetsky: 1:19
Well, hopefully, hopefully it does, everything's okay.
Mark Stelzner: 1:22
Thank you. I appreciate it.
David Turetsky: 1:23
So Mark, you know how this works. What's one fun thing that no one knows about you?
Mark Stelzner: 1:28
Boy, um, what's one thing? I have torn both my rotator cuffs in the last two years. And actually, maybe a more fun story than my klutziness, well, this is klutziness, I broke my toe six months ago moving a Peloton. So little known fact, if you've ever had a Peloton or used one, there's little tiny weights that are hidden under the seat. And when you kick the Peloton up to a 70 degree angle to get the little janky wheels to start to move, those weights have a gravitational pull.
David Turetsky: 1:59
Oh, my God.
Mark Stelzner: 2:00
And a well placed toe can catch those weights and break into pieces. So I'm just getting more klutzy. I need to be covered in bubble wrap. I'm not sure what's going on with me right now.
David Turetsky: 2:12
You should go play hockey, and then you have all the equipment on!
Mark Stelzner: 2:14
I should wear that all the time.
David Turetsky: 2:16
Yeah
Mark Stelzner: 2:16
So, but trying to, trying to thrive and survive despite my own self interest so
David Turetsky: 2:21
Well, as we get older, I find that I'm doing klutzy things as well. And it's, you know, I used to be really coordinated. I guess I'm not anymore.
Mark Stelzner: 2:29
I used to think I was and now I'm questioning my own memory, but, but I'm starting trying to stay upright and glad that, glad to be back in action!
David Turetsky: 2:36
Well, and glad that you're here at the HR Technology show.
Mark Stelzner: 2:39
Thank you!
David Turetsky: 2:47
So Mark, I think you see on every single booth here, and there are a lot of booths, you know that the topic of conversation that is on everybody's mind is around AI.
Mark Stelzner: 3:00
I've heard of this! This AI thing, yeah,
David Turetsky: 3:02
And your branding is very, as you said, prescient.
Mark Stelzner: 3:04
Right?
David Turetsky: 3:05
But let's talk beyond the hype cycle, what else aren't we talking about from an HR technology perspective, and it could include AI. But what aren't we talking about these days?
Mark Stelzner: 3:15
It's funny, I was just in a meeting, actually, with a very large organization, and one of the things we spend a lot of time on is what I'll call client readiness. There is no shortage of tools, technology, capability in our market. It is another solid day of innovation, and frankly, a lot of noise in the system. Our clients are struggling to separate signal from noise and figure out how to thread and unify all these wonderful capabilities into a journey for their employees. And one of the conversations we had, and I think the cynicism that's starting to grow, is: how do we apply these tools to real use cases, for real people, for real value, and in a way where it can be trusted?
David Turetsky: 3:58
Right
Mark Stelzner: 3:58
And it can learn and it can grow and it can be trusted. But part of how HR is organized is the fact that really, there's nobody that owns AI, as it were, as a capability; our function owns end-to-end processes. So part of what we spend a lot of time on is, how do you deconstruct and reconstruct journeys? How do we apply it to a multitude of personas? Our temptation in our industry right now is to talk about employees, but what about candidates? What about pre-hire? What about onboarding? What about family members, however that's defined? What about alumni? Everybody moves through that journey, and has moved through that journey multiple times in their career. And then we apply the cyclical or point-in-time or moments-that-matter processes, however you define it. So where does AI thread in? And one of the biggest issues, one of the biggest opportunities we see, for example, is what I'll call authoritative content. If an organization has not invested in content that is accurate, content that is refreshed, content that is in the language that one wants to consume, an AI model will hallucinate, as AI people like to say, and it will infer the wrong information or present you with the wrong information, because the large language model is sourcing from content which is not truly authoritative.
David Turetsky: 5:13
Right
Mark Stelzner: 5:14
And we work with really large, complex, global organizations, and part of the problem is nobody really owns content either!
David Turetsky: 5:23
Well, I mean, there are pieces of content that learning owns, there's a piece that OD owns, there's a piece that comp owns, but those are disparate.
Mark Stelzner: 5:31
Yes. And so where should it live? What's the repository? Is it even possible to imagine a repository where content would live? Do we have enough time and protection to update our policies and our programs, and I'd say even the marketing and sales pitch associated with performance and merit? The answer is typically no.
David Turetsky: 5:52
So even an HRMS doesn't usually hold documentation of process; it gets instantiated and implemented with process.
Mark Stelzner: 6:02
Exactly, exactly so. So if we could solve authoritative content, then we can put some of these AI tools against it, but then we have this notion of permissioning and consent.
David Turetsky: 6:13
Yeah.
Mark Stelzner: 6:13
And a lot of the debates in the organizations we work with is, is AI about opting in, or is it about opting out?
David Turetsky: 6:20
Does that matter, depending on where they are? Is it California versus London versus Paris?
Mark Stelzner: 6:25
I think it could. I think based on privacy laws, GDPR, of course, any of the protections that are here in the States and hyper localized around the world, it should matter. But I think philosophically, organizations haven't aligned on where do we believe we know better than you, right? Rightfully so, perhaps that you're not taking advantage of these amazing programs, or you're not consuming the information in the way that we would expect. And we want to use AI to push information. We want to use AI to activate other modalities. But how do you decide which modality you prefer?
David Turetsky: 6:55
Right
Mark Stelzner: 6:55
I might like text right now, but depending on the use case, I still might need to talk to a live person.
David Turetsky: 7:00
Right
Mark Stelzner: 7:01
I might want to open a case through the case management tool. I might want to have a live chat, or I might want to just engage with a virtual agent
David Turetsky: 7:08
Right
Mark Stelzner: 7:08
But it depends on the content and the context in which I'm actually leveraging the tool in my employee journey.
David Turetsky: 7:13
Can I ask you a question that may foundationally change what you just said?
Mark Stelzner: 7:17
Sure!
David Turetsky: 7:18
You had mentioned who the owner is of all this, shouldn't the owner really be IT, since this is a technology play?
Mark Stelzner: 7:24
It's a really good question. I think IT should own the policies associated with AI.
David Turetsky: 7:32
Okay
Mark Stelzner: 7:32
So we work with Microsoft. Microsoft has the office of responsible AI.
David Turetsky: 7:38
Sure
Mark Stelzner: 7:39
ORA, as they refer to it, right? It's not only for their first-party tools, but it's also for the ingestion of Copilot, in this instance, for their own employee journeys. Without defined parameters through which AI can be employed, it's a little bit of the Wild West. So we need our technology partners to establish governing principles and policies and practices that give us the guardrails through which we can then activate.
David Turetsky: 8:03
Yeah. Right
Mark Stelzner: 8:03
But that, in and of itself, is evolving quickly, so they, they being our technology partners, need to have dedicated resources that wake up every day thinking about this on behalf of all the functions. We have a client we work with who, when they sent us our contract, it had AI in the contract: you were not allowed to use anything that you learned from us for the purposes of a model. Well, guess what the project was about? AI models! The irony, or cognitive dissonance, isn't lost on me, but it's happening so fast. We all have various belief systems around the value proposition.
David Turetsky: 8:42
Right
Mark Stelzner: 8:42
It is being pushed into our inboxes and our brains through every medium that one can imagine, and it does have a ton of practical value.
David Turetsky: 8:51
Right
Mark Stelzner: 8:52
But organizations need to pause for a moment and get themselves ready to ingest, deploy, learn; ingest, deploy, learn. And that's one of the barriers I'm seeing at this moment. Yeah.
David Turetsky: 9:02
But that requires what you were talking about before, like ORA. That requires an organizational response that doesn't take into consideration that we're all consumers.
Mark Stelzner: 9:13
That's right
David Turetsky: 9:14
And we all hear the hype cycle about ChatGPT-4o and Copilot and Gemini and everything else, yeah. And there are a lot of people going rogue, quote, unquote, and doing it on their own.
Mark Stelzner: 9:23
That's right
David Turetsky: 9:24
Even asking questions that may actually cause risk management to either be furious or to ask, you know what you're doing?
Mark Stelzner: 9:33
100%, 100%. But like everything in the consumer market, everything that you see in this hall, this vast hall that we're in, there are consumer expectations. We all have consumer expectations. Nothing is more frustrating than using a consumer-like experience to bring me into an organization, only for me to slip into a wormhole from 40 years ago, because we don't have device enablement! And I'm talking about sort of basic tenets of bringing technology into the hands of populations. We have a lot of frontline workers that we work with, and they don't have, yes, they're on Active Directory for badging, but they don't have email addresses. They're not allowed to bring devices for wage and hour concerns, as you certainly know. And so with all this wonderful tech, I talk to our clients about the addressable market. Like, what is the addressable market? Who is the actual consumer for these tools? If it's more for HR, we're actually increasing the dependency on HR!
David Turetsky: 10:28
That's right
Mark Stelzner: 10:28
Versus necessarily bringing a different level of activation with our guidance, with our ethos, with our culture, through bringing these tools to the front line, to the best of our ability, where people should own their career!
David Turetsky: 10:38
Right. You're stressing out HR by trying to learn a new trade, really, by trying to train a new colleague, quote, unquote, or a new productive worker, which might be an agent.
Mark Stelzner: 10:49
That's right.
David Turetsky: 10:50
And are you giving them something that is offloading something that's useful to them to get their job done, but giving them the tools to enable them to really see that? Are you just stressing them the hell out?
Mark Stelzner: 11:01
Well, and with all the talk, and I need to talk to Stacia and other wonderful minds here about skills, with all the talk about skills and skills taxonomy: what are the skills that we believe AI should bring to us as that complementary agent, right? And therefore, what are the skills we no longer require that should be redacted from the job description? As AI gets smarter, maybe we don't need those skills, or the same application of those skills at scale!
Announcer:
Like what you hear so far? Make sure you never miss a show by clicking subscribe. This podcast is made possible by Salary.com. Now back to the show.
David Turetsky: 11:41
But I'll argue with one thing. I think the person's job description changes, and I think that skill and those duties then need to get added to the agent's description. And I know there's a religious war going on online at LinkedIn right now, about one company that I'm not going to name,
Mark Stelzner: 12:04
yes, yes, yes.
David Turetsky: 12:04
That tried to portray AI as being a worker,
Mark Stelzner: 12:09
Right, yes
David Turetsky: 12:09
being hired, and then literally, people came out of the woodwork screaming at this company. You know, what kind of hype bull crap is this? Are you going to pay it? Or is it going to fall in line with FLSA and wage and hour regulation? Blah, blah, oh my God. Like, this is the future!
Mark Stelzner: 12:29
And the future is here. My wife is a professor of fashion and design at SCAD in Atlanta.
David Turetsky: 12:35
Oh, cool.
Mark Stelzner: 12:37
She uses AI every day for what she does. Now think of what she's doing. She's looking at it for inspiration. She's looking at it for creative work. She can actually build patterns. She can create a runway and see how the products flow. She can change the garment type. She can change the sizing. She can play with the lighting. She can put jewelry on it. She can emulate the entire production life cycle using a variety of tools. Now they're not perfect, as tools are imperfect. But at the same time, she sat down with her seniors, she just told me this last night, and said, Listen, you're getting ready for your senior collection, which is the culmination of your experience here at our university. Why aren't you using AI?
David Turetsky: 12:51
Wow!
Mark Stelzner: 13:17
Why aren't you applying it? And which is fascinating, because you would think,
David Turetsky: 13:21
yeah, it would be the other way around.
Mark Stelzner: 13:23
A lot of creatives say, well, the creative part is what I should be doing, right, right? It's the non creative piece, but it's everywhere. It has endless applications, and once you play with it, and I do think people need to play.
David Turetsky: 13:36
Yeah.
Mark Stelzner: 13:36
So to your point about InfoSec losing their minds, or risk management losing their minds: how do we create a safe playground? Give parameters in which we want to encourage our employees to play and experiment, and learn from that experimentation, if for nothing else, to give them another data point that they could apply to their work.
David Turetsky: 13:55
I think there are serious guardrails that need to be put in place, because the intellectual property that gets created by the AI
Mark Stelzner: 14:06
That's right
David Turetsky: 14:06
is a very big gray area right now. Actually, like, if you use an Adobe product, they show you what the licensing terms are right away.
Mark Stelzner: 14:14
With citation, yes
David Turetsky: 14:15
Exactly. And like, I've used AI to create artwork in Illustrator.
Mark Stelzner: 14:22
Yeah
David Turetsky: 14:23
And I worry about the intellectual property that that creates, and whether or not I really own it, or whether someone else, to your point, someone else using the exact same terminology,
Mark Stelzner: 14:34
That's right
David Turetsky: 14:34
can make that exact same image. And then there'd be an issue about who owns that.
Mark Stelzner: 14:42
One of the skills we talked about two years ago that we want the people function to develop is prompt engineering.
David Turetsky: 14:48
Oh my god, yeah.
Mark Stelzner: 14:50
And it's funny, we have someone on our team who's an expert in this. And a few years ago, the one thing that he told me that I never forgot, that I hadn't realized when I was just starting to play with these tools, is starting with the prompt of, imagine you are a blank. The first prompt should be you telling the tool who you are emulating,
David Turetsky: 15:11
Wow!
Mark Stelzner: 15:12
and how we want the tool to actually behave. Imagine you are a people leader in a manufacturing firm in this vertical market; we're having an issue with employee retention.
David Turetsky: 15:13
Absolutely.
Mark Stelzner: 15:19
And so the order in which one asks questions matters, you know. And if you don't tell it to imagine, it will imagine whatever it wants to! We do a lot of interviews; stakeholder interviews are another example of how we apply AI. We used to have two people on every interview. Now I invite an AI agent to every one of my interviews. I ask if people are comfortable, and we use it to record transcripts and sentiment.
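(For readers who want to try it, here is a minimal sketch of the "imagine you are a ..." role-priming prompt pattern Mark describes. It assumes the OpenAI Python SDK; the model name and the retention scenario are illustrative placeholders, not details from the episode.)

```python
# Illustrative only: the role-priming pattern described above, sketched with the
# OpenAI Python SDK. Model name and scenario are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # First, tell the tool who it is emulating...
        {
            "role": "system",
            "content": (
                "Imagine you are a people leader at a manufacturing firm "
                "in a vertical market with an employee retention problem."
            ),
        },
        # ...then ask the actual question, in the order you want it considered.
        {
            "role": "user",
            "content": "What are the top three drivers of attrition we should investigate first?",
        },
    ],
)

print(response.choices[0].message.content)
```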
David Turetsky: 15:48
Right
Mark Stelzner: 15:48
And you know what the tool tells us? It tells us if we have gender bias,
David Turetsky: 15:53
Really? Oh, it does bias testing!
Mark Stelzner: 15:55
It does, it'll tell us. It'll tell us if we've used inappropriate language, if we actually have visual engagement, where we've lost engagement, or where we've gained engagement.
David Turetsky: 16:02
Wow.
Mark Stelzner: 16:03
And we can take that transcript, and I can save it off into a document, and I can load that into ChatGPT.
David Turetsky: 16:08
Yeah.
Mark Stelzner: 16:09
And I can ask a million questions,
David Turetsky: 16:11
Wow
Mark Stelzner: 16:12
about what happened in that interview. And then I can take all the interviews and load those and ask for themes, and it's really good. But what it freed me up to do, yes, you would think, okay, great, that's more efficient, Mark didn't need another worker.
David Turetsky: 16:26
Right
Mark Stelzner: 16:26
But what I really get value out of is that I am fully engaged with the person across from me.
David Turetsky: 16:30
Yes!
Mark Stelzner: 16:31
I am 100% locked in to what we're discussing. I am not distracted with, wait a minute, what? How did they exactly phrase that? I've got someone catching that for me, right? Not bulletproof, but gets me maybe 99% of what I'm looking for, which is invaluable for the work that I do every day.
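(Likewise purely illustrative: a rough sketch of the workflow Mark describes, saving a de-identified transcript off to a document and asking a model for themes. The file name, prompt wording, and use of the OpenAI Python SDK are assumptions, not details confirmed in the conversation.)

```python
# Illustrative sketch of the transcript-review workflow described above.
# The file name, prompt wording, and SDK/model choice are assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# A de-identified transcript previously saved from the meeting tool (hypothetical path).
transcript = Path("stakeholder_interview_01.txt").read_text(encoding="utf-8")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Imagine you are an analyst reviewing de-identified stakeholder "
                "interview transcripts for an HR transformation project."
            ),
        },
        {
            "role": "user",
            "content": (
                "Summarize the main themes in this interview, note where engagement "
                "seemed to rise or fall, and flag any potentially biased or "
                "inappropriate language:\n\n" + transcript
            ),
        },
    ],
)

print(response.choices[0].message.content)
```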
David Turetsky: 16:46
That's brilliant. I mean, I know Zoom and Teams are actually doing something very similar in terms of having that AI agent listening. You have to enable it, of course, and it does tell everybody that this is being recorded and AI is transcribing, but it's just brilliant, because then that enables you to be so much more present!
Mark Stelzner: 17:05
Well, 100%. And that's what I wish we could start talking about: what capability is this unlocking? What that unlocked for me, as I'm doing very senior stakeholder interviews, is, one, transparency, right?
David Turetsky: 17:20
Right.
Mark Stelzner: 17:21
I'm capturing, I'm intending to capture, everything we discuss right now. It'll be de-identified and anonymized, but what you say, and I would say how you say it, is super important for the work that we do in transforming these organizations. Number two, I want your consent that you're okay with that.
David Turetsky: 17:37
Yeah.
Mark Stelzner: 17:37
I had a CTO interview the other day, and he said, I appreciate you asking. I said, Wait a minute, do people not ask you? 90% of the time, it's just running in the background.
David Turetsky: 17:47
Wow.
Mark Stelzner: 17:47
So ask! This is a great example of the type of consent we're talking about. Are people comfortable? Everyone's on a different journey or has a different view.
David Turetsky: 17:54
That's true.
Mark Stelzner: 17:54
But, but my presence and my engagement and the way I would connect with people? Oh!
David Turetsky: 17:59
But Mark, that actually brings up a good point, which is: could those people who aren't asking the question, or the person who says, I'm okay with you doing it, be potentially giving away intellectual property? Again, talking about risk management, they may not agree with that, and they may say you shouldn't have given permission there.
Mark Stelzner: 18:20
That sort of came up in the same organization I was describing, and that was the CTO's perspective. And he asked me, what did other people do?
David Turetsky: 18:29
Exactly!
Mark Stelzner: 18:30
He said, What did my team do?
David Turetsky: 18:31
Yeah
Mark Stelzner: 18:32
I said, half your team said no, and half your team thought it was really cool. And he laughed at that, because they're experimenting too!
David Turetsky: 18:42
Yes,
Mark Stelzner: 18:42
But, but it's, it's the perceived lack of control. And I would say, I mean, as we read the same publications and we're studying our market, of course, as one would, the reality is, is even the originators of these tools don't know these answers!
David Turetsky: 18:56
Right
Mark Stelzner: 18:56
Don't know where that information really could be derived for another purpose, hopefully a positive purpose. It should be protected. It should be localized, it should be isolated. But frankly, the incentive for the tools is to get smarter through usage.
David Turetsky: 19:11
Yes, and that's what bothers me: they, especially the ones that aren't firewalled, are trying to get data from everywhere, as much as they can, to be better at giving the right answer, or giving an answer.
Mark Stelzner: 19:25
But, but I would say, like we as humans, how many apps do you have on your phone? 50? 100?
David Turetsky: 19:31
At least.
Mark Stelzner: 19:32
Yeah. And every time I get an update and the terms and conditions come up, I am in a race with myself to see how quickly I can scroll. What do you mean? Oh, this is one of those where I have to scroll before I hit the button? We give it away. And therefore, when I'm talking to my wife about something, I'm like, why did that just pop up? My answer: well, shocker, I just gave all these permissions away. So we as consumers are giving away our personal information constantly.
David Turetsky: 19:59
Yeah
Mark Stelzner: 19:59
But an organization is different. And what we want to incentivize is we want our employees to tell us more. We want a higher level of engagement, of transparency. We want to know what people are thinking about whatever, in whatever form or format they want to do that. So we want to encourage them to provide this information and sentiment. But people are rightfully concerned about how will it be used, and where could it be applied? But as as humans who live in the modern era, most of us are freely giving away the most personal information one could imagine every day.
David Turetsky: 20:32
Well, I'm not an ethics lawyer, right? And I don't know if you are either.
Mark Stelzner: 20:36
No, I'm not.
David Turetsky: 20:37
But when we go into these meetings with that bot, for example, taking the transcription, we don't know what's happening with that particular transcription, other than the fact that it's maybe going to be played back to us or provided to us later. But we don't know what else is going on with it!
Mark Stelzner: 20:55
Well, and that's why I like how Microsoft has called this responsible AI. Everything's evolving at an unprecedented speed. The investment, look at the valuation of Nvidia. I mean, it's just going to get smarter and faster and smarter and faster. And there are, you know, two sides of the big market: one which is, we need to stop, it's almost too late; and others which say, boy, we're about to unlock something we can't even possibly imagine. And both can absolutely be true and coexist.
David Turetsky: 21:25
And that's what the legislation is trying to do: understand it first, and try to head off the apocalypse.
Mark Stelzner: 21:32
Right, yeah. But what is responsible? And to your point about governance, internal governance: organizations being intentional about the responsible application of these tools in a way that's foundational. And I don't know about you, but my data gets stolen every third week. I think I have credit monitoring until I'm 150 years old
David Turetsky: 21:58
Yeah, absolutely.
Mark Stelzner: 21:58
with the number of letters I'm getting. But I'm upset with it, and I certainly don't want my employer to give away my information without my consent. So we have to come up with mechanisms, methods to the best of our ability, and we have to have governing bodies that provide prescriptive guardrails that determine the appropriate application. Some organizations are pulling way back, and some are saying this is the future, we've just got to find a way to bind it.
David Turetsky: 21:59
Hey, are you listening to this and thinking to yourself, Man, I wish I could talk to David about this? Well, you're in luck. We have a special offer for listeners of the HR Data Labs podcast: a free half hour call with me about any of the topics we cover on the podcast or whatever is on your mind. Go to salary.com/HRDLconsulting to schedule your free 30 minute call today. And I think one of the really key areas that's tangential to this, but key to it, is data governance.
Mark Stelzner: 22:57
Oh, my God yes.
David Turetsky: 22:58
And what data are we using? What data do we have? And especially, as we've talked about in the past, with HR data by the very nature of its existence always being in some kind of flux, how do we provide the right information?
Mark Stelzner: 23:15
And what's fascinating about that is we have some very forward-looking employers that are even thinking about data governance in the context of, when have you left us? Meaning the auspices of what it means in the employee and employer relationship. When do we have to be intentional about the fact that we will enable these capabilities, but you are now outside the walls of this relationship? A great example of this could be an HSA account,
David Turetsky: 23:42
yeah,
Mark Stelzner: 23:43
or your 401(k) account. That's a banking relationship that you have. You own that bank account.
David Turetsky: 23:49
It transcends the employment relationship.
Mark Stelzner: 23:51
Exactly. And I want to be really clear, and I may move you from my bot to their bot.
David Turetsky: 23:56
Right, if it exists.
Mark Stelzner: 23:57
But in the moment, I want to make clear
David Turetsky: 23:57
But there's a transom you pass that says, Congratulations, you now have left our land, and you're entering theirs, and they may have other requirements and consents that you need.
Mark Stelzner: 24:05
That's right.
David Turetsky: 24:11
From being a current employee to being a former employee slash alumnus, or never hire back, or, yeah, we'd love to get them back! But there's that transom you pass where the ownership of that person, the responsibility for that person, on some things, is gone, but in other ways, it transforms!
Mark Stelzner: 24:29
Well, and global legislation for after-hours work notwithstanding, there aren't really clear lines of demarcation. It's really fuzzy in a lot of areas. I think it would take a lot of work, frankly, for many organizations to say, when has this relationship ended? When have you timed out, as it were, of what we would expect of you?
David Turetsky: 24:49
Right
Mark Stelzner: 24:50
And then the classification, of course, of, you know, what type of employee I am veers into that too. But without that, we don't know how to govern the application of some of these amazing tools and technologies.
David Turetsky: 25:01
I want to dive into something you said there, which I'm going to take it in a different way.
Mark Stelzner: 25:04
Sure, please.
David Turetsky: 25:05
You said, when the time runs out. Think about the current employee relationship. And I'm not talking about just more exempt roles or more professional roles, even though that's a really important topic; I'm talking about the two-thirds of employees in the US which are non-exempt or hourly workers. When they leave the office, they're off the clock. If the bot reaches out to them after hours, does that constitute work?
Mark Stelzner: 25:33
And there are a lot of interesting conversations about whether this is a compensatory event.
David Turetsky: 25:39
Exactly!
Mark Stelzner: 25:39
Now, and then it goes to the point, and this is sort of the opt-in, opt-out catalyst as well. If we have offered you a series of capabilities, and you have attested a desire to learn more about, you know, you want to do professional development and move from handling material in the distribution center to being a shift leader in the distribution center, and that requires professional development. It's optional. We haven't put you on that trajectory. We've provided the learning tools and the interventions for you to do so. Is this you developing yourself outside the auspices of work, or is this us paying you to develop yourself? Is this about us driving you toward internal mobility, which, again, we would like for the sake of growth and retention, but you've chosen that. We didn't force you, quote, unquote. And I'm not saying that's the right answer, but I'm saying that's
David Turetsky: 26:32
I don't know if FLSA makes a distinction on that, does it?
Mark Stelzner: 26:34
No, not to my understanding. But those distinctions get really... what about benefits? So you're eligible for benefits. We want to provide you all the tools necessary to consume and be aware of maintaining financial, physical, emotional well-being. We want you to thrive! We want your loved ones to thrive. But if you engage with a tool outside of hours, which is mostly when you're likely to engage with a benefits experience,
David Turetsky: 26:55
Of course, yeah.
Mark Stelzner: 26:56
Is that a compensatory event? Probably not?
David Turetsky: 27:00
But this is where the law hasn't kept up with the times.
Mark Stelzner: 27:04
Yes
David Turetsky: 27:05
Whether it's for a non-exempt or exempt, hourly or salaried worker, it just hasn't caught up!
Mark Stelzner: 27:11
Right
David Turetsky: 27:11
And, I'm gonna say something really rude: all of us that work in professional jobs where we're at least middle management or senior management, we work all the freaking time. There's literally not a time that I don't
Mark Stelzner: 27:25
I'm with you
David Turetsky: 27:26
check my phone, and some days I'm really mad about that, because, and not that I should get paid for that, you could say it's built into my compensation.
Mark Stelzner: 27:34
Right
David Turetsky: 27:35
But I'm losing my life. I'm not living then, right Mark? I know you love your life outside of work.
Mark Stelzner: 27:43
I love my life outside of work!
David Turetsky: 27:44
I see that in how you post!
Mark Stelzner: 27:46
I'm clawing at it every single day, my friend. But yeah, and then, you know, people will try geofencing. Well, let's try to geofence it, but you can't geofence and then not enable, right?
David Turetsky: 27:55
Right.
Mark Stelzner: 27:56
So we've got to pick lanes. And I guess I would say the big nexus of what we discussed today is intentionality.
David Turetsky: 28:04
Yes.
Mark Stelzner: 28:05
Experimentation is great. I think we should experiment. I think we need to. But if we're not intentional about the guidelines, the use, the capabilities, the activation, the interpretive applications, the journeys, the personas that we're bringing to life, it's a bit of a free-for-all. And I spend every day trying to unwind that hairball that is the free-for-all of trying to knit all this amazing stuff together. Everybody needs to coexist with everybody else. And there's no one person or provider in this hall that does everything, and so
David Turetsky: 28:42
That would be a very boring HR Tech!
Mark Stelzner: 28:42
oh God, exactly, yeah. What would we have to talk about? God forbid, so
David Turetsky: 28:48
What are they coming out with next? It's like listening to an Apple announcement. But seriously, though, one of the beautiful things about being in this hall is the differences in the thought processes. What are the areas that each one of these groups is attacking?
Mark Stelzner: 29:05
Right
David Turetsky: 29:06
And, you know, last year, we knew that this year was going to be all about AI, because last year was the start. And as I said, beyond the hype cycle, what I'm hoping for next year is real use cases, real-world examples of where people used it, and what were the outcomes, and how did it transform things?
Mark Stelzner: 29:24
100%. We relentlessly hit this refrain everywhere: process-led, tech-enabled. Process-led, tech-enabled. When we run a lot of RFPs, I don't care about the 8,000 cut-and-paste ridiculous questions that no one reads. Yes, I need some information for the purpose of protecting the procurement process. But I want high-value use cases. I want you to come in and I want you to tell me how, against my hypothesis, and an RFP is a hypothesis,
David Turetsky: 29:51
right
Mark Stelzner: 29:51
you would activate real use cases. I want you to show me, and then I can tell you yes, no, or tell me more about this. And how would you connect all these other experiences and the other 100 tools that we've already bought? How does everything come together? Because people are lost in this ecosystem. We work with some really fascinating organizations, and the utilization of these tools is shockingly low.
David Turetsky: 30:14
Yeah.
Mark Stelzner: 30:15
And then a new capability comes out, and another new capability comes out; people aren't ingesting even what they have.
David Turetsky: 30:19
In my conversation with Stacia this morning, it was amazing. Her research basically says that a lot of companies say that they do AI, but then when companies buy, especially at the front end, they don't realize those benefits, because it was oversold and under-delivered.
Mark Stelzner: 30:37
Oh, completely.
David Turetsky: 30:37
And then at the end of it, they look at it and they go, Well, this was really disappointing.
Mark Stelzner: 30:43
Yeah, of course! Because when you listen to all the keynotes, and, you know, I love my peer group, but we are creating an expectation that thus far has rarely delivered. Now, we're not too far away from that. I think there's a lot of investment, as I talk to the providers. And, you know, I think I've had 40 breakfasts in two days. I don't know if I can consume any more. I'm gonna go to my fourth lunch, probably right after this. But when we talk to them, we talk to them about the fact that you really have to get use-case centered. You really have to get journey centered. Because once people get it, it can be game changing.
David Turetsky: 31:19
But stories, the stories that we can tell about the implementation of it and how it worked and how it transformed things. That's got to be the next wave of this.
Mark Stelzner: 31:31
We don't want to... I mean, no offense to everyone, but I don't want to hear any more from providers. I want to hear from real organizations, right? I want to hear from the panel of experimenters and the things that went wrong!
David Turetsky: 31:42
right
Mark Stelzner: 31:42
As much as the things that are well packaged and presented that went right.
David Turetsky: 31:46
I love hearing failures.
Mark Stelzner: 31:48
Yeah,
David Turetsky: 31:48
I've learned so much in my life from failing.
Mark Stelzner: 31:50
And thank god, that's why I'm here.
David Turetsky: 31:51
Yeah. Exactly
Mark Stelzner: 31:52
Yeah, I've failed more than I've succeeded. And thank God for it, how boring would life be?
David Turetsky: 31:56
Yeah, exactly. Hey, I went out and I put a dime in and I won a million dollars, another dime in and I won another million.
Mark Stelzner: 32:03
I'm paying for light fixtures and chairs in the casino right now. So it's all good.
David Turetsky: 32:08
Well, they're naming a wing after me, I don't know, but Mandalay Bay is having the Turetsky pavilion for all the money I've stuffed into the coin slots. Mark, as always, it's a pleasure and an honor to spend time with you.
Mark Stelzner: 32:29
Likewise
David Turetsky: 32:30
I learned so much talking to you
Mark Stelzner: 32:31
As do I from you!
David Turetsky: 32:32
And I wait, thank you, I wait for the next show. Or we don't have to wait that long; we could do an earlier one next year, coming back and asking exactly the same question.
Mark Stelzner: 32:40
Let's see where we are! I hope we are, I hope we evolve. But thank you. Enjoyed the conversation as always. Thank you so much.
David Turetsky: 32:44
Take care and stay safe.
Announcer: 32:47
That was the HR Data Labs podcast. If you liked the episode, please subscribe. And if you know anyone that might like to hear it, please send it their way. Thank you for joining us this week, and stay tuned for our next episode. Stay safe.
In this show we cover topics on Analytics, HR Processes, and Rewards with a focus on getting answers that organizations need by demystifying People Analytics.