Kasara Weinrich, Director of Sales Technology and AI Solutions Designer at ADP, joins us this week to discuss how organizations weigh generative AI adoption. As a PhD candidate in anthropology and social change, Kasara offers a unique historical perspective on how workers have adapted to technological disruption.
[0:00] Introduction
[5:47] What’s wrong with the world of work today?
[18:46] What are the implications of unregulated use of people data by generative AI?
[28:07] How does an organization prepare for generative AI adoption?
[38:46] Closing
Connect with Kasara:
Connect with David:
Connect with Dwight:
Podcast Manager, Karissa Harris:
Production by Affogato Media
Announcer: 0:01
The world of business is more complex than ever. The world of human resources and compensation is also getting more complex. Welcome to the HR Data Labs podcast, your direct source for the latest trends from experts inside and outside the world of human resources. Listen as we explore the impact that compensation strategy, data, and people analytics can have on your organization. This podcast is sponsored by Salary.com, your source for data, technology, and consulting for compensation and beyond. Now here are your hosts, David Turetsky and Dwight Brown.
David Turetsky: 0:39
Hello and welcome to the HR Data Labs podcast. I'm your host, David Turetsky, alongside co-host, best friend, genius, partner in crime, lots of different adjectives, my friend from Salary.com, Dwight Brown.
Dwight Brown: 0:52
I was looking around to see who you were talking about there
David Turetsky: 0:56
When did I lose you, at best friend? Wait, what?
Dwight Brown: 0:58
Yeah. Well, when I heard genius, I'm like, yeah, gotta be talking about the guy behind me.
David Turetsky: 1:02
Should I say brilliant? Hmm, we're gonna have to go back to the transcript on that one.
Dwight Brown: 1:06
Yeah, exactly. We may have to re-record this.
David Turetsky: 1:09
But we do have a genius with us, because in the world of ADP, there are very smart people. This person stands head and shoulders above most of those other folks that work at ADP. And I know this because other people have told me this too, and I've had a chance to hear this person speak at conferences and on webinars, and this person exudes brilliance. I'm talking about none other than Kasara Weinrich. Kasara, how are you?
Kasara Weinrich: 1:38
I'm wonderful, especially, I mean, I've peaked. It's, it's early in the day to have peaked. But thank you.
David Turetsky: 1:46
Well, come on. I mean, you know when, when you look at the body of work, from what I've seen, from what you've done at ADP, I think I've downplayed it a little bit.
Kasara Weinrich: 1:57
No, no way. I think ADP has given me a beautiful opportunity to mix ambition and determination with a lot of right place, right time. There's a lot of open doors here, and I've had a lot of opportunities to walk through the right one. So I'm lucky.
David Turetsky: 2:12
Well, why don't you tell us about those doors? I mean, tell us about what you're doing at ADP and tell us a little bit about you.
Kasara Weinrich: 2:17
Right now, I have a very awesome opportunity to be working under worldwide commercial operations, designing and developing AI and generative AI driven technologies for our entire sales force. And so, outside of designing and creating that, and working, of course, cross-functionally with so many brilliant people, I am also able to take all of these passions and speak to the market about them.
David Turetsky: 2:46
That's so cool. You're literally on the leading slash bleeding edge within a technology company that prides itself on being on the leading edge.
Kasara Weinrich: 2:56
It's true.
David Turetsky: 2:56
And by the way, I'm not a shareholder anymore, so I'm allowed to talk about it. Okay, but one thing we ask all of our guests, Kasara, what's one fun thing that no one knows about you?
Kasara Weinrich: 3:11
I guess I would say, and this ties back directly to the fact that I am able to speak publicly now. Early on, I was born with very limited hearing, and I started speaking incredibly young, which, much to my parents' dismay, I haven't stopped since. But for the first 13 years of my life, I had a lisp that was like, like this. And my name is Kasara; my name has an S in it. So that was, you know, almost a cruel turn of fate. For the first 13 years of my life, I had a very difficult lisp and speech impediment, and now I get up on stages and I speak, and I hear people talk about how my voice is soothing, and it just makes me giggle to think of how I started.
David Turetsky: 3:58
Well, I'm very happy to hear that, because I think a lot of us have come from places where we've overcome those kinds of things. I wouldn't necessarily call them issues; I'd call them challenges, because they are, and a lot of us overcome them. In fact, lately, Dwight and I have been talking to people who have talked about us overcoming some of the things from our past too, and they've gotten really emotional.
Dwight Brown: 4:23
Yeah, we don't want to go too far down that road, yeah, but
David Turetsky: 4:26
We're gonna need tissues again! But one of the things that we love talking about here on the program is that everybody's human, and we all have our foibles, and we all overcome them. And it's wonderful to know that such an amazing person as yourself, with what you've accomplished, has been able to do that with that as one of your challenges. And again, you're one of their best public speakers, and you've definitely done a great job of overcoming something like that. So kudos to you. It's wonderful.
Kasara Weinrich: 4:58
Thank you so much. And yeah, no one really knows that about me, with the exception of the people who knew me at 13 and earlier.
David Turetsky: 5:05
So now the people who listen, hopefully all the people in the world, will hear
Kasara Weinrich: 5:12
everyone,
David Turetsky: 5:13
and they'll appreciate it even more now when they see you speak and it's not there.
Kasara Weinrich: 5:19
Thank you.
David Turetsky: 5:20
So the topic for today is going to be one that has graced this podcast quite often, which is Gen AI. But we're going to be taking a really fascinating look at Gen AI through the lens of anthropology and looking at how it affects the world of work. So the first question is, what's wrong with the world of work today? Why does Gen AI come up? What's wrong with the world of work?
Kasara Weinrich: 5:55
I think Gen AI has been an opportunity for organizations to realize, you know, it's so reliant upon data. It's incredibly reliant upon having an organization with learning agility at the core, having an organization that has been ready, that is historically adaptable and curious. And so I think with the onset of this emerging technology, as well as the combination of organizational culture even, and I should probably say, I don't know that this was mentioned, but the reason that we're talking about this from an anthropological perspective is that's the program that I'm currently studying within for my PhD. I am studying human-centered AI, so not just the influence and impact of AI and generative AI from an organizational culture perspective, but also human beings' culture and their propensity to adapt. So organizations are really facing, they've been kind of hit with a mirror in generative AI. And then I'll also say very quickly that due to the exponential growth in this space, there was a race to basically be first in some ways. Many organizations wanted to be the quickest and the fastest to adopt and to innovate, and so now we're seeing that, you know, I like to call those folks minimal-risk adopters, the ones who are like, we stood up a closed instance of this GPT platform, so we did it, we innovated, and they didn't actually have a good understanding of the broader benefits and applications, and so we're seeing a lot of these projects failing, and it's because of that. But I think the great news is that while that has kind of shown some of the things that might be wrong with the world of work, there is a massive opportunity to make a lot of things go incredibly well and to turn this into something that's right.
David Turetsky: 8:03
So let me ask you a question about that.
Kasara Weinrich: 8:04
Yeah
David Turetsky: 8:05
And we'll get to the right part. But my question before that is, what the heck's wrong? What are we solving for? Has the push for those early adopters been to just be an adopter, or was there a goal that they were chasing that's not just saving cost and eliminating people and positions? What was wrong? And I guess this is where I was going with the original question: what's wrong with the world of work that we needed to bring the robots in?
Kasara Weinrich: 8:37
Yeah. So I guess if we look at it from that perspective, it all comes down to human beings and the work that they enjoy doing versus the work that they've been made to do for an incredibly long time. And so we're seeing that gone are the days of the nine-to-five. You know, I was talking to somebody this week at a conference, and he said, would you believe that somebody asked me when I was retiring? He's like, I cannot believe I'm at the point in my career where other people think that it's time for me, that they think that's what's on my mind. And he's like, but I don't want to retire! I love my work. I love the things that I am doing. I have no desire to retire. And so I think what we're seeing here is that in certain industries, for certain work, generative AI can come in and take the things that can be augmented and automated away, while bringing the humanity and the human skills to the top. But again, only if it's done strategically and not just done because it can be done.
Dwight Brown: 9:41
Is there a little bit of the bright, shiny thingness to this whole thing?
Kasara Weinrich: 9:47
110%
David Turetsky: 9:49
Squirrel! Yeah, sorry,
Dwight Brown: 9:52
Story of my life.
Kasara Weinrich: 9:53
Yeah. And I think, here's the thing. You know, being an anthropologist makes me somewhat of a historian in many capacities, right? Not just studying current culture, but prior cultures and how we got here. And when I'm researching AI and generative AI, I'm also looking at the history of these technologies. At every point in AI's history, we've seen massive increases in capabilities, then massive increases in expectations, and if those expectations weren't met, we saw a steep decline, right, a complete disinterest, a lack of funding, and it's gone into what's called an AI winter. We're at the point right now where we can absolutely prevent that from happening if we do not allow shiny object syndrome to happen, and we do implement this in the most, you know, adequate and strategic ways.
David Turetsky: 10:52
But I think there's also, though, and again, correct me if I'm wrong, because you said this before a little bit, there's a laziness around some of the key challenges to AI, like data, like past decisions, like history, and also the crap up on the internet. They're all facts, right? Those things need to get, if not assumptions put around them, corrected or adjusted for, because if we're looking to the AI, Gen AI, to look at all that stuff to be able to help us with these processes that you're talking about, they need to get a hell of a lot better, because they suck. I mean, I'm just being honest here. I mean, a lot of the data we're talking about, it's not good enough, right?
Kasara Weinrich: 11:44
Yeah, and, you know, the organization that I work for allows me to have an incredibly high standard for data, right? Not just data quality, but data governance, ethical use, privacy, security. So all of those things need to be top of mind for organizations that are implementing the technology, but also they need to be prepared to ask the right questions, you know? So I put organizations into one of three buckets right now. They've either decided to take minimal or no risk at all, and they're just, you know, risk averse and observing what's happening. They're saying, okay, if it gets to me, if a partner I already work with brings it to me, then I will think about it, but in the meantime, I'm just gonna let this play out and see what happens. Then there's those folks we've already talked about; they took some risk, they stood up a closed instance of something, but that was the beginning and the end of their journey. And then there are these folks that my organization falls into where, you know, we're calculated risk takers. We know this is a path we want to go down. We know the broader benefits and the wide applications of these technologies; we see it as a horizontal. So it's not just, this is our emerging technology silo. Emerging technologies, AI and generative AI, need to be seen as a thread across everything that we're focused on. Where are the use cases? Where are the implications? And too, who you've been doesn't need to be who you'll always be. So if you have just been observing, aim to be more strategic. If you have stood up this closed instance and you thought that you were done, you're not; you can do better and be better. So I think if we see organizations striving to take this calculated risk and to find the use cases, prioritize the right ones, this could be exceptional and, like, a really cool point in our history.
Dwight Brown: 13:43
So when you look at the history of AI, you talk about the AI winters that have happened along the way. And I think a lot of times when people think of AI, they think it's a brand new kind of thing, when in actuality AI has been going on since the 60s, basically. But is the fact that this is the first time that it's really been put in the hands of end users having a different impact? And let me give you a little context on why I ask that. For a while, the buzzword was machine learning. Everything was machine learning, machine learning, blah, blah, blah. Then it was all about data science, and, you know, everybody wanted to be a data scientist, and everything was data science; that was the buzzword. Now we're at AI, of which machine learning and data science are actually components. But back when it was data science, not everybody could do it. You had to have specialized learning, you had to have specialized knowledge, you had to be able to program and code and all that kind of stuff. Whereas now this is in the hands of end users, and they're able to do something impactful with it. How has that really influenced where this sits at this moment in history, and that impact on the world of work, and what's wrong with the world of work?
Kasara Weinrich: 15:12
Ah, Dwight, that's such a powerful question and perspective. And I think, depending on your organization, it's seen in one of two ways. So for organizations that are incredibly risk averse, having this type of tool in the hands of the end user or their employees and associates can be correlated to a very high risk, right? So while they might be more productive, they might, you know, be able to use this tool to accomplish more, there's still a lot of risk involved with these emerging technologies. And so it's a difficult balance of, how do we educate? How do we make sure that they know responsible and ethical use? How do we make sure that they understand? I mean, I saw an image, and I haven't been able to find it again, although I could probably ask Gen AI to just recreate it for me! But I saw an image where it's a bunch of people walking, and they're on their cell phones, doing this thing. And then there's two robots, kind of knelt by a bench with, you know, a coloring sheet, and they're coloring. So what that image said to me is, are we giving the technology too many of the tasks that actually make us feel good? We're allowing it to create now; we're allowing it to write and to create images and to create videos, right? Are we actually giving away the things that do bring us joy and that make us feel a sense of ownership over content and creativity? So too, how do we ensure that balance? How do we ensure that we don't give too much away, and that we continue doing the things that make us feel good, that make us feel human? And then I think, on the other side of that, very briefly, the technology is exceptional, and if it's done in the right way, you know, as a woman, as a mother, very early on, one of the first things I did was say, oh my gosh, I can have this thing make my meal plan for my kids for the week and get the shopping list and the recipes. I don't have to spend three hours a week doing that now? You know, there are so many use cases, both personally and professionally, that can elevate us, that can relieve some of these things we don't actually enjoy, you know? There's so much here.
David Turetsky: 17:41
And the applicability to the world of work there is that people hate administration. People hate rules. They hate all the BS, and by the way, a lot of that falls into HR too, obviously. But that's where I come back to the codification of all of HR, as well as the data from HR. And let's just take the codification of the rules and the policies and things like that. There will be instances where we may be able to give the AI overlords the ability to make these, if not recommendations, decisions: yes, we can make that promotion, or yes, we can do that increase. That might be fine, because we can set up the rules and all that other stuff.
Announcer: 18:35
Like what you hear so far? Make sure you never miss a show by clicking subscribe. This podcast is made possible by Salary.com. Now back to the show.
David Turetsky: 18:44
The question, I think, goes back to one of your earlier points about closed systems versus open systems. None of this data can ever get out, right? So it's got to be a closed system. And I guess where I'm going with this is that there's a lot of nuances to the technologies we're not talking about, which makes it very complicated. But in the world of HR, those complications come with other problems, like regulation, laws, and people. And so how these things get adopted is going to have to be really careful, because, I don't know if you're familiar with the FCRA, the Fair Credit Reporting Act, where, when you start creating algorithms on people, then you actually have to report to the federal government what those activities are and what those algorithms are. And I'm not Experian, and I don't want to be, but some of those things might actually fall under that. So it gets much more complicated when we start bringing people into the equation. I guess where I was going with this is, how does that fit in a world where our regulations really haven't caught up yet with where we are today in technology, much less where we're going?
Kasara Weinrich: 19:53
I think there's a couple things, and so this answer is twofold. First, you're not going to be able to think about the regulatory environment if you haven't even done the foundational step of knowing your organizational readiness for all of this, right? And so we can dive in, maybe in a little bit, to some of those readiness indicators. I know we've talked about data and some of this, but there's a lot there. But on the other hand, the responsibility is huge at this point. And it's not just the responsibility of the organization, the C-suite at an organization; it's also very much their responsibility to partner with the appropriate people, to partner with organizations that have this as a central theme of their development. And it feels a lot less risky if you know that you're partnered with folks that have been either doing this for a long time, or,
David Turetsky: 20:53
I think I know where you're going with that.
Kasara Weinrich: 20:55
Yeah, maybe 75 years or so. Or really organizations that operate even globally. So if you are compliant in the most restricted areas of the world, then you're going to be compliant everywhere else. And so I think partnerships are key here. Technology, especially in the United States, has always outpaced the regulatory environment, but I think it's also going to come back to, whether it's corporate social responsibility or you call it ESG, whichever we want to focus on here, it's almost like prioritizing your stakeholders and your shareholders and your consumers needs to be number one on your ESG checklist. And if you're doing that, then all of the things that compliance and regulations would try to achieve, you'll have already been doing.
David Turetsky: 21:52
I'm thinking, though, that there have sometimes been gaps in that. And I'll go back to trading systems, for example, which have been on the leading edge of these things, right? They've been on the leading edge of using data to look for signals, to look for opportunities. And sometimes they go bad and they go rogue. I don't want to talk about some of the areas where those things have gotten away from certain companies and brought the company down. But to your point, and that's a great point, when you're designing for the eventuality, or designing with the goal of mitigating risk, you're taking the right partners, you're taking the right technologies, and you're taking the right goal here; you frame your goal appropriately from the beginning: what are we trying to do? Where are we trying to go? Not just cost savings. So, you know, to me, we have to create the goal with the end in mind, and then use those technologies to drive that, just like we always have. But now this is a new foundational tool for all that.
Kasara Weinrich: 23:06
Yeah, and, you know, I've often been told that I view the world through rose-colored glasses, and I've certainly been okay with that. But I do see this as a, maybe not once-in-a-lifetime anymore, because exponential growth means that we're going to see things at a faster pace, but a very unique opportunity for HR. And it's unique in so many ways, but the way that I see it is, HR has been on this journey of becoming more and more strategic, more and more relied upon by the business to provide, whether it's people data or strategic guidance and advice. Long gone are the days where HR is purely seen as an administrative function. And so if this next phase, if this technological revolution, can mean that now the CHRO, or HR leadership, is partnering with the CIO, the CTO, the CISO, if we're getting together this cross-functional group, and we're reimagining our work, we're reimagining our workforce, we're reimagining the way that work is done, and using this as an opportunity to, you know, completely deconstruct the box, maybe, and look at things and say, do we need to keep doing it this way? Why are we still doing it this way? I think for a long time, digital transformation was simply taking a manual process and finding a technology that could just do that same manual process on a computer. That's not transformation; that's copy-paste, right? You're just taking it down. And so now there's the opportunity to truly transform, to take a deep, hard look at what can be augmented and automated, and what human skills can now rise to the top, and really help your people to be engaged and empowered in their work. It can be exceptional!
David Turetsky: 25:06
But that means preparing your people and the organization to take on that tack, because you might have your leadership team agreeing to it, but then are you going to need to get rid of all your staff and hire a totally new one? First of all, that's never gonna happen. But we've been talking a lot on this program about reskilling and upskilling, so that's the opportunity, isn't it, to take what you have, change them by degrees, and give them those skills to be able to get that done, right?
Kasara Weinrich: 25:41
Yes, and also not just reskilling and upskilling just to do it. I mean, I attended a seminar with a Harvard professor, and she was discussing this very long study that they had done on reskilling and upskilling and the outcomes. This was prior to the release of generative AI, but she was discussing, you know, who was upskilled and why. And at the end, I asked her, did the organization sit down prior to the reskilling and upskilling initiatives and identify exactly which skills they would need and why they were upskilling and reskilling in certain areas? And she said it wasn't at all strategic. Many of the individuals in the study were simply looking at "low skilled" in their Excel file and saying, okay, we need to make them a little bit more skilled. Maybe it was soft skills or computer skills, tech skills, but they just kind of threw darts at the wall to determine what those skills should be. And so I think, yes, reskilling and upskilling, 100%, but along with your strategy! And the thing that I was so lucky to be able to speak on prior to the release of all of this technology was data literacy. It's one of those skills that is often reserved for our data team or for analysts, or now in some cases leadership, but it's not often brought all the way down through every layer of the organization. And that's one of those things that will be critical moving forward: having a data literate organization, evaluating your employees, determining their ability to read, work with, and make decisions with data. And that's just one of many indicators of org readiness for this next, you know, phase of our history.
David Turetsky: 27:42
Hey, are you listening to this and thinking to yourself, man, I wish I could talk to David about this? Well, you're in luck. We have a special offer for listeners of the HR Data Labs podcast: a free half hour call with me about any of the topics we cover on the podcast or whatever is on your mind. Go to salary.com/hrdlconsulting to schedule your free 30 minute call today. So that kind of sets us up for, how does the organization get ready? Does it mean that we now need to create the right job architecture and job descriptions with skills that talk to data literacy? Does it mean we need to start assessing people? Where do we go? What do we do? How do we take the next steps to get there?
Kasara Weinrich: 28:30
So I think there are plenty of different frameworks around readiness, but from my perspective, what I've seen over and over again in the market and in conversations has been, yes, data literacy is something that can be measured and can be trained. So that's a beautiful thing, and it's a really great place to start. And I think a lot of these things need to run in parallel. So, you know, having employees and leaders that understand data, that have access to the appropriate data, and that know how to work with it, that is one function. But then also do an analysis of your tech infrastructure. Do you have the foundation required to support these tools and systems? Do you have a data taxonomy in place? Do you know where all of your data lives? Is it in a bunch of different systems that don't speak to each other? You know, do a deep analysis and get some insights as to what your tech infrastructure looks like organization wide. And then I would also say, kind of in three parts, but all pointing back to the same outcome, would be cultural adaptability. You know, I wouldn't be the resident anthropologist in this conversation if I didn't point back to this. So their ability to adapt to and to really embrace change: is your culture ready? Is it supported? And so again, you know, something as simple as, do you operate in Silicon Valley, where they're used to innovation and risk and, you know, the cycle of try, iterate, fail? Or are you operating globally, where you might have a team in Asia that is prioritizing harmony and hierarchy in some instances, and the implementation of this technology could be far too disruptive for them to have the desire to adopt? Do you have those insights? It's another thing that you can measure and look at and assess. And then finally would be leadership support and employee engagement. So leaders are the ones that are going to drive the perceptions of these emerging technologies. Is the perception fear-based, that this is going to steal your job? Or is the perception excitement and the desire to try, and curiosity and learning agility, right? Your leaders are the ones that are driving that. So do they have a clear understanding of your outcomes and your plan? And then, in turn, are your employees ready for it? Right? Beyond the data literacy, you know, they need to be bought in to these initiatives.
David Turetsky: 31:21
And a lot of times, they're the last group to not only be bought into it, but also to really just fundamentally understand it exists. And we've even seen where projects have been rolled out, you know, 100% done, communicated to all the leadership team, and then they start talking to their employees, and the employees go, wait a minute, what? Where'd this come from? So your point is, don't make it an afterthought to talk to employees; bring them in early.
Kasara Weinrich: 31:50
Yeah. Get a cross-functional group of stakeholders.
Dwight Brown: 31:54
Or sometimes it's the other way around! The employees end up knowing more about the technology than the leaders do. And, you know, you kind of end up with this backwards cycle of how to roll it out and implement it. The leaders are going one way, and the employees are looking at them going, what are you guys thinking? You have no clue what this technology is and what it does! And it probably depends on the organization which of those scenarios plays out.
Kasara Weinrich: 32:25
Yeah, and back to your very first question about what's wrong with the world of work, there's often a deep misalignment between senior leadership's view of what the problem is versus the employees' knowledge of what the problem is. And so if you're sourcing use cases, and you go to your employees, and you first educate them, if they don't already know, on what this tech is and what it does and how it works, and then say, okay, in your function, how could this technology be used to improve the world of work for you? They are going to have the very best use cases and a prioritization of what can completely change their function versus what might have, you know, minuscule impacts and effects. And so, yes, an early buy-in and going to them first could be immensely impactful.
David Turetsky: 33:16
As long as the culture is about trust and not about, hey, our goal is to cut costs, we're going to talk to you about AI doing your job.
Kasara Weinrich: 33:17
Yeah,
Dwight Brown: 33:17
Will you give us use cases? Nah, there are no use cases.
David Turetsky: 33:32
Yeah, tell us what you do. We want you to document everything you do. Give us a task-by-task list of everything.
Kasara Weinrich: 33:39
Everything here is manual. I don't
David Turetsky: 33:41
You need to be involved in every decision.
Dwight Brown: 33:46
Nothing to automate here, nothing to see. Just keep right on moving.
David Turetsky: 33:49
Keep on going. Actually, I think that guy down there, he's in a job we can definitely outsource to the robots! But I think self-preservation is going to be deep in this, because people watch too many freaking movies where they go, yeah, I mean, look at the Cylons, look what they did! Sorry, Battlestar Galactica. No, but seriously, they're going to be like, you know, the robots are going to just keep taking over if we bring them in, unless they're educated. Look, we all, I'm sorry, I shouldn't say we all, I lived in a world where there weren't computers at work. Then there were, and people were fearful for their jobs. Now, there were some people who used to be receptionists, who used to be secretaries, who used to be administrative assistants, and they've had to move on and do other things, because everybody does their calendaring themselves and all that other stuff that was extremely valuable at that point. And I think we have to go through that transition, don't we? I mean, as an anthropologist, don't you see that there's going to be this kind of maturity and migration from certain tasks and certain roles and certain things to others?
Kasara Weinrich: 35:00
Oh, of course. I think I actually said this week that I'm not convinced that there's any current skill set that can't also be applied in some other capacity. So as an example, if an organization is looking to either outsource to tech, or to make more efficient through tech, and therefore maybe cut down on head count, a data analyst role,
David Turetsky: 35:24
right
Kasara Weinrich: 35:25
the skills and the functions of a data analyst, they're not just specific to analyzing data. You can then move that person into data governance. You can move that person into being a key component or a consultant for partner and vendor relationships when it comes to these technologies, if they have a deep understanding of the data. Oh, and by the way, data is the goal, right? That's the thing that's fueling all of this. So yes, I think there's going to be a lot of fear potentially, but it is really up to the organization to mitigate that, and then to be very transparent in their plans and, you know, take on the learning and development. I think, you know, ADP Research Institute recently released some of the sentiment data, right? And 80% of people know that their jobs can be influenced by AI in some way. And, you know, the jury's still out on whether it's primarily positive or negative, and it's going to evolve, it's going to change, but it really is up to the organization to be transparent.
David Turetsky: 36:37
I think what we should do, Kasara, is to get you back on the program two years from now?
Kasara Weinrich: 36:42
Hmm
David Turetsky: 36:44
Well, I mean, even next year. Dwight and I have actually been talking about, you know, how things change and how quickly they do. Come back and see: did we get it right, or were we wrong? Was there a backlash, or did it surprise all of us and get adopted much, much more quickly than we actually thought it would? Now, some of the AI is actually going to be built into solutions like your company is pioneering, but there are others that are using it in the back and not bringing it forth, not, you know, bringing it to each of the consumers, but keeping it in the background, doing a lot of work, and then having the productivity of the other people around it, or the technologies around it, being better.
Kasara Weinrich: 37:32
And, you know, when ATMs were released, there was this massive panic and uproar: I will never be able to go into a bank and talk to a person again, I'm only going to be able to work with this robot, right? And now we know that that is not true. And we could probably look, especially your organization, at the over/under on how that has changed, whether the number of tellers has changed, you know, in quantity, right? But you can still walk into a bank and see a person.
David Turetsky: 38:03
If you want to!
Kasara Weinrich: 38:05
If you want to, right?
David Turetsky: 38:06
No, seriously, because you can deposit checks and, you know, cash.
Dwight Brown: 38:10
yeah, you got a choice!
Kasara Weinrich: 38:11
Yeah, and that's exactly it. At every point in human history, human beings with tools have eventually replaced human beings without, right? The rock was replaced by the hammer, the satchel was replaced by the wheelbarrow. This is just another tool, and those who are willing to try and do their work with it are going to, eventually, you know, still be here.
David Turetsky: 38:37
Beautifully said. And I think we'll leave it at that, because, as I said, I think what we should do next time we have you on is to see where this went and how fast it was adopted. So
Kasara Weinrich: 38:56
I would love that.
David Turetsky: 38:57
Kasara, thank you. We appreciate it.
Kasara Weinrich: 38:59
Thank you both so much. Thank you for the time and for the conversation. This was wonderful.
David Turetsky: 39:04
My pleasure. Dwight, thank you.
Dwight Brown: 39:06
And thank you! Thanks for being with us today, Kasara.
David Turetsky: 39:09
And thank you all for listening. Take care and stay safe.
Announcer: 39:13
That was the HR Data Labs podcast. If you liked the episode, please subscribe. And if you know anyone that might like to hear it, please send it their way. Thank you for joining us this week, and stay tuned for our next episode. Stay safe.
In this show we cover topics on Analytics, HR Processes, and Rewards with a focus on getting answers that organizations need by demystifying People Analytics.