Jason Albert is the Global Chief Privacy Officer at ADP. He’s worked at the intersection of technology and law for over 25 years and has focused on privacy and AI. In this episode, Jason unpacks the new EU AI Act, how it will affect companies around the globe, and what they can do to prepare for this new regulation.
[0:00 - 3:32] Introduction
[3:33 - 8:35] What are the implications of the new EU AI Act?
[8:36 - 14:43] What are the key points of the EU AI Act?
[14:44 - 26:56] What can companies around the world do to be compliant with this new regulation?
[26:57 - 27:46] Closing
Connect with Jason:
Connect with David:
Podcast Manager, Karissa Harris:
Production by Affogato Media
Resources:
Announcer: 0:01
The world of business is more complex than ever. The world of human resources and compensation is also getting more complex. Welcome to the HR Data Labs podcast, your direct source for the latest trends from experts inside and outside the world of human resources. Listen as we explore the impact that compensation strategy, data and people analytics can have on your organization. This podcast is sponsored by Salary.com, your source for data, technology and consulting for compensation and beyond. Now here are your hosts, David Turetsky and Dwight Brown.
David Turetsky: 0:38
Hello and welcome to the HR Data Labs podcast. I'm your host, David Turetsky. Like always, we try and find brilliant people inside and outside the world of HR to tell you what's happening, what's going on, what's the latest? Today we have with us, Jason Albert from ADP. Jason, how are you?
Jason Albert: 0:53
I'm doing well, it's great to be here!
David Turetsky: 0:55
Well, it's great to have you. Jason, tell us a little bit about you and ADP.
Jason Albert: 1:00
Sure! Well, you know, I'm Jason Albert. I'm the Global Chief Privacy Officer at ADP. I've worked for over 25 years at the intersection of technology and law, with a focus on privacy and more recently, AI.
David Turetsky: 1:12
Oh, cool.
Jason Albert: 1:12
Having worked in Europe, in the US and in law firms and tech companies, I'm inspired by the opportunity the new technology provides.
David Turetsky: 1:19
I worked in Europe too, when I worked at Morgan Stanley back in the early 1990s. Isn't it fun to have, like, bi-continental experience?
Jason Albert: 1:29
Oh, absolutely! Yeah. I spent about five years in Europe, and it was great because you get to see a different culture, experience a different legal system, see alternate approaches to how to regulate, and so, you know, I learned a tremendous amount over there.
David Turetsky: 1:32
Yeah, it's so much fun. In some ways, you had to relearn what you thought you knew when you came over. So that was even more fun.
Jason Albert: 1:52
Very true.
David Turetsky: 1:53
So Jason, here's what we ask every one of our guests to do: tell us one fun thing that no one knows about Jason Albert.
Jason Albert: 2:02
Well, my thought on that would be that I actually, at college, studied plate tectonics under the professor who developed the theory and wrote the seminal paper on the topic.
David Turetsky: 2:15
Wow. I think you may have heard just recently, the San Andreas Fault is moving again.
Jason Albert: 2:20
Well, we had an earthquake here in New Jersey, just a couple weeks ago!
David Turetsky: 2:24
Yeah, yes, the ground shook. And actually, I think we were supposed to feel it up here in Massachusetts, if I'm not mistaken?
Jason Albert: 2:33
Yeah. I mean, definitely people in New York felt it, and elsewhere. It was interesting. I was on a call at the time, and I'm located pretty close to the epicenter, so the house started shaking, and then a few seconds later, you could see my colleague in the office, you know, she started shaking. And then we had somebody in from New York City, and they started shaking.
David Turetsky: 2:51
Wow.
Jason Albert: 2:51
You can see the propagation across
David Turetsky: 2:53
Absolutely. Scary too!
Jason Albert: 2:55
Well, not that scary.
David Turetsky: 2:59
Well, I would have been a little bit freaked out. I know my dogs would have been more freaked out. But all right, well, I'm gonna be calling you next time we have an earthquake, so I'll try and get the skinny on what I actually should feel then. But today, we're gonna talk about something that does feel like an earthquake, probably to many in the European Union who are thinking about or using artificial intelligence, which is the EU AI Act. So our first question is: the EU has just passed an act governing AI. Give us the 10,000-foot view first, and then let's discuss the implications of what companies are doing to be able to deal with this in the EU.
Jason Albert: 3:48
Absolutely right. We see that policy makers around the world are really grappling with how to realize the benefits of AI while at the same time protecting individuals. And as you mentioned, the EU is taking a first step with its Artificial Intelligence Act, which, when finalized, and we expect that next month, will be the first comprehensive regulation of AI anywhere in the world. You know, it was approved by the European Parliament on March 13, and then they've gone through some linguistic analysis, and finally, it has to be approved by the European Council. You know, the interesting thing about the EU AI Act is that it takes a horizontal approach. It regulates AI whether it's a standalone offering in software or whether it's embedded in hardware, like in a self-driving car. And it takes a life cycle approach: it starts from the initial development, to the usage of the AI, to post-market monitoring. And it covers all the parties involved: those that are developing the AI, those that are introducing it on the market, those who are selling it, distributing it, and then ultimately using it. And, you know, it's important to know that it also applies to organizations that are outside the EU, if they supply AI systems in the EU or if the outputs of their systems will be used in the European Union.
David Turetsky: 5:02
So it really is a comprehensive view of artificial intelligence. Now, and this might sound funny, but do you think the EU is worried because of some of those people who are talking about doomsday, or had been watching too many movies about AI, maybe even the movie AI, about how this could escalate out of control? Do you think they were kind of responding to that, or is this just kind of setting the boundaries?
Jason Albert: 5:30
Look, I think really, when you think about this, it's a question of setting boundaries, right? We all know that ultimately, society decides how technology is going to be used, right? And, you know, I mentioned at the outset how I'm personally excited and inspired by the potential technology provides, and AI offers so much potential. It's going to help make tasks faster. It's going to give us new opportunities. It's going to help us do things that we weren't able to do before. It's going to give us insights that we weren't able to achieve before the development of this type of programming and this type of computing power. But at the same time, obviously, it's important to have guardrails. It's important to protect against risks. It's important to protect against potential bias. It's important to protect against misuse. And so I think really, this regulatory structure was adopted to try to strike a balance between the realization of those benefits and addressing possible risks.
David Turetsky: 6:27
Well, we've seen that there are risks, and there are actually real risks, like, for example, utilizing artificial intelligence to develop technology that can use people's voice patterns to create, for example, the robocalls that just happened during the political process here in the US, where they were faking a candidate's voice, or many candidates' voices, and tricking people into either not showing up to the polls or making the wrong decision. Do you think that's kind of one of those, quote, unquote, foreseeable risks? Those are some of the scarier ones, where, you know, we're actually seeing rogue players use it to do the wrong thing right now!
Jason Albert: 7:11
I think when you really look at technology, right, technology is a tool, right? And like any tool, it can be used for various purposes, right? I have a hammer here in the house. I've used it to hang some pictures. You know, back in my days studying geology, I was out in the field, and I was using it to chip off rocks so I could look at the grain pattern and look at how they were tilted relative to the landscape. So I really think, when you think about AI, it's important to think through sort of the usage. It's important to adopt ethical principles that govern how you'll use AI. You know, we've done that at ADP. We have our own set of ethical principles that cover things like explainability, transparency, human oversight, addressing bias, having an inclusive development process, training, all those types of things.
David Turetsky: 8:09
In essence, it's just like a hammer. It can be used for good things or it can be used for bad things.
Jason Albert: 8:14
Well, yeah, although I don't think it probably can be used to chip off rocks.
David Turetsky: 8:18
Well, when the robots start being utilized by the AI, that might be true!
Announcer: 8:25
Like what you hear so far? Make sure you never miss a show by clicking subscribe. This podcast is made possible by Salary.com. Now back to the show.
David Turetsky: 8:36
So let's get to question two. Let's get a little more detailed. How is the EU AI Act structured? What are the key points, and when does it actually go into effect?
Jason Albert: 8:46
Look, I think that's a great question. If you think about the EU AI Act, it addresses three key risk areas. First, it bans certain uses of AI that are seen as posing unacceptable risks. One example is real-time biometric identification by law enforcement; another one is social scoring, things like that. Second, it adopts a regulatory regime for so-called high-risk use cases, you know, those where the use of AI could impact the rights or opportunities available to individuals, whether in things like education, employment, access to credit, things like that. Third, for foundational models, such as large language models, it imposes transparency obligations, so you get more information about how those models are developed and when they're being used. So you can see it covers sort of the gamut: certain things that are banned, certain things that are seen as high risk and subject to additional protections, and, you know, certain things for the LLMs that we've all come to know and love in the age of generative AI. And then it has different timelines for implementing these provisions, depending on the risk category. So for those things that are banned, those will come into effect six months after the Act is adopted. The provisions around LLMs and general-purpose models will be at 12 months. And then for high risk, it'll be 24 months. And then there are a few small things about AI embedded in other products that actually extend out for three years. So that's essentially the way that it's structured.
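To make the phased timelines Jason describes concrete, here is a minimal sketch in Python. The tier names, the date helper, and the entry-into-force date are all illustrative assumptions, not terms or dates from the Act itself.

```python
from datetime import date

# Months after entry into force when each tier's obligations begin to
# apply, per the phased timelines described above. Tier names are
# illustrative labels, not the Act's official terminology.
APPLICATION_TIMELINES = {
    "prohibited_practices": 6,      # banned uses, e.g. social scoring
    "general_purpose_ai": 12,       # transparency rules for LLMs / GPAI
    "high_risk_systems": 24,        # e.g. employment, education, credit
    "ai_embedded_in_products": 36,  # AI embedded in other regulated products
}

def add_months(start: date, months: int) -> date:
    """Return the first of the month `months` after `start` (simplified)."""
    total = start.month - 1 + months
    return date(start.year + total // 12, total % 12 + 1, 1)

# Hypothetical entry-into-force date, purely for illustration.
entry_into_force = date(2024, 8, 1)

for tier, months in APPLICATION_TIMELINES.items():
    print(f"{tier}: obligations apply from ~{add_months(entry_into_force, months)}")
```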
David Turetsky: 10:18
So I know that there's been a lot of talk in the US about the biometric problems with AI and the training sets that AI has utilized in the past, especially going through things like airports and the TSA. Do you see the US kind of taking on similar, and I don't necessarily know if they're going to be regulations, they might turn out to be, regulations or laws, based on where the EU is going with that?
Jason Albert: 10:42
Well, I don't know that it'll necessarily be, you know, based on where the EU is going. But I think in the US, as elsewhere around the globe, there's this concern about enabling the potential of AI while addressing the risks. And so we see a couple different things here in the US, right? We see that the National Institute of Standards and Technology, which is part of the Department of Commerce, has adopted an AI Risk Management Framework, and it really provides a way of thinking through how you identify risk. So it has Govern: you have to have this governance regime. Map: you have to map your activities and the risks. You have to then Measure the risks and see whether you think they're large or small. And then you have to Manage: monitor how the steps you've taken to address those risks are performing. So it's a self-regulatory system, but it's pretty detailed in terms of what it requires, looking across, again, different types of AI. You know, there's a bill pending in Connecticut right as we speak, that just passed part of the legislature, that also would adopt sort of a risk framework, not quite as involved as what was adopted in Europe, but again a sort of similar life cycle approach about how you develop and deploy and use AI technology.
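For reference, the four functions Jason walks through are the NIST AI RMF's Govern, Map, Measure, and Manage. A minimal sketch; the one-line summaries are paraphrases for illustration, not official framework text.

```python
# The four functions of the NIST AI Risk Management Framework, as
# discussed above. Summaries are paraphrases, not quotations.
NIST_AI_RMF_FUNCTIONS = {
    "Govern":  "establish policies, roles, and accountability for AI risk",
    "Map":     "inventory AI activities and the contexts and risks around them",
    "Measure": "assess identified risks: how large, how likely, how tracked",
    "Manage":  "act on risks and monitor how mitigations perform over time",
}

for name, summary in NIST_AI_RMF_FUNCTIONS.items():
    print(f"{name}: {summary}")
```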
David Turetsky: 12:00
I think there was legislation pending quite a long time ago in California, which talked about the use of employee data in models, and it was potentially going to be disastrous for HR, because it would have basically said that employees had to sign off on usage of their data in any type of algorithm that included their data, and that would have made it really hard to do a head count analysis. And, you know, if Fred didn't want you to count him in the head count, well, then we couldn't use it. So it would have made things like analytics, and even doing things like bonus planning or merit increase modeling, pretty impossible if we didn't have access to that. Do you see that as where Connecticut may be going, or having those kinds of protections for using employee or HR data in that way?
Jason Albert: 12:53
So what we see, for example, are a couple of different things. We see, in the context of certain automated employment decision technologies, you know, things that are used to help with hiring or recruitment, you see requirements to give applicants the ability to opt out of having that technology act on them. So that's one thing that you see. But the AI laws really tend to be a little bit more focused on things like data quality; things like making sure that the AI is accurate, that it has validity; making sure that you provide transparency so somebody knows that AI is being used in a certain context. In terms of the data collection, right, that's usually governed either by IP laws, in terms of what you can do with sort of existing materials, or privacy laws, in terms of the ability and the rights that people have around their personal data and what it can be used for.
David Turetsky: 13:53
So what you're saying is that it's possible those things might come in, but it's not necessarily going to be around AI; it may be around other things.
Jason Albert: 14:04
Right. Yeah, I think what you see is that things related to AI tend to really sort of be focused on that, and then the controls and things that might be inputs tend to exist, you know, in related legal areas.
David Turetsky: 14:18
Hey, are you listening to this and thinking to yourself, man, I wish I could talk to David about this? Well, you're in luck. We have a special offer for listeners of the HR Data Labs podcast: a free half hour call with me about any of the topics we cover on the podcast, or whatever is on your mind. Go to salary.com/HRDLconsulting to schedule your free 30 minute call today. So why don't we turn our attention back to the EU AI Act? What can companies around the world do to be compliant with this new regulation?
Jason Albert: 14:54
Look, I think there's strong alignment between good AI governance and the requirements of the EU AI Act, right? When you're thinking about any sort of AI development, it's important to have a process to identify and assess and manage risks. Similarly, to make sure that you have good outputs, you need to use high quality data. You have to monitor performance of the model; you want to make sure that it doesn't drift. You want to make sure that you address any potential bias, and obviously AI that makes recommendations needs to be subject to human oversight. So, as we've talked a little bit about, risk management is really at the core of the EU AI Act's approach. Because of this, companies should identify who owns overall responsibility for risk management, and then it's gonna be important for that person to work with a cross-functional team, with individuals from legal and from privacy, from security and from the business, to map what AI systems the company is using and developing, and for each of those to evaluate potential risks and then to identify how to address and manage those risks once you know what they are. And even where a company implements AI systems that are developed by someone else, they have to use those systems in accordance with the instructions of the developer, and they also have to make sure there's human oversight over the use of system output. So it's not just going to be about who builds the AI systems; it's going to be about the people who actually deploy them. They're going to have some obligations as well, right? But fundamentally, at the end of the day, building a strong AI governance program will get you much of the way there. You'll still have to account for the specifics of the EU AI Act, but as with a strong privacy program, those are adjustments that would be made on top of a strong foundation.
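As a rough illustration of the mapping exercise Jason describes, here is a minimal sketch of an AI-system inventory. Every field name, category label, and example value is an illustrative assumption, not a requirement spelled out in the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a company's AI-system inventory (illustrative fields)."""
    name: str
    role: str                      # "provider" (builds it) or "deployer" (uses it)
    risk_category: str             # e.g. "high_risk", "limited_risk", "minimal_risk"
    risk_owner: str                # who owns risk management for this system
    human_oversight: bool          # is system output reviewed by a person?
    mitigations: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="resume-screening-model",   # hypothetical example system
        role="deployer",
        risk_category="high_risk",       # employment use cases are high risk
        risk_owner="HR compliance lead",
        human_oversight=True,
        mitigations=["bias testing", "follow developer's instructions"],
    ),
]

# Flag anything high risk that lacks human oversight.
for record in inventory:
    if record.risk_category == "high_risk" and not record.human_oversight:
        print(f"ACTION NEEDED: add human oversight for {record.name}")
```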
David Turetsky: 16:36
I was just going to ask you, because, Jason, we remember GDPR happening, and GDPR was a series of rules around data privacy and, as you mentioned before, the right to be forgotten; I guess you could kind of boil it down to that, from the employee's or the individual's perspective. Do you see that as a roadmap for this EU AI Act?
Jason Albert: 16:57
Well, look, I think the work that companies did to come into compliance with GDPR will be very instructive, right? You'll need something that's programmatic. You know, like the EU AI Act, GDPR was a horizontal regulation: it applied to the use of personal data, whether as a consumer or as an employee, or things like that, and so you had to think about this across your business processes. I think one thing that's a little bit different is that GDPR really includes privacy by design when you think about product development, but a lot of it is about processes: about allowing individuals to access or delete their data, you mentioned the right to be forgotten, rules around data transfers. The EU AI Act is going to focus more, I think, on product and service development. There are still elements that are after the fact, such as the instructions you have to give to people, and the human oversight, and the post-market monitoring. So I think it'll be a little bit different in that aspect, but fundamentally, the same sort of horizontal, programmatic approach will be valuable.
David Turetsky: 18:01
Talk a little bit about the US and other countries having to now look at this and say: are we compliant with it? Do we have to be compliant with it? Is there any understanding about what footprint you have to have in the EU to need to be, or to think about being, compliant with this?
Jason Albert: 18:19
Yeah, well, look, I think it's pretty straightforward, right? If you are making a system that involves AI that will be used in the EU and that fits into one of these categories, whether it's a large language model or, you know, a high-risk system, then you're going to need to comply with the Act. So really, think about it more as sort of a product approach: am I going to put a product on the market in the EU? That's sort of one test, right? And the hardware part is very easy: like, is the self-driving car in Brussels, or is it somewhere else? But even with software, you have a pretty good sense, right? And then the other thing is, if you have an AI system, regardless of where it operates, if the outputs of it are going to be used in the EU, then it too has to comply with the Act.
David Turetsky: 19:08
So the answer is, get ready, US! Get ready, other countries, including the UK! You now have to figure out how you're going to be compliant with this within, I imagine, the same timelines that companies headquartered in the EU do, right?
Jason Albert: 19:23
Yeah, no, I think that'd be right, like the timing would be the same.
David Turetsky: 19:25
Yikes. So I remember when we were working with GDPR regulations, which is the data privacy regs that came out of the EU; that was really a big deal for us to try and figure out how we were going to be compliant with that. And it took us a while, because you basically are turning the Titanic in many different ways for organizations, and, think about it, for applications that had never kind of considered this, we had to create net new things that were able to facilitate this issue. Now with this regulation, some companies have been using AI for years, and now have to figure out how they're going to be compliant with those categories. So I guess the question is, this isn't just for new technology. This is for existing technology, and any company who has any component of AI, you said that before, right?
Jason Albert: 20:17
Right! Yeah, exactly. It's not just going forward, right? If you're using some AI that, again, falls within sort of the ambit of what the Act regulates, it's going to need to be compliant as of the effective date of that portion of the Act.
David Turetsky: 20:34
Well, then we have very little time. We'd better get to it.
Jason Albert: 20:39
We have some time!
David Turetsky: 20:40
Right. But as you said before, though, there are certain pieces of this, especially the ones that are considered most high risk, where there's a ticking clock on this; we've got to get going, right?
Jason Albert: 20:50
Yeah, no, I think that's right. But, you know, not many companies are really involved in sort of the things that are banned, and for the high risk stuff, which, yeah, a lot of companies use, you've got 24 months, right? So you have time to understand the Act's requirements, to figure out what steps you're going to take to address that so that the products are compliant, and there's going to be, obviously, guidance along the way as we all go on that journey.
David Turetsky: 21:17
I guess one question that I think would be burning in the ears, or the minds, of the listeners would be: what does HR have to do? Is there anything that's kind of incumbent upon the users of these systems, whether they're in the EU or not? What would they need to do to be able to deal with this?
Jason Albert: 21:39
So I think there really are three things that HR has to think about, right? The first, of course, is to make sure that for any system that fits in this category, that implicates some of the high risk things in employment, you understand how the system you're using operates, and that you follow the instructions of the developer of that system. That's one clear obligation. The second one is to have human oversight, right? You know, we already see this a little bit with automated decision making under GDPR, but it's going to be important to not just take the system and follow it blindly. You actually have to have human oversight of how it's performing, of the decisions: you know, how do you review them? How do you assess those? And then third, to make sure that there's good transparency to the end users where AI is involved, so that, you know, employees and others really understand where they may be interacting with AI.
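To make the oversight and transparency points concrete, here is a minimal sketch of a human-in-the-loop review step for an AI hiring recommendation; the record shape, names, and wording are assumptions for illustration only, not anything prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A hypothetical AI screening suggestion awaiting human review."""
    candidate_id: str
    ai_score: float
    ai_suggestion: str   # e.g. "advance" or "reject"

def review(rec: Recommendation, reviewer: str) -> str:
    """Route an AI suggestion through a named human decision-maker."""
    # Transparency: surface that AI produced this suggestion and how it scored.
    print(f"Candidate {rec.candidate_id}: AI suggests '{rec.ai_suggestion}' "
          f"(score={rec.ai_score:.2f}).")
    answer = input(f"{reviewer}, accept this suggestion? [y/n] ")
    # Human oversight: nothing is acted on without an explicit sign-off.
    return rec.ai_suggestion if answer.strip().lower() == "y" else "manual review"

decision = review(Recommendation("c-1042", 0.87, "advance"), "HR reviewer")
print(f"Final decision: {decision}")
```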
David Turetsky: 22:33
So to boil it down: you have a lot of work to do, HR! You have to look at your systems that may use AI. You have to talk to those vendors about what the AI portions are. And internally, you probably need to get some governance and some risk management people together to talk about how you're going to enforce compliance from your perspective, to make sure, as you say, there's human oversight and that we're understanding what's being done, and then communicating that transparently to the people who it affects.
Jason Albert: 23:05
Right. I think that's exactly right.
David Turetsky: 23:07
Oh boy, there's gonna be a lot of work from this. And, you know, that means, if there's anybody who has employees who are in the EU, right? I mean, I guess let's boil it back to the HR people: is it just the people in the EU that they're gonna need to worry about these kinds of regulations for? Or if they're like a US company and they have some presence in the EU, is it just those employees, or is it really all their employees?
Jason Albert: 23:36
Well, really, again, look, while transparency really applies to, sort of, the individuals who the AI acts upon, you know, first, like we talked about at the beginning, this is consistent with good governance: you're probably gonna want to be transparent anywhere, right? You're gonna want that under this AI framework. You're gonna want that certainly under what we're seeing in terms of existing and pending proposals in the US. So, you know, I wouldn't view this sort of narrowly that way. And then when you think about the broader things, whether it's human oversight, whether it's things like that, those things apply really to systems. And so unless you're really running a separate system in the EU, those are things that you're probably gonna need to grapple with on a company-wide basis.
David Turetsky: 24:18
So when we talk about things like pay transparency in the US, we, I mean people here at Salary.com, we talk about having like a lowest common denominator, where whatever the most harsh regulation, or whatever the most thorough regulation is, we suggest to companies that they use that across the board, for all of their entities, for all their employees. And I think what you're saying is, in the same vein, since the systems may touch all of your employees, it probably makes sense to be more transparent with all of them and have the governance facilitated across all of them.
Jason Albert: 24:54
Look, yeah, I think each company's gonna ultimately decide, you know, what requirements apply to it and how it's going to address them, but I do think, with the Act, there are some benefits to taking a more holistic approach.
David Turetsky: 25:08
Right. Well, Jason, I think what we're gonna need to do is come back and visit this, maybe in about six to 12 months, and see how it has influenced not only organizational compliance, but also how it's influenced US compliance, or US efforts to regulate AI. And I gotta be honest with you, I'm proud of them for doing something, because some of the things we've heard out of Congress have been kind of wackadoodle, and I'll use that as a technical term. Some of the things that the US has said seem kind of draconian, because I don't think they're actually thinking about this from a realistic perspective. I think they're watching too many movies.
Jason Albert: 25:48
Well, look, what we've seen in the US is a lot of desire by legislators to learn more. You know, we've had the AI Insight Forums in the Senate; we have a task force in the House. Look, I think we all know AI is going to be a transformative technology; it may be, perhaps, in the end, even more transformative than the internet. And I think we all have to figure out how we see that, how we see the opportunities, how we enable those opportunities in a way that also is cognizant of the risks that we have to address. I think a colleague of mine really put it well in a sign-off for some review forms that we do around AI: responsible fun is the best fun.
David Turetsky: 26:42
Spoken like a true ethical risk taker, or risk not-taker. That's great. Well, Jason, thank you very much for being on the HR Data Labs podcast. We really appreciate it. And as I said, I reserve the right to call you back in six to 12 months to say: how far has it gone, and has it gone well enough?
Jason Albert: 27:08
I look forward to it!
David Turetsky: 27:09
All right. Great. Well, thank you very much for being here. We really appreciate you being on the podcast, and thank you all for listening, take care and stay safe.
Announcer: 27:18
That was the HR Data Labs podcast, if you liked the episode, please subscribe. And if you know anyone that might like to hear it, please send it their way. Thank you for joining us this week, and stay tuned for our next episode. Stay safe.
In this show we cover topics on Analytics, HR Processes, and Rewards with a focus on getting answers that organizations need by demystifying People Analytics.