It’s no exaggeration to say that generative AI is transforming the legal industry, both the practice of law and the profession itself. Data privacy is particularly impacted, as privacy professionals are called upon to navigate a shifting landscape of laws and regulations.

We sat down with Moya Novella, legal counsel for privacy and AI policy at IBM, to get her thoughts on the opportunities and obstacles in this brave new world of generative AI.

Novella’s work for IBM includes advising on advocacy, ethics and policy for global privacy and AI legislation and regulation. Her work informs the tech giant’s Government and Regulatory Affairs team, Office of Privacy and Responsible Technology, and AI Ethics Office, and she provides insight to AI policymakers, including influential bill sponsors and senators.

Novella also serves as adjunct professor of data privacy law at Albany Law School and Seton Hall University's Law School. She is regularly called to speak on the topics of AI and data privacy, including most recently at the Chief Litigation Officer Summit in Boston. She will also be participating in the upcoming Artificial Intelligence and Law conference at the University of Cape Town’s Faculty of Law in South Africa.

With hundreds of AI laws currently pending and emerging across the U.S., Novella’s work at the forefront of this revolutionary technology provides fascinating insight into the legal landscape of generative AI.

Disclaimer: Views are Moya Novella’s own and not necessarily those of IBM or any other organization with which she may be affiliated.

Lawdragon: You’ve had such an impactful career so far. Take us back: How did you first decide to become a lawyer? What was your path?

Moya Novella: My law career started when I was a teenager. I graduated high school at 16 and started an associate degree in legal studies while working full time as an accounting clerk. I thought at the time I would perhaps be a fashion designer or seamstress, skills I had learned from my family growing up in Jamaica. But I always had a curiosity about law, ever since I was a little girl. I used to read books like Encyclopedia Brown and Nancy Drew. When I was 18, I first joined the legal field as a legal assistant at the in-house law firm for Zurich Insurance Company.

At 18, I was perhaps too young to appreciate what an incredible opportunity it was to work as a legal assistant for one of the largest insurance companies in the world. But that first experience is what led me to an eight-year career as a paralegal, first at a law firm and then for a national mortgage banking entity. I thought I would remain a paralegal, but I was encouraged by a lot of the attorneys I worked with to pursue a law degree.

I didn't take the advice immediately. I took a few detours first. I completed a bachelor’s degree in sociology and a master’s in public policy. I had a really strong interest in understanding the legal process of making laws and regulations, and also the policy basis behind laws and rulemaking. That led me down a career path in regulatory compliance, first for a national mortgage banker. Then I entered the medical device and pharmaceutical industry working for a Fortune 500 company called Henry Schein Inc.

I worked full-time at Henry Schein in regulatory compliance while I pursued a law degree. I also studied internationally – Spanish language and culture at the University of Santiago de Compostela in Spain, and European competition and trade law at Trinity College, Cambridge University, in England. So, these were many of the detours I took before finally studying law for my juris doctorate at St. John’s University Law School. It has been quite a journey to be where I am today, as I am a first-generation lawyer and college graduate.

I believe the best way to learn and hone skills is to teach. The teaching process is in and of itself a learning process.

LD: And when did teaching become a part of the work that you're doing?

MN: I started teaching first at Seton Hall in around 2019, where I instruct privacy law courses as part of the law school's online juris doctor and graduate degree programs. I started at Albany Law School in 2021, where I develop and instruct privacy law courses, also as part of their online law and graduate programs.

I believe the best way to learn and hone skills is to teach. The teaching process is in and of itself a learning process. I also enjoy being able to mentor students and to be a source of support as they approach their careers in privacy. I've also found that my privacy professorships really complement my work as a privacy and AI counsel, because the principles and the laws that I teach in my courses are the same ones in my work and practice area.

LD: It's got to be interesting speaking to young legal minds about data privacy and AI. There’s a different mindset. I imagine it’s informative.

MN: Absolutely. That's one of the challenges I’m working through right now in my courses – what's the best way to advise students in terms of their use of AI? Educating not just about the policy aspects and the laws surrounding AI, but about how to make AI tools useful and how to use them responsibly. A lot of students will use ChatGPT to write their assignments. What's the best way to approach that?

I also do see a varied set of questions and approaches coming from students. I have students with a very broad age range, so some are very young. Last semester I had a student who was in his seventies. He admitted that he was terrified addressing a topic of privacy and AI, but he did really well. So it's been really interesting to see the perspectives across the generations within the course and to get the different approaches and perspectives that these students are raising as we all try to navigate this new world of AI.

LD: How did you first come to specialize in data privacy?

MN: At Henry Schein, which is a pharmaceutical and medical devices company, I managed a very broad range of regulatory and legal compliance matters. That's where I was first introduced to the area of privacy. I led regulatory due diligence for global mergers and acquisitions, and that entailed having a broad range of compliance understanding, including medical device licensure and registration laws, FDA and DEA compliance requirements, import and export sanctions requirements, anti-corruption, Sunshine Act disclosures – and importantly, data privacy. I managed over 200 due diligence projects at Henry Schein. I had an eight-year tenure there, and I advised on a very broad range of regulatory compliance topics and deals spanning six continents, which included privacy law due diligence across these global jurisdictions.

I was admitted to practice in 2016, and I became Henry Schein's first compliance attorney in a newly formed, at the time, compliance department. This was also the year that the EU's General Data Protection Regulation was adopted. So my first role as compliance attorney, and later senior compliance attorney, at Schein was to lead the company's global and cross-functional implementation of its GDPR compliance program, in preparation for when the regulation would take effect. I was really inspired by the work I did on the GDPR implementation program, and not just the substantive aspects of privacy.

I was fascinated by the EU's fundamental rights framework surrounding privacy, which is quite different from our constitutional right to privacy here in the U.S. It is not expressed under the U.S. Constitution, but it is expressed under the EU charter. So I was initially really intrigued and inspired by that – as well as by the substantive requirements, the mechanics that go into developing a privacy compliance program, and the global cross-functional relationship development it required. This was a global company, and the GDPR is a global compliance framework that applies extraterritorially. So I had the opportunity to interface and coordinate cross-functionally with colleagues across Europe, the U.S., wherever personal data would have been processed.

There are literally hundreds of AI laws pending and emerging across the U.S. currently. I spend my days digging into the text of these legislations.

LD: Tell us about the work you do now at IBM.

MN: So just one month after I joined IBM, the EU General Data Protection Regulation actually took effect. This was a really pivotal time in my career, and a revolutionary time in global privacy policy. My role at IBM started in the transactional area, where I supported the global organization’s compliance with GDPR relative to business-to-business transactions and cloud services agreements, helping to negotiate data processing agreements. I was also lead privacy attorney for M&A and divestitures. I helped overhaul the privacy due diligence program for IBM's M&A practice, and advised IBM's corporate development team, the Chief Privacy Office, and a broader range of teams on privacy, compliance and M&A.

LD: How is generative AI impacting the work you do?

MN: Well, I had included AI in the scope of my work well before generative AI and ChatGPT became all the rage. Back in 2019, I drafted my first detailed guidance document on AI within IBM, for advising IBM's business units in developing AI tools. Then I decided to pursue several AI certifications from IBM's AI Skills Academy, because based on the work that I was doing, I anticipated that I would need an AI competency as a privacy professional. In 2020, I led privacy due diligence for a major transaction where IBM acquired a first-of-its-kind AI conversational tool from a McDonald's restaurant subsidiary.

That was an important strategic acquisition. This tool was an AI voice agent called an automated order taker, which was used to take and process orders at McDonald’s drive-thru restaurants. The tool included language recognition capabilities. I advised on AI ethics assessments to ensure that the tool was able to recognize different accents, for example, and to ensure compliance with privacy law requirements and that the tool didn't collect biometric data. At the time, it was powered by traditional or discriminative AI machine learning. It’s now powered by generative AI.

The fundamental assessment principles that I followed back then to review that AI tool are still relevant today, now that AI tools are being much more widely developed and deployed.

Also during those years, in the pre-ChatGPT era, I led privacy due diligence for several other acquisitions at IBM, M&A deals involving AI tools with a wide range of capabilities.

I first started in more of a transactional, M&A space at IBM, whereas now I am legal counsel for global privacy and AI policy. It's such an interesting time to be in this role. It’s fascinating work. I am an advocacy and policy advisor for emerging global privacy and AI legislation and regulatory policies. I provide legal advice to IBM's government and regulatory affairs team as well as the Office of Privacy and Responsible Technology and the AI ethics team. This entails keeping apprised of globally emerging AI and privacy policies, and then conducting comparative analyses of their requirements.

A big part of this work is engaging in cross-functional collaboration with policy stakeholders to, in essence, try to shape effective privacy and AI laws around the globe and to promote harmonization and interoperability across the different legal frameworks. That’s been a significant and very interesting change for me. There are literally hundreds of AI laws pending and emerging across the U.S. currently. I spend my days digging into the text of these bills, providing commentary that is communicated back to bill sponsors, senators and other AI policymakers, and looking cross-jurisdictionally across these policies.

LD: Can you speak to how generative AI is impacting data privacy work?

Privacy professionals are being called to become AI governance professionals and to leverage their existing expertise into the AI context.

MN: AI is really impacting the privacy profession more broadly in the sense that data is central to the development and use of AI systems. It's critical throughout the entire lifecycle of AI. And that's where privacy professionals and their expertise are really valuable, because of our experience developing governance around personal data. This includes where personal data is within the scope of training AI tools, or where AI is used in decision making that involves personal data. So from this standpoint, privacy professionals are being called to become AI governance professionals, to leverage their existing expertise in the AI context, and to determine how existing privacy laws will apply to AI use or development.

One example that we see in the U.S. is with the California Privacy Protection Agency. They're promulgating detailed rules on governing AI development and use. These rules, which will require risk assessments and transparency notices, are actually built on the agency's framework of requiring privacy impact assessments and privacy transparency notices. We also see that many organizations are leveraging their privacy programs to establish their AI governance processes. So when we think about the principles of transparency and accountability or fairness that are central to privacy governance, these are also central to AI governance.

From this perspective, AI really changes and expands the scope of competencies for privacy professionals. We also see chief privacy officers being called on to become AI ethics officers. On top of that, AI is changing how privacy compliance is actually operationalized and how privacy compliance tasks are actually conducted. So it's impacting the development of privacy enhancing technologies, for example.

As privacy professionals, we can look at how we leverage tools in completing certain functions like privacy by design functions and conducting privacy impact assessments. AI can be used in drafting privacy policies, conducting research, summarizing privacy laws and regulations. I am exploring how I can use it as a tool in my work in doing comparative analyses of privacy laws across jurisdictions. AI is impacting the policy from a macro level, but also the actual tasks that privacy professionals are doing on a micro scale.

LD: You are on the speaking circuit these days. Are you primarily being asked to speak about this intersection between AI and data privacy?

MN: At the Chief Litigation Officer Summit in Boston, I’ll be on a panel entitled “Generative AI in the Legal Realm.” I expect to share my insights on how the legal field is being impacted by generative AI – not just from a privacy perspective, because we'll have a much broader audience, but also on how legal professionals can promote trustworthy AI governance and responsible AI use. There are a lot of things to think about when we talk about the broader impact on the legal space. You've probably heard of the case called Mata v. Avianca, where a lawyer used ChatGPT in drafting a submission for court and the tool generated false citations in his document. Those are very real issues.

Lawyers are using AI to write briefs, do legal research, develop legal strategy. So I intend to talk about the risks that lawyers need to be aware of, how to identify when it makes sense to use a tool versus when it makes sense to use your own skill, and how to ensure that it's augmenting, rather than replacing, your skill.

We need to think about loss of skill with any new technology. I think I still know how to read a map, but a lot of the younger generation grew up with GPS navigation systems, so we lose those analog skills. So you want to think about that within the AI context. As a lawyer, how much of your legal analysis and reasoning skills are you willing to lose if you're going to rely on AI to conduct some of those tasks for you?

There are risks to be aware of and strategies to keep ahead of, and fear to minimize. There’s a lot of fear in discussion surrounding AI, and we can temper some of that by gaining more knowledge about the tools.

As a lawyer, how much of your legal analysis and reasoning skills are you willing to lose if you're going to rely on AI to conduct some of those tasks for you?

After the Boston conference, I will travel to attend an AI conference at the University of Cape Town Faculty of Law. So my work in AI policy is global, and I have a keen interest in understanding not just how AI policies are being shaped in the U.S., the EU and Asia, but also in the global south.

Participating in this conference will be a learning experience for me to really understand how AI policies are being shaped in a uniquely African context. This is important to me. On a personal level, I have Jamaican heritage and therefore African heritage. From what I can see, there is significant focus on U.S., EU and Asian policies. There are African AI policies too, and we should be paying attention to those as well. My aim is to uncover through this conference how I can advance more of those discussions surrounding the impact of AI on the countries of Africa.

LD: I like how you’re saying, “augment, not replace.” There is a lot of fear in every industry of workers being replaced by AI, but it seems more realistic that it will just increase efficiency and let the great legal minds focus more on strategy.

MN: That’s exactly right. How can we use the tools in the most effective way while at the same time maintaining the skills that are useful for us?

You can also choose to forgo the tools. There are, currently, legal analysis tools that I could rely on to do a lot of comparative and summarization work. But I want to be able to continue to do that level of analysis. It is a real challenge that we have to figure out. What is the repetitive grunt work that you can build efficiencies around, versus the higher-level reasoning and analytical skills that you'll want to maintain?

Of course, we also want to think about at what point we are at a disadvantage because we're not incorporating these tools in our practice. Lawyers in particular have to think about the duty of competency and the duty of care. If a lawyer today refused to use a computer, that would be shocking. There’s no way you could practice efficiently without a computer. AI is similarly becoming a critical tool, one we need to use in order to comply with the rules of ethics and professional responsibility. We're not going to escape it as lawyers, but it's about understanding that line – when to use it, when not to use it.

LD: Do you have any book suggestions for lawyers to start thinking more deeply about the use of AI in this profession?

MN: I recently started a book club, which wasn’t intended to be an AI book club, but it continues to come up. We’re currently reading Unmasking AI by Joy Buolamwini. She's done a lot of work on facial recognition bias, which is one of the many important areas we need to navigate in terms of AI.

LD: Is this an all-lawyer book club?

MN: Mostly friends from law school. It’s an effort to stay connected. I live in the world of AI and technology, and so we are also learning more and more about how disconnected and isolated technology can cause us to become. Many of us work remotely to some degree, so I think a lot about these things. I grew up in a small rural mountainous village in the Caribbean, in Jamaica. Even though we were geographically isolated in a sense, we were very closely connected in terms of social connections and community. This book club was another effort of mine to really be intentional about staying connected in person and having conversations in person. I have a mission as a privacy and AI professional. But my next most important mission is nurturing my village and finding ways to be more connected in community.