LD500

From France to Switzerland to Hong Kong to the Bay Area, the world of data privacy and cybersecurity is changing – and fast. Just ask globetrotter privacy and security lawyer Paul Lanois.

French native Lanois started his legal education at the Panthéon-Sorbonne University in Paris before moving to the U.S. to attend the University of Pennsylvania Law School. Never in one place for long, he then worked as a U.S.-qualified associate in the London and Luxembourg offices of several top law firms before moving in-house as senior legal counsel at Credit Suisse in Switzerland. While there, he also worked in Hong Kong testing cutting-edge digital banking technology.

In 2019, Lanois moved back into the law firm environment: He is now a Director at Fieldfisher’s Palo Alto office. He was attracted to the firm’s strong European data protection practice and its offices in 12 countries, including every country in which he has worked.

His move back to private practice was prompted in part by Europe’s revolutionary General Data Protection Regulation (GDPR), which took effect in 2018. With major companies rushing to comply with updated international data privacy regulations, Lanois’s unique experience made him the perfect legal consultant.

Now, Lanois helps companies from tech startups to the world’s largest corporations develop their data privacy policies for new technologies while complying with the GDPR and other major regulations that have rolled out in the last five years. Those regulations continue to develop: With advancements in tech products – take AI, cryptocurrency and virtual reality – changing data privacy and cybersecurity expectations daily, companies with new products must take those regulations into account and foresee what may come next.

In addition to his international background, Lanois’s love of technology makes him uniquely able to look into the future. He loves the hands-on approach to new tech and works to understand how privacy works on a real-world product level. In many cases, the products he works with make him question the very laws he advises on: “You’re dealing in gray areas where new technologies were not imagined by legislators, which raises a number of issues,” he says. “You definitely have to think outside of the box.”

Lanois is a member of Lawdragon 500 X – The Next Generation; but he is inspired even by those lawyers who have come after him. Since the beginning of his career, Lanois has taught outside of his practice; currently, he teaches privacy compliance at University of California College of the Law, San Francisco (formerly UC Hastings College of the Law). While his expertise is beneficial for new lawyers, he finds that he learns as much from his students as they learn from him: “Students ask questions in class, and it allows you to look at things from a new perspective,” he says.

Lawdragon: What first brought you to the law?

Paul Lanois: I wanted to combine my love of economics and technology with the legal side. You get to explore and discover new technology without focusing on selling it. You’re also involved in the development of new products from the early stages, incorporating privacy and security by design. So, as lawyers, we get real exposure to the product itself without being engineers.

LD: Tell me about some of your earlier work in that area – working on mobile banking apps with Credit Suisse in Hong Kong.

PL: Over there, people view their phones a bit differently: They use them for everything. You have those apps which are a bit like what Elon Musk is trying to emulate now with X, where he says that he wants X to be used for everything. You already have that in Asia. Certain apps roll chat, payments, search and many other functions into a single, universal app.

As a result, when testing new mobile banking apps, it's ideal to test in Asia because people are already using their phones for everything. People there are going to find specific use issues more quickly.

LD: That’s interesting. What brought you to that in-house transition, and what brought you back to private practice?

PL: I made the transition back in 2014 when I joined Credit Suisse at the headquarters in Zurich. One of the fascinating things about banks is that security is the product that you're selling. If the data gets stolen from the bank, that's it, you can close shop. So, in a way, security is even more important than privacy for a bank – especially in Switzerland.

LD: Is that where you started building up your privacy and security practice, or were you doing that work before going to Credit Suisse?

PL: I was doing it in the law firms that I was at, but things have fundamentally changed since the introduction of the GDPR. Now, every medium and large-size organization has someone in charge of privacy. Even small organizations need to reach out to privacy counsel in ways they didn’t before. Now, law firms all have privacy departments – before, it was a niche practice.

LD: Was it the explosion of privacy and security departments after the GDPR that brought you back to the law firm environment?

PL: Yes. I wanted to come to the U.S., and I thought that I’d be able to leverage my international background more in a law firm setting.

LD: How so?

PL: Because I've worked in so many different countries, I can understand the complexities that organizations are facing.

The work that I do now is twofold. First, there is still a huge chunk of work in relation to complying with the GDPR, privacy directives, new requirements, new case law and so forth. But then the other aspect of the work is, how do you comply with privacy globally speaking? Can we adopt some higher-level principles instead of looking at privacy on a state-by-state or country-by-country basis? Most organizations see what the global standards are, try to comply with the highest requirements, then apply those worldwide. Of course, there may be some local variations.

Even small organizations need to reach out to privacy counsel in ways they didn’t before.

LD: Interesting. What does your day-to-day look like?

PL: I'm not sure there is a typical breakdown. That's what makes privacy interesting: There are so many changes happening. It's not like some other areas of law where things are more settled and you have a routine.

LD: What are some areas keeping you busy right now?

PL: Data transfers from the EU to the U.S. are a huge topic, which came up following a decision from the European Court of Justice and the cases that followed around the world. Companies want to know how to set up those transfers and whether they meet the requirements.

AI is also a huge topic. Lots of organizations are looking into it, and they’re at different stages. Some of them have a solution in place and it's now more about working on the necessary disclosures, transparency and so forth. Other organizations are just starting to look into it and are exploring the pros and cons. Other organizations haven't yet made a decision.

LD: What are the most common discussions surrounding AI?

PL: Building an internal AI acceptable use policy is a big topic of conversation. Some organizations authorize internal AI use in certain areas; others impose a broad ban on it.

LD: Why would they ban it?

PL: Once you start uploading content into some of those platforms, it can be used to train generative AI. Therefore, do you still have copyright over that material? Do you still have ownership? IP-wise, it's debatable.

Then, some organizations may not necessarily be using AI as part of their core products or services, but are thinking, "Well, maybe we can have a chatbot; maybe we can have ancillary solutions which help customers." There is still some privacy work to be done in relation to that, because there may be data collection and monitoring, which trigger a number of different laws and regulations.

LD: AI has exploded as a topic of conversation recently. How long has it been something that you've been concerned with in your work?

PL: With non-generative AI – which I tend to call traditional AI – it’s been quite a while. For example, when I was working at Credit Suisse, we were working on global advisory solutions whereby you can get personalized investment advice from an app without human intervention. That was entirely automated, so that was AI as well.

For lots of organizations, including those with assisted driving and video games, AI existed before the huge AI boom that we've seen in recent months with ChatGPT and other generative AI solutions – but it has definitely attracted more interest now.

That’s especially true in the tech space, where people tend to gravitate toward the new trends and buzzwords. Before, it was cryptocurrencies, blockchains and NFTs. I'm not saying that those are dying; I do think that there are very compelling use cases which are still being developed. It's just that they attract less interest now. The organizations that are really helping build new things are the ones who focus not just on the trendy buzzwords just to attract investors, but look at new technologies to assess whether they can help the business.

LD: What do you enjoy about the breakneck pace of this space, or what do you find challenging about it?

PL: It's really challenging. Things are changing so quickly. You always have to keep abreast of new developments and technologies. It’s necessary to spend a couple of hours every day following what's going on in the news – whether that’s in new laws and regulations being introduced, or in discussions and trends.

The organizations that are really helping build new things are the ones who focus not just on the trendy buzzwords just to attract investors, but look at new technologies to assess whether they can help the business.

LD: Tell me about some of your work in specific sectors – I was interested, for example, in your privacy work in the video game industry.

PL: One of the more interesting video game matters I’ve worked on was in relation to anti-cheat solutions.

With online multiplayer games, as you can imagine, it becomes a bit competitive when rankings are involved, and there is this temptation to cheat. In one scenario, you might be cheating alone and nobody else is playing with you, but you just want to complete the game. You can do whatever you want in your own room if you don't disturb others.

But if you have rankings and competitions, players who cheat disrupt the game for others. So, a number of companies have been working on anti-cheat solutions so that the game is enjoyable for everyone.

Obviously, that involves collection of data: Are there any processes being run on the device that are interfering with the game and trying to change certain values? So, the challenge of those anti-cheat solutions is collecting only the right amount of data. You don't want to be capturing, for example, an open window with an email someone has been typing in Word. You don't want to be collecting any personal data, or any personally identifiable information.

Outside of anti-cheat solutions, there’s also work in relation to new devices, like virtual reality headsets. You might say, “It’s a device, why is privacy involved?” But when you're putting on those devices, they are collecting data on the composition of your room in order to detect, for example, how close you are to nearby objects. Is there anything in front of you which may cause you to stumble, and so forth? So, then you have those cameras, and you need to factor in those privacy considerations as well.

LD: Fascinating. And are you still doing financial work?

PL: Yes. The great thing about working in a law firm is that you get to work with different industries and see different perspectives on how things are done, which helps inform clients about trends across sectors.

LD: What new technologies have you been most intrigued by?

PL: Like many people, I'm intrigued by generative AI because it has huge potential. But I would say that you need to have guardrails in place. You cannot fully rely on it. For example, it can save you time by doing some initial research for you. But that should only be seen as a starting point, because you still have to verify the sources and check whether they even exist. There have been instances where AI has hallucinated legal cases that don’t exist.

The same is true in relation to AI-generated content, like drawings. In many cases it produces a very good result. It's a great starting point. But then is it sufficient? Probably not in all cases – especially because there's always a transparency issue. If you're an artist, say, it's fine to use AI as a starting point, because maybe it helps you to do things that you could not do otherwise. But then you cannot pass off the AI-generated content as your own. It’s the same for text produced by ChatGPT – you can’t pass that output off as your own. Transparency is fundamental.

LD: Do you see AI becoming a big component of your work, at least in the short-term?

PL: It is taking on a big role. But is it going to replace everyone and everything? Maybe, but not at the moment. I think that it would be a mistake for organizations to rush and implement something just because it's the trendy thing to do without having carefully considered the pros and cons. There is a lot of scrutiny going on, and I would say that we're just one misstep away from a new law which could come in to regulate the space.