Thanks for chatting, Nicole! I’ve never asked you this before but how did you end up as a tech lawyer working in California? 

I started my career in the late 1990s as a First Amendment [freedom of speech] lawyer representing newspapers and TV stations and radio broadcasters in Northern California. Soon after I started practising, those traditional media companies began moving some of their content online. Their initial explorations were very quickly eclipsed by internet companies and I began representing some of those players in Silicon Valley too, such as Yahoo, Netscape, Hotmail before its acquisition by Microsoft, and a slew of small startups. In the early days, many of these companies thought about their products as software, rather than as a communications medium, and I was able to help them understand the rights and responsibilities that come with making information available. 

From there, I eventually made my way to work at a small startup called Google, where you and I met, and where I was responsible for the product counsel team that supported the launch of products globally. Following the trajectory of those early tech companies meant I learned a lot about the legal issues around growing that sort of business – IP, content moderation, privacy and competition issues. Over the past 20 years, we’ve gone from making it up as we go along to actual case law. 

I always thought you had one of the most interesting but also one of the most terrible jobs in the world. You were dealing with a lot of very clever product managers and engineers who liked to argue with you about the law. 

I think for us lawyers, that’s our challenge – to stay humble about what we believe we know. Because engineers would sometimes have ideas that were clearly against the law. But sometimes, they had ideas that I could work with and that were really good for the product – if they made certain tweaks to the interface, we could make a coherent case about why a new product or feature was both legal and good for our users. So actually I found that interchange one of the most fun parts of the job. Definitely frustrating at times, but also the most interesting part. 

As well as Larry Page, Sergey Brin and Eric Schmidt, you’ve advised other titans of the tech industry about some of the world’s most popular online products. What do you make of the sector? 

Most of the attention and the criticism of the technology sector has really centred on the largest and the most visible US-based tech companies – Google, Apple, Facebook, Amazon, Microsoft, and maybe Netflix. But actually the tech community is much broader, more diverse and more global than that. We don’t want to sweep all of these players into one bucket.

Twenty years ago, the promise of the internet relied on the pillars of openness, of transparency, of decentralisation, of democratising access to information … and we thought that we would be a better society globally because of it. But those principles have not consistently advanced those goals. Part of that is the commercialisation of the infrastructure that we’ve built, but I think we also need some reassessment of what our goals are. The encouragement of public participation towards the building of something, not just participation for participation’s sake, for example, is something I would like to see prioritised. 

Both before and after we worked together at Google, you’ve built up quite the CV! How did you end up working for the Obama administration? 

That kind of fell out of the sky in a way that I will never adequately be able to explain. The Obama administration in general, and President Obama in particular, really loved learning about science and technology. He was very committed to it and he created the very first US Chief Technology Officer position. I was lucky enough to join during his second administration as one of the deputies. I got the call in January of 2013 and I was due to start in July. 

The advice they give you when you take a job in the White House is to choose three things you will focus on. I wanted to champion internet governance and freedom of expression, because Hillary Clinton had done such wonderful work laying the groundwork for free expression on the internet. And then I thought I might do something around privacy. But a month before I was due to start, the Snowden disclosure story broke and my boss said, “Can you come now?” My entire time at the White House was about privacy, national security and Big Data. The only thing that mattered then was the government surveillance of people and leaders around the world, which makes sense considering what was happening at the time. 

How significant is the appointment of Lina Khan as chair of the Federal Trade Commission (FTC)? Does it mark a change in attitudes in the US? 

The Biden-Harris administration has been very clear that they intend to both scrutinise and work to reverse the concentration of industries and abuses of market power. That’s not just in tech, but in agriculture, healthcare and other markets. But it’s not even just this administration. There are many people who’ve been working hard over the years towards economic and social justice, and who see the concentration of power by a small group of companies as a keystone of that problem. That said, it’s been really interesting to see how swiftly the FTC, under Chair Khan, has moved to address some of those practices. We’ve seen the SpyFone case for example, or the recent settlement with Amazon, after it was found to be using tips to subsidise the pay of independent contract workers. That is bold action but it sits on top of many years of work. 

How much do the general public care about privacy in the US?

The United States has a long history of privacy law, in terms of the protection of sensitive personal information. We didn’t call it data protection but it related to the right to be protected from public disclosure of private facts and the misappropriation of your identity, for example. I think today, Americans are much more sensitive to the use and abuse of their personal data, particularly how it’s wrapped up in commercial machines. That’s been the subject of a lot of reporting over the past several years, which has highly sensitised the public. 

But that hasn’t always translated into changes of behaviour. Sometimes it’s because of the convenience of clicking a button and getting what you want, and sometimes because the companies have not built alternatives. What we have to think through is: what is our social responsibility to change the behaviour of people and the incentives that encourage them to give away their personal information? And what do we have to do to ensure that the technology built has appropriate alternatives, so that it’s not a choice between either giving away your information or not having a service at all?

You also sit on the board of The Markup, a non-profit investigative news organisation covering technology. It’s good, isn’t it? 

What I think is innovative and fantastic about The Markup is that they build technology in order to do their reporting. For example, they’ve just done a story on what appears in different people’s Facebook feeds. This reporting is based on a panel of people across the country who agreed to contribute data, so that we can get under the hood and see whether Facebook is telling us the truth about whether they’re removing content or not. Another piece they did on what Google shows in search results came up at a recent congressional hearing. 

They measure their success based on the impact they have, how much they contribute to public debate or the change they inspire around company practices. They’re not playing solely for clicks. When we think about what we want out of our journalistic sphere, and how to change the metrics from click-driven advertising to impact journalism that makes change, they’re a wonderful standard bearer for that. 

What do you think the future of privacy will look like in the US?

My home state of California has passed a very rigorous privacy law [the CCPA], which has changed the dynamics around what is possible. When I was in the Obama administration, one of the things the President asked us to do during that period was to really look at the public policy implications of Big Data. We did a study across the government and private sectors that revealed two important findings. The first was that everyone recognised the profound asymmetry of power between those who collect data, and the individuals who are the subject of that data. And I think that for the future of privacy, whether it’s the US or globally, that’s one of the things we have to address. 

The second finding was that the policy implications of data collection are not just about privacy. If you asked people what they were concerned about, it was how they would be treated because of someone else’s use of their data – whether they would be disadvantaged, whether they would miss out on opportunities, whether they would be discriminated against. And those are concerns about fairness, not privacy. So if you’re a policymaker, the tools you use to address fairness may be different from the remedies you would use to address privacy. For example, data minimisation doesn’t necessarily address fairness.

Read more in our Let’s Talk Privacy series, with Tim Clements, Jon Baines, and Paul Jordan.