Thinking about embedding AI into your products or services? Read this first

The risk artificial intelligence poses to privacy and data protection means the regulator is watching developments closely

By Emma Sheppard

Writer

July 2023

The promise of AI is proving tempting for businesses in every sector, with greater efficiency, faster decision-making and higher productivity up for grabs. Next-generation apps and tools are flying off the proverbial shelves. When the chatbot ChatGPT launched in November 2022, for example, it gained more than 100 million active users in its first two months. That made it the (then) fastest-growing consumer application in history, a record since surpassed by Meta’s Twitter rival, Threads.

Yet there are those urging caution. At a recent Bloomberg conference, some of the biggest names in AI warned that the technology is already incredibly intrusive and is chipping away at hard-won privacy rights. Meredith Whittaker, president of the secure messaging app Signal, said the “Venn diagram of AI concerns and privacy concerns is a circle”. She added: “The majority of the population is the subject of AI … Most of the ways that AI interpolates our life and makes determinations that shape our access to resources and opportunities are made behind the scenes in ways we probably don’t even know.”

Regulators are scrambling to keep up. According to the OECD, there are more than 800 AI policy initiatives across 69 countries, territories and the EU. The European Artificial Intelligence Act took another step towards becoming law last month but isn’t expected to come into force until 2025 at the very earliest. The EU is also working with the US on a voluntary AI code of conduct, with hopes that other regions will sign up too. There is progress at a local level as well: a recently passed New York City law requires organisations using AI in recruitment to have those tools audited and shown to be free of racial and gender bias. OpenAI, which developed ChatGPT, is also being sued in California over its data collection practices, as is Google in relation to its Bard chatbot.

Under the GDPR, principles of lawfulness, fairness, transparency and accountability are paramount. But the speed at which AI systems and tools are being adopted means some of these values are being overlooked. 

Last month, the UK’s data protection watchdog warned developers against rushing to adopt powerful AI technology without doing proper due diligence on the risks to privacy and data protection. The ICO’s executive director of regulatory risk, Stephen Almond, said the regulator will be “taking action where there is risk of harm to people through poor use of their data”.

“Businesses are right to see the opportunity that generative AI offers,” he added, “but they must not be blind to the privacy risks. Spend time at the outset to understand how AI is using personal information, mitigate any risks you become aware of, and then roll out your AI approach with confidence that it won’t upset customers or regulators.” 

With that in mind, here’s how to keep a privacy-first approach while using AI: 

Lawfulness, fairness and transparency

Organisations must have a lawful basis for processing personal information, and individuals must be told how and why their personal information is being processed, how long it is kept and who it is shared with. AI systems and other algorithms should not discriminate on the basis of race, gender, age or other protected characteristics, and should not make solely automated decisions on matters that affect people’s livelihoods without appropriate safeguards. Where an AI system makes decisions about people, the organisation must be transparent about the process and about how individuals can exercise their right to challenge automated decisions.
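
One simple check that supports the fairness principle is to compare outcome rates across groups defined by a protected characteristic. Below is a minimal Python sketch of such a spot-check based on demographic parity; the DataFrame columns ("group", "approved") and the sample data are hypothetical. A large gap is a prompt to investigate, not proof of discrimination by itself.

```python
# A minimal sketch of a fairness spot-check. Assumes a pandas DataFrame of
# hypothetical screening decisions with a protected-characteristic column
# ("group") and the model's decision ("approved").
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the gap between the highest and lowest approval rates across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0],
})

gap = demographic_parity_gap(decisions, "group", "approved")
print(f"Approval-rate gap between groups: {gap:.2f}")  # 0.33 on this toy data
```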

Purpose limitation

Personal information should be collected for a specified purpose and not used for another, incompatible purpose without people’s knowledge or consent. In AI, a recurring problem has been personal information collected for one purpose later being used to train AI systems, a use for which it was never collected.
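
One practical way to enforce purpose limitation in code is to record, alongside each piece of personal information, the purposes it was collected for, and to filter on that before any reuse. The sketch below assumes a simple in-memory record structure; the Record class, its fields and the purpose labels are all hypothetical.

```python
# A minimal sketch of purpose tagging. Each record carries the purposes the
# data subject consented to; anything else is filtered out before use.
from dataclasses import dataclass, field

@dataclass
class Record:
    user_id: str
    email: str
    purposes: set[str] = field(default_factory=set)  # consented purposes

def usable_for(records: list[Record], purpose: str) -> list[Record]:
    """Keep only records whose recorded purposes cover the requested use."""
    return [r for r in records if purpose in r.purposes]

records = [
    Record("u1", "a@example.com", {"billing"}),
    Record("u2", "b@example.com", {"billing", "model_training"}),
]

# Only u2 may feed the training pipeline; u1's data was collected for billing alone.
training_set = usable_for(records, "model_training")
```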

Data minimisation

Businesses must not process more personal information than they need just because the information might become useful in the future. Any personal information processed must be adequate, relevant and limited to what’s necessary. Consider whether an objective can be achieved using anonymised or synthetic data. Document decisions and the reasoning behind them. Take a risk-based approach and be mindful of the risks an AI tool poses to the rights and freedoms of individuals.
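
In practice, minimisation often starts with dropping the fields a model does not need before the data reaches a training pipeline at all. Here is a minimal pandas sketch under that assumption; the column names are hypothetical, and note that hashing an identifier is pseudonymisation rather than anonymisation, so the GDPR still applies to the result.

```python
# A minimal sketch of trimming a dataset to what a churn model actually needs.
import hashlib
import pandas as pd

raw = pd.DataFrame({
    "name":         ["Ada", "Grace"],
    "email":        ["ada@example.com", "grace@example.com"],
    "postcode":     ["AB1 2CD", "EF3 4GH"],
    "tenure_years": [3, 7],
    "churned":      [0, 1],
})

# Keep only the features the model needs; drop direct identifiers entirely.
NEEDED = ["tenure_years", "churned"]
minimal = raw[NEEDED].copy()

# If a join key is unavoidable, pseudonymise it. Hashing alone is
# pseudonymisation, not anonymisation, so this is still personal data.
minimal["subject_key"] = raw["email"].map(
    lambda e: hashlib.sha256(e.encode()).hexdigest()[:12]
)
```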

Accuracy

Training datasets must be accurate and up to date to avoid bias or discrimination. If the inputs are inaccurate or biased, the decisions the resulting systems make will be too.
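
A lightweight way to act on this principle is to run data-quality checks before every training run. The sketch below assumes a pandas DataFrame with a datetime "last_updated" column; the 5% missingness and one-year staleness thresholds are illustrative only and should come from your own risk assessment.

```python
# A minimal sketch of pre-training data checks. Assumes "last_updated" is a
# datetime column; thresholds are illustrative, not recommendations.
import pandas as pd

def validate_training_data(df: pd.DataFrame, max_age_days: int = 365) -> list[str]:
    """Return a list of data-quality problems worth fixing before training."""
    problems = []
    if df.isna().mean().max() > 0.05:
        problems.append("a column is more than 5% missing")
    age_days = (pd.Timestamp.now() - df["last_updated"]).dt.days
    if (age_days > max_age_days).any():
        problems.append(f"records older than {max_age_days} days present")
    return problems
```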

Storage limitation

Organisations must only retain personal information for as long as it’s needed to achieve a stated purpose. The process for deleting or anonymising personal information once it’s no longer needed should be documented in an information retention and disposal policy.
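
A retention policy is easier to evidence when deletion is automated. The following sketch assumes personal records live in a SQLite table named subject_data with a collected_at timestamp; the table, columns and retention period are hypothetical, and the returned row count can feed an audit log.

```python
# A minimal sketch of enforcing a documented retention period on a schedule.
import sqlite3

RETENTION_DAYS = 730  # whatever your retention and disposal policy states

def purge_expired(db_path: str) -> int:
    """Delete personal records older than the documented retention period."""
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute(
            "DELETE FROM subject_data WHERE collected_at < datetime('now', ?)",
            (f"-{RETENTION_DAYS} days",),
        )
        return cur.rowcount  # rows removed, worth recording in an audit log
```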
