Why we need a human-centred, privacy-first approach to artificial intelligence

Those using AI to boost profits risk losing sight of the bigger picture. We need to reset our priorities before it’s too late

By Karima Noren

Co-founder of the Privacy Compliance Hub

April 2023

News recently broke that Just Eat UK couriers had lost their jobs over alleged overpayments, which many contest. It underlines growing concerns about decisions being made by artificial intelligence: algorithms operating without human oversight or any opportunity to challenge the outcome.

AI technology is increasingly being used in workplaces around the globe – even for sensitive matters such as hiring and firing. The emergence of language models such as ChatGPT has only accelerated this, with experts predicting we are at the start of a revolution that will transform the world as we know it.

But what role does privacy play in this vision for the future? On 1 April, Italy became the first western country to suspend ChatGPT, citing privacy concerns. The Italian watchdog found there was no legal basis to justify “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform”. It also expressed concern about exposing minors to unsuitable answers. The Canadian federal privacy regulator subsequently opened an investigation into ChatGPT, and the tool is already blocked in a number of countries, including China, Iran, North Korea and Russia.

The boom in artificial intelligence poses real challenges to those of us who care about privacy. ChatGPT has already shown an impressive range of capabilities – passing exams, debugging code, and even holding therapy sessions. But on the downside it lacks common sense, has no transparency, will defend falsehoods unhesitatingly and, as the action taken by the Italian and Canadian data protection authorities indicates, has little regard for data protection rights. Indeed, ChatGPT suffered a data breach in March 2023, which prompted the Italian data protection authority’s decision.

AI is advancing at such a pace that even those in charge of developing it are concerned. While OpenAI, the startup behind ChatGPT, claims it abides by all privacy regulations, its CEO, Sam Altman, has said the technology behind the tool could have negative consequences for humanity. “I think people should be happy that we are a little bit scared of this. I’m particularly worried that these models could be used for large-scale disinformation … [or] used for offensive cyber-attacks,” he says. But, he adds, it could also be “the greatest technology humanity has yet developed”. Tesla CEO Elon Musk, one of the first investors in OpenAI and a co-signatory to an open letter calling for a moratorium on further development of AI systems, has called artificial general intelligence more dangerous than a nuclear weapon.

In a 2022 poll of machine learning researchers, nearly half believed there is a one in 10 chance or greater that the effects of AI would be extremely bad (e.g. human extinction). DeepMind co-founder Demis Hassabis has also urged caution, saying: “I think a lot of times, especially in Silicon Valley, there’s a sort of hacker mentality of: ‘we’ll just hack it and put it out there and see what happens’. And I think that’s exactly the wrong approach for technologies as impactful and potentially powerful as AI … I think it’s going to be the most beneficial thing ever to humanity, things like curing diseases, helping with climate … But it’s also a dual-use technology – it depends on how, as a society, we decide to deploy it – and what we use it for.”

How have the regulators responded? 

Rather than leave such decisions up to Silicon Valley, regulators around the world are scrambling to put guardrails in place. The European Union is considering a new legal framework, known as the Artificial Intelligence Act, which aims to “strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use”. Essentially, it uses a classification system that determines the level of risk an AI technology could pose to the health, safety or fundamental rights of a person. There are four risk tiers: unacceptable, high, limited and minimal. At the unacceptable end of that scale sits technology that should simply not be built at all.

At the same time, the Irish data regulator has warned against rushing into chatbot bans. Ireland’s Data Protection Commissioner, Helen Dixon, told a Bloomberg conference: “Where we are at is trying to understand a little bit more about the technology, about the large language models, about where the training data is sourced. So I think it’s early days, but it’s time to be having those conversations now rather than rushing into prohibitions that really aren’t going to stand up.”

The UK is taking a different approach. In a recent white paper, the government acknowledges the questions raised about the future risks AI could pose to people’s privacy, human rights and safety, as well as concerns about the fairness of organisations using AI tools to make decisions on loan or mortgage applications, for example. But it wants to avoid imposing new legislation that it says could stifle innovation, and will instead rely on existing regulators – such as the ICO, the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority – to come up with specific guidelines for how AI is used in their sectors. The ICO has recently affirmed that data protection legislation applies to generative AI when it processes personal data from publicly accessible sources.

Putting the human at the heart of AI

In many ways, concerns about how we regulate and govern AI mirror the concerns that resulted in the creation of the GDPR. AI is also about data – lots of it. The amount of data used to train large language models (LLMs) like ChatGPT is gargantuan. And we need high-level, human-centric principles to protect the people who could be harmed by its collection and processing. We need to enable the subjects of that data to still have their say; to be able to break open that black box and understand exactly how a decision has been made and why. In fact, the Italian data protection authority has agreed to lift the suspension on ChatGPT in Italy if OpenAI complies with specified privacy requirements by 30 April 2023 and agrees to run a public information campaign on Italian TV, radio, websites and newspapers explaining how it uses personal data to train its ChatGPT algorithm.

Those who are determined to use AI to boost profits run the risk of losing sight of the bigger picture. It’s true that AI can help automate repetitive tasks, review vast swathes of data far more quickly than a human ever could, and scale beyond our comprehension. But we need to be asking questions about how it reflects society’s values and protects our fundamental human rights. We need to consider privacy, security and fairness. And we need ethical codes and transparency to be built into algorithmic decision-making. Above all, just because we can build something doesn’t mean we necessarily should.

The future of AI needs a human touch. And just like privacy, it’s a journey, not a destination. We can’t afford to be found asleep at the wheel. As Google CEO Sundar Pichai once said: “AI is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.” Let’s make sure we don’t get burned.
