News recently broke that Just Eat UK couriers had lost their jobs over alleged overpayments, which many contest. The story underlines growing concerns about decisions made by artificial intelligence – algorithms operating without human oversight or any opportunity to challenge the outcome.
AI technology is increasingly being used in workplaces around the globe – even for sensitive matters such as hiring and firing. The emergence of large language models such as the one behind ChatGPT has only accelerated this, with experts predicting we are at the start of a revolution that will transform the world as we know it.
But what role does privacy play in this vision of the future? On 1 April, Italy became the first western country to suspend ChatGPT, citing privacy concerns. The Italian watchdog found there was no legal basis to justify “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform”. It also expressed concern about exposing minors to unsuitable answers. Canada’s federal privacy regulator subsequently opened an investigation into ChatGPT, and the tool is already blocked in countries including China, Iran, North Korea and Russia.
The boom in artificial intelligence poses real challenges to those of us who care about privacy. ChatGPT has already shown an impressive range of capabilities – passing exams, debugging code, and even holding therapy sessions. But on the downside it lacks common sense, offers no transparency, will defend falsehoods unhesitatingly and, as the action taken by the Italian and Canadian data protection authorities indicates, has little regard for data protection rights. Indeed, ChatGPT suffered a data breach in March 2023, the incident that prompted the Italian data protection authority’s decision.
AI is advancing at such a pace that even those in charge of developing it are concerned. While OpenAI, the startup behind ChatGPT, claims it abides by all privacy regulations, its CEO, Sam Altman, has said the technology behind the tool could have negative consequences for humanity. “I think people should be happy that we are a little bit scared of this. I’m particularly worried that these models could be used for large-scale disinformation … [or] used for offensive cyber-attacks,” he says. But, he adds, it could also be “the greatest technology humanity has yet developed”. Tesla CEO Elon Musk, one of the first investors in OpenAI and a co-signatory to an open letter calling for a moratorium on further development of AI systems, has called artificial general intelligence more dangerous than a nuclear weapon.
In a 2022 poll of machine learning researchers, nearly half believed there is a one in 10 chance or greater that the effects of AI would be extremely bad (eg human extinction). DeepMind founder Demis Hassabis has also urged caution, saying: “I think a lot of times, especially in Silicon Valley, there’s a sort of hacker mentality of: ‘we’ll just hack it and put it out there and see what happens’. And I think that’s exactly the wrong approach for technologies as impactful and potentially powerful as AI … I think it’s going to be the most beneficial thing ever to humanity, things like curing diseases, helping with climate … But it’s also a dual-use technology – it depends on how, as a society, we decide to deploy it – and what we use it for.”
How have the regulators responded?
Rather than leave such decisions up to Silicon Valley, regulators around the world are scrambling to put guardrails in place. The European Union is considering a new legal framework, known as the Artificial Intelligence Act, which aims to “strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use”. Essentially, it uses a classification system that determines the level of risk an AI technology could pose to a person’s health, safety or fundamental rights. There are four risk tiers: unacceptable, high, limited and minimal. At the unacceptable end of that scale sits technology that should simply not be built at all.