Europe’s new Artificial Intelligence Act can demand that AI models be retrained or deleted

WHY THIS MATTERS IN BRIEF

Regulating AI, which is often seen as a black box, is at best incredibly difficult and at worst impossible, especially as we have now seen AIs spontaneously evolve several times. But it still needs to be regulated.

 


No one likes a biased or rogue AI, and while you can retrain the former, even Google hasn’t figured out how to kill the latter – even though they’ve tried creating AI kill switches. Now, though, the European Union thinks it has the answer to rogue AIs – just press the delete key – and it has created a new regulatory framework to back it up.

The European Union’s planned risk-based framework for regulating AI includes powers for oversight bodies to order the withdrawal of a commercial AI system – by deleting it if necessary – or to require that an AI model be retrained if it’s deemed high risk, according to an analysis of the proposal by a legal expert.

 

That suggests there’s significant enforcement firepower lurking in the EU’s (not yet adopted) Artificial Intelligence Act — assuming the bloc’s patchwork of Member State-level oversight authorities can effectively direct it at harmful algorithms to force product change in the interests of fairness and the public good.

The draft Act continues to face criticism over a number of structural shortcomings — and may still fall far short of the goal of fostering broadly “trustworthy” and “human-centric” AI, which EU lawmakers have claimed for it. But, on paper at least, there appear to be some potent regulatory powers.

The European Commission put out its proposal for an AI Act just over a year ago — presenting a framework that prohibits a tiny list of AI use cases, such as a China-style social credit scoring system, considered too dangerous to people’s safety or EU citizens’ fundamental rights to be allowed, while regulating other uses based on perceived risk — with a subset of “high risk” use cases subject to a regime of both ex ante (before) and ex post (after) market surveillance.

In the draft Act, high-risk systems are explicitly defined as: Biometric identification and categorisation of natural persons; Management and operation of critical infrastructure; Education and vocational training; Employment, workers management and access to self-employment; Access to and enjoyment of essential private services and public services and benefits; Law enforcement; Migration, asylum and border control management; Administration of justice and democratic processes.

 

Under the original proposal, almost nothing is banned outright — and most use cases for AI won’t face serious regulation under the Act, as they would be judged to pose “low risk” and so be largely left to self-regulate — with a voluntary code of standards and a certification scheme to recognize compliant AI systems.

There is also another category of AIs, such as deepfakes and chatbots, which are judged to fall in the middle and are given some specific transparency requirements to limit their potential to be misused and cause harm.

The Commission’s proposal has attracted a fair amount of criticism already — such as from civil society groups who warned last fall that the proposal falls far short of protecting fundamental rights from AI-fuelled harms like scaled discrimination and blackbox bias.

A number of EU institutions have also called explicitly for a more comprehensive ban on remote biometric identification than the Commission chose to include in the Act (which is limited to law enforcement use and riddled with caveats).

Despite that, major revisions to the proposal seem unlikely at this relatively late stage of the EU’s co-legislative process. But the Council and Parliament are still debating their positions — and final agreement isn’t expected before 2023 — so there is potential for some detail (if not the entire legislative structure) to be tweaked.

An analysis of the Act for the U.K.-based Ada Lovelace Institute by a leading internet law academic, Lilian Edwards, who holds a chair in law, innovation and society at Newcastle University, highlights some of the limitations of the framework — which she says derive from it being locked to existing EU internal market law; and, specifically, from the decision to model it along the lines of existing EU product regulations.

 

Those EU-specific limitations mean it’s not necessarily the best template for other regions to look to when thinking about how they should regulate AI, she suggests, despite the EU often having ambitions to translate its first-mover legislative activity in the digital sphere into a global standards-setting role. (Other limitations on the EU’s competence mean the Act can’t touch on military uses of AI at all, for example, most of which you’d expect to be risk-ridden by default.)

Unsurprisingly, to anyone with a passing understanding of machine learning, physical product regulations for things like washing machines and toys don’t map well onto AI — given the obviously large differences between a manufactured thing being put onto the market and an AI system which may be based on a model created by one entity for a certain purpose and deployed by a very different entity for an entirely distinct use, possibly after being fed different training data along the way.

Nonetheless, the AI Act puts the onus of duties and rights on an initial “provider” (aka “manufacturer”) of an AI system.

Edwards argues that’s far too limited a way to oversee how AI is developed and deployed — joining others in recommending that the Act’s category of AI “users”, who only have a “highly limited” regulated role, should be renamed “deployers” and given duties commensurate to their actual responsibility for how the AI system is being applied, however complex that may be to figure out.

“Translating this complex web of actors, data, models and services into a legal regime that places duties and rights on certain identifiable actors is extremely hard,” she writes. “The Act fails to take on the work, which is admittedly difficult, of determining what the distribution of sole and joint responsibility should be contextually throughout the AI lifecycle, to protect the fundamental rights of end users most practically and completely. It can be compared unfavourably to recent developments in GDPR case law, where courts are attempting to distribute responsibility for data protection among various controllers at the most relevant times.”

 

Another major shortfall she discusses in the paper is the lack of any recourse in the Act for actual humans to raise complaints about the impact of an AI system upon them personally (or upon a group of people) — which stands in stark contrast to the EU’s existing data protection framework, GDPR, which both enables individual complaints and allows for collective remedy by empowering civil society to complain on behalf of affected individuals.

“By deriving the design of the AI Act primarily from product safety and not from other instruments, the role of end users of AI systems as subjects of rights, not just as objects impacted, has been obscured and their human dignity neglected. This is incompatible with an instrument whose function is ostensibly to safeguard fundamental rights,” is her concise assessment there.

She is also critical of the “arbitrary” — most likely politically informed — list of systems the Commission has said should be prohibited, without providing an explanation of how it came up with this handful of banned items.

Nor, she says, does the Act allow for changes or additions to the prohibited list, or for new top-level categories to be added to the high-risk section, which she assesses as another unfortunate limitation.

In capping these prohibited and high-risk lists the Commission likely had its eye on creating certainty for the market — as it seeks to encourage AI ‘innovation’ in parallel. Yet its rhetoric around the Act has — for years — been heavy with highfalutin talk of fashioning ethical guardrails for “human-centric” AI that reflects European values. So its balance there looks dubious.

While Edwards’ paper is framed as a critique she has plenty of praise for the EU Act too — describing it as “the world’s first comprehensive attempt to regulate AI, addressing issues such as data-driven or algorithmic social scoring, remote biometric identification and the use of AI systems in law enforcement, education and employment”.

 

“[T]he AI Act is itself an excellent starting point for a holistic approach to AI regulation,” she also writes in the paper, before segueing into cautioning the rest of the world against copypasting an “ambitious, yet flawed, regime held in place by the twin constraints of the [EU’s] New Legislative Framework… and the legislative basis of EU law” (hence “in this paper, it seems important to flag the debates both EU policymakers and the world beyond the EU should be having at this crucial regulatory turning point”).

Still, as it stands — assuming no substantial amendments to address key criticisms — the legislation will most likely have a “relatively minimal” effect, she tells us.

“In a funny kind of way what the EU has done is they’ve put most of their eggs in DSA [Digital Services Act], DMA [Digital Markets Act],” she suggests, referring to two other pieces of in-train EU digital legislation focused on updating e-commerce rules and reforming the EU’s competition regime, when asked what impact she thinks the Act will have. “Because if you look at what’s in high risk, and this is another obvious point but it can’t be said enough, high risk currently doesn’t include most of the AI that people meet every day of their lives.

“It doesn’t include search, it doesn’t include social networking, it doesn’t include profiling — unless you’re basically the Chinese state! It’s really focusing on things that are already essentially regulated by the markets, which is things like credit scoring, where there’s a very long history of people checking to see if the algorithm seems fair, and things like that… And regulating the state, who again should have been susceptible to scrutiny forever. At least in the U.K. there would have been judicial review and so forth. But — in reality — we know that the state is doing bad things with algorithms… so in many ways that’s what it’s really aimed at. The recent scare stories, sentencing systems, crime prediction, all that kind of stuff.”

 

Whether the AI Act will even be able to stop what are already ample harms from state-deployed algorithms in the public sector remains to be seen. Not least given how difficult it can be to understand how these AI systems work and how exactly they’re being deployed.

“Do even campaigners understand the technology well enough to agitate against the state under the AI Act?” wonders Edwards. “So I wonder if its impact will actually be a lot less than people think.”

But one bright spot for defenders of fundamental rights inside the EU — attached to the aforementioned powers set out in Article 65 of the Act — is Edwards’ assessment that it empowers public oversight bodies to take decisive steps to prevent damage by high-risk AI systems, enabling them to order the withdrawal of commercial AI systems from the market (which she argues is akin to ordering a model’s destruction). These powers could also let oversight authorities order the retraining of AI models, per Edwards.

Such a power isn’t a given under GDPR, in her view, where individual users have a right to request deletion of their personal data — but it’s less clear-cut how (or whether) data protection authority (DPA) regulators are able to wield deletion powers.

In recent GDPR-related complaints in the EU against the controversial AI company Clearview, the company has been ordered to delete European citizens’ data — but none of the regulators could, seemingly, order the destruction of the underlying model trained on all those stolen selfies.

There’s also the case of the emergency procedure taken by the Italian DPA against TikTok last year, related to child safety concerns around viral challenges — which led to the social network agreeing to delete hundreds of thousands of accounts it could not confirm belonged to users over the minimum age.

 

Just imagine if the EU regulator had had the power to order TikTok to pull its algorithm or — at least — retrain its AI so that it no longer posed specific child safety concerns… Admittedly, TikTok’s AI would be unlikely to fall under the AI Act’s high-risk scope — but Clearview’s AI, which is sold to law enforcement, very clearly would.

“It’s incredibly wide-set powers [in the AI Act],” says Edwards. “It says that the market surveillance authority — which is probably going to be the data protection authority in most EU states — it’s got lots of powers under the market surveillance legislation — but it also explicitly says they will be able to ‘take all appropriate corrective actions to bring the AI system into compliance including withdrawing it from the market, recalling it commensurate with the nature of the risk as it may prescribe’ — it’s incredibly wide! And they can see that ‘all appropriate corrective action is taken’ etc.

“So I think this almost explicitly — and there are other places — says that you can require the system to be withdrawn from the market which is equivalent, really, to having it deleted. But also I think there is enough scope there to require it to be retrained if data had been removed, which I think is very interesting. Because we’re not getting that from the GDPR — at least I don’t think we are.”

“The powers of market surveillance authorities are terribly wide,” she adds. “Obviously that’s going to have to be interpreted by individual, national authorities but obviously this is a regulation, not a directive, so it’s not simply up to the Member States.

 

“I think this does give wider powers. If you think about it as like consumer products — which is what this is based on — you had the right to get dangerous toys recalled from the market or, in some EU countries, the right to get dangerous toys destroyed. You didn’t just say you could no longer import them. You actually had the right to get them destroyed and this is kind of parallel powers.”

Current GDPR powers, which may allow users to ask for their own data to be deleted and therefore removed from an AI model — or even potentially for a regulatory authority to ask for such deletion on their citizens’ behalf — don’t necessarily mean the model itself would be affected, argues Edwards. Hence she sees the AI Act amping up the regulatory firepower.

“What’s more likely [with the GDPR] is you’ll be able to ask for your data to be deleted from their database but that doesn’t really change the model — even if you ask for your data to be removed from the model it will probably not change the way it’s been trained. So these are not good remedies,” she argues.

“What you actually want is for the model to be either deleted or you want it to be totally retrained — perhaps with better data. And that is exactly what the AI Act is about [as regards high risk AI systems]. So then you could argue that the market surveillance authority could say now you are the provider of a new model, a new AI system that falls in high risk, you must meet [requirements in the Act, such as around data quality] and therefore you must do it all again — with your new data.

“So that gives you a chance to produce a much better product — a much fairer product — at the end than simply the powers that [DPA regulators have] under GDPR.”
