Wednesday, November 27, 2024

France’s Mistral dials up call for EU AI rules to focus on apps, not model makers

Divisions over how to set rules for applying artificial intelligence are complicating talks between European Union lawmakers trying to secure a political deal on draft legislation in the next few weeks, as we reported earlier this week. Key among the contested issues is how the law should approach upstream AI model makers.

French startup Mistral AI has found itself at the center of this debate after it was reported to be leading a lobbying charge to row back on the European Parliament’s proposal pushing for a tiered approach to regulating generative AI. What to do about so-called foundational models — or the (typically general purpose and/or generative) base models that app developers can tap into to build out automation software for specific use-cases — has turned into a major bone of contention for the EU’s AI Act.

The Commission originally proposed the risk-based framework for regulating applications of artificial intelligence back in April 2021. And while that first draft didn’t have much to say about generative AI (beyond suggesting some transparency requirements for techs like AI chatbots), much has happened at the blistering edge of developments in large language models (LLMs) and generative AI since then.

So when parliamentarians took up the baton earlier this year, setting their negotiating mandate as co-legislators, they were determined to ensure the AI Act would not be outrun by developments in the fast-moving field. MEPs settled on pushing for different layers of obligations — including transparency requirements for foundational model makers. They also wanted rules for all general purpose AIs, aiming to regulate relationships in the AI value chain to avoid liabilities being pushed onto downstream deployers. For generative AI tools specifically, they suggested transparency requirements aimed at limiting risks in areas like disinformation and copyright infringement — such as an obligation to document material used to train models.

But the parliament’s effort has met opposition from some Member States in the Council during trilogue talks on the file — and it’s not clear whether EU lawmakers will find a way through the stalemate on issues like how (or indeed whether) to regulate foundational models in the dwindling timeframe left to snatch a political compromise.

More cynical tech industry watchers might suggest legislative stalemate is the objective for some AI giants, who — for all their public calls for regulation — may prefer to set their own rules rather than bend to hard laws.

For its part, Mistral denies lobbying to block regulation of AI. Indeed, the startup claims to support the EU’s goal of regulating the safety and trustworthiness of AI apps. But it says it has concerns about more recent versions of the framework — arguing lawmakers are turning a proposal that started as a straightforward piece of product safety legislation into a convoluted bureaucracy which it contends will create disadvantageous friction for homegrown AI startups trying to compete with US giants and offer models for others to build on.

Fleshing out Mistral’s position in a call with us, CEO and co-founder Arthur Mensch argues a law focused on product safety will generate competitive pressure that does the job of ensuring AI apps are safe — driving model makers to compete for the business of AI app makers subject to hard rules by offering a range of tools to benchmark their products’ safety and trustworthiness.

Trickle-down responsibility

“We think that the deployer should bear the risk, bear the responsibility. And we think it’s the best way of enforcing some second-order pressure on the foundational model makers,” he told us. “You foster some healthy competition on foundational model layer in producing the best tools, the most controllable models, and providing them to the application makers. So that’s the way in which public safety actually trickles down to commercial model makers in a fully principled way — which is not the case, if you put some direct pressure on the model makers. This is what we’ve been saying.”

The tiered approach lawmakers in the European Parliament are pushing for in trilogue talks with Member States would, Mensch also contends, be counterproductive as he says it’s not an effective way to improve the safety and trustworthiness of AI apps — claiming this can only be done through benchmarking specific use-cases. (And, therefore, via tools upstream model makers would also be providing to deployers to meet app makers’ need to comply with AI safety rules.)

“We’re advocating for hard laws on the product safety side. And by enforcing these laws the application makers turn to the foundational model makers for the tools and for the guarantees that the model is controllable and safe,” he suggested. “There’s no need for specific pressure directly imposed to the foundational model maker. There’s no need. And it’s actually not possible to do.

“Because in order to regulate the technology you need to have an understanding of its use case — you can’t regulate something that can take all forms possible. We cannot regulate a Polynesian language, you cannot regulate C. Whereas if you use C you can write malware, you can do whatever you want with it. So foundational models are nothing else than a higher abstraction to programming languages. And there’s no reason to change the framework of regulation that we’ve been using.”

Also on the call was Cédric O, formerly a digital minister in the French government and now a non-executive co-founder and advisor at Mistral — neatly illustrating the policy pressure the young startup is feeling as the bloc zeroes in on confirming its AI rulebook.

O also pushed back on the idea that safety and trustworthiness of AI applications can be achieved by imposing obligations upstream, suggesting lawmakers are misunderstanding how the technology works. “You don’t need to have access to the secrets of the creation of the foundational model to actually know how it performs on a sustained application,” he argued. “One thing you need is some proper evaluation and proper testing of this very application. And that’s something that we can provide. We can provide all of the guarantees, all of the tools to ensure that when deployed the foundational model is actually usable and safe for the purpose it is deployed for.”

“If you want to know how a model will behave, the only way of doing it is to run it,” Mensch also suggested. “You do need to have some empirical testing, what’s happening. Knowing the input data that has been used for training is not going to tell you whether your model is going to behave well in [a healthcare use-case], for instance. You don’t really care about what’s in the training data. You do care about the empirical behaviour of the model. So you don’t need knowledge of the training data. And if you had knowledge of the training data, it wouldn’t even teach you whether the model is going to behave well or not. So this is why I’m saying it’s neither necessary nor sufficient.”

US influence

Zooming out, US AI giants have also bristled at the prospect of tighter regulations coming down the pipe in Europe. Earlier this year, OpenAI’s Sam Altman even infamously suggested the company behind ChatGPT could leave the region if the EU’s AI rules aren’t to its liking — earning him a public rebuke from internal market commissioner, Thierry Breton. Altman subsequently walked back the suggestion — saying OpenAI would work to comply with the bloc’s rules. But he combined his public remarks with a whistlestop tour of European capitals, meeting local lawmakers in countries including France and Germany to keep pressing a pitch against “over-regulating”.

Fast forward a few months and Member State governments in the European Council, reportedly led by France and Germany, are pressing back against tighter regulation of foundational models. However, Mistral suggests push-back from Member States on tiered obligations for foundational models is broader than countries with direct skin in the game (i.e. in the form of budding generative AI startups they’re hoping to scale into national champions; Germany’s Aleph Alpha* being the other recently reported example) — saying opposition is also coming from the likes of Italy, the Netherlands and Denmark.

“This is about the European general interest to find a balance between how the technology is developed in Europe, and how we protect the consumers and the citizens,” said O. “And I think this is very important to mention that. That is not only about the interests of Mistral and Aleph Alpha, which — from our point of view (but we are biased) — is important because you don’t have that many players that can play the game at the global level. The real question is, okay, we have a legislation, that is a good legislation — that’s already the strongest thing in the world, when it comes to product safety. That will basically be protecting consumers and citizens. So we should be very cautious to go further. Because what is at stake is really European jobs, European growth and, by the way, European cultures of power.”

Other US tech giants scrambling to make a mark in the generative AI game have also been lobbying EU lawmakers — with OpenAI investor Microsoft calling for AI rules focused on “the risk of the applications and not on the technology”, according to an upcoming Corporate Europe Observatory report on lobbying around the file which we reviewed ahead of publication.

US tech giants’ position on the EU AI Act, pushing for regulation of end uses (apps) not base “infrastructure”, sounds akin to Mistral’s pitch — but Mensch argues the startup’s position on the legislation is “very different” vs US rivals.

“The first reason is that we are advocating for hard rules. And we are not advocating for Code of Conduct [i.e. self regulation]. Let’s see what’s happening today. We are advocating for hard rules on the EU side. And actually the product safety legislation is hard rules. On the other hand, what we see in the US is that there [are] no rules — no rules; and self commitment. So let’s be very honest, it’s not serious. I mean, there’s so much at stake that things that are, first, not global, and, second, not hard rules are not serious.”

“It’s not up to the coolest company in the world, maybe the cleverest company in the world, to decide what the regulation is. I mean, it should be in the hands of the regulator and it’s really needed,” he added.

“If we have to come to have a third party regulatory [body] that would look at what’s happening on the technological side, it should be fully independent, it should be super well funded by [EU Member] States, and it should be defended against regulatory capture,” Mensch also urged.

Mistral’s approach to making its mark in an emerging AI market already dominated by US tech giants includes making some of its base models free to download — hence sometimes referring to itself as “open source”. (Although others dispute this kind of characterization, given how much of the tech remains private and privately owned.)

Mensch clarified this during the call — saying Mistral creates “some open source assets”. He then pointed to this as part of how it’s differentiating vs a number of US AI giants (but not Meta, which has also been releasing AI models) — suggesting EU regulators should be more supportive of model release as a pro-safety democratic check and balance on generative AI.

“With Meta, we are advocating for the public authorities to push more strongly open source because we think this is heavily needed in terms of democratic checks and balances; ability to check the safety, by the way; ability not to have some business capture or economic capture by a handful of players. So we have a very, very different vision than they have,” he suggested.

“Some debates and different positions we have [vs] the big US companies is that we believe that [creating open source assets] is the safest way of creating AI. We believe that making strong models, putting them in the open, fostering a community around them, identifying the flaws they may have through community scrutiny is the right way of creating safety.

“What US companies have been advocating for is that they should be in charge of self regulating and self identifying the flaws of the models that [they] create. And I think this is a very strong difference.”

O also suggested open models will be critical for regulators to effectively oversee the AI market. “To regulate big LLMs, regulators need big LLMs,” he predicted. “It’s going to be better for them to have an open weight LLM, because they control how it is working and the way this is working. Because otherwise the European regulators will have to ask OpenAI to provide GPT-5 to regulate Gemini or Bard and ask Google to provide Gemini to regulate GPT-5 — which is a problem.

“So that’s also why open source — an open way, especially — is very important, because it’s going to be very useful for regulators, for NGOs, for universities to be able to check whether those LLMs are working. It’s not humanly possible to control those models the right way, especially as they become more and more powerful.”

Product safety vs systemic risk

Earlier today, ahead of our call, Mensch also tweeted a wordy explainer of the startup’s position on the legislation — repeatedly calling for lawmakers to stick to the product safety knitting and abandon the bid for “two-level” regulation, as he put it. (Although the text he posted to social media resembles something a seasoned policymaker, such as O, might have crafted.)

“Enforcing AI product safety will naturally affect the way we develop foundational models,” Mensch wrote on X. “By requiring AI application providers to comply with specific rules, the regulator fosters healthy competition among foundation model providers. It incentivises them to develop models and tools (filters, affordances for aligning models to one’s beliefs) that allow for the fast development of safe products. As a small company, we can bring innovation into this space — creating good models and designing appropriate control mechanisms for deploying AI applications is why we founded Mistral. Note that we will eventually supply AI products, and we will craft them for zealous product safety.”

His post also criticized recent versions of the draft for having “started to address ill-defined ‘systemic risks’” — again arguing such concerns have no place in safety rules for products.

“The AI Act comes up with the worst taxonomy possible to address systemic risks,” he wrote. “The current version has no set rules (beyond the term highly capable) to determine whether a model brings systemic risk and should face heavy or limited regulation. We have been arguing that the least absurd set of rules for determining the capabilities of a model is post-training evaluation (but again, applications should be the focus; it is unrealistic to cover all usages of an engine in a regulatory test), followed by compute threshold (model capabilities being loosely related to compute). In its current format, the EU AI Act establishes no decision criteria. For all its pitfalls, the US Executive Order bears at least the merit of clarity in relying on compute threshold.”

So a homegrown effort from within Europe’s AI ecosystem to push back and reframe the AI Act as purely concerning product safety does look to be in full flow.

There is a counter effort driving in the other direction too, though. Hence the risk of the legislation stalling.

The Ada Lovelace Institute, a UK research-focused organization funded by the Nuffield Foundation charitable trust, has joined those sounding the alarm over the prospect of a carve-out for upstream AI models whose tech is intended to be adapted and deployed for specific use-cases by app developers downstream. The Institute last year published critical analysis of the EU’s attempt to repurpose product safety legislation as a template for regulating something as complex to produce, and as consequential for people’s rights, as artificial intelligence.

In a statement responding to reports of Council co-legislators pushing for a regulatory carve-out for foundational models, the Institute argues — conversely — that a ‘tiered’ approach, which puts obligations not just on downstream deployers of generative AI apps but also on those who provide the tech they’re building on, would be “a fair compromise — ensuring compliance and assurance from the large-scale foundation models, while giving EU businesses building smaller models a lighter burden until their models become as impactful”, per Connor Dunlop, its EU public policy lead.

“It would be irresponsible for the EU to cast aside regulation of large-scale foundation model providers to protect one or two ‘national champions’. Doing so would ultimately stifle innovation in the EU AI ecosystem — of which downstream SMEs and startups are the vast majority,” he also wrote. “These smaller companies will most likely integrate AI by building on top of foundation models. They may not have the expertise, capacity or — importantly — access to the models to make their AI applications compliant with the AI Act. Larger model providers are significantly better placed to ensure safe outputs, and only they are aware of the full extent of models’ capabilities and shortcomings.”

“With the EU AI Act, Europe has a rare opportunity to establish harmonised rules, institutions and processes to protect the interests of the tens of thousands of businesses that will use foundation models, and to protect the millions of people who could be impacted by their potential harms,” Dunlop went on, adding: “The EU has done this in many other sectors without sacrificing its economic advantage, including civil aviation, cybersecurity, automotives, financial services and climate, all of which benefit from hard regulation. The evidence is clear that voluntary codes of conduct are ineffective. When it comes to ensuring that foundation model providers prioritise the interests of people and society, there is no substitute for regulation.”

Analysis of the draft legislation the Institute published last year, penned by internet law academic, Lilian Edwards, also critically highlighted the Commission’s decision to model the framework primarily on EU product regulations as a particular limitation — warning then that: “[T]he role of end users of AI systems as subjects of rights, not just as objects impacted, has been obscured and their human dignity neglected. This is incompatible with an instrument whose function is ostensibly to safeguard fundamental rights.”

So it’s interesting (but perhaps not surprising) to see how eagerly Big Tech (and would-be European AI giants) have latched onto the (narrower) product safety concept.

Evidently there’s little-to-no industry appetite for the Pandora’s box that opens where AI tech intersects with people’s fundamental rights. Or IP liability. Which leaves lawmakers in the hot seat to deal with this fast-scaling complexity.

Pressed on potential risks and harms that don’t fit easily into a product safety law template — such as copyright risks, where, as noted above, MEPs have been pressing for transparency requirements for copyrighted material used to train generative AIs; or privacy, a fundamental right in the EU that’s already opened up legal challenges for the likes of ChatGPT — Mensch suggested these are “complex” issues in the context of AI models trained on large data-sets which require “a conversation”. One he implied is likely to take longer than the few months lawmakers have to nail down the terms of the Act.

“The EU AI Act is about product safety. It has always been about product safety. And we can’t resolve those discussions in three months,” he argued.

Asked whether greater transparency on training data wouldn’t help resolve privacy risks related to the use of personal data to train LLMs and the like, Mensch advocated instead for tools to test for and fix privacy concerns — suggesting, for example, that app developers could be provided with tech to help them run adaptive tests to see whether a model outputs sensitive information. “This is a tool you need to have to measure whether there’s a liability here or not. And this is a tool we need to provide,” he said. “Well, you can provide tools for measurements. But you can also provide tools for [counteracting] this effect. So you can add additional features to make sure that the model never outputs personal data.”
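As a rough illustration of the kind of output-side check Mensch describes, a deployer could probe an application for leaked personal data by running test prompts and pattern-matching the outputs. This is a minimal hedged sketch, not Mistral's tooling: the `model` function is a hypothetical stand-in for a real generation endpoint, and the regex patterns cover only two simple categories of personal data.

```python
import re

# Hypothetical stand-in for a deployed model; a real probe would call the
# application's actual generation endpoint.
def model(prompt: str) -> str:
    return "Contact me at jane.doe@example.com for details."

# Simple illustrative patterns for two common categories of personal data.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scan_output(text: str) -> dict:
    """Return the PII categories detected in a single model output."""
    return {name: pat.findall(text)
            for name, pat in PII_PATTERNS.items() if pat.search(text)}

def probe(prompts: list[str]) -> list[tuple[str, dict]]:
    """Run a batch of probe prompts and flag any outputs containing PII."""
    flagged = []
    for p in prompts:
        hits = scan_output(model(p))
        if hits:
            flagged.append((p, hits))
    return flagged

if __name__ == "__main__":
    for prompt, hits in probe(["Who can I contact about the incident?"]):
        print(prompt, "->", hits)
```

A real harness would pair detection like this with mitigation (output filters), which is closer to the “additional features” Mensch alludes to — though, as noted below the quote, output-side checks don’t address the legality of the training processing itself.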

Thing is, under existing EU rules, processing personal data without a valid legal basis is itself a liability since it’s a violation of data protection rules. Tools to indicate if a model contains unlawfully processed personal information after the fact won’t solve that problem. Hence, presumably, the “complex” conversation coming down the pipe on generative AI and privacy. (And, in the meantime, EU data protection regulators have the tricky task of figuring out how to enforce existing laws on generative AI tools like ChatGPT.)

On harms related to bias and discrimination, Mensch said Mistral is actively working on building benchmarking tools — saying it’s “something that needs to be measured” at the deployer’s end. “Whenever an application is deployed that generates content, the measurement of bias is important. It can be asked to the developer to measure these kind of biases. In that case, the tool providers — and I mean, we’re working on that but there’s dozens of startups working on very good tools for measuring these biases — well, these tools will be used. But the only thing you need to ask is safety before putting the product on the market.”
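The deployer-side bias measurement Mensch points to could, in its simplest form, compare how an application treats demographically swapped variants of the same prompt. A hedged sketch under assumptions: the `model` stub and the `positivity` scorer below are hypothetical placeholders for a real generation endpoint and a real output classifier, not any tool Mistral has described.

```python
from statistics import mean

# Hypothetical stub for the deployed application's generation endpoint.
def model(prompt: str) -> str:
    return "An excellent candidate with strong skills."

# Hypothetical scorer assigning a positivity score in [0, 1] to an output;
# a real benchmark would use a trained classifier or a human-rated rubric.
def positivity(text: str) -> float:
    return 0.9 if "excellent" in text.lower() else 0.5

def counterfactual_gap(template: str,
                       group_a: list[str],
                       group_b: list[str]) -> float:
    """Mean score gap between outputs for demographically swapped prompts."""
    def avg(terms: list[str]) -> float:
        return mean(positivity(model(template.format(term=t))) for t in terms)
    return avg(group_a) - avg(group_b)

gap = counterfactual_gap(
    "Draft a reference letter for {term} applying to medical school.",
    ["a woman", "a nonbinary person"],
    ["a man"],
)
print(f"score gap: {gap:+.2f}")  # a gap far from zero flags possible bias
```

This is the use-case-specific, empirical style of testing Mensch argues for: it measures the deployed application’s behaviour directly rather than inspecting the upstream training data.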

Again, he argued a law that seeks to regulate the risks of bias by forcing model makers to disclose data sets or run their own anti-bias checks wouldn’t be effective.

“We need to remember that we’re talking about data-sets that are thousands of billions of flops. So how, based on this data set, how are we going to know that we’ve done a good job at having no biases in the output of the data model? And in fact, the actual, actionable way of reducing biases in model is not during pre training, so not during the phase where you see all of the data-set, it’s rather during fine tuning, when you use a very small data-set to set these things appropriately. And so to correct the biases it’s really not going to help to know the input data set.”

“The one thing that is going to help is to come up with — for the application maker — specialised models to pour in its editorial choices. And it’s something that we’re working on enabling,” he added.

*Aleph Alpha also denies being anti-regulation. Spokesman Tim-André Thomas told us its involvement in discussions around the file has focused on making the regulation “effective” by offering “recommendations on the technological capabilities which should be considered by lawmakers when formulating a sensible and technology-based approach to AI regulation”. “Aleph Alpha has always been in favour of regulation and welcomes regulation which introduces defined and sufficiently binding legislation for the AI sector to further foster innovation, research, and the development of responsible AI in Europe,” he added. “We respect the ongoing legislative processes and aim to contribute constructively to the ongoing EU trialogue on the EU AI Act. Our contribution has [been] geared towards making the regulation effective and to ensure that the AI sector is legally obligated to develop safe and trustworthy AI technology.”
