The EU AI Act: How AI Regulation Can Drive Business Advantage

This blog explores the impact of the EU's AI Act on businesses, emphasizing how this legislation ensures the safe and ethical use of AI and serves as a catalyst for innovation and growth.

Guðmundur Kristjánsson (GK)
3 min

AI regulation can drive business advantage. Let’s see it as a commercial boost, not a straitjacket.

It took several rounds of talks and drawn-out debate, but EU policymakers have finally agreed on the details of the pioneering Artificial Intelligence Act.

The Act is set to become the world’s first comprehensive AI law, a key piece of legislation that puts guardrails around the development and use of the technology.

The AI Act classifies AI systems by risk and mandates various development and use requirements to ensure that AI systems are safe, transparent, traceable, non-discriminatory, and environmentally friendly. 

It’s easy to be wary or critical of the legislation, and some claim that such regulation will stifle innovation, slow advances in the technology, and restrict opportunities for businesses in Europe.

But I don’t think this Act puts AI in a straitjacket; quite the opposite: Safer AI can create business advantage because the principles of the EU Act – such as human oversight, transparency, and ethics – can also be commercial benefits. 

What do I mean by that? Well, we know the new legislation focuses primarily on strengthening rules around data quality, transparency, human oversight, and accountability. These principles should be thought of not only as containing AI but also as amplifying its potential.

For example, ensuring human interaction and oversight means that AI is more likely to be viewed as a partner by human users, empowering and upskilling them to achieve more and operate at a higher level, faster – thus increasing their value to a business. We’ve estimated that this type of upskilling in compliance could save a Tier 1 bank between $15 million and $36 million annually in training and recruitment costs.

Because our AI systems are designed to enhance rather than replace human abilities, they can quickly augment the skills of workers.

In addition, building transparency and explainability into an AI system will accelerate trust in its decision-making and data, fuelling collaboration between humans and machines. The faster that users become confident in the capabilities of an AI system, the faster the benefits will be seen and the greater the productivity returns. 

Our conservative estimates from working with compliance teams show that a bank could save up to $100 million annually by using generative AI to cut financial crime investigation times from 2.5 hours to 25 minutes.

Accountability, too, can be seen as a regulatory principle that naturally drives growth and innovation rather than restricting development.

By owning the AI’s output and learning from its mistakes, organizations can make constant improvements, resulting in continuous progress and an ever-present drive for excellence (as well as commercial success).

It’s no surprise that we welcome the EU AI Act and its aim to mitigate the risks and encourage the opportunities promised by technological advances. 

Right from the start of Lucinity’s journey, we have adhered to an ethical pledge to make sure we use Artificial Intelligence in a way that empowers individuals, safeguards societies, and ensures integrity in the financial ecosystem. You can read it at https://lucinity.com/ethical-ai-pledge.

We have also begun a risk assessment with an independent third party, which we’re confident will confirm that Lucinity’s use of generative AI meets all requirements for continuing to operate within the EU and would be deemed minimal risk under the new regulatory framework (the lowest level of risk, which covers most of the AI systems we interact with today).

Perhaps the biggest challenge for the regulators is the sheer pace of development in the AI industry. Just this week, we’ve seen Google launch Gemini, its new AI model, which it hopes will help narrow the gap in its race with OpenAI. Google claims that Gemini outperforms ChatGPT in tests and displays “advanced reasoning” across multiple formats, including an ability to view and mark a student’s physics homework.

I suppose the next question is: Can the regulators keep up with the speed of AI, both to harness its potential and to limit its possible harms? It will be fascinating to watch it play out.

Learn more about Lucinity and how we are leveraging generative AI to drive business growth.
