Navigate the challenges and achieve AI readiness

The EU AI Act 2024

  • Publication
  • 8 minute read
  • June 28, 2024

Currently at the forefront of innovation, Artificial Intelligence (AI) is transforming businesses with its vast capability to solve complex challenges. However, this limitless potential also gives rise to uncertainty, caution and scepticism, emphasising the need for responsible development. This then prompts the question: How do we instil trust and ensure the safety, transparency and fairness of AI solutions?

About the EU AI Act

To address this, the European Union adopted a groundbreaking regulation laying down harmonised rules on artificial intelligence: the Artificial Intelligence (AI) Act. The Act aims to ensure that all AI tools used by or impacting the European market are safe and trustworthy by establishing standards for AI systems, fostering transparency and innovation while safeguarding the rights of EU citizens. It takes a risk-based approach, imposing a wide range of legal, technical and organisational obligations, with non-compliance potentially resulting in fines or injunctions to cease use, for companies both locally and globally.

Who does the EU AI Act affect?

The new Act signifies a pivotal milestone with its broad applicability and wide reach across borders. As a result, nearly every company across the globe must begin its own compliance trajectory: defining the obligations it is subject to and adhering to them.

Coverage across the entire AI value chain

Every company engaged in developing, commercialising, or using AI to its benefit is subject to some obligations under the Act. The Act is therefore not solely for big tech companies or data scientists. Conventional companies that do not provide IT services but use AI for internal purposes will have their own set of obligations to comply with. The same goes for parties importing or distributing AI between suppliers and clients.

Wide territorial scope and reach

This extends beyond companies located or operating in the EU.  

This Act is applicable to AI providers who introduce AI systems or general-purpose AI models to the EU market or put them into service, irrespective of their location, whether inside the Union or in a third country. It also applies to deployers of AI systems who are established or located within the Union, as well as to individuals within the EU who are impacted by these systems.

While the EU AI Act's obligations will phase in gradually from 2025, the time to act is now! Proactive organisations are not merely adhering to the rules; they are setting a standard with AI systems that are synonymous with quality and innovation.

Challenges faced

Setting up quality AI systems in a responsible and ethical way comes with its own set of challenges. Based on our expertise and experience within PwC, our firmwide network worldwide, and our collaboration with the PwC Legal law firm, we foresee the following challenges for companies seeking compliance within the tight timeline provided:

1 Risk qualification

A key step in determining a company's obligations under the EU AI Act is to assess the risk qualification of the AI system. The Act divides AI systems into four categories depending on the risks they present. The first category comprises AI systems that present a critical risk to European fundamental rights and values, and are therefore prohibited. The second category, high-risk AI systems, can have a significant impact on health, safety or fundamental rights, and must therefore comply with a wide range of obligations to mitigate these risks. The remaining two categories, limited-risk systems (subject mainly to transparency obligations) and minimal-risk systems, carry lighter or no additional requirements. It will thus be important to correctly identify the risk level of an AI system. The exact scope of some provisions, including how risks are to be assessed, remains open to interpretation. In the absence of guidance from the competent authorities and of settled legal interpretation, companies face legal uncertainty.
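The tiered logic above can be sketched as a first-pass triage table. This is a minimal illustrative sketch only: the tier names mirror the Act's four categories, but every use-case string and mapping here is a hypothetical example, and an actual classification requires a legal assessment of the Act's prohibited-practices list and high-risk annex, not a keyword lookup.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk categories."""
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk (extensive obligations)"
    LIMITED = "limited-risk (transparency obligations)"
    MINIMAL = "minimal-risk (no additional obligations)"

# Hypothetical, highly simplified triage table for illustration only.
TRIAGE_TABLE = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "cv screening for recruitment": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def triage_use_case(description: str) -> RiskTier:
    """Return a first-pass tier for a known use case, defaulting to
    HIGH so that unrecognised cases are escalated for full review."""
    return TRIAGE_TABLE.get(description.strip().lower(), RiskTier.HIGH)
```

Note the deliberately conservative default: because misclassifying a high-risk system carries the heaviest compliance exposure, any use case not explicitly triaged is escalated rather than waved through.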

2 Extensive self-assessment and validation required

Due to the rapidly evolving nature of the field, EU legislators have opted for a wide range of self-assessments, rather than imposing clear-cut obligations on companies. This, coupled with the difficulty of obtaining a clear cross-sectoral overview, may hinder companies in completing adequate risk assessments and accurately ascertaining their classification. Time and resource constraints will further affect many companies, which will likely have to prioritise where they start on their compliance journey.

3 Navigating diverse regulations

Organisations must look beyond the AI Act to other regulations and industry standards in order to maintain a competitive edge. If an AI system processes personal data, companies must comply with the GDPR. Depending on the sector in which the AI system is used, other legal obligations might apply. In addition, ISO/IEC 42001 sets criteria for trustworthy AI systems throughout their lifecycle. Companies can also implement internal policies on the acceptable use of AI systems to protect trade secrets and confidential information. Combining these requirements into one unified regulatory path will be a major challenge for companies.

4 Risk of inconsistent enforcement practices

With AI being an emerging area of regulation, it is unclear how supervisory authorities will react, and enforcement approaches may vary across regions and countries. In the Belgian context, with its complex and fragmented regulatory oversight, it is still unclear which entities and businesses will fall under which regulators.

5 Compliance delays due to technical dependencies

Aside from issues such as data quality, reliance on third-party AI models means organisations have no direct control over model updates or changes, creating technical dependencies that can cause compliance delays.

How can PwC help

Distilling best practices for effective implementation is one of the biggest hurdles leaders and managers are currently facing. If this resonates with you, take comfort in knowing you're not alone in navigating this dynamic terrain.

At PwC, we have the expertise and experience to help you navigate the various challenges along your journey towards AI-adoption. From strategy and governance to implementation and continued assistance, we offer multi-disciplinary know-how required for our clients to stay competitive and future-ready.

Video

How can you ensure trust in your AI applications?

To harness the full potential of (Gen)AI and transform industries, there needs to be trust in your AI – which is also key to complying with the EU’s new AI Act. This means embedding privacy and security measures from the outset, implementing good governance, enabling transparent and accountable decision making, and more.

Video

Does your GenAI roadmap comply with the EU AI Act?

AI is transforming businesses and shaping innovation. But, with the introduction of the new EU AI Act, how can you ensure that your GenAI roadmap complies with the new legislation? At PwC, we look at Responsible AI from 3 dimensions: What are your legal obligations? What can you do to ensure the safe, trustworthy and transparent use of AI?


Whitepaper: Trustworthy AI

Want to know more about how the recent EU AI Act impacts your organisation and the necessary steps for compliance?

AI in Business

Artificial Intelligence isn't just a set of tools. AI heralds a new era.

Contact us

Wim Rymen

Partner, PwC Belgium

Tel: +32 473 26 92 27

Xavier Verhaeghe

Partner Technology Consulting & Innovation, PwC Belgium

Tel: +32 495 59 08 40

Connect with PwC Belgium