To address this, the European Union adopted a groundbreaking regulation laying down harmonised rules on artificial intelligence: the AI Act. This Act aims to ensure that all AI tools used by or impacting the European market are safe and trustworthy by establishing standards for AI systems, fostering transparency and innovation while safeguarding the rights of EU citizens. The Act takes a risk-based approach to impose a wide range of legal, technical and organisational obligations, with non-compliance potentially resulting in fines or injunctions to cease use, for companies both locally and globally.
The new Act signifies a pivotal milestone with its broad applicability and wide reach across borders. As a result, nearly every company across the globe has to commence work on its own compliance trajectory, identify the obligations it is subject to, and adhere to them.
Every company engaged in developing, commercialising, using, or benefiting from AI is subject to some obligations under the Act. Hence, the Act is not solely for big tech companies or data scientists. Conventional companies that do not provide IT services but use AI for internal purposes will have their own set of obligations to comply with. The same goes for parties importing or distributing AI systems between suppliers and clients.
This extends beyond companies located or operating in the EU.
This Act is applicable to AI providers who introduce AI systems or general-purpose AI models to the EU market or put them into service, irrespective of their location, whether inside the Union or in a third country. It also applies to deployers of AI systems who are established or located within the Union, as well as to individuals within the EU who are impacted by these systems.
While the EU AI Act comes into effect gradually from 2025 onwards, the time to act is now! Proactive organisations are not merely adhering to the rules; they are setting a standard with AI systems that are synonymous with quality and innovation.
Setting up quality AI systems in a responsible and ethical way comes with its own set of challenges. Based on our expertise and experience within PwC and our firmwide connections worldwide, as well as our collaboration with the law firm PwC Legal, we foresee the following challenges arising for companies seeking compliance within the tight timeline provided:
A key step in determining a company’s obligations under the EU AI Act is to assess the risk qualification of its AI systems. The Act divides AI systems into four categories depending on the risks they present. The first category of AI systems presents a critical risk to European fundamental rights and values, and is therefore prohibited. The second category, high-risk AI systems, can have a significant impact on health, safety or fundamental rights and must therefore comply with a wide range of obligations to mitigate these risks. It will thus be important to correctly identify the risk level of an AI system. The exact scope of some provisions (including the way of assessing risks) remains open to interpretation. In the absence of guidelines from the competent authorities and settled interpretation by the courts, companies face legal uncertainty.
Due to the rapidly evolving nature of the field, EU legislators have opted for a wide range of self-assessments, rather than introducing clear-cut obligations on companies. This, coupled with the difficulty of obtaining a clear cross-sectoral overview, may hinder companies in completing adequate risk assessments and accurately ascertaining their classification. Time and resource constraints will further impact many companies, which will likely have to prioritise where they start on their compliance journey.
Organisations must consider obligations beyond the AI Act, such as other regulations and industry standards, to maintain a competitive edge. If an AI system processes personal data, companies must comply with the GDPR. Depending on the sector in which the AI system is used, other legal obligations might apply. In addition, ISO 42001 sets criteria for trustworthy AI systems throughout their lifecycle. Companies can also implement internal policies regarding the acceptable use of AI systems within the company to ensure the protection of trade secrets and confidential information. Combining those requirements into one unified regulatory path will represent a major challenge for companies.
With AI being an emerging area for regulators, it is unclear how they will react, and enforcement approaches may vary across regions and countries. In the Belgian context, with its complex and fragmented regulatory oversight and practices, it also remains to be seen which entities and businesses will fall under which regulators.
Aside from issues like data quality, reliance on third-party AI models means organisations have no direct control over model updates or changes, which creates technical dependencies that can cause compliance delays.
Distilling best practices for effective implementation is one of the biggest hurdles leaders and managers are currently facing. If this resonates with you, take comfort in knowing you're not alone in navigating this dynamic terrain.
At PwC, we have the expertise and experience to help you navigate the various challenges along your journey towards AI adoption. From strategy and governance to implementation and continued assistance, we offer the multi-disciplinary know-how our clients need to stay competitive and future-ready.
To harness the full potential of (Gen)AI and transform industries, there needs to be trust in your AI – which is also key to complying with the EU’s new AI Act. This means embedding privacy and security measures from the outset, implementing good governance, enabling transparent and accountable decision making, and more.
AI is transforming businesses and shaping innovation. But, with the introduction of the new EU AI Act, how can you ensure that your GenAI roadmap complies with the new legislation? At PwC, we look at Responsible AI from 3 dimensions: What are your legal obligations? What can you do to ensure the safe, trustworthy and transparent use of AI?
The new EU regulations aim to ensure that AI development, commercialisation and usage is safe, transparent and trustworthy for EU residents. But what does your company need to do to comply with the new EU AI Act? And what are the potential financial and reputational consequences for non-compliance? Contact PwC Legal to discover how to implement Responsible AI that works for your business and complies with all relevant legislation.
Want to know more about how the recent EU AI Act impacts your organisation and the necessary steps for compliance?
Artificial Intelligence isn't just a set of tools. AI heralds a new era.
Partner Technology Consulting & Innovation, PwC Belgium
Tel: +32 495 59 08 40