AI is expected to be a direct or indirect component of all processes and products along the entire value chain by 2030. Beyond its economic value, trust is an essential factor in deciding for or against the use of AI systems: trust in generative AI's performance, security, reliability and fairness. Indeed, PwC's Voice of the Consumer Survey shows that building trust requires incorporating and experimenting with AI tools in business operations while maintaining a human element, especially in more complex and personal services.
How does your AI usage measure up?
Every company that develops, commercialises, uses or benefits from AI is affected by at least some parts of the EU AI Act, as well as by the General Data Protection Regulation (GDPR), the Data Act, the Digital Services Act and other national and regional regulations. Compliance requires your company to consider trust from the outset, implementing good governance to realise the potential benefits of AI and manage its risks.
Following new standards such as ISO/IEC 42001, which focuses on establishing, implementing, maintaining and continually improving an artificial intelligence management system (AIMS), can help demonstrate compliance with the EU AI Act. Our experts can guide your AI processes to improve your company's compliance.
As companies explore generative AI and its possibilities, they are discovering a wide world of opportunities that create value. Unfortunately, trust is often treated as an afterthought, added only once value has been proven, even though this approach increases the risk to the company's reputation and makes it harder to comply with the EU AI Act and other regulations.
Trust needs to be incorporated at every stage of the process, from the design phase to the run phase. And everyone who works with AI, from the board of directors and senior leadership to sales representatives and admin staff, has a role in ensuring good governance of the system.
PwC uses proactive project assurance to help companies embed trust in all their AI systems from the outset. This involves taking a broader view of the company's risks in order to manage them proactively, implementing good governance for existing systems and providing third-party attestation that AI systems are running as they should be.