AI is rapidly transforming the landscape of third-party services, introducing significant challenges to established risk management frameworks. Its use is now pervasive across third-party activities, yet whether that use is responsible often remains unclear. Consequently, third-party risk management (TPRM) must evolve to address these emerging concerns.
Effective AI governance in vendor ecosystems enables both innovation and compliance, reducing onboarding friction as well as risk exposure.
Modernising third-party risk management can help businesses adopt AI faster and more securely across their third-party ecosystem.
As AI rapidly integrates into the core of organisational processes, many companies may not fully recognise the extent of its use by their vendors and partners.
Organisations are leveraging AI across various activities to boost performance and streamline decision-making, including data analysis, enhanced functionality within cloud platforms and SaaS tools, personalised marketing, customer support chatbots and fraud detection.
Consequently, the quality of services and delivery can suffer if AI systems are inadequately implemented or misunderstood. Furthermore, the implications of AI use or misuse are profound, encompassing ethical issues, security vulnerabilities, reputational risks and potential legal ramifications.
A few examples include:
GenAI hallucinations in professional services. Press reports and lawsuits have revealed instances in which GenAI fabricated case citations or references outright.
Sensitive data exposure. Inadequate data anonymisation led to Google and the University of Chicago Medical Center facing a lawsuit accusing them of sharing patient records with AI teams, with potential re-identification risks.
Automated decision-making bias. In 2019, the credit card algorithm from Apple and Goldman Sachs was scrutinised for allegedly discriminating against women by offering them lower credit limits than male applicants with similar financial profiles.
Chatbot failures. Air Canada’s customer service chatbot misstated the airline’s refund policy, promising a customer a discount the policy did not allow. A tribunal ruled against Air Canada, holding the company liable for the chatbot’s output.
As a result, organisations must be vigilant in identifying, assessing and managing the risks associated with AI technologies, ensuring that they address any errors or biases in AI-driven processes, uphold ethical standards, protect sensitive information and comply with regulatory requirements.
TPRM functions must keep pace with rapid AI adoption across the vendor landscape while ensuring integrity, security and compliance in these relationships. Beyond oversight, TPRM also plays a pivotal role in promoting Responsible AI practices among vendors, helping businesses to embrace innovation and new technologies while effectively mitigating risk exposure.
Many third-party vendors, such as providers of off-the-shelf software, have also begun to embed AI into their products, often without their customers’ full visibility or understanding. Service providers may likewise use AI to enhance their service delivery without clients’ explicit awareness. Gaining visibility into when and how these third parties are using AI is a growing challenge for enterprises.
Traditional vendor management tools focus largely on standard risks, such as financial stability, reputation and regulatory compliance. They weren’t built to address the specific challenges that AI can raise.
To manage this new class of risk, organisations should go beyond checkbox diligence. This means integrating AI-oriented controls into their risk frameworks, updating vendor contracts and rethinking their oversight strategies. This could involve developing criteria for AI ethics, bias testing, model transparency and privacy standards.
Additionally, businesses should rethink how they identify, evaluate and monitor third-party AI use. This includes incorporating AI-specific controls into their risk models and enhancing their due diligence processes. It can also require revisiting contractual obligations to confirm that vendors disclose AI deployments, provide adequate governance and align their AI usage with the enterprise’s risk profile.
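As a purely illustrative sketch of what the criteria above might look like in practice, the snippet below encodes AI-oriented due-diligence controls as machine-readable data that an assessment workflow could consume. The control names, questions and evidence types are hypothetical assumptions for illustration, not a prescribed standard.

```python
# Illustrative sketch only: one possible way to encode AI-oriented
# due-diligence criteria as data. Control names, questions and
# evidence types are hypothetical, not a prescribed standard.

AI_CONTROL_CRITERIA = {
    "ethics": {
        "question": "Does the vendor have a documented Responsible AI policy?",
        "evidence": ["policy document", "governance committee charter"],
    },
    "bias_testing": {
        "question": "Are models tested for disparate impact across user groups?",
        "evidence": ["bias test results", "testing methodology"],
    },
    "model_transparency": {
        "question": "Can the vendor explain model inputs, logic and limitations?",
        "evidence": ["model cards", "explainability documentation"],
    },
    "privacy": {
        "question": "Is client data excluded from model training by default?",
        "evidence": ["data-handling policy", "contract clause"],
    },
}

def unmet_criteria(vendor_responses: dict) -> list[str]:
    """Return the criteria a vendor has not yet evidenced."""
    return [name for name in AI_CONTROL_CRITERIA
            if not vendor_responses.get(name, False)]

print(unmet_criteria({"ethics": True, "privacy": True}))
# ['bias_testing', 'model_transparency']
```

Encoding the criteria as data rather than prose makes it straightforward to track gaps per vendor and to extend the checklist as standards mature.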
Key actions to prioritise
Here’s how you can get started in effectively managing the evolving risks and opportunities of third-party AI use:
Understand existing exposure to risks. Identify the AI-driven solutions used in your company’s operations, and evaluate which third parties use AI, for example in delivering goods and services to your company.
Scrutinise data usage policies. Confirm whether third parties are using your organisation’s data to train AI models. Require clear documentation of data-handling practices, consent mechanisms and any limitations placed on data reuse.
Enhance third-party risk-tiering frameworks. Modify risk scoring to account for AI use cases. Prioritise due diligence based on the type of AI deployed, the sensitivity of the data being used and the potential business impact of AI failures, outages or misuse (see the illustrative scoring sketch after this list).
Perform AI-specific due diligence and ongoing monitoring. Push vendors to provide greater transparency and evidence of holistic controls on model development, data privacy, bias mitigation and auditability. To support these efforts, consider adding AI-focused addenda to SOC 2 reports, independent attestations or other governance tools.
Increase AI-focused inquiries during assessments. Ask targeted questions about AI model design, data sources used in training, risk controls, explainability and monitoring processes.
Implement oversight. Maintain an inventory of AI usage, including its applications and data sources (a minimal inventory sketch also follows this list).
Revisit vendor contracts to encourage Responsible AI use. Update agreements to require disclosure when vendors use AI in service delivery. Include provisions for notification and risk transparency. Where appropriate, create incentives for vendors to innovate responsibly with AI.
Track and respond to evolving regulations. Stay ahead of emerging AI governance mandates, such as the EU AI Act, and confirm that your third parties’ practices align with the relevant regional and sector-specific requirements. For instance, to ensure compliance with the EU AI Act, a company should verify how its third parties ensure transparency, human oversight and security when using AI professionally.
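To make the risk-tiering action above concrete, here is a minimal scoring sketch. The factor scales, weights and tier thresholds are hypothetical assumptions that would need calibrating to your organisation’s risk appetite; it is a starting point, not a definitive model.

```python
# Minimal illustrative sketch of AI-aware vendor risk tiering.
# Factor scales, weights and tier thresholds are hypothetical
# assumptions; calibrate them to your own risk appetite.

def ai_risk_score(ai_type: int, data_sensitivity: int, business_impact: int) -> int:
    """Each factor is rated 0 (none/low) to 3 (high).

    ai_type:          0 none, 1 embedded feature, 2 decision support, 3 autonomous decisions
    data_sensitivity: 0 public data ... 3 regulated personal data
    business_impact:  0 negligible ... 3 critical service disruption
    """
    # Weighting data sensitivity and impact above AI type is a design
    # choice for this sketch, not a standard.
    return 2 * ai_type + 3 * data_sensitivity + 3 * business_impact

def tier(score: int) -> str:
    if score >= 18:
        return "Tier 1 - enhanced due diligence and continuous monitoring"
    if score >= 10:
        return "Tier 2 - AI-specific questionnaire and annual review"
    return "Tier 3 - standard onboarding checks"

# Example: a vendor making autonomous AI decisions on regulated personal
# data with critical business impact lands in the top tier.
print(tier(ai_risk_score(ai_type=3, data_sensitivity=3, business_impact=3)))
```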
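Similarly, for the oversight action, the sketch below shows one possible shape for an AI-usage inventory record. The field names and the example vendor are hypothetical; the point is simply that each entry captures where AI is used, what data feeds it and when it was last reviewed.

```python
# Illustrative sketch of an AI-usage inventory record; field names
# are hypothetical and should mirror your own TPRM data model.

from dataclasses import dataclass

@dataclass
class AIUsageRecord:
    vendor: str
    application: str                      # where AI is used in the service
    data_sources: list[str]               # what data feeds the model
    uses_client_data_for_training: bool
    last_reviewed: str                    # ISO date of last due-diligence review

inventory = [
    AIUsageRecord(
        vendor="ExampleCloud Ltd",        # hypothetical vendor
        application="customer support chatbot",
        data_sources=["ticket history", "product documentation"],
        uses_client_data_for_training=False,
        last_reviewed="2024-11-01",
    ),
]

# Flag records needing attention, e.g. any vendor training on client data.
flagged = [r for r in inventory if r.uses_client_data_for_training]
```

Even a lightweight inventory like this gives assessment and contract teams a shared, queryable view of third-party AI exposure.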
With strong AI governance in place, TPRM teams can move faster and with more confidence.
When TPRM shifts from being a reactive gatekeeper to a proactive enabler, organisations are better positioned to adopt AI-driven solutions responsibly and at scale.
As more companies use AI with their third-party partners, adopting it in a way that captures benefits while managing risks is essential. PwC helps organisations do this through intelligent risk tooling, AI-focused controls and disciplined vendor management.