Responsible artificial intelligence

Artificial intelligence you can trust

Explore our service

Artificial intelligence (AI) is creating new societal opportunities, but it also brings risks worth considering

When using AI to support business-critical decisions, it's important to ensure those decisions are made responsibly. This means, among other things, not breaching data privacy policies or introducing racial, gender or any other bias. Companies must take on the challenge of making sure their AI acts in a responsible way. Key questions include:

  • Are accurate, bias-free decisions being made? 

  • Is anyone's privacy being violated? 

  • Is the technology being appropriately governed and monitored? 
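The first question, about bias-free decisions, can be made concrete with a simple fairness metric. The sketch below is purely illustrative: the group labels, the hypothetical loan-approval data and the 0.1 review threshold are assumptions for illustration, not a prescribed audit method.

```python
# Illustrative sketch only: one simple bias check (demographic parity
# difference) on hypothetical loan-approval decisions.

def positive_rate(decisions):
    """Share of positive (approved) decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap between two groups' approval rates."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical model outputs: 1 = approved, 0 = rejected
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # approval rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # approval rate 0.375

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")

# An arbitrary rule of thumb (assumption): flag gaps above 0.1 for review.
flagged = gap > 0.1
```

A single metric like this never settles the fairness question on its own, but tracking it over time is one practical way to make the governance questions above measurable.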


Responsible AI (RAI) is artificial intelligence that is lawful and that adheres to well-defined ethical guidelines regarding fundamental values, including individual rights, privacy, non-discrimination, and non-manipulation. RAI principles are increasingly important with the rise of new generative AI models and upcoming regulation in the sector. RAI can also be a competitive advantage because consumers have growing concerns about privacy and biased decisions.

At PwC Belgium, we’ve been supporting companies of all sizes and in all sectors with their AI-related projects for years. We’ve encountered all the challenges and risks that AI can bring and have developed solutions for each of them. To implement responsible AI, we’ve compiled all our experiences, lessons learned, and best practices into ready-to-use tools tailored to your company's needs.

Are you ready to implement responsible AI? Take our Responsible AI Diagnostic test to find out!

With responsible AI, you’ll stay ahead of upcoming regulations

On 21 April 2021, the European Commission released its much-awaited first draft of the Artificial Intelligence Act. The proposed legal framework focuses on the use of AI systems and the associated risks. It proposes to create a technology-neutral definition of AI systems in EU law, and to set out classifications for AI systems with different requirements and obligations according to a ‘risk-based approach’. Last December, the EU Council adopted its common position on the AI Act. Now, the European Parliament has to discuss and adopt its own position before the final negotiations start.

  • 2018
    European AI Strategy

    In 2018, the European Commission outlined its vision for artificial intelligence, which promotes ethical, secure, and innovative AI created in Europe.

  • 2019
    Ethics Guidelines

    In 2019, the European Commission established a framework for creating trustworthy AI, which includes seven essential requirements.

  • 2021
    Proposal for a Regulation on AI

    In 2021, the European Commission unveiled a new proposal for a legal framework focused on the specific use of AI systems and associated risks.

  • 2022
    Adoption of Council's common position

In December 2022, the Council adopted its general approach on the EU AI Act. The text clarifies key concepts of the upcoming regulation and sets criteria to define AI systems.

  • End 2023
    AI Act Adoption

In March 2023, the European Parliament finalised its own position on the proposal. The trilogues between the Parliament, the Council and the Commission will then start. By the end of 2023, the AI Act is expected to be adopted.

  • 2024
    Implementation of the acts

During 2024, policymakers will work on harmonised standards and on implementing the acts.

  • 2025
    AI Act application

    In 2025, the transitional period will end. The AI Act will then become applicable.


Artificial intelligence pyramid

Impact for your company

Some AI systems presenting ‘unacceptable’ risks would be prohibited, for example AI systems used for social scoring. A wide range of ‘high risk’ AI systems would be authorised, but subject to a set of requirements and obligations to gain access to the EU market. AI-based cameras for autonomous vehicles would typically fall into this category. AI systems presenting only ‘limited risk’ would be subject to very light transparency obligations. That would be the case for chatbots or spam filters.
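The risk-based approach described above can be sketched as a simple lookup. The tier assignments below reflect only the examples mentioned in this section; the function name, the data structures and the default ‘minimal’ tier are illustrative assumptions, since real classification depends on the Act's detailed criteria.

```python
# Illustrative sketch of the EU AI Act's risk-based approach.
# The mapping mirrors the examples given in the text; it is not
# a legal classification tool.

RISK_TIERS = {
    "social scoring": "unacceptable",     # prohibited outright
    "autonomous vehicle camera": "high",  # allowed, strict requirements
    "chatbot": "limited",                 # light transparency duties
    "spam filter": "limited",
}

OBLIGATIONS = {
    "unacceptable": "prohibited on the EU market",
    "high": "conformity requirements before EU market access",
    "limited": "light transparency obligations",
    "minimal": "no specific obligations",
}

def obligations_for(use_case: str) -> str:
    """Look up the risk tier for a use case and describe its obligations."""
    tier = RISK_TIERS.get(use_case, "minimal")
    return f"{tier}: {OBLIGATIONS[tier]}"

print(obligations_for("social scoring"))
print(obligations_for("chatbot"))
```

In practice a company would inventory its AI use cases and map each one to a tier in this way, so that the corresponding obligations can be tracked per system.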

Our Responsible AI Toolkit will guide you through the responsible AI journey

Our Responsible AI Toolkit is a customisable suite of frameworks, tools, and processes designed to help you use AI ethically and responsibly, from strategy to execution. We'll tailor our solutions to your organisation's unique business requirements and AI maturity level.

Who is accountable for your AI system?

Is your AI unbiased? Is it fair?

How was that decision made?

Will your AI behave as intended? What are the security risks and implications that should be managed?

Is your AI legal and ethical?


Whatever your maturity level, we take you one step further

Responsible AI is of utmost importance for companies of all sizes, regardless of the complexity of their AI systems. It is recommended that businesses just beginning to develop AI algorithms incorporate responsible practices into their systems, processes, and corporate culture from the start. This is because it is often easier to address issues such as security, compliance, and bias in the early stages of production, rather than when the system is already established and running. It’s advisable to begin integrating responsible practices into your organisation now. By doing so, you can ensure that the AI you are developing is effective, ethical and responsible.

By improving your models' transparency and explainability you can make more informed business decisions with trust and become more competitive in the market. This approach can also help you to foster better relationships with stakeholders, build customer trust, attract and retain talent, comply with regulations, manage risks more effectively, and boost revenue.

Whether you're just starting out or getting ready to scale, responsible AI can help. Drawing on our expertise in AI innovation and deep global business knowledge, we'll evaluate your end-to-end needs, and design a solution to help you address your unique risks and challenges.

You are not yet using AI and you want it to be responsible from the very beginning.
  • What is the role AI will play in your organisation?
  • Are your employees equipped with the skills to create reliable AI?
  • Do you build or buy?

You have started to use AI in proof-of-concept projects and you want responsible AI models in production.
  • Does your AI strategy align with your organisation's values?
  • Do you have a clear delineation of roles & responsibilities?
  • How do you maintain your AI systems?

You are using AI on a day-to-day basis in production and want to assess the responsible side of it.
  • How do you consider the impending regulations surrounding your AI?
  • How can you build better AI?
  • Is your AI fair? Secure? Robust?

You are using AI on a daily basis and are facing performance issues related to a lack of transparency or biased models.
  • Are you applying AI and data best practices?
  • Is your AI performing as expected?

We have the right team to make big ideas happen at all stages of AI adoption

We have the right team to build AI responsibly, both internally and for our clients. We can bring big ideas to life at all stages of AI adoption.


And with our Responsible AI Toolkit, we bring a collection of frameworks, templates, and code-based assets to streamline your RAI journey. These accelerators include:

  • AI and Data Ethics Traceability Matrix

  • Ethical AI and Data Framework

  • AI and Data Ethics Impact Assessment and Organisational Maturity Assessment

  • Data Protection and AI Risk and Controls Library 

  • Project Management

  • AI and Data Ethics Training

Companies from key sectors in Belgium trust us


Public sector

The development of rigorous frameworks to shape decision-making in public sector organisations will be crucial for realising AI’s potential to transform public services and administration. Ethical decisions regarding citizens’ well-being must be at the forefront of the government's efforts to explore and adopt this technology. The public sector can be a leader in adopting secure, trustworthy and sustainable AI systems. Europe's public sector has the potential to leverage its significant collective purchasing power to act as a catalyst and stimulate demand for responsible AI.

As an AI expert, PwC has already collaborated with public organisations such as the Federal Public Service Economy and the Brussels region to develop their AI action plans. Responsible AI has been systematically addressed to ensure it is considered when implementing AI in Belgium.


Financial sector

AI in the financial sector has been a game-changer, automating processes to make them more efficient and cost-effective. Credit adjudication, personalised marketing offers, next-best action, cybersecurity, and fraud detection are all examples of AI applications. The AI Act will ensure that societal and environmental well-being, diversity, non-discrimination, fairness, and transparency are taken into account in these use cases. Financial firms must consider how compliance can be achieved and how the new rules will interact with other existing regulations (such as GDPR, MiFID II or RTS 6).

To that end, PwC has already supported European banks and insurance companies in auditing AI systems and in creating a robust framework to proactively identify risks and the appropriate controls to demonstrate ongoing compliance.


Healthcare sector

AI has the potential to revolutionise healthcare, including, but not limited to, clinical practice, biomedical research, public health, and health administration. Responsible AI is especially important to reduce the risks associated with AI, such as patient harm caused by AI mistakes, misuse of medical AI tools, bias, lack of transparency, and privacy and security issues. Existing regulations already set out detailed requirements for medical device safety and performance. The AI Act will bring additional regulatory obligations and ethical considerations, however, it should not impede the adoption of AI, which holds immense potential for the healthcare industry.

PwC has a long history of collaboration with healthcare providers. We’ve helped large pharmaceutical companies to implement data ethics and develop strategic data and AI ethical guidelines.

Discover our Responsible AI service offering

Contact us

Xavier Verhaeghe


Partner, Technology Consulting, PwC Belgium

Tel: +32 495 59 08 40

Michiel De Keyzer


Director, PwC Belgium

Tel: +32 494 88 95 74

Wouter Travers


Senior Manager, PwC Belgium

Tel: +32 479 10 56 05

Natacha Dewyngaert


Senior Manager, PwC Belgium

Tel: +32 472 51 28 25

Connect with PwC Belgium