Responsible AI - Are you ready?

Do you know what responsible AI is? And are you committed to creating, using and maintaining only responsible AI when looking to boost value creation within your organisation?

Over the past few years, technology management has proven to be a decisive factor in unleashing business value and consolidating the top positions of leading firms in competitive industries. Some firms have progressively learned to exploit the power of technology to transform their own business operations and their interactions with customers, but as the saying goes, "with great power comes great responsibility".

Without a doubt, artificial intelligence (AI)[1] can now be considered a "not so emerging" technology, given its strong adoption rate across industries and business operations. The breadth of AI's impact already ranges from revolutionising cancer detection in the medical sector to improving customer satisfaction in retail by using big data for tailored services and products. Behind this impressive progress, however, alarm bells are beginning to sound about the ethical and philosophical implications of letting our decisions be guided by a hard-coded, self-learning electronic agent.

This was recently highlighted by a Gartner study identifying the 10 strategic technology trends for 2020[2]. Among them, two perfectly illustrate the subtle balance between the power of AI and its responsible management: (1) Transparency & Traceability and (2) AI Security. Together, they encompass the complexity of managing (sometimes private) data using AI to deliver additional value for different economic agents (companies, customers, public authorities, etc.).


How can we make AI responsible for its actions?

AI is implemented in diverse situations to gain insights and optimise business processes. One example is the adoption of a chatbot to speed up customer service, where machine learning algorithms (a form of AI) are developed to analyse vast quantities of data and identify patterns that ultimately improve customer satisfaction.

However, there are various risks involved: 

  • Does AI make decisions in line with your company values or does your chatbot deliver undesired answers? 

  • Do your customers trust the way your chatbot processes their (personal) data? 

  • Were you able to develop AI free of bias, or does your chatbot treat locals and foreigners, young and old, men and women, etc. differently? 

These questions have raised the need for "responsible AI". But what does that actually mean? In its study "Ethics guidelines for trustworthy AI"[3], the European Commission describes trustworthy AI in terms of three criteria:

  1. AI must be lawful - respecting all applicable laws and regulations

  2. AI must be ethical - respecting ethical principles and values

  3. AI must be robust - from a technical perspective and taking into account its social environment

Ethics guidelines for trustworthy AI: Ethics and Regulation, Performance and Governance

We have developed our own Responsible AI toolkit that builds on these three pillars, combining them with our industry expertise and adding Performance and Governance criteria. 

Our Responsible AI toolkit covers five dimensions: (1) Ethics and Regulation, (2) Governance, (3) Bias & Fairness, (4) Interpretability & Explainability and (5) Robustness & Security. We consider the last three to be core to the assessment of AI performance. 

These five dimensions provide a generic framework to assess AI, where the depth of analysis will always depend on the maturity level of the AI technology being used by the firm and the country/region of operations. 

For example, Ethics and Regulation is directly connected to the location in which the AI is operating. In Belgium, you might face regional, federal or European regulations that must be taken into account when developing or maintaining an AI application. 

Governance refers to the management of the AI technology itself, from strategy through operations and support. The extent of the Governance framework is related to the maturity of the technology. We could expect that a firm just starting to explore AI proofs-of-concept focuses more on business value whereas a mature firm invests in implementing governance best practices.

Performance is composed of three main elements:

  • Bias & Fairness: the ability to ensure that your AI application won’t discriminate between customers due to either developer bias (when a community of developers with the same background replicates its own affinities in the AI code) or data bias (when the AI bases its reasoning on historical data and repeats discrimination from the past). 

  • Interpretability & Explainability: the ability of any AI developer or user to interpret the results produced by the AI and to explain the reasoning process that turns a specific data input into a specific output. This is crucial to give customers, employees and even public authorities confidence that the technology is fully controlled by the organisation. 

  • Robustness & Security: the ability to protect your AI application from cyberattacks, to increase the resilience of the system and to prepare fallback scenarios.  
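To make the Bias & Fairness dimension more concrete, here is a minimal sketch of a demographic-parity check. The group labels, sample decisions and the 80% threshold (the so-called "four-fifths rule") are illustrative assumptions for this sketch, not part of PwC's toolkit.

```python
# Illustrative demographic-parity check. Group labels, the sample data
# and the 0.8 threshold are assumptions made for this sketch.

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths_rule(rates, threshold=0.8):
    """Disparate-impact ratio: lowest rate / highest rate must reach the threshold."""
    lo, hi = min(rates.values()), max(rates.values())
    return hi > 0 and lo / hi >= threshold

# Hypothetical chatbot escalation decisions for two customer groups.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
# Group A is approved twice as often as group B (ratio 0.5 < 0.8),
# so this sample would be flagged for review.
print(rates, passes_four_fifths_rule(rates))
```

In practice such a check would be one small piece of a broader fairness assessment (multiple metrics, statistical significance, domain review), but it shows how the dimension can be made measurable rather than left as a principle.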

 

PwC's expertise in AI 

Our Belgian technology practice is currently developing the AI Action Plan for the Federal Public Service Economy, which will list and describe every initiative and policy statement referring to AI implementation in Belgium, including topics such as Responsible AI. A similar assignment will be undertaken for the Brussels Region. In the past, our teams have also been appointed to carry out an international benchmarking of Flanders to get a broad picture of where it stands in terms of AI.  

PwC Belgium also has vast experience in the private sector, developing AI applications in the Retail and Banking sectors, where data processing is always carried out in accordance with the EU’s General Data Protection Regulation (GDPR) and potential bias is detected and discussed.

We have also built proofs-of-concept for the world of academia based on public data, where the logic (transparent AI) was fully documented, including delivering the full code to our client and explaining the application’s functionalities in understandable terms. In every project we undertake, we make sure the client receives solid documentation and full visibility of the code to enable transparency and explainability.

From regulatory understanding to deep technical expertise, our team has an integrated, holistic set of competences to deliver a complete solution, from analysis and diagnostics to improved processes.


References used in this article

1. AI is broadly defined here as a technology capable of imitating human behaviour
2. Gartner - Top 10 strategic technology trends for 2020 (October 21, 2019)
3. Ethics guidelines for trustworthy AI (European Commission, April 2019)

Take our Responsible AI Diagnostic test

See how responsible (or not) your AI applications currently are.
A first analysis of your company’s situation will allow you to see where we can deliver added value to your organisation. 

Go to the test now

 

Contact us

Xavier Verhaeghe

Partner, Technology Consulting, PwC Belgium

Tel: +32 495 59 08 40

Dirk Vangeneugden

Partner, PwC Belgium

Tel: +32 475 52 63 23

Martijn Cuypers

Director, PwC Belgium

Tel: +32 475 55 69 54
