Responsible AI

Beyond the algorithm: Why sustainable AI is the defining human challenge of 2026

  • Blog
  • 8 minute read
  • December 15, 2025

Welcome to our series on the interaction between AI and environmental, social and governance (ESG) principles. Our goal is to provide you with a deep dive into different perspectives on the impact of AI, charting a responsible way forward. The first article started with the environmental impact, which we build on here by highlighting the social impact of AI. The third article looks at governance, before we finish by discussing a C-suite playbook for responsible AI in the last article. Stay tuned for the following articles!


As we close the book on 2025, the conversation around artificial intelligence has fundamentally shifted. We have moved past the initial excitement of adoption and into the complex, messy reality of integration. 'Agentic AI', systems that can plan, reason and take action, is no longer just a prediction: it is rewriting the global economic landscape.

But as productivity surges in AI-exposed sectors, a critical, human question remains: is this progress sustainable?

True sustainability isn't just about energy consumption, green computing or carbon footprints. It is about the social contract. It’s about building a workforce and a society that can survive and thrive alongside the machine. If we allow a 'two-tier' economy to calcify—where a small, tech-literate elite commands huge premiums while others drift into functional obsolescence—we fail the sustainability test.

To build a fearless future, we must look beyond the code and confront the human side of the equation. 

1 The economics of inequality

The economic data from 2025 is compelling, but it also sounds a warning bell. According to our Global AI Jobs Barometer1, sectors with high AI potential—such as financial services, IT and professional services—are witnessing labour productivity grow nearly four times faster than the broader economy, thanks to augmented decision support and automation of specific tasks.

This productivity growth is vital for economies like Belgium, where we face the headwinds of an aging population and shrinking labour supply. However, this growth is not being shared equally. We are witnessing a dramatic re-pricing of human labour that threatens to widen the gap between those who are well versed in AI and those who are not.

Globally, the wage premium for AI-literate talent has surged to 56% on average. While specific Belgian wage studies are still emerging, these strong global signals are likely indicative of similar local pressures:

  • Managing Directors and CEOs receive an 84% wage premium for AI skills, reflecting the market’s demand for leaders who can shape strategic AI vision.

  • Lawyers see a 49% increase, as automation takes over document review.

  • Database Professionals, conversely, see a smaller 18% increase, suggesting that technical maintenance is less valued than strategic application.

This divergence poses a significant risk of increased inequality between roles and job levels. A sustainable AI strategy must address this valuation gap. If the market signals that only those who can 'orchestrate' AI are valuable, we risk devaluing the majority of the workforce who are still learning to work alongside it.

2 The Belgian paradox: Capability vs. connectivity

In Belgium, the challenge of social sustainability has a specific local flavour. We are seeing a 'reluctant capability': a workforce that is highly educated but hesitant to adopt AI.

The numbers are telling: 40% of Belgian workers do not interact with AI tools at all in their daily work. Even more concerning for long-term competitiveness, 67% of the workforce has never heard of 'AI agents', the very technology poised to redefine their daily operations2.

This hesitation creates a 'digital divide' that isn't just about skills; it is about geography and social cohesion. Belgium’s federal structure is creating distinct adoption micro-climates. The Brussels Capital Region, with its high concentration of financial and legal services, has an AI exposure of 48.2%. In contrast, regions with higher densities of manufacturing or logistics face a steeper challenge in digital upskilling.

If the AI wage premium flows disproportionately to Brussels-based knowledge workers, we risk exacerbating existing regional economic inequalities. Sustainable AI means ensuring that the benefits of this technology ripple out beyond the capital and into the broader economy. It means bridging the gap so that a worker in Wallonia or Flanders has the same opportunity to capture the productivity dividend as their peers in the city centre. 

3 Protecting the pipeline: The crisis of the junior role

Perhaps the most urgent threat to the sustainability of our industries is the 'hollowing out' of the talent pyramid.

For decades, the professional services industry, law firms and corporate functions have operated on a pyramid model. Large cohorts of junior staff were hired to perform routine tasks—data collection, document review, meeting minutes. By doing this grunt work, they learned the trade through osmosis.

In 2025, AI agents broke this model. They can perform these tasks faster, cheaper and often more accurately than a first-year associate. The pyramid is morphing into a diamond, shrinking the entry-level roles that served as the traditional training ground.

This creates a pedagogical paradox: If juniors don’t do the groundwork, how do they develop the judgment required to become seniors? Senior professionals develop their gut instincts by processing thousands of data points manually over years. If an AI processes the data and presents only the conclusion, the junior professional is deprived of the learning journey.

At PwC, we believe a sustainable business model requires us to actively protect these pathways. We cannot simply cut entry-level roles to save costs today, or we will have no leaders tomorrow. We are actively redesigning the junior experience, moving new joiners from creators to AI supervisors from day one. This requires a higher baseline of critical thinking (the confidence to challenge an algorithmic output), a skill that traditionally took years to build.

We are facing this pedagogical challenge head-on. In a recent dialogue between Patrick Boone, our Chairman, and Oscar Monville, a junior Cloud Data and AI expert, we explored exactly how the entry-level experience is evolving. Oscar represents a new generation of talent who balances digital curiosity with critical execution. He uses our secure, bespoke AI tools to fast-track routine tasks like translation and brainstorming, but we know that access to technology is only half the equation. We are helping our juniors balance this new efficiency with the development of irreplaceable human skills—client interaction, complex argumentation and ethical judgment—while maintaining human connection and value in the workplace. We provide the innovation and the guardrails so our people can grow from a digital native into a strategic leader, ensuring the learning journey isn't lost, but transformed3.

4 Traction requires trust

Sustainability is also about trust. The transition to an automated future is fragile and the trust barrier in Belgium is high.

Currently, 58% of consumers say they are uncomfortable using AI tools to engage with brands. This scepticism is mirrored inside the organisation: only 28% of Belgian CEOs report high trust in their own organisation's ability to embed AI4.

Why is trust so low? Part of it is the fear of the black box.

  • Consumer anxiety: People are wary of hallucinations and data privacy breaches. For example, if an AI agent makes a mistake with a bank transfer, who is responsible? 

  • Employee anxiety: There is a lack of clear governance. Shockingly, only 11% of Belgian firms have clear internal rules for AI use. Without guardrails, employees fear surveillance or unfair algorithmic management5.

Leading Belgian firms like KBC are trying to bridge this gap by focusing on intent-based services—using AI to anticipate customer needs (like KBC’s banking app suggesting a renovation loan) rather than just reacting to queries. But even successful efficiency gains, like Colruyt’s cashier-less supermarkets, raise questions about the loss of social connection for vulnerable populations.

Sustainable AI requires us to act with integrity. It demands 'explainable AI' that allows employees and customers to understand how decisions are made. It means complying with emerging regulations, like the Belgian guidelines on disclosing AI-generated content in advertising, not just because it's the law, but because it's the right thing to do to maintain the social contract.
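To make 'explainable AI' concrete, here is a minimal Python sketch of what decision transparency can look like: a model whose per-applicant contributions can be read out and explained in plain language. The data, feature names and loan framing are entirely hypothetical, and a linear model is used for brevity; libraries such as SHAP extend the same idea to more complex models.

```python
# A minimal, hypothetical sketch of decision transparency: every automated
# decision comes with per-feature contributions a human can explain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "savings", "loan_amount", "account_age"]  # hypothetical
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2] > 0).astype(int)   # synthetic approvals

model = LogisticRegression().fit(X, y)

def explain(applicant: np.ndarray) -> dict[str, float]:
    """Per-feature contribution to this applicant's score (coefficient x value)."""
    return dict(zip(features, np.round(model.coef_[0] * applicant, 3)))

decision = "approved" if model.predict(X[:1])[0] else "refused"
print(decision, explain(X[0]))
```

The point is not the particular model but the contract: "why was this application refused?" must always be an answerable question.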

5 The ethics of the algorithm

Finally, we must confront the risk of bias. True sustainability implies fairness. Reports from Unia (the Interfederal Centre for Equal Opportunities) highlight that AI models used in hiring can inadvertently replicate historical biases. There is a specific, growing risk of age discrimination. If algorithms use proxies for age—such as 'year of graduation'—to filter candidates, we risk systematically excluding the 50+ demographic, which is a vital part of the Belgian workforce6.
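To illustrate the mechanism Unia warns about, here is a minimal, self-contained Python sketch on synthetic data: the model never sees an 'age' column, yet because graduation year tracks age almost one-to-one, it faithfully reproduces the age bias baked into historical hiring decisions.

```python
# A minimal sketch, on synthetic data, of proxy discrimination: 'age' is
# excluded as a feature, but 'graduation_year' leaks it back in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
age = rng.integers(22, 65, size=n)
graduation_year = 2025 - (age - 23)              # proxy: tracks age one-to-one
skill = rng.normal(size=n)
hired = ((skill > 0) & (age < 50)).astype(int)   # biased historical labels

X = np.column_stack([graduation_year, skill])    # note: no 'age' column
model = LogisticRegression(max_iter=1000).fit(X, hired)

pred = model.predict(X)
print("predicted hire rate, under 50:", round(pred[age < 50].mean(), 2))
print("predicted hire rate, 50 plus :", round(pred[age >= 50].mean(), 2))
```

The audit at the end is the point: comparing predicted outcomes across the protected group is how such a proxy is caught, and it is exactly the kind of check a fairness review should mandate.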

In a labour market that needs every pair of hands, automating discrimination is not just unethical; it is economically self-defeating. 

6 A roadmap for a sustainable 2026

The fearless future isn't guaranteed. It is a prize to be won. The potential for a productivity boom is real, but so is the social risk. To turn the disruption of 2025 into the opportunity of 2026, we need a strategy that harmonises the speed of the machine with the needs of the human.

Here is practical guidance on how to make your AI strategy socially sustainable:

For business leaders: Democratise the development

  • Stop the top-down mandate. The fear of AI dissipates when workers use it to solve their own problems.

  • Launch citizen developer programmes. In 2026, give your non-technical staff access to safe, sandboxed AI agents. Encourage them to automate their most frustrating administrative tasks. This builds the AI literacy that is currently missing.

  • Trace the logic. Task your entry-level staff with tracing the AI's thought process and critiquing its output. This solves the pedagogical crisis: by analysing how the AI reached a conclusion, juniors develop the critical judgment and subject matter expertise that was previously gained through grunt work.
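As an illustration of the 'trace the logic' idea, here is a minimal Python sketch of a human-in-the-loop review record. It assumes a generic agent that returns both an answer and its intermediate steps; the names (AgentOutput, Review) and the sample content are hypothetical, not a description of PwC's actual tooling.

```python
# A hypothetical sketch: nothing an agent produces ships until a junior
# reviewer has traced every reasoning step and recorded a verdict.
from dataclasses import dataclass, field

@dataclass
class AgentOutput:
    answer: str
    reasoning_steps: list[str]   # the trace the junior is asked to critique
    sources: list[str]

@dataclass
class Review:
    output: AgentOutput
    reviewer: str
    step_checks: dict[int, bool] = field(default_factory=dict)

    def check_step(self, index: int, holds: bool) -> None:
        """Mark one reasoning step as verified (True) or challenged (False)."""
        self.step_checks[index] = holds

    def cleared(self) -> bool:
        # Approve only when every step was examined and none was challenged.
        examined_all = len(self.step_checks) == len(self.output.reasoning_steps)
        return examined_all and all(self.step_checks.values())

# Example: the junior confirms each step before the answer leaves the building.
out = AgentOutput(
    answer="Clause 4.2 conflicts with the data retention policy.",
    reasoning_steps=["Extracted clause 4.2", "Compared against retention rules"],
    sources=["contract.pdf", "policy_v3.pdf"],
)
review = Review(output=out, reviewer="junior.associate")
review.check_step(0, True)
review.check_step(1, True)
print("cleared for delivery:", review.cleared())
```

The design choice matters: by forcing a verdict per step rather than per answer, the junior practises exactly the judgment the grunt work used to teach.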

For HR and talent leaders: Hire for adaptability

  • Rethink the resume. The degree is a fading signal of competence. Move toward skills-based hiring where you test for adaptability rather than specific historical credentials.

  • Shift from autonomy to collaboration. Do not aim for fully autonomous AI just yet. Instead of letting the AI work in isolation, mandate a human-in-the-loop workflow where juniors act as reviewers and supervisors.

  • Value the human premium. As technical tasks are automated, prioritise empathy and complex problem-solving in your hiring. These are the soft skills that are becoming the new hard currency.

For policymakers: Bridge the divide

  • Launch a comprehensive national skills initiative. We need a broad strategy that goes deeper than technical literacy to prevent a polarised society between the AI-haves and AI-have-nots.

  • Teach AI civics. The workforce of 2026 needs to know how to question an algorithm, spot a hallucination and govern an agent.

At PwC we combine deep industry knowledge and tech expertise so you can harness the superagency of the machine while doubling down on the empathy that makes us human.

Contact us

Xavier Verhaeghe

Partner Technology Consulting & Innovation, PwC Belgium

Michiel De Keyzer

Director, PwC Belgium
