This year’s Edelman Trust Barometer revealed the largest-ever global drop in trust across all four key institutions – government, business, media and NGOs. One sector, however, has so far weathered the anti-establishment storm: tech. Edelman found that 76 percent of the global general population continues to trust the sector.

But will this trust survive the widespread adoption of artificial intelligence? AI, after all, is notorious for being a black box. It can easily lead to a ‘computer says no’ scenario.

New regulations in Europe could bring this to the fore soon. The General Data Protection Regulation (GDPR) contains specific guidance on the rights of individuals when it comes to their data. Article 22(1) of the GDPR states that:

“1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

The Article goes on to state that the data controller must implement suitable measures to “safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.”

In short, consumers are entitled to clear-cut reasons for any decision that adversely affects them. For model-based decision making, the model must be able to demonstrate the drivers of negative scores. This is a fairly simple process for scorecard-based credit decision models, but when you add AI to the mix it becomes more complicated.
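To make that concrete, here is a minimal, hypothetical sketch of how a points-based scorecard surfaces the drivers of a negative score; the attribute names, point values and cut-off are invented for illustration.

```python
# Hypothetical points-based scorecard: each adverse attribute deducts points,
# and the attributes that cost the applicant the most become the reason codes
# returned alongside a decline decision. All values are illustrative.
SCORECARD = {
    "recent_missed_payment":   -60,
    "utilisation_over_80_pct": -45,
    "short_credit_history":    -25,
    "high_enquiry_count":      -15,
}
BASE_SCORE = 650
CUTOFF = 580

def score_applicant(flags):
    hits = {attr: pts for attr, pts in SCORECARD.items() if flags.get(attr)}
    score = BASE_SCORE + sum(hits.values())
    reasons = sorted(hits, key=hits.get)[:3]  # most damaging attributes first
    return score, reasons

score, reasons = score_applicant({"recent_missed_payment": True,
                                  "utilisation_over_80_pct": True})
if score < CUTOFF:
    print(f"Declined at {score}; key drivers: {reasons}")
```

With a neural network or other opaque model there is no such direct mapping from inputs to score contributions, which is where the techniques below come in.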


AI-based decision-making also carries the potential for discrimination against individuals based on factors such as geographic location; combating such discrimination is an important part of the Digital Single Market planned by the European Union.

To ensure continued trust in the tech sector at a time of great public scepticism, businesses that deploy AI in their decision-making processes must be accountable and transparent.

Explainable AI

That’s where Explainable AI comes in. This is a field of science that seeks to remove the black box and deliver the performance capabilities of AI while also providing an explanation as to how and why a model derives its decisions.

There are several ways to explain AI in a risk or regulatory context:

  1. Scoring algorithms that inject noise and score additional data points around the actual data record being computed, to observe which features are driving the score in that part of the decision phase space. This technique is called Local Interpretable Model-agnostic Explanations (LIME), and it involves manipulating data variables in small ways to see what moves the score the most (a minimal sketch of the idea follows this list).
  2. Models that are built to express interpretability on top of the inputs of the AI model. Examples include And-Or Graphs (AOGs), which associate concepts with deterministic subsets of input values, so that when such a subset is expressed it can provide an evidence-based ranking of how the AI reached its decision. These are most often used, and most easily understood, in making sense of images.
  3. Models that change the entire form of the AI to make the latent features exposable. This approach allows reasons to be driven into the latent (learned) features internal to the model. It requires rethinking how to design an AI model from the ground up, with a view to explaining the latent features that drive outcomes, and is entirely different from how native neural network models learn. This remains an area of research, and a production-ready version of this kind of Explainable AI is several years away.
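
The following sketch illustrates the first approach in spirit: it perturbs a single record, scores the perturbations with an opaque model, and fits a locally weighted linear surrogate whose coefficients indicate which features drive the score near that record. It is a simplified illustration of the LIME idea rather than the reference implementation; the model and feature names are placeholders.

```python
# LIME-style local explanation (illustrative sketch).
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(black_box_score, record, feature_names,
                    n_samples=5000, noise=0.1):
    rng = np.random.default_rng(0)
    # 1. Inject noise: sample points in a small neighbourhood of the record.
    perturbed = record + rng.normal(0.0, noise, size=(n_samples, record.size))
    # 2. Score the perturbed points with the opaque model.
    scores = black_box_score(perturbed)
    # 3. Weight samples by proximity to the original record (RBF kernel).
    dist = np.linalg.norm(perturbed - record, axis=1)
    weights = np.exp(-(dist ** 2) / (2 * noise ** 2))
    # 4. Fit an interpretable surrogate; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0).fit(perturbed, scores, sample_weight=weights)
    return sorted(zip(feature_names, surrogate.coef_),
                  key=lambda kv: -abs(kv[1]))
```

The widely used lime Python package follows the same pattern while handling categorical features and sampling distributions more carefully, so this sketch is for intuition only.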

Ultimately, businesses need to convince their customers that they can trust AI, regardless of the failures that are likely to occur along the way. It may seem like we’re entering a world where machines do all the thinking, but people need the ability to check the machines’ logic: to get the algorithms to “show their work,” as maths teachers are so fond of saying.

The same applies to machine learning, incidentally. Machine learning gobbles up data, but that means bad data could create bad equations that would lead to bad decisions. Most machine learning tools today are not good enough at recognising limitations in the data they’re devouring. The responsible use of machine learning demands that data scientists build in explainability and apply governance processes for building and monitoring models.
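As a simple illustration of that kind of monitoring, one common governance check is whether production data still resembles the data the model was trained on. The sketch below computes a population stability index (PSI) for a single feature; the 0.2 alert threshold is a commonly quoted rule of thumb, not a standard, and the data is synthetic.

```python
# Simple data-drift check: compare a feature's training distribution with
# the distribution seen in production using the population stability index.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf                  # cover all values
    e_pct = np.histogram(expected, cuts)[0] / len(expected)
    a_pct = np.histogram(actual, cuts)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)                   # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, 10_000)    # data the model learned from
live = rng.normal(0.4, 1.2, 10_000)     # shifted production data
psi = population_stability_index(train, live)
print(f"PSI = {psi:.3f} -> {'investigate' if psi > 0.2 else 'stable'}")
```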

That’s the challenge before businesses today. To deploy AI — and enjoy the benefits that come with it — they must get customers to accept the reasoning behind decisions that affect their future.
