Have We Lost Trust in AI-Enabled Machines?

Sreekar Krishna, Managing Director - Data Science, Artificial Intelligence and Innovation, KPMG US

From self-driving cars to robo-advisors to facial recognition on mobile devices, it seems as if AI algorithms have taken over as the backbone of our new digital economy. The reality is that algorithms have always played a key role in our economy.

From decades-old navigational algorithms that routed planes, trains, and ships, to the calculations that put a man on the moon, the evolution of decision-making from human ‘gut instinct’ to a reliable, repeatable process has always been powered by sophisticated mathematical models and algorithms. Yet recently a ‘trust gap’ has emerged, and it is starting to rattle the foundation of mathematical modeling and algorithmic decision-making.

One of the critical factors driving this trust gap is the inequality of skills between industries when it comes to algorithmic know-how. The technology sector has spent decades creating environments that are amenable and accessible to engineers, data scientists, and mathematicians, while other industries have not necessarily adopted the same conducive operating model, one where algorithmic science can live in harmony with lines of business. The result has been a deterioration of trust, fed by fear of the unknown around AI technologies.

Another important reason trust in data science and AI is being questioned is algorithmic fragility: the instability of algorithmic outcomes when the input data deviates ever so slightly from normalcy. Google's widely acclaimed ‘Cat Classifier’, a deep learning model trained on over a million cat images, was shown to be highly sensitive to even the slightest alterations in image pixels, alterations that did not change the semantic content of the image in any way. This form of fragility introduces trust issues, especially when algorithms are used in life-critical decisions. Research is underway to understand and reinforce algorithmic frameworks, but we are at least a few years away from AI frameworks that comprehensively address fragility.
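
To make fragility concrete, here is a minimal sketch, assuming a toy linear ‘cat’ classifier on a synthetic image rather than Google's actual deep learning model, of how a perturbation that barely moves any pixel can still flip the algorithm's decision:

```python
# A minimal sketch of algorithmic fragility: a toy linear classifier on a
# synthetic 28x28 "image" (a stand-in for illustration, not Google's model).
import numpy as np

rng = np.random.default_rng(0)

x = rng.uniform(0.0, 1.0, size=784)   # flattened toy image, pixels in [0, 1]
w = rng.normal(0.0, 1.0, size=784)    # fixed classifier weights
b = 0.0

def predict(v):
    """'cat' if the linear score is positive, otherwise 'not cat'."""
    return "cat" if v @ w + b > 0 else "not cat"

# Fast-gradient-style perturbation: nudge every pixel a tiny amount in the
# direction that pushes the score across the decision boundary.
score = x @ w + b
epsilon = 1.1 * abs(score) / np.abs(w).sum()   # smallest per-pixel step that flips the score
x_adv = x - epsilon * np.sign(w) * np.sign(score)

print("per-pixel change:", round(float(epsilon), 4))  # a small fraction of the 0-to-1 pixel range
print("original prediction: ", predict(x))
print("perturbed prediction:", predict(x_adv))        # the label flips
```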

Closely associated with the fragility issue is the fact that today's machine learning models are highly exposed to bias in their training data. Since most algorithms are modeled on historical data, any bias in that data is immediately learned by the models. The recent case of Amazon's candidate recommender latching onto gender bias is a simple example of the greediness of the optimization algorithms that underpin most AI modeling frameworks. In what is popularly called ‘greedy optimization’, most mathematical formulations seek out the easiest data attribute on which to base their decisions. As demonstrated in Amazon's case, the algorithm picked the candidate's gender as the easiest attribute to predict upon.
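
For illustration, here is a small, entirely synthetic sketch, with fabricated hiring data standing in for Amazon's (which is not public), of how a model fit to biased historical labels ends up leaning far more heavily on a protected attribute than on the attribute we actually care about:

```python
# A hedged, synthetic illustration of a model learning bias from historical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

gender = rng.integers(0, 2, n)       # 0 = female, 1 = male (synthetic)
skill = rng.normal(0.0, 1.0, n)      # the attribute we *want* the model to use

# Historical labels: past hiring favored men largely regardless of skill.
hired = (0.3 * skill + 2.0 * gender + rng.normal(0.0, 0.5, n)) > 1.0

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

print("coefficient on skill: ", round(float(model.coef_[0][0]), 2))
print("coefficient on gender:", round(float(model.coef_[0][1]), 2))  # dominates
```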

While this was, in theory, algorithmically correct, it raises the question of how we can overlay societal morality on algorithmic optimization. It is not that it cannot be done; it is just that, until now, data scientists have not been chartered with this mandate. If trust in algorithms is to increase, we need to address bias in data holistically, not just on a project-by-project basis.
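
As one hedged illustration of what such an overlay might look like, the sketch below (continuing the hypothetical hiring example above; the function name and the 10 percent tolerance are assumptions for this article, not a standard) audits a trained model for demographic parity, the kind of check that could gate whether a model ships at all:

```python
# A hedged sketch of one post-training "overlay": auditing a model for
# demographic parity before it is allowed to influence decisions.
import numpy as np

def demographic_parity_gap(predictions, group):
    """Absolute difference in positive-prediction rate between two groups."""
    predictions, group = np.asarray(predictions), np.asarray(group)
    return abs(predictions[group == 0].mean() - predictions[group == 1].mean())

# Continuing the hypothetical hiring sketch above:
# gap = demographic_parity_gap(model.predict(X), gender)
# if gap > 0.10:   # illustrative tolerance, not an accepted standard
#     raise RuntimeError(f"Parity gap {gap:.2f} exceeds tolerance; retrain or reweight.")
```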

So, how do we build trust back into AI-enabled machines?

We need to add the human-in-the-loop. When we hear the oft-referenced successes of Google, Facebook, Uber, and their counterparts in utilizing AI to solve hard problems, a critical element that usually goes undiscussed is how these institutions have painstakingly built a human-in-the-loop process for most of their algorithmic training. Google and Microsoft famously spent hundreds of millions of dollars collecting valuable human-judge labels for training their web search algorithms. Netflix and Amazon rely heavily on their web and mobile interfaces to entice audiences to spend time browsing and sharing their likes and dislikes. The importance of the human-in-the-loop cannot be emphasized enough.

When the broader industry discusses machine learning and the deployment of algorithmic frameworks, the investment needed to bolster human-in-the-loop oversight, which supports and improves the algorithms, is severely underrepresented. We need a sense of urgency in addressing this issue if we are to build trust in our algorithmic processes.

In order for AI to be trusted in life-critical industries, organizations have to invest in and deploy the right AI tools and technologies, ones that seamlessly bring the human-in-the-loop to (a) train the algorithms the right way and (b) give the end consumer confidence that critical decisions are supported by strong human oversight, including moral, ethical, and societal considerations.
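
As a sketch of what such tooling might look like, assuming a simple confidence threshold and a hypothetical review queue (neither drawn from any specific product), the example below auto-approves only high-confidence predictions, defers the rest to a human reviewer, and records the human's judgments as labels for the next round of training:

```python
# A hedged sketch of inference-time human-in-the-loop oversight.
# The 0.9 threshold and the review-queue design are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class HumanInTheLoop:
    model_confidence: Callable[[dict], Tuple[str, float]]  # case -> (decision, confidence)
    threshold: float = 0.9
    review_queue: List[dict] = field(default_factory=list)
    feedback: List[Tuple[dict, str]] = field(default_factory=list)  # future training labels

    def decide(self, case: dict) -> str:
        decision, confidence = self.model_confidence(case)
        if confidence >= self.threshold:
            return decision                    # auto-approve only high-confidence cases
        self.review_queue.append(case)         # defer everything else to a person
        return "pending human review"

    def record_human_decision(self, case: dict, decision: str) -> None:
        # Human judgments become labeled examples for the next retraining run.
        self.feedback.append((case, decision))

# Example usage with a stand-in scoring function:
loop = HumanInTheLoop(model_confidence=lambda case: ("approve", case.get("score", 0.0)))
print(loop.decide({"id": 1, "score": 0.97}))   # approve
print(loop.decide({"id": 2, "score": 0.55}))   # pending human review
loop.record_human_decision({"id": 2, "score": 0.55}, "deny")
```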

Until we comprehensively address the trust gap and its underlying causes within the core algorithmic framework, it will be very hard for companies to gain and maintain trust in algorithmic decision-making.