model that was trained on over a million cat images was shown to be highly sensitive to even the slightest alterations in the image pixels, alterations that did not change the semantic content of the image in any way. This form of fragility introduces trust issues, especially when the algorithms are used in life-critical decisions. Research is underway to understand and reinforce algorithmic frameworks, but we are at least a few years away from developing AI frameworks that comprehensively address fragility.

Closely associated with the fragility issue is the fact that today's implementations of Machine Learning models are highly exposed to bias in their training data. Since most algorithms are modeled on historical data, any bias in existing data is immediately learned by the models. The recent case of Amazon's candidate recommender latching on to gender bias is a simple example of the greediness of the optimization algorithms that form the basis of most AI modeling frameworks. In what is popularly called 'greedy optimization', most mathematical formulations of optimization seek the easiest data attribute on which to base their decisions. As demonstrated in Amazon's case, the greedy algorithm picked the candidate's gender as the easiest attribute to predict upon. While this was algorithmically correct in theory, it certainly raises the question of how we can overlay societal morality on algorithmic optimization. It is not that it cannot be done; it is just that data scientists have not, until now, been chartered with this mandate. If trust in algorithms is to increase, we need to address bias in data holistically, not just on a project-by-project basis.

So, how do we build trust back into AI-enabled machines?

We need to add the human-in-the-loop. When we hear the oft-referenced successes of Google, Facebook, Uber and their counterparts in using AI to solve hard problems, a critical element that usually goes undiscussed is how these institutions have painstakingly built a human-in-the-loop process for most algorithmic training. Google and Microsoft famously spent hundreds of millions of dollars collecting valuable human-judge labels to train their web search algorithms. Netflix and Amazon rely heavily on their web and mobile interfaces to entice their audiences to spend time browsing and sharing their likes and dislikes. The importance of the human-in-the-loop cannot be emphasized enough.

When the broader industry discusses Machine Learning and the deployment of algorithmic frameworks, the investment needed to bolster the human-in-the-loop oversight that supports and improves these algorithms is severely under-represented. We need a sense of urgency to address this issue in order to build trust in our algorithmic processes. For AI to be trusted in life-critical industries, organizations have to invest in and deploy the right AI tools and technologies that can seamlessly bring the human into the loop to (a) train the algorithms the right way, and (b) give the end consumer confidence that critical decisions are supported by strong human oversight, including moral, ethical, and societal considerations.

Until we comprehensively address the trust gap and its needs within the core algorithmic framework, it will be very hard for companies to gain and maintain trust in algorithmic decision making.
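As a minimal illustration of the fragility described above, the sketch below uses the fast gradient sign method, one common way of constructing an imperceptible pixel perturbation that can flip a classifier's prediction. The article does not name a specific technique or toolkit; PyTorch, and the names cat_classifier, cat_image, and CAT_CLASS, are assumptions made purely for illustration.

# Illustrative sketch (not from the article): a tiny, semantically invisible
# pixel perturbation that can change a classifier's output (FGSM-style).
import torch
import torch.nn.functional as F

def adversarial_perturbation(model, image, label, epsilon=0.01):
    """Return a copy of `image` nudged by +/- epsilon per pixel in the
    direction that increases the model's loss on the correct `label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage, assuming a batched image tensor and class index:
# adv = adversarial_perturbation(cat_classifier, cat_image, torch.tensor([CAT_CLASS]))
# print(cat_classifier(cat_image).argmax(), cat_classifier(adv).argmax())  # may differ

Even a very small epsilon, far below what a person would notice, can be enough to change the predicted class, which is exactly the trust problem the article raises for life-critical uses.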
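To make the human-in-the-loop idea concrete, here is a minimal sketch of one possible routing rule: predictions the model is confident about are accepted, while low-confidence cases are escalated to a human judge whose labels feed back into the training data. The class name, the 0.9 threshold, and the method names are hypothetical and not taken from the article or from any of the companies it mentions.

# Illustrative sketch (not from the article): confidence-based routing of
# model decisions to human reviewers, with human labels retained for retraining.
from dataclasses import dataclass, field

@dataclass
class HumanInTheLoopQueue:
    threshold: float = 0.9                      # below this, a person decides
    review_queue: list = field(default_factory=list)
    training_labels: list = field(default_factory=list)

    def route(self, item_id, predicted_label, confidence):
        """Accept confident model predictions; defer the rest to a human."""
        if confidence >= self.threshold:
            return predicted_label
        self.review_queue.append((item_id, predicted_label, confidence))
        return None  # decision pending human oversight

    def record_human_label(self, item_id, human_label):
        """Human judgments become new, trusted training examples."""
        self.training_labels.append((item_id, human_label))

The design choice here is simply that human oversight is built into the decision path itself rather than bolted on afterwards, which is the investment the article argues is under-represented in industry discussions.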
Sreekar Krishna