
The Fire of Artificial Intelligence is Burning High

Shriram Natarajan, CTO, Persistent Systems [NSE: PERSISTENT] and Gopi Rangan, Founder and General Partner, Sure Ventures

Artificial Intelligence (AI) is a flaming-hot trend. The heightened hype around AI reflects growing public awareness and a corresponding increase in investments in consumer- and enterprise-ready ideas. Visionary business leaders like Elon Musk and Ray Kurzweil welcome the age of AI, from fully autonomous Level 5 self-driving cars to a singularity where machines surpass human intelligence. On the other side, Stephen Hawking, Naval Ravikant, and Jack Ma are sounding the fire alarm on its potential dangers. Will the flames of AI brighten our future or burn down the house? How should investors, consumers, and technologists think about this?

Let us look at trends, ethical constraints, and structural pitfalls of this technological advancement.

Be Wary, Prepare Now, and Educate Yourself:

Do not move too fast. For the previous generation of technology, the impact could be envisioned in advance. Solutions like email, eCommerce, and internet telephony parlayed rapid adoption into the disruption of legacy industries and the creation of new ones. The architecture of AI is opaque. The current generation’s “move fast and break things” ethos will not work in AI. Therefore, we need to assess the widespread impact of AI-based solutions ahead of mass-market deployment.

To maximize advertising revenue, social media platforms fight to retain eyeballs by publishing engaging content, ranging from narcissistic newsfeeds to conspiracy theories. The resulting habit of self-absorption is more addictive than smoking. AI can create highly personalized social simulations: echo chambers that polarize people into a single worldview. As a result, we lose objective thinking and fail to communicate with our neighbors. We must seek to understand the far-reaching consequences of these choices, optimize for complex social priorities, and be willing to compromise on short-term business goals.

Expect the Unexpected, and Uphold High Standards of Ethics:

Even well-meaning AI-based solutions produce unintended consequences. Amazon’s facial recognition tool left a bad taste of discrimination when it failed to recognize people of color accurately. Fortunately, the creators of the tool had good intentions: they withdrew the product to make remedial changes before the impact became dangerous and irreversible.

However, we need to prepare for bad actors who will misuse the power of AI. In response to their ruthlessness, policy-makers would be forced to retaliate by imposing harsh regulations. Such premature legislation would arrest the progress of benevolent uses of AI. Investments would dwindle, and innovation would regress. Therefore, there is an urgent need for self-regulation.

This time the stakes are high. We need a diverse group of innovators to form ethical frameworks for AI. The European Union is aggressively considering regulating social networks, as are a few politicians in the United States. The Asilomar AI Principles recognize that technology gives life the potential to flourish like never before, or to self-destruct. The California legislature has adopted these principles. Every citizen has to understand the implications of the use of AI, form an informed opinion, and influence policy-makers.

Avoid Unfair Biases, and Collaborate Openly for the Greater Good of Humanity:

The quality of data is a major impediment to the adoption of AI. Models trained on limited datasets cannot be trusted to make accurate decisions. Even when an AI technology successfully solves one type of problem, it cannot be broadly applied to other, more general cases.

In 2016, a Tesla Model S was involved in a fatal accident. Its Autopilot mistook a turning 18-wheel truck for a billboard and drove under the trailer, killing the driver. Autopilot had been trained to detect trucks on highways but was not ready for city streets, where trucks can appear from the side. It failed to brake, resulting in the fatal collision. Detecting such changes in circumstances may be simple for humans, but AI can be dangerous when applied out of context.
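This kind of out-of-context failure can be sketched in a few lines of code. Everything below is hypothetical, including the single feature, the data, and the deliberately simple nearest-centroid classifier; the point is only that a model fit to a narrow training distribution can be confidently wrong outside it.

```python
# A minimal sketch (hypothetical data and feature) of how a model trained on
# a narrow dataset generalizes poorly outside its training distribution.

def nearest_centroid_fit(samples):
    """Return a per-class mean of the feature from (feature, label) pairs."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def nearest_centroid_predict(centroids, x):
    """Predict the class whose centroid is closest to the feature value."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Training data: one made-up feature (say, apparent object height in pixels),
# collected only on highways, where trucks are always seen from behind.
train = [(120, "truck"), (110, "truck"), (40, "billboard"), (35, "billboard")]
centroids = nearest_centroid_fit(train)

# A truck seen side-on in city traffic presents a feature value closer to the
# "billboard" range, so the model confidently misclassifies it.
print(nearest_centroid_predict(centroids, 120))  # in-distribution: "truck"
print(nearest_centroid_predict(centroids, 45))   # side-on truck: "billboard"
```

The model is not wrong about its training data; it is wrong about the world, because the training data never showed it a truck from the side.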

In another example, a resume-sorting algorithm reinforced biases and amplified their impact. The AI was trained to identify patterns in past successful hires, and it learned to eliminate women candidates! The problem of replicating past biased decisions was further compounded when the data in resumes was poorly interpreted: the AI searched for the word “women” and proceeded to eliminate qualified candidates affiliated with any women’s club or professional association. Ideally, AI should detect unfair biases and improve human decisions, not worsen the status quo.
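A toy sketch of how such bias arises, with entirely hypothetical resumes and a deliberately naive token-weighting scheme: when the historical hires skew male, the token “women's” appears only among rejections, picks up a negative weight, and an affiliation alone lowers an otherwise identical candidate's score.

```python
# A toy sketch (hypothetical resumes and scoring) of how a screener trained on
# past hiring outcomes can learn to penalize an affiliation instead of skill.
from collections import Counter

def learn_token_weights(past_resumes):
    """Weight each token by how much more often it appears in hired resumes."""
    hired, rejected = Counter(), Counter()
    for text, was_hired in past_resumes:
        (hired if was_hired else rejected).update(text.lower().split())
    tokens = set(hired) | set(rejected)
    return {t: hired[t] - rejected[t] for t in tokens}

def score(weights, resume):
    """Sum the learned weights of a resume's tokens."""
    return sum(weights.get(t, 0) for t in resume.lower().split())

# Historical outcomes skew male, so "women's" only appears among rejections.
past = [
    ("python developer chess club", True),
    ("python developer golf club", True),
    ("python developer women's chess club", False),
]
weights = learn_token_weights(past)

# Two equally qualified candidates; the affiliation alone lowers the ranking.
print(score(weights, "python developer chess club"))          # higher score
print(score(weights, "python developer women's chess club"))  # lower score
```

The model never sees gender directly; it simply replicates the pattern baked into the historical decisions it was trained on, which is exactly the failure described above.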

Be Positive. The Adventure has Begun:

We are in the early days of truly intrusive AI. While deep fakes and emotion recognition are growing, the silver lining is that defensive technology is improving just as quickly. There is an opportunity for brilliant minds to build new cloaking and debunking tools. Open source is becoming prevalent, making new inventions instantly available for others to build on. Technology is evolving from rules-based engines to machine learning capabilities. Institutions like XPrize’s data commons enable solutions for collective progress in AI. From tools and raw data to actual models, sharing and stewardship are in style. One entity alone cannot provide all the answers, so developers and consumers stand to benefit from generous collaboration.

In conclusion, AI is like a fire. It punishes those who trifle with it. It illuminates those who learn it, respect it, and use it responsibly. Like fire, expect AI to be an inflection point for human civilization. Brace yourself for the inevitable future when life without AI will be primitive.