Propelling the Financial Services Industry Forward with AI and Machine Learning
In financial services, customer data is the key to providing the most relevant services and advice. However, people often use different financial institutions according to their needs - their mortgage with one, their credit card with another, and their investments, savings, and checking accounts with yet another. Because the financial industry is so competitive and highly regulated, there hasn't been much incentive for institutions to collaborate and share data.
Relying on deterministic customer data alone (that is, data from first-person sources) makes it impossible for financial institutions to form a precise picture of customer needs, says Amlan Patnaik, Associate Director of Software Development at NTT DATA, who is leading the digital transformation for some of the largest financial institutions in the United States.
"Fragmented data is detrimental," he said. "How do we solve it as an industry?"
He and his team drive artificial intelligence (AI) and machine learning (ML) initiatives to optimize operations, streamline services, and enhance customer experiences, while advocating for ways to solve this customer data challenge.
The hard part is getting a good picture of a customer's needs, Patnaik said. "How do we actually get a full customer profile?" he asked.
Financial services initiatives using artificial intelligence
As legacy, century-old multinational financial giants compete in an estimated $22.5 trillion industry representing roughly a quarter of the world economy, Patnaik's team advances efforts around smart technology modernization of online and mobile banking apps, content management, robotics and intelligent automation, distributed ledger technology, advanced artificial intelligence, and quantum computing.
In addition, Patnaik leads NTT DATA’s partnerships with academia and industry, including California Institute of Technology (Caltech), Cornell University, University of Michigan, Massachusetts Institute of Technology (MIT), NASA Ames Research Center in Silicon Valley, Stanford University, Swinburne University of Technology, and quantum computing software company 1QBit.
In their work, Patnaik's team relies on an array of AI and ML tools, including traditional statistical models, deep learning networks, and logistic regression (used for classification and predictive analysis). Alongside Google Cloud and Microsoft Azure, they also use in-house systems developed with data locality in mind.
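As a minimal sketch of the kind of logistic regression classifier used for classification and predictive analysis, consider the following. The synthetic data stands in for tabular customer features; nothing here reflects NTT DATA's actual models or data.

```python
# Minimal sketch: logistic regression for binary classification.
# The synthetic features are a stand-in for real customer attributes.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular customer features (balances, tenure, ...).
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)

# Predicted probabilities support risk scoring, not just hard labels.
probs = model.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, probs):.3f}")
```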
Patnaik elaborated on the team's use of long short-term memory (LSTM), a recurrent neural network architecture capable of processing both individual data points and complete sequences. The technology is applied to natural language processing (NLP) and spoken language understanding, capturing intent from textual content. One example is complaints management, where LSTM models generate "specific targeted summaries" from complaints, expediting appropriate action. NLP techniques also extend to website form requests that are more complex than dropdown menu options.
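A hedged sketch of the technique: an LSTM that reads complaint text token by token and classifies its intent for routing. The vocabulary size, intent labels, and toy data below are illustrative assumptions, not details of NTT DATA's system.

```python
# Illustrative sketch: an LSTM that classifies complaint text into intents.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN, NUM_INTENTS = 10_000, 200, 5  # hypothetical sizes

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 64),       # token IDs -> dense vectors
    layers.LSTM(128),                       # reads the whole token sequence
    layers.Dense(NUM_INTENTS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy stand-ins for tokenized complaints and their labeled intents.
X = np.random.randint(1, VOCAB_SIZE, size=(512, MAX_LEN))
y = np.random.randint(0, NUM_INTENTS, size=(512,))
model.fit(X, y, epochs=1, batch_size=32)
```

Because the LSTM carries state across the sequence, it can capture intent cues that span an entire complaint rather than single keywords.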
For fundamental image and character recognition, they employ traditional deep learning methods such as feedforward neural networks, which pass information in one direction only, without loops. By contrast, Patnaik notes, deep learning techniques such as convolutional neural networks analyze documents based on pixel data.
The latter approach validates specific elements of submitted scanned documents and assesses the images within them, ensuring completeness and adherence to anticipated attributes, content, and annotations. For instance, if a checking account statement is expected to contain six attributes based on the inputs, but only four are detected, the system flags it for attention. Altogether, this streamlines and accelerates a variety of processes, Patnaik said.
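The following sketch shows one way such a check could work: a small convolutional network predicts, per scanned page, which expected attributes are present, and the document is flagged when fewer are detected than expected. The architecture, attribute names, and threshold are assumptions for illustration only.

```python
# Illustrative sketch: CNN-based attribute detection plus a completeness check.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical attributes a statement scan is expected to contain.
ATTRIBUTES = ["logo", "account_no", "date", "signature", "balance", "address"]

model = tf.keras.Sequential([
    layers.Input(shape=(256, 256, 1)),            # grayscale page scan
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(len(ATTRIBUTES), activation="sigmoid"),  # multi-label output
])

def review_document(page: np.ndarray, threshold: float = 0.5) -> None:
    """Flag a scan whose detected attributes fall short of expectations."""
    scores = model.predict(page[None, ..., None], verbose=0)[0]
    detected = [a for a, s in zip(ATTRIBUTES, scores) if s >= threshold]
    if len(detected) < len(ATTRIBUTES):
        print(f"Flagged: only {len(detected)}/{len(ATTRIBUTES)} attributes found")

review_document(np.random.rand(256, 256))  # untrained model: toy demo only
```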
The team is also leveraging cloud-native and serverless components for these initiatives, as well as transformer neural network models for processing sequential information, such as natural language text, genome sequences, sound signals, and time series. For classification, regression, and other tasks, Patnaik plans to increasingly use random forest machine learning pipelines, a technique for supervised learning that uses multiple decision trees.
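As a sketch of the supervised random forest pipelines Patnaik describes - many decision trees voting on a classification - the example below uses scikit-learn. The preprocessing step and synthetic data are illustrative assumptions.

```python
# Sketch: a random forest classification pipeline with cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2_000, n_features=30, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),  # not required for trees; shown as a pipeline stage
    ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# Each of the 200 trees votes; the forest averages out individual errors.
scores = cross_val_score(pipeline, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} (std {scores.std():.3f})")
```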
"This is an area that will push the majority of financial institutions forward," Patnaik said.
Optimizing and accelerating in the midst of regulations
Amlan Patnaik and his team face a significant challenge in deploying AI and ML for their financial clients in a highly regulated industry. In a nonregulated industry, Patnaik said, it takes only a couple of days to build a model on top of a data set of features and deploy it into production. A regulated industry requires external risk assessment and internal validation at every stage.
"We rely more on statistical models, "Patnaik said," and we thoroughly scrutinize large neural network- based solutions."
Three independent groups review and challenge models - a frontline independent risk group, a model risk governance group, and an audit group, he said. These groups build separate models to create independent sources of data; apply post hoc processes to analyze the results of experimental data; and verify that data sets and models are within the "right range," applying techniques to challenge them.
On average, Patnaik's team deploys nearly 60 models per year, keeping the champion-challenger framework in mind. Under this framework, multiple competing strategies are continuously monitored and compared in a production environment, and their performance is evaluated over time. The technique identifies which model produces the best results (the "champion") and which produces the runner-up (the "challenger").
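A hedged sketch of a champion-challenger loop follows: both models score each production batch, rolling performance is compared over time, and the stronger performer is identified. The model choices, window size, and promotion rule are hypothetical, not a documented NTT DATA process.

```python
# Sketch: champion-challenger monitoring over simulated production batches.
from collections import deque

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=6_000, n_features=15, random_state=1)
champion = LogisticRegression(max_iter=1_000).fit(X[:1_000], y[:1_000])
challenger = RandomForestClassifier(random_state=1).fit(X[:1_000], y[:1_000])

# Keep a rolling window of each model's accuracy on recent batches.
window = {"champion": deque(maxlen=10), "challenger": deque(maxlen=10)}
for start in range(1_000, 6_000, 500):        # simulated production batches
    Xb, yb = X[start:start + 500], y[start:start + 500]
    window["champion"].append(champion.score(Xb, yb))
    window["challenger"].append(challenger.score(Xb, yb))

champ_acc, chall_acc = np.mean(window["champion"]), np.mean(window["challenger"])
print(f"champion {champ_acc:.3f} vs challenger {chall_acc:.3f}")
if chall_acc > champ_acc:
    print("Challenger outperforms: candidate for promotion after review")
```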
His department has already made strides in that direction, having reduced the AI modeling process - from discovery to market - from 60 weeks to 15.
It's not about a specific AI model, it's a question of "How can you optimize that whole end-to-end flow and automate as much as possible? It's more about how much muscle memory do we have to bring these things to market and add value?" Patnaik said.
"The value of ML specifically will be in use cases that we haven't even imagined yet." he said.
Dialogue with the financial services industry
As a whole, the industry will also benefit from bridging the digital divide between big and small players. Collaboration, Patnaik said, can help foster new insights and enhance customer interaction, and capabilities such as secure multiparty computation and zero-knowledge proof platforms could achieve this. Secure multiparty computation is a cryptographic method for distributing computations among multiple parties without revealing inputs or allowing any party to see the others' data. A zero-knowledge proof is a method by which one party can prove to another that a given statement is true, without divulging any additional (potentially sensitive) information.
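To make the secure multiparty computation idea concrete, here is a toy additive secret-sharing example: three institutions split their private figures into random shares so a joint sum can be computed without any party seeing another's input. This is a teaching sketch with hypothetical values, not a production protocol (it assumes honest parties and omits malicious-party protections).

```python
# Toy illustration of additive secret sharing, the core SMPC building block.
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value: int, n_parties: int = 3) -> list[int]:
    """Split `value` into n random shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Each bank shares a private figure (e.g., exposure to one counterparty).
inputs = {"bank_a": 1_200, "bank_b": 3_400, "bank_c": 560}
all_shares = {name: share(v) for name, v in inputs.items()}

# Party i only ever sees the i-th share from each bank...
partial_sums = [sum(s[i] for s in all_shares.values()) % PRIME
                for i in range(3)]

# ...yet the partial sums combine to the true total, revealing no inputs.
total = sum(partial_sums) % PRIME
print(total == sum(inputs.values()))  # True
```

Each individual share is a uniformly random number, so no single party learns anything about another bank's input; only the combined result is revealed.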
This would enable institutions to collaborate and share information safely, without privacy or data-loss concerns, while still competing appropriately within the ecosystem, Patnaik said.
The industry will have a firmer hypothesis about collaboration and the use of these advanced tools within five years, Patnaik predicted.
In a similar manner, Patnaik and his team maintain a dialogue with regulators on behalf of their financial clients. A positive sign is that Patnaik has recently received requests from regulators regarding AI/ML processes and techniques - something that has never happened before. The issue could be critical, since institutions use a variety of tools to build models, and the process could be industrialized, Patnaik explained.
“I think there's a lot more interest, motivation and appetite on the part of regulators to understand this a little better so that they can think through this and engage more,” Patnaik said.
