
5 Q’s for Adam Wenchel, Vice President of AI and Data Innovation at Capital One

by Joshua New

The Center for Data Innovation spoke with Adam Wenchel, vice president of AI and data innovation at Capital One. Wenchel discussed why Capital One is interested in developing explainable AI systems and the opportunities AI creates for the financial services sector.

Joshua New: Since this is the Center for Data Innovation, I obviously am a fan of your title, which I don’t think I’ve seen before. Why does Capital One have someone in charge of “data innovation,” and not just “innovation,” as many others seem to?

Adam Wenchel: Software and data are integral to Capital One, and harnessing data for the good of our customers is a key goal for us. My title ties back to my role as the head of our Center for Machine Learning, which is essentially an in-house consultancy and knowledge center for machine learning product delivery, innovation, education, and partnership across the business. Machine learning is of course driven by data, so at the end of the day it all relates to the immense opportunities around data.

New: Can you explain why Capital One is so interested in the issue of AI explainability?

Wenchel: We’re pushing further into deep learning for a variety of use cases that we think will better serve our customers—from fraud detection to flagging double swipes, anti-money laundering, and many other areas. We’re focused on explainable AI because it is imperative that we be able to articulate how these sophisticated deep learning models arrive at their decisions.

To do this in a way that is fully transparent, legal, and meets regulatory guidelines, we need to be able to explain to our customers, regulators, and colleagues how these decisions were made, and ensure they were made without bias. We want to maintain the highest standards for explainability as we drive forward with these types of much more advanced—and inherently more opaque—models.

New: One of the most common examples of AI being used in the financial services sector is improving fraud detection. What makes AI so well-suited for this task?

Wenchel: There’s a wide array of machine learning techniques that are well-suited for fraud detection—both supervised and unsupervised techniques, often used in conjunction—all of which are adept at spotting anomalies and irregularities in the data within milliseconds.

Our ability to more quickly and accurately detect fraud through machine learning advances ensures that our customers are being protected by some of the most sophisticated mechanisms out there.
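Purely as an illustration of the kind of hybrid approach Wenchel describes—unsupervised and supervised techniques used in conjunction to spot anomalies—here is a minimal sketch in Python with scikit-learn. The data, features, and model choices are all hypothetical and are not a description of Capital One's systems.

```python
# Illustrative sketch only: an unsupervised anomaly score feeding a
# supervised fraud classifier. Synthetic data; not Capital One's pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic transactions: amount, merchant-category code, hour of day.
n = 20_000
X = np.column_stack([
    rng.lognormal(3.0, 1.0, n),   # transaction amount
    rng.integers(0, 50, n),       # merchant category (coded)
    rng.integers(0, 24, n),       # hour of day
])
# Rare fraud label (~0.3%), loosely tied to large late-night amounts.
y = ((X[:, 0] > 100) & (X[:, 2] >= 22) & (rng.random(n) < 0.6)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Unsupervised stage: isolation-forest anomaly score, fit without labels,
# then used as an extra feature for the supervised model.
iso = IsolationForest(random_state=0).fit(X_train)
train_anom = -iso.score_samples(X_train)  # higher = more anomalous
test_anom = -iso.score_samples(X_test)

# Supervised stage: gradient-boosted trees on raw features + anomaly score.
clf = GradientBoostingClassifier(random_state=0)
clf.fit(np.column_stack([X_train, train_anom]), y_train)

scores = clf.predict_proba(np.column_stack([X_test, test_anom]))[:, 1]
print("ROC AUC:", round(roc_auc_score(y_test, scores), 3))
```

In a sketch like this, the unsupervised score helps surface unusual transactions even when labeled fraud examples are scarce, while the supervised model learns the patterns present in historical labels.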

New: While fraud detection is of course important, I don’t hear a lot about other ways financial services companies are using AI. What other kinds of applications also stand to benefit from this technology?

Wenchel: With machine learning, we’re developing a suite of technologies that we think will help us continue to change banking for the better while unlocking more opportunities for our customers. We’ve developed a range of applications to help our customers become more financially empowered, such as helping them make predictions about upcoming bills, detect irregular or mistaken transactions, and better manage their spending.

A great example of this is our Second Look app, which automatically sends our customers alerts for renewal charges, monthly service fees, and duplicate charges. We also rolled out a gender-neutral chatbot, Eno—it helps our customers stay on top of their financial accounts and is powered by a range of AI applications, from natural language processing to machine learning.

We’ve also applied machine learning to a number of enterprise use cases; for example, to strengthen our cybersecurity efforts and make more informed business decisions.

New: Ensuring AI systems are ethical and free from bias is an important task, especially so for sectors like financial services where discrimination can have major consequences. You’ve discussed how you see this as both a technology issue and a people issue. Could you explain what that means?

Wenchel: This is an incredibly important topic, and it requires consideration of both machine learning models as well as the experts behind those models. At Capital One, we have a very strong culture of inclusion and diversity, and this extends across all of our work. We’re producing complex deep learning models, and we need to make certain that our team is anticipating and solving for blind spots in those models to prevent unconscious bias and ultimately create better models. This is why we’ve got a whole work stream of experts working on explainable AI and the ability to articulate how decisions are made.

While we’re very focused on ensuring that we’re acting well within the bounds of legal and regulatory frameworks, it’s also critical that we simply do the right thing when it comes to building out this technology.
