The Center for Data Innovation spoke with Robin Jose, chief technology officer of Scorable, a Berlin-based startup that produces an AI-powered application to provide investment recommendations to asset managers and corporate investors. Jose discussed how Scorable’s model helps better analyze and detect credit risk and improve methods to prevent future crises.
Eline Chivot: What trends led you to create Scorable?
Robin Jose: We’ve identified a real need in the market for a more efficient risk assessment solution for asset management. Today, we have access to an ever-greater amount of data, which—in theory—should provide us with more precise analytical results. However, fixed income investment decisions still rely heavily on manually driven procedures. Incorporating additional data into your analysis can put a huge strain on capacity and is often not supported by established and traditional systems. Moreover, increasingly strict regulatory requirements add to an already challenging environment. This is why Scorable has developed an AI-based technology that helps financial professionals make more informed investment decisions with greater efficiency and clarity.
Chivot: What is your approach? Why is AI particularly relevant and helpful in this?
Jose: Scorable’s technology enables active asset managers to monitor corporate bonds and anticipate rating downgrades before they happen or markets price them in. Our AI system goes beyond quantitative data and incorporates qualitative data into risk assessments using natural language processing (NLP) and machine learning. By combining and contextualizing quantitative and qualitative data, we can achieve more comprehensive insights and calculate the probability of a rating downgrade with great accuracy.
Typical users of our systems are experts in their fields—and they don’t just want to see a number or a percentage provided by an AI model and take it as the final truth. They need intuitive, clear explanations of why the model has determined that a rating is likely to be downgraded. Our explainable AI approach allows users to understand the rationale behind the analysis and to see what drives changes in the risk score. Unlike black-box models, which only show the input-output relationship, our models provide understandable features and a transparent machine learning process. Thus, we create transparency and traceability.
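The idea of surfacing what drives a risk score can be sketched with a toy transparent model. This is a minimal illustration, not Scorable’s actual model: the feature names and weights below are entirely hypothetical, and a linear score is used only because its per-feature contributions are trivially explainable.

```python
# Toy transparent risk score: a linear model whose per-feature
# contributions are reported alongside the final score.
# Feature names and weights are hypothetical, not Scorable's model.

WEIGHTS = {
    "leverage_ratio": 0.9,      # higher leverage -> higher downgrade risk
    "interest_coverage": -0.6,  # better coverage -> lower risk
    "news_sentiment": -0.4,     # positive news -> lower risk
}

def explain_score(features):
    """Return the risk score and each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

score, drivers = explain_score(
    {"leverage_ratio": 0.8, "interest_coverage": 0.3, "news_sentiment": -0.5}
)
print(round(score, 2))          # overall risk score
for name, c in sorted(drivers.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")  # ranked drivers of the score
```

Because every contribution is exposed, an analyst can see at a glance which input pushed the score up or down—the property the interview contrasts with black-box models.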
Chivot: What type of data sources do you use?
Jose: We use a range of professional qualitative and quantitative data sources, including issuer fundamental ratios, market data, credit ratings, industry data, and financial news. What’s different about our approach is that, within the framework of an AI-driven analysis, quantitative and qualitative data are combined. This way, we’re able to process much greater amounts of relevant data than traditional credit risk analysis can. We continuously expand our data sources to further enhance our scoring models.
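Merging the two kinds of data can be illustrated with a minimal sketch: a crude keyword-based sentiment score is derived from a news headline and appended to quantitative ratios as one more feature. The keyword lists, ratio names, and values are invented for illustration; real NLP pipelines are far more sophisticated.

```python
# Minimal sketch: turn a news headline into a qualitative feature
# and merge it with quantitative ratios into one feature vector.
# Keyword lists and ratio values are illustrative only.

NEGATIVE = {"downgrade", "default", "lawsuit", "loss"}
POSITIVE = {"upgrade", "growth", "profit", "record"}

def news_sentiment(text):
    """Crude bag-of-words sentiment, clamped to [-1, 1]."""
    words = text.lower().split()
    hits = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(-1.0, min(1.0, hits / 3))

def build_features(ratios, headline):
    """Quantitative ratios plus a qualitative sentiment feature."""
    return {**ratios, "news_sentiment": news_sentiment(headline)}

features = build_features(
    {"debt_to_ebitda": 4.2, "interest_coverage": 1.8},
    "Issuer reports record profit and strong growth",
)
print(features)
```

The resulting dictionary is a single feature vector that a downstream scoring model can consume, which is the essence of combining and contextualizing the two data types.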
Chivot: Can you explain how your application benefits asset managers and corporate investors?
Jose: Our technology enables asset managers and corporate investors to cover a larger number of issuers and to incorporate more information into their analysis, thus speeding up investment decisions and improving their portfolio performance.
Scorable’s rating model predicts probabilities for rating downgrades in the next 12 months, currently covering more than 80 percent of outstanding corporate bond debt in major currencies. Thanks to our explainable AI approach, the model is fully transparent, allowing analysts to view the impact of the model input variables as well as comparisons across the issuer’s industry or similarly rated companies. Our solution also features an integrated, customizable alerts engine, which provides instant notifications as soon as credit-risk-relevant events occur.
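A customizable alerts engine of this kind can be sketched as user-defined rules evaluated against each incoming score update. The rule shape, issuer names, and thresholds below are hypothetical, chosen only to show the pattern.

```python
# Sketch of a customizable alerts engine: user-defined rules are
# checked against every incoming risk-score update.
# Issuers, thresholds, and the event structure are hypothetical.

def make_rule(issuer, threshold):
    """Alert when an issuer's 12-month downgrade probability crosses a threshold."""
    def rule(event):
        return event["issuer"] == issuer and event["downgrade_prob"] >= threshold
    return rule

# Each user configures their own watchlist of rules.
rules = [make_rule("ACME Corp", 0.30), make_rule("Globex", 0.50)]

def alerts_for(event):
    """Return the event once per rule it triggers (empty list if none)."""
    return [event for rule in rules if rule(event)]

update = {"issuer": "ACME Corp", "downgrade_prob": 0.37}
print(len(alerts_for(update)))  # 1: ACME crossed its 0.30 threshold
```

In a production system the rules would cover richer event types (outlook changes, news events) and push notifications instead of returning a list, but the evaluate-rules-per-event structure is the same.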
Our current users particularly appreciate that we can spot these trends before rating firms announce changes to ratings and outlooks. This helps them react faster than their more traditional counterparts.
Chivot: Fintech services are becoming more popular, but there’s still a huge distinction between them and traditional banks. How do you think factors like AI and data are going to change this industry in the long-term?
Jose: To quote a popular saying: “The future is already here – it’s just not evenly distributed.” AI and data have already revolutionized the financial industry, but the pervasiveness of the changes is not yet widely visible. We can already see some well-established fintechs expanding into services that were once considered traditional banking. The incumbents are not sitting still either: many have made extensive investments in using data and AI to massively improve their capabilities.
In asset management and investment banking, the real edge comes from algorithms and accurate data that can track indexes, corporate performance, and financial news in real time. We hope we can contribute to making this future more evenly distributed.
Chivot: What is the current state-of-the-art of explainable AI systems, and how quickly are these solutions developing? Which other industries are likely to be early adopters?
Jose: In terms of adoption, XAI (explainable AI) is still in its infancy. As recently as 2017, hardly anything was written about explainable AI; DARPA deserves much of the credit for making the term popular among a large community of academics as well as practitioners. I expect the term to gain further traction in the coming years, as it addresses one of the biggest challenges in the adoption of AI: the black-box nature of these systems. This matters to almost everyone who uses AI, but it is essential for systems that operate in a regulated environment and whose decisions have significant financial, legal, and ethical consequences. Thus, I expect the early adopters to come from industries that take such consequences seriously.
However, adoption will get a significant boost from regulation. Many governments, especially in the EU and North America, have started to think about how algorithmic decision-making could be formalized through various initiatives to track algorithmic accountability. As regulations catch up, adopting XAI will no longer be just a choice; it will be a requirement.
Coming closer to home, XAI has significant benefits for financial use cases. When we recommend a company based on certain data, a pure black-box model requires asset managers to take a leap of faith. It also assumes that they blindly follow orders from a machine. Neither assumption holds. Our systems are designed from the ground up to highlight the reasons behind each prediction and to make those reasons as intuitive as possible.