Global banking giant’s app can’t cope with user load
Surprisingly to many, large and powerful corporations are no strangers to software glitches. HSBC, one of the largest financial companies in the world, has struggled to keep its mobile application online numerous times. In November 2018, for example, the HSBC mobile app crashed on Black Friday, the biggest global shopping event of the year. Thousands of people were unable to access their accounts and therefore could not take advantage of limited-time offers from online shops. A poorly functioning app is always irritating, but when the program in question stands between you and your own money, mild frustration can turn into righteous fury very fast.
There are many reasons banking software might fail, but when it happens during a long-anticipated sales event (as with HSBC), it is safe to say the application simply couldn't withstand the user load. Such events will always be a challenge for banks because of the wave of transactions they bring, so in our humble opinion, this is something the HSBC online-banking test team should have foreseen. Load and stress spikes are hard to predict precisely, yet they can (and should) be modeled and rehearsed by R&D teams. The corporation prefers not to discuss publicly why its app keeps going down; its users, by contrast, have been very vocal on social media about the application's performance.
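For illustration, here is a minimal load-test sketch using the Python Locust framework. The host, endpoints, and payloads are hypothetical stand-ins for a real banking API, and realistic user counts and traffic weights would come from the bank's own historical peak-day data.

```python
# Minimal Locust load-test sketch; endpoints and payloads are hypothetical.
# Run with: locust -f loadtest.py --host https://api.example-bank.com
from locust import HttpUser, task, between


class MobileBankingUser(HttpUser):
    # Simulated users pause 1-3 seconds between actions,
    # roughly mimicking real interaction pacing.
    wait_time = between(1, 3)

    def on_start(self):
        # Each simulated user logs in once before its tasks run.
        self.client.post("/api/login", json={
            "username": "load_test_user",
            "password": "not_a_real_password",
        })

    @task(3)
    def check_balance(self):
        # Weighted 3x: balance checks dominate peak-day traffic.
        self.client.get("/api/accounts/balance")

    @task(1)
    def make_payment(self):
        self.client.post("/api/payments", json={
            "to": "merchant-123",
            "amount": 49.99,
        })
```

Running a scenario like this with tens of thousands of simulated users before Black Friday would reveal whether the backend degrades gracefully or falls over, which is exactly the question a crash like HSBC's leaves unanswered.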
AI ruins credit scoring systems
It is hard to imagine that, at a time when people just won't stop talking about the amazing benefits of artificial intelligence and machine learning, some industries can actually get worse by adopting them.
Initially, AI applications in banking were aimed at making credit risk assessment faster and less biased by reducing human involvement. But as practice has shown, AI is shaped by the data it is fed, and the bias that was meant to be expelled from the system becomes part of the AI's decisions. For example, the FICO scoring system, which is used to measure consumer credit risk across the United States, continuously works on making the scoring process more transparent and gender- and race-neutral. However, since machine learning algorithms do not distinguish discriminatory factors from legitimate ones, the correlations they learn inevitably encode information about ethnicity, gender, social group, and so on. On top of that, many small details like grocery preferences and musical taste end up in AI-powered credit score analysis. All of that is fascinating, but the real question is: do we really need all that information to make the right lending decision? Finance practitioners admit that ML and AI applications for credit scoring tend to create bias rather than prevent it.
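The mechanism behind this is often proxy leakage: even when a protected attribute is excluded from the training data, correlated features smuggle it back in. Below is a minimal, fully synthetic sketch of the effect in Python with scikit-learn; the feature names and correlation strengths are invented for illustration, not drawn from any real scoring system.

```python
# Sketch of proxy leakage: the model never sees the protected attribute,
# yet a correlated feature reintroduces it. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hidden protected attribute; deliberately excluded from the model inputs.
group = rng.integers(0, 2, n)

# Proxy feature (think "zip code indicator") that matches the protected
# attribute 80% of the time.
zip_flag = np.where(rng.random(n) < 0.8, group, 1 - group)

# A legitimate feature, independent of group membership.
income = rng.normal(50.0, 15.0, n)

# Historical repayment labels that already reflect past bias: group 1
# was more likely to be recorded as defaulting at the same income level.
logit = -0.05 * (income - 50.0) + 0.8 * group - 0.5
repaid = (rng.random(n) > 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Train on income and the proxy only; 'group' is never an input.
X = np.column_stack([income, zip_flag])
model = LogisticRegression().fit(X, repaid)

# Approval rates still diverge by group, via the proxy.
approved = model.predict(X)
for g in (0, 1):
    print(f"group {g}: approval rate {approved[group == g].mean():.1%}")
```

The point of the exercise: dropping the sensitive column is not enough. Fairness has to be measured on the model's outputs, grouped by the very attribute that was removed.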
Another issue the banking industry has with AI is the lack of explainability. Any consumer whose loan application is declined has the right to a detailed explanation of the refusal. That explanation is nearly impossible to produce when no one knows which patterns a machine learning model followed while analyzing the consumer's data. So while AI makes the rating procedure faster for credit managers, it decreases transparency for end users, eroding trust in financial institutions in general.
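One common mitigation is to bolt a post-hoc explanation layer onto the scoring model. The sketch below uses the SHAP library on a toy tree-based classifier; the feature names and training data are hypothetical, and a real adverse-action notice would of course need legal review, not just a ranked list of feature contributions.

```python
# Sketch: post-hoc explanation of a single credit decision with SHAP.
# Feature names and data are hypothetical; a real pipeline would use
# the bank's actual scoring model and applicant records.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
features = ["income", "debt_ratio", "credit_history_years", "late_payments"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] - X[:, 3] > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes per-feature contributions to one applicant's score.
explainer = shap.TreeExplainer(model)
applicant = X[:1]
contributions = explainer.shap_values(applicant)[0]

# List the features that pushed the score down first: these form the raw
# material for explaining the refusal to the declined applicant.
for name, value in sorted(zip(features, contributions), key=lambda p: p[1]):
    print(f"{name}: {value:+.3f}")
```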
Less eye-catching bugs are still there
The aforementioned software errors made their way into the news, yet that doesn't mean character recognition, load stability, and control over AI extensions are the only things to think about when testing your banking application. Thousands of less headline-worthy bugs occur in finance software daily, and only a thoroughly planned and well-executed QA process can help your business deal with each of them in advance. With that in mind, let's move on to more aspects to consider when working on a banking application.