This is part 5 of a 10-part series on the ethical building blocks of Artificial Intelligence, examining the values on which the technology has been built. AI is a tool that can be used for good or evil, but it is not neutral: it has its own values baked in.
Algorithms are everywhere in banking: calculating your credit score, approving your loan, making trades. The promise of AI in banking is the removal of human bias and emotion, and the addition of machine-generated insights derived from huge volumes of data. What would that look like? Who does it benefit? And what values would AI in banking be built on?
Firstly, the notion that AI would add value to banking is built on values that assume:
- Our data is our destiny
- The improbable should not be possible
- Human emotion gets in the way of the finance sector
Before we examine the values, it helps to imagine how AI might be used in banking.
All AI uses algorithms, but not all algorithms are AI. The difference is that AI uses far more data, and its algorithms ‘learn’ to spot patterns that humans might not.
Consumer banking already uses algorithms, mainly in customer profiling and risk management: will the bank loan you money? How much? And at what interest rate? Data has always been used to assess the creditworthiness of potential customers; your credit score, for example.
Where AI differs is the scale of the data and its ability to identify potential payment patterns in data that has nothing to do with your finances. In theory, this should lead to a more accurate prediction of a person’s creditworthiness than traditional methods: more loans to people previously deemed unworthy, and fewer to those who would not pay them back or who would commit fraud. Great, right?
Perhaps. But where will this data be coming from? Could banks buy your social media data, looking for signs of reckless spending habits? Or even an illness such as depression that could impact your ability to pay? Could it assess your Fitbit data and deduce you won’t live long enough to repay the loan?
AI can also entrench ingrained bias by learning from historical data. In 2019 a US report found that lenders were 80% less likely to lend to black Americans than to white people with the exact same financial characteristics. If the data fed into the system reflects historical discrimination, the model will reproduce those human biases. Garbage in, garbage out.
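The “garbage in, garbage out” dynamic can be sketched in a few lines. Below is a minimal, entirely synthetic illustration (all groups, numbers and approval rates are invented): the “model” never sees the applicant’s group, only a neighbourhood proxy, yet because the historical decisions it learns from were biased, the rule it learns reproduces the discrimination.

```python
# Synthetic sketch: a model trained on biased historical lending decisions
# reproduces the bias, even though "group" is never an input feature.
# The proxy feature "neighbourhood" correlates perfectly with group here,
# so the model rediscovers the discrimination. All data is invented.
import random
from collections import defaultdict

random.seed(0)

def make_history(n=10_000):
    """Generate synthetic past lending decisions made by a biased approver."""
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        income = random.gauss(50_000, 10_000)  # same distribution for both groups
        neighbourhood = group                  # proxy: postcode encodes group
        qualified = income > 45_000
        # Biased history: qualified group-B applicants approved far less often.
        approved = qualified and (random.random() < (0.9 if group == "A" else 0.3))
        rows.append((neighbourhood, qualified, approved))
    return rows

# "Model": learn the historical approval frequency per (neighbourhood, qualified) cell.
counts = defaultdict(lambda: [0, 0])  # cell -> [approvals, total]
for nbhd, qual, appr in make_history():
    counts[(nbhd, qual)][0] += appr
    counts[(nbhd, qual)][1] += 1

def predict(nbhd, qual):
    approvals, total = counts[(nbhd, qual)]
    return approvals / total > 0.5  # approve if the historical rate exceeds 50%

# Two equally qualified applicants, different neighbourhoods:
print(predict("A", True))  # True  - approved
print(predict("B", True))  # False - rejected; the bias was learned from the data
```

No one wrote a discriminatory rule here; the bias arrived entirely through the training data, which is exactly the risk the lending report describes.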
This creeping of human bias into a ‘neutral’ decision-maker happens a lot with AI. For example, Amazon’s attempt to create a CV-sifting AI deducted points from applicants who had been women’s chess club champions: the historical data told it women had been underrepresented, so it tried to replicate that. No one programmed in the discrimination; it was learned from the data. That is one of the risks of machine learning: we often don’t know which correlations it is using.
Back to banking: that discrimination happened before we even get to the fact that it is harder for black people in the US to acquire the same “financial characteristics” as white people in the first place. For example, payday loan sellers tend to be more dominant in black neighbourhoods, and even if you always repay on time, a payday lender does not record “good behaviours”, unlike traditional banks, which do. This means payday loan sellers can only damage your credit score; they cannot improve it. So, depending on which lender operates in your neighbourhood, the exact same financial behaviour is recorded differently. Data is only a proxy for truth, and it is important to understand what led to that data before you use it. That is hard to do when your AI requires billions of data points.
As a result, AI programmers often don’t know why the algorithm has identified the pattern it has. If we cannot audit the algorithm and understand why it applies more merit to certain candidates, how can we be sure it does not contain unwarranted bias?
But, even if we could audit all the algorithms and you are comfortable with the banks using all your data, the idea of AI in banking is still built on some values which we should examine.
The improbable should not be possible
For all the data it has access to, AI makes decisions based on generalisations that will probably prove true. This is not unique to AI; it applies to all process-driven decision making.
In most cases someone on minimum wage won’t be able to pay back a million-pound loan. Lending it to them would be bad for the bank and bad for the person saddled with debt they can’t repay. But in some cases a person will have a unique situation which means they will be able to repay. They could be an inventor or an author about to have a big break. They could be about to inherit a fortune. Improbable, but still possible. In the past, a human bank manager had a level of discretion: he might have read the book, or know the person’s very elderly mother, and could apply an exception, making a judgment call based on that unique knowledge. AI does not allow for that. No matter how much data it has, it cannot apply judgment; it can only assess the probability of an outcome. This is good for the bank: sure, it misses out on the odd improbable loan repayment, but it also avoids the additional risk. It is less good for society: we get greater banking profits, but fewer people whose improbable but still possible success is realised.
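The hard-cutoff logic described above can be sketched in a few lines. This is a hypothetical toy model, not any real bank’s scoring method: the repayment-probability formula, the numbers and the 0.8 cutoff are all invented for illustration. The point is that an applicant whose true situation is exceptional is statistically indistinguishable from their cohort, so the rule rejects them with no route to an override.

```python
# Toy sketch of threshold-based lending (invented formula and numbers):
# the model scores repayment probability and applies a hard cutoff.

def repayment_probability(income, loan_amount):
    """Toy model: repayment probability falls as the loan-to-income ratio grows."""
    ratio = loan_amount / max(income, 1)
    return max(0.0, min(1.0, 1.0 - ratio / 20))

def decide(income, loan_amount, threshold=0.8):
    """Approve only if the scored probability clears the cutoff - no exceptions."""
    return repayment_probability(income, loan_amount) >= threshold

# The minimum-wage author about to have a big break: the data cannot express it.
print(decide(income=20_000, loan_amount=1_000_000))  # False - rejected
# A conventionally safe applicant clears the threshold.
print(decide(income=200_000, loan_amount=100_000))   # True - approved
```

A human manager with outside knowledge could override the first decision; the threshold, by design, cannot.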
Our data is our destiny
Similarly, AI is built on data, and all data is generated from actions in the past. It can only predict the future by extrapolating from that past data. The problem is that prediction based on past data alone is unimaginative and calcifying: it reinforces a historical trend. AI in banking does not easily allow people to change. For example, if you were previously an alcoholic and had atrocious spending habits as a result, that would be reflected in your data. What would not be reflected is the three months you just spent in rehab, which have now changed your future behaviour. Or suppose you were in an abusive relationship with someone who spent your money; how do you reflect in your data the fact that you have just left them? Or maybe you were depressed for five years, which stopped you from working, but you are now receiving treatment. You cannot explain through your data that your situation has changed or will change. As such, in AI-informed banking your past necessarily dictates your future. Your historical data is your destiny.
Human emotion gets in the way of the finance sector
AI in banking builds on the assumption that it is better to remove the human from the decision-making process. Remove the ego, the bravado and the fear from trading, and you will have logical trades based on the numbers alone. Remove the bias, compassion and naiveté from consumer banking, and loans will be issued only to those who will repay them. However, much of the sector needs emotion to make a profit. The finance sector is built on human emotions: consumer and investor confidence, optimism and fear play a huge part in dictating whether we have a bear or a bull market. Those emotions can be based on financial facts, but they can also be based on the weather, politics, nearby but unrelated natural disasters, and trendsetters. Through consumers and investors, human emotion will find its way into the financial system; if there are no humans in that system who understand these emotions, AI systems are at risk of misinterpreting them.
So is AI banking Artificial?
Well, the data will be drawn from humans, and humans will set many of the thresholds for when an action, such as granting a loan, should be taken. But, just like in healthcare, AI algorithms could spot patterns that humans could not identify and use them to recommend granting or refusing a loan. So yes, it will be at least partially artificial.
So is AI banking Intelligent?
Much like AI driving, it will be able to predict patterns and actions. It may even spot correlations a person would not. But it will be unable to imagine, or to use any wider context it has not been programmed with. This gives it advantages over humans, but also limitations a human does not have. It won’t be able to experience hope, optimism or even fear, and I believe emotions are an important component of intelligence.
So, is AI banking ethical?
AI is a tool that amplifies the power of those who deploy it. If banks deploy AI, they become even more powerful relative to their customers; it remains to be seen whether that shift is incremental or exponential. It is up to you whether you think more power for banks is a good or a bad thing.
So, would AI in the banking sector have saved Silicon Valley Bank?
A run on SVB began on the 8th of March after it announced it needed to plug a gap created by the sale of its loss-making bond portfolio. SVB technically had 38,000 corporate accounts, but many of these were start-ups whose funding came from a much, much smaller pool of venture capitalists.
Once one venture capitalist got spooked, they pulled their cash and “suggested” the companies they had invested in do the same. Only 48 hours passed between SVB’s announcement of the bond sale and its complete collapse.
While this was the result of human action, it is not dissimilar to how AI in banking or automated trading might work. There are technically lots of accounts making decisions, but really there are only a small number of decision-making points, where an action is taken once a parameter is reached. Whether through WhatsApp messages among VCs or AI-automated trading, we can expect to see technologically accelerated financial events in the future. So no: AI could not have saved SVB; it would probably have killed it faster.
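The dynamic of many accounts but few real decision points can be sketched as a simple threshold cascade (a Granovetter-style model, with invented numbers): each depositor withdraws once a certain fraction of the others have, so a single zero-threshold actor who moves first can tip the whole system.

```python
# Illustrative cascade sketch (invented thresholds): each depositor is an
# automated rule "withdraw once fraction X of deposits have been withdrawn".

def run_cascade(thresholds):
    """Return how many depositors withdraw, given each one's panic threshold
    (the fraction of prior withdrawals that triggers their own)."""
    withdrawn = 0
    n = len(thresholds)
    changed = True
    while changed:  # keep sweeping until no new withdrawals are triggered
        changed = False
        for i in range(n):
            if thresholds[i] is not None and withdrawn / n >= thresholds[i]:
                thresholds[i] = None  # mark this depositor as withdrawn
                withdrawn += 1
                changed = True
    return withdrawn

n = 38  # stand-in for SVB's thousands of accounts behind few decision points
# One actor panics unconditionally; each next one triggers at a slightly
# higher fraction, so the first withdrawal sets off all the rest.
print(run_cascade([i / n for i in range(n)]))  # 38 - total collapse
# With no zero-threshold actor, nothing ever triggers.
print(run_cascade([0.1] * n))                  # 0 - no first mover, no run
```

Whether the thresholds live in a VC’s WhatsApp group or in a trading algorithm, the structure is the same; automation just shortens the time between sweeps.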
If you enjoyed this article, please consider reading others in the AI ethics series.
1. AI knowledge: Is ChatGPT smarter than you?
2. AI drivers: Is Tesla going anywhere?
3. AI can see you: Facial Recognition is watching.
4. Dr. AI: The machine will heal you.
5. The bank of AI?
6. Algorithmic decisions – Bureaucracy of AI?
7. Chatbots – AI friends?
8. Deepfakes – AI faces?
9. Targeted marketing – The AI nudge?
10. Generative AI – The AI artist?