This is number 6 of a 10-part series on the ethical building blocks of Artificial Intelligence, in which I examine the values on which the technology has been built. AI is a tool that can be used for good or evil, but it is not neutral: it has its own values baked in.
2023 may have felt like the year of AI, but AI was already embedded in our lives, determining rental prices, jail sentences and insurance premiums. It is part of a long-term trend to remove the human from decision making.
Over the past 50 years there has been a steady creep of decisions away from the person who is impacted. The route of appeal used to be to the person in front of you. Now anyone seeking to appeal a decision must work back through multiple, often opaque layers to even understand, let alone challenge, it.
Policies have become processes, processes have become algorithms, and finally algorithmic decisions have become ‘intelligent’: AI decision making is replacing human judgment. With each step you are further from being able to ask “why did you make that decision?” and further still from being able to challenge it. In the case of artificial intelligence, even the programmer may not be able to explain why the AI they created came to the decision it did.
So what values are Artificially Intelligent bureaucracies built on?
Human time is valuable.
Humans are flawed.
Your data is your destiny.
I will return to these in a moment, but first a note on Algorithms vs Artificial Intelligence.
All artificial intelligence contains algorithms, but not all algorithms are artificial intelligence. Algorithms are input-output machines: they have set, pre-existing rules which they apply to ‘inputs’ to deliver an ‘output’. AI such as machine learning is a little different: it reviews training data and builds a model, which it then uses to make predictions or decisions about new input data.
For example, suppose an AI is tasked with approving mortgage applications. It is given the data from mortgage applications over the last 100 years as its training set. It then identifies patterns among the successful mortgages and sets those as criteria for approving future mortgages, as in the sketch below.
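A minimal sketch of that process, assuming scikit-learn; the dataset, the postcode_group feature and every number in it are invented for illustration:

```python
# A toy model, not any lender's real system. The point is how past
# human bias becomes a learned "criterion".
from sklearn.tree import DecisionTreeClassifier

# Features: [income_in_k, postcode_group], where postcode_group 0 and 1
# stand in for two neighbourhoods in the historical records.
historical_applications = [
    [40, 0], [55, 0], [35, 0], [60, 1], [45, 1], [52, 1],
]
# Past human decisions: neighbourhood 1 was routinely refused,
# regardless of income.
historical_decisions = [1, 1, 1, 0, 0, 0]  # 1 = approved, 0 = refused

model = DecisionTreeClassifier().fit(historical_applications,
                                     historical_decisions)

# Two new applicants with identical finances but different postcodes:
print(model.predict([[50, 0], [50, 1]]))  # -> [1 0]
```

No one wrote a rule saying ‘refuse neighbourhood 1’; the model inferred it, because that is the pattern a century of human decisions contains.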
An imaginary day of AI decisions in 2023.
You start your day lazily scrolling through Netflix trying to choose something to watch. The AI recommends some options, and you, like 75-80% of viewers, are influenced by the recommendation. You click (unknowingly) on an image selected for you by AI, because it was the image most likely to get you to watch the show.
You are able to start the day this way because you are currently unemployed: you didn’t quite get the grades you needed when an algorithm decided the grades during the pandemic. You have applied for lots of jobs, but even though you were the captain of your county’s women’s chess team, no one is interested in your C.V.
To make matters worse, your rent is going up. All the rents in the local area seem to be creeping up at once. You tried to get on the housing ladder and applied for a mortgage last year, but you were not successful, even though you have the exact same finances as your friend who was approved.
The Netflix show turns out to be boring, and in frustration at all your setbacks you hurl your laptop across the room. Just then your flatmate arrives back from the shops and receives a laptop to the face. She calls the police, and you are arrested. A judge is informed by AI that, based on your profile (they can’t tell you exactly what they use to form that profile - it is proprietary information), you are more likely to reoffend than other people who have committed the same crime. So you receive a longer prison sentence.
Admittedly it’s been a bad day.
These examples are a small selection of how AI is being used in decision making today. Many of them are problematic and some of those problems stem from the values they are built on. Let’s review those values and you can judge if bad outcomes are inevitable or just glitches we can work through.
Human time is valuable.
Human time is intrinsically valuable. We only have a fixed amount of it, and it is irrecoverable. It is increasingly a luxury as we are driven to be more efficient and productive with our given time. But for corporations, time is also money. In the UK an hour of a human adult’s time will set you back a minimum of £7.49 (£5.28 for a child’s hour). As such there is a business imperative to automate: to reduce and replace human time with the more cost-effective machine time. This does not necessarily mean a jobs apocalypse where we are all replaced by AI. It could mean that you can focus on doing the things which only you can do, the most fun and engaging parts of your job, while the AI addresses the dull, monotonous, repetitive tasks. Since we only have a fixed and irrecoverable amount of time, this could be a very good thing. The back-of-the-envelope sums below show how quickly the incentive stacks up.
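Some toy arithmetic; the only real figure is the £7.49 hourly rate cited above, everything else is a hypothetical assumption:

```python
# Break-even sums for automating one staffed checkout (illustrative only).
hourly_wage = 7.49         # minimum adult hourly rate cited above
hours_per_year = 8 * 365   # assumed: till staffed 8 hours a day, every day
machine_cost = 20_000      # hypothetical price of a self-checkout machine

annual_saving = hourly_wage * hours_per_year
print(f"Annual wage saving: £{annual_saving:,.0f}")                 # £21,871
print(f"Payback period: {machine_cost / annual_saving:.1f} years")  # 0.9
```

On those assumptions the machine pays for itself in under a year, and every year after that is pure saving.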

But it is also possible that we end up going through our day with no humans to interact with. And it gets worse. The fact that human time is valuable means that corporations don’t need to reduce the amount of time something takes, just the amount of time they pay for. Think about activities such as automated checkouts or pre-recorded customer service lines. Both still take time; they may even take more human time in total. It is just that the time has shifted from the employee to the customer: the corporation has passed the time cost from them to you. As we automate more decisions, there is a risk that AI leads to similar scenarios which require more of your unpaid time.
Humans are flawed.
Humans get tired, are riddled with conscious and unconscious bias, often don’t know all the facts, take shortcuts and generally make mistakes. A study found that judges granted 65% of parole requests in the morning and almost none at the end of the day. The exception was that for a short time after a break, snack or lunch, the rate jumped back up to 65% again. AI does not do that; it is ruthlessly consistent.
However, if you train an AI on historical data, which was created by humans, it will identify the human patterns and seek to replicate them. So in my imaginary AI day, if you had your parole appeal at 11:55, even though there was no human involved, the AI may have inferred a pattern and assessed that parole is refused for applicants heard in the 10:30-12:00 window (as in the sketch below). As I have discussed before on this blog, garbage in, garbage out is a huge problem in AI.
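A minimal sketch, assuming scikit-learn and an invented set of past parole decisions; the point is simply that if past humans refused more cases late in a session, a model can learn hearing time as a ‘risk factor’:

```python
from sklearn.tree import DecisionTreeClassifier

# Features: [prior_offences, hearing_hour], with hours as decimals
# (so 11.92 is roughly 11:55).
past_hearings = [
    [1, 9.0], [2, 9.0], [1, 10.0], [2, 11.0], [1, 11.5], [2, 13.0], [1, 14.0],
]
# Past human decisions: granted early and after lunch, refused late morning.
past_outcomes = [1, 1, 1, 0, 0, 1, 1]  # 1 = parole granted

model = DecisionTreeClassifier().fit(past_hearings, past_outcomes)

# The identical case, heard at 9:00 versus 11:55:
print(model.predict([[1, 9.0], [1, 11.92]]))  # -> [1 0]
```

No judge’s blood sugar is involved at prediction time, yet the hungry-judge effect is now baked into the model.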

The problem is compounded by the fact that decisions made by AI, or algorithms for that matter, are assumed to be neutral or unbiased, a social phenomenon referred to as mathwashing.
An excellent book to read on this is Weapons of Math Destruction by Cathy O’Neil. She highlights that big data doesn’t eliminate bias; it just camouflages it with technology. Algorithms embed existing human judgment into code, where it can be replicated and scaled.
The real problem is that you may never understand why the AI has made a decision about you. You may not be allowed to query the AI, as the algorithms might be proprietary software. This was the case in the US when Eric Loomis was sentenced to six years after AI predicted that he was likely to commit further crimes. He was not able to see what that judgement was based on, and as such he could not challenge it.
Your data is your destiny.
I touched on this in the last article, on AI in banking. AI is built on data, and all data is generated from actions in the past. It can only predict the future by extrapolating from that past data. The problem is that prediction based on past data alone is unimaginative and calcifying: it reinforces historical trends.
It is also only able to make a prediction based on the data captured, and not all relevant data may be captured, or even capturable. We can capture data we think is a good indicator of what we are trying to measure. For example, roses purchased on Valentine’s Day or number of years married might be good indicators, but we can’t data-ify being in love. This is the problem of surrogate objectives: are you really measuring what matters, or are you measuring a stand-in for it? The toy example below makes the gap concrete. There are plenty of human decisions we need to make based on qualitative rather than quantitative information.
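A toy illustration with invented numbers. We want to rank couples by ‘being in love’, which is unmeasurable, so we score the proxies we can capture instead:

```python
# Nothing in this score measures the thing we actually care about.
couples = [
    {"name": "A", "roses_bought": 12, "years_married": 30},  # long, loveless marriage
    {"name": "B", "roses_bought": 0,  "years_married": 1},   # newly, happily wed
]

def surrogate_score(couple):
    # The proxy: flower spend plus longevity, with an arbitrary weighting.
    return couple["roses_bought"] + 2 * couple["years_married"]

for couple in sorted(couples, key=surrogate_score, reverse=True):
    print(couple["name"], surrogate_score(couple))
# Couple A tops the ranking: the metric rewards the surrogate, not love.
```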
So how and where should AI be used in bureaucracy?
AI judgments should be transparent. If they are making important decisions about our lives, we should be able to audit the data used to train them and understand what factors have been given weight when forming a judgement. For a simple model, that audit could look like the sketch below.
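A minimal sketch of what ‘auditable’ could mean, assuming scikit-learn and invented data; real proprietary systems rarely expose even this much:

```python
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "years_at_address", "postcode_group"]
# Invented training data, for illustration only.
X = [[40, 2, 0], [55, 5, 0], [35, 1, 1], [60, 10, 1], [45, 3, 0], [52, 4, 1]]
y = [1, 1, 0, 1, 1, 0]

model = LogisticRegression().fit(X, y)

# With a linear model, the learned weights can at least be read out,
# published and challenged:
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: {weight:+.2f}")
```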
There should always be a human in the loop who has an override function, and who can be appealed to, to explain or overturn a decision. A rough sketch of that pattern follows.
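One way that pattern could look in code; every name here is hypothetical, the model is assumed to expose a scikit-learn-style predict_proba, and the 0.9 threshold is an arbitrary choice:

```python
def decide(application, model, human_reviewer, threshold=0.9):
    """Approve automatically only when the model is very confident;
    route everything else to a person who can explain and overturn
    the decision, and to whom the applicant can appeal."""
    p_approve = model.predict_proba([application])[0][1]
    if p_approve >= threshold:
        return "approved (automatic)"
    # Refusals and borderline cases matter most to the applicant,
    # so a human signs them off and owns the judgement.
    return human_reviewer.review(application, machine_score=p_approve)
```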
If algorithms are being used to make decisions, they become a kind of law. In democracies, laws should be scrutinised by elected officials and the public; the rules of AI decision making should not be set exclusively by tech companies.
There are some decisions which should require a human because of the impact they have on another human. We lose something as a society when the human we need to appeal to, or even hear a major decision from, is 10 steps removed. We can no longer appeal to the bank manager, or the corporate policy maker, or even the programmer. We feel powerless. Humans should make decisions which have a huge impact on other humans because if the decision is wrong, we need someone to own the responsibility for that judgement. Or at least to look you in the eye as they make it.
If you enjoyed this article, please consider reading others in the AI ethics series.
1. AI knowledge: Is ChatGPT smarter than you?
2. AI drivers: Is Tesla going anywhere?
3. AI can see you: Facial Recognition is watching.
4. Dr. AI: The machine will heal you.
6. Bureaucracy of AI: The rise of robotic red tape and algorithmic decision making.
7. Chatbots – AI friends?
8. Deepfakes – AI faces?
9. Targeted marketing – The AI nudge?
10. Generative AI – The AI artist?