Would You Like Your Children to Make Better Decisions?
26 March, 2021
To fully understand artificial intelligence, and what it means for the human race, is hard. It’s evolving at super-speed. And even those in the know can’t seem to pin it down definitively.
The inability to grasp the essence of AI is unsettling because we know that it’s already starting to influence our lives in a big way – it makes us feel like we’re being left behind. And when we don’t fully understand something, our natural reaction is often fear.
Fear, though, is sometimes counterproductive; it stops us exploring experiences or opportunities that have the potential to enrich our lives.
Our industrious imaginations make it easy to vilify AI, thinking it a force with motives ulterior to our own. As you’ll read below, some concern is indeed justified. But, artificial intelligence can, and will, improve our lives and those of our children; more so for those who embrace it.
To better our understanding of AI, so that we may fear it less, let’s start by exploring the variety of artificial intelligence definitions.
Intelligent, or cunning?
In their comprehensive textbook, Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig put forward the following table to encapsulate the full spectrum of potential capabilities of AI:
Four Possible Goals for AI According to AIMA
The version of AI that scares us the most is the bottom left quadrant, where systems act like we do. ‘Acting human’ implies independent decision making that may or may not be in the best interests of other humans – what we might define as the ability to be cunning.
This form of AI is known as Artificial General Intelligence (AGI) or strong AI. It’s difficult to define when exactly AI becomes AGI because we’re not entirely sure what the latter will look like. But strictly speaking, AGI would need to exhibit some elements of human consciousness; the consensus is that we’re not there yet.
On the other hand, thinking like humans (top left quadrant), is an easier concept to grasp and well within AI’s current capabilities. Amazon’s definition of AI fits neatly into this box:
“The field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition.”
AI that can think like we do has the power to beat grandmasters at chess, navigate traffic with us in the backseat, and understand human speech no matter our dialect.
Move across to the two quadrants on the right-hand side of the table and you’re dealing with AI technologies concerned with rationality. In other words, technology that uses data to optimize human decision making. And why do we need this? Because our emotions often cloud our ability to think straight.
This is arguably AI’s greatest calling – to help us make better decisions.
Indeed, Russell and Norvig are unwavering in their belief that those working with artificial intelligence, and its related technologies, should be aiming for the bottom right-hand quadrant. Here, machines first process our data to understand what makes us tick, and then pair that knowledge with automation to help us make better decisions.
Really, it’s just us we need to worry about.
Nuclear fission, gunpowder, and internal combustion engines — all used with good and bad intentions. And it was us, not the things we invented, who defined their roles. The same applies to the use of artificial intelligence. So, just as AI can help us make better decisions, it can also be used in reverse.
If you’ve watched The Social Dilemma, you’ll have gained insight into how something as innocuous as a Google search bar can be anything but.
It must be emphasized that what you see in your search results or social media feeds is not a balanced representation of the world. These AI-enabled systems are designed to show you content that reinforces your existing ideas.
And because we like it when someone (or something) agrees with us, we keep our eyes on the screen. This gives those directing the AI more opportunity to put tailored advertising in your line of sight. What’s wrong with that?
It’s less about you buying things you don’t need (not great for your wallet) and more about the erosion of your capacity to engage with the world objectively (not great for your well-being).
The takeaway is that humans, not machines, will determine whether the technology acts in our best interests or not. There’s one area in particular where we need its help.
Should this be the legacy we leave our children?
$25,000 – $35,000?

That’s the average debt US students are saddled with after graduation. Add to that the credit card debt the average US household carries, and the picture darkens further.
Those statistics alone should make you wonder about the sincerity of the current system. It seemingly gives consumers easy access to the shackles of debt, without any guidance on how to deliver themselves from it.
In and of itself, debt is not inherently problematic. It can be an effective way to amass wealth, like when it’s used to finance assets that appreciate in value, such as a house or a business.
But far too often debt is used to spend money we don’t have on things that leave us financially and emotionally worse off. We are partly to blame for these bad decisions, but the current system encourages us to make them.
Wonga, the defunct UK-based payday lender, is a good example of the rot in the system. They used automated risk processing technology to approve online, short-term, high-interest loans. How high exactly? As much as 5,853% per annum.
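To see how a rate like that arises, consider daily compounding: a daily charge that sounds almost harmless balloons into a four-digit annual figure. The sketch below is illustrative only — the 1.126% daily rate is an assumption chosen to land in the same order of magnitude as Wonga’s advertised APR, not their actual fee schedule, which included fixed fees and differed in detail:

```python
# Illustrative only: how daily compounding turns a modest-sounding
# daily rate into a four-digit annual percentage rate (APR).
# The daily rate below is an assumption for illustration, not
# Wonga's published pricing.

def compounded_apr(daily_rate: float, days: int = 365) -> float:
    """Annualize a daily rate by compounding it over `days` days."""
    return (1 + daily_rate) ** days - 1

daily_rate = 0.01126          # about 1.1% per day
apr = compounded_apr(daily_rate)
print(f"{apr:.0%} per annum") # a four-digit APR, the same order as 5,853%
```

The point of the arithmetic is that the headline daily rate hides the true annual cost — which is exactly why APR disclosure rules exist.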
Just as the engineers who helped build social media platforms now limit their children’s use of those very apps, is it not time for those with finance and banking expertise to start having a conversation about the ills of the current system? About how easy it is for well-intentioned savers to end up beholden to debt?
If you answer in the affirmative – we don’t see how it could be otherwise – then it’s the businesses shaping the future of financial services, not just the relevant institutions or government bodies, who need to start taking ownership of how things are done.
How can we effect real change? By using artificial intelligence – and the practical automation it powers – to help us make decisions that deliver us from financial repression.
What are the possibilities?
What if the custodians of your money automated your budgeting, separating your income into envelopes that provisioned for savings, bills, emergencies, and guilt-free spending money? Wouldn’t that be a simple way to stop us spending everything and saving nothing?
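At its simplest, that kind of automation is a percentage split applied to every paycheck. Here is a minimal sketch — the envelope names and percentages are hypothetical assumptions, not any particular product’s defaults:

```python
# Hypothetical envelope-budgeting sketch: split each paycheck into
# named envelopes by fixed fractions. Names and fractions are
# illustrative assumptions, not a real product's rules.
from decimal import Decimal, ROUND_DOWN

ENVELOPES = {                      # fraction of each paycheck
    "savings": Decimal("0.20"),
    "bills": Decimal("0.50"),
    "emergencies": Decimal("0.10"),
    "guilt_free": Decimal("0.20"),
}

def allocate(income: Decimal) -> dict:
    """Split `income` across the envelopes. Any leftover cents from
    rounding go to savings, so the parts always sum to the whole."""
    cent = Decimal("0.01")
    parts = {
        name: (income * share).quantize(cent, rounding=ROUND_DOWN)
        for name, share in ENVELOPES.items()
    }
    parts["savings"] += income - sum(parts.values())
    return parts

print(allocate(Decimal("2753.33")))
```

Using `Decimal` rather than floats keeps the cents exact — a small design choice, but the kind that matters when the software is handling someone’s rent money.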
What if they used automation to round-up the purchase price of your sneakers, using the difference to buy Nike shares for your personal stock portfolio? Wouldn’t that be a feel-good action, helping you to support the brands you love while seamlessly improving your financial well-being?
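Mechanically, a round-up is just the gap between the purchase price and the next whole dollar. A hedged sketch, working in integer cents to avoid floating-point drift (the purchase amounts are made up, and the share-buying step is deliberately left as a print statement rather than a real brokerage call):

```python
# Illustrative round-up investing sketch. Prices are handled in
# integer cents; the "buy shares" step is a stub, not a real
# brokerage API.

def round_up_cents(price_cents: int) -> int:
    """Spare change needed to round a purchase up to the next dollar."""
    return (100 - price_cents % 100) % 100

# Hypothetical purchases: $59.49 sneakers, $120.00 jacket, $18.75 lunch
purchases_cents = [5949, 12000, 1875]
spare_change = sum(round_up_cents(p) for p in purchases_cents)

print(f"set aside ${spare_change / 100:.2f} toward fractional shares")
```

Note the second modulo: a purchase that is already a whole-dollar amount contributes nothing, rather than a full extra dollar.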
What if they presented you with a virtual ‘emergency only’ glass box to break when you tried to draw from your savings, using humorous language to discourage you? Wouldn’t that help you to become financially free, more quickly?
What if they used automation to keep their operating costs so low that they need only provide you with services that genuinely added value to your life? Wouldn’t it feel great to have your needs put first by those looking after your money?
What if they offered you a ‘virtual hand’ to give you autonomous financial advice and education without prejudice or conflict of interest? Shouldn’t everyone, not just those able to afford human financial advisers, have access to that sort of service?
What if their purpose, their reason for being, was to help you turn your dreams into reality using the power of AI and automation? Isn’t that the outcome we deserve?
Diederik Meeuwis