Remember back in 2004, when Oprah temporarily put Santa out of business by giving every member of her audience a car?
Well, it’s happening again.
But this time, it’s not Oprah. It’s technology, and the million-dollar gift isn’t a Pontiac G6. It’s AI.
These days, everyone claims to have an ‘AI-powered solution’ for everything — from drafting an email that puts a toxic manager in their place to washing day-old dishes.
But while artificial intelligence could very well be one of humanity’s greatest inventions since the internet, not every product or service that touts the tag is genuinely intelligent.
Let’s take a look at this phenomenon called AI washing and what it’s doing to the industry.
AI: The Not-So-Secret Sauce of Product Marketing
With a global market size estimated at nearly $200 billion, a compound annual growth rate of 36.6%, and a projected contribution of $15.7 trillion to the global economy by 2030, AI is quite frankly the diamond of the season.
And with good reason.
According to a Forbes Advisor survey, the use cases of AI in business have seen explosive growth across key business functions like customer service, cybersecurity and fraud management, content production, digital assistance, recruitment and talent sourcing, product recommendations, and much more.
These have a direct impact on business productivity, revenue generation, and innovation, all culminating in measurable business success.
“When we fight against fraud, we have to analyze very large datasets to try and discern often very small and subtle patterns indicating malpractice. AI is perfect for doing this in a way that’s faster, more clever, and can handle an ever-larger dataset,” Gavin Stewart, Vice President of Sales at Oculeus, a software system solutions provider for the telecommunication industry, told HubSpot.
“Traditionally, anti-fraud solutions in our space have worked ‘retroactively,’ that is, by analyzing past data sets. However, by the time conclusions were drawn, it was already too late to stop the attacks. What AI does is dramatically shorten this time lag to achieve much faster interception and blocking of attacks – prevention always being better than cure.”
Then there are the investments.
Five years ago, David Kelnar, head of research at MMC, a venture capital fund investing in UK companies, said that “startups that are labeled as being in the field of artificial intelligence attract 15 to 50% more in their funding round than other technology startups.”
It’s 2024 now, and that still holds, perhaps even more strongly.
Data tracking the investments of more than 8 million public and private companies worldwide from 2013 to 2022 puts global corporate investment in artificial intelligence at nearly $940 billion.
Consumers are plugged in, too.
In the last decade, consumer perceptions about AI have moved from “This looks interesting. I might try it,” to “This will definitely make my life easier. Sign me up.”
Now, a ripple effect of this success is a bandwagon of wannabes, often driven by FOMO (fear of missing out).
It’s only natural, because when people realize the tangible benefits of merely being associated with something, chances are they’d do anything to be part of the party.
They’d inflate reality or even lie outright.
And with AI, this typically happens in one of three ways:
- Exaggerate how ‘artificially intelligent’ the solution is.
- Make outright bogus claims about the models or systems the technology is built on.
- Use AI-related buzzwords or imagery in marketing and advertising when the product itself has nothing to do with the technology.
“For at least five years, we have seen that AI is frequently mentioned by solutions providers, as a kind of magical element, but without much substantiation regarding ‘what’ it does or ‘how’ it does it,” Stewart confirmed for the telco industry.
How are consumers responding to this?
To be fair, not many consumers know they’ve been AI-washed.
The only thing simple about AI is its two-letter acronym. Everything else can, and will, sound like gibberish to anyone without technical expertise.
As long as it’s an improvement (no matter how slight) on what they’re used to, many may not give much thought to the details. This is how companies are getting away with it.
A chatbot built on pre-programmed responses and decision trees can claim to be ‘AI-powered’ without incorporating any natural language processing or even machine learning capabilities (see the sketch below).
A consumer electronics company may also claim that its latest oven is ‘smart’ and ‘AI-powered’ simply because it runs on programmable logic controllers (PLCs) that execute fixed routines and are built neither to learn nor to remember.
The list goes on.
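For illustration, here’s a minimal Python sketch of the first scenario: a hypothetical ‘chatbot’ that is nothing more than keyword matching over canned responses. The keywords and replies are made up for the example; the point is that nothing in it learns, generalizes, or processes language.

```python
# A hypothetical rules-based "chatbot": keyword matching plus canned replies.
# There is no model, no training data, and no learning here -- just a lookup table.

CANNED_RESPONSES = {
    "refund": "To request a refund, reply with your order number.",
    "shipping": "Standard shipping takes 3-5 business days.",
    "hours": "Our support team is available 9am-5pm ET, Monday to Friday.",
}

FALLBACK = "Sorry, I didn't catch that. Try asking about refunds, shipping, or hours."


def respond(message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    lowered = message.lower()
    for keyword, reply in CANNED_RESPONSES.items():
        if keyword in lowered:
            return reply
    return FALLBACK


if __name__ == "__main__":
    print(respond("What are your support hours?"))   # matches "hours"
    print(respond("Can my order ship any faster?"))  # no keyword match, falls back
```

Slap a chat bubble on the front end and a decision tree like this can be marketed as ‘AI-powered’, even though a slight rephrasing of the question sends it straight to the fallback.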
But for those who can tell, the response is just what you’d expect from someone who’s been lied to.
While this X user was reacting to a post about an ‘AI-powered’ pizza maker, Kenneth Hendricks below threatened to stage a Butlerian revolt if he received another ‘unintelligent AI’ outreach email.
Hendricks’ grouse here isn’t that these emails are AI-powered, but that instead of leveraging truly advanced natural language processing and generation, they rely on static templates and predictive text.
Are there any tell-tale signs?
Yes.
As mentioned earlier, having the technical expertise sure makes it easier to spot. In the absence of this, however, there are a few things that could still get your wash alert ringing.
First, as Stewart noted, products and services that claim to be powered by AI without providing detail about how exactly they do so are red flags. Companies with genuinely intelligent solutions are very vocal about the algorithms, datasets, and models their solutions are built on.
He said this is how the team at Oculeus approaches selling its AI solutions. When they market their AI proposition to customers, they first evaluate the client’s expertise level in the subject and if necessary, provide an initial briefing.
“Although public understanding of AI has recently grown (since ChatGPT and others), there is still a lot that’s mysterious! Next, and in a fraud context, we show how AI is also being exploited by the bad guys in different ways to normal,” said Stewart.
He continues, “These innovations are likely to get past some of their current controls. Then finally, we show the features of our solution that rely on AI, and how they improve effectiveness with regard to the key areas of an anti-fraud process — detection, validation and application of controls.”
The second red flag involves automation solutions masquerading as AI solutions. Just as the X user above pointed out, if a so-called AI solution still needs hands-on human intervention to solve whatever problem it was designed for, then that’s just good ol’ automation. Nothing more.
See also whether the technology can really do what it says it can. Usually, if the tech is AI-washed, the delivery of the claimed capabilities leaves plenty to be desired. Disappointing enough to make a Butlerian revolt appealing.
What does this mean for the AI industry?
Years ago, ‘cloud’ was the reigning diamond of the season and vendors were known to (you guessed it) cloud-wash.
Washing is, in fact, not an uncommon phenomenon. But if the resulting effects of cloud-washing (and any kind of washing, really) are anything to go by, the AI industry has its work cut out for it.
Consumer trust in AI-labeled offerings is already on shaky ground due to ethical and regulatory concerns. With this development, the skepticism will go through the roof.
So, even though most consumers want the brands they engage with to be transparent about their use of AI, many may not even trust this transparency anymore.
Also, the pervasiveness of these claims can dilute the market, making it hard for investors and consumers alike to separate the wheat from the chaff. That’s bad for business and for genuine innovation.
Not to mention that if a company or service provider is publicly outed as having AI-washed its offerings, its reputation may never recover. Even if it does, it’ll only be after a few lawsuits and some expensive PR.
Case in point: Equinix, an $80 billion global data center company, was called out by Hindenburg Research in March 2024. It was accused of many things, among which was selling ‘an AI pipe dream’ it could not afford.
Given its influence in the data center market, the company allegedly boosted its valuation by telling investors its data centers would be key to powering the AI surge, a surge expected to double data centers’ power demands within two years.
According to former employees who contributed to the report, this would be near-impossible, as every single site in the estate is oversold by 25% and, in the words of one former executive, “...there’s no easy way of fixing that.”
Overstating AI capabilities — check.
So far, these allegations have resulted in a class action lawsuit by investors who purchased Equinix shares between 3 May 2019 and 24 March 2024, as well as investigations by the U.S. Securities and Exchange Commission (SEC) and the U.S. Attorney’s Office.
Here’s What Federal Regulators Think
As the concerns rise, federal regulators have taken a stand.
In what would mark the beginning of the SEC’s crackdown on AI washing, SEC Chair Gary Gensler said in a speech at Yale Law School, “If a company is raising money from the public, ... it needs to be truthful about its use of AI and associated risk.”

He added that companies should ask themselves some basic questions, such as: “If we’re discussing AI in earnings calls or having extensive discussions with the board, is it potentially material?”

Investment advisers and broker-dealers also shouldn’t mislead the public by saying they’re using an AI model when they aren’t, or by claiming to use an AI model in a particular way without actually doing so. Such AI washing, whether by companies raising money or by financial intermediaries like investment advisers and broker-dealers, may violate securities laws.
And that has happened several times already.
A classic example came in March of this year, when two investment advisers paid the SEC a combined $400,000 in penalties for making public claims they couldn’t substantiate with real, hard evidence.
Published in both offline and online marketing materials, these claims boasted investment strategies powered by ‘expert AI-driven forecasts’ that would ‘turn your data into an unfair investing advantage.’ One of the firms even claimed to be ‘the first regulated AI financial advisor.’
Then there’s the Federal Trade Commission (FTC), which told the good people of America to ‘keep their AI claims in check.’ And it has made good on its word so far, charging a string of companies with fraudulent misrepresentation of ecommerce business opportunities supposedly powered by AI.
The consequences? Multimillion-dollar fines and a lifetime ban from selling ecommerce business opportunities, among others.
The US Department of Justice will not be missing out on the fun either.
Rounding Up
There’s a right way to leverage the hype.
You can develop a genuinely AI-powered solution to a business problem, in which case you can take Stewart’s advice on marketing AI.
“Embrace head-on the tougher questions everyone has front-of-mind — starting with ‘What is it?’, ‘Why do I need it?’ and ‘What new risks might its adoption bring?’”
Or, you can stick with your non-AI product, but learn to use AI tools to supercharge your marketing.
It’s easier to read our report on incorporating AI into your marketing technology stack than to be slapped with a federal lawsuit.