What if the AI Revolution Doesn’t Happen?

Curt del Principe

AI is in a hype bubble.


Listen only to Sam Altman or Jensen Huang and you might feel like AI is going to take over the world (or at least your job) any day now.

But as revenues lag far behind costs, not everyone agrees a revolution is inevitable.

“To be honest, I am not confident generative AI as currently envisioned will ever come together,” says Gary Marcus, an AI expert who recently testified before Congress alongside Sam Altman.


So what happens if that bubble pops before costs are covered? What if the AI revolution doesn’t happen?

Blowing Bubbles

In 2023, OpenAI shook boardrooms and rattled nerves with a report that 80% of the workforce could be exposed to generative AI.

Since then, AI executives and investors have made bold claims about the impact of their products, from Altman’s claim that it will replace so many jobs that the government will have to support everyone, to Jamie Dimon’s warning that no one will “escape [its] claws.”

The predictable result is that FOMO infected the C-suite, as executives feared falling behind. Meanwhile, 79% of American workers simply feared being replaced.

(Side note: One of the jobs the report targets as most at-risk? Clergymen. Are you attending ChurchGPT?)

On the flip side (and getting far less media coverage) are voices like Gary Marcus.

“You have software that isn’t making much money, isn’t secure, and is keeping a lot of people up at night,” Marcus writes in an article titled “When Will the GenAI Bubble Burst?”

Gary Marcus is an AI researcher and cognitive scientist who’s founded multiple machine learning companies.

“The entire industry is based on hype, and on the specific hope that the kinds of problems we saw again and again with GPT-2, GPT-3, and GPT-4, not to mention Claude, Grok, Gemini, LLaMA, and so on, are on the verge of being solved,” he says.

So how do you know who to believe?

The Problem with Predictions

Predicting the future of tech is incredibly difficult, but there’s little consequence to making bold claims.

Take, for example, this article in The Guardian from 2014. In it, the founder of the AI writing tool Quill claimed that “90% of news read by the general public would be generated by a computer” within 10 years.

Check your calendar if you don’t see the problem with that claim.

In 2015, voice assistants like Alexa were guaranteed to revolutionize media, kill Google, and change marketing forever. Only 7 years later, Alexa was declared a “colossal failure” as it stared down a $10 Billion loss.

But when you read about genAI’s rapid advances, there’s a feeling like it’s different this time. There’s a (carefully cultivated) sense that this time the revolution is both inevitable and imminent.

So, instead of making a prediction of our own, let’s examine what challenges AI companies have to overcome to make the revolution happen.

8 Hurdles AI Companies Face

To start with, let’s take a look at some of the hurdles that are inherent to the AI industry itself. Though they don’t catch headlines as often as the bold claims, they’re freely admitted by AI executives.

We’re short on computing power.

Stop and think about the sheer number of tasks being done by humans today.

To do 80% of that, software providers would need an amount of computing power that simply doesn’t exist today.

And it’s not enough just to be able to do those tasks; to get widespread adoption, these tools need to be able to do them better and more cheaply than they’re done now.

Now consider that, with existing computing power, just a few million users are enough to tax GPT-4 to the point that people are complaining that it’s getting worse.

Before there can be a revolution in AI, there must first be a revolution in computer infrastructure.

And to try to make that happen, Sam Altman has proposed a joint venture to raise a mind-boggling $7 Trillion. The largest proposed JV in the history of man (or machine).

But as GPUs — the specialty microchips needed to run AI — become more powerful and more in demand, they’re getting more expensive, not less.

It’s the same pattern we saw in the late 2010s when the crypto boom caused GPU prices to double or even triple.

And it’s not just a question of money: they’re also getting hungrier for energy.

An AI boom would be an energy crisis.

At current levels (pun intended) of demand, the International Energy Agency (IEA) estimates that by 2026, data centers alone will draw as much energy as all of Japan.

That’s without an energy-hungry AI boom.

The number of data centers required for Altman’s vision would be a crisis-level burden on our already aging infrastructure.

But don’t take my word for it. Take Altman’s.

“There’s no way to get there without a breakthrough,” Altman said at the recent World Economic Forum. “We still don’t appreciate the energy needs of this technology.”

So, in addition to an infrastructure revolution, we also need an energy revolution before we can have an AI revolution.

(And I’m not even going to touch on the parallel water crisis.)

The “Eye-Watering” Expenses

Since a buffet of free AI tools is already available, it’s easy to miss how jarringly expensive the technology is. But those free tools are only available thanks to massive infusions of venture capital.

In 2022, Sam Altman tweeted that the costs to run ChatGPT were “eye-watering.”

A single response from GPT-3 was estimated to cost 10 times as much as an ordinary Google search.

According to analysts from Morgan Stanley, Google adds $1.2 Billion to its costs for every 10% of searches handled by its LLM tools.

And as LLMs become more powerful, those costs are only going up. A report from Stanford University noted that it cost $78 Million in compute alone to train GPT-4, roughly an 18-fold jump from the already astounding $4.3 Million it took to train GPT-3.
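To put those figures together, here’s a rough back-of-envelope sketch in Python. The per-search baseline cost is a made-up placeholder (Google doesn’t publish what a search actually costs to serve); the 10x multiplier, the Morgan Stanley estimate, and the training figures come from the reporting above.

```python
# Back-of-envelope math using the cost figures cited above.
# ASSUMPTION: the per-search baseline is a placeholder picked for illustration.
COST_PER_SEARCH = 0.002        # dollars per search (assumed, not public)
LLM_MULTIPLIER = 10            # a GPT-3 response ~10x a search (estimate above)

cost_per_llm_response = COST_PER_SEARCH * LLM_MULTIPLIER
print(f"Ordinary search: ~${COST_PER_SEARCH:.3f} | LLM response: ~${cost_per_llm_response:.3f}")

# Morgan Stanley's estimate: ~$1.2B in added annual costs for every 10% of
# searches handled by LLM tools. Scale it to larger shares of traffic.
ADDED_COST_PER_10_PCT = 1.2e9  # dollars per year
for share in (10, 25, 50, 100):
    added_billions = (share / 10) * ADDED_COST_PER_10_PCT / 1e9
    print(f"{share:>3}% of searches via LLM -> ~${added_billions:.1f}B/year in added cost")

# Stanford's training figures: GPT-3 vs. GPT-4, compute alone.
gpt3_cost, gpt4_cost = 4.3e6, 78e6
print(f"Training cost jump from GPT-3 to GPT-4: ~{gpt4_cost / gpt3_cost:.0f}x")
```

Run the numbers at full scale and that’s roughly $12 Billion a year in added serving costs, before a single model is retrained.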

And as those costs loom, so does another reality: Eventually, the investors pumping in cash are going to expect a return on those investments.

Monetization

Like any other product, these tools will have to make money. And it’s not entirely clear how that’s going to happen. It’s a trick none of the players have figured out yet.

The Wall Street Journal reports that the AI industry collectively brought in $3 Billion in revenue last year. At the same time, they collectively spent $50 Billion. Just on microchips.

To compensate, Google is reportedly looking to put its Search Generative Experience (SGE) behind a paywall. Meanwhile, its would-be rival, Perplexity, has shifted to an ad-revenue model as its user base shrinks.

Both moves could sandbag adoption.

In a February earnings call, Amazon CEO Andy Jassy noted that AI-related revenue was “relatively small.” Similarly, The Information reports that Salesforce executives have admitted they don’t expect genAI to make a “material contribution” to revenue growth this year.

In fact, so far, the only companies profiting from the AI bubble are those that are selling chips or cloud computing services.

One reason is that businesses are hesitant to take on the potential liabilities that come with tools that occasionally just make s*** up.

And about that…

Hallucinations aren’t going away.

LLMs don’t actually understand language; they simply use statistical models to predict it.

“They’re really just sort of designed to predict the next word,” says Daniela Amodei, co-founder and president of Anthropic, in an interview with Fortune. “And so there will be some rate at which the model does that inaccurately.”

In other words, hallucinations aren’t misunderstandings that can be taught away; they’re just statistical errors. And statistics always have a margin of error.
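To see why, here’s a toy sketch in Python. The prompt and the probabilities are invented for illustration; a real model learns a distribution like this over tens of thousands of tokens, but the mechanism, and the built-in error rate, works the same way.

```python
import random

# Toy next-token model: an invented probability distribution over
# continuations of the prompt "The capital of Australia is". Real LLMs
# learn distributions like this (over huge vocabularies) from training data.
next_token_probs = {
    "Canberra": 0.90,    # correct
    "Sydney": 0.07,      # plausible-sounding, but wrong
    "Melbourne": 0.03,   # also wrong
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick a token at random, weighted by the model's probabilities."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Even with 90% of the probability on the right answer, sampling still
# produces a wrong one about 1 time in 10 -- a "hallucination."
samples = [sample_next_token(next_token_probs) for _ in range(10_000)]
error_rate = sum(s != "Canberra" for s in samples) / len(samples)
print(f"Hallucination rate: {error_rate:.1%}")   # ~10.0% on average
```

You can shrink the probability mass on wrong answers, but as long as the model is sampling from a distribution, some of it always lands on the wrong token.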

“If hallucinations were brought down to human expert levels by the end of 2024 I would be truly astonished,” echoes Gary Marcus on his Substack.

That may not be a huge deal if you’re asking Siri what time the grocery store opens. It's a massive problem if you’re writing a health article, coding software, providing legal support, or diagnosing cancer.

So far, the answer to this issue has been to throw more data at the problem and hope for the best.

“This is the dirty secret of deep learning: It’s throwing spaghetti against the wall,” said Ari Morcos, an AI researcher who has worked at Google and Meta.

But that tactic only works as long as there’s more data.

They’re already running out of data.

Shockingly, AI companies have already hoovered up almost all of the usable internet.

In 2022, researchers estimated a 90% chance of AI researchers running out of data by 2026.

At the heart of the matter is the word “usable.” You can’t feed an LLM a steady diet of your annoying neighbor’s misspelled Tweets and expect it to pump out Shakespeare.

Models need to be trained on data that’s complete, accurate, high-quality, and publicly available.

And that last condition is the part that’s really thrown a wrench into the works.

In seeking high-quality data, all of the major companies — including Meta, Google, and OpenAI — have allegedly ignored data privacy and copyright law.

A wave of legal battles is in the works.

At the time of publishing, eight more major US newspapers have joined The New York Times in filing suit against OpenAI and Microsoft.

That’s in addition to a tidal wave of lawsuits from authors, actors, musicians, and even coders who allege that non-public works were scraped to train AI models.

And Meta isn’t in the clear either. Recordings obtained by The Times feature executives admitting to knowingly using copyrighted material, and advising the company charge forward without permission because negotiating with publishers, artists, musicians, and news outlets would take too long.

If even one of these cases succeeds, it could force AI companies to pay licensing fees for any copyrighted works used in their data sets, adding to those eye-watering costs.

“The only practical way for these tools to exist is if they can be trained on massive amounts of data without having to license that data,” says Sy Damle, a lawyer for the VC firm Andreessen Horowitz, in an interview with The Times. “The data needed is so massive that even collective licensing really can’t work.”

But even more devastating would be the requirement to remove unlicensed materials. Since genAI systems can’t “unlearn” data, removing copyrighted materials would mean recreating these models from scratch.

In turn, that means incurring those massive training costs again, for potentially smaller and less useful models.

The Human Element

The last, and perhaps biggest, variable is the human element.

Now, don’t misunderstand me. I’m not about to make a sentimental argument that “Humanity will win because we have a special spark.”

No, instead, I’m saying humans are messy, illogical, and unpredictable in the best of ways.

That hangs a big question mark over the adoption of any emerging technology.

Take, for example, self-checkout, which also seemed like an inevitability. Retailers like Walmart and Target are now scaling back on automated checkout and hiring more human cashiers.

Why? It turns out that self-checkout is actually more expensive when you factor in the human element: items get miscounted, merch gets stolen, and customers rage-quit transactions when they get frustrated.

Similarly, Amazon is quietly killing its “Just Walk Out” experiment after the costs outweighed the value.

Need another example? Robots in factories have been the subject of anxiety around worker automation for decades. Their untiring labor seemed inevitable.

And yet, the 2010s saw Toyota removing robots and hiring more human factory workers. Why? To better respond to rapidly shifting sales trends caused by unpredictable human consumers.

What happens if the revolution doesn’t happen?

For now, these challenges add up to AI being more expensive than it is profitable. If that math doesn’t reverse before investors expect a return, the cost of AI will shift to businesses and consumers.

And if the value doesn’t match that cost, the bubble will pop.

Does that mean AI goes away? Well, did that whole “internet” thing disappear when the Dotcom Bubble burst?

Of course not. Now, even your granny has an IG for her knitting business.

But if the current AI boom fails to live up to its hype, it could trigger an AI winter where investment dries up and consumers become skeptical.

If that happens, there would be what Gary Marcus called “massive, gut-wrenching corrections” in valuation. Most of the startups would buckle, leaving AI to the major tech companies that can bear the expense. The abundant free offerings would become pricey enterprise tools for those who can afford them.

And those 79% of workers could relax a little. At least until the next hype bubble.

