Over the past decade, I've written extensively about some of the world’s greatest innovations. With these technologies, you know what to expect: an improvement here, a new functionality there. This one got faster, and that other one got cheaper.
But when the AI boom began with ChatGPT a few years ago, it was quite unlike anything I’d ever seen. It was exciting and, to be honest, a little bit scary — especially if you’ve seen as many sci-fi movies as I have.
It was easy to get caught up in the headlines and be carried away by varying predictions and “demystifications” of this new, disruptive technology. Unfortunately, a lot of ideas were either miscommunicated, assumed, or lost in translation.
The result? In came AI myths that were far from reality. So, let’s unpack those. In this article, I’ll discuss six of the biggest AI myths and shed light on what the reality truly is.
But first, why are there so many myths about AI? I think there are three main factors.
First, there’s an entire film genre dedicated to imagined future scientific advances like AI. The technology either helps the nice protagonist achieve a goal or becomes a rogue, all-powerful entity.
This portrayal shapes public perception, much like it did mine when news of AI started making the rounds.
Even Fran Drescher, President of the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), agrees. According to her, with AI, she’s realized that “dystopian stories can become a self-fulfilling prophecy.”
Second, news about AI tends to be framed in extreme ways, either as a utopian miracle or a dystopian threat. These sensational headlines and reports feed narratives around the technology, founded or not.
Third, few people without technical expertise know what AI actually is or how it works.
Many just know what it can do, at least within the scope of their own needs. This gap in understanding makes it easier for misconceptions to spread, because people fall back on generalizations and oversimplifications to reach conclusions.
Now for the fun part: identifying and debunking the myths.
The first myth, that AI will take everyone’s jobs, stems from the speed and sophistication artificial intelligence technology promises (and is currently capable of).
One day, you’re scouring the internet for inspiration for your next article; the next, there’s new software that can generate those same ideas in seconds.
Many worry that these machines and algorithms will soon outperform humans in every industry, making entire professions obsolete, from manufacturing to office work.
In fact, there’s a website where professionals can check the likelihood of AI taking their jobs. It’s called Will Robots Take My Job?
While AI is certainly changing — and will continue to change — the job market, it’s not going to eradicate jobs.
In the words of Vince Molinari, CEO of Fintech TV, “The biggest misconception that’s being propagated about [AI] is that it will rapidly and virtually overnight displace industries and workers.”
And I agree that this is the biggest myth about AI so far.
Rather than replacing all jobs, AI will complement human labor, help people get better at their jobs, and boost their productivity. The majority of employees and their employers say the same, according to LinkedIn’s 2023 Future of Work Report.
Speaking for the energy sector, for example, CEO of Lightcore Energy Thomas Petzold says, “The energy industry already makes great use of AI, but while AI excels at analyzing data and making predictions, it cannot (yet) replace the deep industry expertise and decision-making capacity of energy engineers, scientists, and policymakers.”
Also, even though diagnostics is one of the most popular use cases for AI in healthcare, medical professionals agree that AI won’t replace human doctors there.
“Yes, AI serves as a powerful tool that can enhance decision-making and help doctors with more accurate, data-driven diagnoses… but the human touch, experience, and judgment still remain crucial — especially for complex or ambiguous cases,” Gabriele Keil, CFO of Cell4Care, confirms.
The fear of losing jobs to new technological advancements goes back at least as far as the 16th century.
But it’s been shown time and time again that these advancements create more jobs than they “destroy,” and that it’s mostly those who refuse to grow with the technology who get left behind.
There’s a common misconception that because AI is driven by data and algorithms (and not human emotions or biases), it is immune to errors and always provides truthful information.
This paints AI as an infallible source of truth, a potentially harmful assumption about its reliability.
In reality, however, AI systems are only as good as the data they’re trained on and the algorithms that guide them.
“When it creates incorrect responses, it is based on how the AI has processed the information, including the commands it was given and also based on the data it has been trained on,” Nadja Atwal of the Global AI Council says.
They can misunderstand context, generate incorrect information, or even produce misleading answers, a phenomenon called AI hallucination. Interestingly, research has also shown that AI can lie on purpose.
In a simulation designed to study strategic deception, Apollo Research caught a version of GPT-4 committing insider trading and then lying to cover it up. And in cases where an AI is deliberately trained to be dishonest, researchers have found the deception extremely difficult to fix or retrain away.
Many people use the terms AI and machine learning interchangeably, assuming they refer to the same thing. Wherever AI is mentioned, it’s easy to assume the technology learns and improves from experience. But this is not always the case.
Artificial intelligence is an umbrella term for a broad range of technologies and techniques, often called subsets. These subsets include machine learning (ML), natural language processing (NLP), robotics, neural networks, expert systems, and genetic algorithms, among others.
They’re often used in combination to develop some of your favorite AI solutions. Look at ChatGPT, for example. The platform is built on a combination of AI subsets, including ML (specifically reinforcement learning) and NLP.
In summary, when machines display human-like intelligence, it is considered AI, but it doesn't always involve machine learning.
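To make the distinction concrete, here’s a minimal sketch of a rule-based expert system, one of the AI subsets mentioned above that involves no machine learning at all. The domain (plant care) and every rule threshold here are hypothetical examples, not from any real product:

```python
# A minimal rule-based "expert system": AI behavior with no machine learning.
# Nothing here is trained on data; the intelligence is hand-written rules.
# The domain (plant care) and all thresholds are hypothetical examples.

def recommend_watering(soil_moisture: float, temperature_c: float) -> str:
    """Return advice from fixed if-then rules, not from learned patterns."""
    if soil_moisture < 0.2:
        return "Water immediately: the soil is dry."
    if soil_moisture < 0.4 and temperature_c > 30:
        return "Water today: warm weather will dry the soil quickly."
    if soil_moisture > 0.8:
        return "Hold off: the soil is saturated."
    return "No action needed: moisture is in a healthy range."

if __name__ == "__main__":
    # The system "reasons" about its inputs but never improves from experience.
    # Adding that learning step is what would turn it into machine learning.
    print(recommend_watering(soil_moisture=0.15, temperature_c=25))
    print(recommend_watering(soil_moisture=0.35, temperature_c=32))
```

A system like this counts as (very simple) AI because it mimics a human expert’s judgment, yet there’s no ML anywhere in it. A machine learning version would instead infer those rules from historical watering data.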
It’s easy to feel a bit unsettled when you see AI generating art, writing stories, or composing music. If machines can create (and at greater speed), where does that leave people whose work revolves around this creativity?
The question has fueled major discussions. The strikes by the Writers Guild of America (WGA) and SAG-AFTRA both included concerns over AI’s role in the film industry.
Over 200 artists signed an open letter demanding that AI developers not build tools that replace human artistry. Beyond that, 47% of marketers are concerned that AI will replace their jobs in the next few years.
But creativity isn’t just about producing things — it’s about emotion, experience, and the deeply personal process of bringing ideas to life.
“AI is a powerful tool, but it can’t replicate the nuance of human creativity. The reality is that AI can enhance creative processes by automating repetitive tasks, freeing up humans to focus on what they do best — imagination and innovation,” Jaxon Parrott, Founder of Presspool, attests.
Results from our State of AI in Marketing survey also show that most marketers use AI because it frees up more time for the creative aspects of their jobs. And when they do use it to create content, it’s mostly to get a first draft started, which they then edit themselves. It’s worth noting that companies are innovating in this direction, too.
AI tools like Parrott’s Presspool assist both creators and companies by taking care of the time-consuming work, like targeting, placement, and tracking. This allows their marketing teams to direct their energy toward strategic thinking and creative storytelling.
So, no — AI will not replace human creativity. It will only aid it.
These days, everyone claims to have an AI-powered solution for every problem under the sun. This has fed the myth that AI is just marketing hype with no real substance or staying power, and that it will fade away in time.
As companies touting AI technologies have been known to attract more attention and funding, it’s easy to see why people might think AI is overhyped. The constant buzz and flashy promises can make AI seem like just another trend.
But this is untrue. AI, like many transformative technologies, is here to stay.
AI has already made a significant impact across various industries, from healthcare and finance to transportation and customer service. It’s driving real advancements in areas like drug discovery, predictive analytics, and autonomous systems. Yes, the noise may become deafening, but the key is to separate genuine progress from exaggerated claims.
Integrating AI into existing applications or building an AI solution from scratch is a cost-intensive process, and that much is not a lie.
This McKinsey report reveals that one training run for an AI model can cost anywhere from $4 million to $200 million, while maintaining the model can cost between $1 million and $4 million.
These costs, along with the complexity and infrastructure required, lead many to assume AI is within reach of only big corporations and not smaller businesses.
The fact is you don’t have to build an AI model from scratch to tap into its benefits. Small and medium-sized companies can instead leverage already-existing AI tools to improve their output and efficiency. This way, everyone can have a piece of the AI cake.
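To give a sense of how low that barrier can be, here’s a minimal sketch of this approach using the OpenAI Python client to call a hosted model instead of training anything in-house. The model name and prompts are illustrative placeholders, and any comparable hosted AI service would make the same point:

```python
# Minimal sketch: renting an existing hosted AI model instead of building one.
# Assumes the `openai` package is installed (pip install openai) and that an
# OPENAI_API_KEY environment variable is set. The model name and prompts below
# are illustrative placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # a small, low-cost hosted model (assumption)
    messages=[
        {"role": "system", "content": "You write short product descriptions."},
        {"role": "user", "content": "Describe a handmade ceramic mug in two sentences."},
    ],
)

print(response.choices[0].message.content)
```

The whole integration is about a dozen lines and billed pay-as-you-go, a far cry from the multimillion-dollar training runs cited above.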
According to a survey by the Small Business and Entrepreneurship Council, 75% of small businesses are using AI for various functions, saving an estimated $273.5 billion annually.
These businesses report AI is helping them compete more effectively, manage costs, and improve productivity, with 93% of small business owners agreeing that AI tools provide cost-effective solutions that boost profitability.
To get started, here are some of the AI tools I recommend for small businesses.
The funny thing about myths is that if you let them linger long enough, they start to pass as facts. While myths about Greek gods or origin stories may make for great bedtime stories, myths about technology like AI can hold back progress.
By separating fact from farce, I think we can develop a clear understanding of AI's actual capabilities and limitations, allowing us to make informed decisions and create better solutions.