Imagine you’ve been following an author for months.
You think they’re funny, you follow their recommendations, and you look forward to their reviews. Then you find out that the author’s content is generated by AI.
Oh, and the author doesn’t exist either. Their author blurb and picture are generated by – you guessed it – AI. This is what one Futurism article alleges happened at Sports Illustrated.
(Aside: I asked ChatGPT to come up with the AI version of the popular expression “wolf in sheep’s clothing,” but the best it could do was “AI in human guise.” Deep sigh.)
In what Futurism calls “a staggering fall from grace,” Sports Illustrated seems to have gotten caught in a pretty bad lie about its content.
The renowned publisher reportedly created fake author pages that were swapped out every couple of months for new fake authors, complete with new fake bios and fake pictures.
These fake authors would then “write” AI-generated content that featured affiliate links, which earn publishers money for every sale generated by the site.
How did they find out the authors were being rotated? AI authors like Drew Ortiz, with no online presence other than the website, would be scrubbed from the site after a few months, with their profile pages redirected to new AI authors.
In addition, all articles written by the supposed author would get a byline from the new author, with no editor’s note explaining the change.
It’s worth noting that these articles often included strange sentences, very much reading like what we expect from AI, which likely led to this exposé in the first place.
But any reader may have dismissed their concerns once they saw an author bio like this:
"Drew has spent much of his life outdoors, and is excited to guide you through his never-ending list of the best products to keep you from falling to the perils of nature. Nowadays, there is rarely a weekend that goes by where Drew isn‘t out camping, hiking, or just back on his parents’ farm."
That would’ve fooled me.
Similar to a child caught with a face full of powdered sugar swearing they didn’t eat the donut, Sports Illustrated said, “No, we didn’t do it.”
In a statement, they explained that the articles were created by an external company, AdVon, which they were already investigating when Futurism reached out with the story. As for the author bios, writers were apparently instructed to use pseudonyms to protect their identities.
As a result, they’ve terminated their partnership with the vendor and have since scrubbed their site of any AI-generated content discovered by Futurism.
Although the company has attempted to distance itself from these claims, Futurism reportedly found that the same thing is happening with another publisher under the same parent company, The Arena Group.
Sports Illustrated is the latest of many companies burned by their use of AI in the court of public opinion.
Back in August, newspaper chain Gannett went viral for its use of AI to report on high school sporting events. Critics said every article followed the same structure and lacked valuable context.
This incident points to a larger conversation publishers and readers are having regarding what content should and shouldn’t be written by AI.
Meanwhile, AI writing tools are more popular than ever. The keyword “ai writing” saw a substantial increase in search volume starting in late 2022, right around when ChatGPT was released.
Since then, companies and publishers alike have turned to AI as a way to easily generate content and redirect their resources.
But it’s unclear where the line is.
Some draw the line at news, while others are against its use altogether.
As a result, many publishers have opted for full transparency, sharing their guidelines on how they leverage AI for content creation. But when publishers keep things behind the scenes, there’s always a chance it’ll go sideways and end up in the next news cycle.