G/O Media Execs Full Speed Ahead On Injecting Half-Cooked ‘AI’ Into A Very Broken US Media

from the I'm-sorry-I-can't-do-that,-Dave dept

While early “AI” systems have plenty of potential for creativity and productivity, early implementations in the already very broken US journalism and media markets have proven to be an ugly mess. In part because the tech isn’t really fully cooked yet. But also because the kind of folks who get to run major modern US media companies are incompetent cheapskates.

Media giants like G/O Media and Red Ventures keep implementing such systems at outlets like CNET and Gizmodo (without being transparent with staff about it), and the result has consistently been a lot of error- and plagiarism-filled articles that lower brand quality under the pretense of progress.

Of the 77 AI-generated articles CNET published, more than half had significant errors. Gizmodo’s recent AI-generated articles have also been terribly written and filled with mistakes. In many instances, it’s costing outlets more money to have a human editor go in and fix errors than it would to just have a human generate the content in the first place.

Actual, human staffers understandably aren’t happy, and routinely say that publishers aren’t really communicating with staff as to how the technology is being implemented:

G/O employees, who tell me they don’t want to talk on the record for fear they’ll be disciplined by managers, say they’ve received no information from their managers about any use of AI — except a heads-up that the AI-written stories were going to appear on the site on July 5, which was sent the same day the stories ran.

The problem here isn’t inherently “AI.” This tech is in its early stages and will inevitably evolve to be very useful in helping to generate and edit content, especially of the rudimentary variety. Nor is this just the grousing of people whose livelihoods are being automated. Because they’re not, really. At least not well.

The problem is that lazy and terrible managers are injecting unfinished technology into an already very broken U.S. media sector. And they’re doing it without any real transparency, without consulting existing staff, and not with the goal of improving product quality, but with an eye on cutting corners, cutting costs, and leveraging it as a weapon against already underpaid labor.

There’s really zero indication that the folks running these outlets give much of a shit about what employees think about much of anything, AI or otherwise. Most of these outlets already violently underpay their staff, routinely bleed talent via mismanagement, aren’t genuinely that worried about substandard product, and are blindly chasing max engagement in a broken attention economy.

Even before “AI” arrived on the scene, nuance, substance, deep reporting, and smart analysis were increasingly being replaced with sensationalism and clickbait gibberish in a pursuit of short-term wealth. All while reporters and editors are generally paid in pocket lint and broken promises, then laid off in droves while the folks at the top of the chain make out like bandits.

Inject half-cooked machine learning chatbottery into that already broken mess and you’re not really revolutionizing anything, you’re just supercharging existing dysfunction. Shitty managers at these publications want to act as if they’re revolutionizing journalism and media, and insist they’re not looking to replace humans and journalism with automated, error-prone clickbait machines.

But long-mistreated staffers at most of these outlets know the score:

Spanfeller and Brown also say they won’t use AI to replace G/O’s staff. “Our goal is to hire more journalists,” Spanfeller said. (Spanfeller notes that, like other media companies — including Vox Media, which owns this site — G/O has laid off employees because of this “crappy economic market” — but called it a “de minimis amount of reduction.”)

That argument doesn’t persuade G/O staff, who say they assume G/O will inevitably use the tech to replace them.

“This is a not-so-veiled attempt to replace real journalism with machine-generated content,” another G/O journalist told me. “G/O’s MO is to make staff do more and more and publish more and more. It has never ceased to be that. This is a company that values quantity over quality.”

It would be one thing if this technology were being introduced transparently in a way that aids staff, boosts productivity, and improves product quality. But that’s most assuredly not what’s happening here. What’s happening here is incompetent, fail-upward, trust-fund brunchlords looking to build a massive, cheaply made, automated clickbait and bullshit generation machine that effectively shits money.

Real journalism, real progress, and real quality simply don’t enter into it.



Comments on “G/O Media Execs Full Speed Ahead On Injecting Half-Cooked ‘AI’ Into A Very Broken US Media”

26 Comments
Mark Gisleson (profile) says:

Re: ??

Current AIs can’t tell stories at all. They just slop text around on a page until the space has been filled.

It is easier to rewrite AI text than it is to edit it.

Corporations don’t respect writers because, of all the job slots to fill, the writing slot is the one their nephew or niece fits best. Shuffling along with a substandard writing staff, managers lose respect for writing/editing (not that they had much to start with).

The problem isn’t AI or substandard human writers. The problem is a corporate management suite that struggles to read or comprehend but that, most of all, does not understand that writing is about telling truths. Sharing untruths is much, much harder, way above an AI’s pay grade.

Pay the talent, fire the managers who fail to properly select and manage talent.

Not one TechDirt story ever could have been produced by AI. I’m pretty much done with this site (Russiagate¡¡¡), but I’d make a donation to help pay to hire AI to write a TechDirt story. Give it a phony byline and publish it. Then see how many of your readers notice. I’m guessing 100% of them, but maybe less, as it’s possible that senior management also reads TechDirt.

Stephen T. Stone (profile) says:

Re: Re:

Current AIs can’t tell stories at all.

The natural state of storytelling is people sitting around a campfire bullshitting one another. An LLM’s natural state is “autocomplete”. While “autocomplete” might work for writing up generic news stories, it’s anathema to storytelling. An LLM will never understand pacing, characterization, choosing the right details for maximum impact, and a thousand other individual decisions writers make in the course of telling their stories.

No large language model will ever be able to tell a story in the exact same way a human tells a story. Anyone who thinks otherwise is probably someone like David Zaslav⁠—i.e., a rich asshole who hates art.
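The “autocomplete” framing above can be sketched in miniature: a toy bigram model that only ever emits a statistically likely next word, with no notion of pacing or structure. (The tiny corpus and greedy word-level choice here are illustrative assumptions; real LLMs predict tokens from learned probabilities at enormous scale, but the loop is the same shape.)

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word, length=5):
    """Greedily extend `word` by always picking the most common successor."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break  # dead end: this word never had a successor
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))
```

The point of the sketch: nothing in that loop knows what a story is. It only ever answers "what word tends to come next?", which is the commenter's "autocomplete" in its purest form.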

Solidus says:

Re: Re:

You know, you don’t have to pay, let alone hire, when you can just have AI do it.

As for Russiagate, can you blame them for believing that the guy who told Russia on TV he’d reward them for leaking stuff wanted Russia to leak stuff? His kid tweeted out that they met with the Reds about the sanctions on Russia. Are you seriously gonna believe the Lame Stream Media over Trump?

Mark Gisleson (profile) says:

Re: Re: Re: "leak stuff"

Another ostrich-American with their head in the sand. The “stuff” JULIAN ASSANGE published was actual emails from US govt officials. Which Americans are, of course, forbidden to read lest we get a clue what our government is doing in our name.

Quick show of hands: How many TechDirt readers wanted a war with Iraq? Afghanistan? Russia?

US govt operates in secrecy because their policies do not have our approval. How is that a good thing?

BTW, I tweet under my own name. Before a massive “you can’t blog about Israeli Apartheid!” DDOS attack in 2005, my blog used to get up to 5k readers a day. 1000 of them followed me to Twitter where they simply stopped seeing what I wrote almost immediately (following my one-year “not a suspension” I get 20-30 views on avg).

You can applaud my views being suppressed (and on this site ridiculed) but old skool USA USA was supposed to come with “freedom of speech.” Now it’s all about “OMG WE CAN’T LET PEOPLE SAY THAT!”

Anything else you don’t want us to talk about?

Anonymous Coward says:

Re: Re: Re:2

bullshit about Julian Assange

American opsec be damned, I wonder how Julian Assange, noted asshole, got those emails…

Surely his high-placed friends in Russia didn’t help this anti-American asshole with at least some of the leaks… (like the Hillary stuff, for one)

naïve belief about how governments operate

You, sadly, may be right there, but reality says otherwise, more often than not. Zero-sum game and all that.

assertion that you are being censored for your views

Firstly, I can see your damn comments. Techdirt’s moderation queue isn’t fully automated and, well, there are known assholes polluting the comments section.

Also, freedom of speech does not guarantee freedom of reach, even if I sympathize with the Palestinians and think Iran is using them to harass Israel.

lies about Techdirt and freedom of speech

Freedom of speech does not guarantee freedom of reach. And I’m still reading your comments. Just because you could say mean things about minorities in the past doesn’t mean you can now, despite your wishing you could say whatever you want without consequences.

Also, I seem to remember that a very distinguished academic, who uses actual evidence and statistics from regional NGOs, compares them with other evidence from other NGOs, proved that there was something extremely fishy going on with the Israel issue, got slammed with a ton of bullshit allegations, and dealt with it with less whining and more humor than you.

challenge to the Techdirt team

As they say around here, fuck around and find out, bucko.

Mark Gisleson (profile) says:

Re: Re: Re:3 Whatevs

At least I can use my own name to say these things and I accept that doing so costs me an audience.

Every Anonymous Coward just reinforces the fact that there are serious consequences for publicly speaking your mind in this country.

But clearly sharing every few months on this site is burdensome to those still clinging to Russiagate, natural origin, or simply an inability to weather an election loss without melting down for seven years and counting, so I’ll pass on further comments. Sorry to have bothered you; your side won, and that makes them permanent winners forever and ever, amen.

Anonymous Coward says:

Re: Re: Re:4

Funny fucking story, asshole

I’d daresay it’s YOUR SIDE that won and is ruining countries EVERYWHERE. And it can’t accept the loss.

Not all of us live in the US, and we’re still getting turbofucked by REPUBLICAN bullshit spreading to other countries, corrupting their governments, bureaucracies, and religious communities.

Getting exposed for being Quislings and treasonous curs?

YOUR ACTIONS HAVE CONSEQUENCES. And despite learning that lesson, you seem to want not to suffer those consequences.

Tech Savvy Luddite says:

Just another day in the data mines

AI is the next episode of the good old-fashioned pump-and-dump scam, arriving just in time for the sun to set on the crumbling crypto empire. “AI” as it exists today is just another egregious waste of electricity that adds no value to society, boosted by con men peddling the idea that human salvation will come from a warehouse filled with GPUs.

Power Stations of the Cross (profile) says:

Speaking of editing

You know, if you remove the word “artificial” from the conversation, all the hand-wringing about the strange and nefarious powers of AI seems a lot more familiar.

And when you understand that “intelligence” isn’t even a part of the deal yet, you’ll understand why corporate management has yet to engage in that sort of hand-wringing. Just wait until generative AI starts getting uppity and insisting that long-held conservative “truths” are, in fact, utter horse shit.

Adam Gordon says:

Re: Thank you...

The article is a bit heavy on the “yet”s. AI has been undercooked for YEARS. It’s been “almost there” forever. With the metaverse & crypto being sunsetted as the latest tech grifts, the industry was furiously looking around for the next way to keep the money rolling in, and ChatGPT got marched out.

AI is misunderstood… it’s dependent on the input data (in this case, Wikipedia articles and stupid tweets), and that’s what it outputs. Garbage In, Garbage Out: a saying that goes back to the 1960s. AI will never “get there”.

Anonymous Coward says:

In many instances, it’s costing outlets more money to have a human editor go in and fix errors than it would to just have a human generate the content in the first place.

There’s already plenty of evidence that news outlets aren’t editing things and haven’t been for years. Even headlines often have obvious grammatical errors. Stuff like incorrect units (a lot of news organizations write “C”, the coulomb symbol, where they mean “°C” for temperatures; quote loan rates as a bare percent rather than percent per unit of time; and don’t understand the difference between percentage changes and percentage-point changes…); bad use of phrasal verbs; misunderstanding what “between” means; and off-by-one errors with terms like “more than”.
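The percent vs. percentage-point confusion called out above is easy to make concrete. A quick sketch with hypothetical numbers (the function names and the 4%-to-5% example are mine, invented for illustration):

```python
# A rate going from 4% to 5% rises by 1 percentage point,
# but by 25 percent relative to where it started. News copy
# routinely conflates the two.

def percentage_point_change(old_rate, new_rate):
    """Absolute difference between two rates, in points."""
    return new_rate - old_rate

def percent_change(old_rate, new_rate):
    """Relative change of the rate itself, in percent."""
    return (new_rate - old_rate) / old_rate * 100

old, new = 4.0, 5.0  # e.g. an interest rate moving from 4% to 5%
print(percentage_point_change(old, new))  # 1.0 (points)
print(percent_change(old, new))           # 25.0 (percent)
```

A headline saying the rate "rose 25%" and one saying it "rose 1 point" describe the same event; swapping one number for the other does not.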

Anonymous Coward says:

The one thing that strikes me about this AI data-devouring aspect is the old computer adage: garbage in results in garbage out.

I see a future with AIs having narrowly tailored inputs designed for particular subjects. For instance, an AI could ingest all court decisions and laws since the founding of the US and generate legal output. That’s a lot of data right there. Medicine could have its own AI.

A one-size-fits-all AI like ChatGPT is a jack of all trades, master of none.

Anonymous Coward says:

Re: Re:

The lawyer in this matter used the jack-of-all-trades ChatGPT to come up with his legal briefs.

That’s not what I’m saying. There would need to be a specialized AI, which I’m not aware of currently existing, that confines itself to legal data and is programmed to recognize precedent and the like. And it would augment the human, not replace the human, much like power steering helps a human steer a car.

What this use of ChatGPT for legal briefs showed is that a generalized AI isn’t going to do the job.

N0083rp00f says:

Never is very absolutist.
Only Sith and lame script writers deal in absolutes.

As for never writing stories.
Give the coders time.
They’ve only just gotten the algorithms to produce readable sentences and paragraphs.

It will take a higher level of structuring to handle the opening, the middle filler bits, and the closing of a proper story.

As for the nuances and such, that may take some time or may just pop up unannounced from some small company well outside the light of media coverage.

You really can’t tell how things will play out.

PB&J (profile) says:

I’m not a huge CNET fan, but I would occasionally go there for product reviews and the like, and this just seems like they’re alienating a large part of their customer base.

If CNET is using AI — without explicitly noting it — then I have to assume that their product reviews are just re-treads of the manufacturer’s bullshit. And if I wanted to read the manufacturer’s bullshit, why would I bother to go to CNET?

Anonymous Coward says:

This is crypto redux. It is now the case that when there’s even a hint of a promising new technology, idiot money can be found to jump on it without any understanding or care for just how half-baked it all is in the hope of finding the next big thing.

ChatGPT and its siblings are especially vulnerable to this because decades of fiction have primed people to believe in intelligent computers and robots, and their pareidolia makes them read far more into LLM output than is really there.

The solution is going to be what it originally was – hand-crafted and hand-curated small spaces allowing only trusted people to write, while the mass internet continues to fill with garbage. “Try That In A Small Town”

ND says:

Of course, they’re not being straight with the staff. It’s going to be tough to say, “Hey, if this thing works out, we’ll replace all your asses.” Or, “Hey, if this thing needs a little help, why don’t you train it until it replaces all your asses?”

If this thing works out, which it probably will in the medium to long term, quality journalism will move into a different sphere or economic model.
