More publishers have started experimenting with automation to produce content after news of OpenAI's chatbot ChatGPT went viral.
But publishers should be wary of how they use AI if they don't want to upset Google. And ChatGPT, which launched at the end of November, has itself said in an automated conversation that it "cannot replace human journalists".
It emerged this week that personal finance site Bankrate and tech news and review site CNET have begun using AI to produce content.
The former tells readers that content published under the byline "Bankrate" is "generated using automation technology".
The website adds: "A dedicated team of Bankrate editors oversees the automated content production process from ideation to publication. These editors thoroughly edit and fact-check the content, ensuring the information is accurate, authoritative and useful to our audience."
Bankrate's sister site, Creditcards.com, uses AI in a similar way under the byline "CreditCards.com Team".
Meanwhile, CNET's experiment was first widely revealed by marketing and SEO expert Gael Breton and then by The Byte on Wednesday. The website has since published an explanation of why it had decided to try publishing 75 money articles using automated technology since November.
Editor-in-Chief Connie Guglielmo wrote, “Conversations about ChatGPT and other automated technologies have raised many important questions about how information will be created and shared, and whether the quality of stories will be useful to the public.
“We decided to do an experiment to answer this question for ourselves.”
Guglielmo said her goal was to find out whether an AI engine could "efficiently help" her journalists "use publicly available facts to create the most useful content for our audience to make better decisions".
The AI tool has been writing stories, or gathering information for certain stories, which are always "reviewed, verified and edited by an experienced editor" before publication, she added.
This week, following the disclosure of CNET's use of the technology, it changed the relevant byline to "CNET Money" and made the disclosure of the technology easier to find. It now says: "This story was assisted by an AI engine and reviewed, fact-checked and edited by our newsroom."
Guglielmo said: "We'll continue to evaluate these new tools to determine whether they're right for our business. For now, CNET is doing what we do best: testing new technology so we can separate the hype from the reality."
A Press Gazette webinar in October heard diverse views on how open publishers should be about their use of automation. British news agency PA, which has run its Radar service (Reporters And Data And Robots) since 2017 for localised data stories, does not usually flag the involvement of automation in its stories, while US local publisher McClatchy uses labels to tell the reader.
PA editor-in-chief Pete Clifton said the knowledge "could unsettle readers", while McClatchy's vice-president of audience growth and content monetisation Cynthia DuBose said she thought the openness had helped the sites in her group perform well on Google.
"We haven't seen any penalties... Google wants [automated content] to be identified, which is what we do, and we think we do it very well: with the bot byline, with the footer we have at the bottom and also [making sure it's] non-repetitive," she said.
How can using ChatGPT affect Google visibility?
SEO experts concerned about publishers' use of AI have asked Google's in-house expert how it would affect their search visibility.
Google's search liaison Danny Sullivan said on Twitter that it depends on the quality and intent of the content: "...content created primarily for search engine rankings, however it's done, is against our guidance. If content is helpful and created for people first, that's not an issue."
He has previously said that using 100 journalists to create copy aimed at boosting Google rankings would have the same effect as using something like ChatGPT for the same purpose. Google has been prioritising "original, helpful content written by people, for people" since August, when it introduced its Helpful Content Update.
For many years, Google has followed what are known as "E-A-T" guidelines, meaning its goal is to ensure that its search results provide users with expertise, authoritativeness and trustworthiness (as described in this piece on SEO tips for editors). In December it added an extra "E" to the beginning, which stands for experience.
Luke Budka, director of PR and SEO at B2B agency Definition, told Press Gazette that this made transparency around the human editing of AI content especially important to the likes of Bankrate and CNET. He described the addition as "an obvious way for Google to combat AI-generated copy. It makes what Google has previously said about author 'reconciliation' even more important: the consolidation of signals of expertise in a single author profile that denotes experience."
Google's spam content guidelines show the need for caution: they penalise content that has been "generated programmatically without producing anything original or adding sufficient value; instead, it has been generated for the primary purpose of manipulating search rankings and not helping users". This can include text that has been "translated by an automated tool without human review or curation before publishing" or "generated through automated processes without regard for quality or user experience".
In general, however, Budka said, "it's actually pretty simple... Google wants quality, reliable, original content, written by bona fide experts, for its index, because that's what users want. This is what will keep users coming back to Google.
"The question is not: 'Will ChatGPT/AI content be penalised?' but 'Can ChatGPT/AI produce quality, accurate, original content?' The answer is no. It has been trained on a large dataset from existing sources (ChatGPT was trained on a text dataset called the 'WebText' dataset, which includes a lot of unverified information such as forum posts, personal blogs and Wikipedia), so its accuracy is extremely questionable and by its nature it has no 'experience'.
“That doesn’t mean it can’t be part of the content production process, and the new versions will be even better ‘assistants’ for content creators.
"Meanwhile, some people will inevitably use it as the sole source of content for their sites, but... Google will catch up with these actors and apply the appropriate penalties (detection techniques are already a big topic)."
To prove his point that ChatGPT cannot produce quality content, Budka asked the bot itself “Can ChatGPT replace journalists?”
Here’s what it said: “ChatGPT is a powerful language generation model that can produce coherent and well-written text on a variety of topics, including news articles. However, it’s important to note that ChatGPT is only a tool and it cannot replace human journalists.
"While it can help generate text quickly, it lacks the ability to fact-check, verify sources, or provide context and analysis that a human journalist can provide. It also lacks the ethical consideration that humans have and is unable to understand the nuances of human society and culture. Therefore, the use of ChatGPT in journalism should be seen as an aid and not a replacement for human journalists."
Email email@example.com to point out errors, provide story tips, or submit a letter for publication on our "Letters Page" blog.