If you visit any of CNET’s AI-written articles, you’ll now see an editor’s note at the top that says: “We are currently reviewing this story for accuracy. If we find errors, we will update and issue corrections.” The publication has added the note after being notified of major errors in at least one of the machine-written financial explainers it had published. 

If you’ll recall, CNET editor-in-chief Connie Guglielmo recently admitted that the publication had put out around 75 articles about basic financial topics since November last year. Guglielmo said the website decided to run an experiment to see whether AI can truly be used in newsrooms and other information-based services in the coming months and years. Based on Futurism’s report, it looks like the answer is: Sure, but the pieces it generates need to be thoroughly fact-checked by a human editor. 

Futurism combed through one of the articles Guglielmo highlighted in her post, namely the piece entitled “What Is Compound Interest?”, and found a handful of serious errors. While the article has since been corrected, the original version said that “you’ll earn $10,300 at the end of the first year” if you deposit $10,000 into an account that earns 3 percent interest compounding annually — when the interest earned in that first year is actually just $300, with $10,300 being the total balance. The AI also made errors in explaining loan interest rate payments and certificates of deposit (CDs). 
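For readers who want to verify the arithmetic themselves, here is a minimal Python sketch of annual compounding (the function and variable names are illustrative, not taken from any CNET article):

```python
def compound_balance(principal, rate, years):
    """Balance after `years` of interest compounded once per year."""
    return principal * (1 + rate) ** years

principal = 10_000  # initial deposit
rate = 0.03         # 3 percent annual interest

balance = compound_balance(principal, rate, 1)
interest = balance - principal

# The balance after one year is $10,300, but the interest *earned*
# is only $300 — conflating the two was the AI's mistake.
print(f"Balance:  ${balance:,.2f}")
print(f"Interest: ${interest:,.2f}")
```

The distinction only grows with time: after 10 years the same deposit earns roughly $3,439 in interest, because each year's interest is computed on the new, larger balance rather than on the original $10,000.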

You’ll find a huge difference in quality when comparing CNET’s articles with machine-written pieces from previous years, which read more like a bunch of facts thrown together than coherent stories. As Futurism notes, the errors it found highlight the biggest issue with the current generation of AI text generators: They may be capable of responding in a human-like manner, but they still struggle to weed out inaccuracies. 

“Models like ChatGPT have a notorious tendency to spew biased, harmful, and factually incorrect content,” MIT’s Tech Review wrote in a piece examining how Microsoft could use OpenAI’s ChatGPT tech with Bing. “They are great at generating slick language that reads as if a human wrote it. But they have no real understanding of what they are generating, and they state both facts and falsehoods with the same high level of confidence.” That said, OpenAI recently rolled out an update to ChatGPT meant to “improve accuracy and factuality.” 

As for CNET, a spokesperson told Futurism in a statement: “We are actively reviewing all our AI-assisted pieces to make sure no further inaccuracies made it through the editing process, as humans make mistakes, too. We will continue to issue any necessary corrections according to CNET’s correction policy.”