The New York Times has cut ties with freelance writer Alex Preston after discovering that he used artificial intelligence to help write a book review. The decision comes amid increasing scrutiny of AI-generated content at the publication.

AI-Assisted Review Raises Plagiarism Concerns

The issue surfaced when a reader alerted the NYT to striking similarities between Preston’s January review of Jean-Baptiste Andrea’s 'Watching Over Her' and a review of the same book published in The Guardian last August, penned by Christobel Kent. An investigation by the NYT confirmed the overlap.

Writer Admits to Using AI

Preston admitted to using an AI tool to help draft the review and acknowledged that he failed to catch the sections mirroring The Guardian’s piece. In a statement to The Guardian, he said he was “hugely embarrassed” and had “made a serious mistake.”

NYT Response and Standards

A spokesperson for the NYT confirmed the situation to TheWrap, explaining that an editor’s note has been appended to the review. The spokesperson emphasized that relying on AI and including unattributed work from another writer constitutes a “serious violation” of the newspaper’s journalistic standards, which apply to staff and freelance contributors alike.

Passage Similarity Highlighted

The copied passages describe the book’s characters. For example, the original review reads: “But the novel is also rich in smaller characters, from the lazy Machiavellian Stefano to hardworking Vittorio, whose otherworldly twin brother Emmanuele is prone to speaking in tongues and dressing up in ragtag begged-and-borrowed uniforms…” Preston’s review contains a nearly identical passage, with minor wording changes.

Broader Concerns About AI in Journalism

According to an editor’s note dated March 30, Preston stated he had not used AI in previous NYT reviews, and the paper’s investigation found no issues in those earlier pieces. This incident is part of a larger trend of AI-related controversies in journalism.

Last month, Ars Technica terminated a senior tech reporter who had inadvertently included AI-fabricated quotes in an article; the reporter said the error occurred while using an AI tool to generate notes. Such incidents have fueled anxieties about AI’s potential to compromise journalistic standards.

Recent NYT Scrutiny

Earlier this month, a piece in the NYT’s ‘Modern Love’ column sparked debate, with readers claiming it sounded “EXACTLY like AI slop.” Separately, a study published by The Atlantic found that the opinion sections of major news outlets, including the NYT and The Wall Street Journal, were six times more likely than news articles to contain AI-generated content, suggesting that AI-written material may have been published unknowingly.

The author of the ‘Modern Love’ column admitted to using AI chatbots like ChatGPT as a “collaborative editor” for inspiration and guidance.