A recent study from Ludwig Maximilian University of Munich (LMU) has revealed that traditionally crafted news articles are generally easier to understand than those created through automation. The research, published in the journal Journalism: Theory, Practice, and Criticism, surveyed over 3,000 online news consumers in the UK. Participants evaluated 24 news pieces, half generated by AI and half written by journalists.
The lead author, Sina Thäsler-Kordonouri, noted that readers found the automated articles to be significantly less comprehensible, despite the fact that these articles had undergone sub-editing by human journalists before publication.
One major reason for this lower comprehensibility was the word choice in the AI-generated texts. Readers reported that these articles included an excess of inappropriate, complex, or unusual terminology. The automated pieces were also perceived as less effective in their handling of numbers and data.
The study indicates that these shortcomings in the automated articles' presentation of data and language contributed to their lower comprehensibility. However, readers expressed similar satisfaction with both automated and manually written articles regarding their writing style and overall narrative flow.
Professor Neil Thurman, who oversaw the research, emphasized the need for improved human sub-editing of automated news content. He suggested that journalists and technologists should focus on minimizing the use of numbers, clarifying difficult vocabulary, and enhancing descriptive language to help readers better grasp the stories.
This research is significant as the first exploration of how manual and automated news articles compare in comprehensibility, and of the reasons behind any differences. Thäsler-Kordonouri concluded that maintaining and refining human involvement in the automated production of data-driven news is essential for improving reader understanding.