A new study found that half of newsrooms use Generative AI tools, but only 20% have guidelines

Ever since OpenAI’s ChatGPT burst onto the scene in late 2022, there has been no shortage of voices calling Artificial Intelligence, and Generative AI in particular, a game-changing technology.

The news industry in particular is grappling with some deep and complex questions about what Generative AI (GenAI) could mean for journalism. While it seems increasingly unlikely that AI will threaten the jobs of journalists as some might have feared, news executives are asking questions about information accuracy, plagiarism and data privacy, for example.

To get an overview of the state of the industry, we polled a global community of journalists, editorial managers and other news professionals in late April and early May about their newsrooms’ use of Generative AI tools.

101 participants from all over the world took part in the survey. Here are some key takeaways from their responses.

WAN-IFRA members can access the full results of the survey here.

Half of newsrooms are already running GenAI tools

Given that most Generative AI tools have only been available to the public for a few months at most, it’s pretty remarkable that almost half of our survey respondents (49 percent) said their newsrooms use tools like ChatGPT. On the other hand, since the technology is still evolving rapidly and in possibly unpredictable ways, it’s understandable that many newsrooms are wary of it. This may be the case for respondents whose companies have not (yet) implemented these tools.

In general, the attitude towards Generative AI in the industry is largely positive: 70 percent of survey respondents said they expect Generative AI tools to be useful for their journalists and editorial offices. Only 2 percent said they don’t see any value in the short term, and another 10 percent aren’t sure. 18 percent believe the technology needs more development to be truly useful.

Content summaries are the most common use cases

While there has been a bit of an alarmist response to ChatGPT, with some asking whether the technology could replace journalists, the number of newsrooms actually using GenAI tools to generate articles is relatively low. Instead, the primary use case is the tools’ ability to digest and condense information, for example into summaries and bullet points, our respondents said. Other key tasks journalists use the technology for include simplified research/search, text correction and improved workflows.

Going forward, common use cases are likely to evolve, however, as more and more newsrooms look for ways to use the technology more broadly and integrate it further into their operations. Our respondents highlighted personalization, translation and workflow/efficiency improvement as specific areas where they expect GenAI to be more useful in the future.

Few editors have guidelines for using GenAI

Practices vary widely when it comes to how the use of GenAI tools is controlled in newsrooms. So far, most publishers have taken a relaxed approach: nearly half (49 percent) of survey respondents said their journalists have the freedom to use the technology as they see fit. An additional 29 percent said they do not use GenAI.

Only a fifth (20 percent) of respondents said they have guidance from management on when and how to use GenAI tools, while 3 percent said the use of the technology is not allowed in their publications. As newsrooms grapple with the many complex questions surrounding GenAI, it seems safe to assume that more and more publishers will establish AI-specific policies on how to use the technology (or perhaps ban its use entirely).

Inaccuracies and plagiarism are major concerns of newsrooms

Given that there have been cases where media outlets published content generated with the help of AI tools that was later found to be false or inaccurate, it is not surprising that information inaccuracy/content quality is the number one concern among publishers when it comes to AI-generated content: 85 percent of survey respondents highlighted this as a specific issue with GenAI.

Another concern on publishers’ minds is plagiarism/copyright infringement issues, followed by data protection and privacy issues. It seems likely that the lack of clear guidelines (see previous point) only reinforces these uncertainties, and that AI policymaking should help mitigate them, as should staff training and open communication about the responsible use of GenAI tools.
