While it is tempting to think that artificial intelligence (AI) is a relatively new phenomenon in the broadcast industry, the reality is that it has been used in various capacities for over a decade to analyze data and support information gathering and decision making.
Even though many conversations now center on generative AI (GenAI), as exemplified by Google’s Gemini and OpenAI’s ChatGPT, traditional AI has been used to automate workflow processes within the industry for several years, and we have more experience with it than many realize. (A big difference between GenAI and traditional AI is that GenAI can analyze narrative data and documents, as well as create new assets or content from the vast amount of information it has been trained on and has access to.)
From research and verification of information to production and distribution, and from accounting to workflow scheduling, AI and intelligent automation already support routine tasks along the journalistic value chain. Indeed, it is highly likely that AI has touched this blog you are reading now at several points in its journey, whether via a new generation of sophisticated spellcheckers and translation engines, efficient routing of internet traffic, search engine optimization, or some other AI-powered process.
Nevertheless, recent technological developments raise new ethical considerations for the use of AI, which are of particular concern within newsroom environments.
GenAI use cases and dilemmas
When it comes to GenAI, there are two main ways it can be used in the modern newsroom:
- Support with text generation: GenAI can summarize existing text for shorter broadcasts, rewrite it for different audiences (e.g., for social media platforms and different demographics) and so on. This functionality aids writers by helping them take ideas and frame the text that expresses them, streamlining the drafting process. This is already a popular, everyday use case.
- Complete text generation: GenAI can be used to generate complete text from natural language prompts. This content must be checked by humans afterwards, preferably using the “four-eye” principle, which requires two people for approval (see the illustrative sketch after this list).
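To make these two use cases concrete, here is a minimal sketch in Python, assuming access to an OpenAI-compatible chat completion API; the model name, helper functions and approval flow are illustrative assumptions, not a description of any specific newsroom product.

```python
# Minimal sketch: GenAI-assisted text support plus a "four-eye" approval gate.
# Assumes an OpenAI-compatible API; model name and helpers are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_for_broadcast(article_text: str, max_words: int = 80) -> str:
    """Use case 1: condense existing, verified copy for a shorter broadcast."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You summarize verified news copy. Do not add facts "
                        "that are not in the source text."},
            {"role": "user",
             "content": f"Summarize in at most {max_words} words:\n{article_text}"},
        ],
    )
    return response.choices[0].message.content


def draft_from_prompt(prompt: str) -> dict:
    """Use case 2: generate a complete draft, flagged for human review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # The draft is never published directly: it starts with no approvals.
    return {"draft": response.choices[0].message.content, "approvals": []}


def approve(item: dict, reviewer: str) -> dict:
    """Record one reviewer's sign-off after they have checked the facts."""
    item["approvals"].append(reviewer)
    return item


def ready_to_publish(item: dict) -> bool:
    """'Four-eye' principle: at least two distinct human reviewers approved."""
    return len(set(item["approvals"])) >= 2
```

In a real newsroom system the approval gate would live inside the content management workflow; the point of the sketch is simply that the generated draft and the human sign-offs are separate, auditable steps.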
Both use cases concern the manipulation of text, and both raise several ethical dilemmas.
For example, say an error or a piece of misinformation makes its way into a published article, such as a news report misstating the age or nationality of a major figure. Even though it may appear inconsequential, it can cause reputational damage to news organizations that pride themselves on accurate reporting. In this example, who is responsible for the error? Is it the programmer who wrote the original code? Is it the trainer who trained the AI model on the data set? Is it the journalist who wrote the prompt that delivered the false information? Or is it the AI itself, since all GenAI platforms have been prone to what is termed “hallucinations,” where they effectively make up facts to fulfill the brief contained in the prompt?*
The answer is: All of the above.
Within a media organization, there needs to be a collective responsibility that acknowledges the complexity of AI implementations and holds individuals and departments accountable, ensuring that AI productivity gains are not undermined by an erosion of the newsroom’s primary truth-telling function. Users of any AI solution must learn to be discerning about the information it provides and to treat the tool as a support for, rather than a replacement of, validation processes.
Keeping up with rapid change
Media organizations also urgently need to understand that this is a fast-evolving field with many open questions. For example, what is the source of the information currently being generated by the AI? GenAI is trained on datasets, but not all of them reflect the very latest information, even if collated from the internet. How can organizations ensure they access current information and have transparency into its sources?
With global events contributing to increasing amounts of misinformation across more and more social channels, how do we detect fake news? And how do we prevent it from being recycled, first to the public that trusts the media for veracity, and second to the next generation of AI that is likely to treat the fake news as absolute fact?
The adage of “garbage in, garbage out” has never been more apt. We can even think of misinformation as a virus within an AI system, propagating and spreading through it with unknowable consequences. In some cases, false information can be pulled from multiple sources that are not correctly linked, yet the AI output can still be very convincing. While not intentionally falsified, the misinformation can be misleading and result in inaccuracies in what is reported.
So, how do we train AI to avoid incorrect associations and keep its output free of misinformation and bias? And how do we avoid introducing inaccuracies and bias when writing natural language prompts?
One of the central answers to the latter question is practical user training. Notably, the media organizations with the most successful implementations of AI so far have robust codes of practice in place detailing its usage and limitations. Whether that means writing more effective prompts that shorten the iteration cycle, building transparency into AI solutions, or understanding the technology’s limitations and how and where it can most appropriately assist in the newsroom, it is essential to have a detailed and responsible AI strategy rather than a succession of ad hoc responses.
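As an illustration of what a more effective, transparency-building prompt can look like, here is a hedged sketch of a prompt template; the wording, field names and rules are hypothetical examples, not CGI’s or any vendor’s actual guardrails.

```python
# Illustrative prompt template: constrains the model to supplied sources and
# asks it to expose uncertainty, so reviewers can check the output quickly.
# The wording and structure are hypothetical, not a vendor's actual guardrails.
NEWSROOM_PROMPT_TEMPLATE = """You are assisting a journalist.
Task: {task}

Rules:
- Use only the facts in the SOURCES section below; do not add outside facts.
- After every factual claim, cite the source number in brackets, e.g. [2].
- If the sources do not support part of the task, write "NOT SUPPORTED BY SOURCES"
  instead of guessing.

SOURCES:
{numbered_sources}
"""


def build_prompt(task: str, sources: list[str]) -> str:
    """Number the supplied sources and slot them into the template."""
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return NEWSROOM_PROMPT_TEMPLATE.format(task=task, numbered_sources=numbered)
```

A template along these lines shortens the iteration cycle because the first draft already carries the citations a reviewer needs, and anything the model could not ground in the supplied sources is flagged rather than invented.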
It is also important to lean into expertise in the area. Newsroom solution providers such as CGI have made significant investments in not only adding AI tools into the workflow, but also incorporating guardrails into their systems to ensure that they can be used responsibly. Such tools have been built to help journalists in their day-to-day work to create better content, not replace them.
AI tools are likely to evolve with dizzying speed over the coming months as organizations worldwide look to leverage the next generation of AI models. These advances will, in turn, raise more ethical questions as the variety of capabilities and use cases increases. Greater use of AI-generated presenters (virtual hosts created using AI) and of fake videos is on the roadmap for 2024 election cycles. Media companies will need to understand the challenges these developments represent and how they can be met with equal speed.
Please connect with me to continue this conversation.
*Read about preventing false data with human-in-the-loop (HITL) review, a best practice mandated in AI legislation across jurisdictions and applied at CGI.
This blog is based on the article Ethical considerations of AI in newsroom workflows originally published by TVB Europe.