How Intelligence Technologies Are Reshaping Journalism
The constant breakthroughs and advancements in intelligence technologies have seeped into daily life in both unexpected and increasingly normalized ways
by Elyette Levy
Advancements in intelligence technologies have seeped into daily life in both unexpected and increasingly normalized ways. We connect these intelligence technologies to our bodies: they organize our schedules, tell us the weather, measure our health statistics, order our groceries, and invade our physical selves in intimate ways. While technological advances have reshaped society in ways previously unimagined, they have also engendered many new philosophical discussions surrounding the place of the Digital in our lives.
As the private sphere has adopted and accepted these changes, the assistant has become the new ‘not enough’ role, with new technologies well along in their pursuit not just to assist, but to replace humans altogether. Artificial intelligence has embedded itself in virtually every sector, much of it implemented in discreet ways that are often invisible to the user. This adds another layer of mystery to how AI is created and applied. And yet, emerging AI tools have also prompted a number of intersecting ethical debates, like those about eugenics or the morality of using ubiquitous facial recognition tools to stop crime.
Though intelligence technologies are expected to assist experts and improve accuracy in many fields, this may not be the case in journalism. By no means the universal equalizers they were intended to be, intelligence technologies — artificial writers, fact-checkers, and more specifically, algorithms — can in fact cause greater complications in journalism and compromise its integrity. The delegation of any journalistic task to an intelligent machine only worsens what media analysts lament as the loss of credibility and accuracy in the media, a problem that has been on the rise for decades.
In a world where we are confronted with never-ending streams of information and a growing list of platforms from which we get our information, one might expect people to self-educate and become more open to other perspectives. Quite the opposite is happening: experts found that false news is “over seventy percent more likely to be viralized or replicated than true news,” and platforms are struggling to ensure their readership is properly informed. Against this backdrop, fact-checking bots could make dispelling misinformation much faster and free up resources for journalists to report on other issues.
This is something John Micklethwait, editor-in-chief of Bloomberg News, strongly advocated for when the publication launched a project designed to further integrate automation into its platform: “Done properly, automated journalism has the potential to make all our jobs more interesting. [...] The time spent laboriously trying to chase down facts can be spent trying to explain them.” Big media outlets have already adopted artificial intelligence to cover subjects that require more time and effort than they are worth, like sports scores, stock market prices, and earnings reports. Automated journalism could preserve critical resources in an increasingly precarious industry where more journalists are being laid off and media outlets shuttered. As such, it could help smaller publications and community newspapers allocate their limited resources to other newsworthy stories, and even somewhat remedy the problem of declining advertising revenue.
Another way AI can help journalists is by connecting readers to the content they are actually interested in, all on one platform. Never before could a publisher satisfy their entire readership, as newspapers and digital platforms only have so much space – and staff – to cover an array of topics. But with AI, editors can give their writers more freedom in what they write: no matter the topic, algorithms will make sure these pieces reach the people who are interested in reading them, ensuring a certain amount of ad revenue. Journalists can then spend their time writing the stories that are most pertinent to them and the communities they report on.
This is also one of the main shortcomings of journalistic algorithms: intelligence technologies could compromise the integrity of the profession and the quality of the stories AI produces. While a buzzword at this point, post-truth – defined as the “deliberate distortion that manipulates beliefs and emotions to influence public opinion” – remains a concerning trend that must be quashed. AI may simply perpetuate this trend of reporting according to popularity in lieu of facts, as the programmers behind it may be tempted to favour ideological reporting over objective reporting.
Information manipulation is journalism’s number one threat, especially when it is financially motivated. And since agreement incites interaction, the capitalistic use of algorithms by mainstream media has exploited this dynamic to profit off of public misinformation and confirmation bias. This adds to the age-old journalist’s conundrum: does objectivity exist, and is it even worth pursuing anymore? If even the most neutral stories can be targeted to a partisan audience – as we’ve seen in COVID-19 reporting in the past year – why bother getting both sides? And how do we give fair, objective coverage to the unpopular issues that algorithms suppress?
As computers have been designed to correct human errors, one would expect them to improve the quality of journalism by reducing the potential for publishing inaccurate information. This may be true for statistics and numbers, but ultimately, human beings remain computers’ most significant teachers. Studies have shown that racial bias has been programmed into some machines in the healthcare system, where precision and accuracy are quite literally a matter of life and death. In a system where patients of colour are already misdiagnosed at higher rates, and which has birthed a variety of myths about innate medical differences between white people and people of colour, we can’t be surprised when computers don’t look beyond what’s encoded in them. It is almost certain the same biases would be introduced in journalism, where the stakes are perhaps seen as lower and the tools therefore wouldn’t be programmed with the same kind of oversight. If machines are not immune to human biases, is there a point to using them as fact-checkers or editors? Why create tools to correct human mistakes if they can’t act as the neutral arbiters we designed them to be?
"Journalism is a deeply human endeavour, with many tasks that can be augmented by machines but few that can be entirely replaced by them."
These words from journalism scholar Seth C. Lewis describe a need to remain consistently skeptical, especially as journalism becomes more artificialized and algorithmized. The firing of Dr. Timnit Gebru for co-authoring a paper highlighting the racial biases of AI has served as a reminder that Silicon Valley has traded accountability for profit – and that, given this dearth of morals, it is our responsibility to keep journalism human.
Featured image made in Canva
Elyette Levy is a 22-year-old freelance journalist from Montreal. She has an interest in digital culture, politics, and arts. You can find her on Twitter (or anywhere else on the internet, really).