The Los Angeles Times' plan to add a "bias meter" to stories is one of the most boneheaded moves in media history. But as awful as it appears on the surface, it's actually worse. A lot worse.
First, let's look at a couple of statements from Times owner Patrick Soon-Shiong.
In the interview with Soon-Shiong in the Times, those two paragraphs are some distance apart, but do you see the massive contradiction?
Soon-Shiong first brags that under his leadership the Times is clearly thriving, as the paper has won six Pulitzers during that period; then he admits that he wasn't involved in the journalistic decisions that brought home those wins. Those wins came while Soon-Shiong was preoccupied with ImmunityBio, the biotech company that generated much of his wealth. During the six and a half years that the paper has been winning awards, he has let the paper's writers and editors tell their stories their way.
No more of that.
Now that his second big drug, Anktiva, has been approved, Soon-Shiong finds "the demands of his biomedical career slightly reduced," and he's using his new leisure hours to take more direct control of the Times. That became clear when he infamously stepped in to stop the Times from endorsing Kamala Harris and killed a whole series of editorials detailing the threat of Donald Trump.
He has followed up by insisting on "more conservative" voices on the editorial page and demanding a more "'fair and balanced' approach" from the paper. But mimicking the Fox News motto is far from all he's planning.
The Times' billionaire owner insists that he's going to create "a digital 'bias meter' to alert readers about the ideological tilt of the paper's content." That meter will be slapped onto stories, ranking them from "far left" to "far right." And to be sure that the meter itself is unbiased, Soon-Shiong says he will build it on an AI system of his own creation.
The problems with this single paragraph are nearly limitless. First, there is no such thing as Artificial General Intelligence: a system capable of reasoning about and dealing with data of any sort. (In fact, very good arguments can be made that there's currently no Artificial Intelligence of any kind.) So the idea of taking a biomedical system and turning it into a media critic is, at best, an act of ultimate hubris.
Not only are medical AIs not trained for journalism, but they've repeatedly failed at their purpose-built tasks.
One system that gained acclaim for its ability to differentiate skin cancers from non-cancerous lesions turned out not to be detecting differences in texture or shape, but taking its cues from the fact that physicians often held up rulers next to cancers to indicate their size.
Making an artificially intelligent system robust enough to handle the variation inherent in image input also poses a hurdle … the algorithm will use features of nonstandardized photos to guide decision making. For instance, in our work, we noted that the algorithm appeared more likely to interpret images with rulers as malignant. Why? In our dataset, images with rulers were more likely to be malignant; thus the algorithm inadvertently "learned" that rulers are malignant.
But the little problem of "rulers = cancer" wasn't discovered until the paper about the system's astounding accuracy had already been published and used as the basis for further work. Unsurprisingly, it's much easier to find media stories touting this amazing new tool than it is to find coverage of its eventual retraction.
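The "rulers = cancer" failure is a textbook case of shortcut learning, and it's easy to reproduce in miniature. Below is a minimal, hypothetical sketch in plain Python (all numbers invented, no medical data or real study code involved): a logistic regression trained on synthetic "lesion photos" in which a ruler co-occurs with malignancy ends up weighting the ruler far more heavily than the lesion's actual texture.

```python
import math
import random

random.seed(0)

def make_dataset(n=1000):
    """Synthetic photos: (irregular_texture, ruler_present) -> malignant.

    Hypothetical numbers chosen to mirror the flaw described above:
    rulers appear in most malignant photos, so "ruler present" becomes
    a spurious shortcut the model can latch onto.
    """
    data = []
    for _ in range(n):
        malignant = random.random() < 0.5
        irregular = random.random() < (0.6 if malignant else 0.4)  # weak real signal
        ruler = random.random() < (0.9 if malignant else 0.1)      # strong spurious signal
        data.append(((float(irregular), float(ruler)), float(malignant)))
    return data

def train_logistic(data, epochs=200, lr=0.1):
    """Plain logistic regression via stochastic gradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            g = p - y  # gradient of the log loss w.r.t. the logit
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

w, b = train_logistic(make_dataset())
# The learned weights reveal what the model actually relies on:
# the ruler weight dwarfs the texture weight.
print(f"texture weight: {w[0]:.2f}, ruler weight: {w[1]:.2f}")
```

The point is not that any particular study was coded this way, only that any correlation present in the training set, relevant or not, ends up in the model's weights, and nothing in the training procedure distinguishes a medical signal from a photographer's habit.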
Another system that was supposedly trained to spot tuberculosis faced similar ignominy a few months later when it turned out that its predictions were instead based on the age of the medical imagery. The researchers, in their effort to find sufficient numbers of TB-positive images, had reached back decades for those images, and old, blurry images are what the model learned to target.
These errors are hard to detect, even by researchers used to sorting through rafts of data and by medical journal readers who are quick to jump on potential issues. They're hard because these systems are essentially black boxes. What the system has actually learned remains a mystery locked inside the model. It's often not until these systems turn out to be colossal failures in the real world that the errors are discovered.
If Soon-Shiong takes his proprietary system, a system created for a wholly different purpose, and grafts it onto the LA Times, it will inevitably reproduce the "rulers = cancer" problem in new forms.
Because seeking that kind of association is exactly what these systems do.
Take for example the issue of the climate crisis. Even the term is likely to be enough to make the bias meter twitch to the left. Because right-wing sources don't use that term. They also don't quote United Nations Climate Action reports or academic research on the climate. They don't cite research linking severe weather to rising levels of manmade CO2, or discuss how problems with immigration are increasingly driven by a warming world. Instead, right-wing sources cite Republican politicians making trite observations that it is, indeed, cold again this winter or CEOs claiming that addressing the crisis will cost too much money.
Discussing the climate crisis will move the dial left. Interviewing a prominent scientist will move it further. Pointing out the well-established causal link between human activity and climate change will peg the dial.
The result is that any factual, well-researched article on the climate crisis will invariably be labeled "far left" by Soon-Shiong's bias meter. Repeat this inevitable analysis for almost every issue out there.
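The dynamic described above, a meter that reacts to vocabulary rather than accuracy, can be sketched in a few lines. Everything below is invented for illustration: the word lists, the scoring rule, and the sample sentences have no connection to whatever Soon-Shiong's system actually does.

```python
# A deliberately crude "bias meter" of the kind the article warns about:
# it scores text purely by which outlets' vocabulary it resembles,
# never by whether the claims are true. All word lists are hypothetical.
LEFT_ASSOCIATED = {"climate", "crisis", "emissions", "scientists",
                   "warming", "research"}
RIGHT_ASSOCIATED = {"hoax", "freedom", "costly", "regulation"}

def bias_score(text: str) -> float:
    """Return a score from -1.0 ("far left") to +1.0 ("far right")."""
    words = {w.strip(".,").lower() for w in text.split()}
    left = len(words & LEFT_ASSOCIATED)
    right = len(words & RIGHT_ASSOCIATED)
    total = left + right
    return 0.0 if total == 0 else (right - left) / total

# A plainly factual sentence still pegs the meter to the left,
# because every matched word happens to be "left-coded".
factual = "Scientists link rising emissions to global warming and climate crisis impacts."
print(bias_score(factual))  # -1.0
```

A real system would be far more elaborate, but the structural problem is the same: a scorer trained on term associations has no input for accuracy, so the most carefully sourced reporting scores as the most "biased."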
Only pure ignorance will make it through as unaligned. For large sections of the Times' audience, any stamp that indicates a story is left or right will be tantamount to saying "This is inaccurate, so don't bother to read it."
What Patrick Soon-Shiong is creating is a system that tells his readers that the content of the paper he owns can't be relied on for accuracy. It's hard to imagine any way to more quickly delegitimize and decimate journalism.
Which may, of course, be the intent.
Update: To illustrate just how ridiculous and unhelpful a "bias meter" can be, here's a recent article from Public Enlightenment looking at some of the services that currently rank the political leanings of media outlets.
These services proudly ignore both accuracy and evidence to come to conclusions such as that the Associated Press is "far left" while Fox News is "center-right." They also disregard ownership, meaning that Reason Magazine, controlled by a libertarian think tank supported by the Koch Brothers, is rated as "center." That also means that Reason's position on the climate crisis, that it should be ignored in favor of more fossil fuel use, is also considered "middle." Proudly pro-Trump websites such as Real Clear Politics were also awarded a rating of "center."
When being pro-Trump and pro-big oil is considered the middle, exactly what is on the right? Apparently not much. Even tabloids like The Sun and New York Post and conservative mouthpieces like the Daily Mail are rated as "center."
To reproduce one of the images included in that Public Enlightenment report...
If you're only rating perspective, and that perspective puts the New York Post in the center, all you're really doing is playing into the lie that accurate reporting is inherently biased to the left.
Any ranking service that examines articles on a political rather than factual basis is inherently harmful to independent, unbiased journalism. And every one of these bias charts seems to start with a huge bias.