Hi all!
As many of you have noticed, a number of Lemmy.World communities have introduced a bot: @MediaBiasFactChecker@lemmy.world. This bot was introduced because modding can be pretty tough work at times, and we are all just volunteers with regular lives. It has been helpful, and we would like to keep it around in one form or another.
The !news@lemmy.world mods want to give the community a chance to voice their thoughts on some potential changes to the MBFC bot. We have heard concerns that tend to fall into a few buckets. The most common concern we’ve heard is that the bot’s comment is too long. To address this, we’ve implemented a spoiler tag so that users need to click to see more information. We’ve also cut wording about donations that people argued made the bot feel like an ad.
Another common concern is with MBFC’s definitions of “left” and “right,” which tend to be influenced by the American Overton window. Similarly, some have said they feel MBFC’s process for rating reliability and credibility is opaque and/or subjective. To address this, we have discussed creating our own open-source system for scoring news sources. We would essentially start with third-party ratings, including MBFC’s, and create an aggregate rating. We could also open a path for users to vote, so that any rating would reflect our instance’s opinion of a source. We would love to hear your thoughts on this, as well as suggestions for sources that rate news outlets’ bias, reliability, and/or credibility. Feel free to use this thread to share other constructive criticism about the bot too.
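To make the aggregation idea a bit more concrete, here is a rough sketch of what blending third-party ratings with community votes could look like. Everything here is a placeholder assumption for illustration (the rater names, numeric scales, and the 50/50 weighting), not a final design:

```python
# Hypothetical sketch of an aggregate source-rating system.
# Names, scales, and weights are assumptions for illustration only.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Rating:
    rater: str           # who produced this rating, e.g. "MBFC" or "user"
    bias: float          # -1.0 (left) .. +1.0 (right)
    reliability: float   # 0.0 (low) .. 1.0 (high)

def aggregate(third_party: list[Rating], user_votes: list[Rating],
              user_weight: float = 0.5) -> Rating:
    """Blend third-party ratings with community votes into one score."""
    def avg(ratings: list[Rating]) -> tuple[float, float]:
        return (mean(r.bias for r in ratings),
                mean(r.reliability for r in ratings))

    tp_bias, tp_rel = avg(third_party)
    if not user_votes:
        return Rating("aggregate", tp_bias, tp_rel)
    uv_bias, uv_rel = avg(user_votes)
    return Rating(
        "aggregate",
        (1 - user_weight) * tp_bias + user_weight * uv_bias,
        (1 - user_weight) * tp_rel + user_weight * uv_rel,
    )

# Example: two third-party raters plus a couple of community votes.
print(aggregate(
    [Rating("MBFC", 0.2, 0.8), Rating("OtherRater", -0.1, 0.9)],
    [Rating("user", 0.0, 0.7), Rating("user", -0.2, 0.85)],
))
```

The weighting between third-party ratings and community votes would be the obvious knob for the community to decide on, as would which third-party raters get included at all.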
My personal view is that we should remove the bot. I don’t think we should be promoting one organisation’s particular views as an authority. My suggestion would be to replace it with a pinned post linking to useful resources for critical thinking and analysing news. Teaching someone to fish vs. giving them a fish, kind of thing.
If we are determined to have a bot like this as a community, then at the very least I would strongly suggest removing the bias rating. The factuality rating is based on an objective measure, failed fact checks, which you can click through to see. Even this has problems, though: corrections or retractions by the publisher are sometimes taken into account and sometimes not, which can leave the reader with a false impression of the source’s reliability.
The bias rating, however, is completely subjective, and sometimes the claimed reasons for a rating contradict themselves or other third-party analyses. I made a thread on this in the support community, but TL;DR: see if you can work out the specific reason for the BBC’s left-centre bias rating. I personally can’t. Is it because they posted a negative-sounding headline about Trump once, or is it “biased story selection”? What does biased story selection mean, and how is it measured? This is troubling because, in my view, it casts doubt on the reliability of the whole system.
I can’t see how this helps advance the goal (and it is a good goal) of being aware of source bias when, in effect, we are simply adding another bias to contend with. I suspect it’s actually an intractable problem, which is why I suggest linking to educational resources instead. In my home country, critical analysis of news is a required course, but that’s probably not the case everywhere, and honestly I could use a refresher myself if good resources exist for that.
Thanks to those involved in the bot for their work and for being open to feedback. I think the goal is a good one; I just don’t think this solution really helps. But I’m sure others have different views.
Removing the bias rating might be enough indeed.
Nah, even the credibility rating is subjective with MBFC.
The bot calls Al Jazeera “mixed” for factual reporting (a rating normally reserved for explicit propaganda sources), and if you look at the details, they don’t even pretend it has anything to do with the outlet’s factual record. It’s basically: okay, they’re not lying, but they’re so against Israel that we have to say something bad about them.
One issue is that people with poor media literacy aren’t likely to go out of their way to improve it on their own just because of a pinned post. We could include a link to a resource like that in the bot’s comment, though.
Do you think the bias rating would be improved by aggregating multiple fact checkers’ opinions into one score?
Yeah, it’s definitely a good point, although I would argue that people not interested in improving their media literacy shouldn’t be exposed to a questionable bias rating, as they are the most likely to take it at face value and be misled.
The idea of multiple bias sources is an interesting one. It’s less about quantity than quality, though, I think. If there are two organisations that use thorough and consistent rating systems, it could be useful to have both. I’m still not convinced it’s even a solvable problem, but maybe I’m just being too pessimistic and someone out there has come up with a good solution.
Either way, I appreciate that it’s a really tough job to come up with a solution here, so best of luck to you, and thanks for reading the feedback.