Assessing current platforms' attempts to curb misinformation
- braedensteele
- Apr 19
- 6 min read
Blog Post Assignment #4
This week, I am looking at two social media companies' attempts to curb dis/misinformation on their respective platforms: Facebook and Twitter/X. Both platforms have been at the center of large-scale sharing of dis/misinformation, especially around elections. We have also seen both CEOs (Mark Zuckerberg of Facebook and Elon Musk of Twitter/X) become politically involved and make it potentially easier to spread dis/misinformation that aligns with their political views and their support for right-wing policies and candidates.
Facebook, founded in 2004 by Mark Zuckerberg, is a social media platform where people can create profiles and share things with "friends," such as updates on their day, family photos, and videos. But in recent years, Facebook has become known as a massive dispenser of false information, and it spreads in an interesting way. According to an article by Northeastern Global News, most information on Facebook spreads through a "big event post," effectively a large account posting something that is then shared quickly, whereas misinformation is shared at a smaller rate and spreads almost like a virus: someone sends it to a friend, who sends it to their friends, and so on. According to the article, Facebook at the time cracked down on the individual pages doing much of this sharing, especially during election season.
But that article was about the 2020 election, which also came with other misinformation problems, such as COVID and Hunter Biden conspiracies. According to Mark Zuckerberg in an interview with Joe Rogan, Facebook was willing to censor misinformation when government entities such as the FBI and the White House requested it, such as misinformation about Hunter Biden. During the 2020 election, Facebook (and Twitter) restricted the sharing of a New York Post article that fed into unfounded rumors and conspiracy theories about the Biden family. Zuckerberg made it clear in the interview that, while the FBI did not force them to do anything, Facebook executives did feel heavy pressure to comply.

However, while the algorithm and practices Facebook implemented were successful during the 2020 election, the company reverted to its old algorithm after only a few months. More recently, Facebook's algorithm has struggled to keep out the mass of misinformation and hate speech that floods the platform, and this isn't exactly by mistake. Going back to the interview with right-wing podcaster Joe Rogan, Zuckerberg revealed that he resented the Biden administration and the FBI for pressuring the company into stricter monitoring of its users.
This led Zuckerberg to roll back many of the measures monitoring hate speech and misinformation, which MSNBC suggests could be revenge against Democrats for repeatedly taking tech companies like his to court. Since then, the platform has been flooded with misinformation and hate speech, which increased during last year's election. According to the current policies of Meta (Facebook's parent company), Meta only really acts on misinformation that may directly lead to violence or interfere with political events. Beyond that, it generally relies on community notes, but notes obviously can't be put on every post. And as stated previously, much of the misinformation on Facebook spreads through individual user communication, not big posts.
In my opinion, seeing Zuckerberg become careless about misinformation, likely because he felt slighted by Democrats, is very disappointing. Who knows how much the large amount of misinformation on Facebook during the last presidential election affected the outcome, but it almost certainly had an impact. I personally do not use Facebook, but much of my family does. And sadly, they are also the ones I most often see promoting election and vaccine conspiracies. I think the best thing Facebook can do is exactly what it did in 2020, when it closely monitored fake news about the election and COVID/vaccines. Twitter did the same thing, possibly even more strictly, which we will get into. But as long as Zuckerberg seems to be trying to please the new administration, I don't think any sweeping changes to the algorithm will happen soon.
Twitter/X
Twitter/X is an interesting company to analyze regarding misinformation policies because of just how swiftly it has changed course since Elon Musk took over. While I will try to keep my own personal experience of using the app to a minimum, I can say that I have seen a rise in hateful speech and dis/misinformation on it. I can't tell you how many accounts I have had to block because of the heinous things they were saying. But due to the app's paid blue checkmark, which boosts visibility, many of these accounts are among the most prominent and promoted you will see on the app.
Going back to the 2020 election, before Musk's ownership (founder Jack Dorsey ran the company at the time), Twitter's misinformation policies were far stricter. For example, after the Capitol riots, Twitter permanently suspended Donald Trump's account, citing multiple violations of its conduct policies regarding misinformation. While Twitter generally suspended ordinary accounts without much fanfare, it had been under fire to do the same to Trump but resisted because he was a world leader. Trump proceeded to try to create new accounts to get around the ban, but those were quickly suspended as well. I personally believe this was a great example of Twitter's previously unyielding policy toward misinformation, and a great example for other companies to follow.
That was until Elon Musk took over. Musk had repeatedly criticized Twitter's misinformation policies, saying they violated free speech (even though Twitter is a private company, not a government platform, and can moderate speech however it pleases). It is no secret that Musk is very conservative; he has promoted several Republicans and criticized Democrats and their policies, in the U.S. and around the world. This includes very incendiary rhetoric, such as calling for the release of the jailed far-right activist Tommy Robinson.

After Musk bought Twitter for $44 billion, he made it clear that he intended to loosen the platform's restrictions on speech. Since the purchase, he has laid off large numbers of staff and rolled back several prominent policies and decisions made under them, such as reinstating Donald Trump's account, among several other banned ones.
According to Twitter/X's blog post on its rules changes, the company wants to promote freedom of speech as much as possible, without fear of censorship. While it makes clear that violent or graphic content is not allowed under almost any circumstance, it would much rather limit the reach of content than outright ban it. Twitter also uses community notes, which can provide context or expose misinformation, but it generally leaves the post up alongside the note (my last blog showed an example of one). However, Musk has repeatedly voiced his dislike for community notes, saying they are skewed toward "legacy media" and need a "fix." This is believed to be because he simply doesn't like that they disagree with his worldview.
While Twitter is not a complete cesspool of misinformation, and community notes do help expose a lot of the dis/misinformation on the site, I have personally seen a lot more misinformation, hate speech, and otherwise disturbing content. It is not just my opinion either, as the amount of hate speech and slurs increased dramatically immediately after Musk took over. The biggest thing I have noticed is the increase in bot accounts that seem to just copy content or reply with responses that have nothing to do with the post they are replying to (I am not the only one who notices this). I think the best way to fix the problem, to start with, is new ownership. But in general, Twitter's old policies, while criticized by conservatives, were far more effective in limiting misinformation and hate speech. And banning accounts that repeatedly spread misinformation, while harsh, is effective in my opinion.
It's simple, but I think the best way to improve Facebook's and Twitter's misinformation control is to go back to the policies they had in years prior. They were more effective, less influenced by partisan pressure and the owners' own political beliefs, and would be drastically better than what the platforms have now.