On January 7, 2025, Meta announced sweeping changes to its content moderation policies, including the end of third-party fact-checking in the U.S. and global rollbacks to its hate speech policy that remove protections for women, people of color, trans people, and more. In the absence of data from Meta, we decided to go straight to users to assess whether and how harmful content is manifesting on Meta platforms in the wake of the January rollbacks.
There’s no speech police in the real world; you don’t need it online either.
That is the most simplistic, uneducated opinion it is possible to hold on the subject. You should be ashamed.
Nah, and cool opinion.
As someone else wrote, why should anyone put much confidence in “some giant/evil megacorp”? They’re not a philanthropic organization & they’re not real authorities. We can expect them to act in their own interest.
If content is truly illegal or harmful, then the real authorities should handle it. Simply taking down that content doesn’t help real authorities or address credible threats. If it’s not illegal or harmful, then we can block or ignore it.
People already curate their information offline. It seems reasonable to expect the same online.
If that’s the case, it should be easy enough for you to come up with an actual argument against it.