On January 7, 2025, Meta announced sweeping changes to its content moderation policies, including the end of third-party fact-checking in the U.S. and global rollbacks to its hate speech policy that remove protections for women, people of color, trans people, and more. In the absence of data from Meta, we decided to go straight to users to assess whether and how harmful content is manifesting on Meta platforms in the wake of the January rollbacks.

    • @lmmarsano@lemmynsfw.com
      2 hours ago

      Nah, and cool opinion.

      As someone else wrote, why should anyone put much confidence in “some giant/evil megacorp”? They’re not a philanthropic organization & they’re not real authorities. We can expect them to act in their own interest.

      If content is truly illegal or harmful, then the real authorities should handle it. Simply taking down that content doesn’t help real authorities or address credible threats. If it’s not illegal or harmful, then we can block or ignore.

      People already curate their information offline. It seems reasonable to expect the same online.

      • @zarkanian@sh.itjust.works
        49 minutes ago

        There are speech police in the real world. Workplaces don’t allow you to use slurs or to harass your co-workers; that’s just one example. In fact, any social group I can think of will punish you for saying certain things. Some are more lenient than others, but every one has a line that you cannot cross.