On January 7, 2025, Meta announced sweeping changes to its content moderation policies, including the end of third-party fact-checking in the U.S. and global rollbacks to its hate speech policy that remove protections for women, people of color, trans people, and others. In the absence of data from Meta, we decided to go straight to users to assess whether and how harmful content is manifesting on Meta platforms in the wake of the January rollbacks.

      • @lmmarsano@lemmynsfw.com

        Nah, and cool opinion.

As someone else wrote, why should anyone put much confidence in "some giant/evil megacorp"? They're not a philanthropic organization & they're not real authorities. We can expect them to act in their own interest.

If content is truly illegal or harmful, then the real authorities should handle it. Simply taking down that content doesn't help real authorities or address credible threats. If it's not illegal or harmful, then we can block or ignore it.

        People already curate their information offline. It seems reasonable to expect the same online.