Facebook violated Palestinians’ right to free expression, says report commissioned by Meta


Meta has finally released the findings of an outside report that examined how its content moderation policies affected Israelis and Palestinians amid an escalation of violence in the Gaza Strip last May. The report, from Business for Social Responsibility (BSR), found that Facebook and Instagram violated Palestinians’ right to free expression.

“Based on the data reviewed, examination of individual cases and related materials, and external stakeholder engagement, Meta’s actions in May 2021 appear to have had an adverse human rights impact on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred,” BSR writes in its report.

The report also notes that “an examination of individual cases” showed that some Israeli accounts were also erroneously banned or restricted during this period. But the report’s authors highlight several systemic issues they say disproportionately affected Palestinians.

According to the report, “Arabic content had greater over-enforcement,” and “proactive detection rates of potentially violating Arabic content were significantly higher than proactive detection rates of potentially violating Hebrew content.” The report also notes that Meta had an internal tool for detecting “hostile speech” in Arabic, but not in Hebrew, and that Meta’s systems and moderators had lower accuracy when assessing Palestinian Arabic.

As a result, many users’ accounts were hit with “false strikes,” and their posts were wrongly removed from Facebook and Instagram. “These strikes remain in place for those users who did not appeal erroneous content removals,” the report notes.

Meta had commissioned the report following a recommendation from the Oversight Board last fall. In its response to the report, Meta says it will update some of its policies, including several aspects of its Dangerous Individuals and Organizations (DOI) policy. The company says it’s “started a policy development process to review our definitions of praise, support and representation in our DOI Policy,” and that it’s “working on ways to make user experiences of our DOI strikes simpler and more transparent.”

Meta also notes that it has “begun experimentation on building a dialect-specific Arabic classifier” for written content, and that it has changed its internal process for managing keywords and “block lists” that affect content removals.

Notably, Meta says it’s “assessing the feasibility” of a recommendation that it notify users when it places “feature limiting and search limiting” on their accounts after they receive a strike. Instagram users have long complained that the app shadowbans or reduces the visibility of their accounts when they post about certain topics. These complaints increased last spring when users reported that they were barred from posting about Palestine, or that the reach of their posts was reduced. At the time, Meta blamed an unspecified “glitch.” BSR’s report notes that the company had also implemented emergency “break glass” measures that temporarily throttled all “repeatedly reshared content.”
