
What happened to the posts on Instagram during the national strike?


Facebook, which owns the platform, insists this is not an act of censorship by the company, while adding that there are global problems with Stories content. But the case once again reopens the enormous debate over the moderation of online content, a particularly sensitive issue in the midst of a social uprising like the one Colombia is experiencing.

This Wednesday brought another night of national strike, accompanied by complaints about problems with the flow of digital communications. On Thursday morning, a long list of Instagram users in Colombia began to see some of the posts they had published in the platform's Stories module disappear. Not all kinds of content: specifically, posts related to the social uprising Colombia has been experiencing since last week.

Images and videos of the demonstrations, some showing explicit, graphic violence, but also photos of poems about the protest and graphics explaining the right of habeas corpus, were taken off the platform.


All of this, at this precise moment, has fed a particular sense of censorship or manipulation of the free flow of information. These are complex times, marked by general distrust, and content removal reinforces those perceptions.

Facebook admitted Thursday that it has had problems with content on its platform related to the national strike. In its official statement, the company (owner of Instagram) said: “We know that people come to Facebook to discuss the issues that interest them most, and we have rules to maintain a safe environment on our platforms, which do not allow the posting of violent images. Within the framework of the demonstrations in Colombia, we are working to enable publications that are of public interest, warning users that they may contain sensitive images.”

The company explains that, in applying its rules on violent content, its algorithms did in fact remove content related to the national strike. It then clarifies that a team of people is reviewing the issue and restoring the publications, which in some cases may carry a warning about graphic content. The company is upfront about it: it is not censoring or blocking the flow of information.

The company says that since Monday it has had a team of 70 people (based in Mexico, Brazil, Colombia and the United States) dedicated exclusively to supervising operations in Colombia, with a view to providing oversight and human intervention amid the country's current situation.

Around 2:00 p.m. this Thursday, Instagram posted the following through its Twitter account (originally in English): “We know that some people are experiencing problems uploading and viewing Stories. This is a global technical issue that is not related to any specific topic and that we are fixing right now. We will share more information soon.” A little later it added, also via Twitter: “Thank you very much for reporting all the cases. We have just confirmed that it is a problem affecting the visibility of Stories at a global level that is not related to the situation in Colombia, and that our teams are working hard to solve.”


Moderation and reality

At this point in the story we have to talk about content moderation, probably one of the largest and most interesting debates in digital policy and one that, at least until this week, did not seem to have reached a massive scale in Colombia.

Beyond suspicion and mistrust of the power of a global corporation (real and, in some cases, legitimate), the underlying debate here is content moderation. This is a kind of minefield in which the problems exist on a global scale, but the solutions cannot be implemented on that same scale: one size here does not fit all and, in many cases, fits no one.

Digital platforms, meaning companies like Twitter or Facebook, have rules of the game that usually include prohibitions on publishing child pornography (one of the clearest and most globally accepted red lines) or content that spreads hate speech or incites violence. These last two points are more complex and granular issues, since their definitions in contexts such as a demonstration can begin to blur into information or freedom of expression. The word context is key here, but we will come back to it a bit later.


Moderating content on a global scale is practically impossible, in large part because few people agree on how to define what is and is not hate speech: whether it is limited to matters of race, gender, religion or other categories that have historically been subjected to hateful practices; whether it includes all forms of harassment and bullying; or whether it applies only when exercised from a position of power against those who have not had it.

“These definitional problems, which often entangle judicial authorities around the world, also impact the work of online platforms. As a result of these debates and controversies, efforts to remove hate speech often come at the expense of freedom of expression,” say Jillian York and David Greene of the Electronic Frontier Foundation (EFF), one of the largest digital rights organizations in the world.

“Hate speech represents one of the biggest problems that content moderation faces on online platforms. Few of these services want to host hate speech; at best some tolerate it, and none welcomes it. As a result, there are efforts across various sites to try to solve this problem,” they add from the EFF.

And here we come to the importance of context: is it incitement to violence to publish a video showing police abuses during social protests like those Colombia is going through? The answer may be no.

An algorithm trained with clear instructions to take down content showing explicit violence scans one of the videos of the attack on Lucas Villa in Pereira and can block access to the post. Of course, the machine does not understand that this piece is key information about a violent act against unarmed civilians, nor does it recognize its value in the midst of protests that have already left several dozen dead and hundreds of complaints of abuse by the authorities.

For that, a human is needed: someone who can recognize the context in which the content was produced, in order to privilege the flow of information and the free exercise of expression, to begin with.

And although the platforms have teams dedicated to this task around the world, the truth is that the first review is usually in the hands of a machine; only when problems arise, such as complaints and appeals from users, does a human intervene.
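The two-stage flow described here — an automated first pass, then human review on appeal — can be sketched roughly as follows. This is an illustrative simplification, not Facebook's actual system: the classifier, labels, thresholds and the public-interest exception (which Facebook's statement does mention) are all hypothetical names invented for the example.

```python
# Illustrative sketch of a two-stage moderation pipeline.
# Stage 1: an automated classifier removes posts above a score threshold.
# Stage 2: on appeal, a human reviewer can restore a post of public
# interest behind a sensitive-content warning.
# All names and thresholds are hypothetical.

from dataclasses import dataclass

REMOVE_THRESHOLD = 0.8  # assumed cutoff for automatic removal

@dataclass
class Post:
    post_id: str
    violence_score: float  # output of a hypothetical violence classifier
    status: str = "visible"
    warning: bool = False

def automated_pass(post: Post) -> Post:
    """Stage 1: remove posts the classifier scores above the threshold."""
    if post.violence_score >= REMOVE_THRESHOLD:
        post.status = "removed"
    return post

def human_review(post: Post, is_public_interest: bool) -> Post:
    """Stage 2: a human restores removed posts of public interest,
    attaching a graphic-content warning instead of blocking them."""
    if post.status == "removed" and is_public_interest:
        post.status = "visible"
        post.warning = True
    return post

# A graphic but newsworthy video: removed by the machine, restored by a human.
post = automated_pass(Post("p1", violence_score=0.93))
post = human_review(post, is_public_interest=True)
print(post.status, post.warning)  # visible True
```

The point the sketch makes is structural: the machine acts first and on its own, and the context-aware human judgment only enters the loop afterwards, if at all.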

The process is light years from perfect, and it may even be a long way from being considered acceptable by many. “Making these determinations on a platform with billions of users from practically the entire planet is much more complicated, even more so when much of this work is handled by outsourced and poorly paid workers or, worse, by automation technologies,” York and Greene say.

The researchers add that “even though a portion of the moderation is done manually, Facebook and other companies are increasingly using automation technologies to deal with this content, meaning that the human touch (which understands subtleties, nuances and contexts) is not always present in these tasks.”

In the end, users may not notice it (or perhaps did not until this Wednesday), but digital protest today is largely equivalent to exercising fundamental rights inside a shopping mall: it is a public discussion, but in private spaces, under rules established by a corporation.


According to Jonathan Bock, head of the Foundation for Press Freedom (Flip), “the discussion about platforms is key and relevant. This year that debate is going to move a lot. Platforms must establish some other type of contract with the people who use them. That is a conversation that must come from the private sector, about how these rules of the game are going to be established.”

To return to an earlier question: a video in which a person (uniformed or not) is seen shooting another can walk a delicate line toward incitement to violence (depending on the context). But does a screenshot of a poem about social protest, or an image explaining how habeas corpus works, fall into that category? Probably not.

And yet both were posts that ran into trouble on Instagram in Colombia. The explanation, without getting into censorship, may have more to do with the algorithm itself. Faced with an unusual flow of publications on the same topic, content removal can become massive (beyond the criteria of graphic violence), for example to prevent the spread of spam. This happened during the Black Lives Matter protests in the United States.
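One plausible mechanism behind such mass removals is a simple volume-spike heuristic: when posts on one topic suddenly surge far above their recent baseline, an anti-spam rule may begin acting on them in bulk, sweeping up legitimate content along with the spam. The sketch below is a hypothetical illustration of that idea; the window size and spike factor are assumptions, not anything Instagram has disclosed.

```python
# Hypothetical sketch of a volume-spike heuristic. It compares the
# current volume of posts on a topic against a moving baseline; a
# large enough surge trips the spam flag for the whole interval —
# which is how legitimate protest posts could be caught in bulk.
# Window size and spike factor are illustrative assumptions.

from collections import deque

class SpikeDetector:
    def __init__(self, window: int = 5, spike_factor: float = 10.0):
        self.counts = deque(maxlen=window)  # post counts per past interval
        self.spike_factor = spike_factor

    def is_spike(self, current_count: int) -> bool:
        """Flag the interval if volume exceeds spike_factor x baseline."""
        baseline = sum(self.counts) / len(self.counts) if self.counts else 0.0
        self.counts.append(current_count)
        return baseline > 0 and current_count > self.spike_factor * baseline

detector = SpikeDetector()
for count in [100, 110, 90, 105, 95]:  # normal traffic: no flags
    detector.is_spike(count)
print(detector.is_spike(5000))  # sudden surge around one topic -> True
```

Note that the heuristic knows nothing about what the posts say: a coordinated spam wave and a genuine social uprising look identical to it, which is exactly the failure mode described above.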

“We are not totally clear about it yet, but what one can say is that when stress is placed on the algorithm, it can act in ways that were not foreseen,” explains Carolina Botero, director of the Karisma Foundation, a leading digital rights watchdog in Colombia. And she adds: “But in any case, the burden of offering explanations and transparency always falls on the platforms, on the companies.”

And this inevitably brings problems, even when we are not talking about harsh, crude censorship. What is happening with Instagram illustrates the complexities of this scenario: it is very positive that a Facebook team has been reviewing the Colombian situation for a few days but, at the same time, it is delicate that content has been removed, or become hard to access, at precisely this critical moment for the country.

For Botero, “the feeling of censorship cannot be ignored. It is a matter of looking for the origin and nature of the actions taking place on the platforms, because it is different if the origin of the problem is the company, if it is a natural event (a natural disaster that breaks the infrastructure, for example), and it is different if the nature of the intervention is to protect a vulnerable community or if we are talking about an algorithm acting strangely.”
