The Safety in Localization

Photo by Markus Winkler on Unsplash

Globalization Motivation is a series that intentionally oversimplifies in order to evangelize localization, globalization, and above all collaboration. This is part 7.

When evangelizing localization, I often refer to the big three things to avoid: alienating your audience, non-compliance, and legal action. I am now considering adding a fourth element: avoiding harm. The safety of users and customers is a pressing issue: protecting them from people and entities who seek to use private information for illegal purposes, and from those who seek to hurt others.

This is not why social media was created, and it is certainly not the mission of any of these platforms. They were all created to bring people together, which is wonderful: to build communities, stay in touch with loved ones, and connect with people all over the world who share common interests. That should be protected at all costs. Exactly why anyone would want to use these platforms to exploit innocent people is beyond me. Unfortunately, the reality is that there are people who seek to ruin all that is beautiful about these communities.

Hate speech and fake news have become huge challenges in recent years, and social media platforms have seen predators who seek to abuse users. These companies have devoted resources to monitoring and detecting these abuses and to protecting users who are using the platforms for exactly what they were intended to do: bring people together and create community.

This adds extra challenges for localizers and engineers. What is considered hate speech and abuse varies by language and even by locale. When I watched Netflix’s series Narcos back in 2017, I had to understand what counted as an insult in Colombian drug cartel sub-culture in order to follow along. Insults vary by language, culture, and individual group. Imagine teaching a machine translation system to detect curses and hateful tone in various languages. It’s a daunting and ever-changing task.
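To illustrate why this is so locale-dependent, here is a minimal sketch. The lexicon, terms, and function name are hypothetical, not any platform’s actual implementation; the point is only that the same word can be flagged as abusive in one locale and pass untouched in another.

```python
# Minimal sketch (hypothetical data and names) of locale-aware abuse flagging.
# The same surface term can be harmless in one locale and an insult in another.

ABUSE_LEXICON = {
    # locale -> terms treated as abusive in that locale (illustrative only)
    "es-CO": {"sapo"},   # "snitch" in Colombian slang; just "toad" in most other locales
    "es-ES": set(),
    "en-US": {"snitch"},
}

def flag_abusive_terms(text: str, locale: str) -> list[str]:
    """Return the terms in `text` that are flagged as abusive for `locale`."""
    terms = ABUSE_LEXICON.get(locale, set())
    return [token for token in text.lower().split() if token in terms]

print(flag_abusive_terms("no seas sapo", "es-CO"))  # ['sapo'] -- flagged in Colombia
print(flag_abusive_terms("no seas sapo", "es-ES"))  # []       -- not flagged in Spain
```

A real system would rely on trained models rather than word lists, but the principle is the same: the signal that marks content as harmful only exists relative to a locale.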

Harmful Content 

Referring back to the big three, why does avoiding harm need to be its own category? The goal of content governance is to address all three: we want to avoid offending our users, we seek not to break any local laws, and we never want to incur a lawsuit. However, these three don’t necessarily involve HARM. Harm is a completely separate issue because the brand didn’t create the content and, as a result, can’t easily control it. Here we are referring specifically to user generated content, because that is the content being monitored and the content brands have the least control over. Most companies would never seek to hurt the people using their product or service, or want them to be harmed by the supportive messaging in their marketing or user documentation. If harm does occur, it was not intended, and they will work to fix the issue immediately.

User Generated Content

This brings us to a different category of content: user generated content. There are incredible benefits to having the praise of ordinary people, and it is not simply an aspect of social media. Many brands have become dependent on user trust to gain valued customers. This can also be referred to as influencer marketing. We often look to the opinions of real people who give their feedback in the form of reviews or even video demos to influence our buying decisions – this is known as social proof. You could have someone posting a video to TikTok demonstrating a product or sharing a review of a service on Yelp.

Censorship vs. Safety 

This Fireside Chat with Vijaya Gadde (Twitter) and Berhan Taye (Access Now) from RightsCon 2021 explores the topics of censorship and safety. Vijaya is Head of Legal, Policy and Trust at Twitter. She is asked very difficult questions on the globalization of content moderation, including exactly what is being moderated, how that moderation is executed, and how it will need to be constantly evaluated and evolve.

The values of individual companies will have a huge impact on the evolution of global moderation of user generated content. When asked how Twitter is addressing possible censorship in a region of conflict, Vijaya speaks to what the algorithms are designed to detect. This is a highly sensitive issue: platforms trying to protect their users may be accused of censorship when that is not the intent. The technology itself and the intended use of that technology will continue to be debated and dissected forever. This means you may not agree with what a brand interprets as dangerous content, and what is considered dangerous today may not be considered harmful in the future.

What is considered harmful to a particular culture, local government, individual, and ultimately the brand adds to the complexity of this issue. However, it is something that social media outlets take quite seriously. Brands are now tasked with INTERPRETING what is harmful to consumers in each market.

Dangerous Content Types 

Here are some examples of what content is being evaluated for (a small illustrative sketch follows the list):

· Hate speech – Language that offends, insults, or abuses.

· Misinformation – False information that spreads confusion or, worse, causes harm.

· Privacy – Exposure of user information that should be kept private.

· Predatory behavior – Preying upon innocent users in order to deceive them.

· Incitement of violence – Attempts to rally people to hurt others.

· Proprietary content – Content that users share without the content owner’s consent.

· Fake accounts/Bots – Accounts created to do harm or to harvest information.
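To make the governance side concrete, here is a minimal sketch, with entirely hypothetical category names, markets, and rules (not any platform’s actual policy), of how these content types might be encoded so that the action taken for a given category can differ per market:

```python
from enum import Enum, auto

class ContentRisk(Enum):
    """The dangerous content types listed above, as machine-readable categories (illustrative)."""
    HATE_SPEECH = auto()
    MISINFORMATION = auto()
    PRIVACY = auto()
    PREDATORY_BEHAVIOR = auto()
    INCITEMENT_OF_VIOLENCE = auto()
    PROPRIETARY_CONTENT = auto()
    FAKE_ACCOUNT = auto()

# Hypothetical per-market overrides: the same category can call for a different
# action depending on local law and how the locale interprets harm.
DEFAULT_POLICY = {risk: "send to human review" for risk in ContentRisk}
MARKET_POLICY = {
    "de-DE": {ContentRisk.HATE_SPEECH: "remove"},               # illustrative stricter rule
    "en-US": {ContentRisk.MISINFORMATION: "label as disputed"}, # illustrative lighter touch
}

def action_for(risk: ContentRisk, market: str) -> str:
    """Resolve the action for a flagged category, falling back to the default policy."""
    return MARKET_POLICY.get(market, {}).get(risk, DEFAULT_POLICY[risk])

print(action_for(ContentRisk.HATE_SPEECH, "de-DE"))  # remove
print(action_for(ContentRisk.HATE_SPEECH, "en-US"))  # send to human review
```

The point of the sketch is the shape of the decision, not the rules themselves: the categories are global, but the interpretation and the resulting action are resolved per market, which is exactly where localization expertise belongs.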

Safe Global Community 

My intention is not to make social media outlets seem unsafe, but quite the opposite. This problem is not limited to these platforms. Every company has a responsibility for customer and user safety, and this has become a major priority for many companies over the last few years.

Where localization fits into this equation is that localization teams are best positioned to protect customers all over the world. This needs to be a collaboration with the brand: wherever your customers live, you are doing your very best to keep them safe. The greater our dependence on digital communities, the higher the priority on making sure those communities are places of free expression and trust. The challenge will continue to be not only seeking out these abuses but also discovering new threats. Digital communities are not going away; they are what keep us connected to knowledge and to our loved ones.

Interpreter of Values 

Localization specialists, when serving as cultural ambassadors, have the ability to interpret the values of each locale. The moderation of user behavior will continue to be largely automated, as it would be impossible to have a human review every post. This is not only a matter of interpreting what is considered abusive in a particular region; it must also be a strategic collaboration with what the company itself considers abusive, informed by those interpretations. This is a highly complex challenge, and who better to tackle it than our localization professionals?

Trust as Currency 

The greatest currency brands have now is trust. Brand trust is paramount in today’s global marketplace, and localization specialists are in the best position to assist with this mission of building trust with the global consumer. Ultimately, it is the brand’s dedication to trust, and to collaboration with localization professionals, that will bring success to this business challenge. Localization specialists are KEY to PROTECTING our digital environment and the world we live in. This is one more reason, and a vital one at that, why localization professionals are your trusted strategic partners for brand growth and the evolution of your business.