Trust and safety has become a critical business function for digital platforms and services. We look at why it’s so important, how it works and the common challenges teams face.

Content moderation for online services and communities is a hot topic of debate. Companies need to protect users and their brand, while grappling with issues around freedom of expression, misinformation, cyberbullying, radicalization, and more.

Trust and safety is the larger force behind content moderation. And increasingly, it requires investigators to adopt new levels of security and anonymity to avoid risk to themselves, their devices, and their company.

So what is trust and safety? Why is it important? And how does it work?

Understanding trust and safety

Evolving beyond just fraud prevention, trust and safety is now a business-critical discipline for building a culture of care, protection and positive experiences for all parties interacting with a company’s platform or services. Trust and safety teams focus on identifying and resolving potentially harmful situations, mitigating risks and establishing a foundation of confidence.

Typically, teams focus on key areas, such as:

  • Ensuring integrity of services: promoting customer/user confidence by establishing and enforcing policies for acceptable use and compliance
  • Protecting users of online services: preventing fraud, account takeovers and phishing scams
  • Protecting online services platforms: managing security threats and guarding against misuse of the brand and the way its services operate

Learn more: Online markets move to block fake vaccination cards >

Why is trust and safety important?

Trust and safety teams are playing an increasingly critical role in helping companies protect customers from harmful content and preserve brand reputation. While automated moderation systems may flag content and usage violations as well as fraud, analysts need to understand the whole story behind an issue. Deeper research is often needed for scenarios such as:

  • Acceptable use enforcement: content moderation and site integrity maintenance, including review of harmful assets from hate groups and criminal enterprises, and phishing and malware-driven posts targeting the community
  • Marketplace surveillance: identifying and removing counterfeit, stolen or fake digital and physical goods for sale, and potentially reporting them to law enforcement
  • Fraud investigations: inquiries into compromised or fake accounts (e.g., account takeovers, accounts created with stolen credit cards or stolen identities) used to make purchases, scam or phish other users, and spread malware

Learn more: COMB makes billions of leaked credentials easily discoverable >

Particularly with social media platforms and online forums, trust and safety is vital. Potential abuses can represent a wide range of risks for users. For example, analysts need to keep an eye out for “low risk” abuses like spam or toxic language, “severe risk” abuses like cyberbullying, doxxing and harassment, and “extreme risk” abuses like selling child pornography online, spreading misinformation and recruiting new members to terrorist groups.

Trust and safety issues may also become heightened if minors are using the platform or services. The business may need to conform to stricter regulations around what’s considered appropriate use.

Another rising concern has been the effort to limit damaging disinformation. COVID conspiracies are a prime example. By early 2021, Facebook had removed more than 18 million posts on Facebook and Instagram “for violating its Covid-19 misinformation policy,” according to Bloomberg. Despite concerns around freedom of expression, many companies have chosen to take a stand to avoid the spread of content that could harm other users and damage the integrity of the community or the brand.

Regulations demand trust and safety action

Although online communities and services rose to prominence in the past 20 years, the government began regulating content even earlier. Section 230 of the U.S. Communications Decency Act (CDA) of 1996 gives platforms broad immunity from being held responsible for third-party content. That immunity, however, does not cover federally illegal material (e.g., sex trafficking, copyright infringement), so platforms can still be held responsible for hosting it. And businesses cannot turn a blind eye without inviting risk, so trust and safety teams need to establish and enforce usage policies.

Trust and safety is further complicated by a lack of global standards. Currently, there is a patchwork of regulations around the world, with EU rules, in particular, carrying hefty fines.

How trust and safety works

Typically, trust and safety efforts encompass a series of actions from both the business and its users. Front-line “infantry” in the company may proactively monitor for issues, while automated content moderation systems handle broader surveillance to flag potential usage violations. Community users also often report issues such as bullying, hate content or misinformation.

Whether pinged by system alerts or customers, trust and safety analysts need to prioritize and investigate issues. And that’s where the challenges and risks get more complex. Depending on the topic, region or demographics, investigators may need to research not only across the surface web, but also in potentially harmful territory like the deep and dark web. All too often, businesses are not equipped to adequately protect against the risks to researchers, their workspaces and enterprise networks.
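To make the workflow concrete, here is a minimal sketch of how automated flags and community reports might converge into a single, prioritized analyst queue. The names, severity weights and boost for user reports are illustrative assumptions for this example, not a description of any specific platform’s tooling; the risk tiers loosely mirror the low/severe/extreme buckets described earlier.

```python
# Hypothetical triage sketch: merge automated flags and user reports into
# one prioritized analyst queue. Names and weights are illustrative only.
from dataclasses import dataclass, field
from enum import IntEnum
import heapq


class Severity(IntEnum):
    """Risk tiers loosely mirroring the low/severe/extreme buckets above."""
    LOW = 1       # e.g., spam, toxic language
    SEVERE = 2    # e.g., cyberbullying, doxxing, harassment
    EXTREME = 3   # e.g., the extreme abuses described above


@dataclass(order=True)
class Report:
    priority: int
    item_id: str = field(compare=False)
    source: str = field(compare=False)     # "automated_flag" or "user_report"
    severity: Severity = field(compare=False)


class TriageQueue:
    """Single work queue fed by automated systems and community reports."""

    def __init__(self):
        self._heap = []

    def submit(self, item_id, source, severity):
        # Higher severity surfaces first; user reports get a small boost so
        # community signals are not drowned out by automated volume.
        boost = 1 if source == "user_report" else 0
        priority = -(severity * 10 + boost)
        heapq.heappush(self._heap, Report(priority, item_id, source, severity))

    def next_for_review(self):
        return heapq.heappop(self._heap) if self._heap else None


# Example: system alerts and customer reports feed the same queue.
queue = TriageQueue()
queue.submit("post-123", "automated_flag", Severity.LOW)
queue.submit("post-456", "user_report", Severity.SEVERE)
queue.submit("acct-789", "automated_flag", Severity.EXTREME)
print(queue.next_for_review().item_id)  # acct-789 surfaces first
```

The design choice to weight user reports slightly higher is one possible way to keep community signals visible alongside high-volume automated flags; real-world prioritization would factor in far more context.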

Common trust and safety challenges

As usage expands across digital communities and services, the volume of trust and safety issues expands along with it. Companies are feeling pressured to build a strong team of analysts, with backgrounds in military/intelligence, cybersecurity, fraud and brand misuse, and corporate research and protection.

Along with the challenge of getting the right expertise in place, businesses often struggle with how to empower investigators to gain actionable insights more quickly and safely.

Automated systems help moderate content and user behavior, but many issues need more context before the right action can be taken. Organized criminal activity and disinformation are often the issues that demand deeper dives to solve the root problem, but seemingly innocuous or straightforward issues may demand them as well. For example, automation may incorrectly flag legitimate users, and if the platform blocks those people arbitrarily, the provider risks ruining the “trust” part of trust and safety.
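One common way to avoid punishing legitimate users is to act automatically only on near-certain violations and route borderline cases to a human. The sketch below illustrates that idea; the thresholds, labels and function names are assumptions made for this example, not platform policy or a real moderation API.

```python
# Minimal sketch of confidence-based routing for automated flags.
# Thresholds and labels are illustrative assumptions, not platform policy.

AUTO_ACTION_THRESHOLD = 0.98   # act automatically only when nearly certain
HUMAN_REVIEW_THRESHOLD = 0.60  # send ambiguous cases to an analyst


def route_flag(item_id: str, model_confidence: float) -> str:
    """Decide what happens to an automatically flagged item.

    Blocking on every flag would hit legitimate users and erode the
    "trust" side of trust and safety, so only near-certain violations
    are actioned automatically; borderline cases get human context.
    """
    if model_confidence >= AUTO_ACTION_THRESHOLD:
        return f"{item_id}: remove automatically and notify the user"
    if model_confidence >= HUMAN_REVIEW_THRESHOLD:
        return f"{item_id}: queue for analyst review with full context"
    return f"{item_id}: leave up, keep monitoring"


if __name__ == "__main__":
    for item, score in [("post-1", 0.99), ("post-2", 0.75), ("post-3", 0.30)]:
        print(route_flag(item, score))
```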

With any of these investigations, there’s no telling where they’ll lead. Analysts might have to venture off-platform and sometimes into unsavory sites to collect evidence. Yet access to untrusted sites — such as forums used by hate groups or terrorist organizations — is often blocked by enterprise IT (for good reason).

While it’s critical for companies to protect against potential risks, analysts need the agility to overcome IT hurdles with hyper-secure ways to tap into rich resources for gathering insights. They also need new levels of anonymity for online research, so investigations can never be traced back to the analyst, their device or their company. VPNs and private browsing do not provide adequate protection. Analysts need greater control over how their online presence is attributed to them and their organization, so they can safeguard their research and ensure successful outcomes.

Learn more: What VPNs and Incognito Mode still give away in your online identity >

Putting the ‘safety’ in trust and safety

Minimizing risk is key — and that’s where cutting-edge tools for secure, anonymous research are essential. Learn how innovations in cloud isolation and managed attribution can empower trust and safety teams to do their best work — download our white paper.

Want to see for yourself how trust and safety analysts can conduct hyper-secure, anonymous investigations? Connect with us for a Silo for Research demo.

And stay tuned for the next blog in this series: Why trust and safety is a risky business.

Tags
OSINT research, Social media, Trust and safety