Welcome to NeedleStack, the podcast for professional online research. I'm your host, Matt Ashburn, a former intelligence officer who believes that OSINT is fundamentally open source research.
And I'm Jeff Phillips, tech industry veteran and curious to a fault.
Today we're going to be continuing our use case study of OSINT. Again, that's open source intelligence, and we're going to talk about the role that it plays in the trust and safety teams of online communities. So think large tech companies, folks that want to keep online marketplaces and platforms safe for the users and for those that want to buy and sell goods online and use the services in whatever way that they're designed to be used. Now this could apply to many things. It could apply to acceptable use policies and enforcement on social media platforms or perhaps surveillance and investigation of counterfeit goods, or fraud prevention, or any number of other things that we'll talk about today.
Yeah, Matt, I think to start us off with a definition here, I'd say the field of trust and safety is about how people abuse the internet and services on the internet... you mentioned communities... to cause real harm. But typically, they're doing so by using the products in the way they were actually designed to work. Right? So maybe it's a platform for sending messages or posting content or selling something, as you mentioned, counterfeits, but they're doing it with malicious intent. And while trust and safety has been around for 20 years as a concept, over the last couple of years we've really seen that analyst role and the formation of dedicated trust and safety teams growing, as you mentioned with tech companies, but really anyone with an online community where they need to mitigate risk and preserve their brand reputation.
So one of the key areas of focus we tend to hear about is ensuring the integrity of the services. Right?
It's a good phrase... trust and safety. You've got to have trust and safety to instill customer and user confidence by dealing with compliance issues and enforcing your acceptable use policy. The second thing, they're looking to protect their users, and you hit on some of those things like scams or fraud or stolen goods.
But misinformation, hate groups, bullying, all those different types of things that can take place maliciously on, for example, a social media platform. And then sometimes those teams even go a step further about protecting the online service itself, right, from security threats or, again, from their brand and the platform being used in ways that they don't want to be associated with.
That's exactly right. And it's interesting to note that there's a ton of automation, a lot of algorithms and AI/ML in place that help automatically catch objectionable content, for example, or counterfeit goods in an online marketplace.
But there's also a lot of manual work that has to be done. As those issues get escalated, there's often a need for a manual review and this type of investigation. And so, of course, that's where open source intelligence comes in, because then you have to go outside of the platform that you operate and obtain information elsewhere.
Right? And that's also where the risk comes in when conducting open source research. Joining us today is a special guest who's going to tell us a little bit more about trust and safety. Abel, can you give us a quick breakdown of what this is and the role that trust and safety plays?
Thanks a lot, Matt. Yeah. So trust and safety has been around since online communities started growing, but it's really matured over at least the last two to three years, from the Stanford Internet Observatory standing up an academic trust and safety journal to professional and trade associations like the Trust and Safety Professional Association, which is just about a year and a half old, a place where trust and safety professionals from all around the world can discuss and share ideas and coordinate on the types of challenges that a lot of them are facing in common.
So a couple of specific examples of trust and safety issues: let's say, phishing. There are a lot of instances of phishing and malware on social media where people are engaging with posts and not thinking about the links on the platform, clicking and being sent either to a site where they're tricked into entering credentials, or having malware downloaded directly onto their machine.
It can also include things like direct messages and social engineering.
So sending someone a message, engaging with them, and using that type of direct social interaction, which can be somewhat hidden from the platform moderators, in order to engage with and potentially defraud somebody. On the other side, there are counterfeit and pirated goods. That's a very large underground marketplace, and it can be anywhere from the Dark Web to the open web, including legitimate online marketplaces where people are selling counterfeit, stolen, or pirated goods.
There's also intellectual property crime that can take place. So businesses are interested in making sure that patents or other ideas that they have are not sold on a dark marketplace.
Thanks, Abel. That's interesting, and tying this back to the theme of OSINT collection in trust and safety investigations, I can think of a few areas you need to consider when you're going out and investigating this type of activity.
There's, of course, going to be things around company policy. If it's counterfeit goods, are you allowed to follow a lead to the Dark Web if that's where the investigation takes you? Are you allowed to go to certain sites? There are sites hosted in certain countries that your company might block as part of standard IT practices. Are you allowed to go around that? And once you validate an incident through your OSINT investigation, when do you need to report it up through your legal team to take proper action?
There's privacy and civil liberty protections of the user community itself.
And if anyone's interested in going deep into civil liberties, privacy, and OSINT, I encourage you to check out our episode with Richard.
That is the episode right before this one. And then when you're doing this, there's, of course, wanting to keep it from being known that the company is out doing some of these investigations.
There's protecting the moderator, the trust and safety analyst, from any retribution and, as Abel mentioned, from psychological harm. What are they coming across? What are they having to engage with? So protecting them as an analyst, as well as protecting the company infrastructure. Right? You mentioned going off platform. What does that expose you to from a malware perspective, and from attributing the research to you and/or the company?
Yeah. One other thing I'd add, Jeff, is coordination with law enforcement or other civilian enforcement agencies like the FTC.
When is it necessary for the company to get them involved? And where is that balance between handling these issues yourself as a company or handing it off or coordinating with law enforcement in order to stop whatever malicious activity is taking place?
Yeah, that's good. And again, the tech platforms in particular have lots of great information at their disposal from their own platforms, but they often have to add to and enhance that data by performing open source intelligence.
And it's interesting that, as we're seeing here, there are a lot of common threads. You have policy concerns, privacy issues, compliance issues, documentation of your evidence, lots of things that are common within OSINT regardless of the industry you're serving or the exact mission you're performing. One of the most interesting examples we've seen over the past couple of years, certainly becoming more and more prominent, is the effort to combat nation-state level disinformation and misinformation.
This is often termed coordinated inauthentic activity. And performing the research needed to combat it is a massive undertaking. We've seen a number of tech companies very prominently display the results of their research, which is pretty fascinating.
Facebook and Twitter are two big ones that come to mind.
Yeah. And there's balancing the role of government intervention against the pressure on these tech companies to manage speech and manage the type of content being engaged with, when they're essentially combating nation-state adversaries who are promoting these coordinated attempts to influence political campaigns, or perceptions of government activities, all across the world. It's a big responsibility, and it requires a lot of intense effort, from the technical side of automated monitoring to the legal side of deciding what you can and can't do to engage with them.
It covers a lot of different issues, and it's such a huge problem today.
Yeah, that's absolutely right. Abel, thank you so much for joining us today. And to the folks out there, thank you also for joining us. If you liked what you heard, you can always subscribe to our show wherever you get your podcasts, watch our episodes on YouTube, or view transcripts and other information about our podcast at our website, Authentic8. That's Authentic with the number eight, dot com slash NeedleStack.
Also, be sure to follow us and ask any questions on our Twitter account. That's NeedleStack_pod on Twitter. And next week, you don't want to miss our conversation.
My friend Mick will be here with us talking about corporate research and protection and the role that OSINT plays in that, in particular some tips for protecting people on travel and some other interesting use cases of open source intelligence. You don't want to miss it. See you then.