News from the open internet

Opinion

Human’s Jay Benach on the battle between ‘good bots’ and ‘bad bots’

Jay Benach, GM of media security at cybersecurity company Human.

Illustration by Robyn Phelps / Shutterstock / The Current

As the digital advertising ecosystem grows more complex, so too do the threats that lurk beneath its surface. From increasingly sophisticated bots to covert fraud schemes hidden in legitimate apps, the challenges facing ad tech companies today are far from the low-tech attacks of the past. A lot of the technology behind these threats is misunderstood, especially when it comes to bots.

That’s where Human comes in. The cybersecurity company works with demand-side platforms (DSPs) and supply-side platforms (SSPs) to safeguard their ad inventory from digital fraud and abuse, helping them develop policy and find solutions. The company has also created the Human Collective, a hand-selected group of trusted platforms, agencies, brands and telco operators with an aligned interest in presenting a united front against fraud.

Jay Benach, GM of media security at Human, delves into how the company detects and combats malicious automation, today’s rapidly evolving threat landscape and the difference between good and bad bots.

How would you describe the current state of fraud in the ecosystem today?

It’s not that fraud itself is necessarily increasing; it’s that the volume of automation on the internet — the amount of bot activity — is increasing substantially. Starting a few years ago, there was a Cambrian explosion in bots. You can think of them as bots conducting all types of research on behalf of many large language models.

The challenge for publishers is that they now contend with bots that all have different goals: ones they have licensing deals with, ones they don’t want to serve ads to, and fraudulent bots, such as ones trying to masquerade as the scrapers of large language models — the list goes on. This level of complexity is driving a lot of interest in Human delivering asymptotic levels of precision at both the network level and the site level.

Can you walk us through how Human’s detection system distinguishes between sophisticated bots and real human users?

This company was founded on an approach of using technical evidence to identify automation. Anything that is scripted or automated is going to do its best to fake humanity. Our job is to effectively catch the liars. That’s the first layer.

In addition to just detection, we’ve added on pre-bid prevention so platforms can avoid sophisticated bots before their ad supply is impacted. We’re also able to conduct internet-wide anomaly detection, which then enables us to have a full threat intelligence team go deep and unpack what’s going on, and if necessary, even reverse engineer the app in question to be able to distinguish a sophisticated fraud from real human activity.
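The pre-bid idea described above can be sketched as a filter that checks each bid request for automation signals before a bid is ever placed. This is a minimal illustration under assumed inputs, not Human’s actual detection logic; the fingerprint list, field names and thresholds are all hypothetical.

```python
# Minimal sketch of pre-bid bot filtering. The signals and thresholds
# below are hypothetical, not Human's actual detection logic.

# A hypothetical blocklist of device fingerprints previously flagged as automated.
KNOWN_BOT_FINGERPRINTS = {"fp-9f2c", "fp-a711"}

def should_bid(bid_request: dict) -> bool:
    """Return True only if the request shows no obvious automation signals."""
    # 1. Reject traffic whose fingerprint was previously flagged.
    if bid_request.get("device_fingerprint") in KNOWN_BOT_FINGERPRINTS:
        return False
    # 2. Reject requests that self-identify as automated via the user agent.
    ua = bid_request.get("user_agent", "").lower()
    if any(token in ua for token in ("bot", "crawler", "spider", "headless")):
        return False
    # 3. Reject implausibly high request rates from a single source.
    if bid_request.get("requests_per_minute", 0) > 600:
        return False
    return True

requests = [
    {"device_fingerprint": "fp-9f2c", "user_agent": "Mozilla/5.0"},
    {"device_fingerprint": "fp-0001", "user_agent": "HeadlessChrome/120"},
    {"device_fingerprint": "fp-0002", "user_agent": "Mozilla/5.0",
     "requests_per_minute": 12},
]
biddable = [r for r in requests if should_bid(r)]
```

Real systems layer far richer technical evidence on top of rules like these, but the structural point is the same: the filtering happens before the bid, so bot traffic never reaches the ad supply.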

There are so many different layers to bots. They’re not all bad, for instance. Can you explain the role of the “good bots”?

It’s probably not obvious to most people that there are bots that are necessary to make the internet and digital advertising work well. For instance, there are creative scanners that scan a creative both before it goes live and mid-campaign. They look for things like performance, load time, content alignment, malware insertions, all types of things. Then there are things like search engines, the Google bot, the Bing bot.

These are bots that are designed to do good. The distinction people have to understand is that there’s a difference between an ad being served to a bot and an ad being counted as an impression on a billing report. The good bots are infrequent compared to true audiences, and people know how to filter them out of what might be considered a billable impression event.
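The served-versus-billed distinction can be illustrated with a small sketch: declared crawlers are served normally but excluded when impressions are counted for billing. The crawler tokens below are a tiny hypothetical subset, not any official known-crawler list.

```python
# Sketch: exclude declared "good bot" traffic from billable impression counts.
# The crawler tokens here are a hypothetical subset of well-known crawlers.
DECLARED_CRAWLERS = ("googlebot", "bingbot", "adsbot")

def is_billable(impression: dict) -> bool:
    """An impression is billable only if the user agent is not a declared crawler."""
    ua = impression.get("user_agent", "").lower()
    return not any(token in ua for token in DECLARED_CRAWLERS)

impressions = [
    {"id": 1, "user_agent": "Mozilla/5.0 (compatible; Googlebot/2.1)"},
    {"id": 2, "user_agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0)"},
    {"id": 3, "user_agent": "Mozilla/5.0 (compatible; bingbot/2.0)"},
]
billable = [i for i in impressions if is_billable(i)]
```

All three ads were served, but only the human-looking impression is counted — which is exactly the distinction between serving an ad to a good bot and billing for it.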

What is the harm behind bad bots? How do they hurt performance for platforms and publishers?

Bot activity can skew targeting and audience profiling, and advertisers want to reach their specific audience. If bots are not detected and filtered, advertisers can wind up retargeting those bots. That’s really where you get into a lot of this waste, fraud and abuse: great advertisers trying to reach an audience and instead targeting bots. Of course, fraud manifests itself in many ways, and there are increasing amounts of fraud that don’t involve bots or automation at all.
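The retargeting problem described above comes down to pool pollution: if flagged IDs are not removed before the retargeting audience is built, subsequent spend follows the bots. A toy example with hypothetical visitor IDs and flags:

```python
# Toy example of bot traffic polluting a retargeting pool (hypothetical data).
site_visitors = ["u1", "u2", "b1", "u3", "b2", "b1"]  # raw visitor IDs from a campaign
flagged_bots = {"b1", "b2"}                           # IDs flagged by bot detection

# Without filtering, flagged bots land in the retargeting audience,
# so retargeting budget is partly spent re-reaching automation.
naive_pool = set(site_visitors)

# With filtering, the audience shrinks to genuine visitors.
filtered_pool = naive_pool - flagged_bots
```

Here two of five unique "visitors" are bots, so a naive retargeting campaign would waste a large share of its impressions before a single human is reached.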

How do you then balance the need for security with performance demands of ad platforms and publishers?

When you really get into it, there’s not actually a conflict between security and performance. On the surface they seem diametrically opposed, but they’re not. Human not only filters the fraud; it also shines a light on supply chains. Human’s tech basically lights up the room for platforms: they can now see the entire supply path, which allows them to reduce duplicative supply, wasteful impressions, all types of redundancy. So people activate security saying, “Oh, I want to stop theft.” But a very positive by-product is saying, “Oh, now that I’m looking for the theft, I’m also seeing this other category of waste and redundancy that, if I eliminate it, improves platform efficiency.”

Is it getting more challenging to detect fraud?

When the company launched, we were dealing with very basic, primitive botnets, and they were relatively easy to detect. Today, there are well-resourced threat actor groups; think of them as mini cybercriminal enterprises. They are increasingly sophisticated. They have all the tools: They can use machine learning, they can use LLMs, they can use residential proxy networks, they can use malware to spread their stuff.

There was a famous mathematician named Claude Shannon who had a saying, which applies really well, which is “the enemy knows the system.” That is what we are dealing with. We are dealing with an adversary that understands digital advertising the same or better than many of the employees who actually work in digital advertising. So that’s the challenge of sophisticated fraudsters.

Can you give me an example of a non-bot threat?

You can have a scenario where a real human is on their real iPhone using an app they got from the Apple App Store, and unbeknownst to them, there are all these hidden video players being instantiated and making bid requests, all on their superfast LTE or 5G network connection. That is very different from what was going on in 2013, when someone ran a botnet from their Windows computer in their apartment.