Engagement Metrics Can Help Publishers Detect Ad Fraud

This article was originally published on AdExchanger.
Ad fraud is present across all layers of the advertising ecosystem, but there is one behavioral factor that is more likely to predict the presence of fraudulent bots than any other: third-party traffic sourcing.
Fifty-two percent of sourced traffic was found to be bot fraud in a recent study [PDF] by White Ops and the Association of National Advertisers (ANA). This should raise a red flag for publishers, whose use of paid traffic-generating sources has increased as they seek to generate more impressions, fulfill advertising minimums and grow their audiences. As a result, botnet operators have stepped in to take advantage of the dollars funneling through these channels.
Publishers, however, can combat fraudulent bots by keeping a close eye on their third-party partners, diving into metrics most likely to indicate ad fraud and proactively cutting out underperformers and suspicious sources. The time-on-site metric may be one of the most powerful measures to help publishers combat bot-based fraud.
Bot traffic is becoming more sophisticated and human-looking every day, so using a combination of third-party verification, Google Analytics and big data resources is essential to catch evolving sources of fraud. As a starting point, analyzing a few key metrics in Google Analytics and associating the data points by referring domain can provide early indicators for identifying questionable traffic.
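For illustration, here is a minimal sketch of that starting point in Python, assuming a hypothetical CSV export of referral data from Google Analytics. The file name and column names are assumptions for the example, not a prescribed format.

```python
import pandas as pd

# Hypothetical Google Analytics referral export: one row per referring
# source, with columns source, sessions, pages_per_session and
# avg_session_duration (in seconds).
df = pd.read_csv("ga_referral_export.csv")

# Session-weighted site averages for each engagement metric.
weights = df["sessions"] / df["sessions"].sum()
site_avg = {
    "pages_per_session": (df["pages_per_session"] * weights).sum(),
    "avg_session_duration": (df["avg_session_duration"] * weights).sum(),
}

# Express each source's metrics as a ratio to the site average so that
# outliers in either direction stand out at a glance.
for metric, avg in site_avg.items():
    df[f"{metric}_vs_site"] = df[metric] / avg

print(df.sort_values("sessions", ascending=False).head(20))
```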
Page Depth And Browser Behavior
The practice of purchasing traffic is common among publishers of all sizes, even premium publishers, which often have dedicated audience acquisition budgets. But the practice is rife with potential pitfalls. This isn’t to say that publishers will or should stop their traffic acquisition efforts, since many services provide legitimate ways of acquiring new audiences and real readers.
For many years, it was relatively easy to spot bot traffic. Offending referring domains would often show a page depth of just one page viewed per visit. By comparison, a typical site average is at least 1.1 pages per visit, and usually higher, because real humans are in the mix.
Today’s bots tend to be more sophisticated and can generate many page views per visit to avoid instant detection. Often, however, those views are generated in far less time than it would take a real human to read the same number of pages.

Within the referral channel grouping, Google Analytics’ comparison graph highlights outliers in pages per session. All graphics courtesy of Manny Puentes.
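One rough way to operationalize that pattern is a seconds-per-page check, as in the sketch below. It assumes the same hypothetical referral export as above, and the five-second floor is an illustrative threshold to tune per site, not an industry standard.

```python
import pandas as pd

df = pd.read_csv("ga_referral_export.csv")

# Average seconds spent per page viewed, per referring source.
df["seconds_per_page"] = df["avg_session_duration"] / df["pages_per_session"]

# A real reader needs at least a few seconds per page; many bots do not.
MIN_HUMAN_SECONDS_PER_PAGE = 5  # illustrative threshold, tune per site

too_fast = df[df["seconds_per_page"] < MIN_HUMAN_SECONDS_PER_PAGE]
print(too_fast[["source", "pages_per_session", "seconds_per_page"]])
```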
Bots are also much more common in older browsers than newer ones, as older versions are more susceptible to hijacking and malware. The White Ops/ANA study showed that a disproportionate share of impressions generated by Internet Explorer 6 and 7 came from bots: 58% and 46%, respectively.
If a referring domain shows a browser makeup that’s markedly different from the overall site average, it’s worth digging into other potentially high-risk metrics to determine whether that source is problematic and possibly fraudulent.

Suspicious traffic sources can show a far higher share of Internet Explorer use than the overall site average.
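As a sketch of that comparison, the snippet below computes each referring source’s Internet Explorer share against the site-wide share. It assumes a hypothetical Google Analytics export of sessions broken out by source and browser; the 3x multiplier is an arbitrary illustrative cutoff.

```python
import pandas as pd

# Hypothetical export with columns: source, browser, sessions.
df = pd.read_csv("ga_browser_export.csv")

# Site-wide share of sessions per browser.
site_share = df.groupby("browser")["sessions"].sum() / df["sessions"].sum()

# Share of sessions per browser within each referring source.
by_source = df.pivot_table(index="source", columns="browser",
                           values="sessions", aggfunc="sum", fill_value=0)
source_share = by_source.div(by_source.sum(axis=1), axis=0)

# Compare each source's Internet Explorer share to the site norm.
ie_browsers = [b for b in source_share.columns if "Internet Explorer" in b]
site_ie = site_share[site_share.index.isin(ie_browsers)].sum()
source_ie = source_share[ie_browsers].sum(axis=1)

# Flag sources whose IE share is several times the site average.
print(source_ie[source_ie > 3 * site_ie].sort_values(ascending=False))
```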
Time On Site
While other session-based signals can point to questionable traffic, time on site can be the most powerful metric for combating bot-based fraud because of its importance to both publishers and advertisers. The metric is among the most meaningful to all parties when it comes to identifying truly engaged, and reliably human, audiences.
A session lasting a few seconds isn’t going to be inherently valuable to a publisher or advertiser, whether that session is produced by a bot or a human. Yet impression-based revenue models, notably cost per mille, have driven the growth of third-party traffic sources aimed solely at providing as many impressions per dollar as possible, with no consideration of actual reader engagement.

Find suspicious traffic domains by diving into the average session duration per source.
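A simple version of that review might rank referring sources by average session duration and shortlist the laggards for closer inspection or termination, as in the sketch below. The 50% cutoff is an assumption to tune, not a rule.

```python
import pandas as pd

df = pd.read_csv("ga_referral_export.csv")

# Session-weighted site average for time on site.
site_avg = (df["avg_session_duration"] * df["sessions"]).sum() / df["sessions"].sum()

# Shortlist sources delivering well below the site-wide engagement level.
review = df[df["avg_session_duration"] < 0.5 * site_avg]
print(review.sort_values("avg_session_duration")[
    ["source", "sessions", "avg_session_duration"]])
```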
Some publishers are experimenting with transacting on time spent on site instead of traditional impressions, especially as native content and video become more meaningful revenue sources. Most notably, the Financial Times recently announced it would sell display ads based on time spent on site, charging a fixed amount for every second that a visitor actively engages with the content. The idea is that high-quality content and loyal readers will translate into more time spent with the publisher’s content and the brand’s creative, yielding more long-term value for advertisers.
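To make the incentive shift concrete, here is a hypothetical comparison of the two models. The rates and engagement figures are invented for arithmetic only and do not reflect the Financial Times’ actual terms.

```python
# Invented rates for arithmetic only; not any publisher's actual terms.
CPM_RATE = 5.00           # dollars per 1,000 impressions
PER_SECOND_RATE = 0.0002  # dollars per actively engaged second, per impression

impressions = 100_000
human_engaged_seconds = 15  # plausible active exposure for a real reader
bot_engaged_seconds = 1     # a bot that bounces almost immediately

cpm_revenue = CPM_RATE * impressions / 1_000
human_time_revenue = PER_SECOND_RATE * human_engaged_seconds * impressions
bot_time_revenue = PER_SECOND_RATE * bot_engaged_seconds * impressions

print(f"CPM model, any traffic:      ${cpm_revenue:,.2f}")        # $500.00
print(f"Time model, engaged humans:  ${human_time_revenue:,.2f}")  # $300.00
print(f"Time model, bounce-out bots: ${bot_time_revenue:,.2f}")    # $20.00
```

Under the impression-based model, bot traffic earns exactly what human traffic earns; under the time-based model, barely engaged sessions are worth a fraction as much, which blunts the incentive to buy cheap sourced traffic.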
The time-on-site metric also plays strongly into viewability and the number of seconds that a reader is visually exposed to a brand’s message – both increasingly vital performance measures for digital advertisers.
Among its extensive recommendations, the White Ops/ANA study suggested that advertisers maintain the right not to buy impressions based on sourced traffic. While it remains to be seen whether advertisers will take this to heart, publishers should proactively clean up their third-party traffic sources, working to eliminate any potential for fraud.
By keeping the traffic sources with higher overall engagement metrics and terminating those with below-average performance, publishers can deliver real audiences that meet the metrics that matter to advertisers.