Rebel AI Acquired by Logiq


By Manny Puentes, CEO & Founder, Rebel AI

When we started Rebel AI in 2016, we set out with a mission to build best-in-class advertising products that help marketers reach their customers securely at scale. We wanted to build an advertising platform from the ground up that gave small and medium-sized agencies and businesses the ability to compete in a market dominated by deeper pockets. Leveraging our twenty-plus years of in-market experience, we have done exactly that. We successfully built powerful data management and media buying products and formed lasting relationships with our clients across the world.

Today I’m excited to announce the next step in our journey: Rebel AI has been acquired by global e-commerce leader Logiq. I am thrilled about what this means for the future of Rebel AI’s clients, partners, and employees. This was not a decision I made lightly or out of necessity. I saw an aligned vision and complementary assets that would accelerate our ability to provide a great SMB e-commerce marketing offering.

Logiq understands today’s digital commerce landscape, and how important it has become for small businesses to reach new customers and grow their online presence. Our robust programmatic media buying and customer data management platform perfectly complements Logiq’s vision to democratize customer acquisition across channels. 

We also believe that Logiq’s audience segmentation tools and data service offerings bring even more value to our current clients, allowing us to bring powerful new features and functionality to the platform. 

Finally, Logiq shares our philosophy of pricing transparency, ensuring our clients get the maximum value out of their media dollars without hiding behind black box fees or big dollar minimums. 

As of today, we are excited to become Logiq Digital Marketing. We’ve put together a short video to talk about what this means for the future of both companies.

As a new team, we’re excited to continue helping small and medium-sized businesses and agencies achieve their marketing goals, and we look forward to what we can accomplish together.


First-Party Consent Can Replace Third-Party Cookies


Originally published at AdExchanger.com

Google’s recent decision to deprecate third-party cookies in Chrome will severely cripple browser-based targeting, cross-site tracking, frequency capping and retargeting. Ad platforms will be blind to everything except the contextual attributes passed with each opportunity to serve an ad.

Third-party cookies have been an anonymous, necessary evil to deliver highly targeted ads. Though third-party cookies are discussed in terms of consumer privacy, the third-party cookie itself is actually completely anonymous. Platforms can’t determine who you are based on the generated ID representing your device. Real privacy concerns start to proliferate when you mix third-party cookies with form-data, such as email, first name and last name, and send that data along with a cookie.

If you extrapolate this use case and allow for that same ID to persist from site to site, data platforms can get smarter about your behavior and interests.

Email won’t save us

Any ID, cookie or otherwise, that can eventually be tied to form-data/PII will become a privacy issue. In light of the coming changes to Chrome, some companies have announced they will use email or other forms of identifiers to replace some of the consumer targeting that will be lost.

Some platforms are proposing email as the Rosetta Stone to “anonymously” identify the consumer. Let’s take cross-site tracking as an example. If platforms are left with first-party cookies, all of the data will be siloed by site. That means the consumer will have a different first-party cookie ID from site to site as they surf the internet. In this new paradigm, email will be used as the key to stitch the data together to reveal behavior and interests, same as before, except with strong standardized joining criteria for offline data.

Email represents an ID tied to the consumer rather than the device, which is even more intrusive. With this change, consumers can further be tied to offline data, such as home refinancing applications or store visits if they gave their email to receive digital receipts.

Yes, I know it’s hashed and “anonymous,” and can’t be reverse-engineered. But an entity with raw consumer data and consumer emails can continue the practice of linking form-data / PII, therefore identifying the consumer all over again.

The problem isn’t a technical one. The industry will eventually figure out a way to technically track consumers. The real challenge is abiding by the emerging policy, legislation and regulation requirements that dictate what consumer data companies can and cannot collect.

The consent solution

Sites are highly dependent on the first-party cookie, and as the industry transitions to using first-party cookies to target advertising, consent becomes a more controllable asset. This is a new opportunity for consent platforms to provide the gateway to ensure that the needs of consumers and the ad ecosystem are met.

Consent platforms have come a long way in establishing a strong foundation to protect the consumer. As an industry, we are finally giving consumers the ability and opportunity to not be tracked. 

In tandem with these platforms, there is technically still a way to use first-party cookies for cross-site tracking, frequency capping, targeting and retargeting while keeping the ID anonymous, with no need for a hashed email or any other PII.

Let’s say I browse to cnn.com, receive a prompt to allow cookies and hit “Allow.” If the consent platform took the “cnn.com” location in the browser and reset it to point to “optin.com?url=http://cnn.com,” a first-party cookie could be set on “optin.com.” If optin.com then immediately redirected back to “http://cnn.com?opt_in=123,” the ID could be read off the URL and stored as a first-party cookie on “cnn.com” with the key “opt_in” and the value 123.

With JavaScript on cnn.com querying for “opt_in” and passing the value to ad platforms on the URL, along with any metadata appended as a query string parameter, this workflow would re-enable targeting and cross-site tracking on subsequent calls. Because the redirect happens after you hit “Allow,” “optin.com” would hand back the same ID any time the consumer allows cookies for tracking.
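To make the moving parts concrete, here is a minimal, hypothetical sketch of the publisher-side script for this flow (written in TypeScript). The domain names, the “opt_in” key and the function names are illustrative assumptions, and the optin.com side of the redirect is summarized only in a comment.

```typescript
// Hypothetical sketch of the opt-in flow described above; domain names,
// the "opt_in" key and these function names are illustrative only.

const OPT_IN_KEY = "opt_in";

// Step 1: after the user clicks "Allow," send the browser to optin.com,
// passing the current page so the service can redirect back.
// (On its side, optin.com would set its own first-party cookie and
// immediately redirect to `${url}?opt_in=<id>`.)
function redirectToOptIn(): void {
  const returnUrl = encodeURIComponent(window.location.href);
  window.location.href = `https://optin.com/?url=${returnUrl}`;
}

// Step 2: back on the publisher page, read the ID off the query string
// and persist it as a first-party cookie on the publisher's own domain.
function captureOptInId(): string | null {
  const id = new URLSearchParams(window.location.search).get(OPT_IN_KEY);
  if (id) {
    document.cookie = `${OPT_IN_KEY}=${id}; path=/; max-age=31536000; SameSite=Lax`;
  }
  return id;
}

// Step 3: append the shared ID to ad requests so platforms can use it
// for targeting, frequency capping and cross-site measurement.
function buildAdRequest(baseUrl: string): string {
  const match = document.cookie.match(new RegExp(`${OPT_IN_KEY}=([^;]+)`));
  return match ? `${baseUrl}?${OPT_IN_KEY}=${encodeURIComponent(match[1])}` : baseUrl;
}
```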

For this to work, standards and specifications will be paramount, and the IAB must play a crucial role in standardizing the first-party cookie workflow outlined above. For example, we would need the key for the first-party cookie to have a unique, standardized name so that platforms interested in passing the ID (opt_in=123) know which key to query on the first-party cookie.

An open consortium would also be needed to manage and own the “optin.com” domain, the services required to apply the redirect and the open-sourced JavaScript to set first-party cookies off of the URL to later be queried by other platforms.

The aforementioned workflow would only activate after hitting “Allow Cookies” on a consent platform. As you can see, the ecosystem would all share the same ID when targeting and tracking, granting the consumer more control over consent and providing the roadmap for a safer consumer experience.

There will always be a workaround to track the consumer. While the industry is fretting about the death of the third-party cookie, the real problem is not a technical one. The issue remains what we are legally able to collect on the consumer while adhering to evolving standards surrounding consent and privacy.

We should also be looking at the data that Facebook and Google are collecting. In-home devices, Gmail, Google Documents, Google Maps, Search and Google Apps are all collecting data on a first-party basis, and killing the third-party cookie will do absolutely nothing to stop them from collecting data and monopolizing the advertising market. In fact, it will only empower their initiatives.

I’m optimistic about the long-term opportunities that this change heralds. The third-party cookie was messy for reasons not related to privacy. Though Google and other industry giants have given themselves an advantage, this change will spur the rest of the industry to innovate, creating new solutions to compensate for the changing environment. The death of the third-party cookie truly empowers us all to come together and build a seamless environment that adheres to privacy controls managed by one ID that represents a consumer and their consent.


Could A Consumer Taxonomy Fill The Identity Void In A Cookie-less World?


This article was originally published on AdExchanger.

The death of the cookie has been predicted since at least 2013, but the third-party cookie has lingered so long because the advertising industry still depends on it – more than most will admit.

Everyone praises the first-party cookie, but advertising buying platforms continue to use third-party cookies to sync with first-party data. The third-party cookie is key to retargeting, cross-site frequency capping and customer data collection and profile creation, for starters.

Without third-party cookies to connect data for audience-based buying, targeting becomes challenging and moves strictly to contextual buying, with no historical view of the consumer. On mobile devices, outside of the cookie, the IDFA and Mobile Ad ID are used for targeting but require explicit opt-in consent for GDPR compliance.

The once-touted Advertising ID faced a major blow last fall when AppNexus pulled out of the consortium. With too many large players defending their territory and respective identity profiles, the hope for a universal cookie-based solution waned.

Instead of approaching the problem from a universal cookie angle, an independent trade group like the IAB should step in and create a standardized consumer taxonomy that will allow for identity- and interest-based targeting while still maintaining user privacy.

The IAB already has a content taxonomy to identify content types on a given page; a consumer taxonomy would function similarly. Just as the content taxonomy is used across protocols such as OpenRTB to target context-based advertising, the appropriate consumer taxonomy categories would be registered and passed along before a page loads. Advertising platforms could then read those categories to target consumer profiles in specific segments without using the cookie for profile-based targeting.

To follow the consumer across sites to build accurate profiles around this taxonomy, an identity provider would still need to integrate directly with publishers at the DNS level. This provider would have a DNS entry like “id.nytimes.com,” which would send the first-party cookie back to the provider’s taxonomy pool. The identity provider would take that first-party data at the beginning of the transaction and be able to respond with the matching taxonomy responses.

So if I visited cnn.com, my session data would be sent to the identity provider, which would match it to IAB-defined categories like “Cycling Enthusiast” and pass them along to advertising platforms, together with a confidence score.
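To illustrate, a taxonomy response passed to ad platforms might look something like the sketch below. The category code, field names and confidence value are hypothetical, since no such IAB consumer taxonomy exists today.

```typescript
// Illustrative shape of a consumer-taxonomy response an identity provider
// might return before the page loads; the category codes and field names
// are assumptions, not an existing IAB specification.

interface ConsumerSegment {
  taxonomyId: string;   // hypothetical IAB-style category code
  label: string;        // e.g. "Cycling Enthusiast"
  confidence: number;   // provider's confidence score, 0 to 1
}

interface ConsumerTaxonomyResponse {
  segments: ConsumerSegment[];
}

// Example payload passed along to advertising platforms with the ad request.
const exampleResponse: ConsumerTaxonomyResponse = {
  segments: [
    { taxonomyId: "IAB-CT-101", label: "Cycling Enthusiast", confidence: 0.87 },
  ],
};
```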

That taxonomy data could then be passed on to exchanges and advertising platforms without the need to cookie-tie, and it could provide interest-based advertising at the consumer level, not just the content level.

Paywalled publishers would have a leg-up in this new model, as they would be able to more accurately identify logged-in users, similar to how Facebook and Google have had an advantage with their sign-in requirements across their platforms.

The consumer specification or taxonomy would also include a notion of a rolling identifier – a universal unique identifier (UUID) like 57990d49-07d9-4b85-bc29-2616035cc57d, for example – which would rotate every 48 hours and allow for consumers to be temporarily tracked for frequency capping and retargeting. Identity providers implementing the specification would need to maintain something of a Rosetta Stone for passing the same UUID across sites.
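As a rough sketch of how such a rolling identifier could be derived, an identity provider might hash a stable internal ID together with a 48-hour time bucket. The scheme below is purely illustrative and not part of any published specification.

```typescript
import { createHmac } from "crypto";

// Minimal sketch of a 48-hour rolling identifier, assuming the identity
// provider derives it from a stable internal ID plus the current time
// bucket. The derivation scheme and names are illustrative.

const ROTATION_MS = 48 * 60 * 60 * 1000;

function rollingId(stableInternalId: string, secret: string, now = Date.now()): string {
  const bucket = Math.floor(now / ROTATION_MS); // changes every 48 hours
  const digest = createHmac("sha256", secret)
    .update(`${stableInternalId}:${bucket}`)
    .digest("hex");
  // Format the first 32 hex characters as a UUID-like token for the bid stream.
  return [
    digest.slice(0, 8),
    digest.slice(8, 12),
    digest.slice(12, 16),
    digest.slice(16, 20),
    digest.slice(20, 32),
  ].join("-");
}
```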

This open-source, cookie-less specification would allow any company to accurately identify and reach their target audiences and share the most pertinent information across platforms. While this system is admittedly a looser coupling of a person’s profile than it would be in a cookie-based world, it provides more privacy for the consumer while still allowing platforms to provide personalized advertising.

As an open-source solution, a consumer taxonomy gives every company in the industry access to its benefits and provides a new common language for transacting media, paving the way for the next generation of consumer-based marketing.


In The GDPR Era, Publishers Also Need A Data Opt-Out


This article was originally published on AdExchanger.

The average page load on an ad-supported website includes 172 network requests just for advertising, according to recent research. That’s 172 opportunities for a publisher’s audience data to be collected, stored and re-monetized by other partners every time a page displays.

The rise of header bidding has only made this data leakage worse. One agency buyer recently admitted that buyers frequently bid on impression opportunities they never intend to win so they can collect publisher audience data and sell to the same readers at a much cheaper cost elsewhere.

Most publishers have a challenging time unearthing this kind of behavior, let alone preventing it. The growing importance of data governance, however, has made controlling the issue imperative.

Programmatic’s Leaky Data Pipes

In programmatic buying and selling, supply-side platforms (SSPs) and demand-side platforms rarely fill 100% of a publisher’s inventory; instead, they work with other platforms to fill the remainder.

This means that every time an SSP is called, it may call other platforms. For example, on any given page load on CNN.com, Amazon Associates may call OpenX, AppNexus, and PubMatic, and PubMatic may, in turn, call The Trade Desk, RocketFuel, BrightRoll and others.

Each platform collects and stores valuable audience data, whether or not they ever actually fill the ad space and contribute to revenue.

The Limits Of Human Intervention

With this much complexity, publishers have limited options in controlling the data leaking from their sites. Some publishers have implemented strict contracts with their partners about who can collect their data and, in some cases, they prohibit the use of cookies for data capture. While this limits potential programmatic partnerships, these publishers retain the use of their data and can run their own retargeting campaigns directly with advertisers who want to reach a high-value audience.

Other publishers rely on browser monitoring tools to track data pixels on their site and use the information to chase down noncompliant partners. But for the most part, publishers have to trust their programmatic partners not to fire unnecessary pixels or JavaScript. In cases of discovered violations, they rely on contract enforcement, which can become time-consuming and costly.

While these are all options publishers pursue today, these practices won’t be sustainable or successful in the coming era when data – and the successful management of it – truly defines a publisher’s value. It’s time for a more sophisticated solution.

Defining Domain Data Permissions

With the advent of General Data Protection Regulation (GDPR), publishers and platforms are looking more closely than ever at data governance practices. The attention has primarily focused on individual consumer privacy rights, but since GDPR directly affects publishers, which could be held liable for data misuse, publishers also need to advocate for their own data rights and protections.

To start, publishers need to work with trade groups to establish data-collection specifications and standards, and with browser vendors to enforce them.

For instance, publishers could create a way to specify which platforms and companies are allowed to fire pixels and collect data, which would be the equivalent of a data whitelist. Browsers could enforce this by controlling the code that is allowed to execute on a given site. Browsers such as Chrome have indicated they will begin adhering to the Better Ads Standards created by the Coalition for Better Ads as soon as next month, and domain data permissions could become a logical next step to better protect both publisher and consumer data.
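As a rough illustration, such a permissions declaration might look like the following sketch. The format, field names and example domains are assumptions rather than an existing standard, conceptually similar to how ads.txt declares authorized sellers.

```typescript
// Hypothetical "domain data permissions" declaration a publisher might
// host; the format, field names and example domains are assumptions,
// not an existing standard.

interface DataPermission {
  domain: string;       // platform allowed to collect data on this site
  purposes: string[];   // what it is allowed to collect data for
}

const dataPermissions: DataPermission[] = [
  { domain: "ssp-partner.example", purposes: ["ad-serving", "frequency-capping"] },
  { domain: "analytics.example", purposes: ["measurement"] },
];

// A browser or tag manager enforcing the list would block any collection
// request whose destination is not declared.
function isCollectionAllowed(requestDomain: string): boolean {
  return dataPermissions.some(
    p => requestDomain === p.domain || requestDomain.endsWith(`.${p.domain}`)
  );
}
```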

Just as consumers have the option to opt out of data collection, publishers should assert control over who can collect data from their sites. As data ownership increasingly defines value, it is the publishers and consumers who should have the tools necessary to properly control it.


In the Age of AI, Publishers Need Authenticated Audiences


This article was originally published on AdExchanger.

Though Facebook has only recently come under fire for fake user profiles on its platform, the problem of fake profile data has long plagued programmatic media trading.

Lotame, for example, recently purged more than 400 million user profiles after identifying them as bots. The problem is so profound in programmatic that some research has even shown that targeting users at random was just as effective as targeting using third-party data segments.

But data fraud is just as big a problem for publishers as it is for advertisers. Today, the value of publisher audiences in bidding algorithms is increasingly defined by data, and bad data is potentially driving down the price of their inventory.

How does this happen? Ad fraud networks frequently create bots that mimic real users, and those bots are programmed to visit a site like nytimes.com to increase the value of their cookie or data. The publishers visited by a bot become part of the bot’s “user” profile, but because the bot isn’t real and, depending on its sophistication, will never convert into a sale, the value of nytimes.com and the other publishers used to create the fake profile drops in algorithmic trading.

This is why it’s imperative that publishers, as much as advertisers, push for more standards and authentication around their audience data.

Much of the problem for both advertisers and publishers comes from the fact that, in today’s data-driven world, anyone can fire a pixel via HTML or JavaScript in a browser or standalone program. Ad platforms and data management platforms both ingest the cookie and its pixel data and use it to target their creatives with little attention paid to the provenance of the data contained in the cookie or ad ID.

As data becomes more of a currency and begins to be regulated more stringently, industry trade groups need to start focusing on standards around data ingestion, with a focus on the accuracy and authenticity of the data. While I’m not aware of a company currently providing this service, publishers can help advance this kind of initiative by working with data providers and collectors on measures that require data pixels to be digitally signed or verified, while also controlling who is authorized to collect data from their sites. Stricter data controls and standards will also help publishers combat data leakage and protect long-term publisher value.
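As a rough illustration of what a signed data pixel might look like, here is a minimal sketch assuming the publisher and an authorized collector share a signing key (a public-key scheme would work similarly). The endpoint, parameter names and helper functions are hypothetical.

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Sketch of a digitally signed data pixel, assuming a shared signing key.
// The endpoint and parameter names are illustrative only.

function signPixelUrl(params: Record<string, string>, key: string): string {
  const query = new URLSearchParams(params).toString();
  const sig = createHmac("sha256", key).update(query).digest("hex");
  return `https://pixel.example/collect?${query}&sig=${sig}`;
}

// A downstream platform only ingests data whose signature checks out.
function verifyPixelUrl(url: string, key: string): boolean {
  const parsed = new URL(url);
  const sig = parsed.searchParams.get("sig") ?? "";
  parsed.searchParams.delete("sig");
  const expected = createHmac("sha256", key)
    .update(parsed.searchParams.toString())
    .digest("hex");
  return sig.length === expected.length &&
    timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
}
```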

Creating a working group within the IAB around data verification standards would help advance some of the techniques and engineering needed to solve this complex problem. For example, having JavaScript pull the location field in the browser and append it to the URL or add it to HTTP headers isn’t good enough anymore to determine provenance. We should be working with browser vendors to propose a standard that will facilitate the authenticity of the data. This would allow downstream platforms using the data generated from a page load to only apply data to segments that have been verified and reconciled.

As the technology matures, blockchain also has a role to play in data authenticity by establishing an immutable identity for consumers and publishers alike, while providing ledgers that can prove the origin of data and its association with a particular publisher and real site visitor at the consumer level. Publishers need to get involved with working groups and advocate for change in an industry that has multiple influential stakeholders.

Authentic identity will define the future of digital advertising. As the upcoming General Data Protection Regulation puts more onus on companies to understand identity and give consumers controls over that identity, the industry needs to be united in creating the data security framework that will define the next generation of media trading. Publishers in particular have an important role to play in establishing the standards that will define their value in an automated world.


Engagement Metrics Can Help Publishers Detect Ad Fraud


This article was originally published on AdExchanger

Ad fraud is present across all layers of the advertising ecosystem, but there is one behavioral factor that is more likely to predict the presence of fraudulent bots than any other: third-party traffic sourcing.

Fifty-two percent of sourced traffic was bot fraud in a recent study [PDF] by White Ops and the Association of National Advertisers (ANA). This should raise a red flag for publishers, whose use of paid traffic-generating sources has increased as they seek to generate more impressions, fulfill advertising minimums and grow their audiences. As a result, botnet operators have stepped in to take advantage of the dollars funneling through these channels.

Publishers, however, can combat fraudulent bots by keeping a close eye on their third-party partners, diving into metrics most likely to indicate ad fraud and proactively cutting out underperformers and suspicious sources. The time-on-site metric may be one of the most powerful measures to help publishers combat bot-based fraud.

Bot traffic is becoming more sophisticated and human-looking every day, so using a combination of third-party verification, Google Analytics and big data resources is essential to catch evolving sources of fraud. As a starting point, analyzing a few key metrics in Google Analytics and associating the data points by referring domain can provide early indicators for identifying questionable traffic.

Page Depth And Browser Behavior

The practice of purchasing traffic is common among publishers of all sizes, even premium publishers, which often have dedicated audience acquisition budgets. But the practice is rife with potential pitfalls. This isn’t to say that publishers will or should stop their traffic acquisition efforts, since many services provide legitimate ways of acquiring new audiences and real readers.

For many years, it was relatively easy to spot bot traffic. Offending referring domains would often reveal a session depth of just one page viewed per visit. In comparison, a typical site average is at least 1.1 pages viewed per visit, and usually higher, since real humans are in the mix.

Today’s bots tend to be more sophisticated and can generate many page views per visit to avoid instant detection. Often, however, those views are generated in a much shorter period than it would take a real human to see the same number of pages.


Within the referral channel grouping, Google Analytics’ comparison graph highlights outliers in pages per session. All graphics courtesy of Manny Puentes.

Bots are also much more common in older browsers than newer ones, as older versions are more susceptible to hijacking and malware. The White Ops/ANA study showed that a disproportionate amount of impressions generated by Internet Explorer 6 and 7 were bots – 58% and 46% respectively.

If a referring domain shows a browser makeup that’s markedly different from the overall site average, it’s worth digging into other potentially high-risk metrics and seeing if that source is problematic and possibly fraudulent.


Suspicious traffic sources can show markedly higher use of Internet Explorer than the overall site average.

Time On Site

While other session-based signals can surface instances of questionable traffic, time on site can be the most powerful metric to combat bot-based fraud because of its importance to both publishers and advertisers. The metric is among the most meaningful to all parties when it comes to identifying truly engaged – and reliably human – audiences.

A session lasting a few seconds isn’t going to be inherently valuable to a publisher or advertiser, whether that session is produced by a bot or a human. Yet impression-based revenue models, notably cost per mille, have driven the growth of third-party traffic sources aimed solely at providing as many impressions per dollar as possible, with no consideration of actual reader engagement.


Find suspicious traffic domains by diving into the average session duration per source.
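Taken together, these signals can be checked programmatically. Below is a rough sketch that assumes per-referrer rows exported from Google Analytics; the field names and thresholds are illustrative assumptions, not a vetted detection model.

```typescript
// Rough sketch for flagging suspicious referrers from per-source rows
// exported from an analytics tool. Field names and thresholds are
// assumptions for illustration only.

interface ReferrerStats {
  source: string;               // referring domain
  pagesPerSession: number;
  avgSessionSeconds: number;
  legacyBrowserShare: number;   // fraction of sessions on IE 6/7-era browsers
}

function flagSuspiciousSources(rows: ReferrerStats[], siteAvg: ReferrerStats): string[] {
  return rows
    .filter(r =>
      // many pages viewed in implausibly little time
      (r.pagesPerSession > 2 * siteAvg.pagesPerSession && r.avgSessionSeconds < 10) ||
      // far more legacy browsers than the site as a whole
      r.legacyBrowserShare > 5 * siteAvg.legacyBrowserShare ||
      // near-zero engagement regardless of page depth
      r.avgSessionSeconds < 0.2 * siteAvg.avgSessionSeconds
    )
    .map(r => r.source);
}
```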

Some publishers are experimenting with transacting on the idea of time spent on site instead of traditional impressions, especially as native content and video become more meaningful revenue sources. Most notably, the Financial Times recently announced it would sell display ads based on time spent on site by charging a fixed amount for every second that a visitor actively engages with the content. The thought is that high-quality content and loyal readers will result in more time spent engaging with the publisher content and brand creative, leading to more long-term value for advertisers.

The time-on-site metric also plays strongly into viewability and the number of seconds that a reader is visually exposed to a brand’s message – both increasingly vital performance measures for digital advertisers.

As part of its extensive recommendations, the White Ops/ANA study suggested that advertisers maintain the right not to buy impressions based on sourced traffic. While it remains to be seen whether advertisers will take this to heart, publishers need to proactively clean up their third-party traffic sources, working to eliminate any potential for fraud.

By sourcing traffic from partners with higher overall engagement metrics and terminating those with below-average performance, publishers can provide real audiences that meet the metrics that matter to advertisers.


As Data Sales Rise, Questionable Provenance Proves A Growing Threat


This article was originally published on AdExchanger.

According to the IAB’s Outlook for Data Report, marketers plan to spend more money on data than ever. But just as investment peaks, questions are arising about the accuracy and legitimacy of this proliferation of data. Location data in particular has been in the spotlight recently, with some reports claiming that up to 80% of lat/long data in the bid stream is fake.

Marketers spend around $17 billion on location data generated by consumers, but there is often no transparency about where it originated or its accuracy. To satisfy the demand for scale, vendors often buy other location data sets to supplement their offerings, with no way to trace back where the data came from. Today, the ad tech ecosystem lacks a basic standard for how advertising platforms should pass location data, let alone verify it.

To make matters worse, this type of information is extremely easy to manipulate in today’s ad tech marketplace, particularly within the mobile in-app ecosystem. Any metadata within an ad request can be spoofed, including location data, and altered data can easily be passed into platforms and subsequently used to target consumers. As more and more unverifiable or unverified data is used in the bidding process, the likelihood of inaccuracies in these streams rises significantly.

To combat this, today’s marketplace requires a new approach. The industry has traditionally been reactive when approaching fraud – identifying after the fact that something was falsified and scrambling to add detection for signs of suspicious activity. But rather than flagging falsified coordinates, we need to bring a security-first, end-to-end mindset to advertising, verifying data authenticity before it is ever used in the bidding or ad-serving process.

The industry could create a standard around digital verification of location data, and any metadata used in the bidding process for that matter, so the data can always be traced back to its origin. Bidders today use the metadata they are passed, like latitude and longitude, to target consumers, but they aren’t verifying that those parameters match what the original ad request actually sent, which allows intermediaries to manipulate them.

To remedy this, the standard could specify that the bidder would always append the same parameter values used in the bid criteria to the creative itself, so that the original parameters could be compared to the parameters used in the bidding process, allowing for closed-loop verification.

This type of closed-loop verification ensures that the location values originally sent always match the location values used in the bidding process, providing end-to-end metadata matching and eliminating the potential for data to be altered in the supply chain. In conjunction with this standard, we would also need to create a similar certification for mobile apps to prevent the falsification of location on mobile devices.
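As a rough illustration of that closed-loop check, here is a minimal sketch. The bid_lat/bid_lon parameter names and the matching tolerance are assumptions, not part of any existing specification.

```typescript
// Conceptual sketch of the closed-loop check: the bidder echoes the geo
// values it bid on back onto the creative URL, and a verifier compares
// them with what the original ad request carried.

interface GeoParams {
  lat: number;
  lon: number;
}

function appendBidGeoToCreative(creativeUrl: string, bidGeo: GeoParams): string {
  const url = new URL(creativeUrl);
  url.searchParams.set("bid_lat", bidGeo.lat.toFixed(4));
  url.searchParams.set("bid_lon", bidGeo.lon.toFixed(4));
  return url.toString();
}

function matchesOriginalRequest(creativeUrl: string, original: GeoParams, tolerance = 0.001): boolean {
  const url = new URL(creativeUrl);
  const lat = Number(url.searchParams.get("bid_lat"));
  const lon = Number(url.searchParams.get("bid_lon"));
  return Math.abs(lat - original.lat) <= tolerance &&
         Math.abs(lon - original.lon) <= tolerance;
}
```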

By authenticating this data end to end, we can start to pass accurate, reliable and verified location data, creating real value for advertisers and real security for the growing data marketplace.


What Blockchain Can (And Can’t) Solve For Ad Tech


This article was originally published on AdExchanger

Few technologies are riding as high on the hype curve right now as blockchain. With its distributed nature, smart contract functionality and security features, it’s been heralded as the latest savior to ad tech’s admitted transparency problems.

Though the underlying technology is complex, blockchain is straightforward in concept.

It’s a specialized, distributed database that contains an ever-growing list of information called “blocks.” These blocks are continuously time-stamped and verified by a peer-to-peer network. Once added to the chain of transactions, blocks cannot be altered, making the chain a single, immutable source of truth. The best-known implementation is Bitcoin, the cryptocurrency that uses blockchain as its underlying public ledger of transactions.

Beyond Bitcoin, blockchain has lately been applied to a variety of applications in health care, finance and now advertising, with the launch of the first blockchain-based startups.

Blockchain has plenty of features that make it intuitively applicable and appealing to advertising, like smart contracts and transparent records, but it has limitations as well. Not only is the technology still in its early days, but the scale and speed that programmatic advertising requires mean that blockchain-based platforms are still years away, if they arrive at all.

That doesn’t mean the industry should shy away from innovating and experimenting with blockchain, but understanding its limitations will help us build advertising technology that is sustainable now and for the long term.

Current Limitations 

Blockchain’s biggest asset – decentralization – is also its biggest weakness in the digital advertising space. Due to its distributed nature, where transactions are verified by “miners” around the world, blockchain technology simply can’t analyze or process real-time advertising transactions fast enough. Current confirmation times for a transaction to be validated and added to the public ledger range between 10 and 30 seconds.

Compared to the millisecond response times required to return an ad, the blockchain is just too slow to be a tool for real-time fraud prevention and validation. Instead, companies are using it as a “post-campaign” layer to validate and authenticate transactions (relatively) long after the fact. While the eventual reconciliation can prevent money from getting into the wrong hands, the lack of real-time verification still puts a financial burden on publishers, which stand to lose the most from the inevitable clawbacks that will emerge after validation.

Blockchain also faces the hurdle of adoption and scale. By design, the technology needs to reach a critical mass of users before it works as intended, with all parties accounted for in the same blockchain ecosystem.

For instance, the success of the recently launched Basic Attention Token, a blockchain currency spearheaded by the ad-blocking browser Brave, depends largely on the wide-scale adoption of a particular browser and publishers agreeing to relinquish direct control over their revenue sources. Considering publishers have already made legal threats against the browser’s behavior, full cooperation seems unlikely to happen without concerted industry effort.

While adoption and scale are solvable problems, it will still be years before every player in the landscape invests the time and resources necessary to commit to a brand-new technology as a transactional layer.

Smart Applications 

Rather than a standalone application built to solve all of advertising’s problems, blockchain should be considered a feature of a larger advertising technology stack. By starting with smaller, smarter applications, we can integrate blockchain into the ecosystem slowly and systematically so it can grow into a larger layer as the technology matures.

Companies can begin to build on blockchain through contract execution. Programmatic has always promised to eliminate the heavy lifting of RFPs and paper contracts, but it has been messy in practice, with many intermediaries between publisher and advertiser siphoning value between what the publisher is willing to accept and what an advertiser is willing to pay.

Blockchain platforms, particularly Ethereum, have smart contract functionality natively built in, and rules can be put in place that only execute the contract in cases where the buyer’s price matches the seller’s price.
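To make that rule concrete, here is a minimal off-chain sketch of the matching logic. It illustrates the condition a smart contract could encode, not actual Ethereum contract code, and the field names are assumptions.

```typescript
// Off-chain sketch of the price-matching rule a smart contract could
// encode: the deal executes only when the buyer's bid meets the seller's
// floor. Field names are illustrative.

interface ProposedDeal {
  buyer: string;
  seller: string;
  bidCpm: number;     // what the advertiser is willing to pay
  floorCpm: number;   // what the publisher is willing to accept
}

function executeDeal(deal: ProposedDeal): { executed: boolean; clearingCpm?: number } {
  if (deal.bidCpm >= deal.floorCpm) {
    // On-chain, this branch would transfer payment and record the
    // transaction in the shared ledger.
    return { executed: true, clearingCpm: deal.bidCpm };
  }
  return { executed: false };
}
```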

Companies can also use blockchain for deal IDs and private marketplaces (PMPs). Deals and private marketplaces have evolved as a workflow solution to the contract problem, and they create a virtual connection between select publishers and advertisers.

But discrepancies have been inevitable as these deals pass through supply-side platforms, demand-side platforms and verification partners. Private marketplaces have also been surprisingly susceptible to misrepresentation and fraud, as a recent report by RocketFuel and IAS revealed. The report found that PMPs had a higher likelihood of video placement misrepresentation and, similarly, the Methbot report uncovered the high occurrence of spoofed domains.

By providing a single, public source of truth, the blockchain can streamline and simplify the execution and verification of these private deals.

Finally, because blockchain is a publicly available database, all parties would have a single source of record to ensure a transaction did indeed make it from point A to point B. While not a solution for real-time transactions, the blockchain can be used as a post-campaign reconciliation tool, which could be used to prevent fraudulent actors from being paid. The Trustworthy Accountability Group is implementing a workflow version of this with its Payment ID concept, which would be even more secure and transparent if replicated in the blockchain ecosystem.

Beyond Blockchain

As a whole, blockchain technology is still in its infancy, and it has bugs and vulnerabilities of its own that need to be addressed before widespread adoption will occur. Still, there’s no doubt blockchain will change the way many industries transact. Digital advertising is no exception.

The most effective approaches to integrate this technology will consider both its potential and its limitations to produce real and sustainable innovation.


Methbot’s Hidden Cost: Publisher Data Integrity


This article was originally published on AdExchanger

Although White Ops estimated that Methbot siphoned $3 million to $5 million per day from advertisers, fraud in which domains are falsified carries a hidden price tag that costs the industry much more.

Since Methbot and similar operations send a false domain location, such as vogue.com, false data is also being passed along and bundled with real data from the legitimate vogue.com site, compromising the digital identity and audience data of real publishers.

How It Happens

A complex ecosystem makes passing inauthentic domain data all too easy and obscures real data in the process.

As seen in the graph below, both a publisher and a data center run by a fraud operation may send inventory to the same supply-side platform (SSP), which works with a number of demand-side platforms (DSPs). In the example, both real “premiumpub.com” inventory and fraudulent “premiumpub.com” inventory are passed through the ecosystem as the same domain, and they show up in DSP and SSP reporting as the same domain.

[Diagram: real and fraudulent “premiumpub.com” inventory flowing from a publisher and a fraud operation’s data center through the same SSP to multiple DSPs]

Why Digital Identity Matters In A Data-Driven World 

The industry talks about fraud in terms of its dollar impact on advertisers and brands, but publishers also suffer. The flood of fake supply obviously drives down the CPM of real inventory, but Methbot-style fraud is harming publishers in more subtle ways.

By stealing a publisher’s digital identity and using the value of the brand associated with it, fraudsters not only take money that might otherwise belong to the publisher, they also manipulate the associated site and audience data. White Ops reported that the Methbot operation faked clicks, mouse movements, geolocation data and even social network login information to further look like real, engaged people.

Every time a perpetrator fakes a domain, the market is hit with these fake metrics. This dilutes a publisher’s brand in the industry as advertisers and platforms see a mix of metrics that don’t accurately represent a publisher’s inventory.

Data is the currency that defines the value of a publisher. As the explosion of devices has exponentially increased the amount of data that’s processed daily, it has become increasingly important that a publisher’s data is accurately represented. Machine learning algorithms in programmatic environments are driven by data. The buy side uses this data to update their models to determine the value of the inventory. Like any data model, garbage in, garbage out.

As the advertising ecosystem continues to evolve and we increase our dependence on machines to determine publisher value, the fidelity and accuracy of the data that represents the publisher will be vital to the publisher brand.

A New Target 

As the header-bidding trend moves to a server-to-server approach, programmatic transactions will become increasingly susceptible to manipulation. Any time there is a server-to-server connection, the IPs, domains and other browser metadata passed on the query as part of the media transaction can be altered.

Methbot-type fraud works by manipulating IP addresses within the perpetrator’s data center. When an ad platform or other code executes within a browser, the code asks the browser for its location. This location can reference an IP inside the data center, making it look like a legitimate domain.

Server-side header bidding isn’t bad; there’s no doubt it solves header bloat for the publisher and moves auction-type mechanics back to the server. But there is an inherent risk associated with more server-to-server connections. In this model, the SSP that is connected to the publisher will need to closely manage the data entry points to prevent future Methbot-style fraud.


This type of fraud is difficult to eliminate, since the browser remains the source of truth for domain reporting in the industry. That said, publishers can take a stand in controlling their digital identity by carefully vetting their programmatic partners and advocating for their own interests and needs with fraud and verification companies.

Protecting The Future Of Programmatic

Methbot may have been largely dismantled by the significant press coverage and release of associated IPs, but it’s only a matter of time before the next operation arises. Protecting data integrity and brand identity from this kind of fraud in the future will be paramount for both publishers and advertisers.

Is Audibility The New Viewability?


This article was originally published on AdExchanger

Following the industry’s adoption of viewability as a core metric, even moving toward transacting on vCPM, advertisers are now eyeing other measures that can tell them more about which factors ultimately drive engagement and conversions.

With video advertising gaining traction on Facebook and Twitter, the muting-by-default behavior on these platforms has advertisers starting to ask about audibility – whether an ad was heard and for how long.

GroupM, well known for its strict viewability requirements, already requires sound on for its video campaigns, and Pandora recently announced plans to test audio-based ad measurement.

As audibility enters the conversation in a bigger way, publishers need to stay ahead of the curve and understand and measure the metric on their own sites so they can be well prepared to transact on it when the time comes.

Challenges To Measurement 

The MRC hasn’t officially created any standard around audibility because traditionally it has been difficult to measure the different ways an advertisement can be muted. Accurately assessing whether someone can hear an ad requires checks at the video file, player and device levels. MRC has delayed setting a standard until the technology to more accurately measure these scenarios is in place.

That has left the onus on verification companies to build that technology and push the conversation forward. Moat measures audibility based on audio checks at the creative and player level, and Google has announced plans to eventually report on audibility across its video advertising products. As more players come to the table with proven measurement technologies, the MRC won’t be far behind in creating official guidelines.

The Sound-Off

Publishers and platforms alike tend to prefer muted-by-default behavior for their video to avoid disrupting users. As video grows in popularity, however, advertisers will no doubt begin looking at how audio impacts overall engagement. They will expect their ads to be heard, not just seen.

Publishers should begin by measuring and understanding their overall audibility rates and how they correlate with other metrics, such as viewability and even fraud. Although bots are successfully mimicking viewability, high audibility may be correlated with lower rates of fraud.
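As a starting point, audibility can be sampled with the signals the browser already exposes. The sketch below is a minimal illustration of those signals, not a substitute for accredited measurement.

```typescript
// Minimal browser-side sketch for sampling the audibility of an HTML5
// video player. Real measurement vendors also check at the creative and
// device level; this only illustrates the basic signals the browser exposes.

function isAudible(video: HTMLVideoElement): boolean {
  return !video.paused && !video.muted && video.volume > 0;
}

// Sample once per second and report cumulative audible playback time.
function trackAudibleSeconds(
  video: HTMLVideoElement,
  onUpdate: (audibleSeconds: number) => void
): () => void {
  let audibleSeconds = 0;
  const timer = window.setInterval(() => {
    if (isAudible(video)) {
      audibleSeconds += 1;
      onUpdate(audibleSeconds);
    }
  }, 1000);
  return () => window.clearInterval(timer); // call the returned function to stop sampling
}
```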

Once publishers have a grasp on their overall audibility, they can begin to package their inventory with these types of value-added metrics, which show how their audiences are truly engaging with video on their sites.

A Matter Of Metrics

Most publishers were reactive to meeting the more stringent MRC-sanctioned viewability requirements pushed last year and responded by adjusting layouts, implementing expensive redesigns and installing new measurement technologies.

As audibility comes into focus, publishers now have the ability to get ahead of the curve and use it and other below-the-radar engagement metrics as a position of strength to more successfully package and promote their video inventory.
