Tracking Billable Impressions and 3rd Party Discrepancies with Ad-Juster

The Problem with 3rd Party Discrepancies

It's a sad fact that after more than a decade of innovation and growth in the digital display business, virtually nothing has been done to address the cost that 3rd party discrepancies impose on the industry. As I detailed in my post on how 3rd party ad serving works, because publisher ad servers and marketer ad servers count an impression at different points in the technical process, there is always a variance in the numbers, and reconciling those figures to cut an invoice is a manual, time-consuming process and a huge administrative cost for the industry. Discrepancies are typically around 10%, but can often exceed this, especially if there is a technical problem with the ad. In virtually all cases, however, publishers simply have to accept losses due to discrepancies as a cost of doing business.
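
To make the arithmetic concrete, here is a minimal sketch of the basic discrepancy math, using illustrative numbers rather than figures from any real campaign:

```python
def discrepancy_pct(publisher_impressions, third_party_impressions):
    """Percent of publisher-counted impressions the 3rd party never counted.

    The publisher's ad server counts when it serves the ad; the marketer's
    server counts later, when its own tag fires in the browser, so its
    total is almost always lower.
    """
    return (publisher_impressions - third_party_impressions) / publisher_impressions * 100.0

print(discrepancy_pct(1_000_000, 900_000))  # 10.0 -- a typical ~10% gap
```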

Third party ad servers have never made it easy to address this issue. Their publisher reporting tools are woefully inadequate and in some cases comically inefficient. For example, the leading ad server, DoubleClick's DART product, does not provide site-level reports that allow publishers to see everything running on their site from that ad server; it only allows publishers to pull reports advertiser-by-advertiser. That means the billing department at every major online publisher spends days pulling hundreds of reports out of DART alone every month. For most operations folks, a centralized reporting database that maps 3rd party delivery to local ad server delivery at the creative or flight level and updates automatically is practically a holy grail.

The Industry’s Response: An Impression Exchange

The IAB has proposed its own solution to the problem via the Impression Exchange project, but I find the project fundamentally flawed. For one, the technical process the IAB uses to centralize impression reporting between systems adds another call to the ad serving process, and so creates a discrepancy on the discrepancy it reports. Additionally, it has been very slow to win adoption by the ad servers: a year and a half in, DART is the only ad server on board.

Ad-Juster, The Superior Solution

A far better solution is for publishers to look at a company called Ad-Juster, which has created a way to centralize third party reports and map third party delivery against a publisher's internal flights down to the creative level. Ad-Juster has essentially mapped the schema of every major ad server's reporting system, figured out how to pull large data dumps from each third party ad server on a regular basis, and mapped them with a unique identifier back to the third party tag running on the publisher's local ad server. In other words, it lets publishers create a unified database across many systems. While the system is just a read-only version of the reporting you can pull yourself, the speed and automation it brings to the table is very compelling for any large publisher.
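
As a rough illustration of the kind of join Ad-Juster automates, here is a hedged sketch in Python; the field names (tag_id, flight_id, impressions) are my own invention, not Ad-Juster's actual schema:

```python
# Local ad server flights, each trafficked with a third party tag whose
# unique ID also appears in that third party's report dump (hypothetical data).
local_flights = [
    {"flight_id": 101, "tag_id": "DART-55872", "local_impressions": 1_200_000},
    {"flight_id": 102, "tag_id": "PR-90331", "local_impressions": 450_000},
]

# Rows pulled from the third party report dumps (DART, Pointroll, etc.).
third_party_rows = [
    {"tag_id": "DART-55872", "impressions": 1_081_000},
    {"tag_id": "PR-90331", "impressions": 428_000},
]

# Index the dumps by tag ID, then map each local flight to its third
# party actuals: one unified view across systems.
by_tag = {row["tag_id"]: row["impressions"] for row in third_party_rows}
unified = [
    dict(flight, third_party_impressions=by_tag.get(flight["tag_id"]))
    for flight in local_flights
]

for row in unified:
    print(row)
```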

Ad-Juster offers some canned reports that actually calculate the discrepancy between systems, as well as some helpful filters that, for example, automatically email you a discrepancy report on flights that launched in the last three or five days, which lets operations staff quickly catch implementation or technical issues (see the sketch below). Since you can monitor the entire network on a regular basis, it is easy to adjust the padding most publishers add to client goals to make up for an expected discrepancy. You may well find that some third parties track closer than others, so you can reduce the padding for those campaigns. The reports are a boon to operations folks, but also extremely useful for billing departments. Billing staff no longer have to spend all their time waiting on ancient publisher-facing ad server UIs, so they can push bills to clients faster.
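
Here is a sketch of what that kind of launch-window filter might look like; the flight records and the 10% alert threshold are my own assumptions, not Ad-Juster's actual report definitions:

```python
from datetime import date, timedelta

def recent_high_discrepancy(flights, today, days=5, threshold_pct=10.0):
    """Flag recently launched flights whose discrepancy suggests a tag problem."""
    cutoff = today - timedelta(days=days)
    flagged = []
    for f in flights:
        if f["launched"] < cutoff:
            continue  # only look at flights launched in the last few days
        disc = (f["local"] - f["third_party"]) / f["local"] * 100.0
        if disc > threshold_pct:
            flagged.append((f["name"], round(disc, 1)))
    return flagged

# Hypothetical flights: one fresh launch tracking badly, one older flight.
flights = [
    {"name": "Acme Q3", "launched": date(2011, 5, 2), "local": 80_000, "third_party": 64_000},
    {"name": "Beta RM", "launched": date(2011, 4, 1), "local": 500_000, "third_party": 462_000},
]

print(recent_high_discrepancy(flights, today=date(2011, 5, 5)))
# [('Acme Q3', 20.0)] -- 20% on a fresh launch is worth an ops look
```

The same per-vendor discrepancy history is what lets you tune goal padding: if a vendor consistently tracks within 5% instead of 10%, you can traffic that much less padding on its campaigns.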

The system isn't perfect, especially if you run the same third party tags in multiple flights (the system can have a hard time attributing the right number of third party impressions at the flight level in that case), but in most cases it offers tremendous benefits. Recently Ad-Juster has partnered with Solbright and FatTail to push its data into those workflow systems, but it also offers an API for clients that want to push the data into proprietary systems.

Highly recommended for large publishers seeking reporting relief.

6 comments

  1. I run reports 24 hours after launch to determine what the discrepancies are, if any, and then I make the proper adjustments, usually assuming about a 15% difference. I find that bringing on board another program to report on reporting discrepancies is just another middle man pointing out the obvious, and it only reports after 3-5 days? This doesn't help for Netblock campaigns that only run 24 hours.
    Am I off-track in thinking that Ad-Juster is a tool for lazy Ad Ops?

  2. Hi Jess,

    Thanks for your comment! Your method is a great way to check for discrepancies, and I've done the same kind of thing. The downside to that approach, however, is that once you're in an organization with thousands of flights running at a given time, it gets pretty time-consuming to pull those reports as part of your daily routine, especially if certain campaigns rotate tags from multiple 3rd parties against a single goal, such as DART for standard tags and Pointroll for rich media tags. Ad-Juster does that work automatically for you and puts it together in one report. It can store data for any length of time you want, but can automatically filter to campaigns that launched within a certain time frame. So what I meant by the 3-5 day comment was that you can create a report showing the discrepancies for only those flights that launched within the past few days, essentially creating a scheduled, automated version of your process.

    The other big benefit I see to a system like Ad-Juster isn't necessarily on the ops side, but on the billing side. Take that same list of a few thousand flights that need to be billed on 3rd party actuals each month, and you need an entire team of people to sit and pull those reports each day to do it. If you could have a system automate that piece of the workflow, I think at a certain scale it makes a lot of sense.

  3. Great write-up. I agree completely that the IAB impression exchange has fundamental flaws. We spent days working on an implementation with a large vendor, and it was only the day after launching the test that they informed us it wouldn't work with rich media and suggested we use our own system for RM and their new system for standard media. I tried to explain that we had a good system and were only looking to replace it with a great one.

    Add to that the complexities of Fourth Party and even Fifth Party tracking and it starts to get more than a little silly.

    By automating much of the data integration and report pulling, we were able to increase the number of billing reports a single individual can pull from 15 to over 100 per day, but that only holds when Ad Ops and the agency do their work correctly. Ad operations is quite good most of the time, but the agencies seem to have limited technical understanding of (or perhaps concern for) the complexities they create.

    Ask an agency contact sometime to provide a tag matchup between the vendors they are using (say, serving Eyewonder but reporting/billing DoubleClick) and they will give you a blank stare, followed by the statement “I don’t have access to that”. Really? You created the relationships, don’t you understand them? Can’t you see them? If you can’t, who do you think can?

  4. Client-side failures to complete every link in the chain account for a great deal of the discrepancies. If a user leaves the page after your ad server delivers the iframe but before it finishes loading, a discrepancy will result. If that happens because your site is slow, that's something you need to correct. If it happens because the beaconing service was unavailable, that's another story.

    How big of a problem is this? What percentage of the discrepancy is due to organic user activity vs. technical failures with the third party's impression trackers? And who's to blame? You need to instrument end-user measurements at the component level; the server doesn't know what it never sees.

    We think this will help http://digital-fulcrum.com/url-level-real-user-monitoring/

  5. Thanks Will, I'll have to look at your solution in greater detail. You outline a number of good points on why and how discrepancies happen, including some publisher-driven problems that are usually ignored. I think Ad-Juster has a good product for identifying problems early, but it could improve by also helping to identify the root cause of discrepancies.
