Does Lazy Loading Ads Solve the Viewability Problem?

What Does Lazy Loading Ads Mean?

Traditionally in web design, a browser calls a web server, which returns all the HTML necessary to render the entire page in a single source file.  That file may contain redirects and references to other web servers the browser has to call in order to fully render the page, but the general idea has been that the browser loads the whole page for the user as quickly as possible.  If the page is extremely long, contains lots of images, or what have you, the whole thing is rendered at once, irrespective of the user’s navigation.  In other words, the browser renders everything, whether the user ever views that content or not.

“Lazy loading” (sometimes also known as “just-in-time loading”), on the other hand, is a relatively new method of web design that renders the page on an as-needed basis, just as the user scrolls down to each piece of content.  If you’ve seen pages with an “infinite scroll” design, you have the general sense.  The content available to the user isn’t all loaded at once, because that would take forever; rather, the page renders as the user scrolls to it.  If you don’t scroll down, the content isn’t rendered.  So lazy loading any web content, ads included, means the web server only provides the necessary source code to the browser as the user needs it.
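To make the mechanics concrete, here’s a minimal sketch of the pattern in TypeScript. The class name and data-src attribute are conventions I made up for illustration, but the open source lazy loading libraries I’ll mention later in this post wrap essentially the same idea: hold back the real asset URL, and only request it once the element scrolls near the viewport.

```typescript
// A minimal sketch of the lazy loading pattern, assuming a hypothetical
// convention where images carry a "lazy" class and keep their real URL
// in a data-src attribute until needed.
function loadNearbyImages(): void {
  document.querySelectorAll<HTMLImageElement>('img.lazy[data-src]').forEach((img) => {
    const rect = img.getBoundingClientRect();
    // Within a viewport's height of coming into view? Swap in the real source.
    if (rect.top < window.innerHeight * 2) {
      img.src = img.dataset.src!;
      img.removeAttribute('data-src');
      img.classList.remove('lazy');
    }
  });
}

window.addEventListener('scroll', loadNearbyImages, { passive: true });
loadNearbyImages(); // catch anything already in view on page load
```

A production library would also handle resize events, throttling, and content revealed without scrolling, but the core trick is just that check.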

The Performance Benefits of Lazy Loading Ads

Lazy loading advertising content has a few benefits.  First, your page performance, from a latency and time-to-load perspective, typically improves because you reduce the amount of data a browser has to download.  Holding network speed constant, the best way to make pages load faster is to make them lighter.  Loading content as a user needs it accomplishes that nicely, and as advertisements are especially heavy, 3rd party hosted content, lazy loading ads tends to have an outsized impact on performance.

In fact, the New York Times recently switched their site design over to lazy loading content, and called out advertisements in particular in a blog post about the strategy.  In the article, the Times mentions they achieved a 50% improvement in page performance, largely because their lazy loading strategy (which also relies on writing each ad into a dedicated iFrame on the page) prevents the JavaScript code many ads require from running inline with the rest of the page content and thereby holding up the overall render time.

How Lazy Loading Addresses Viewability

Aside from the performance benefits of lazy loading ad content, though, there’s the happy consequence that every ad rendered is also visible to the user, since the content only renders as the user scrolls it into view.  While it’s true there’s still a lingering debate over how viewability is measured – this Digiday post gives a good overview of the complexities of each viewability vendor using a different methodology to measure the same MRC standard (50% of the ad content in view for at least one second) – there’s no question that a lazy loading strategy is far superior to traditional content rendering in terms of ensuring your ad requests are viewable.

Now, rendering ads as they slide into a user’s active view isn’t going to help publishers cope with the reduction in inventory – in fact, it may even be jumping the gun.  After all, the industry hasn’t yet shifted to a standard that requires publishers to bill off a viewable metric, though most publishers are seeing soft requirements for it (meaning they’ll be measured and evaluated against others based on viewability) on the majority of RFPs now.  My guess is it’ll be perhaps two years before it becomes a billable standard in the IAB / 4A’s standard T&Cs, though I’ve been surprised how fast this issue has moved already.

I’ve thought for a while now that viewability actually isn’t a positive thing for the industry, and that it will have the unintended consequence of promoting lower quality content and ad layouts.  If publishers are forced to bill off viewable impressions, my hypothesis has been that they will reconfigure their pages to ensure every ad is viewable, which will probably mean the pages get worse.  For example, I suspect we’ll see even more slideshow-style or highly paginated content where everything on the page can load in view, as well as very cluttered, lower quality ad placements above the fold on articles.  These ads will be in view, but will that make for an overall better ad experience for users and advertisers?  I tend to think not.

Lazy loading ads, however, might be something of a compromise.  Publishers could keep user-friendly page layouts and not worry as much about 3rd party viewability measurement, though they’ll still lose inventory relative to what they have today.  One voice that makes a lot of sense on this topic is Josh Schwartz‘s over at Chartbeat, who notes that the more engaging the content, the higher the viewability score for the ads.

Implementing Lazy Loading Ads

There are a few ways to implement lazy loading ads, and most of them are free and open source on GitHub if you’re comfortable with some development work.  One popular solution is the PostScribe library from the kind folks at Krux (usually just one part of an overall solution), which the Times references in their post.  Other libraries are also out there, such as jQuery.dfp.js, jQuery LazyLoad Ad, and Lazy Ads, though none of the others is backed by an ad tech company, so you’d want the resources in house to support those solutions yourself.

And finally, it seems possible to implement lazy loading ads with DoubleClick’s GPT tags in an SRA (Single Request Architecture) configuration, as Mike McLeod notes on this AdMonsters forum post.
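For the curious, here’s a rough sketch of what that might look like. The disableInitialLoad and refresh calls are standard GPT methods; the ad unit path, div ID, and the viewport detection are hypothetical details for illustration, not the exact setup from the forum post.

```typescript
// A sketch of lazy loading a GPT slot in SRA mode: disableInitialLoad()
// holds ads back, and refresh() fetches a slot only when it nears the
// viewport. The ad unit path and div ID below are hypothetical.
declare const googletag: any; // GPT is loaded globally by its own script tag

googletag.cmd.push(() => {
  const slot = googletag
    .defineSlot('/1234/example-unit', [300, 250], 'div-gpt-ad-1')
    .addService(googletag.pubads());

  googletag.pubads().enableSingleRequest(); // SRA: one request for all slots
  googletag.pubads().disableInitialLoad();  // don't fetch ads at page load
  googletag.enableServices();
  googletag.display('div-gpt-ad-1');        // registers the slot; no fetch yet

  // Fetch the ad once the container approaches the viewport
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        googletag.pubads().refresh([slot]);
        observer.unobserve(entry.target);
      }
    }
  }, { rootMargin: '200px' }); // start a bit before it scrolls into view

  observer.observe(document.getElementById('div-gpt-ad-1')!);
});
```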

Lookalike Modeling Your Ad Ops Team Can Build With a DMP

Digital publishers and advertisers that have access to a Data Management Platform (DMP) can bootstrap their own data modeling, or lookalike modeling, capabilities with some simple index-based approaches.  That is to say, if you know the total population of users in every segment, and, for any specific target segment, how many users of every other segment overlap with it, you can build a fast and easily understood audience model with a little legwork.  It’s not the rocket science of a regression model or black box algorithm, but it works, and it’s pretty easy for people without a degree in data science to execute once you figure out how to get the right data out of your system.

How to Do Lookalike Modeling Yourself

The first step in building a lookalike segment is to define what you are trying to model, that is, what audience you want more of.  This will be your ‘target’ – for our example here, let’s consider the following audiences:

Segment               Qualified Users   % of Total
Women                 20,000            20%
Pet Owners            5,000             5%
Coffee Drinkers       8,000             8%
Outdoor Enthusiasts   9,000             9%
Total Users           100,000           100%

Let’s say we’re trying to reach women.  Unfortunately, we can only identify 20,000 of them out of a total population of 100,000.  Now let’s assume our content isn’t skewed toward one gender or the other, so there are surely users among the other 80,000 we can expect to be female.  But we need to find a signal within that group that points us to which other audiences are likely to be female.

What we need to do, then, is compare every other audience to our female audience and figure out how many users of each segment overlap with it.  To do that, we need to pull another table of data – let’s add a few more audiences while we’re at it.

Test Segment          Total Users in Test Segment   Overlap (Number of Females in Test Segment)
Pet Owners            5,000                         1,500
Coffee Drinkers       8,000                         500
Outdoor Enthusiasts   9,000                         1,200
Business Travelers    14,000                        3,000
Sports Fans           2,800                         1,000
Avid Readers          7,000                         900

Now, since every audience has a different total population, and every overlap of one audience with another is also different, we need a way to compare one overlap to another.  For example, just because there are more men over 6 feet tall in China than in Norway doesn’t mean Chinese men are more likely to be over 6 feet tall than Norwegians – to know for sure, you need the total population of each country, so you can determine whether men are more likely to be over 6 feet tall in China or in Norway relative to their populations.  And that’s exactly what we need to do when building our lookalike segment: determine whether each audience is more or less likely to be female relative to its population.

To do that, we divide each test segment’s overlap with the target segment (the number of pet owners, coffee drinkers, etc. who are also female) by the total population of the target segment (females).  That gives us the concentration of each test segment within the female segment, which we can then compare against that segment’s concentration in the overall population.  So, with some simple division, we divide the overlap figures from the table above by the total population of females and get the following:

Segment               Total Users in Test Segment   Overlap   Total Females   Concentration of Test Segment in Female Segment
Pet Owners            5,000                         1,500     20,000          7.5%
Coffee Drinkers       8,000                         500       20,000          2.5%
Outdoor Enthusiasts   9,000                         1,200     20,000          6%
Business Travelers    14,000                        3,000     20,000          15%
Sports Fans           2,800                         1,000     20,000          5%
Avid Readers          7,000                         900       20,000          4.5%

Finally, if we divide the concentration of each test segment in the female segment by the concentration of that test segment in the total population, we can create an index – a comparison of one relative figure to another.  All we need to do is multiply each ratio by 100, which gives us our benchmark.  Any audience with an index greater than 100 tells us the test segment is more likely to contain female users than the general population, and any audience with an index less than 100 tells us the test segment is less likely to contain female users than the general population.

Test Segment          Total Users in Test Segment   Overlap   Concentration in Total Population   Concentration in Female Segment   Index
Pet Owners            5,000                         1,500     5%                                  7.5%                              150
Coffee Drinkers       8,000                         500       8%                                  2.5%                              31
Outdoor Enthusiasts   9,000                         1,200     9%                                  6%                                67
Business Travelers    14,000                        3,000     14%                                 15%                               107
Sports Fans           2,800                         1,000     2.8%                                5%                                179
Avid Readers          7,000                         900       7%                                  4.5%                              64
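If it helps to see the arithmetic end to end, here’s a short TypeScript sketch that reproduces the index column above from the raw counts.  The data shape is my own invention; substitute whatever export your DMP provides.

```typescript
// Reproduces the index column above from raw counts. The shape of the
// data is hypothetical; substitute whatever report your DMP exports.
interface SegmentStats {
  name: string;
  total: number;   // total users in the test segment
  overlap: number; // users shared with the target segment (women)
}

const TOTAL_USERS = 100_000;  // total population
const TARGET_TOTAL = 20_000;  // users in the target segment (women)

const segments: SegmentStats[] = [
  { name: 'Pet Owners', total: 5_000, overlap: 1_500 },
  { name: 'Coffee Drinkers', total: 8_000, overlap: 500 },
  { name: 'Outdoor Enthusiasts', total: 9_000, overlap: 1_200 },
  { name: 'Business Travelers', total: 14_000, overlap: 3_000 },
  { name: 'Sports Fans', total: 2_800, overlap: 1_000 },
  { name: 'Avid Readers', total: 7_000, overlap: 900 },
];

for (const s of segments) {
  const inPopulation = s.total / TOTAL_USERS; // e.g. 5% of everyone are pet owners
  const inTarget = s.overlap / TARGET_TOTAL;  // e.g. 7.5% of women are pet owners
  const index = Math.round((inTarget / inPopulation) * 100);
  console.log(`${s.name}: ${index}`);         // >100 over-indexes toward women
}
```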

So now, with the data above, if you wanted to model an audience to find users who are likely to be women but not necessarily known to be women, you could build a segment of pet owners or sports fans, excluding coffee drinkers, and know those users are more likely than not to be women.  In boolean logic it would be (pet owners OR sports fans) NOT coffee drinkers.  After you create the new compound audience, you can see how it indexes against your total once the overlapping users are de-duplicated into a single segment, and then refine as necessary.

You Can Model Clickers and Converters, Too

The technique above is especially useful for optimizing campaigns focused on a click or online conversion metric – you simply track the campaign’s clickers or converters as a new audience in your DMP, and then index every audience in your platform against its overlap with the clicking or converting audience.  You could, for example, start running every performance-based campaign run-of-site (ROS) to expose every audience to the campaign, and then, after a short period of time, figure out which audiences are responding more favorably and reliably to the campaign goal.

In an ideal world you have lots of audiences you can overlap against a target – hundreds or even thousands.  You could then index all of them against your target, sort them by index, and optimize your campaign targeting into the top choices.  Which segments you pick – the highest indexing or those with the largest scale (there will rarely be an option that is both large and high quality) – depends on your goals for the campaign, budget, etc.  You can also exclude the lowest indexing audiences, reducing your distribution against lower performing segments.

The risk to this technique is that the number of overlapping users is so small that you lack a large enough sample to produce a statistically significant index.  In other words, you don’t have enough data to trust the lookalike.  To calculate this precisely you’d need to employ a statistician; my rule of thumb, however, has been to rely on standard sample size tables that define how many users you need to sample from a given population for the result to meet a particular confidence level.  You can easily build this check into Excel to compare the overlapping users in the test segment (pet owners, in our case) to the target segment (women).
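Those sample size tables all trace back to the same normal-approximation formula, so as a sketch, you could also compute the check directly instead of looking it up:

```typescript
// The sample size tables trace back to the normal-approximation formula;
// a minimal sketch, assuming maximum variance (p = 0.5) like the tables do.
function requiredSampleSize(z: number, marginOfError: number, p = 0.5): number {
  return Math.ceil((z ** 2 * p * (1 - p)) / marginOfError ** 2);
}

// 95% confidence (z = 1.96) with a ±5% margin of error
console.log(requiredSampleSize(1.96, 0.05)); // 385, commonly rounded up to ~400
```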

As you can see though, for a population of almost any size, a mere 400 users is all you need for a representative sample at a 95% confidence level with a ±5% margin of error.  You can use this same check when creating general lookalike audiences, but it tends to be more relevant when working with very small target segments, like users who took a particular action.  Of course, this isn’t the most sophisticated audience modeling method out there, far from it; but for Ad Ops teams who need to play fast and loose with campaign optimization, it’s a place to start, and a great way to get more out of your investment in a DMP.

How Ad Serving Works – Mobile vs. Web Environments

The most popular article on this blog is one of the very first ones I ever wrote – How Does Ad Serving Work. What I probably should have titled it though was How Does Ad Serving Work on the Web, because there are a few important differences when you’re talking about the mobile ecosystem.

Server Redirects vs. Client Redirects

For the most part, it comes down to the interaction between a client and a server – in desktop environments, the user’s browser (the client, in technical-speak) does most of the work fetching and redirecting information, which is ideal for lots of reasons. For one, redirecting the client gives each platform in the ecosystem the ability to drop or read a cookie, which helps with downstream conversion tracking, frequency measurement, and audience profiling. For another, it facilitates client-side tracking of key metrics like clicks and impressions for billing purposes. Client-side tracking is the preferred methodology for advertisers because it measures requests from a user instead of from a server, and is therefore a more accurate measure of what a user actually saw.  This process requires more work from the browser, but that’s OK because high-speed connections and unlimited data usage are pretty much the norm these days for home and office connections.

Desktop Ad Serving Sequence

In mobile environments though, connection speeds really matter. Many users are on connections slow enough that if the browser or app were responsible for fetching the ad the way it is on desktop, the user would likely abandon the page before the ad finished loading. Because of that, you often see more of the work being done in the cloud for mobile ad serving, independent of the client. So instead of the browser calling a server and then being redirected to another server, the browser tends to call one server, which then calls other servers, and those servers can talk to each other over ultra-fast fiber-optic landlines instead of the cellular network.

Mobile Ad Serving Sequence

It’s true that this is an ever-changing situation; many people are already starting to think about 4G LTE cellular connections one day replacing fiber connections for web browsing, but there’s no question that in most parts of the country cellular speeds still vary wildly. You could easily argue over just how fast 3G is compared to 4G or 4G LTE, but one need only look at a recent study from RootMetrics to see that even within each type of connection there is huge variation in speed, even on the same carrier, not to mention that 4G LTE coverage is still pretty sparse through most of the country.

Mobile Ad Serving with Exchanges – Even More Complicated

The client-to-server vs. server-to-server challenge is even more pronounced when the ad request is served through an exchange. For example, if you look at my old article, Diagramming the SSP, DSP, and RTB Redirect Path, you’ll see there are no fewer than four client-side requests made to fulfill an ad request: one to the publisher’s ad server, one to the supply side platform or ad exchange, one to the winning advertiser’s ad server, and one to the CDN. See the sequence diagram below for a visual:

Desktop RTB Ad Serving Sequence Diagram

In the mobile ecosystem, you effectively have three: one to the publisher’s ad server, which makes the call to the SSP itself and passes the winning advertiser’s tag down to the device (the second client-side call), and then a third and final call to the CDN. See the sequence diagram below for a visual:

Mobile RTB Ad Serving Sequence Diagram


So How Long Does it Really Take to Serve an Ad?

In the web ecosystem, you’d typically expect perhaps 250ms to connect to a web server, 150ms to connect to the ad server, 150ms to connect to the SSP, 250 – 400ms to wait for the SSP, 150ms to connect to the marketer’s ad server, and 50 – 100ms to download the content from the CDN, for a total of about a second to serve the ad from start to finish.

Most (80 – 90%) of this time is network latency rather than waiting for a server to make a decision.  Network latency is the time you have to wait for your browser to do things like the DNS lookup (translating the .com address to an IP address), establish a connection, and send the request – basically the time it takes to travel through the network fiber to reach the physical location of the server.  And it isn’t just your browser or device that suffers this network latency; so does every part of the system.  The publisher’s web server has to run through the process with the publisher’s ad server, which has to run through the process with the SSP, which has to run through the process with the ad exchanges, and so on.  The rest of the time is spent waiting for the various parts of the system to actually make a decision on what to do: serve an ad or respond with no bid?  If serving an ad, which ad?  Usually these decisions, what engineers call “time in I/O”, are actually very fast, under 10ms.
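Summing the rough figures above makes the point (these are illustrative estimates, not measurements):

```typescript
// Illustrative desktop latency budget from the figures above (milliseconds);
// range values are taken at their midpoints. Estimates, not measurements.
const desktopBudgetMs = {
  connectToWebServer: 250,
  connectToAdServer: 150,
  connectToSsp: 150,
  waitForSspAuction: 325,        // midpoint of 250–400ms
  connectToMarketerAdServer: 150,
  downloadFromCdn: 75,           // midpoint of 50–100ms
};

const total = Object.values(desktopBudgetMs).reduce((sum, ms) => sum + ms, 0);
console.log(`~${total}ms end to end`); // ~1100ms, i.e. about a second
```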

If the same sequence played out in the mobile ecosystem, however, you might find it takes more or less the same amount of time on a 4G LTE network in downtown Chicago, or 8 to 10 whole seconds in a more suburban or rural area.  Network latency can be 4x higher on 3G cellular networks, and download speeds 8 – 10x slower – this may not apply to all the ad tech professionals sporting state of the art devices, but for much of America this is the reality.  In the case below, when I ran a test, the network latency was actually better on the cell network, but download speeds were much worse.

Cellular vs. WiFi Connection Time Comparison

To see this for yourself, download Shunra NetworkCatcher on iPhone or SpeedTest.net on Android and ping any web destination with your WiFi on, then again with your WiFi off, to watch the effect in action.

What is Holistic Ad Serving?

Certainly one of the biggest opportunities in ad tech today is integrating real time bidding (RTB) systems with core ad serving platforms such that ad serving decisions are made from a single system. This vision of a fully integrated monetization stack is known as holistic ad serving, and it’s going to be big.

Holistic ad serving consolidates what is today a fragmented marketplace, modernizes the publisher ad serving stack, and lays the groundwork for advertisers and publishers to transact guaranteed campaigns over RTB infrastructure.  In other words, it provides a way for publishers to transition from a world of manual campaign implementations to accepting and trafficking campaigns programmatically, without having to manage the balance between two systems.

Tactically, holistic ad serving seems like a basic change: instead of filling direct campaigns first and then letting the exchange try to fill whatever is left, the idea is for publishers to call the exchange marketplace and get a bid for every single impression, thereby allowing RTB demand to compete directly with traditionally sold campaigns with guaranteed goals.  By getting a bid for every impression, the publisher’s ad server can understand the benefit or cost of filling an impression with a direct campaign – it has all the information.  Holistic ad serving also opens the possibility, on an impression-by-impression basis, for an RTB campaign to trump a direct campaign.
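To make that concrete, here’s a toy sketch of the comparison a holistic ad server might make on each impression; the names, numbers, and the pacing heuristic are entirely hypothetical, not how any particular ad server actually works:

```typescript
// A toy sketch of the holistic decision: compare the live RTB bid to the
// effective value of a direct campaign for each impression. The pacing
// weight is a made-up heuristic; real ad servers model this more carefully.
interface DirectCampaign {
  name: string;
  cpm: number;        // contracted rate
  pacingRisk: number; // 0–1: how far behind its guaranteed goal it is
}

function chooseAd(rtbBidCpm: number, direct: DirectCampaign): string {
  // Weight the direct CPM up when the campaign is behind pace, so the
  // cost of underdelivering a guarantee factors into the comparison.
  const effectiveDirectCpm = direct.cpm * (1 + direct.pacingRisk);
  return rtbBidCpm > effectiveDirectCpm ? 'RTB' : direct.name;
}

console.log(chooseAd(4.5, { name: 'Q4 Sponsorship', cpm: 3.0, pacingRisk: 0.2 }));
// -> 'RTB' (a $4.50 bid beats an effective $3.60 direct value)
```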

The Future of Geotargeting is Hyperlocal

This is the fourth article in a four part series on Geotargeting. Click here to read parts one, two, and three

Updated August 15, 2012

So-called hyperlocal geotargeting, particularly on mobile platforms, is the real promise of geotargeting in the future.  Hyperlocal is far more granular than a zip code; it’s as specific as your exact location, within a 10 meter radius.  If you own a smartphone, chances are you’ve already taken advantage of these systems to find a nearby restaurant, get directions while lost, or figure out the best mass transit route from one place to another.  From a mobile perspective, many services and apps depend on this hyper-accuracy to work correctly, and the information also offers huge potential for innovation to the advertising community.  For example, a company might run a campaign that serves a unique offer to someone within a certain distance of its stores.  While likely not all that scalable, that might be particularly appealing for local, brick and mortar businesses.

Technically speaking, hyperlocal is also likely to be far more reliable than traditional geotargeting on the desktop because, unlike on the desktop, the IP address won’t be the mechanism anymore; the device signal itself will.  What does that mean exactly?  In some cases, geotargeting will leverage a device’s GPS receiver in concert with a customized table of coordinate ranges to identify targetable impressions.  Up until a few years ago, using GPS signals to deliver advertising would have been all but impossible due to significant latency – up to 30 seconds for the so-called time to first fix (TTFF), the delay before the location of the GPS satellite constellation (the physical position of the GPS satellites in orbit above the earth) is finally known, which is a function of how often the GPS satellites broadcast their signal.  While generally reliable, 30 seconds is an eternity to ad delivery systems, and hardly a realistic way to deliver a timely message.

Today, however, TTFF is usually only an issue for non-cellular devices, like standalone GPS systems. For things like smartphones, GPS coordinates are determined through a process known as ‘assisted GPS’, which speeds up geolocation by referencing a saved copy of the satellite constellation locations known as an almanac. The almanac details the exact location of every GPS satellite in orbit at regular intervals, as well as the health of each signal. Every day, the cell towers download a fresh copy of the almanac, so instead of needing to acquire a first fix, your smartphone can simply rely on the cell towers and acquire its GPS coordinates in no time at all.

In addition to GPS, one concept gaining traction is signal triangulation by a dedicated 3rd party.  The idea here is that every mobile device has an antenna that not only broadcasts a signal but also recognizes other wireless signals, like Wi-Fi routers and cell towers, in addition to the GPS satellite signals. Now, if someone could read those signals off the device, identify the other transmitters, and knew the physical location of each one, they could use that information to triangulate the mobile device’s exact location, all with incredible accuracy.
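To give a flavor of the math involved, here’s a toy trilateration sketch under big simplifying assumptions: the transmitter positions are already known, and signal strength has already been converted into an approximate distance. Real systems use many more signals, plus noise modeling and least-squares fitting.

```typescript
// A toy sketch of the geometry, assuming we already know each access
// point's position and have converted its signal strength into a distance.
interface Beacon { x: number; y: number; distance: number; }

// Linearize the three circle equations (subtract the first from the other
// two) and solve the resulting 2x2 system. Assumes a.x !== b.x.
function trilaterate(a: Beacon, b: Beacon, c: Beacon): { x: number; y: number } {
  const A = 2 * (b.x - a.x), B = 2 * (b.y - a.y);
  const C = a.distance ** 2 - b.distance ** 2 - a.x ** 2 + b.x ** 2 - a.y ** 2 + b.y ** 2;
  const D = 2 * (c.x - b.x), E = 2 * (c.y - b.y);
  const F = b.distance ** 2 - c.distance ** 2 - b.x ** 2 + c.x ** 2 - b.y ** 2 + c.y ** 2;
  const y = (F - (D * C) / A) / (E - (D * B) / A);
  const x = (C - B * y) / A;
  return { x, y };
}

// Three hypothetical access points; the device is actually at (5, 5)
console.log(trilaterate(
  { x: 0, y: 0, distance: Math.sqrt(50) },
  { x: 10, y: 0, distance: Math.sqrt(50) },
  { x: 5, y: 10, distance: 5 },
)); // -> { x: 5, y: 5 }
```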

If that sounds like science fiction, take a moment to familiarize yourself with a company called Skyhook Wireless, which is doing just that, and has been for years.  They already have millions of wireless signals mapped for virtually every street in the country, and have a response time that is a fraction of GPS’s, around one second.  There’s a very cool video on their site that explains how the process works.  Their product is in production for a long list of major companies, including many of the major cell carriers.  Google and Microsoft, for their part, have opted to build their own systems that work through a similar process of triangulating user location based on Wi-Fi signals.  In many ways, the future is now!

Hyperlocal Desktop?

Outside of mobile, there’s a similar thread of innovation happening on the desktop side, though it isn’t nearly as advanced, and it still relies on the IP address, since many desktop systems are cabled directly to their networks and don’t broadcast or receive a wireless signal.  Just this year, computer scientist Yong Wang demonstrated that by using a multi-layered technique combining ping triangulation and traceroutes with the locations of well-known web landmarks, like universities and government offices that host their services locally and publicly provide their physical addresses, he could accurately map an IP address to within 700m, versus the 34km that traditional traceroute triangulation produces.  While this method isn’t in production yet, it could be soon, since Wang’s process is quite similar to the existing methodology, just at a much finer granularity.

[This article was originally published on Run of Network in Dec of 2011]