How app developers track and estimate ROAS
In the never-ending quest for growth, mobile marketers are on a constant mission to grow their apps’ user bases. But increasing downloads isn’t usually the end goal: the more important outcome is growing overall revenue, whether through more in-app purchases, more subscription sign-ups, or more ad impressions viewed. User acquisition (UA) costs money; in the advertising world, this is called “spend.” The earnings from users acquired with that spend are called your “return.” Together, these give you the golden metric for measuring the effectiveness of your marketing: Return on Ad Spend, or simply ROAS.
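As a quick illustration of the arithmetic (the numbers here are invented), ROAS is simply the revenue attributed to a campaign divided by what you spent on it:

```python
def roas(revenue: float, spend: float) -> float:
    """Return on Ad Spend: revenue earned per dollar of spend."""
    return revenue / spend

# A campaign that cost $10,000 and drove $14,000 in attributed revenue:
print(f"{roas(14_000, 10_000):.0%}")  # 140% -- every $1 spent returned $1.40
```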
There are entire companies dedicated to helping marketers improve ROAS, so we won’t be able to cover everything there is to know in this article. Instead, we’ll outline a rough framework for how mobile app developers measure return on ad spend (ROAS) today and include reference material for those interested in getting more effective results from paid UA.
When choosing an approach, we’ve found a useful way to differentiate is by the level of investment an app developer is willing to make in the complexity of measuring ROAS: there’s an easy way, an intermediate way, and a complex way. By and large, a higher degree of complexity allows for more accurate (and actionable) measurements. However, this doesn’t necessarily mean app developers taking a complicated approach are spending more; we work with folks all across the spectrum and have found that the level of spend and earnings doesn’t always correlate with sophistication.
The Table Stakes: ROAS requires three main areas of measurement/investment
- Campaign attribution (where are my installs coming from?): Most mobile marketers will know about mobile measurement partners (MMPs), platforms that help you track the sources of acquired users. These are necessary because of the sheer complexity of tracking a user from campaign to app install, a process made much more difficult by recent privacy restrictions. If you’re serious about UA, you’ll need an MMP.
- Spend tracking (what do my installs cost?): At the low end of sophistication, you may only care about cost per install (CPI), whereas at the high end you’re tracking spend at the most granular level possible. For basic, high-level measurement, you may simply need to download your UA spend history, but most MMPs have tools and services to help with more granular spend tracking.
- Earnings measurement (what are my acquired users earning?): Often the hardest of the three to measure. For apps that monetize via in-app purchases or subscriptions, you’ll need to track the actual purchase or subscription conversion event, but be forewarned: this won’t tell you the entire story. Subscription renewals, refunds, and cancellations are handled by the store, and that information isn’t available in-app. To get accurate, store-validated information, you’ll need to invest in a solution like RevenueCat or build something of your own. To measure revenue earned from in-app ads, you’ll need to track impression-level ad revenue for each of your users (a minimal sketch of that roll-up follows this list). Most major mediation vendors have APIs that make recent impression-level data accessible; for more information, see these articles that walk through connecting impression-level data from ironSource and AppLovin.
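To make the ad-revenue roll-up concrete, here’s a minimal sketch that aggregates impression-level records into per-user earnings. The field names are hypothetical; your mediation vendor’s actual payload will differ:

```python
from collections import defaultdict

# Hypothetical impression-level records, e.g. parsed from a mediation
# vendor's impression-level revenue API (field names are illustrative).
impressions = [
    {"user_id": "u1", "revenue_usd": 0.0042},
    {"user_id": "u2", "revenue_usd": 0.0018},
    {"user_id": "u1", "revenue_usd": 0.0051},
]

# Roll impression-level revenue up to per-user ad earnings.
earnings_by_user: dict[str, float] = defaultdict(float)
for imp in impressions:
    earnings_by_user[imp["user_id"]] += imp["revenue_usd"]

print(dict(earnings_by_user))  # {'u1': 0.0093, 'u2': 0.0018}
```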
Assuming you’ve got these three measurement methods down, you can start on the frameworks.
The Easy Way: comparing CPIs to LTVs
You’d be surprised how often we see app developers embark on user acquisition without clearly understanding the actual value of their users. I get it: it’s easy to throw together an ASA or Facebook campaign and say to yourself, “$2.00 an install isn’t terrible.” The problem is that app developers face a much higher hurdle to scaling if they don’t understand at what point an acquired user becomes profitable.
The formula is straightforward: take your average cost per install (CPI) and measure it against the average lifetime value of a user (LTV). If average LTV exceeds average CPI, acquisition is profitable.
There is a major disclaimer on the above point: while CPI targets are a great starting point, at scale they can lead you in exactly the wrong direction. We’ve seen multiple instances where cheap US CPIs turned out to come from VPN apps in other countries. There’s often a reason for cheap CPIs; without granular measurement, you’ll invariably optimize towards the wrong metric.
The next logical step is to measure LTVs of users from each acquisition source to understand the actual value of the users for each of your campaigns.
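Put together, the easy-way check looks something like the sketch below. The numbers are illustrative; in practice, the CPI and LTV figures come from your spend tracking and earnings measurement above:

```python
# Illustrative per-source averages; real values come from your
# spend tracking (CPI) and earnings measurement (LTV).
sources = {
    "apple_search_ads": {"cpi": 2.00, "ltv": 3.10},
    "facebook":         {"cpi": 1.40, "ltv": 1.10},
}

for name, s in sources.items():
    roas = s["ltv"] / s["cpi"]  # lifetime ROAS per acquired user
    verdict = "profitable" if roas > 1 else "unprofitable"
    print(f"{name}: ROAS {roas:.0%} -> {verdict}")
```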
For many, this simple approach to ROAS can scale and work indefinitely, especially for apps with a clear, short-term route to user profitability, for instance, users subscribing on the first day. Apps that face a longer route to profitability, however, will need earlier indications of success. For that, we’ll need to explore benchmarks and predictions.
The Intermediate Way: Set benchmarks and use predictors to get earlier signals
The goal of any marketer is to get campaign results as quickly as possible: the faster you can trim ineffective budgets, the faster you can redeploy that money towards effective campaigns. This pressure, in conjunction with ATT requirements (another topic entirely), means it’s advantageous to look at indicative signals that help you predict the future value of users as early as possible. Subscriptions or in-app purchases are a great signal, but there are two fundamental problems. First, the conversion may simply take too long: if a user opts in to a subscription after seven days, that’s generally too long to wait for campaign feedback. Second, subscriptions represent only a subset of users, say 1%, which means you’ll only get a directional signal from 1% of your users. Privacy restrictions further limit the amount of feedback you can get from users: without increasing your signal density, you could fall below privacy thresholds and miss signals entirely.
The solution is to start leveraging more widespread user behavior as a proxy for future success. This can surface campaign results earlier: following the example above, if 1% of users opt into a subscription but 10% start a trial, you can lean on a valuable signal with 10X the density to indicate future value. The density and speed of these lower-value signals can give you a more reliable and faster indication of a campaign’s return.
Getting these signals is actually easier than you might think. Most app developers have a pretty good idea of their key engagement indicators; it’s just a matter of measuring average engagement and tracking your campaign’s performance against that benchmark. And from experience, comparing high-value users against averages can reveal dramatic differences.
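Here’s a minimal sketch of the benchmark idea, comparing a campaign’s day-1 trial-start rate against an app-wide average. The rates and tolerance threshold here are hypothetical; you’d derive them from your own historical data:

```python
# Hypothetical benchmark: app-wide average day-1 trial-start rate.
BENCHMARK_TRIAL_RATE = 0.10  # 10% of users start a trial on day 1

def early_signal(installs: int, trial_starts: int,
                 tolerance: float = 0.8) -> str:
    """Flag a campaign whose day-1 trial rate lags the benchmark."""
    rate = trial_starts / installs
    if rate >= BENCHMARK_TRIAL_RATE * tolerance:
        return f"on track ({rate:.1%} vs {BENCHMARK_TRIAL_RATE:.0%} benchmark)"
    return f"lagging ({rate:.1%}) -- consider trimming budget"

print(early_signal(installs=1_000, trial_starts=72))   # lagging (7.2%)
print(early_signal(installs=1_000, trial_starts=115))  # on track (11.5%)
```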
For apps that earn from in-app advertising, things are slightly different. Monetizing users aren’t triggering a single purchase event; they’re monetizing via hundreds or thousands of distinct advertising events.
With an order of magnitude more monetization events, small numbers of users will statistically matter less in driving a campaign’s long-term success. Don’t get me wrong: especially for games that rely heavily on Ad Whales, a few users can make a large difference. But statistically, it’s less likely that a single user will drive a cohort’s success. In addition, while purchases often happen in the first few days, ad impressions are spread over the lifetime of a user, meaning apps relying on in-app advertising (IAA) monetization will likely take much longer to reach payback.
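For IAA apps, payback is a curve rather than a single event. Here’s a minimal sketch of estimating a cohort’s payback day from cumulative ad revenue; the CPI and daily ARPU figures are made up:

```python
from itertools import accumulate

CPI = 0.40  # illustrative average cost per install

# Hypothetical average ad revenue per user for days 0..6 of a cohort.
daily_arpu = [0.09, 0.08, 0.07, 0.07, 0.06, 0.06, 0.05]

# Find the first day on which cumulative revenue covers the CPI.
cumulative = list(accumulate(daily_arpu))
payback_day = next((d for d, rev in enumerate(cumulative) if rev >= CPI), None)
print(payback_day)  # 5 -- the cohort recoups its $0.40 CPI on day 5
```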
The Advanced Way: Custom performance models
The most advanced buyers use real-time predictive feedback to adjust campaigns dynamically. This allows UA to respond fluidly to market conditions, indicating where budgets should be increased and where buys are missing ROAS targets. We’ve seen this done by everyone from the giants all the way down to sophisticated indies: one app developer/publisher we talked to had 75 people on their data science team alone, while another indie had a company-wide headcount of 3, including developers. (If you’re curious about what data scientists actually do at app companies, check out this interview.)
At this advanced stage, prediction models often differ between titles, and sometimes even between geos. The AdLibertas Machine Learning Prediction models use historical earnings to create custom prediction models for fast estimation of earnings and conversions on campaigns.
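The AdLibertas models themselves are proprietary, but to illustrate the general shape of the technique, here’s a toy regression that maps early (day-3) revenue to a predicted lifetime value. All numbers are invented, and real models would use far more features and history:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented training data: day-3 revenue vs. eventual 180-day LTV
# for historical users. Real models use many more features than this.
day3_revenue = np.array([[0.00], [0.10], [0.25], [0.50], [1.00]])
ltv_180d = np.array([0.05, 0.60, 1.40, 2.80, 5.50])

# Fit a simple linear model from early revenue to lifetime value.
model = LinearRegression().fit(day3_revenue, ltv_180d)

# Predict LTV for a new cohort averaging $0.30 of day-3 revenue.
print(f"predicted LTV: ${model.predict([[0.30]])[0]:.2f}")
```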
Conclusion
No matter where you are in your journey, the key to sustainably scaling UA is effectively measuring your ROAS; it’s one of the only ways to get actionable feedback on your marketing budgets. Think back to one of the seminal quotes in marketing, from John Wanamaker: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” While today’s successful mobile app marketers may face mounting obstacles in measuring success, through effective measurement and iteration they can put their advertising budgets to work far more effectively than Mr. Wanamaker could in the early 1900s.