
Measuring The Success of Your Personalization Campaign

Jason Hamrick | Principal Strategist, Data & Insights

March 12, 2020


In earlier blog posts about personalization, we’ve discussed how to prepare your organization for personalization, how to set goals, the importance of creating a hypothesis, and the details of implementing a personalization campaign. 

But, how do you measure the success of your campaign?

Testing The Hypothesis

Your campaign hypothesis should include the result you expect to see. That expected result should be something you can capture as a Key Performance Indicator (KPI) and track and measure over the duration of the campaign.

For example, in our earlier blog post about developing a personalization hypothesis, we used the example of reversing mobile shopping cart abandonment by showing a CTA to returning visitors:

If we show a reminder CTA to our mobile users on their return visit, then we will see an increase in completed purchases from abandoned carts, because our mobile user click data demonstrates they take action at the top of the browser page on return visits.

This hypothesis can be tracked via a single KPI: the percentage of returning visitors who complete a purchase after clicking our reminder CTA.
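To make that concrete, here is a minimal sketch of how such a KPI could be computed from raw event records. The field names (visitor_id, event, is_returning) and the data are hypothetical stand-ins for whatever your analytics export actually provides:

```python
# Minimal sketch: computing the campaign KPI from raw event records.
# Field names and data are hypothetical stand-ins for an analytics export.

events = [
    {"visitor_id": "a1", "event": "cta_click", "is_returning": True},
    {"visitor_id": "a1", "event": "purchase",  "is_returning": True},
    {"visitor_id": "b2", "event": "cta_click", "is_returning": True},
    {"visitor_id": "c3", "event": "purchase",  "is_returning": False},
]

# Returning visitors who clicked the reminder CTA.
clickers = {e["visitor_id"] for e in events
            if e["event"] == "cta_click" and e["is_returning"]}

# Visitors who completed a purchase.
purchasers = {e["visitor_id"] for e in events if e["event"] == "purchase"}

kpi = len(clickers & purchasers) / len(clickers) if clickers else 0.0
print(f"{kpi:.1%} of returning visitors who clicked the CTA completed a purchase")
```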

Measurement

Once you’ve determined your KPI, the next step is to see if you already measure it. In many cases, you’ll learn that you aren’t measuring that exact KPI, but you may be tracking a similar KPI you can use for comparison. In our example above, we are likely already tracking a similar metric that can serve as a baseline for our campaign: shopping cart completions for returning visitors.

If you don’t have an existing metric that can serve as a suitable baseline, you’ll need to create one and gather data until you have a number you trust. With a baseline in place, you can also determine a target for your KPI. Your KPI target should be achievable (don’t expect a 1,000% increase!) and informed by an industry benchmark.
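As a rough illustration with made-up numbers, a baseline and target could be derived like this (the 6% benchmark below is an assumption for the example, not a real industry figure):

```python
# Rough illustration with made-up numbers: deriving a baseline conversion
# rate from historical data and setting an achievable target.

historical_returning_visits = 12_400
historical_cart_completions = 620

baseline = historical_cart_completions / historical_returning_visits  # 5.0%

# Cap the target at a (hypothetical) industry benchmark of ~6%, rather
# than setting a wildly optimistic multiple of the baseline.
INDUSTRY_BENCHMARK = 0.06
target = min(baseline * 1.2, INDUSTRY_BENCHMARK)

print(f"Baseline: {baseline:.1%}, target: {target:.1%}")
```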

Once you’ve determined the metric, configure your personalization engine (like Acquia Lift) and web analytics suite (like Google Analytics) to capture that metric. 

Create Your Report and Socialize It 

Personalization campaigns are about driving progress towards an organizational goal, so you need to report on the results of your campaign. Our preferred approach is to add a new chart to an existing data dashboard, created in a tool like Google Data Studio. 

When creating a new chart or report, take care that the format of the chart is appropriate to your audience. Transparency is key to internal buy-in for a personalization program, so it’s important that everyone can understand this information.

The advantage of this approach is that it presents your campaign KPI side-by-side with your standard reporting, adding context and nuance to what might be a single metric view of the campaign. Rather than quietly adding the chart to the report, use its addition as a reason to meet with your stakeholders, review all of the relevant analytics, and explain how this metric fits into your bigger customer data and insights story. 

Including your new chart within an existing dashboard or report subtly reinforces the notion that personalization and experimentation are baked into your broader analytics strategy. Insights gleaned from your personalization campaigns should inform content creation investments, changes to user experience, new technology/development approaches, and potentially direct marketing dollars toward new initiatives. The customer data you collect during your campaign isn't an end unto itself - it is knowledge that leads to informed action.

Ensuring Campaign Results are Actionable

One method to ensure that your campaign results are valid and actionable is to conduct your personalization experiments as A/B tests.

Running your campaign as an A/B test allows you to compare the performance of your personalized content against the default control, even within your audience segment. This adds validity to your baseline, helps you account for any peculiarities in your segment, and reduces the risk of getting muddy data from testing too many variations at once.

Using our cart abandonment example, running the campaign as an A/B test helps us account for the possibility that returning visitors are more likely to complete a purchase, even without being encouraged by a specific call to action.
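When the campaign closes, one standard way to check whether the variant's lift is real is a two-proportion z-test, the same math behind most online A/B calculators. A minimal sketch with hypothetical counts:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical results: purchases / visitors in each arm of the A/B test.
control_conversions, control_visitors = 310, 6200   # default experience
variant_conversions, variant_visitors = 392, 6100   # reminder CTA shown

p1 = control_conversions / control_visitors
p2 = variant_conversions / variant_visitors

# Pooled standard error under the null hypothesis of no difference.
p_pool = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)
se = sqrt(p_pool * (1 - p_pool) * (1 / control_visitors + 1 / variant_visitors))

z = (p2 - p1) / se
p_value = 2 * norm.sf(abs(z))  # two-sided test

print(f"control {p1:.2%} vs variant {p2:.2%}: z = {z:.2f}, p = {p_value:.4f}")
```

With these made-up numbers the difference is significant (p < 0.05), but with smaller samples the same lift could easily be noise - which is exactly why campaign duration matters.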

Of course, this raises the question “How long do I need to run my campaign before I see trustworthy results?” You’ll find endless answers to this question, but they can be distilled into three best practices:

  1. Run your campaign for as long as it takes to get a sample size large enough to reach statistical significance. Optimizely and Visual Website Optimizer both have simple online calculators you can use to estimate sample size and campaign duration (a back-of-the-envelope version is sketched after this list). If you can’t reach statistical significance due to a small sample size, run the campaign long enough to get a solid indication of a trend. Personalization is as much art as science, so it’s OK to trust your intuition.
  2. Run your campaign for at least a single business cycle - usually a week or a month, depending on your particular business. Running for a full week or month lets you account for differences in daily traffic (like lower weekend traffic) or the impact of sales and other events that might otherwise skew your normal traffic pattern.
  3. Run your campaign for full business cycles - don’t stop midweek or mid-month.
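As a back-of-the-envelope version of what those calculators do, here is the standard sample-size formula for comparing two proportions. The baseline rate and minimum detectable lift are inputs you supply; the 5% baseline and 20% lift below are hypothetical:

```python
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_arm(baseline, relative_lift, alpha=0.05, power=0.8):
    """Visitors needed in each arm to detect `relative_lift` over
    `baseline` at the given significance level and statistical power."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return ceil(n)

# Hypothetical inputs: 5% baseline completion rate, hoping to detect a 20% lift.
n = sample_size_per_arm(baseline=0.05, relative_lift=0.20)
print(f"~{n} visitors per arm; divide by daily traffic to estimate duration")
```

Dividing the per-arm sample size by your typical daily traffic in the segment gives a rough campaign duration, which you then round up to a full business cycle per the guidelines above.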

Comparing Metrics Across Tools

As you gather your metrics, you may find that your personalization engine and your website analytics suite report different numbers.

While you’ll want to check for errors in your configuration, it is just as likely that reporting discrepancies are due to differences in how each tool defines and captures its metrics.

For example, Google Analytics and Adobe Analytics often report different values for Pageviews, because the GA tag fires at the top of the page and the Adobe Analytics tag fires at the bottom. If someone leaves a page before the Adobe Analytics tag loads, it will not capture that pageview. Understanding how different tools report similar metrics can help you better contextualize the impact of an individual campaign.
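If you want a quick sanity check rather than a full reconciliation, a simple comparison like this (the numbers and threshold are hypothetical) can flag when the gap between tools is large enough to investigate:

```python
# Quick sanity check with hypothetical numbers: flag when two tools'
# pageview counts diverge by more than a tolerance worth investigating.

ga_pageviews = 48_210      # tag fires at the top of the page
adobe_pageviews = 46_105   # tag fires at the bottom of the page

gap = abs(ga_pageviews - adobe_pageviews) / max(ga_pageviews, adobe_pageviews)
TOLERANCE = 0.10  # arbitrary threshold; tune it to your own tooling

if gap > TOLERANCE:
    print(f"Investigate: tools disagree by {gap:.1%}")
else:
    print(f"Within tolerance: {gap:.1%} discrepancy between tools")
```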

Pulling the Plug on an Underperforming Campaign

Once your campaign is in flight, you will naturally want to check on its progress every day by loading your dashboard and tracking your KPI.

We encourage you to do so at the start of the campaign, if only to double-check that you are gathering data correctly. After that, resist the urge to obsessively monitor that number. Over the course of your campaign, you may find that your KPI is moving in the wrong direction, and you might be tempted to pull the plug on your campaign. Resist that urge!

Wait at least one business cycle before making any decisions about next steps for your campaign. What you are seeing might be a result of mid-cycle variations or the natural shifts that occur in your traffic. Exercise patience and let your campaign run long enough for a valid sample size. If you’re questioned by your stakeholders, remind them that you are building a culture of experimentation. At the close of the campaign, you should honestly report on the results - the good, the bad, and the ugly - and adjust your strategy for the next campaign.

