The success of a specific email marketing campaign is defined by whether the goals set out at the beginning are met. But to properly analyze the efficiency and results of email deployments, it is essential to know which metrics are actually relevant and which might not matter all that much.
In this article from our ongoing Email Marketing Essentials series, we will address these aspects at length, while also sharing the best practices that will help you outperform expectations and predictions. If you are new to the world of email marketing, we have you covered: we have detailed the basics of email marketing, tips and tricks for crafting the perfect email strategy, what to consider and what to avoid when sending newsletters to your subscribers, and an overview of email regulations and deliverability.
Email Marketing KPIs
Before you can start setting out objectives and key performance indicators for email deployments, it is important to return to your strategy. What do you want to achieve with your email campaigns? Once you can answer this important question, you can map the relevant metrics that you need to keep a close eye on.
There are two important metric categories that your email service provider will most likely provide for each campaign:
- engagement: here you can pay attention to metrics such as open rate and click rate
- disengagement: metrics like unsubscribes and bounces (either hard or soft bounces)
Let’s start defining them and then try to explain why and where you need to follow them closely.
Open rate translates into the share of subscribers on your list who have opened the newsletter. While this number will vary with the size of your email list, as well as with when you schedule your deployment, it remains one of the most direct signals of how healthy a campaign is. If users never really engage with your email and simply delete it or mark it as read, you have every reason to start making changes.
Click rate refers to the percentage of subscribers who have opened an email and also clicked on at least one link. Clear calls to action and clean, easy-to-read emails can contribute to a higher click rate. If you want recipients to take any kind of action after reading your message, tracking clicks will help you measure how effective your campaign has been.
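The two engagement metrics above boil down to simple arithmetic. Here is a minimal sketch of how they are computed, using hypothetical campaign numbers of the kind you would pull from an ESP report:

```python
def open_rate(opens: int, delivered: int) -> float:
    """Share of delivered emails that were opened, as a percentage."""
    return opens / delivered * 100

def click_rate(clicks: int, delivered: int) -> float:
    """Share of delivered emails with at least one link click, as a percentage."""
    return clicks / delivered * 100

# Illustrative campaign: 10,000 delivered, 2,200 unique opens, 330 unique clicks
print(f"Open rate:  {open_rate(2200, 10000):.1f}%")   # Open rate:  22.0%
print(f"Click rate: {click_rate(330, 10000):.1f}%")   # Click rate: 3.3%
```

Note that ESPs differ in whether they count unique or total opens and clicks, so always check how your provider defines each metric before comparing against benchmarks.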
Whenever you encounter issues with these engagement metrics, you can try to mix things up. Set up A/B testing, something we will address in-depth below, change your email template, or review the best day and time for sending out newsletters to see whether you are reaching subscribers at a bad time. Use the email marketing benchmarks from your ESP, or those offered by GetResponse or MailChimp, to generate action points for your improvement plan.
Unsubscribing is a term that describes a recipient’s decision to opt out of your email list. That means you will need to remove them from future communication. Overwhelming subscribers with too many emails will eventually lead to people calling it quits. You don’t want to crowd their inbox; you want to remain top-of-mind and be helpful with every email you send. Take into account any triggered emails or drip campaigns when analyzing how often you send newsletters.
There are two major types of bounces when it comes to email. The first one, soft bounces, means that there has been a temporary deliverability issue. It might happen when the mailbox is full or not configured correctly, the email is too large, or the recipient’s email server is offline or temporarily down. The second one, hard bounces, occurs when the email address is invalid or doesn’t exist, or when the recipient’s email server has stopped accepting incoming emails.
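The soft/hard distinction maps onto SMTP reply codes: 4xx codes signal temporary failures, while 5xx codes signal permanent ones. As a minimal sketch, assuming you have access to the raw reply code from your bounce logs, classification could look like this:

```python
def classify_bounce(smtp_code: int) -> str:
    """Classify a bounce by its SMTP reply code.

    4xx replies are temporary failures (soft bounces);
    5xx replies are permanent failures (hard bounces).
    """
    if 400 <= smtp_code < 500:
        return "soft"
    if 500 <= smtp_code < 600:
        return "hard"
    return "unknown"

print(classify_bounce(452))  # soft (e.g. mailbox full / insufficient storage)
print(classify_bounce(550))  # hard (e.g. mailbox does not exist)
```

In practice your ESP does this classification for you, and some providers apply extra rules (for example, treating repeated soft bounces as hard), so treat this as an approximation of the general convention rather than any specific provider’s logic.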
Both types of email bounces are indicative of problems with your email list. One thing worth trying is an email list cleaner. DataValidation offers a reliable and trusted email validation solution, aimed at improving your email reputation.
If you see drops in engagement or spikes in the number of unsubscribers, those are situations that point to deeper problems with your overall email marketing strategy. You might want to consider making changes to the email calendar, send out different types of campaigns or check for compliance irregularities.
As we have previously mentioned, it can be difficult, at times, to pinpoint the exact aspect that contributes to a decline in email marketing metrics. Even when you know precisely the source of an issue, it can be hard to find the best approach to address it. That’s where A/B testing comes into play. A/B testing, or split testing, lets you send out two competing versions of an email, known as variants, that differ in one element of your choosing. Eventually, email marketers can see which version performs better and extrapolate those conclusions to their entire email strategy.
What are some good contexts where A/B testing fits perfectly? One thing you need to keep in mind is that not all users will appreciate the same communication style. Some of them might prefer emails that are less flashy, without images, while others will find those boring and lacking. Some will prefer that you use personalisation when addressing them, while others will consider that inappropriate or inauthentic. To answer the question above: A/B testing is a good idea whenever you are not sure which approach will bring the best results.
Let’s be more specific. A/B testing in the email world has 5 ground rules. Knowing them will also clarify how and when to use A/B testing for your campaigns.
- Only choose one element or asset for each test. Test different subject lines, CTAs or images, but always limit yourself to just one asset. After your A/B test is final and you have picked a winner, you can start testing a different asset.
- Only have one variable. You might be inclined to test two or more variables, because you want to gather as much information as possible from a single test. But doing so will defeat the purpose and make drawing conclusions unnecessarily difficult, or downright impossible.
- Always have a control version. A/B testing isn’t viable without a control version of your email. The control is your current version; the variant is the one with the changed asset. You will be testing the variant against the control, so that the outcome is actionable.
- Use random split audiences. Most well-known email service providers offer complete A/B testing features, including audience randomization. That way, you won’t have to create segments prior to deployment. Random split audiences ensure that no bias or preconception influences who receives the control or variant version of the email.
- Send the A/B test at the same time. For the results to be relevant, both versions need to be sent out at the same time. Otherwise, you risk skewing the outcome by letting timing affect how subscribers engage with your campaign.
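The random-split rule above is something your ESP handles automatically, but here is a minimal sketch of what it amounts to, assuming a plain list of subscriber addresses:

```python
import random

def split_audience(subscribers, seed=None):
    """Shuffle a subscriber list and split it into two random halves.

    Returns a (control, variant) pair. Shuffling before splitting is
    what removes bias such as signup order or alphabetical grouping.
    """
    rng = random.Random(seed)       # seeded for reproducibility in this demo
    shuffled = subscribers[:]       # copy so the original list stays intact
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

control, variant = split_audience(
    [f"user{i}@example.com" for i in range(10)], seed=42
)
print(len(control), len(variant))  # 5 5
```

The email addresses and the 50/50 split here are illustrative; many ESPs also let you test on a smaller sample (say, 10% vs 10%) and send the winner to the remaining audience.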
Multivariate testing is the next step in sending out different versions of the same email. You will be testing multiple variations, not just one, to see not just what performs better, but also to find out which combinations of assets deliver the best results.
You can create several variations of copy, images, CTAs or even entire templates, with different fonts and colors. Keep in mind that you will need a software solution that supports multivariate testing, because doing it manually is impractical and could also skew the results. MailChimp, which is natively integrated with our email verification service, has multivariate testing built in. Read this great guide from Uplers Email to learn more on the topic of A/B and multivariate testing.
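To see why multivariate testing needs tooling, it helps to count the combinations involved. Here is a short sketch; the subject lines, CTA labels and image names are purely illustrative assumptions:

```python
from itertools import product

# Hypothetical assets to combine in a multivariate test
subject_lines = ["20% off today", "Your weekly digest"]
cta_labels    = ["Shop now", "Learn more"]
hero_images   = ["lifestyle.jpg", "product.jpg"]

# Every combination of one subject, one CTA and one image
variants = list(product(subject_lines, cta_labels, hero_images))
print(len(variants))  # 2 x 2 x 2 = 8 distinct email versions

for subject, cta, image in variants[:2]:
    print(subject, "|", cta, "|", image)
```

With just two options per asset you already have eight versions to build, send and compare, which is why managing this manually quickly becomes impractical.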
To achieve all email campaign goals, it is important to remember the lessons we have learned today. Here is what you need to keep in mind when setting out KPIs and when analyzing your email data:
- Create a clear email marketing strategy and adhere to it religiously
- Follow both engagement and disengagement email marketing metrics to have a clear picture on results
- Implement regular A/B testing to analyze results and extract useful insights. When possible, and when you feel comfortable with the approach, also try multivariate testing.