- Many marketers' email deliverability measurements are incomplete
- True delivery measurement requires both your failure logs and your seed list results
- Tracking your "campaign potential rate" reveals the real value of your campaigns
Lately, I have heard a number of people talking about email delivery rates and what they really mean. While the topic of deliverability is not new to the email space, it is an important one that deserves discussion. When marketers talk about numbers and metrics, they invariably focus on opens and clicks, not delivery rates. With that being the case, I wanted to take a look at what delivery rates really are and, more importantly, how you should be looking at them.
One of the things that always makes me laugh is when clients or prospects tell me that some other email service provider promised them 99 percent delivery. Here is the problem with that: It's impossible to make such a promise without knowing anything about your list, your previous practices, or other important factors affecting deliverability. When it comes down to it, the sending platform is only half of the deliverability equation; a sender's practices make up the other half.
There is also confusion around how people look at deliverability. Some folks in the email space think that delivery rates are simply the number of messages sent minus the messages that failed, while others look at it strictly from an inbox-percentage perspective. I would argue that neither view by itself is correct; instead, I see it as a combination of the two. While this combined viewpoint is a little more complicated and harder to track, it is the most effective way to look at deliverability over the long term. Both of these statistics should be monitored, trended, and acted upon, but they don't tell the whole story. What we really need is a new metric: the "campaign potential rate." But I'll get into that a little later. First, let's discuss these two approaches in more detail.
Tracking your delivery rate as the number of messages sent minus the number of messages bounced will tell you whether you are keeping your list clean and avoiding blocking issues with ISPs; however, it won't tell you where your messages are landing. Did your messages reach the inbox or the bulk folder, or did the ISP simply accept the messages and then never actually deliver them to its users? This practice of "dropping messages on the floor" is not as uncommon as you might think; ISPs do it to protect their networks and to monitor mailings from suspicious senders. This makes it all the more important to know what ultimately happens to your messages.
I'm sure you'll agree that if your message isn't in the inbox, you will get very little response. Fewer and fewer people check their spam folders, and that's not surprising, since 99 percent of those messages are typically ones they don't want. The growing volume of mail landing in the spam folder makes it even less likely that people will take the time to dig out a marketing message.
That being said, you might think it's better to use a seed list to determine your inbox delivery and use that as the actual delivery rate. Unfortunately, there's a problem with this approach as well. Seed lists can only offer a representative sample of what likely happened with your mailing, and they can't account for users who have changed their default email settings. For example, a customer could set their account to junk any message received from an address not already in their address book. On the other hand, the seed list could show your message arriving as spam, but if a recipient has added your address to their safe sender list, the message will still arrive in their inbox. In short, several different scenarios could affect these numbers, which is why I continually stress the importance of testing.
So, what's the best approach? Should you be looking at your failure logs or your seed list results? The answer is "both." Think of email delivery as a puzzle, where you need to use all the pieces to get the complete picture. If you are missing any piece, the puzzle is incomplete.
Measuring email delivery comes down to using both your failure logs and your seed list results to determine what I call your "campaign potential rate" (i.e., the potential your campaign has to reach customers and get them to act on your email).
To calculate your campaign potential rate, first gather failure data from your email system and then compare it to your seed list results. For example, say your delivery rate is 95 percent, but your seed list shows that your mail was sent to the spam folder at MSN/Hotmail, which makes up 30 percent of your overall list. In that case, your actual campaign potential rate would be 95 percent minus 30 percent, or 65 percent.
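The arithmetic above can be sketched in a few lines of Python. This is an illustrative helper, not part of any particular email platform; the function name and the per-ISP breakdown are assumptions for the example:

```python
def campaign_potential_rate(delivery_rate, spam_segments):
    """Campaign potential rate: the delivered share of the list minus the
    share at ISPs where the seed list shows mail landing in the spam folder.

    delivery_rate  -- fraction accepted (sent minus bounces), e.g. 0.95
    spam_segments  -- {isp_name: fraction_of_list} for ISPs where seeds hit spam
    """
    return delivery_rate - sum(spam_segments.values())

# The example from the text: 95 percent delivered, but MSN/Hotmail
# (30 percent of the list) junked the message.
rate = campaign_potential_rate(0.95, {"MSN/Hotmail": 0.30})
print(f"{rate:.0%}")  # prints 65%
```

If several ISPs are junking your mail, each one's share of the list goes into `spam_segments` and is subtracted the same way.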
While it's possible that more than 65 percent of recipients actually received the message in their inbox, this is still a useful way to look at your mailings. Not only does this approach allow you to track trends; when you fix the spam problem, you will also be able to measure the actual effect it had on the overall ROI of your campaigns. Say you earned $10,000 from the campaign with the 65 percent campaign potential rate, but the next campaign reached the inbox at MSN/Hotmail and earned $15,000. You can now see the direct impact of getting junked at MSN/Hotmail. This is valuable information that can be used to push for the best practices that help ensure inbox delivery.
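To make the ROI comparison concrete, here is a minimal sketch using the hypothetical figures from the text (the campaign names and numbers are the example's, not real data):

```python
# Two hypothetical campaigns to the same list; the only change is that the
# second one reached the MSN/Hotmail inbox instead of the spam folder.
campaigns = [
    {"name": "junked at MSN/Hotmail", "potential_rate": 0.65, "revenue": 10_000},
    {"name": "inbox at MSN/Hotmail", "potential_rate": 0.95, "revenue": 15_000},
]

# Revenue difference attributable to fixing inbox placement.
lift = campaigns[1]["revenue"] - campaigns[0]["revenue"]
print(f"Revenue lift from reaching the inbox: ${lift:,}")  # prints $5,000
```

Trending this figure across campaigns is what turns inbox placement from an abstract deliverability concern into a line item you can show to stakeholders.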
The bottom line is that you should use all the tools available to you to better understand your delivery rates and the overall potential of your campaigns. No one metric should be used to evaluate your email marketing programs because they are all interconnected.
Good luck and good sending.