What do open and click-through rates actually tell us?
- Alex Rodriguez
- Jan 15
- 4 min read
Updated: Jan 16

When I first started in marketing, I thought open and click rates were the be-all and end-all of communicating with customers and other stakeholder groups. Then I confronted a stark reality: my messaging was not driving the outcomes that would advance broader organizational objectives. That raised some questions:
- What were my high open and click-through rates really telling me?
- Were open and click-through rates as relevant as I thought they were?
- If not, which metrics should be paramount when deploying customer communications?
Let's try to answer these questions.
Open Rates, Click Rates, and Click-Through Rates: What do they really mean?
Before I go any further, let's make sure we have our terminology down!
An open rate is the percentage of unique recipients of a message who actually opened it. The definition applies across all communication channels (e.g., email, push notifications, SMS) and is straightforward. Open rates can improve with, for example, engaging subject lines and relevant message timing (remember: right customer, right message, right time!), both of which have to be tested so you can draw conclusions and iterate over time. Beyond merely receiving the message in an inbox or on a phone, opening it is the first level of engagement. Depending on your industry, a healthy open rate usually lies somewhere between 15% and 25%. But here is the catch: hitting your industry benchmark does not mean your message is optimized. To determine whether your message resonated, you must test it against another message (e.g., a different subject line or send time). This continuous process is a means of refinement, not of perfecting your messaging.
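As a quick sanity check on the arithmetic, here is a minimal sketch. All figures are hypothetical, chosen only to land inside the 15-25% benchmark mentioned above:

```python
# Hypothetical campaign numbers (illustrative only).
delivered = 10_000    # unique recipients who received the message
unique_opens = 2_150  # unique recipients who opened it

open_rate = unique_opens / delivered * 100
print(f"Open rate: {open_rate:.1f}%")  # 21.5% -- inside the 15-25% range
```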
Click rates and click-through rates (CTR) are often used synonymously but are not quite the same. A click rate is measured as a percentage of all recipients, while CTR is calculated as a percentage of those who opened the message. Most marketers, myself included, contend that CTR is the better metric because it tells you whether the content within the message, or the call-to-action, was engaging enough to warrant a click. To increase CTR, you can test different types of content, various forms of multimedia, and different calls-to-action. Again, no silver bullet will get you the CTR you seek. You have to consistently test and deploy your findings in further iterations.
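The denominator is what separates the two metrics. A short sketch with hypothetical counts shows how the same number of clicks reads very differently depending on which base you divide by:

```python
# Hypothetical campaign counts (illustrative only).
delivered = 10_000    # unique recipients
unique_opens = 2_150  # unique recipients who opened
unique_clicks = 430   # unique recipients who clicked a link

click_rate = unique_clicks / delivered * 100  # % of ALL recipients
ctr = unique_clicks / unique_opens * 100      # % of those who OPENED

print(f"Click rate: {click_rate:.1f}%")  # 4.3%
print(f"CTR:        {ctr:.1f}%")         # 20.0%
```

Same 430 clicks, two very different percentages, which is why it matters to state which denominator a report is using.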
Business Results: The numbers that actually matter
But here is the thing: these metrics are NOT business indicators!! They only measure how engaging your message is. You can have the most engaging message on earth, but if it is not driving the KPI you aim to hit, it is ineffective. As a marketer, you want to define effectiveness by whether your message got the customer to engage in the desired behavior. Did they make a purchase? Did they engage with the app? Was the customer retained? A marketing test comparing a treated sample to a control group (customers who fall in your target segment but do not receive the marketing treatment) will help you understand whether those who received the treatment were more likely to engage as intended than those who did not, which is what determines whether your message moved the needle.
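A minimal sketch of such a holdout comparison, using made-up conversion counts, might look like this:

```python
# Hypothetical A/B result: did the treated group convert more?
treated_n, treated_conversions = 5_000, 275  # received the message
control_n, control_conversions = 5_000, 225  # held out, no message

treated_rate = treated_conversions / treated_n  # 5.5%
control_rate = control_conversions / control_n  # 4.5%
lift = (treated_rate - control_rate) / control_rate * 100

print(f"Treatment: {treated_rate:.1%}, Control: {control_rate:.1%}")
print(f"Relative lift: {lift:.1f}%")  # ~22% more conversions
```

The lift only means something once you know how certain you can be that it is not noise, which is where the next section comes in.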
Statistical significance also plays an integral role in the conclusions you draw from any marketing test. It tells the marketer how much certainty they can ascribe to their findings. For example, if a test reaches only a 75% confidence level, there is still a 25% chance the observed difference arose from random variation, which is generally considered too low to draw conclusions about a particular KPI. When publishing an academic article, a researcher is typically expected to reach statistical significance at the 95% level. A bar that high is warranted in academia because researchers are building a body of literature on new, fact-based findings intended for publication. Industry is (and should be) quite different. In my experience, an 85% confidence level is a common standard for deciding whether a finding is solid enough to build conclusions on and continue iterating. To avoid letting the perfect become the enemy of the good, organizations will also accept a finding when a test has been run several times and consistently finishes just below their significance threshold.
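Under the hood, the confidence level for a conversion-rate test usually comes from something like a two-proportion z-test. Here is a hedged sketch using only the Python standard library; the counts are hypothetical and the helper name is my own:

```python
import math

def two_proportion_confidence(x1, n1, x2, n2):
    """Two-sided two-proportion z-test; returns confidence level (1 - p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                    # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return 1 - p_value

# Hypothetical counts: 275/5000 treated vs 225/5000 control conversions.
conf = two_proportion_confidence(275, 5_000, 225, 5_000)
print(f"Confidence: {conf:.1%}")  # ~97.8%, clearing an 85% industry bar
```

With these particular numbers the result would clear both the 85% industry threshold and the 95% academic one; smaller samples with the same rates would not.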
Do engagement metrics matter?
Most definitely!! It is virtually impossible to prod a customer toward a desired behavior when they do not open your messages or click your links. That is why it is essential to always TEST, TEST, TEST!! Testing is the only way to learn what type of content gets customers to the offerings that provide them the most value. When conducting a marketing test, however, remember that engagement metrics tell only a small part of the story; at the end of the day, the point of marketing (or any business function) is to drive broader organizational success as measured by your KPIs.
Have you ever confronted this conundrum? Please share your experiences below and let's get a conversation going.
If you and your team are facing some customer marketing challenges or need some help getting off the ground, schedule a free one-hour consultation. I would love to learn more about your organization and discuss how I could help you and your team drive meaningful and impactful results.