For every direct marketer there are good months and not-so-good months for selling by mail. On average, across all types of businesses and products, the months rank from best to worst as follows:
January, February, October, August,
November, September, December,
July, May, April, March, June.
Of course, your best/worst months may vary from the average. For example, if you're a gourmet food-to-consumers marketer, your best months may be those during the Christmas season. The key point here is that you should track your mailings and responses so that you know which months work best and worst for you.
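As a minimal sketch of that kind of tracking (the function name and all figures here are hypothetical, not from the article), you could aggregate response rates by month and rank your own best-to-worst months:

```python
from collections import defaultdict

def response_rates_by_month(mailings):
    """mailings: list of (month, pieces_mailed, responses) tuples.
    Returns each month's overall response rate across all mailings."""
    totals = defaultdict(lambda: [0, 0])  # month -> [pieces, responses]
    for month, pieces, responses in mailings:
        totals[month][0] += pieces
        totals[month][1] += responses
    return {m: r / p for m, (p, r) in totals.items()}

# Illustrative mailing history (made-up numbers):
history = [
    ("January", 100_000, 1_100),  # 1.1% response
    ("June",     50_000,   370),  # 0.74% response
    ("January",  40_000,   480),  # 1.2% response
]
rates = response_rates_by_month(history)
ranked = sorted(rates, key=rates.get, reverse=True)  # best month first
```

A ranking like `ranked` is what lets you apply the rule of thumb below to your own business rather than the industry average.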
Why? Because of this rule-of-thumb: You should run your major tests in the months that are less responsive than the months in which you expect to do the rollouts (given the test is successful enough to justify the rollout).
Using the average best-to-worst breakdown above, the rule says that you wouldn't do a major test in January and then roll out in June.
Instead, you'd go with your control mailing in January. Sure, you can run tests alongside your control in your best months. But if you do, the breakdown of control to test should be at least 70% control.
There are two reasons for this ...
1. Your best response/profit months should be reserved primarily for your known winners. You don't want to risk potential losers in your most profitable mailing month.
Therefore, if you test, commit only 10% to 30% of your mailing to the unproven. Assuming you were mailing 100,000 pieces and wanted to run two different tests against your control, you'd have a breakdown of at least 70,000 control with 15,000 for each test variation.
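That split arithmetic can be sketched as a small helper (`split_mailing` is a hypothetical function; the 70% floor is the article's rule of thumb):

```python
def split_mailing(total_pieces, control_share=0.70, n_tests=2):
    """Reserve at least control_share of the mailing for the control,
    and divide the remainder evenly among the test variations."""
    control = int(total_pieces * control_share)
    per_test = (total_pieces - control) // n_tests
    return control, per_test

# The article's example: 100,000 pieces, two tests against the control.
control, per_test = split_mailing(100_000)
# control -> 70,000 pieces; each test variation -> 15,000 pieces
```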
2. If you run your major tests in your best mailing month, it's highly unlikely that you'll match the results on your rollout. Of course, you could wait for another good month before rolling out, but time has a way of eroding response as well.
For instance, since June is historically just 67% as responsive as January (for most firms), you could expect your rollout results to be no more than 67% as good as the January test results. Add the fact that rollout results seldom equal test results (even when mailed in the same month), and you'd have a difficult time forecasting the results of a June rollout from a January test with much accuracy.
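The discounting described here is simple multiplication. As a hedged sketch (`forecast_rollout` is a hypothetical helper; the 67% index is the article's example figure, and real indices would come from your own mailing history):

```python
def forecast_rollout(test_response, seasonal_index):
    """Scale a test month's response rate by the rollout month's
    historical responsiveness relative to the test month."""
    return test_response * seasonal_index

# A 1.0% January test, rolled out in June (67% as responsive):
expected_june = forecast_rollout(0.010, 0.67)  # at most about 0.67%
```

And since rollouts seldom match tests even in the same month, this is a ceiling, not a forecast you could bank on.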
It's usually much safer to run major tests in a month that historically is less responsive than the rollout month. That way you'll risk less ... and have the new winning package ready to be rolled out at a time when you're likely to get the best response.
Let's assume that you run a 50-50 A-B split test in June. Package A, your control, generates a 0.8% response while your test package B produces a 1.1% response.
To confirm results, you re-test with a higher quantity mailing in September (re-tests should be no more than 10 times the original number of test mailings).
Since September historically produces a response about 18% higher than June, you would expect this re-test to do at least as well as June did. Let's assume that on your re-test the control (A) now produces 0.9% while the test (B) comes in at 1.25%.
This September re-test confirms that your test package beat your control by about 38%. The test package now becomes your control.
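The lift figures in this example are straightforward to verify (a minimal sketch; `lift` is a hypothetical helper, and the rates are the article's example numbers):

```python
def lift(test_rate, control_rate):
    """Relative improvement of the test package over the control."""
    return (test_rate - control_rate) / control_rate

june_lift = lift(0.011, 0.008)        # 1.1% vs 0.8% -> about 37.5%
september_lift = lift(0.0125, 0.009)  # 1.25% vs 0.9% -> about 38.9%
```

The June and September lifts land close together, which is what makes the re-test a confirmation rather than a fluke.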
Now you're ready to do a full rollout in January ... when you historically enjoy your best response. This allows you to maximize your income from your best months.
Copyright 2000 by Galen Stilson. Reproduction without permission is strictly prohibited.