This article was originally written in 2002 by R.D. Ellison as a sales pitch/promotion for his then-new book “Gamble To Win Roulette”. Although I cannot verify that his method (named 3Q/A) actually works (maybe we will post it here in the future for you to decide 😉), it is worth reading, as it is one of the most well-known articles defending the “beat-ability” of roulette by a system, despite the house edge.
American Roulette Is Now Mathematically Beatable
Whenever a system becomes completely defined, some damn fool discovers something that either abolishes the system or expands it beyond recognition.
From the moment the first casino opened, there has been an ongoing controversy as to whether table game decisions are affected by previous results. Many players believe that past results do matter. They wait for an even money proposition to win three or four decisions in a row, and then bet on the opposite choice, figuring that it is statistically due. But this kind of thinking is ridiculed by gaming experts, authors, and purists, because it implies that the dice or roulette wheel react to past events. “The wheel has no memory,” they say of roulette. Therefore, those who believe that past events influence future results are frequently considered to be misguided or naïve.
If you ask ten gaming experts if gaming decisions are independent of all previous decisions, you’ll hear the word “yes” approximately ten times. That figure, in fact, was derived from a recent survey I conducted of gaming authors, which assumed there was no bias from any mechanical defect or external influence. One of the objectives of this article, however, is to prove that all ten of those answers are wrong.
Is it possible to prove that gaming decisions can be influenced by past results? Answering this question begins with a premise: For a roulette wheel to be deemed suitable for live gaming, it would have to show no bias towards or against any of the playable numbers. This could be reasonably established from a trial run of perhaps 3000 spins. At the end of that trial, if the table decisions do not demonstrate a marked deviation from the mathematical expectation, there shouldn’t be a problem. But if the number 8, for example, doesn’t turn up once in all of those spins, then there is a problem. For the wheel to pass the test, all the numbers would have to come up in a pattern that resembles a fairly even distribution.
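As a quick sanity check on that premise, we can compute how unlikely it is for a fair 38-pocket American wheel to skip one specific number for 3,000 straight spins. The 3,000-spin trial figure is the author's; the calculation below is just an illustrative sketch under the standard independence assumption:

```python
# Probability that one specific number (say, the 8) never appears
# in 3000 independent spins of a fair 38-pocket American wheel.
p_miss_one_spin = 37 / 38          # chance the 8 does NOT come up on a single spin
p_miss_3000 = p_miss_one_spin ** 3000

print(p_miss_3000)  # astronomically small -- on the order of 1e-35
```

So even under strict independence, a 3,000-spin trial in which a number never shows is overwhelming evidence of a defective wheel rather than a quirk of chance, which is why such a trial can reasonably certify a wheel as unbiased.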
But let’s take a closer look at the implications of this. If every table game result is an independent event, how can we ever expect any particular number to come up at all? We can’t, because there would be nothing to stop the wheel from selecting a different number, every time. And yet, the same people who say that these numerical events are immaculately independent, expect the numbers to conform with the probabilities. But if such events were truly independent, there would never be a moment, or even a sustained period, when any number could be expected to show up.
There is a causative force that compels numerical events to seek their legitimate place within their assigned probabilities. Whether the dice or wheel have a memory is irrelevant. The influence originates from the effects of statistical propensity, the authority that governs the probabilities of random numerical events.
The key to getting a clear handle on this lies in seeing the difference between viewing table decisions one at a time, or in groups. On a one-by-one basis, it is true that there will never be a time when any number is mandated to appear or not appear. But even in a sampling as small as 3000 spins, you will never see what might be regarded as a catastrophic deviation from the statistical expectation. There’s not an unbiased roulette table on earth that can make it through that many spins without our number 8 coming up at least two or three dozen times.
To understand why this is the case, one must know a little something about the characteristics of the numbers that form the table decisions at roulette. Toward that end, let us look at the 15,000 actual casino spins, as they appear in Erick St. Germain’s Roulette System Tester. These spins are broken down into thirteen sessions in a Single Number Distribution Chart that appears at the end of the book. This chart shows how many times each of the 38 playable roulette numbers came up in the course of thirteen groups of 1,140 documented spins apiece.
To get the ball rolling, we will look at the occurrences of the number 7. In every group of 1,140 spins, the 7 came up at least 25 times, but never more than 38 times. That works out to an occurrence every 45.6 spins at the sparse extreme, and every 30 spins at the frequent extreme. What’s the average of those two figures? 37.8. That’s just two-tenths away from the exact statistical expectation of 1 in 38.
Something is making that happen. Independent events are not that obedient or precise, particularly in a sampling that small!
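The arithmetic behind those figures can be reproduced directly (the 25 and 38 occurrence counts are the extremes cited from the chart):

```python
spins_per_group = 1140
low_count, high_count = 25, 38   # fewest and most appearances of the 7 per group

sparse_interval = spins_per_group / low_count     # one occurrence every 45.6 spins
frequent_interval = spins_per_group / high_count  # one occurrence every 30 spins

average_interval = (sparse_interval + frequent_interval) / 2
print(sparse_interval, frequent_interval, average_interval)  # 45.6 30.0 37.8
```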
But then, could that just be a fluke? Might we get a whole different set of results from another one of those numbers? Let’s take a look at the entire group:
Taking all 38 numbers into consideration, the fewest times any number showed up was 16, and the most was 50. This is a wider range, which reflects the greater possibility of unconventional trends across a larger sampling, but not one of the 38 numbers tried to escape from the corral. Meaning, each one was compelled to show up a minimum number of times, but not too many times.
This is pretty much how the numbers fall in any group that size. Conformity with this pattern, by and large, is as reliable as a Swiss watch. You never know when a given number will appear, but at the end of the day, every number will have taken its turn in the spotlight. The numbers have neither the inclination nor the means to overlook the mathematics of statistical destiny.
If every gaming result were truly independent, then it would be possible for a roulette table to fail to produce the number 7 in twenty million consecutive spins, because there would be nothing to enforce that occurrence. But in the real world, unless the wheel is biased, there is, for all practical purposes, a 100 percent chance that won’t happen. Anyone who understands the numbers knows that an unbiased table would never make it past the first thousand spins without a 7 coming up.
Assuming the above is true, the only logical conclusion that can be drawn is that it is not possible for gaming results to be truly independent, for those results are constantly bending, however imperceptibly, toward a state of perfect statistical balance. To presume that this is nothing more than a persistent coincidence (that never stops occurring) is not a credible argument!
There’s just one tiny problem with all of this. The consensus of gaming experts and mathematicians is that in such matters, past events have no bearing on future results. And this consensus has evolved for generations, and has withstood the test of time throughout that period. How could all those experts be so wrong?
Actually, quite a few of these authorities have been dancing along the edge of this issue for many years:
In his book, Winning at Casino Gambling, author Lyle Stuart said that he once witnessed an even money wager at baccarat win 23 consecutive times. He said this was the longest streak he had ever seen in all his years of playing. It is also the longest streak that I have ever read about, heard about, or witnessed. If this is (roughly) the farthest a numerical pattern is likely to stray from the norm in what amounts to trillions of gaming decisions, this is no accident, or coincidence.
In another book, Beat the Casino, Frank Barstow follows up that thought with “Dice and the wheel are inanimate, but if their behavior were not subject to some governing force or principle, sequences of 30 or more repeats might be commonplace, and there would be no games like craps or roulette, because there would be no way of figuring probabilities.” He goes on to talk about his Law of Diminishing Probability, which is, in effect, one of the subordinate laws of Statistical Propensity.
But these are gaming authors. What do they know? All right; let’s see what an expert in statistics has to say. In his book, Can You Win, author and statistician Mike Orkin, describing the Law of Averages, seemed to agree with this philosophy when he wrote: “In repeated, independent trials of the same experiment, the observed fraction of occurrences of an event eventually approaches its theoretical probability.” In other words, what goes up, must come down. Given enough trials, a statistical balance will be compelled.
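Orkin's statement is the Law of Large Numbers, and it is easy to watch it at work in a simulation. This is a minimal sketch, assuming a fair 38-pocket wheel; the seed and spin counts are illustrative choices of mine:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def fraction_of_sevens(num_spins):
    """Spin a simulated fair 38-pocket wheel and return the observed fraction of 7s."""
    hits = sum(1 for _ in range(num_spins) if random.randint(0, 37) == 7)
    return hits / num_spins

for n in (380, 3_800, 380_000):
    print(n, round(fraction_of_sevens(n), 5))
# As the number of trials grows, the observed fraction settles
# ever closer to the theoretical probability of 1/38 (about 0.02632).
```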
The only reason the laws of numerical science have not been modified to allow for this logic is because everyone pussyfoots around the issue. It’s just too hot to handle. Indirectly, they embrace the concept, or make obscure references to it, but they don’t challenge the existing philosophy.
But how is this possible? And what makes the 3Q/A work? There is no simple explanation, and if there were, it would be a trade secret. But this much can be said: the 3Q/A is a two-pronged strategy. Each of the two betting categories covers roughly one-third of the layout, and pays 2-1. What you’re doing is playing the one against the other. If you see one of the two betting categories in a state of activity, you choose the other prong as your betting choice for that session. In effect, you’re looking for a trend that is occurring inside of a non-trend.
Because of the way the numbers of each respective group are spaced along the wheel itself, the two form a symbiotic relationship, by virtue of the effects of the dealer signature.