Rouge et Noir
7 minutes • 1455 words
The questions raised by games of chance, such as roulette, are, fundamentally, quite analogous to those we have just treated.
For example, a wheel is divided into a large number of equal compartments, alternately red and black. A needle is spun sharply, and after having gone round a number of times, it stops in front of one of these compartments.
The probability that this compartment is red is 1/2.
The needle describes an angle θ, including several complete revolutions.
I do not know what is the probability that the needle is spun with such a force that this angle should lie between θ and θ + dθ, but I can make a convention.
I can suppose that this probability is φ(θ) dθ. As for the function φ(θ), I can choose it in an entirely arbitrary manner. I have nothing to guide me in my choice, but I am naturally induced to suppose the function to be continuous. Let ε be the length (measured on the circumference of the circle of radius unity) of each red and black compartment.
We have to calculate the integral of φ(θ) dθ, extending it on the one hand to all the red, and on the other hand to all the black compartments, and to compare the results.
Consider an interval 2ε comprising two consecutive red and black compartments. Let M and m be the maximum and minimum values of the function φ(θ) in this interval. The integral extended to the red compartments will be smaller than ΣMε; extended to the black it will be greater than Σmε.
The difference will therefore be smaller than Σ(M − m)ε. But if the function φ is supposed continuous, and if on the other hand the interval ε is very small with respect to the total angle described by the needle, the difference M − m will be very small.
The difference of the two integrals will therefore be very small, and the probability will be very nearly 1/2.
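To make the argument concrete, here is a minimal numerical sketch (my addition, not Poincaré's): it draws the total angle from an arbitrary smooth density φ, spins a large number of times, and checks that the fraction of red outcomes comes out close to 1/2 whenever the compartments are narrow compared with the spread of φ. The particular density and the compartment count are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary smooth law for the total angle described by the needle:
# a gamma distribution spread over many revolutions (purely illustrative).
n_spins = 1_000_000
theta = rng.gamma(shape=20.0, scale=3.0, size=n_spins)  # ~60 rad, i.e. ~10 turns on average

# The wheel: 2n equal compartments, alternately red and black.
n_pairs = 18                          # 18 red + 18 black compartments
epsilon = 2 * np.pi / (2 * n_pairs)   # arc length of one compartment on the unit circle

# Where the needle stops: reduce the angle modulo a full turn,
# then find which compartment that position falls in.
position = np.mod(theta, 2 * np.pi)
compartment = np.floor(position / epsilon).astype(int)
is_red = (compartment % 2 == 0)       # even-numbered compartments are red

print(f"fraction of red outcomes: {is_red.mean():.4f}")  # very nearly 0.5
```

Changing the shape of the density hardly moves the result, so long as it stays smooth and wide relative to a single compartment; that is exactly the insensitivity to φ which the argument above establishes.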
We see that without knowing anything of the function φ we must act as if the probability were 1/2. And on the other hand it explains why, from the objective point of view, if I watch a certain number of coups, observation will give me almost as many black coups as red. All the players know this objective law.
But it leads them into a remarkable error, which has often been exposed, but into which they are always falling. When the red has won, for example, six times running, they bet on black, thinking that they are playing an absolutely safe game, because they say it is a very rare thing for the red to win seven times running.
In reality, their probability of winning is still 1/2.
Observation shows that series of seven consecutive reds are very rare, but series of six reds followed by a black are also very rare. They have noticed the rarity of the series of seven reds.
If they have not remarked the rarity of six reds and a black, it is only because such series strike the attention less.
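The arithmetic behind this remark is worth writing out (my addition to the text): every particular sequence of seven coups has the same probability, so a run of seven reds is exactly as rare as six reds followed by a black.

```latex
P(\text{RRRRRRR}) = \left(\tfrac{1}{2}\right)^{7} = \tfrac{1}{128},
\qquad
P(\text{RRRRRRB}) = \left(\tfrac{1}{2}\right)^{7} = \tfrac{1}{128}.
```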
V. The Probability of Causes
These problems are the most important from the point of view of scientific applications.
Two stars, for instance, are very close together on the celestial sphere. Is this apparent contiguity a mere effect of chance? Are these stars, although almost on the same visual ray, situated at very different distances from the earth, and therefore very far indeed from one another? Or does the apparent contiguity correspond to a real one?
This is a problem on the probability of causes.
First of all, I recall that at the outset of all problems of probability of effects that have occupied our attention up to now, we have had to use a convention which was more or less justified.
If in most cases the result was to a certain extent independent of this convention, it was only on the condition of certain hypotheses which enabled us to reject à priori discontinuous functions, for example, or certain absurd conventions.
We shall again find something analogous to this when we deal with the probability of causes. An effect may be produced by the cause a or by the cause b.
The effect has just been observed. We ask the probability that it is due to the cause a. This is an à posteriori probability of cause.
But I could not calculate it, if a convention more or less justified did not tell me in advance what is the à priori probability for the cause a to come into play—I mean the probability of this event to some one who had not observed the effect.
To make my meaning clearer, I go back to the game of écarté mentioned before. My adversary deals for the first time and turns up a king.
What is the probability that he is a sharper?
The formulæ ordinarily taught give 8/9, a result which is obviously rather surprising. If we look at it closer, we see that the conclusion is arrived at as if, before sitting down at the table, I had considered that there was one chance in two that my adversary was not honest.
An absurd hypothesis, because in that case I should certainly not have played with him; and this explains the absurdity of the conclusion.
The convention as to the à priori probability was unjustified, and that is why the calculation of the à posteriori probability led me to an inadmissible result.
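For the curious, here is where the 8/9 comes from, written out under the assumptions the standard treatment of this example makes: the prior probability of a sharper is taken as one in two, an honest dealer turns up a king with probability 1/8 (four kings in the thirty-two-card écarté pack), and a sharper is supposed to turn one up every time.

```latex
P(\text{sharper} \mid \text{king})
  = \frac{P(\text{king} \mid \text{sharper})\,P(\text{sharper})}
         {P(\text{king} \mid \text{sharper})\,P(\text{sharper})
          + P(\text{king} \mid \text{honest})\,P(\text{honest})}
  = \frac{1 \cdot \tfrac{1}{2}}{1 \cdot \tfrac{1}{2} + \tfrac{1}{8} \cdot \tfrac{1}{2}}
  = \frac{8}{9}.
```

Replace the prior of one in two with any more sensible figure and the posterior changes with it.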
The importance of this preliminary convention is obvious. I shall even add that if none were made, the problem of the à posteriori probability would have no meaning. It must be always made either explicitly or tacitly.
Let us pass on to an example of a more scientific character. I require to determine an experimental law; this law, when discovered, can be represented by a curve.
I make a certain number of isolated observations, each of which may be represented by a point.
When I have obtained these different points, I draw a curve between them as carefully as possible, giving my curve a regular form, avoiding sharp angles, accentuated inflexions, and any sudden variation of the radius of curvature.
This curve will represent to me the probable law, and not only will it give me the values of the functions intermediary to those which have been observed, but it also gives me the observed values more accurately than direct observation does; that is why I make the curve pass near the points and not through the points themselves.
Here, then, is a problem in the probability of causes.
The effects are the measurements I have recorded; they depend on the combination of two causes—the true law of the phenomenon and errors of observation. Knowing the effects, we have to find the probability that the phenomenon shall obey this law or that, and that the observations have been accompanied by this or that error.
The most probable law, therefore, corresponds to the curve we have traced, and the most probable error is represented by the distance of the corresponding point from that curve. But the problem has no meaning if, before the observations, I had no à priori idea of the probability of this law or that, or of the chances of error to which I am exposed.
If my instruments are good (and I knew whether this was so or not before beginning the observations), I shall not draw the curve far from the points which represent the rough measurements.
If they are inferior, I may draw it a little farther from the points, so that I may get a less sinuous curve; much will be sacrificed to regularity.
Why, then, do I draw a curve without sinuosities?
Because I consider à priori a law represented by a continuous function (or by a function the derivatives of which up to a high order are small) as more probable than a law not satisfying those conditions.
But for this conviction the problem would have no meaning; interpolation would be impossible. No law could be deduced from a finite number of observations; science would cease to exist.
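To see how this à priori preference for regularity operates in practice, here is a minimal sketch (my own illustration, not Poincaré's): given noisy measurements of an unknown law, a low-degree polynomial drawn near the points behaves as the text recommends, while a curve forced through every point is the sinuous one the belief in continuity tells us to distrust. The true law, the noise level, and the degrees used are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)


def true_law(t):
    """The (hypothetical) law of the phenomenon, unknown to the experimenter."""
    return 1.0 + 2.0 * t - 1.5 * t**2


# Isolated observations: the true law plus errors of observation.
x = np.linspace(0.0, 1.0, 9)
y = true_law(x) + rng.normal(scale=0.05, size=x.size)  # good instruments: small errors

# The "regular" curve: a low-degree polynomial drawn NEAR the points.
smooth_fit = np.polynomial.Polynomial.fit(x, y, deg=2)

# The "sinuous" curve: a degree-8 polynomial forced THROUGH all nine points.
interpolant = np.polynomial.Polynomial.fit(x, y, deg=8)

grid = np.linspace(0.0, 1.0, 200)
print("max deviation of smooth fit from the true law:",
      float(np.max(np.abs(smooth_fit(grid) - true_law(grid)))))
print("max deviation of interpolant from the true law:",
      float(np.max(np.abs(interpolant(grid) - true_law(grid)))))
# The smooth curve misses the measured points slightly, yet it usually lies
# closer to the true law than the curve that passes through every point.
```

Tightening the fit toward the points corresponds to trusting the instruments; loosening it, to distrusting them, which is just the trade-off described above.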
Fifty years ago, physicists considered a simple law as more probable than a complicated law.
This principle was even invoked in favour of Mariotte’s law as against that of Regnault.
But this belief is now repudiated. Yet, how many times are we compelled to act as though we still held it! However that may be, what remains of this tendency is the belief in continuity, and as we have just seen, if the belief in continuity were to disappear, experimental science would become impossible.