Steven R. Dunbar
Department of Mathematics
203 Avery Hall
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Stochastic Processes and

__________________________________________________________________________

Randomness

_______________________________________________________________________

Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.

_______________________________________________________________________________________________

### Rating

Student: contains scenes of mild algebra or calculus that may require guidance.

_______________________________________________________________________________________________

### Section Starter Question

What do we mean when we say something is “random”? What is the dictionary deﬁnition of “random”?

_______________________________________________________________________________________________

### Key Concepts

1. Assigning probability $1∕2$ to the event that a coin will land heads and probability $1∕2$ to the event that a coin will land tails is a mathematical model that summarizes our experience with many coins.
2. A coin ﬂip is a deterministic physical process, subject to the physical laws of motion. Extremely narrow bands of initial conditions determine the outcome of heads or tails. The assignment of probabilities $1∕2$ to heads and tails is a summary measure of all initial conditions that determine the outcome precisely.
3. The random walk theory of asset prices claims that market prices follow a random path without any inﬂuence by past price movements. This theory says it is impossible to predict which direction the market will move at any point, especially in the short term. More reﬁned versions of the random walk theory postulate a probability distribution for the market price movements. In this way, the random walk theory mimics the mathematical model of a coin ﬂip, substituting a probability distribution of outcomes for the ability to predict what will really happen.

__________________________________________________________________________

### Vocabulary

1. Technical analysis claims to predict security prices by relying on the assumption that market data, such as price, volume, and patterns of past behavior can help predict future (usually short-term) market trends.
2. The random walk theory of the market claims that market prices follow a random path up and down according to some probability distribution without any inﬂuence by past price movements. This assumption means that it is not possible to predict which direction the market will move at any point, although the probability of movement in a given direction can be calculated.

__________________________________________________________________________

### Mathematical Ideas

#### Coin Flips and Randomness

The simplest, most common, and in some ways most basic example of a random process is a coin ﬂip. We ﬂip a coin, and it lands one side up. We assign the probability $1∕2$ to the event that the coin will land heads and probability $1∕2$ to the event that the coin will land tails. But what does that assignment of probabilities really express?

Assigning the probability $1∕2$ to the event that the coin will land heads and probability $1∕2$ to the event that the coin will land tails is a mathematical model that summarizes our experience with coins. We have flipped many coins many times, and we see that about half the time the coin comes up heads, and about half the time the coin comes up tails. So we abstract this observation into a mathematical model containing only one parameter, the probability of heads.

From this simple model of the outcome of a coin flip we can derive some mathematical consequences. We will do this extensively in the chapter on limit theorems for coin flipping. One of the first consequences we can derive is a theorem called the Weak Law of Large Numbers. This consequence reassures us that if we make this probability assignment, then long-term observations of the model will match our expectations. The mathematical model shows its worth by making definite predictions of future outcomes. We will prove other, more sophisticated theorems, some with reasonable consequences, others surprising. Observations show the predictions generally match experience with real coins, so this simple mathematical model has value in explaining and predicting coin-flip behavior. In this way, the simple mathematical model is satisfactory.
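A short simulation makes the Weak Law of Large Numbers concrete: the observed proportion of heads settles toward $1∕2$ as the number of flips grows. This sketch is illustrative only; the function name, seed, and flip counts are arbitrary choices.

```python
import random

def proportion_of_heads(num_flips, p=0.5, seed=42):
    """Simulate num_flips coin flips with heads probability p;
    return the observed proportion of heads."""
    rng = random.Random(seed)
    heads = sum(1 for _ in range(num_flips) if rng.random() < p)
    return heads / num_flips

# The running proportion approaches 1/2 as the number of flips grows,
# as the Weak Law of Large Numbers predicts.
for n in (10, 1_000, 100_000):
    print(n, proportion_of_heads(n))
```

Running the loop shows the usual pattern: small samples wander noticeably away from $1∕2$, while the proportion for $100{,}000$ flips is within a fraction of a percent of it.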

In other ways the probability approach is unsatisfactory. A coin flip is a physical process, subject to the physical laws of motion. The renowned applied mathematician J. B. Keller investigated coin flips in this way. He assumed a circular coin of negligible thickness flipped from a given height ${y}_{0}=a>0$, and considered both its motion in the vertical direction under the influence of gravity and the rotational motion imparted by the flip, until the coin lands on the surface $y=0$. The initial conditions imparted to the coin are the initial upward velocity and the initial rotational velocity. With additional simplifying assumptions, Keller shows that the fraction of flips which land heads approaches $1∕2$ if the initial vertical and rotational velocities are high enough. Keller shows more: for high initial velocities, narrow bands of initial conditions determine the outcome of heads or tails. From Keller’s analysis we see that the randomness comes from the choice of initial conditions. Because the bands of initial conditions are so narrow, slight variations of initial upward velocity and rotational velocity lead to different outcomes. The assignment of probabilities $1∕2$ to heads and tails is actually a summary measure of all the initial conditions that determine the outcome precisely.

Figure 1: Initial conditions for a coin flip, following Keller.
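A stripped-down version of this deterministic model can be coded in a few lines. The sketch below is not Keller’s full analysis: it assumes a zero-thickness coin released heads up that does not bounce, takes the face up on landing from the total rotation angle, and uses arbitrary illustrative velocities. It shows the key phenomenon: a small change in the rotation rate flips the outcome.

```python
import math

G = 9.8  # gravitational acceleration, m/s^2

def keller_outcome(u, omega, a=1.0):
    """Deterministic outcome of an idealized flip (no bounce, zero thickness).

    u: initial upward velocity (m/s), omega: rotation rate (rad/s),
    a: release height (m). The coin starts heads up; it lands heads if the
    total rotation angle leaves the original face up (cos(theta) > 0)."""
    t = (u + math.sqrt(u * u + 2 * G * a)) / G   # flight time down to y = 0
    theta = omega * t                            # total rotation angle
    return "H" if math.cos(theta) > 0 else "T"

# Adjacent narrow bands of initial conditions give opposite outcomes:
# a 0.5% change in rotation rate reverses heads and tails.
print(keller_outcome(2.0, 199.0), keller_outcome(2.0, 200.0))   # prints: H T
```

The faster the coin spins, the narrower these bands become, which is exactly why the $1∕2$–$1∕2$ summary works so well for vigorous flips.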

The assignment of probabilities $1∕2$ to heads and tails is actually a statement of our inability to measure the initial conditions and the dynamics precisely. The heads or tails outcomes alternate in adjacent narrow initial conditions regions, so we cannot accurately predict individual outcomes. We instead measure the whole proportion of initial conditions leading to each outcome.

If the coin lands on a hard surface and bounces, the physical prediction of outcomes is now almost impossible because we know even less about the dynamics of the bounce, let alone the new initial conditions imparted by the bounce.

Another mathematician who often collaborated with J. B. Keller, Persi Diaconis, has exploited this determinism. Diaconis, an accomplished magician, is reportedly able to flip many heads in a row using his manual skill. Moreover, he has worked with mechanical engineers to build a precise coin-flipping machine that can flip many heads in a row by controlling the initial conditions precisely. Figure 2 is a picture of such a machine.

Figure 2: Persi Diaconis’ mechanical coin flipper.

Mathematicians Diaconis, Susan Holmes, and Richard Montgomery have done an even more detailed analysis of the physics of coin flips. The coin-flipping machines help to show that flipping physical coins is actually slightly biased. Coins have a slight physical bias, favoring the coin’s initial position about $51\%$ of the time. The bias results from the rotation of the coin around three axes of rotation at once. Their more complete dynamical description of coin flipping needs even more initial information.

If the coin bounces or rolls the physics becomes more complicated. This is particularly true if the coin rolls on one edge upon landing. The edges of coins are often milled with a slight taper, so the coin is really more conical than cylindrical. When landing on edge or spinning, the coin will tip in the tapered direction.

The assignment of a reasonable probability to a coin toss both summarizes and hides our inability to measure the initial conditions precisely and to compute the physical dynamics easily. James Gleick summarizes this neatly: “In physics – or wherever natural processes seem unpredictable – apparent randomness may …arise from deeply complex dynamics.” The probability assignment is usually a good enough model, even if wrong. Except in circumstances of extreme experimental care with millions of measurements, using $1∕2$ for the proportion of heads is sensible.

#### Randomness and the Markets

A branch of financial analysis, generally called technical analysis, claims to predict security prices with the assumption that market data, such as price, volume, and patterns of past behavior, predict future (usually short-term) market trends. Technical analysis also usually assumes that market psychology influences trading in a way that enables predicting when a stock will rise or fall.

In contrast is random walk theory. This theory claims that market prices follow a random path without inﬂuence by past price movements. The randomness makes it impossible to predict which direction the market will move at any point, especially in the short term. More reﬁned versions of the random walk theory postulate a probability distribution for the market price movements. In this way, the random walk theory mimics the mathematical model of a coin ﬂip, substituting a probability distribution of outcomes for the ability to predict what will really happen.
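The random walk model is easy to state in code. The sketch below is illustrative only: the Gaussian step distribution, starting price, and seed are arbitrary choices, and refined versions of the theory use other step distributions. The defining feature is that each move is drawn independently of the entire past path.

```python
import random

def random_walk_prices(p0=100.0, steps=250, sigma=1.0, seed=7):
    """Random walk model of a price series: each day's move is an
    independent draw, unaffected by past price movements."""
    rng = random.Random(seed)
    prices = [p0]
    for _ in range(steps):
        prices.append(prices[-1] + rng.gauss(0.0, sigma))
    return prices

path = random_walk_prices()
print(path[0], round(path[-1], 2))
```

Note that in this model, knowing the full history gives no predictive edge: the conditional distribution of the next move is the same regardless of what came before, which is exactly the random walk hypothesis in miniature.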

If a coin flip, although deterministic and ultimately simple in execution, cannot be practically predicted with well-understood physical principles, then it is even harder to believe that technical forecasters can predict market dynamics. Market dynamics depend on the interactions of thousands of variables and the actions of millions of people. The economic principles at work on the variables are incompletely understood compared with physical principles. Much less understood are the psychological principles that motivate people to buy or sell at a specific price and time. Even allowing that market prices are determined by economic principles expressed as unambiguously as the Lagrangian dynamics of the coin flip, that still leaves the precise determination of the initial conditions and the parameters.

It is more practical to admit our inability to predict using basic principles and to instead use a probability distribution to describe what we see. In this text, we use the random walk theory with minor modiﬁcations and qualiﬁcations. We will see that random walk theory leads to predictions we can test against evidence, just as a coin-ﬂip sequence can be tested against the classic limit theorems of probability. In certain cases, with extreme care, special tools and many measurements of data we may be able to discern biases, even predictability in markets. This does not invalidate the utility of the less precise ﬁrst-order models that we build and investigate. All models are wrong, but some models are useful.

The cosmologist Stephen Hawking says in his book A Brief History of Time: “A theory is a good theory if it satisfies two requirements: it must accurately describe a large class of observations on the basis of a model that contains only a few arbitrary elements, and it must make definite predictions about the results of future observations.” As we will see, the random walk theory of markets does both. Unfortunately, technical analysis typically does not describe a large class of observations and usually has many arbitrary elements.

#### True Randomness

The outcome of a coin flip is physically determined. The numbers generated by a “random-number generator” algorithm are deterministic, and are more properly known as pseudo-random numbers. The movements of prices in a market are governed by the hopes and fears of presumably rational human beings, and so might in principle be predicted. For each of these, we substitute a probability distribution of outcomes as a sufficient summary of what we have experienced in the past but are unable to predict precisely. Does true randomness exist anywhere? Yes, in two deeper theories: algorithmic complexity theory and quantum mechanics.

In algorithmic complexity theory, a number is not random if it is computable, that is, if a computer program will generate it. Roughly, a computable number has an algorithm that will generate its decimal digit expression. For example, for a rational number the division of the denominator into the numerator determines the repeating digit blocks of the decimal expression. Therefore rational numbers are not random, as one would expect. Irrational square roots are not random since a simple algorithm determines the digits of the nonterminating, nonrepeating decimal expression. Even the mathematical constant $\pi$ is not random since a short formula can generate the digits of $\pi$.
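Long division is exactly such an algorithm for a rational number: a program far shorter than the infinite digit string it produces, which is why rational numbers are computable and hence not random in this sense. A minimal sketch (the function name is my own):

```python
def decimal_digits(numerator, denominator, n):
    """Generate the first n decimal digits of the fractional part of
    numerator/denominator by long division -- a short program that outputs
    the expansion, so the number is computable, hence not random."""
    digits = []
    r = numerator % denominator
    for _ in range(n):
        r *= 10
        digits.append(r // denominator)
        r %= denominator
    return digits

print(decimal_digits(1, 7, 12))   # the repeating block 142857, twice
```

The same few lines generate as many digits as desired, so the description of the full expansion never grows with the number of digits produced.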

In the 1960s mathematicians A. Kolmogorov and G. Chaitin were looking for a true mathematical definition of randomness. They found one in the theory of information: they noted that if a mathematician could produce a sequence of numbers with a computer program significantly shorter than the sequence, then the mathematician would know the digits were not random. In the algorithm, the mathematician has a simple theory that accounts for a large set of facts and allows for prediction of digits still to come. Remarkably, Kolmogorov and Chaitin showed that many real numbers do not fit this definition and therefore are random. One way to describe such non-computable or random numbers is that they are not predictable, containing nothing but one surprise after another.

This definition helps explain a paradox in probability theory. Suppose we roll a fair die 20 times. One possible result is 11111111111111111111 and another possible result is 66234441536125563152. Which result is more likely to occur? Each sequence of numbers is equally likely to occur, with probability $1∕{6}^{20}$. However, our intuition of algorithmic complexity tells us the short program “repeat 1 20 times” gives 11111111111111111111, so it seems to be not random. A description of 66234441536125563152 requires 20 separate specifications, just as long as the number sequence itself. We then believe the first, monotonous sequence is not random, while the second, unpredictable sequence is random. Neither sequence is long enough to properly apply the theory of algorithmic complexity, so the intuition remains vague. The paradox results from an inappropriate application of a definition of randomness. Furthermore, the digits of the second sequence have $20!∕\left(3!\cdot 3!\cdot 3!\cdot 3!\cdot 4!\cdot 4!\right)=3,259,095,840,000$ distinct rearrangements, while the first sequence has only one. Instead of thinking of the precise sequence, we may confuse it with the more than $3×1{0}^{12}$ other rearrangements and believe it is therefore more likely. This confusion of the precise sequence with its set of rearrangements contributes to the paradox.
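The rearrangement count quoted above can be checked directly (the digit counts in 66234441536125563152 are three each of 1, 2, 3, 4 and four each of 5, 6):

```python
from math import factorial

# Number of distinct rearrangements of the digits of 66234441536125563152:
# multinomial coefficient 20! / (3! * 3! * 3! * 3! * 4! * 4!)
arrangements = factorial(20) // (factorial(3) ** 4 * factorial(4) ** 2)
print(arrangements)   # 3259095840000
```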

In the quantum world the time until the radioactive disintegration of a speciﬁc N-13 atom to a C-13 isotope is apparently truly random. It seems we fundamentally cannot determine when it will occur by calculating some physical process underlying the disintegration. Scientists must use probability theory to describe the physical processes associated with true quantum randomness.
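A sketch of how probability theory describes such decays: quantum mechanics supplies only a waiting-time distribution, exponential with rate $\ln 2$ divided by the half-life, never the individual decay times. The half-life of 10 time units below is an illustrative parameter, not a measured value.

```python
import random
from math import log

def decay_times(half_life, n, seed=1):
    """Sample n waiting times until decay for atoms with the given half-life.
    The theory gives only the exponential distribution with rate
    lambda = ln(2) / half_life, not the individual times."""
    rng = random.Random(seed)
    lam = log(2) / half_life
    return [rng.expovariate(lam) for _ in range(n)]

times = decay_times(half_life=10.0, n=100_000)
print(round(sum(times) / len(times), 1))   # sample mean near half_life / ln 2, about 14.4
```

The individual waiting times in the sample are unpredictable; only their aggregate statistics (mean, spread, proportion decayed by a given time) are fixed by the theory.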

Einstein found this quantum theory hard to accept. His famous remark is that “God does not play at dice with the universe.” Nevertheless, experiments have conﬁrmed the true randomness of quantum processes. Some results combining quantum theory and cosmology imply even more profound and bizarre results. Again in the words of Stephen Hawking, “God not only plays dice. He also sometimes throws the dice where they cannot be seen.”

#### Sources

This section is adapted from: “The Probability of Heads”, by J. B. Keller, American Mathematical Monthly, Volume 93, Number 3, March 1986, pages 191–197, and definitions from investorwords.com. See also the article “A Reliable Randomizer, Turned on Its Head”, David Adler, Washington Post, August 2, 2009. The discussion of algorithmic complexity is from James Gleick’s book The Information. The discussion of the probability paradox is from Marilyn vos Savant, “Sequences of Die Rolls.”

_______________________________________________________________________________________________

### Problems to Work for Understanding

__________________________________________________________________________

### References

   David Adler. A reliable randomizer, turned on its head. Washington Post, August 2, 2009.

   James Gleick. The Information: A History, a Theory, a Flood. Pantheon, 2011.

   Stephen Hawking. A Brief History of Time. Bantam, 1998.

   J. B. Keller. The probability of heads. American Mathematical Monthly, 93(3):191–197, March 1986.

__________________________________________________________________________

1. A satire on the philosophy of randomness. Accessed August 29, 2009.

__________________________________________________________________________

I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable eﬀort to present current and accurate information for public use, however I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.

I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don’t guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reﬂects the thoughts, interests and opinions of its author. They do not explicitly represent oﬃcial positions or policies of my employer.

Information on this website is subject to change without notice.