This book is one of the best I have read.
Peter Bernstein's deeply researched history looks back at our evolving understanding of risk, starting from the year 1202 and running all the way up to when the book was written in 1998.
He discusses many of the pioneers who were essential in helping us develop our understanding of risk, such as Bernoulli, Galton, Keynes, Kahneman, Tversky, and Bayes.
Some of the many ideas I learned include the importance of a really large sample size when estimating probabilities, the importance of an unbiased sample, options in financial markets, how portfolio insurance caused the largest one-day drop in the stock market on Black Monday in 1987, Prospect Theory, Bayes' Theorem, and the insurance business. But the best idea I learned from this book was this:
There is so much uncertainty in the future, which can create a large amount of anxiety for us, but this uncertainty is actually a good thing because it means we have free souls. In other words, the decisions we make matter. If everything in the future were certain, there would be no mystery to our lives and our decisions would matter a lot less.
Today, we rely less on superstition and tradition than people did in the past, not because we are more rational, but because our understanding of risk enables us to make decisions in a rational mode.
Daniel Bernoulli propounded the idea that the satisfaction resulting from any small increase in wealth “will be inversely proportionate to the quantity of goods previously possessed.” With that innocent-sounding assertion, Bernoulli explained why King Midas was an unhappy man, why people tend to be risk-averse, and why prices must fall if customers are to be persuaded to buy more. Bernoulli’s statement stood as the dominant paradigm of rational behavior for the next 250 years and laid the groundwork for modern principles of investment management.
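Bernoulli's assertion is usually formalized as logarithmic utility, under which the satisfaction from one more dollar shrinks in proportion to the wealth you already have. Here is a minimal sketch; the wealth figures are invented purely for illustration:

```python
import math

def utility(wealth: float) -> float:
    """Log utility: each extra dollar adds less satisfaction as wealth grows."""
    return math.log(wealth)

def marginal_utility(wealth: float, extra: float = 1_000.0) -> float:
    """Extra satisfaction from the same dollar gain at a given wealth level."""
    return utility(wealth + extra) - utility(wealth)

# The same $1,000 gain means far more to the poorer person:
gain_when_poor = marginal_utility(10_000)
gain_when_rich = marginal_utility(1_000_000)
```

Because the derivative of log wealth is 1/wealth, the marginal value of a gain is literally "inversely proportionate to the quantity of goods previously possessed," just as Bernoulli said.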
Thomas Bayes made a striking advance in statistics by demonstrating how to make better-informed decisions by mathematically blending new information into old information. Bayes’ theorem focuses on the frequent occasions when we have sound intuitive judgments about the probability of some event and want to understand how to alter those judgments as actual events unfold.
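Bayes' theorem itself is a one-line formula: the updated (posterior) belief is the old (prior) belief reweighted by how well the new evidence fits it. A small sketch, with illustrative numbers of my own choosing:

```python
def bayes_update(prior: float,
                 p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Blend new evidence into an old belief:
    P(H|E) = P(E|H) * P(H) / P(E)."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# Invented example: a 1% prior, evidence that is 90% likely if the
# hypothesis is true but only 5% likely if it is false.
posterior = bayes_update(prior=0.01,
                         p_evidence_if_true=0.90,
                         p_evidence_if_false=0.05)
```

The belief rises from 1% to roughly 15% — a large revision, but far from certainty, because the prior was so low.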
We cannot quantify the future, because it is an unknown.
Nobel laureate Kenneth Arrow has warned, “[O]ur knowledge of the way things work, in society or in nature, comes trailing clouds of vagueness. Vast ills have followed a belief in certainty.”
Far away from the gaming tables, the managers of insurance companies conduct their affairs in the same fashion. They set their premiums to cover the losses they will sustain in the long run; but when earthquakes and fires and hurricanes all happen at about the same time, the short run can be very painful. Unlike gamblers, insurance companies carry capital and put aside reserves to tide them over during the inevitable sequences of short runs of bad luck.
Time is the dominant factor in gambling. Risk and time are opposite sides of the same coin, for if there were no tomorrow there would be no risk. Time transforms risk, and the nature of risk is shaped by the time horizon: the future is the playing field.
We all have to make decisions on the basis of limited data.
Sampling is essential to risk taking.
Insurance companies use the premiums paid by people who have not sustained losses to pay off people who have. The same holds true of gambling casinos, which pay off the winners from the pot that is constantly being replenished by the losers. Because of the anonymity provided by the insurance company or the gambling casino that acts as intermediary, the actual exchange is less visible. And yet the most elaborate insurance and gambling schemes are merely variations on the Monte dei Paschi theme.
The hypothesis that utility is inversely related to the quantity of goods previously possessed is one of the great intellectual leaps in the history of ideas.
The theory of probability can define the probabilities at the gaming casino or in a lottery – there is no need to spin the roulette wheel or count the lottery tickets to estimate the nature of the outcome – but in real life relevant information is essential. And the bother is that we never have all the information we would like. Nature has established patterns, but only for the most part.
Real-life baseball fans, like aficionados of the stock market, assemble reams of statistics precisely because they need that information in order to reach judgments about capabilities among the players and the teams – or the outlook for the earning power of the companies trading on the stock exchange. And even with thousands of facts, the track record of the experts, in both athletics and finance, proves that their estimates of the probabilities of the final outcomes are open to doubt and uncertainty.
But real-life situations often require us to measure probability in precisely this fashion – from sample to universe. In only rare cases does life replicate games of chance, for which we can determine the probability of an outcome before an event even occurs – a priori, as Jacob Bernoulli put it. In most instances, we have to estimate probabilities from what happened after the fact – a posteriori. The very notion of a posteriori implies experimentation and changing degrees of belief.
All the law [of large numbers] tells us is that the average of a large number of throws will be more likely than the average of a small number of throws to differ from the true average by less than some stated amount.
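That statement is easy to check by simulation: repeat a dice experiment many times at two different sample sizes and compare how far each tends to stray from the true average of 3.5. A quick sketch (the trial counts are arbitrary):

```python
import random

random.seed(0)

def average_of_throws(n: int) -> float:
    """Average of n fair-die throws; the true mean is 3.5."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

def typical_error(n: int, trials: int = 200) -> float:
    """Average distance from 3.5 across many repeated experiments."""
    return sum(abs(average_of_throws(n) - 3.5) for _ in range(trials)) / trials

# Larger samples are more likely to land near the true average:
error_small = typical_error(10)
error_large = typical_error(1_000)
```

Note the law's modesty: it speaks only of likelihoods. Any single run of 1,000 throws can still land far from 3.5; it is just much less likely to.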
The most exciting feature of all the achievements mentioned in this chapter is the daring idea that uncertainty can be measured. Uncertainty means unknown probabilities; to reverse Hacking’s description of certainty, we can say that something is uncertain when our information is correct and an event fails to happen, or when our information is incorrect and an event does happen.
People can make serious mistakes by sampling data that are not independent. In 1936, a now-defunct magazine called the Literary Digest took a straw vote to predict the outcome of the forthcoming presidential election between Franklin Roosevelt and Alfred Landon. The magazine sent about 10 million ballots in the form of returnable postcards to names selected from telephone directories and automobile registrations. A high proportion of the ballots were returned, with 59% favoring Landon and 41% favoring Roosevelt. On Election Day, Landon won 39% of the vote and Roosevelt won 61%. People who had telephones and drove automobiles in the mid-1930s hardly constituted a random sample of American voters: their voting preferences were all conditioned by an environment that the mass of people at the time could not afford.
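The Literary Digest failure is easy to reproduce in miniature. The numbers below are invented, not the 1936 figures; the point is only that polling the affluent subgroup recovers that subgroup's preference, not the whole electorate's:

```python
import random

random.seed(1)

# Hypothetical electorate: 30% affluent (phones, cars), 70% not.
# Invented preferences: the affluent lean toward Landon, the rest away.
def voter() -> tuple[bool, bool]:
    affluent = random.random() < 0.30
    favors_landon = random.random() < (0.60 if affluent else 0.30)
    return affluent, favors_landon

population = [voter() for _ in range(100_000)]

# True Landon share across everyone:
true_share = sum(landon for _, landon in population) / len(population)

# "Phone book" poll: only the affluent ever receive a ballot.
polled = [landon for affluent, landon in population if affluent]
biased_share = sum(polled) / len(polled)
```

The poll comes back near 60% for Landon while the full population sits near 39% — the same inversion the Digest suffered, produced entirely by who was reachable.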
The normal distribution is what the insurance business is all about, because a fire in Chicago will not be caused by a fire in Atlanta, and the death of one individual at one moment in one place has no relationship to the death of another individual at another moment in a different place. As insurance companies sample the experience of millions of individuals of different ages and of each gender, life expectancies begin to distribute themselves into a normal curve.
The best way to determine whether changes in stock prices are in fact independent is to find out whether they fall into a normal distribution. Impressive evidence exists to support the case that changes in stock prices are normally distributed. That should come as no surprise. In capital markets as fluid and as competitive as ours, where each investor is trying to outsmart all the others, new information is rapidly reflected in the price of stocks. If General Motors posts disappointing earnings or if Merck announces a major new drug, stock prices do not stand still while investors contemplate the news. No investor can afford to wait for others to act first. So they tend to act in a pack, immediately moving the price of General Motors or Merck to a level that reflects this new information. But new information arrives in random fashion. Consequently, stock prices move in unpredictable ways.
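The argument sketched above — many independent pieces of news, each nudging the price a little — is exactly the setting in which the central limit theorem produces a normal distribution. A toy simulation, with every parameter invented for illustration:

```python
import random
import statistics

random.seed(7)

def daily_change(n_shocks: int = 50) -> float:
    """One day's price change: the sum of many small, independent news shocks."""
    return sum(random.uniform(-0.002, 0.002) for _ in range(n_shocks))

changes = [daily_change() for _ in range(10_000)]
mu = statistics.mean(changes)
sigma = statistics.stdev(changes)

# A normal distribution puts about 68% of outcomes within one standard
# deviation of the mean; summed independent shocks land very close to that.
within_one_sigma = sum(abs(c - mu) <= sigma for c in changes) / len(changes)
```

If the shocks were not independent — if one day's news echoed into the next — the sums would drift away from this bell shape, which is why normality serves as a rough test of independence.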
Regression to the mean is dynamite. Galton transformed the notion of probability from a static concept based on randomness and the Law of Large Numbers into a dynamic process in which the successors to the outliers are predestined to join the crowd at the center. Change and motion from the outer limits toward the center are constant, inevitable, foreseeable. Given the imperatives of this process, no outcome other than the normal distribution is conceivable. The driving force is always toward the average, toward the restoration of normality, toward Quetelet’s homme moyen.
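Regression to the mean is easy to see in a simulation where observed performance mixes stable skill with one-off luck. The top performers in one round owe part of their standing to luck, which does not repeat, so in the next round they fall back toward the crowd. All parameters here are illustrative:

```python
import random

random.seed(3)

n = 10_000
skill = [random.gauss(0, 1) for _ in range(n)]           # stable ability
round1 = [s + random.gauss(0, 1) for s in skill]          # ability + luck
round2 = [s + random.gauss(0, 1) for s in skill]          # fresh luck

# Take the round-one top 5% and compare their averages in each round.
top = sorted(range(n), key=lambda i: round1[i], reverse=True)[: n // 20]
avg_round1 = sum(round1[i] for i in top) / len(top)
avg_round2 = sum(round2[i] for i in top) / len(top)
```

The outliers stay above average in round two — skill persists — but only about half as far above it, because the lucky half of their round-one result has evaporated.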
If regression to the mean follows such a constant pattern, why is forecasting such a frustrating activity? The simplest answer is that the forces at work in nature are not the same as the forces at work in the human psyche. The accuracy of most forecasts depends on decisions being made by people rather than by Mother Nature. Mother Nature, with all her vagaries, is a lot more dependable than a group of human beings trying to make up their minds about something.
There are three reasons why regression to the mean can be such a frustrating guide to decision-making. First, it sometimes proceeds at so slow a pace that a shock will disrupt the process. Second, the regression may be so strong that matters do not come to rest once they reach the mean. Rather, they fluctuate around the mean, with repeated, irregular deviations on either side. Finally, the mean itself may be unstable, so that yesterday’s normality may be supplanted today by a new normality that we know nothing about. It is perilous in the extreme to assume that prosperity is just around the corner simply because it always has been just around the corner.
Nature has placed mankind under the governance of two sovereign masters, pain and pleasure.
If everything is a matter of luck, risk management is a meaningless exercise.
When we take a risk, we are betting on an outcome that will result from a decision we have made, though we do not know for certain what the outcome will be. The essence of risk management lies in maximizing the areas where we have some control over the outcome while minimizing the areas where we have absolutely no control over the outcome and the linkage between effect and cause is hidden from us.
Once, at a professional investment conference, a friend passed me a note that read as follows: The information you have is not the information you want. The information you want is not the information you need. The information you need is not the information you can obtain. The information you can obtain costs more than you want to pay.
We can assemble big pieces of information and little pieces, but we can never get all the pieces together. We never know for sure how good our sample is. That uncertainty is what makes arriving at judgments so difficult and acting on them so risky.
Early on, [Kenneth] Arrow became convinced that most people overestimate the amount of information that is available to them. The failure of economists to comprehend the causes of the Great Depression at the time demonstrated to him that their knowledge of the economy was “very limited.” His experience as an Air Force weather forecaster during the Second World War “added the news that the natural world was also unpredictable.”
In an essay on risk, [Kenneth] Arrow asks why most of us gamble now and then and why we regularly pay premiums to an insurance company. The mathematical probabilities indicate that we will lose money in both instances. In the case of gambling, it is statistically impossible to expect – though possible to achieve – more than a break-even, because the house edge tilts the odds against us. In the case of insurance, the premiums we pay exceed the statistical odds that our house will burn down or that our jewelry will be stolen.
We buy insurance because we cannot afford to take the risk of losing our home to fire – or our life before our time. That is, we prefer a gamble that has 100% odds on a small loss (the premium we must pay) but a small chance of a large gain (if catastrophe strikes) to a gamble with a certain small gain (saving the cost of the insurance premium) but with uncertain but potentially ruinous consequences for us or our family.
Reducing uncertainty is a costly business.
[Kenneth] Arrow warns that a society in which no one fears the consequences of risk-taking may provide fertile ground for antisocial behavior. For example, the availability of deposit insurance to the depositors of savings and loan associations in the 1980s gave the owners a chance to win big if things went right and to lose little if things went wrong. When things finally went wrong, the taxpayers had to pay. Wherever insurance can be had, moral hazard – the temptation to cheat – will be present.
The explosion of knowledge over the years has served only to make life more uncertain and the world more difficult to understand. [My note: this book copy was written in 1998.]
The recognition of risk management as a practical art rests on a simple cliché with the most profound consequences: when our world was created, nobody remembered to include certainty. We are never certain; we are always ignorant to some degree. Much of the information we have is either incorrect or incomplete.
“Perception of probability, weight, and risk are all highly dependent on judgement,” and “the basis of our degrees of belief is part of our human outfit.”
Keynes’s view of economics ultimately revolves around uncertainty – uncertainty as to how much a family will save or spend, uncertainty as to what portion of its accumulated savings a family will spend in the future (and when it will spend that portion), and, most important, uncertainty as to how much profit any given outlay on capital goods will produce. The decisions business firms make on how much to spend (and when to spend it) on new buildings, new machinery, new technology, and new forms of production constitute a dynamic force in the economy.
Because the economic environment is constantly changing, all economic data are specific to their own time period…. What was 75% probable yesterday has an unknown probability tomorrow.
Once we understand that we are not obliged to accept the spin of the roulette wheel or the cards we are dealt, we are free souls. Our decisions matter. We can change the world. Keynes’s economic prescriptions reveal that as we make decisions we do change the world.
Whether that change turns out to be for better or for worse is up to us. The spin of the roulette wheel has nothing to do with it.
Game theory brings a new meaning to uncertainty. Earlier theories accepted uncertainty as a fact of life and did little to identify its source. Game theory says that the true source of uncertainty lies in the intentions of others.
We swing back and forth in everything we do, continuously regressing toward what will turn out to be our average performance….”Once you become sensitized to it, you see regression everywhere,” Kahneman pointed out to Tversky. Whether your children do what they are told to do, whether a basketball player has a hot hand in tonight’s game, or whether an investment manager’s performance slips during this calendar quarter, their future performance is most likely to reflect regression to the mean regardless of whether they will be punished or rewarded for past performance.
Prospect Theory discovered behavior patterns that had never been recognized by proponents of rational decision-making. Kahneman and Tversky ascribe these patterns to two human shortcomings. First, emotion often destroys the self-control that is essential to rational decision making. Second, people are often unable to understand fully what they are dealing with. They experience what psychologists call cognitive difficulties.
We display risk-aversion when we are offered a choice in one setting and then turn into risk-seekers when we are offered the same choice in a different setting. We tend to ignore the common components of a problem and concentrate on each part in isolation – one reason why Markowitz’s prescription of portfolio-building was so slow to find acceptance. We have trouble recognizing how much information is enough and how much is too much. We pay excessive attention to low-probability events accompanied by high drama and overlook events that happen in routine fashion.
Here is a question that Kahneman and Tversky use to show how intuitive perceptions mislead us. Ask yourself whether the letter K appears more often as the first or as the third letter of English words. You will probably answer that it appears more often as the first letter. Actually, K appears as the third letter twice as often. Why the error? We find it easier to recall words with a certain letter at the beginning than words with that same letter somewhere else.
Where significant sums are involved, most people will reject a fair gamble in favor of a certain gain - $100,000 certain is preferable to a 50-50 possibility of $200,000 or nothing. We are risk-averse, in other words.
When the choice involves losses, we are risk seekers, not risk-averse.
Kahneman and Tversky interpret the evidence produced by these experiments as a demonstration that people are not risk-averse: they are perfectly willing to choose a gamble when they consider it appropriate. But if they are not risk-averse, what are they? “The major driving force is loss aversion,” writes Tversky. “It is not so much that people hate uncertainty – but rather, they hate losing.” Losses will always loom larger than gains; indeed, losses that go unresolved – such as the loss of a child or a large insurance claim that never gets settled – are likely to provoke intense, irrational, and abiding risk-aversion.
One of the insights to emerge from this research is that Bernoulli had it wrong when he declared, “[The] utility resulting from any small increase in wealth will be inversely proportionate to the quantity of goods previously possessed.” Bernoulli believed that it is the pre-existing level of wealth that determines the value of a risky opportunity to become richer. Kahneman and Tversky found that the valuation of a risky opportunity appears to depend far more on the reference point from which the possible gain or loss will occur than on the final value of the assets that would result. It is not how rich you are that motivates your decision, but whether that decision will make you richer or poorer. As a consequence, Tversky warns, “our preferences… can be manipulated by changes in the reference points.”
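Kahneman and Tversky's value function captures both findings at once: outcomes are valued as gains or losses relative to a reference point, and losses are weighted more heavily than gains. The sketch below uses the parameter values commonly cited from their 1992 paper (loss aversion about 2.25, curvature about 0.88), purely for illustration:

```python
def prospect_value(outcome: float,
                   loss_aversion: float = 2.25,
                   curvature: float = 0.88) -> float:
    """Value of an outcome measured from the reference point (zero here).
    Gains and losses are both diminished by curvature, but losses are
    additionally scaled up by the loss-aversion coefficient."""
    if outcome >= 0:
        return outcome ** curvature
    return -loss_aversion * ((-outcome) ** curvature)

# A $100 loss stings more than an equal $100 gain pleases:
pain = abs(prospect_value(-100))
pleasure = prospect_value(100)
```

Shifting the reference point changes which outcomes count as losses, which is exactly how preferences "can be manipulated by changes in the reference points."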
Ambiguity aversion means that people prefer to take risks on the basis of known rather than unknown probabilities. Information matters, in other words.
In a 1992 paper that summarized advances in Prospect Theory, Kahneman and Tversky made the following observation: “Theories of choice are at best approximate and incomplete… Choice is a constructive and contingent process. When faced with a complex problem, people… use computational shortcuts and editing operations.” The evidence in this chapter, which summarizes only a tiny sample of a huge body of literature, reveals repeated patterns of irrationality, inconsistency, and incompetence in the ways human beings arrive at decisions and choices when faced with uncertainty.
As Daniel Kahneman points out, “The failure of the rational model is not in its logic but in the human brain it requires. Who could design a brain that could perform the way this model mandates? Every single one of us would have to know and understand everything, completely and at once.”
Shefrin and Statman hypothesize the existence of a split in the human psyche. One side of our personality is an internal planner with a long-term perspective, an authority who insists on decisions that weight the future more heavily than the present. The other side seeks immediate gratification.
In 1974, when the quadrupling of oil prices forced Consolidated Edison to eliminate its dividend after 89 years of uninterrupted payments, hysteria broke out at the company’s annual meeting of stockholders.
As an example of Prospect Theory, Thaler and DeBondt demonstrated that, when new information arrives, investors revise their beliefs, not according to the objective methods set forth by Bayes, but by overweighting the new information and underweighting prior and longer-term information. That is, they weight the probabilities of outcomes on the “distribution of impressions” rather than on an objective calculation based on historical probability distributions. As a consequence, stock prices systematically overshoot so far in either direction that their reversal is predictable regardless of what happens to earnings or dividends or any other objective factor.
The value of an option depends on four elements: time, prices, interest rates, and volatility.
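One standard formalization of that dependence is the Black-Scholes formula, in which all four elements appear as inputs. A compact sketch for a European call option; the example numbers are my own:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(spot: float, strike: float, rate: float,
                       volatility: float, years: float) -> float:
    """Black-Scholes price of a European call: prices (spot, strike),
    interest rates (rate), volatility, and time (years)."""
    d1 = (math.log(spot / strike)
          + (rate + 0.5 * volatility ** 2) * years) / (volatility * math.sqrt(years))
    d2 = d1 - volatility * math.sqrt(years)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * years) * norm_cdf(d2)

# Illustrative inputs: more volatility or more time makes the option worth more.
base      = black_scholes_call(100, 100, 0.05, 0.20, 1.0)
more_vol  = black_scholes_call(100, 100, 0.05, 0.40, 1.0)
more_time = black_scholes_call(100, 100, 0.05, 0.20, 2.0)
```

Volatility is the element Bernstein's theme turns on: the option's value grows with uncertainty, because the holder enjoys the upside while the downside is capped.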
“Nature has established patterns originating in the return of events, but only for the most part.” As I pointed out in the Introduction, that qualification is the key to the whole story. Without it there would be no risk, for everything would be predictable. Without it, there would be no change, for every event would be identical to a previous event. Without it, life would have no mystery.