Friday, January 28, 2011

The Great Asymmetry

(...)

Indeed, the notion of asymmetric outcomes is the central idea of this book: I will never get to know the unknown since, by definition, it is unknown. However, I can always guess how it might affect me, and I should base my decisions around that.

This idea is often erroneously called Pascal's wager, after the philosopher and (thinking) mathematician Blaise Pascal. He presented it something like this: I do not know whether God exists, but I know that I have nothing to gain from being an atheist if he does not exist, whereas I have plenty to lose if he does. Hence, this justifies my belief in God.

Pascal's argument is severely flawed theologically: one has to be naïve enough to believe that God would not penalize us for false belief. Unless, of course, one is taking the quite restrictive view of a naïve God. (Bertrand Russell was reported to have claimed that God would need to have created fools for Pascal's argument to work.)

But the idea behind Pascal's wager has fundamental applications outside of theology. It stands the entire notion of knowledge on its head. It eliminates the need for us to understand the probabilities of a rare event (there are fundamental limits to our knowledge of these); rather, we can focus on the payoff and benefits of an event if it takes place. The probabilities of very rare events are not computable; the effect of an event on us is considerably easier to ascertain (the rarer the event, the fuzzier the odds). We can have a clear idea of the consequences of an event, even if we do not know how likely it is to occur. I don't know the odds of an earthquake, but I can imagine how San Francisco might be affected by one. This idea that in order to make a decision you need to focus on the consequences (which you can know) rather than the probability (which you can't know) is the central idea of uncertainty. Much of my life is based on it.

(...)

(The Black Swan, Nassim Nicholas Taleb)
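
To make the asymmetry above concrete, here is a minimal sketch in Python (my own illustration, not Taleb's; the dollar figures and the protection scenario are hypothetical). It sweeps a wide range of guesses for the unknowable probability of a catastrophic event and shows that the decision to protect against it hinges on a crude ratio of consequence to cost, not on a precise probability estimate.

```python
# Hypothetical illustration (not from the book): deciding on consequences
# rather than probabilities. The numbers below are made up.

CATASTROPHIC_LOSS = 10_000_000   # what the rare event would cost if it happens
PROTECTION_COST = 1_000          # fixed cost of protecting against it

def expected_loss(p_event: float, protected: bool) -> float:
    """Expected loss under a given (unknowable) event probability."""
    if protected:
        return PROTECTION_COST              # loss is capped at the premium
    return p_event * CATASTROPHIC_LOSS      # otherwise we bear the full loss

# Sweep guesses spanning four orders of magnitude for the unknown probability.
for p in (1e-6, 1e-5, 1e-4, 1e-3, 1e-2):
    protect = expected_loss(p, protected=True) < expected_loss(p, protected=False)
    print(f"p = {p:.0e} -> protect: {protect}")

# The decision flips at a single crude threshold, PROTECTION_COST / CATASTROPHIC_LOSS
# (here 1e-4). You never need a precise estimate of the odds, only a judgment about
# whether they could plausibly exceed that ratio; the size of the consequence does
# the rest of the work.
```

The point of the sketch is only the shape of the comparison: one side of the inequality depends on the unknowable probability, the other does not.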

Backwards Narrative

Philosophers since Aristotle have taught us that we are deep-thinking animals, and that we can learn by reasoning. It took a while to discover that we do effectively think, but that we more readily narrate backward in order to give ourselves the illusion of understanding, and give a cover to our past actions. The minute we forgot about this point, the "Enlightenment" came to drill it into our heads for a second time.

(The Black Swan, Nassim Nicholas Taleb)

Tuesday, January 25, 2011

We Just Can't Predict

When I ask people to name three recently implemented technologies that most impact our world today, they usually propose the computer, the Internet, and the laser. All three were unplanned, unpredicted, and unappreciated upon their discovery, and remained unappreciated well after their initial use. They were consequential. They were Black Swans. Of course, we have this retrospective illusion of their partaking in some master plan. You can create your own lists with similar results, whether you use political events, wars, or intellectual epidemics.

You would expect our record of prediction to be horrible: the world is far, far more complicated than we think, which is not a problem, except when most of us don't know it. We tend to "tunnel" while looking into the future, making it business as usual, Black Swan-free, when in fact there is nothing usual about the future. It is not a Platonic category!

We have seen how good we are at narrating backward, at inventing stories that convince us that we understand the past. For many people, knowledge has the remarkable power of producing confidence instead of measurable aptitude. Another problem: the focus on the (inconsequential) regular, the Platonification that makes the forecasting "inside the box."

I find it scandalous that in spite of the empirical record we continue to project into the future as if we were good at it, using tools and methods that exclude rare events. Prediction is firmly institutionalized in our world. We are suckers for those who help us navigate uncertainty, whether the fortune-teller or the "well-published" (dull) academics or civil servants using phony mathematics.

(The Black Swan, Nassim Nicholas Taleb)

How Not to Be a Nerd

Think of a bookworm picking up a new language. He will learn, say, Serbo-Croatian or !Kung by reading a grammar book cover to cover, and memorizing the rules. He will have the impression that some higher grammatical authority set the linguistic regulations so that nonlearned ordinary people could subsequently speak the language. In reality, languages grow organically; grammar is something people without anything more exciting to do in their lives codify into a book. While the scholastic-minded will memorize declensions, the a-Platonic nonnerd will acquire, say, Serbo-Croatian by picking up potential girlfriends in bars on the outskirts of Sarajevo, or talking to cabdrivers, then fitting (if needed) grammatical rules to the knowledge he already possesses.

Consider again the central planner. As with language, there is no grammatical authority codifying social and economic events; but try to convince a bureaucrat or social scientist that the world might not want to follow his "scientific" equations. In fact, thinkers of the Austrian school, to which Hayek belonged, used the designations tacit or implicit precisely for that part of knowledge that cannot be written down, but that we should avoid repressing. They made the distinction we saw earlier between "know-how" and "know-what"—the latter being more elusive and more prone to nerdification.

To clarify, Platonic is top-down, formulaic, closed-minded, self-serving, and commoditized; a-Platonic is bottom-up, open-minded, skeptical, and empirical.

(The Black Swan, Nassim Nicholas Taleb)

Wednesday, January 19, 2011

Phony Philanthropy

Frédéric Bastiat was a nineteenth-century humanist of a strange variety, one of those rare independent thinkers—independent to the point of being unknown in his own country, France, since his ideas ran counter to French political orthodoxy (he joins another of my favorite thinkers, Pierre Bayle, in being unknown at home and in his own language). But he has a large number of followers in America.

In his essay "What We See and What We Don't See," Bastiat offered the following idea: we can see what governments do, and therefore sing their praises—but we do not see the alternative. But there is an alternative; it is less obvious and remains unseen.

Recall the confirmation fallacy: governments are great at telling you what they did, but not what they did not do. In fact, they engage in what could be labeled as phony "philanthropy," the activity of helping people in a visible and sensational way without taking into account the unseen cemetery of invisible consequences. Bastiat inspired libertarians by attacking the usual arguments that showed the benefits of governments. But his ideas can be generalized to apply to both the Right and the Left.

Bastiat goes a bit deeper. If both the positive and the negative consequences of an action fell on its author, our learning would be fast. But often an action's positive consequences benefit only its author, since they are visible, while the negative consequences, being invisible, apply to others, with a net cost to society. Consider job-protection measures: you notice those whose jobs are made safe and ascribe social benefits to such protections. You do not notice the effect on those who cannot find a job as a result, since the measure will reduce job openings. In some cases, as with the cancer patients who may be punished by Katrina, the positive consequences of an action will immediately benefit the politicians and phony humanitarians, while the negative ones take a long time to appear — they may never become noticeable. One can even blame the press for directing charitable contributions toward those who may need them the least.

(The Black Swan, Nassim Nicholas Taleb)

Tuesday, January 18, 2011

Patterns vs. Randomness

There is another, even deeper reason for our inclination to narrate, and it is not psychological. It has to do with the effect of order on information storage and retrieval in any system, and it's worth explaining here because of what I consider the central problems of probability and information theory.

The first problem is that information is costly to obtain.

The second problem is that information is also costly to store—like real estate in New York. The more orderly, less random, patterned, and narratized a series of words or symbols, the easier it is to store that series in one's mind or jot it down in a book so your grandchildren can read it someday.

Finally, information is costly to manipulate and retrieve.

(...)

Consider a collection of words glued together to constitute a 500-page book. If the words are purely random, picked up from the dictionary in a totally unpredictable way, you will not be able to summarize, transfer, or reduce the dimensions of that book without losing something significant from it. You need 100,000 words to carry the exact message of a random 100,000 words with you on your next trip to Siberia. Now consider the opposite: a book filled with the repetition of the following sentence: "The chairman of [insert here your company name] is a lucky fellow who happened to be in the right place at the right time and claims credit for the company's success, without making a single allowance for luck," running ten times per page for 500 pages. The entire book can be accurately compressed, as I have just done, into 34 words (out of 100,000); you could reproduce it with total fidelity out of such a kernel. By finding the pattern, the logic of the series, you no longer need to memorize it all. You just store the pattern. And, as we can see here, a pattern is obviously more compact than raw information. You looked into the book and found a rule. It is along these lines that the great probabilist Andrey Nikolayevich Kolmogorov defined the degree of randomness; it is called "Kolmogorov complexity."

We, members of the human variety of primates, have a hunger for rules because we need to reduce the dimension of matters so they can get into our heads. Or, rather, sadly, so we can squeeze them into our heads. The more random information is, the greater the dimensionality, and thus the more difficult to summarize. The more you summarize, the more order you put in, the less randomness. Hence the same condition that makes us simplify pushes us to think that the world is less random than it actually is.

And the Black Swan is what we leave out of simplification.

(The Black Swan, Nassim Nicholas Taleb)
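
The compression argument above can be made tangible with a small sketch (again my own, not Taleb's; it uses zlib's off-the-shelf compressor as a crude, computable stand-in for Kolmogorov complexity, which cannot be computed in general). A "book" made of one repeated sentence collapses to a tiny kernel, while an equally long stream of random characters gives up little beyond its per-character redundancy:

```python
# Crude illustration of the compressibility gap between patterned and random text.
# zlib stands in for Kolmogorov complexity, which is itself uncomputable.
import random
import string
import zlib

random.seed(42)

sentence = ("The chairman is a lucky fellow who happened to be in the right place "
            "at the right time and claims credit for the company's success. ")

# "Book" 1: the same sentence repeated until it reaches 100,000 characters.
patterned = (sentence * (100_000 // len(sentence) + 1))[:100_000]

# "Book" 2: 100,000 characters drawn uniformly at random from a 27-symbol alphabet.
random_text = "".join(random.choices(string.ascii_lowercase + " ", k=100_000))

for name, text in (("patterned", patterned), ("random", random_text)):
    compressed = zlib.compress(text.encode(), 9)
    print(f"{name:>9}: {len(text):,} chars -> {len(compressed):,} bytes compressed")

# The patterned book shrinks to a few hundred bytes: store the sentence once plus,
# in effect, a repeat count. The random book shrinks only modestly (the compressor
# exploits the small alphabet's per-character redundancy) because there is no larger
# rule to find; you must carry nearly all of it with you.
```

The seed only makes the run reproducible; any random draw shows the same qualitative gap between a book with a rule and a book without one.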

Perception Is Biologically Bounded

Actually, as I am writing this, there is news of a pending lawsuit by a patient going after his doctor for more than $200,000 — an amount he allegedly lost while gambling. The patient claims that the treatment of his Parkinson's disease caused him to go on wild betting sprees in casinos. It turns out that one of the side effects of L-dopa is that a small but significant minority of patients become compulsive gamblers. Since such gambling is associated with their seeing what they believe to be clear patterns in random numbers, this illustrates the relation between knowledge and randomness. It also shows that some aspects of what we call "knowledge" (and what I call narrative) are an ailment.

Once again, I warn the reader that I am not focusing on dopamine as the reason for our overinterpreting; rather, my point is that there is a physical and neural correlate to such operation and that our minds are largely victims of our physical embodiment. Our minds are like inmates, captive to our biology, unless we manage a cunning escape. It is the lack of our control of such inferences that I am stressing. Tomorrow, someone may discover another chemical or organic basis for our perception of patterns, or counter what I said about the left-brain interpreter by showing the role of a more complex structure; but it would not negate the idea that perception of causation has a biological foundation.

(The Black Swan, Nassim Nicholas Taleb)