One main property of these fractals (or another way to express their main property, scalability) is that the ratio of two exceedances will equal the ratio of the two threshold levels raised to the negative power exponent.
Let us illustrate this. Say that you "think" that only 96 books a year will sell more than 250,000 copies (which is what happened last year), and that you "think" that the exponent is around 1.5. You can extrapolate to estimate that around 34 books will sell more than 500,000 copies -- simply 96 times (500,000/250,000)^(-1.5). We can continue, and note that around 12 books should sell more than a million copies, here 96 times (1,000,000/250,000)^(-1.5).
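The extrapolation above can be sketched in a few lines. This is a minimal illustration, assuming the number of books exceeding a sales level x scales as x^(-alpha); the function name and parameters are chosen here for illustration, not taken from the text.

```python
def exceedances(base_count, base_level, level, alpha=1.5):
    """Estimate how many items exceed `level`, given that `base_count`
    items exceed `base_level`, under a power-law tail with exponent alpha."""
    # Scalability: the ratio of exceedance counts is the ratio of the
    # two levels raised to the negative power exponent.
    return base_count * (level / base_level) ** (-alpha)

# 96 books sold more than 250,000 copies; extrapolate to higher levels.
print(round(exceedances(96, 250_000, 500_000)))    # about 34
print(round(exceedances(96, 250_000, 1_000_000)))  # about 12
```

Note that the formula gives 96 × 4^(-1.5) = 12 for the million-copy level, since 4^1.5 = 8 and 96/8 = 12.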
Table 2 illustrates the impact of the highly improbable. It shows the contributions of the top 1 percent and 20 percent to the total. The lower the exponent, the higher those contributions. But look how sensitive the process is: between 1.1 and 1.3 you go from 66 percent of the total to 34 percent. Just a 0.2 difference in the exponent changes the result dramatically -- and such a difference can come from a simple measurement error. This difference is not trivial: just consider that we have no precise idea what the exponent is because we cannot measure it directly. All we do is estimate from past data or rely on theories that allow for the building of some model that would give us some idea -- but these models may have hidden weaknesses that prevent us from blindly applying them to reality.
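The sensitivity to the exponent can be reproduced with a standard Pareto result (Table 2 itself is not shown here, so this is an assumption about the model behind it): under a Pareto distribution with tail exponent alpha > 1, the share of the total held by the top fraction p is p^(1 - 1/alpha).

```python
def top_share(p, alpha):
    """Share of the total held by the top fraction p of a Pareto
    distribution with tail exponent alpha (assumed alpha > 1)."""
    return p ** (1 - 1 / alpha)

# Contribution of the top 1 percent for nearby exponents: a small shift
# in alpha swings the share dramatically, from roughly 66% down to 35%.
for alpha in (1.1, 1.2, 1.3):
    print(alpha, round(top_share(0.01, alpha), 2))
```

This matches the swing described in the text: moving the exponent from 1.1 to 1.3 cuts the top 1 percent's contribution roughly in half, which is why a small measurement error in the exponent matters so much.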
(The Black Swan, Nassim Nicholas Taleb)