I have read Allen Downey's books on statistics in the past, when trying to turn myself from a Software Engineer into what Josh Wills says a Data Scientist is -- someone who is better at statistics than a Software Engineer and better at software than a statistician (with somewhat limited success in the first area, I hasten to add). Last year, I had the good fortune to present at PyData Global 2023 (the video is finally out!), so I had a free ticket to attend, and one of the talks I really enjoyed there was Allen Downey's talk Extremes, Outliers and GOATs: on life in a lognormal world. In it, he mentions that the talk is essentially the material from Chapter 4 of his book Probably Overthinking It. I liked the talk enough to buy the book, and I wanted to share my understanding of it with you all, hence this post.
The book is not as dense as a "real" book on stats like, say, The Elements of Statistical Learning, but it is definitely not light reading. I tried reading it on a flight from San Francisco to Philadelphia (and back) and found it pretty heavy going. While the writing is lucid and illustrated with tons of well-explained, easy-to-understand examples, most of the concepts were new to me, and I wished I had taken notes after each chapter so I could relate all these concepts to one another well enough to reason about them rather than just learn about them. So I did another pass through the book, this time with pen and paper, and I now feel more confident about talking to other people about it. Hopefully this is also helpful for folks who have done (or are planning to do) the first pass on the book but not the second.
Most people who are new to statistics (me included) set great store by the Gaussian (Normal) distribution to explain or model various datasets. Chapter 1 challenges this idea and demonstrates that while individual traits may follow a Gaussian distribution, a combination of such traits can be a very restrictive filter. In other words, almost all of us are weird (i.e. not normal). For me, it also introduces the Cumulative Distribution Function (CDF) as a modeling tool.
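To make this concrete, here is a quick simulation of my own (not from the book): filter a simulated population on several independent Gaussian traits at once and see how few people are "normal" on all of them.

```python
import numpy as np

rng = np.random.default_rng(42)
n_people, n_traits = 100_000, 10

# Independent standard-Gaussian traits for a simulated population.
traits = rng.standard_normal((n_people, n_traits))

# Call someone "normal" on a trait if they are within 1.5 standard
# deviations of the mean, i.e. roughly the middle 87% on that trait.
normal_per_trait = np.abs(traits) < 1.5

# Fraction of people who are "normal" on every one of the 10 traits.
print(normal_per_trait.all(axis=1).mean())  # ~0.24: most of us are weird
```

Even though 87% of people are "normal" on any single trait, only about a quarter are normal on all ten, because the per-trait probabilities multiply.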
The second chapter introduces the Inspection Paradox, which explains, among other things, why our wait for the next train always seems longer than the average gap between trains. The explanation lies in the sampling strategy: when we sample by encounter rather than uniformly, over-represented items (like long gaps between trains) get oversampled, skewing our estimates. The chapter also describes a practical use case of this paradox to detect COVID superspreaders.
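Here is a small simulation I put together (mine, not the book's code) of the train-waiting version, using exponentially distributed gaps between trains:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaps between trains: exponential with a mean of 10 minutes.
gaps = rng.exponential(scale=10, size=100_000)
arrivals = np.cumsum(gaps)

# A rider shows up at a uniformly random time; the gap they land in
# is sampled in proportion to its length (length-biased sampling).
t = rng.uniform(0, arrivals[-1], size=100_000)
observed_gaps = gaps[np.searchsorted(arrivals, t)]

print(round(gaps.mean(), 1))           # ~10: the average gap on the schedule
print(round(observed_gaps.mean(), 1))  # ~20: the average gap a rider lands in
```

A random rider is twice as likely to land in a 20-minute gap as in a 10-minute one, so the experienced average is longer than the scheduled average.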
The third chapter describes what the author calls Preston's Paradox, based on a 1976 paper by Samuel Preston. The paradox is that even if every woman has fewer children than her mother, the average family size can increase over time. The explanation is similar in spirit to the Inspection Paradox: because more women come from large families than from small ones, a larger proportion of women end up having large families, and overall that pushes average family size up. The opposite can hold as well, as demonstrated in China, where loosening reproductive restrictions in the aftermath of the one-child policy did not have the desired effect of boosting family sizes.
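A toy calculation (my own made-up numbers, not Preston's data) shows how this can happen when family sizes vary enough:

```python
import numpy as np

# Family sizes in the mothers' generation and how common each is.
# Toy numbers, not Preston's data.
sizes = np.array([1, 2, 3, 4, 10])
probs = np.array([0.4, 0.2, 0.2, 0.1, 0.1])
mean_mothers = (sizes * probs).sum()

# A daughter is sampled in proportion to family size: large families
# contribute more daughters (the same weighting as the Inspection Paradox).
weights = sizes * probs
weights = weights / weights.sum()

# Every daughter has exactly one child fewer than her mother did.
mean_daughters = ((sizes - 1) * weights).sum()

print(mean_mothers)    # 2.8 children per family
print(mean_daughters)  # ~4.2: larger, despite everyone having fewer kids
```

The effect only appears when family sizes are spread out enough (formally, when the variance of family size exceeds its mean); with tamer numbers the average falls as you would expect.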
Chapter 4 is the one the author talked about in the PyData Global talk. In it, he demonstrates that certain attributes are better explained by a log-normal distribution, i.e. one in which the logarithm of the values follows our familiar Gaussian distribution. This is especially true for outlier-type distributions, such as the performance numbers of GOAT (Greatest Of All Time) athletes compared to the general population. The explanation is that GOAT performance is almost always a combination of innate human prowess (nature), that prowess being effectively harnessed and trained (nurture), and a whole lot of other factors that all have to line up just so for the event to happen. Their contributions to the outcome are therefore multiplicative rather than additive, hence the effectiveness of the log-normal distribution over the normal one.
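A quick simulation (my sketch, not the book's) makes the additive-vs-multiplicative point: sums of random factors look Gaussian, while products of the same factors look log-normal.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(2)
n = 100_000

# Sum of many small independent factors -> roughly Gaussian (CLT).
additive = rng.uniform(0.5, 1.5, size=(n, 20)).sum(axis=1)

# Product of the same kind of factors -> roughly log-normal, because
# the log of a product is a sum of logs, and the CLT applies to that sum.
multiplicative = rng.uniform(0.5, 1.5, size=(n, 20)).prod(axis=1)

print(round(skew(additive), 2))                # ~0: symmetric, Gaussian-like
print(round(skew(multiplicative), 2))          # positive: long right tail
print(round(skew(np.log(multiplicative)), 2))  # ~0: Gaussian after taking logs
```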
Chapter 5 explores the survival characteristics of different populations and classifies them as either NBUE (New Better than Used in Expectation) or NWUE (New Worse than Used in Expectation). The former applies to predicting the remaining life of lightbulbs as they age, while the latter applies to predicting cancer survivability and child mortality over time. Using child mortality statistics, the author shows that as healthcare improves and becomes more predictable across age categories, the NWUE distribution changes to more closely resemble an NBUE distribution.
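Here is a rough sketch of the NBUE/NWUE distinction using Weibull lifetimes as a stand-in (my choice of distribution, not necessarily the book's): with shape greater than 1 the expected remaining life falls with age, and with shape less than 1 it rises.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_remaining_life(lifetimes, t):
    """Average remaining lifetime among units still alive at time t."""
    survivors = lifetimes[lifetimes > t]
    return (survivors - t).mean()

# Weibull shape > 1: wear-out behaviour, like lightbulbs (NBUE).
wearout = 10 * rng.weibull(2.0, size=200_000)

# Weibull shape < 1: high early hazard, like historical child
# mortality (NWUE): the longer you survive, the better the outlook.
early_risk = 10 * rng.weibull(0.5, size=200_000)

for t in [0, 5, 10]:
    print(t,
          round(mean_remaining_life(wearout, t), 1),     # falls with age
          round(mean_remaining_life(early_risk, t), 1))  # rises with age
```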
Chapter 6 explores Berkson's Paradox, where a sub-sample selected from a population using some selection criteria can exhibit correlations that did not exist in the population, or even correlations opposite to those observed in the population. Berkson originally pointed out the paradox as a warning against using hospital data (a sub-sample) to draw conclusions about the general population. The selection criteria restrict the general population in specific ways, changing the composition of traits in the sub-sample and thus producing the paradox.
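A small simulation (mine, not the book's) shows the effect: two traits that are independent in the population become anti-correlated once we select on either one being high.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Two traits, independent in the general population.
a = rng.standard_normal(n)
b = rng.standard_normal(n)
print(round(np.corrcoef(a, b)[0, 1], 2))  # ~0: no correlation

# Keep only cases where at least one trait is high -- the kind of
# filter that decides who shows up in a hospital sample.
selected = (a > 1) | (b > 1)
print(round(np.corrcoef(a[selected], b[selected])[0, 1], 2))  # clearly negative
```

Intuitively, if you got into the sample, being low on one trait means you were probably high on the other, which is exactly a negative correlation.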
Chapter 7 warns about the dangers of interpreting correlation as causation, something most of us have probably read or heard about many times in the popular Data Science literature. The main case study here concerns mothers who smoke (or don't smoke) and their low birth weight (LBW) babies. A study concluded that while smokers were more likely to give birth to LBW babies, and LBW babies had a higher mortality rate, the mortality rate of LBW babies whose mothers smoked was 48% lower than of those whose mothers didn't smoke. Furthermore, LBW babies of non-smokers also had a higher rate of birth defects. Interpreting this correlation as causation, i.e. not heeding the warning, it would seem that maternal smoking is beneficial for LBW babies, protecting them from mortality and birth defects. The resolution is that maternal smoking is not the only cause of LBW, and birth defects may be congenital and not linked to smoking; in other words, there are other, more dangerous, biological causes of LBW than maternal smoking. This and a few other examples segue naturally into a brief high-level introduction to Causal Reasoning, which I also found useful.
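Here is a toy simulation of that structure, with completely made-up probabilities: smoking and a congenital defect can both cause LBW, but the defect is far deadlier, so conditioning on LBW makes smoking look protective.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000

# Made-up probabilities, purely illustrative.
smoker = rng.random(n) < 0.4
defect = rng.random(n) < 0.02  # congenital, independent of smoking

# Both smoking and the defect can cause low birth weight (LBW),
# but the defect is the stronger cause.
lbw = rng.random(n) < (0.05 + 0.10 * smoker + 0.60 * defect)

# Mortality: the defect is far deadlier than smoking.
died = rng.random(n) < (0.01 + 0.01 * smoker + 0.30 * defect)

# Conditioning on LBW makes smoking look protective: among LBW babies
# of non-smokers, a defect is the far more likely cause of the LBW.
for label, group in [("smokers", smoker), ("non-smokers", ~smoker)]:
    print(label, round(died[lbw & group].mean(), 3))
```

In this toy world, LBW babies of smokers have lower mortality than LBW babies of non-smokers, even though smoking is harmful for everyone; the apparent protection is purely an artifact of conditioning on LBW.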
Following on from GOAT events being better represented by log-normal rather than normal distributions, Chapter 8 describes applying this idea to model extremely rare events (such as earthquakes and stock market crashes). It concludes that while the log-normal distribution is more "long-tailed" than a Gaussian, rare events have an even longer tail that is better modeled by the log-Student-t (or Log-t) distribution (Student-t is a Gaussian with longer / fatter tails). The chapter also introduces the idea of a tail distribution, the complement of the CDF (i.e. 1 - CDF); a survival chart is a tail distribution chart. The author also makes a brief reference to Nassim Taleb's Black Swan events, saying that the ability to model and predict them makes them more like Gray Swans.
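To see the difference in tails, here is a quick empirical comparison (my sketch, with arbitrary parameters) of tail probabilities for Gaussian, log-normal, and log-t samples:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000

gaussian = rng.normal(loc=1, scale=1, size=n)
lognormal = np.exp(rng.normal(0, 1, size=n))
log_t = np.exp(rng.standard_t(df=3, size=n))

# Empirical tail (survival) probability P(X > x), i.e. 1 - CDF(x).
for x in [5, 20, 100]:
    print(x, (gaussian > x).mean(), (lognormal > x).mean(), (log_t > x).mean())
```

The Gaussian tail vanishes almost immediately, the log-normal tail persists longer, and the log-t tail still assigns noticeable probability to values of 100 and beyond, which is why it suits earthquakes and market crashes.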
Chapter 9 talks about the challenges of ensuring algorithmic fairness for all recipients of an algorithm's predictions, which is very relevant given the many paradoxes the book has already covered. In this chapter, the author describes Bayes' rule without mentioning it by name, calling the prior probability the "base rate" and the mistake of conflating prior and posterior probabilities the "base rate fallacy". He also covers other aspects of fairness, citing differences across groups that an algorithm often does not see. This last part seemed to me to be related to the Inspection Paradox described earlier in the book.
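The classic illustration of the base rate fallacy is a fairly accurate test for a rare condition; the numbers below are made up, but they show how far the posterior falls below the test's accuracy.

```python
# All numbers are made up for illustration.
prevalence = 0.01           # base rate: 1% of people have the condition
sensitivity = 0.90          # P(positive test | condition)
false_positive_rate = 0.10  # P(positive test | no condition)

# Bayes' rule: P(condition | positive test)
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))
posterior = sensitivity * prevalence / p_positive
print(f"{posterior:.1%}")  # ~8.3%, far below the 90% most people guess
```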
Chapter 10 describes Simpson's Paradox, where each sub-population exhibits a similar correlation between two traits, but the same traits are anti-correlated in the combined population. To some extent, this seems related to Berkson's Paradox. Among the examples cited is one about penguins: within each species, beak size and body size are correlated, but across species they are anti-correlated. The explanation is that there is a biological reason for the correlation within a species, while the anti-correlation across species is just a statistical artifact (correlation != causation in action, I guess?).
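Here is a synthetic version of the penguin example (invented numbers, not the actual penguin data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Three synthetic "species": within each one, beak and body size are
# positively correlated, but the species means run the opposite way.
species_means = [(10, 40), (13, 35), (16, 30)]  # (mean beak, mean body)
beaks, bodies = [], []
for mu_beak, mu_body in species_means:
    shared = rng.standard_normal(500)  # drives the within-species link
    beaks.append(mu_beak + shared)
    bodies.append(mu_body + 0.8 * shared + 0.3 * rng.standard_normal(500))

for beak, body in zip(beaks, bodies):
    print(round(np.corrcoef(beak, body)[0, 1], 2))  # positive within species

print(round(np.corrcoef(np.concatenate(beaks), np.concatenate(bodies))[0, 1], 2))
# negative when the species are pooled together
```

The pooled correlation is dominated by the differences between species means, which run opposite to the within-species relationship.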
Chapter 11 is about how certain instances of Simpson's Paradox can be explained as a combination of other underlying factors. It is a truism that people get more conservative as they get older (i.e. if you are not a liberal when you are young, you have no heart, and if you are not a conservative when old, you have no brain). However, within each age group, it is observed that people actually get more liberal over time. This is explained as a combination of the age effect, the period effect, and the cohort effect. The age effect shows a positive correlation between adherence to traditional beliefs (conservativeness) and age. The cohort effect is that, within each birth cohort, people get more liberal over time. Finally, the period effect covers specific events during the time period under consideration, including older (more conservative) people dying out and being replaced by younger (more liberal) people.
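A toy model (my own made-up coefficients) shows how a strong cohort effect plus a small liberal drift with age can produce a cross-section where older people look more conservative:

```python
# Made-up coefficients, purely illustrative.
def liberalness(birth_year, year):
    cohort = 0.10 * (birth_year - 1900)  # later cohorts start more liberal
    aging = 0.02 * (year - birth_year)   # individuals drift liberal with age
    return cohort + aging

# Cross-section in 2020: older people look more conservative...
for age in [20, 40, 60, 80]:
    print(age, round(liberalness(2020 - age, 2020), 1))

# ...even though any given cohort keeps getting more liberal over time.
print([round(liberalness(1960, y), 1) for y in (1990, 2005, 2020)])
```

The cross-sectional slope against age is negative because the cohort effect (0.10 per year) outweighs the individual drift (0.02 per year), even though no individual ever moves in the conservative direction.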
Chapter 12 continues the discussion from the previous chapter and brings in the idea of the Overton Window, which dictates what views are considered acceptable at any particular point in time, and which itself shifts over time. What was thought liberal in decades past is now considered more conservative. So while an individual may get more liberal with time, the Overton Window has shifted towards liberalism faster, which can explain why an individual may find themselves getting more conservative as they age, relative to the world around them.
Overall, I enjoyed this book. The most impressive thing about it, to me, was its use of generally available datasets to model physical and social environments, and its use of simulations to control for certain aspects of these data experiments. I also learned a few things about corner cases in statistics that I think will be useful when reasoning about data in the future. I hope I have sparked your curiosity about this book as well.