Comments on two books I saw on the new bookshelf in the past few months:
I. Bartholomew, David J., Unobserved Variables: Models and Misunderstandings, Springer Briefs, 2013
This is a delightful little book, a pleasure to read. From the abstract:
“Although unobserved variables go under many names there is a common structure underlying the problems in which they occur. The purpose of this Brief is to lay bare that structure and to show that the adoption of a common viewpoint unifies and simplifies the presentation. Thus, we may acquire an understanding of many disparate problems within a common framework. … The use of [methods where unobserved variables occur] has given rise to many misunderstandings which, we shall argue, often arise because the need for a statistical, or probability, model is unrecognized or disregarded. A statistical model is the bridge between intuition and the analysis of data.”
The explanations seem to be at a good level – enough detail so as not to be vague, but not so much that one gets bogged down. I particularly enjoyed the section on mixture models.
The only two drawbacks:
1. The proofreading is sloppy in places, but not so much as to detract from understanding.
2. The cost – even a used copy runs $40, while the book has only 86 pages.
II. Sabo and Boone, Statistical Research Methods: A Guide for Non-Statisticians, Springer, 2013
Based just on a quick glance, my impression was at best mixed — I would not recommend this as a textbook or reference, although it does have a few good points. Examples of Pros and Cons (in reverse order, since my overall recommendation is negative):
- No index. [Perhaps it is really intended just for the authors' own teaching?]
- The introduction talks about “representative samples” rather than “random samples.” This may be well-intentioned, but it is likely to lead to misunderstanding. Quote:
“The idea is that if a sample is representative of a population, the numeric or mathematical characteristics of that population will be present in the sample. This attribute will ensure that statistical analysis of the sample would yield similar results to a (hypothetical) statistical analysis of the population.” (p. 4).
[Note: on p. 16, they do mention random samples, but still talk about “representative” ones.]
- They use probability in both frequentist and Bayesian ways, but (as far as I could find) don’t point out the difference. Example: they seem to present only frequentist methods, but say (p. 19)
“This is one of the most important ideas in this entire textbook: if the data do not support a given assumption, then that assumption is most likely not true. On the other hand, if our data did support our assumption, then we would conclude that the assumption is likely to be true (or at least more likely than some alternative).”
(Note: They haven’t discussed power at this point.)
- (p. 167)
“As mentioned earlier, statisticians can make only two mistakes (all others must have been made by someone else): we can falsely reject a true null hypothesis (type I error), or we can falsely fail to reject the null hypothesis when the alternative hypothesis is true (type II error).”
Is this a joke??
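Joke aside, the definitions in the quote are standard, and the type I part is easy to check by simulation (my own illustration, not from the book): when the null hypothesis is actually true, a test run at the 5% level should falsely reject about 5% of the time. A minimal sketch with a one-sample t-test:

```python
import random
import statistics

random.seed(1)

def one_sample_t_rejects(sample, mu0, t_crit):
    """Two-sided one-sample t-test: reject H0 (mean = mu0) if |t| > t_crit."""
    n = len(sample)
    t = (statistics.fmean(sample) - mu0) / (statistics.stdev(sample) / n ** 0.5)
    return abs(t) > t_crit

mu0 = 0.0
t_crit = 2.045  # two-sided 5% critical value for t with 29 df
trials = 4000
rejections = 0
for _ in range(trials):
    # The null is true by construction: data are drawn with mean exactly mu0.
    sample = [random.gauss(mu0, 1.0) for _ in range(30)]
    if one_sample_t_rejects(sample, mu0, t_crit):
        rejections += 1

print(rejections / trials)  # the type I error rate, close to 0.05
```

The observed rejection rate hovers near the nominal 5%, which is exactly what “falsely reject a true null hypothesis” quantifies.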
- (p. 28) Points out that the level of confidence of a confidence interval
“is often taken as the quantification of our belief that the true population parameter resides within the estimated confidence interval; this is false. … Rather, the confidence level reflects our belief in the process of constructing confidence intervals, so that we believe that 95% of our estimated confidence intervals would contain the true population parameter, if we could repeatedly sample from the same population. This is an important distinction that underlies what classical statistical methods and inference can and cannot state (i.e., we don’t know anything about the population parameter, only our sample data).”
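This “belief in the process” interpretation can be demonstrated directly by simulation; here is a minimal sketch of my own (not from the book), repeatedly sampling from a known population and counting how often the normal-approximation interval covers the true mean:

```python
import random
import statistics

random.seed(0)

def normal_ci(sample, z=1.96):
    """Approximate 95% CI for the mean, via the normal approximation."""
    n = len(sample)
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    return mean - z * se, mean + z * se

true_mean = 10.0  # known only because we control the simulation
trials = 2000
covered = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, 2.0) for _ in range(50)]
    lo, hi = normal_ci(sample)
    if lo <= true_mean <= hi:
        covered += 1

print(covered / trials)  # the coverage rate, close to 0.95
```

About 95% of the intervals cover the true mean, even though any single computed interval either contains it or doesn’t – the 95% is a property of the procedure, not of one interval.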
- (p. 168) Mentions all four components (significance level, expected variability in response, desired effect size, and sample size) that “are interrelated and each affects power in different ways,” and gives each a paragraph.
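The interrelation among those components is easy to see in the simplest case. As my own illustration (not the book’s), here is the closed-form power of a two-sided one-sample z-test at level 0.05, showing how power grows with sample size for a fixed standardized effect size:

```python
import math

def power_z_test(effect_size, n, z_alpha=1.96):
    """Power of a two-sided one-sample z-test at level 0.05,
    for standardized effect size d = (mu1 - mu0) / sigma and sample size n."""
    def phi(x):
        # standard normal CDF via the error function
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))
    shift = effect_size * math.sqrt(n)
    # reject if Z > z_alpha or Z < -z_alpha; under the alternative, Z ~ N(shift, 1)
    return (1 - phi(z_alpha - shift)) + phi(-z_alpha - shift)

for n in (10, 30, 100):
    print(n, round(power_z_test(0.5, n), 3))
```

For a medium effect size of d = 0.5, power climbs from roughly a third at n = 10 to near certainty at n = 100 – which is why sample-size planning has to consider all four components together rather than any one in isolation.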