COMMON MISTAKES IN USING STATISTICS: Spotting and Avoiding Them




Glossary

In statistics, as in many fields, words may have technical meanings that differ from their ordinary, everyday meanings or from the technical meanings used in other fields. Some such words are listed in this glossary, with their technical definitions (or the important parts of the technical definition, if it is long) and links to relevant pages in this website.

Conditional distribution: A probability distribution obtained by restricting certain variables to certain values. This gives a conditional distribution of the remaining variables. For example, we could talk about the conditional distribution of people's heights given that they are over age 80.

Conditional probability: A probability with some restriction imposed; e.g., the probability of dying of a heart attack in the next five years for people over age 65 is a conditional probability, since only people over age 65 are being considered. A conditional probability can also be thought of as a probability restricted to a subpopulation.
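
For illustration, here is a minimal Python sketch (using simulated, made-up numbers rather than real data) that estimates a conditional probability by restricting attention to a subpopulation:

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated population (illustrative values only): ages, plus an
    # indicator of whether each person experiences some outcome.
    age = rng.integers(20, 100, size=100_000)
    outcome = rng.random(100_000) < (0.001 * age)  # risk rises with age

    # Unconditional probability: fraction of the whole population with the outcome.
    p_overall = outcome.mean()

    # Conditional probability: the same fraction, restricted to people over age 65.
    over_65 = age > 65
    p_given_over_65 = outcome[over_65].mean()

    print(f"P(outcome)            = {p_overall:.3f}")
    print(f"P(outcome | age > 65) = {p_given_over_65:.3f}")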

Data snooping: Inference someone decides to do after looking at the data (contrasted with pre-planned inference).  Data snooping can be done ethically, unethically due to ignorance, or unethically by intent.

Experiment: A study where the researchers deliberately do something (an "intervention" or "manipulation" or "assignment to treatment") to affect at least some of the data collected.

Extrapolation: Extrapolation is a broad word referring to various ways of going beyond the data. For example, one can extrapolate beyond the range of the explanatory variables or beyond the population from which the data are drawn. The further from the data the extrapolation, the less reliable the results.

File drawer problem: The tendency not to publish results that are not statistically significant (or whose results are not what was hoped for), giving a misrepresentation in the literature. Also called publication bias.

Family-wise Error Rate: The probability that a randomly chosen sample (of the given size, satisfying the appropriate model assumptions) will give a Type I error for at least one of the hypothesis tests performed on the sample. Used when more than one hypothesis test is performed on the same data. Also known as overall Type I error rate, or joint significance level, or simultaneous significance level, or joint Type I error rate, or experiment-wise error rate, etc. Also abbreviated FWER.
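
For example, if m independent tests are each performed at significance level 0.05 and all the null hypotheses are true, the family-wise error rate is 1 - (1 - 0.05)^m, which grows quickly with m. A minimal Python sketch of this calculation (the independence assumption is an idealization):

    # Family-wise error rate for m independent tests, each at level alpha,
    # when every null hypothesis is true (idealized calculation).
    alpha = 0.05
    for m in (1, 5, 10, 20):
        fwer = 1 - (1 - alpha) ** m
        print(f"{m:2d} tests at level {alpha}: FWER = {fwer:.3f}")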

Fixed effect factor: A factor for which data have been gathered from all the levels that are of interest.

Intent-to-treat analysis: In a controlled comparison (for example, a clinical trial comparing two drugs), an intent-to-treat analysis compares the results of all subjects randomized to each treatment. This preserves the randomization (which may be needed to legitimize the statistical analysis) and also is typically what is of interest to the consumer.

Model assumptions: Most frequentist techniques for statistical inference assume certain properties of the data collection, of the random variables studied, etc. If the assumptions are not valid for the application at hand, the results of the inference may not be valid either. However, some techniques are robust to some assumptions -- that is, the results are still approximately valid if the model assumption is close to being satisfied.

Multiple inference: Performing more than one hypothesis test on the same data. Also known as joint inference, or simultaneous inference, or multiple testing, or multiple comparisons, or the problem of multiplicity.

Overfitting: Obtaining a regression model that may fit the data very well, but at the expense of not fitting the population from which the data were obtained.
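
As a hypothetical illustration, the Python sketch below (simulated data, arbitrary settings) fits a straight line and a degree-10 polynomial to a small sample drawn from a population in which the true relationship is linear; the polynomial typically fits the sample better but describes fresh data from the same population worse:

    import numpy as np

    rng = np.random.default_rng(1)

    def draw_sample(n):
        # The population: y is linear in x plus noise (illustrative only).
        x = rng.uniform(0, 1, n)
        y = 2 * x + rng.normal(0, 0.3, n)
        return x, y

    x_train, y_train = draw_sample(15)    # the observed sample
    x_new, y_new = draw_sample(1000)      # fresh data from the same population

    for degree in (1, 10):
        coefs = np.polyfit(x_train, y_train, degree)
        mse_sample = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
        mse_new = np.mean((np.polyval(coefs, x_new) - y_new) ** 2)
        print(f"degree {degree:2d}: error on sample = {mse_sample:.3f}, "
              f"on new data = {mse_new:.3f}")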

Parameter: A number associated with a random variable or its distribution that helps characterize the distribution. For example, the mean and standard deviation are parameters. If we know that a distribution is normal, then knowing the values of these two parameters tells us exactly which normal distribution it is.

Population: A particular group of subjects being studied. For example, we might talk about the population of all adults over age 50 who have high blood pressure, or the population of all adults over age 50, or the population of all adults over age 60, or the population of all adults who have high blood pressure, etc.

Power of a statistical procedure: A measure of the ability of a statistical procedure to detect a true difference.
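
For instance, power can be estimated by simulation. The Python sketch below (arbitrary, hypothetical settings: two groups of 30, a true difference of half a standard deviation, tests at the 0.05 level) estimates how often a two-sample t-test detects that difference:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    n, true_diff, alpha, n_sim = 30, 0.5, 0.05, 10_000
    rejections = 0
    for _ in range(n_sim):
        a = rng.normal(0.0, 1.0, n)          # group A
        b = rng.normal(true_diff, 1.0, n)    # group B: true mean differs by 0.5
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            rejections += 1

    # The proportion of samples in which the true difference was detected.
    print(f"Estimated power = {rejections / n_sim:.2f}")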

Pre-planned inference: Inference planned as part of the design of a study, before looking at the data.

Probability distribution: A way of describing how the probabilities of values of a random variable vary.

Pseudoreplication: Using data that does not have true replication for each experimental condition.

Random: Refers to the method by which a sample is chosen, not to properties of the resulting sample.

Random effect factor: A factor that has many possible levels; interest is in all possible levels, but only a random sample of levels is included in the experiment.
 
Random variable: Can usually be thought of as a variable whose value depends on a random process.

Replication: May refer to having more than one experimental (or observational) unit with the same treatment, or may refer to repeating a study to see if the same conclusion is obtained with different data.

Robust: A statistical technique is said to be robust to departures from a model assumption if the technique is still approximately valid if that model assumption is not true.

Sample: A collection of individuals from a particular population that are chosen for study, with the intent of saying something about the overall population.

Significance level: The probability of falsely rejecting a true null hypothesis when repeatedly using a specific hypothesis test on different samples.
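
To illustrate by simulation, the Python sketch below (arbitrary settings) repeatedly draws samples for which the null hypothesis is true and records how often a t-test at the 0.05 level falsely rejects it; the long-run proportion should be close to 0.05:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    alpha, n_sim, n = 0.05, 10_000, 30
    false_rejections = 0
    for _ in range(n_sim):
        sample = rng.normal(0.0, 1.0, n)    # null hypothesis (mean = 0) is true
        _, p = stats.ttest_1samp(sample, 0.0)
        if p < alpha:
            false_rejections += 1

    print(f"Observed Type I error rate = {false_rejections / n_sim:.3f}")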

Skewed distribution: A distribution of a random variable whose values are not distributed symmetrically, and values on one end of the range are more frequent than those on the other end. The "tail" is the end where values are less frequent.
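
A small Python example (an exponential distribution, chosen purely for illustration) of a right-skewed distribution, where the long tail pulls the mean above the median:

    import numpy as np

    rng = np.random.default_rng(4)

    # The exponential distribution is skewed right: its long tail is on the high end.
    x = rng.exponential(scale=1.0, size=100_000)
    print(f"mean   = {x.mean():.2f}")      # pulled upward by the long right tail
    print(f"median = {np.median(x):.2f}")  # smaller than the mean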

Type I error: Falsely rejecting a true null hypothesis.

Type II error: Failing to reject the null hypothesis when the null hypothesis is not true.

Uncertainty: Used in various (related) ways by various speakers and authors, but sometimes distinguished from variability.

Variability: Used in various (related) ways by various speakers and authors, but sometimes distinguished from uncertainty.

Last updated February 4, 2013