Philosophy of Science and Statistics
8/10/2024
Eli Gacasan
What does philosophy have to do with statistics?
The philosophy of science tries to explain and justify the methodologies that scientists use. Influential works that made statistical methodology more accessible to the general public include Exploratory Data Analysis (1977) by John Tukey, and The Design of Experiments (1935) and Statistical Methods for Research Workers (1925) by R.A. Fisher. Fisher also introduced the concept of likelihood.
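To make the likelihood idea concrete, here is a minimal sketch (assuming NumPy, with made-up coin-flip data): the likelihood scores how well each candidate parameter value explains the observed data, and the maximum-likelihood estimate is simply the value that scores highest.

```python
# A minimal sketch of the likelihood idea (hypothetical coin-flip data).
import numpy as np

flips = np.array([1, 0, 1, 1, 0, 1, 1, 1])   # 1 = heads; made-up data
p_grid = np.linspace(0.01, 0.99, 99)          # candidate values of P(heads)

# Log-likelihood of a Bernoulli sample under each candidate p
heads, n = flips.sum(), flips.size
log_lik = heads * np.log(p_grid) + (n - heads) * np.log(1 - p_grid)

p_mle = p_grid[np.argmax(log_lik)]
print(p_mle)   # matches the sample proportion, heads / n = 0.75
```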
John Nelder and Robert Wedderburn developed generalized linear models in 1972, extending the simple linear model in two ways: 1. the distribution of the response can be something other than a normal distribution, and 2. a function of the response's mean (the link) is modeled as a linear combination of the predictors.
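Here is a minimal sketch of both extensions (assuming statsmodels and simulated data): a Poisson GLM, where the response is count-valued rather than normal, and the log link relates the mean of the response to a linear function of the predictor.

```python
# A minimal Poisson GLM sketch: log E[y] = b0 + b1 * x (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)                  # hypothetical simulated data
x = rng.uniform(0, 2, size=200)
y = rng.poisson(lam=np.exp(0.5 + 1.2 * x))      # true model: log E[y] = 0.5 + 1.2x

X = sm.add_constant(x)                          # design matrix with intercept
model = sm.GLM(y, X, family=sm.families.Poisson())  # log link is the default
fit = model.fit()
print(fit.params)                               # estimates should be near (0.5, 1.2)
```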
Not very many people know about the philosophy of statistics in general, and a course in the history of statistics or science is scarcely found in university curricula, even though current and historical thinking about why statistics works for expanding knowledge seems so important.
Most college students of statistics will be familiar with the essential differences between Bayesian and classical statistics, but why does it matter? How is each approach justified in the process of discovering new information? What are the challenges in understanding how these approaches work, and what do we know so far?
Recently, I found Deborah Mayo's Error and the Growth of Experimental Knowledge (1996) to be a broad, albeit long, introduction to the different ways of answering the question, “How and why do these scientific methods work?”
Article Image Source: https://www.york.ac.uk/depts/maths/histstat/bayespic.htm