Statistics is a young science. Statistical practice grew out of counting activities, so its history can be traced back to primitive society, more than five thousand years ago. Raising statistical practice to the level of theoretical summary, however, is a modern development with a history of only a little more than three hundred years. The development of statistics can be roughly divided into three stages: classical record-keeping statistics, modern descriptive statistics, and modern inferential statistics.
Since the beginning of the 20th century, with the rapid development of science and technology, society has undergone great changes and statistics has entered a period of rapid development. Some scientists even call our age "the statistical age". Clearly, the development of statistical science in the 20th century, and its future, are of epoch-making significance.
In the 16th century, the Italian scholar Gerolamo Cardano (1501.9.24-1576.9.21) began to study simple problems arising in gambling, such as dice games. He is a founder of classical probability theory.
In 1654, the French mathematicians Blaise Pascal (1623.6.19-1662.8.19) and Pierre de Fermat (1601-1665) together solved a problem sent to them by an upper-class gambler and amateur philosopher, who could not understand why betting on a certain combination when throwing three dice always lost money. In solving this problem they established the foundations of probability theory and combinatorics and obtained a series of methods for solving probability problems, laying the groundwork for modern probability theory.
In 1713, the posthumous work of the Swiss mathematician Jacob Bernoulli (1654.12.27-1705.8.16), Ars Conjectandi (The Art of Conjecturing), was published, in which he put forward the "law of large numbers". He is the founder who made probability theory a branch of mathematics. The law was later generalized by the Russian mathematician and mechanician Pafnuty Lvovich Chebyshev (Пафнутий Львович Чебышёв, 1821.5.26-1894.12.8).
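In modern notation, Bernoulli's law of large numbers can be stated as follows (a standard formulation, not Bernoulli's original):

$$P\left(\left|\frac{S_n}{n}-p\right|<\varepsilon\right)\to 1 \quad (n\to\infty),$$

where $S_n$ is the number of successes in $n$ independent trials each succeeding with probability $p$, and $\varepsilon>0$ is arbitrary.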
In 1733, the French-born mathematician and astronomer Abraham de Moivre (1667.5.26-1754.11.27), studying Bernoulli's law of large numbers, derived the asymptotic formula for the binomial distribution. This result was later extended to the general case by Pierre-Simon Laplace, and is now called the "De Moivre-Laplace central limit theorem"; it is the second limit theorem, and the first central limit theorem, in the history of probability theory.
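A standard modern statement of the De Moivre-Laplace theorem: if $S_n$ is binomial with parameters $n$ and $p$, then for any $a<b$

$$P\left(a<\frac{S_n-np}{\sqrt{np(1-p)}}\le b\right)\longrightarrow \frac{1}{\sqrt{2\pi}}\int_a^b e^{-x^2/2}\,dx \quad (n\to\infty).$$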
In 1763, two years after the death of the British mathematician Thomas Bayes (1702-1761), his friend Richard Price published Bayes's posthumous results, which put forward the "Bayes formula". Bayes is one of the two figures who had an important influence on the early development of probability theory and mathematical statistics (the other being Blaise Pascal).
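In its modern form, the Bayes formula reads: for a partition $A_1,\dots,A_n$ of the sample space and an event $B$ with $P(B)>0$,

$$P(A_i\mid B)=\frac{P(B\mid A_i)\,P(A_i)}{\sum_{j=1}^{n}P(B\mid A_j)\,P(A_j)}.$$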
In 1809, the German mathematician Johann Carl Friedrich Gauss (1777.4.30-1855.2.23) published his Theory of the Motion of Celestial Bodies Revolving Around the Sun, which included a section on the combination of observations. In that section he discussed the theory of error distributions and, in the process, independently derived the "normal distribution", which popularized its application. In the same year, Gauss put forward the "method of least squares".
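In modern notation, the normal error density Gauss arrived at, and the least-squares criterion (stated here in its standard regression form rather than Gauss's original astronomical notation), are:

$$f(x)=\frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \qquad \hat{\beta}=\arg\min_{\beta}\sum_{i=1}^{n}\bigl(y_i-x_i^{\top}\beta\bigr)^2.$$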
In 1812, the famous French astronomer and mathematician Pierre-Simon, Marquis de Laplace (1749.3.23-1827.3.5), published the Analytic Theory of Probability. In this book he gave, for the first time, a clear classical definition of probability (usually called "classical probability") and introduced more powerful analytical tools into probability theory, such as difference equations and generating functions, thereby carrying probability theory from simple combinatorial calculation to analytical methods and pushing it to a new stage of development.
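The classical definition: if an experiment has $n$ equally likely outcomes, of which $k$ are favorable to the event $A$, then

$$P(A)=\frac{k}{n}.$$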
In 1821, the German mathematician Gauss put forward "maximum likelihood estimation" in the setting of the normal distribution.
In the 1920s, the Finnish mathematician Jarl Waldemar Lindeberg (1876.8.4-1932.12.12) and the French mathematician Paul Pierre Lévy (1886.9.15-1971.12.15) established the central limit theorem for sums of independent, identically distributed random variables with finite variance, now known as the Lindeberg-Lévy central limit theorem.
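In modern notation, the Lindeberg-Lévy theorem states: if $X_1,X_2,\dots$ are i.i.d. with mean $\mu$ and finite variance $\sigma^2>0$, then

$$\frac{1}{\sigma\sqrt{n}}\sum_{i=1}^{n}(X_i-\mu)\ \xrightarrow{d}\ N(0,1) \quad (n\to\infty).$$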
In 1837, the French mathematician Siméon Denis Poisson (1781.6.21-1840.4.25) first put forward the "Poisson distribution". The distribution had been described earlier by members of the Bernoulli family.
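The Poisson distribution with parameter $\lambda>0$:

$$P(X=k)=\frac{\lambda^{k}e^{-\lambda}}{k!}, \qquad k=0,1,2,\dots$$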
In 1863, Abbe first proposed the χ² distribution, which was later derived again by Helmert in 1875 and by Karl Pearson, one of the founders of modern statistics, in 1900.
In 1875, Francis Galton (1822.2.16-1911.1.17), the British scientist and explorer, carried out an experiment on sweet peas with the help of his friends. By analyzing the data he obtained, he eventually discovered the phenomenon of "regression toward the mean", from which the term regression derives.
In 1888, Francis Galton put forward the concept of an "index of co-relation" and, based on it, developed a graphical method for estimating the correlation coefficient. In the same year, in a paper, he gave the first official value of a "correlation coefficient", describing quantitatively the degree of association between two variables.
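The product-moment form of the correlation coefficient, later formalized by Karl Pearson, is:

$$r=\frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}}.$$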
In the second half of the 19th century, the St. Petersburg school in Russia introduced "random variables". This marks the transition of probability theory from the classical period to the modern period.
In 1895, the statistician Karl Pearson (1857.3.27-1936.4.27) first put forward "skewness".
In 1900, the German mathematician David Hilbert (1862-1943) called for an axiomatic treatment of probability, seeking a definition of probability general enough to apply to all random phenomena.
In 1900, the British mathematician and biometrician Karl Pearson (1857.3.27-1936.4.27) put forward the substitution principle; estimators obtained from this principle are called "moment estimators". In the same year he introduced the famous "χ² goodness-of-fit test". Karl Pearson was a great founder of statistics in the 20th century and is called the father of 20th-century statistics. His work bridged the descriptive and inferential stages in the historical development of statistics and laid a solid foundation for its rapid growth.
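Pearson's goodness-of-fit statistic for $r$ categories, with observed counts $O_i$ and expected counts $E_i$ under the hypothesized distribution, is

$$\chi^2=\sum_{i=1}^{r}\frac{(O_i-E_i)^2}{E_i},$$

which under the null hypothesis is asymptotically χ²-distributed with $r-1$ degrees of freedom.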
In 1901, Karl Pearson put forward "principal component analysis" (a classical method of multivariate statistical analysis), but only for non-random variables. In 1933 it was generalized to random variables by Harold Hotelling (1895-1973), an American recognized as a master of statistics, economics, and mathematics.
In 1905, the statistician Karl Pearson (1857.3.27-1936.4.27) first proposed "kurtosis". (Aside: notice the sequence. Pearson first proposed skewness, then moment estimation, the χ² goodness-of-fit test, and principal component analysis, and only then kurtosis. What inspired Pearson to think of skewness is worth pondering.)
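In terms of moments, skewness and (excess) kurtosis are:

$$\gamma_1=\frac{E\left[(X-\mu)^3\right]}{\sigma^3}, \qquad \gamma_2=\frac{E\left[(X-\mu)^4\right]}{\sigma^4}-3.$$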
At the beginning of the 20th century, Karl Pearson put forward "hypothesis testing", which was later refined by Fisher. Finally, Neyman and E. Pearson put forward a relatively complete theory of hypothesis testing.
In 1908, the British statistician William Sealy Gosset, writing under the pseudonym "Student", published in Biometrika the paper that made him famous in the history of statistics: "The probable error of a mean". This paper put forward the "t distribution". The discovery of the t distribution was of epoch-making significance: it broke the dominance of the normal distribution and opened a new era of statistical inference with small samples. Later, Fisher noticed a flaw in the proof, gave a complete proof in 1922, and compiled quantile tables of the t distribution.
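In the standard one-sample setting: if $X_1,\dots,X_n$ are i.i.d. $N(\mu,\sigma^2)$, then

$$t=\frac{\bar{X}-\mu}{S/\sqrt{n}}\ \sim\ t_{n-1},$$

where $\bar{X}$ is the sample mean and $S$ the sample standard deviation.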
From 1909 to 1920, A. K. Erlang, the Danish mathematician and electrical engineer, studied telephone traffic using the methods of probability theory and founded "queuing theory".
Starting in the 1920s, in order to estimate the laws governing the evolution of random sequences more accurately, the academic community began to analyze time series using the principles of mathematical statistics. The focus of research shifted from summarizing surface phenomena to analyzing the internal relationships among sequence values, opening up a branch of applied statistics: "time series analysis".
In 1922, R. A. Fisher formally put forward "sufficient statistics"; the idea originated in his argument with the astronomer Eddington about the estimation of the standard deviation. In the same year, building on Gauss's work of 1821, he again put forward the idea of "maximum likelihood estimation" and proved some of its properties, which led to the wide use of the maximum likelihood method.
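In modern notation, the maximum likelihood estimator for a sample $x_1,\dots,x_n$ from a density $f(x;\theta)$ is

$$\hat{\theta}=\arg\max_{\theta}\ L(\theta)=\arg\max_{\theta}\ \prod_{i=1}^{n}f(x_i;\theta),$$

usually found by maximizing the log-likelihood $\ln L(\theta)$.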
In 1924, Dr. Walter A. Shewhart of Bell Laboratories proposed the use of the "control chart" in a memorandum to his superiors. The quality control chart is a graphical method that applies statistical principles to the control of product quality. He is the father of statistical quality control (SQC).
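The familiar three-sigma control limits associated with Shewhart's charts (a standard formulation, stated here for reference) place the center line at the process mean $\mu$ and the control limits at

$$UCL=\mu+3\sigma, \qquad LCL=\mu-3\sigma,$$

so that points falling outside the limits signal a process likely out of statistical control.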
In 1924, Ronald Aylmer Fisher (1890-1962), the British statistician, geneticist, and founder of modern statistical science, put forward the "F distribution", named after the first letter of his surname. Later he proposed "analysis of variance" (ANOVA).
Ronald Aylmer Fisher (1890-1962) also supplemented the χ² goodness-of-fit test introduced by Karl Pearson. In practical problems the hypothesized distribution sometimes depends on k unknown parameters, and Pearson's theorem no longer applies directly. Fisher proved that, under the same conditions, one can first estimate the k unknown parameters by maximum likelihood and then compute the statistic with the estimated values; the resulting statistic is still asymptotically χ²-distributed, but with r-k-1 degrees of freedom.
In 1928, Neyman and E. Pearson put forward the "likelihood ratio test", a widely used testing method whose position in hypothesis testing is comparable to that of MLE in point estimation.
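The likelihood ratio statistic for testing $H_0:\theta\in\Theta_0$ against $H_1:\theta\in\Theta\setminus\Theta_0$ is

$$\lambda=\frac{\sup_{\theta\in\Theta_0}L(\theta)}{\sup_{\theta\in\Theta}L(\theta)};$$

small values of $\lambda$ are evidence against $H_0$ (by Wilks's later result, $-2\ln\lambda$ is asymptotically χ²-distributed under regularity conditions).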
In 1929, the Soviet mathematician Aleksandr Yakovlevich Khinchin (Александр Яковлевич Хинчин, 1894.7.19-1959.11.18) generalized the law of large numbers to independent, identically distributed random variables, requiring only a finite mean (Khinchin's law of large numbers).
In 1929, Behrens posed the problem of finding an exact confidence interval for the difference between the means of two normal populations when no information about their variances is assumed and the sample sizes m and n are not large. This is the famous "Behrens-Fisher problem".
In 1933, the Soviet mathematician Andrey Nikolaevich Kolmogorov (1903.4.25-1987.10.20) established a rigorous axiomatic system for probability theory on the basis of measure theory, making it a rigorous mathematical system like calculus. Because this system encompasses both the classical and the statistical definitions of probability, it meets not only the needs of mathematics itself but also the requirements of natural science and even engineering.
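Kolmogorov's axioms, in brief: a probability measure $P$ on a σ-field of events satisfies

$$P(A)\ge 0,\qquad P(\Omega)=1,\qquad P\Bigl(\bigcup_{i=1}^{\infty}A_i\Bigr)=\sum_{i=1}^{\infty}P(A_i)$$

for any sequence of pairwise disjoint events $A_1,A_2,\dots$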
In 1933, Harold Hotelling (1895-1973), the American statistician and economist, put forward "principal component analysis" for random variables (see the 1901 entry above). It embodies the idea of dimensionality reduction: a multivariate statistical method that, while losing little information, transforms many indicators into a few composite indicators through an orthogonal transformation.
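In modern notation: for a random vector $X$ with mean $\mu$ and covariance matrix $\Sigma$, the principal components are built from the eigen-decomposition

$$\Sigma v_k=\lambda_k v_k,\qquad Y_k=v_k^{\top}(X-\mu),$$

with eigenvalues ordered $\lambda_1\ge\lambda_2\ge\dots$; the first few components $Y_k$ carry most of the variance.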
In 1934, the statistician Jerzy Neyman (1894-1981) established a rigorous theory of interval estimation: the "confidence interval". The confidence coefficient is the most basic concept of this theory. According to given accuracy and precision requirements, an appropriate interval is constructed from a sample drawn from the population as an estimate of the range in which the true value of a distribution parameter (or a function of the parameters) lies.
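Formally, a $1-\alpha$ confidence interval is a pair of statistics $\hat{\theta}_L(X)$, $\hat{\theta}_U(X)$ such that

$$P_{\theta}\bigl(\hat{\theta}_L(X)\le\theta\le\hat{\theta}_U(X)\bigr)\ge 1-\alpha \quad \text{for every } \theta;$$

the number $1-\alpha$ is the confidence coefficient.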
In 1936, the famous Indian statistician Prasanta Chandra Mahalanobis (1893-1972) put forward the "Mahalanobis distance".
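For a point $x$ and a distribution with mean $\mu$ and covariance matrix $\Sigma$, the Mahalanobis distance is

$$D_M(x)=\sqrt{(x-\mu)^{\top}\Sigma^{-1}(x-\mu)},$$

a scale-invariant distance that accounts for correlations among the variables.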
In 1938, H. Wold put forward the famous "Wold decomposition theorem" in his doctoral thesis, A Study in the Analysis of Stationary Time Series: any discrete-time stationary process can be decomposed into the sum of two uncorrelated stationary sequences, one deterministic and one stochastic. This theorem is the soul of modern time series analysis. Cramér showed in 1961 that this decomposition idea can also be applied to non-stationary sequences; the Cramér decomposition theorem shows that the fluctuation of any sequence can be regarded as the combined effect of deterministic and random influences.
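In modern notation the decomposition reads:

$$X_t=V_t+\sum_{j=0}^{\infty}\psi_j\,\varepsilon_{t-j},\qquad \psi_0=1,\quad \sum_{j=0}^{\infty}\psi_j^2<\infty,$$

where $V_t$ is the deterministic part and $\{\varepsilon_t\}$ is white noise uncorrelated with $V_t$.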
In 1945, Frank Wilcoxon established "rank statistics". The rank-sum test, also called the order-sum test, is a nonparametric test: it does not depend on the specific form of the population distribution and does not require that distribution to be known, which makes it very practical.
In 1950, E. L. Lehmann and H. Scheffé put forward the concept of "complete statistics" and gave a unified method for finding the UMVUE of an estimable function (a parameter function for which an unbiased estimator exists): the "Lehmann-Scheffé theorem".
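A standard statement of the theorem: if $T$ is a complete sufficient statistic for $\theta$ and $\delta(X)$ is any unbiased estimator of $\tau(\theta)$, then

$$\hat{\tau}(T)=E\left[\delta(X)\mid T\right]$$

is the (essentially unique) uniformly minimum-variance unbiased estimator of $\tau(\theta)$.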
In 1955, Stein proved that when the dimension p is greater than 2, the least-squares estimator of a normal mean vector is inadmissible; that is, there exists another estimator that is uniformly better in a certain sense.
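The best-known such estimator is the later James-Stein shrinkage estimator (1961), given for $X\sim N_p(\mu,I)$ with $p>2$ by

$$\hat{\mu}_{JS}=\left(1-\frac{p-2}{\lVert X\rVert^{2}}\right)X,$$

which has uniformly smaller total mean squared error than $X$ itself.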
Lindley pointed out in 1957 that when the sample size is large enough, the posterior probability of the null hypothesis can tend to 1 while the p-value tends to 0; that is, the conclusions of a p-value test and a Bayesian test can be opposite. This is known as the Lindley paradox.
In 1965, W. F. Massy put forward "principal component regression", building on principal component analysis (PCA) from multivariate statistics.
In 1977, A. P. Dempster of Harvard University and his co-authors put forward the "EM algorithm" for maximum likelihood estimation of the parameters of probability models with latent variables.
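Each EM iteration alternates two steps (standard formulation): with observed data $X$, latent variables $Z$, and current estimate $\theta^{(t)}$,

$$\text{E-step:}\quad Q\bigl(\theta\mid\theta^{(t)}\bigr)=E_{Z\mid X,\theta^{(t)}}\bigl[\ln L(\theta;X,Z)\bigr],$$
$$\text{M-step:}\quad \theta^{(t+1)}=\arg\max_{\theta}\,Q\bigl(\theta\mid\theta^{(t)}\bigr);$$

each iteration does not decrease the observed-data likelihood.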
In 1995, Ross Ihaka and Robert Gentleman of the University of Auckland, New Zealand, used the S language (an interpreted language developed at AT&T Bell Laboratories) to create a new system for data exploration, statistical analysis, and graphics. Since the initials of both scientists are R, the system was named "R".
Universities associated with these statisticians:
Cambridge University: Karl Pearson, Fisher, Francis Galton, Mahalanobis
University of Edinburgh: Thomas Bayes
École Polytechnique, Paris: Siméon Denis Poisson, Paul Lévy
University of Caen: Pierre-Simon Laplace
University of Königsberg (now Immanuel Kant Baltic Federal University): David Hilbert
University of Göttingen: Johann Carl Friedrich Gauss (from age 18)
Technical University of Braunschweig: Johann Carl Friedrich Gauss (from age 14)
University of Basel: Jacob Bernoulli
Moscow University: Andrey Kolmogorov, Chebyshev, Aleksandr Khinchin
University of California, Berkeley: Walter A. Shewhart
University of Washington: Harold Hotelling
Some teacher-student relationships:
1. Karl Pearson was Gosset's teacher. Gosset went to Karl Pearson to study statistics in 1906-1907, focusing on the statistical analysis of small samples.
2. Francis Galton was Karl Pearson's teacher.