
20 August 2025

On Probability (300-1599)

"The dialectician is concerned only with proceeding from propositions which are as acceptable as possible. These are propositions which seem true to most people and especially to the wise." (Thomas Aquinas, "Posterior Analytics", cca. 1268)

"But, from some pre-existing causes future effects do not follow necessarily, but usually. For instance, in most cases (ut in pluribus) a perfect human being results from the insemination of a mother by a man’s semen; sometimes, however, monsters are generated, because of some obstruction which overcomes the operation of the natural capacity." (Thomas Aquinas,"Summa contra gentiles", cca. 1259-1265)

"It is not probable that, among the vast number of the faithful, there would not be many people who would readily supply the needs of those whom they hold in reverence because of the perfection of their virtue." (Thomas Aquinas, "Summa contra gentiles", cca. 1259-1265)

"And yet the fact that in so many it is not possible to have certitude without fear of error is no reason why we should reject the certitude which can probably be had [quae probabiliter haberi potest] through two or three witnesses […]" (Thomas Aquinas, "Summa theologiae", cca. 1265-1274)

"[...] propositions are called probable because they are more known to the wise or to the multitude." (Thomas Aquinas, "Commentary on the Posterior Analytics", cca. 1270)

"It is sufficient that you obtain a probable certainty, which means that in most cases (ut in pluribus) you are right and only in a few cases (ut in paucioribus) are you wrong." (Thomas Aquinas, "Summa theologiae" , cca. 1265-1274) 

"The dialectician is concerned only with proceeding from propositions which are as acceptable as possible. These are propositions which seem true to most people and especially to the wise., (Thomas Aquinas, "Posterior Analytics", cca. 1268)

"The method of demonstration is therefore generally feeble and ineffective with regard to facts of nature (I refer to corporeal and changeable things). But it quickly recovers its strength when applied to the field of mathematics. For whatever it concludes in regard to such things as numbers, proportions and figures is indubitably true, and cannot be otherwise. One who wishes to become a master of the science of demonstration should first obtain a good grasp of probabilities. Whereas the principles of demonstrative logic are necessary; those of dialectic are probable." (John of Salisbury, "Metalogicon", 1159)

"Something is readily believable (probabilis) if it seems true to everyone or to the most people or to the wise – and of the wise, either to all of them or most of them or to the most famous and distinguished – or to an expert in his own field, for example, to a doctor in the field of medicine or to a pilot in the navigation of ships, or, finally, if it seems true to the person with whom one is having the conversation or who is judging it." (Boethius, De topicis, 1180)

17 August 2025

On Probability (1925-1949)

"Hypothesis, however, is an inference based on knowledge which is insufficient to prove its high probability." (Frederick L Barry, "The Scientific Habit of Thought", 1927) 

"The rational concept of probability, which is the only basis of probability calculus, applies only to problems in which either the same event repeats itself again and again, or a great number of uniform elements are involved at the same time. Using the language of physics, we may say that in order to apply the theory of probability we must have a practically unlimited sequence of uniform observations." (Richard von Mises, "Probability, Statistics and Truth", 1928)

"There can be no unique probability attached to any event or behaviour: we can only speak of ‘probability in the light of certain given information’, and the probability alters according to the extent of the information." (Sir Arthur S Eddington, "The Nature of the Physical World", 1928)

"With fuller knowledge we should sweep away the references to probability and substitute the exact facts." (Sir Arthur S Eddington, "The Nature of the Physical World", 1928)

"Probability is the most important concept in modern science, especially as nobody has the slightest notion what it means." (Bertrand Russell, 1929)

"Thought interferes with the probability of events, and, in the long run therefore, with entropy." (David L Watson, 1930)

"When an observation is made on any atomic system that has been prepared in a given way and is thus in a given state, the result will not in general be determinate, i.e. if the experiment is repeated several times under identical conditions several different results may be obtained. If the experiment is repeated a large number of times it will be found that each particular result will be obtained a definite fraction of the total number of times, so that one can say there is a definite probability of its being obtained any time that the experiment is performed. This probability the theory enables one to calculate." (Paul A M Dirac, "The Principles of Quantum Mechanics", 1930)

"The theory of probability as a mathematical discipline can and should be developed from axioms in exactly the same way as geometry and algebra." (Andrey Kolmogorov, "Foundations of the Theory of Probability", 1933)

"Statistics is a scientific discipline concerned with collection, analysis, and interpretation of data obtained from observation or experiment. The subject has a coherent structure based on the theory of Probability and includes many different procedures which contribute to research and development throughout the whole of Science and Technology." (Egon Pearson, 1936)

"Starting from statistical observations, it is possible to arrive at conclusions which not less reliable or useful than those obtained in any other exact science. It is only necessary to apply a clear and precise concept of probability to such observations. " (Richard von Mises, "Probability, Statistics, and Truth", 1939)

"The fundamental difference between engineering with and without statistics boils down to the difference between the use of a scientific method based upon the concept of laws of nature that do not allow for chance or uncertainty and a scientific method based upon the concepts of laws of probability as an attribute of nature." (Walter A Shewhart, 1940)

"Events with a sufficiently small probability never occur, or at least we must act, in all circumstances, as if they were impossible." (Félix E Borel, "Probabilities and Life", 1943)

"Probabilities must be regarded as analogous to the measurement of physical magnitudes; that is to say, they can never be known exactly, but only within certain approximation." (Emile Borel, "Probabilities and Life", 1943)

"The conception of chance enters in the very first steps of scientific activity in virtue of the fact that no observation is absolutely correct. I think chance is a more fundamental conception that causality; for whether in a concrete case, a cause-effect relation holds or not can only be judged by applying the laws of chance to the observation." (Max Born, 1949)

On Probability (1750-1799)

"Events are independent when the happening of any one of them does neither increase nor abate the probability of the rest." (Thomas Bayes, "An Essay towards solving a Problem in the Doctrine of Chances", 1763)

"[...] the probability of any event is the ratio between the value at which an expectation depending on the happening of the event ought to be computed, and the value of the thing expected upon it's happening." (Thomas Bayes, "An Essay towards solving a Problem in the Doctrine of Chances", 1763)

"As mathematical and absolute certainty is seldom to be attained in human affairs, reason and public utility require that judges and all mankind in forming their opinions of the truth of facts should be regulated by the superior number of the probabilities on the one side or the other whether the amount of these probabilities be expressed in words and arguments or by figures and numbers." (William Murray, 1773)

"But ignorance of the different causes involved in the production of events, as well as their complexity, taken together with the imperfection of analysis, prevents our reaching the same certainty about the vast majority of phenomena. Thus there are things that are uncertain for us, things more or less probable, and we seek to compensate for the impossibility of knowing them by determining their different degrees of likelihood. So it was that we owe to the weakness of the human mind one of the most delicate and ingenious of mathematical theories, the science of chance or probability." (Pierre-Simon Laplace, "Recherches, 1º, sur l'Intégration des Équations Différentielles aux Différences Finies, et sur leur Usage dans la Théorie des Hasards", 1773)

"If an event can be produced by a number n of different causes, the probabilities of the existence of these causes, given the event (prises de l'événement), are to each other as the probabilities of the event, given the causes: and the probability of each cause is equal to the probability of the event, given that cause, divided by the sum of all the probabilities of the event, given each of the causes." (Pierre-Simon Laplace, "Mémoire sur la Probabilité des Causes par les Événements", 1774)

"The word ‘chance’ then expresses only our ignorance of the causes of the phenomena that we observe to occur and to succeed one another in no apparent order. Probability is relative in part to this ignorance, and in part to our knowledge." (Pierre-Simon Laplace, "Mémoire sur les Approximations des Formules qui sont Fonctions de Très Grands Nombres", 1783)

"[…] determine the probability of a future or unknown event not on the basis of the number of possible combinations resulting in this event or in its complementary event, but only on the basis of the knowledge of order of familiar previous events of this kind" (Marquis de Condorcet, "Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix", 1785)

“All that can be said upon the number and nature of elements is, in my opinion, confined to discussions entirely of a metaphysical nature. The subject only furnishes us with indefinite problems, which may be solved in a thousand different ways, not one of which, in all probability, is consistent with nature.” (Antoine-Laurent Lavoisier, “Elements of Chemistry”, 1790)

"The art of drawing conclusions from experiments and observations consists in evaluating probabilities and in estimating whether they are sufficiently great or numerous enough to constitute proofs. This kind of calculation is more complicated and more difficult than it is commonly thought to be […]" (Antoine-Laurent Lavoisier, cca. 1790)

"Conjectures in philosophy are termed hypotheses or theories; and the investigation of an hypothesis founded on some slight probability, which accounts for many appearances in nature, has too often been considered as the highest attainment of a philosopher. If the hypothesis (sic) hangs well together, is embellished with a lively imagination, and serves to account for common appearances - it is considered by many, as having all the qualities that should recommend it to our belief, and all that ought to be required in a philosophical system." (George Adams, "Lectures on Natural and Experimental Philosophy" Vol. 1, 1794)

On Probability (1850-1899)

"The actual science of logic is conversant at present only with things either certain, impossible, or entirely doubtful, none of which (fortunately) we have to reason on. Therefore, the true logic for this world is the calculus of Probabilities, which takes account of the magnitude of the probability which is, or ought to be, in a reasonable man's mind." (James Clerk Maxwell, 1850)

"[…] probability, in its mathematical acceptation, has reference to the state of our knowledge of the circumstances under which an event may happen or fail. With the degree of information which we possess concerning the circumstances of an event, the reason we have to think that it will occur, or, to use a single term, our expectation of it, will vary. Probability is expectation founded upon partial knowledge. A perfect acquaintance with all the circumstances affecting the occurrence of an event would change expectation into certainty, and leave neither room nor demand for a theory of probabilities." (George Boole, "The Laws of Thought", 1854)

"There are instances of research results presented in terms of probability values of ‘statistical significance’ alone, without noting the magnitude and importance of the relationships found. These attempts to use the probability levels of significance tests as measures of the strengths of relationships are very common and very mistaken." (Leslie Kish, "Some statistical problems in research design", American Sociological Review 24, 1959)

"It [probability] is the very guide of life, and hardly can we take a step or make a decision of any kind without correctly or incorrectly making an estimation of probabilities." (William S Jevons, "The Principles of Science: A Treatise on Logic and Scientific Method", 1874)

"All experience attests the strength of the tendency to mistake mental abstractions, even negative ones, for substantive realities; and the Permanent Possibilities of sensation which experience guarantees arc so extremely unlike in many of their properties to actual sensations, that since we are capable of imagining something which transcends sensations, there is a great natural probability that we should suppose these to be it." (Hippolyte Taine, "On intelligence", 1871)

"Summing up, then, it would seem as if the mind of the great discoverer must combine contradictory attributes. He must be fertile in theories and hypotheses, and yet full of facts and precise results of experience. He must entertain the feeblest analogies, and the merest guesses at truth, and yet he must hold them as worthless till they are verified in experiment. When there are any grounds of probability he must hold tenaciously to an old opinion, and yet he must be prepared at any moment to relinquish it when a clearly contradictory fact is encountered." (William S Jevons, "The Principles of Science: A Treatise on Logic and Scientific Method", 1874)

"There is no more remarkable feature in the mathematical theory of probability than the manner in which it has been found to harmonize with, and justify, the conclusions to which mankind have been led, not by reasoning, but by instinct and experience, both of the individual and of the race. At the same time it has corrected, extended, and invested them with a definiteness and precision of which these crude, though sound, appreciations of common sense were till then devoid." (Morgan W Crofton, "Probability", Encyclopaedia Britannica 9th Ed,, 1885)

"Since a given system can never of its own accord go over into another equally probable state but into a more probable one, it is likewise impossible to construct a system of bodies that after traversing various states returns periodically to its original state, that is a perpetual motion machine." (Ludwig E Boltzmann, "The Second Law of Thermodynamics", [Address to a Formal meeting of the Imperial Academy of Science], 1886)

"I am convinced that it is impossible to expound the methods of induction in a sound manner, without resting them on the theory of probability. Perfect knowledge alone can give certainty, and in nature perfect knowledge would be infinite knowledge, which is clearly beyond our capacities. We have, therefore, to content ourselves with partial knowledge, - knowledge mingled with ignorance, producing doubt." (William S Jevons, "The Principles of Science: A Treatise on Logic and Scientific Method", 1887)

"The scientific imagination always restrains itself within the limits of probability." (Thomas H Huxley, "Science and Christian Tradition", 1893)

"It is a great mistake to suppose that the mind of the active scientist is filled with pro-positions which, if not proved beyond all reasonable cavil, are at least extremely probable. On the contrary, he entertains hypotheses which are almost wildly incredible, and treats them with respect for the time being. Why does he do this? Simply because any scientific proposition whatever is always liable to be refuted and dropped at short notice. A hypothesis is something which looks as if it might be true and were true, and which is capable of verification or refutation by comparison with facts. The best hypothesis, in the sense of the one most recommending itself to the inquirer, is the one which can be the most readily refuted if it is false." (Charles S Peirce, 1896)

30 March 2025

On Mistakes, Blunders and Errors II: Statistics and Probabilities

“It is a capital mistake to theorize before you have all the evidence. It biases the judgment.” (Sir Arthur C Doyle, “A Study in Scarlet”, 1887)

“It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” (Sir Arthur C Doyle, “The Adventures of Sherlock Holmes”, 1892)

"What real and permanent tendencies there are lie hid beneath the shifting superfices of chance, as it were a desert in which the inexperienced traveller mistakes the temporary agglomerations of drifting sand for the real configuration of the ground" (Francis Y Edgeworth, 1898)

"Some of the common ways of producing a false statistical argument are to quote figures without their context, omitting the cautions as to their incompleteness, or to apply them to a group of phenomena quite different to that to which they in reality relate; to take these estimates referring to only part of a group as complete; to enumerate the events favorable to an argument, omitting the other side; and to argue hastily from effect to cause, this last error being the one most often fathered on to statistics. For all these elementary mistakes in logic, statistics is held responsible." (Sir Arthur L Bowley, "Elements of Statistics", 1901)

"If the chance of error alone were the sole basis for evaluating methods of inference, we would never reach a decision, but would merely keep increasing the sample size indefinitely." (C West Churchman, "Theory of Experimental Inference", 1948)

"There are instances of research results presented in terms of probability values of ‘statistical significance’ alone, without noting the magnitude and importance of the relationships found. These attempts to use the probability levels of significance tests as measures of the strengths of relationships are very common and very mistaken." (Leslie Kish, "Some statistical problems in research design", American Sociological Review 24, 1959)

"Poor statistics may be attributed to a number of causes. There are the mistakes which arise in the course of collecting the data, and there are those which occur when those data are being converted into manageable form for publication. Still later, mistakes arise because the conclusions drawn from the published data are wrong. The real trouble with errors which arise during the course of collecting the data is that they are the hardest to detect." (Alfred R Ilersic, "Statistics", 1959)

"The rounding of individual values comprising an aggregate can give rise to what are known as unbiased or biased errors. [...]The biased error arises because all the individual figures are reduced to the lower 1,000 [...] The unbiased error is so described since by rounding each item to the nearest 1,000 some of the approximations are greater and some smaller than the original figures. Given a large number of such approximations, the final total may therefore correspond very closely to the true or original total, since the approximations tend to offset each other. [...] With biased approximations, however, the errors are cumulative and their aggregate increases with the number of items in the series." (Alfred R Ilersic, "Statistics", 1959)

"While it is true to assert that much statistical work involves arithmetic and mathematics, it would be quite untrue to suggest that the main source of errors in statistics and their use is due to inaccurate calculations." (Alfred R Ilersic, "Statistics", 1959)

"No observations are absolutely trustworthy. In no field of observation can we entirely rule out the possibility that an observation is vitiated by a large measurement or execution error. If a reading is found to lie a very long way from its fellows in a series of replicate observations, there must be a suspicion that the deviation is caused by a blunder or gross error of some kind. [...] One sufficiently erroneous reading can wreck the whole of a statistical analysis, however many observations there are." (Francis J Anscombe, "Rejection of Outliers", Technometrics Vol. 2 (2), 1960)

"The most important and frequently stressed prescription for avoiding pitfalls in the use of economic statistics, is that one should find out before using any set of published statistics, how they have been collected, analysed and tabulated. This is especially important, as you know, when the statistics arise not from a special statistical enquiry, but are a by-product of law or administration. Only in this way can one be sure of discovering what exactly it is that the figures measure, avoid comparing the non-comparable, take account of changes in definition and coverage, and as a consequence not be misled into mistaken interpretations and analysis of the events which the statistics portray." (Ely Devons, "Essays in Economics", 1961)

"The problem of error has preoccupied philosophers since the earliest antiquity. According to the subtle remark made by a famous Greek philosopher, the man who makes a mistake is twice ignorant, for he does not know the correct answer, and he does not know that he does not know it." (Félix Borel, "Probability and Certainty", 1963)

"He who accepts statistics indiscriminately will often be duped unnecessarily. But he who distrusts statistics indiscriminately will often be ignorant unnecessarily. There is an accessible alternative between blind gullibility and blind distrust. It is possible to interpret statistics skillfully. The art of interpretation need not be monopolized by statisticians, though, of course, technical statistical knowledge helps. Many important ideas of technical statistics can be conveyed to the non-statistician without distortion or dilution. Statistical interpretation depends not only on statistical ideas but also on ordinary clear thinking. Clear thinking is not only indispensable in interpreting statistics but is often sufficient even in the absence of specific statistical knowledge. For the statistician not only death and taxes but also statistical fallacies are unavoidable. With skill, common sense, patience and above all objectivity, their frequency can be reduced and their effects minimised. But eternal vigilance is the price of freedom from serious statistical blunders." (W Allen Wallis & Harry V Roberts, "The Nature of Statistics", 1965)

"The calculus of probability can say absolutely nothing about reality [...] We have to stress this point because these attempts assume many forms and are always dangerous. In one sentence: to make a mistake of this kind leaves one inevitably faced with all sorts of fallacious arguments and contradictions whenever an attempt is made to state, on the basis of probabilistic considerations, that something must occur, or that its occurrence confirms or disproves some probabilistic assumptions." (Bruno de Finetti, "Theory of Probability", 1974)

"Mistakes arising from retrospective data analysis led to the idea of experimentation, and experience with experimentation led to the idea of controlled experiments and then to the proper design of experiments for efficiency and credibility. When someone is pushing a conclusion at you, it's a good idea to ask where it came from - was there an experiment, and if so, was it controlled and was it relevant?" (Robert Hooke, "How to Tell the Liars from the Statisticians", 1983)

"There are no mistakes. The events we bring upon ourselves, no matter how unpleasant, are necessary in order to learn what we need to learn; whatever steps we take, they’re necessary to reach the places we’ve chosen to go." (Richard Bach, "The Bridge across Forever", 1984)

"Correlation and causation are two quite different words, and the innumerate are more prone to mistake them than most." (John A Paulos, "Innumeracy: Mathematical Illiteracy and its Consequences", 1988)

"When you want to use some data to give the answer to a question, the first step is to formulate the question precisely by expressing it as a hypothesis. Then you consider the consequences of that hypothesis, and choose a suitable test to apply to the data. From the result of the test you accept or reject the hypothesis according to prearranged criteria. This cannot be infallible, and there is always a chance of getting the wrong answer, so you try and reduce the chance of such a mistake to a level which you consider reasonable." (Roger J Barlow, "Statistics: A guide to the use of statistical methods in the physical sciences", 1989)

"Exploratory regression methods attempt to reveal unexpected patterns, so they are ideal for a first look at the data. Unlike other regression techniques, they do not require that we specify a particular model beforehand. Thus exploratory techniques warn against mistakenly fitting a linear model when the relation is curved, a waxing curve when the relation is S-shaped, and so forth." (Lawrence C Hamilton, "Regression with Graphics: A second course in applied statistics", 1991)

"Most statistical models assume error free measurement, at least of independent (predictor) variables. However, as we all know, measurements are seldom if ever perfect. Particularly when dealing with noisy data such as questionnaire responses or processes which are difficult to measure precisely, we need to pay close attention to the effects of measurement errors. Two characteristics of measurement which are particularly important in psychological measurement are reliability and validity." (Clay Helberg, "Pitfalls of Data Analysis (or How to Avoid Lies and Damned Lies)", 1995)

"We can consider three broad classes of statistical pitfalls. The first involves sources of bias. These are conditions or circumstances which affect the external validity of statistical results. The second category is errors in methodology, which can lead to inaccurate or invalid results. The third class of problems concerns interpretation of results, or how statistical results are applied (or misapplied) to real world issues." (Clay Helberg, "Pitfalls of Data Analysis (or How to Avoid Lies and Damned Lies)", 1995) 

"This notion of 'being due' - what is sometimes called the gambler’s fallacy - is a mistake we make because we cannot help it. The problem with life is that we have to live it from the beginning, but it makes sense only when seen from the end. As a result, our whole experience is one of coming to provisional conclusions based on insufficient evidence: read ing the signs, gauging the odds." (John Haigh," Taking Chances: Winning With Probability", 1999)

"Big numbers warn us that the problem is a common one, compelling our attention, concern, and action. The media like to report statistics because numbers seem to be 'hard facts' - little nuggets of indisputable truth. [...] One common innumerate error involves not distinguishing among large numbers. [...] Because many people have trouble appreciating the differences among big numbers, they tend to uncritically accept social statistics (which often, of course, feature big numbers)." (Joel Best, "Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists", 2001)

"Compound errors can begin with any of the standard sorts of bad statistics - a guess, a poor sample, an inadvertent transformation, perhaps confusion over the meaning of a complex statistic. People inevitably want to put statistics to use, to explore a number's implications. [...] The strengths and weaknesses of those original numbers should affect our confidence in the second-generation statistics." (Joel Best, "Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists", 2001)

 "A major problem with many studies is that the population of interest is not adequately defined before the sample is drawn. Don’t make this mistake. A second major source of error is that the sample proves to have been drawn from a different population than was originally envisioned." (Phillip I Good & James W Hardin, "Common Errors in Statistics (and How to Avoid Them)", 2003)

"The difference between 'statistically significant' and 'not statistically significant' is not in itself necessarily statistically significant. By this, I mean more than the obvious point about arbitrary divisions, that there is essentially no difference between something significant at the 0.049 level or the 0.051 level. I have a bigger point to make. It is common in applied research–in the last couple of weeks, I have seen this mistake made in a talk by a leading political scientist and a paper by a psychologist–to compare two effects, from two different analyses, one of which is statistically significant and one which is not, and then to try to interpret/explain the difference. Without any recognition that the difference itself was not statistically significant." (Andrew Gelman, "The difference between ‘statistically significant’ and ‘not statistically significant’ is not in itself necessarily statistically significant", 2005)

"[…] an outlier is an observation that lies an 'abnormal' distance from other values in a batch of data. There are two possible explanations for the occurrence of an outlier. One is that this happens to be a rare but valid data item that is either extremely large or extremely small. The other is that it isa mistake – maybe due to a measuring or recording error." (Alan Graham, "Developing Thinking in Statistics", 2006)

"Many scientists who work not just with noise but with probability make a common mistake: They assume that a bell curve is automatically Gauss's bell curve. Empirical tests with real data can often show that such an assumption is false. The result can be a noise model that grossly misrepresents the real noise pattern. It also favors a limited view of what counts as normal versus non-normal or abnormal behavior. This assumption is especially troubling when applied to human behavior. It can also lead one to dismiss extreme data as error when in fact the data is part of a pattern." (Bart Kosko, "Noise", 2006) 

"A naive interpretation of regression to the mean is that heights, or baseball records, or other variable phenomena necessarily become more and more 'average' over time. This view is mistaken because it ignores the error in the regression predicting y from x. For any data point xi, the point prediction for its yi will be regressed toward the mean, but the actual yi that is observed will not be exactly where it is predicted. Some points end up falling closer to the mean and some fall further." (Andrew Gelman & Jennifer Hill, "Data Analysis Using Regression and Multilevel/Hierarchical Models", 2007)

"If there is an outlier there are two possibilities: The model is wrong – after all, a theory is the basis on which we decide whether a data point is an outlier (an unexpected value) or not. The value of the data point is wrong because of a failure of the apparatus or a human mistake. There is a third possibility, though: The data point might not be an actual  outlier, but part of a (legitimate) statistical fluctuation." (Manfred Drosg, "Dealing with Uncertainties: A Guide to Error Analysis", 2007)

"In error analysis the so-called 'chi-squared' is a measure of the agreement between the uncorrelated internal and the external uncertainties of a measured functional relation. The simplest such relation would be time independence. Theory of the chi-squared requires that the uncertainties be normally distributed. Nevertheless, it was found that the test can be applied to most probability distributions encountered in practice." (Manfred Drosg, "Dealing with Uncertainties: A Guide to Error Analysis", 2007)

"Another kind of error possibly related to the use of the representativeness heuristic is the gambler’s fallacy, otherwise known as the law of averages. If you are playing roulette and the last four spins of the wheel have led to the ball’s landing on black, you may think that the next ball is more likely than otherwise to land on red. This cannot be. The roulette wheel has no memory. The chance of black is just what it always is. The reason people tend to think otherwise may be that they expect the sequence of events to be representative of random sequences, and the typical random sequence at roulette does not have five blacks in a row." (Jonathan Baron, "Thinking and Deciding" 4th Ed, 2008)
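Baron's claim that "the roulette wheel has no memory" is easy to check by simulation. The sketch below uses a fair two-colour wheel (ignoring the green zero, a simplifying assumption):

```python
import random

random.seed(0)
# 0 = black, 1 = red (ignoring the green zero for simplicity)
spins = [random.randint(0, 1) for _ in range(200_000)]

overall_red = sum(spins) / len(spins)

# Frequency of red immediately after a run of four blacks: if the
# gambler's fallacy were right, this would exceed the overall frequency.
after_runs = [spins[i] for i in range(4, len(spins))
              if spins[i - 4:i] == [0, 0, 0, 0]]
red_after_blacks = sum(after_runs) / len(after_runs)
```

The two frequencies agree to within sampling noise: a run of four blacks tells you nothing about the next spin.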

"[…] humans make mistakes when they try to count large numbers in complicated systems. They make even greater errors when they attempt - as they always do - to reduce complicated systems to simple numbers." (Zachary Karabell, "The Leading Indicators: A short history of the numbers that rule our world", 2014)

"There is a growing realization that reported 'statistically significant' claims in statistical publications are routinely mistaken. Researchers typically express the confidence in their data in terms of p-value: the probability that a perceived result is actually the result of random variation. The value of p (for 'probability') is a way of measuring the extent to which a data set provides evidence against a so-called null hypothesis. By convention, a p-value below 0.05 is considered a meaningful refutation of the null hypothesis; however, such conclusions are less solid than they appear." (Andrew Gelman & Eric Loken, "The Statistical Crisis in Science", American Scientist Vol. 102(6), 2014)
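The p-value Gelman and Loken describe can be made concrete with an exact calculation for the simplest of null hypotheses, a fair coin (the numbers here are purely illustrative):

```python
from math import comb

# Null hypothesis: the coin is fair (p = 0.5). Observed: 60 heads in 100 flips.
n, k = 100, 60

def binom_pmf(n, i, p=0.5):
    # Probability of exactly i heads in n flips of a coin with bias p
    return comb(n, i) * p**i * (1 - p)**(n - i)

# Two-sided p-value: probability, under the null, of a result at least
# as far from the expected 50 heads as the one observed.
p_value = sum(binom_pmf(n, i) for i in range(n + 1)
              if abs(i - n / 2) >= abs(k - n / 2))
```

The result is just above 0.05, so by the convention the quote mentions, even 60 heads out of 100 would not count as a "significant" departure from fairness.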

"Using a sample to estimate results in the full population is common in data analysis. But you have to be careful, because even small mistakes can quickly become big ones, given that each observation represents many others. There are also many factors you need to consider if you want to make sure your inferences are accurate." (John H Johnson & Mike Gluck, "Everydata: The misinformation hidden in the little data you consume every day", 2016)

"The central limit conjecture states that most errors are the result of many small errors and, as such, have a normal distribution. The assumption of a normal distribution for error has many advantages and has often been made in applications of statistical models." (David S Salsburg, "Errors, Blunders, and Lies: How to Tell the Difference", 2017)

"Variance is error from sensitivity to fluctuations in the training set. If our training set contains sampling or measurement error, this noise introduces variance into the resulting model. [...] Errors of variance result in overfit models: their quest for accuracy causes them to mistake noise for signal, and they adjust so well to the training data that noise leads them astray. Models that do much better on testing data than training data are overfit." (Steven S Skiena, "The Data Science Design Manual", 2017)
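Skiena's description of overfitting can be sketched with a deliberately high-variance model that memorizes its training data (a nearest-neighbour lookup; the linear trend and noise level are arbitrary choices for illustration):

```python
import random

rng = random.Random(42)
# Noisy observations of an underlying linear trend y = 2x
train = [(i / 10, 2 * (i / 10) + rng.gauss(0, 1)) for i in range(20)]
test = [(i / 10 + 0.05, 2 * (i / 10 + 0.05) + rng.gauss(0, 1)) for i in range(20)]

def mse(model, data):
    # Mean squared error of a model over a list of (x, y) pairs
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

def memorizer(x):
    # High-variance model: return the y of the nearest training point,
    # mistaking the training noise for signal.
    return min(train, key=lambda p: abs(p[0] - x))[1]

train_error = mse(memorizer, train)
test_error = mse(memorizer, test)
```

The memorizer is perfect on the data it has seen and much worse on fresh data, which is the signature of an overfit model in the quote's sense.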

"Statistical models have two main components. First, a mathematical formula that expresses a deterministic, predictable component, for example the fitted straight line that enables us to make a prediction [...]. But the deterministic part of a model is not going to be a perfect representation of the observed world [...] and the difference between what the model predicts, and what actually happens, is the second component of a model and is known as the residual error - although it is important to remember that in statistical modelling, ‘error’ does not refer to a mistake, but the inevitable inability of a model to exactly represent what we observe." (David Spiegelhalter, "The Art of Statistics: Learning from Data", 2019)

"If we don’t understand the statistics, we’re likely to be badly mistaken about the way the world is. It is all too easy to convince ourselves that whatever we’ve seen with our own eyes is the whole truth; it isn’t. Understanding causation is tough even with good statistics, but hopeless without them. [...] And yet, if we understand only the statistics, we understand little. We need to be curious about the world that we see, hear, touch, and smell, as well as the world we can examine through a spreadsheet." (Tim Harford, "The Data Detective: Ten easy rules to make sense of statistics", 2020)

"Premature enumeration is an equal-opportunity blunder: the most numerate among us may be just as much at risk as those who find their heads spinning at the first mention of a fraction. Indeed, if you’re confident with numbers you may be more prone than most to slicing and dicing, correlating and regressing, normalizing and rebasing, effortlessly manipulating the numbers on the spreadsheet or in the statistical package - without ever realizing that you don’t fully understand what these abstract quantities refer to. Arguably this temptation lay at the root of the last financial crisis: the sophistication of mathematical risk models obscured the question of how, exactly, risks were being measured, and whether those measurements were something you’d really want to bet your global banking system on." (Tim Harford, "The Data Detective: Ten easy rules to make sense of statistics", 2020)


"Always expect to find at least one error when you proofread your own statistics. If you don’t, you are probably making the same mistake twice." (Cheryl Russell)

12 January 2025

On Probability: Definitions

"Probability is the very guide of life." (Marcus Tullius Cicero, "De Natura Deorum" ["On the Nature of the Gods"], 45 BC)

"Probability is a degree of possibility." (Gottfried W Leibniz, "On estimating the uncertain", 1676)

"Probability is a degree of certainty and it differs from certainty as a part from a whole." (Jacob Bernoulli, "Ars Conjectandi" ["The Art of Conjecturing"], 1713)

"Probable evidence, in its very nature, affords but an imperfect kind of information, and is to be considered as relative only to beings of limited capacities. For nothing which is the possible object of knowledge, whether past, present, or future, can be probable to an infinite Intelligence; since it cannot but be discerned absolutely as it is in itself, certainly true, or certainly false. To us, probability is the very guide of life." (Joseph Butler, "The Analogy of Religion, Natural and Revealed, to the Constitution and Course of Nature", 1736)

"Probability is a mathematical discipline with aims akin to those, for example, of geometry or analytical mechanics. In each field we must carefully distinguish three aspects of the theory: (a) the formal logical content, (b) the intuitive background, (c) the applications. The character, and the charm, of the whole structure cannot be appreciated without considering all three aspects in their proper relation." (William Feller, "An Introduction to Probability Theory and Its Applications", 1950)

"Many modern philosophers claim that probability is a relation between an hypothesis and the evidence for it." (Ian Hacking, "The Emergence of Probability", 1975)

"Probability is the mathematics of uncertainty. Not only do we constantly face situations in which there is neither adequate data nor an adequate theory, but many modern theories have uncertainty built into their foundations. Thus learning to think in terms of probability is essential. Statistics is the reverse of probability (glibly speaking). In probability you go from the model of the situation to what you expect to see; in statistics you have the observations and you wish to estimate features of the underlying model." (Richard W Hamming, "Methods of Mathematics Applied to Calculus, Probability, and Statistics", 1985) 

"Probabilities are summaries of knowledge that is left behind when information is transferred to a higher level of abstraction." (Judea Pearl, "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference", 1988)

"Phenomena having uncertain individual outcomes but a regular pattern of outcomes in many repetitions are called random. 'Random' is not a synonym for 'haphazard' but a description of a kind of order different from the deterministic one that is popularly associated with science and mathematics. Probability is the branch of mathematics that describes randomness." (David S Moore, "Uncertainty", 1990)

"Mathematics is not just a collection of results, often called theorems; it is a style of thinking. Computing is also basically a style of thinking. Similarly, probability is a style of thinking." (Richard W Hamming, "The Art of Probability for Scientists and Engineers", 1991)

"Probability is not about the odds, but about the belief in the existence of an alternative outcome, cause, or motive." (Nassim N Taleb, "Fooled by Randomness", 2001)

"Probability is a mathematical language for quantifying uncertainty." (Larry A Wasserman, "All of Statistics: A concise course in statistical inference", 2004)

"Although some people use them interchangeably, probability and odds are not the same and people often misuse the terms. Probability is the likelihood that an outcome will occur. The odds of something happening, statistically speaking, is the ratio of favorable outcomes to unfavorable outcomes." (John H Johnson & Mike Gluck, "Everydata: The misinformation hidden in the little data you consume every day", 2016)
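Johnson and Gluck's distinction is a one-line conversion each way (the helper names below are hypothetical):

```python
def prob_to_odds(p):
    # Odds = favorable outcomes : unfavorable outcomes
    return p / (1 - p)

def odds_to_prob(odds):
    return odds / (1 + odds)

# A 75% probability corresponds to odds of 3 to 1 (three favorable
# outcomes for every unfavorable one), not "3 in 4".
three_to_one = prob_to_odds(0.75)
back_again = odds_to_prob(3.0)
```

Confusing the two inflates or deflates reported chances: "odds of 3 to 1" and "a probability of 3/4" describe the same event, while "a probability of 3" is meaningless.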

See also: Out of Context: on Probability

10 January 2025

John Haigh - Collected Quotes

"All this, though, is to miss the point of gambling, which is to accept the imbalance of chance in general yet deny it for the here and now. Individually we know, with absolute certainty, that 'the way things happen' and what actually happens to us are as different as sociology and poetry." (John Haigh, "Taking Chances: Winning With Probability", 1999)

"As so often happens in mathematics, a convenient re-statement of a problem brings us suddenly up against the deepest questions of knowledge." (John Haigh, "Taking Chances: Winning With Probability", 1999)

"But despite their frequently good intuition, many people go wrong in two places in particular. The first is in appreciating the real differences in magnitude that arise with rare events. If a chance is expressed as 'one in a thousand' or 'one in a million', the only message registered may be that the chance is remote and yet one figure is a thousand times bigger than the other. Another area is in using partial information […]" (John Haigh, "Taking Chances: Winning With Probability", 1999)

"It is the same with the numbers generated by roulette: the smoothness of probability in the long term allows any amount of local lumpiness on which to exercise our obsession with pattern. As the sequence of data lengthens, the relative proportions of odd or even, red or black, do indeed approach closer and closer to the 50-50 ratio predicted by probability, but the absolute discrepancy between one and the other will increase." (John Haigh, "Taking Chances: Winning With Probability", 1999)
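Haigh's distinction between the shrinking relative proportion and the growing absolute discrepancy shows up in a quick simulation (a fair coin stands in for red/black, averaged over repeated runs to smooth out the luck of any single sequence):

```python
import random

def discrepancy(n_flips, n_runs, seed):
    # Average distance of the heads proportion from 1/2, and average
    # absolute gap between the heads count and n/2, over repeated runs.
    rng = random.Random(seed)
    prop_gap = count_gap = 0.0
    for _ in range(n_runs):
        heads = sum(rng.random() < 0.5 for _ in range(n_flips))
        prop_gap += abs(heads / n_flips - 0.5)
        count_gap += abs(heads - n_flips / 2)
    return prop_gap / n_runs, count_gap / n_runs

prop_small, count_small = discrepancy(100, 200, seed=1)
prop_big, count_big = discrepancy(10_000, 200, seed=2)
```

The proportion gap shrinks roughly as 1/√n while the absolute count gap grows roughly as √n: both halves of the quote at once.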

"Normal is safe; normal is central; normal is unexceptional. Yet it also means the pattern from which all others are drawn, the standard against which we measure the healthy specimen. In its simplest statistical form, normality is represented by the mean (often called the 'average') of a group of measurements." (John Haigh, "Taking Chances: Winning With Probability", 1999)

"Probability therefore is a kind of corrective lens, allowing us, through an understanding of the nature of Chance, to refine our conclusions and approximate, if not achieve, the perfection of Design." (John Haigh, "Taking Chances: Winning With Probability", 1999)

"The psychology of gambling includes both a conviction that the unusual must happen and a refusal to believe in it when it does. We are caught by the confusing nature of the long run; just as the imperturbable ocean seen from space will actually combine hurricanes and dead calms, so the same action, repeated over time, can show wide deviations from its normal expected results - deviations that do not themselves break the laws of probability. In fact, they have probabilities of their own." (John Haigh, "Taking Chances: Winning With Probability", 1999)

"These so-called stochastic processes show up everywhere randomness is applied to the output of another random function. They provide, for instance, a method for describing the chance component of financial markets: not every value of the Dow is possible every day; the range of chance fluctuation centers on the opening price. Similarly, shuffling takes the output of the previous shuffle as its input. So, if you’re handed a deck in a given order, how much shuffling does it need to be truly random?" (John Haigh, "Taking Chances: Winning With Probability", 1999)

"This notion of 'being due' - what is sometimes called the gambler’s fallacy - is a mistake we make because we cannot help it. The problem with life is that we have to live it from the beginning, but it makes sense only when seen from the end. As a result, our whole experience is one of coming to provisional conclusions based on insufficient evidence: reading the signs, gauging the odds." (John Haigh, "Taking Chances: Winning With Probability", 1999)

"We search for certainty and call what we find destiny. Everything is possible, yet only one thing happens - we live and die between these two poles, under the rule of probability. We prefer, though, to call it Chance: an old familiar embodied in gods and demons, harnessed in charms and rituals. We remind one another of fortune’s fickleness, each secretly believing himself exempt. I am master of my fate; you are dicing with danger; he is living in a fool’s paradise." (John Haigh, "Taking Chances: Winning With Probability", 1999)

"Winning and losing is not simply a pastime; it is the model science uses to explore the universe. Flipping a coin or rolling a die is really asking a question: success or failure can be defined as getting a yes or no. So the distribution of probabilities in a game of chance is the same as that in any repeated test - even though the result of any one test is unpredictable." (John Haigh, "Taking Chances: Winning With Probability", 1999)

20 October 2024

On Probability (2000 - )

"In the laws of probability theory, likelihood distributions are fixed properties of a hypothesis. In the art of rationality, to explain is to anticipate. To anticipate is to explain." (Eliezer S. Yudkowsky, "A Technical Explanation of Technical Explanation", 2005)

"I have always thought that statistical design and sampling from populations should be the first courses taught, but all elementary courses I know of start with statistical methods or probability. To me, this is putting the cart before the horse!" (Walter Federer, "A Conversation with Walter T Federer", Statistical Science Vol 20, 2005)

"For some scientific data the true value cannot be given by a constant or some straightforward mathematical function but by a probability distribution or an expectation value. Such data are called probabilistic. Even so, their true value does not change with time or place, making them distinctly different from most statistical data of everyday life." (Manfred Drosg, "Dealing with Uncertainties: A Guide to Error Analysis", 2007)

"In fact, H [entropy] measures the amount of uncertainty that exists in the phenomenon. If there were only one event, its probability would be equal to 1, and H would be equal to 0 - that is, there is no uncertainty about what will happen in a phenomenon with a single event because we always know what is going to occur. The more events that a phenomenon possesses, the more uncertainty there is about the state of the phenomenon. In other words, the more entropy, the more information." (Diego Rasskin-Gutman, "Chess Metaphors: Artificial Intelligence and the Human Mind", 2009)
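Rasskin-Gutman's H is Shannon's entropy, H = −Σ pᵢ log₂ pᵢ. A minimal sketch of the behaviour the quote describes:

```python
from math import log2

def entropy(probs):
    # Shannon entropy in bits: H = -sum(p * log2(p)) over events with p > 0
    return -sum(p * log2(p) for p in probs if p > 0)

h_single = entropy([1.0])        # one certain event: no uncertainty
h_coin = entropy([0.5, 0.5])     # two equally likely events: one bit
h_die = entropy([1 / 6] * 6)     # six equally likely events: log2(6) bits
```

As the quote says, a single-event phenomenon has H = 0, and H grows as the phenomenon admits more (equally likely) events.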

"The four questions of data analysis are the questions of description, probability, inference, and homogeneity. [...] Descriptive statistics are built on the assumption that we can use a single value to characterize a single property for a single universe. […] Probability theory is focused on what happens to samples drawn from a known universe. If the data happen to come from different sources, then there are multiple universes with different probability models. [...] Statistical inference assumes that you have a sample that is known to have come from one universe." (Donald J Wheeler, "Myths About Data Analysis", International Lean & Six Sigma Conference, 2012)

"When statisticians, trained in math and probability theory, try to assess likely outcomes, they demand a plethora of data points. Even then, they recognize that unless it’s a very simple and controlled action such as flipping a coin, unforeseen variables can exert significant influence." (Zachary Karabell, "The Leading Indicators: A short history of the numbers that rule our world", 2014)

"Entropy is a measure of amount of uncertainty or disorder present in the system within the possible probability distribution. The entropy and amount of unpredictability are directly proportional to each other." (G Suseela & Y Asnath V Phamila, "Security Framework for Smart Visual Sensor Networks", 2019)

On Probability (1975 - 1999)

"Of course, we know the laws of trial and error, of large numbers and probabilities. We know that these laws are part of the mathematical and mechanical fabric of the universe, and that they are also at play in biological processes. But, in the name of the experimental method and out of our poor knowledge, are we really entitled to claim that everything happens by chance, to the exclusion of all other possibilities?" (Albert Claude, "The Coming of Age of the Cell", Science, 1975)

"We often use the ideas of chance, likelihood, or probability in everyday language. For example, 'It is unlikely to rain today', 'The black horse will probably win the next race', or 'A playing card selected at random from a pack is unlikely to be the ace of spades'. Each of these remarks, if accepted at face value, is likely to reflect the speaker's expectation based on experience gained in the same position, or similar positions, on many previous occasions. In order to be quantitative about probability, we focus on this aspect of repeatable situations." (Peter Lancaster, "Mathematics: Models of the Real World", 1976)

"The theory of probability is the only mathematical tool available to help map the unknown and the uncontrollable. It is fortunate that this tool, while tricky, is extraordinarily powerful and convenient." (Benoit Mandelbrot, "The Fractal Geometry of Nature", 1977)

"In decision theory, mathematical analysis shows that once the sampling distribution, loss function, and sample are specified, the only remaining basis for a choice among different admissible decisions lies in the prior probabilities. Therefore, the logical foundations of decision theory cannot be put in fully satisfactory form until the old problem of arbitrariness (sometimes called 'subjectiveness') in assigning prior probabilities is resolved." (Edwin T Jaynes, "Prior Probabilities", 1978)

"Scientific theories must tell us both what is true in nature, and how we are to explain it. I shall argue that these are entirely different functions and should be kept distinct. […] Scientific theories are thought to explain by dint of the descriptions they give of reality. […] The covering-law model supposes that all we need to know are the laws of nature - and a little logic, perhaps a little probability theory - and then we know which factors can explain which others." (Nancy Cartwright, "How the Laws of Physics Lie", 1983)

"Another reason for the applied statistician to care about Bayesian inference is that consumers of statistical answers, at least interval estimates, commonly interpret them as probability statements about the possible values of parameters. Consequently, the answers statisticians provide to consumers should be capable of being interpreted as approximate Bayesian statements." (Donald B Rubin, "Bayesianly justifiable and relevant frequency calculations for the applied statistician", Annals of Statistics 12(4), 1984)

"In the path-integral formulation, the essence of quantum physics may be summarized with two fundamental rules: (1). The classical action determines the probability amplitude for a specific chain of events to occur, and (2) the probability that either one or the other chain of events occurs is determined by the probability amplitudes corresponding to the two chains of events. Finding these rules represents a stunning achievement by the founders of quantum physics." (Anthony Zee, "Fearful Symmetry: The Search for Beauty in Modern Physics", 1986)

"In the design of experiments, one has to use some informal prior knowledge. How does one construct blocks in a block design problem for instance? It is stupid to think that use is not made of a prior. But knowing that this prior is utterly casual, it seems ludicrous to go through a lot of integration, etc., to obtain ‘exact’ posterior probabilities resulting from this prior. So, I believe the situation with respect to Bayesian inference and with respect to inference, in general, has not made progress. Well, Bayesian statistics has led to a great deal of theoretical research. But I don’t see any real utilizations in applications, you know. Now no one, as far as I know, has examined the question of whether the inferences that are obtained are, in fact, realized in the predictions that they are used to make." (Oscar Kempthorne, "A conversation with Oscar Kempthorne", Statistical Science vol. 10, 1995)

"Events may appear to us to be random, but this could be attributed to human ignorance about the details of the processes involved." (Brian S Everitt, "Chance Rules", 1999)

"The whole point of probability is to discuss uncertain eventualities before they occur. After this event, things are completely different." (David Stirzaker, "Probability and Random Variables: A Beginner’s Guide", 1999)

"The problem is that to interpret probability as a relative frequency requires that we can repeat some game or activity as many times as we wish. Often this is clearly not the case." (David Stirzaker, "Probability and Random Variables: A Beginner’s Guide", 1999)

24 October 2023

Ian Hacking - Collected Quotes

"A single observation that is inconsistent with some generalization points to the falsehood of the generalization, and thereby 'points to itself'." (Ian Hacking, "The Emergence of Probability", 1975)

"Many modern philosophers claim that probability is a relation between an hypothesis and the evidence for it." (Ian Hacking, "The Emergence of Probability", 1975)

"Determinism was eroded during the nineteenth century and a space was cleared for autonomous laws of chance. The idea of human nature was displaced by a model of normal people with laws of dispersion. These two transformations were parallel and fed into each other. Chance made the world seem less capricious; it was legitimated because it brought order out of chaos. The greater the level of indeterminism in our conception of the world and of people, the higher the expected level of control." (Ian Hacking, "The Taming of Chance", 1990)

"Epistemology is the theory of knowledge and belief." (Ian Hacking, "The Taming of Chance", 1990)

"Logic is the theory of inference and argument. For this purpose we use the deductive and often tautological unravelling of axioms provided by pure mathematics, but also, and for most practical affairs, we now employ - sometimes precisely, sometimes informally - the logic of statistical inference." (Ian Hacking, "The Taming of Chance", 1990)

"Metaphysics is the science of the ultimate states of the universe." (Ian Hacking, "The Taming of Chance", 1990)

"The systematic collection of data about people has affected not only the ways in which we conceive of a society, but also the ways in which we describe our neighbour. It has profoundly transformed what we choose to do, who we try to be, and what we think of ourselves." (Ian Hacking, "The Taming of Chance", 1990)

"There is a seeming paradox: the more the indeterminism, the more the control. This is obvious in the physical sciences. Quantum physics takes for granted that nature is at bottom irreducibly stochastic. Precisely that discovery has immeasurably enhanced our ability to interfere with and alter the course of nature." (Ian Hacking, "The Taming of Chance", 1990)

"I write of the taming of chance, that is, of the way in which apparently chance or irregular events have been brought under the control of natural or social law. The world became not more chancy, but far less so. Chance, which was once the superstition of the vulgar, became the centrepiece of natural and social science, or so genteel and rational people are led to believe." (Ian Hacking, "The Taming of Chance", 1990)

"The best reaction to a paradox is to invent a genuinely new and deep idea." (Ian Hacking, "An Introduction to Probability and Inductive Logic", 2001)

24 September 2023

On Laws II: The Laws of Probability

"The laws of probability, so true in general, so fallacious in particular." (Edward Gibbon, "Memoirs of My Life", 1774)

"The second law of thermodynamics appears solely as a law of probability, entropy as a measure of the probability, and the increase of entropy is equivalent to a statement that more probable events follow less probable ones." (Max Planck, "A Survey of Physics", 1923)

"The concepts which now prove to be fundamental to our understanding of nature - a space which is finite; a space which is empty, so that one point [of our 'material' world] differs from another solely in the properties of space itself; four-dimensional, seven- and more dimensional spaces; a space which for ever expands; a sequence of events which follows the laws of probability instead of the law of causation - or alternatively, a sequence of events which can only be fully and consistently described by going outside of space and time - all these concepts seem to my mind to be structures of pure thought, incapable of realisation in any sense which would properly be described as material." (James Jeans, "The Mysterious Universe", 1930)

"The fundamental difference between engineering with and without statistics boils down to the difference between the use of a scientific method based upon the concept of laws of nature that do not allow for chance or uncertainty and a scientific method based upon the concepts of laws of probability as an attribute of nature." (Walter A Shewhart, 1940)

"[...] the whole course of events is determined by the laws of probability; to a state in space there corresponds a definite probability, which is given by the de Broglie wave associated with the state." (Max Born, "Atomic Physics", 1957)

"We can never achieve absolute truth but we can live hopefully by a system of calculated probabilities. The law of probability gives to natural and human sciences - to human experience as a whole - the unity of life we seek." (Agnes E Meyer, "Education for a New Morality", 1957)

"People are entirely too disbelieving of coincidence. They are far too ready to dismiss it and to build arcane structures of extremely rickety substance in order to avoid it. I, on the other hand, see coincidence everywhere as an inevitable consequence of the laws of probability, according to which having no unusual coincidence is far more unusual than any coincidence could possibly be." (Isaac Asimov, "The Planet That Wasn't", 1976)

"I take the view that life is a nonspiritual, almost mathematical property that can emerge from network-like arrangements of matter. It is sort of like the laws of probability; if you get enough components together, the system will behave like this, because the law of averages dictates so. Life results when anything is organized according to laws only now being uncovered; it follows rules as strict as those that light obeys." (Kevin Kelly, "Out of Control: The New Biology of Machines, Social Systems and the Economic World", 1995)

"So we pour in data from the past to fuel the decision-making mechanisms created by our models, be they linear or nonlinear. But therein lies the logician's trap: past data from real life constitute a sequence of events rather than a set of independent observations, which is what the laws of probability demand. [...] It is in those outliers and imperfections that the wildness lurks." (Peter L Bernstein, "Against the Gods: The Remarkable Story of Risk", 1996)

"The possibility of translating uncertainties into risks is much more restricted in the propensity view. Propensities are properties of an object, such as the physical symmetry of a die. If a die is constructed to be perfectly symmetrical, then the probability of rolling a six is 1 in 6. The reference to a physical design, mechanism, or trait that determines the risk of an event is the essence of the propensity interpretation of probability. Note how propensity differs from the subjective interpretation: It is not sufficient that someone’s subjective probabilities about the outcomes of a die roll are coherent, that is, that they satisfy the laws of probability. What matters is the die’s design. If the design is not known, there are no probabilities." (Gerd Gigerenzer, "Calculated Risks: How to know when numbers deceive you", 2002)

"In the laws of probability theory, likelihood distributions are fixed properties of a hypothesis. In the art of rationality, to explain is to anticipate. To anticipate is to explain." (Eliezer S Yudkowsky, "A Technical Explanation of Technical Explanation", 2005)

On Probability Theory (2000-)

"Arithmetic and number theory study patterns of number and counting. Geometry studies patterns of shape. Calculus allows us to handle patterns of motion. Logic studies patterns of reasoning. Probability theory deals with patterns of chance. Topology studies patterns of closeness and position." (Keith Devlin, "The Math Gene: How Mathematical Thinking Evolved And Why Numbers Are Like Gossip", 2000)

"The most important aspect of probability theory concerns the behavior of sequences of random variables." (Larry A Wasserman, "All of Statistics: A concise course in statistical inference", 2004)

"In the laws of probability theory, likelihood distributions are fixed properties of a hypothesis. In the art of rationality, to explain is to anticipate. To anticipate is to explain." (Eliezer S. Yudkowsky, "A Technical Explanation of Technical Explanation", 2005)

"Chance is just as real as causation; both are modes of becoming. The way to model a random process is to enrich the mathematical theory of probability with a model of a random mechanism. In the sciences, probabilities are never made up or 'elicited' by observing the choices people make, or the bets they are willing to place. The reason is that, in science and technology, interpreted probability exactifies objective chance, not gut feeling or intuition. No randomness, no probability." (Mario Bunge, "Chasing Reality: Strife over Realism", 2006)

"At a purely formal level, one could call probability theory the study of measure spaces with total measure one, but that would be like calling number theory the study of strings of digits which terminate." (Terence Tao, "Topics in Random Matrix Theory", 2012)

"The four questions of data analysis are the questions of description, probability, inference, and homogeneity. [...] Descriptive statistics are built on the assumption that we can use a single value to characterize a single property for a single universe. […] Probability theory is focused on what happens to samples drawn from a known universe. If the data happen to come from different sources, then there are multiple universes with different probability models.  [...] Statistical inference assumes that you have a sample that is known to have come from one universe." (Donald J Wheeler," Myths About Data Analysis", International Lean & Six Sigma Conference, 2012)

"Probability theory provides the best answer only when the rules of the game are certain, when all alternatives, consequences, and probabilities are known or can be calculated. [...] In the real game, probability theory is not enough. Good intuitions are needed, which can be more challenging than calculations. One way to reduce uncertainty is to rely on rules of thumb." (Gerd Gigerenzer, "Risk Savvy: How to make good decisions", 2014)

"When statisticians, trained in math and probability theory, try to assess likely outcomes, they demand a plethora of data points. Even then, they recognize that unless it’s a very simple and controlled action such as flipping a coin, unforeseen variables can exert significant influence." (Zachary Karabell, "The Leading Indicators: A short history of the numbers that rule our world", 2014)

"Probability theory is not the only tool for rationality. In situations of uncertainty, as opposed to risk, simple heuristics can lead to more accurate judgments, in addition to being faster and more frugal. Under uncertainty, optimal solutions do not exist (except in hindsight) and, by definition, cannot be calculated. Thus, it is illusory to model the mind as a general optimizer, Bayesian or otherwise. Rather, the goal is to achieve satisficing solutions, such as meeting an aspiration level or coming out ahead of a competitor."  (Gerd Gigerenzer et al, "Simply Rational: Decision Making in the Real World", 2015)

"New information is constantly flowing in, and your brain is constantly integrating it into this statistical distribution that creates your next perception (so in this sense 'reality' is just the product of your brain’s ever-evolving database of consequence). As such, your perception is subject to a statistical phenomenon known in probability theory as kurtosis. Kurtosis in essence means that things tend to become increasingly steep in their distribution [...] that is, skewed in one direction. This applies to ways of seeing everything from current events to ourselves as we lean 'skewedly' toward one interpretation, positive or negative. Things that are highly kurtotic, or skewed, are hard to shift away from. This is another way of saying that seeing differently isn’t just conceptually difficult - it’s statistically difficult." (Beau Lotto, "Deviate: The Science of Seeing Differently", 2017)

On Probability Theory (1975-1999)

"The theory of probability is the only mathematical tool available to help map the unknown and the uncontrollable. It is fortunate that this tool, while tricky, is extraordinarily powerful and convenient." (Benoit Mandelbrot, "The Fractal Geometry of Nature", 1977)

"Every branch of mathematics has its combinatorial aspects […] There is combinatorial arithmetic, combinatorial topology, combinatorial logic, combinatorial set theory-even combinatorial linguistics, as we shall see in the section on word play. Combinatorics is particularly important in probability theory where it is essential to enumerate all possible combinations of things before a probability formula can be found." (Martin Gardner, "Aha! Insight", 1978)

"The ‘eyes of the mind’ must be able to see in the phase space of mechanics, in the space of elementary events of probability theory, in the curved four-dimensional space-time of general relativity, in the complex infinite dimensional projective space of quantum theory. To comprehend what is visible to the ‘actual eyes’, we must understand that it is only the projection of an infinite dimensional world on the retina." (Yuri I Manin, "Mathematics and Physics", 1981)

"Scientific theories must tell us both what is true in nature, and how we are to explain it. I shall argue that these are entirely different functions and should be kept distinct. […] Scientific theories are thought to explain by dint of the descriptions they give of reality. […] The covering-law model supposes that all we need to know are the laws of nature - and a little logic, perhaps a little probability theory - and then we know which factors can explain which others." (Nancy Cartwright, "How the Laws of Physics Lie", 1983)

"Increasingly [...] the application of mathematics to the real world involves discrete mathematics... the nature of the discrete is often most clearly revealed through the continuous models of both calculus and probability. Without continuous mathematics, the study of discrete mathematics soon becomes trivial and very limited. [...] The two topics, discrete and continuous mathematics, are both ill served by being rigidly separated." (Richard W Hamming, "Methods of Mathematics Applied to Calculus, Probability, and Statistics", 1985)

"Independence is the central concept of probability theory and few would believe today that understanding what it meant was ever a problem." (Mark Kac, "Enigmas Of Chance", 1985)

"Probability and statistics are now so obviously necessary tools for understanding many diverse things that we must not ignore them even for the average student." (Richard W Hamming, "Methods of Mathematics Applied to Calculus, Probability, and Statistics", 1985)

"Phenomena having uncertain individual outcomes but a regular pattern of outcomes in many repetitions are called random. 'Random' is not a synonym for 'haphazard' but a description of a kind of order different from the deterministic one that is popularly associated with science and mathematics. Probability is the branch of mathematics that describes randomness." (David S Moore, "Uncertainty", 1990)

"Every field of knowledge has its subject matter and its methods, along with a style for handling them. The field of Probability has a great deal of the Art component in it-not only is the subject matter rather different from that of other fields, but at present the techniques are not well organized into systematic methods. As a result each problem has to be "looked at in the right way" to make it easy to solve. Thus in probability theory there is a great deal of art in setting up the model, in solving the problem, and in applying the results back to the real world actions that will follow." (Richard W Hamming, "The Art of Probability for Scientists and Engineers", 1991)

"Probability is too important to be left to the experts. […] The experts, by their very expert training and practice, often miss the obvious and distort reality seriously. [...] The desire of the experts to publish and gain credit in the eyes of their peers has distorted the development of probability theory from the needs of the average user. The comparatively late rise of the theory of probability shows how hard it is to grasp, and the many paradoxes show clearly that we, as humans, lack a well-grounded intuition in the matter. Neither the intuition of the man in the street, nor the sophisticated results of the experts provides a safe basis for important actions in the world we live in." (Richard W Hamming, "The Art of Probability for Scientists and Engineers", 1991)

"Probability theory has a right and a left hand. On the right is the rigorous foundational work using the tools of measure theory. The left hand 'thinks probabilistically', reduces problems to gambling situations, coin-tossing, motions of a physical particle." (Leo Breiman, "Probability", 1992)

"Probability theory is an ideal tool for formalizing uncertainty in situations where class frequencies are known or where evidence is based on outcomes of a sufficiently long series of independent random experiments. Possibility theory, on the other hand, is ideal for formalizing incomplete information expressed in terms of fuzzy propositions." (George Klir, "Fuzzy sets and fuzzy logic", 1995)

"Probability theory is a serious instrument for forecasting, but the devil, as they say, is in the details - in the quality of information that forms the basis of probability estimates." (Peter L Bernstein, "Against the Gods: The Remarkable Story of Risk", 1996)

"The theory of probability can define the probabilities at the gaming casino or in a lottery - there is no need to spin the roulette wheel or count the lottery tickets to estimate the nature of the outcome - but in real life relevant information is essential. And the bother is that we never have all the information we would like. Nature has established patterns, but only for the most part. Theory, which abstracts from nature, is kinder: we either have the information we need or else we have no need for information." (Peter L Bernstein, "Against the Gods: The Remarkable Story of Risk", 1996)

On Probability Theory (1950-1974)

"Historically, the original purpose of the theory of probability was to describe the exceedingly narrow domain of experience connected with games of chance, and the main effort was directed to the calculation of certain probabilities." (William Feller, "An Introduction To Probability Theory And Its Applications", 1950)

"Infinite product spaces are the natural habitat of probability theory." (William Feller, "An Introduction To Probability Theory And Its Applications", 1950)

"Probability is a mathematical discipline with aims akin to those, for example, of geometry or analytical mechanics. In each field we must carefully distinguish three aspects of the theory: (a) the formal logical content, (b) the intuitive background, (c) the applications. The character, and the charm, of the whole structure cannot be appreciated without considering all three aspects in their proper relation." (William Feller, "An Introduction to Probability Theory and Its Applications", 1950)

"Sampling is the science and art of controlling and measuring the reliability of useful statistical information through the theory of probability." (William E Deming, "Some Theory of Sampling", 1950)

"The classical theory of probability was devoted mainly to a study of the gamble's gain, which is again a random variable; in fact, every random variable can be interpreted as the gain of a real or imaginary gambler in a suitable game." (William Feller, "An Introduction To Probability Theory And Its Applications", 1950)

"The painful experience of many gamblers has taught us the lesson that no system of betting is successful in improving the gambler's chances. If the theory of probability is true to life, this experience must correspond to a provable statement." (William Feller, "An Introduction To Probability Theory And Its Applications", 1950)

"In a sense, of course, probability theory in the form of the simple laws of chance is the key to the analysis of warfare; […] My own experience of actual operational research work, has however, shown that its is generally possible to avoid using anything more sophisticated. […] In fact the wise operational research worker attempts to concentrate his efforts in finding results which are so obvious as not to need elaborate statistical methods to demonstrate their truth. In this sense advanced probability theory is something one has to know about in order to avoid having to use it." (Patrick M S Blackett, "Operations Research", Physics Today, 1951)

"The study of inductive inference belongs to the theory of probability, since observational facts can make a theory only probable but will never make it absolutely certain." (Hans Reichenbach, "The Rise of Scientific Philosophy", 1951)

"To say that observations of the past are certain, whereas predictions are merely probable, is not the ultimate answer to the question of induction; it is only a sort of intermediate answer, which is incomplete unless a theory of probability is developed that explains what we should mean by ‘probable’ and on what ground we can assert probabilities." (Hans Reichenbach, "The Rise of Scientific Philosophy", 1951)

"The epistemological value of probability theory is based on the fact that chance phenomena, considered collectively and on a grand scale, create non-random regularity." (Andrey N Kolmogorov, "Limit Distributions for Sums of Independent Random Variables", 1954)

"On the other hand, the 'subjective' school of thought, regards probabilities as expressions of human ignorance; the probability of an event is merely a formal expression of our expectation that the event will or did occur, based on whatever information is available. To the subjectivist, the purpose of probability theory is to help us in forming plausible conclusions in cases where there is not enough information available to lead to certain conclusions; thus detailed verification is not expected. The test of a good subjective probability distribution is does it correctly represent our state of knowledge as to the value of x?" (Edwin T Jaynes, "Information Theory and Statistical Mechanics" I, 1956)

"Probability is a mathematical discipline with aims akin to those, for example, of geometry or analytical mechanics. In each field we must carefully distinguish three aspects of the theory: (a) the formal logical content, (b) the intuitive background, (c) the applications. The character, and the charm, of the whole structure cannot be appreciated without considering all three aspects in their proper relation." (William Feller, "An Introduction to Probability Theory and Its Applications", 1957)

"The theory of probability can never lead to a definite statement concerning a single event." (Richard von Mises, "Probability, Statistics, and Truth" 2nd Ed., 1957)

"To the author the main charm of probability theory lies in the enormous variability of its applications. Few mathematical disciplines have contributed to as wide a spectrum of subjects, a spectrum ranging from number theory to physics, and even fewer have penetrated so decisively the whole of our scientific thinking." (Mark Kac, "Lectures in Applied Mathematics" Vol. 1, 1959)

"The mathematician, the statistician, and the philosopher do different things with a theory of probability. The mathematician develops its formal consequences, the statistician applies the work of the mathematician and the philosopher describes in general terms what this application consists in. The mathematician develops symbolic tools without worrying overmuch what the tools are for; the statistician uses them; the philosopher talks about them. Each does his job better if he knows something about the work of the other two." (Irvin J Good, "Kinds of Probability", Science Vol. 129, 1959)

"Incomplete knowledge must be considered as perfectly normal in probability theory; we might even say that, if we knew all the circumstances of a phenomenon, there would be no place for probability, and we would know the outcome with certainty." (Félix E Borel, Probability and Certainty", 1963)

"The probability concept used in probability theory has exactly the same structure as have the fundamental concepts in any field in which mathematical analysis is applied to describe and represent reality." (Richard von Mises, "Mathematical Theory of Probability and Statistics", 1964)

"After all, without the experiment - either a real one or a mathematical model - there would be no reason for a theory of probability." (Thornton C Fry, "Probability and Its Engineering Uses", 1965)

"This faulty intuition as well as many modern applications of probability theory are under the strong influence of traditional misconceptions concerning the meaning of the law of large numbers and of a popular mystique concerning a so-called law of averages." (William Feller, "An Introduction to Probability Theory and Its Applications", 1968)

"Probability theory, for us, is not so much a part of mathematics as a part of logic, inductive logic, really. It provides a consistent framework for reasoning about statements whose correctness or incorrectness cannot be deduced from the hypothesis. The information available is sufficient only to make the inferences 'plausible' to a greater or lesser extent." (Ralph Baierlein, "Atoms and Information Theory: An Introduction to Statistical Mechanics", 1971)

"Since small differences in probability cannot be appreciated by the human mind, there seems little point in being excessively precise about uncertainty." (George E P Box & G C Tiao, "Bayesian inference in statistical analysis", 1973)

"The field of probability and statistics is then transformed into a Tower of Babel, in which only the most naive amateur claims to understand what he says and hears, and this because, in a language devoid of convention, the fundamental distinctions between what is certain and what is not, and between what is impossible and what is not, are abolished. Certainty and impossibility then become confused with high or low degrees of a subjective probability, which is itself denied precisely by this falsification of the language. On the contrary, the preservation of a clear, terse distinction between certainty and uncertainty, impossibility and possibility, is the unique and essential precondition for making meaningful statements (which could be either right or wrong), whereas the alternative transforms every sentence into a nonsense." (Bruno de Finetti, "Theory of Probability", 1974)

On Probability Theory (-1949)

"I am convinced that it is impossible to expound the methods of induction in a sound manner, without resting them on the theory of probability. Perfect knowledge alone can give certainty, and in nature perfect knowledge would be infinite knowledge, which is clearly beyond our capacities. We have, therefore, to content ourselves with partial knowledge, - knowledge mingled with ignorance, producing doubt." (William S Jevons, "The Principles of Science: A Treatise on Logic and Scientific Method", 1887)

"There is no more remarkable feature in the mathematical theory of probability than the manner in which it has been found to harmonize with, and justify, the conclusions to which mankind have been led, not by reasoning, but by instinct and experience, both of the individual and of the race. At the same time it has corrected, extended, and invested them with a definiteness and precision of which these crude, though sound, appreciations of common sense were till then devoid." (Morgan W Crofton, "Probability", Encyclopaedia Britannica 9th Ed,, 1885)

"A collective appropriate for the application of the theory of probability must fulfil two conditions. First, the relative frequencies of the attributes must possess limiting values. Second, these limiting values must remain the same in all partial sequences which may be selected from the original one in an arbitrary way. Of course, only such partial sequences can be taken into consideration as can be extended indefinitely, in the same way as the original sequence itself." (Richard von Mises, "Probability, Statistics and Truth", 1928)

"A great number of popular and more or less serious objections to the theory of probability disappear at once when we recognize that the exclusive purpose of this theory is to determine, from the given probabilities in a number of initial collectives, the probabilities in a new collective derived from the initial ones." (Richard von Mises, "Probability, Statistics and Truth", 1928)

"The rational concept of probability, which is the only basis of probability calculus, applies only to problems in which either the same event repeats itself again and again, or a great number of uniform elements are involved at the same time. Using the language of physics, we may say that in order to apply the theory of probability we must have a practically unlimited sequence of uniform observations." (Richard von Mises, "Probability, Statistics and Truth", 1928)

"The result of each calculation appertaining to the field of probability is always, as far as our theory goes, nothing else but a probability, or, using our general definition, the relative frequency of a certain event in a sufficiently long (theoretically, infinitely long) sequence of observations. The theory of probability can never lead to a definite statement concerning a single event. The only question that it can answer is: what is to be expected in the course of a very long sequence of observations? It is important to note that this statement remains valid also if the calculated probability has one of the two extreme values 1 or 0." (Richard von Mises, "Probability, Statistics and Truth", 1928)

"The theory of probability as a mathematical discipline can and should be developed from axioms in exactly the same way as geometry and algebra." (Andrey N Kolmogorov, "Foundations of the Theory of Probability", 1933)

"The most important application of the theory of probability is to what we may call 'chance-like' or 'random' events, or occurrences. These seem to be characterized by a peculiar kind of incalculability which makes one disposed to believe - after many unsuccessful attempts - that all known rational methods of prediction must fail in their case. We have, as it were, the feeling that not a scientist but only a prophet could predict them. And yet, it is just this incalculability that makes us conclude that the calculus of probability can be applied to these events." (Karl R Popper, "The Logic of Scientific Discovery", 1934)

"Statistics is a scientific discipline concerned with collection, analysis, and interpretation of data obtained from observation or experiment. The subject has a coherent structure based on the theory of Probability and includes many different procedures which contribute to research and development throughout the whole of Science and Technology." (Egon Pearson, 1936)

09 July 2023

On Events: Rare Events

"We must rather seek for a cause, for every event whether probable or improbable must have some cause." (Polybius, "The Histories", cca. 100 BC)

"There is nothing in the nature of a miracle that should render it incredible: its credibility depends upon the nature of the evidence by which it is supported. An event of extreme probability will not necessarily command our belief unless upon a sufficiency of proof; and so an event which we may regard as highly improbable may command our belief if it is sustained by sufficient evidence. So that the credibility or incredibility of an event does not rest upon the nature of the event itself, but depends upon the nature and sufficiency of the proof which sustains it." (Charles Babbage, "Passages from the Life of a Philosopher", 1864)

"Events with a sufficiently small probability never occur, or at least we must act, in all circumstances, as if they were impossible." (Émile Borel, "Probabilities and Life", 1962)

"Most accidents in well-designed systems involve two or more events of low probability occurring in the worst possible combination." (Robert E Machol, "Principles of Operations Research", 1975)

"But despite their frequently good intuition, many people go wrong in two places in particular. The first is in appreciating the real differences in magnitude that arise with rare events. If a chance is expressed as 'one in a thousand' or 'one in a million', the only message registered maybe that the chance is remote and yet one figure is a thousand times bigger than the other. Another area is in using partial information […]" (John Haigh," Taking Chances: Winning With Probability", 1999)

"[…] all human beings - professional mathematicians included - are easily muddled when it comes to estimating the probabilities of rare events. Even figuring out the right question to ask can be confusing." (Steven Strogatz, "Sync: The Emerging Science of Spontaneous Order", 2003)

"Bell curves don't differ that much in their bells. They differ in their tails. The tails describe how frequently rare events occur. They describe whether rare events really are so rare. This leads to the saying that the devil is in the tails." (Bart Kosko, "Noise", 2006)

"A Black Swan is a highly improbable event with three principal characteristics: It is unpredictable; it carries a massive impact; and, after the fact, we concoct an explanation that makes it appear less random, and more predictable, than it was. […] The Black Swan idea is based on the structure of randomness in empirical reality. [...] the Black Swan is what we leave out of simplification." (Nassim N Taleb, “The Black Swan”, 2007)

"A forecaster should almost never ignore data, especially when she is studying rare events […]. Ignoring data is often a tip-off that the forecaster is overconfident, or is overfitting her model - that she is interested in showing off rather than trying to be accurate."  (Nate Silver, "The Signal and the Noise: Why So Many Predictions Fail-but Some Don't", 2012)

"[…] according to the bell-shaped curve the likelihood of a very-large-deviation event (a major outlier) located in the striped region appears to be very unlikely, essentially zero. The same event, though, is several thousand times more likely if it comes from a set of events obeying a fat-tailed distribution instead of the bell-shaped one." (John L Casti, "X-Events: The Collapse of Everything", 2012)

"[…] both rarity and impact have to go into any meaningful characterization of how black any particular [black] swan happens to be." (John L Casti, "X-Events: The Collapse of Everything", 2012)

"Black Swans (capitalized) are large-scale unpredictable and irregular events of massive consequence - unpredicted by a certain observer, and such un - predictor is generally called the 'turkey' when he is both surprised and harmed by these events. [...] Black Swans hijack our brains, making us feel we 'sort of' or 'almost' predicted them, because they are retrospectively explainable. We don’t realize the role of these Swans in life because of this illusion of predictability. […] An annoying aspect of the Black Swan problem - in fact the central, and largely missed, point - is that the odds of rare events are simply not computable." (Nassim N Taleb, "Antifragile: Things that gain from disorder", 2012)

"Behavioral finance so far makes conclusions from statics not dynamics, hence misses the picture. It applies trade-offs out of context and develops the consensus that people irrationally overestimate tail risk (hence need to be 'nudged' into taking more of these exposures). But the catastrophic event is an absorbing barrier. No risky exposure can be analyzed in isolation: risks accumulate. If we ride a motorcycle, smoke, fly our own propeller plane, and join the mafia, these risks add up to a near-certain premature death. Tail risks are not a renewable resource." (Nassim N Taleb, "Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications" 2nd Ed., 2022)

"But note that any heavy tailed process, even a power law, can be described in sample (that is finite number of observations necessarily discretized) by a simple Gaussian process with changing variance, a regime switching process, or a combination of Gaussian plus a series of variable jumps (though not one where jumps are of equal size […])." (Nassim N Taleb, "Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications" 2nd Ed., 2022)

"[…] it is not merely that events in the tails of the distributions matter, happen, play a large role, etc. The point is that these events play the major role and their probabilities are not (easily) computable, not reliable for any effective use. The implication is that Black Swans do not necessarily come from fat tails; the problem can result from an incomplete assessment of tail events." (Nassim N Taleb, "Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications" 2nd Ed., 2022)

"Once we know something is fat-tailed, we can use heuristics to see how an exposure there reacts to random events: how much is a given unit harmed by them. It is vastly more effective to focus on being insulated from the harm of random events than try to figure them out in the required details (as we saw the inferential errors under thick tails are huge). So it is more solid, much wiser, more ethical, and more effective to focus on detection heuristics and policies rather than fabricate statistical properties." (Nassim N Taleb, "Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications" 2nd Ed., 2022)

12 December 2021

David Stirzaker - Collected Quotes

"By its very nature a model cannot include all the details of the reality it seeks to represent, for then it would be just as hard to comprehend and describe as the reality we want to model. At best, our model should give a reasonable picture of some small part of reality. It has to be a simple (even crude) description; and we must always be ready to scrap or improve a model if it fails in this task of accurate depiction. That having been said, old models are often still useful." (David Stirzaker, "Probability and Random Variables: A Beginner's Guide", 1999) 

"Conversely, there are few features of life, the universe, or anything, in which chance is not in some way crucial. Nor is this merely some abstruse academic point; assessing risks and taking chances are inescapable facets of everyday existence. It is a trite maxim to say that life is a lottery; it would be more true to say that life offers a collection of lotteries that we can all, to some extent, choose to enter or avoid. And as the information at our disposal increases, it does not reduce the range of choices but in fact increases them." (David Stirzaker, "Probability and Random Variables: A Beginner's Guide", 1999)

"For several centuries that we know of, and probably for many centuries before that, flipping a coin (or rolling a die) has been the epitome of probability, the paradigm of randomness. You flip the coin (or roll the die), and nobody can accurately predict how it will fall. Nor can the most powerful computer predict correctly how it will fall, if it is flipped energetically enough. This is why cards, dice, and other gambling aids crop up so often in literature both directly and as metaphors. No doubt it is also the reason for the (perhaps excessive) popularity of gambling as entertainment. If anyone had any idea what numbers the lottery would show, or where the roulette ball will land, the whole industry would be a dead duck." (David Stirzaker, "Probability and Random Variables: A Beginner’s Guide", 1999)

"From the moment we first roll a die in a children’s board game, or pick a card (any card), we start to learn what probability is. But even as adults, it is not easy to tell what it is, in the general way." (David Stirzaker, "Probability and Random Variables: A Beginner’s Guide", 1999)

"It is difficult to make progress in any branch of mathematics without using the ideas and notation of sets and functions. Indeed it would be perverse to try to do so, since these ideas and notation are very helpful in guiding our intuition and solving problems." (David Stirzaker, "Probability and Random Variables: A Beginner’s Guide", 1999)

"Models form extraordinarily powerful and economical ways of thinking about the world. In fact they are often so good that the model is confused with reality. If you ever think about atoms, you probably imagine little billiard balls; more sophisticated readers may imagine little orbital systems of elementary particles. Of course atoms are not`really' like that; these visions are just convenient old models." (David Stirzaker, "Probability and Random Variables: A Beginner’s Guide", 1999)

"[...] the chance of a head (or a double six) is just a chance. The whole point of probability is to discuss uncertain eventualities before they occur. After this event, things are completely different. As the simplest illustration of this, note that even though we agree that if we flip a coin and roll two dice then the chance of a head is greater than the chance of a double six, nevertheless it may turn out that the coin shows a tail when the dice show a double six." (David Stirzaker, "Probability and Random Variables: A Beginner’s Guide", 1999)

"The problem is that to interpret probability as a relative frequency requires that we can repeat some game or activity as many times as we wish. Often this is clearly not the case." (David Stirzaker, "Probability and Random Variables: A Beginner’s Guide", 1999)

"The whole point of probability is to discuss uncertain eventualities before they occur. After this event, things are completely different." (David Stirzaker, "Probability and Random Variables: A Beginner’s Guide", 1999)

"[...] unlike the apparatus for choosing numbers, gamblers choose numbers for various reasons. Very few choose at random; they use birthdays, ages, patterns, and so on. However, you might suppose that for any gambler chosen at random, that choice of numbers would be evenly distributed over the possibilities." (David Stirzaker, "Probability and Random Variables: A Beginner’s Guide", 1999)

"Use of the term 'model' makes it easier to keep in mind this distinction between theory and reality. By its very nature a model cannot include all the details of the reality it seeks to represent, for then it would be just as hard to comprehend and describe as the reality we want to model. At best, our model should give a reasonable picture of some small part of reality. It has to be a simple (even crude) description; and we must always be ready to scrap or improve a model if it fails in this task of accurate depiction. That having been said, old models are often still useful. The theory of relativity supersedes the Newtonian model, but all engineers use Newtonian mechanics when building bridges or motor cars, or probing the solar system." (David Stirzaker, "Probability and Random Variables: A Beginner’s Guide", 1999)

"We cannot really have a perfectly shuffled pack of perfect cards; this ‘collection of equally likely hands’ is actually a fiction. We create the idea, and then use the rules of arithmetic to calculate the required chances. This is characteristic of all mathematics, which concerns itself only with rules defining the behaviour of entities which are themselves undefined (such as ‘numbers’ or ‘points’)." (David Stirzaker, "Probability and Random Variables: A Beginner’s Guide", 1999)
