09 July 2023

On Inference (2000-2009)

"Even if our cognitive maps of causal structure were perfect, learning, especially double-loop learning, would still be difficult. To use a mental model to design a new strategy or organization we must make inferences about the consequences of decision rules that have never been tried and for which we have no data. To do so requires intuitive solution of high-order nonlinear differential equations, a task far exceeding human cognitive capabilities in all but the simplest systems."  (John D Sterman, "Business Dynamics: Systems thinking and modeling for a complex world", 2000)

"The difficulty facing us when we have to make inferences is two-fold. First, we may build entirely the wrong mental model from the information we read or hear. […] The second difficulty facing us is that we may well build a reasonably correct initial representation of a problem, but this representation may be impoverished in some way because we have no idea what inferences are relevant […]" (S Ian Robertson, "Problem Solving", 2001)

"Ignorance of relevant risks and miscommunication of those risks are two aspects of innumeracy. A third aspect of innumeracy concerns the problem of drawing incorrect inferences from statistics. This third type of innumeracy occurs when inferences go wrong because they are clouded by certain risk representations. Such clouded thinking becomes possible only once the risks have been communicated." (Gerd Gigerenzer, "Calculated Risks: How to know when numbers deceive you", 2002)

"Information needs representation. The idea that it is possible to communicate information in a 'pure' form is fiction. Successful risk communication requires intuitively clear representations. Playing with representations can help us not only to understand numbers (describe phenomena) but also to draw conclusions from numbers (make inferences). There is no single best representation, because what is needed always depends on the minds that are doing the communicating." (Gerd Gigerenzer, "Calculated Risks: How to know when numbers deceive you", 2002)

"Natural frequencies facilitate inferences made on the basis of numerical information. The representation does part of the reasoning, taking care of the multiplication the mind would have to perform if given probabilities. In this sense, insight can come from outside the mind." (Gerd Gigerenzer, "Calculated Risks: How to know when numbers deceive you", 2002)

"Overcoming innumeracy is like completing a three-step program to statistical literacy. The first step is to defeat the illusion of certainty. The second step is to learn about the actual risks of relevant events and actions. The third step is to communicate the risks in an understandable way and to draw inferences without falling prey to clouded thinking. The general point is this: Innumeracy does not simply reside in our minds but in the representations of risk that we choose." (Gerd Gigerenzer, "Calculated Risks: How to know when numbers deceive you", 2002)

"When natural frequencies are transformed into conditional probabilities, the base rate information is taken out (this is called normalization). The benefit of this normalization is that the resulting values fall within the uniform range of 0 and 1. The cost, however, is that when drawing inferences from probabilities (as opposed to natural frequencies), one has to put the base rates back in by multiplying the conditional probabilities by their respective base rates." (Gerd Gigerenzer, "Calculated Risks: How to know when numbers deceive you", 2002)

"Bayesian inference is a controversial approach because it inherently embraces a subjective notion of probability. In general, Bayesian methods provide no guarantees on long run performance." (Larry A Wasserman, "All of Statistics: A concise course in statistical inference", 2004)

"The Bayesian approach is based on the following postulates: (B1) Probability describes degree of belief, not limiting frequency. As such, we can make probability statements about lots of things, not just data which are subject to random variation. […] (B2) We can make probability statements about parameters, even though they are fixed constants. (B3) We make inferences about a parameter θ by producing a probability distribution for θ. Inferences, such as point estimates and interval estimates, may then be extracted from this distribution." (Larry A Wasserman, "All of Statistics: A concise course in statistical inference", 2004)

"Statistical inference, or 'learning' as it is called in computer science, is the process of using data to infer the distribution that generated the data." (Larry A Wasserman, "All of Statistics: A concise course in statistical inference", 2004)

"A mental model is conceived […] as a knowledge structure possessing slots that can be filled not only with empirically gained information but also with ‘default assumptions’ resulting from prior experience. These default assumptions can be substituted by updated information so that inferences based on the model can be corrected without abandoning the model as a whole. Information is assimilated to the slots of a mental model in the form of ‘frames’ which are understood here as ‘chunks’ of knowledge with a well-defined meaning anchored in a given body of shared knowledge." (Jürgen Renn, "Before the Riemann Tensor: The Emergence of Einstein’s Double Strategy", "The Universe of General Relativity" Ed. by A.J. Kox & Jean Eisenstaedt, 2005)

"Statistics is the branch of mathematics that uses observations and measurements called data to analyze, summarize, make inferences, and draw conclusions based on the data gathered." (Allan G Bluman, "Probability Demystified", 2005)

"The basic idea of going from an estimate to an inference is simple. Drawing the conclusion with confidence, and measuring the level of confidence, is where the hard work of professional statistics comes in." (Charles Livingston & Paul Voakes, "Working with Numbers and Statistics: A handbook for journalists", 2005)

"The dual meaning of the word significant brings into focus the distinction between drawing a mathematical inference and practical inference from statistical results." (Charles Livingston & Paul Voakes, "Working with Numbers and Statistics: A handbook for journalists", 2005)

"In specific cases, we think by applying mental rules, which are similar to rules in computer programs. In most of the cases, however, we reason by constructing, inspecting, and manipulating mental models. These models and the processes that manipulate them are the basis of our competence to reason. In general, it is believed that humans have the competence to perform such inferences error-free. Errors do occur, however, because reasoning performance is limited by capacities of the cognitive system, misunderstanding of the premises, ambiguity of problems, and motivational factors. Moreover, background knowledge can significantly influence our reasoning performance. This influence can either be facilitation or an impedance of the reasoning process." (Carsten Held et al, "Mental Models and the Mind", 2006)

"Mental models abide by the principle of parsimony: They represent only possibilities compatible with the premises, and they represent clauses in the premises only when they hold in a possibility. Fully explicit models represent clauses when they do not hold too. The advantage of mental models over fully explicit models is that they contain less information, and so they are easier to work with. But they can lead reasoners astray. The occurrence of these systematic and compelling fallacies is shocking. The model theory predicts them, and they are a 'litmus' test for mental models, because no other current theory predicts them. They have so far resisted explanation by theories of reasoning based on formal rules of inference, because these theories rely on valid rules." (Philip N Johnson-Laird, Mental Models, Sentential Reasoning, and Illusory Inferences, [in "Mental Models and the Mind"], 2006)

"In mathematics, the first principles are called axioms, and the rules are referred to as deduction/inference rules. A proof is a series of steps based on the (adopted) axioms and deduction rules which reaches a desired conclusion. Every step in a proof can be checked for correctness by examining it to ensure that it is logically sound." (Cristian S Calude et al, "Proving and Programming", 2007)

"[…] statistics is the key discipline for predicting the future or for making inferences about the unknown, or for producing convenient summaries of data." (David J Hand, "Statistics: A Very Short Introduction", 2008)

"Case-based reasoning is a paradigm in machine learning whose idea is that a new problem can be solved by noticing its similarity to a set of problems previously solved. Case-based reasoning regards the inference of some proper conclusions related to a new situation by the analysis of similar cases from a memory of previous cases. Very often similarity between two objects is expressed on a graded scale and this justifies application of fuzzy sets in this context." (Salvatore Greco et al, "Granular Computing and Data Mining for Ordered Data: The Dominance-Based Rough Set Approach", 2009)

"Causal inference is different, because a change in the system is contemplated - an intervention. Descriptive statistics tell you about the data that you happen to have. Causal models claim to tell you what will happen to some of the numbers if you intervene to change other numbers." (David A. Freedman, "Statistical Models: Theory and Practice", 2009)

"Traditional statistics is strong in devising ways of describing data and inferring distributional parameters from sample. Causal inference requires two additional ingredients: a science-friendly language for articulating causal knowledge, and a mathematical machinery for processing that knowledge, combining it with data and drawing new causal conclusions about a phenomenon." (Judea Pearl, "Causal inference in statistics: An overview", Statistics Surveys 3, 2009)
