
08 June 2023

Mental Models LXIII (Limitations VIII)

"Beliefs are generalizations about the past projected onto the present and future to shape it in the image of the past. [...] When we generalize from incomplete or unrepresentative experience, we form mental models that make the wrong predictions, but because beliefs act as self-fulfilling prophecies it is hard to find out, because we are less open to counter examples." (Joseph O’Connor, "Leading With NLP: Essential Leadership Skills for Influencing and Managing People", 1998)

"People’s mental models are apt to be deficient in a number of ways, perhaps including contradictory, erroneous, and unnecessary concepts. As designers, it is our duty to develop systems and instructional materials that aid users to develop more coherent, useable mental models. As teachers, it is our duty to develop conceptual models that will aid the learner to develop adequate and appropriate mental models. And as scientists who are interested in studying people’s mental models, we must develop appropriate experimental methods and discard our hopes of finding neat, elegant mental models, but instead learn to understand the messy, sloppy, incomplete, and indistinct structures that people actually have." (Donald A Norman, "Some Observations on Mental Models" [in "Mental Models", Ed(s). Dedre Gentner & Albert L Stevens], 1983)

"To begin with, we must understand that any mindset consists of mental models, or concepts, that influence our interpretation of situations and predispose us to certain responses. These models, which are replete with beliefs and assumptions, thus strongly determine the way we understand the world and act in it. The irony is, they become so ingrained in us, as tendencies and predispositions, that we seldom pay attention to them." (Stephen G Haines, "The Manager's Pocket Guide to Strategic and Business Planning", 1998)

"Short-term memory can hold 7 ± 2 chunks of information at once. This puts a rather sharp limit on the effective size and complexity of a causal map. Presenting a complex causal map all at once makes it hard to see the loops, understand which are important, or understand how they generate the dynamics. Resist the temptation to put all the loops you and your clients have identified into a single comprehensive diagram." (John D Sterman, "Business Dynamics Systems Thinking and Modeling for a Complex World", 2000)

"Our mental maps are often not terribly accurate, based as they are on our own selective experience, our knowledge and ignorance, and the information and misinformation we gain from others; nevertheless, these are the maps we depend on every day." (Peter Turchi, "Maps of the Imagination: The writer as cartographer", 2004)

"The most serious problem in applied ethics, or at least in business ethics, is not that we frame experiences; it is not that these mental models are incomplete, sometimes biased, and surely parochial. The larger problem is that most of us either individually or as managers do not realize that we are framing, disregarding data, ignoring counterevidence, or not taking into account other points of view." (Patricia H Werhane "A Place for Philosophers in Applied Ethics and the Role of Moral Reasoning in Moral Imagination", Business Ethics Quarterly 16 (3), 2007)

"Although good ethical decision-making requires us carefully to take into account as much relevant information as is available to us, we have good reason to think that we commonly fall well short of this standard – either by overlooking relevant facts completely or by underestimating their significance. The mental models we employ can contribute to this problem. As we have explained, mental models frame our experiences in ways that both aid and hinder our perceptions. They enable us to focus selectively on ethically relevant matters. By their very nature, they provide incomplete perspectives, resulting in bounded awareness and bounded ethicality. Insofar as our mental modeling practices result in unwarranted partiality, or even ethical blindness, the desired reflective process is distorted. This distortion is aggravated by the fact that our mental models can have this distorting effect without our consciously realizing it. Thus, although we cannot do without mental models, they leave us all vulnerable to blindness and, insofar as we are unaware of this, self-deception." (Patricia H Werhane et al, "Obstacles to Ethical: Decision-Making Mental Models, Milgram and the Problem of Obedience", 2013)

"Mental models serve to conceptualize, focus and shape our experiences, but in so doing, they sometimes cause us to ignore data and occlude critical reflection that might be relevant or, indeed, necessary to practical decision-making. [...] distorting mental models are the foundation
or underpinning of many of the impediments to effective ethical decision-making." (Patricia H Werhane et al, "Obstacles to Ethical: Decision-Making Mental Models, Milgram and the Problem of Obedience",  2013)

"We identify and analyze distorting mental models that constitute experience in a manner that occludes the moral dimension of situations from view, thereby thwarting the first step of ethical decision-making. Examples include an unexamined moral self-image, viewing oneself as merely a bystander, and an exaggerated conception of self-sufficiency. These mental models, we argue, generate blind spots to ethics, in the sense that they limit our ability to see facts that are right before our eyes – sometimes quite literally, as in the many examples of managers and employees who see unethical behavior take place in front of them, but do not recognize it as such." (Patricia H Werhane et al, "Obstacles to Ethical: Decision-Making Mental Models, Milgram and the Problem of Obedience",  2013)

07 August 2022

Decision Theory I

"Years ago a statistician might have claimed that statistics deals with the processing of data [...] today’s statistician will be more likely to say that statistics is concerned with decision making in the face of uncertainty." (Herman Chernoff & Lincoln E Moses, "Elementary Decision Theory", 1959)

"Another approach to management theory, undertaken by a growing and scholarly group, might be referred to as the decision theory school. This group concentrates on rational approach to decision-the selection from among possible alternatives of a course of action or of an idea. The approach of this school may be to deal with the decision itself, or to the persons or organizational group making the decision, or to an analysis of the decision process. Some limit themselves fairly much to the economic rationale of the decision, while others regard anything which happens in an enterprise the subject of their analysis, and still others expand decision theory to cover the psychological and sociological aspect and environment of decisions and decision-makers." (Harold Koontz, "The Management Theory Jungle," 1961)

"The term hypothesis testing arises because the choice as to which process is observed is based on hypothesized models. Thus hypothesis testing could also be called model testing. Hypothesis testing is sometimes called decision theory. The detection theory of communication theory is a special case." (Fred C Scweppe, "Uncertain dynamic systems", 1973)

"In decision theory, mathematical analysis shows that once the sampling distribution, loss function, and sample are specified, the only remaining basis for a choice among different admissible decisions lies in the prior probabilities. Therefore, the logical foundations of decision theory cannot be put in fully satisfactory form until the old problem of arbitrariness (sometimes called 'subjectiveness') in assigning prior probabilities is resolved." (Edwin T Jaynes, "Prior Probabilities", 1978)

"Decision theory, as it has grown up in recent years, is a formalization of the problems involved in making optimal choices. In a certain sense - a very abstract sense, to be sure - it incorporates among others operations research, theoretical economics, and wide areas of statistics, among others." (Kenneth Arrow, "The Economics of Information", 1984) 

"Cybernetics is concerned with scientific investigation of systemic processes of a highly varied nature, including such phenomena as regulation, information processing, information storage, adaptation, self-organization, self-reproduction, and strategic behavior. Within the general cybernetic approach, the following theoretical fields have developed: systems theory (system), communication theory, game theory, and decision theory." (Fritz B Simon et al, "Language of Family Therapy: A Systemic Vocabulary and Source Book", 1985)

"A field of study that includes a methodology for constructing computer simulation models to achieve better under-standing of social and corporate systems. It draws on organizational studies, behavioral decision theory, and engineering to provide a theoretical and empirical base for structuring the relationships in complex systems." (Virginia Anderson & Lauren Johnson, "Systems Thinking Basics: From Concepts to Casual Loops", 1997) 

"A decision theory that rests on the assumptions that human cognitive capabilities are limited and that these limitations are adaptive with respect to the decision environments humans frequently encounter. Decision are thought to be made usually without elaborate calculations, but instead by using fast and frugal heuristics. These heuristics certainly have the advantage of speed and simplicity, but if they are well matched to a decision environment, they can even outperform maximizing calculations with respect to accuracy. The reason for this is that many decision environments are characterized by incomplete information and noise. The information we do have is usually structured in a specific way that clever heuristics can exploit." (E Ebenhoh, "Agent-Based Modelnig with Boundedly Rational Agents", 2007)

05 August 2022

Brian Christian - Collected Quotes

"As with all issues involving overfitting, how early to stop depends on the gap between what you can measure and what really matters. If you have all the facts, they’re free of all error and uncertainty, and you can directly assess whatever is important to you, then don’t stop early. Think long and hard: the complexity and effort are appropriate." (Brian Christian & Thomas L Griffiths, "Algorithms to Live By: The Computer Science of Human Decisions", 2016)

"Bayes’s Rule tells us that when it comes to making predictions based on limited evidence, few things are as important as having good priors - that is, a sense of the distribution from which we expect that evidence to have come. Good predictions thus begin with having good instincts about when we’re dealing with a normal distribution and when with a power-law distribution. As it turns out, Bayes’s Rule offers us a simple but dramatically different predictive rule of thumb for each." (Brian Christian & Thomas L Griffiths, "Algorithms to Live By: The Computer Science of Human Decisions", 2016)

"Game theory covers an incredibly broad spectrum of scenarios of cooperation and competition, but the field began with those resembling heads-up poker: two-person contests where one player’s gain is another player’s loss. Mathematicians analyzing these games seek to identify a so-called equilibrium: that is, a set of strategies that both players can follow such that neither player would want to change their own play, given the play of their opponent. It’s called an equilibrium because it’s stable—no amount of further reflection by either player will bring them to different choices. I’m content with my strategy, given yours, and you’re content with your strategy, given mine." (Brian Christian & Thomas L Griffiths, "Algorithms to Live By: The Computer Science of Human Decisions", 2016)

"In other words, overfitting poses a danger any time we’re dealing with noise or mismeasurement—and we almost always are. There can be errors in how the data were collected, or in how they were reported. Sometimes the phenomena being investigated, such as human happiness, are hard to even define, let alone measure. Thanks to their flexibility, the most complex models available to us can fit any patterns that appear in the data, but this means that they will also do so even when those patterns are mere phantoms and mirages in the noise." (Brian Christian & Thomas L Griffiths, "Algorithms to Live By: The Computer Science of Human Decisions", 2016)

"Many prediction algorithms, for instance, start out by searching for the single most important factor rather than jumping to a multi-factor model. Only after finding that first factor do they look for the next most important factor to add to the model, then the next, and so on. Their models can therefore be kept from becoming overly complex simply by stopping the process short, before overfitting has had a chance to creep in. A related approach to calculating predictions considers one data point at a time, with the model tweaked to account for each new point before more points are added; there, too, the complexity of the model increases gradually, so stopping the process short can help keep it from overfitting." (Brian Christian & Thomas L Griffiths, "Algorithms to Live By: The Computer Science of Human Decisions", 2016)

"Randomness seems like the opposite of reason - a form of giving up on a problem, a last resort. Far from it. The surprising and increasingly important role of randomness in computer science shows us that making use of chance can be a deliberate and effective part of approaching the hardest sets of problems. In fact, there are times when nothing else will do."  (Brian Christian & Thomas L Griffiths, "Algorithms to Live By: The Computer Science of Human Decisions", 2016)

"The effectiveness of regularization in all kinds of machine-learning tasks suggests that we can make better decisions by deliberately thinking and doing less. If the factors we come up with first are likely to be the most important ones, then beyond a certain point thinking more about a problem is not only going to be a waste of time and effort - it will lead us to worse solutions. Early Stopping provides the foundation for a reasoned argument against reasoning, the thinking person’s case against thought. But turning this into practical advice requires answering one more question: when should we stop thinking?" (Brian Christian & Thomas L Griffiths, "Algorithms to Live By: The Computer Science of Human Decisions", 2016)

"The greater the uncertainty, the bigger the gap between what you can measure and what matters, the more you should watch out for overfitting - that is, the more you should prefer simplicity." (Brian Christian & Thomas L Griffiths, "Algorithms to Live By: The Computer Science of Human Decisions", 2016)

27 June 2021

Herbert A Simon - Collected Quotes

"All behavior involves conscious or unconscious selection of particular actions out of all those which are physically possible to the actor and to those persons over whom he exercises influence and authority." (Herbert A Simon, "Administrative Behavior: A Study of Decision-making Processes in Administrative Organization", 1947)

"Decision making processes are aimed at finding courses of action that are feasible or satisfactory in the light of multiple goals and constraints." (Herbert A Simon, "Administrative Behavior: A Study of Decision-making Processes in Administrative Organization", 1947)

"In the process of decision those alternatives are chosen which are considered to be appropriate means of reaching desired ends. Ends themselves, however, are often merely instrumental to more final objectives. We are thus led to the conception of a series, or hierarchy, of ends. Rationality has to do with the construction of means-ends chains of this kind." (Herbert A Simon, "Administrative Behavior", 1947)

"It is impossible for the behavior of a single, isolated individual to reach a high degree of rationality. The number of alternatives he must explore is so great, the information he would need to evaluate them so vast that even an approximation to objective rationality is hard to conceive. Individual choice takes place in rationality is hard to conceive. [...] Actual behavior falls short in at least three ways, of objective rationality." (Herbert A Simon, "Administrative Behavior", 1947)

"Many individuals and organization units contribute to every large decision, and the very problem of centralization and decentralization is a problem of arranging the complex system into an effective scheme." (Herbert A Simon, "Administrative Behavior: A Study of Decision-making Processes in Administrative Organization", 1947)

"Rationality requires a choice among all possible alternative behaviors. In actual behavior, only a very few of all these possible alternatives come to mind." (Herbert A Simon, "Administrative Behavior", 1947)

"Rationality requires a complete knowledge and anticipation of the consequences that will follow on each choice. In fact, knowledge of consequences is always fragmentary." (Herbert A Simon, "Administrative Behavior", 1947)

"Roughly speaking, rationality is concerned with the selection of preferred behavior alternatives in terms of some system of values, whereby the consequences of behavior can be evaluated." (Herbert A Simon, "Administrative Behavior", 1947)

"The function of knowledge in the decision-making process is to determine which consequences follow upon which of the alternative strategies. It is the task of knowledge to select from the whole class of possible consequences a more limited subclass, or even (ideally) a single set of consequences correlated with each strategy." (Herbert A Simon, "Administrative Behavior: A Study of Decision-making Processes in Administrative Organization", 1947)

"The principle of bounded rationality [is] the capacity of the human mind for formulating and solving complex problems is very small compared with the size of the problems whose solution is required for objectively rational behavior in the real world - or even for a reasonable approximation to such objective rationality." (Herbert A Simon, "Administrative Behavior", 1947)

"The first consequence of the principle of bounded rationality is that the intended rationality of an actor requires him to construct a simplified model of the real situation in order to deal with it. He behaves rationally with respect to this model, and such behavior is not even approximately optimal with respect to the real world. To predict his behavior we must understand the way in which this simplified model is constructed, and its construction will certainly be related to his psychological properties as a perceiving, thinking, and learning animal." (Herbert A Simon, "Models of Man", 1957)

"The mathematical and computing techniques for making programmed decisions replace man but they do not generally simulate him." (Herbert A Simon, "Management and Corporations 1985", 1960)

"Programs do not merely substitute brute force for human cunning. Increasingly, they imitate-and in some cases improve upon-human cunning." (Herbert A Simon, "Management and Corporations 1985", 1960)

"Roughly, by a complex system I mean one made up of a large number of parts that interact in a nonsimple way. In such systems, the whole is more than the sum of the parts, not in an ultimate, metaphysical sense, but in the important pragmatic sense that, given the properties of the parts and the laws of their interaction, it is not a trivial matter to infer the properties of the whole." (Herbert A Simon, "The Architecture of Complexity", Proceedings of the American Philosophical Society, Vol. 106 (6), 1962)

"Thus, the central theme that runs through my remarks is that complexity frequently takes the form of hierarchy, and that hierarchic systems have some common properties that are independent of their specific content. Hierarchy, I shall argue, is one of the central structural schemes that the architect of complexity uses." (Herbert A Simon, "The Architecture of Complexity", Proceedings of the American Philosophical Society Vol. 106 (6), 1962)

"A mathematical proof, as usually written down, is a sequence of expressions in the state space. But we may also think of the proof as consisting of the sequence of justifications of consecutive proof steps - i.e., the references to axioms, previously-proved theorems, and rules of inference that legitimize the writing down of the proof steps. From this point of view, the proof is a sequence of actions (applications of rules of inference) that, operating initially on the axioms, transform them into the desired theorem." (Herbert A Simon, "The Logic of Heuristic Decision Making", [in "The Logic of Decision and Action"], 1966)

"[...] a problem of design exists when (1) there is a language for naming actions and a language for naming states of the world, (2) there is a need to find an action that will produce a specified state of the world or a specified change in the state of the world, and (3) there is no non-trivial process for translating changes in the state of the world into their corresponding actions." (Herbert A Simon, "The Logic of Heuristic Decision Making", [in "The Logic of Decision and Action"], 1966)

"A problem will be difficult if there are no procedures for generating possible solutions that are guaranteed (or at least likely) to generate the actual solution rather early in the game. But for such a procedure to exist, there must be some kind of structural relation, at least approximate, between the possible solutions as named by the solution-generating process and these same solutions as named in the language of the problem statement." (Herbert A Simon, "The Logic of Heuristic Decision Making", [in "The Logic of Decision and Action"], 1966)

"An adaptive organism is connected with its environment by two kinds of channels. Afferent channels give it information about the state of the environment; efferent channels cause action on the environment. Problem statements define solutions in terms of afferent information to the organism; the organism's task is to discover a set of efferent signals which, changing the state of the environment, will produce the appropriate afferent. But, ab initio, the mapping of efferents on afferents is entirely arbitrary; the relations can only be discovered by experiment, by acting and observing the consequences of action." (Herbert A Simon, "The Logic of Heuristic Decision Making", [in "The Logic of Decision and Action"], 1966)

"Design problems - generating or discovering alternatives - are complex largely because they involve two spaces, an action space and a state space, that generally have completely different structures. To find a design requires mapping the former of these on the latter. For many, if not most, design problems in the real world systematic algorithms are not known that guarantee solutions with reasonable amounts of computing effort. Design uses a wide range of heuristic devices - like means-end analysis, satisficing, and the other procedures that have been outlined - that have been found by experience to enhance the efficiency of search. Much remains to be learned about the nature and effectiveness of these devices." (Herbert A Simon, "The Logic of Heuristic Decision Making", [in "The Logic of Decision and Action"], 1966)

"Every problem-solving effort must begin with creating a representation for the problem - a problem space in which the search for the solution can take place. Of course, for most of the problems we encounter in our daily personal or professional lives, we simply retrieve from memory a representation that we have already stored and used on previous occasions. Sometimes, we have to adapt the representation a bit to the new situation, but that is usually a rather simple matter." (Herbert A Simon, "The Sciences of the Artificial", 1968)

"Natural science is knowledge about natural objects and phenomena." (Herbert A Simon, "The Sciences of the Artificial", 1968)

"Learning is any change in a system that produces a more or less permanent change in its capacity for adapting to its environment. Understanding systems, especially systems capable of understanding problems in new task domains, are learning systems." (Herbert A Simon, "The Sciences of the Artificial", 1968)

"Making discoveries belongs to the class of ill-structured problem-solving tasks that have relatively ill-defined goals." (Herbert A Simon, "The Sciences of the Artificial", 1968)

"Solving a problem simply means representing it so as to make the solution transparent." (Herbert A Simon, "The Sciences of the Artificial", 1968)

"The central task of a natural science is to make the wonderful commonplace: to show that complexity, correctly viewed, is only a mask for simplicity; to find pattern hidden in apparent chaos. […] This is the task of natural science: to show that the wonderful is not incomprehensible, to show how it can be comprehended - but not to destroy wonder. For when we have explained the wonderful, unmasked the hidden pattern, a new wonder arises at how complexity was woven out of simplicity. The aesthetics of natural science and mathematics is at one with the aesthetics of music and painting - both inhere in the discovery of a partially concealed pattern." (Herbert A Simon, "The Sciences of the Artificial", 1968)

"The more we are willing to abstract from the detail of a set of phenomena, the easier it becomes to simulate the phenomena. Moreover we do not have to know, or guess at, all the internal structure of the system but only that part of it that is crucial to the abstraction." (Herbert A Simon, "The Sciences of the Artificial", 1968)

"[...] in an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it." (Herbert Simon, "Designing Organizations for an Information-Rich World", 1971)

"But the answers provided by the theory of games are sometimes very puzzling and ambiguous. In many situations, no single course of action dominates all the others; instead, a whole set of possible solutions are all equally consistent with the postulates of rationality." (Herbert A Simon et al, "Decision Making and Problem Solving", Interfaces Vol. 17 (5), 1987)

"[...] problem solving generally proceeds by selective search through large sets of possibilities, using rules of thumb (heuristics) to guide the search. Because the possibilities in realistic problem situations are generally multitudinous, trial-and-error search would simply not work; the search must be highly selective." (Herbert A Simon et al, "Decision Making and Problem Solving", Interfaces Vol. 17 (5), 1987)

"The way in which an uncertain possibility is presented may have a substantial effect on how people respond to it." (Herbert A Simon et al, "Decision Making and Problem Solving", Interfaces Vol. 17 (5), 1987)

Amos Tversky - Collected Quotes

"People have erroneous intuitions about the laws of chance. In particular, they regard a sample randomly drawn from a population as highly representative, that is, similar to the population in all essential characteristics. The prevalence of the belief and its unfortunate consequences for psychological research are illustrated by the responses of professional psychologists to a questionnaire concerning research decisions." (Amos Tversky & Daniel Kahneman, "Belief in the law of small numbers", Psychological Bulletin 76(2), 1971)

"Significance levels are usually computed and reported, but power and confidence limits are not. Perhaps they should be." (Amos Tversky & Daniel Kahneman, "Belief in the law of small numbers", Psychological Bulletin 76(2), 1971)

"The emphasis on significance levels tends to obscure a fundamental distinction between the size of an effect and its statistical significance." (Amos Tversky & Daniel Kahneman, "Belief in the law of small numbers", Psychological Bulletin 76(2), 1971)

"[...] the statistical power of many psychological studies is ridiculously low. This is a self-defeating practice: it makes for frustrated scientists and inefficient research. The investigator who tests a valid hypothesis but fails to obtain significant results cannot help but regard nature as untrustworthy or even hostile." (Amos Tversky & Daniel Kahneman, "Belief in the law of small numbers", Psychological Bulletin 76(2), 1971)

"[...] too many users of the analysis of variance seem to regard the reaching of a mediocre level of significance as more important than any descriptive specification of the underlying averages. Our thesis is that people have strong intuitions about random sampling; that these intuitions are wrong in fundamental respects; that these intuitions are shared by naive subjects and by trained scientists; and that they are applied with unfortunate consequences in the course of scientific inquiry. We submit that people view a sample randomly drawn from a population as highly representative, that is, similar to the population in all essential characteristics. Consequently, they expect any two samples drawn from a particular population to be more similar to one another and to the population than sampling theory predicts, at least for small samples." (Amos Tversky & Daniel Kahneman, "Belief in the law of small numbers", Psychological Bulletin 76(2), 1971)

"Chance is commonly viewed as a self-correcting process in which a deviation in one direction induces a deviation in the opposite direction to restore the equilibrium. In fact, deviations are not 'corrected' as a chance process unfolds, they are merely diluted." (Amos Tversky & Daniel Kahneman, "Judgment Under Uncertainty: Heuristics and Biases", Science Vol. 185 (4157), 1974)

"Intuitive judgments of probability are based on a limited number of heuristics that are usually effective but sometimes lead to severe and systematic errors. Research shows, for example, that people judge the probability of a hypothesis by the degree to which it represents the evidence, with little or no regard for its prior probability. Other heuristics lead to an overestimation of the probabilities of highly available or salient events, and to overconfidence in the assessment of subjective probability distributions. These biases are not readily corrected, and they are shared by both naive and statistically sophisticated subjects." (Amos Tversky, "Assessing Uncertainty", Journal of the Royal Statistical Society B Vol. 36 (2), 1974) 

"The theory of expected utility is formulated in terms of an abstract set of consequences, that are the carriers of utilities. The axiomatic theory, by its very nature, leaves the consequences uninterpreted. Any application of the theory, of course, is based on a particular interpretation of the outcomes. Thus, the theory could be valid in one interpretation and invalid in another. The appropriateness of the interpretation, however, cannot be evaluated within the theory." (Amos Tversky, "A Critique of Expected Utility Theory: Descriptive and Normative Considerations", Erkenntnis Vol. 9 (2), 1975)

"A significant property of the value function, called loss aversion, is that the response to losses is more extreme than the response to gains. The common reluctance to accept a fair bet on the toss of a coin suggests that the displeasure of losing a sum of money exceeds the pleasure of winning the same amount. Thus the proposed value function is (i) defined on gains and losses, (ii) generally concave for gains and convex for losses, and (iii) steeper for losses than for gains." (Amos Tversky & Daniel Kahneman, "Rational Choice and the Framing of Decisions", The Journal of Business Vol. 59 (4), 1986)

"An essential condition for a theory of choice that claims normative status is the principle of invariance: different representations of the same choice problem should yield the same preference. That is, the preference between options should be independent of their description. Two characterizations that the decision maker, on reflection, would view as alternative descriptions of the same problem should lead to the same choice-even without the benefit of such reflection." (Amos Tversky & Daniel Kahneman, "Rational Choice and the Framing of Decisions", The Journal of Business Vol. 59 (4), 1986)

"Effective learning takes place only under certain conditions: it requires accurate and immediate feedback about the relation between the situational conditions and the appropriate response. The necessary feedback is often lacking for the decisions made by managers, entrepreneurs, and politicians because (i) outcomes are commonly delayed and not easily attributable to a particular action; (ii) variability in the environment degrades the reliability of the feedback, especially where outcomes of low probability are involved; (iii) there is often no information about what the outcome would have been if another decision had been taken; and (iv) most important decisions are unique and therefore provide little opportunity for learning." (Amos Tversky & Daniel Kahneman, "Rational Choice and the Framing of Decisions", The Journal of Business Vol. 59 (4), 1986)

"The modern theory of decision making under risk emerged from a logical analysis of games of chance rather than from a psychological analysis of risk and value. The theory was conceived as a normative model of an idealized decision maker, not as a description of the behavior of real people." (Amos Tversky & Daniel Kahneman, "Rational Choice and the Framing of Decisions", The Journal of Business Vol. 59 (4), 1986)

"The assumption of rationality has a favored position in economics. It is accorded all the methodological privileges of a self-evident truth, a reasonable idealization, a tautology, and a null hypothesis. Each of these interpretations either puts the hypothesis of rational action beyond question or places the burden of proof squarely on any alternative analysis of belief and choice. The advantage of the rational model is compounded because no other theory of judgment and decision can ever match it in scope, power, and simplicity." (Amos Tversky & Daniel Kahneman, "Rational Choice and the Framing of Decisions", The Journal of Business Vol. 59 (4), 1986)

"Theories of choice are at best approximate and incomplete. One reason for this pessimistic assessment is that choice is a constructive and contingent process. When faced with a complex problem, people employ a variety of heuristic procedures in order to simplify the representation and the evaluation of prospects. These procedures include computational shortcuts and editing operations, such as eliminating common components and discarding nonessential differences. The heuristics of choice do not readily lend themselves to formal analysis because their application depends on the formulation of the problem, the method of elicitation, and the context of choice." (Amos Tversky & Daniel Kahneman, "Advances in Prospect Theory: Cumulative Representation of Uncertainty" [in "Choices, Values, and Frames"], 2000)

"Whenever there is a simple error that most laymen fall for, there is always a slightly more sophisticated version of the same problem that experts fall for." (Amos Tversky)

09 May 2021

On Heuristics II

"Models of bounded rationality describe how a judgement or decision is reached (that is, the heuristic processes or proximal mechanisms) rather than merely the outcome of the decision, and they describe the class of environments in which these heuristics will succeed or fail." (Gerd Gigerenzer & Reinhard Selten [Eds., "Bounded Rationality: The Adaptive Toolbox", 2001)

"A second class of metaphors - mathematical algorithms, heuristics, and models - brings us closer to the world of computer science programs, simulations, and approximations of the brain and its cognitive processes." (Diego Rasskin-Gutman, "Chess Metaphors: Artificial Intelligence and the Human Mind", 2009)

"In particular, the accurate intuitions of experts are better explained by the effects of prolonged practice than by heuristics. We can now draw a richer and more balanced picture, in which skill and heuristics are alternative sources of intuitive judgments and choices." (Daniel Kahneman, "Thinking, Fast and Slow", 2011)

"This is the essence of intuitive heuristics: when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution." (Daniel Kahneman, "Thinking, Fast and Slow", 2011)

"Heuristics are an evolutionary solution to an ongoing problem: we have limited mental resources. As such, they have a very long and thoroughly time-tested history of helping us - on average - make better decisions." (Peter H Diamandis, "Abundance: The Future is Better Than You Think", 2012)

"Heuristics are simplified rules of thumb that make things simple and easy to implement. But their main advantage is that the user knows that they are not perfect, just expedient, and is therefore less fooled by their powers. They become dangerous when we forget that." (Nassim N Taleb, "Antifragile: Things that gain from disorder", 2012)

"The art of reasoned persuasion is an iterative, recursive heuristic, meaning that we must go back and forth between the facts and the rules until we have a good fit. We cannot see the facts properly until we know what framework to place them into, and we cannot determine what framework to place them into until we see the basic contours of the facts." (Joel P Trachtman, "The Tools of Argument", 2013)

"Heuristic decision making is fast and frugal and is often based on the evaluation of one or two salient bits of information." (Amitav Chakravarti, "Why People (Don’t) Buy: The Go and Stop Signals", 2015)

"A heuristic is a strategy we derive from previous experience with a similar problem." (Darius Foroux, "Think Straight", 2017)

"The social world that humans have made for themselves is so complex that the mind simplifies the world by using heuristics, customs, and habits, and by making models or assumptions about how things generally work (the ‘causal structure of the world’). And because people rely upon (and are invested in) these mental models, they usually prefer that they remain uncontested." (Dr James Brennan, "Psychological  Adjustment to Illness and Injury", West of England Medical Journal Vol. 117 (2), 2018)

19 April 2021

On Sampling (-1949)

"By a small sample we may judge of the whole piece." (Miguel de Cervantes, "Don Quixote de la Mancha", 1605–1615)

"To a very striking degree our culture has become a Statistical culture. Even a person who may never have heard of an index number is affected [...] by [...] of those index numbers which describe the cost of living. It is impossible to understand Psychology, Sociology, Economics, Finance or a Physical Science without some general idea of the meaning of an average, of variation, of concomitance, of sampling, of how to interpret charts and tables." (Carrol D Wright, 1887)

"If the number of experiments be very large, we may have precise information as to the value of the mean, but if our sample be small, we have two sources of uncertainty: (I) owing to the 'error of random sampling' the mean of our series of experiments deviates more or less widely from the mean of the population, and (2) the sample is not sufficiently large to determine what is the law of distribution of individuals." (William S Gosset, "The Probable Error of a Mean", Biometrika, 1908)

"The postulate of randomness thus resolves itself into the question, 'of what population is this a random sample?' which must frequently be asked by every practical statistician." (Ronald Fisher, "On the Mathematical Foundation of Theoretical Statistics", Philosophical Transactions of the Royal Society of London Vol. A222, 1922)

"The principle underlying sampling is that a set of objects taken at random from a larger group tends to reproduce the characteristics of that larger group: this is called the Law of Statistical Regularity. There are exceptions to this rule, and a certain amount of judgment must be exercised, especially when there are a few abnormally large items in the larger group. With erratic data, the accuracy of sampling can often be tested by comparing several samples. On the whole, the larger the sample the more closely will it tend to resemble the population from which it is taken; too small a sample would not give reliable results." (Lewis R Connor, "Statistics in Theory and Practice", 1932)

"If the chance of error alone were the sole basis for evaluating methods of inference, we would never reach a decision, but would merely keep increasing the sample size indefinitely." (C West Churchman, "Theory of Experimental Inference", 1948)

"If significance tests are required for still larger samples, graphical accuracy is insufficient, and arithmetical methods are advised. A word to the wise is in order here, however. Almost never does it make sense to use exact binomial significance tests on such data - for the inevitable small deviations from the mathematical model of independence and constant split have piled up to such an extent that the binomial variability is deeply buried and unnoticeable. Graphical treatment of such large samples may still be worthwhile because it brings the results more vividly to the eye." (Frederick Mosteller & John W Tukey, "The Uses and Usefulness of Binomial Probability Paper?", Journal of the American Statistical Association 44, 1949) 

07 March 2021

Information Overload III

"Every person seems to have a limited capacity to assimilate information, and if it is presented to him too rapidly and without adequate repetition, this capacity will be exceeded and communication will break down." (R Duncan Luce, "Developments in Mathematical Psychology", 1960)

"People today are in danger of drowning in information; but, because they have been taught that information is useful, they are more willing to drown than they need be. If they could handle information, they would not have to drown at all." (Idries Shah, "Reflections", 1968)

"Everyone spoke of an information overload, but what there was in fact was a non-information overload." (Richard S Wurman, "What-If, Could-Be", 1976)

"The greater the uncertainty, the greater the amount of decision making and information processing. It is hypothesized that organizations have limited capacities to process information and adopt different organizing modes to deal with task uncertainty. Therefore, variations in organizing modes are actually variations in the capacity of organizations to process information and make decisions about events which cannot be anticipated in advance." (John K Galbraith, "Organization Design", 1977)

"We are drowning in information but starved for knowledge." (John Naisbitt, "Megatrends: Ten New Directions Transforming Our Lives", 1982)

"In the Information Age, the first step to sanity is FILTERING. Filter the information: extract for knowledge. Filter first for substance. Filter second for significance. […] Filter third for reliability. […] Filter fourth for completeness." (Marc Stiegler, "David’s Sling", 1988)

"It has become evident time and again that when events become too complex and move too rapidly as appears to be the case today, human beings become demonstrably less able to cope." (Alan Greenspan, "The Structure of the International Financial System", 1998)

"Specialization, once a maneuver methodically to collect information, now is a manifestation of information overloads. The role of information has changed. Once justified as a means of comprehending the world, it now generates a conflicting and contradictory, fleeting and fragmentation field of disconnected and undigested data." (Stelarc, From Psycho-Body to Cyber-Systems: Images as Post-human Entities, 1998)

"Our needs going forward will be best served by how we make use of not just this data but all data. We live in an era of Big Data. The world has seen an explosion of information in the past decades, so much so that people and institutions now struggle to keep pace. In fact, one of the reasons for the attachment to the simplicity of our indicators may be an inverse reaction to the sheer and bewildering volume of information most of us are bombarded by on a daily basis. […] The lesson for a world of Big Data is that in an environment with excessive information, people may gravitate toward answers that simplify reality rather than embrace the sheer complexity of it." (Zachary Karabell, "The Leading Indicators: A short history of the numbers that rule our world", 2014)

"Today, technology has lowered the barrier for others to share their opinion about what we should be focusing on. It is not just information overload; it is opinion overload." (Greg McKeown, "Essentialism: The Disciplined Pursuit of Less", 2014)

10 February 2021

Patricia H Werhane - Collected Quotes

"[...] each of us frames, orders and/or organizes our experiences in terms of socially learned incomplete mental models or mind sets that shape our experiences perspectivally. These mental models are constitutive of all our experiences. They are the ways in which we make sense of our experiences [...]" (Patricia H Werhane "A Place for Philosophers in Applied Ethics and the Role of Moral Reasoning in Moral Imagination", Business Ethics Quarterly 16 (3), 2007)

"The most serious problem in applied ethics, or at least in business ethics, is not that we frame experiences; it is not that these mental models are incomplete, sometimes biased, and surely parochial. The larger problem is that most of us either individually or as managers do not realize that we are framing, disregarding data, ignoring counterevidence, or not taking into account other points of view." (Patricia H Werhane "A Place for Philosophers in Applied Ethics and the Role of Moral Reasoning in Moral Imagination", Business Ethics Quarterly 16 (3), 2007)

"Although good ethical decision-making requires us carefully to take into account as much relevant information as is available to us, we have good reason to think that we commonly fall well short of this standard – either by overlooking relevant facts completely or by underestimating their significance. The mental models we employ can contribute to this problem. As we have explained, mental models frame our experiences in ways that both aid and hinder our perceptions. They enable us to focus selectively on ethically relevant matters. By their very nature, they provide incomplete perspectives, resulting in bounded awareness and bounded ethicality. Insofar as our mental modeling practices result in unwarranted partiality, or even ethical blindness, the desired reflective process is distorted. This distortion is aggravated by the fact that our mental models can have this distorting effect without our consciously realizing it. Thus, although we cannot do without mental models, they leave us all vulnerable to blindness and, insofar as we are unaware of this, self-deception." (Patricia H Werhane et al, "Obstacles to Ethical: Decision-Making Mental Models, Milgram and the Problem of Obedience", 2013)

"Because all mental models or mindsets are incomplete, we can engage in second-order studies, evaluations, judgments, and assessments about our own and other operative mental models. Of course this is highly complex since the act of reflection is itself a further of framing or reframing." (Patricia H Werhane et al, "Obstacles to Ethical: Decision-Making Mental Models, Milgram and the Problem of Obedience", 2013)

"It is important to emphasize that the dangers that certain mental models pose to ethical decision-making cannot be mitigated or overcome by imagining that we could somehow free ourselves of the need for mental models altogether. Without mental models to mediate and shape our experiences, we would be incapable of having experiences at all." (Patricia H Werhane et al, "Obstacles to Ethical: Decision-Making Mental Models, Milgram and the Problem of Obedience", 2013)

"Mental models bind our awareness within a particular scaffold and then selectively can filter the content we subsequently receive. Through recalibration using revised mental models, we argue, we cultivate strategies anew, creating new habits, and galvanizing more intentional and evolved mental models. This recalibration often entails developing a strong sense of self and self-worth, realizing that each of us has a range of moral choices that may deviate from those in authority, and moral imagination." (Patricia H Werhane et al, "Obstacles to Ethical: Decision-Making Mental Models, Milgram and the Problem of Obedience", 2013)

"Mental models serve to conceptualize, focus and shape our experiences, but in so doing, they sometimes cause us to ignore data and occlude critical reflection that might be relevant or, indeed, necessary to practical decision-making. [...] distorting mental models are the foundation or underpinning of many of the impediments to effective ethical decision-making." (Patricia H Werhane et al, "Obstacles to Ethical: Decision-Making Mental Models, Milgram and the Problem of Obedience", 2013)

"These framing perspectives or mental models construe the data of our experiences, and it is the construed data that we call 'facts'. What we often call reality, or the world, is constructed or socially construed in certain ways such that one cannot get at the source of the data except through these construals." (Patricia H Werhane et al, "Obstacles to Ethical: Decision-Making Mental Models, Milgram and the Problem of Obedience", 2013)

"Various scientific methodologies are themselves mental models through which scientists discover, predict, and hypothesize about what we then call reality. In the social constructionist paradigm such mental models frame all our experiences. They schematize, and otherwise facilitate and guide the ways in which we recognize, react, and organize the world. How we define the world is dependent on such schema and thus all realities are socially structured. In the socially constructed paradigm, the multivariate mental models or conceptual schema are the means and mode through which we constitute our experiences." (Patricia H Werhane et al, "Obstacles to Ethical: Decision-Making Mental Models, Milgram and the Problem of Obedience", 2013)

"We identify and analyze distorting mental models that constitute experience in a manner that occludes the moral dimension of situations from view, thereby thwarting the first step of ethical decision-making. Examples include an unexamined moral self-image, viewing oneself as merely a bystander, and an exaggerated conception of self-sufficiency. These mental models, we argue, generate blind spots to ethics, in the sense that they limit our ability to see facts that are right before our eyes – sometimes quite literally, as in the many examples of managers and employees who see unethical behavior take place in front of them, but do not recognize it as such." (Patricia H Werhane et al, "Obstacles to Ethical: Decision-Making Mental Models, Milgram and the Problem of Obedience", 2013)

21 December 2020

On Nonlinearity IV (Organizations)

"In strategic thinking, one first seeks a clear understanding of the particular character of each element of a situation and then makes the fullest possible use of human brainpower to restructure the elements in the most advantageous way. Phenomena and events in the real word do not always fit a linear model. Hence the most reliable means of dissecting a situation into its constituent parts and reassembling then in the desired pattern is not a step-by-step methodology such as systems analysis. Rather, it is that ultimate nonlinear thinking tool, the human brain. True strategic thinking thus contrasts sharply with the conventional mechanical systems approach based on linear thinking. But it also contrasts with the approach that stakes everything on intuition, reaching conclusions without any real breakdown or analysis. [...] No matter how difficult or unprecedented the problem, a breakthrough to the best possible solution can come only from a combination of rational analysis, based on the real nature of things, and imaginative reintegration of all the different items into a new pattern, using nonlinear brainpower. This is always the most effective approach to devising strategies for dealing successfully with challenges and opportunities, in the market arena as on the battlefield." (Kenichi Ohmae, "The Mind Of The Strategist", 1982)

"So we pour in data from the past to fuel the decision-making mechanisms created by our models, be they linear or nonlinear. But therein lies the logician's trap: past data from real life constitute a sequence of events rather than a set of independent observations, which is what the laws of probability demand.[...] It is in those outliers and imperfections that the wildness lurks." (Peter L Bernstein, "Against the Gods: The Remarkable Story of Risk", 1996)

"In a linear world of equilibrium and predictability, the sparse research into an evidence base for management prescriptions and the confused findings it produces would be a sign of incompetence; it would not make much sense. Nevertheless, if organizations are actually patterns of nonlinear interaction between people; if small changes could produce widespread major consequences; if local interaction produces emergent global pattern; then it will not be possible to provide a reliable evidence base. In such a world, it makes no sense to conduct studies looking for simple causal relationships between an action and an outcome. I suggest that the story of the last few years strongly indicates that human action is nonlinear, that time and place matter a great deal, and that since this precludes simple evidence bases we do need to rethink the nature of organizations and the roles of managers and leaders in them." (Ralph D Stacey, "Complexity and Organizational Reality", 2000)

"The world is nonlinear. Trying to make it linear for our mathematical or administrative convenience is not usually a good idea even when feasible, and it is rarely feasible." (Donella H Meadow, "Thinking in Systems: A Primer", 2008)

"Complexity theory shows that great changes can emerge from small actions. Change involves a belief in the possible, even the 'impossible'. Moreover, social innovators don’t follow a linear pathway of change; there are ups and downs, roller-coaster rides along cascades of dynamic interactions, unexpected and unanticipated divergences, tipping points and critical mass momentum shifts. Indeed, things often get worse before they get better as systems change creates resistance to and pushback against the new. Traditional evaluation approaches are not well suited for such turbulence. Traditional evaluation aims to control and predict, to bring order to chaos. Developmental evaluation accepts such turbulence as the way the world of social innovation unfolds in the face of complexity. Developmental evaluation adapts to the realities of complex nonlinear dynamics rather than trying to impose order and certainty on a disorderly and uncertain world." (Michael Q Patton, "Developmental Evaluation", 2010)

"Internal friction is exacerbated by the fact that in business as in war, we are operating in a nonlinear, semi-chaotic environment in which our endeavors will collide and possibly clash with the actions of other independent wills (customers, suppliers, competitors, regulators, lobbyists, and so on). The internal and external worlds are in constant contact and the effects of our actions are the result of their reciprocal interaction. Friction gives rise to three gaps: the knowledge gap, the alignment gap, and the effects gap. To execute effectively, we must address all three. Our instinctive reaction to the three gaps is to demand more detail. We gather more data in order to craft more detailed plans, issue more detailed instructions, and exercise more detailed control. This not only fails to solve the problem, it usually makes it worse. We need to think about the problem differently and adopt a systemic approach to solving it." (Stephen Bungay, "The Art of Action: How Leaders Close the Gaps between Plans, Actions, and Results", 2010)

"Motivation is a fine example of social complexity. It is nonlinear and sometimes unpredictable. It cannot be defined or modeled with a single diagram." (Jurgen Appelo, "Management 3.0: Leading Agile Developers, Developing Agile Leaders", 2010)

"We have minds that are equipped for certainty, linearity and short-term decisions, that must instead make long-term decisions in a non-linear, probabilistic world. (Paul Gibbons, "The Science of Successful Organizational Change", 2015)

04 December 2020

Fuzzy Logic I

"A fuzzy set is a class of objects with a continuum of grades of membership. Such a set is characterized by a membership (characteristic) function which assigns to each object a grade of membership ranging between zero and one. The notions of inclusion, union, intersection, complement, relation, convexity, etc., are extended to such sets, and various properties of these notions in the context of fuzzy sets are established. In particular, a separation theorem for convex fuzzy sets is proved without requiring that the fuzzy sets be disjoint." (Lotfi A Zadeh, "Fuzzy Sets", 1965)

"The notion of a fuzzy set provides a convenient point of departure for the construction of a conceptual framework which parallels in many respects the framework used in the case of ordinary sets, but is more general than the latter and, potentially, may prove to have a much wider scope of applicability, particularly in the fields of pattern classification and information processing. Essentially, such a framework provides a natural way of dealing with problems in which the source of imprecision is the absence of sharply denned criteria of class membership rather than the presence of random variables." (Lotfi A Zadeh, "Fuzzy Sets", 1965)

"In general, complexity and precision bear an inverse relation to one another in the sense that, as the complexity of a problem increases, the possibility of analysing it in precise terms diminishes. Thus 'fuzzy thinking' may not be deplorable, after all, if it makes possible the solution of problems which are much too complex for precise analysis." (Lotfi A Zadeh, "Fuzzy languages and their relation to human intelligence", 1972)

"Let me say quite categorically that there is no such thing as a fuzzy concept. [...] We do talk about fuzzy things but they are not scientific concepts. Some people in the past have discovered certain interesting things, formulated their findings in a non-fuzzy way, and therefore we have progressed in science." (Rudolf E Kálmán, 1972)

"[Fuzzy logic is] a logic whose distinguishing features are (1) fuzzy truth-values expressed in linguistic terms, e. g., true, very true, more or less true, or somewhat true, false, nor very true and not very false, etc.; (2) imprecise truth tables; and (3) rules of inference whose validity is relative to a context rather than exact." (Lotfi A. Zadeh, "Fuzzy logic and approximate reasoning", 1975)

"[...] much of the information on which human decisions are based is possibilistic rather than probabilistic in nature, and the intrinsic fuzziness of natural languages - which is a logical consequence of the necessity to express information in a summarized form - is, in the main, possibilistic in origin." (Lotfi A Zadeh, "Fuzzy Sets as the Basis for a Theory of Possibility", Fuzzy Sets and Systems, 1978) 

"Philosophical objections may be raised by the logical implications of building a mathematical structure on the premise of fuzziness, since it seems (at least superficially) necessary to require that an object be or not be an element of a given set. From an aesthetic viewpoint, this may be the most satisfactory state of affairs, but to the extent that mathematical structures are used to model physical actualities, it is often an unrealistic requirement. [...] Fuzzy sets have an intuitively plausible philosophical basis. Once this is accepted, analytical and practical considerations concerning fuzzy sets are in most respects quite orthodox." (James Bezdek, 1981)

"Fuzziness, then, is a concomitant of complexity. This implies that as the complexity of a task, or of a system for performing that task, exceeds a certain threshold, the system must necessarily become fuzzy in nature. Thus, with the rapid increase in the complexity of the information processing tasks which the computers are called upon to perform, we are reaching a point where computers will have to be designed for processing of information in fuzzy form. In fact, it is the capability to manipulate fuzzy concepts that distinguishes human intelligence from the machine intelligence of current generation computers. Without such capability we cannot build machines that can summarize written text, translate well from one natural language to another, or perform many other tasks that humans can do with ease because of their ability to manipulate fuzzy concepts." (Lotfi A Zadeh, "The Birth and Evolution of Fuzzy Logic", 1989)

"It is important to observe that there is an intimate connection between fuzziness and complexity. Thus, a basic characteristic of the human brain, a characteristic shared in varying degrees with all information processing systems, is its limited capacity to handle classes of high cardinality, that is, classes having a large number of members. Consequently, when we are presented with a class of very high cardinality, we tend to group its elements together into subclasses in such a way as to reduce the complexity of the information processing task involved. When a point is reached where the cardinality of the class of subclasses exceeds the information handling capacity of the human brain, the boundaries of the subclasses are forced to become imprecise and fuzziness becomes a manifestation of this imprecision." (Lotfi A Zadeh, "The Birth and Evolution of Fuzzy Logic", 1989)

"A fuzzy set can be defined mathematically by assigning to each possible individual in the universe of discourse a value representing its grade of membership in the fuzzy set. This grade corresponds to the degree to which that individual is similar or compatible with the concept represented by the fuzzy set. Thus, individuals may belong in the fuzzy act to a greater or lesser degree as indicated by a larger or smaller membership grade. As already mentioned, these membership grades are very often represented by real-number values ranging in the closed interval between 0 and 1." (George J Klir & Bo Yuan, "Fuzzy Sets and Fuzzy Logic: Theory and Applications", 1995)


02 December 2020

On Engineering VII (Systems Engineering II)

"The term 'systems engineering' is a term with an air of romance and of mystery. The romance and the mystery come from its use in the field of guided missiles, rockets, artificial satellites, and space flight. Much of the work being done in these areas is classified and hence much of it is not known to the general public or to this writer. […] From a business point of view, systems engineering is the creation of a deliberate combination of human services, material services, and machine service to accomplish an information processing job. But this is also very nearly a definition of business system analysis. The difference, from a business point of view, therefore, between business system analysis and systems engineering is only one of degree. In general, systems engineering is more total and more goal-oriented in its approach [...]." ("Computers and People" Vol. 5, 1956)

"By some definitions 'systems engineering' is suggested to be a new discovery. Actually it is a common engineering approach which has taken on a new and important meaning because of the greater complexity and scope of problems to be solved in industry, business, and the military. Newly discovered scientific phenomena, new machines and equipment, greater speed of communications, increased production capacity, the demand for control over ever-extending areas under constantly changing conditions, and the resultant complex interactions, all have created a tremendously accelerating need for improved systems engineering. Systems engineering can be complex, but is simply defined as 'logical engineering within physical, economic and technical limits'  - bridging the gap from fundamental laws to a practical operating system." (Instrumentation Technology, 1957)

"Systems engineering embraces every scientific and technical concept known, including economics, management, operations, maintenance, etc. It is the job of integrating an entire problem or problem to arrive at one overall answer, and the breaking down of this answer into defined units which are selected to function compatibly to achieve the specified objectives. [...] Instrument and control engineering is but one aspect of systems engineering - a vitally important and highly publicized aspect, because the ability to create automatic controls within overall systems has made it possible to achieve objectives never before attainable, While automatic controls are vital to systems which are to be controlled, every aspect of a system is essential. Systems engineering is unbiased, it demands only what is logically required. Control engineers have been the leaders in pulling together a systems approach in the various technologies." (Instrumentation Technology, 1957) 

"Systems engineering is more likely to be closely associated with top management of an enterprise than the engineering of the components of the system. If an engineering task is large and complex enough, the arrangement-making problem is especially difficult. Commonly, in a large job, the first and foremost problem for the systems engineers is to relate the objectives to the technical art. [...] Systems engineering is a highly technical pursuit and if a nontechnical man attempts to direct the systems engineering as such, it must end up in a waste of technical talent below." (Aeronautical Engineering Review Vol. 16, 1957) 

"Systems engineering is the name given to engineering activity which considers the overall behavior of a system, or more generally which considers all factors bearing on a problem, and the systems approach to control engineering problems is correspondingly that approach which examines the total dynamic behavior of an integrated system. It is concerned more with quality of performance than with sizes, capacities, or efficiencies, although in the most general sense systems engineering is concerned with overall, comprehensive appraisal." (Ernest F Johnson, "Automatic process control", 1958)

"There are two types of systems engineering - basis and applied. [...] Systems engineering is, obviously, the engineering of a system. It usually, but not always, includes dynamic analysis, mathematical models, simulation, linear programming, data logging, computing, optimating, etc., etc. It connotes an optimum method, realized by modern engineering techniques. Basic systems engineering includes not only the control system but also all equipment within the system, including all host equipment for the control system. Applications engineering is - and always has been - all the engineering required to apply the hardware of a hardware manufacturer to the needs of the customer. Such applications engineering may include, and always has included where needed, dynamic analysis, mathematical models, simulation, linear programming, data logging, computing, and any technique needed to meet the end purpose - the fitting of an existing line of production hardware to a customer's needs. This is applied systems engineering." (Instruments and Control Systems Vol. 31, 1958)

"Systems Engineering Methods is directed towards the development of a broad systems engineering approach to help such people improve their decision-making capability. Although the emphasis is on engineering, the systems approach can also has validity for many other areas in which emphasis may be social, economic, or political." (Harold Chestnut, "Systems Engineering Methods", 1965) 

"Systems Engineering is the science of designing complex systems in their totality to ensure that the component sub-systems making up the system are designed, fitted together, checked and operated in the most efficient way." (Gwilym Jenkins, "The Systems Approach", 1969) 

"System engineering is a robust approach to the design, creation, and operation of systems. In simple terms, the approach consists of identification and quantification of system goals, creation of alternative system design concepts, performance of design trades, selection and implementation of the best design, verification that the design is properly built and integrated, and post-implementation assessment of how well the system meets (or met) the goals." (NASA, "NASA Systems Engineering Handbook", 1995) 

"Systems engineering should be, first and foremost, a state of mind and an attitude taken when dealing with complexity." (Dominique Luzeaux et al, "Complex Systems and Systems of Systems Engineering", 2013) 

05 July 2020

Collective Intelligence II

"Civilization is to groups what intelligence is to individuals. It is a means of combining the intelligence of many to achieve ongoing group adaptation. […] Civilization, like intelligence, may serve well, serve adequately, or fail to serve its adaptive function. When civilization fails to serve, it must disintegrate unless it is acted upon by unifying internal or external forces." (Octavia E Butler, "Parable of the Sower", 1993)

"Great leaders reinforce the idea that accomplishment in our society comes from great individual acts. We credit individuals for outcomes that required teams and communities to accomplish." (Peter Block, "Stewardship", 1993)

"We must learn to think together in an integrated, synergistic fashion, rather than in fragmented and competitive ways." (Joanna Macy, Noetic Sciences Bulletin, 1994-1995)

"The leading edge of growth of intelligence is at the cultural and societal level. It is like a mind that is struggling to wake up. This is necessary because the most difficult problems we face are now collective ones. They are caused by complex global interactions and are beyond the scope of individuals to understand and solve. Individual mind, with its isolated viewpoints and narrow interests, is no longer enough." (Jeff Wright, "Basic Beliefs", [email] 1995)

"It [collective intelligence] is a form of universally distributed intelligence, constantly enhanced, coordinated in real time, and resulting in the effective mobilization of skills. I'll add the following indispensable characteristic to this definition: The basis and goal of collective intelligence is mutual recognition and enrichment of individuals rather than the cult of fetishized or hypostatized communities." (Pierre Levy, "Collective Intelligence", 1999)

"The three basic mechanisms of averaging, feedback and division of labor give us a first idea of how a CMM [Collective Mental Map] can be developed in the most efficient way, that is, how a given number of individuals can achieve a maximum of collective problem-solving competence. A collective mental map is developed basically by superposing a number of individual mental maps. There must be sufficient diversity among these individual maps to cover an as large as possible domain, yet sufficient redundancy so that the overlap between maps is large enough to make the resulting graph fully connected, and so that each preference in the map is the superposition of a number of individual preferences that is large enough to cancel out individual fluctuations. The best way to quickly expand and improve the map and fill in gaps is to use a positive feedback that encourages individuals to use high preference paths discovered by others, yet is not so strong that it discourages the exploration of new paths." (Francis Heylighen, "Collective Intelligence and its Implementation on the Web", 1999)
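
Heylighen's mechanisms can be caricatured in a few lines of code: superpose individual maps by averaging the preference each person assigns to a link, then apply a mild positive feedback to links that get used. This is only a rough sketch under those assumptions; the link names, grades, and reinforcement rate below are invented for illustration and are not taken from the paper.

# Rough, illustrative sketch: averaging individual preference maps into a
# collective mental map, then reinforcing a used link (positive feedback).
# All link names, weights, and the rate are illustrative assumptions.
individual_maps = [
    {("problem", "option_a"): 0.8, ("problem", "option_b"): 0.2},
    {("problem", "option_a"): 0.6, ("problem", "option_b"): 0.5},
    {("problem", "option_a"): 0.7, ("problem", "option_b"): 0.1},
]

def superpose(maps):
    # Averaging: each link's collective preference is the mean of the
    # individual preferences, which cancels out individual fluctuations.
    links = {link for m in maps for link in m}
    return {link: sum(m.get(link, 0.0) for m in maps) / len(maps) for link in links}

def reinforce(collective, used_link, rate=0.1):
    # Positive feedback: nudge a used link upward, capped at 1.0 so the
    # feedback does not completely crowd out the exploration of new paths.
    collective[used_link] = min(1.0, collective[used_link] + rate)
    return collective

cmm = superpose(individual_maps)
cmm = reinforce(cmm, ("problem", "option_a"))
print(cmm)  # option_a averages to 0.7 and is reinforced to about 0.8; option_b stays near 0.27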

"With the growing interest in complex adaptive systems, artificial life, swarms and simulated societies, the concept of “collective intelligence” is coming more and more to the fore. The basic idea is that a group of individuals (e. g. people, insects, robots, or software agents) can be smart in a way that none of its members is. Complex, apparently intelligent behavior may emerge from the synergy created by simple interactions between individuals that follow simple rules." (Francis Heylighen, "Collective Intelligence and its Implementation on the Web", 1999)

"Cultures are never merely intellectual constructs. They take form through the collective intelligence and memory, through a commonly held psychology and emotions, through spiritual and artistic communion." (Tariq Ramadan, "Islam and the Arab Awakening", 2012)

"[…] recent researchers in artificial intelligence and computational methods use the term swarm intelligence to name collective and distributed techniques of problem solving without centralized control or provision of a global model. […] the intelligence of the swarm is based fundamentally on communication. […] the member of the multitude do not have to become the same or renounce their creativity in order to communicate and cooperate with each other. They remain different in terms of race, sex, sexuality and so forth. We need to understand, then, is the collective intelligence that can emerge from the communication and cooperation of such varied multiplicity." (Antonio Negri, "Multitude: War and Democracy in the Age of Empire", 2004)

"Collective Intelligence (CI) is the capacity of human collectives to engage in intellectual cooperation in order to create, innovate, and invent." (Pierre Levy, "Toward a Self-referential Collective Intelligence", 2009)

Collective Intelligence I

"We must therefore establish a form of decision-making in which voters need only ever pronounce on simple propositions, expressing their opinions only with a yes or a no. […] Clearly, if anyone’s vote was self-contradictory (intransitive), it would have to be discounted, and we should therefore establish a form of voting which makes such absurdities impossible." (Nicolas de Condorcet, "On the form of decisions made by plurality vote", 1788)

"Collective wisdom, alas, is no adequate substitute for the intelligence of individuals. Individuals who opposed received opinions have been the source of all progress, both moral and intellectual. They have been unpopular, as was natural." (Bertrand Russell, "Why I Am Not a Christian", 1927)

"The collective intelligence of any group of people who are thinking as a 'herd' rather than individually is no higher than the intelligence of the stupidest members. (Mary Day Winn, Adam's Rib, 1931)

"Learning is a property of all living organisms. […] Since organized groups can be looked upon as living entities, they can be expected to exhibit learning […]" (Winfred B. Hirschmann, "Profit from the Learning Curve", Harvard Business Review, 1964)

"A cardinal principle in systems theory is that all parties that have a stake in a system should be represented in its management." (Malcolm S Knowles, "The Adult Learner", 1973)

"Collective intelligence emerges when a group of people work together effectively. Collective intelligence can be additive (each adds his or her part which together form the whole) or it can be synergetic, where the whole is greater than the sum of its parts." (Trudy and Peter Johnson-Lenz, "Groupware: Orchestrating the Emergence of Collective Intelligence", cca. 1980)

"Cybernetic information theory suggests the possibility of assuming that intelligence is a feature of any feedback system that manifests a capacity for learning." (Paul Hawken et al, "Seven Tomorrows", 1982)

"The concept of organizational learning refers to the capacity of organizational complexes to develop experiential knowledge, instincts, and 'feel' or intuition which are greater than the combined knowledge, skills and instincts of the individuals involved." (Don E. Kash, "Perpetual Innovation", 1989)

"We haven't worked on ways to develop a higher social intelligence […] We need this higher intelligence to operate socially or we're not going to survive. […] If we don't manage things socially, individual high intelligence is not going to make much difference. [...] Ordinary thought in society is incoherent - it is going in all sorts of directions, with thoughts conflicting and canceling each other out. But if people were to think together in a coherent way, it would have tremendous power." (David Bohm, "New Age Journal", 1989)