"A law explains a set of observations; a theory explains a set of laws. […] a law applies to observed phenomena in one domain (e.g., planetary bodies and their movements), while a theory is intended to unify phenomena in many domains. […] Unlike laws, theories often postulate unobservable objects as part of their explanatory mechanism." (John L Casti, "Searching for Certainty: How Scientists Predict the Future", 1990)
"[…] semantic nets fail to be distinctive in the way they (1) represent propositions, (2) cluster information for access, (3) handle property inheritance, and (4) handle general inference; in other words, they lack distinctive representational properties (i.e., 1) and distinctive computational properties (i.e., 2-4). Certain propagation mechanisms, notably 'spreading activation', 'intersection search', or 'inference propagation' have sometimes been regarded as earmarks of semantic nets, but since most extant semantic nets lack such mechanisms, they cannot be considered criterial in current usage." (Lenhart K Schubert, "Semantic Nets are in the Eye of the Beholder", 1990)
"It is important to emphasize the value of simplicity and elegance, for complexity has a way of compounding difficulties and as we have seen, creating mistakes. My definition of elegance is the achievement of a given functionality with a minimum of mechanism and a maximum of clarity." (Fernando J Corbató, "On Building Systems That Will Fail", 1991)
"[…] the standard theory of chaos deals with time evolutions that come back again and again close to where they were earlier. Systems that exhibit this eternal return" are in general only moderately complex. The historical evolution of very complex systems, by contrast, is typically one way: history does not repeat itself. For these very complex systems with one-way evolution it is usually clear that sensitive dependence on initial condition is present. The question is then whether it is restricted by regulation mechanisms, or whether it leads to long-term important consequences." (David Ruelle, "Chance and Chaos", 1991)
"What we now call chaos is a time evolution with sensitive dependence on initial condition. The motion on a strange attractor is thus chaotic. One also speaks of deterministic noise when the irregular oscillations that are observed appear noisy, but the mechanism that produces them is deterministic." (David Ruelle, "Chance and Chaos", 1991)
"The systems' basic components are treated as sets of rules. The systems rely on three key mechanisms: parallelism, competition, and recombination. Parallelism permits the system to use individual rules as building blocks, activating sets of rules to describe and act upon the changing situations. Competition allows the system to marshal its rules as the situation demands, providing flexibility and transfer of experience. This is vital in realistic environments, where the agent receives a torrent of information, most of it irrelevant to current decisions. The procedures for adaptation - credit assignment and rule discovery - extract useful, repeatable events from this torrent, incorporating them as new building blocks. Recombination plays a key role in the discovery process, generating plausible new rules from parts of tested rules. It implements the heuristic that building blocks useful in the past will prove useful in new, similar contexts." (John H Holland, "Complex Adaptive Systems", Daedalus Vol. 121 (1), 1992)
"There must be, however, cybernetic or homeostatic mechanisms for preventing the overall variables of the social system from going beyond a certain range. There must, for instance, be machinery for controlling the total numbers of the population; there must be machinery for controlling conflict processes and for preventing perverse social dynamic processes of escalation and inflation. One of the major problems of social science is how to devise institutions which will combine this overall homeostatic control with individual freedom and mobility." (Kenneth Boulding, "Economics of the coming spaceship Earth", 1994)
"The second law of thermodynamics, which requires average entropy (or disorder) to increase, does not in any way forbid local order from arising through various mechanisms of self-organization, which can turn accidents into frozen ones producing extensive regularities. Again, such mechanisms are not restricted to complex adaptive systems." (Murray Gell-Mann, "What is Complexity?", Complexity Vol 1 (1), 1995)
"By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modification of a precursor, system, because any precursors to an irreducibly complex system that is missing a part is by definition nonfunctional." (Michael Behe, "Darwin’s Black Box", 1996)
"So we pour in data from the past to fuel the decision-making mechanisms created by our models, be they linear or nonlinear. But therein lies the logician's trap: past data from real life constitute a sequence of events rather than a set of independent observations, which is what the laws of probability demand. [...] It is in those outliers and imperfections that the wildness lurks." (Peter L Bernstein, "Against the Gods: The Remarkable Story of Risk", 1996)
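A small sketch (my own, with arbitrary parameters) of Bernstein's point that past data form a sequence rather than a set of independent observations: lag-1 autocorrelation is near zero for independent draws but large for a series that remembers its own history.

import random

def lag1_autocorr(xs):
    """Lag-1 autocorrelation of a series."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[t] - mean) * (xs[t - 1] - mean) for t in range(1, n))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

random.seed(0)
iid = [random.gauss(0, 1) for _ in range(2000)]       # independent observations
ar = [0.0]
for _ in range(1999):
    ar.append(0.9 * ar[-1] + random.gauss(0, 1))      # each value depends on the previous one

print(f"independent draws: lag-1 autocorrelation ~ {lag1_autocorr(iid):.2f}")
print(f"dependent series : lag-1 autocorrelation ~ {lag1_autocorr(ar):.2f}")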
"Paradigms are the most general - rather like a philosophical or ideological framework. Theories are more specific, based on the paradigm and designed to describe what happens in one of the many realms of events encompassed by the paradigm. Models are even more specific, providing the mechanisms by which events occur in a particular part of the theory's realm. Of all three, models are most affected by empirical data - models come and go, theories only give way when evidence is overwhelmingly against them and paradigms stay put until a radically better idea comes along." (Lee R Beach, "The Psychology of Decision Making: People in Organizations", 1997)
"Suppose the reasoning centers of the brain can get their hands on the mechanisms that plop shapes into the array and that read their locations out of it. Those reasoning demons can exploit the geometry of the array as a surrogate for keeping certain logical constraints in mind. Wealth, like location on a line, is transitive: if A is richer than B, and B is richer than C, then A is richer than C. By using location in an image to symbolize wealth, the thinker takes advantage of the transitivity of location built into the array, and does not have to enter it into a chain of deductive steps. The problem becomes a matter of plop down and look up. It is a fine example of how the form of a mental representation determines what is easy or hard to think." (Steven Pinker, "How the Mind Works", 1997)
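A toy rendering (names and numbers are mine) of Pinker's "plop down and look up": once wealth is encoded as a position on a line, any richer-than question is answered by a single comparison rather than a chain of deductive steps.

position = {}                      # person -> location on the mental "array"

def place(person, wealth):
    position[person] = wealth      # plop the shape down once

def richer_than(a, b):
    return position[a] > position[b]   # look the answer up

place("A", 3.0)
place("B", 2.0)
place("C", 1.0)
print(richer_than("A", "C"))       # True: transitivity comes free with the geometry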
"In our analysis of complex systems (like the brain and language) we must avoid the trap of trying to find master keys. Because of the mechanisms by which complex systems structure themselves, single principles provide inadequate descriptions. We should rather be sensitive to complex and self-organizing interactions and appreciate the play of patterns that perpetually transforms the system itself as well as the environment in which it operates." (Paul Cilliers, "Complexity and Postmodernism: Understanding Complex Systems", 1998)
"The subject of probability begins by assuming that some mechanism of uncertainty is at work giving rise to what is called randomness, but it is not necessary to distinguish between chance that occurs because of some hidden order that may exist and chance that is the result of blind lawlessness. This mechanism, figuratively speaking, churns out a succession of events, each individually unpredictable, or it conspires to produce an unforeseeable outcome each time a large ensemble of possibilities is sampled." (Edward Beltrami, "Chaos and Order in Mathematics and Life", 1999)
"The three basic mechanisms of averaging, feedback and division of labor give us a first idea of how a CMM [Collective Mental Map] can be developed in the most efficient way, that is, how a given number of individuals can achieve a maximum of collective problem-solving competence. A collective mental map is developed basically by superposing a number of individual mental maps. There must be sufficient diversity among these individual maps to cover an as large as possible domain, yet sufficient redundancy so that the overlap between maps is large enough to make the resulting graph fully connected, and so that each preference in the map is the superposition of a number of individual preferences that is large enough to cancel out individual fluctuations. The best way to quickly expand and improve the map and fill in gaps is to use a positive feedback that encourages individuals to use high preference paths discovered by others, yet is not so strong that it discourages the exploration of new paths." (Francis Heylighen, "Collective Intelligence and its Implementation on the Web", 1999)
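A rough sketch of the mechanisms in Heylighen's description, with all names and weights invented for illustration: averaging superposes individual preference maps onto shared link weights, and a mild positive feedback reinforces paths that others actually use.

from collections import defaultdict

shared_map = defaultdict(float)    # (from_node, to_node) -> collective preference

def superpose(individual_map):
    """Averaging: add one individual's link preferences into the shared map."""
    for link, preference in individual_map.items():
        shared_map[link] += preference

def reinforce(path, strength=0.1):
    """Positive feedback: traversing a path makes its links slightly more attractive."""
    for link in zip(path, path[1:]):
        shared_map[link] += strength

superpose({("start", "a"): 1.0, ("a", "goal"): 0.5})   # one individual's map
superpose({("start", "b"): 0.8, ("b", "goal"): 0.9})   # another, overlapping at the endpoints
reinforce(["start", "b", "goal"])                      # a path someone actually used
print(dict(shared_map))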
"This distinction is familiar in natural science, where one is not expected to mistake, say, the cardiovascular system for the circulation of the blood or the brain for mental processes. But it is unusual in social studies. [...] Mechanism is to system as motion is to body, combination (or dissociation) to chemical compound, and thinking to brain. [In the systemic view], agency is both constrained and motivated by structure, and in turn the latter is maintained or altered by individual action. In other words, social mechanisms reside neither in persons nor in their environment – they are part of the processes that unfold in or among social systems. […] All mechanisms are system-specific: there is no such thing as a universal or substrate-neutral mechanism." (Mario Bunge, "The Sociology-philosophy Connection", 1999)
"What it means for a mental model to be a structural analog is that it embodies a representation of the spatial and temporal relations among, and the causal structures connecting the events and entities depicted and whatever other information that is relevant to the problem-solving tasks. […] The essential points are that a mental model can be nonlinguistic in form and the mental mechanisms are such that they can satisfy the model-building and simulative constraints necessary for the activity of mental modeling." (Nancy J Nersessian, "Model-based reasoning in conceptual change", 1999)