09 January 2021

On Networks XI (Neural Networks II)

"The first attempts to consider the behavior of so-called 'random neural nets' in a systematic way have led to a series of problems concerned with relations between the 'structure' and the 'function' of such nets. The 'structure' of a random net is not a clearly defined topological manifold such as could be used to describe a circuit with explicitly given connections. In a random neural net, one does not speak of 'this' neuron synapsing on 'that' one, but rather in terms of tendencies and probabilities associated with points or regions in the net." (Anatol Rapoport, "Cycle distributions in random nets", The Bulletin of Mathematical Biophysics 10(3), 1948)

"The terms 'black box' and 'white box' are convenient and figurative expressions of not very well determined usage. I shall understand by a black box a piece of apparatus, such as a four-terminal network with two input and two output terminals, which performs a definite operation on the present and past of the input potential, but for which we do not necessarily have any information of the structure by which this operation is performed. On the other hand, a white box will be a similar network in which we have built in the relation between input and output potentials in accordance with a definite structural plan for securing a previously determined input-output relation." (Norbert Wiener, "Cybernetics: Or Control and Communication in the Animal and the Machine", 1948)

"Neural nets have no central control in the classical sense. Processing is distributed over the network and the roles of the various components (or groups of components) change dynamically. This does not preclude any part of the network from developing a regulating function, but that will be determined by the evolutionary needs of the system." (Paul Cilliers, "Complexity and Postmodernism: Understanding Complex Systems", 1998)

"Neural networks conserve the complexity of the systems they model because they have complex structures themselves. Neural networks encode information about their environment in a distributed form. […] Neural networks have the capacity to self-organise their internal structure." (Paul Cilliers, "Complexity and Postmodernism: Understanding Complex Systems", 1998)

"The internal structure of a connectionist network develops through a process of self-organisation, whereas rule-based systems have to search through pre-programmed options that define the structure largely in an a priori fashion. In this sense, learning is an implicit characteristic of neural networks. In rule-based systems, learning can only take place through explicitly formulated procedures." (Paul Cilliers, "Complexity and Postmodernism: Understanding Complex Systems", 1998)

"An artificial neural network is a massive parallel distributed processor made up of simple processing units. It has the ability to learn from experiential knowledge expressed through interunit connection strengths, and can make such knowledge available for use." (Yorgos Goletsis et al, "Bankruptcy Prediction through Artificial Intelligence", 2009)

"ANN is a pattern matching technique that uses training data to build a model and uses the model to predict unknown samples. It consists of input, output, and hidden nodes and connections between nodes. The weights of the connections are iteratively adjusted in order to get an accurate model." (Indranil Bose, "Data Mining in Tourism", 2009)
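The iterative weight adjustment Bose describes can be illustrated with a minimal, hypothetical sketch: a single artificial neuron (no hidden layer, for brevity) trained by gradient descent to reproduce the logical AND of its two inputs. All names, values, and parameters here are illustrative choices, not drawn from the quoted sources.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny training set: logical AND of two binary inputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# A single neuron: two input weights and a bias, all starting at zero.
w = [0.0, 0.0]
b = 0.0
lr = 1.0  # learning rate (illustrative choice)

# Iteratively adjust the weights to reduce the squared error on the
# training data -- the "weights ... iteratively adjusted" idea above.
for epoch in range(5000):
    for x, t in data:
        y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        d = (y - t) * y * (1 - y)  # gradient of squared error w.r.t. pre-activation
        w[0] -= lr * d * x[0]
        w[1] -= lr * d * x[1]
        b -= lr * d

# After training, rounding the neuron's output reproduces AND.
predictions = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
```

A full ANN of the kind the quote describes adds hidden nodes between input and output, but the training loop follows the same pattern: compute an output, compare it to the known sample, and nudge every connection weight against the error gradient.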

"Just as they did thirty years ago, machine learning programs (including those with deep neural networks) operate almost entirely in an associational mode. They are driven by a stream of observations to which they attempt to fit a function, in much the same way that a statistician tries to fit a line to a collection of points. Deep neural networks have added many more layers to the complexity of the fitted function, but raw data still drives the fitting process. They continue to improve in accuracy as more data are fitted, but they do not benefit from the 'super-evolutionary speedup'."  (Judea Pearl & Dana Mackenzie, "The Book of Why: The new science of cause and effect", 2018)

"[a neural network is] a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs." (Robert Hecht-Nielsen)

