"A neural network consists of large numbers of simple neurons
that are richly interconnected. The weights associated with the connections
between neurons determine the characteristics of the network. During a training
period, the network adjusts the values of the interconnecting weights. The
value of any specific weight has no significance; it is the patterns of weight
values in the whole system that bear information. Since these patterns are
complex, and are generated by the network itself (by means of a general
learning strategy applicable to the whole network), there is no abstract
procedure available to describe the process used by the network to solve the
problem. There are only complex patterns of relationships."
"The main problem that hampered the development of neural network models was the absence of a mathematical model for adjusting the weights of neurons situated somewhere in the middle of the network, and not directly connected to the input or output." (Paul Cilliers, "Complexity and Postmodernism: Understanding Complex Systems", 1998)
"Whereas formal systems apply inference rules to logical variables,
neural networks apply evolutive principles to numerical variables. Instead of calculating
a solution, the network settles into a condition that satisfies the constraints
imposed on it." (Paul Cilliers, "Complexity and Postmodernism:
Understanding Complex Systems", 1998)
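As a concrete (and entirely assumed) illustration of a network that "settles into a condition that satisfies the constraints imposed on it" rather than calculating a solution, here is a tiny Hopfield-style sketch: each unit repeatedly updates itself to agree with the weights connecting it to the others until the state stops changing.

```python
import numpy as np

pattern = np.array([1, -1, 1, -1, 1])         # stored pattern (the constraints)
W = np.outer(pattern, pattern).astype(float)  # Hebbian weights
np.fill_diagonal(W, 0)                        # no self-connections

state = np.array([1, 1, 1, -1, -1])           # corrupted starting state
for _ in range(5):                            # a few full sweeps of updates
    for i in range(len(state)):
        # Each unit flips to whichever value its weighted inputs favour.
        state[i] = 1 if W[i] @ state >= 0 else -1

print(state)  # the state has settled into the stored pattern, an energy minimum
```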
"Neural networks are a popular model for learning, in part because of their basic similarity to neural assemblies in the human brain. They capture many useful effects, such as learning from complex data, robustness to noise or damage, and variations in the data set. " (Peter C R Lane, Order Out of Chaos: Order in Neural Networks, 2007)
"A neural network is a network of neurons - units with inputs and outputs. The output of a neuron can be passed to a neuron and so on, thus creating a multilayered network. Neural networks contain adaptive elements, making them suitable to deal with nonlinear models and pattern recognition problems." (Ivan Idris, "Python Data Analysis", 2014)
"A neural-network algorithm is simply a statistical procedure for classifying inputs (such as numbers, words, pixels, or sound waves) so that these data can mapped into outputs. The process of training a neural-network model is advertised as machine learning, suggesting that neural networks function like the human mind, but neural networks estimate coefficients like other data-mining algorithms, by finding the values for which the model’s predictions are closest to the observed values, with no consideration of what is being modeled or whether the coefficients are sensible." (Gary Smith & Jay Cordes, "The 9 Pitfalls of Data Science", 2019)
"The label neural networks suggests that these algorithms replicate the neural networks in human brains that connect electrically excitable cells called neurons. They don’t. We have barely scratched the surface in trying to figure out how neurons receive, store, and process information, so we cannot conceivably mimic them with computers." (Gary Smith & Jay Cordes, "The 9 Pitfalls of Data Science", 2019)