25 July 2022

On Universality XII: Data Science

"A Universal Turing Machine is an ideal mathematical object; it represents a formal manipulation of symbols and owes allegiance to criteria of logical consistency but not to physical laws and constraints. Thus, for example, physical variables play no essential role in the concept of algorithm. In reality, however, every logical operation occurs at a minimum cost of KT of energy dissipation (where K is Boltzman's constant and T is temperature) and, in fact, occurs at a much higher cost to insure reliability." (Claudia Carello et al, "The Inadequacies of the Computer Metaphor", 1982)

"The basic idea of cognitive science is that intelligent beings are semantic engines - in other words, automatic formal systems with interpretations under which they consistently make sense. We can now see why this includes psychology and artificial intelligence on a more or less equal footing: people and intelligent computers (if and when there are any) turn out to be merely different manifestations of the same underlying phenomenon. Moreover, with universal hardware, any semantic engine can in principle be formally imitated by a computer if only the right program can be found. (John Haugeland, "Semantic Engines: An introduction to mind design", 1981)

"Looking at ourselves from the computer viewpoint, we cannot avoid seeing that natural language is our most important 'programming language'. This means that a vast portion of our knowledge and activity is, for us, best communicated and understood in our natural language. [...] One could say that natural language was our first great original artifact and, since, as we increasingly realize, languages are machines, so natural language, with our brains to run it, was our primal invention of the universal computer. One could say this except for the sneaking suspicion that language isn’t something we invented but something we became, not something we constructed but something in which we created, and recreated, ourselves." (Justin Leiber, Invitation to cognitive science", 1991)

"[...] a general-purpose universal optimization strategy is theoretically impossible, and the only way one strategy can outperform another is if it is specialized to the specific problem under consideration." Yu-Chi Ho & David L Pepyne, "Simple explanation of the no-free-lunch theorem and its implications", Journal of Optimization Theory and Applications 115, 2002)

"Much of machine learning is concerned with devising different models, and different algorithms to fit them. We can use methods such as cross validation to empirically choose the best method for our particular problem. However, there is no universally best model - this is sometimes called the no free lunch theorem. The reason for this is that a set of assumptions that works well in one domain may work poorly in another." (Kevin P Murphy, "Machine Learning: A Probabilistic Perspective", 2012)

"The no free lunch theorem for machine learning states that, averaged over all possible data generating distributions, every classification algorithm has the same error rate when classifying previously unobserved points. In other words, in some sense, no machine learning algorithm is universally any better than any other. The most sophisticated algorithm we can conceive of has the same average performance (over all possible tasks) as merely predicting that every point belongs to the same class. [...] the goal of machine learning research is not to seek a universal learning algorithm or the absolute best learning algorithm. Instead, our goal is to understand what kinds of distributions are relevant to the 'real world' that an AI agent experiences, and what kinds of machine learning algorithms perform well on data drawn from the kinds of data generating distributions we care about." (Ian Goodfellow et al, "Deep Learning", 2015)
