"Frequentist statistics assumes that there is a 'true' state of the world (e.g. the difference between species in predation probability) which gives rise to a distribution of possible experimental outcomes. The Bayesian framework says instead that the experimental outcome - what we actually saw happen - is the truth, while the parameter values or hypotheses have probability distributions. The Bayesian framework solves many of the conceptual problems of frequentist statistics: answers depend on what we actually saw and not on a range of hypothetical outcomes, and we can legitimately make statements about the probability of different hypotheses or parameter values." (Ben Bolker, "Ecological Models and Data in R", 2007)
"Most modern statistics uses an approach called maximum likelihood estimation, or approximations to it. For a particular statistical model, maximum likelihood finds the set of parameters (e.g. seed removal rates) that makes the observed data (e.g. the particular outcomes of predation trials) most likely to have occurred. Based on a model for both the deterministic and stochastic aspects of the data, we can compute the likelihood (the probability of the observed outcome) given a particular choice of parameters. We then find the set of parameters that makes the likelihood as large as possible, and take the resulting maximum likelihood estimates (MLEs) as our best guess at the parameters." (Ben Bolker, "Ecological Models and Data in R", 2007)
"Normally distributed variables are everywhere, and most classical statistical methods use this distribution. The explanation for the normal distribution’s ubiquity is the Central Limit Theorem, which says that if you add a large number of independent samples from the same distribution the distribution of the sum will be approximately normal." (Ben Bolker, "Ecological Models and Data in R", 2007)
"Phenomenological models concentrate on observed patterns in the data, using functions and distributions that are the right shape and/or sufficiently flexible to match them; mechanistic models are more concerned with the underlying processes, using functions and distributions based on theoretical expectations. As usual, there are shades of gray; the same function could be classified as either phenomenological or mechanistic depending on why it was chosen." (Ben Bolker, "Ecological Models and Data in R", 2007)
"The dichotomy of mathematical vs. statistical modeling says more about the culture of modeling and how different disciplines go about thinking about models than about how we should actually model ecological systems. A mathematician is more likely to produce a deterministic, dynamic process model without thinking very much about noise and uncertainty (e.g. the ordinary differential equations that make up the Lotka-Volterra predator prey model). A statistician, on the other hand, is more likely to produce a stochastic but static model, that treats noise and uncertainty carefully but focuses more on static patterns than on the dynamic processes that produce them (e.g. linear regression)." (Ben Bolker, "Ecological Models and Data in R", 2007)