Re: [Swarm-Modelling] comparing models
From: gepr
Subject: Re: [Swarm-Modelling] comparing models
Date: Tue, 2 Sep 2003 15:24:53 -0700
Michael McDevitt writes:
> An extract from a paper I wrote for the Navy earlier this year that
> generally pertains to any modeling effort:
Great info, Mike!
Of the following:
> The following techniques from Sargent (1999) are commonly used for
> validating and verifying the sub-models and overall model.
>
> Animation
> Comparison to Other Models
> Degenerate Tests
> Event Validity
> Extreme Condition Tests
> Face Validity
> Fixed Values
> Historical Data Validation
> Historical Methods: Rationalism, Empiricism, Positive Economics
> Internal Validity
> Multistage Validation
> Operational Graphics
> Parameter Variability - Sensitivity Analysis
> Predictive Validation
> Traces
> Turing Tests
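[As a concrete illustration of one entry in that list, here is a minimal sketch of Parameter Variability - Sensitivity Analysis. The toy model and all numbers are invented for illustration; a real study would perturb each parameter of the actual simulation and examine how the outputs of interest respond.]

```python
# Sketch of sensitivity analysis: perturb a parameter of a toy model
# and estimate how strongly the output depends on it.

def toy_model(growth_rate, steps=10):
    """Stand-in model: exponential growth from an initial value of 1.0."""
    population = 1.0
    for _ in range(steps):
        population *= (1.0 + growth_rate)
    return population

def sensitivity(model, base_value, delta=0.01):
    """Central finite-difference sensitivity of the output to one parameter."""
    low = model(base_value - delta)
    high = model(base_value + delta)
    return (high - low) / (2 * delta)

base = 0.05
print("output at base parameter:", toy_model(base))
print("d(output)/d(growth_rate):", sensitivity(toy_model, base))
```

A parameter to which the output is wildly sensitive flags a part of the model that needs especially careful validation.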
I believe that "Historical Data Validation", "Predictive Validation",
and "Traces" (using your terms) are the methods that most physicists
would trust, though I suspect there are large doses of several of the
others lying in wait for anyone who does validation blindly.
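[The flavor of Historical Data Validation / Predictive Validation can be sketched in a few lines: calibrate on an early slice of data, predict the later slice, and score the error. The series and the constant-growth fit below are invented for illustration, not taken from any real study.]

```python
# Sketch of predictive validation against held-out historical data.
historical = [2.0, 2.4, 2.9, 3.5, 4.2, 5.0, 6.1]  # invented series
train, holdout = historical[:4], historical[4:]

# Calibrate: fit a constant growth factor on the training slice.
factors = [b / a for a, b in zip(train, train[1:])]
growth = sum(factors) / len(factors)

# Predict forward from the last training point into the held-out period.
predictions = []
level = train[-1]
for _ in holdout:
    level *= growth
    predictions.append(level)

errors = [abs(p, ) if False else abs(p - h) / h for p, h in zip(predictions, holdout)]
print("mean relative error:", sum(errors) / len(errors))
```

If the model only reproduces the data it was calibrated on, that is retrodiction; the persuasive test is how it fares on data it never saw.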
Beyond those large categories, though, I think it would be a good
thing to delineate the particular instances of any of these methods
that were successful or not... in any domain.
For example, if one tries to demonstrate the validity of a model with
statistical measures, retrodiction, and prediction to a group of
physicists and they don't accept it, which particular techniques (from
the above methods or any others) need to be used to assuage their
objections?
Or, if you're embarking on a validation exercise, do you have to have
at least one particular instance in each of the above categories in
order to satisfy all potential naysayers? Or, perhaps one has to use
different combinations of the above when addressing different
audiences?
<fanciful_imaginings>
This would be great fodder for a PhD in the Sociology of
peer-review. [grin]
Hey! Maybe someone might be interested in writing an agent-based
model where agents use a combination of V&V techniques, writing skill
levels, reputation, and sheer _charisma_ to get papers about their
models through peer-review. If it worked, we could package that model
up and sell it to people who want to train grant-writers, students,
editors, and grant-funders... We might even obviate peer review
altogether! If someone submits a paper, just have the simulation
"evaluate" it and accept it or reject it based on qualitative aspects
that represent/describe your journal or funding agency. [grin]
</fanciful_imaginings>
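[Staying in the spirit of the fanciful imaginings above, a toy version of that agent-based peer-review model might look like the sketch below. The attributes, weights, and acceptance rule are all invented; nothing here models real peer review.]

```python
import random

random.seed(42)  # reproducible whimsy

class Author:
    """Toy agent: the four traits named above, each on a 0..1 scale."""
    def __init__(self, vv_rigor, writing, reputation, charisma):
        self.vv_rigor = vv_rigor      # breadth of V&V techniques applied
        self.writing = writing        # writing skill level
        self.reputation = reputation
        self.charisma = charisma

    def submit(self):
        """Score one submission; weights and noise are invented."""
        return (0.4 * self.vv_rigor + 0.3 * self.writing
                + 0.2 * self.reputation + 0.1 * self.charisma
                + random.uniform(-0.1, 0.1))  # reviewer noise

def review(author, threshold=0.5):
    """Accept the paper if the submission score clears the bar."""
    return author.submit() > threshold

authors = [Author(random.random(), random.random(),
                  random.random(), random.random()) for _ in range(100)]
accepted = sum(review(a) for a in authors)
print("accepted:", accepted, "of", len(authors))
```

Tuning the weights and threshold would be the "qualitative aspects that represent/describe your journal" part of the joke.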
Ultimately, I think that scientists will go back to the standard
methods for evaluating theories. After all, a model is a peculiar
type of theory. For engineers, I suspect the most convincing methods
of validation will be predictive power and how well understood the
boundary conditions are for applying the model. For artisans (like
business people), the overwhelming validation will be a suite of
success stories (regardless of the accuracy of these stories).
--
glen e. p. ropella =><= Hail Eris!
H: 503.630.4505 http://www.ropella.net/~gepr
M: 971.219.3846 http://www.tempusdictum.com