Monday, February 02, 2004

Human selfishness is the big question when doing simulations: do you build it in as an axiomatic assumption, and give the game to the right? Or do you leave it out, and risk people dismissing the work as irrelevant?

What I need is a set of uncontroversial, plausible underlying behavioural axioms that can give rise to emergent selfishness in some circumstances but not in others.

Hilan has a good intuition that it's to do with security: the more secure you feel about the future, the more generous you can afford to be, and the less you hoard.

But I'm still working on a way of representing this.
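One possible starting point, just to make the intuition concrete: a minimal sketch in which an agent's generosity is a function of its perceived security. Everything here is my own assumption for illustration (the `security` formula, the planning horizon, the donation fraction), not a worked-out representation.

```python
class Agent:
    """Toy agent whose generosity scales with perceived security.

    Security is modelled (hypothetically) as the ratio of resources
    the agent expects to have over its planning horizon to what it
    needs to survive that horizon (assumed: 1 unit per round).
    """

    def __init__(self, wealth, horizon=10):
        self.wealth = wealth
        self.horizon = horizon  # how many future rounds the agent plans for

    def security(self, expected_income):
        # Resources in hand plus expected income over the horizon,
        # divided by survival needs; capped at 1 (fully secure).
        needed = self.horizon * 1.0
        available = self.wealth + expected_income * self.horizon
        return min(1.0, available / needed)

    def donation(self, expected_income):
        # Generosity rises with security: a fully secure agent gives
        # away half its wealth; an insecure one hoards proportionally.
        return 0.5 * self.security(expected_income) * self.wealth


# Same agent, two environments: selfishness emerges from insecurity,
# not from a selfishness axiom.
agent = Agent(wealth=5, horizon=10)
gift_in_plenty = agent.donation(expected_income=2.0)   # feels secure
gift_in_scarcity = agent.donation(expected_income=0.0)  # feels insecure
```

The point of the sketch is that the behavioural axiom ("give in proportion to how safe you feel") is itself neutral; hoarding falls out of the environment. Whether that axiom is uncontroversial enough is another matter.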
