Saturday, October 15, 2005

Excellent discussions of game theory and rationality

First, from Mark Kleiman:
"Game theory" is a branch of mathematics usable by social scientists, not a social-scientific theory. It's a deductive account of what will happen if actors act so as to maximize their own outcomes in situations in which the outcome for each depends on the behavior of others as well as his own. It doesn't predict anything about the real world, any more than algebra predicts anything about the real world.

In order to generate predictions using game theory, you need to add some facts: about the outcomes of different combinations of actions, about what the actors want, and about their rationality (vel non). Given such assumptions, it is possible to compare the results of real-world events to game-theoretic conclusions.

When they match, then it's reasonable to think that you have correctly identified the outcomes as the players evaluate them and that the players are acting as selfishly rational actors. When they don't match, then either you've got the outcomes wrong, or the players aren't trying to act selfishly, or they're trying to act selfishly but making mistakes.

By using money as the outcome and making sure that the participants understand the situation, experimenters can narrow the question down to whether the players are trying to act selfishly. Where their behavior doesn't match game-theoretic results, as it frequently doesn't, for example, in the Ultimatum Game and Public-Goods Contribution experiments, you've got a quite powerful finding; that's why behavioral economists have made so much use of such simple experimental games. Those experiments don't make any sense without game-theoretic results as a baseline.
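To make that baseline concrete, here is a small sketch (mine, not Kleiman's) of the game-theoretic prediction for a discrete Ultimatum Game under purely selfish rationality; the pie size and payoff conventions are my own assumptions:

```python
# Hedged sketch: subgame-perfect prediction for a discrete Ultimatum Game
# with purely selfish players. A selfish responder accepts any positive
# offer (it beats the zero payoff from rejection), so the proposer offers
# the smallest positive amount.

def ultimatum_spe(pie: int) -> tuple[int, int]:
    """Return (proposer_share, responder_offer) under selfish rationality."""
    best_offer, best_payoff = None, -1
    for offer in range(pie + 1):
        # A selfish responder accepts iff the offer beats rejection (0).
        accepts = offer > 0
        proposer_payoff = (pie - offer) if accepts else 0
        if proposer_payoff > best_payoff:
            best_payoff, best_offer = proposer_payoff, offer
    return pie - best_offer, best_offer

print(ultimatum_spe(10))  # (9, 1): keep almost everything, offer the minimum
```

Experimental proposers typically offer far more than this minimum, and responders routinely reject low offers, which is exactly the mismatch Kleiman describes.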
Right. Game theory is a knife we can use to cut up the world into bits, not necessarily an explanation of the world. If we find something that matches a game equilibrium, great. If we find something that's way off (the Centipede game, for example), we know that we need to think differently. That's exactly what behavioral economists are doing; they aren't trying to throw game theory out. You can watch an excellent lecture by Ariel Rubinstein here on this subject.

Now, to John Quiggin:
The basic problems surround the kind of use that is standard in economics and related disciplines, in which ‘rational’ choices are those that maximise the value of some objective function. A lot of energy has been dissipated in disputes over whether this is normatively compelling or descriptively accurate, or whether some alternative such as ‘satisficing’ would do better.

Rather than taking sides in this dispute, I will offer the following purely mathematical claim. Given any data on any observed set of problems involving the selection of one or more choices from a set of alternatives, the observed choices can be represented as the maximisation of an appropriately specified function. To give an easy example, satisficing can be represented (rationalised) as optimising, taking calculation costs into account, or alternatively as a combination of set-valued maximisation with a selection rule based on the order in which alternatives are presented.
Contrary to, say, Barry Schwartz's claims about economic rationality.

John's purely mathematical claim isn't true in general, though: it only works for certain data. Not every set of observed choices and alternatives can be rationalized; in particular, you need the Generalized Axiom of Revealed Preference (GARP) to hold. (Sorry, I couldn't find a better link than that.)
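For concreteness, here is a minimal sketch of a GARP check on consumer-choice data; the price/bundle encoding is my own assumption, not something from the post or the linked page:

```python
# Hedged sketch of a GARP check. Observation i is (prices p_i, chosen
# bundle x_i). Bundle x_i is directly revealed preferred to x_j if x_j
# was affordable when x_i was chosen: p_i . x_j <= p_i . x_i. GARP fails
# if x_i is revealed preferred to x_j (possibly via a chain) while x_i
# was strictly cheaper than x_j at j's prices: p_j . x_i < p_j . x_j.

def dot(p, x):
    return sum(pi * xi for pi, xi in zip(p, x))

def satisfies_garp(prices, bundles):
    n = len(bundles)
    # Directly revealed preferred relation.
    R = [[dot(prices[i], bundles[j]) <= dot(prices[i], bundles[i])
          for j in range(n)] for i in range(n)]
    # Transitive closure (Floyd-Warshall style).
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    # No revealed-preference chain may coexist with a strict reversal.
    return not any(R[i][j] and dot(prices[j], bundles[i]) < dot(prices[j], bundles[j])
                   for i in range(n) for j in range(n))

# Two hypothetical observations that violate GARP: each bundle was
# affordable when the other was chosen, and one strictly cheaper.
print(satisfies_garp([(1, 1), (0.5, 1)], [(1, 0), (0, 1)]))  # False
```

Data that pass this check can be rationalized by a well-behaved utility function; data that fail it cannot, which is the sense in which Quiggin's claim needs qualification.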


Blogger Isaac said...

Does revealed preference have to hold? Quiggin's claim isn't about any kind of rationality, just that you can always explain an individual's set of choices in terms of their maximizing something, which I've always, intuitively, understood to be true; that is the beautiful and tautologous nature of economics.

1:44 AM  
Blogger henry said...

Sort of.

Suppose an individual chooses a from {a,b}, b from {b,c}, and c from {a,c}. Then if f:{a,b,c}->R is a function representing those choices you have to have f(a) > f(b) > f(c) > f(a). This is a contradiction.

You can rescue the theory by saying that the choice of a from {a,b} only indicates that a is *one of many possible* maxima from {a,b}: f(a) >= f(b). Then f(a) >= f(b) >= f(c) >= f(a), so f(a) = f(b) = f(c). (Note that if even one inequality is strict, we still have a contradiction.) So f is just a constant function.

Unfortunately, this is pretty meaningless since you can turn any set of choices into a maximization problem by letting the objective function f:X->R be given by f(x) = c where c is some constant. Any choice "maximizes" f because f is a constant function. But this doesn't really say anything.

So in mathematical terms, yes, any set of choices can be understood in terms of maximizing an objective function, but if the choices are not nice you'll get something trivial. If you want to use a stronger formulation of revealed preference there will be choices that don't work.
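The cyclic example above can be checked by brute force; here is a small sketch (my own encoding) that searches every strict ranking of {a, b, c}:

```python
# Brute-force check that no strict utility function represents the
# cyclic choices: a from {a,b}, b from {b,c}, c from {a,c}.
from itertools import permutations

choices = [('a', {'a', 'b'}), ('b', {'b', 'c'}), ('c', {'a', 'c'})]
items = ['a', 'b', 'c']

def strict_rep_exists(choices, items):
    """Does any strict ranking make each chosen item the unique max of its menu?"""
    for perm in permutations(items):
        f = {x: rank for rank, x in enumerate(perm)}  # rank as utility
        if all(all(f[chosen] > f[y] for y in menu if y != chosen)
               for chosen, menu in choices):
            return True
    return False

print(strict_rep_exists(choices, items))  # False: f(a) > f(b) > f(c) > f(a) is impossible
```

Only six rankings exist for three items, and each one violates at least one of the three choices, which is the contradiction in the comment.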

9:32 AM  
Blogger henry said...

Oh, in particular, GARP *doesn't* have to hold for an arbitrary set of choices.

9:34 AM  
Anonymous Anonymous said...

Alternatively, you can include some or all characteristics of the set of alternatives as an argument of the function to be maximised (this is done in various forms of 'regret theory', for example). This is what I had in mind, and it doesn't require Revealed Preference.


7:09 PM  
