### How (Not) To Value Human Life

The following argument is sometimes deployed to value a human life: take two equally qualified people working identical jobs, except that one of them is forced, once that year, to play a round of Russian roulette with a revolver whose cylinder has 1,000,000 chambers and one bullet. Our Russian-roulette player is paid more than our friend in the completely safe job. Take the present value of the difference in pay between the two, say $5. For that extra one-in-a-million risk of death, he has been paid $5. We can then say that this human life is worth 1,000,000 x $5 = $5,000,000, because if one one-millionth of that human life is worth $5, then our best guess of the value of every other fraction of that life must be identical.
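The arithmetic of that argument can be sketched in a few lines. This is just the linear extrapolation described above, using the article's $5 figure; the function name is mine.

```python
def implied_value_of_life(wage_premium, death_probability):
    """Linear extrapolation: the premium paid per unit of risk,
    scaled up as if it applied all the way to certain death."""
    return wage_premium / death_probability

# One bullet in a 1,000,000-chamber cylinder: p = 1/1,000,000, premium = $5.
print(implied_value_of_life(5, 1 / 1_000_000))  # 5000000.0
```

The division is just the multiplication in the text written the other way round: $5 per millionth of a life, times 1,000,000 millionths.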

But that would be patently wrong. Here's why. I may well pay $5 for a one-in-a-million chance of winning $5,000,000, but I would never pay $2,500,000 for a one-in-two chance of winning $5,000,000 (I would much rather keep that $2,500,000). If $2,500,000 were everything I had (my life savings, say), then maybe I'd take the bet for a payout of $10,000,000 or $15,000,000, but even then probably not. Something happens when we scale up; this is the Friedman-Savage hypothesis: in general, we are risk-loving with respect to possible small losses and risk-averse with respect to possible large losses, at least when the large losses represent a substantial portion of net wealth (Bill Gates would happily risk losing $2,500,000 for a one-in-two chance of winning $10,000,000).
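The asymmetry between the tiny gamble and the huge one can be made concrete with a toy expected-utility calculation. The numbers below are assumptions of mine: a hypothetical current wealth of $3,000,000 and plain log utility (risk-averse everywhere), which is a simpler stand-in for the full concave-convex-concave Friedman-Savage curve but is enough to show how the risk premium explodes with the size of the stake.

```python
import math

def certainty_equivalent(outcomes):
    """Certainty equivalent under log utility: exp(E[ln(wealth)])."""
    expected_utility = sum(p * math.log(w) for p, w in outcomes)
    return math.exp(expected_utility)

WEALTH = 3_000_000  # hypothetical current wealth

# Small gamble: pay $5 for a one-in-a-million shot at $5,000,000.
small = [(1e-6, WEALTH - 5 + 5_000_000), (1 - 1e-6, WEALTH - 5)]

# Large gamble: pay $2,500,000 for a one-in-two shot at $5,000,000.
large = [(0.5, WEALTH - 2_500_000 + 5_000_000), (0.5, WEALTH - 2_500_000)]

for name, gamble in [("small", small), ("large", large)]:
    premium = WEALTH - certainty_equivalent(gamble)
    print(f"{name} gamble: risk premium approx ${premium:,.0f}")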

If the Friedman-Savage hypothesis holds, which seems reasonable enough, it means that the "value" of a human life will depend on how large a fraction of it you are measuring. The larger the fraction (the bigger the risk), the more a human life will seem to be "worth." Yet whatever number you come up with by simple multiplication will be total bull, because it doesn't take risk into account. Even if you try to consider risk, you really can't do it in any meaningful way, because you have no way of knowing how the risk premium behaves as you move from a 50 percent to a 75 percent to a 99 percent chance of death. And even if you did find some sample of people being paid a premium to accept a 99 percent chance of death, you'd really have to wonder whether you weren't working with a very adverse selection. Perhaps the value of human life is incalculable after all.
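To see how the "value" of a life comes to depend on the fraction being measured, suppose, purely for illustration, that the premium workers demand grows faster than linearly in the risk. The schedule below is my invention, not anything from the data: premium(p) = K * p / (1 - p), calibrated so a one-in-a-million risk pays about $5, with the premium blowing up as p approaches certain death.

```python
# Hypothetical convex premium schedule (an assumption, not observed data):
# premium(p) = K * p / (1 - p), which diverges as p -> 1.
K = 5_000_000  # calibrated so a 1-in-1,000,000 risk pays about $5

def implied_vsl(p):
    """The naive 'multiply it out' valuation at death probability p."""
    premium = K * p / (1 - p)
    return premium / p

for p in (1e-6, 0.5, 0.99):
    print(f"p = {p}: implied value of life = ${implied_vsl(p):,.0f}")
```

Under this schedule the same worker's life is "worth" about $5,000,000 when measured at a one-in-a-million risk, $10,000,000 at a coin flip, and $500,000,000 at a 99 percent chance of death. The linear extrapolation doesn't recover a fact about the worker; it recovers a fact about which slice of the risk curve you happened to sample.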
