
4/30/15

Overreaction syndrome

Should police be more aggressive or less aggressive in dealing with minor crime? Should suspected terrorists at Guantanamo Bay be released or held? Should mortgage lenders be urged to tighten underwriting standards or loosen them? Should subsidies for attending college be increased or decreased? Should health insurance companies increase or decrease efforts at preventing fraud?

These sorts of public policy issues can give rise to mistakes in either of two directions; in the terminology of statistical decision theory, they can lead to Type I and Type II errors. If you choose to reduce the chances of making one type of error, then you increase the chances of making the other.

Unfortunately, the political process tends to focus on only one error at a time. This causes policymakers to overreact to the most recent error, leading to an even larger error of the opposite sort.

For example, when house prices were booming prior to 2007, politicians thought lenders were making a mistake in turning down potential borrowers. They urged lenders to increase their willingness to work with subprime borrowers. Then, when the housing bubble popped, it became clear that lenders had been making the opposite mistake of approving too many borrowers. Politicians shifted focus to this type of error, so that in 2009, just when the housing market most needed credit, the political pressure on lenders was to tighten it.

Today’s short news cycle tends to steer people away from thinking carefully about trade-offs. When the media is focused on crime, aggressive action against minor crime is termed “broken-windows policing” and earns praise. When a suspect is killed while being arrested for a minor crime, aggressive police action becomes “harassment” and draws accusations of racism.

Demagoguery by politicians plays a role. When a politician advocates a policy, he or she will emphasize the need to reduce one type of error, while never admitting that the proposal being offered will increase another type of error.

When I teach basic statistics, the example I use to explain the concepts of Type I error and Type II error is a murder trial. If the jury votes to convict an innocent suspect, that is a Type I error. If the jury votes to acquit a guilty suspect, that is a Type II error.

In a trial, a juror is instructed to vote to convict if the defendant appears to be guilty "beyond a reasonable doubt." This does not mean beyond all doubt, but it does suggest that jurors should be particularly careful to avoid the Type I error of convicting a suspect who may be innocent. Thus, where the evidence conflicts, juries will not convict. In the notorious recent case in Ferguson, Missouri, some witnesses said that Michael Brown charged at the officer who shot him, while other witnesses gave conflicting testimony; it therefore seems unlikely that a jury would have voted to convict the policeman had he been indicted.
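The jury's trade-off can be sketched as a decision threshold applied to a noisy evidence signal. The distributions and numbers below are purely hypothetical, chosen only to illustrate the mechanics:

```python
import random

random.seed(0)

# Hypothetical evidence "scores" (not real data): guilty defendants
# score higher on average, but the two distributions overlap.
innocent = [random.gauss(0.0, 1.0) for _ in range(100_000)]
guilty = [random.gauss(2.0, 1.0) for _ in range(100_000)]

def error_rates(threshold):
    """Convict whenever the evidence score exceeds `threshold`."""
    type_1 = sum(s > threshold for s in innocent) / len(innocent)  # convict the innocent
    type_2 = sum(s <= threshold for s in guilty) / len(guilty)     # acquit the guilty
    return type_1, type_2

for t in (0.5, 1.0, 1.5, 2.0):  # progressively stricter standards of proof
    t1, t2 = error_rates(t)
    print(f"threshold {t:.1f}: Type I rate {t1:.3f}, Type II rate {t2:.3f}")
```

Raising the threshold, a stricter standard of proof, pushes the Type I rate toward zero while the Type II rate climbs. No choice of threshold eliminates both.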

Here are some more examples of decisions, described in this framework. In each case, if you say “yes” to the question, you increase the chance of a Type I error but you reduce the chance of a Type II error.

[Table of example decisions not reproduced in this version.]

Many policies that reduce Type I errors will increase Type II errors, and vice versa. However, there is a way to reduce both types of errors, by obtaining and using information. For example, rather than provide universal aid for community college, policymakers could try to identify the characteristics of students who are likely to benefit from such aid. A statistical analysis of past performance might show whether grades in high school, standardized test scores, or other information are more useful in predicting long-term success from attending community college. Of course, it costs something to obtain and use information, and those costs have to be netted against the benefits.
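The point about information can be illustrated with a toy simulation: holding the decision rule fixed, averaging additional independent signals (say, high school grades plus test scores) shrinks both error rates at once. All numbers here are hypothetical:

```python
import random

random.seed(1)

N = 100_000

def simulate(n_signals):
    """Estimate both error rates when decisions average `n_signals`
    noisy, independent measurements of the same underlying trait."""
    low, high = [], []   # students unlikely / likely to benefit from aid
    for _ in range(N):
        low.append(sum(random.gauss(0.0, 1.0) for _ in range(n_signals)) / n_signals)
        high.append(sum(random.gauss(1.0, 1.0) for _ in range(n_signals)) / n_signals)
    threshold = 0.5  # fixed rule: say "yes" above the threshold
    type_1 = sum(s > threshold for s in low) / N    # aid a student who won't benefit
    type_2 = sum(s <= threshold for s in high) / N  # deny a student who would
    return type_1, type_2

for k in (1, 2, 4):
    t1, t2 = simulate(k)
    print(f"{k} signal(s): Type I rate {t1:.3f}, Type II rate {t2:.3f}")
```

Averaging k independent signals cuts the noise by a factor of the square root of k, so both error rates fall together, which is exactly the benefit that gathering information buys, at a cost.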

Unfortunately, the policy process seems to go back and forth between the two types of errors. First, a media frenzy will make one type of error salient, say, Type I. Next, demagogic politicians will demand the elimination of Type I errors. The new policy will cause an increase in Type II errors. Once the Type II errors accumulate, a new media frenzy will make them salient, and the policy process will go into reverse. The cycle is endless.

For example, we go back and forth between mortgage underwriting standards that are too loose and underwriting standards that are too tight. Typically, we loosen near the end of a boom and tighten at the bottom of a crash.

These back-and-forth swings take place in all of the policy realms in which Type I and Type II errors exist. One year, the priority is punishing health insurance companies who deny coverage for expensive medical procedures. Another year, the priority is reducing health care spending. One year, we worry that college subsidies simply inflate tuitions and reward colleges with low graduation rates. Another year, we worry that not enough students are going to college.

To me, what elementary statistical decision theory can tell us is that we need to keep both types of errors in mind. Otherwise, if we think of only one error at a time, we simply amplify the opposite error until it becomes salient, and then the cycle repeats.

Arnold Kling is an adjunct scholar with the Cato Institute. The views expressed here are his own.



from AEI » Latest Content http://ift.tt/1zseHRC
