From the “No kidding?” department: apparently the GAO agrees that doing this type of research is tough, per their new 41-page report. ^_- I actually think they did fairly well in pointing out some of the problems in these research methodologies.
Three widely cited U.S. government estimates of economic losses resulting from counterfeiting cannot be substantiated due to the absence of underlying studies… no single method can be used to develop estimates. Each method has limitations, and most experts observed that it is difficult, if not impossible, to quantify the economy-wide impacts. Nonetheless, research in specific industries suggests that the problem is sizeable, which is of particular concern as many U.S. industries are leaders in the creation of intellectual property…
…We determined that the U.S. government did not systematically collect data and perform analysis on the impacts of counterfeiting and piracy on the U.S. economy and, based on our review of literature and interviews with experts, we concluded that it was not feasible to develop our own estimates or attempt to quantify the economic impact of counterfeiting and piracy on the U.S. economy.
Leaving aside some of the questionable assumptions built into the questions Congress asked the GAO to examine, let’s look at the numbers. First up: the existing government estimates.
Three commonly cited estimates of U.S. industry losses due to counterfeiting have been sourced to U.S. agencies, but cannot be substantiated or traced back to an underlying data source or methodology.
Well, so much for some favorite cited numbers. What else do we have? How about the BSA?
While this study has an enviable data set on industries and consumers located around the world from its country surveys, it uses assumptions that have raised concerns among experts we interviewed, including the assumption of a one-to-one rate of substitution and questions on how the results from the surveyed countries are extrapolated to nonsurveyed countries.
It is difficult, based on the information provided in the study, to determine how the authors handled key assumptions such as substitution rates and extrapolation from the survey sample to the broader population.
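To see why the substitution-rate assumption matters so much, here is a hypothetical back-of-the-envelope sketch (all figures invented for illustration; the GAO report and the BSA study do not publish these numbers in this form):

```python
# Hypothetical illustration of how the assumed substitution rate drives
# a piracy "loss" estimate. Every figure below is made up.

pirated_copies = 1_000_000   # assumed count of infringing copies
retail_price = 50.0          # assumed retail price per legitimate copy (USD)

def estimated_loss(substitution_rate: float) -> float:
    """Estimated loss = displaced sales * retail price.

    substitution_rate is the fraction of pirated copies assumed to
    displace a legitimate purchase (1.0 = one-to-one substitution).
    """
    return pirated_copies * substitution_rate * retail_price

for rate in (1.0, 0.5, 0.1):
    print(f"substitution rate {rate:.0%}: "
          f"estimated loss ${estimated_loss(rate):,.0f}")
```

Under these made-up inputs, a one-to-one substitution rate yields a $50 million loss figure, while a 10% rate yields $5 million: the headline estimate scales linearly with an assumption the studies rarely justify or even disclose.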
At least there was one academic paper cited in this area.
The study indicated that downloading illegal music can have a positive effect on total consumer welfare. However, as explained by the authors, this experiment cannot be generalized; the data consist of a snapshot of undergraduate students’ responses, which is not representative of the general population.
Yup. Possible positive effects, but you can’t necessarily extrapolate those findings to the general public. That’s important when looking at these types of studies: they’re not necessarily generalizable. That’s a constraint of pretty much ANY study of this type, including the others mentioned, with their highly questionable assumptions and equally questionable methodologies. The lack of generalizability doesn’t mean a study isn’t useful: you can still plan courses of action at least informed by such studies, and of course you can plan to do additional research.
I think it’s really important that people use real numbers when making arguments about what law and policy should accomplish. The problem is, people don’t seem to have much incentive to do that. Law and policy seem to be made on talking points about scary numbers and other appeals to emotion not grounded in evidence, and that is a shame.