‘Errors of measurement’ or ‘Impact factors are just a gimmick’: An Interview with Howard S. Becker
Dagmar Danko, Editor of The European Sociologist
Howard S. Becker, San Francisco and Paris
The American sociologist Howard S. Becker is the author of foundational studies such as Outsiders (1963) and Art Worlds (1982). He has also published best-practice guides on doing sociology, such as Writing for Social Scientists (1986) and Tricks of the Trade (1998). Since the turn of the millennium he has become more and more interested in questions raised by the sociology of science. In his latest book Evidence (2017), Becker deals with the supposed opposition between quantitative and qualitative sociology, exposing the recurring errors of both. He especially draws our attention to a detail often ignored and even more rarely discussed: Who collects the data?
Copyright: University of Chicago Press
Since 2010, I have been engaged in an ongoing dialogue with Howie. I am taking the opportunity of this interview to wish him a Happy 90th Birthday. His impact on many sociologists, including myself, can really never be quantified.
DD: Howie, in your latest book EVIDENCE, you challenge all statements of the form “The data show that....”, drawing attention to the fact that probably, “the data show something else entirely”. What do they show, then?
HSB: Sociologists like to present evidence to support what they say about the world. I am very strict about evidence, very sceptical when I'm told that “this data proves X, Y or Z”. Before I accept statements like that I try to find out what was actually observed and who did the observing. A simple example can stand for what I look for. Suppose someone tells me “the crime rate in New York went up last year compared to the year before”. Before I accept that, I ask who counted the crimes and how they counted them. Since it’s not possible for the people who say such things to actually observe what happened on the occasion of each counted crime, they rely on what someone tells them. That somebody is usually the police. But it has been demonstrated that the police don’t know about all the crimes, that they don’t report all the crimes they are told about or observe, and further, that they don’t look for or prosecute white-collar crimes in the same proportion as other kinds of crime. We would have to adjust the reported numbers to take account of these flaws if we wanted an accurate count.
Another example occurs when people report survey results and assess them by the probability of the distributions their results show. The statistics used to assess that probability depend for their force on the sample of people surveyed having been chosen randomly. But no surveys use probability samples – it’s entirely too expensive – so the probabilities presented are pure invention, with no scientific validity.
What I would like to see is reports of findings that take account of all the known sources of error in the data the report relies on. The two best kinds of studies in this respect are usually those done through what is ordinarily called “field work” or “ethnography”, and those based on censuses, which among other things usually cover whole populations, are therefore not subject to sampling error, and make a much greater effort to be accurate because the stakes are higher.
DD: You have talked to me about [Donald T.] Campbell’s Law before, according to which a metric becomes subject to “corruption pressures” once it is used for social decision-making. You also mention this in your book.
HSB: Campbell’s Law was created in the context of the beginnings of the widespread desire for ways to evaluate the success of innovative social programs, designed to “solve” this or that newly recognized “social problem”. Campbell, widely known for his acute analysis of the problems involved in trying to solve social science problems with the experimental methods psychologists like him routinely used, got interested in these problems of evaluation. He soon realized that evaluation itself created a major problem: as soon as the people involved in the new activities that were to be evaluated realized that what was being measured was going to be used to decide who would be rewarded – with more grants, for instance – everyone involved began to look for ways to make their results look better. They started making their choices of what to do for their clients, and how to do it, by figuring out how each possible action would influence the measure being used to give out rewards. This might be something as simple and obvious as how a psychology clinic chose patients for a “therapeutic intervention”. If you chose people with problems that were easier to cure, you would probably look like you were doing a better job of therapy, but in fact those results would only show how good you were at choosing less sick patients. Which meant that the metric was flawed, not measuring what it claimed to measure. Campbell’s Law codified this insight in a concise and understandable way. It affords an excellent guide to errors of measurement.
Howard S. Becker in Paris 2016, photograph by Dianne Hagaman
DD: The importance of scientific journals is – seemingly – translated into an impact factor. The higher the impact factor, the more scientists want to publish with that journal, because for many, it is crucial that such a journal figures in their list of publications. But to what extent is publishing in such a journal “evidence” for the paper's quality or relevance?
HSB: Well, I hate to be so negative but... I have to say zero per cent. Why? Because “impact factor” is a phony measurement. It pretends to measure scholarly or scientific quality. But to argue that would require – this is common practice in any serious science – showing that the impact factor, however it’s calculated, is correlated with an independent measure of quality or relevance or whatever anyone is claiming it measures. Without that demonstration, and no matter how many people accept or act on such a specious claim, it proves nothing. Absent that validation, it’s just a gimmick that the publishers of scientific journals use to persuade their customers to pay outrageous prices for their products. And in the same way, “impact factors” give academic administrators a specious “measurement” they use to justify decisions on hiring, promotions, etc.
DD: This sounds like a vicious circle. What can young scholars, who feel pressured to publish in a very few specific journals, do? They need grants and jobs to be able to continue working as researchers, and can hardly revolutionise “the system” by themselves. Or can they?
HSB: This is a classic version of an ancient insoluble problem. As you state it, the situation you describe has no solution. Your description assumes that the present situation will stay exactly as it is for the foreseeable future. All the constraints you mention, all the things you describe as creating this problem for scholars, are presented in this formulation as not changeable, at least not by the actions of the people you and I are worried about: the young sociologists who would, if they only could, be doing original research. They need jobs. They need money for research expenses and to free their time. The only way they can get these things is by accepting these constraints and doing what “the system” requires.
But is everything really so unchangeable? Particularly what the scholars affected in this way desire. Suppose they don’t accept all the conditions imposed by these so apparently powerful institutions. What if they find some other way to support themselves, one that leaves them time and resources to do the good work they want to do? The System is not so all-powerful if we relax the demands we make on it. I wrote a paper in 1970 with Stevan Dedijer called “Counter-Establishment R&D” which has many suggestions about how to do this.
DD: I read the article and it is quite amazing that you wrote it in 1970. Can I conclude this interview by saying that the gist of your suggestions is: “Just do it”?
HSB: Yes! I can see that you have thoroughly assimilated my stubborn, somewhat crazy, way of thinking.
DD: My pleasure. Thank you!
Howard S. Becker and Dagmar Danko 2015 in Vienna, photograph by Hans-Ulrich Werner
Becker, Howard S. (2017): Evidence. The University of Chicago Press.
Becker, Howard S. and Stevan Dedijer (1970): Counter-Establishment R&D [A word to set the stage for a memento of the more-or-less recent past]. Reprinted in: International Journal of Communication 11 (2017), 1745–1754.