Implied in Philip Wyatt’s commentary is the idea that creativity and scientific knowledge, as gauged by the number of authors on a paper, have decreased over time. Not only does that simple treatment miss underlying issues related to government funding and the job market, but it also bypasses the key question of how to measure success.
Companies, funding agencies, and universities all need some method of characterizing and judging the scientific vision of researchers to determine those most likely to succeed at a given task. The initial filtering is typically done without ever meeting the scientist face to face; mustering committees to interview every candidate or grant applicant is not worth the headaches and expense. With increased competition and a fast-paced environment, people responsible for hiring and funding must therefore find other indicators of the quality of a candidate’s work. Unlike prospective undergraduate and graduate students, scientists cannot take a standardized test on their specialized knowledge. Thus the number of papers has emerged as the indicator of choice; who, then, can blame researchers for pushing for recognition on even small parts of a published result?
The connection between an increase in the number of authors per paper and any supposed decrease in the creativity or knowledge of researchers seems tenuous at best. There might be more noise, but the signal is probably the same; the problem boils down to one of measuring originality.
Reversing the trend that Wyatt observes is equivalent to finding ways of judging scientists on the merits of their scientific acumen. For example, there have been recent attempts at improving on the impact factor (ref. 1) through the g-index (ref. 2), the h-index (ref. 3), and other metrics that aim to derive some sense of the importance of a particular researcher. Being more nuanced, those methods probably have a better chance of correlating with a researcher’s ability than a straight article count. All scientists should participate in this discussion in as many ways as possible, such as through these pages (see letters in Physics Today, November 2010, page 12, and March 2011, page 9).
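For readers unfamiliar with the metrics named above: a researcher has h-index h if h of their papers have at least h citations each, and g-index g if their g most-cited papers have at least g² citations in total. A minimal sketch of both computations, using hypothetical citation counts:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """Largest g such that the top g papers have at least g**2 citations in total."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

# Hypothetical citation counts for one researcher's papers
papers = [25, 8, 5, 3, 3, 1, 0]
print(h_index(papers))  # 3: three papers have at least 3 citations each
print(g_index(papers))  # 6: the top 6 papers total 45 >= 36 citations
```

The g-index rewards a few highly cited papers more than the h-index does, which illustrates the broader point: each metric encodes a different judgment about what "importance" means, and no single number settles the question.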
If we as scientists can change the rubric used to judge success by rewarding outstanding papers rather than sheer numbers of them, the pressure to publish would decrease, and perhaps in a few decades we would see the trend in authors per paper reverse.