Scientists are just as prone to following fads and overvaluing (or undervaluing) certain brands as anyone else. We get superstitious about which companies or brands (or journals or labs) are the best, often based on almost nothing (or absolutely nothing), just like everyone else.
But there’s one huge place where this can be a serious problem: publications.
The way science gets disseminated is through publishing articles in peer-reviewed journals*. And people talk about the prestige of journals, which is in large part based on impact factor, a single number (with plenty of flaws). And that number largely determines whether a journal is considered “good” or “bad.” Additionally, certain labs are known for publishing in “good” vs. “bad” journals, and where someone publishes (or where a lab typically publishes) is seen as a mark of the quality of the scientist.
*there’s plenty of debate about whether this is even the best way to disseminate information. See my previous post; this blog devoted to publishing research in real time; this article which gives you a taste of how difficult articles can be to read and understand; and this article about the challenges of open access.
Now there are a few obvious problems with this.
- Impact factor is a flawed measure. It’s roughly the number of citations a journal’s recent papers receive divided by the number of papers the journal published (over a two-year window). But people give the highest weight to the journals with the highest impact factors, so those journals get cited more. So you get a positive feedback loop that overestimates the value of the “good” journals.
- We tend to assume that just because something is published in a “good” journal, it must be a good article. And this isn’t actually true. The best journals (Science, Nature, Cell) are also generalists. They tend to publish flashier, more exciting research that is broadly applicable or of interest to a larger variety of people. That means a paper could get rejected from one of those journals, even if it is incredibly high-quality research, simply because it’s too specific or not obviously (at the time) applicable to broader research areas. This creates two sub-problems.
- When someone looks at that researcher’s CV or resume, they only see a publication in a lower-impact journal, unless they take the time to read the actual article (and are familiar enough with the area to realize it was excellently done). This can hurt researchers who do excellent work, especially since the whole point of research is that you don’t know the answer, so it can be difficult to tell in advance whether what you’re studying will be broadly impactful.
- The high-impact journals are more likely to have retractions. Part of this is that they’re read more, so they get more feedback and criticism, so errors are more likely to be caught. But it’s also because the pressure to publish in a good journal (and to publish the new, flashy finding before someone else beats you to it) means that sometimes this research gets rushed, and flaws in the methodology or in the conclusions drawn get overlooked.
- Certain people or labs come to be associated with quality research based on where they publish. But then evaluation of their future publications gets based on the prestige of the lab, which was itself based on where they published before. So you get circular reasoning about lab quality that may have nothing to do with the quality of the current research at all. I have definitely read articles in top journals by labs known for publishing in top journals (and therefore assumed to be doing good science) that are actually terrible! And it has really solidified in my mind that lab prestige and publication prestige aren’t highly correlated with the quality of the research at all. (I should be reading and thinking critically about the actual methods and results anyway, but realistically I can’t read everything, and too often we fall into the trap of choosing what to read based on these baseless ideas about prestige….)
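Since impact factor does so much work in the prestige system above, it’s worth seeing just how simple the arithmetic is. Here’s a minimal sketch of the standard two-year calculation; the journal numbers are made up for illustration, and real-world calculations also involve debates about what counts as a “citable item”:

```python
def impact_factor(citations_this_year, papers_prev_two_years):
    """Two-year impact factor: citations received this year to a
    journal's articles from the previous two years, divided by the
    number of papers the journal published in those two years."""
    return citations_this_year / papers_prev_two_years

# A hypothetical journal that published 200 papers over the last
# two years and whose papers were cited 1600 times this year:
print(impact_factor(1600, 200))  # 8.0
```

Note that this is a journal-level average: a handful of blockbuster papers can carry a high impact factor while most papers in the same journal are cited far less, which is one reason the number says so little about any individual article.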
Of course we also get overly invested in certain research trends (we follow fads in topics to study and methods to use [though really, optogenetics is just too cool :P]). There are definitely cool and uncool things to study (though I’ve never actually heard someone say, “That’s so 10 years ago” about a research topic, I’ve definitely heard the sentiment expressed). And people get defensive about their pipette brands or reagent brands (or sometimes there’s collective hatred for one; I can’t even count the number of times I’ve heard “Don’t get Santa Cruz antibodies; they never work”**). And sometimes this is harmless (I use certain pipettes preferentially for no real reason), but when it comes to something like publications (which are the biggest determinants of your career and funding opportunities), this can be a huge problem.
**Santa Cruz Biotechnology is one of the largest producers of antibodies. The thing about antibodies, though, is that to be most useful, they need to do two things: 1) bind to the thing they’re supposed to bind to and 2) not bind to anything else. There are ways to test that this is true for your antibody, but most companies don’t verify the antibodies themselves. They typically leave that up to the researchers (and then, if you’re lucky, someone will have published with it or posted a review, so you know whether it works before you order it yourself). So because Santa Cruz Biotech produces so many antibodies but doesn’t verify them, there are a lot out there that are unverified. You basically end up with a higher likelihood of getting one that doesn’t work just because they make so many. But then they get the reputation of being unreliable.