Whereas most employment (please correct me if I’m wrong in assuming this) continues as long as you’re doing what’s required of you to a reasonable standard, scientific research requires employees to constantly prove they’re worth re-employing. This worth is measured in quantifiable outputs: the number of publications (and their journals’ impact factors) and the research funding won each year. This is the “publish or perish” system. But surely, as scientists, we should be questioning and testing whether this is a good system. Does it produce better-quality research? Does it lead to more discoveries? Can it accommodate work that takes decades to complete and whose worth to society may only become apparent many years later, if at all? And how does basic research – the quest for understanding – fit into a paradigm where even government funding stipulates that the research must produce outcomes with commercial potential?
Here is an excellent article from The Guardian in which 2013 Nobel Laureate Peter Higgs addresses the problem of obsessively measuring outputs in an inherently uncertain field of endeavour.
(On the other hand, I would be perfectly happy to see politicians held to similarly rigorous scrutiny of their track records come election time. That’s something for a scientist to dream of…)
Also from The Guardian, another Nobel Laureate explains why his lab will no longer submit papers to top-tier scientific journals, and how the “high-impact bonus” does not encourage the best scientific practice.
I would be interested to find any analyses of how the current system of funding and employment is working, and what the alternatives might be. Please leave comments if you have any leads for me to follow up.