I recently wrote about how dishonesty is a matter of course in modern (Baconian) science (and here) with regard to the British climate memo scandal. Yesterday I ran across a post of penetrating insight on how such dishonesty can fester and what can be done about it:
Truthfulness in science should be an iron law
by Bruce Charlton, Editor-in-chief, Medical Hypotheses
The article fills in details of how the apparently altruistic motivation of Baconian science, as I put it, "easily slides from 'benefit mankind' to 'benefit my scientific career' or 'benefit a political program.'" I recommend reading the whole thing. In case you're in a rush, the post starts with a summary abstract, but for a bit more of the substance, below are a few of the best excerpts. I've added a comment in brackets.
Scientists are usually too careful and clever to risk telling outright lies, but instead they push the envelope of exaggeration, selectivity and distortion as far as possible. And tolerance for this kind of untruthfulness has greatly increased over recent years. So it is now routine for scientists deliberately to ‘hype’ the significance of their status and performance, and ‘spin’ the importance of their research.
Furthermore, it is entirely normal and unremarkable for scientists to spend their entire professional life doing work they know in their hearts to be trivial or bogus – preferring that which promotes their career over that which has the best chance of advancing science. Indeed, such misapplication of effort is positively encouraged in many places, including some of what were the very best places, because careerism is a more reliable route to high productivity than real science – and because senior scientists in the best places are expert at hyping mundane research to create a misleading impression of revolutionary importance.
...
So, in a bureaucratic context where cautious dishonesty is rewarded, strict truthfulness is taboo and will cause trouble for colleagues, for teams, for institutions – there may be a serious risk that funding is removed, status damaged, or worse. When everyone else is exaggerating their achievement then any precisely accurate person will, de facto, be judged as even worse than their already modest claims. In this kind of situation, individual truthfulness may be interpreted as an irresponsible indulgence.
Clearly then, even in the absence of the sort of direct coercion which prevails in many un-free societies, scientists may be subjected to such pressure that they are more-or-less forced to be dishonest; and this situation can (in decent people) lead to feelings of regret, or to shame and remorse. Unfortunately, regret and shame may not lead to remorse but instead to rationalization, to the elaborate construction of excuses, and eventually a denial of dishonesty.
...
Overall, senior scientists have set a bad example of untruthfulness and self-seeking in their own behaviour, and they have also tended to administer science in such a way as to reward hype and careful-dishonesty, and punish modesty and strict truth-telling. And although some senior scientists have laudably refused to compromise their honesty, they have done this largely by quietly ‘opting out’, and not much by using their power and influence to create and advertise alternative processes and systems in which honest scientists might work.
The corruption of science has been (mostly unintentionally) amplified by the replacement of ‘peer usage’ with peer review as the major mechanism of scientific evaluation. Peer review (of ever greater complexity) has been applied everywhere: to job appointments and promotions, to scientific publications and conferences, to ethical review and funding, to prizes and awards. And peer review processes are set-up and dominated by senior scientists.
Peer usage was the traditional process of scientific evaluation during the Golden Age of science (extending up to about the mid-1960s). Peer usage means that the validity of science is judged retrospectively by whether or not it has been used by peers, i.e. whether ideas or facts turned-out to be useful in further science done by researchers in the same field. For example, a piece of research might be evaluated by its validity in predicting future observations or as a basis for making effective interventions. Peer usage is distinctive to science, probably almost definitive of science.
[Government funding of science skyrocketed after the War. With more funding came more scientists and more scientific papers; more publications meant a prospective filtering process was needed. Also: more funding alloys the love of truth with other motivations. LG]
Peer review, by contrast, means that science is judged by the opinion of other scientists in the same field. Peer review is not distinctive to science, but is found in all academic subjects and in many formal bureaucracies. When peer usage was replaced by peer review, then all the major scientific evaluation processes – their measurement metrics, their rewards and their sanctions – were brought under the direct control of senior scientists whose opinions thereby became the ultimate arbiter of validity. By making its validity a mere matter of professional opinion, the crucial link between science and the natural world was broken, and the door opened to unrestrained error as well as to corruption.
...
Honest individuals are clearly necessary for an honest system of science – they are the basis of all that is good in science. However, honest individuals do not necessarily create an honest system. Individual honesty is not sufficient but needs to be supported by new social structures. Scientific truth cannot, over the long stretch, be a product of solitary activity. A solitary truth-seeker who is unsupported either by tradition or community will degenerate into mere eccentricity, eventually to be intimidated and crushed by the organized power of untruthfulness.
...
A Great Awakening to truth in science
The best hope of saving science from a progressive descent into complete Zombiedom seems to be a moral Great Awakening: an ethical revolution focused on re-establishing the primary purpose of science: the pursuit of truth.
In using the phrase, I am thinking of something akin to the periodic evangelical Great Awakenings which have swept the USA throughout its history, and have (arguably) served periodically to roll-back the advance of societal corruption, and generate improved ethical behaviour.
Such an Awakening would necessarily begin with individual commitment, but to have any impact it would need to progress rapidly to institutional forms. In effect there would need to be a ‘Church’ of truth; or, rather, many such Churches – especially in the different scientific fields or invisible colleges of active scholars and researchers.
I use the word ‘Church’ because nothing less morally-potent than a Church would suffice to overcome the many immediate incentives for seeking status, power, wealth and security. Nothing less powerfully-motivating could, I feel, nurture and sustain the requisite individual commitment. If truth-pursuing groups were not actually religiously-based (and, given the high proportion of atheists in science, this is probable), then such groups would need to be sustained by secular ethical systems of at least equal strength to religion, equally devoted to transcendental ideals, equally capable of eliciting courage, self-sacrifice and adherence to principle.
...
Much of Charlton's post reminded me of Alexander Solzhenitsyn's essay "Live Not By Lies." Solzhenitsyn's writing was characteristically Slavic in its directness, but it should equally be recalled that his living witness to the truth in suffering is what gave him his razor-sharp insight. In any event, I think the pointed resolutions he recommends at the end of his essay could inspire similar (New Year's?) resolutions for scientific honesty.
There you have it: there is no substitute for the value of transcendental truth. Only by placing this truth above all other goods is it possible to maintain scientific integrity, which after all ought to be about truth. Somehow there need to be institutional 'incarnations' of this principle (appropriately enough for today's Feast of the Epiphany).
In recommending further reading, Dr. Charlton unfortunately did not link the articles. For your convenience, here are the links:
‘Peer usage versus peer review’ (BMJ 2007; 335:451)
‘Zombie science’ (Medical Hypotheses 2008; 71:327–329)
‘The vital role of transcendental truth in science’ (Medical Hypotheses 2009; 72:373–376)
‘Are you an honest academic?’ (Oxford Magazine 2009; 287:8–10)
As I hinted in my comment, I am skeptical of how practicable a return to peer usage would be. However, I am eager to read more of Dr. Charlton's thoughts on the matter. He is clearly a man who has pondered these issues!
(h/t The Joy of Curmudgeonry for pointing out the blog)
Bruce G. Charlton, ‘Truthfulness in science should be an iron law’, Medical Hypotheses 73 (2009), 633–635.