Saturday, December 17, 2005

Scientific misconduct

You've probably heard about the ongoing Korean Cloning Controversy. (If not, try this and/or this article.)

If the allegations are true, then this would be a pretty significant case of scientific fakery. I'm reminded of the double whammy in 2002 regarding (1) Elements 116 and 118 and (2) the work of Jan Hendrik Schön.

What has happened to scientific integrity? Have things always been this bad, and never publicized -- or has there been a real decline in professional ethics?

In either case, how can we improve things? Do we need more ethics workshops for students and scientists? Stricter punishment of cheating in undergraduate classes? Less pressure to come up with positive experimental results?


Blogger Vincent said...

Your question of whether scientific integrity is at a low point is a good one. Are we merely witnessing small-number statistics? Has this sort of thing always been going on, and has it simply been more publicized lately? (Think of it as the "Summer of the Shark" syndrome.) Perhaps the work of cheaters and liars is exposed more easily when a field is hot, since false results are quickly contradicted by newer research.

12/18/2005 05:09:00 PM  
Blogger Justin said...

I can't believe any system could eliminate scientific misconduct-- there will always be those whose ambition exceeds their talents.

I suspect that in certain fields it is somewhat easier to fake results today than it was in the past, simply because there are far fewer labs and scientists able to replicate interesting findings. High-energy particle physics experiments, for example, generally happen at one of two accelerators in the world and tend to involve most if not all of the scientists who would be interested in the results as co-authors. An unscrupulous scientist could doctor the data for such an experiment without much risk of detection for quite some time. Likewise, there are probably fewer than 10 labs in the world that are set up to study human cloning. Compare that to experiments that were cutting-edge 50 years ago, most of which can now be replicated in Junior Lab by tens of thousands of undergraduate students. It's a lot easier to give in to the temptation to fake results when you know it would be years before anyone was in a position to try to reproduce them.

In order to combat this sort of challenge, I would propose
1) More formal rules that spell out the obligations and responsibilities of co-authors. I would expect this to lead to different sorts of designations for co-authors, ranging from the current situation, where co-authors have little to no responsibility to independently verify the findings in the paper, up through a designation indicating that the co-author has independently replicated the experiment and its findings and personally vouches for all of the paper's results. That would also make it clearer where credit for results lies. If someone is attaching their name to a paper because it happens to come from their lab, though a lower-level professor/post-doc/grad student did all the work, they'll tend to choose a co-author designation that makes more conservative assertions about what they have done.
2) When a paper is published in a leading journal, all experimental data, in its raw form, must be made available on the journal's FTP/HTTP/etc. site. This makes it easier for others to build upon interesting results, to run other sorts of analysis on the data, and to check that the data itself is reasonable.

12/19/2005 04:00:00 PM  
Blogger Eric said...

If scientists are not allowed to collaborate with each other in a trusting atmosphere, then a lot of good scientific projects will become impossible. Modern science often requires combining highly specialized knowledge, technical expertise, hardware, and software.

Of course, it reflects very poorly on the whole team affiliated with forged data. But holding every coauthor responsible for verifying the entire paper would be totally unreasonable.

If one journal imposes burdensome requirements on coauthors, or for making data publicly available, researchers will simply choose to publish in other journals. (The journals are already facing significant challenges in justifying their existence, and their great cost, in the face of virtually free online alternatives.)

If your experimental results are sufficiently important, then in the vast majority of cases there will eventually be similar experiments that could prove you wrong. (Of course, if you make a sufficiently inconsequential discovery, you may be able to get away with it. But then it's not worth faking on purpose.) This provides a strong incentive to carefully check our own work (in my case, mostly for unintentional software bugs or analysis errors by students), so that we maintain reputations as researchers who can be trusted and taken seriously by our peers. There are plenty of examples of honest results that are later found to be inaccurate, misleading, irrelevant, or plain wrong by subsequent research. And these do affect scientists' reputations. I would think that maintaining our scientific reputations would also provide sufficient motivation to be honest.

Yes, scientific misconduct is very bad, but it's also relatively rare. As long as there aren't obvious conflicts of interest (e.g., demonstrating that a drug is effective or safe), I'm more worried about "honest" inaccuracies (e.g., publication bias, people finding what they expected to find, etc.).

12/19/2005 10:55:00 PM  
Blogger Justin said...

I certainly wouldn't want to put burdensome requirements on coauthors. That's why I suggested having different levels of co-authorship to reflect the actual distribution of work. Scientists should be free to collaborate, but the paper should make clear who is responsible for what, and who is making particular representations. Clearly, a paper describing experimental results that were duplicated by different scientists in different labs will be inherently more likely to be correct than a paper whose experimental results come from a single person at a single site and have never been witnessed by a co-author. Different levels of co-authorship would be a simple way to denote this differing level of credibility. It would also, I suspect, lead to a bit more effort to cross-check results inside the group (i.e., co-authors would want to at least observe certain experiments and validate the details of the analysis in order to earn a more prestigious co-author level on the paper). This would tend to reduce both intentional fabrications and inadvertent errors before publication.

As for journals, I would expect them to be excited to become repositories of experimental results, precisely because that is a function that clearly has value, that is hard for others to duplicate freely, and that scientists ought to be very willing to pay for. Having instant access to raw data would be a boon to other labs, making it easier for them to build on interesting results. It would guard against the accidental (or intentional) loss of important data due to hardware failures on the original scientist's machine. It would help guard against the obsolescence of electronic data storage (e.g., all the NASA data on reel-to-reel tape). It would also make it much more likely that falsified data would be identified quickly.

I suspect most fakery is the result of scientists believing that they are seeing results but not having enough data (or enough good data) to conclude that the results are real. If they believe that the experiment will be refined in the next year or two to bear them out, and they know that it is terribly unlikely that anyone else will be in a position to replicate their results for 5 or 6 years, they may well be tempted to "pretty up" the results now, collect the awards/acclaim/grants/prestige/tenure that the results provide, and expect to "make everything right" by refining the experiment before anyone discovers the deception. The proliferation of very specialized knowledge, hardware, and software, along with very loose rules on collaboration, has made it very likely that "interesting" results won't be verified for many years, simply because it would take that long to assemble another team, to get the appropriate resources scheduled, and so on. This, I believe, has made it more likely that researchers will be tempted into misconduct.

12/20/2005 01:11:00 PM  
Blogger Justin said...

FYI, there is another New York Times article on the subject, Global Trend: More Science, More Fraud.

12/20/2005 03:16:00 PM  
