22 Words

Scientists explain their processes with a little too much honesty [17 pictures]

June 27, 2013 | By Abraham | 39 comments

A few months ago, scientists started making “secret” confessions on Twitter using the hashtag #overlyhonestmethods. And now some kind soul (who probably should have been doing their research instead of playing around on the internet) has combined a few of these tweets with lab photos for the rest of us to enjoy…

[17 images: #overlyhonestmethods tweets paired with lab photos]

Share the ridiculous honesty with your friends using the buttons below…


    1. Sonya says:

I believe it is healthy for us to admit that this type of thing goes on regularly in science. Part of the reason I left the science research field was that everyone in my lab knew that many decisions are made based on convenience or compliance with regulations, not on the best use of the scientific method, but nobody would acknowledge that it happens or that it affects the conclusions that are drawn. In order to fight ‘junk science’, we must acknowledge that it exists and is more prevalent than we would like to think.

      1. Chelsea says:

Don’t forget that when new data, whether your own or another group’s, are inconvenient to your hypothesis, they’re explained away. This reason and yours are why I left research as well. I don’t even call it “junk science,” since it’s just the status quo (at least in molecular biology).

        1. Andrew says:

I fought with my PI because my fact-based conclusions disagreed with his agenda. I argued that using drug candidates with off-target effects of unknown cause meant that our conclusions about the receptor, based on cellular assays and in vivo mouse behavior studies, were suspect due to possible confounding from those off-target effects. He insisted that one would have to prove that the off-target effects cause problems with interpreting experimental results, and that his conclusions were right until proven otherwise, which is the exact opposite of the scientific method. He also insisted that one would have to prove the drug candidates were dangerous rather than reasonably safe (the latter being the FDA requirement, since the easiest way NOT to prove something is dangerous is not to test it). I could go on and on, but what I’ve said (and have written evidence of) is enough to show how bad bad science can be.

          On the other hand, we who love good science can fight to take back our fields. It will not be easy and it will not be without a price, but we can do it. And the first step is to remember Nuremberg: “Just following orders is no excuse” – say no to bad science.

        2. Simone says:

I think the real problem here is that journals don’t want to publish “negative” results. Suppose I take a reasonably interesting hypothesis and spend a year studying it in all its aspects and experimenting on it thoroughly. Whether the hypothesis is ultimately proven true or false doesn’t change the fact that I worked on it, and that should be acknowledged; if it’s wrong, I just proved so, and that will help future researchers scrap the hypothesis from the start. But the general feeling is that only positive results should be published, which leads to dialectical acrobatics to present negative results as positive.

      1. Rick says:


        People were having fun with a hashtag on Twitter. That doesn’t mean that they are pouring out soulful confessions. Some of these might not be 100% accurate. And most of them would be meaningless scientifically even if stipulated to be true.

        And any decent editor is capable of dealing with any conflict between a troublesome referee and authors who ignore said referee. Authors who simply ignore a referee’s comments risk being told that they need to restart the submission process from scratch.

    2. Mark says:

      It’s only junk if they do not report the exact parameters of their methodology. If their results are reproducible, and their rationale is sound, who cares if they work a 9-5 day?

      I’ll admit it is important to have some pre-set standards – spin at this force, check at these time-points, but many other things are all super arbitrary anyway.

      Reviewer 1 usually is a f***ing idiot anyway.

      1. Z says:

        Indeed. It’s not like you can just randomize or control every element of an experiment. The first one is a combinatorial explosion, while the second one is only feasible within reasonable limits. Would we all like our social science experiments to have one million representative subjects from all countries and continents? Sure. Would we like to explore all possible combinations of reagents, machine settings, and other parameters? Yup. Is it going to fit into the paltry <$100k annual grant to fund that experiment? Nope!

The truth is, experimental design is, well… design. Design is an ill-defined domain that falls somewhere between art and science. Everybody needs to make some assumptions in their work. A LOT of assumptions. If you didn't, you simply couldn't do anything at all. Assuming that protein synthesis works the same from 9 AM – 5 PM as it does from 5 PM – 9 AM? That seems like a pretty safe assumption (unless your lab electricity breaks down regularly between the hours of 1-3 AM, I suppose). I mean, how many of these are actually bad science? Certainly #1 (if you want to publish that, better go pick up some muffins and corner that postdoc), #2 (over-reliance on convenience samples is an endemic problem for social science), and the rat thing (I doubt Tom Petty was disclosed to the IRB).

#3 (Paywall issues), maybe? It depends why you're citing it. Certainly, you have to read all the papers that are central to your arguments in full. However, sometimes you just want to cite that someone did something (e.g., the existence of such work). Or maybe you read the conference paper, but the expanded journal article is behind a paywall. Based on the abstracts, you know the conclusions are the same for the conference paper and full paper, and feel that people should read the article if they can (even if you can't). In other cases, you're really referring to a *person* rather than a specific book, via their seminal work. In that case, there is sometimes an "expected citation" which is used, even if no one in their right mind would actually track down the original print (original Chinese holdings of Sun Tzu's Art of War, anyone?). The same holds for major projects: sometimes you want to cite the one you looked at for 2 minutes but the first author is the PI (rather than the one you read for 2 hours by his baker-postdoc), so that people can actually track down their larger body of work.

        Thankfully, I've had good institutional holdings throughout my career, so it's easy to grab the paper if I haven't read it and at least go through it quickly. But if you aren't that lucky, what's the alternative? If you're at a 3rd tier institution in Ghana with no subscriptions beyond what Google gives you, should you just pack up your bags and not do science? It's certainly an issue for the research community that scientists are being shut off from publications, but I don't see the harm in having individual scientists trying to do the best with the limited information that they have. References fundamentally are intended to give attribution and point people to other good, relevant supporting work. If they do that, I'm happy. If they don't do that, a decent reviewer should notice it.

        Reviewers – Sometimes reviewers are wrong or misread things. Sometimes you use language that was too ambiguous, causing reviewers to ask for nonsensical things (pro-tip: Never state that something "correlates well with" in your discussion unless you mean quantifiable statistical correlation. Even if the two entities are things that are obviously non-quantifiable, such as people or books, someone will ask you for the numbers.) While you don't tell them to sod off, you do need to gently state that the requested "corrections" were not made because they were invalid or ill-advised, usually while clarifying your phrasing to make such misconceptions impossible.

        Almost all the rest of these are just fixing parameters, basically. If no literature states that those parameters are important, you have no reason to think they're important, and they are not the focal point of your study? Who cares what you set them to? As long as you report it, great. Science without fixing some parameters is like Philosophy without any axioms. You'd have nothing to stand on.

      2. GvS says:

Time points tend to be arbitrary too. Even though one should randomize the intervals to avoid systematic bias, no one does that, and 99.9% of commercial software doesn’t even have an option to randomize measurement intervals.
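The randomization GvS describes is simple to sketch. Below is a minimal, hypothetical helper (the function name and parameters are invented for illustration, not taken from any acquisition software) that adds random jitter to an otherwise evenly spaced sampling schedule so measurements don't always fall on the same clock-aligned times:

```python
import random

def randomized_timepoints(start_h, end_h, n, jitter_h=0.5, seed=0):
    """Return n sampling times (in hours) between start_h and end_h.

    The schedule starts evenly spaced; each interior point is then
    shifted by a uniform random offset in [-jitter_h, +jitter_h].
    The first and last points stay fixed so the study window is kept.
    """
    rng = random.Random(seed)  # seeded, so the schedule is reproducible
    step = (end_h - start_h) / (n - 1)
    times = [start_h + i * step for i in range(n)]
    # Jitter only the interior points; endpoints anchor the experiment.
    return (
        [times[0]]
        + [t + rng.uniform(-jitter_h, jitter_h) for t in times[1:-1]]
        + [times[-1]]
    )
```

As long as `jitter_h` stays below half the nominal spacing, the jittered times remain in order, so downstream analysis that assumes monotonically increasing time stamps still works. The actual times used, not the nominal ones, should be what gets recorded in the methods section.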

    3. al says:

If anything, this is how we avoid ‘junk science’: transparency of method is paramount to being able to critically evaluate and replicate experiments.

    4. Misha says:

      I think you are misapplying the term “junk science.” I think it’s meant to describe purposely biased science (in favor of an agenda). I’m sure we can find links regarding that.

  1. Surly says:

    No, I wouldn’t say this leads to junk science. The cheap and convenient methods discussed here are hilarious, but they save time and money. And they still give perfectly valid results.

    I did field research that involved dissecting frozen mice. Sometimes I laid several out on a sunny slab of concrete to thaw. I had to stand by with a rock to make sure the monkeys didn’t steal and eat them. Meanwhile, one of my lab partners did immune assays while carrying frozen shrews in her bra.

    Both of those thawing-acceleration methods were sketchy and hilarious. But they had no effect on the rodents’ internals. And most of the stuff under #overlyhonestmethods doesn’t actually affect the results they looked at, either.

    1. Nicole says:

      Yep. For example, we do the time point one all the time. As long as they start and end on the correct days, and there are a sufficient number of sampling points in between, it really doesn’t matter which day we run them on if we’re trying to avoid a weekend. The results are not affected.

    2. Bonnie says:

Yep. I had a method that required a three-day incubation, and it was convenient to set it up before the weekend. Actually, planning your methods so you can use the overnight or weekend breaks as incubation times, or matching the methods to your teaching schedule, is the sign of a well-organized researcher. More gets done, less time is wasted. Of course, that doesn’t mean bending incubation/experiment times to be convenient, only choosing which method to do now.

    3. Z says:

      “carrying frozen shrews in her bra”

      Well, that is a variant of The Taming of the Shrew I had not heard of…

  2. Zaf says:

    I agree results do not change with modification, as long as one is not dishonest in methodology and reporting. In one experiment I had to do vacuum filtration. Yes, to your surprise, I did not have a vacuum pump. I used the vacuum-sweeper and connected the hose to the filtration flask (with a trap in between so that the sweeper is not ruined). The findings were published in a refereed research journal. In my grant proposal, a request was made for a vacuum pump. I should not have done this, but I did mention my existing device. The provost got very upset, and when the time for my promotion and tenure came he denied it, despite the recommendation of the departmental and college review committees and the dean. I appealed to the Board of Regents. They examined my quality of research publications and I got my promotion and tenure. Yes, the vacuum sweeper saved me from getting fired!!

    1. Tink says:

      Only if you believe all climate science can be done in a lab, or that all scientists are climate scientists…

It doesn’t necessarily follow that any of the above are in any way dishonest about their research, or making up results. They are, however, choosing when or where to take those results, for how long, etc. So long as they don’t pretend their methodologies were different to reality, it doesn’t matter. Science doesn’t care whether you tested something on a Saturday night or a Monday morning; it just cares that you keep conditions reproducible. Whilst we need to be on the lookout for potential causes of bias, not everything you do will cause bias (and despite everything that you do, you may not be able to prevent it anyway).

Incidentally, the time I spent in research showed me how much time and energy scientists put into all their work. Evenings, weekends, holidays; science waits for no man or woman. Many are pressured into spending more time at work than is healthy, and certainly much more time at work than most of us would like. They still need to go out, eat, live, etc., and aren’t even particularly highly paid for the time they put in, as others have noted. I’m always amazed by how little we all consider people with unpredictable, unsociable hours.

  3. Janarthan Rama Murti says:

These confessions are so awesome!!!!!!!!!!!!!
Bring out more of them :)
This is so useful… please don’t think you’re doing something wrong. I believe these confessions move science forward…

Imagine what Alexander Fleming’s confession would be (he discovered penicillin):

“I left culture plates growing fungus in the lab sink instead of washing them immediately, because I didn’t organise my lab well and didn’t want to work on Sundays. But hey, that helped me discover penicillin!” lol

    1. Janarthan Rama Murti says:

These confessions are useful. They help us identify possible bias in our experiments and studies, hence helping us improve our research.

  4. Rachel says:

This happened a lot at uni, I remember :p Thankfully my current place isn’t too bad. Whenever we collect samples, we spend the week in the field and then bring everything back to the lab; most of the work is already done on the trip. The rest is just final identification :)


  6. Till says:

I’d agree that most of these confessions could be true, and not all of them are “sloppy science”.

But I have an issue with the presentation here: scientists are, of course, always people in white coats in labs, even when the confessions clearly point to sociological or psychological studies (e.g., the one about 50 Ivy League kids).

  7. CK says:

Take these as a laugh. And yes, as scientists, we have all heard these funny claims, such as that the reason for overnight incubation is just that we all want to go home for dinner, which is very true. But if overnight incubation does not hamper any of the final findings, why not? Assume that most of us scientists are intelligent people; we know what we can and cannot do, right? At the end of the day, scientists are still human (and lowly paid); we still need to eat, we still need to do our shopping and laundry. We all wish we had food replicators like in Star Trek, don’t we? Nor do we have NS-5 robots to take care of our household work. We need time off from our ‘job’, just like anybody else. The other issue some of these tweets bring up is money. You often have to ask why we still have to use a spectrophotometer or a centrifuge that is almost as old as we are. Why do I have to write a grant for ‘lab-proofed’ chairs or stools costing 5x more than those at IKEA that look exactly the same? These stories will go on forever in the science community. We can only say that life is getting tougher for us.

    1. Darius says:

I agree with CK. We are still human and need to have a life; you cannot work properly if you feel like an enslaved bench monkey. And PhDs need a proper end. One cannot do everything; that’s why things are published, so that others can carry on with what is missing.

  8. Chelz says:

I have to admit to using the “quick-thaw” method as well. Trying to do a Bradford assay? Forgot to bring the Coomassie Blue to room temp? No prob. Make certain it’s well sealed and pop it into your bra. Voila! (But that’s not going into the methods, thank you very much.)

  9. Mitz says:

I was surprised! I have used sample Eppendorf tubes as earplugs, to block out machine noise as well as for a quick thaw. An intelligent way to perform RT incubation!

  10. Another reader says:

    It just goes to show that T.S. Kuhn was right. Science is a community, not a set of methods (that everyone agrees everyone should agree on, even though they don’t).
