Bem's Guide to Writing a Journal Article - (May/04/2014)
Just came across Daryl J. Bem's guide "Writing the Empirical Journal Article" from 2003. It's about writing psychology papers, but it is quite well known anyway and hosted on many university websites for download (e.g. Yale).
Anyway, I listened to a feature on the radio where they mentioned a rather questionable paragraph of this text (I set it in italics), and I wonder what you think about it. I also find it quite strange and would not do this, since it sounds as if he suggests adapting the hypotheses to the results, and as if you can do whatever you want to get a paper with "positive" results (which is also not correct from a statistical point of view)....
Here is the text:
"Which Article Should You Write?
There are two possible articles you can write: (i) the article you planned to write when you designed your study or (ii) the article that makes the most sense now that you have seen the results. They are rarely the same, and the correct answer is (ii).
The conventional view of the research process is that we first derive a set of hypotheses from a theory, design and conduct a study to test these hypotheses, analyze the data to see if they were confirmed or disconfirmed, and then chronicle this sequence of events in the journal article. If this is how our enterprise actually proceeded, we could write most of the article before we collected the data. We could write the introduction and method sections completely, prepare the results section in skeleton form, leaving spaces to be filled in by the specific numerical results, and have two possible discussion sections ready to go, one for positive results, the other for negative results.
But this is not how our enterprise actually proceeds. Psychology is more exciting than that, and the best journal articles are informed by the actual empirical findings from the opening sentence. Before writing your article, then, you need to Analyze Your Data...."
To some extent he is correct - the results should be interpreted in the context of the larger picture - which may mean that your original hypothesis was flawed in some manner (not necessarily wrong), but in a way that lets you interpret the results differently. For example, if you set out to find out whether protein A affects production of X, and you find that it doesn't, but it does affect something closely related that could be confused with X (an isoform?), then you have a positive result, but it isn't what you set out to find.
On the other hand, if you set out to do that experiment with no clear idea of X, and discover that there is a barely detectable interaction with protein number 583 out of the 1000 that you tested, with P < 0.05, then you are tilting at windmills. Sad to say, this is a common problem in psychology: "We'll find an effect SOMEWHERE if only we look hard enough at enough possibilities." After all, with 1000 tests at P < 0.05 you would expect about 50 such proteins to show an effect by chance alone. Perhaps you could report that as a result!!
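To see where that "about 50" comes from, here is a minimal simulation of the scenario described above (the group sizes, the normal distributions and the t-test are assumptions made purely for illustration, not anything taken from an actual experiment):

```python
# Toy simulation: 1000 independent tests where the null hypothesis is true
# for every protein. At a threshold of p < 0.05 we still expect roughly
# 0.05 * 1000 = 50 "significant" hits by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_proteins = 1000   # number of independent tests
n_samples = 20      # hypothetical measurements per group

false_positives = 0
for _ in range(n_proteins):
    # Both groups come from the same distribution, so any "effect" is noise.
    control = rng.normal(0.0, 1.0, n_samples)
    treated = rng.normal(0.0, 1.0, n_samples)
    _, p = stats.ttest_ind(control, treated)
    if p < 0.05:
        false_positives += 1

print(f"'Significant' proteins out of {n_proteins}: {false_positives}")
```

Running this typically prints a number in the neighbourhood of 50, which is exactly the trap being described.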
Unfortunately too true, phage.
phage434 on Sun May 4 21:26:34 2014 said:
On the other hand, if you set out to do that experiment with no clear idea of X, and discover that there is a barely detectable interaction with protein number 583 out of the 1000 that you tested, with P < 0.05, then you are tilting at windmills. Sad to say, this is a common problem in psychology: "We'll find an effect SOMEWHERE if only we look hard enough at enough possibilities." After all, with 1000 tests at P < 0.05 you would expect about 50 such proteins to show an effect by chance alone. Perhaps you could report that as a result!!
I can understand the point that your results are often not what you expected, and it seems logical to report what you actually found. To be honest, I find that quite normal; you can't really predict everything.
However, the problem here is "psychology"... which in my opinion is (often) not really a science.
Remember the Dutch professor who pretty much made up 20 years of "scientific" work/papers...
(although this is happening with "real science" more and more too...)
http://www.reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328
http://www.sciencemag.org/content/342/6154/60.full
Many large drug/pharma companies don't even care anymore about "scientific academic" publications...
Yes, surely it's psychology that seems to be a "grey area" between science and pseudo-science, or even voodoo...
Anyway, to me these sentences sounded like fishing for the right hypothesis: if one is not supported by your data, you select another one until it fits... and you avoid having hard-to-publish or unpublishable negative results. But that is not really a research plan; it's making a coherent story out of your data post hoc.
It also fits with the practice of collecting so many different measures that you can later ignore the non-significant ones, which of course also produces a bias....
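As a rough sketch of that selective-reporting bias (all numbers here are made up for illustration; nothing is taken from a real study), one can simulate a study that measures many outcomes with a small true effect and then reports only the "significant" ones. The reported effects come out systematically larger than the true effect:

```python
# Toy simulation of selective reporting: many outcomes share the same small
# true effect, but only those reaching p < 0.05 get reported. The reported
# effect sizes are inflated relative to the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.2   # small real effect (in units of standard deviations)
n_outcomes = 200    # different measures collected in one study
n_samples = 20      # measurements per group and outcome

reported = []
for _ in range(n_outcomes):
    control = rng.normal(0.0, 1.0, n_samples)
    treated = rng.normal(true_effect, 1.0, n_samples)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:  # only the significant outcomes make it into the paper
        reported.append(treated.mean() - control.mean())

print(f"true effect: {true_effect}")
print(f"mean reported effect: {np.mean(reported):.2f} "
      f"({len(reported)} of {n_outcomes} outcomes reported)")
```

With these made-up numbers, the few outcomes that clear the significance bar show mean differences several times larger than the true effect of 0.2, which is exactly the bias being described.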
I agree with Bob's first post and also with your concerns. But isn't it common to do that? Even well-known scientists admit it (I don't know if anyone ever said this on record, but Elizabeth Blackburn, for instance, once gave a talk at our institute and, in answer to a question from the audience, said that when you read a paper it is normal that the procedure presented there is not the actual order in which the methods were applied, but rather what was deemed the logical way to arrive at these results and prove the point (i.e. the story that makes the most sense and is most elegant)). That doesn't mean it's made up, only that the order of experiments was reconstructed in retrospect.
In my lab I often heard things like "you have to make a logical story out of it" or "it must have a linear thread", so I thought this was normal? Nobody would accept a paper that states chronologically "we had no clue in the first place, we poked here and there, and 90 % of our experiments failed anyway, so here is what we got" (a bit exaggerated, but basically that's how it is).
Sure, this isn't ideal practice, but isn't it more acceptable than fishing for significance (which I don't think Bem meant in the first place; I think he meant something along the lines of what I just said)?
EDIT: I just read pito's links. Appalling. Reminds me of how those random nonsensical papers you can generate somewhere on the web were actually accepted by some small journals..
There's nothing wrong with discovering a barely detectable effect in an unexpected place in a different experiment. But the next step is to test that result multiple times and make absolutely certain that it is a real effect. Ideally, you design a different experiment testing the same hypothesis a different way. The problem comes when that initial result is immediately reported (often to the institute press office, not in a journal publication).
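For what it's worth, the same kind of toy simulation (same made-up numbers as in the earlier sketch) shows why even a single independent re-test already filters out most chance hits:

```python
# Toy follow-up: take "hits" that arose purely by chance and re-test each one
# once with fresh data. Only about 5% of them survive, so an independent
# replication removes the vast majority of false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_samples = 20

def null_experiment_pvalue():
    """One experiment in which the null hypothesis is true for this protein."""
    control = rng.normal(0.0, 1.0, n_samples)
    treated = rng.normal(0.0, 1.0, n_samples)
    return stats.ttest_ind(control, treated)[1]

initial_hits = [i for i in range(1000) if null_experiment_pvalue() < 0.05]
survivors = [i for i in initial_hits if null_experiment_pvalue() < 0.05]

print(f"chance hits in the first pass: {len(initial_hits)}")
print(f"hits surviving one re-test:    {len(survivors)}")
```

Of roughly 50 initial chance hits, usually only two or three survive the re-test, which is why repeating the experiment (ideally testing the same hypothesis a different way) is such an effective safeguard.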
Of course, there is nothing wrong with what you describe; actually it's pretty normal!
And most of the "amazing discoveries" were made by "accident"!
But it's the way it's done that is often a problem, and especially in the field of psychology there are some weird papers.
BTW, to put it a bit more extremely: if you design an experiment based on what you expect (and "how it should be"), you are often very biased too! So that approach is also often not good!
Tabaluga on Wed May 7 20:31:31 2014 said:
I agree with Bob's first post and also with your concerns. But isn't it common to do that? Even well-known scientists admit it (I don't know if anyone ever said this on record, but Elizabeth Blackburn, for instance, once gave a talk at our institute and, in answer to a question from the audience, said that when you read a paper it is normal that the procedure presented there is not the actual order in which the methods were applied, but rather what was deemed the logical way to arrive at these results and prove the point (i.e. the story that makes the most sense and is most elegant)). That doesn't mean it's made up, only that the order of experiments was reconstructed in retrospect.
In my lab I often heard things like "you have to make a logical story out of it" or "it must have a linear thread", so I thought this was normal? Nobody would accept a paper that states chronologically "we had no clue in the first place, we poked here and there, and 90 % of our experiments failed anyway, so here is what we got" (a bit exaggerated, but basically that's how it is).
Sure, this isn't ideal practice, but isn't it more acceptable than fishing for significance (which I don't think Bem meant in the first place; I think he meant something along the lines of what I just said)?
EDIT: I just read pito's links. Appalling. Reminds me of how those random nonsensical papers you can generate somewhere on the web were actually accepted by some small journals..
IMO these accidental findings can happen, but you should not rely on them as a general working method (e.g. leaving my used petri dishes in a dirty hood and hoping that one day a new antibiotic-producing fungus will grow...).
And especially in the beginning it may sometimes be necessary and/or fruitful to have a flawed or very broad hypothesis, but later you should understand the system well enough to have better hypotheses that don't need to be adjusted to reality so frequently...
And I think all branches of science have such problems, but they might be easier to detect in the natural sciences (by using statistics to find data anomalies, or simply by trying to repeat the experiment, as in the cancer studies linked above).
But not always (see e.g. here), and very complicated and specialised fields such as quantum physics or mathematics sometimes seem just as prone to fraud problems as the social sciences (a classic is the Sokal affair), with their very special and difficult-sounding scientific jargon: