Friday, December 28, 2007

Peer-Reviewed Research

This article on peer-reviewed research methods caught my eye. This quote from the article sums up how subjective a claim to validity based on "peer-reviewed" status can be:
Couldn't a group of individuals committed to promoting their own research -- which may or may not be well-founded -- get together to form their own "journal," which they could legitimately claim publishes "peer-reviewed research"?

They can, and they do.

1 comment:

t.k.foster said...

All that article addressed is blog-based authoritarian peer review; it never once showed a weakness in experimental peer review. These are two very different ideas, the first being practiced by less intellectual publications.

Peer review is a process that follows the scientific method: someone proposes a hypothesis, tests the hypothesis, repeats the tests, and comes to a conclusion. In order for someone to get peer-reviewed approval in a quality publication, those who are reviewing must be able to repeat the results of the experiment (if someone got a statistically significant Cramér's V on abstinence and sex throughout his/her experiments, the reviewers must also get the same in their experiments; if it's not the same, the article is trashed).
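As a concrete illustration of the statistic mentioned here, Cramér's V measures the strength of association in a contingency table. The sketch below uses hypothetical counts (the 2x2 table is invented for illustration, not taken from any study the commenter describes); a reviewer re-running the experiment would look for a similarly sized V from their own table, not identical raw counts.

```python
import math

def cramers_v(table):
    """Cramér's V from a list-of-lists contingency table of observed counts."""
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    # Pearson chi-squared statistic: sum of (observed - expected)^2 / expected
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    k = min(len(table), len(table[0]))  # smaller dimension of the table
    return math.sqrt(chi2 / (n * (k - 1)))

# Hypothetical 2x2 table: program participation (rows) vs. outcome (columns)
observed = [[30, 10],
            [15, 25]]
print(round(cramers_v(observed), 3))  # -> 0.378
```

V ranges from 0 (no association) to 1 (perfect association), which is why a replication that finds a drastically different V would cast doubt on the original result.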

If the results are the same and the reviewers evaluate the conclusions and think they are appropriate, they will approve (evaluation of the controls and variables determines whether the conclusion is supported or not). If they do not get the same results, it is not approved. If it is lacking in variables and controls, the article will be sent back stating that "more testing needs to be done." Most peer-review processes at quality publications (like the NEJM) take years and years before publication, because one person may have tests that say one thing, but the other scientists will tell them they aren't testing for all the variables.

My friend Ashley and I have been doing research on gender and religion, to be submitted for peer review, for two years. Although six different professors have been able to replicate the results of our experiment (getting a similar Somers' d_YX), they have sent it back to us three different times stating that the "control is lacking and other variables need to be added before coming to this conclusion." We have added more controlled environments and tested for other possible variables, but at this point we still have more work to do. The point being that although they may think the conclusion is possible because they have replicated our results, they still want the controls and variables evaluated so that this is the only conclusion one can come to when evaluating the data. Of course, those of us who go about this method understand this and realize how hard it is to do (the blog you listed is showing a weakness of one publication which is quite easy to get "peer-reviewed" in but lacks a lot of credibility due to the non-repetitive nature of the journal).
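For readers unfamiliar with the Somers' d_YX statistic mentioned here, it is an asymmetric measure of ordinal association between a predictor X and a response Y. A minimal brute-force sketch, with invented ordinal data standing in for whatever scales the actual study used:

```python
from itertools import combinations

def somers_dyx(x, y):
    """Somers' d_YX: (concordant - discordant) / (pairs not tied on X)."""
    concordant = discordant = tied_x = 0
    n_pairs = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        n_pairs += 1
        if xi == xj:
            tied_x += 1  # pairs tied on X are dropped from the denominator
        elif (xi - xj) * (yi - yj) > 0:
            concordant += 1
        elif (xi - xj) * (yi - yj) < 0:
            discordant += 1
        # pairs tied on Y only are neither, but stay in the denominator
    return (concordant - discordant) / (n_pairs - tied_x)

x = [1, 1, 2, 2, 3, 3]  # illustrative ordinal predictor
y = [1, 2, 2, 3, 3, 3]  # illustrative ordinal response
print(round(somers_dyx(x, y), 3))  # -> 0.75
```

Values run from -1 to 1; "replicating a similar Somers' d_YX" means independent data sets yield a statistic of comparable size and sign, not identical observations.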

(Without basic scientific knowledge, one would not understand the number of controls and variables one must test for to come to a single conclusion, and then repeat it all and hope that others get the same results; these other results are often added at the end of the peer-review process so that others can check the data even further. Take, for example, a peer-reviewed study on gravity. If I theorized that a dropped egg would fall to the ground, I would have to test it multiple times. Once I did that, I would have to control for variables: for example, does it always fall relative to the surface of the ground, like water, sand, etc.? Then I would have to control for the air and wind, the weather, and so on. The point being that scientists check for all these other factors before jumping to conclusions, and peer review is then scientists repeating the same experiments with the same controls and variables and sending their data back to you as well. Many can then use that data as support, or, if there are problems, use it to address those problems. However, if someone says, "Oh, I have a Ph.D.," and doesn't replicate the study (and we can check to see if he/she did), then that is nothing but someone exerting ad verecundiam authority over an article. Repeated tests don't lie; some people do. This is why we test, and test, and test some more.)

Either way, the article you quoted shows no weakness in the strong type of peer review described above; it shows why people just stamping an approval sign without researching it themselves, while claiming to be "experts" in the field, is very weak indeed.

Maybe if people learned how peer review is supposed to work, you wouldn't have the weak publications the author of the article mentions. The problem isn't with peer review, but with "experts" who are too lazy to actually do the research themselves, send their results back to the people they are evaluating, and offer suggestions until all the possible controls and variables are looked at.

(The irony of it all being that several comments on the guy's blog showed up his false stereotype, but of course, why am I not surprised that wasn't evaluated whatsoever?)