You take the question "if a tree falls in the woods and nobody is there, does it make a sound?" seriously, don't you? I mean, go back and look at some of the early film of guys pulled out of sieges and trenches in WWI who were just obviously beyond their senses, shaking uncontrollably, etc. Do you need a double-blind study to confirm they had shell shock?

JohnStOnge wrote:
Just to show you guys that I'm not the only one who thinks as I said I think, here is an op-ed by a Professor of Psychology at the University of Virginia arguing against those who dismiss his field as a "soft" science or not a science at all:
http://articles.latimes.com/2012/jul/12 ... s-20120712
And he shows that he doesn't understand statistical experimentation by stating this:

"An often-overlooked advantage of the experimental method is that it can demonstrate what doesn't work."

No. It can't. Well, theoretically it can if the treatment actually has an effect that is the opposite of what is desired. You could set up an experiment, for example, to test whether Crestor actually increases cholesterol levels. And if you show that Crestor increases cholesterol levels, then you've shown it does not decrease them.
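Just to make that hypothetical concrete, here's a quick Python sketch of what such a one-sided test would look like. Everything here is made up for illustration (the readings, the group sizes, the drug scenario); I'm just using scipy's two-sample t-test with a "greater" alternative:

[code]
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Made-up LDL cholesterol readings (mg/dL) for two groups of 50.
placebo = rng.normal(loc=130, scale=15, size=50)
drug = rng.normal(loc=130, scale=15, size=50)

# H0: the drug does NOT increase cholesterol (mean_drug <= mean_placebo)
# Ha: the drug DOES increase cholesterol (mean_drug > mean_placebo)
t_stat, p_value = stats.ttest_ind(drug, placebo, alternative="greater")

if p_value < 0.05:
    print(f"p = {p_value:.3f}: reject H0 -- evidence the drug raises cholesterol,")
    print("which would also rule out the claim that it lowers it.")
else:
    print(f"p = {p_value:.3f}: fail to reject H0 -- insufficient evidence of an increase.")
    print("Note: this does NOT show the drug has no effect.")
[/code]

Notice the asymmetry in the two outcomes: rejecting H0 lets you infer something, but failing to reject only says the evidence wasn't sufficient.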
But that's clearly not what he's talking about. He clearly does not understand that failing to reject the null hypothesis does not mean you have shown it to be true. Either that or he's being misleading. There are only two conclusions you can legitimately state after completing a statistical experiment. One is that the null hypothesis of "not the effect we're looking for" has been rejected; in that case the alternative hypothesis...the one you're looking to support...is inferred. The other is that there is not sufficient evidence to reject the null hypothesis. But you can NEVER say, "I've shown the null hypothesis to be true." And that's what he's saying when he claims an experiment can demonstrate that a method doesn't work.
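If you want to see why "not significant" can't mean "shown to be false," here's a little simulation (again, all numbers invented by me): build a real effect into the data, run an underpowered experiment many times, and watch how often the test fails to reject a null that is actually false:

[code]
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
trials, misses = 1000, 0

for _ in range(trials):
    # A real effect is built in: treatment shifts the mean by 5 units.
    control = rng.normal(loc=100, scale=15, size=10)  # small, underpowered groups
    treated = rng.normal(loc=105, scale=15, size=10)
    _, p = stats.ttest_ind(treated, control)
    if p >= 0.05:
        misses += 1  # failed to reject H0 even though H0 is false

print(f"Failed to reject the (false) null in {misses / trials:.0%} of experiments.")
# With samples this small, that happens most of the time -- so "no significant
# difference" obviously can't mean "we've shown there is no difference."
[/code]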
Yes, I understand if you don't believe me, since I'm just a regular guy and he's a professor of psychology at the University of Virginia. But I'm right. And that calls into question whether he would even recognize a quality statistical experiment when he sees one.
C'mon man.

