Any statisticians out there?
|
Doug Hemken wrote:Mark, given your description of the design, "effect size" is likely based on paired differences? So the pre- and post- summary statistics don't really tell the story ... that's the whole reason paired differences is useful. I take it effect size is mean(paired differences)/sd(paired differences)?

It's not clear to me what effect size actually means here, but that seems like a reasonable definition. Under the assumption of positive correlation between the paired measurements (it does look like a paired analysis; my original computations used only the numbers given, and I assumed it wasn't), we can use the given variances to compute an upper bound on the actual (paired) variance. This is because Var(X - Y) = Var(X) + Var(Y) - 2Cov(X, Y), so with positive covariance, Var(X - Y) is less than Var(X) + Var(Y). (I can't think of a paired experiment where there would be negative correlation, so we'll run with this.)

This gives an upper bound of around 20 for the standard deviation of the differences (of course this is a point estimate; 20 is not an upper bound that accounts for sampling variability). So a rough point estimate for the effect size is 9/20 = 0.45, and 0.7 seems plausible, I suppose. On the other hand, one way to get nearly identical marginal standard deviations for both your pre and post groups is to have every pair change by the same constant, i.e. an infinite effect size (you'd be dividing by a standard deviation of 0 in our definition of effect size). |
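A minimal sketch of that bound, with hypothetical numbers (the thread's actual pre/post SDs aren't shown; the 14.2 values below are just chosen so the marginal variances sum to roughly 20², and the mean difference of 9 matches the 9/20 figure above):

```python
import math

# Hypothetical summary statistics -- not the thread's real data.
sd_pre, sd_post = 14.2, 14.2   # assumed marginal SDs of pre and post scores
mean_diff = 9.0                # assumed mean of the paired differences

# Var(X - Y) = Var(X) + Var(Y) - 2*Cov(X, Y); with Cov(X, Y) >= 0 this
# gives an upper bound on the variance of the paired differences.
sd_diff_upper = math.sqrt(sd_pre**2 + sd_post**2)

# Dividing the mean difference by the upper-bound SD gives a lower-bound
# point estimate of effect size = mean(differences) / sd(differences).
effect_size_lower = mean_diff / sd_diff_upper

print(round(sd_diff_upper, 1))      # 20.1
print(round(effect_size_lower, 2))  # 0.45
```

The real effect size could be anywhere above this value, which is why a constant per-pair change (zero SD of differences) sends it to infinity.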
|
It has been my experience as a statistical consultant that most of our clients mean some version of Cohen's D when they talk about effect size. |
|
Doug Hemken wrote:Mark, given your description of the design, "effect size" is likely based on paired differences? So the pre- and post- summary statistics don't really tell the story ... that's the whole reason paired differences is useful. I take it effect size is mean(paired differences)/sd(paired differences)?

It's described as being calculated using Hedges' g, if that helps: mean post minus mean pre, all divided by the pooled SD. Does that clarify anything? |
|
I get Hedges' g of -0.54 and Cohen's d of -0.62 using the formulas here: polyu.edu.hk/mm/effectsizef… |
|
Hedges' g being a bias-corrected Cohen's d. |
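For reference, the two formulas can be sketched as follows. The means, SDs, and sample sizes in the usage example are made up for illustration, not the thread's actual data:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / sd_pooled

def hedges_g(d, n1, n2):
    """Hedges' g: Cohen's d times the small-sample bias correction
    J = 1 - 3 / (4*(n1 + n2) - 9)."""
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

# Illustrative numbers only (not from the study being discussed).
d = cohens_d(50, 41, 14, 15, 30, 30)
g = hedges_g(d, 30, 30)
print(round(d, 2))  # 0.62
print(round(g, 2))  # 0.61
```

Note that |g| is always slightly smaller than |d|, and the gap shrinks as the sample sizes grow, which is consistent with g coming out smaller in magnitude than d above.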
|
If the language they use is actually "post intervention," it certainly seems like they have a completely randomized block (CRB) design. Unless they are clueless, I have to take "pooled variance" to mean MSTR.BL (or whatever notation you prefer for the treatment-by-block interaction mean square). Interestingly, they don't provide summary statistics that would allow you to reproduce this value. It's entirely conceivable that 0.7 could be your value, though I'm not thrilled with how they are disseminating results. |
|
The phrase "an experimental group" is confusing. |
|
Not a statistician here. Was merely forced to teach myself stats to complete my research, haaa. Luckily I also had a team of professional statisticians paid on my grant to advise me from time to time, but honestly asking a statistician to give you input on your problems is like asking a firefighter to clean chocolate off your face by turning on the firehose. |
|
Aerili wrote:... that shit is fuckin' voodoo.After 20+ years I'm often inclined to agree! |
|
This is why I stay away from data. |
|
Aleks Zebastian wrote:climbing friend, May I suggest you get a girlfriend?I'm pretty sure statistics are less confusing than women, though not as fun to on-sight. |
|
Not sure what any of this means ^^ |