Stewart Robert Hinsley wrote:
>
> In article <31D2F779.2A45@ariel.its.unimelb.edu.au>
> damien@ariel.its.unimelb.edu.au "Damien Broderick" writes:
>
> > Chris Lawson wrote:
> >
> > > Meta-analysis, IMHO, doesn't count. [...] This is a bit like
> > > finding 3 inadequate samples of mince meat, and mixing all 3 in the mincer
> > > again.
> >
> > Inadequate for what purposes? You imply `tainted', but parapsychologists
> > have tried (under the whip of their opponents, such as Hyman) to rid their
> > data bases of contaminated data. Are you really trying to tell us that
> > adding 10 smallish samples together will not bring down the standard
> > deviation, proportionately, to the point where an otherwise tenuous effect
> > rises up over the noise level? As you admit, pharmacologists use this
> > procedure all the time. It's not as compelling as levitating on to the White
> > House lawn (and being shot out of the sky), but gimme a break here...
> > Meta-analysis is acceptable in other fields. Only an a priori conviction
> > that psi is crap would make one *more* worried about its use in parapsych.
>
> I'm fairly sure I've seen negative views of meta-analysis outside the
> context of parapsychology. AFAIK, the problem is that meta-analysis can
> introduce biases.
>
> If one does enough experiments one will eventually get one with a result
> a few standard deviations out. Combining this with the rarity of the
> publication of null results gives rise to a bias.
>
> For example, consider a system in which there is no correlation between
> a postulated cause and effect. Say 100 experiments are done, of which 80
> give a null result, 10 give a borderline positive correlation, and 10
> a borderline negative correlation. Say that of these 1/4 (20) of those
> giving a null result, 1/2 (5) of those giving a negative correlation, and
> all (10) of those giving a positive correlation, are published. In this
> circumstance meta-analysis clearly gives rise to a misleading conclusion.
>
> Another problem with meta-analysis is how to decide how to weight the
> various data sets. Giving them equal weightings is wrong. If the
> experiments have no systematic errors, then weighting them according to
> the sizes and standard deviations of the data sets is appropriate.
> (Someone more statistically sophisticated than I am could provide you
> with the equations.) It is not obvious to me that it is always possible
> to produce objective weightings of the results of disparate
> parapsychological experiments; but if the meta-analyst unconsciously
> gives greater weight to the positive results, this skews the result of
> the meta-analysis.
>
> --
> Stewart Robert Hinsley        The adequate is the enemy of the good.
>
> stewart@meden.demon.co.uk

Thank you, Stewart! You have put it succinctly.

Damien Broderick said I "admit" that meta-analysis is used in fields such
as pharmacology. As I clearly pointed out in the original, I am suspicious
of meta-analysis in ANY field. Occasionally such studies are useful, but
the vast majority show a small effect with marginal statistical
significance, and therefore tell us no more than that a PROPER study with
a larger sample size is needed to assess the hypothesis. I read a lot of
medical data in my work, and the meta-analyses are weighted IMHO one
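
To put a number on the publication-bias problem Stewart describes, here is
a rough sketch in Python. The study counts and publication rates are the
hypothetical ones from his example; the individual effect sizes are
invented purely for illustration, and the true effect is zero throughout.

import random
import statistics

# Stewart's hypothetical setup: 100 studies of a true null effect.
# 80 come out null, 10 borderline positive, 10 borderline negative.
# Published: 1/4 of the nulls (20), all 10 positives, 1/2 of the negatives (5).
random.seed(1)

null_results     = [random.gauss(0.00, 0.05) for _ in range(80)]
positive_results = [random.gauss(0.15, 0.05) for _ in range(10)]   # chance flukes
negative_results = [random.gauss(-0.15, 0.05) for _ in range(10)]  # chance flukes

all_studies = null_results + positive_results + negative_results
published   = null_results[:20] + positive_results + negative_results[:5]

print("pooled effect, all 100 studies:   %+.3f" % statistics.mean(all_studies))
print("pooled effect, 35 published only: %+.3f" % statistics.mean(published))

Pooling all 100 studies comes out near zero, while pooling only the 35
published ones is pulled towards the positive side, which is exactly the
misleading conclusion he is pointing at.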
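
As for the equations Stewart says someone else could supply: the standard
"fixed-effect" answer is inverse-variance weighting. Each study's estimate
x_i gets weight w_i = 1/SE_i^2, the pooled estimate is
sum(w_i * x_i) / sum(w_i), and its standard error is sqrt(1 / sum(w_i)).
A minimal sketch, with made-up numbers and assuming (as he stipulates) no
systematic error:

from math import sqrt

def pooled_fixed_effect(estimates, standard_errors):
    # Inverse-variance (fixed-effect) pooling of independent study estimates.
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
    pooled_se = sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Made-up example: three small studies of differing precision.
estimates       = [0.20, 0.05, -0.10]
standard_errors = [0.10, 0.04, 0.15]
print(pooled_fixed_effect(estimates, standard_errors))

This is also why Damien is right that pooling smallish samples drives the
standard error down; the catch is that the pooled estimate is only as
unbiased as the set of studies that made it into the pool.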