What is meta-analysis?
Meta-analysis is the statistical synthesis of data from a set of comparable studies of a problem, yielding a quantitative summary of the pooled results. It is the process of aggregating the data and results of a set of studies (preferably as many as possible that have used the same or similar methods and procedures), reanalyzing the data from the combined studies, and thereby generating larger numbers and more stable rates and proportions for statistical analysis and significance testing than any single study can achieve. The process is widely used in the biomedical sciences, especially in epidemiology and in clinical trials, where meta-analysis is defined as the systematic, organized, and structured evaluation of a problem of interest. The essence of the process is the use of statistical tables or similar data from previously published, peer-reviewed, and independently conducted studies of a particular problem. It is most commonly used to assemble the findings from a series of randomized controlled trials, none of which on its own would necessarily have sufficient statistical power to demonstrate statistically significant findings; the aggregated results, however, are capable of doing so.
Because the results from different studies investigating different independent variables are measured on different scales, the dependent variable in a meta-analysis is some standardized measure of effect size. One of the commonly used effect measures in clinical trials is the relative risk (RR), used when the outcome of the experiments is dichotomous (success versus failure). The standard approach frequently used in meta-analysis in clinical research is termed the 'inverse variance method', based on Woolf (Woolf, 1955); the term meta-analysis itself was first proposed by Glass in 1976. A weighted average effect size across all studies is computed, whereby the weights are equal to the inverse variance of each study's effect estimator, so larger studies and studies with less random variation are given greater weight than smaller studies. In the case of studies reporting an RR, the log RR has a standard error (se) given by

$$se(\ln RR_i) = \sqrt{\frac{1 - P_{iT}}{n_{iT} P_{iT}} + \frac{1 - P_{iC}}{n_{iC} P_{iC}}}$$

where $P_{iT}$ and $P_{iC}$ are the risks of the outcome in the treatment and control groups respectively of the $i$th study, while $n_{iT}$ and $n_{iC}$ are the numbers of patients in the respective groups. The weight $w_i$ allocated to each study is inversely proportional to the square of its standard error,

$$w_i = \frac{1}{se_i^2},$$

which gives greater weight to those studies with smaller standard errors. The combined effect size is computed as the weighted average

$$\hat{\theta} = \frac{\sum_i w_i \theta_i}{\sum_i w_i},$$

where $\theta_i$ is the effect size measure (here $\ln RR_i$) of the $i$th study, and it has a standard error given by

$$se(\hat{\theta}) = \sqrt{\frac{1}{\sum_i w_i}}.$$

Assuming these estimates are normally distributed, the 95% confidence limits are easily obtained as $\hat{\theta} \pm 1.96\, se(\hat{\theta})$.
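As a concrete illustration, the inverse variance pooling of log relative risks can be sketched in a few lines of Python. The trial counts below are invented for illustration only:

```python
import math

# Hypothetical 2x2 data from three trials: events and totals in the
# treatment (eT, nT) and control (eC, nC) arms (illustrative numbers).
trials = [(12, 100, 20, 100), (30, 250, 45, 240), (8, 80, 15, 85)]

log_rrs, weights = [], []
for eT, nT, eC, nC in trials:
    pT, pC = eT / nT, eC / nC
    log_rr = math.log(pT / pC)
    # se of log RR: sqrt((1 - pT)/(nT*pT) + (1 - pC)/(nC*pC))
    se = math.sqrt((1 - pT) / (nT * pT) + (1 - pC) / (nC * pC))
    log_rrs.append(log_rr)
    weights.append(1 / se**2)  # inverse variance weight

pooled = sum(w * t for w, t in zip(weights, log_rrs)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled RR = {math.exp(pooled):.3f} "
      f"(95% CI {math.exp(lo):.3f} to {math.exp(hi):.3f})")
```

Note that pooling is done on the log scale and only exponentiated back to an RR for reporting, since log RR is approximately normally distributed while RR itself is not.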
We (Doi and Thalib, 2008) introduced a new approach to adjustment for inter-study variability by incorporating a relevant component (quality) that differs between studies, in addition to the weight based on intra-study differences that is used in any fixed effects meta-analysis model. The strength of our quality effects meta-analysis is that it allows available methodological evidence to influence subjective random probability, and thereby helps to close the damaging gap which has opened up between methodology and statistics in clinical research. We do this by introducing a correction for the quality adjusted weight of the $i$th study, called $\hat{\tau}_i$ (Doi and Thalib, 2008). This is a composite based on the quality of the other studies, excluding the study under consideration, and is used to redistribute quality adjusted weights based on the quality adjusted weights of the other studies. In other words, if study $i$ is of good quality and the other studies are of poor quality, a proportion of their quality adjusted weights is mathematically redistributed to study $i$, giving it more weight towards the overall effect size. As studies increase in quality, redistribution becomes progressively less and ceases when all studies are of perfect quality. To do this we first have to adjust the weights for quality, and one way to incorporate quality scores into such an analysis is as follows (Tritchler, 1999; Klein et al., 1986; Fleiss and Gross, 1991; Smith et al., 1995):

$$\hat{\theta} = \frac{\sum_i Q_i w_i \theta_i}{\sum_i Q_i w_i},$$

where $Q_i$ is the judgement of the probability (0 to 1) that study $i$ is credible, based on the study methodology. The variance of this weighted average is then (Tritchler, 1999):

$$var(\hat{\theta}) = \frac{\sum_i Q_i^2 w_i^2 v_i}{\left(\sum_i Q_i w_i\right)^2} = \frac{\sum_i Q_i^2 w_i}{\left(\sum_i Q_i w_i\right)^2},$$

since $v_i = 1/w_i$ is the sampling variance of the $i$th study.
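A minimal numeric sketch of this quality weighting, using invented effect sizes, inverse variance weights, and quality scores:

```python
import math

# Illustrative effect sizes (log RR), inverse variance weights, and
# quality probabilities Q_i in [0, 1] for three hypothetical studies.
theta = [-0.51, -0.45, -0.57]
w = [18.0, 42.0, 11.0]
Q = [0.9, 0.5, 0.7]

qw = [q * wi for q, wi in zip(Q, w)]  # quality adjusted weights Q_i * w_i
pooled = sum(qwi * t for qwi, t in zip(qw, theta)) / sum(qw)
# Variance of the weighted average, with v_i = 1/w_i:
var = sum(qwi**2 / wi for qwi, wi in zip(qw, w)) / sum(qw)**2
print(pooled, math.sqrt(var))
```

By the Cauchy-Schwarz inequality this variance is never smaller than the fixed effects variance $1/\sum_i w_i$, which is one reason down-weighting by quality alone widens the confidence interval.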
However, this probabilistic viewpoint on quality adjusted weights is not satisfactory, and thus we expand on this system of incorporating quality by both adjusting the weights and redistributing weights based on quality. This is done as follows. Given that $\hat{\tau}_i$ is our quality adjustor for the $i$th study and $N$ is the number of studies in the analysis, the quality effects modified weight $w_i'$ is given by:

$$w_i' = Q_i w_i + \hat{\tau}_i,$$

where

$$\hat{\tau}_i = \frac{1}{N-1} \sum_{j \neq i} (1 - Q_j)\, w_j.$$

Note that $\sum_i \hat{\tau}_i = \sum_i (1 - Q_i) w_i$, so that $\sum_i w_i' = \sum_i w_i$: the weight removed from each study by its quality score is redistributed across the other studies, leaving the total weight unchanged. The final summary estimate is then given by:

$$\hat{\theta}_{QE} = \frac{\sum_i w_i' \theta_i}{\sum_i w_i'},$$

while the variance of this weighted average is then

$$var(\hat{\theta}_{QE}) = \frac{\sum_i (w_i')^2 v_i}{\left(\sum_i w_i'\right)^2}.$$
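The redistribution described above can be sketched as follows, under the assumption that the adjustor for study $i$ averages, over the other $N-1$ studies, the weight $(1-Q_j)w_j$ that their quality scores remove; under this assumption the total weight is conserved. All numbers are invented:

```python
import math

def quality_effects(theta, w, Q):
    """Quality effects pooled estimate: a sketch of the redistribution
    described in the text (assumed form: tau_i averages the removed
    weight (1 - Q_j) * w_j over the other N - 1 studies)."""
    N = len(w)
    total_removed = sum((1 - q) * wi for q, wi in zip(Q, w))
    w_star = []
    for i in range(N):
        tau_i = (total_removed - (1 - Q[i]) * w[i]) / (N - 1)
        w_star.append(Q[i] * w[i] + tau_i)
    pooled = sum(ws * t for ws, t in zip(w_star, theta)) / sum(w_star)
    # Variance of the weighted average, with v_i = 1/w_i:
    var = sum(ws**2 / wi for ws, wi in zip(w_star, w)) / sum(w_star)**2
    return pooled, math.sqrt(var), w_star

theta = [-0.51, -0.45, -0.57]
w = [18.0, 42.0, 11.0]
Q = [0.9, 0.5, 0.7]
pooled, se, w_star = quality_effects(theta, w, Q)
# Redistribution conserves the total weight: sum(w_star) == sum(w).
print(pooled, se, sum(w_star), sum(w))
```

When every study is of perfect quality ($Q_i = 1$ for all $i$), every adjustor is zero and the sketch reduces exactly to the fixed effects weights.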
Although it may seem that $\hat{\tau}_i$ is a function of $Q_i w_i$ (given that $w_i' = Q_i w_i + \hat{\tau}_i$, this would mean that the product of quality and weight was being adjusted by a quantity that depends on itself), by our definition $\hat{\tau}_i$ is a function only of the qualities and weights of the other studies, excluding the $i$th study. Our suggested adjustment has a parallel in the random effects model, where Cochran's homogeneity statistic (here $Q_h$, to avoid confusion with the quality scores $Q_i$) and other study parameters are used to generate a constant ($\hat{\tau}^2$) given by

$$\hat{\tau}^2 = \max\left(0,\; \frac{Q_h - (N-1)}{\sum_i w_i - \sum_i w_i^2 / \sum_i w_i}\right).$$

The inverse of the sampling variance plus this constant, which represents the variability across the population effects, is then used as the weight, $w_i^* = 1/(v_i + \hat{\tau}^2)$. In effect, as $Q_h$ gets bigger, $\hat{\tau}^2$ increases, thus widening the confidence interval. The weights, however, become progressively more equal, and in essence this is the basis of the random effects model: a form of redistribution of the weights so that outlier studies do not unduly influence the pooled effect size. This is precisely what our method does too, the only difference being that we redistribute based on quality rather than statistical heterogeneity, and the variance is not as artificially inflated as in the random effects model. The random effects model adds a single constant to the within-study variances of all studies in the meta-analysis based on the statistical heterogeneity of the trials; our method redistributes the quality adjusted weights of each trial based on the measured quality of the other trials in the meta-analysis. Since $var(\hat{\theta}) = 1/\sum_i w_i^*$, the addition of an external constant will inflate the variance much more than a redistribution of the weights, if the studies demonstrate varying effects. Moreover, if a random variable is inserted to inflate the variance based on heterogeneity, it is not clear what aspect of between-trial differences is being assessed, and this fails to take into account any meaningful differences between the individual studies. Senn has provided an analytic demonstration of this in his paper "Trying to be precise about vagueness" (Senn, 2007).
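For comparison, a sketch of the DerSimonian-Laird random effects computation the paragraph describes, using the same style of invented numbers (`theta` are log relative risks, `w` inverse variance weights):

```python
import math

# Illustrative effect sizes and inverse variance weights (invented).
theta = [-0.51, -0.45, -0.57]
w = [18.0, 42.0, 11.0]
N = len(w)

fixed = sum(wi * t for wi, t in zip(w, theta)) / sum(w)
# Cochran's homogeneity statistic Q_h:
Q_h = sum(wi * (t - fixed)**2 for wi, t in zip(w, theta))
# DerSimonian-Laird between-study variance, truncated at zero:
tau2 = max(0.0, (Q_h - (N - 1)) /
           (sum(w) - sum(wi**2 for wi in w) / sum(w)))
# Each study's weight becomes 1 / (v_i + tau^2), with v_i = 1/w_i:
w_re = [1 / (1 / wi + tau2) for wi in w]
pooled_re = sum(wi * t for wi, t in zip(w_re, theta)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
print(pooled_re, se_re, tau2)
```

With these particular invented numbers the trials are nearly homogeneous, $Q_h$ falls below its expectation $N-1$, $\hat{\tau}^2$ truncates to zero, and the result coincides with the fixed effects estimate; with heterogeneous trials, the single added constant `tau2` would equalize the weights and widen the interval, which is the behavior the text contrasts with quality based redistribution.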
Our method has been incorporated into version 1.7 of the MIX program, a comprehensive free software package for meta-analysis of causal research data, available on the web at: http://www.mix-for-meta-analysis.info
For further information, contact Dr. Lukman Thalib.