The Quality Effects Approach to Meta-Analysis Developed at Kuwait University

What is meta-analysis?

Meta-analysis is the statistical synthesis of the data from a set of comparable studies of a problem, and it yields a quantitative summary of the pooled results. It is the process of aggregating the data and results of a set of studies, preferably as many as possible, that have used the same or similar methods and procedures; reanalyzing the data from all these combined studies; and thereby generating larger numbers and more stable rates and proportions for statistical analysis and significance testing than can be achieved by any single study. The process is widely used in the biomedical sciences, especially in epidemiology and in clinical trials. In these applications, meta-analysis is defined as the systematic, organized, and structured evaluation of a problem of interest. The essence of the process is the use of statistical tables or similar data from previously published, peer-reviewed, and independently conducted studies of a particular problem. It is most commonly used to assemble the findings from a series of randomized controlled trials, none of which on its own would necessarily have sufficient statistical power to demonstrate statistically significant findings. The aggregated data, however, are capable of generating meaningful and statistically significant results.


Standard fixed effects meta-analysis

Because the results from different studies investigating different independent variables are measured on different scales, the dependent variable in a meta-analysis is some standardized measure of effect size. One of the commonly used effect measures in clinical trials is the relative risk (RR), used when the outcome of the experiments is dichotomous (success versus failure). The standard approach frequently used in meta-analysis in clinical research is termed the 'inverse variance method'; it is based on Woolf (Woolf, 1955), and the term meta-analysis itself was first proposed by Glass in 1976. The average effect size across all studies is computed as a weighted mean, in which the weights are equal to the inverse variance of each study's effect estimator. Larger studies and studies with less random variation are given greater weight than smaller studies. In the case of studies reporting a RR, the log RR of the ith study has a standard error (se) given by $se_i = \sqrt{\frac{1 - P_i^T}{n_i^T P_i^T} + \frac{1 - P_i^C}{n_i^C P_i^C}}$, where $P_i^T$ and $P_i^C$ are the risks of the outcome in the treatment and control groups respectively of the ith study, while $n_i^T$ and $n_i^C$ are the numbers of patients in the respective groups. The weight ($w_i$) allocated to each of the studies is inversely proportional to the square of the standard error, thus $w_i = 1/se_i^2$, which gives greater weight to those studies with smaller standard errors. The combined effect size is computed by the weighted average $\hat{\theta} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}$, where $\hat{\theta}_i$ is the effect size measure (here, the log RR) of the ith study, and it has a standard error given by $se(\hat{\theta}) = \sqrt{1 / \sum_i w_i}$. Assuming these estimates are distributed normally, the 95% confidence limits are easily obtained by $\hat{\theta} \pm 1.96 \times se(\hat{\theta})$.
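As an illustration, the calculation just described can be carried out in a few lines of code. The following is a minimal sketch in Python (the choice of Python and numpy is ours, and the three trials shown are hypothetical) of inverse variance pooling of log relative risks as set out above.

import numpy as np

# Hypothetical example: events and sample sizes for three trials
# (treatment arm T, control arm C).
events_T = np.array([15, 40, 8])
n_T = np.array([100, 250, 60])
events_C = np.array([25, 55, 14])
n_C = np.array([100, 250, 60])

# Risks in each arm and the log relative risk of each study.
P_T = events_T / n_T
P_C = events_C / n_C
log_rr = np.log(P_T / P_C)

# Standard error of the log RR: sqrt((1-P_T)/(n_T*P_T) + (1-P_C)/(n_C*P_C)).
se = np.sqrt((1 - P_T) / (n_T * P_T) + (1 - P_C) / (n_C * P_C))

# Inverse variance weights, pooled estimate and its standard error.
w = 1 / se**2
theta_fe = np.sum(w * log_rr) / np.sum(w)
se_fe = np.sqrt(1 / np.sum(w))

# 95% confidence interval, reported on the RR scale.
ci_low = np.exp(theta_fe - 1.96 * se_fe)
ci_high = np.exp(theta_fe + 1.96 * se_fe)
print(f"Pooled RR = {np.exp(theta_fe):.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")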


Problem with heterogeneity

We (Doi and Thalib, 2008) introduce a new approach to adjustment for inter-study variability by incorporating a relevant component (quality) that differs between studies, in addition to the weight based on the intra-study differences that is used in any fixed effects meta-analysis model. The strength of our quality effects meta-analysis is that it allows available methodological evidence to influence subjective random probability, and thereby helps to close the damaging gap which has opened up between methodology and statistics in clinical research. We do this by introducing a correction, $\hat{\tau}_i$, for the quality adjusted weight of the ith study (Doi and Thalib, 2008). This is a composite based on the quality of the other studies, excluding the study under consideration, and it is used to redistribute quality adjusted weights based on the quality adjusted weights of the other studies. In other words, if study i is of good quality and the other studies are of poor quality, a proportion of their quality adjusted weights is mathematically redistributed to study i, giving it more weight towards the overall effect size. As studies increase in quality, redistribution becomes progressively less and ceases when all studies are of perfect quality. To do this we first have to adjust the weights for quality, and one way to incorporate quality scores into such an analysis is the quality weighted average (Tritchler, 1999; Klein et al., 1986; Fleiss and Gross, 1991; Smith et al., 1995)

$\hat{\theta}_Q = \frac{\sum_i Q_i w_i \hat{\theta}_i}{\sum_i Q_i w_i}$,

where $Q_i$ is the judgement of the probability (0 to 1) that study i is credible, based on the study methodology. The variance of this weighted average is then (Tritchler, 1999)

$var(\hat{\theta}_Q) = \frac{\sum_i Q_i^2 w_i}{\left(\sum_i Q_i w_i\right)^2}$.
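The quality weighted average above is a small modification of the fixed effects calculation. A minimal sketch in Python follows (the weights, effects and quality scores are hypothetical and serve only to illustrate the arithmetic).

import numpy as np

# Hypothetical per-study log RRs, inverse variance weights and quality scores Q_i
# (judged probability, 0 to 1, that each study is credible).
theta = np.array([-0.45, -0.30, -0.60])
w = np.array([12.0, 35.0, 5.0])
Q = np.array([0.9, 0.6, 0.4])

# Quality weighted pooled estimate (weights Q_i * w_i) and its variance.
qw = Q * w
theta_q = np.sum(qw * theta) / np.sum(qw)
var_q = np.sum(Q**2 * w) / np.sum(qw)**2

print(theta_q, var_q**0.5)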

However, this probabilistic viewpoint on quality adjusted weights is not satisfactory, and thus we expand on this system of incorporating quality by both adjusting the weight and redistributing weights on the basis of quality. Given that $\hat{\tau}_i$ is our quality adjustor for the ith study and N is the number of studies in the analysis, the quality effects modified weight of the ith study combines its quality adjusted weight $Q_i w_i$ with the redistribution term $\hat{\tau}_i$; the explicit expressions are given in Doi and Thalib (2008). The final summary estimate is then the weighted average of the study effect sizes under these modified weights, and its variance follows from the usual formula for the variance of a weighted average, as sketched below.
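For the pooling step itself, once the quality effects modified weights have been computed (their construction from $Q_i$, $w_i$ and $\hat{\tau}_i$ is given in Doi and Thalib, 2008, and is not reproduced here), the summary estimate and its variance follow the usual weighted average formulae. The sketch below simply takes a vector of already-computed modified weights as input; the numerical values are hypothetical.

import numpy as np

def pool_with_modified_weights(theta, w, w_star):
    """Pool study effects theta using modified weights w_star.

    w holds the ordinary inverse variance weights (so var(theta_i) = 1/w_i);
    the variance of the pooled estimate is the standard weighted average
    variance, sum(w_star_i**2 / w_i) / (sum(w_star_i))**2.
    """
    theta_pooled = np.sum(w_star * theta) / np.sum(w_star)
    var_pooled = np.sum(w_star**2 / w) / np.sum(w_star)**2
    return theta_pooled, var_pooled

# Hypothetical study effects, inverse variance weights and modified weights.
theta = np.array([-0.45, -0.30, -0.60])
w = np.array([12.0, 35.0, 5.0])
w_star = np.array([11.5, 24.0, 9.0])
print(pool_with_modified_weights(theta, w, w_star))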


Although it may seem that $\hat{\tau}_i$ is itself a function of $w_i$, so that adjusting the product of quality and weight ($Q_i w_i$) by $\hat{\tau}_i$ would be circular, by our definition in the text $\hat{\tau}_i$ is a function of the quality and weights of the other studies, excluding the ith study. Our suggested adjustment has a parallel to the random effects model, in which the homogeneity statistic $Q = \sum_i w_i (\hat{\theta}_i - \hat{\theta})^2$ (Cochran, 1937) is computed and, using this and other study parameters, a constant ($\hat{\tau}^2$) is generated, given by $\hat{\tau}^2 = \max\left(0, \frac{Q - (N - 1)}{\sum_i w_i - \sum_i w_i^2 / \sum_i w_i}\right)$ (DerSimonian and Laird, 1986). The inverse of the sampling variance plus this constant, which represents the variability across the population effects, is then used as the weight, $w_i^{RE} = 1 / (1/w_i + \hat{\tau}^2)$. In effect, as heterogeneity gets bigger, $\hat{\tau}^2$ increases, thus widening the confidence interval. The weights, however, become progressively more equal, and in essence this is the basis for the random effects model: a form of redistribution of the weights so that outlier studies do not unduly influence the pooled effect size. This is precisely what our method does too, the only difference being that we use a method based on quality rather than statistical heterogeneity, and the variance is not as artificially inflated as it is in the random effects model. The random effects model adds a single constant to the sampling variance of every study in the meta-analysis, based on the statistical heterogeneity of the trials. Our method redistributes the quality adjusted weights of each trial based on the measured quality of the other trials in the meta-analysis. Since the variance of the pooled estimate is the inverse of the sum of the weights, the addition of an external constant inflates the variance much more than a redistribution of the weights does, if the studies demonstrate varying effects. Obviously, if a random variable is inserted to inflate the variance purely on the basis of statistical heterogeneity, it is not clear what aspect of between-trial differences is being assessed, and any meaningful differences between the individual studies are not taken into account. Senn has provided an analytic demonstration of this in his paper "Trying to be precise about vagueness" (Senn, 2007).
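For comparison with the redistribution described above, the following sketch shows the standard DerSimonian and Laird (1986) random effects calculation referred to in this paragraph: Cochran's Q is computed, a single constant $\hat{\tau}^2$ is derived from it, and that constant is added to every study's sampling variance before re-weighting. The data are hypothetical and the code is in Python for consistency with the earlier sketches.

import numpy as np

# Hypothetical study effects (log RRs) and inverse variance weights.
theta = np.array([-0.45, -0.30, -0.60])
w = np.array([12.0, 35.0, 5.0])
N = len(theta)

# Fixed effects estimate and Cochran's Q homogeneity statistic.
theta_fe = np.sum(w * theta) / np.sum(w)
Q = np.sum(w * (theta - theta_fe)**2)

# DerSimonian-Laird between-study variance: a single constant for all studies.
tau2 = max(0.0, (Q - (N - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random effects weights: inverse of (sampling variance + tau^2).
w_re = 1 / (1 / w + tau2)
theta_re = np.sum(w_re * theta) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(theta_re, se_re)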


Software

Our method has been incorporated into version 1.7 of the MIX program, a comprehensive free software package for meta-analysis of causal research data, available on the web at: http://www.mix-for-meta-analysis.info


References

  • Brockwell, S.E. and Gordon, I.R. (2001) A comparison of statistical methods for meta-analysis. Stat Med 20, 825-40.
  • Cochran, W.G. (1937) Problems arising in the analysis of a series of similar experiments. Journal of the Royal Statistical Society 4, 102-118.
  • DerSimonian, R. and Laird, N. (1986) Meta-analysis in clinical trials. Control Clin Trials 7, 177-88.
  • Doi, S.A. and Thalib, L. (2008) A Quality-Effects Model for Meta-Analysis. Epidemiology 19, 94-100.
  • Fleiss, J.L. and Gross, A.J. (1991) Meta-analysis in epidemiology, with special reference to studies of the association between exposure to environmental tobacco smoke and lung cancer: a critique. J Clin Epidemiol 44, 127-39.
  • Hardy, R.J. and Thompson, S.G. (1998) Detecting and describing heterogeneity in meta-analysis. Stat Med 17, 841-56.
  • Klein, S., Simes, J. and Blackburn, G.L. (1986) Total parenteral nutrition and cancer clinical trials. Cancer 58, 1378-86.
  • Senn, S. (2007) Trying to be precise about vagueness. Stat Med 26, 1417-30.
  • Smith, S.J., Caudill, S.P., Steinberg, K.K. and Thacker, S.B. (1995) On combining dose-response data from epidemiological studies by meta-analysis. Stat Med 14, 531-44.
  • Tritchler, D. (1999) Modelling study quality in meta-analysis. Stat Med 18, 2135-45.
  • Woolf, B. (1955) On estimating the relation between blood group and disease. Ann Hum Genet 19, 251-3.


Contact Information

For further information, get in touch with
Dr. Lukman Thalib

 

