
Evidence synthesis has come to replace meta-analysis as a term referring to the statistical combination of multiple sources of evidence. In its simplest form, each evidence source is represented by a sufficient statistic, which may be, for example, a numerator and a denominator, a mean with its standard error, or a summary estimate such as an estimate of the log odds ratio and its standard error. The evidence synthesis is then the process of finding a suitable weighted average of these quantities. However, meta-analysis need not be restricted to summary statistics and has, for example, been extended to analyses of individual patient data from multiple studies. Furthermore, evidence synthesis may denote much more general forms of statistical synthesis, involving data sources of multiple types, each perhaps providing information on one or more parameters.

Bayesian evidence synthesis is then the use of Bayesian statistical methods in evidence synthesis. This can be formulated as follows: There are K unknown basic parameters, θ = (θ_1,…, θ_K), and N data points Y_i, i = 1,…, N, each representing, let us assume, a sufficient statistic from study i, in this case consisting of a numerator r_i and a denominator n_i. We may also define additional functional parameters θ_{K+1},…, θ_M. To rule out recursive definitions, it must be possible to express these as functions G_{K+1},…, G_M of the basic parameters. Finally, each data point Y_i provides an estimate of some function G_i(θ) of the basic parameters.
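The distinction between basic and functional parameters can be sketched as follows. This is a minimal illustration with hypothetical names and values (not taken from the article's Figure 1): three basic parameters, plus one functional parameter defined only through the basic ones.

```python
# Hypothetical basic parameters theta_1..theta_K, each of which would
# receive a prior in a Bayesian analysis. Values here are arbitrary.
basic = {"theta1": 0.5, "theta2": 0.2, "theta3": 0.05}

# Functional parameters: defined purely as functions G of the basic
# parameters, which rules out recursive definitions.
functional = {
    "theta4": lambda th: th["theta1"] * th["theta2"],  # e.g. a joint probability
}

def G(name, th):
    """Return the quantity informing a data item: either a basic
    parameter directly or a function of the basic parameters."""
    if name in th:
        return th[name]
    return functional[name](th)

print(G("theta1", basic))  # 0.5
print(G("theta4", basic))  # 0.1
```

A data point may thus inform a basic parameter directly or inform it only indirectly, through a functional parameter.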

One approach to computation, the maximum likelihood solution, assuming that the N data points are independent, would be to find values of θ that maximize

  L(θ) = ∏_{i=1}^N P(Y_i | G_i(θ)),

bearing in mind that the likelihood contribution from each study might take a different distributional form (normal, binomial, Poisson, etc.).
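As an illustration, the maximum likelihood computation might look like the following sketch, using hypothetical data: two studies contribute binomial likelihoods for a probability p, and a third contributes a normal likelihood for an estimate on the log-odds scale, so each study's contribution takes a different distributional form.

```python
import math

# Hypothetical data: two studies report binomial counts (r_i, n_i)
# informing the same probability p, and a third reports a normal
# estimate of logit(p) with its standard error.
studies_binomial = [(12, 50), (30, 100)]   # (r_i, n_i)
study_normal = (-1.0, 0.4)                 # (estimate of logit p, std. error)

def log_lik(p):
    """Joint log-likelihood over independent studies, mixing
    binomial and normal contributions."""
    ll = 0.0
    for r, n in studies_binomial:          # binomial contributions
        ll += r * math.log(p) + (n - r) * math.log(1 - p)
    est, se = study_normal                 # normal contribution, logit scale
    logit = math.log(p / (1 - p))
    ll += -0.5 * ((logit - est) / se) ** 2
    return ll

# Crude grid search for the value of p maximizing the joint likelihood.
grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=log_lik)
print(round(p_hat, 3))
```

In practice a numerical optimizer would replace the grid search, but the structure is the same: one likelihood term per independent data point, each in its own distributional family.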

Bayesian evidence synthesis specifies a prior distribution P(θ) for the basic parameters only. There is no requirement that the parameters be independent, so this may be a joint prior distribution if necessary. The joint posterior distribution then follows by application of Bayes's theorem:

  P(θ | Y) ∝ P(θ) ∏_{i=1}^N P(Y_i | G_i(θ)).
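For the simplest case of a single basic parameter informed by one binomial data point, the posterior can be computed directly on a grid. This is a minimal sketch with hypothetical data and a Beta(1, 1) (uniform) prior:

```python
# Hypothetical data point: numerator r and denominator n informing a
# single basic parameter p, combined with a Beta(1, 1) prior by direct
# application of Bayes's theorem on a grid over (0, 1).
r, n = 12, 50
grid = [(i + 0.5) / 1000 for i in range(1000)]

def unnorm_posterior(p):
    prior = 1.0                            # Beta(1, 1) density
    lik = p ** r * (1 - p) ** (n - r)      # binomial likelihood kernel
    return prior * lik

weights = [unnorm_posterior(p) for p in grid]
total = sum(weights)                       # normalizing constant
post_mean = sum(p * w for p, w in zip(grid, weights)) / total
print(round(post_mean, 3))  # close to the conjugate answer (r + 1) / (n + 2)
```

With many parameters and mixed data types the grid becomes infeasible, and the posterior is instead explored by Markov chain Monte Carlo simulation.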

The data to be synthesized form a connected network that can be described in terms of a directed acyclic graph (DAG). However, the synthesis problems are capable of being reparameterized in many different ways, so that items of data that inform, for example, a basic parameter in one parameterization may inform a functional parameter in another. Hence, several DAGs may describe the same network. Some evidence networks may also be described in terms of graphs. One important feature that remains invariant under reparameterization is the inconsistency degrees of freedom, N − K. This can be thought of as representing the number of independent ways in which the evidence can be inconsistent under a given model. For example, in the DAG presented in Figure 1 and discussed in more detail below, there are three basic parameters and four independent data items to inform them. The inconsistency degrees of freedom is therefore 4 − 3 = 1. The Bayesian formulation, which forces the investigator to be explicit about which parameters are basic, and therefore have a prior distribution, and which are functional, yields valuable insights into the structure and dynamics of the data and model.
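The invariance of the inconsistency degrees of freedom under reparameterization can be illustrated with a small hypothetical network. The counts match the 4 − 3 = 1 example above, but the parameterizations themselves are invented for illustration:

```python
# N data items informing K basic parameters; N - K is the
# inconsistency degrees of freedom.
N = 4  # independent data items, as in the Figure 1 example

# Parameterization A: basic = {a, b, c}; the four data items inform
# a, b, c, and the functional parameter a * b.
K_A = 3

# Parameterization B: basic = {a, ab, c}, with b = ab / a now
# functional; the same data items inform a, ab / a, c, and ab.
K_B = 3

# The roles of basic and functional parameters change, but N - K
# does not: one independent way for the evidence to be inconsistent.
assert N - K_A == N - K_B == 1
print(N - K_A)
```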

...
