Alternative implementations of Monte Carlo EM algorithms for likelihood inferences

Two methods of computing Monte Carlo estimators of variance components using restricted maximum likelihood via the expectation-maximisation algorithm are reviewed. A third approach is suggested and the performance of the methods is compared using simulated data.


INTRODUCTION
The expectation-maximisation (EM) algorithm [1] for obtaining restricted maximum likelihood (REML) estimators of variance components [7] is widely used. The expectation step of the algorithm can be demanding in high-dimensional problems because it requires the inverse of a matrix of the order of the number of location parameters of the model. In animal breeding this number can be of the order of hundreds of thousands or millions.
Guo and Thompson [3] proposed a Markov chain Monte Carlo (MCMC) approximation to the computation of these expectations. This is useful because in principle it allows larger data sets to be analysed, but at the expense of introducing Monte Carlo noise. Thompson [9] suggested a modification of the algorithm which reduces this noise.
The purpose of this note is to review these two approaches briefly and to suggest a third, which can be computationally competitive with the Thompson estimator.

THE MODEL AND THE EM-REML EQUATIONS
The sampling model for the data is assumed to be

$$y\,|\,b, s, \sigma_e^2 \sim N(Xb + Zs,\; I\sigma_e^2),$$

where $y$ is the vector of data of length $n$, $X$ and $Z$ are incidence matrices, $b$ is a vector of fixed effects of length $p$, $s$ is a vector of random sire effects of length $q$, $I$ is the identity matrix and $I\sigma_e^2$ is the variance associated with the vector of residuals $e$. Sire effects are assumed to follow the Gaussian process

$$s\,|\,\sigma_s^2 \sim N(0,\; I\sigma_s^2),$$

where $\sigma_s^2$ is the variance component due to sires. Implementing restricted maximum likelihood with the EM algorithm requires setting the conditional expectations (given $y$) of the natural sufficient statistics of the complete-data model $(y, s)$ equal to their unconditional expectations. Let $\theta = (\sigma_s^2, \sigma_e^2)$. In the case of the present model, the unconditional expectations of the sufficient statistics are $E(s's\,|\,\theta) = q\sigma_s^2$ and $E(e'e\,|\,\theta) = n\sigma_e^2$. The conditional expectations require computation of

$$E(s's\,|\,y,\hat\theta) = \hat{s}'\hat{s} + \operatorname{tr}\operatorname{Var}(s\,|\,y,\hat\theta) = \hat{s}'\hat{s} + \sum_{i=1}^{q}\operatorname{Var}(s_i\,|\,y,\hat\theta) \qquad (1)$$

and

$$E(e'e\,|\,y,\hat\theta) = (y - X\hat{b} - Z\hat{s})'(y - X\hat{b} - Z\hat{s}) + \operatorname{tr}\operatorname{Var}(e\,|\,y,\hat\theta), \qquad (2)$$

where $\hat\theta$ is the value of $\theta$ at the current EM iterate, and $\hat{b}$ and $\hat{s}$ are the expected values of $(b, s)\,|\,y,\hat\theta$, which can be obtained as the solution to Henderson's mixed model equations [5]:

$$\begin{pmatrix} X'X & X'Z \\ Z'X & Z'Z + I\hat{k} \end{pmatrix}\begin{pmatrix} \hat{b} \\ \hat{s} \end{pmatrix} = \begin{pmatrix} X'y \\ Z'y \end{pmatrix}. \qquad (3)$$

In (3), $\hat{k} = \hat\sigma_e^2/\hat\sigma_s^2$. Throughout this note, to make the notation less cumbersome, a parameter $x$ with a "hat" on top, $\hat{x}$, will refer to the value of the parameter at the current EM iterate.
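As a concrete illustration, the exact EM-REML recursion defined by (1)-(3) can be sketched numerically. The simulation below is a minimal sketch: the data sizes, starting values and toy design are assumptions of this sketch, not taken from the paper. It solves the mixed model equations directly and iterates the two variance updates (for large models the matrix inverse here is exactly the step that MCEM avoids).

```python
import numpy as np

# Hedged sketch of exact EM-REML for the sire model y = Xb + Zs + e.
# All data are simulated for illustration; names (n, p, q, k) follow the note.
rng = np.random.default_rng(0)
n, p, q = 200, 2, 10
X = np.column_stack([np.ones(n), rng.normal(size=n)])               # fixed effects
Z = np.zeros((n, q)); Z[np.arange(n), rng.integers(0, q, n)] = 1.0  # sire incidence
y = X @ np.array([1.0, 0.5]) + Z @ rng.normal(0, 0.5, q) + rng.normal(size=n)

sig_s, sig_e = 0.5, 1.0                      # starting values for theta
for _ in range(100):                         # EM iterations
    k = sig_e / sig_s
    # Henderson's mixed model equations (3)
    C = np.block([[X.T @ X, X.T @ Z],
                  [Z.T @ X, Z.T @ Z + k * np.eye(q)]])
    sol = np.linalg.solve(C, np.concatenate([X.T @ y, Z.T @ y]))
    b_hat, s_hat = sol[:p], sol[p:]
    Css = np.linalg.inv(C)[p:, p:]           # Var(s|y, theta-hat) = Css * sig_e
    sig_s = (s_hat @ s_hat + sig_e * np.trace(Css)) / q   # update from (1)
    e_hat = y - X @ b_hat - Z @ s_hat
    # tr Var(e|y) = sig_e * (p + q - k * tr(Css)), giving the update from (2)
    sig_e = (e_hat @ e_hat + sig_e * (p + q - k * np.trace(Css))) / n
```

The E step requires `np.linalg.inv(C)`, a matrix of order $p + q$; this is the cost that motivates the Monte Carlo approximations reviewed below.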

THE GUO AND THOMPSON ESTIMATOR
An MC estimate of the expectation in (1) requires realisations from the distribution $s\,|\,y,\hat\theta$. A restricted maximum likelihood implementation of the approach proposed by Guo and Thompson [3] involves drawing successively from $p(s_i\,|\,s_{-i}, b, \hat\theta, y)$ and from $p(b_j\,|\,b_{-j}, s, \hat\theta, y)$, where $i = 1, \ldots, q$ and $j = 1, \ldots, p$, using the Gibbs sampler [2]. In this notation, $x_i$ is a scalar, and the vector $x_{-i}$ is equal to the vector $x = \{x_i\}$ with $x_i$ excluded.
With $T$ realisations from $s\,|\,y,\hat\theta$, the MC estimate of the variance component at the current EM iterate is

$$\hat\sigma_s^2 = \frac{1}{qT}\sum_{j=1}^{T} s^{(j)\prime} s^{(j)}, \qquad (4)$$

where $s^{(j)}$ is the vector of Gibbs samples at the $j$th round and $T$ is the number of rounds of Gibbs samples used within the current EM iterate. After several cycles, once convergence has been reached, the converged values are averaged to obtain the final MCEM REML estimator. Guo and Thompson [3] provide a detailed description of the algorithm, and a useful overview can be found in [8].
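The MC E-step can be sketched as follows. This is a hedged toy implementation, with simulated data and assumed parameter values: single-site Gibbs updates of $b_j$ and $s_i$ from their full conditionals, followed by estimator (4). The burn-in length is an arbitrary choice of this sketch.

```python
import numpy as np

# Sketch of the Guo-Thompson MC E-step with estimator (4). Toy data only.
rng = np.random.default_rng(1)
n, p, q, T = 200, 2, 10, 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = np.zeros((n, q)); Z[np.arange(n), rng.integers(0, q, n)] = 1.0
y = X @ np.array([1.0, 0.5]) + Z @ rng.normal(0, 0.5, q) + rng.normal(size=n)

sig_s, sig_e = 0.25, 1.0                 # theta-hat at the current EM iterate
k = sig_e / sig_s
b, s = np.zeros(p), np.zeros(q)
ssq, burn = 0.0, 50
for t in range(burn + T):
    for j in range(p):                   # draw b_j | b_{-j}, s, theta-hat, y
        r = y - X @ b - Z @ s + X[:, j] * b[j]     # residual excluding b_j
        prec = X[:, j] @ X[:, j]
        b[j] = rng.normal((X[:, j] @ r) / prec, np.sqrt(sig_e / prec))
    for i in range(q):                   # draw s_i | s_{-i}, b, theta-hat, y
        r = y - X @ b - Z @ s + Z[:, i] * s[i]     # residual excluding s_i
        prec = Z[:, i] @ Z[:, i] + k
        s[i] = rng.normal((Z[:, i] @ r) / prec, np.sqrt(sig_e / prec))
    if t >= burn:
        ssq += s @ s                     # accumulate s'(j) s(j)
sig_s_new = ssq / (q * T)                # estimator (4)
```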

THE THOMPSON ESTIMATOR
Consider the decomposition of $\operatorname{Var}(s_i\,|\,y,\hat\theta)$ in (1):

$$\operatorname{Var}(s_i\,|\,y,\hat\theta) = E\!\left[\operatorname{Var}(s_i\,|\,s_{-i}, b, y, \hat\theta)\right] + \operatorname{Var}\!\left[E(s_i\,|\,s_{-i}, b, y, \hat\theta)\right]. \qquad (5)$$

In (5), only the second term has MC variance; the first term is free of MC noise because the full conditional variance does not depend on $(s_{-i}, b)$. This term is given by

$$\operatorname{Var}(s_i\,|\,s_{-i}, b, y, \hat\theta) = \frac{\hat\sigma_e^2}{z_i' z_i + \hat{k}}, \qquad (6)$$

where $z_i$ is the $i$th column of the incidence matrix $Z$. Substituting (6) and (5) in (1) yields

$$E(s's\,|\,y,\hat\theta) = \hat{s}'\hat{s} + \sum_{i=1}^{q} \frac{\hat\sigma_e^2}{z_i' z_i + \hat{k}} + \sum_{i=1}^{q} \operatorname{Var}\!\left[E(s_i\,|\,s_{-i}, b, y, \hat\theta)\right]. \qquad (7)$$

The MC estimator of (7) is given by

$$\frac{1}{T}\sum_{j=1}^{T} \bar{s}^{(j)\prime}\bar{s}^{(j)} + \sum_{i=1}^{q}\frac{\hat\sigma_e^2}{z_i' z_i + \hat{k}},$$

where $\bar{s}^{(j)}_i = E(s_i\,|\,s^{(j)}_{-i}, b^{(j)}, y, \hat\theta)$ is the expected value over the full conditional distribution of $s_i$ at the $j$th Gibbs round. The MC estimate of the variance component at the current EM iterate is now equal to

$$\hat\sigma_s^2 = \frac{1}{q}\left[\frac{1}{T}\sum_{j=1}^{T} \bar{s}^{(j)\prime}\bar{s}^{(j)} + \sum_{i=1}^{q}\frac{\hat\sigma_e^2}{z_i' z_i + \hat{k}}\right]. \qquad (8)$$

This expression is equivalent to equation (4) in [9].
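A sketch of this estimator follows. It is the same toy Gibbs sweep as for the Guo-Thompson estimator (simulated data, assumed parameter values), except that the sum of squares is accumulated from the full conditional means rather than the draws, and the noise-free term from (6) is added analytically.

```python
import numpy as np

# Sketch of the Thompson estimator (8) on simulated toy data.
rng = np.random.default_rng(2)
n, p, q, T = 200, 2, 10, 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = np.zeros((n, q)); Z[np.arange(n), rng.integers(0, q, n)] = 1.0
y = X @ np.array([1.0, 0.5]) + Z @ rng.normal(0, 0.5, q) + rng.normal(size=n)

sig_s, sig_e = 0.25, 1.0                 # theta-hat at the current EM iterate
k = sig_e / sig_s
zz = (Z * Z).sum(axis=0)                 # z_i'z_i for each sire
const = (sig_e / (zz + k)).sum()         # term free of MC noise, from (6)
b, s = np.zeros(p), np.zeros(q)
ssq_bar, burn = 0.0, 50
for t in range(burn + T):
    for j in range(p):                   # draw b_j | b_{-j}, s, theta-hat, y
        r = y - X @ b - Z @ s + X[:, j] * b[j]
        prec = X[:, j] @ X[:, j]
        b[j] = rng.normal((X[:, j] @ r) / prec, np.sqrt(sig_e / prec))
    s_bar = np.empty(q)
    for i in range(q):                   # draw s_i, keeping its conditional mean
        r = y - X @ b - Z @ s + Z[:, i] * s[i]
        s_bar[i] = (Z[:, i] @ r) / (zz[i] + k)      # full conditional mean
        s[i] = rng.normal(s_bar[i], np.sqrt(sig_e / (zz[i] + k)))
    if t >= burn:
        ssq_bar += s_bar @ s_bar
sig_s_new = (ssq_bar / T + const) / q    # estimator (8)
```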

AN ALTERNATIVE ESTIMATOR
Consider the distribution of $s\,|\,\hat\theta, y = 0$, which is normal, with mean zero and variance equal to the variance of $s\,|\,\hat\theta, y$. That is,

$$s\,|\,\hat\theta, y = 0 \sim N\!\left(0,\; \operatorname{Var}(s\,|\,y,\hat\theta)\right). \qquad (9)$$

The term $\operatorname{Var}(s\,|\,\hat\theta, y = 0)$ corresponds to the lower diagonal block of the inverse of the coefficient matrix of (3) at the current EM iterate, multiplied by $\hat\sigma_e^2$. Then equation (1) can be written

$$E(s's\,|\,y,\hat\theta) = \hat{s}'\hat{s} + \sum_{i=1}^{q}\operatorname{Var}(s_i\,|\,y = 0, \hat\theta). \qquad (10)$$
Decomposing the second term in (10) using (5) and (6), and noting that $E(s_i\,|\,y = 0, \hat\theta) = 0$, yields

$$E(s's\,|\,y,\hat\theta) = \hat{s}'\hat{s} + \sum_{i=1}^{q}\frac{\hat\sigma_e^2}{z_i' z_i + \hat{k}} + \sum_{i=1}^{q}\operatorname{Var}\!\left[E(s_i\,|\,s_{-i}, b, y = 0, \hat\theta)\right]. \qquad (11)$$

The MC estimator of (11) is given by

$$\hat{s}'\hat{s} + \sum_{i=1}^{q}\frac{\hat\sigma_e^2}{z_i' z_i + \hat{k}} + \frac{1}{T}\sum_{j=1}^{T}\bar{s}_0^{(j)\prime}\bar{s}_0^{(j)},$$

where $\bar{s}^{(j)}_{0,i} = E(s_i\,|\,s^{(j)}_{-i}, b^{(j)}, y = 0, \hat\theta)$ is the expected value over the full conditional distribution $s_i\,|\,s_{-i}, b, y = 0, \hat\theta$ at the $j$th Gibbs round. The MC estimate of the variance component at the current EM iterate is now equal to

$$\hat\sigma_s^2 = \frac{1}{q}\left[\hat{s}'\hat{s} + \sum_{i=1}^{q}\frac{\hat\sigma_e^2}{z_i' z_i + \hat{k}} + \frac{1}{T}\sum_{j=1}^{T}\bar{s}_0^{(j)\prime}\bar{s}_0^{(j)}\right]. \qquad (12)$$
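A sketch of the alternative estimator, again on simulated toy data with assumed parameter values: Henderson's equations are solved once for $\hat{s}$, the Gibbs chain is run with the data vector set to zero, and the three terms of (12) are combined.

```python
import numpy as np

# Sketch of the alternative estimator (12) on simulated toy data.
rng = np.random.default_rng(3)
n, p, q, T = 200, 2, 10, 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = np.zeros((n, q)); Z[np.arange(n), rng.integers(0, q, n)] = 1.0
y = X @ np.array([1.0, 0.5]) + Z @ rng.normal(0, 0.5, q) + rng.normal(size=n)

sig_s, sig_e = 0.25, 1.0
k = sig_e / sig_s
# Solve (3) once for s-hat (the term requiring a linear system solve)
C = np.block([[X.T @ X, X.T @ Z], [Z.T @ X, Z.T @ Z + k * np.eye(q)]])
s_hat = np.linalg.solve(C, np.concatenate([X.T @ y, Z.T @ y]))[p:]
zz = (Z * Z).sum(axis=0)
const = (sig_e / (zz + k)).sum()          # noise-free term, from (6)

b0, s0 = np.zeros(p), np.zeros(q)         # Gibbs chain run with y = 0
ssq0, burn = 0.0, 50
for t in range(burn + T):
    for j in range(p):
        r = -X @ b0 - Z @ s0 + X[:, j] * b0[j]
        prec = X[:, j] @ X[:, j]
        b0[j] = rng.normal((X[:, j] @ r) / prec, np.sqrt(sig_e / prec))
    s0_bar = np.empty(q)
    for i in range(q):
        r = -X @ b0 - Z @ s0 + Z[:, i] * s0[i]
        s0_bar[i] = (Z[:, i] @ r) / (zz[i] + k)   # zero-data conditional mean
        s0[i] = rng.normal(s0_bar[i], np.sqrt(sig_e / (zz[i] + k)))
    if t >= burn:
        ssq0 += s0_bar @ s0_bar
sig_s_new = (s_hat @ s_hat + const + ssq0 / T) / q   # estimator (12)
```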

COMPARISON OF MONTE CARLO VARIANCES
The smaller MC variance of (8) relative to (4) is explained by the decomposition of the variance of $s_i\,|\,y,\hat\theta$ in (5): only the second term is subject to MC variance.
In order to compare the MC variance of (12) and (8), note that the $i$th element of $\bar{s}^{(j)}$ in (8), $\bar{s}^{(j)}_i$, can be written as

$$\bar{s}^{(j)}_i = \hat{s}_i + \bar{s}^{(j)}_{0,i}. \qquad (13)$$

Inserting (13) in (8) yields

$$\hat\sigma_s^2 = \frac{1}{q}\left[\hat{s}'\hat{s} + \sum_{i=1}^{q}\frac{\hat\sigma_e^2}{z_i' z_i + \hat{k}} + \frac{2}{T}\sum_{j=1}^{T}\hat{s}'\bar{s}_0^{(j)} + \frac{1}{T}\sum_{j=1}^{T}\bar{s}_0^{(j)\prime}\bar{s}_0^{(j)}\right], \qquad (14)$$

which shows that (8) has an extra term relative to (12), the cross-product term, which has expectation zero but contributes extra MC variance.
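The decomposition (13) can be verified numerically. Because the full conditional mean of $s_i$ is affine in $(b, s_{-i}, y)$ and $(\hat{b}, \hat{s})$ solves (3), the conditional mean evaluated at a state $(\hat{b} + b_0, \hat{s} + s_0)$ with data $y$ equals $\hat{s}_i$ plus the conditional mean evaluated at $(b_0, s_0)$ with $y = 0$. The toy data and the arbitrary chain state below are assumptions of this sketch.

```python
import numpy as np

# Numerical check of identity (13) for one coordinate, on simulated toy data.
rng = np.random.default_rng(4)
n, p, q = 100, 2, 8
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = np.zeros((n, q)); Z[np.arange(n), rng.integers(0, q, n)] = 1.0
y = X @ np.array([1.0, 0.5]) + Z @ rng.normal(0, 0.5, q) + rng.normal(size=n)

sig_s, sig_e = 0.25, 1.0
k = sig_e / sig_s
C = np.block([[X.T @ X, X.T @ Z], [Z.T @ X, Z.T @ Z + k * np.eye(q)]])
sol = np.linalg.solve(C, np.concatenate([X.T @ y, Z.T @ y]))
b_hat, s_hat = sol[:p], sol[p:]           # solution of (3)

b0, s0 = rng.normal(size=p), rng.normal(size=q)   # arbitrary zero-data state
b, s = b_hat + b0, s_hat + s0                     # coupled y-data state
i = 0
zi = Z[:, i]; prec = zi @ zi + k
s_bar_i  = zi @ (y - X @ b - Z @ s + zi * s[i]) / prec    # mean given y
s0_bar_i = zi @ (-X @ b0 - Z @ s0 + zi * s0[i]) / prec    # mean given y = 0
assert np.isclose(s_bar_i, s_hat[i] + s0_bar_i)           # identity (13)
```

The identity holds exactly because row $p + i$ of (3) implies $z_i'(y - X\hat{b} - Z\hat{s} + z_i\hat{s}_i) = (z_i'z_i + \hat{k})\,\hat{s}_i$.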

AN EXAMPLE WITH SIMULATED DATA
The performance of the three methods is illustrated using three simulated data sets with heritabilities equal to 10%, 30% and 50%. In each data set, one thousand offspring records distributed in 100 herds were simulated from 100 unrelated sires. The figures in Table I show the performance of the three methods in terms of their MC variances, which were computed empirically based on 1 000 replicates.
The figures in Table I clearly show the ranking of the methods in terms of their MC variances. Table II compares (8) and (12) in terms of computing time (only these two are shown because the times taken to run (4) and (8) are almost identical). The length of the MC chain for updating σ²_s at each EM iterate (T) was increased for each method until the same MC variance of 0.0005 was obtained, and the time taken was recorded.
Estimator (12) requires knowledge of $E(s\,|\,y,\hat\theta)$ at each EM iterate, and therefore takes longer per EM iteration than (8). However, the proposed method is still more efficient than that based on (8), since it compensates by requiring a shorter MC chain (smaller T) for updating σ²_s at each EM iterate. In general, the relative efficiency of the methods will depend on the model and data structure.

EXTENSION TO CORRELATED RANDOM EFFECTS
[Table II. CPU-time in seconds (per complete EM replicate) taken for estimators (8) and (12) to achieve an MC variance equal to 0.0005. Based on 100 replicates.]

MCEM provides an advantage in models where the E step is difficult to compute, and this can be the case in models with many correlated random effects. An example is the additive genetic model, where each phenotypic observation has an associated additive genetic value. Consider the model

$$y = Xb + Z^{*}a + e,$$

where $y$ is the data vector of length $n$, $X$ and $Z^{*}$ are known incidence matrices, $b$ is a vector of fixed effects of length $p$ and $a$ is the vector of additive genetic values of length $q$. With this model, usually $q > n$. The sampling model for the data is $y\,|\,b, a, \sigma_e^2 \sim N(Xb + Z^{*}a,\; I\sigma_e^2)$, where $\sigma_e^2$ is the residual variance. Vector $a$ is assumed to be multivariate normally distributed as

$$a\,|\,\sigma_a^2 \sim N(0,\; A\sigma_a^2),$$

where $A$ is the $q \times q$ additive genetic covariance matrix and $\sigma_a^2$ is the additive genetic variance.
Consider the decomposition of $A$ as in [6], $A = FF'$, and define the transformation $\varphi = F^{-1}a$, so that $\varphi\,|\,\sigma_a^2 \sim N(0, I\sigma_a^2)$ and the model becomes $y = Xb + Z\varphi + e$ with $Z = Z^{*}F$.
The MCEM restricted maximum likelihood estimator of $\sigma_a^2$ at the current EM iterate, equivalent to (4), is now

$$\hat\sigma_a^2 = \frac{1}{qT}\sum_{j=1}^{T}\varphi^{(j)\prime}\varphi^{(j)}, \qquad (15)$$

where $\varphi^{(j)}$ is the vector of Gibbs samples at the $j$th round, whose elements are drawn from $p(\varphi_i\,|\,\varphi_{-i}, b, y, \hat\theta)$, $i = 1, \ldots, q$. Using the same manipulations as before that led to (8) and (12), it is easy to show that the estimators equivalent to (8) and (12) are

$$\hat\sigma_a^2 = \frac{1}{q}\left[\frac{1}{T}\sum_{j=1}^{T}\bar\varphi^{(j)\prime}\bar\varphi^{(j)} + \hat\sigma_e^2\sum_{i=1}^{q} w^{-1}_{i+p,i+p}\right] \qquad (16)$$

and

$$\hat\sigma_a^2 = \frac{1}{q}\left[\hat\varphi'\hat\varphi + \hat\sigma_e^2\sum_{i=1}^{q} w^{-1}_{i+p,i+p} + \frac{1}{T}\sum_{j=1}^{T}\bar\varphi_0^{(j)\prime}\bar\varphi_0^{(j)}\right], \qquad (17)$$

respectively. In (16), $\bar\varphi^{(j)}$ is the vector of full conditional means at the $j$th Gibbs round, with elements $\bar\varphi^{(j)}_i = E(\varphi_i\,|\,\varphi^{(j)}_{-i}, b^{(j)}, y, \hat\theta)$; in (17), $\bar\varphi_0^{(j)}$ is the corresponding vector at the $j$th Gibbs round computed with $y = 0$, with elements $\bar\varphi^{(j)}_{0,i} = E(\varphi_i\,|\,\varphi^{(j)}_{-i}, b^{(j)}, y = 0, \hat\theta)$. In both expressions, $w^{-1}_{i+p,i+p}$ is the reciprocal of the diagonal element of row/column $i + p$ of the coefficient matrix $W$ of the mixed model equations for $(b, \varphi)$.
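The decorrelating transformation can be sketched as follows. The matrix used for $A$ below is an arbitrary symmetric positive definite stand-in for the additive relationship matrix, chosen only to illustrate the algebra; a Cholesky factor plays the role of $F$.

```python
import numpy as np

# Sketch of the transformation phi = F^{-1} a with A = F F' (Cholesky).
# A is an SPD stand-in for the relationship matrix; toy sizes are assumed.
rng = np.random.default_rng(5)
q, n, sig_a = 6, 12, 0.4
M = rng.normal(size=(q, q))
A = M @ M.T + q * np.eye(q)            # SPD stand-in for A
F = np.linalg.cholesky(A)              # A = F F'
Zstar = rng.integers(0, 2, (n, q)).astype(float)   # toy incidence matrix Z*
phi = rng.normal(0.0, np.sqrt(sig_a), q)           # iid N(0, sig_a) components
a = F @ phi                            # a = F phi  =>  Var(a) = A * sig_a
Z = Zstar @ F                          # transformed incidence matrix
# The contribution of the random effects to the mean is unchanged,
# so the machinery developed for the sire model applies to phi directly:
assert np.allclose(Zstar @ a, Z @ phi)
```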

DISCUSSION
Markov chain Monte Carlo (MCMC) methods are having an enormous impact on the implementation of complex hierarchical statistical models. In the present paper we discuss three MCMC-based EM algorithms for computing Monte Carlo estimators of variance components using restricted maximum likelihood. Another application along the same lines can be found in [4], which shows how to use the Gibbs sampler to obtain elements of the inverse of the coefficient matrix that features in equations (1) and (2).
The performance of the methods was compared in terms of their Monte Carlo variances and in terms of the length of computing time needed to achieve a given Monte Carlo variance. The different behaviour of the methods is disclosed in expressions (4), (14) and (12). The method based on (8) divides the overall sum of squares involved in (4) into two terms, one of which has no Monte Carlo noise. In our method, a further partitioning is achieved which includes an additional term that is not subject to Monte Carlo noise. However, this comes at the expense of requiring the solution to a linear system of equations. When tested with simulated data, the proposed method performed better than the other two. The data and model used induce a simple correlation structure; the relative performance of the proposed method may well differ with models that generate a more complicated correlation structure.
Efficient implementation is likely to require a fair amount of experimentation. For example, the solution to the linear system at each EM iterate need only be approximate, and can serve as the starting value for the next iterate. Similarly, the length of the MC chain within each EM iterate (T) can be traded off against the total number of EM cycles required to achieve convergence. One possibility which we have not explored is to set T = 1 and to let the system run until convergence is reached.