Full conjugate analysis of normal multiple traits with missing records using a generalized inverted Wishart distribution

A Markov chain Monte Carlo (MCMC) algorithm to sample an exchangeable covariance matrix, such as the covariance matrix of the error terms (R0) in a multiple trait animal model with missing records under normal-inverted Wishart priors, is presented. The algorithm (FCG) is based on a conjugate form of the inverted Wishart density that avoids sampling the missing error terms. Normal prior densities are assumed for the 'fixed' effects and breeding values, whereas the covariance matrices are assumed to follow inverted Wishart distributions. The inverted Wishart prior for the environmental covariance matrix is a product density over all patterns of missing data. The resulting MCMC scheme eliminates the correlation between the sampled missing residuals and the sampled R0, which in turn decreases the total number of samples needed to reach convergence. The use of the FCG algorithm in a multiple trait data set with an extreme pattern of missing records produced a dramatic reduction in the size of the autocorrelations among samples for all lags from 1 to 50; this increased the effective sample size by 2.5 to 7 times and reduced the number of samples needed to attain convergence, when compared with the 'data augmentation' algorithm.


INTRODUCTION
Most data sets used to estimate genetic and environmental covariance components from multiple trait animal models have missing records for some traits. Missing data affect the precision of estimates and reduce the convergence rates of the algorithms used for either restricted maximum likelihood (REML, [13]) or Bayesian estimation. The common Bayesian approach for estimating (co)variance components in multiple trait animal models or test-day models with normally distributed missing records is to use prior distributions that would be conjugate if the data were complete. Using this approach, Van Tassell and Van Vleck [17] and Sorensen [15] implemented 'data augmentation' algorithms (DA, [16]) by sampling from multivariate normal densities for the fixed effects and breeding values, and from inverted Wishart densities for the covariance matrices of additive genetic and environmental effects. The procedure requires sampling all breeding values and errors for all individuals with at least one observed trait, conditional on the additive genetic and environmental covariance matrices sampled in the previous iteration of the Markov chain Monte Carlo (MCMC) augmentation scheme. The next step is to sample the additive genetic and environmental covariance matrices conditional on the breeding values and error terms just sampled. Therefore, the MCMC samples of breeding values tend to be correlated with the sampled matrix of additive (co)variances, and the sampled errors tend to be correlated with the sampled environmental covariance matrix, due to the dependence of one set of random variables on the other. Also, samples of the covariance matrices tend to be highly autocorrelated, indicating slow mixing and hence inefficiency in posterior computations.
The consequence of the correlation between parameter samples and of the autocorrelations among sampled covariance matrices is that the number of iterations needed to attain convergence tends to increase [10] in missing data problems, a situation similar to what happens when the expectation-maximization algorithm of Dempster et al. [4] is used for REML estimation of multiple trait (co)variance components.
It would then be useful to find alternative MCMC algorithms for estimating covariance matrices in normal-inverted Wishart settings with smaller autocorrelations among samples and faster convergence rates than regular DA. Liu et al. [11] suggested that MCMC sampling schemes for missing data that collapse the parameter space (for example, by avoiding the sampling of missing errors) tend to substantially increase the effective sample size and consequently to accelerate convergence. Dominici et al. [5] proposed collapsing the parameter space by integrating out the missing values rather than imputing them. In the normal-inverted Wishart setting for multiple traits of Van Tassell and Van Vleck [17] or Sorensen [15], this would lead to the direct sampling of the covariance matrices of additive breeding values and/or error terms, without having to sample the associated missing linear effects. The scheme is possible for the environmental covariance matrix only, since error terms are exchangeable among animals while breeding values are not. The objective of this research is to show how to estimate environmental covariance components with missing data, using a Bayesian MCMC scheme that does not involve sampling missing error terms, under a normal-inverted Wishart conjugate approach. The procedure (from now on called FCG, for 'full conjugate' Gibbs) is illustrated with a data set involving growth and carcass traits of Angus animals, and is compared with the DA sampler for multiple normal traits.

Multiple normal trait model with missing data
The model for the data on a normally distributed trait j (j = 1, ..., t) taken in animal i (i = 1, ..., q) is as follows:

y_ij = x'_ij β_j + a_ij + e_ij,   (1)

where y_ij, a_ij and e_ij are the observation, the breeding value and the error term, respectively. The vector β_j of 'fixed' effects for trait j is related to the observations by a vector x'_ij of known constants. As usual, y_ij is potentially observed, whereas a_ij and e_ij are not, i.e., some animals may not have records for some or all of the traits. All individuals that have observations on the same traits (say t_g ≤ t) share the same 'pattern' of observed and missing records. A pattern of observed data can be represented by a matrix M_g having t_g rows and t columns [5], where g = 1, ..., G, and G is the number of patterns in a given data set. All elements in any row of M_g are 0's, except for a 1 in the column of the trait that is observed. Thus, M_g = I_t whenever t_g = t. For example, suppose t = 6 and traits 1, 2 and 5 are observed. Then:

M_g = [ 1 0 0 0 0 0
        0 1 0 0 0 0
        0 0 0 0 1 0 ].

There are 2^t − 1 different matrices M_g related to patterns with at least one trait observed. We will set the complete pattern to be g = 1, so that M_1 = I_t. We will also assume that the complete pattern is observed in at least t animals, and denote with n_g the number of animals with records in pattern g. Then n = Σ_{g=1}^{G} n_g is the total number of animals with at least one trait recorded.
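As an aside, the pattern matrices M_g are straightforward to build in code. The sketch below (ours, using NumPy; the helper name is hypothetical) constructs the M_g of the example above and checks that premultiplying a complete trait vector by M_g extracts the observed traits:

```python
import numpy as np

def pattern_matrix(observed_traits, t):
    """Build the t_g x t pattern matrix M_g that extracts the observed
    traits from a complete t-vector. `observed_traits` holds 0-based
    column indices; this helper is illustrative, not from the paper."""
    M = np.zeros((len(observed_traits), t))
    for row, col in enumerate(observed_traits):
        M[row, col] = 1.0
    return M

# Example from the text: t = 6 with traits 1, 2 and 5 observed
M_g = pattern_matrix([0, 1, 4], t=6)
y_complete = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
y_observed = M_g @ y_complete   # picks out elements 1, 2 and 5
```

When all t traits are observed the same helper returns the identity matrix, matching M_1 = I_t in the text.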
Let y be the vector of observed data, ordered by trait within animal within pattern. Stacking the fixed effects by trait and the breeding values by animal within trait results in the following matrix formulation of (1):

y = Xβ + Za + e.   (2)

The matrix Z relates records to the breeding values in a. In order to allow for maternal effects, we will take the order of a to be rq × 1 (r ≥ t), rather than tq × 1. The covariance matrix of breeding values can then be written as:

Var(a) = G_0 ⊗ A,   G_0 = [g_jj'],

where g_jj' is the additive genetic covariance between traits j and j' if j ≠ j', and the additive genetic variance of trait j otherwise. As usual [7], the matrix A contains the additive numerator relationships among all q animals.
In (2), the vector e contains the errors ordered by trait within animal within pattern. Thus, e_(g1), e_(g2), ..., e_(g n_g) denote the t_g × 1 vectors of errors for the n_g animals with the t_g observed traits of pattern g. Error terms have zero expectation and, for animal i with complete records, the variance-covariance matrix is equal to Var(e_(1i)) = R_0 = [r_jj'], with r_jj' the environmental (co)variance between traits j and j'. If animal i has incomplete records in pattern g, the variance is then Var(e_(gi)) = M_g R_0 M_g'. For the entire vector e, with errors stacked by trait within animal within pattern, the variance-covariance matrix is block-diagonal:

Var(e) = R = ⊕_{g=1}^{G} [I_{n_g} ⊗ (M_g R_0 M_g')].

Under normality of breeding values and errors, the density of the observed data can be written as being proportional to:

p(y | β, a, R_0) ∝ ∏_{g=1}^{G} |M_g R_0 M_g'|^{−n_g/2} exp[−½ Σ_{i=1}^{n_g} e_(gi)' (M_g R_0 M_g')^{−1} e_(gi)],   (5)

where e_(gi) is the error vector of animal i in pattern g.
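The way M_g R_0 M_g' picks out the error covariance submatrix of the observed traits can be checked numerically. The 3-trait R_0 below is made up purely for illustration:

```python
import numpy as np

# Hypothetical 3-trait R0; pattern g observes traits 1 and 3
R0 = np.array([[4.0, 1.0, 0.5],
               [1.0, 3.0, 0.8],
               [0.5, 0.8, 2.0]])
M_g = np.array([[1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0]])

# Var(e_(gi)) = M_g R0 M_g' is the submatrix of the observed traits
Var_e = M_g @ R0 @ M_g.T
```

The result is the 2 × 2 block of R_0 formed by rows and columns 1 and 3, as the text states.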

Prior densities
In order to avoid improper posterior densities, we take the prior distribution of the p × 1 vector β to be multivariate normal: β ∼ N_p(0, K). If vague prior knowledge is to be reflected, the covariance matrix K will have very large diagonal elements (say k_ii > 10^8). This specification avoids improper posterior distributions in the mixed model [8]. The prior density of β will then be proportional to:

p(β) ∝ exp(−½ β'K^{−1}β).   (6)

The breeding values of the t traits for the q animals are distributed a priori as a ∼ N_rq(0, G_0 ⊗ A), so that:

p(a | G_0) ∝ |G_0 ⊗ A|^{−1/2} exp[−½ a'(G_0^{−1} ⊗ A^{−1})a].   (7)

The matrix of additive (co)variance components G_0 follows a priori an inverted Wishart (IW) density, G_0 ∼ IW(G_0*, n_A), where G_0* is the prior covariance matrix and n_A is the degrees of belief. Thus:

p(G_0) ∝ |G_0|^{−(n_A + r + 1)/2} exp[−½ tr(G_0^{−1} G_0*)].   (8)

We now discuss the prior density for R_0. Had the data been complete (no missing records for any animal or trait), the prior density for R_0 would have been IW(R_0*, ν), the hyperparameters being the prior covariance matrix R_0* and the degrees of belief ν. In order to allow for all patterns of missing data, and following the work of Kadane and Trader [9] and Dominici et al. [5], we take the following conjugate prior for R_0:

p(R_0) ∝ ∏_{g=1}^{G} |M_g R_0 M_g'|^{−(ν_g + t_g + 1)/2} exp{−½ tr[(M_g R_0 M_g')^{−1} M_g R_0* M_g']}.   (9)

In the words of Dominici et al. [5], specification (9) mimics the natural conjugate prior for the t_g-dimensional problem of inference on the variables of pattern g.

Joint posterior distribution
Multiplying (5) by (6), (7), (8) and (9) produces the joint posterior density of all parameters:

p(β, a, G_0, R_0 | y) ∝ p(y | β, a, R_0) p(β) p(a | G_0) p(G_0) p(R_0).   (10)

A hybrid MCMC method may be used to sample from (10), combining the classic DA algorithm for normal multiple traits of Van Tassell and Van Vleck [17] and Sorensen [15], for β, a and G_0, with the conjugate specification (9) for R_0 proposed by Dominici et al. [5].

Conditional posterior distributions of β, a and G 0
The algorithm employed by Van Tassell and Van Vleck [17] and Sorensen [15] samples the fixed effects and the breeding values first. In doing so, consider the following linear system:

[ X'R^{−1}X + K^{−1}    X'R^{−1}Z                    ] [ β̂ ]   [ X'R^{−1}y ]
[ Z'R^{−1}X             Z'R^{−1}Z + G_0^{−1} ⊗ A^{−1} ] [ â  ] = [ Z'R^{−1}y ].   (11)

The system (11) is a function of K, G_0 and R_0, and allows writing the joint conditional posterior density of β and a as:

β, a | y, G_0, R_0 ∼ N([β̂; â], C^{−1}),   (12)

where C is the coefficient matrix of (11). The normal density (12) can be sampled either parameter by parameter or by blocks of parameters [17]. To sample from the posterior conditional density of G_0, let S be the r × r matrix of quadratic forms

S = [ a_1'A^{−1}a_1  ...  a_1'A^{−1}a_r
      ...
      a_r'A^{−1}a_1  ...  a_r'A^{−1}a_r ].
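A minimal numerical sketch of a draw from the normal conditional (12): the mean is the solution of system (11) and the covariance is the inverse of its coefficient matrix. The tiny C and right-hand side below are hypothetical stand-ins for the actual mixed-model arrays:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 2 x 2 coefficient matrix and right-hand side of (11)
C = np.array([[4.0, 1.0],
              [1.0, 3.0]])
rhs = np.array([2.0, 1.0])

mean = np.linalg.solve(C, rhs)             # solution of the linear system
L = np.linalg.cholesky(np.linalg.inv(C))   # conditional covariance is C^{-1}
draw = mean + L @ rng.standard_normal(2)   # one joint draw of (beta, a)
```

In practice one would never invert C explicitly for large systems; blockwise or single-site sampling as in [17] avoids that.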
Then, Van Tassell and Van Vleck [17] observed that the conditional posterior distribution of G_0 is inverted Wishart with scale matrix G_0* + S and degrees of freedom equal to n_A + q, so that:

G_0 | y, β, a ∼ IW(G_0* + S, n_A + q).
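In code, this conditional draw of G_0 amounts to forming the matrix of quadratic forms S and calling an inverted Wishart sampler. The sketch below is ours, using SciPy's `invwishart`; A = I (unrelated animals) and all hyperparameter values are hypothetical:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)

q, r, nA = 50, 2, 10                # animals, genetic effects, degrees of belief
A_inv = np.eye(q)                   # inverse relationship matrix (identity here)
a = rng.standard_normal((q, r))     # sampled breeding values, one column per effect
G0_star = np.eye(r)                 # prior scale matrix (hypothetical)

S = a.T @ A_inv @ a                 # S[j, k] = a_j' A^{-1} a_k
G0_draw = invwishart.rvs(df=nA + q, scale=G0_star + S, random_state=1)
```

The draw is a symmetric positive definite r × r matrix, as required of a covariance matrix.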

Conditional posterior distribution of R 0
In this section the sampling of the environmental covariance matrix R_0 is presented. The procedure differs from the regular DA algorithm proposed by Van Tassell and Van Vleck [17] or Sorensen [15], since no missing errors are sampled: only those variances and covariances of R_0 that are missing from any particular pattern are sampled. In the Appendix it is shown that the conditional posterior distribution of R_0 has the following density:

p(R_0 | y, β, a) ∝ ∏_{g=1}^{G} |M_g R_0 M_g'|^{−(ν_g + n_g + t_g + 1)/2} exp{−½ tr[(M_g R_0 M_g')^{−1} (M_g R_0* M_g' + E_g)]},   (15)

where E_g is the matrix of sums of squares and crossproducts of the errors in pattern g, defined in the Appendix. To sample the covariance matrix of a normal distribution, Dominici et al. [5] proposed an MCMC sampling scheme based on a recursive analysis of the inverted Wishart density described by Bauwens et al. [1] (th. A.17, p. 305). This approach is equivalent to characterizing (15) as a generalized inverted Wishart density [2]. We now present in detail the algorithm to sample R_0. The first step requires calculating the "completed" matrices of hyperparameters for each pattern. We denote this matrix for pattern g as R*_g, composed of the submatrices R*_g,oo, R*_g,om and R*_g,mm for the observed, observed by missing, and missing traits, respectively. Once the R*_g are computed, R_0 is sampled from an inverted Wishart distribution with parameter covariance matrix equal to the sum of the "completed" matrices obtained in the previous step. The sampling of R_0 then involves the following steps:

Sampling of the hypercovariances between observed and missing traits in the pattern
This step requires a matrix multiplication applied to a sampled random matrix, by drawing from the matricvariate normal density in (16). In practice, sampling from the matricvariate normal density (see [1], p. 305) in (16) can be achieved by performing the following operations. First, multiply the Cholesky factor of the covariance matrix in (16) by a vector of t_g(t − t_g) independent standard normal samples. The resulting random vector is then set into a matrix of order t_g × (t − t_g) by reversing the vec operation: the first column of the matrix is formed with the first t_g elements of the vector, the second column with the next t_g elements, and so on, up to column (t − t_g). Note that R_oo, R_mm and R_mo are submatrices of R_0 from the previous iteration of the FCG sampler. Then, the mean R_oo^{−1} R_om should be added to the random t_g × (t − t_g) matrix. Finally, the resulting matrix should be premultiplied by the t_g × t_g matrix associated with the observed traits in pattern g, equal to R*_oo = R*_0,oo + E_g. The matrix R*_0,oo contains the hypervariances and covariances in R_0* for the observed traits. The matrix with the sums of squared errors in pattern g (E_g) is defined in the Appendix.
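The vector-then-reshape recipe above can be sketched as follows. The Kronecker-product covariance used here is an assumption standing in for the exact covariance of (16), so this illustrates the mechanics rather than the paper's precise formula:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_matric_normal(mean, row_cov, col_cov, rng):
    """Draw a p x q matrix X with vec(X) ~ N(vec(mean), col_cov kron row_cov),
    using the vector-then-reshape recipe described in the text."""
    p, q = mean.shape
    L = np.linalg.cholesky(np.kron(col_cov, row_cov))  # Cholesky of the vec-covariance
    z = L @ rng.standard_normal(p * q)                 # correlated normal vector
    # reverse the vec operation: first column gets the first p elements, etc.
    return mean + z.reshape((p, q), order="F")

# Hypothetical pattern with t_g = 2 observed and t - t_g = 1 missing trait
mean = np.array([[0.1],
                 [0.2]])
X = sample_matric_normal(mean, np.eye(2), np.eye(1), rng)
```

NumPy's `order="F"` reshape fills column by column, exactly the "first t_g elements form the first column" rule stated in the text.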

Sampling of the hypervariance matrix among missing traits in pattern g
This step is performed by sampling from:

R*_g,mm ∼ W_m(R_mm.o, ν_g + 2t),

where W_m denotes the Wishart density, R_mm.o = R_mm − R_mo R_oo^{−1} R_om is the conditional covariance matrix of the missing traits given the observed ones, and the degrees of freedom are ν_g plus two times the number of traits.
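As an illustration of this step, the sketch below forms the conditional covariance R_mm.o from hypothetical submatrices of R_0 and draws from a Wishart with ν_g + 2t degrees of freedom, the rule quoted above (SciPy's `wishart` is our choice of sampler):

```python
import numpy as np
from scipy.stats import wishart

# Hypothetical submatrices of R0 for a pattern with two observed traits
# and one missing trait (t = 3)
R_oo = np.array([[2.0, 0.3],
                 [0.3, 1.5]])
R_om = np.array([[0.4],
                 [0.2]])
R_mm = np.array([[1.2]])

# Conditional covariance of the missing traits given the observed ones
R_mm_o = R_mm - R_om.T @ np.linalg.solve(R_oo, R_om)

t, nu_g = 3, 10
draw = wishart.rvs(df=nu_g + 2 * t, scale=R_mm_o, random_state=0)
```

Since conditioning removes variance, R_mm.o is smaller than R_mm, and the Wishart draw is positive (here a scalar, because only one trait is missing).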

Obtaining the hypercovariance matrix
Let P_g be a permutation matrix such that the observed and missing traits in pattern g recover their original positions in the complete covariance matrix. Then, the contribution of pattern g to the hypercovariance matrix for sampling R_0 is equal to:

R*_g = P_g [ R*_g,oo   R*_g,om
             R*_g,mo   R*_g,mm ] P_g',

so that the complete hypercovariance matrix is equal to Σ_{g=1}^{G} R*_g.

Sampling of R 0
The matrix R_0 is sampled from an inverted Wishart density with parameter covariance matrix equal to Σ_{g=1}^{G} R*_g, the sum of the completed matrices over all patterns. Once R_0 has been updated, the algorithm goes back to step 1.
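The final step can be sketched as summing the completed matrices and drawing once from an inverted Wishart; the R*_g values and the pooled degrees of freedom below are hypothetical:

```python
import numpy as np
from scipy.stats import invwishart

t = 3
# Hypothetical completed matrices R*_g, one per pattern
R_star = [5.0 * np.eye(t), 3.0 * np.eye(t)]
scale = sum(R_star)                 # sum of completed matrices over patterns
df = 25                             # assumed pooled degrees of freedom

R0_draw = invwishart.rvs(df=df, scale=scale, random_state=2)
```

The draw is a full t × t symmetric positive definite matrix, so every variance and covariance of R_0 is updated in a single step, with no missing residuals ever sampled.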

Analysis of a beef cattle data set
A data set with growth and carcass records from a designed progeny test of Angus cattle, collected from 1981 to 1990, was employed to compare the autocorrelations among sampled R_0 and the 'effective sample sizes' of the FCG and DA algorithms. Calves were born and maintained up to weaning at a property of Universidad de Buenos Aires (UBA), in Laprida, south central Buenos Aires province, Argentina. There were 1367 animals with at least one trait recorded, sired by 31 purebred Angus bulls and out of commercial heifers. Almost half of the recorded animals were females, which had no observations on any trait except birth weight. Every year 6 to 7 bulls were tested, and either 1 or 2 sires were repeated the next year to keep the data structure connected. Every year different heifers were artificially inseminated under a completely random mating scheme. After weaning (average age = 252 days), all males were castrated and taken to another property for the fattening phase. The steers were kept on cultivated pastures until they had at least 5 mm of fat over the ribs, based on visual appraisal. The mean age at slaughter was 28 months and the mean weight was 447 kg. Retail cuts had the external fat completely trimmed. The six traits measured were: (1) birth weight (BW); (2) weaning weight (WW); (3) weight at 18 months of age (W18); (4) weight of three retail cuts (ECW); (5) weight of the hind pistola cut (WHP); and (6) half-carcass weight (HCW). Descriptive statistics for all traits are shown in Table I.
The year of measure was included as a fixed classification variable in the models for all six traits, and sex was included in the model for BW. Fixed covariates were age at weaning for WW, age at measure for W18, and age at slaughter for ECW, WHP and HCW. Random effects were the breeding values and the error terms. The parameters of interest were the 6 × 6 covariance matrices G_0 and R_0. An empirical Bayes procedure was used to estimate the hyperparameters: the prior covariance matrices G_0* and R_0* were taken to be equal to the expectation-maximization [4] REML estimates of G_0 and R_0, respectively, as discussed by Henderson [7]. The degrees of belief were set equal to 10. After running 200 000 cycles of both FCG and DA, autocorrelations from lag 1 to 50 for all 21 parameters of R_0 were calculated using the BOA program [14]. Once the autocorrelations were calculated, the 'effective sample size' (Neal, in Kass et al. [10]) was computed as:

ESS = T / (1 + 2 Σ_{i=1}^{K} ρ(i)),

where ρ(i) is the autocorrelation calculated at lag i, K is the lag at which ρ seems to have fallen to near zero, and T is the total number of samples drawn. We took K = 50 and T = 200 000.
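The effective sample size formula can be coded directly. In the sketch below the empirical autocorrelation estimator is our own choice, and a chain of independent draws is used to check that the result comes out close to T:

```python
import numpy as np

def effective_sample_size(x, K=50):
    """ESS = T / (1 + 2 * sum_{i=1}^K rho(i)), the formula used in the
    text, with rho(i) estimated by the empirical lag-i autocorrelation."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    xc = x - x.mean()
    var = (xc @ xc) / T
    rho = [(xc[:-i] @ xc[i:]) / (T * var) for i in range(1, K + 1)]
    return T / (1.0 + 2.0 * sum(rho))

# Independent draws: ESS should be close to the chain length T
rng = np.random.default_rng(0)
chain = rng.standard_normal(20_000)
ess = effective_sample_size(chain, K=50)
```

A strongly autocorrelated chain, like the DA samples of R_0 described above, yields a much larger denominator and hence a much smaller ESS.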

RESULTS
To summarize the results, the environmental variances of a growth trait (BW, r_BW) and of a carcass trait (ECW, r_ECW), as well as the covariance between them (r_BW−ECW), are reported. The autocorrelation functions of the MCMC draws of r_BW, r_BW−ECW and r_ECW for the FCG and DA samplers are displayed in Figures 1, 2 and 3, respectively.
For all environmental dispersion parameters, the autocorrelations of the FCG sampler were between 0.5 and 4 times smaller than those obtained with the DA sampler. Table II shows the posterior means and the posterior standard deviations for all environmental (co)variance components, expressed as correlations. A graph of the marginal posterior density of r_BW−ECW is displayed in Figure 4.

DISCUSSION
The original contribution of this research was to implement a Bayesian conjugate normal-inverted Wishart approach for sampling the environmental covariance matrix R_0 in an animal model for multiple normal traits with missing records. This conjugate setting for normal missing data was first proposed by Kadane and Trader [9], and later applied by Dominici et al. [5] to an exchangeable normal data vector with missing records. The procedure differs from the regular DA algorithm implemented by Van Tassell and Van Vleck [17] for multiple traits, or by Sorensen [15] for multivariate models, in the sense that there is no need to sample the residuals of missing records: only those variances and covariances of R_0 that are missing from any particular pattern are sampled. By avoiding the sampling of missing residuals, the effect that the correlation between these terms and the sampled R_0 has on the convergence properties of the algorithm is eliminated [11], which in turn decreases the total number of samples needed to reach convergence. The use of the algorithm in a data set with an extreme pattern of missing records produced a dramatic reduction in the size of the autocorrelations among samples for all lags from 1 to 50. In turn, this increased the effective sample size by 2.5 to 7 times, and reduced the number of samples needed to attain convergence compared with the DA algorithm of Van Tassell and Van Vleck [17] or Sorensen [15]. Additionally, the time spent on each iteration of FCG was approximately 80% of the time spent on each iteration of DA. The total computing time will depend on the fraction of missing to total residuals to be sampled and on the total number of patterns in the particular data set.
The sampling of the environmental covariance matrix without having to sample the residuals presented here was facilitated by the fact that error terms are exchangeable among animals. This allowed collapsing the parameter space by integrating out the missing residuals, rather than imputing them. Unfortunately, breeding values are not exchangeable (for example, the breeding values of a sire and its son cannot be exchanged), so the same sampling cannot be performed with the additive genetic covariance matrix. A possibility is to parametrize the model in terms of Mendelian residuals (ϕ) rather than breeding values, by expressing a = [I_t ⊗ (I_q − P)^{−1}]ϕ in (2) [3]. The matrix P has row i with all elements equal to zero, except for two elements equal to 0.5 in the columns corresponding to the sire and dam of animal i. However, this reparametrization does not reduce the number of random genetic effects and the programming is more involved, so there is not much to gain with this formulation. The alternative parametrization of a that keeps only the breeding values of animals with records requires direct inversion of the resulting additive relationship matrix for each trait, since the rules of Henderson [7] cannot be used; a rather impractical approach for most data sets.
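The Mendelian-residual reparametrization a = (I − P)^{−1}ϕ (shown here for a single trait) can be illustrated on a three-animal pedigree; the values of ϕ are made up:

```python
import numpy as np

# Tiny pedigree: animals 0 and 1 are founders, animal 2 has sire 0 and
# dam 1. Row i of P carries 0.5 in the sire and dam columns of animal i.
q = 3
P = np.zeros((q, q))
P[2, 0] = P[2, 1] = 0.5

phi = np.array([1.0, -1.0, 0.5])            # hypothetical Mendelian residuals
a = np.linalg.solve(np.eye(q) - P, phi)     # breeding values a = (I - P)^{-1} phi
```

Each non-founder's breeding value equals its Mendelian residual plus the parent average, which is exactly what solving with (I − P) enforces.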
Although the FCG algorithm was presented for a multiple trait animal model, it can equally be used for the environmental covariance matrix of longitudinal data with a covariance function, such as the one discussed by Meyer and Hill [12]. In that case, let R_0 = ΦΓΦ' in (9) and onwards, where Γ is the parametric matrix of (co)variance components and Φ is a known matrix containing orthogonal polynomial functions evaluated at given ages. Also, the prior covariance matrix is such that R_0* = ΦΓ*Φ'.