Confidence intervals for validation statistics with data truncation in genomic prediction

Background

Validation by data truncation is a common practice in genetic evaluations because of the interest in predicting the genetic merit of a set of young selection candidates. Two of the most widely used validation methods in genetic evaluations use a single data partition: predictivity or predictive ability (the correlation between pre-adjusted phenotypes and estimated breeding values (EBV), divided by the square root of the heritability) and the linear regression (LR) method (a comparison of "early" and "late" EBV). Both methods compare predictions obtained with the whole dataset and with a partial dataset that is obtained by removing the information related to a set of validation individuals. EBV obtained with the partial dataset are compared against adjusted phenotypes for the predictivity, or against EBV obtained with the whole dataset in the LR method. Confidence intervals for predictivity and the LR method can be obtained by replicating the validation for different samples (or folds), or by bootstrapping. Analytical confidence intervals would be beneficial to avoid running several validations and to test the quality of the bootstrap intervals. However, analytical confidence intervals are unavailable for predictivity and the LR method.

Results

We derived standard errors and Wald confidence intervals for the predictivity and the statistics included in the LR method (bias, dispersion, ratio of accuracies, and reliability). The confidence intervals for the bias, dispersion, and reliability depend on the relationships and the prediction error variances and covariances across the individuals in the validation set. We developed approximations for large datasets that only need the reliabilities of the individuals in the validation set. The confidence intervals for the ratio of accuracies and the predictivity were obtained through the Fisher transformation. We show the adequacy of both the analytical and the approximated analytical confidence intervals and compare them with bootstrap confidence intervals using two simulated examples. The analytical confidence intervals were closer to the simulated ones in both examples. Bootstrap confidence intervals tended to be narrower than the simulated ones. The approximated analytical confidence intervals were similar to those obtained by bootstrapping.

Conclusions

Estimating the sampling variation of the predictivity and the statistics in the LR method without replication or bootstrapping is possible for any dataset with the formulas presented in this study.

Supplementary Information

The online version contains supplementary material available at 10.1186/s12711-024-00883-w.


Background
Validation by data truncation has been proposed to validate models for genetic and genomic predictions [1]. In recent years, its popularity has increased over model-based statistics, such as the Akaike information criterion or the likelihood ratio [2]. Widely used statistics for validation by data truncation are those included in the linear regression (LR) method, which compares sets of estimated breeding values (EBV) [3], and predictivity [4], the latter defined as the correlation between EBV and adjusted phenotypes, divided by the square root of the heritability. These validation statistics focus on the ability of the model to predict breeding values. Validation using these methods has been performed in dairy [5] and beef [6] cattle, pigs [7], chickens [8], sheep [9], goats [10], fish [11], wheat [12], and trees [13], among others. For validation in dairy cattle, using weighted averages and deregressed evaluations could be more robust than the LR method or predictivity [14]. Overall, the validation methods covered in the present study provide measures of bias and accuracy of genomic predictions. Standard errors and confidence intervals of validation statistics can be obtained by k-fold cross-validation [2]. Many studies assessed the variation of the LR method statistics by replicating the validation (e.g., [15, 16]). However, in routine genetic evaluations, k-fold cross-validation is not useful because of population structure [1], because it does not account for the reduction in variance in the selected population [3], and because the interest is in predicting the genetic merit of young individuals [3]. Therefore, validation by data truncation is a common practice for routine genetic evaluations in animal and plant breeding [17-23].
In an early stage of developing the LR method, Legarra and Reverter [24] proposed calculating confidence intervals for the dispersion of the predictions (the slope of the regression of true on estimated breeding values) using classical regression theory (i.e., considering û_p as fixed) [25]. However, û_p is random and correlated with û_w, which introduces a systematic underestimation of the standard error of the dispersion. Thus, the estimated confidence intervals are narrower than the true ones.
Two methods are currently used to obtain standard errors and confidence intervals for validation by data truncation in genetic and genomic predictions. The first approach is to perform forward validations at several time points [18, 20, 23]. This practice gives an idea of the variation of the validation statistics over time. However, it cannot predict the variation of any statistic at a specific time point, and the statistics must be corrected because some time periods might be more represented than others [18]. In addition, this method is computationally expensive for large datasets and involves complex manipulations of the available dataset. The second approach uses bootstrapping, i.e., sampling the validation individuals with replacement to create pseudo-replicates of the validation dataset [17, 19, 22, 26]. Bootstrapping is attractive because it is computationally inexpensive and only requires running the validation once. To our knowledge, only Mäntysaari and Koivula [17] tested the adequacy of bootstrapping to obtain the variability of validation statistics for genomic selection, showing good agreement with the first approach; however, this was only shown for one dairy cattle dataset. In addition, non-sampling-based, analytical confidence intervals for the LR method statistics and predictivity have not been reported, although they are of interest in their own right and could simplify the process of assessing the quality of validation statistics. Therefore, the objectives of this study were to derive standard errors and analytical confidence intervals for validation-by-data-truncation statistics used in genetic and genomic evaluations, to benchmark them against their simulated sampling distributions, and to compare them against confidence intervals obtained by bootstrapping.

Methods
In the following section, we present the general model used to derive the formulas for the confidence intervals of the different validation statistics, together with a result that is useful for the subsequent derivations. Then, we derive the mathematical expression for each validation statistic and suggest approximations when the exact expressions cannot be obtained. Finally, we describe two simulations used for testing the adequacy of the presented confidence intervals. The derivation is frequentist in nature and considers the sampling distribution of the statistics of either validation method, accounting for the sampling variation in the phenotypes. This is the framework used by many methods to derive confidence intervals, and also by related methods such as the bootstrap [27]. Indeed, Efron [28] showed that cross-validation methods with replicates have frequentist interpretations.

Theory
For the sake of presentation, we assume a single-trait model with an additive genetic effect as the only random effect, although the results extend to other types of models:

y = Xb + Zu + e, (1)

where y is the vector of phenotypes, b is the vector of fixed effects, u is the vector of additive genetic effects, e is the vector of errors, and X and Z are incidence matrices.

The validation methods in this study (the LR method and predictivity) consist of splitting the data into a whole and a partial dataset, denoted with the subscripts w and p, respectively. The whole dataset has all the available phenotypes, whereas in the partial dataset the phenotypes recorded after a given date have been removed. The validation methods then compare EBV against either EBV obtained from the whole dataset (LR method) or pre-corrected phenotypes present in the "whole" but not in the "partial" dataset (predictivity). The comparison is usually for a set of individuals, named "focal"; these can be, e.g., bulls acquiring progeny records in the "whole" (but not in the "partial") dataset, or individual pigs acquiring, say, growth records in the "whole" (but not in the "partial") dataset.
Predicting u for the validation or testing set based on the whole data (û_w) requires solving the model in Eq. (1). The prediction of u for the validation set based on the partial data (û_p) is obtained by removing the phenotypes of the individuals in the validation set before solving the model in Eq. (1). As shown in Appendix I, if y is assumed to follow a multivariate normal distribution and the predictions are obtained by best linear unbiased prediction in the absence of selection (i.e., under random mating and random culling) [29], the joint distribution of û_w and û_p is:

(û_w, û_p) ~ N(0, [G − C22_w, G − C22_p; G − C22_p, G − C22_p]), (2)

where G = Var(u), C22_w is the prediction error variance of û_w, and C22_p is the prediction error variance of û_p. If the predictions are obtained from the mixed model equations (MME), C22_w and C22_p are obtained as the blocks of the inverse of the MME corresponding to the animal effect. Absence of selection is assumed for simplicity and because the variances in Eq. (2) become complicated (and basically impossible in practice, as selection is not easily described algebraically) to obtain (see Appendix I); this is a standard simplifying assumption in animal breeding applications; for instance, reliabilities are obtained from Eq. (2) or an approximation. As shown in Appendix I, the conditional distribution of û_w given û_p is:

û_w | û_p ~ N(û_p, C22_p − C22_w). (3)

Note that Eqs. (2) and (3) also hold for subvectors of û_w and û_p. Thus, the following derivations hold for the entire vectors û_w and û_p (i.e., the population) as well as for subvectors of û_w and û_p (i.e., the estimated breeding values of a subset of the population).
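As an illustration of how C22_w and C22_p arise from the MME, the following Python sketch builds a hypothetical toy model (our setup, assuming unrelated animals so that G = σ²_g I, and a single overall mean) and obtains the animal-effect block of the MME inverse for the whole and the partial data:

```python
import numpy as np

# Toy single-trait model y = Xb + Zu + e with one overall mean and
# unrelated animals (G = sigma2_g * I); both are simplifying assumptions
# for illustration only.
n_animals, sigma2_g, sigma2_e = 8, 1.0, 2.0
X = np.ones((n_animals, 1))
Z = np.eye(n_animals)
G_inv = np.eye(n_animals) / sigma2_g

def pev_block(keep):
    """PEV of u from the MME, using only the phenotypes flagged in `keep`."""
    Xk, Zk = X[keep], Z[keep]
    lhs = np.block([
        [Xk.T @ Xk / sigma2_e, Xk.T @ Zk / sigma2_e],
        [Zk.T @ Xk / sigma2_e, Zk.T @ Zk / sigma2_e + G_inv],
    ])
    C = np.linalg.inv(lhs)   # MME inverse; C22 is the animal-effect block
    return C[1:, 1:]

keep_w = np.ones(n_animals, dtype=bool)   # whole dataset: all phenotypes
keep_p = keep_w.copy()
keep_p[-3:] = False                        # partial: drop the validation records

C22_w, C22_p = pev_block(keep_w), pev_block(keep_p)
```

Because the partial MME contain less information, the diagonal of C22_p is never smaller than that of C22_w, which is what makes C22_p − C22_w a valid conditional variance in Eq. (3).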

Bias
Legarra and Reverter [3] derived the estimator of the bias of the predictions (μ̂_wp) as the difference between the averages of û_p and û_w. In matrix notation:

μ̂_wp = (1/n) 1'(û_p − û_w), (4)

where n is the number of individuals in the testing set and 1 is a vector of ones. Because of the joint multivariate normality of û_p and û_w, μ̂_wp is normally distributed (see p. 92 in [25]). Therefore, a Wald confidence interval [30] for μ̂_wp can be constructed if its standard error is known. Taking the variance of Eq. (4):

Var(μ̂_wp) = (1/n²) 1'(C22_p − C22_w)1. (5)

The above equation is simply the difference between the averages of the elements of the prediction error (co)variance matrices of the two sets of predictions. Then, a confidence interval for μ_wp is:

μ̂_wp ± z_{1−α/2} √((1/n²) 1'(C22_p − C22_w)1), (6)

where z_{1−α/2} is the value of the standard normal quantile function for the confidence level 1 − α/2. For large datasets, it is computationally unfeasible to obtain C22_p and C22_w. In that situation, we can simplify Eq. (6) by assuming that animals are non-inbred and mostly unrelated, such that the off-diagonal elements of G, C22_w, and C22_p can be safely ignored. Thus, C22_p − C22_w ≈ σ²_g(R_w − R_p), where σ²_g is the genetic variance and R_w and R_p are diagonal matrices of the genomic (G)EBV reliabilities in the whole and partial datasets, respectively. The standard error then becomes (σ_g/n)√(Σ_i (rel_w,i − rel_p,i)), and an approximate confidence interval for μ_wp is:

μ̂_wp ± z_{1−α/2} (σ_g/n) √(Σ_i (rel_w,i − rel_p,i)). (7)
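Both the exact interval and its large-data approximation are direct to compute once the PEV blocks or the reliabilities are available. A Python sketch follows (function names are ours; the approximation assumes diagonal PEV matrices with C22_ii = σ²_g(1 − rel_i)):

```python
import numpy as np
from statistics import NormalDist

def bias_ci(u_p, u_w, C22_p, C22_w, alpha=0.05):
    """Wald confidence interval for the bias mu_wp = mean(u_p) - mean(u_w)."""
    n = len(u_p)
    mu = u_p.mean() - u_w.mean()
    ones = np.ones(n)
    var = ones @ (C22_p - C22_w) @ ones / n**2   # exact sampling variance
    half = NormalDist().inv_cdf(1 - alpha / 2) * np.sqrt(var)
    return mu - half, mu + half

def bias_ci_approx(mu, rel_p, rel_w, sigma2_g, alpha=0.05):
    """Large-data approximation that only needs the EBV reliabilities."""
    n = len(rel_p)
    se = np.sqrt(sigma2_g * np.sum(rel_w - rel_p)) / n
    half = NormalDist().inv_cdf(1 - alpha / 2) * se
    return mu - half, mu + half
```

With diagonal PEV matrices the two functions agree exactly, since the i-th diagonal element of C22_p − C22_w then equals σ²_g(rel_w,i − rel_p,i).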

Dispersion
The regression coefficient of û_w on û_p (b_wp) quantifies the dispersion of the predictions with partial data. If there is no under/over dispersion, the expected value of b_wp is equal to 1. The mathematical expression for b_wp is:

b_wp = cov(û_p, û_w)/var(û_p) = (û_p'Sû_w)/(û_p'Sû_p), (8)

where cov and var are the sample covariance and variance, respectively, and S = I − n⁻¹11'. A Wald confidence interval for b_wp can be constructed because b_wp is asymptotically normal as the number of focal individuals in the validation set increases (see p. 249 in [25]). By the law of total variance (see p. 167 in [31]):

Var(b_wp) = E[Var(b_wp | û_p)] + Var(E[b_wp | û_p]). (9)

For the first term on the right-hand side, we have:

Var(b_wp | û_p) = (û_p'S(C22_p − C22_w)Sû_p)/(û_p'Sû_p)². (10)

Using a first-order Taylor approximation:

E[Var(b_wp | û_p)] ≈ E[û_p'S(C22_p − C22_w)Sû_p] / E[(û_p'Sû_p)²]. (11)

By the expectation of quadratic forms [32] and the zero expectation of û_p and û_w, the numerator of the right-hand side in Eq. (11) is E[û_p'S(C22_p − C22_w)Sû_p] = tr(S(C22_p − C22_w)S(G − C22_p)). For the denominator, we have E[(û_p'Sû_p)²] = Var(û_p'Sû_p) + E[û_p'Sû_p]² = 2 tr(S(G − C22_p)S(G − C22_p)) + tr(S(G − C22_p))². (12)

For the second term on the right-hand side of Eq. (9), E(b_wp | û_p) = 1 from Eq. (3), so:

Var(E[b_wp | û_p]) = 0. (13)

Therefore, the variance of b_wp is:

Var(b_wp) = tr(S(C22_p − C22_w)S(G − C22_p)) / [2 tr(S(G − C22_p)S(G − C22_p)) + tr(S(G − C22_p))²], (14)

and the Wald confidence interval for b_wp is:

b̂_wp ± z_{1−α/2} √(Var(b_wp)). (15)

By making similar assumptions as for the estimator of the bias, G − C22_p ≈ σ²_g R_p. Then, tr(S(C22_p − C22_w)S(G − C22_p)) ≈ σ⁴_g Σ_i (rel_w,i − rel_p,i) rel_p,i, which results in:

Var(b_wp) ≈ Σ_i (rel_w,i − rel_p,i) rel_p,i / [2 Σ_i rel²_p,i + (Σ_i rel_p,i)²], (16)

from which an approximate confidence interval for b_wp can be constructed. Assuming that the increase in reliability from the partial to the whole dataset is constant among the validation animals, rel_w,i/rel_p,i = c (which is always higher than 1), an approximate confidence interval for b_wp is:

b̂_wp ± z_{1−α/2} √((c − 1) Σ_i rel²_p,i / [2 Σ_i rel²_p,i + (Σ_i rel_p,i)²]). (17)
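As a sketch, the Taylor-approximated variance of b_wp can be evaluated directly from G and the two PEV blocks; the helper below (our naming) shows that only traces of n × n matrices over the validation set are needed:

```python
import numpy as np
from statistics import NormalDist

def dispersion_ci(b_wp, G, C22_p, C22_w, alpha=0.05):
    """Wald CI for the dispersion slope b_wp via the first-order Taylor
    variance; the conditional-mean term vanishes because E(b_wp | u_p) = 1."""
    n = G.shape[0]
    S = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    D = G - C22_p                               # Var of the partial EBV
    num = np.trace(S @ (C22_p - C22_w) @ S @ D)
    den = 2 * np.trace(S @ D @ S @ D) + np.trace(S @ D) ** 2
    se = np.sqrt(num / den)
    half = NormalDist().inv_cdf(1 - alpha / 2) * se
    return b_wp - half, b_wp + half
```

With G = I, C22_p = 0.5 I, and C22_w = 0.3 I for four animals, the variance evaluates to 0.3/3.75 = 0.08, so the 95% interval around a slope of 1 is roughly 1 ± 0.55.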

Ratio of accuracies
The Pearson correlation coefficient between û_p and û_w (ρ̂_wp) has an expected value equal to the ratio of the accuracies obtained with the partial and the whole dataset. The formula for ρ̂_wp is:

ρ̂_wp = cov(û_p, û_w)/√(var(û_p) var(û_w)) = (û_p'Sû_w)/√((û_p'Sû_p)(û_w'Sû_w)). (18)

In principle, a confidence interval could be obtained that explicitly involves the elements in Eq. (2); however, this yielded inelegant expressions that were unusable in practice (see Appendix II). We propose to construct a confidence interval for ρ_wp using the Fisher transformation ([25]; see p. 261 in [29]). The inverse hyperbolic tangent of a correlation coefficient r, tanh⁻¹(r) = ½ log((1 + r)/(1 − r)), follows approximately a normal distribution with a standard error equal to 1/√(n − 3), assuming that the samples are identically and independently distributed (which is not the case in genetic evaluations). Thus:

tanh⁻¹(ρ̂_wp) ~ N(tanh⁻¹(ρ_wp), 1/(n − 3)). (19)

To obtain a confidence interval for ρ_wp, we apply the hyperbolic tangent and get:

tanh(tanh⁻¹(ρ̂_wp) ± z_{1−α/2}/√(n − 3)), (20)

where tanh(x) = (e^{2x} − 1)/(e^{2x} + 1). This can be computed for any dataset size, but note that this confidence interval is not symmetric around ρ̂_wp.
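A sketch of the Fisher-transform interval, assuming the i.i.d. standard error 1/√(n − 3) (function name is ours):

```python
import numpy as np
from statistics import NormalDist

def fisher_ci(r, n, alpha=0.05):
    """CI for a correlation via Fisher's z-transform.

    Assumes i.i.d. samples, so SE(atanh(r)) = 1/sqrt(n - 3); this assumption
    is violated in genetic evaluations but tends to work well in practice.
    """
    z = NormalDist().inv_cdf(1 - alpha / 2)
    se = 1.0 / np.sqrt(n - 3)
    lo, hi = np.arctanh(r) - z * se, np.arctanh(r) + z * se
    return np.tanh(lo), np.tanh(hi)
```

The back-transformed interval is asymmetric around r, matching the remark above; the same routine could also serve for the predictivity.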

Reliability
The reliability of the EBV is defined as the square of the correlation between the true and estimated breeding values. Legarra and Reverter [3] and Macedo et al. [29] proposed that the reliability be estimated as the ratio between the sample covariance of û_p and û_w and the genetic variance of the validation set (σ²_g). This variance must account for selection and can be approximated using averages of additive relationships among the validation animals, which accounts for, e.g., a few families with large sibships, or calculated with the method of Sorensen et al. [33], which correctly accounts for selection. We will assume this variance to be known. The estimator of the reliability has the following expression:

ρ̂²_cov,wp = cov(û_p, û_w)/σ²_g = (û_p'Sû_w)/((n − 1)σ²_g). (21)

Although ρ̂²_cov,wp is not normally distributed, we assume that, for large sample sizes, its distribution is approximately normal. Taking the variance of ρ̂²_cov,wp gives:

Var(ρ̂²_cov,wp) = Var(û_p'Sû_w)/((n − 1)²σ⁴_g). (22)

As in Eq. (9), we apply the law of total variance to the numerator of the right-hand side in Eq. (22):

Var(û_p'Sû_w) = E[Var(û_p'Sû_w | û_p)] + Var(E[û_p'Sû_w | û_p]). (23)

Following similar arguments as for Eqs. (10) and (11), the first term is equal to tr(S(C22_p − C22_w)S(G − C22_p)), and the second term is Var(û_p'Sû_p) = 2 tr(S(G − C22_p)S(G − C22_p)). Therefore:

Var(ρ̂²_cov,wp) = [tr(S(C22_p − C22_w)S(G − C22_p)) + 2 tr(S(G − C22_p)S(G − C22_p))]/((n − 1)²σ⁴_g). (24)

Finally, a confidence interval for ρ²_cov,wp is constructed as:

ρ̂²_cov,wp ± z_{1−α/2} √(Var(ρ̂²_cov,wp)). (25)

Following the same assumptions as for the bias and dispersion parameters leads to Var(ρ̂²_cov,wp) ≈ Σ_i (rel_w,i + rel_p,i) rel_p,i/(n − 1)². Assuming that the increase in reliability from the partial to the whole dataset is constant among the validation animals, that is, rel_w,i = c · rel_p,i, an approximate confidence interval for ρ²_cov,wp is:

ρ̂²_cov,wp ± z_{1−α/2} √((c + 1) Σ_i rel²_p,i)/(n − 1). (26)
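A sketch of the reliability interval, assuming the sample covariance uses an n − 1 divisor and that the genetic variance of the validation set is known (function name and exact scaling are our reading of the derivation above):

```python
import numpy as np
from statistics import NormalDist

def reliability_ci(u_p, u_w, G, C22_p, C22_w, sigma2_g, alpha=0.05):
    """Wald CI for rel = cov(u_p_hat, u_w_hat) / sigma2_g."""
    n = len(u_p)
    rel = np.cov(u_p, u_w)[0, 1] / sigma2_g
    S = np.eye(n) - np.ones((n, n)) / n
    D = G - C22_p
    # law of total variance applied to the quadratic form u_p' S u_w:
    var_q = np.trace(S @ (C22_p - C22_w) @ S @ D) + 2 * np.trace(S @ D @ S @ D)
    se = np.sqrt(var_q) / ((n - 1) * sigma2_g)
    half = NormalDist().inv_cdf(1 - alpha / 2) * se
    return rel - half, rel + half
```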

Predictivity
The ratio between the correlation of û_p with the phenotypes of the validation set adjusted for fixed effects (y*) and the square root of the heritability (h) is an estimate of the correlation between the estimated and true breeding values [4]. This statistic is sometimes called predictivity (ρ̂_{y*,ûp}) and has the following mathematical expression:

ρ̂_{y*,ûp} = cov(y*, û_p)/(h √(var(y*) var(û_p))). (27)

As for the ratio of accuracies, the Fisher transformation can be used to obtain the following confidence interval for ρ_{y*,ûp}:

(1/h) tanh(tanh⁻¹(h ρ̂_{y*,ûp}) ± z_{1−α/2}/√(n − 3)). (28)

This can be computed for any dataset size.

Simulations
We tested the adequacy of our analytical (Eqs. (6), (15), (20), (25), and (28)) and approximated analytical (Eqs. (7), (17), and (26)) confidence intervals using two simulated examples. In both, we obtained the empirical distribution of the validation statistics by replicating the simulation. Then, we compared the standard error and 95% confidence interval of that sampling distribution (i.e., "True") against the confidence intervals obtained with the formulas presented in the previous section (i.e., "Analytical" or "Approximated") and by bootstrapping. The bootstrap confidence intervals were obtained by sampling the validation set with replacement, replicated 10,000 times.
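The bootstrap procedure described above can be sketched as a generic percentile bootstrap over the validation animals (function names are ours; the statistic is passed in as a function, illustrated here with the dispersion slope b_wp):

```python
import numpy as np

def bootstrap_ci(u_p, u_w, stat, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample validation animals with replacement."""
    rng = np.random.default_rng(seed)
    n = len(u_p)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)               # one pseudo-replicate
        reps[b] = stat(u_p[idx], u_w[idx])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

# e.g. the dispersion slope b_wp = cov(u_p_hat, u_w_hat) / var(u_p_hat):
slope = lambda p, w: np.cov(p, w)[0, 1] / np.var(p, ddof=1)
```

Note that resampling individuals treats them as exchangeable, which ignores the covariance structure among validation animals; this is one plausible reason why bootstrap intervals can be too narrow, as discussed in the Results.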

Example 1
The first dataset was created using a publicly available pedigree created by Yutaka Masuda (https://github.com/masuday/data/blob/master/tutorial/rawfiles/rawped). The pedigree had 11 generations without selection (i.e., random mating and random culling) and 4641 individuals. Single-trait models with generation as a fixed effect (b) and an additive genetic effect (u) as the only random effect were simulated for different heritabilities (h²) and proportions (prop) of animals with phenotypes in the population. In total, a grid of 81 scenarios corresponding to h² = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9} and prop = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9} was evaluated. Each scenario was replicated 50 times by sampling the vector of phenotypes from a multivariate normal distribution with mean Xb (b is fixed across replicates) and variance ZAZ'σ²_a + Iσ²_e, where A is the numerator relationship matrix [34], σ²_a = 1, and σ²_e = 1/h² − 1. The validation set was composed of the phenotyped animals from the most recent generation. The number of animals in the validation set was constant across heritabilities and, for each value of prop, was equal to 44, 74, 119, 149, 188, 234, 274, 318, and 362, respectively. All the computations were done in Julia [35].
Example 2

For the second example, we replicated the simulation of Vitezica et al. [36], which consists of a dairy cattle selection scheme evaluated with the single-step genomic best linear unbiased predictor [37-39]. In each replicate, the partial dataset was created by removing the phenotypes of the validation animals. The matrices C22_w and C22_p were obtained with sparse inversion techniques, which calculate the elements of the inverse corresponding to the non-zero elements of the original matrix [41, 42]. All the other analyses were done using Julia [35].

Table 1 Mean squared differences between estimated and true variance, lower bound of the 95% confidence interval (lCI), and upper bound of the 95% confidence interval (uCI), averaged over all levels of heritability and proportion of animals with records, for the different validation statistics in Example 1

Results
Table 1 shows the average squared differences between the estimated and true values of the variance and the 95% confidence interval bounds for Example 1. Average squared differences grouped by prop and by h² are also reported (see Additional file 1: Tables S1 and S2). The analytical confidence intervals and variances were closer to the true simulated values than those obtained by bootstrapping. The confidence intervals obtained by bootstrapping were very similar to the approximated analytical confidence intervals.
The same patterns can be observed in Figs. 1, 2, 3, 4 and 5, which compare the simulated against the estimated standard errors and confidence intervals for combinations of three heritabilities (low, medium, and high) and three proportions of animals with phenotypes (low, medium, and high). For the bias (Fig. 1), the estimation of the confidence intervals was less accurate with low prop. Within each prop, the approximated and bootstrap standard errors and confidence intervals tended to overestimate the simulated ones as the heritability increased.
The situation was the opposite for the dispersion parameter (Fig. 2). In this case, the approximated and bootstrap confidence intervals were too narrow for high prop with respect to the true confidence intervals obtained from the simulated data. These results suggest that bootstrapping does not properly account for the complex covariance structure between û_p and û_w.
The confidence intervals for the ratio of accuracies were slightly underestimated for the bootstrap method.
The same was observed for the predictivity. However, the variance among replicates was very high for scenarios with low prop or low h². In such cases, the confidence intervals for the predictivity would cover a large portion of its range, making inference based on the predictivity statistic inaccurate.
For the reliability (Fig. 5), the analytical confidence intervals were very close to the simulated ones. The approximated analytical and the bootstrap confidence intervals were systematically narrower than the simulated ones. In addition, the simulated confidence intervals were not symmetric around the mean, as the lower bound was closer to the mean than the upper bound. This could indicate that the normal approximation is not appropriate.
Results for Example 2 are shown in Fig. 6 for bulls and Fig. 7 for cows. The analytical confidence intervals for the reliabilities of cows were closer to the simulated ones than those for bulls. For bulls, the results were overall more variable and showed that the analytical confidence intervals for all the statistics were biased, probably because bulls were highly selected. This violates the assumption of absence of selection and can affect the expressions involving G. In addition, this issue could have been generated by the sparse inversion [41, 42] implemented in BLUPF90+, which calculates in an exact manner the elements of C22_w and C22_p corresponding to the non-zero pattern of the MME and ignores the rest of the elements, which have their values set to zero before sparse inversion. However, these elements, which are not needed for reliabilities or restricted maximum likelihood (REML), are needed to obtain the confidence intervals analytically, e.g., in [15]; an example is the prediction error covariance between two unrelated bulls with daughters in the same herd. Another reason could be that the amount of information removed for the validation bulls was not sufficient, which is shown by a high ρ̂_wp. Under a high ρ̂_wp, the gain in accuracy from the partial to the whole dataset will be minimal to null, and the standard errors of the validation statistics will tend to zero because C22_p ≈ C22_w. Similar to the results from Example 1, the bootstrap confidence intervals were narrower than the simulated ones.

Discussion
The aim of this study was to derive standard errors and analytical confidence intervals for the LR method and the predictivity. For the estimators of the bias, dispersion, and reliability from the LR method, we calculated their standard errors and built Wald confidence intervals assuming that the estimators are asymptotically normally distributed. Unlike [24], we used the marginal (unconditional) distribution of the estimators to account for the randomness of û_p and the dependence between û_p and û_w. Not accounting for the randomness of û_p results in an underestimation of the standard errors of the validation statistics and, hence, in narrower confidence intervals. The resulting standard errors and confidence intervals are functions of the relationships between the individuals in the validation set and their prediction error (co)variances in the whole and partial datasets.
For the estimator of the ratio of accuracies from the LR method and for the predictivity, we used the Fisher transformation to obtain confidence intervals for those correlation coefficients. Although this method is straightforward, it assumes that all the samples are identically and independently distributed, which is not true when performing validation by data truncation in genetic evaluations. Better formulas that account for heterogeneous variances and dependence among samples would involve complicated expressions (see Appendix II). In addition, Krishnamoorthy and Xia [43] and Gnambs [44] showed that Fisher's transformation works well with a large number of observations even when its assumptions are violated. Also, unlike the standard errors of the bias, dispersion, and reliability, which depend only on the model (see Eqs. (5), (14), and (24)), the variances of the ratio of accuracies and the predictivity depend directly on the values of the statistics themselves.
Although confidence intervals for the predictivity can be obtained with Fisher's transformation, comparing different models based on those confidence intervals is improper because it does not consider the dependency between the statistics. A bootstrap method to account for this was proposed by [24], but parametric methods exist. In other words, the methods presented in this study explain how to obtain confidence intervals for ρ_{y*,ûp}, but they do not assess the null hypothesis H0: ρ_{y*,ûp}(A) = ρ_{y*,ûp}(B), where A and B denote different methods or models for prediction. A proper test in this situation is the Williams test [45], whose statistic T is a function of the three correlations ρ_{y*,ûp}(A), ρ_{y*,ûp}(B), and ρ_{ûp(A),ûp(B)}, and follows approximately a t distribution with n − 3 degrees of freedom. Indeed, this test has already been used, but not in the context of the LR method [46].
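A sketch of Williams' test in its standard form (our reconstruction of the statistic; compare with [45] before use), where r_yA, r_yB, and r_AB are the three pairwise correlations computed on the same n validation animals:

```python
import math

def williams_t(r_ya, r_yb, r_ab, n):
    """Williams' test for H0: rho(y*, u_A) = rho(y*, u_B) on the same animals.

    Standard form of the statistic (Williams, 1959); |T| is compared against
    a t distribution with n - 3 degrees of freedom.
    """
    # determinant of the 3x3 correlation matrix of (y*, u_A, u_B)
    detR = 1 - r_ya**2 - r_yb**2 - r_ab**2 + 2 * r_ya * r_yb * r_ab
    rbar = (r_ya + r_yb) / 2
    num = (r_ya - r_yb) * math.sqrt((n - 1) * (1 + r_ab))
    den = math.sqrt(2 * (n - 1) / (n - 3) * detR + rbar**2 * (1 - r_ab) ** 3)
    return num / den
```

The statistic is zero when the two models correlate equally with y*, and its sign follows the difference r_yA − r_yB.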
According to the results of our study, analytical confidence intervals should be preferred over bootstrap confidence intervals. However, the analytical confidence intervals for the bias, dispersion, and reliability are computationally expensive to obtain in large datasets because they need the prediction error variances and covariances of the validation animals in the whole and partial datasets. An alternative for large-scale genetic evaluations could be to approximate C22_w and C22_p with Markov chain Monte Carlo methods [47]. In this study, the approximations that we propose assume that C22_p − C22_w and G − C22_p are diagonal. In that case, C22_w and C22_p can be obtained from the (G)EBV reliabilities reported in the evaluation, which corresponds, for instance, to the information that is used in Interbull's tests [48, 49]. The robustness of the diagonal assumption depends on the data. For not-very-related individuals with high reliabilities, the assumption holds. More complex scenarios, for instance, families with half-sibs and low to medium reliabilities, would require further inspection because of the block structure of C22_p − C22_w and G − C22_p. Assuming that the increase in reliability from the partial to the whole dataset (c) is constant among animals leads to expressions where only rel_p or rel_w is required. This could be attractive in cases where performing validation by data truncation is not possible (e.g., when phenotypes cannot be shared or rel_p might not be available) or when adding a source of information or calculating the reliability is not possible (rel_w might not be available). The assumption of a constant increase in reliability, using the average increase in reliability for the calculations, was shown to be robust in this study in spite of the range of c, which went from 1.86 to 8.91 in some scenarios of Example 1. In our simulations, the approximated analytical confidence intervals were similar to those obtained by bootstrapping.
In many scenarios in both examples, the bootstrap confidence intervals were narrower than the simulated ones. In other words, bootstrapping was "too optimistic" regardless of the variation of the empirical distribution of the validation statistic. The reason could be the correlated data structure shown by populations under artificial selection. Bickel et al. [50] reviewed situations where the classical bootstrap fails and proposed, in such cases, sampling with replacement using fewer observations than the total number. In addition, this could increase the efficiency of bootstrapping. The number of observations to sample would depend on the data and could be calibrated with the analytical confidence intervals in case these are too expensive to obtain for routine evaluations.
The additive genetic variance and the accuracy of EBV change when selection occurs [51, 52]. To our knowledge, the interaction between predictivity and selection has not been studied. That statistic depends on the square root of the heritability; thus, if the estimate of the heritability under selection is biased, the predictivity could also be biased. Even if the model and genetic parameters are correct, the predictivity could be biased if selected animals are chosen for the validation set. Simulation studies reported that the LR method worked well when the model used to estimate breeding values matches the true data-generating process [14, 29]. According to these results, one could infer that the LR method would estimate the bias, dispersion, and accuracy properly in the presence of selection if selection is correctly taken into account in the model with, for instance, the method of Henderson [53, 54]. However, this is rarely done in genetic evaluations, and selection is often ignored in the estimation of breeding values. In such a case, the LR method can estimate the direction of the bias but not its magnitude if the model is incorrect but reasonably robust [29]. The LR method cannot estimate the bias when the model is seriously mis-specified, which in the case of [29] was when a simulated environmental trend was ignored in the model. Macedo et al. [29] found that the dispersion and accuracy were well estimated in all scenarios. However, Himmelbauer et al. [14] found that the LR method performed well for males but not for females in dairy cattle selection schemes. In addition, they reported that the estimator of the reliability depends heavily on how the additive genetic variance of the validation set is calculated. The confidence intervals derived in this study can be affected by selection in two ways: (i) through the bias of the validation statistics, and (ii) through the effect of selection on the standard errors of the dispersion and reliability estimators. The first affects the location of the confidence interval and, given a biased estimator, cannot be corrected. The second affects the length of the confidence interval because the additive relationships in the validation group change due to selection [53, 54]. Specifically, the affected term is G − C22_p, which is the variance of û_p. According to Henderson [53, 54], the variance of û_p is reduced under selection. However, the effect on the standard errors of the dispersion and reliability estimators is hard to assess because the variance of û_p is involved in convoluted algebraic operations.

Conclusions
We derived analytical standard errors and confidence intervals for the predictivity and the LR method statistics of bias, dispersion, ratio of accuracies, and reliability. Based on the examples shown in this study, the analytical confidence intervals were more accurate than the confidence intervals obtained by bootstrapping. We also developed approximated analytical confidence intervals for situations where the analytical ones are not feasible because of computational limitations. This study provides a framework for proper statistical inference for validation by data truncation applied to genetic evaluations when replication is not possible.

E[û_p'S(C22_p − C22_w)Sû_p] = tr(S(C22_p − C22_w)S(G − C22_p)). For the denominator, E[(û_p'Sû_p)²] = Var(û_p'Sû_p) + E[û_p'Sû_p]² = 2 tr(S(G − C22_p)S(G − C22_p)) + tr(S(G − C22_p))². Following Eqs. (10) and (11), the first term is equal to tr(S(C22_p − C22_w)S(G − C22_p)).

Fig. 1 Comparison between the true (T), analytical (An), approximated (Ap), and bootstrap (B) standard errors and confidence intervals for the estimator of the bias over different combinations of heritability (h²) and proportion of animals with records (p) for Example 1. The length of the box indicates the magnitude of the standard error with respect to the mean of the bias over the replicates. The length of the whiskers indicates the length of the 95% confidence interval

Fig. 6 Comparison between the true (T), analytical (An), approximated (Ap), and bootstrap (B) standard errors and confidence intervals for the estimators of the bias, dispersion, ratio of accuracies¹, and reliability for the bulls in Example 2. The length of the box indicates the magnitude of the standard error with respect to the mean. The length of the whiskers indicates the length of the 95% confidence interval. ¹Standard errors were not available for the analytical confidence interval

From ρ̂_{y*,ûp} = cov(y*, û_p)/(h√(var(y*)var(û_p))) (Eq. 27), we have that the joint distribution of û_p and y* is [3]:

(y*, û_p) ~ N(0, [K, G − C22_p; G − C22_p, G − C22_p]),

where K = G + R − XC11X', with R = Var(e) and C11 the block of the generalized inverse of the mixed model equations pertaining to the fixed effects. After algebra, Var(y*'Sy*) = 2 tr(SKSK), Var(y*'Sû_p) = tr(SLS(G − C22_p)) + tr(S(G − C22_p)S(G − C22_p)) where L = C22_p + R − XC11X', Cov(û_p'Sû_p, y*'Sy*) = Cov(û_p'Sû_p, y*'Sû_p) = Var(û_p'Sû_p), and Cov(y*'Sû_p, y*'Sy*) = 2 tr(SKS(G − C22_p)).