Let y = {y_i} be a vector of phenotypes recorded in a binary fashion (0/1) from n animals genotyped for p markers X = {x_i}. Four different methods were applied: two linear regressions using a Bayesian framework, and two machine-learning ensemble algorithms.
Model 1: threshold Bayes A
A threshold version of the Bayes A (TBA) model was proposed here, as an extension of the Bayesian regression proposed by Meuwissen et al. [1]. The traditional threshold model [4] postulates that there is an underlying random variable, called liability (λ), that follows a continuous distribution, and that the observed dichotomy is the result of the position of the liability with respect to a fixed threshold (t):

y_i = 0 if λ_i ≤ t, and y_i = 1 if λ_i > t.
The liability is taken as the response variable. The proposed modification consists of regressing a Gaussian liability variable on the single nucleotide polymorphism (SNP) covariates. The TBA model can be described as follows:

λ = 1μ + Xb + e,
where λ is the underlying liability vector for y, μ is the population mean, 1 is a column vector (n × 1) of ones, and b = {b_j} is the vector of regression coefficients of the p markers or SNP, assumed a priori to be normally and independently distributed as b_j ~ N(0, σ²_{bj}), where σ²_{bj} is an unknown variance associated with marker j. The prior distribution of σ²_{bj} is assumed to be a scaled inverse chi-square, χ⁻²(ν_j, S²), with ν_j = 4 and a given scale parameter S². Elements of the incidence matrix X, of order n × p, may be set up differently for additive, dominant or epistatic models. In the most common scenario, they take values -1, 0 or 1 for marker genotypes aa, Aa and AA, respectively. The residuals (e) are assumed to be distributed as e ~ N(0, Iσ²_e), with residual variance σ²_e. As in a regular threshold model, two parameters have to be fixed (here, the threshold and the residual variance were set to zero and one, respectively) because these parameters are not identifiable in a liability model.
This method can be solved via the Gibbs sampler described in Meuwissen et al. [1], with the simple incorporation of a data augmentation step to sample the individual liabilities from their corresponding truncated normal distributions, as described in Tanner and Wong [15]. The joint posterior distribution of the n liabilities is proportional to:

∏_{i=1}^{n} φ(λ_i − μ − x_i'b) [1(λ_i > t) 1(y_i = 1) + 1(λ_i ≤ t) 1(y_i = 0)],

where φ(·) denotes the standard normal density.
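As an illustration only, a minimal sketch of this liability-sampling step is given below, assuming the threshold t = 0 and unit residual variance as above; the function name sample_liabilities and its arguments are ours and do not come from the original implementation.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_liabilities(y, mu, X, b, rng=None):
    """One data-augmentation step: draw each liability from a normal
    distribution with mean mu + x_i'b and variance 1, truncated at the
    threshold t = 0 according to the observed 0/1 phenotype."""
    mean = mu + X @ b                        # conditional mean of each liability
    lower = np.where(y == 1, 0.0, -np.inf)   # y = 1 -> liability above threshold
    upper = np.where(y == 1, np.inf, 0.0)    # y = 0 -> liability below threshold
    # truncnorm expects bounds standardized around the mean (sd = 1 here)
    return truncnorm.rvs(lower - mean, upper - mean, loc=mean, scale=1.0,
                         random_state=rng)
```

Within each Gibbs iteration, the liabilities drawn in this way replace the observed phenotypes, and μ, b and the marker variances are then updated exactly as in Meuwissen et al. [1].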
Model 2: threshold Bayesian LASSO
The Bayesian LASSO described by Park and Casella [16], and its version for genomic selection detailed in de los Campos et al. [17], can also be extended to discrete traits [18]. As in the previous model, the response variable is a liability (λ) that follows a continuous distribution. The Bayesian threshold LASSO (BTL) can be written as:

λ = 1μ + Xb + e,

where λ is the vector of liabilities for all individuals, μ is the population mean, 1 is a column vector (n × 1) of ones, and b = {b_j} are the LASSO estimates with their respective incidence matrix X, as described for model TBA. As a modeling choice, e was considered the vector of independently and identically distributed residuals, e ~ N(0, Iσ²_e). As for model TBA, we fixed the threshold at 0 and the residual variance at 1; alternative choices lead to an equivalent model.
In a fully Bayesian context, the LASSO estimates can be interpreted as posterior mode estimates when the regression parameters have independent and identical double-exponential priors [19]. Park and Casella [16] proposed a conditional Laplace prior specification for the LASSO estimates of the form:

p(b | σ²_e) = ∏_{j=1}^{p} (γ / (2√σ²_e)) exp(−γ|b_j| / √σ²_e),
where σ²_e is the residual variance and γ is a parameter controlling the shrinkage of the distribution. Inference on γ may be carried out in different ways [16]. To follow the Bayesian specification, a gamma prior is proposed here for γ², with known rate (r) and shape (δ) hyper-parameters, as described by de los Campos et al. [17]. Samples from the posterior distributions of these estimates may be drawn with the Gibbs sampling algorithm described in de los Campos et al. [17], with the corresponding data augmentation step for the liabilities, as described for TBA.
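In practice, the conjugacy exploited by that Gibbs sampler comes from the standard scale-mixture-of-normals representation of the Laplace prior (stated here in generic notation, not quoted from [16]): b_j | τ²_j, σ²_e ~ N(0, σ²_e τ²_j), with τ²_j | γ² ~ Exp(γ²/2) and γ² ~ Gamma(δ, r); integrating out τ²_j recovers the conditional Laplace prior given above.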
Model 3: gradient boosting
Gradient boosting may be classified as an ensemble method [20]. This algorithm combines different predictors in a sequential manner, applies some shrinkage to them [12], and performs variable selection. Gradient boosting forms a "committee" of predictors with potentially greater predictive ability than that of any of the individual predictors, of the form:

F_M(X) = F_0 + ν ∑_{m=1}^{M} h_m(y; X).

Each predictor (h_m(y; X) for m ∈ (1, M)) is applied consecutively to the residuals from the committee formed by the previous ones. This algorithm can be computed using importance sampling learning ensembles as follows:
(Initialization): Given data (y, X), let the prediction of phenotypes be F_0 = μ, with μ being the population mean.
Then, for m in {1 to M}, with M being large, calculate the loss function (L) for

j_m = argmin_j ∑_{i=1}^{n} L(y_i, F_{m-1}(x_i) + h(y_i; x_i, j)),

where j_m is the SNP (only one SNP is selected at each iteration) that minimizes the loss at iteration m, h(y_i; x_i, j) is the prediction of the observation using SNP j at the current iteration, F_{m-1}(x_i) is the updated prediction from the previous iteration, and L(·) is a given loss function. The updated prediction at each iteration m may be expressed as F_m(x_i) = F_{m-1}(x_i) + ν·h(y_i; x_i, j_m), with ν being a shrinkage factor that, without loss of generality, can be assumed constant and small (0 < ν < 1), but may be optimized to balance predictive ability and computation time.
Therefore, after the initialization, the algorithm proceeds as follows:
Step 1: Compute residuals as r_i = y_i − F_{m-1}(x_i), and fit the weak learner for each SNP j (j ∈ {1,..., p}) to the current residuals; the shrinkage factor ν was set to 0.01.
Step 2: Select SNP j_m = argmin_j ∑_{i=1}^{n} L(r_i, h(y_i; x_i, j)), i.e. the SNP minimizing the loss function.
Step 3: Update predictions as F_m(x_i) = F_{m-1}(x_i) + ν·h(y_i; x_i, j_m), (i ∈ {1,..., n}), where h(y_i; x_i, j_m) is the estimate for individual i obtained by regressing the current residual (r_i) at iteration m on its genotype for the SNP selected in step 2.
Step 4: Increase the iteration index m by 1, and repeat steps 1-3 until a convergence criterion is reached.
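To make the loop above concrete, a minimal sketch assuming an L2 loss and ordinary least-squares regression on a single SNP as the weak learner is shown below; the function name and the fixed number of iterations are our own choices, not those of the original implementation (in practice M would be chosen by cross-validation, as described below).

```python
import numpy as np

def boost(y, X, n_iter=1000, nu=0.01):
    """Componentwise gradient boosting with squared-error loss.
    y: (n,) phenotypes; X: (n, p) SNP genotype matrix; nu: shrinkage factor."""
    n, p = X.shape
    intercept = y.mean()                  # F0 = population mean
    F = np.full(n, intercept)
    coefs = np.zeros(p)                   # accumulated (shrunken) SNP effects
    for m in range(n_iter):
        r = y - F                         # step 1: current residuals
        best_j, best_loss, best_beta = 0, np.inf, 0.0
        for j in range(p):                # step 2: SNP minimizing the loss
            xj = X[:, j]
            denom = float(xj @ xj)
            beta = (xj @ r) / denom if denom > 0 else 0.0   # OLS weak learner
            loss = float(((r - beta * xj) ** 2).sum())
            if loss < best_loss:
                best_j, best_loss, best_beta = j, loss, beta
        F += nu * best_beta * X[:, best_j]     # step 3: shrunken update
        coefs[best_j] += nu * best_beta
    return intercept, coefs                    # predictions: intercept + X @ coefs
```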
Here, we used ordinary least squares regression as predictor h(y; X) and two different loss functions: the L2 loss function (L2B), which is a quadratic error term of the form (y_i − F_m(y_i; x_i, j_m))², and a pseudo-Huber loss function (LhB) of the form δ²(√(1 + ((y_i − F_m(y_i; x_i, j_m))/δ)²) − 1), with δ a scale parameter. The pseudo-Huber loss function is a priori more appealing for discrete traits because it is continuous, differentiable, greater than or equal to the logit loss function, and overcomes the disadvantage of the squared loss by becoming more linear as (y_i − F_m(y_i; x_i, j_m)) tends to infinity. The choice of the number of iterations, M, is a model comparison problem that may be addressed in many different ways [12, 20]. Here, a cross-validation design was used, as described in González-Recio et al. [8]. More details on gradient boosting can be found in Freund and Schapire [21], Friedman [12] and González-Recio et al. [8].
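For completeness, the two loss functions can be written as small functions that could be plugged into the sketch given after step 4; the scale parameter delta of the pseudo-Huber loss is an assumption here, since its value is not fixed in the text.

```python
import numpy as np

def l2_loss(residuals):
    """Squared-error (L2) loss over a vector of residuals y_i - F_m."""
    return float(np.sum(residuals ** 2))

def pseudo_huber_loss(residuals, delta=1.0):
    """Pseudo-Huber loss: quadratic near zero and close to linear for large
    residuals, so extreme misfits are penalized less severely than under L2."""
    return float(np.sum(delta ** 2 * (np.sqrt(1.0 + (residuals / delta) ** 2) - 1.0)))
```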
Model 4: Random Forest
Random Forest (RF) can be viewed as a machine-learning ensemble algorithm and was first proposed by Breiman [11]. It is highly non-parametric, robust to over-fitting and able to capture complex interaction structures in the data, which may alleviate the problems of analyzing genome-wide data. The algorithm constructs many decision trees on bootstrapped samples of the data set and averages their estimates to make final predictions. This strategy, called bagging [22], reduces prediction error by a factor of the number of trees.
An RF algorithm aimed at genome-wide prediction is described next, in a more extensive manner than the previous methods, since this is the first time that this algorithm is used in a genomic breeding value prediction context:
Let y (n × 1) be the data vector consisting of discrete observations of the outcome of a given trait, and X = {x_i}, where x_i is a (p × 1) vector representing the genotype of each animal (0, 1 or 2) for p SNP, from which T decision trees are built (see classification and regression tree theory, e.g. [20]). Note that main SNP effects, SNP interactions, environmental factors or combinations thereof may also be included in x_i. This ensemble can be described as an additive expansion of the form:

F(y; X) = ∑_{t=1}^{T} c_t h_t(y; X).
Each tree (h_t(y; X) for t ∈ (1, T)) is distinct from any other in the ensemble, as it is constructed from n samples from the original data set selected at random with replacement, and at each node only a small group of SNP are randomly selected to create the splitting rule. Each tree is grown to the largest extent possible, until all the terminal nodes are maximally homogeneous. Then, c_t is some shrinkage factor averaging the trees. The trees are independent identically distributed random vectors, each of them casting a unit vote for the most popular outcome of the disease at a given combination of SNP genotypes.
Each tree minimizes the average loss function of the bootstrapped data, and is constructed using a heuristic approach as follows:
1. First, bootstrapped samples from the whole data set are drawn with replacement, so that realization (y_i, x_i) may appear several times or not at all in the bootstrapped set Ψ(t), t = (1,..., T).
2. Then, draw mtry out of the p SNP markers at random, and select the SNP j, j ∈ (1,..., mtry), where

j = argmin_{j ∈ (1,..., mtry)} L(y, h_t(X)),

with L(y, h_t(X)) being a certain loss function, i.e. SNP j is the one that minimizes the given loss function at the current node and is selected in this step. The algorithm takes a fresh look at the data that have arrived at each node and evaluates all possible splits. Many loss functions can be chosen (e.g. logit function, squared loss function, misclassification rate, entropy, Gini index, ...). The behavior of a given loss function may depend on the nature of the problem. The squared loss function is popular for continuous response variables, and the logit function for categorical responses.
3. Split the node into two child nodes according to the genotype at SNP j (e.g. individuals carrying the risk allele pass to one child node, and the remaining animals pass to the other child node).
4. Repeat steps 2-3 until a minimum node size is reached (usually <5). The predicted value for genotype x_i is the majority vote for the outcome at the terminal node (for regression problems, it is the average phenotype of the individuals in the node).
Finally, a large number of trees is constructed by repeating steps 1-4 to grow a random forest. The forest may be stopped when the generalization error, averaged across the out-of-bag samples (see section below), has converged. Convergence may be checked visually, but it may also be determined using traditional methods for convergence testing of Markov chain Monte Carlo samplers.
Final predictions can be made by averaging the values predicted at each tree to obtain a probability of being susceptible. In a naïve 0 = non-susceptible/1 = susceptible scenario, individuals with probability <0.5 may be considered as non-susceptible. To predict observations of new individuals, their marker genotypes are passed down each tree, and the estimate of the corresponding terminal nodes is assigned to the new individual in each tree. The predictions of each tree in the RF algorithm are averaged for each animal to compute the final prediction.
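A compact sketch of the forest-building and prediction loop in steps 1-4 is given below. It relies on scikit-learn's decision tree as a stand-in for the node-splitting routine (our choice, not the authors' Java implementation), with mtry features tried at each node and entropy as the split criterion; out-of-bag indices are stored for later use.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def grow_forest(y, X, n_trees=500, mtry=None, min_node=5, seed=None):
    """Random Forest as in steps 1-4: bootstrap the data, grow one tree per
    sample with mtry SNP tried at each node, and keep the out-of-bag indices."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    mtry = mtry if mtry is not None else max(1, int(0.1 * p))
    forest = []
    for _ in range(n_trees):
        idx = rng.integers(0, n, size=n)                         # step 1: bootstrap
        tree = DecisionTreeClassifier(criterion="entropy",       # step 2: IG split
                                      max_features=mtry,
                                      min_samples_leaf=min_node)  # step 4: node size
        tree.fit(X[idx], y[idx])                                 # steps 2-4: grow tree
        oob = np.setdiff1d(np.arange(n), idx)                    # out-of-bag animals
        forest.append((tree, oob))
    return forest

def predict_susceptibility(forest, X_new):
    """Average the per-tree votes to obtain a probability of being susceptible."""
    votes = np.array([tree.predict(X_new) for tree, _ in forest])
    return votes.mean(axis=0)
```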
There are two main aspects that can be tuned in Random Forest. The first is the number of SNP or covariates sampled at random at each node (mtry). Generalized cross-validation strategies can be used to optimize mtry. In high-dimensional problems such as GWAS, Goldstein et al. [23] have suggested fixing mtry at >0.1p. The algorithm speeds up for smaller mtry values. Nonetheless, cross-validation can be used to determine the best value of mtry for each trait, although at the expense of increased computation time. The genetic background may influence the behavior of this tuning parameter. The second aspect is the criterion used to select the best SNP to split the node. As mentioned above, different criteria may be used and the best choice may depend on the nature of the problem. Entropy theory seems the most appealing for evaluating genomic information on discrete traits (as concluded from pilot studies, results not shown). Other loss functions, such as the L1 loss function or the misclassification rate, could easily be implemented. Without loss of generality, we show how to implement entropy theory in the node-splitting decision. The information gain (IG) for each covariate drawn at random in a given node was calculated as described in Long et al. [9]:
Suppose that, at a given node, there are n_{k1} individuals with genotype k (k ∈ {0, 1, 2}) at SNP covariate x_j showing y = 1 (e.g. presence of disease), and n_{k0} individuals with the same genotype showing y = 0 (e.g. absence of disease). The information gain for covariate x_j can be calculated as:

IG(x_j) = H(y) − ∑_{k ∈ {0,1,2}} p(x_j = k) H(y | x_j = k),

where p(x_j = k) = (n_{k1} + n_{k0}) / ∑_k (n_{k1} + n_{k0}), H(y) = −∑_{a ∈ A} p(y = a) log p(y = a) is the entropy of the probability distribution of y, and A is the set of all states that y can take ({0,1}). The SNP covariate with the highest IG at each node is used to split the node into two new child nodes, each one containing the individuals from the parent node carrying the risk or the non-risk allele, respectively.
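A small numerical illustration of this calculation for one candidate SNP at one node is given below, using base-2 entropy and the counts n_{k1} and n_{k0} defined above; the function names are ours.

```python
import numpy as np

def entropy(probs):
    """Entropy of a discrete distribution, ignoring zero-probability states."""
    probs = np.asarray(probs, dtype=float)
    probs = probs[probs > 0]
    return float(-np.sum(probs * np.log2(probs)))

def information_gain(n_k1, n_k0):
    """Information gain of one SNP at a node, with n_k1[k] cases (y = 1) and
    n_k0[k] controls (y = 0) carrying genotype k in {0, 1, 2} at that node."""
    n_k1, n_k0 = np.asarray(n_k1, float), np.asarray(n_k0, float)
    n_k = n_k1 + n_k0                         # individuals per genotype class
    n = n_k.sum()
    h_parent = entropy([n_k1.sum() / n, n_k0.sum() / n])   # H(y) at the node
    h_children = sum((n_k[k] / n) * entropy([n_k1[k] / n_k[k], n_k0[k] / n_k[k]])
                     for k in range(3) if n_k[k] > 0)      # sum_k p(x_j=k) H(y|x_j=k)
    return h_parent - h_children

# e.g. information_gain([10, 20, 5], [30, 15, 20]) returns the IG for that SNP
```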
There are two features involved in the RF algorithm that deserve further attention: the out of bag samples, and the variable importance.
Out of bag sample
The out-of-bag data (OOB) are an interesting feature of RF. Each tree is grown using a bootstrapped sample of the data, which leaves roughly one third of the observations out, because some animals appear more than once and others do not appear at all. The observations that do not appear are called the OOB samples. The OOB acts as a tuning/validation set for each tree and is almost equivalent to an n-fold cross-validation, removing the need for a set-aside test or tuning set. Tuning of parameters can be done along the RF using the OOB, and the generalization error can be calculated as the error rate in the OOB [11, 24].
Variable importance
RF may use the OOB to provide an importance measure of the predictor variables (SNP or environmental effects). The relative variable importance (VI) is estimated as follows. After each tree is constructed, the OOB samples are passed down the tree and the prediction accuracy of the disease outcome is calculated using the chosen criterion (e.g. misclassification rate, L2 loss function). Then, the genotypes for a given SNP are permuted in the OOB, and the accuracy for the permuted SNP is calculated again. The relative importance is calculated as the difference between these prediction accuracies (that of the original OOB and that of the OOB with the permuted variable). This step is repeated for each covariate (SNP), and the decrease in accuracy is averaged over all trees in the random forest. The variable importance may be expressed as a percentage of the accuracy obtained with the most important SNP, and provides insight into the level of association of each SNP with the disease. SNP with higher VI may be of interest for prediction of trait susceptibility (e.g. disease resistance, low fertility) at low marker density, for candidate gene studies or for gene expression studies.
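The permutation procedure can be sketched as below for the forest built in the previous sketch, with accuracy measured here as one minus the misclassification rate in the OOB sample; the helper name and the percentage scaling follow the description above but are otherwise our own choices.

```python
import numpy as np

def variable_importance(forest, y, X, seed=None):
    """Permutation importance: for each SNP, compare OOB accuracy before and
    after permuting that SNP's genotypes, averaged over all trees."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    drop = np.zeros(p)
    for tree, oob in forest:                  # forest stores (tree, OOB indices)
        if oob.size == 0:
            continue
        base_acc = np.mean(tree.predict(X[oob]) == y[oob])
        for j in range(p):
            X_perm = X[oob].copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])   # permute SNP j only
            drop[j] += base_acc - np.mean(tree.predict(X_perm) == y[oob])
    drop /= len(forest)                       # average decrease in accuracy
    return 100.0 * drop / drop.max()          # relative to the most important SNP
```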
We developed our own Java code implementing RF for categorical or continuous traits in a genome-wide prediction context; it is available from the authors upon request.
Data sets
Simulated and field data sets were used for the model comparisons. Description of these data is given next.
Simulated set
The QMSim software [25] was run to simulate a population of thousands of animals genotyped for roughly 10,000 markers. First, 1000 historical generations were generated in a population with an effective size decreasing from 1400 to 400 to mimic a bottleneck, in order to produce a realistic level of LD for the platform used in the simulation. At this point, 40 generations were generated to achieve a population size of 21,000 animals. Then, 20,000 females and 300 males from the last historical generation were selected as founders, followed by 15 generations of selection on estimated breeding values from best linear unbiased prediction and random mating. During these generations, replacement ratios were set at 0.83 and 0.45 for males and females, respectively. A random sample of 2500 animals from generations 11 to 14 was used as the training set, while the whole of generation 15 (1500 animals) was used as the testing set. Phenotypes were simulated from a Gaussian distribution with heritability equal to 0.25. Then, the phenotype of each animal was coded as 0 or 1 depending on whether its simulated phenotype was below or above the population average (calculated using only generations 11 to 14), respectively, which creates a discrete scenario for the phenotypes.
A genome was simulated with 30 chromosomes 100 cM long. Two scenarios with different numbers of QTL were simulated. In the first, three QTL were randomly located along each chromosome with effects sampled from a gamma distribution. This generated 90 QTL affecting the trait that still segregated in the training population. A second scenario with 33 QTL per chromosome was also simulated with a total of 1000 QTL having some effect on the trait and following a traditional infinitesimal model specification.
Then, 9990 bi-allelic markers were uniformly distributed along the genome and coded as 0, 1 or 2 according to the number of copies of the most frequent allele. The simulation was set up to obtain a linkage disequilibrium close to 0.33 (squared correlation between alleles at two consecutive loci). Ten replicates were analyzed, and means and standard deviations are presented.
Discrete field set
A field data set was used here to illustrate the behavior of the methods in classification problems applied to genome-wide prediction of disease resistance in pigs. In this study, we used one of the most important congenital diseases in the pig industry as the response variable: scrotal hernia (SH). Most affected individuals cannot feed effectively and, consequently, growth is compromised [26]. This leads to higher feed costs, slower throughput, lack of product uniformity and a consequent loss of income. In a nucleus breeding population, such individuals cannot be considered for use as breeding stock and effectively end up as culls. Heritability estimates around 0.30 and a prevalence of about 1% have been reported previously for this trait [27, 28].
Data were provided by PIC North America, a Genus Plc company. The data set contained records of scrotal hernia incidence (score 0 or 1) for 2768 animals from three different lines. Animals from two purebred lines (A and B) were born in elite genetic nuclei, where environmental conditions were better controlled and the risk of infection was lower. Animals from a crossbred line (C), derived from line A and other lines not used in this study, were born in commercial herds. Selection emphasis in line A was placed on reproduction and lean growth efficiency, whereas line B has been selected mainly for reproductive traits. Selection against scrotal hernia was equally emphasized in lines A and B. The prevalence of the disease ranged between 1 and 2% in all lines. Genotypes of all animals with phenotypic records were obtained for 6742 SNP located in genomic regions identified as candidate regions in previous studies [29, 30]; all chromosomes were covered, providing a comprehensive scan at the available marker density. After genotype editing following Ziegler et al. [31], 5302 SNP were retained, and all 923 animals from line A, 919 from line B and 700 from line C were used. Fifty per cent of the animals in the data set of each line were affected with scrotal hernia. For each individual and the main effect of the j-th SNP, we defined two covariates, x_{j1} and x_{j2}, with x_{j1} = 1 if the genotype was aa (0 otherwise), and x_{j2} = 1 if the genotype was AA (0 otherwise).
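Assuming the genotypes are stored as 0, 1 or 2 copies of allele A, the two covariates per SNP defined above can be built as in the following sketch (array and function names are ours):

```python
import numpy as np

def code_genotypes(G):
    """G: (n, p) matrix of genotypes coded 0 (aa), 1 (Aa) or 2 (AA).
    Returns an (n, 2p) matrix where, for SNP j, column 2j is x_j1 (1 if aa)
    and column 2j + 1 is x_j2 (1 if AA), as defined in the text."""
    n, p = G.shape
    out = np.zeros((n, 2 * p), dtype=int)
    out[:, 0::2] = (G == 0).astype(int)   # indicator of genotype aa
    out[:, 1::2] = (G == 2).astype(int)   # indicator of genotype AA
    return out
```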
Analyses within each line were performed leaving out the youngest 15% of individuals as the testing set. The raw phenotype was used as the dependent variable in a case-control design. Note that systematic effects were not included as covariates for simplicity, although any covariate may be included in the algorithms without loss of generality. For the random forest, the predicted susceptibility of an animal in the testing set was the percentage of trees in the forest that classified it as affected.
Predictive ability
Performance of the models was assessed by their ability to correctly predict genetic susceptibility in the testing sets. The true genetic susceptibilities of individuals in the simulated data set are known, whereas true genetic merits are unknown in the field data. Therefore, predictive ability was evaluated differently for the field data, as described below.
Simulated set
The true genetic susceptibilities were obtained from the simulations and followed a Gaussian distribution, whereas the distribution of the predicted susceptibilities depended on the model used: Gaussian for the Bayesian regressions, and an unknown distribution bounded between 0 and 1 (representing the probability of individual i being susceptible) for the machine-learning methods. Pearson correlations between true and predicted genetic susceptibility were calculated for each model and simulated scenario.
In addition, the area under the receiver operating characteristic curve (AUC) was calculated for each model in each simulation. This curve plots sensitivity (true positive rate) against the false positive rate (1 − specificity) of a binary classifier as its discrimination threshold changes [32]. The AUC can be used as a model comparison criterion and can be interpreted as the probability that a given classifier assigns a higher score to a positive example than to a negative one, when the positive and negative examples are picked at random. Individuals with a true genetic susceptibility above or below the population average were taken as positive or negative cases, respectively. Models with higher AUC values are desirable and are considered more robust.
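The pairwise interpretation of the AUC translates directly into a simple, if brute-force, calculation; the sketch below assumes vectors of predicted susceptibilities for the positive and negative examples, with ties counted as one half.

```python
import numpy as np

def auc(pred_pos, pred_neg):
    """AUC as the probability that a randomly chosen positive example receives
    a higher predicted susceptibility than a randomly chosen negative one."""
    pos = np.asarray(pred_pos, float)[:, None]
    neg = np.asarray(pred_neg, float)[None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())
```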
Discrete field set
True genetic susceptibilities of individuals in the field data are unknown. Instead, estimated breeding values (EBV) for SH susceptibility obtained from routine genetic evaluation using the BLUP method [33] were taken as the true genetic values. Routine evaluations included 6.9 million animals in the pedigree and approximately 2.3 million records of SH. The effects of line, litter, farm, and month of birth nested within farm were included in the threshold animal model used in these analyses. This may be a crude approximation, because the EBV were calculated under a model with strong assumptions of linearity, additivity, no migration and no selection, although millions of records and animals are used in these genetic evaluations and the accuracy ranged between 0.50 and 0.96 for 95% of the EBV. To minimize the impact of this approximation, animals were classified as susceptible or non-susceptible: non-susceptible animals were those below the α percentile of the EBV distribution in each line, whereas those above the (1 − α) percentile were considered susceptible (α ∈ {5, 10, 25, 50}). Lower values of α select more extreme animals, so a smaller approximation error is expected.
Predictive accuracy was calculated between these EBV (y) and the predictions (ŷ) in the testing set obtained with methods TBA, BTL, RF, L2B or LhB. It was estimated using the misclassification rate, the phi coefficient of correlation, sensitivity and specificity.
The phi coefficient is the equivalent of the Pearson product-moment correlation for binary variables. It can be calculated as

r_φ = (n_{11} n_{00} − n_{10} n_{01}) / √(n_{1·} n_{0·} n_{·1} n_{·0}),

where n_{ab} is the number of animals predicted to be in class a and observed in class b, and n_{1·}, n_{0·}, n_{·1} and n_{·0} are the corresponding marginal totals. This coefficient may not be robust enough under certain circumstances, such as when the categories are extremely uneven; in such cases, r_φ has a maximum absolute value determined by the distributions of ŷ and y.
Sensitivity and specificity for a given classifier may be computed as

sensitivity = TP / (TP + FN)

and

specificity = TN / (TN + FP).

Sensitivity measures the proportion of affected animals that are correctly identified as such (TP = true positives), whereas specificity measures the proportion of healthy animals that are identified as not being affected (TN = true negatives); FN and FP denote false negatives and false positives, respectively. Values of sensitivity and specificity closer to 1 are preferred. Specificity and sensitivity are more informative than the raw misclassification rate, because the latter does not distinguish whether misclassification occurs among truly healthy or truly affected animals.
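All of these measures can be obtained from the 2 × 2 table of predicted versus assumed true classes; a minimal sketch, with variable names of our choosing, is given below.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Misclassification rate, phi coefficient, sensitivity and specificity
    for binary (0/1) vectors of assumed true and predicted classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))   # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))   # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))   # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))   # false negatives
    misclassification = (fp + fn) / y_true.size
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    phi = (tp * tn - fp * fn) / denom if denom > 0 else float("nan")
    sensitivity = tp / (tp + fn) if (tp + fn) > 0 else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) > 0 else float("nan")
    return misclassification, phi, sensitivity, specificity
```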
Furthermore, all animals in the respective testing sets were used to calculate the AUC statistic described above for each method within each line, with animals with SH considered as positive examples and animals without SH as negative examples. As stated before, AUC measures predictive ability and may be used as a model comparison criterion, with higher values being desirable.