Does probabilistic modelling of linkage disequilibrium evolution improve the accuracy of QTL location in animal pedigree?

BACKGROUND
Since 2001, the use of increasingly dense maps has made researchers aware that combining linkage and linkage disequilibrium enhances the feasibility of fine-mapping genes of interest. Various types of methods have therefore been derived to include concepts of population genetics in the analyses. One major drawback of many of these methods is their computational cost, which becomes very significant when many markers are considered. Recent advances in technology, such as SNP genotyping, have made it possible to deal with huge amounts of data. Thus the challenge that remains is to find accurate and efficient methods that are not too time consuming. The study reported here focuses specifically on the half-sib family animal design. Our objective was to determine whether modelling linkage disequilibrium evolution improved the mapping accuracy of a quantitative trait locus of agricultural interest in these populations. We compared two fine-mapping methods. The first was an association analysis in which linkage disequilibrium evolution was not modelled probabilistically: it was treated as a deterministic process, complete at time 0 and remaining complete during the following generations. In the second method, the evolution of population allele frequencies was modelled with a Wright-Fisher model. We simulated a wide range of scenarios adapted to animal populations and compared the two methods for each scenario.


RESULTS
Our results indicated that the improvement produced by probabilistic modelling of linkage disequilibrium evolution was not significant. Both methods led to similar results concerning the location accuracy of quantitative trait loci, which appeared to be improved mainly by using four flanking markers instead of two.


CONCLUSIONS
Therefore, in animal half-sib designs, modelling linkage disequilibrium evolution using a Wright-Fisher model does not significantly improve the accuracy of the QTL location when compared to a simpler method assuming complete and constant linkage between the QTL and the marker alleles. Finally, given the high marker density available nowadays, the simpler method should be preferred as it gives accurate results in a reasonable computing time.


Background
For several decades, detection and mapping of loci affecting quantitative traits of agricultural interest (Quantitative Trait Loci or QTL) using genetic markers have been based only on pedigree or family information, especially in plant and animal populations where the structure of these experimental designs can be easily controlled. However, the accuracy of gene locations obtained with these methods was limited, due to the small number of meioses occurring in a few generations. Recent advances in technology, such as SNP genotyping, leading to dense genetic maps, have boosted research in QTL detection and fine-mapping. Nowadays, methods for fine-mapping rely on linkage disequilibrium (LD) information rather than simply on linkage data. Linkage disequilibrium, the non-random association of alleles at two loci, has been successfully employed for mapping both Mendelian disease genes [1][2][3][4] and QTL [5][6][7]. Interested readers can also refer to the reviews [8][9][10][11]. For all chromosomal loci, including those that are physically unlinked, linkage disequilibrium can be generated or influenced by various evolutionary forces such as mutation, natural or artificial selection, genetic drift, population admixture and changes in population size (exponential growth or bottleneck, for instance). Most methods using the linkage disequilibrium concept for QTL fine-mapping are based on the genetic history of the population. Whichever approach is used to include population genetics concepts (calculation of Identity By Descent (IBD) probabilities under given assumptions about population history [6], a Wright-Fisher based allele frequency model [12], backward inferences through the coalescent tree [13]), computation is always time consuming. Furthermore, since mapping accuracy depends on the length of the haplotype used in the study [14][15][16][17], this computational time can become prohibitive when many markers are considered.
Therefore, with new technologies such as SNP genotyping and the amount of data they generate, it is worth evaluating the improvement in accuracy produced by these time-consuming methods as opposed to simpler ones. In this study, we focused on animal populations of agricultural interest. Generally, these populations have a small effective size and are composed of a few families with about a hundred descendants each.
We considered that a dense genetic map was available. Our main objective was to compare the QTL prediction accuracy of two methods in the half-sib family design. These two methods differed in the way they modelled the evolution of linkage disequilibrium between a QTL and its flanking markers, through the probability of bearing the favourable QTL allele given the marker observations. The first method, HaploMax, was a haplotype-based association analysis, very similar to the one developed by Blott et al. [7]. In this method, there was no specific modelling of linkage disequilibrium evolution: linkage disequilibrium was complete at time 0 on the mutated haplotype and remained complete during the following generations. Therefore, the probability of bearing the favourable QTL allele given the mutated haplotype was always equal to one across generations. This is why we speak of a deterministic evolution of linkage disequilibrium. The second method, HAPimLDL, was a maximum likelihood approach [12] that used probabilistic modelling of the temporal evolution of linkage disequilibrium based on a Wright-Fisher model. This probabilistic modelling made it possible for the probability of bearing the favourable QTL allele given the marker information to vary over the generations. Our hypothesis was that, in these animal populations with a small effective size that have evolved over a few generations, a rough model based on the deterministic evolution of linkage disequilibrium would be as accurate as a probabilistic model and should therefore be preferred from a computational point of view. Both methods assumed a single QTL effect for all the families. Both allow any number of flanking markers to be considered, using a sliding window across a previously identified QTL region. Both methods have been implemented in an R package freely available from the Comprehensive R Archive Network (CRAN, http://cran.r-project.org/).
In this paper, we have considered only half-sib family designs. In this framework, we used simulations to compare the performance of these two fine-mapping methods. We investigated the effect of various scenarios on the performance of the methods: allelic effect of the QTL, marker density, population size, mutation age, family structure, selection rate, mutation rate and number and size of the families. For each of these scenarios, we investigated the improvement produced by probabilistic modelling of linkage disequilibrium evolution.

Methods
The genetic model used in this paper was described by [18]. The population was considered as a set of independent sire families, all dams being unrelated to each other and to the sires. We considered a bi-allelic QTL with additive effect only and a single QTL effect for all the families. We assumed the same phase across families. We will only briefly describe the HaploMax method, as it is a standard method. The HAPimLDL method, which has been developed for this work, is presented in detail.

The HaploMax method
HaploMax is a marker-haplotype regression method adapted to the following two hypotheses: the QTL is bi-allelic, and the QTL alleles and marker alleles are in complete linkage. In each marker interval, and for each flanking marker haplotype, we performed a haplotype-based association analysis with a sire effect and a haplotype dose effect (0 for absence of the haplotype, 1 for one copy of the haplotype, 2 for homozygosity). We tested each haplotype in turn against all the others [7], and the HaploMax value was given by the haplotype maximising the F-test values.
The HaploMax method is therefore well suited to capturing the effect of a causal bi-allelic mutation. In HaploMax, there was no probabilistic modelling of linkage disequilibrium evolution: linkage disequilibrium was complete at time 0 and remained complete during the following generations.
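As a rough illustration, the scan described above can be sketched as a nested-model F-test at each candidate haplotype. This is a minimal sketch only: the function names and toy data layout are ours, not the HAPim implementation.

```python
import numpy as np

def _rss(y, X):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def haplomax_scan(y, sire, hap_dose):
    """For each candidate haplotype, test its dose effect on top of a
    sire effect with an F-test; return the best haplotype and the
    maximum F value (the test statistic for this marker interval).
    y: (n,) phenotypes; sire: (n,) sire labels;
    hap_dose: (n, H) dose of each haplotype (0, 1 or 2 copies)."""
    n = len(y)
    sires = np.unique(sire)
    # Reduced model: sire effects only (one indicator column per sire)
    X0 = (sire[:, None] == sires[None, :]).astype(float)
    rss0 = _rss(y, X0)
    best_f, best_h = -np.inf, -1
    for h in range(hap_dose.shape[1]):
        X1 = np.column_stack([X0, hap_dose[:, h]])  # add the dose column
        rss1 = _rss(y, X1)
        df2 = n - X1.shape[1]
        f = (rss0 - rss1) / (rss1 / df2)  # one extra parameter in numerator
        if f > best_f:
            best_f, best_h = f, h
    return best_h, best_f
```

In the full method this scan is repeated for every marker interval along the sliding window, and significance thresholds are obtained by permutation.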

The HAPimLDL method for half-sib family designs
This likelihood-based method is detailed in the following sub-sections. It combines family information with probabilistic modelling of linkage disequilibrium evolution (LDL stands for Linkage and Linkage Disequilibrium). For clarity purposes, some of the longer calculations are presented in the Appendix.

Notation
A bi-allelic QTL is assumed with alleles Q and q.
Let i (i = 1, ..., I) identify a family. Let ij (j = 1, ..., n_i) be the index of a mate of sire i, and ijk (k = 1, ..., n_ij) denote the progeny of dam ij. When considering strictly half-sib families, only one progeny is measured per dam (n_ij = 1) (in the case of bovine populations, for instance), and the k index can be omitted.
Assuming that the available information consists of the phenotypic value of each progeny and a set of haplotypes of observed markers aligned on a genetic map, we can establish the following notation:
• h_ij^s and h_ij^d, the marker haplotypes of progeny ij transmitted respectively by its father and its mother,
• y_ij, the phenotype of progeny ij.
If x denotes a putative bi-allelic QTL locus on the genome:
• Z_i(x) = (Q_i^1(x), Q_i^2(x)), the QTL diplotype of sire i at locus x, where Q_i^1(x) and Q_i^2(x) denote the QTL allele at locus x carried respectively by the two homologous chromosomes. Note that there are three genotypes but four diplotypes, since there are two heterozygous diplotypes (Qq and qQ),
• (h_i, Z_i(x)), the marker and locus x haplotypes of sire i, i.e. the extended marker haplotype of sire i including the alleles at the QTL locus x,
• Q_ij^d(x), the allele at the QTL locus x transmitted by the dam ij to her single progeny,
• Q_ij^s(x), the allele at the QTL locus x transmitted by the sire i to his progeny ij.

LDL likelihood
The population was considered as a set of independent sire families, all dams being unrelated both to each other and to the sires. The likelihood is constructed as follows: a Gaussian mixture models the phenotypes as a function of QTL states. These are unknown, but their probability depends on the surrounding markers through LD, which is modelled by the Wright-Fisher model. Further, if the chromosome has been received from a sire, the probability of descent of each paternal chromosome is considered.
In this likelihood:
• z = 1, 2, 3 and 4 stands for QQ, qq, Qq and qQ respectively,
• a = 1 and 2 for Q and q,
• μ_i is the phenotype mean within the sire family i, and σ² the residual variance,
• φ(·; μ, σ²) is the Gaussian probability density function with mean μ and variance σ²,
• for a = 1 and 2, the α_Qa and α_qa parameters, subject to the constraint that their sum equals 0, are the effects of the diplotypes at locus x. The constraint α_qQ = α_Qq = 0 leads to an additive model.
With this notation, the likelihood at a putative QTL position x is a Gaussian mixture of the form

L(x) = Π_i Σ_z ℙ(Z_i(x) = z | h_i) Π_j Σ_a ℙ(Q_ij^d(x) = a | h_ij^d) Σ_{s=1,2} ℙ(progeny ij received sire chromosome s | markers) φ(y_ij; μ_i + α_{Q_i^s(x) a}, σ²).

In this likelihood, the probabilities due to linkage, which are contained in the transmission probabilities of the sire chromosomes, were computed using QTLMAP subroutines that implement the approximate method described in [18]. The expression considers QTL effects, probabilities of transmission of QTL alleles from sires to offspring, and probabilities of QTL states in the founders. The linkage disequilibrium signal comes from the quantities ℙ(Q_i^k(x) = a | h_i^k) and ℙ(Q_ij^d(x) = a | h_ij^d), which are the probabilities of QTL alleles in the parents conditional on the surrounding marker haplotypes. QTL diplotype probabilities given marker information, contained in ℙ(Z_i(x) = z | h_i), were computed assuming Hardy-Weinberg equilibrium. Thus, QTL allelic probabilities given marker information for both sire and dam were computed under the linkage disequilibrium model described in the next section.
The probability terms ℙ(Q_ij^s(x) | Z_i(x), transmitted chromosome), involving the sire QTL allele given the sire QTL diplotype and the transmitted chromosome, are either 0 or 1.
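The structure of this likelihood can be illustrated with a stripped-down numerical sketch for a single sire family, with all conditional probabilities supplied as plain numbers. This is our own sketch, not the HAPim code: it collapses the transmission and LD machinery into three probability vectors.

```python
import numpy as np

def family_likelihood(y, p_diplo, t_first, p_dam_q, mu, alpha, sigma2):
    """Likelihood of one half-sib family as a Gaussian mixture over
    unknown QTL states.
    y: (n,) progeny phenotypes
    p_diplo: (4,) P(sire diplotype = QQ, qq, Qq, qQ | sire markers)
    t_first: (n,) P(progeny j received the sire's first chromosome)
    p_dam_q: (n,) P(dam of progeny j transmitted allele Q | her markers)
    Additive effects: QQ -> +alpha, Qq/qQ -> 0, qq -> -alpha."""
    diplos = [("Q", "Q"), ("q", "q"), ("Q", "q"), ("q", "Q")]
    effect = {("Q", "Q"): alpha, ("q", "q"): -alpha,
              ("Q", "q"): 0.0, ("q", "Q"): 0.0}
    lik = 0.0
    for pz, (c1, c2) in zip(p_diplo, diplos):    # sum over sire diplotypes
        prod = 1.0
        for j, yj in enumerate(y):               # product over progeny
            lj = 0.0
            # sum over which sire chromosome was transmitted ...
            for ts, s_allele in ((t_first[j], c1), (1 - t_first[j], c2)):
                # ... and over the allele transmitted by the dam
                for pd, d_allele in ((p_dam_q[j], "Q"),
                                     (1 - p_dam_q[j], "q")):
                    mean = mu + effect[(s_allele, d_allele)]
                    lj += ts * pd * _gauss(yj, mean, sigma2)
            prod *= lj
        lik += pz * prod
    return lik

def _gauss(y, mean, s2):
    return np.exp(-(y - mean) ** 2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
```

Note how the sire allele given the diplotype and the transmitted chromosome is deterministic (the 0 or 1 terms above), whereas the dam allele enters only through ℙ(Q | marker haplotype), which is where the linkage disequilibrium model of the next section plugs in.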

Likelihood approximation and linkage disequilibrium model
QTL allelic probabilities given marker information for the parents are the terms that are modelled through the evolution of linkage disequilibrium across generations. These terms depend on the frequencies of marker haplotypes and on the frequencies of extended haplotypes composed of a marker haplotype and a QTL allele. Under traditional models of population genetics, these haplotype frequencies are stochastic. Thus, the likelihood function cannot be easily calculated and must be approximated. Following [12], we used the likelihood given the expected value of the haplotype frequencies to approximate the overall expected value of the likelihood, and we limited marker haplotypes to a small number of markers surrounding the putative QTL locus (in our study, we considered either two or four flanking markers). This led to the following approximation for a = 1, 2 and k = 1, 2 (the two homologous chromosomes):

ℙ(Q_i^k(x) = a | h_i^k) ≈ E[Π_{h_i^k, a}(t)] / E[Π_{h_i^k}(t)],

where Π_{h,a}(t) denotes the frequency at generation t of the extended haplotype composed of marker haplotype h and QTL allele a, and Π_h(t) the frequency of marker haplotype h. These haplotype frequencies at time t can be expressed as functions of marker frequencies and of digenic, trigenic, ... disequilibria at time t [19]. Moreover, under the hypotheses of a Wright-Fisher model, no interference and a large population size, the expected values of marker frequencies and disequilibria at time t can be derived from the same quantities at time 0 and from the recombination rates between the QTL locus and the markers [19,20]. Therefore, we generalised the formula obtained by [12] in order to take into account any number of surrounding markers. These calculations are detailed in the Appendix.
Finally, we had to model the haplotype frequencies at time 0. Following [12], we assumed an initial creation of linkage disequilibrium due to mutation or migration. Generally speaking, assuming that the Q allele at time 0 appeared on a haplotype denoted h*, the time-zero model was

Π_{h,Q}(0) = Π_Q(0) [ b δ_{h=h*} + (1 − b) Π_h ],

where the parameter b represents the proportion of new copies of allele Q introduced at time 0, δ_{x=y} is the Kronecker delta operator (equal to 1 if x = y and 0 otherwise), Π_{h,Q}(0) and Π_Q(0) are the frequencies of the haplotype (h, Q) and of the allele Q at time 0, and Π_h is the frequency of marker haplotype h.
In our specific study, we simplified the time 0 model assuming that there was no pre-existing copy of the Q allele and we set b equal to 1.
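For a single flanking marker haplotype h and a recombination rate c between the QTL and the marker, the expected digenic disequilibrium decays as (1 − c)^t, so the conditional probability of carrying Q given h has a simple closed form. The sketch below uses this single-marker simplification with b = 1 (the multi-marker generalisation in the Appendix adds trigenic and higher-order terms); the function and argument names are ours.

```python
def p_q_given_h(t, c, pi_h, pi_q, carries_mutant_background):
    """Expected P(Q | marker haplotype h) after t generations under a
    Wright-Fisher model, single flanking marker, time-0 model with
    b = 1: every copy of Q initially sits on the haplotype h*.
    t: generations since the mutation; c: recombination rate;
    pi_h: frequency of h; pi_q: frequency of Q (pi_q <= pi_h if h = h*)."""
    # Digenic disequilibrium at time 0: D(0) = P(h, Q)(0) - pi_h * pi_q
    if carries_mutant_background:        # h = h*: P(h*, Q)(0) = pi_q
        d0 = pi_q * (1.0 - pi_h)
    else:                                # h != h*: P(h, Q)(0) = 0
        d0 = -pi_h * pi_q
    # E[D(t)] = (1 - c)^t * D(0); allele frequencies are constant in
    # expectation under pure drift
    d_t = (1.0 - c) ** t * d0
    return (pi_h * pi_q + d_t) / pi_h
```

At t = 0 a carrier of h* bears Q with probability pi_q / pi_h and a non-carrier with probability 0; as t grows both converge to pi_q, i.e. the linkage disequilibrium signal fades at rate (1 − c) per generation.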

HAPim R-package
From a computational point of view, the HAPimLDL likelihood calculation was divided into two parts. In the first part, devoted to the calculation of transmission probabilities and the reconstruction of sire and progeny chromosomes, we used a modified version of the software QTLMAP written in Fortran 95 [18]. The second part aimed at calculating and maximizing the likelihood in the half-sib design. It was developed using the R free software environment for statistical computing [21]. An R package named "HAPim" was implemented and is freely available from the Comprehensive R Archive Network (CRAN, http://cran.r-project.org/).

Simulations
Simulations were carried out in order to compare these methods in the specific design of half-sib families. For each simulation, 500 replicates were performed.
The populations were simulated using the LDSO (Linkage Disequilibrium with Several Options) program, developed in Fortran 90 by [22] and based on the gene-dropping method [23]. There was no constraint on the QTL frequency, but we discarded simulations for which there was no heterozygous sire. Evolution of the founder population was modelled through two parameters: the effective size (i.e. the number of founders) and the time of evolution. We studied two extreme scenarios for the founder population. In the first, at time 0, we assumed complete linkage disequilibrium between the QTL and the markers (by introducing a mutation in a single haplotype) and linkage equilibrium between markers. In the second scenario, the QTL and the markers were at equilibrium. Evolution time was 50 generations in almost all simulations, except a 200-generation evolution time in one case of the "disequilibrium scenario" and a 100-generation evolution time in one case of the "equilibrium scenario". We considered three effective population size values: 100, 200 and 400. In most simulations we did not assume selection, mutation, or bottleneck. However, to investigate the robustness of the methods, three simulations were also performed to study the effect of selection and one to study the influence of mutation.
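The gene-dropping principle behind LDSO can be illustrated with a deliberately minimal two-locus forward simulator. This toy version (all names ours) ignores sexes, selection, mutation and the multi-marker map; it only shows how a mutation introduced on a single haplotype drifts and recombines forward in time.

```python
import numpy as np

def gene_drop(n_e, n_gen, c, rng):
    """Minimal gene-dropping sketch: n_e diploid individuals, two
    linked bi-allelic loci (marker, QTL), recombination rate c.
    Start in complete LD: the Q mutation is introduced on a single
    marker haplotype, markers at equilibrium."""
    # Haplotypes coded as (marker_allele, qtl_allele), shape (2*n_e, 2)
    hap = np.zeros((2 * n_e, 2), dtype=int)
    hap[:, 0] = rng.integers(0, 2, 2 * n_e)  # marker alleles at random
    hap[0, 1] = 1                            # one mutant haplotype carries Q
    for _ in range(n_gen):
        new = np.empty_like(hap)
        for k in range(2 * n_e):
            parent = rng.integers(0, n_e)        # pick a diploid parent
            h1, h2 = hap[2 * parent], hap[2 * parent + 1]
            if rng.random() < 0.5:               # random phase
                h1, h2 = h2, h1
            gamete = h1.copy()
            if rng.random() < c:                 # recombine between the loci
                gamete[1] = h2[1]
            new[k] = gamete
        hap = new
    return hap
```

Repeating such runs and discarding those where Q is lost (or where no sire is heterozygous, as in the study) yields replicate founder populations on which the half-sib families can then be grafted.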
We simulated a set of half-sib families. Two parameters, the number of sires (10, 20, 25, 50 or 100) and the number of progeny per sire (10, 20, 25, 50 or 100), were varied to address the problem of how to choose between many small families and a few large families.
All simulations were compared both to each other and to the reference simulation. In the reference simulation, we considered a 10 cM chromosomal area with 40 evenly spaced bi-allelic markers and a population size of 100 evolving over 50 generations. We simulated a set of 20 sires, each having 100 progeny. A single QTL with a substitution effect of 0.25 was simulated at a position of 3.35 cM. We then varied the different parameters with respect to this reference simulation in order to assess their respective influence. We considered three different values of map density (0.125 cM, 0.25 cM and 0.5 cM). The phenotypic values were simulated with a fixed dose-response model at the QTL position (i.e. regression model as a function of the number of Q alleles) and a residual variance of 1.
In the first set of simulations, presented in Tables 1 and 2, we analyzed only three-locus haplotypes (composed of the QTL and its two flanking markers). In Table 3, we also conducted simulations with five-locus haplotypes (the QTL and two flanking markers on each side of the QTL).

Results
In the following tables, we present square roots of the mean square error (MSE) of the QTL position. The MSE value is given by the following formula:

MSE = (1/500) Σ_{r=1}^{500} (ŝ_r − s)²,

where ŝ_r is the estimated QTL position in replicate r, s is the true QTL position and 500 is the total number of replicates. We also computed the mean absolute error criterion and found a clear linear dependency between these two criteria (data not shown).
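The criterion reported in the tables is simply the root mean square error of the replicate estimates; as a reference implementation (our own helper, in cM):

```python
import numpy as np

def rmse_position(estimates, true_pos):
    """Square root of the mean square error of the estimated QTL
    positions over replicates (same units as the input, e.g. cM)."""
    est = np.asarray(estimates, dtype=float)
    return float(np.sqrt(np.mean((est - true_pos) ** 2)))
```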
We compared the two methods, HaploMax and HAPimLDL, with a t-test on the MSE values and found no significant difference between them for any of the scenarios studied.

Complete linkage disequilibrium between the QTL and the markers

In this set of simulations, we simulated the scenario in which there was complete linkage disequilibrium between the QTL and the markers, and linkage equilibrium between the markers, in the founder population.

Influence of genetic and population parameters
Here we describe the sensitivity of the two methods to the following parameters: QTL allelic effect value, marker density, effective size of the population, number of generations, mutation and selection. Although our main goal was location accuracy, we also computed some power values for both methods, the 5% thresholds being obtained by permutation. For the reference simulation, the power was equal to 63% for HaploMax and to 56% for HAPimLDL. The highest power values, around 90% for both methods, were obtained for a QTL value equal to 0.5. The lowest power values, around 15%, were obtained when N_e was equal to 400 and N_g to 50. Table 1 summarises the simulation results. It is not surprising that the bigger the QTL allelic effect, the more accurate the method. The marker density had only a very slight influence on the MSE value: HaploMax presented an erratic trend with marker density, whereas HAPimLDL showed a clear decrease in the MSE values with increasing marker density.
With regard to the design parameters, we noticed that the precision of the QTL position decreased as the sample size (i.e. number of sires × number of progeny per sire) decreased, regardless of the family structure. For a fixed number of generations, the MSE values increased as the effective size of the population increased. However, when both effective size and number of generations varied, provided that their ratio remained constant, MSE values were not modified, which is completely consistent with traditional theory in population genetics.
When we allowed all SNP markers to mutate at a mutation rate of 10⁻⁶, we found a loss of accuracy of about 20-25% for HaploMax and about 50% for HAPimLDL (data not shown). In this case, the power was equal to 59% for HaploMax and to 49% for HAPimLDL.

Influence of phenotypic selection
The influence of phenotypic selection is presented in Table 2. We considered two values for the additive QTL effect and two selection strengths (light and strong).
The QTL effect had no influence on the accuracy of location. However, selection led to a loss of accuracy of about 50% with light selection and 60% with strong selection. On the one hand, selection causes a hitch-hiking effect which amplifies the signal from the region where the QTL is located but, on the other hand, it widens this region, leading to a loss of accuracy (higher MSE values). For example, a possible outcome of selection is that only a few different haplotypes carry the Q allele. This loss of accuracy had already been pointed out by [24], who concluded that selection increased MSE values, leading to large confidence intervals of the QTL position and therefore to additional difficulties in locating the mutation. Moreover, the power values collapsed in this situation (around 4% for both methods with strong selection and around 13% with light selection).

Influence of haplotype length and population structure
In Table 3, we studied the influence of haplotype length on the accuracy of the QTL location. It is clear that there is a significant gain when using four markers instead of two. All the previous conclusions remained valid when using four markers. If four markers were used in the model, increasing the sample size seemed to be the only way to decrease the MSE.
The influence of the population structure itself is also investigated in Table 3. Since we noted that haplotypes containing four markers led to the best results, we have focused the discussion only on this type of haplotype. Through this set of simulations, we have tried to resolve the issue of whether it is better to study many small families or a few large families. The results are in favour of having many founders, which increases the power value. However, this is only clear when both the sample size and the number of markers are large.

Table 1: Square roots of MSE values (in cM) for both methods, HaploMax and HAPimLDL, under various scenarios. We assumed complete linkage disequilibrium between the QTL and the markers and linkage equilibrium between the markers in the founder population; the haplotype is composed of the QTL and two flanking markers; the true QTL position is 3.35 cM on a 10 cM-long chromosomal region; unspecified parameters are equal to the corresponding parameters in the reference simulation. In this table, QTL denotes the QTL allelic effect value, N_e is the effective size of the population, N_g is the number of generations, N_s is the number of sires, N_p is the number of progeny per sire and dens is the marker density. Each scenario was simulated 500 times.

Table 2: Square roots of MSE values (in cM) for both methods in the presence of phenotypic selection. We assumed complete linkage disequilibrium between the QTL and the markers and linkage equilibrium between the markers in the founder population; the haplotype is composed of the QTL and two flanking markers; the true QTL position is 3.35 cM on a 10-cM long chromosomal region; unspecified parameters are equal to the corresponding parameters in the reference simulation. In this table, QTL denotes the QTL allelic effect value, N_e is the effective size of the population, N_g is the number of generations, N_s is the number of sires, N_p is the number of progeny per sire, dens is the marker density and sel denotes the selection parameter. Each scenario was simulated 500 times.

The equilibrium case
In this section, we simulated a scenario where the QTL and the markers were at equilibrium in the founder population. We only varied the effective size (50 or 100) and the number of generations (50 or 100) with respect to the reference simulation. Results are presented in Table 4. We noted that MSE values in Table 4 are lower than the corresponding MSE values in Table 1. This was not surprising since, in the situation where the QTL and the markers were at equilibrium, there were more sires carrying the favourable QTL allele than in the "complete disequilibrium" case studied in Table 1. Moreover, the HaploMax method again gave MSE values slightly below those given by the HAPimLDL method. Finally, we noticed that MSE increased when the effective size decreased or the number of generations increased. This is also completely coherent since, in this situation, allelic frequencies have moved towards fixation.

Discussion
Within a dense genetic map framework, we have compared two QTL mapping methods aiming at locating one QTL on a chromosome in half-sib family designs.
On the one hand, in the HaploMax method there was no specific modelling of linkage disequilibrium evolution, and the probability of bearing the favourable QTL allele given the mutated haplotype was always equal to one across generations. On the other hand, in the HAPimLDL method we used probabilistic modelling of the temporal evolution of linkage disequilibrium, which allowed the conditional probability of bearing the favourable QTL allele given the marker observations to evolve over time. Our simulated scenarios mimicked animal populations shortly after creation of the breed (i.e. small populations with a short evolution time). We compared our results with those of [25] and reached conclusions very similar to theirs: a very slight influence of marker density on the mapping accuracy, and mapping accuracy increasing with sample size, QTL effect, number of generations since the mutation occurred, and effective size. However, although we obtained results of the same order of magnitude, slight differences in MSE values were observed, mainly for the following three reasons: we did not study exactly the same type of population; [25] assumed that haplotypes were known, whereas we reconstructed them; and, finally, we did not consider the same value for the number of generations parameter.
It has been established that the evolution time parameter has a great influence on the accuracy of the location [[25], table five]. Despite these differences, and despite the fact that one of our methods took into account the transmission from sires to sibs, both studies showed the same tendencies with regard to mapping accuracy. We found a gain in mapping accuracy when using a 4-SNP haplotype instead of a 2-SNP one. However, this result holds for a fixed marker density (the one we used in our simulation study); with a very high marker density, a 1-SNP haplotype will probably lead to the best results. Finally, we demonstrated that neither method was robust to selection. The simulations showed that both methods led to similar results concerning QTL position accuracy. The simplest method, HaploMax, performed as well as HAPimLDL. This is in agreement with recent findings.

Table 3: Square roots of MSE values (in cM) for both methods for two haplotype lengths: the QTL and its two flanking markers, and the QTL and its four flanking markers. We assumed complete linkage disequilibrium between the QTL and the markers and linkage equilibrium between the markers in the founder population; the true QTL position is 3.35 cM on a 10-cM long chromosomal region; the QTL allelic effect value is equal to 1, the effective size of the population is 100, the number of generations is 50 and the marker density is 0.5 cM. N_s is the number of sires and N_p is the number of progeny per sire. Each scenario was simulated 500 times.
In [26], it was also concluded that a three-marker-haplotype-based association analysis (deterministic complete LD modelling) could be as efficient as the IBD method of [6]. The conclusion of our study is that probabilistic modelling of linkage disequilibrium evolution using a Wright-Fisher model did not improve the accuracy of the QTL location when compared to a simple method using deterministic modelling that assumed complete and constant linkage between the QTL and the marker alleles. The deterministic model, although rough, was efficient enough in our simulated scenarios, which mimicked animal populations shortly after the creation of the breed (i.e. small populations with a short evolution time). The conclusion might then be to use HaploMax for animal populations with a small effective size that have evolved over a few generations. In fact, the forward simulation with a causal mutation used in our study reflected exactly the theoretical evolution model used to compute the LD dynamics in the likelihood function, thus favouring the HAPimLDL method over the HaploMax method. Therefore, we can conclude that the HAPimLDL method did not perform significantly better than simpler methods within our evolution scenarios.
When dealing with populations with large effective sizes or with very old mutations, combining linkage with probabilistic modelling of linkage disequilibrium evolution should produce the greatest accuracy. Actually, in these populations, a huge number of recombination events would occur, leading to a small extent of the linkage disequilibrium signal. Therefore, deterministic complete linkage disequilibrium modelling would be less appropriate in this case.