
Table 1 Test set MSE for GBLUP, BLASSO and ABNN evaluated on the simulated QTLMAS2010 data

From: Approximate Bayesian neural networks in genomic prediction

GBLUP: 88.42

BLASSO: 89.22

ABNN, by weight decay \(\lambda_{1}\):

| | 1.0 | 1.1 | 1.2 | 1.3 | 1.4 | 1.5 |
| --- | --- | --- | --- | --- | --- | --- |
| # units \(k\) = 1, \({\text{MSE}}_{{\mathcal{M}}}\) (SD) | 83.64 (0.272) | 83.26 (0.216) | 83.27 (0.243) | 83.50 (0.256) | *82.69* (0.218) | 83.51 (0.256) |
| # units \(k\) = 2, \({\text{MSE}}_{{\mathcal{M}}}\) (SD) | 88.40 (0.940) | 87.94 (0.733) | 89.31 (1.167) | 88.12 (0.840) | 87.42 (0.714) | 88.01 (0.727) |
  1. Two architectures were evaluated for the ABNN, where \(k\) refers to the number of units per hidden layer. \({\text{MSE}}_{{\mathcal{M}}}\) is the model-averaged MSE and SD is the standard deviation over iterations, excluding burn-in. The best model MSE is indicated in italics
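To illustrate the quantities reported in the table, the following is a minimal sketch of computing a model-averaged MSE and the SD over post-burn-in iterations. The data here is synthetic and the array names (`preds`, `y_test`) are hypothetical, not taken from the paper's implementation: `preds` stands in for one prediction vector per posterior draw of the network.

```python
import numpy as np

# Synthetic stand-in data: 2000 posterior draws of predictions for 50 test records.
rng = np.random.default_rng(0)
y_test = rng.normal(size=50)
preds = y_test + rng.normal(scale=1.0, size=(2000, 50))

burn_in = 500
kept = preds[burn_in:]  # discard burn-in draws

# Model-averaged prediction: posterior mean over the retained iterations,
# then the MSE of that averaged prediction (analogous to MSE_M in the table).
y_hat = kept.mean(axis=0)
mse_model_averaged = np.mean((y_test - y_hat) ** 2)

# Per-iteration MSEs; their standard deviation corresponds to the SD column.
per_iter_mse = np.mean((kept - y_test) ** 2, axis=1)
sd_over_iters = per_iter_mse.std(ddof=1)
```

By Jensen's inequality, the MSE of the averaged prediction is never larger than the average of the per-iteration MSEs, which is one motivation for reporting the model-averaged figure.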