
Table 2 Test set MSE for GBLUP, BLASSO and ABNN evaluated on the real Cleveland pig dataset

From: Approximate Bayesian neural networks in genomic prediction

GBLUP: 0.8759

BLASSO: 0.8741

ABNN, by weight decay \(\lambda_{1}\) (columns give \(\lambda_{1} = 2^{1}, \ldots, 2^{5}\)):

| # units | \(2^{1}\) | \(2^{2}\) | \(2^{3}\) | \(2^{4}\) | \(2^{5}\) |
|---|---|---|---|---|---|
| \(k = 1\): \({\text{MSE}}_{{\mathcal{M}}}\) (SD) | 0.8688 (0.000796) | 0.8675 (0.000790) | *0.8653* (0.000722) | 0.8676 (0.000741) | 0.8687 (0.000728) |
| \(k = 2\): \({\text{MSE}}_{{\mathcal{M}}}\) (SD) | 0.9230 (0.00432) | 0.9233 (0.00440) | 0.9221 (0.00399) | 0.9233 (0.00430) | 0.9235 (0.00439) |

  1. Two architectures were evaluated for the ABNN, where \(k\) refers to the number of units per hidden layer. \({\text{MSE}}_{{\mathcal{M}}}\) is the model-averaged MSE and SD is the standard deviation over iterations, excluding burn-in. The best model MSE is indicated in italic characters
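For readers unfamiliar with these quantities, the following is a minimal sketch of how a model-averaged test MSE and its iteration-wise SD can be computed from posterior predictive draws. All names and the synthetic data are hypothetical, not taken from the paper; the idea is only that \({\text{MSE}}_{{\mathcal{M}}}\) uses the posterior-mean prediction after discarding burn-in iterations, while the SD summarizes the spread of per-iteration MSEs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior predictions: one row per MCMC iteration,
# one column per test individual (purely synthetic values).
n_iter, n_test, burn_in = 1000, 50, 200
y_test = rng.normal(size=n_test)
preds = y_test + rng.normal(scale=0.9, size=(n_iter, n_test))

# Keep post-burn-in iterations only.
kept = preds[burn_in:]

# Model-averaged prediction: posterior mean over kept iterations,
# then a single MSE against the observed test phenotypes.
y_hat = kept.mean(axis=0)
mse_model_averaged = np.mean((y_test - y_hat) ** 2)

# Per-iteration MSEs; their standard deviation corresponds to the
# "(SD)" column reported alongside the model-averaged MSE.
per_iter_mse = np.mean((kept - y_test) ** 2, axis=1)
sd = per_iter_mse.std(ddof=1)
```

Averaging predictions before computing the error always yields an MSE no larger than the mean of the per-iteration MSEs, which is why the model-averaged figure is the one reported.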