Table 5 Details of the results of each model.

From: Improved Meta-learning Neural Network for the Prediction of the Historical Reinforced Concrete Bond–Slip Model Using Few Test Specimens

Tasks | No. (Label) | Prediction accuracy ranking (models ranked from best to worst)

Task group: Asqu Bpla Cnor Dnsti Ecoa

#43 (Train)
Successful models: MAML (MSE = 0.49, R2 = 0.93) > Fine-tune (MSE = 0.61, R2 = 0.92) > MMN (MSE = 0.71, R2 = 0.91) > Four-stage model (MSE = 0.76, R2 = 0.90) > DNN (all data set; MSE = 2.21, R2 = 0.71) > DNN (target data set; MSE = 3.31, R2 = 0.56)
Failed models: none

#65 (Train)
Successful models: Four-stage model (MSE = 0.40, R2 = 0.97) > MMN (MSE = 1.19, R2 = 0.90) > MAML (MSE = 2.48, R2 = 0.79) > DNN (target data set; MSE = 10.391, R2 = 0.10)
Failed models: DNN (all data set; MSE = 32.73, R2 = 0), Fine-tune (MSE = 59.82, R2 = 0)

#63 (Test)
Successful models: MMN (MSE = 0.10, R2 = 0.99) > Four-stage model (MSE = 1.28, R2 = 0.85) > MAML (MSE = 2.87, R2 = 0.66) > DNN (target data set; MSE = 3.79, R2 = 0.56) > DNN (all data set; MSE = 3.82, R2 = 0.55) > Fine-tune (MSE = 3.87, R2 = 0.55)
Failed models: none

#320 (Test)
Successful models: MMN (MSE = 0.02, R2 = 0.59) > MAML (MSE = 0.33, R2 = 0.10)
Failed models: Four-stage model (MSE = 15.94, R2 = 0), DNN (target data set; MSE = 37.84, R2 = 0), DNN (all data set; MSE = 62.56, R2 = 0), Fine-tune (MSE = 94.18, R2 = 0)

Task group: Asqu Bpla Cnwat Dsti Enor

#77 (Train)
Successful models: MMN (MSE = 0.40, R2 = 0.97) > MAML (MSE = 1.49, R2 = 0.88) > Four-stage model (MSE = 1.66, R2 = 0.87) > DNN (all data set; MSE = 51.75, R2 = 0)
Failed models: DNN (target data set; MSE = 81.85, R2 = 0), Fine-tune (MSE = 91.44, R2 = 0)

#78 (Train)
Successful models: MMN (MSE = 0.19, R2 = 0.98) > Four-stage model (MSE = 0.76, R2 = 0.94) > MAML (MSE = 4.42, R2 = 0.62)
Failed models: DNN (all data set; MSE = 28.17, R2 = 0), DNN (target data set; MSE = 51.84, R2 = 0), Fine-tune (MSE = 67.40, R2 = 0)

#73 (Test)
Successful models: MMN (MSE = 0.29, R2 = 0.98) > Four-stage model (MSE = 1.27, R2 = 0.93)
Failed models: DNN (target data set; MSE = 161.68, R2 = 0), Fine-tune (MSE = 223.61, R2 = 0), MAML (MSE = 234.6, R2 = 0), DNN (all data set; MSE = 252.98, R2 = 0)

#76 (Test)
Successful models: MMN (MSE = 0.47, R2 = 0.97) > Four-stage model (MSE = 1.22, R2 = 0.93)
Failed models: DNN (target data set; MSE = 160.70, R2 = 0), Fine-tune (MSE = 222.5, R2 = 0), MAML (MSE = 233.4, R2 = 0), DNN (all data set; MSE = 251.73, R2 = 0)

Task group: SRRC

SRRC#11 (Train)
Successful models: MMN (MSE = 0.14, R2 = 0.98) > DNN (all data set; MSE = 0.80, R2 = 0.88) > Fine-tune (MSE = 1.94, R2 = 0.72)
Failed models: Four-stage model (MSE = 10.29, R2 = 0), DNN (target data set; MSE = 10.77, R2 = 0), MAML (MSE = 43.45, R2 = 0)

SRRC#13 (Train)
Successful models: MMN (MSE = 0.44, R2 = 0.89) > Four-stage model (MSE = 0.60, R2 = 0.86)
Failed models: MAML (MSE = 5.05, R2 = 0), DNN (target data set; MSE = 14.36, R2 = 0), DNN (all data set; MSE = 41.61, R2 = 0), Fine-tune (MSE = 98.12, R2 = 0)

SRRC#2 (Test)
Successful models: MMN (MSE = 0.49, R2 = 0.92) > Four-stage model (MSE = 0.50, R2 = 0.91) > MAML (MSE = 1.87, R2 = 0.68)
Failed models: DNN (target data set; MSE = 21.94, R2 = 0), Fine-tune (MSE = 34.67, R2 = 0), DNN (all data set; MSE = 35.37, R2 = 0)

SRRC#4 (Test)
Successful models: MMN (MSE = 0.07, R2 = 0.99) > Four-stage model (MSE = 0.62, R2 = 0.92) > Fine-tune (MSE = 1.84, R2 = 0.76) > DNN (all data set; MSE = 3.94, R2 = 0.48) > MAML (MSE = 5.87, R2 = 0.22)
Failed models: DNN (target data set; MSE = 15.59, R2 = 0)

1. The failed models are those whose R2 values are approximately equal to 0.
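
To make the ranking metrics and the failure criterion concrete, the sketch below shows one standard way to compute the MSE and coefficient of determination (R2) reported in the table and to flag a model as failed when its R2 is approximately 0. This is an illustration under stated assumptions, not code from the paper: the helper names, the example values, and the cut-off tol = 0.05 are assumptions for demonstration only.

import numpy as np

def mse(y_true, y_pred):
    # Mean squared error between measured and predicted values.
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    # Coefficient of determination; floored at 0 to match how the table reports R2 = 0.
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    ss_res = float(np.sum((y_true - y_pred) ** 2))
    ss_tot = float(np.sum((y_true - np.mean(y_true)) ** 2))
    return max(0.0, 1.0 - ss_res / ss_tot)

def is_failed(y_true, y_pred, tol=0.05):
    # "Failed" per the table footnote: R2 approximately equal to 0 (tol is an assumed cut-off).
    return r2(y_true, y_pred) <= tol

# Hypothetical specimen curve (illustrative values only, not data from the paper).
measured = [0.0, 2.1, 4.8, 6.2, 5.9, 4.3]
predicted = [0.1, 2.0, 4.5, 6.0, 5.5, 4.0]
print(mse(measured, predicted), r2(measured, predicted), is_failed(measured, predicted))

Under this definition, ranking the successful models for a specimen simply amounts to sorting them by ascending MSE (equivalently, descending R2), which is the order the ">" chains in the table express.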