==== Skipping fold data generation
==== Training master network
# activation: relu
# batchsize: [256, 128]
# bproplen: 20
# datadir: data
# dev_data: fMLLR/fMLLR_dev_data.bin
# dev_ivectors: None
# dev_offsets: TIMIT_dev_offsets.bin
# dev_targets: ./targets_kaldiDBN_dev.bin
# dropout: [0.2]
# epoch: [20]
# fold_data_pattern: data/data_{0}.bin
# fold_network_pattern: Xfolds/fold_{0}.npz
# fold_offset_pattern: data/offsets_{0}.bin
# fold_output_pattern: data_out/data_{0}.bin
# fold_target_pattern: data/targets_{0}.bin
# frequency: -1
# ft: data/fMLLR/final.feature_transform
# gpu: 0
# layers: 4
# lr: [0.01, 0.001, 0.0001, 1e-05]
# network: lstm
# no_progress: True
# optimizer: ['adam', 'momentumsgd']
# out: models_master/lstm_4_1024_fmllr_0
# plot: False
# resume:
# rpl_model: result_rpl/model
# shuffle_sequences: True
# splice: 0
# tdnn_ksize: [5]
# timedelay: 5
# train_data: fMLLR/fMLLR_train_data.bin
# train_fold: None
# train_ivectors: None
# train_offsets: TIMIT_train_offsets.bin
# train_rpl: False
# train_targets: ./targets_kaldiDBN.bin
# tri: True
# units: [1024]
# use_validation: True
=== Training stage 0: epoch = 20, batchsize = 256, optimizer = adam
epoch       main/loss   validation/main/loss  main/accuracy  validation/main/accuracy  elapsed_time
1           3.49149     2.18525               0.289411       0.43669                   85.609
2           1.95632     1.79654               0.492047       0.515874                  170.974
3           1.65348     1.64581               0.555966       0.550124                  254.731
4           1.41282     1.59436               0.598127       0.567321                  342.106
5           1.25841     1.55688               0.633758       0.572534                  428.855
6           1.1723      1.51586               0.658492       0.582785                  514.567
7           1.03772     1.55724               0.681431       0.580692                  602.493
=== Training stage 1: epoch = 20, batchsize = 128, optimizer = momentumsgd, learning rate = 0.001
epoch       main/loss   validation/main/loss  main/accuracy  validation/main/accuracy  elapsed_time
1           0.819967    1.39609               0.748077       0.615734                  147.033
2           0.757419    1.39113               0.764524       0.618737                  303.764
3           0.744088    1.37088               0.771738       0.620688                  460.487
4           0.713505    1.38315               0.778751       0.621644                  613.598
=== Training stage 2: epoch = 20, batchsize = 128, optimizer = momentumsgd, learning rate = 0.0001
epoch       main/loss   validation/main/loss  main/accuracy  validation/main/accuracy  elapsed_time
1           0.692318    1.38076               0.78333        0.622174                  150.387
2           0.684302    1.37933               0.785812       0.622904                  306.32
3           0.697334    1.37083               0.78661        0.622615                  464.746
4           0.684746    1.38471               0.786091       0.622174                  621.763
=== Training stage 3: epoch = 20, batchsize = 128, optimizer = momentumsgd, learning rate = 1e-05
epoch       main/loss   validation/main/loss  main/accuracy  validation/main/accuracy  elapsed_time
1           0.691076    1.38437               0.785742       0.622631                  152.417
2           0.684966    1.38445               0.784431       0.622527                  313.539
3           0.699144    1.36757               0.783632       0.622848                  469.114
4           0.677947    1.38436               0.788363       0.622415                  628.229
==== Skipping training folds
==== Skipping predicting training and development data
==== Skipping training RPL layer
==== Evaluating -folds +master -rpl
Writing results
Elapsed time: 416.602203 s
Loading master network
Calculating network outputs
Writing output files
PER: 15.09 %
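The staged schedule visible in the log (stage 0: adam, batchsize 256; stages 1-3: momentumsgd with learning rates 0.001, 0.0001, 1e-05 and batchsize 128) appears to be derived from the list-valued config entries `optimizer`, `lr`, and `batchsize`, with shorter lists reusing their last element for the remaining stages. A minimal sketch of that expansion, assuming this last-element-repeats convention (the function name `expand_stages` is hypothetical, not from the training script):

```python
# Hypothetical sketch: expand the per-stage hyperparameter lists the way the
# log suggests -- the number of stages follows the longest list, and shorter
# lists (e.g. two optimizers for four learning rates) repeat their last entry.
def expand_stages(optimizers, lrs, batchsizes):
    """Return one (optimizer, lr, batchsize) tuple per training stage."""
    n_stages = max(len(optimizers), len(lrs), len(batchsizes))

    def pick(xs, i):
        # Clamp the index so a short list keeps yielding its final element.
        return xs[min(i, len(xs) - 1)]

    return [(pick(optimizers, i), pick(lrs, i), pick(batchsizes, i))
            for i in range(n_stages)]

# Values taken from the config dump above.
stages = expand_stages(['adam', 'momentumsgd'],
                       [0.01, 0.001, 0.0001, 1e-05],
                       [256, 128])
for i, (opt, lr, bs) in enumerate(stages):
    print(f"stage {i}: optimizer={opt}, lr={lr}, batchsize={bs}")
```

Under this reading, stage 0 trains with adam (its learning rate is managed internally, so the log omits it), and each later stage restarts momentumsgd with a tenfold-smaller learning rate, which matches the shrinking loss improvements across stages 1-3.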