deep-learning, keras, lasagne

Highway networks in Keras and Lasagne - significant performance difference


I implemented highway networks in both Keras and Lasagne, and the Keras version consistently underperforms the Lasagne version. I am using the same dataset and hyperparameters in both. Here is the Keras version's code:

# imports assumed from the API used below (Keras 1.x: Highway, nb_epoch, show_accuracy)
from keras.models import Sequential
from keras.layers import Dense, Dropout, Highway

X_train, y_train, X_test, y_test, X_all = hacking_script.load_all_data()
data_dim = 144
layer_count = 32
dropout = 0.04
hidden_units = 32
nb_epoch = 10

model = Sequential()
model.add(Dense(hidden_units, input_dim=data_dim))
model.add(Dropout(dropout))
for index in range(layer_count):
    model.add(Highway(activation='relu'))
    model.add(Dropout(dropout))
model.add(Dropout(dropout))
model.add(Dense(2, activation='softmax'))


print('compiling...')
model.compile(loss='binary_crossentropy', optimizer='adagrad')
model.fit(X_train, y_train, batch_size=100, nb_epoch=nb_epoch,
    show_accuracy=True, validation_data=(X_test, y_test), shuffle=True, verbose=0)

predictions = model.predict_proba(X_test)

And here is the Lasagne version's code:

# imports assumed from the layer/initializer names used below
import numpy as np

from lasagne.layers import InputLayer, DenseLayer, DropoutLayer, MergeLayer
from lasagne.init import Orthogonal, Constant
from lasagne.nonlinearities import rectify, sigmoid, softmax
from lasagne.updates import adadelta
from lasagne.objectives import categorical_crossentropy
from nolearn.lasagne import NeuralNet, TrainSplit


class MultiplicativeGatingLayer(MergeLayer):
    def __init__(self, gate, input1, input2, **kwargs):
        incomings = [gate, input1, input2]
        super(MultiplicativeGatingLayer, self).__init__(incomings, **kwargs)
        assert gate.output_shape == input1.output_shape == input2.output_shape

    def get_output_shape_for(self, input_shapes):
        return input_shapes[0]

    def get_output_for(self, inputs, **kwargs):
        # highway combination: gate * H(x) + (1 - gate) * x
        return inputs[0] * inputs[1] + (1 - inputs[0]) * inputs[2]


def highway_dense(incoming, Wh=Orthogonal(), bh=Constant(0.0),
                  Wt=Orthogonal(), bt=Constant(-4.0),
                  nonlinearity=rectify, **kwargs):
    num_inputs = int(np.prod(incoming.output_shape[1:]))

    # transform path H(x) and gate T(x); the strongly negative gate bias makes
    # the block behave close to an identity mapping early in training
    l_h = DenseLayer(incoming, num_units=num_inputs, W=Wh, b=bh, nonlinearity=nonlinearity)
    l_t = DenseLayer(incoming, num_units=num_inputs, W=Wt, b=bt, nonlinearity=sigmoid)

    return MultiplicativeGatingLayer(gate=l_t, input1=l_h, input2=incoming)

# ==== Parameters ====

num_features = X_train.shape[1]
epochs = 10

hidden_layers = 32
hidden_units = 32
dropout_p = 0.04

# ==== Defining the neural network shape ====

l_in = InputLayer(shape=(None, num_features))
l_hidden1 = DenseLayer(l_in, num_units=hidden_units)
l_hidden2 = DropoutLayer(l_hidden1, p=dropout_p)
l_current = l_hidden2
for k in range(hidden_layers - 1):
    l_current = highway_dense(l_current)
    l_current = DropoutLayer(l_current, p=dropout_p)
l_dropout = DropoutLayer(l_current, p=dropout_p)
l_out = DenseLayer(l_dropout, num_units=2, nonlinearity=softmax)

# ==== Neural network definition ====

net1 = NeuralNet(layers=l_out,
                 update=adadelta, update_rho=0.95, update_learning_rate=1.0,
                 objective_loss_function=categorical_crossentropy,
                 train_split=TrainSplit(eval_size=0), verbose=0, max_epochs=1)

net1.fit(X_train, y_train)
predictions = net1.predict_proba(X_test)[:, 1]

The Keras version barely outperforms logistic regression, while the Lasagne version is the best-scoring model so far. Any ideas why?


Solution

  • Here are some suggestions (I'm not sure if they will actually close the performance gap you are observing):

    According to the Keras documentation, the Highway layer is initialized with Glorot uniform weights, while your Lasagne code uses Orthogonal weight initialization. Unless you set the initialization to Orthogonal for the Keras Highway layer elsewhere in your code, this could be one source of the performance gap.

    It also looks like you are using Adagrad for your Keras model, but Adadelta (rho=0.95, learning rate 1.0) for your Lasagne model.

    I am not 100% sure about this, but you may also want to verify that the transform-gate bias terms are initialized the same way: your Lasagne code sets bt to a constant -4.0, while the Keras Highway layer has its own (less negative) default. A sketch combining these adjustments follows.
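
Here is a minimal sketch of how those three settings could be aligned on the Keras side. It assumes the Keras 1.x API, in which Highway accepts init and transform_bias arguments and Adadelta takes lr and rho; check the signatures in your installed version before relying on it.

from keras.models import Sequential
from keras.layers import Dense, Dropout, Highway
from keras.optimizers import Adadelta

model = Sequential()
model.add(Dense(hidden_units, input_dim=data_dim, init='orthogonal'))
model.add(Dropout(dropout))
for index in range(layer_count):
    # match the Lasagne block: orthogonal weights and a transform-gate
    # bias of -4.0 (assumed to differ from the Keras default)
    model.add(Highway(activation='relu', init='orthogonal', transform_bias=-4.0))
    model.add(Dropout(dropout))
model.add(Dropout(dropout))
model.add(Dense(2, activation='softmax'))

# match the Lasagne update rule: adadelta with rho=0.95 and learning rate 1.0
model.compile(loss='binary_crossentropy', optimizer=Adadelta(lr=1.0, rho=0.95))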