Tags: neural-network, caffe, conv-neural-network, redefine, coding-efficiency

Multiple pathways for data through a layer in Caffe


I would like to construct a network in Caffe in which the incoming data is split up initially, passes separately through the same set of layers, and is finally recombined using an eltwise layer. After this, all the parts will move as a single blob.

The layer configurations along the parallel paths will be identical, except for the learned parameters.

Is there a way to define this network in Caffe without redefining, multiple times, the layers through which the different parts of the data pass? In other words, is it possible to define a layer once and have multiple pathways for input and output, something like having multiple top and bottom parameters with a mapping between them?
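
For concreteness, this is the repetition to be avoided: in raw prototxt, each parallel branch repeats the same layer definition with only the name changed (the convolution parameters here are illustrative):

```
layer {
  name: "path0"
  type: "Convolution"
  bottom: "data"
  top: "path0"
  convolution_param { num_output: 32 kernel_size: 3 }
}
layer {
  name: "path1"
  type: "Convolution"
  bottom: "data"
  top: "path1"
  convolution_param { num_output: 32 kernel_size: 3 }
}
```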


Solution

  • I don't think raw Caffe's prototxt format allows for what you are after, but you can get this using the caffe.NetSpec() Python interface. That is, you use Python code to construct the net and write out the prototxt file:

    import caffe
    from caffe import layers as L

    ns = caffe.NetSpec()
    # data layer with two tops: data and label
    ns.data, ns.label = L.Data(ntop=2, name='data',
                               data_param={'source': '/path/to', 'batch_size': 32})
    tops = []
    for i in range(3):  # three parallel, identically configured branches
        nm = 'path{}'.format(i)
        top = L.Convolution(ns.data, name=nm,
                            convolution_param={'num_output': 32, 'kernel_size': 3})
        setattr(ns, nm, top)  # register the branch output under its own name
        tops.append(top)
    # recombine the branches; an Eltwise layer could be used here instead of Concat
    ns.concat = L.Concat(*tops, name='concat', concat_param={'axis': 1})
    print(ns.to_proto())
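
    If pycaffe is not available, the same de-duplication can be done with plain string templating over the prototxt text. A minimal sketch; `BRANCH_TEMPLATE` and `make_branches` are illustrative names, and the convolution parameters are assumptions:

```python
# Generate n identically configured parallel branches of a Caffe prototxt
# from a single template (parameters are illustrative, not prescriptive).
BRANCH_TEMPLATE = """layer {{
  name: "path{i}"
  type: "Convolution"
  bottom: "data"
  top: "path{i}"
  convolution_param {{ num_output: 32 kernel_size: 3 }}
}}
"""

def make_branches(n):
    """Return the prototxt text for n branches and the names of their tops."""
    text = ''.join(BRANCH_TEMPLATE.format(i=i) for i in range(n))
    tops = ['path{}'.format(i) for i in range(n)]
    return text, tops

text, tops = make_branches(3)
print(tops)  # ['path0', 'path1', 'path2']
```

    The branch names in `tops` can then be listed as the `bottom`s of a final Concat or Eltwise layer appended to the generated text.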