How can I get the filter/activation data from the layer objects of a model built from a configuration like this?
ComputationGraphConfiguration config =
new NeuralNetConfiguration.Builder()
.seed(seed)
.gradientNormalization(GradientNormalization.RenormalizeL2PerLayer)
.l2(1e-3)
.updater(new Adam(1e-3))
.weightInit(WeightInit.XAVIER_UNIFORM)
.graphBuilder()
.addInputs("trainFeatures")
.setInputTypes(InputType.convolutional(60, 200, 3))
.setOutputs("out1", "out2", "out3", "out4", "out5", "out6")
.addLayer(
"cnn1",
new ConvolutionLayer.Builder(new int[] {5, 5}, new int[] {1, 1}, new int[] {0, 0})
.nIn(3)
.nOut(48)
.activation(Activation.RELU)
.build(),
"trainFeatures")
.addLayer(
"maxpool1",
new SubsamplingLayer.Builder(
PoolingType.MAX, new int[] {2, 2}, new int[] {2, 2}, new int[] {0, 0})
.build(),
"cnn1")
.addLayer(
"cnn2",
new ConvolutionLayer.Builder(new int[] {5, 5}, new int[] {1, 1}, new int[] {0, 0})
.nOut(64)
.activation(Activation.RELU)
.build(),
"maxpool1")
.addLayer(
"maxpool2",
new SubsamplingLayer.Builder(
PoolingType.MAX, new int[] {2, 1}, new int[] {2, 1}, new int[] {0, 0})
.build(),
"cnn2")
.addLayer(
"cnn3",
new ConvolutionLayer.Builder(new int[] {3, 3}, new int[] {1, 1}, new int[] {0, 0})
.nOut(128)
.activation(Activation.RELU)
.build(),
"maxpool2")
.addLayer(
"maxpool3",
new SubsamplingLayer.Builder(
PoolingType.MAX, new int[] {2, 2}, new int[] {2, 2}, new int[] {0, 0})
.build(),
"cnn3")
.addLayer(
"cnn4",
new ConvolutionLayer.Builder(new int[] {4, 4}, new int[] {1, 1}, new int[] {0, 0})
.nOut(256)
.activation(Activation.RELU)
.build(),
"maxpool3")
.addLayer(
"maxpool4",
new SubsamplingLayer.Builder(
PoolingType.MAX, new int[] {2, 2}, new int[] {2, 2}, new int[] {0, 0})
.build(),
"cnn4")
.addLayer("ffn0", new DenseLayer.Builder().nOut(3072).build(), "maxpool4")
.addLayer("ffn1", new DenseLayer.Builder().nOut(3072).build(), "ffn0")
.addLayer(
"out1",
new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
//.nOut(36)
.nOut(10)
.activation(Activation.SOFTMAX)
.build(),
"ffn1")
.addLayer(
"out2",
new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
//.nOut(36)
.nOut(10)
.activation(Activation.SOFTMAX)
.build(),
"ffn1")
.addLayer(
"out3",
new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
//.nOut(36)
.nOut(10)
.activation(Activation.SOFTMAX)
.build(),
"ffn1")
.addLayer(
"out4",
new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
//.nOut(36)
.nOut(10)
.activation(Activation.SOFTMAX)
.build(),
"ffn1")
.addLayer(
"out5",
new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
//.nOut(36)
.nOut(10)
.activation(Activation.SOFTMAX)
.build(),
"ffn1").addLayer(
"out6",
new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
//.nOut(36)
.nOut(10)
.activation(Activation.SOFTMAX)
.build(),
"ffn1")
//.pretrain(false)
//.backprop(true)
.build();
I mean the INDArray (or whatever the correct type is) holding the convolutional layer activations after the model has been trained, i.e. the data used to draw activation maps.
It is not clear to me which part of the Layer API returns the 2D data needed to build those maps.
If you are using the DL4J UI module, you can get those visualizations simply by adding a ConvolutionalIterationListener as another listener on your model.
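For example, a minimal sketch of wiring it up (this assumes a DL4J version where ConvolutionalIterationListener is provided by the deeplearning4j-ui module; the package path and constructor arguments may differ slightly between releases, so check your version):

import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.ui.weights.ConvolutionalIterationListener;

ComputationGraph model = new ComputationGraph(config);
model.init();

// Render the convolution layer activations every iteration (frequency = 1)
// and publish the resulting images to the DL4J training UI.
model.setListeners(new ConvolutionalIterationListener(1));

model.fit(trainIterator); // trainIterator: whatever MultiDataSetIterator you train with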
If you don't want to use the listener, you can at least look at its source code to see how to create those visualizations on your own.
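If you want to pull the raw activation arrays out yourself, here is a rough sketch using the ComputationGraph API (feedForward and getParam are existing methods; treat the shapes and variable names as illustrative assumptions to verify against your DL4J version):

import java.util.Map;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.indexing.NDArrayIndex;

// Forward-pass a batch shaped like the configured input type ([minibatch, 3, 60, 200])
// and collect the activations of every vertex, keyed by the layer names from the config.
INDArray input = Nd4j.rand(new int[] {1, 3, 60, 200}); // dummy batch just for illustration
Map<String, INDArray> activations = model.feedForward(input, false);

// Activations of the first convolution layer: shape [minibatch, 48, height, width].
// Each of the 48 channel slices is a 2D activation map that can be rendered as an image.
INDArray cnn1Activations = activations.get("cnn1");
INDArray firstMap =
    cnn1Activations.get(
        NDArrayIndex.point(0), NDArrayIndex.point(0), NDArrayIndex.all(), NDArrayIndex.all());

// The learned filters themselves are the layer's "W" parameter,
// shape [nOut, nIn, kernelHeight, kernelWidth] (here [48, 3, 5, 5] for "cnn1").
INDArray cnn1Filters = model.getLayer("cnn1").getParam("W");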