How to Create Convolutional Neural Networks Using Java and DL4J


Using Deeplearning4J, you can create convolutional neural networks, also referred to as CNNs or ConvNets, in just a few lines of code. If you don’t know what a CNN is, for now, just think of it as a feed-forward neural network that is optimized for tasks such as image classification and natural language processing.

In this short tutorial, I’m going to show you how to create a simple CNN and train it using the CIFAR-10 dataset, a very popular dataset containing 60,000 labeled 32×32 color images spread evenly across 10 classes.

Prerequisites

To follow along, you’ll need:

  • The latest version of IntelliJ IDEA
  • Java 8

1. Project Setup

Fire up IntelliJ IDEA and create a new Maven project using the quickstart archetype. Once the project has been generated, open the pom.xml file and add the following inside the <dependencies> tag:

<dependency>
  <groupId>org.deeplearning4j</groupId>
  <artifactId>deeplearning4j-core</artifactId>
  <version>0.7.2</version>
</dependency>
<dependency>
  <groupId>org.nd4j</groupId>
  <artifactId>nd4j-native-platform</artifactId>
  <version>0.7.2</version>
</dependency>

As you can see, we’ll be using DL4J 0.7.2 in this tutorial.

At this point, the project setup is complete.

2. Create an Iterator

You don’t have to manually download the CIFAR-10 dataset for this tutorial. Instead, you can use a class called CifarDataSetIterator, which automatically downloads the dataset using the DataVec library. Its constructor takes the batch size, the number of examples to use, and a boolean indicating whether to serve the training data or the test data. So, inside the main() method of the App.java file, add the following code:

CifarDataSetIterator dataSetIterator = 
                new CifarDataSetIterator(2, 5000, true);

Note that, for now, we are using only 5000 images from the dataset. Feel free to change that number.
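As an aside, with a batch size of 2, one pass over those 5,000 images means 2,500 mini-batches, and therefore 2,500 gradient updates. Here's a quick plain-Java sanity check of that arithmetic (no DL4J required; BatchMath is just an illustrative name):

```java
public class BatchMath {
    // Number of mini-batches needed to cover numExamples, rounding up
    static int batchesPerEpoch(int batchSize, int numExamples) {
        return (numExamples + batchSize - 1) / batchSize;
    }

    public static void main(String[] args) {
        // The two numbers passed to CifarDataSetIterator above
        System.out.println(batchesPerEpoch(2, 5000)); // prints 2500
    }
}
```

Raising the batch size reduces the number of updates per pass, so you may need more passes (or a higher learning rate) to reach the same accuracy.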

If you want to list all the labels present in the dataset, you can use the following code:

System.out.println(dataSetIterator.getLabels());

At this point, if you compile and run your project, you should see the following output:

[airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck, ]

3. Create the Neural Network

It’s now time to start creating the individual layers of our neural network. We’re going to have the following layers:

  • Three convolution layers
  • Three subsampling layers
  • One output layer

We’re going to order the layers such that each convolution layer is immediately followed by a subsampling layer. The output layer will, of course, be the last layer and it will have 10 neurons, to represent the 10 labels of our dataset.

Accordingly, add the following code to your file:

ConvolutionLayer layer0 = new ConvolutionLayer.Builder(5,5)
        .nIn(3)
        .nOut(16)
        .stride(1,1)
        .padding(2,2)
        .weightInit(WeightInit.XAVIER)
        .name("First convolution layer")
        .activation(Activation.RELU)
        .build();

SubsamplingLayer layer1 = new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
        .kernelSize(2,2)
        .stride(2,2)
        .name("First subsampling layer")
        .build();

ConvolutionLayer layer2 = new ConvolutionLayer.Builder(5,5)
        .nOut(20)
        .stride(1,1)
        .padding(2,2)
        .weightInit(WeightInit.XAVIER)
        .name("Second convolution layer")
        .activation(Activation.RELU)
        .build();

SubsamplingLayer layer3 = new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
        .kernelSize(2,2)
        .stride(2,2)
        .name("Second subsampling layer")
        .build();

ConvolutionLayer layer4 = new ConvolutionLayer.Builder(5,5)
        .nOut(20)
        .stride(1,1)
        .padding(2,2)
        .weightInit(WeightInit.XAVIER)
        .name("Third convolution layer")
        .activation(Activation.RELU)
        .build();

SubsamplingLayer layer5 = new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
        .kernelSize(2,2)
        .stride(2,2)
        .name("Third subsampling layer")
        .build();

OutputLayer layer6 = new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
        .activation(Activation.SOFTMAX)
        .weightInit(WeightInit.XAVIER)
        .name("Output")
        .nOut(10)
        .build();

Note that you are free to choose other values for the kernel sizes, strides, and padding. However, I suggest you use ReLU as the activation function for all the convolution layers, and softmax for the output layer. As for the subsampling layers, MAX is the most commonly used pooling type.
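If you do change those values, it helps to check the resulting feature-map sizes with the standard formula out = (in − kernel + 2 × padding) / stride + 1. Here's a small plain-Java sketch (no DL4J required; ConvShapes is just an illustrative name) tracing a 32×32 CIFAR-10 image through the six layers above:

```java
public class ConvShapes {
    // Output size along one spatial dimension: (in - kernel + 2*pad) / stride + 1
    static int outSize(int in, int kernel, int stride, int pad) {
        return (in - kernel + 2 * pad) / stride + 1;
    }

    public static void main(String[] args) {
        int size = 32;                 // CIFAR-10 images are 32x32
        size = outSize(size, 5, 1, 2); // layer0: conv 5x5, stride 1, pad 2 -> 32
        size = outSize(size, 2, 2, 0); // layer1: max pool 2x2, stride 2    -> 16
        size = outSize(size, 5, 1, 2); // layer2: conv                      -> 16
        size = outSize(size, 2, 2, 0); // layer3: pool                      -> 8
        size = outSize(size, 5, 1, 2); // layer4: conv                      -> 8
        size = outSize(size, 2, 2, 0); // layer5: pool                      -> 4
        System.out.println(size);      // prints 4
    }
}
```

Because the convolutions use 5×5 kernels with padding 2 and stride 1, they preserve the spatial size; only the pooling layers shrink it, halving it each time, so the feature maps are 4×4 by the time they reach the output layer.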

Also make sure you always pass 3 to the first convolution layer’s nIn() method. This is important because CIFAR-10 images are RGB, so every input has 3 channels.

We must now create a MultiLayerConfiguration object specifying the configuration details of our neural network. Using it, we can also arrange our layers in the correct order.

MultiLayerConfiguration configuration = new NeuralNetConfiguration.Builder()
        .seed(12345)
        .iterations(1)
        .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
        .learningRate(0.001)
        .regularization(true)
        .l2(0.0004)
        .updater(Updater.NESTEROVS)
        .momentum(0.9)
        .list()
            .layer(0, layer0)
            .layer(1, layer1)
            .layer(2, layer2)
            .layer(3, layer3)
            .layer(4, layer4)
            .layer(5, layer5)
            .layer(6, layer6)
        .pretrain(false)
        .backprop(true)
        .setInputType(InputType.convolutional(32,32,3))
        .build();

Note that you are again free to experiment with different values for the learning rate, the L2 coefficient, the momentum, and the optimization algorithm. Another important thing to note in the above code is the call to the setInputType() method, which specifies that our neural network’s input is convolutional: 32×32 images with 3 channels.
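As a rough sanity check on model size, you can count the trainable parameters by hand: a convolution layer has (kernelWidth × kernelHeight × nIn + 1) × nOut weights and biases, and since three 2×2 poolings shrink 32×32 down to 4×4, the output layer sees 4 × 4 × 20 = 320 flattened inputs. A plain-Java sketch (no DL4J required; ParamCount is just an illustrative name):

```java
public class ParamCount {
    // Weights + biases of a square-kernel convolution layer
    static long convParams(int kernel, int nIn, int nOut) {
        return ((long) kernel * kernel * nIn + 1) * nOut;
    }

    public static void main(String[] args) {
        long total = 0;
        total += convParams(5, 3, 16);   // layer0: 1,216 parameters
        total += convParams(5, 16, 20);  // layer2: 8,020 parameters
        total += convParams(5, 20, 20);  // layer4: 10,020 parameters
        // Output layer: each of the 10 neurons adds 320 weights + 1 bias
        total += (4 * 4 * 20 + 1) * 10;  // 3,210 parameters
        System.out.println(total);       // prints 22466
    }
}
```

At roughly 22,000 parameters this is a tiny network by modern standards, which is one reason its accuracy tops out fairly early.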

Finally, you can create the neural network by passing the configuration object to the constructor of the MultiLayerNetwork class. Once created, the network must be initialized by calling its init() method.

MultiLayerNetwork network = new MultiLayerNetwork(configuration);
network.init();

4. Train and Evaluate the Neural Network

To train the convolutional neural network you just created, call its fit() method and pass the iterator object to it.

network.fit(dataSetIterator);

Once the training is complete, you can evaluate your network by calling its evaluate() method. I suggest you pass a new CifarDataSetIterator object to it, this time with the train flag set to false so that it serves only the test data. The method returns an Evaluation object; by calling its stats() method, you can get a detailed report of your network’s performance:

Evaluation evaluation = network.evaluate(new CifarDataSetIterator(2, 500, false));
System.out.println(evaluation.stats());
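The stats() report includes accuracy, precision, recall, and F1 per class. For intuition, here is how those numbers relate to each other, using made-up counts (plain Java; the values are hypothetical, not from a real run):

```java
public class MetricsDemo {
    // F1 is the harmonic mean of precision and recall
    static double f1(int tp, int fp, int fn) {
        double precision = (double) tp / (tp + fp);
        double recall = (double) tp / (tp + fn);
        return 2 * precision * recall / (precision + recall);
    }

    public static void main(String[] args) {
        // Hypothetical counts for one class: 40 true positives,
        // 10 false positives, 20 false negatives
        System.out.printf("precision=%.3f recall=%.3f f1=%.3f%n",
                40.0 / (40 + 10), 40.0 / (40 + 20), f1(40, 10, 20));
    }
}
```

A large gap between precision and recall for a class (say, "cat") usually means the network is systematically confusing it with a visually similar class (say, "dog"), which the confusion matrix in the report will confirm.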

Go ahead and run the project now. You’ll, of course, have to wait for several minutes for the training to complete.

Conclusion

After tinkering with the network’s parameters for about an hour, I managed to achieve an accuracy of about 60%. If you put in more effort, and are patient enough for longer training runs, I’m sure you can achieve much higher accuracies. The best ConvNets out there have achieved over 95% accuracy on CIFAR-10.

If you found this article useful, please share it with your friends and colleagues!