Nitin B
Created February 14, 2019

Neural network on Ada

This project aims to create an infrastructure for running neural networks in Ada.


Things used in this project

Hardware components

STMicroelectronics STM32F407 Discovery board
×1

Software apps and online services

AdaCore GNAT Community

Story


Code

Reference implementation of CIFAR10

Ada
This code is provided for reference only: the TensorFlow layers and the run_network layers must mirror each other exactly. It cannot run on the STM32F407-Discovery because of that board's limited RAM; the STM32F429ZI-Discovery, with its external SDRAM, is capable enough to run this network.
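The RAM constraint is easy to verify with quick arithmetic. run_network below declares two scratch buffers of 65535 elements each; assuming 32-bit elements (the element type of conv_buffer is an assumption), the footprint works out as follows:

```python
# Back-of-the-envelope RAM check for the two scratch buffers declared in
# run_network (buf1 and buf2, 65535 elements each).
# 4 bytes per element (32-bit float) is an assumption.
ELEMENTS = 65535
BYTES_PER_ELEMENT = 4
buffers_kib = 2 * ELEMENTS * BYTES_PER_ELEMENT / 1024
print(f"scratch buffers: {buffers_kib:.0f} KiB")  # ~512 KiB
# The STM32F407 has 192 KiB of internal SRAM -- far too small.
# The STM32F429ZI-Discovery adds 8 MiB of external SDRAM, which fits easily.
```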
with weights_biases; use weights_biases;

package body network is

   procedure run_network(buf_in: conv_buffer) is
      --  Two scratch buffers; the layers ping-pong between them.
      buf1: conv_buffer(1 .. 65535);
      buf2: conv_buffer(1 .. 65535);
   begin

      --  conv1: 3x32x32 input -> 32x32x32
      convolution(buf_in, buf2, conv1, 3, 32, 32, 32, 32, 32, 2, 2, 1, 1);
      biasAdd(buf2, biases1, 32, 32, 32);
      relu(buf2, 32768);  --  32*32*32 elements

      --  conv2: 32x32x32 -> 32x32x32
      convolution(buf2, buf1, conv2, 3, 32, 32, 32, 32, 32, 2, 2, 1, 1);
      biasAdd(buf1, biases2, 32, 32, 32);
      relu(buf1, 32768);

      --  2x2 max pool: 32x32 -> 16x16
      maxPool(buf1, buf2, 3, 32, 32, 32, 32, 2, 2, 2, 2);

      --  conv3: 16x16x32 -> 16x16x64
      convolution(buf2, buf1, conv3, 3, 16, 16, 16, 16, 64, 2, 2, 1, 1);
      biasAdd(buf1, biases3, 16, 16, 64);
      relu(buf1, 16384);  --  16*16*64 elements

      --  2x2 max pool: 16x16 -> 8x8
      maxPool(buf1, buf2, 3, 16, 16, 16, 16, 2, 2, 2, 2);

      --  conv4: 8x8x64 -> 8x8x64
      convolution(buf2, buf1, conv4, 3, 8, 8, 8, 8, 64, 2, 2, 1, 1);
      biasAdd(buf1, biases4, 8, 8, 64);
      relu(buf1, 4096);  --  8*8*64 elements

      --  2x2 max pool: 8x8 -> 4x4; 4*4*64 = 1024 feeds the dense layer
      maxPool(buf1, buf2, 3, 8, 8, 8, 8, 2, 2, 2, 2);

      --  Fully connected: 1024 -> 64
      transpose(buf2, buf1, 64, 1024);
      matmul(buf1, buf2, matmul1, 1024, 64);
      biasAdd(buf2, biases5, 64);
      relu(buf2, 64);

      --  Fully connected: 64 -> 10 class scores
      transpose(buf2, buf1, 10, 64);
      matmul(buf1, buf2, matmul2, 64, 10);

      softmax(buf2, 10);

   end run_network;

end network;
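For readers comparing against the TensorFlow side, the primitives that run_network composes can be sketched in NumPy. The layouts (channel-first CHW, 'same' padding, non-overlapping pooling) and the Python signatures are assumptions inferred from the call sites above, not the Ada package's actual API:

```python
# NumPy sketch of the layer primitives used by run_network.
# Layouts and signatures are illustrative assumptions.
import numpy as np

def conv2d(x, w, pad=1):
    """x: (Cin, H, W), w: (Cout, Cin, k, k); 'same' output for k=3, pad=1."""
    cout, cin, k, _ = w.shape
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.empty((cout, h, wd))
    for i in range(h):
        for j in range(wd):
            # contract the (Cin, k, k) patch against every filter at once
            out[:, i, j] = np.tensordot(w, xp[:, i:i + k, j:j + k], axes=3)
    return out

def bias_add(x, b):
    """One bias per output channel, broadcast over H and W."""
    return x + b[:, None, None]

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling; halves H and W for s=2."""
    c, h, w = x.shape
    return x.reshape(c, h // s, s, w // s, s).max(axis=(2, 4))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()
```

Chaining these the way run_network does reproduces its shape walk for a 3x32x32 CIFAR-10 image: two convolutions and a pool give 32x16x16, conv3 and a pool give 64x8x8, conv4 and a pool give 64x4x4, which flattens to the 1024-element vector fed into the 1024-to-64 and 64-to-10 dense layers before softmax.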

Source code and training script

Credits

Nitin B
