What are Artificial Neural Networks?

Neural networks are a family of algorithms, loosely modeled on the human brain, designed to recognize patterns and interpret sensory data. Human brains interpret the context of real-world situations in a way that computers traditionally can’t, and neural networks were developed largely to address this gap.

We have already looked at Machine Learning algorithms in a previous post, and we are now going to delve into the different classifications of neural networks according to their architecture and how the layers of neurons are connected.

 

How Many Different Neural Networks Are There?

There are various types of neural networks, each of which comes with its own specific use cases and level of complexity. We are going to focus on four main classifications, offering a brief introduction to each one:

  1. Monolayer neural network
  2. Multilayer neural network
  3. Recurrent neural network
  4. Convolutional neural network

 

Monolayer Neural Network

The monolayer neural network is the simplest neural network of them all: the inputs are projected directly onto a single layer of output neurons, where all the calculations are made. Just as the name suggests, all the computing neurons are laid out in one single layer. The network receives the input signals, feeds them into the corresponding neurons, and those neurons produce the output signals.
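To make this concrete, here is a minimal sketch of a monolayer network's forward pass in Python. The sizes (3 inputs, 2 output neurons), the random weights and the sigmoid activation are illustrative assumptions, not part of any specific network described above.

```python
import numpy as np

# Illustrative sketch: a single-layer network with 3 inputs and 2 output neurons.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))   # one row of weights per output neuron
b = np.zeros(2)               # one bias per output neuron

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # The inputs are projected directly onto the output neurons: no hidden layers.
    return sigmoid(W @ x + b)

x = np.array([0.5, -1.0, 2.0])  # example input signal
print(forward(x))               # two output signals, one per output neuron
```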

 

Multilayer Neural Network 

The multilayer neural network is a generalization of the monolayer neural network. The difference is that, while the monolayer network connects a layer of input neurons directly to a layer of output neurons, the multilayer network adds a set of intermediate layers (hidden layers) between the input and the output layer.

Depending on how many connections the network has, it may be fully or only partially connected. The neurons are separated into multiple layers, with each layer corresponding to a parallel group of neurons that share the same input data. Because they contain multiple layers of processing, these networks can learn from nonlinear data and separate it more easily, as the sketch below illustrates.
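The following sketch extends the previous one with a single hidden layer. The layer sizes (3 inputs, 4 hidden neurons, 2 outputs), the random weights and the ReLU activation are assumptions chosen only for illustration.

```python
import numpy as np

# Illustrative sketch: a multilayer network with one hidden layer.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # hidden layer -> output layer

def relu(z):
    return np.maximum(z, 0.0)

def forward(x):
    h = relu(W1 @ x + b1)   # hidden layer: a nonlinear transformation of the inputs
    return W2 @ h + b2      # output layer reads from the hidden layer, not the raw inputs

x = np.array([0.5, -1.0, 2.0])
print(forward(x))
```

The hidden layer is what lets the network bend the input space, which is why multilayer networks can separate nonlinear data that a single layer cannot.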

 

Recurrent Neural Network 

Recurrent neural networks (RNNs) do not have a strict feed-forward layer structure; they allow arbitrary connections between neurons, including cycles. These cycles introduce a notion of time and give the network a form of memory.

The data entered at a given time step are transformed and keep circulating through the network at the following time instants t + 1, t + 2 and so on. We refer to these networks as recurrent because they carry out the same task for every element of a sequence, with each output depending on the previous calculations. A simple way to think about it is to imagine that the network has a memory that captures the computations performed so far.
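Here is a minimal sketch of a single recurrent step. The dimensions (3 input features, a 5-dimensional hidden state), the random weights and the tanh activation are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: one recurrent step applied repeatedly over a sequence.
rng = np.random.default_rng(0)
Wx = rng.normal(size=(5, 3))   # input -> hidden state
Wh = rng.normal(size=(5, 5))   # hidden state -> hidden state (the recurrent connection)
b = np.zeros(5)

def step(x_t, h_prev):
    # The same weights are applied at every time step; the previous hidden
    # state h_prev is what gives the network its "memory".
    return np.tanh(Wx @ x_t + Wh @ h_prev + b)

h = np.zeros(5)                                     # initial memory
sequence = [rng.normal(size=3) for _ in range(4)]   # inputs at t, t+1, t+2, t+3
for x_t in sequence:
    h = step(x_t, h)   # each new state depends on all the previous calculations
print(h)
```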

 

Convolutional Neural Network 

The main difference between the convolutional neural network and the multilayer perceptron is that each neuron doesn't connect to every neuron in the previous layer, only to a small subgroup of them (in simpler words, it specializes 😉). Thanks to this, we can reduce the number of weights and the computational complexity needed to run the network.

CNNs perceive images as volumes, as if they were three-dimensional objects rather than flat. This is because digital colour images use red-green-blue (RGB) encoding, and mixing these three colours produces the very colour spectrum that humans see. A CNN therefore absorbs an image as three separate strata of colour stacked on top of each other.
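The sketch below shows one convolution over such an RGB volume. The image size (8×8 with 3 channels), the single 3×3 filter and the random values are illustrative assumptions; real CNNs use many filters and learned weights.

```python
import numpy as np

# Illustrative sketch: sliding one 3x3 filter over an 8x8 RGB image.
rng = np.random.default_rng(0)
image = rng.random(size=(8, 8, 3))    # height x width x colour channels (R, G, B)
kernel = rng.normal(size=(3, 3, 3))   # the filter spans all 3 colour channels

def convolve(img, k):
    kh, kw, _ = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value looks only at a local 3x3 patch of the image,
            # not at every pixel: this is the "subgroup" of connections.
            patch = img[i:i + kh, j:j + kw, :]
            out[i, j] = np.sum(patch * k)
    return out

feature_map = convolve(image, kernel)
print(feature_map.shape)   # (6, 6): one feature map produced by one filter
```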

 

What's next? 

Want to learn more about Artificial Neural Networks? Sign up to our webinar here on the 16th of April and explore the world of Deep Learning.

Written by our data science expert, Diego Calvo. Check out his blog here.