Forward propagation is the algorithm neural networks use to make predictions from input data.

The above diagram shows a high-level depiction of how forward propagation works: it takes inputs (for example: age, salary, gender) and passes that data into a hidden layer.

# Example

As an example, let’s say our goal is to predict the number of purchases per month a family will make on our site. We have a model that says the monthly transactions are based on:

- a family’s purchases in a given week and
- the number of family members.

This is just an example, and obviously not a real-world set of indicators. To understand how forward propagation works, let’s roll with this simplistic idea.

### Inputs

The inputs would be:

- purchases in a week
- # of members of the family

### Output

The output would be the prediction of the # of purchases a month per family.

### Diagram

If they had 3 purchases in a week and there are 5 members in the family, this would look like this:

## Hidden Layer

In the diagram above, I’ve added weights to the lines… but let me describe what the circles are. The first column of circles represents the inputs (purchases in a week, family size).

The second column of circles is the “Hidden layer.” The idea of a “hidden layer” tripped me up. Don’t get tripped up. Just think of it as a **controller** that **handles the data coming into it** from the **inputs** (via the weights) and **passes the result to the prediction output**.
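To make that less mysterious: each hidden-layer circle just computes a weighted sum of the inputs. Here’s a tiny sketch using the example’s numbers (the variable names are my own, not from any library):

```python
# A hidden node multiplies each input by that input's weight
# and adds the results together. Values are from the example.
inputs = [3, 5]            # purchases last week, family size
node_weights = [1, 1]      # weights on the arrows into one hidden node

node_value = sum(i * w for i, w in zip(inputs, node_weights))
print(node_value)  # 8
```

That single weighted sum is all a hidden node does in this simple network.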

## Weights

There is a weight associated with each input (in the example above, each is represented by an arrow). The weights assigned are then used to compute the values in the hidden layer.

The math is simple multiplication and addition:

- Node 0: (3 × 1) + (5 × 1) = 3 + 5 = 8
- Node 1: (3 × -2) + (5 × 1) = -6 + 5 = -1

The process is repeated once more to get the output from the hidden layer via those last weights: (8 × 2) + (-1 × 1) = 16 - 1 = 15

**15** is the predicted result: **15** purchases this month for a family of 5 that did 3 purchases this last week.

## Linear Algebra

Forward propagation is basically linear algebra. We’re simply doing basic math: multiplying and adding.
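To see the linear algebra explicitly, the same network can be written as two matrix-vector products. This is a sketch with NumPy (the variable names are mine; the numbers come from the example above):

```python
import numpy as np

inputs = np.array([3, 5])               # purchases last week, family size
hidden_weights = np.array([[1, 1],      # weights into hidden node 0
                           [-2, 1]])    # weights into hidden node 1
output_weights = np.array([2, 1])       # weights from hidden layer to output

# The whole hidden layer is one matrix-vector product,
# and the output is a dot product with the final weights.
hidden = hidden_weights @ inputs        # array([ 8, -1])
prediction = output_weights @ hidden
print(prediction)  # 15
```

Same arithmetic as the hand calculation, just expressed as matrix math — which is exactly how real networks scale this up.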

# Python Example Using NumPy

```python
import numpy as np

# Input values: purchases last week, family size
input_data = np.array([3, 5])

# Weights from the diagram (on the arrows)
weights = {'hidden_node0': np.array([1, 1]),
           'hidden_node1': np.array([-2, 1]),
           'output': np.array([2, 1])}

# Hidden layer: weighted sum for each node
node_0 = (input_data * weights['hidden_node0']).sum()
node_1 = (input_data * weights['hidden_node1']).sum()

hidden_layer_result = np.array([node_0, node_1])

# Output: weighted sum of the hidden layer values
output = (hidden_layer_result * weights['output']).sum()
print(output)  # 15
```

The above code follows the same flow as the diagram. For this demo, the code starts by making an array of the input values.

We start with the values 3 (how many purchases the family made last week) and 5 (the size of the family). Next, a dictionary of weights is created. These weights correspond to the weights in the diagram (on the arrows).

**hidden_node0** has weights of 1, 1.

**hidden_node1** has weights of -2, 1.

Calculations are made on the hidden layer by multiplying the inputs by the appropriate weights. These are the same calculations I did by hand, and the results are saved to the variables **node_0** and **node_1**.

The hidden layer results are collected into their own array (from **node_0** and **node_1**), and finally the output is computed by taking **hidden_layer_result**, multiplying it by the appropriate weights (**weights['output']**), and summing.

Running this little script above will produce 15. 15 is the predicted number of transactions from this model. **Obviously this is an example, and in no way reflects reality. This simply shows how forward propagation works**.
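If you wanted to reuse this, the script can be wrapped in a function. Here’s a sketch (the `forward` helper is my own name, not from any library):

```python
import numpy as np

def forward(inputs, weights):
    """Forward pass for this tiny two-node network."""
    node_0 = (inputs * weights['hidden_node0']).sum()
    node_1 = (inputs * weights['hidden_node1']).sum()
    hidden = np.array([node_0, node_1])
    return (hidden * weights['output']).sum()

weights = {'hidden_node0': np.array([1, 1]),
           'hidden_node1': np.array([-2, 1]),
           'output': np.array([2, 1])}

print(forward(np.array([3, 5]), weights))  # 15
```

Now you can try different inputs — `forward(np.array([1, 2]), weights)`, say — without rewriting the math each time.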