In the final tutorial part of our Deep Learning for Stock Market Prediction series, we learn how to use our model and functions to actually predict a stock's likelihood of going up or down in the near future, given a set of "input" variables.

**Here’s a simple analogy.**

Imagine a list of variables (X1, X2, X3). We are trying to use these 3 numbers to predict what Y is:

| | **X1** | **X2** | **X3** | **Y** |
|---|---|---|---|---|
| **Case 1** | 1 | 10 | 3 | 1 |
| **Case 2** | 3 | 5 | 5 | 0 |
| **Case 3** | 100 | 90 | 92 | 1 |
| **Case 4** | 59 | 1 | 20 | 1 |
| **Case 5** | 42 | 44 | 2 | 0 |
| **…** | … | … | … | … |
| **Case X** | 143 | 121 | 5 | 0 |

We’ve already trained our model – it should already know that Y is 1 if X3 is between the other two variables (X1 and X2), otherwise Y is 0. We did this in Part 6 of our series, training the model on Training Data. Now we are programming our algorithm to take X1, X2, X3, and, using a trained model, predict what Y is for other Test data. You can learn more about Training and Testing Data from our “What is Deep Learning?” post.
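As a quick sanity check, the rule the model is supposed to have learned can be written directly in plain Python (this is just a sketch of the toy rule, not the neural network itself):

```python
def toy_label(x1, x2, x3):
    # Y is 1 when X3 lies strictly between X1 and X2, and 0 otherwise
    return 1 if min(x1, x2) < x3 < max(x1, x2) else 0

# Check against the table above
print(toy_label(1, 10, 3))     # Case 1 -> 1
print(toy_label(3, 5, 5))      # Case 2 -> 0
print(toy_label(100, 90, 92))  # Case 3 -> 1
```

The trained network should reproduce this mapping from the Training Data alone, without ever being shown the rule.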

Hopefully you now understand what we are trying to do. If not, make sure to read the beginning posts of the series, starting with the Introductory Part 1!

Anyways, I decided to save the weights/model from the training process, so that I could load it up when needed for testing. This will save us a considerable amount of time in the future, as we no longer have to train the model whenever we want to use it. We can use keras’s model_from_json for this:

from keras.models import model_from_json

To load the model, we first read the saved architecture back in (here assumed to have been saved as SavedModel/model.json, next to the weights file) and then rebuild it:

with open("SavedModel/model.json", "r") as json_file:
    loaded_model_json = json_file.read()
loaded_model = model_from_json(loaded_model_json)

And to load the weights we use:

loaded_model.load_weights("SavedModel/model.h5")

To analyze any stock, we need to do two things. First, we find articles and analyze them for their sentiment, putting the results into a CSV file. Then we take the CSV and use it to predict, with our model, whether the stock will go up or down.

The first half of this process is quite simple – it is basically the same as the Training Model CSV creation, except that we don't enter a slope in the CSV – that's what the model is supposed to guess. We created the Training Model CSV in Part 6 of this tutorial series. (As a reminder, it took a stock ticker symbol, collected relevant financial news articles about the stock, and analyzed the sentiment of each article. It then collected other variables important to the stock and combined everything into a CSV file.)

The second half uses model.predict to take in the values (input variables) from the CSV and predict whether the stock will go up or down (the output). We use the number Z to represent this value throughout the rest of the post.

There are a number of different ways to express whether a stock will go up or down, and how sharp the increase or decrease will be. We can't give an exact amount of money it will go up (that would require much more than a simple classification problem), but we can use a system that allows stocks to be compared against each other. Since decimal numbers are easily comparable, that's what we use.

The final prediction (Z) for any stock X, given the number of positive articles P and negative articles N, is as follows:

Z = (P − N) / (P + N)

where X is expected to increase when Z > 0, stay flat when Z = 0, and decrease when Z < 0.

Dividing by (P + N) normalizes the result by the total number of articles collected. This is an extra precaution that keeps the rating comparable even when some stocks are more popular, and therefore attract more articles, than others.
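The formula above can be sketched as a small helper function (the function name and the guard for the no-articles case are my additions, not part of the original code):

```python
def buy_rating(p, n):
    """Z = (P - N) / (P + N), a value in [-1, 1]."""
    if p + n == 0:
        return 0.0  # no articles collected: treat as neutral
    return (p - n) / (p + n)

print(buy_rating(8, 2))  # 0.6 -> expected to increase
print(buy_rating(3, 3))  # 0.0 -> expected to stay flat
```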

Once we have our number, Z, we know how strongly the stock is expected to perform, where a value closer to 1 is better. This number can also be used to compare stocks against each other, which is useful when deciding which stocks to buy from a list of multiple choices.
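Because Z values are directly comparable, ranking a whole watchlist comes down to a single sort (the tickers and article counts below are hypothetical, for illustration only):

```python
# (ticker, positive articles, negative articles) -- hypothetical data
stocks = [("AAA", 8, 2), ("BBB", 3, 7), ("CCC", 5, 5)]

# Sort by Z = (P - N) / (P + N), strongest rating first
ranked = sorted(
    stocks,
    key=lambda s: (s[1] - s[2]) / (s[1] + s[2]),
    reverse=True,
)
print([ticker for ticker, _, _ in ranked])
```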

With this, we now have a working Stock Prediction algorithm that can take in a stock ticker and return a Buy Rating in just a couple seconds! Look out for our next and last part of the series, where we take a quick look at several key changes that can be made to improve both the accuracy and efficiency of our algorithm. Enjoy! 🙂
