Deploy your PyTorch model to Production

https://medium.com/@nicolas.metallo/deploy-your-pytorch-model-to-production-f69460192217

Following the last article about Training a Choripan Classifier with PyTorch and Google Colab, we will now talk about some of the steps you can take to deploy your recently trained model as an API. The discussion on how to do this with Fast.ai is still ongoing and will most likely continue until PyTorch releases its official 1.0 version. You can find more information in the Fast.ai Forums, the PyTorch Documentation/Forums, and their respective GitHub repositories.

It’s recommended that you take a look at the PyTorch Documentation, as it’s a great place to start. In short, there are two ways to serialize and restore a model: loading only the weights (the state_dict), or loading the entire model (architecture and weights). If you load only the weights, you first need to instantiate the model to define its architecture; otherwise you will end up with an OrderedDict containing just the weight tensors. Both options work for inference and for resuming training from a previous checkpoint.
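
To make the two options concrete, here is a minimal sketch of both approaches. The `Net` module and the file names (`weights.pth`, `model.pth`) are hypothetical placeholders for illustration only; substitute your own architecture and paths.

```python
import torch
import torch.nn as nn

# Hypothetical model used purely for illustration; swap in your own architecture.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(784, 10)

    def forward(self, x):
        return self.fc(x)

model = Net()

# Option 1: save/load only the weights (the state_dict).
torch.save(model.state_dict(), "weights.pth")
restored = Net()                                      # re-create the architecture first
restored.load_state_dict(torch.load("weights.pth"))   # then load the weights into it
restored.eval()                                       # switch to inference mode

# Option 2: save/load the entire model (architecture + weights).
torch.save(model, "model.pth")
restored_full = torch.load("model.pth")
restored_full.eval()
```

Note that saving the entire model ties the checkpoint to the exact class definitions and directory structure that existed when it was saved, which is one reason the state_dict approach is generally the one recommended by the PyTorch documentation.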