Implementation of Gradient Descent in Python
https://medium.com/coinmonks/implementation-of-gradient-descent-in-python-a43f160ec521
Every machine learning engineer is always looking to improve their model’s performance. This is where optimization, one of the most important fields in machine learning, comes in. Optimization allows us to select the best parameters for the machine learning algorithm or method we are using on our problem. There are several types of optimization algorithms, and perhaps the most popular one is Gradient Descent. Many machine learning engineers first encounter Gradient Descent in their introduction to neural networks. In this tutorial, we will teach you how to implement Gradient Descent from scratch in Python. But first, what exactly is Gradient Descent?
Gradient Descent is an optimization algorithm that helps machine learning models converge to a minimum value through repeated steps. Essentially, gradient descent minimizes a function by finding the input that gives the lowest output of that function. In machine learning, this function is usually a loss function, which measures how poorly our model performs compared to the actual outcomes. Since a lower loss means a better model, it makes sense to reduce this loss, and one way to do so is via Gradient Descent.
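To make the idea of "repeated steps toward a minimum" concrete, here is a minimal sketch of gradient descent on a simple one-variable function. The function f(x) = (x - 3)², the starting point, the learning rate, and the number of iterations are all illustrative choices for this sketch, not part of the article's model:

```python
# Minimal gradient descent sketch on f(x) = (x - 3)**2.
# The minimum of f is at x = 3, and its derivative is f'(x) = 2 * (x - 3).

def gradient(x):
    """Derivative of f(x) = (x - 3)**2 with respect to x."""
    return 2 * (x - 3)

x = 0.0              # arbitrary starting point
learning_rate = 0.1  # step size (illustrative value)

for step in range(100):
    # Update rule: move a small step in the direction opposite the gradient.
    x = x - learning_rate * gradient(x)

print(x)  # approaches 3.0, the input that minimizes f
```

Each iteration applies the core update rule, x = x - learning_rate * gradient(x): the gradient points uphill, so stepping against it moves us downhill toward the minimum. The same idea scales to loss functions of many parameters, which is what we build toward in the rest of this tutorial.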