One way to do gradient descent in Python is to code it myself. However, given how common the technique is in machine learning, I was wondering whether there is a Python library I can import that provides a gradient descent method (preferably mini-batch gradient descent, since it generally works better than pure batch or stochastic gradient descent, but correct me if I'm wrong).
I checked NumPy and SciPy but couldn't find anything. I have no experience with TensorFlow, but I looked through its online API documentation. I found tf.train.GradientDescentOptimizer, but there is no parameter that lets me choose a batch size, so I'm fuzzy about what variant of gradient descent it actually implements.
Sorry if I sound naive. I'm self-learning a lot of this stuff.
To state the obvious, gradient descent optimizes a function. When you use a gradient descent implementation from a library, you need to express that function using the library's own constructs. In TensorFlow, for example, functions are represented as computation graphs: you cannot just hand a pure Python function to TensorFlow's gradient descent optimizer and ask it to optimize it.
If your use case allows you to express your function as a TensorFlow computation graph (along with all the associated machinery: how to run the function, how to compute its gradient, and so on), then tf.train.*Optimizer is an obvious choice. Otherwise, the TensorFlow optimizers are unusable for you. Regarding the missing batch-size parameter: the optimizer itself is agnostic to batching; whether you get batch, mini-batch, or stochastic gradient descent is determined by how many examples you feed into the graph per optimization step.
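To make that concrete, here is a minimal sketch in the TF 1.x style that the question's tf.train.GradientDescentOptimizer belongs to (the function f(x) = (x - 3)^2 is just an illustrative toy):

```python
import tensorflow as tf  # TF 1.x API, matching tf.train.* from the question

# The function to minimize must be built out of TensorFlow ops
# (a computation graph), not plain Python: here f(x) = (x - 3)^2.
x = tf.Variable(0.0)
loss = tf.square(x - 3.0)

# The optimizer differentiates the graph for you. There is no batch-size
# argument: batching is determined by how much data you feed per step.
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(50):
        sess.run(train_step)   # one gradient descent step per run call
    print(sess.run(x))         # converges toward 3.0
```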
If you need something lightweight, https://github.com/HIPS/autograd is probably the best of the popular options: it can differentiate plain NumPy-style Python functions directly. Its optimizers can be found here: https://github.com/HIPS/autograd/blob/master/autograd/misc/optimizers.py
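For comparison, a minimal sketch of the autograd route, assuming the sgd optimizer in that file keeps its current signature (a gradient function taking the parameters and the iteration index):

```python
import autograd.numpy as np               # NumPy wrapper that autograd can differentiate
from autograd import grad
from autograd.misc.optimizers import sgd  # location as of current master

# A plain NumPy-style objective: f(params) = sum((params - 3)^2).
# The iteration argument is where you could slice out mini-batches
# of your data if you wanted mini-batch gradient descent.
def objective(params, iteration):
    return np.sum((params - 3.0) ** 2)

# grad(objective) builds the gradient function automatically;
# sgd here is gradient descent with momentum.
result = sgd(grad(objective), np.zeros(5), step_size=0.1, num_iters=200)
print(result)  # every entry converges toward 3.0
```

Note that mini-batching is your responsibility here: make objective(params, iteration) compute the loss on the batch selected by iteration.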