Does SGD in Tensorflow make a move with each data point?



I assumed the "stochastic" in Stochastic Gradient Descent came from the random selection of samples within each batch. But the articles I have read on the topic seem to indicate that SGD makes a small move (weight change) with every data point. How does Tensorflow implement it?


Yes, SGD does involve random sampling, but the point is a little different.

SGD itself doesn’t do the sampling. You do the sampling by batching and, ideally, shuffling between epochs.

GD means you generate gradients for each weight after forward-propping the entire dataset (batch size = cardinality, so steps per epoch = 1). If your batch size is less than the cardinality of the dataset, then you are the one doing the sampling, and you are running SGD, not GD.
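To make the batch size / cardinality relationship concrete, here is a small illustrative calculation (the dataset size of 60,000 is just an example, not from the question):

```python
import math

cardinality = 60_000  # example dataset size, purely illustrative

# Full-batch GD: batch size equals the cardinality, so one step per epoch.
gd_steps = math.ceil(cardinality / 60_000)

# Mini-batch SGD: batch size < cardinality, so many steps per epoch.
sgd_steps = math.ceil(cardinality / 32)

print(gd_steps, sgd_steps)  # 1 1875
```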

The implementation is pretty simple, something like:

  1. Forward prop a batch / step.
  2. Find the gradients.
  3. Update weights with those gradients.
  4. Go back to step 1.

Answered By – Yaoshiang

This answer was collected from Stack Overflow and is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
