The input to my network is n sequential pixels of an NxN image (where n is small compared to N), and the output is 1 pixel.
The loss is defined as the squared difference between the output and the desired output.
I want to use the optimizer on the average loss after iterating over the whole image.
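In symbols (my notation, not from any library): with $\hat{y}_i$ the network output for the $i$-th window of $n$ pixels, $y_i$ the desired pixel, and $M$ the number of windows taken from the image, the objective would be

$$\bar{L} = \frac{1}{M}\sum_{i=1}^{M}\left(\hat{y}_i - y_i\right)^2 .$$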
But if I collect the losses in a list and average them after all the iterations are done, feeding this average to my optimizer causes an error, because TensorFlow does not know where this loss comes from: it is not on the computational graph.
Apparently, feeding an array of shape [x, n] (where x is the number of inputs I would otherwise have to feed separately in each iteration, and n is the number of sequential pixels) to my network and then optimizing the loss computed for this batch is exactly what I was looking for.
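As a rough sketch of that batched setup (graph-mode TF1 API via tf.compat.v1; the layer sizes, learning rate, and dummy data below are placeholder choices of mine, not part of my actual network):

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

n = 16  # number of sequential pixels per sample (assumed value)

# One row per sample: feeding an [x, n] array means x samples at once.
inputs = tf.placeholder(tf.float32, shape=[None, n], name="pixel_windows")
targets = tf.placeholder(tf.float32, shape=[None, 1], name="target_pixel")

# Small fully connected network predicting one pixel per sample.
hidden = tf.layers.dense(inputs, 32, activation=tf.nn.relu)
prediction = tf.layers.dense(hidden, 1)

# Squared difference per sample, averaged over the whole batch, so one
# optimizer step minimizes the mean loss over everything fed in at once.
loss = tf.reduce_mean(tf.square(prediction - targets))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Dummy data standing in for all the windows extracted from one image.
    x = 64
    windows = np.random.rand(x, n).astype(np.float32)
    desired = np.random.rand(x, 1).astype(np.float32)
    _, avg_loss = sess.run([train_op, loss],
                           feed_dict={inputs: windows, targets: desired})
    print("average loss over the batch:", avg_loss)
```

Because the averaging is done by tf.reduce_mean inside the graph rather than in a Python list, the optimizer can backpropagate through it, and one call to train_op already takes a step on the mean loss over all x windows.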