I am implementing an RL agent with a policy gradient method. I define one dense network for the actor and another dense network for the critic. For example, my critic network is:
from keras.layers import Input, Dense
from keras.models import Model
from keras.optimizers import SGD

# Critic: maps a state to a single scalar value estimate
state_input = Input(shape=(self.num_states,))
x = Dense(self.hidden_size, activation='tanh')(state_input)
for _ in range(self.num_layers - 1):
    x = Dense(self.hidden_size, activation='tanh')(x)
out_value = Dense(1)(x)

model = Model(inputs=[state_input], outputs=[out_value])
model.compile(optimizer=SGD(lr=self.learning_rate), loss='mse')
In the training phase I set up a TensorBoard callback:
from keras.callbacks import TensorBoard

tensorboard = TensorBoard(log_dir="/logs/{}".format(time()),
                          histogram_freq=1, batch_size=32,
                          write_graph=True, write_grads=True,
                          write_images=True, embeddings_freq=0,
                          embeddings_layer_names=None,
                          embeddings_metadata=None,
                          embeddings_data=None, update_freq='epoch')
critic_loss = self.critic.fit([obs, advantage, old_prediction], [action],
                              batch_size=self.batch_size,
                              shuffle=True, epochs=self.epochs, verbose=0,
                              callbacks=[tensorboard])
But I'm getting this error:
TypeError: 'module' object is not callable
The TypeError: 'module' object is not callable is caused in your case by the time module.
I am assuming that you imported the time module as

import time

and then called time() directly:
tensorboard = TensorBoard(log_dir="/logs/{}".format(time()), ...)
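A minimal demonstration, independent of Keras: after import time, the name time is bound to the module object, which is not callable; the time() function lives inside the module.

import time

time()       # TypeError: 'module' object is not callable
time.time()  # works: returns the current Unix timestamp as a float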
It can be solved easily by importing the function directly, so that the name time refers to the function rather than the module:

from time import time
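With that import your original line works unchanged; a minimal sketch showing only the log_dir argument (the remaining arguments stay as they were):

from time import time
from keras.callbacks import TensorBoard

tensorboard = TensorBoard(log_dir="/logs/{}".format(time()))

Alternatively, keep import time and call time.time() instead.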