openai-gym · dqn · keras-rl

Training a DQN Agent with a MultiDiscrete action space in gym


I would like to train a DQN agent with Keras-rl. My environment has both a multi-discrete action space and a multi-discrete observation space. I am adapting the code from this video: https://www.youtube.com/watch?v=bD6V3rcr_54&t=5s

Here is my code:

from gym import Env
from gym.spaces import MultiDiscrete

class ShowerEnv(Env):
    def __init__(self, max_machine_states_vec, production_rates_vec, production_threshold, scheduling_horizon, operations_horizon=100):
        """
        Returns:
        self.action_space is a vector with the maximum production rate for each machine, a binary call-to-maintenance and a binary call-to-schedule
        """
        num_machines = len(max_machine_states_vec)
        assert len(max_machine_states_vec) == len(production_rates_vec), "Machine states and production rates have different cardinality"
        # Action space: a production rate from 0 to N for each machine, plus the binary maintenance and scheduling choices
        self.action_space = MultiDiscrete(production_rates_vec + num_machines * [2] + [2])
        # Observation space: states 0, ..., L for each machine, plus the scheduling state including "ns" (None = "ns")
        self.observation_space = MultiDiscrete(max_machine_states_vec + [scheduling_horizon + 2])
        # ... rest of __init__ and the remaining methods ...
...
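For concreteness, this is what the spaces look like for two machines (the constructor arguments below are hypothetical, chosen only for illustration):

env = ShowerEnv(max_machine_states_vec=[5, 5],
                production_rates_vec=[2, 2],
                production_threshold=3,
                scheduling_horizon=10)
print(env.action_space)       # MultiDiscrete([2 2 2 2 2])
print(env.action_space.nvec)  # [2 2 2 2 2]
print(env.observation_space)  # MultiDiscrete([5 5 12])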
from functools import reduce
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def build_model(states, actions):
    # One Q-value output per combination of sub-actions (2*2*2*2*2 = 32 for two machines)
    actions_number = reduce(lambda a, b: a * b, actions)
    model = Sequential()
    model.add(Dense(24, activation='relu', input_shape=(1, states[0])))
    model.add(Dense(24, activation='relu'))
    model.add(Dense(actions_number, activation='linear'))
    return model
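For comparison, keras-rl's own DQN examples start the network with a Flatten layer, which collapses the window dimension so that the output is 2-D, i.e. (None, actions_number) instead of (None, 1, actions_number). A sketch of that variant (the name build_model_flat is mine):

from tensorflow.keras.layers import Flatten

def build_model_flat(states, actions):
    actions_number = reduce(lambda a, b: a * b, actions)
    model = Sequential()
    model.add(Flatten(input_shape=(1, states[0])))  # collapse the (window, features) input to a vector
    model.add(Dense(24, activation='relu'))
    model.add(Dense(24, activation='relu'))
    model.add(Dense(actions_number, activation='linear'))
    return model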

from rl.agents.dqn import DQNAgent
from rl.policy import BoltzmannQPolicy
from rl.memory import SequentialMemory

def build_agent(model, actions):
    policy = BoltzmannQPolicy()
    memory = SequentialMemory(limit=50000, window_length=1)
    dqn = DQNAgent(model=model, memory=memory, policy=policy,
                   nb_actions=actions, nb_steps_warmup=10, target_model_update=1e-2)
    return dqn
...
from tensorflow.keras.optimizers import Adam

states = env.observation_space.shape
actions = env.action_space.nvec  # the per-dimension sizes, e.g. [2 2 2 2 2] for two machines

model = build_model(states, actions)
model.summary()

dqn = build_agent(model, actions)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])
dqn.fit(env, nb_steps=50000, visualize=False, verbose=1)

After initializing the environment with 2 machines, so a 5-dimensional action space, I get the following error:

ValueError: Model output "Tensor("dense_2/BiasAdd:0", shape=(None, 1, 32), dtype=float32)" has invalid shape. DQN expects a model that has one dimension for each action, in this case [2 2 2 2 2]
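Printing the relevant shapes (a quick check with the objects defined above) shows the mismatch:

print(env.action_space.nvec)  # [2 2 2 2 2] -- an array, but DQNAgent's nb_actions must be a single int
print(model.output.shape)     # (None, 1, 32) -- 3-D, whereas keras-rl expects (None, nb_actions)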

How can I solve this? I am quite sure the problem is that I do not fully understand how to adapt the code in the video to a MultiDiscrete action space. Thanks :)


Solution

  • I had the same problem: unfortunately it is impossible to use gym.spaces.MultiDiscrete with the DQNAgent in Keras-rl, because DQN expects nb_actions to be a single integer, i.e. a Discrete action space.

    Solution:

    Use the library stable-baselines3 with the A2C agent, which supports MultiDiscrete action spaces out of the box. It is very easy to set up.
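
    A minimal sketch (the ShowerEnv constructor arguments are hypothetical, matching the 2-machine example above):

        from stable_baselines3 import A2C

        env = ShowerEnv(max_machine_states_vec=[5, 5],
                        production_rates_vec=[2, 2],
                        production_threshold=3,
                        scheduling_horizon=10)

        # A2C handles MultiDiscrete action spaces natively, so no flattening is needed
        model = A2C("MlpPolicy", env, verbose=1)
        model.learn(total_timesteps=50000)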