I've created a custom openai gym environment with a discrete action space and a somewhat complicated state space. The state space has been defined as a Tuple because it combines some dimensions which are continuous and others which are discrete:
import gym
from gym import spaces

class CustomEnv(gym.Env):
    def __init__(self):
        self.action_space = spaces.Discrete(3)
        self.observation_space = spaces.Tuple((spaces.Discrete(16),
                                               spaces.Discrete(2),
                                               spaces.Box(0, 20000, shape=(1,)),
                                               spaces.Box(0, 1000, shape=(1,))))
        ...
I've had some luck training an agent using keras-rl (specifically the DQNAgent), but keras-rl is under-supported and very poorly documented. Any recommendations for RL packages that can handle this type of observation space? It doesn't appear that either openai baselines or stable-baselines can handle it at present.
Alternatively, is there a different way I can define my state space so that my environment fits into one of these better-supported packages?
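One common workaround (a sketch, not from the original post) is to flatten the Tuple into a single Box by encoding the discrete components as floats alongside the continuous ones; most libraries that reject Tuple spaces accept a flat Box. The helper name `flatten_obs` and the bounds below are assumptions based on the spaces in the question:

```python
import numpy as np

# Hypothetical helper: packs the four components of the Tuple observation
# -- Discrete(16), Discrete(2), Box(0, 20000, (1,)), Box(0, 1000, (1,)) --
# into one float32 vector, so the observation space can be a single Box.
def flatten_obs(d16, d2, box_a, box_b):
    return np.concatenate([
        np.array([d16, d2], dtype=np.float32),        # discrete dims as floats
        np.asarray(box_a, dtype=np.float32).ravel(),  # first continuous dim
        np.asarray(box_b, dtype=np.float32).ravel(),  # second continuous dim
    ])

# The matching observation space would then be declared as, e.g.:
#   spaces.Box(low=np.array([0, 0, 0, 0], dtype=np.float32),
#              high=np.array([15, 1, 20000, 1000], dtype=np.float32),
#              dtype=np.float32)
```

Note that this discards the categorical structure (the algorithm sees the discrete dims as continuous); one-hot encoding the two Discrete components into extra 0/1 dimensions is a common alternative if that matters.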
You may want to try RLlib, a reinforcement learning library built on ray, developed at UC Berkeley's RISELab:
https://rise.cs.berkeley.edu/projects/ray/
It includes a large number of implemented algorithms and is quite easy to use. You just need to add your environment, which is fully explained at: https://ray.readthedocs.io/en/latest/rllib-env.html
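For illustration, a minimal usage sketch (not tested here) of training the question's environment with RLlib's DQN via Tune, assuming `ray[rllib]` is installed and `CustomEnv` is the class defined in the question; the config keys follow the RLlib docs linked above:

```python
# Sketch only: assumes `CustomEnv` (from the question) is importable in
# this module, and that ray/rllib are installed.
import ray
from ray import tune

ray.init()
tune.run(
    "DQN",                                # algorithm name registered in RLlib
    stop={"timesteps_total": 100000},     # example stopping criterion
    config={
        "env": CustomEnv,                 # RLlib accepts a gym.Env class directly
        "num_workers": 1,
    },
)
```

RLlib handles Tuple (and Dict) observation spaces natively by flattening them internally, which is the main reason it fits this use case better than baselines/stable-baselines did.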