machine-learning · reinforcement-learning · openai-gym · ray · rllib

Using Ray RLlib with custom simulator


I'm very new to Ray RLlib and have an issue with using a custom simulator my team made. We're trying to integrate a custom Python-based simulator into Ray RLlib for single-agent DQN training. However, I'm uncertain how to expose the simulator to RLlib as an environment.

According to the image below from the Ray documentation, it seems like I have two options:

  1. Standard environment: according to the Carla simulator example, it seems I can simply wrap my custom simulator with the gym.Env class API and register it as an environment using the ray.tune.registry.register_env function.
  2. External environment: however, the image below and the RLlib documentation confuse me further, since they suggest that external simulators that can run independently, outside the control of RLlib, should be used via the ExternalEnv class.

If anyone can suggest what I should do, it would be very much appreciated! Thanks!

[Figure: Ray RLlib Environments]


Solution

  • If your environment can indeed be structured to fit the Gym style (the __init__, reset, and step methods), you can use the first option; see the first sketch below.

    The external environment option is mostly for RL environments that don't fit this style, for example a web-browser-based application (test automation, etc.) or a continuously running finance app; see the second sketch below.
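For the first option, here's a minimal sketch of what the wrapper could look like. MySimulator and its reset/step calls are hypothetical stand-ins for your team's simulator, and the observation/action spaces are placeholders you'd replace with your own:

    import gym
    import numpy as np
    from gym import spaces

    import ray
    from ray import tune
    from ray.tune.registry import register_env

    from my_sim import MySimulator  # hypothetical handle to your simulator


    class MySimEnv(gym.Env):
        """Wraps the custom simulator behind the standard Gym interface."""

        def __init__(self, env_config):
            # env_config is the dict RLlib passes in from the trainer config
            self.sim = MySimulator(**env_config)
            # Placeholder spaces; describe what your agent actually sees/does
            self.observation_space = spaces.Box(
                low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32)
            self.action_space = spaces.Discrete(2)

        def reset(self):
            obs = self.sim.reset()  # hypothetical simulator call
            return np.asarray(obs, dtype=np.float32)

        def step(self, action):
            # hypothetical: simulator advances one tick and reports back
            obs, reward, done = self.sim.step(action)
            return np.asarray(obs, dtype=np.float32), reward, done, {}


    # Register under a name RLlib can look up, then train DQN against it
    register_env("my_sim_env", lambda env_config: MySimEnv(env_config))

    ray.init()
    tune.run("DQN", config={"env": "my_sim_env", "env_config": {}})

With this option, RLlib owns the control flow: it decides when to call reset and step.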
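If the simulator really does run its own loop and can't be stepped by RLlib, the control flow is inverted and looks roughly like the sketch below. The sim object and its reset/step calls are again hypothetical; start_episode, get_action, log_returns, and end_episode are the actual ExternalEnv API:

    from ray.rllib.env.external_env import ExternalEnv


    class MySimExternalEnv(ExternalEnv):
        """The simulator drives the loop and queries RLlib for actions."""

        def __init__(self, action_space, observation_space, sim):
            super().__init__(action_space, observation_space)
            self.sim = sim  # hypothetical handle to the running simulator

        def run(self):
            # RLlib calls run() in its own thread; it should loop forever,
            # serving one episode at a time.
            while True:
                episode_id = self.start_episode()
                obs = self.sim.reset()  # hypothetical
                done = False
                while not done:
                    action = self.get_action(episode_id, obs)
                    obs, reward, done = self.sim.step(action)  # hypothetical
                    self.log_returns(episode_id, reward)
                self.end_episode(episode_id, obs)

An ExternalEnv subclass is registered with register_env the same way as a gym.Env, so switching between the two approaches later is cheap.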