I have created a custom gym environment where the actions can be any integer from -100 to +100. As far as I have seen, it is not possible to create a discrete space that allows negative values, and the only solution I have come up with is to create a Box space from -100 to +100 (notice that this is a continuous space).

Since most reinforcement learning agents assume a discrete action space, I am having difficulties running my code (I know there are some agents, such as DDPG, that run in continuous action spaces).

Is it possible to have a discrete space in gym that allows negative values?
AFAIK, in OpenAI Gym discrete environments you have an index for each possible action, so you may not need negative values at all. You can simply map each action index to an arbitrary value, positive or negative.
For example, in the Cartpole environment you can apply a positive (push to the right) or a negative (push to the left) force to the cart. This problem is modeled with a discrete action space, where action 0 = negative force and action 1 = positive force. For more details, check the Cartpole source code (e.g., line 95).
Similarly, in your case, although your 201 action indexes (0 through 200, one per integer from -100 to +100) are all non-negative, they can represent either positive or negative actions.
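Concretely, the mapping could be a simple offset. Here is a minimal sketch (the helper names `index_to_action`/`action_to_index` are illustrative, not part of the gym API):

```python
# Hypothetical mapping between Discrete indexes and signed integer actions.
N_ACTIONS = 201  # the integers -100..+100 inclusive

def index_to_action(index: int) -> int:
    """Map a Discrete index 0..200 to the signed action -100..+100."""
    if not 0 <= index < N_ACTIONS:
        raise ValueError(f"index {index} is outside Discrete({N_ACTIONS})")
    return index - 100

def action_to_index(action: int) -> int:
    """Inverse mapping, e.g. for replaying logged actions."""
    if not -100 <= action <= 100:
        raise ValueError(f"action {action} is outside [-100, 100]")
    return action + 100
```

In your environment you would then declare `self.action_space = spaces.Discrete(201)` and apply `index_to_action` at the top of `step()`, so any agent that expects a standard discrete space works unchanged.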