Tags: simulation, rules, rule-engine, agent-based-modeling, traffic-simulation

How to implement a rule-based decision maker for an agent-based model?


I am having a hard time understanding how to implement rule-based decision making for an agent in an agent-based model I am trying to develop.

The interface of the agent is a very simple one.

public interface IAgent
{
   public string ID { get; }

   public Action Percept(IPercept percept);
}

For the sake of the example, let's assume that the agents represent Vehicles which traverse roads inside a large warehouse in order to load and unload their cargo. Their route (the sequence of roads from the start point to the agent's destination) is assigned by another agent, the Supervisor. The goal of a vehicle agent is to traverse its assigned route, unload its cargo, load a new one, receive another route from the Supervisor, and repeat the process.

The vehicles must also be aware of potential collisions, for example at intersection points, and give priority based on some rules (for example, the one carrying the heaviest cargo has priority).

As far as I can understand, this is the internal structure of the agents I want to build:

[Figure: diagram of the agent's internal structure]

So the Vehicle Agent can be something like:

public class Vehicle : IAgent
{
  public Vehicle(string id)
  {
    ID = id;
  }

  public string ID { get; }

  public VehicleStateUpdater StateUpdater { get; set; }

  public RuleSet RuleSet { get; set; }

  public VehicleState State { get; set; }

  public Action Percept(IPercept percept)
  {
    StateUpdater.UpdateState(State, percept);
    Rule validRule = RuleSet.Match(State);
    StateUpdater.UpdateState(State, validRule);
    Action nextAction = validRule.GetAction();
    return nextAction;
  }
}
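For context, this is how I picture the agents being driven by the simulation: a loop that feeds each agent a percept every tick and applies the returned action. The IEnvironment contract below is just a sketch of mine, not part of the model:

using System.Collections.Generic;

// Hypothetical environment contract assumed by the loop.
public interface IEnvironment
{
    IPercept PerceptFor(IAgent agent);
    void Apply(IAgent agent, Action action);
}

public static class SimulationLoop
{
    // Each tick, every agent perceives and acts; Action here is the
    // domain type returned by IAgent.Percept, not System.Action.
    public static void Tick(IEnvironment environment, IEnumerable<IAgent> agents)
    {
        foreach (IAgent agent in agents)
        {
            IPercept percept = environment.PerceptFor(agent);
            Action action = agent.Percept(percept);  // the agent decides
            environment.Apply(agent, action);        // the world changes
        }
    }
}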

For the Vehicle agent's internal state I was considering something like:

public class VehicleState
{
  public Route Route { get; set; }

  public Cargo Cargo { get; set; }

  public Location CurrentLocation { get; set; }
}

For this example, 3 rules must be implemented for the Vehicle Agent.

  1. If another vehicle is near the agent (e.g. less than 50 meters), then the one with the heaviest cargo has priority, and the other agents must hold their position.
  2. When an agent reaches its destination, it unloads the cargo, loads a new one, and waits for the Supervisor to assign a new route.
  3. At any given moment, the Supervisor, for whatever reason, might send a command, which the recipient vehicle must obey (Hold Position or Continue).

The VehicleStateUpdater must take into consideration the current state of the agent and the type of the received percept, and change the state accordingly. So, in order for the state to reflect that, e.g., a command was received from the Supervisor, one can modify it as follows:

public class VehicleState
{
  public Route Route { get; set; }

  public Cargo Cargo { get; set; }

  public Location CurrentLocation { get; set; }

  // Additional Property
  public RadioCommand ActiveCommand { get; set; }
}

Where RadioCommand can be an enumeration with values None, Hold, Continue.
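In code, the enumeration is simply:

public enum RadioCommand
{
  None,     // no active command from the Supervisor
  Hold,     // hold position until further notice
  Continue  // resume traversing the assigned route
}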

But now I must also record in the agent's state whether another vehicle is approaching. So I must add more properties to the VehicleState.

public class VehicleState
{
  public Route Route { get; set; }

  public Cargo Cargo { get; set; }

  public Location CurrentLocation { get; set; }

  public RadioCommand ActiveCommand { get; set; }

  // Additional properties
  public bool IsAnotherVehicleApproaching { get; set; }

  public Location ApproachingVehicleLocation { get; set; }
}

This is where I have huge trouble understanding how to proceed, and I get the feeling that I am not really following the correct approach. First, I am not sure how to make the VehicleState class more modular and extensible. Second, I am not sure how to implement the rule-based part that defines the decision-making process. Should I create mutually exclusive rules (which would mean every possible state corresponds to no more than one rule)? Is there a design approach that will allow me to add additional rules without having to go back and forth to the VehicleState class and add/modify properties in order to make sure that every possible type of Percept can be handled by the agent's internal state?

I have seen the examples demonstrated in the Artificial Intelligence: A Modern Approach textbook and in other sources, but the available examples are too simple for me to "grasp" the concept in question when a more complex model must be designed.

I would be grateful if someone can point me in the right direction concerning the implementation of the rule-based part.

I am writing in C#, but as far as I can tell the language is not really relevant to the broader issue I am trying to solve.

UPDATE:

An example of a rule I tried to incorporate:

public class HoldPositionCommandRule : IAgentRule<VehicleState>
{
    public int Priority { get; } = 0;

    public bool ConcludesTurn { get; } = false;

    public void Fire(IAgent agent, VehicleState state, IActionScheduler actionScheduler)
    {
        state.Navigator.IsMoving = false;
        //Use action scheduler to schedule subsequent actions...
    }

    public bool IsValid(VehicleState state)
    {
        bool isValid = state.RadioCommandHandler.HasBeenOrderedToHoldPosition;
        return isValid;
    }
}
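For reference, the IAgentRule<TState> interface this rule implements looks like this (reconstructed from the members used above; IActionScheduler is deliberately left opaque):

public interface IAgentRule<TState>
{
    // Used to rank rules when several are valid at once.
    int Priority { get; }

    // Whether firing this rule ends the agent's turn.
    bool ConcludesTurn { get; }

    // Guard: does this rule apply to the given state?
    bool IsValid(TState state);

    // Effect: mutate the state and/or schedule follow-up actions.
    void Fire(IAgent agent, TState state, IActionScheduler actionScheduler);
}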

A sample of the agent decision maker that I also tried to implement:

public void Execute(IAgentMessage message,
                    IActionScheduler actionScheduler)
{
    _agentStateUpdater.Update(_state, message);
    Option<IAgentRule<TState>> validRule = _ruleMatcher.Match(_state);
    validRule.MatchSome(rule => rule.Fire(this, _state, actionScheduler));
}
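For completeness, the rule matcher can be as simple as picking the highest-priority valid rule. A minimal sketch follows (the IRuleMatcher name is illustrative, and Option is assumed to come from the Optional NuGet package, consistent with the MatchSome call above):

using System.Collections.Generic;
using System.Linq;
using Optional;

public interface IRuleMatcher<TState>
{
    Option<IAgentRule<TState>> Match(TState state);
}

public class PriorityRuleMatcher<TState> : IRuleMatcher<TState>
{
    private readonly List<IAgentRule<TState>> _rules;

    public PriorityRuleMatcher(IEnumerable<IAgentRule<TState>> rules)
    {
        // Order once, so Match can simply take the first valid rule.
        _rules = rules.OrderByDescending(rule => rule.Priority).ToList();
    }

    public Option<IAgentRule<TState>> Match(TState state)
    {
        var winner = _rules.FirstOrDefault(rule => rule.IsValid(state));
        return winner != null
            ? Option.Some(winner)
            : Option.None<IAgentRule<TState>>();
    }
}

Under this scheme the rules do not have to be mutually exclusive: when several rules are valid at once, the priority order decides which one fires.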

Solution

I see your question as containing two main sub-questions, modeling flexibility and rule-based modeling, so let's go through each of them.

Modeling Flexibility

I think what you have now is not too bad, actually. Let me explain why.

You ask whether there is "a design approach that will allow me to add additional rules without having to go back and forth to the VehicleState class and add/modify properties".

I think the answer to that is "no", unless you follow the completely different path of having agents learn rules and properties autonomously (as in Deep Reinforcement Learning), which comes with its own set of difficulties.

If you are going to manually encode the agent knowledge as described in your question, then how would you avoid the need to introduce new properties as you add new rules? You could of course try to anticipate all properties you will need and not allow yourself to write rules that need new properties, but the nature of new rules is to bring new aspects of the problem, which will often require new properties. This is not unlike software engineering, which requires multiple iterations and changes.
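To make that concrete with a hypothetical example: suppose you add a fourth rule, "if the battery is low, drive to the nearest charging station". There is simply no way to express its condition without also extending the state:

public class VehicleState
{
  public Route Route { get; set; }

  public Cargo Cargo { get; set; }

  public Location CurrentLocation { get; set; }

  public RadioCommand ActiveCommand { get; set; }

  public bool IsAnotherVehicleApproaching { get; set; }

  public Location ApproachingVehicleLocation { get; set; }

  // Hypothetical: the new rule cannot even be stated without this.
  public double BatteryLevel { get; set; }
}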

Rule-based Modeling

There are two ways of writing rules: imperative and declarative.

One intuitive way to understand the difference between the imperative and declarative styles is to think about writing an agent that plays chess. In the imperative style, the programmer encodes the rules of chess, but also how to play chess: how to open the game, how to choose the best move, and so on. That is to say, the system reflects the chess skills of the programmer. In the declarative style, the programmer simply encodes the rules of chess, and the system explores those rules automatically to identify the best move. In this case, the programmer doesn't need to know how to play chess well for the program to actually play a decent game of chess.

The imperative style is simpler to implement, but less flexible, and can get really messy as the complexity of your system grows. You have to start thinking about all sorts of scenarios, such as what to do when three vehicles meet. In the chess example, imagine we alter a rule of chess slightly; the whole system needs to be reviewed! In a way, there is little "artificial intelligence" or "reasoning" in an imperative-style system, because it is the programmer who does all the reasoning in advance, coming up with all the solutions and encoding them. It is a regular program, as opposed to an artificial intelligence program. This seems to be the sort of difficulty you are describing.

The declarative style is more elegant and extensible. You don't need to figure out how to determine the best action; the system does it for you. In the chess example, you can easily alter one rule of chess in the code, and the system will use the new rule to find the best moves in the altered game. However, it requires an inference engine: the piece of software that knows how to take in a collection of rules and utilities and decide which action is best. Such an inference engine is the "artificial intelligence" in the system. It automatically considers all possible scenarios (not necessarily one by one, as it will typically employ smarter techniques that consider classes of scenarios) and determines the best action in each of them. However, an inference engine is complex to implement, and if you use an existing one it is probably quite limited, since those are typically research packages. I believe that, when it comes to real practical applications of the declarative approach, people pretty much write a bespoke system for their particular needs.
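To make the contrast concrete, here is a toy sketch of the declarative idea (the names and the utility-based selection are entirely illustrative, not taken from any particular engine): rules become plain data, and a generic engine, rather than the programmer, decides which applicable action leads to the best outcome:

using System;
using System.Collections.Generic;
using System.Linq;

// A rule is pure data: when it applies, and what it would do to the state.
// Effect must be side-effect-free so the engine can simulate outcomes.
public record DeclarativeRule<TState>(
    string Name,
    Func<TState, bool> Applies,
    Func<TState, TState> Effect);

public static class InferenceEngine
{
    // Pick the applicable rule whose simulated outcome scores highest.
    // Changing the rules changes the behavior; this code never changes.
    public static DeclarativeRule<TState> ChooseBest<TState>(
        TState state,
        IEnumerable<DeclarativeRule<TState>> rules,
        Func<TState, double> utility)
    {
        return rules
            .Where(rule => rule.Applies(state))
            .OrderByDescending(rule => utility(rule.Effect(state)))
            .FirstOrDefault();
    }
}

A rule such as "the heaviest cargo has priority" then becomes one more entry in the rule list, and altering it never touches the engine; real inference engines are of course far more sophisticated, but the division of labor is the same.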

I found a few open-source research projects along those lines (see the links below); they will give you an idea of what is available. As you can see, they are research projects and relatively limited in scope.

After all that, how should you proceed? I don't know what your particular goals are. If you are developing a toy problem to practice, your current imperative-style system may be enough. If you want to learn about the declarative style, a deeper reading of the AIMA textbook would be good. The authors maintain an open-source repository with implementations of some of the algorithms in the book, too.

https://www.jmlr.org/papers/v18/17-156.html

https://github.com/douthwja01/OpenMAS

https://smartgrid.ieee.org/newsletters/may-2021/multi-agent-opendss-an-open-source-and-scalable-distribution-grid-platform