model-view-controller design-patterns game-ai

Game AI: Pattern for implementing Sense-Think-Act components?


I'm developing a game. Each entity in the game is a GameObject, and each GameObject is composed of a GameObjectController, a GameObjectModel, and a GameObjectView (or subclasses thereof).

For NPCs, the GameObjectController is split into three pieces (a rough sketch follows this list):

IThinkNPC: reads current state and makes a decision about what to do

IActNPC: updates state based on what needs to be done

ISenseNPC: reads current state to answer world queries (e.g. "am I in the shadows?")
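
Roughly, the pieces fit together like this (a sketch; the method signatures here are illustrative, not my exact code):

    public interface IThinkNPC
    {
        // Reads current state (via ISenseNPC) and decides the next behavior.
        BehaviorState Think(BehaviorState currentBehavior);
    }

    public interface IActNPC
    {
        // Updates game state to carry out the chosen behavior.
        void Act(BehaviorState behavior, float elapsedSeconds);
    }

    public class NPCController : GameObjectController
    {
        private readonly IThinkNPC think;
        private readonly IActNPC act;
        private BehaviorState behavior;

        public NPCController(IThinkNPC think, IActNPC act)
        {
            this.think = think;
            this.act = act;
        }

        public void Update(float elapsedSeconds)
        {
            behavior = think.Think(behavior);
            act.Act(behavior, elapsedSeconds);
        }
    }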

My question: Is this ok for the ISenseNPC interface?

    public interface ISenseNPC
    {
        // ...

        /// <summary>
        /// True if `dest` is a safe point to which to retreat.
        /// </summary>
        /// <param name="dest">Candidate retreat destination.</param>
        /// <param name="angleToThreat">Angle from the NPC to the threat.</param>
        /// <param name="range">Range used for the safety check.</param>
        bool IsSafeToRetreat(Vector2 dest, float angleToThreat, float range);

        /// <summary>
        /// Finds a new location to which to retreat.
        /// </summary>
        /// <param name="angleToThreat">Angle from the NPC to the threat.</param>
        Vector2 NewRetreatDest(float angleToThreat);

        /// <summary>
        /// Returns the closest LightSource that illuminates the NPC,
        /// or null if the NPC is not illuminated.
        /// </summary>
        ILightSource ClosestIlluminatingLight();

        /// <summary>
        /// True if the NPC is sufficiently far away from its target.
        /// Assumes the target is the only entity it could ever run from.
        /// </summary>
        bool IsSafeFromTarget();
    }

Note that none of the methods take a reference to the game world or to the NPC's own state. Instead, the implementation is expected to maintain a reference to the relevant GameObjectController and read what it needs from that.

However, I'm now trying to write unit tests for this. Since I can't pass arguments directly, I obviously have to use mocking. The way I'm doing it feels really brittle: what if another implementation comes along that uses the world-query utilities in a different way? Really, I'm not testing the interface; I'm testing the implementation. Poor.
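
For example, here's roughly what one of my tests looks like (using Moq and NUnit here; IWorldQuery, LightsNear, and SenseNPC stand in for my actual types):

    // using Moq; using NUnit.Framework;
    [Test]
    public void ClosestIlluminatingLight_ReturnsNull_WhenNoLightsInRange()
    {
        // The test has to know that SenseNPC asks the world for nearby lights.
        // That's an implementation detail, not part of the ISenseNPC contract,
        // which is exactly why this feels brittle.
        var world = new Mock<IWorldQuery>();
        world.Setup(w => w.LightsNear(It.IsAny<Vector2>(), It.IsAny<float>()))
             .Returns(new List<ILightSource>());

        ISenseNPC sense = new SenseNPC(world.Object);

        Assert.IsNull(sense.ClosestIlluminatingLight());
    }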

The reason I used this pattern in the first place was to keep IThinkNPC implementation code clean:

    public BehaviorState RetreatTransition(BehaviorState currentBehavior)
    {
        if (sense.IsCollidingWithTarget())
        {
            NPCUtils.TraceTransitionIfNeeded(ToString(), BehaviorState.ATTACK.ToString(), "is colliding with target");
            return BehaviorState.ATTACK;
        }

        if (sense.IsSafeFromTarget() && sense.ClosestIlluminatingLight() == null)
        {
            return BehaviorState.WANDER;
        }

        if (sense.ClosestIlluminatingLight() != null && sense.SeesTarget())
        {
            NPCUtils.TraceTransitionIfNeeded(ToString(), BehaviorState.CHASE.ToString(), "sees target while illuminated");
            return BehaviorState.CHASE;
        }
        return currentBehavior;
    }

Perhaps the cleanliness isn't worth it, however.

So, if the ISenseNPC methods took all the parameters they need every time, I could make them static. Is there any problem with that?
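
For reference, the static version would look something like this (a minimal sketch; the distance checks below just stand in for whatever the real queries would do):

    public static class NPCSense
    {
        // All inputs are explicit, so unit tests can call these directly
        // with hand-built arguments instead of mocking a controller.
        public static bool IsSafeFromTarget(Vector2 npcPos, Vector2 targetPos, float safeDistance)
        {
            return Vector2.Distance(npcPos, targetPos) >= safeDistance;
        }

        public static bool IsSafeToRetreat(Vector2 dest, Vector2 threatPos, float range)
        {
            // Safe if the destination keeps at least `range` between NPC and threat.
            return Vector2.Distance(dest, threatPos) >= range;
        }
    }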


Solution

  • NO. No no no. You're creating a ridiculous number of hidden (and not-so-hidden) dependencies in your AI. First, MVC is not really a good pattern to use here: there is no "view" the AI needs to care about, only actions. And your "model" here is really the state of the world as known to the AI at that moment, which is entirely separate from the AI itself. (You could treat that state as a "view" of the game world, a snapshot of your objects' positions and attributes; I've done it that way, and it was highly effective.)

    The core problem, however, is that your RetreatTransition code is tightly coupled to actions and state. What happens when you have to make a change? What if you needed 200 different types of AI that are all similar; how would you maintain that? You couldn't. It would be a mess. You're effectively building a state machine here, and state machines don't scale well. You also can't add, change, or remove a state without editing code.

    What I would recommend instead is moving to a different architecture. Your TDD approach here is great, but take a step back and study the common AI architectures before you commit to one. I would start with Jeff Orkin's excellent paper "Three States and a Plan: The A.I. of F.E.A.R." (http://web.media.mit.edu/~jorkin/goap.html), which describes the goal-oriented action planning (GOAP) architecture behind F.E.A.R. I've implemented it before, and it was highly effective and stupid-easy to design and maintain. Its core design also lends itself well to TDD (actually, BDD is an even better fit).
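
    To give a flavor of the goal-based idea (a toy sketch of the GOAP concept, not Orkin's actual code; GoapAction and the belief keys are made up): each action declares preconditions and effects over a symbolic world state, and a planner chains actions together to satisfy a goal, so adding a new behavior means adding a new action rather than editing a transition function.

        // using System.Collections.Generic;
        public class GoapAction
        {
            public string Name;
            public float Cost = 1f;
            public Dictionary<string, bool> Preconditions = new Dictionary<string, bool>();
            public Dictionary<string, bool> Effects = new Dictionary<string, bool>();

            // True if this action is runnable in the given symbolic world state.
            public bool IsValid(Dictionary<string, bool> state)
            {
                foreach (var pre in Preconditions)
                {
                    bool value;
                    if (!state.TryGetValue(pre.Key, out value) || value != pre.Value)
                        return false;
                }
                return true;
            }
        }

        // Example: "TakeCover" only makes sense when a threat is visible,
        // and its effect satisfies a goal like { "InCover" = true }:
        //   var takeCover = new GoapAction { Name = "TakeCover", Cost = 2f };
        //   takeCover.Preconditions["ThreatVisible"] = true;
        //   takeCover.Effects["InCover"] = true;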

    Another thing: your ISenseNPC looks like it's tightly coupled to world state. The percepts of your AI (the things it can observe about the world) should be completely separate. That says to me you should have a class, say WorldModel, that gets passed in to the ISenseNPC object; the ISenseNPC then inspects the WorldModel for relevant information via its percepts. (Think of a percept as one way the AI can perceive the world: a sensor, a vision radius, sonar, and so on.) You can even create individual percepts and add them to your ISenseNPC. That decouples the world state, the way an AI perceives that world, and the AI's understanding of the world itself. From there, your AI can make decisions about what it should do.
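
    In code, that shape might look something like this (WorldModel, IPercept, and DistanceToNearestThreat are illustrative names, not a prescribed API):

        // using System.Collections.Generic;
        public interface IPercept
        {
            // Reads the world model and updates the AI's beliefs about the world.
            void Sense(WorldModel world, Dictionary<string, bool> beliefs);
        }

        public class VisionPercept : IPercept
        {
            private readonly float radius;
            public VisionPercept(float radius) { this.radius = radius; }

            public void Sense(WorldModel world, Dictionary<string, bool> beliefs)
            {
                // Hypothetical WorldModel query; swap in your real one.
                beliefs["ThreatVisible"] = world.DistanceToNearestThreat() <= radius;
            }
        }

        public class Perception
        {
            private readonly List<IPercept> percepts = new List<IPercept>();
            private readonly Dictionary<string, bool> beliefs = new Dictionary<string, bool>();

            public void Add(IPercept percept) { percepts.Add(percept); }

            // Run every percept against the current world snapshot.
            public void Update(WorldModel world)
            {
                foreach (var percept in percepts) percept.Sense(world, beliefs);
            }

            // Sense queries then read from `beliefs` instead of live world state.
            public bool Believes(string key)
            {
                bool value;
                return beliefs.TryGetValue(key, out value) && value;
            }
        }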

    You're modeling a simple reflex agent: a set of rules that respond to a given percept sequence. That's fine for simple AI, though it's basically a glorified state machine. But you can then create a mapping of percepts to behaviors in your Think object, maintained separately, so that changing or extending the mapping does not require changing code (the Single Responsibility Principle at work). Furthermore, you could build a game editor that enumerates all percepts, decisions, and actions and links them together for any given AI, letting you maintain your AIs without going into the game or (potentially) even rebuilding the code. I think you'll find that far more flexible and maintainable than what you're trying to do here. Ditch MVC for this particular thing: MVC is highly suited to graphics and, to a lesser extent, physics, but it doesn't fit AI well, since AI doesn't really have a "view."
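
    As a rough illustration of that mapping (a sketch assuming the belief dictionary from the previous snippet; the rule table could just as easily be loaded from a data file or built by an editor):

        // using System; using System.Collections.Generic;
        public class Rule
        {
            public Func<Dictionary<string, bool>, bool> When;
            public BehaviorState Then;
        }

        public class ReflexThink
        {
            // The percept-to-behavior mapping lives in data, not in branching code;
            // these three rules reproduce the RetreatTransition logic above.
            private readonly List<Rule> rules = new List<Rule>
            {
                new Rule { When = b => b["CollidingWithTarget"],                 Then = BehaviorState.ATTACK },
                new Rule { When = b => b["SafeFromTarget"] && !b["Illuminated"], Then = BehaviorState.WANDER },
                new Rule { When = b => b["Illuminated"] && b["SeesTarget"],      Then = BehaviorState.CHASE  },
            };

            public BehaviorState Think(Dictionary<string, bool> beliefs, BehaviorState current)
            {
                foreach (var rule in rules)
                    if (rule.When(beliefs)) return rule.Then;
                return current;  // no rule fired: keep the current behavior
            }
        }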

    Please let me know if you have any other questions about this. I've had some experience implementing a goal-based architecture for a game, as well as some other approaches, and I'd be happy to help you out.