Sunday, October 23, 2011

Path Replanning


I've been picking on path replanning again. It is an interestingly annoying problem. I have currently settled on the following idea to make it somewhat manageable.

Prerequisite
In an ever-changing dynamic game world, path finding and path following become statistical. The chances are that most of the time your requests do not pan out the way they were made. It is really important to have good failure recovery, both at the movement system level as well as at the behavior level. Plan for the worst, hope for the best.

Movement Request
At the core of the problem is the movement request. A movement request tells the crowd/movement system that a higher level logic would like an agent to move to a specific location. This call happens asynchronously.

Path to Target
When the request is processed, the movement system will find the best path to the goal location. If something is blocking the path, a partial path will be used and the caller is signaled about the result. From this point on, the system will do its best to move the agent along this route to the best location that was just found.

Path Following
If a disturbance is found during path following, the movement system will do a small local search (same as the topology optimization) in order to fix the disturbance. If the disturbance cannot be solved, the system will signal that the path is blocked and continue to move the agent up to the last valid location on the current path unless it is interrupted.

API
This could lead to the following interface between the high level logic and the movement system.


struct MovementRequestCallback
{
 // Called after path query is processed.
 // (Path result accessible via ag->corridor)
 // Return true if path result is accepted.
 virtual bool result(Agent* ag) = 0;

 // Called when path is blocked and
 // cannot be fixed.
 // Return true if movement should be continued.
 virtual bool blocked(Agent* ag) = 0;

 // Called when the agent has reached
 // the end of the path trigger area.
 // Return true if movement should be continued.
 virtual bool done(Agent* ag) = 0; 
};

bool dtCrowd::requestMovement(int agent,
 const float* pos, const dtPolyRef ref,
 MovementRequestCallback* cb);

The 'result' callback allows the high level behavior to cancel movement in case an unfavorable path result is found (i.e. a partial path), or it allows the behavior to set some internal state based on the path result.

The 'blocked' function allows the higher level logic to handle more costly replans. For example, the higher level logic can try to move to a location 3 times until it gives up and tries something else, and it can even react to replans if necessary.

The 'done' function allows the caller to inject extra logic on what to do when the path is finished. For example, a 'follow enemy' behavior may want to keep on following the goal even after it is reached, whilst 'move to cover' might do a state transition to some other behavior when the movement is done.
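To illustrate how a behavior might use these callbacks, here is a minimal sketch of a hypothetical 'follow enemy' behavior. The Agent and MovementRequestCallback stand-ins below mirror the interface above; the retry count and the stand-in types are my assumptions, not part of any actual API.

```cpp
#include <cassert>

// Minimal stand-ins so the sketch is self-contained (assumptions).
struct Agent {};

struct MovementRequestCallback
{
    virtual ~MovementRequestCallback() {}
    virtual bool result(Agent* ag) = 0;
    virtual bool blocked(Agent* ag) = 0;
    virtual bool done(Agent* ag) = 0;
};

// Hypothetical 'follow enemy' behavior: accept any path result
// (a partial path is better than nothing), retry a blocked path a
// few times, and keep moving on 'done' since the target likely moves.
struct FollowEnemyCallback : public MovementRequestCallback
{
    int retries;
    FollowEnemyCallback() : retries(0) {}

    virtual bool result(Agent*) { return true; }
    virtual bool blocked(Agent*) { return ++retries < 3; }
    virtual bool done(Agent*) { return true; }
};
```

The point is that all the retry and give-up policy lives in the behavior, not in the movement system.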

The general idea is to move as much of the state handling behavior out of the general code as possible, and to make the replan as cheap as possible. The downside is that the replan cannot react to big changes in the game world, but I argue that that is not necessary and should be handled with higher level logic anyway (i.e. the path can become much longer).

What do you think?

Monday, August 1, 2011

Hierarchical Pathfinding in Detour

Uwe Koch has written a great thesis about hierarchical pathfinding with navmeshes. He presents a generalized version of HPA* implemented in Detour. The short summary: 4-5 times faster than Detour's A*, and uses 10-20 times less memory (graph nodes).

Applying graph partitioning to hierarchical pathfinding in computer games

Path Replanning in DetourCrowd


I just added the first version of path replanning for DetourCrowd. If a polygon becomes invalid along the path, the path is replanned. An invalid polygon means that the polygon just disappears (i.e. a tile is removed), or its flags don't match the filter anymore. In the above video, the red polygons have disabled flags and the agents react to the changes over time.

The way it works in practice is that before the current path is followed, we peek ahead to make sure the next N polygons are valid (I used 10 in the above example). If any polygon in that sub path is invalid, a replan is issued from the furthest valid position along the path. If the current polygon becomes invalid, or if the target location becomes invalid, the agent will stop.
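The look-ahead could be sketched roughly like this. It is a simplification; the real code works on the corridor's polygon refs against the query filter, and the validity callback here is a stand-in:

```cpp
#include <cassert>

// Number of path polygons to peek ahead (10 in the example above).
static const int LOOK_AHEAD = 10;

// Returns the index of the first invalid polygon among the next N
// on the path, or -1 if they are all valid. A replan would then be
// issued from the last valid position before that index.
int findFirstInvalid(const unsigned int* path, int npath,
                     bool (*isValidPoly)(unsigned int))
{
    const int n = npath < LOOK_AHEAD ? npath : LOOK_AHEAD;
    for (int i = 0; i < n; ++i)
        if (!isValidPoly(path[i]))
            return i;
    return -1;
}
```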

There are some things I don't like about this method. Firstly, it is quite wasteful in resources. Issuing an almost full replan is pretty horrible. I tried some local repair operations, but they ended up being too complicated and hard to time-slice.

Secondly, the replanning does not react if a new avenue becomes available. The topology optimization pass will catch many of these cases, but if the goal was inaccessible when the replanning happened, the current implementation will not try to replan when it reaches the end of the path.

The end of the path could be flagged, or some other tricks used, but I think I might be missing a bit bigger picture here. What does a movement request actually mean?

Example of nasty case

Here's a simple example which emphasizes one of the problems with replanning. There is a house and it has a door which can be open or closed. The NPC does not have a key (it cannot open the door), and wants to move inside the house. If the door is open, the NPC can walk in, that is path A in the picture. But what should we do if the door is closed?

From the navmesh point of view, it looks like the graph is broken at the location of the door. The usual fallback in case no path is found is to return a path which leads to the nearest position around the target. In our case that would be path B in the picture.

It is really hard to tell the A* that there actually is a closed door and that the nearest access to the goal location is semantically the door. And this is when things start to slip from a technical issue to a simulation or story issue.

Imagine a case where the agent starts to move to the building, and before it gets there the door is closed. The story seen by the player could be something like this: The bad guard tried to capture the princess, but at the last moment the princess managed to close the door, and then the guard went to hide under the trees.

Game levels are full of cases like this. The AI has no notion of the trees, but when the AI walks under a tree and stands there, the player will give meaning to it.

There are more problems with that case too. For a moment, imagine that we could replan the path every time the world changed. Now if we had a situation where the door would open and close every few seconds, the agent would get deadlocked on the east side of the building, since the solution would flicker between paths A and B.

These are just a few examples of what happens when you add replanning to your navigation system.

Solution

I think the proper solution to reacting to dynamic changes in the navigable surface is to treat the planned path the same way as any other plan in the AI system. That is, it is very likely to fail, and the plan will be considered failed if small adjustments cannot fix it. This makes the assumptions behind the request quite clear. It may not be the best solution, but I think it puts the decision at the right spot.


Request for comments! How do you handle partial paths and replanning?

Sunday, July 3, 2011

Paris Game/AI Conference 2011 Slides and Demo

The Paris Game/AI Conference is over and it was a blast! There was just so much interesting stuff there that my head was just about to burst. I really liked the shooter symposium, where a handful of studios gave microtalks (about 15 mins each) on similar subjects, followed by a panel discussion and Q&A. It was really inspiring to see different ways to solve similar problems.


My talk this year was about some work I did for Killzone 3. I made a handful of tools for them to improve their AI level design workflow. In my talk I concentrated on how to automatically build cover annotation for the player. Killzone has this mechanic where the player can latch to a cover and slide along it. AI cover locations are discrete points, and they were deduced from the player cover annotation.

The cover locations are found using voxelization and by tracing the contours of the voxelized areas. Then the contours are sampled to see if they are close to a wall, and further the wall height is calculated and cover planes are built from that.

I also explained how this idea can be further expanded to automatically find jump-links and other non-locomotion navigation annotation.

Here's the link to the slides and demo (with source, sorry only OSX binaries):


I think the presentation did not go as well as last year. I tried to fix some problems I had last year, but ended up failing at some things I think I nailed last year.

Firstly, last year I had two topics, and it seems I was only able to get the second topic through. So my idea for this year was to present one battle-proven practical idea and show how to vary it, hopefully with enough details that people can implement it and maybe find other uses for the technique too. I think the scope was good this year.

Secondly, even if I think my presentation last year was cool (and I got a lot of good feedback on it), I think it was a distraction. So this year I tried to simplify my slides to the bare bones. The regular slide format does not work very well for the way I like to explain things. I find it much easier to show different debug renderings in a demo and talk on top of that.

A horrible mistake I made this year was that I did not have enough time to practice the presentation out loud, in front of other people. I chopped some topics, since my practice runs were always over time. During the presentation I was super nervous because I did not have good confidence in my timing, and I ended up rushing through the slides super fast.

Lessons learned, I hope my next presentation will be much better :)

Saturday, April 16, 2011

Temporary Obstacle Progress

I've been super busy recently. Our startup just opened a public beta. I have managed to make some progress on the temporary obstacle handling, though.

This is now probably the 4th rewrite of the system, so things are slowly being massaged into place (it usually takes 5 rewrites :). Adding and removing obstacles works again, and I've made some good progress on making the tile cache class better. The API is still a bit in flux, though.

I'm currently testing with cylindrical obstacles; my plan is to support only extruded convex poly obstacles at first and add support for custom obstacle types in the long run.

I get a quite consistent average of 0.1 ms per layer build time, and about a 10% compression ratio of the layer data when using 48x48 layers. The compressed layer grid data needed to rebuild any navmesh tile with obstacles in the scene pictured above takes 75 kB. The navmesh itself for that scene takes 72 kB.

I have a few things left to do to finish this feature: 1) a custom pass before the mesh is passed to the navmesh builder (to handle flags), 2) handling off-mesh connections. Once that is done, I will start making releases again. I wonder if I should call the next release Recast & Detour 1.5 or 1.9?

Friday, March 25, 2011

Detour API Changes

Once you update to R289 you will notice that the Detour API has changed a bit. This change was necessary so that Detour could support multiple layers per tile location.

Here's a quick migration guide you should follow when switching to the latest code.

1) There is no need to remove the padding of the polymesh coordinates before the vertices are passed to dtCreateNavMeshData(). So if you had code like this before, remove it! If you are not using tiles, this does not apply.
 // Remove padding from the polymesh data. TODO: Remove this oddity.
 for (int i = 0; i < m_pmesh->nverts; ++i)
 {
     unsigned short* v = &m_pmesh->verts[i*3];
     v[0] -= (unsigned short)m_cfg.borderSize;
     v[2] -= (unsigned short)m_cfg.borderSize;
 }

2) Since the polymesh data is not offset anymore, you can feed the polymesh bounding box directly to the navmesh builder:
 rcVcopy(params.bmin, m_pmesh->bmin);
 rcVcopy(params.bmax, m_pmesh->bmax);
3) I made the BV-tree building optional, as it is not needed for tiles which have just a couple of polygons. To keep the old behavior, you need to add this line to your navmesh build params:
 params.buildBvTree = true;
4) When you access tiles, you need to specify the tile x, y and layer indices. If you don't use layers, just pass 0 (zero) as the layer index. For example:
 m_navMesh->removeTile(m_navMesh->getTileRefAt(tx, ty, 0), 0, 0);

That's it. The navmesh data version number was bumped, so you will need to rebuild your data too.

I also removed the test code from Sample_TileMesh. From now on, Sample_TempObstacles will be my sandbox. The code to build tiles at runtime will live under DetourTileCache. It is all a bit of a mess still; I'll keep you updated about the progress.

I think I will move back to numbered releases so that I can keep the SVN a bit dirty when I need to. Once the temp obstacle stuff is done, I think it is time to release Recast & Detour 1.9.

Oh, and I updated the VC project too!

Sunday, March 20, 2011

Simulating Human Collision Avoidance

There were many good collision avoidance papers published last year. One trend that I saw already when I was preparing my Paris Game AI Conference presentation last year was that the next step in human-like collision avoidance will come from inspecting motion capture data.

One of my favorites from last year was A Velocity-Based Approach for Simulating Human Collision Avoidance by Karamouzas & Overmars. Technically their solution is very close to the sampling based RVO, but there is one very important difference; quoting the paper:
Our analysis, though, focuses on the predicted time to collision between interacting participants and the deviation from their desired velocities, whereas they studied the effect that the minimum predicted distance has on the participants’ accelerations.
In practice it means that they did a bunch of measurements with real people and noticed that the velocity sampling range depends on the predicted time of impact.

That is, if the agent thinks it will hit something 3 seconds in the future, it is likely to adjust its speed and angle just a tiny amount, but if the collision is imminent, the agent may adjust the velocity a lot. The plot at the top of the post shows how the sampling range changes based on the predicted time of impact.
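The shape of the idea can be sketched like this. Note that the linear falloff and the constants below are made-up stand-ins; the paper derives its actual curves from the motion capture measurements:

```cpp
#include <cassert>

// Map the predicted time to collision to a velocity sampling range:
// imminent collision -> sample a wide range of speeds and angles,
// far-away collision -> barely deviate from the desired velocity.
float samplingRange(float timeToCollision, float maxRange, float horizon)
{
    if (timeToCollision >= horizon)
        return 0.05f * maxRange;               // far: tiny adjustments
    const float t = timeToCollision / horizon; // 0 = collision now
    return maxRange * (1.0f - 0.95f * t);      // near: up to full range
}
```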

This is a tiny detail, but a very important one. The resulting animations (accessible via the link above) look pretty good too.

Thursday, March 17, 2011

Bulletstorm is Using Recast

Much to my girlfriend's disappointment, I just got my copy of Bulletstorm today! She was expecting the courier to bring her a fragrance from Venice, and UPS brought this gory awesomeness from Warsaw instead.

Congrats to Mieszko and the gang at People Can Fly for shipping such a great game! And thank you for mentioning me in the credits :)

Saturday, March 12, 2011

Layers in Detour

Well.. this may look like an unmeasurable step in no direction at all, but it is indeed layered tile pieces working in Detour! There are a lot of rough edges to be fixed, but I'm glad I got this far.

Some things were a bit painful to debug. It looks like the tile stitching needs to be redone at some point in the future. As you can see in the above picture, the navmesh is missing a detail mesh. I have an idea how to make a semi-good detail mesh really quickly.

Anyways, once I check the stuff in, it means that the Detour and Recast APIs will change a bit. I will write a migration guide once I have polished the code a bit.

Friday, March 11, 2011

Temporary Obstacle Processing Overview

I was asked on twitter to give an overview of the navmesh generation process. Click on the above picture to see what happens when a temporary obstacle is added.

What is missing from the picture is the logic that triggers tile updates when an obstacle is added or removed. I have not quite figured that all out yet. The plan is to have one class that allows you to add and remove temp obstacles; you then update that class each frame, give it a time quota, and it will figure out which tile layers to update based on the obstacle shapes and finally update the navmesh.

The tile layers will always be rebuilt all the way from the compressed data, so adding or removing an obstacle will be the same process, but the temporary obstacle set used to build the layer will be different.
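A rough sketch of that update scheme, with time measured in integer microseconds to keep it deterministic. All the names and the fixed per-tile cost here are illustrative, not the actual DetourTileCache API:

```cpp
#include <cassert>
#include <vector>

// Queue tiles touched by obstacle changes, then rebuild as many as
// the per-frame time quota allows; the rest wait for the next frame.
struct TileCacheSketch
{
    std::vector<int> pendingTiles;

    void touchTile(int tileId) { pendingTiles.push_back(tileId); }

    // Returns the number of tiles rebuilt within the quota.
    int update(int quotaUs, int costPerTileUs)
    {
        int built = 0, spentUs = 0;
        while (!pendingTiles.empty() && spentUs + costPerTileUs <= quotaUs)
        {
            pendingTiles.pop_back(); // rebuild layer from compressed data
            spentUs += costPerTileUs;
            ++built;
        }
        return built;
    }
};
```

In a real implementation the cost per tile would be measured rather than assumed constant.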

The obstacle shapes are expected to be expanded by the agent radius.

Each obstacle marks an area type into the grid. This means that you can punch holes or just mark certain areas depending on what you do. My plan is to support a couple of obstacle types, like extruded polygons and circles, but nothing prevents you from inventing your own shapes too! Maybe you could have some kind of cone to describe the line of fire of an agent, or a sphere which describes the radius of an explosion.

I'm quite excited about the performance I measured earlier today! The final step is to connect it all to Detour. It will be mostly boring bookkeeping code on how to connect the tiles and such.

Heightfield Layer Progress pt. 3

Today was a scary day. I finally added some data collection to the heightfield layer code I've been working on recently. The numbers look super promising!

I measured the time to build the navmesh for the above level, which I have been using earlier too. For build time and memory usage I measured the time that it takes to build a layer of navmesh from an existing heightfield layer (rcHeightfieldLayer if you are familiar with the code). That is pretty much the bare bones of the runtime process. I did not measure the time to rasterize temp obstacles yet.

I tested with 32x32 tiles. The build time for a single tile layer varies from 0.01 ms to 0.2 ms; 95% of the tiles take less than 0.1 ms. As expected, there is some variation from run to run in the timings. The build process for one tile takes less than 15 kB of memory. I'm really happy about that, as all the data could potentially fit in cache.

The data that needs to be stored in memory takes 545 kB for the above level, which compresses to 77 kB using FastLZ. What is interesting here is that the memory requirement for the layers is smaller than for the compressed compact heightfield, even if there is a lot of waste in the layers. I think the win comes from the fact that there is no need for additional information about the layers.

Here's the complete test data in numbers.

Sunday, March 6, 2011

Removed Old Sample

FYI – I removed the Solo Mesh Tiled sample, which has been obsolete for quite some time already. It does not provide any advantage over using the tiled mesh all the way through and has become a slight maintenance burden. I will keep the merge functions around for the time being, but eventually they might go away too.

Heightfield Layer Progress pt. 2

Firstly, I apologize for the current state of the SVN. Some files are missing from the VC project, some headers have messy stuff, and there is some extra stuff in the UI. I often like to check in stuff I'm working on when I get to a certain point so that I can revert if the new direction is a dead end. It usually takes a few tries to get something figured out properly.

The above image shows some phases of the new navmesh generation procedure. It is very much like the original Recast version, but I added some extra constraints to the data.

Firstly, the voxelized heightfield is split into 2D layers. These layers are stored in compressed form. This 2D data can be packed more easily than 2.5D data, as neighbour data is more often similar.

When the navmesh of a portion of the level needs to be updated, for example when a temporary obstacle is added or removed, the layer data is uncompressed, the obstacles are rasterized, and the navmesh of that layer is rebuilt.

The layer structure ensures that each uncompressed layer takes the same amount of memory during processing, which helps to manage the memory layout. Less data also means more local memory access, which should speed things up too.

Secondly, I have limited the max tile size to 255. This means that I can use bytes as coordinates and further reduce the memory requirements. The same magic number limits the maximum number of regions per layer too.

I'm currently sweeping through the code trying to remove allocations as much as I can. The code will still require allocations, but I have been able to remove a lot of reallocs, which makes the code more linear-allocator friendly. My goal is to keep the memory consumption below 64 kB for tile sizes between 32 and 64; maybe even 32 kB is possible for sizes closer to 32x32.

There are some things left to do on the generation side, like removing those excess vertices. I think I'll wait until I finish those things before I dare to measure the performance.

After that I will need to figure out how to change Detour to support multiple layers. My current plan is to store each tile at location (x,y,layer), where the layer is just some number you decide yourself. This should nicely allow other kinds of extra navmesh layers too, like overlapping streaming sections, etc.

Also, another nice side effect is that the layer generation process will make 2D navmesh generation much easier. Basically you could just feed in an image and get a navmesh out.

Thursday, March 3, 2011

Insomniac is using Recast for Resistance 3

Just heard interesting news from GDC. Insomniac is using Recast to create navmeshes for Resistance 3! AiGameDev.com has coverage of the talk on their forum.

I was kind of expecting this based on the screenshots in Mike Acton's slides from his unfinished usability presentation :)

I would love to know how they arrange the data so that they can do pathfinding on the SPU. Their future directions look very much in line with my ideas, i.e. rebuilding tiles on the SPU for sleeping temporary obstacles.

Can't wait to hear the whole presentation.

Saturday, February 26, 2011

Heightfield Layer Portals

I worked a bit on the layers again today. I was trying to find the minimal data that needs to be stored. There were a couple of cases which required a bit more thinking.

For example, should the layer contain that extra border or not? I decided that it should not. That required me to store the edges which may lead to a connected layer, and it also means that the obstacles will need to be expanded by the agent radius, but that is something I was planning to do anyway.

It turned out to be a good choice to dig up the portals anyway. Now it is possible to calculate a hierarchical graph based on the portals only. The way the layer partitioning is done makes sure that the connections between two layers are always axis aligned. This makes things a lot easier and more robust. In contrast to the connections between navmesh tiles, with layers there can be a portal in the middle of a tile too. I will need to add support for this in Detour later.

I think I'll add a bit more info to the portals, for example which portals are connected to each other within a single layer.

Friday, February 25, 2011

Heightfield Layers Progress

I have been working on the heightfield layers a bit more. As I've explained earlier, I'm trying to make the temporary obstacle processing much faster by decomposing the walkable area into 2D layers.

Then the temp obstacles will be rasterized onto those 2D areas and built into pieces of navmesh. The idea is to try to limit the changes of temporary obstacle processing to as local an area as possible, and to speed up the generation process in general by making the input data simpler.

Each layer stores an area type and a 2D heightfield. The area types should compress really well with RLE, and it should be possible to compress the heightfield using simple linear spline fitting. Add LZ compression on top of that, and the aux data for generating new tiles should not be a huge burden anymore.
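For the area types, a byte-level RLE along the lines below should do. The (value, run length) pair format is an illustration, not the encoding Recast actually uses:

```cpp
#include <cassert>
#include <vector>

// Run-length encode per-cell area types as (value, count) byte
// pairs. Large uniform spans of a layer collapse to two bytes.
std::vector<unsigned char> rleEncode(const unsigned char* data, int n)
{
    std::vector<unsigned char> out;
    for (int i = 0; i < n; )
    {
        const unsigned char v = data[i];
        int run = 1;
        while (i + run < n && data[i + run] == v && run < 255)
            ++run;
        out.push_back(v);
        out.push_back((unsigned char)run);
        i += run;
    }
    return out;
}
```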

Interestingly, this layered heightfield data is also something that can be used to do 2.5D HPA* (the papers so far have used 2D data). If someone out there is up for the challenge, let me know and I'll help you decipher the data Recast creates.

Tuesday, February 22, 2011

Iterating Cube Vertices, Edges and Faces

This keeps haunting me. There must be an awesome way to arrange the data so that you can do a simple for loop and iterate over all cube vertices, edges and faces. I'm sort of after something you could use in marching cubes-like algorithms to fetch data from a sample grid without having to use lookup tables.

Vertices or sample offsets are easy:


for (int i = 0; i < 8; ++i)
{
    const int dx = i & 1;
    const int dy = (i >> 1) & 1;
    const int dz = (i >> 2) & 1;
    printf("%d: %d,%d,%d\n", i, dx, dy, dz);
}


Assuming that you have 3 values per sample, one along each axis, the following code can be used to walk through all the valid edges:



for (int i = 0; i < 12; ++i)
{
    const int ax = (i >> 2) & 3;  // edge axis: X, Y or Z
    const int as = (4 - ax) >> 2; // shift for the first remaining axis
    const int bs = (5 - ax) >> 2; // shift for the second remaining axis
    const int v = ((i & 1) << as) | ((i & 2) << bs);
    printf("va=%d vb=%d dir=%c\n", v, v | (1 << ax), "XYZ"[ax]);
}
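The post does not give a loop for the faces, but they fall out of similar bit tricks; here is one possible sketch (my addition, not from the original):

```cpp
#include <cassert>
#include <cstdio>

// Each of the 6 faces is perpendicular to one axis, on the min or
// max side. Returns the face's anchor corner vertex index (0..7)
// and writes the perpendicular axis into 'ax'.
int cubeFace(int i, int* ax)
{
    *ax = i >> 1;           // 0,0,1,1,2,2 -> X,X,Y,Y,Z,Z
    const int side = i & 1; // 0 = min face, 1 = max face
    return side << *ax;     // corner with the side bit set on 'ax'
}

void listFaces()
{
    for (int i = 0; i < 6; ++i)
    {
        int ax;
        const int corner = cubeFace(i, &ax);
        printf("face %d: axis=%c corner=%d\n", i, "XYZ"[ax], corner);
    }
}
```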


But... I'm not satisfied, there must be a cleaner way.

I once figured out how to procedurally draw a cube with just one for loop, but I cannot remember how to do it anymore. IIRC it used a rotating sequence of XYZ, ZXY, YZX somehow, much like how you calculate cross products.

Sunday, February 13, 2011

Very Temporary Obstacle Avoidance pt. 2



I exposed my previous attempt at local obstacle avoidance to some more challenging situations, and oh boy did it explode! For example, the above shows a really complicated case which looks really innocent. There are a couple of problematic cases.

Firstly, the obstacles that reach to the sides are problematic to handle. The sides need to be classified with regard to the goal direction (blue), but they can reach back over 180 degrees in certain cases. I had to write a lot of special case code to make sure to detect that right is still right even if it is on the left side. Yeah!

The second problem case is how to handle the terminator nodes (those red dots, which indicate that an obstacle touches a wall). Initially I thought to just use the line to the goal to detect whether a terminator is on the left or the right. It worked well for many of my tests, but there are cases, like in the above video, where the terminator changes sides!

So anyhow, hours of hacks and tricks later, I thought I'd give up a little.


A few days later, I thought why not try the old local grid pathfinder I made some time ago. I took a well tested rasterizer, changed it to support convex polygons, and in no time I had a working prototype of local obstacle avoidance in DetourCrowd.

The code is pretty simple to understand, and it should be pretty fast too. I did not test the impact yet, but my previous tests indicate that it is fast.

The local pathfinder operates on the yellow area you can see in the video. If the blocking obstacle is larger than that, a path cannot be found. The navmesh and the obstacles are updated every now and then; I could potentially use a lower update rate too.

There are a couple of problems still left to be solved. Firstly, there currently is no reliable way to detect that the agent is stuck. Secondly, in some cases the local pathfinder finds a slightly different route, which leads to flicker.

We'll see how this all works out. I still hope to speed up the navmesh temp obstacle baking a little; it is still the best choice. I think the local grid path planner could be usable for simpler games, or for those who cannot spend the extra memory required by the tile cache.

Saturday, February 12, 2011

Very Temporary Obstacle Avoidance


In the rush of trying to make everything perfect, it is sometimes hard to remember that a simpler solution could work too. This ties into my efforts to handle temporary obstacles.

If you want a rock solid way of handling temporary obstacles, there is no other way than to bake them into your navigation graph. There is just no way around it.

But at the same time there are a huge number of cases where you just would like to sprinkle a few crates and barrels here and there for the player to shoot at, and they will just break your navigation.

Purely local obstacle avoidance cannot handle them (or you are really lucky if it does!). Local avoidance gets stuck in local minima (i.e. a U-shaped cluster of obstacles) or plateaus (i.e. a row of obstacles).

There is a range of algorithms that fit somewhere in between local avoidance and global path planning. These will solve the case of some temporary obstacles here and there, but will break if the obstacles form too complicated shapes.

Avoiding Temporary Obstacles Locally

One potential solution is to use a small grid around the NPC to find a way around the obstacles. I have experimented with this earlier and it works really well. Another solution is to cluster the obstacles to form larger obstacles and use their silhouettes to find a path around them.

That silhouette based obstacle avoidance is also discussed in an article in Game Programming Gems 3 called "A Fast Approach to Navigation Meshes" by Stephen White and Christopher Christensen. It is one of my all time favorite articles. I must have read it a billion times.

I have tried to improve the algorithm a few times in the past, but never quite got it right. I was working on a recent request last week and the idea popped up again, and I thought I'd give it another go.

Basic Method

The basic process works so that you have a certain goal in the world you would like to move to. First you check if there is something blocking the way. In practice it means finding the intersection between the segment from the agent's current location to the next sub-goal (i.e. the next corner on the path) and all the temporary obstacles nearby.

If something is blocking the way, we find the tangents of the obstacle and choose one side to steer around the obstacle. The side is selected based on which detour is shorter. The distance is approximated by a path from the start point, via the tangent point, to the goal location.
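For circular obstacles, the side selection might be sketched like this. As a simplification, the candidate points below are perpendicular offsets at the obstacle center rather than true tangent points, which is enough to compare the two detours:

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

static float dist(Vec2 a, Vec2 b)
{
    const float dx = a.x - b.x, dy = a.y - b.y;
    return sqrtf(dx*dx + dy*dy);
}

// Returns +1 to pass the obstacle on the left of the start->goal
// segment, -1 to pass on the right, based on which detour is shorter.
int chooseSide(Vec2 start, Vec2 goal, Vec2 center, float radius)
{
    const float dx = goal.x - start.x, dy = goal.y - start.y;
    const float len = sqrtf(dx*dx + dy*dy);
    const Vec2 nl = { -dy / len, dx / len }; // left-hand normal

    const Vec2 left  = { center.x + nl.x * radius, center.y + nl.y * radius };
    const Vec2 right = { center.x - nl.x * radius, center.y - nl.y * radius };

    const float dLeft  = dist(start, left)  + dist(left, goal);
    const float dRight = dist(start, right) + dist(right, goal);
    return dLeft <= dRight ? 1 : -1;
}
```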

The first tricky problem with the method is how to handle multiple obstacles. Naively just visiting all the obstacles and finding the furthest tangents will lead to poor behavior when there are passages through the obstacles.

Obstacle Clusters

One solution to this is to first cluster all the overlapping obstacles. Then when you hit one obstacle in a cluster, you simply find the furthest tangents of all the obstacles in the cluster.


Sometimes this leads to a case where the newly chosen path will still be blocked. The solution is to check if there are some obstacles blocking the path to the new target, and iterate until the path is collision free.

Iterative algorithms are always tricky in realtime games and simulations. The good thing about this method is that even one iteration will give you a usable solution. If the new path leads to a collision with another obstacle, that collision will eventually be the first hit and it will be corrected. The result is ugly, but it'll work. In practice this means that you can limit the iterations to 2-4 and it will handle most of the cases gracefully!

Dealing with Obstacles Touching Walls

The next problem to solve is how to handle cases where one side of the obstacle touches the navigation mesh boundary wall.


The solution I have used is to add special obstacles at the navmesh boundary, which I call terminator nodes. When a terminator node falls on either side of the silhouette, that side is marked as blocked and it is prohibited from being selected. If there is a terminator node on both sides, it means that the path is blocked. This is a really important feature of this method. It is not perfect, but it signals when it does not know the answer!
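The decision logic boils down to something like this sketch (the names are mine):

```cpp
#include <cassert>

// Outcome of the silhouette side selection described above.
enum SideResult { GO_LEFT, GO_RIGHT, PATH_BLOCKED };

// A terminator node on a side marks that side as blocked; both
// sides blocked means the method admits it has no answer, which
// is exactly the signal the caller needs.
SideResult resolveSides(bool leftBlocked, bool rightBlocked,
                        bool leftDetourShorter)
{
    if (leftBlocked && rightBlocked) return PATH_BLOCKED;
    if (leftBlocked)  return GO_RIGHT;
    if (rightBlocked) return GO_LEFT;
    return leftDetourShorter ? GO_LEFT : GO_RIGHT;
}
```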

It Will Break

You should also be aware that using this method will eventually lead to situations where the path is blocked and the pathfinder cannot help you, since it does not know about the blocker. You have to deal with it.

Your imagination is the limit in how to handle such cases. You could, for example, just teleport the NPC to the other side of the obstacle, the NPC could jump or climb over the obstacle using an animation, the NPC could try to kick the blocking obstacle or shoot it, or you could detect that a newly added obstacle creates a blocking wall and break obstacles in the cluster until it is safe again.

That is, you need to have some kind of game mechanic to either break obstacles or skip them. One thing you should not do is just disable the obstacle, since that can and will leave the agent inside the obstacle for a long time.

Closing Words

There are quite a few more tricky details in the process that I did not cover. I hope to polish those ideas a bit more and explain them at a later stage. I think the method should be applicable to convex polygon obstacles too, which would make it even more usable. I have to try that out.

I think this method could be an interesting choice for those who wish to add a few obstacles here and there, and whose game design allows some game logic to pass through the obstacles when they block an NPC's path.

Wednesday, February 2, 2011

Implementing Undo the Simple Way

Over the years I have implemented all kinds of undo systems for the editors and tools I have made. I have tried all kinds of approaches, ranging from storing delta information to an abstract command class that can undo and redo itself. There is one kind, though, that is by far my favorite:

class Document
{
    static const int MAX_UNDO = 10;

    struct UndoState
    {
        String cmd;
        String path;
    };
    UndoState m_undoStack[MAX_UNDO];

    int m_undoHead = 0; // Current state.
    int m_undoTip = 0;  // The highest state that is set.
};

// Adds an undo state to the undo stack.
void Document::addUndoState(String cmd, String path)
{
    // This kills all the states after the current one.
    m_undoTip = m_undoHead+1;
    // Buffer getting full, shift down.
    if (m_undoTip >= MAX_UNDO) {
        for (int i = 0; i < MAX_UNDO-1; ++i)
            m_undoStack[i] = m_undoStack[i+1];
        m_undoTip--;
    }
    // Store state.
    m_undoHead = m_undoTip;
    m_undoStack[m_undoTip].cmd = cmd;
    m_undoStack[m_undoTip].path = path;
}

// Takes a snapshot of the document.
void Document::snapshot(String cmd)
{
    String path = getTempFileName();
    ZipWriter zip;
    saveState(&zip);
    zip.save(path);
    addUndoState(cmd, path);
}

// Returns true if we can undo.
bool Document::canUndo()
{
    return m_undoHead > 0;
}

// Returns true if we can redo.
bool Document::canRedo()
{
    return m_undoHead < m_undoTip;
}

// Reverts to the previous state of the document.
void Document::undo()
{
    if (!canUndo()) return;
    m_undoHead--;

    ZipReader zip;
    zip.load(m_undoStack[m_undoHead].path);
    loadState(&zip);
}

// Advances to the next state of the document.
void Document::redo()
{
    if (!canRedo()) return;
    m_undoHead++;

    ZipReader zip;
    zip.load(m_undoStack[m_undoHead].path);
    loadState(&zip);
}

// Saves document state.
void Document::saveState(Writer* out)
{
    // Save document state
    ...
}

// Loads document state.
void Document::loadState(Reader* in)
{
    // Load document state
    ...
}
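To make the pattern concrete, here is a minimal self-contained variant that keeps the serialized states in memory: a std::string stands in for the zipped temp file, and a growable vector replaces the fixed ring of MAX_UNDO slots. All names here are mine, for illustration only:

```cpp
#include <string>
#include <vector>

// Snapshot-based undo: every edit stores the full serialized document,
// and undo/redo just move a head index over the stored states.
class SimpleDoc
{
public:
    SimpleDoc() { snapshot(); } // store the initial (empty) state
    void setText(const std::string& t) { m_text = t; snapshot(); }
    const std::string& text() const { return m_text; }
    bool canUndo() const { return m_head > 0; }
    bool canRedo() const { return m_head + 1 < (int)m_states.size(); }
    void undo() { if (canUndo()) m_text = m_states[--m_head]; }
    void redo() { if (canRedo()) m_text = m_states[++m_head]; }
private:
    void snapshot()
    {
        // Drop any redo states beyond the current head, then append.
        m_states.resize(m_head >= 0 ? m_head + 1 : 0);
        m_states.push_back(m_text);
        m_head = (int)m_states.size() - 1;
    }
    std::string m_text;
    std::vector<std::string> m_states;
    int m_head = -1;
};
```

The appeal of the pattern is exactly this: the undo logic never needs to know what changed, only how to serialize and deserialize the whole document.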

Sunday, January 30, 2011

Counting the Sheep



I had little time today, so I implemented off-mesh link handling for DetourCrowd. Yet another layer of state handling. The logic to detect and change state is not too complicated. I was mainly trying to see how to fit the link animation processing into the system. Not too much on how it looks for now.

It looks like I might be able to make the system so that all custom handlers for steering and off-mesh connection handling will be just one (virtual) function call which handles all the agents. Something like:

void updateSteering(dtCrowdAgent** agents, const int nagents, const float dt);
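A hedged sketch of how such a batch hook might look as an interface, with a trivial custom style plugged in. AgentStub is a stand-in for dtCrowdAgent, and all names here are illustrative, not the actual DetourCrowd API:

```cpp
// Minimal stand-in for the crowd agent; the real dtCrowdAgent carries
// much more state. dvel is the desired velocity the steering writes.
struct AgentStub { float dvel[3]; };

// One virtual call handles all agents, so the user controls iteration
// order, batching, and any shared state across agents.
struct SteeringBehavior
{
    virtual ~SteeringBehavior() {}
    virtual void updateSteering(AgentStub** agents, int nagents, float dt) = 0;
};

// Example custom style: zero the desired velocity of everyone ("stop").
struct StopBehavior : SteeringBehavior
{
    void updateSteering(AgentStub** agents, int nagents, float) override
    {
        for (int i = 0; i < nagents; ++i)
            agents[i]->dvel[0] = agents[i]->dvel[1] = agents[i]->dvel[2] = 0;
    }
};
```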

DetourCrowd will just make sure the call happens in the right order, and the user can then do the update in any way he feels necessary. Which brings me to a question to the readers of this blog:
How do you mix and match locomotion and other navigation animations?
For example, an agent is running and then jumps over a fence, and then keeps on running. I'm especially interested in what kind of logic might be used to trigger the change from running to jumping, if there is some kind of transition phase, etc. I know from the past that that is one nasty topic. I could really use some feedback on this. Good implementations, failed attempts, something that barely works, I want to hear it all :)

Another hard topic is how to manage potential queueing at an off-mesh link that becomes a choke point. Or how to evenly distribute agents to multiple jump-links. Should the links be locked so that only one NPC can use one at a time, or should there be invisible colliders at each end of the jump-link so others would avoid the location even more?

Some of those problems were touched on by this paper: Situation Agents: Agent-based Externalized Steering Logic. I hope to have some time to try some of the ideas out.

Saturday, January 29, 2011

DetourCrowd

I finally managed to clean up the CrowdManager code and put it in proper place, yay!

The code is now located in the DetourCrowd folder. My idea is to keep Detour a clean low level library and build DetourCrowd on top of that. While I was organizing things, I also moved the velocity planning code to the DetourCrowd folder.

The crowd manager now allocates its own dtNavMeshQuery, so there is no need to pass one to all the update functions anymore.

I also implemented a time-sliced pathfinder. The slicing is controlled using a constant called MAX_ITERS_PER_UPDATE. You can probably crank it up to 500 or so in real games. Try different values and see how it affects the performance graph (i.e. it should not spike too much). I have currently set it very low for fuzz testing.
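The general idea behind a constant like MAX_ITERS_PER_UPDATE can be illustrated independently of Detour: a long-running job exposes an update step that performs at most a fixed number of iterations per frame, so the per-frame cost stays bounded while the work completes over several frames. This is a generic sketch, not the DetourCrowd code:

```cpp
// Sketch of time-slicing a long computation. Each update() call does at
// most maxIters units of work; the caller keeps pumping it every frame
// until it reports completion.
struct SlicedJob
{
    int done = 0, total = 0;

    void start(int work) { done = 0; total = work; }

    // Returns true once the whole job has finished.
    bool update(int maxIters)
    {
        const int remaining = total - done;
        const int iters = remaining < maxIters ? remaining : maxIters;
        done += iters; // the real code would do `iters` A* iterations here
        return done >= total;
    }
};
```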

There are still some things missing:
  • Off-mesh link handling
  • Better handling of navmesh changes (i.e. how paths respond to temporary obstacles)
  • Off-mesh links with custom callbacks
  • Better handling of magic constants
  • Serialization
  • Custom movement styles
I will work on the items roughly in that order. I have a couple of other big things queued up for this spring, so no guarantees when things will be implemented.

Any feedback on the change or the crowd manager in general is welcome!

(Sorry again Windows peeps, I will try to find a PC to fix the VC project.)

Monday, January 24, 2011

2D Grids for the Win!

The process of optimizing something from 10 minutes to 1 second is a quite different exercise than optimizing something from 10 ms to 1 ms. I stumbled upon this recently in my effort to make the temporary obstacles fast and scalable.

Once upon a time I used to work in a company where AI navigation data generation easily took 15-30 minutes per level. I thought that it was a waste of designers' time, so I wanted to fix it. I ended up with a solution which can handle changes incrementally, so that a simple scenario tweak which used to waste 15 minutes of a designer's time would take 150 ms instead.

Problems with 2.5D Heightfields

There were a lot of interesting lessons learned from building Recast. One of them was how to store 2.5D heightfields. I ended up using two different data structures: one which is fast to modify, and another which has fast neighbour lookup.

Both of the heightfield representations have one annoying property when trying to reach sub 10 ms performance. They need some extra data to describe the structure of the data. This additional data complicates random access and takes up additional memory. Something like a 2D grid would be quite an ideal data structure to work with.

Another challenge for sub 10 ms operation is the amount of data. In Recast and Detour, the tiles span from the bottom of the world to the top. Many games are usually quite flat–1 to 4 floors on top of each other–but it is still a huge variable cost when you process different tiles. Firstly, updates take an unpredictable amount of time, and secondly the process uses an unpredictable amount of memory. Ideally we should know the upper bound of the memory usage beforehand, even if it means slightly bloated memory usage for the general case.

Finally, the column data structure partitions the work unequally. Adding a temporary obstacle at the top of a building requires updating the navmesh of the whole building, even though many of the floors might not require any updates at all.

Side Tracks

I blogged earlier about an idea to create small navigation grids and connect them loosely like waypoints. I did some prototypes and I have done a lot of thinking on how to make it work, but so far it has been a failure. But the idea spawned some other ideas.

My plan was to use simple 2D heightfields per grid to describe the world, and to use a compressed cache of these tiles to describe the whole world. I liked the loose arrangement too, but I could never quite figure out how to make path following work well in that context.

Anyways, at the same time I was working on temporary obstacles for Recast & Detour too. I implemented a tile cache, which basically stores the layered heightfield in compressed form and uses it to kickstart the navmesh generation process. In case you just want to mark some areas impassable, that method speeds up the tile generation process by almost an order of magnitude.

I got it to a point where things are fast, but because of the layer structure, the speed is unpredictable. Combining ideas from one project with the other, I got a simple, stupid idea: what if I partition the layered heightfield into a stack of 2D heightfield layers?

2D Data for the Win

I did try that idea a long time ago, but it did not work: it was slow and took too much memory. But a few things have changed since. Firstly, the process can now use tiles, which means that the 2D heightfields are something like 64x64, not 1056x1522 per layer.

Secondly, the monotone region partitioning makes finding non-overlapping regions really fast. It also makes sure that the transitions between two neighbour layers will be axis aligned. Previously I had a lot of problems trying to match non-axis-aligned edges. Currently Detour uses axis-aligned edges to create connections between tiles.




What this means in practice is that I now have a method that transforms a layered 2.5D heightfield into separate layers of 2D heightfields.

The process starts by finding monotone regions in the 2.5D heightfield. During the process I also calculate which regions are on top of each other, as well as which regions are next to each other. In the next phase, I start from the first region and flood fill to its neighbours; if they do not overlap the currently flooded region, they will be assimilated into it. This process is repeated for all regions until they are all marked with a unique label. It is a bit like depth peeling, but it considers connectivity.
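A possible reading of that flood fill, with the region relations reduced to plain adjacency and overlap lists (all names hypothetical, not from the Recast source):

```cpp
#include <queue>
#include <vector>

// Assigns each region a layer label. Flood fills across neighbours,
// merging a neighbour into the current layer only if none of the
// regions it vertically overlaps already belong to that layer.
std::vector<int> buildLayers(int nregs,
                             const std::vector<std::vector<int>>& neis,
                             const std::vector<std::vector<int>>& overlaps)
{
    std::vector<int> layer(nregs, -1);
    int nlayers = 0;
    for (int seed = 0; seed < nregs; ++seed)
    {
        if (layer[seed] != -1) continue;
        const int cur = nlayers++;
        std::queue<int> q;
        layer[seed] = cur;
        q.push(seed);
        while (!q.empty())
        {
            const int r = q.front(); q.pop();
            for (int n : neis[r])
            {
                if (layer[n] != -1) continue;
                bool conflict = false;
                for (int o : overlaps[n]) // regions stacked above/below n
                    if (layer[o] == cur) { conflict = true; break; }
                if (!conflict) { layer[n] = cur; q.push(n); }
            }
        }
    }
    return layer;
}
```

In the toy case of two adjacent floor regions with a bridge region stacked over one of them, the floors end up in one layer and the bridge peels off into its own.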

Since 2D data does not need any additional structure to describe it, it is much simpler to compress too. For example, simple lossy spline fitting could be used to store the height, and simple RLE should handle area types. Add some cheap LZ compression on top of that, and it should be a lot of win.

Going Forward

My plan is to change the rest of the navmesh generation process so that it can operate on 2D layers. This allows me to take certain shortcuts and make things faster. I hope to make the whole process from 2D heightfield to navmesh use only a stack allocator, to make it more usable at runtime. In addition to that, I will allow Detour to handle layers too. The current process of building a whole column's worth of navmesh will remain intact.

The end result I hope to reach is a fast preprocess step which partitions the data into a tile cache, where each tile contains 2D data only. This should allow faster generation, more local changes and better compression of the tiles.

All in all, that should allow a really fast and predictable process for temporary obstacles.

Explainability

I have been doing a lot of UI design recently at work. We just got the first bunch of feedback from the users, and as you can imagine they had a lot of trouble trying to navigate through the app. This got me thinking.

I had a recent discussion with a good friend of mine, and he told me that most of his UX work nowadays is about copy. At that point in time my biggest concern was to make our UI slick and lean, while he was pondering how to communicate something through words. That slick and lean version was the one that went to user testing and did not receive too much praise.

As I worked further on my layout I noticed an interesting thing. We did not have explainable names for all the stuff in the UI. The concepts were there, but they were not exposed. Sometimes I had omitted text to make the layout look better, or crammed things together to make it more compact.

We had put a lot of effort into making the app discoverable. That is, there is one layer in the UI design which allows you to explore and test all the features of the app one at a time. There is another layer, which allows really quick access to all these features via shortcuts and modifier keys. It looks pleasing to the eye too. To make the entry to the app even smoother and the structure more understandable, we added another layer of design: explainability.

Everything we have put in the UI has a name and a relation. Pretty much every feature and its relation to others can be explained with just one paragraph of text (as a matter of fact, that is often my litmus test). So the UI extends outside the app too. Explainability implies being literate, but not littering. Tufte's rule of adding as much information as possible with as little ink as possible still applies.

Sounds stupidly obvious, but it took me a good 15 years to understand it!

Wednesday, January 19, 2011

My Recent Failures in Polygon Clipping

I have been prototyping some prerequisites for cookie cutter temporary obstacles recently. The short story is that it won't happen. The longer story is that I had a couple of different ideas for how I would implement the clipping.


First Attempt


I always wanted to try out the dynamic navmesh subdivide algorithm from AGPW4. The same algorithm is also used in the FEAR SDK. The idea is that to subtract polygon B from polygon A, you split polygon A with each edge of polygon B. You know, like BSP.

The good thing is that the algorithm is really simple to implement and the resulting polygons are also convex. The bad thing is that it creates a lot of extra vertices and polygons. After a few overlapping intersections you have nothing left but magic fairy dust. And not the good kind.

I did a couple of tests to remove the extra vertices. Some tests were successful, but overall the whole effort just turned into a giant epsilon fest. That is not the kind of code I would like to support and maintain.

So the lesson was that if you choose to use the BSP-like algorithm to cut your polygons, make sure your code can handle T-junctions. That way you can always track which edges to connect without any additional epsilon magic.


Second Attempt


I think I might have gotten the previous attempt to work to some degree with a lot of sweat. But the fact that it would create huge amounts of polygons really turned me away from it. I like how grid based methods make it easy to handle obstacles. So my second attempt was to use that.

The idea was that I would create a grid which covers a whole polygon plus some padding and rasterize the obstacles into that grid. Then I would use monotone partitioning to create polygons from that raster data. Finally I would clip those polygons against the edges of the original polygon.

While this was a kinda silly idea, I think it was worth pursuing. It gave me a lot of ideas, which I will discuss later. Anyways, this idea suffers from the same extra vertex mania.


Third Attempt

Despite a lot of failing, I was not done yet! As my last resort I tried to use a proper intersection and tessellation library. While I was able to solve the case for one polygon with multiple obstacles, the performance was not too tempting. While the GLU tessellator is robust and scales pretty well, things are different when you want to do things in under 1 ms.

That would have still left me with all kinds of epsilon tests to handle the (necessary) new vertices added at the polygon edges. Vertex welding is not a particularly complicated thing to do, but it can and will create degenerate and flipped geometry in odd cases. I don't like odd cases.


While my attempts to create cookie cutter obstacles have failed, the process has sprouted some great ideas to pursue. The one big lesson I have learned during this research so far is that it is really interesting to write code which has a known upper bound, be it on memory usage or processing requirements. And I think that is the key to making something dynamic happen quickly–hard, known, bite sized limits in computation and memory.

Tuesday, January 18, 2011

Time-sliced Temporary Obstacle Handling



While I was testing the temp obstacle handling, I noticed that some locations would take quite a bit more time to process than others. For example in the level that is shown in the video above, the rebuild times would range from 1 ms to 15 ms depending on how many tiles the obstacle affected and how many layers there were in that particular location. The number of obstacles affects the build time too. The added performance cost is pretty linear in the number of obstacles (and their size). In my tests each obstacle takes about 0.05-0.1 ms to process.

To fight the variance I implemented a time-sliced version of the generation process. In the above video, the temporary obstacles are updated incrementally so that on each update the process consumes 1 ms. The code will find its way to the SVN later this week.

Friday, January 14, 2011

First Iteration of Temporary Obstacles Checked In

The very first iteration of temporary obstacle handling using compressed lean heightfields just got checked in (R258). Unfortunately, it is not yet a "just plug it in" solution, but merely some simple modifications and a sample implementation of how it can be done using the toolkit.

I ended up calling the minimal representation used to create a compact heightfield a "lean heightfield". I'm not sure if it is a particularly good name, but at least it is more distinct than "compressible heightfield".

If you were previously regenerating tiles using Recast at runtime, this modification allows you to save and compress the voxelized results and speed up the tile rebuilding by almost an order of magnitude (it usually varies between 5 and 10 times faster).

I will keep on prototyping some more to see if I can support modifying the Detour meshes directly.

In the process I also changed the navmesh builder so that the detail mesh is now optional. The detail mesh is important if you have large tiles or one big navmesh, or if you require some approximation of the height at runtime. When using small tiles–which is good practice when you are likely to change them a lot at runtime–the detail mesh is not always necessary. Not requiring the detail mesh speeds up the generation process and requires less memory.

[EDIT] As usual, the Visual Studio project is lagging behind. I will try to allocate a slot on some win machine to put it up to date.

Tuesday, January 11, 2011

Temporary Obstacle Progress


I managed to make some progress with the temporary obstacle handling. Most of the required pieces are now in place; next up is a bunch of massaging to make it great.

So the stuff I have working now is a preprocess step which rasterizes the tiles, creates a minimal representation of the heightfield and then "zips" it. These data chunks are stored in a tile cache. Upon request the tile cache will create a piece of navmesh for you, which includes all the temporary obstacles too.

I use a tile size of 48x48 in the video. It takes about 2 ms to update one tile, so in the worst case adding one obstacle takes 8 ms. Interestingly, the conversion from the minimal representation to a compact heightfield takes about 25% of the time, and generating the detail mesh takes another 25%.

For those in dire need of the extra nanoseconds, I think I'll provide a dummy detail mesh generation process which just triangulates the polygons instead of trying to add more detail. Or maybe even support that at runtime like in the old days.

Apart from a few offenders, each Recast call takes very little time, so it is possible to time-slice the generation.

Sunday, January 9, 2011

Compressed Heightfields

One of the items on my TODO list for 2011 is support for temporary obstacles. I posted earlier about the possibility to compress the heightfield and use part of the Recast process to regenerate tiles much faster.

I have been prototyping heightfield data compression recently. The above picture shows the test scene I have been using. It has both simple plane-like surfaces as well as a building with many levels.

The compression process starts by stripping down and packing the heightfield data. I store elevation, free space and area type for each walkable span. In addition to that, the ground grid stores the number of layers per column. See my previous post on the topic for a bit more detail.
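As a rough sketch, the per-span record described above might pack into a few bytes like this. The struct names and field widths are my guesses for illustration, not the actual on-disk format:

```cpp
#include <cstdint>

// Hypothetical "lean heightfield" span record: just enough data
// (elevation, clearance, area type) to rebuild a compact heightfield.
struct LeanSpan
{
    uint16_t elevation; // height of the walkable floor, in voxels
    uint8_t  freeSpace; // free space above the floor, in voxels
    uint8_t  area;      // area type id
};

// Per-column entry in the ground grid.
struct LeanColumn
{
    uint8_t layerCount; // number of walkable spans stacked in this column
};
```

At 4 bytes per span plus 1 byte per column, large uniform regions compress extremely well with any cheap byte-oriented compressor, which fits the libLZF numbers below.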

That is the minimum amount of data needed to reproduce a compact heightfield for a particular tile. The minimal heightfield data for the test scene takes 713 kB. This is more than the 512 kB I reported earlier. The reason is that each tile also contains a border, which is required in order to make the tiles match at the borders.

The compressed data for the whole scene takes 82 kB. This is the amount of data that needs to be kept in memory to regenerate any tile in the scene. It is quite a bit more than the 32 kB I reported earlier. The difference between the tests is that the old data was compressed as one huge chunk using zip, while my recent prototype compresses each tile individually and uses libLZF, which does not compress as well but is a lot faster.

For comparison, the actual navmesh for the test scene takes 98 kB. I will provide some timings as I get further with the implementation.

Changes in the Build Process

Currently the build process for a tiled navmesh goes like this:

for each tile
- rasterize triangles
- find walkable cells
- mark areas
- build regions
- build contours
- build poly mesh
- build detail mesh
- store navmesh

With a compressed heightfield the process changes as follows:

Preprocess
for each tile
- rasterize static triangles
- find walkable cells
- mark areas
- store compressed heightfield

At Run Time
for a changed tile
- uncompress heightfield
- mark obstacles
- build regions
- build contours
- build poly mesh
- build detail mesh
- store navmesh

Triangle voxelization is usually the dominating factor when generating a tile. For example, it takes 71.5% of the time to generate the highlighted tile in the test scene.

The perfectionist in me would still like to find a way to use just the Detour data to adjust the navmesh for temporary obstacles, but I think this test has been worth the effort so far.

Thursday, January 6, 2011

Loosely Ordered Mega Navigation Grids

I got this stupid idea while reading an AiGameDev article on the fixed point math used in PathEngine. A light lit up above my head when I read the section related to streaming. PathEngine does not try to match the navigation representation at neighbor tile borders; instead the navigation tiles overlap, and the agent switches to another mesh when it is over it. At first it feels odd, but it is kinda awesome too.

That is sort of my idea. What about a navigation representation which is based on overlapping 2D representations of the space? Sort of like HPA*, but instead of square tiles laid out on a grid, the tiles could be placed anywhere in 3D and would have height data too? That way you could have full 3D navigation–overhangs and all that jazz–but the dynamic awesomeness of a 2D grid!

So I thought I'd test it out. In the above screenshot you can see the very first step. I initially had the idea to use a cubemap, but the irregular sample distribution on the ground made a lot of things really complicated.

What you see in the screenshot is a 32x32 voxelized section of the scene. The blue area represents a 2D heightfield placed at the location of the red cross. Notice how there is a black patch under the bridge, since the bridge is visited first.

My idea is to use a graph at the higher level to find a path through the tiles, and to use my earlier attempt at a local grid planner to find the path between waypoints. The overall design fits well into the Navigation Loop thinking too.

I see all kinds of interesting opportunities for this method, such as using different accuracies for different waypoint patches, dynamically creating new patches as the scene changes, etc.

The main idea behind this design was handling changes in the navigation data at runtime, without sacrificing any of the properties of navmeshes.

The amount of data is of course one concern. A heightfield and some flags probably take around 1 kB per patch, but the great thing is that you most likely need to store very few patches–just the ones around the agents–and use a cache to manage them, a sort of mega-texture for navigation grids.