Thursday, December 31, 2009

Reactive Navigation Mash-up

I've been reading some papers recently which try to solve reactive local navigation by finding obstacle interference in different spaces and by using co-operative methods. Here's a couple of those papers.
Hexmoor and Krishna have a number of other papers on their approach too; they can be found via Hexmoor's publications page.

Space Transformations
Parameter space transformation feels like something that could be a good base for local avoidance using animation driven motion. The rough idea is that you have a parametrized trajectory, which could be an arc, a two-point turn, or anything else, and you transform obstacles in the world into a space where the trajectory is one dimension and the parameter another.

My TOI sampling method basically does that using straight trajectories. Another notable velocity space local navigation method is Dynamic Window Approach.

Co-operation for Conflict Resolution
The good and the bad of many reactive obstacle avoidance methods is that they are local and egocentric. You don't need much information from other agents in order to avoid them. That was the good bit.

The hard thing is that it often results in flickering solutions. A classic example is playing chicken, that is, two agents moving towards each other who should steer so as not to collide (a similar case arises when a faster agent tries to overtake a slower one). One potential failure is that both agents choose to avoid each other from the same side; on the next iteration 50 ms later they decide to avoid from the other side. They will keep alternating until they eventually collide.

RVO, HRVO and many other methods have been proposed to solve the case. Krishna's and Hexmoor's work continues that saga by using an additional graph structure to solve the conflicts. I'm still digesting their solution, but it brought me an idea.

Reactive Navigation as Collision Response Problem
The first impression I got from the co-operative graph method was of the islands used in collision detection. Then I remembered the different spaces where collisions can be detected, and the next obvious thing was an idea combining the two and bringing aboard a few things from collision detection.

One way to look at the navigation problem is to think of it in 3D space, where x and y are the Cartesian coordinates of an agent and z is time. In that space-time world, the agents become skewed cylinders.

Now that our aim is to create collision free paths in that space, we can do that simply by running collision detection and response in that space. So I went and prototyped that, and it works pretty well in simple test cases. The result was a boring scene of four agents crossing each other without bumping into things (yay!), as can be witnessed in the following picture.

I used a similar iterative method as you would use for verlet collision detection. For each pair (i,j) of agents, I calculate the nearest location along their relative velocity:
vab = ai.vel - aj.vel
closestPt_PointRay(aj.pos, ai.pos, vab, t)
hiti = ai.pos + ai.vel*t
hitj = aj.pos + aj.vel*t
After that I proceed with the usual circle-circle collision response by moving them apart, and that new resolved location becomes the new heading of the agent. Shake it for a dozen iterations and you have resolved your future collisions. When the agents are really close to each other the result is steering away from the other agent. This leads to some jags in the path (as can be seen in the picture too).
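As a rough sketch, the pairwise step described above might look like the following. This is my own reconstruction of the idea under stated assumptions (2D agents, agent radii resolved with a circle-circle test at the predicted closest approach), not the original prototype code; all names and constants are illustrative.

```cpp
// Hypothetical sketch of the iterative space-time collision response.
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };
static Vec2 operator-(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
static Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
static Vec2 operator*(Vec2 a, float s) { return {a.x * s, a.y * s}; }
static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

struct Agent { Vec2 pos, vel; float rad; };

// Time at which the two agents are nearest along their relative velocity,
// clamped to [0, horizon] (the "closest point on ray" step).
static float closestApproachTime(const Agent& a, const Agent& b, float horizon) {
    Vec2 dv = a.vel - b.vel;
    float len2 = dot(dv, dv);
    if (len2 < 1e-6f) return 0.0f;                 // moving in parallel
    float t = -dot(dv, a.pos - b.pos) / len2;
    return t < 0 ? 0 : (t > horizon ? horizon : t);
}

// One relaxation pass: predict each pair's closest approach, and if the
// predicted circles overlap, push the predicted positions apart and derive
// new velocities from the corrected positions (verlet-style response).
static void resolvePass(std::vector<Agent>& agents, float horizon) {
    for (size_t i = 0; i < agents.size(); ++i) {
        for (size_t j = i + 1; j < agents.size(); ++j) {
            Agent& a = agents[i];
            Agent& b = agents[j];
            float t = closestApproachTime(a, b, horizon);
            if (t < 1e-3f) continue;
            Vec2 ha = a.pos + a.vel * t;           // predicted positions
            Vec2 hb = b.pos + b.vel * t;
            Vec2 d = ha - hb;
            float dist = std::sqrt(dot(d, d));
            float pen = (a.rad + b.rad) - dist;
            if (pen <= 0 || dist < 1e-6f) continue;
            Vec2 n = d * (1.0f / dist);
            ha = ha + n * (pen * 0.5f);            // split the correction
            hb = hb - n * (pen * 0.5f);
            a.vel = (ha - a.pos) * (1.0f / t);     // new heading from corrected hit
            b.vel = (hb - b.pos) * (1.0f / t);
        }
    }
}
```

Running a dozen such passes over all pairs is the "shake it" step; the velocities converge so that the predicted closest approaches are no longer penetrating.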

Ideally this method should also allow including time in the response equation. If an agent is moving in some direction, the result is a nicely skewed cylinder; if the agent's location in the future is the same as its current one, the cylinder is not skewed at all. I have not quite figured out how to make the collisions affect time, but maybe there is a good way to do that.

Tuesday, December 29, 2009

Introducing Project Cane

I just added "Project Cane" [1] to Google Code. The goal of that project is to implement and explore different local navigation methods and pit them against each other.

I also hope to gather a good selection of realistic test cases in the process: cases that I have seen cause problems before, and cases that turn up while testing new code. Eventually the result of this project will end up as a module for Detour.

The code is quick and dirty. If you want to reuse the code in your project, I encourage you to reimplement it. If you have something to contribute, say, a port of an existing local navigation method, some test cases, or your favorite steering method, let me know and I'll give you access.

The current implementation includes the TOI sampling based method I have been blogging about recently. It works pretty well in many cases, but produces ugly results in certain cases and with certain parameters.

One of the biggest drawbacks is the sampling. For example, try out the example chicken_corridor.txt with fewer samples (say, 32) and it gets rather ugly. It seems to be crucial to get the tiny features right in order to make the local navigation robust in constrained areas.

Another thing that is causing weird behavior is the method's assumption that moving at constant speed is a good thing. While it makes sense (and some papers refer to that assumption too), in some cases it just creates movement which is not what I expect.

I'll be adding my HRVO implementation as soon as I get it to work with segments too.

In general, I'm not quite satisfied with any method I have tried so far. Interestingly, many of them can be tuned so that they solve specific test cases nicely. And usually the tuning has to do with how tiny details in velocity handling are treated.

In certain cases it is beneficial to filter out higher frequency details to create steady and flicker free velocity selection, but sometimes the right solution can only be found by looking at the fine details (such as the case mentioned above).

In this kind of case my attention usually turns to some sort of hierarchical method. Translating that to the above problem, it could be possible to downsample the sampled TOIs, use that to select a rough sector from which the velocity will be chosen, and then use the more accurate samples to refine the selection. Velocity selection is a really hard problem. No wonder that most of the studies eventually resort to search (e.g. VFH, VFH+, VFH*).

After a steady progress of research, I feel that I'm back to square one again, maybe a bit wiser this time around.


[1] Yes, I'm a fan of House M.D., and local navigation is like poking around with a cane anyway.

Wednesday, December 23, 2009

Stumbling Towards the Holidays

After a small hiatus, I've been grinding off my to-do list. The first item was to finish the local navigation proto.

Firstly, I implemented static segment obstacles into my testbed to create more realistic scenarios. The outcome was badly stuck agents. It was way worse than I expected it to be.

After adding the graph based steering helper from my previous post, things immediately got much better. You can see that in action at the end of the video. The obstacle graph is the extra blobs, connected by arcs. The connected islands are formed and each labeled using a simple graph flood fill. The edge segments should be tessellated as well, but I was testing the original idea, so in the video they just kinda got left out.

Instead of the graph walk, I use a similar method as in Oriented Visibility Graphs to calculate the new desired movement direction (the longish white arrow in the video). The cool thing is that my version can handle many kinds of local minima, but not all.

The result from the obstacle graph is somewhat present in the TOI sampling too; I wonder if it would be possible to combine the two algorithms.

I hope I can squeeze out a few evenings during the holidays and finish off and release the proto.

Before that happens, thank you to all of you who have followed and commented on this blog. I hope to keep up a good pace of postings next year, too. Until then,

Merry Merry, Happy Happy and New Year too!

Wednesday, December 9, 2009

Recast Debug Draw Changed

I just submitted a change to the SVN where I moved the debug draw functions to a separate folder, DebugUtils. Now the Detour debug draw code uses the same abstract interface too, so it is free of any SDL and OpenGL dependencies.

As an added bonus, I also added a couple of functions to dump the Recast poly mesh as well as the poly mesh detail as an .obj file.

[EDIT] Of course there were some missing files in that change list. Revision 86 has all the needed files.

Monday, December 7, 2009

Improving Local Avoidance

Chris Paulson asked in the comments of another post how to improve steering based local avoidance. The reply grew quite large, so I thought I'd put it here as another post instead.

All kinds of local steering behaviors have the problem of getting stuck at local minima. That is what Chris is experiencing too, and that is what my recent research is trying to solve.

When splitting the system into reactive (local) and planning (path finding on a static graph) navigation, the level of detail where the split happens is important. If the high level plan is too coarse, the local planning is not able to resolve the situation. I think that is what Chris is experiencing.

I have noticed that agents usually do not escape local minima which are larger than twice their radius. For example, two barrels next to each other are still ok to avoid locally, but if they are spaced more widely, so that the agent still cannot move through them, then they may be too big an obstacle. Add another barrel or a wall to make it a bit more U-shaped and you get a case which is not easily solvable using reactive local navigation.

Local and Global Navigation

My reasoning has always been that as long as the results would be unpredictable anyways, use local navigation. Moving obstacles are always avoided locally, since at least in game environment we can assume that their movement is unpredictable.

Once they come to rest and touch other objects, they should be included in the planning level navigation graph. The Recast/Detour way is to update a tile once something has changed in it. As an example see Yakov's work.

The reasoning is to keep things as simple as possible. Modifying the navigation mesh allows the rest of the system to always use the same format of navigation data. Obstacles that change the planning graph do not need to be special cased in the code.

An alternative to rebuilding a whole tile is to just cut holes into it. Cutting holes into polygons is not exactly easy to get right, and that cookie cutter method can only obstruct movement, not allow it. For those reasons (and because Recast is pretty fast ;)), my preferred solution is to build a small piece of the navigation mesh every time it is necessary. In contrast to Yakov's work, I probably would not update the mesh as often as he does.

Obstacle Graph

If dynamic mesh updates scare you away, or you simply cannot do them, local navigation can be enhanced to help avoid getting stuck at local minima. One way is to add an additional representation for it, for example see [1]. Another, maybe simpler idea would be to find islands of touching obstacles, calculate their boundary, and steer towards the next "shadow boundary" of the obstacle island.

This proposed method has certain similarities with [2], which Google surfaced for me when I was searching for points-of-visibility graphs (there was a recent discussion about them).

As an example, take a look at the picture at the top of this post. There the orange agent is trying to avoid a set of barrels. Most reactive local navigation methods would probably steer the character left into the local minimum between the leftmost barrel and the wall.

To aid this situation, it is possible to calculate a graph of all the obstacles which are touching (checked using the obstacle radius plus the agent radius). Then sort all the links of each obstacle based on their angle.

Using this data it is possible to locate the navigating agent by finding its spot between two links at the nearest obstacle in the graph. Then you can follow the contour of the obstacle island by always choosing the next adjacent link left or right (that is, in the per-obstacle sorted list of links), depending on which direction we want to traverse the island. This traversal follows the contour of the obstacle island because the links were sorted.
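The graph construction described above can be sketched roughly as follows. This is a minimal illustration of the idea, assuming circular obstacles; the structures and names are my own, not code from the post.

```cpp
// Hypothetical sketch: link obstacles whose agent-inflated circles touch,
// then sort each obstacle's links by angle so that a contour walk can pick
// the next link left or right.
#include <cmath>
#include <vector>
#include <algorithm>

struct Obstacle { float x, y, rad; };

struct Link { int target; float angle; };

static std::vector<std::vector<Link>> buildObstacleGraph(
    const std::vector<Obstacle>& obs, float agentRad) {
    std::vector<std::vector<Link>> links(obs.size());
    for (size_t i = 0; i < obs.size(); ++i) {
        for (size_t j = 0; j < obs.size(); ++j) {
            if (i == j) continue;
            float dx = obs[j].x - obs[i].x;
            float dy = obs[j].y - obs[i].y;
            // Touching check: both circles inflated by the agent radius.
            float reach = obs[i].rad + obs[j].rad + 2.0f * agentRad;
            if (dx * dx + dy * dy <= reach * reach)
                links[i].push_back({(int)j, std::atan2(dy, dx)});
        }
        // Sort so adjacent entries are angular neighbours around obstacle i.
        std::sort(links[i].begin(), links[i].end(),
                  [](const Link& a, const Link& b) { return a.angle < b.angle; });
    }
    return links;
}
```

With the links sorted per obstacle, following the contour is a matter of stepping to the link that is the angular neighbour of the one you arrived through, to the left or right depending on the traversal direction.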

During the traversal, we can find the max steering angle left and right. Steering towards this angle will lead us out of any local minimum. It is also possible to calculate the winding of the traversal and check if the agent is inside or outside a set of obstacles. If the traversal code hits a wall, steering in that direction should be made invalid. If both sides are blocked, we know that the link from the current navigation node/polygon to the next one is blocked.

One way to resolve this is to temporarily disable the link and replan. The temporary disable could last until a certain time has passed, or until any of the obstacles in the island have moved. It is really important to get this feedback part right. If you get stuck in local navigation, you should never replan unless you can fix the issue in the global graph!

My favorite still is to adjust the navmesh even if it is a bit more complicated to implement.

[1] Automated static and dynamic obstacle avoidance in arbitrary 3D polygonal worlds
[2] Oriented Visibility Graphs: Low-Complexity Planning in Real-Time Environments

Thursday, December 3, 2009

Detour Cleanup Complete

I just committed a few changes to the SVN which remove dtStatNavMesh and dtTiledNavMesh. The brand new dtNavMesh has the combined functionality of them both.

Migration should be as easy as renaming dtStat* or dtTile* to dt*. There are two different init() methods depending on whether you use the mesh as a solo or tiled mesh.

dtNavMesh now also has a BVtree per tile for faster polygon lookup, which is good news if you are using large tile sizes. The tile mesh limitations, max tile and poly count, are now parameters which can be set at runtime. The max node count for the A* is now an init param too, instead of a magic constant in the sources.

The reason for this merging was that it was becoming a burden to manage the two types of meshes which were supposed to have the same functionality.

I apologize again for the trouble.

Constrained Movement Along Path Corridor

I just added simple path iterator code for Detour. I have been postponing this Detour feature for quite some time now since I was not sure how my recent research on obstacle avoidance would affect it.

Even if path following sounds easy, it can be really hard to get it to work in all cases. I've yet to see any good piece of code which does path following robustly. There is a simple, stupid case which is really hard to get right: what to do when you encounter an in-between path point?

Most code out there just checks if the character is close enough to a path point and then starts steering towards the next one. The problem is that, depending on how you generate the steering velocity, this can lead to circling around the path point, and if you increase the threshold too much, the path gets shortcut.

I know quite a few games which have successfully shipped with the most awful path following code out there.

The most inspiring recent method to follow paths can be found in Indicative Routes for Path Planning and Crowd Simulation. It feels a lot like the distance based GPU heightfield raycasting. While interesting, the method has one flaw: in order not to shortcut, the radius of the agent has to be subtracted from the radius of K, which in practice creates the silly singularity problem again.

The simplest and most robust path following code would of course just interpolate over the spline, but if you want to do some sort of local steering to avoid other moving agents, that will not work anymore.

I don't know the perfect solution, but here's my current shot at it.

Global and Local Planning

I split the navigation into two problems: global navigation and local navigation. This is a common way to overcome an uncertain environment in, for example, robot navigation research.

The point of global navigation is to find a high level path to the target location. In practice when using navigation meshes this means finding the list of convex navigation polygons which will lead us to the target. I use A* for this search. The global navigation structure is static, but can change to reflect changes in the game world.

The second part is local navigation; it executes the maneuvers to move the agent from the start location to the goal location via the path corridor.

Since I'm aiming to solve the navigation problem amongst multiple moving agents, I cannot use one-shot shortest path generation, since its results will be invalidated immediately when any other agent moves. For this reason, I do not find the full straight path spline to the goal. Even if straight path string pulling is really fast with Detour, finding the whole path would just waste processor time.

Instead, on every tick a character wants to move, only the next few points (A and B in the above picture) of the straight path are calculated. These few path points are used to calculate a velocity to steer the agent towards the target.

This method also ensures that if an agent overshoots a path point, we still always get the shortest path towards the target, and it prevents silly things like trying to go back to an intermediate path point which we have actually already passed.

In order to avoid the singularity case when the agent is close to one of these path points, point B is used when the character is really close to point A. This is not optimal, but currently I do not have a really good alternative. With Detour this will not cut corners, though, because of the way the movement model along the navmesh works.

Moving Along a Path Corridor

The goal of the movement model along navmesh polygons is to move the agent through navigable space, or in case of collision slide along the navmesh boundaries.

In order to keep the whole system robust, the agents are kept, and assumed to be, inside the navigation mesh at all times. In practice this is not quite the case: when an agent is close to a polygon edge, floating point accuracy will make the agent flicker in and out of the polygon boundaries.

The above picture illustrates a case where agent A is moving in the direction of the arrow towards T. Because the movement vector is colliding with a polygon edge, the target location is projected onto the polygon boundary so that it results in sliding motion. There are several ways to do this.

A classic way to do it is to iteratively cast a ray (or ellipsoid) and clamp it based on the hit surface normal until the movement is resolved. This method is quite hard to make robust because there are a lot of cases where you need to handle collisions between parallel segments.

Since our movement model is supposed to travel inside convex volumes, we use a slightly different and more robust method: the whole iterative ray-casting loop can be replaced by finding the closest point to the target location on the boundary of the polygon.

If you want sliding movement inside a convex polygon, that is all you need to do! While this may not be the most numerically robust method, the logic of the method is robust. We simply move the agent and clamp it to the polygon boundaries. Even if the agent is slightly outside the polygon because of precision issues, it can be treated as being inside at all times.

This method can be extended to move along path corridor.
  • We start from the current navigation polygon (PA), current location (A) and start moving towards the target location (T).
  • If the target is inside the polygon, we set the current location to the target and we are done!
  • In case the target is outside the polygon, the current location will become the target location clamped to the nearest point on the polygon boundary (B).
  • If this clamped position is really close to the portal edge which leads to the next polygon (E), we move the current polygon to the next one (PB).
  • If the clamped position does not touch the portal edge, we know that we cannot move any closer to the target and we'll stop.
  • This process is repeated until we either reach the target location or cannot move anymore.
The process may sound a bit backwards, but because of the way it is implemented we can dodge a lot of robustness issues.
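The steps above can be sketched as follows. This is a 2D toy reconstruction under stated assumptions (convex CCW polygons, portals stored as segments between consecutive corridor polygons); the names are illustrative, not the Detour API.

```cpp
// Hypothetical sketch of constrained movement along a path corridor.
#include <cmath>
#include <vector>

struct Pt { float x, y; };

struct Poly { std::vector<Pt> verts; };   // convex, counter-clockwise
struct Portal { Pt a, b; };               // edge from polys[k] to polys[k+1]

static Pt closestOnSegment(Pt p, Pt a, Pt b) {
    float abx = b.x - a.x, aby = b.y - a.y;
    float len2 = abx * abx + aby * aby;
    float t = len2 > 0 ? ((p.x - a.x) * abx + (p.y - a.y) * aby) / len2 : 0;
    t = t < 0 ? 0 : (t > 1 ? 1 : t);
    return {a.x + abx * t, a.y + aby * t};
}

// Return p unchanged when inside the convex CCW polygon, otherwise the
// closest point on the polygon boundary.
static Pt clampToPoly(Pt p, const std::vector<Pt>& poly) {
    bool inside = true;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        float cross = (poly[i].x - poly[j].x) * (p.y - poly[j].y)
                    - (poly[i].y - poly[j].y) * (p.x - poly[j].x);
        if (cross < 0) inside = false;    // interior is left of each CCW edge
    }
    if (inside) return p;
    Pt best = poly[0];
    float bestD = 1e30f;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        Pt c = closestOnSegment(p, poly[j], poly[i]);
        float dx = p.x - c.x, dy = p.y - c.y;
        float d = dx * dx + dy * dy;
        if (d < bestD) { bestD = d; best = c; }
    }
    return best;
}

// Walk the corridor: clamp the target to the current polygon, advance through
// the portal when the clamped point touches it, and stop when either the
// target is reached or movement is blocked.
static Pt moveAlongCorridor(Pt target, const std::vector<Poly>& polys,
                            const std::vector<Portal>& portals, int& curPoly) {
    for (;;) {
        Pt clamped = clampToPoly(target, polys[curPoly].verts);
        float dx = clamped.x - target.x, dy = clamped.y - target.y;
        if (dx * dx + dy * dy < 1e-6f) return clamped;        // reached target
        if (curPoly + 1 >= (int)polys.size()) return clamped; // end of corridor
        Pt onPortal = closestOnSegment(clamped, portals[curPoly].a,
                                       portals[curPoly].b);
        float px = clamped.x - onPortal.x, py = clamped.y - onPortal.y;
        if (px * px + py * py > 1e-4f) return clamped;        // blocked, stop
        ++curPoly;                                            // step to next poly
    }
}
```

Note how the loop never needs a robust segment-segment intersection: each step is only a clamp to a convex region plus a proximity test against the portal, which is where the robustness comes from.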

Towards Obstacle Avoidance

The way to extend this to obstacle avoidance is to adjust the velocity. First, the velocity towards the target is calculated as per the above description, then it is adjusted according to your favorite local avoidance method (for example the method described earlier in this blog), and finally the resulting velocity is used to calculate the next movement step.

Since the obstacle avoidance may sometimes need to deviate from the corridor, I expect that I will need to revise the navmesh collision response at some point. Instead of allowing movement only to the next polygon along the path corridor, it should be possible to move to any neighbor polygon and just append or remove polygons at the beginning of the path. There are a couple of details I have not solved yet, though.


I hope I did not skip too many details while writing this. I just got the last pieces together today while implementing this. The updated code can be found in the Recast SVN at Google Code. The path iterator can be found in Sample_SoloMesh.cpp, in the toolRecalc() method; check the path find tool code. The Windows project is yet to be updated.

[EDIT] Visual Studio project and Win32 exe updated.

Tuesday, December 1, 2009

Zak Pak is a nice package

Recently I have played board games much more than computer games; maybe the New Super Mario Wii is an exception. We got a new game a little while ago called Zak Pak (Pack & Stack).

I was immediately attracted to the wooden pieces and the nice graphics of the game. Some board game sites also seemed to like the game. After the first couple of rounds with our good friends, I was not so enthusiastic about the game anymore.

The biggest problem of that game is that it has many layers of randomness laid on top of each other. First you pick random pieces which you need to fit into a random truck. Surely the outcome is very random too.

The most stupid rule of them all is the way the trucks are picked. First everyone picks up 2 truck cards (at random), and then all the truck cards are turned over at the same time. Then you choose the one truck from the other players' truck cards that best fits your random piece set. Fine, but then... the last person to pick up a truck must discard it and draw a card at random from the truck card deck.

Since everyone was concentrating on picking the best card, there was no time to referee who was the last person to pick up a card. This leads to a lot of frustration as people sort out who is the loser. And as an added bonus, the game is scored by penalty too.

So there are some stupid little rules which give this game bad frustration. Playing a hard level in Super Mario is good frustration; it makes you try harder!

The reason I recommend this game to everyone interested in game design is that it screams to be redesigned. Everything that comes with the game is gorgeous, the cards, the pieces and the dice. It is just the rules that suck :)

So the second time we played the game we tried our own variation, which is quite different. There are a bunch of rule variations at the BoardGameGeek website too. Our rules were as follows:

Pakkaa & Stakkaa


Separate the 4-high trucks from the deck, shuffle them, and put them on the table in a row. Shuffle the rest of the truck cards and place them in a pile on the table. Each player turns over one truck card from the pile and gets 25 pieces of money. The game is about to begin.


Loading: At the beginning of a turn, a player rolls the dice and picks up pieces accordingly. The pieces are loaded into the truck until it is full or no piece fits anymore. The pieces that were placed in the last round cannot be moved anymore.

At any time during his turn the player can purchase one extra car slot for 50 pieces of money. The player can draw the new truck immediately after the purchase. If the player runs out of money the extra car slot can be sold for 25 pieces of money. The extra car slot allows the player to keep 2 trucks on the table all the time.

As long as some are available, the player is also able to purchase any of the 4-high special trucks for 20 pieces of money. The special trucks are for one-time use only; once they are full and cashed in, the truck is gone. The special trucks score a little bit differently.

Scoring: At the end of each turn, excess cargo pieces will be penalized and fully loaded trucks can be cashed.

For each cube that is left out of the load, the player has to pay two penalty points. The excess pieces are returned to the piece pile.

If a truck is fully loaded, the player gets one piece of money per cube (1x1 piece) of the load. After that the pieces are returned to the piece pile, the truck card is put on the discard pile, and the player draws a new card from the truck pile to replace it.

If the truck being cashed in is a special 4-high truck, the dark blue die is rolled first. If the result is blank, the player gets no money from any of the cargo, but gets a new truck to replace the old one. If the result of the roll was 1, the player gets the normal price of 1 money per cube, but if the result was 2, the player gets 2 money per cube.


The game ends on the round where the last truck card is drawn. Unfinished trucks are discarded. Points are counted and the winner is the one who has the most points.

The result was not an awesome game, but it was a lot of fun to come up with the rules and iterate over a few rounds. Initially we allowed, for example, overloading the truck and used a similar dice roll to score that truck, but it did not quite work that well.

The extra car slot worked better than I expected. In the first half of the game, the players tend not to get any penalty points at all, but on the latter half it gets more dramatic.

For a two player game the pieces never ran out, even though I feared that might be the case. It usually takes about two rounds to fill one truck.

Currently our game feels a bit too much like solitaire. For the next session I want to try out some rule for exchanging excess pieces. Maybe trading them, or some other action that allows players to help or mess up other people's trucks.

Anyways, I recommend Zak Pak, not because it is a good game, but because it can be a good game design exercise. If you have your own version of Zak Pak, please share it!

Friday, November 27, 2009

Obstacle Avoidance Experiments

I had a little time today to do some more experiments with obstacle avoidance. It is starting to shape up nicely; I hope that in the next update I can share some interactive examples too.

Today's experiment was to solve an unfortunate side effect of the sampling method I chose, namely initial collision. If two agents are overlapping, the sampling will always return blocked and the two agents will get stuck. I previously sidestepped this problem by just ignoring the initial collision.

The collision most likely happens because of floating point inaccuracy, or because the agent movement code did something a bit different from what the local navigation told it to do. Some situations in games can make that happen, so it is good to be able to handle it.

First I tried to adjust the time-of-impact values based on the initial collision, but that did not work out as well as I hoped. Eventually my solution was to calculate both time-of-impact and time-of-exit values per sampling direction. The same code is actually able to return both values easily (they are calculated using ray-circle intersection; TOI is the first hit and TOE is the second).

My final solution was to add yet another component to the sample scoring calculation (remember, the sample direction with smallest score is selected). Previously the score or penalty was calculated as follows:

score = Wa*da + Wt*1/toi

Where da is the angle difference between the desired direction and sample direction and toi is time of impact in that direction. Wa and Wt are weights, I have used values 2 and 1 for the weights respectively in my test.

If there is a collision, I'd like the agent to move in the direction which resolves the collision as quickly as possible. The best direction is where the time-of-exit is smallest. TOE is zero if there is no initial collision. For each sample, the largest TOE among all the hits is selected. The modified score calculation looks as follows; I used We=100 in my tests.

score = Wa*da + Wt*1/toi + We*toe^2

The result is that the agent prefers to move away from the collision, but does not select an overly stupid direction because of that unfortunate state.
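The scoring above might be sketched like this. The weights match the values mentioned in the post (Wa=2, Wt=1, We=100), but the helper names and the exact form of the ray-circle test are my own illustration, not the Project Cane code.

```cpp
// Hedged sketch of TOI/TOE sampling and the penalty score (smallest wins).
#include <cmath>

// Intersect a ray (origin p, direction d) against a circle of radius r at c
// (the agent radius is assumed to be folded into r). Returns false on a miss;
// otherwise toi is the entry time and toe the exit time along the ray
// (toi < 0 means the ray starts inside the circle).
static bool rayCircle(float px, float py, float dx, float dy,
                      float cx, float cy, float r,
                      float& toi, float& toe) {
    float mx = px - cx, my = py - cy;
    float b = mx * dx + my * dy;
    float c = mx * mx + my * my - r * r;
    float disc = b * b - c;               // assumes (dx,dy) is unit length
    if (disc < 0) return false;
    float s = std::sqrt(disc);
    toi = -b - s;                         // first hit: time of impact
    toe = -b + s;                         // second hit: time of exit
    return toe >= 0;                      // circle not entirely behind us
}

// Penalty for one sample direction: da is the angle difference to the desired
// direction, toi the earliest impact (clamped by the caller to the max time
// horizon), toe the largest exit time (zero when there is no initial
// collision). The direction with the smallest score is selected.
static float samplePenalty(float da, float toi, float toe,
                           float Wa = 2.0f, float Wt = 1.0f, float We = 100.0f) {
    float t = toi > 0.001f ? toi : 0.001f;   // guard against initial overlap
    return Wa * da + Wt * (1.0f / t) + We * toe * toe;
}
```

The `We*toe^2` term dominates only when there is an initial overlap, so in the normal case the score degenerates to the original angle-plus-impact-time formula.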

Another thing I tested was how the min and max time horizons affect the behavior of the agents. You may remember from earlier blog posts that the max time horizon specifies how early the agent starts to change its movement direction, and the min time horizon specifies how early the agent starts to change its movement speed.

Setting the min time horizon close to the max time horizon results in smoother, more varied movement, while a smaller min time horizon results in faster and more aggressive movement. It is pretty much as simple as that. I just had not played around with it earlier.

As expected, more aggressive settings make the crowd spread out much more in congested situations, while less aggressive settings make the crowd converge on a more swirl-like solution.

I'm quite happy with how few parameters there are to tune in this local navigation solution. I also like how legible their effect is. The overall behavior is mostly controlled by how the min and max time horizons are set. The weights allow further tuning of how much the agent favors moving towards the goal (Wa higher) versus moving fast (Wt higher).

Friday, November 20, 2009

Recast SVN Version in Flux

I'm currently merging the functionality of dtStatNavMesh and dtTiledNavMesh into one class called dtNavMesh. A first iteration is already in SVN, coexisting with the old classes. I will remove the old classes once I'm done with the merging. If you want to sync to the most recent "stable" version, please sync to revision 72. I hope to get the merging finished next week.

I apologize for the upcoming migration trouble, I think it will be for the better in the long run. If you guys have the time, please take a look at the new dtNavMesh, and let me know if you have some concerns regarding any functionality lost in merging.

Wednesday, November 18, 2009

A Couple of Recent Crowd Papers

The guys at UNC have been busy with crowd simulation recently. Here's a couple of recent papers.

Aggregate Dynamics for Dense Crowd Simulation
This paper describes a method which is similar to boids, but instead of the boids rules of cohesion and alignment, they use incompressible fluid simulation. It scales well to thousands of agents.

Directing Crowd Simulations using Navigation Fields
Directing crowd movement using a mixture of RVOs and potential fields. I can imagine this method could easily help with certain scenarios appearing in games, although it might be too expensive. It provides some interesting ideas nonetheless.

Sunday, November 15, 2009

Recast Debug Draw Interface

I added a simple abstract interface for debug drawing. Currently only the Recast debug draw code is converted over; I will do Detour later too. I hope it helps a bit in reusing the debug draw code in different projects.

The interface is pretty much as simple as it gets. It has begin(), vertex(), and end() methods. The begin() method takes the primitive type, number of vertices and primitive size as parameters. Sequential calls to vertex() fill in the primitives to be drawn, and all vertices are sent when end() is called. This should make it easy to implement both buffered and immediate-mode rendering.

Changes are available in the SVN.

I also started some preparation to get the area stuff working again. Last time I was working on it, I made too big a change in one jump. This time around I will try to make more gradual changes.

I will be merging the Detour tiled and stat navmeshes into one at some point in the near future. The impact on stat mesh users should be really small. I like the simplicity of the stat mesh, but the beauty will be gone soon anyway once areas and jump-links get implemented.

Monday, November 9, 2009

Obstacle Avoidance Video

Here's a video of the previously discussed obstacle avoidance scheme.

The obstacle avoidance method presented in this video uses time-of-impact sampling around the agent to adjust the agent's desired movement velocity (black arrow) to a direction of less potential collision (yellow arrow).

The figure around the agent displays the time of collision in each sampling direction (64 directions in the video). TOI up to a max time horizon (yellow circle) is considered while avoiding the other agents. If the TOI is less than the min time horizon (red dashed circle, the time it takes to stop the agent), the steering velocity is scaled down.

Saturday, November 7, 2009

Rediscovering Obstacle Avoidance

EDIT: Added picture describing the new method, video coming soon.

After fiddling around with the different methods to implement velocity obstacles (especially the directive circle style approaches), I noticed that there are two parameters which can be tuned to get out of the cases where the agent is completely surrounded by the velocity obstacle. The parameters are: desired movement speed and time horizon. Time horizon means the latest potential collision we are interested in avoiding (or conversely, how early something should be avoided).

The directive circle or BOA style velocity obstacle implementations assume that the agent wants to travel at constant speed. This allows simplifying some of the calculations, since the admissible velocities can be found on a circle whose radius is the desired speed. In practice it means that instead of choosing amongst all admissible velocities, we choose an admissible direction. That simplifies things a lot.

What should the system do when all the directions are blocked? There are two options, and it is quite hard to choose which one is better.

The first option is to slow down, that is, reduce the size of the circle. This usually makes the obstructed areas on the directive circle smaller and may reveal new openings to follow.

The second option is to reduce the time horizon. The higher the time horizon value is set, the earlier the obstacles are avoided. By reducing the time horizon value, the velocity obstacle cones will be truncated more, which gives the agent some extra space to maneuver again.

The directive circle (and ND, and VFH) approach loses its charm once you start trying to solve those two problems. You end up breaking their simple and intuitive premise, either building a directive circle for each velocity you might find useful or even more complex spaghetti to choose amongst them. One of the biggest problems is that you need to make decisions based on binary data, which leads to jittering when you try to chase the correct value.

This led me to another idea, which combines several things I discovered while prototyping the different methods.

Recursively Reciprocal Directive Velocity Obstacle Circle

By exercising the age old AI truth: "In case of doubt, discretize and brute force!" I managed to make yet another sampling based obstacle avoidance, which is super simple to implement, easy to understand, intuitive to tune and seems to work well in practice too.

The basic idea is:
  1. Sample the first time of collision at different directions around the moving agent
  2. Score each sample based on time to collision and how much it deviates from the desired movement direction
  3. The direction with best score is chosen
  4. That's it!
Calculating the time of impact for two moving circles is simple; it is featured in books and there is probably some code lying around the web too. The devil is of course in the details, as follows.
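
For reference, the time of impact of two moving circles comes down to solving a quadratic: find the smallest t >= 0 where the distance between the centers equals the sum of the radii. A minimal 2D sketch (not my actual implementation, just the math):

```cpp
#include <cassert>
#include <cmath>

// Earliest time at which two moving circles touch, or -1 if they never do.
// Solves |(p1 - p0) + (v1 - v0) * t| = r0 + r1 for the smallest t >= 0.
float timeOfImpact(const float* p0, const float* v0, float r0,
                   const float* p1, const float* v1, float r1)
{
    const float sx = p1[0] - p0[0], sy = p1[1] - p0[1]; // relative position
    const float vx = v1[0] - v0[0], vy = v1[1] - v0[1]; // relative velocity
    const float r = r0 + r1;
    const float c = sx*sx + sy*sy - r*r;
    if (c < 0.0f) return 0.0f;            // already overlapping
    const float a = vx*vx + vy*vy;
    if (a < 1e-8f) return -1.0f;          // no relative motion
    const float b = sx*vx + sy*vy;
    const float d = b*b - a*c;            // quarter discriminant
    if (d < 0.0f) return -1.0f;           // trajectories never intersect
    const float t = (-b - std::sqrt(d)) / a;
    return (t >= 0.0f) ? t : -1.0f;       // collision only in the past
}
```

For example, two agents of radius 0.5, one at the origin moving at unit speed towards a static one four units away, touch after three seconds.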

To get it working better, I introduced minimum and maximum time horizons. The max time horizon is used to clamp the time-to-collision values, so that every direction where a collision is not likely to happen really soon gets the same time-to-impact score. This makes sure that the nearest direction is chosen instead. A larger max time horizon makes the agent avoid things earlier; it can be thought of as a sort of shyness factor.

The minimum time horizon is used to scale down the velocity. If the time-of-impact towards the best direction is less than the min time horizon, the velocity is scaled down proportionally. That is, if the agent is standing next to an obstacle, the velocity will be scaled to zero. Or if the agent tries to run towards a wall, the system will scale the actual velocity down.

Another thing that can be adjusted is the weights used for the score calculation. If the time-of-impact is weighted more, you tend to get straighter segments of movement, while if the angle difference is weighted more, the agent tends to approach the goal more directly. Usually that means a more wiggly trajectory at the micro scale, but more goal-directed movement in general.
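
Putting the pieces above together, the direction scoring might be sketched like this, assuming the TOI per sample direction has already been computed. The names wa, wt, minHorizon and maxHorizon mirror the weights and horizons described above but are otherwise illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Steer { int best; float speedScale; };

// Pick the best of the sampled directions. ang[i] is the angle of sample i,
// toi[i] its earliest time of impact (negative = no collision at all).
// Penalty mixes heading deviation (wa) and clamped collision time (wt);
// lower is better.
Steer pickDirection(const std::vector<float>& ang, const std::vector<float>& toi,
                    float desiredAng, float minHorizon, float maxHorizon,
                    float wa, float wt)
{
    int best = -1; float bestPenalty = 1e9f;
    for (size_t i = 0; i < ang.size(); ++i)
    {
        const float t = (toi[i] < 0.0f) ? maxHorizon : std::min(toi[i], maxHorizon);
        // Wrapped absolute angle difference to the desired heading.
        const float da = std::fabs(std::atan2(std::sin(ang[i] - desiredAng),
                                              std::cos(ang[i] - desiredAng)));
        const float penalty = wa*da + wt*(maxHorizon - t);
        if (penalty < bestPenalty) { bestPenalty = penalty; best = (int)i; }
    }
    if (best < 0) return Steer{-1, 0.0f};
    // Scale speed down when the chosen direction collides before minHorizon.
    const float t = (toi[best] < 0.0f) ? maxHorizon : toi[best];
    const float scale = (t < minHorizon) ? (t / minHorizon) : 1.0f;
    return Steer{best, scale};
}
```

Because every direction with a far-off collision gets the same clamped time score, the angle term alone decides between them, which is what pulls the agent back towards its desired heading in open space.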

That pretty much solves the most annoying problems of the directive circle type of methods, namely speed selection and approaching a goal in front of an obstacle.

On top of all that I use multiple iterations. This adds the needed reciprocal, recursive, reflective or other R-starting property into the mix. What it means in practice is that the velocity selection does not flicker as much, nor does the system have problems choosing sides in head-to-head cases. It should be possible to extend the system to use the reciprocal rule too.

All in all, it is quite similar to the reciprocal velocity obstacles implementation, except that I formulated the change in speed in different terms. In practice this means far fewer samples (although you can have them back by using more iterations).

I'm so happy about this discovery that I think I'm going to go and have a beer instead of digesting yet another heap of more papers for supper :)

Friday, November 6, 2009

Recent Interesting Paper Findings

I have gone through numerous papers recently. My search terms are starting to return the same papers all the time, but every now and then there are some good ones worth sharing. Here are some which are relevant to my recent prototyping results.

A very similar approach to the directive circle method. It discusses the reasoning behind using only the maximum velocity when selecting the velocity.

I had the gut feeling that multiple iterations would help the velocity selection. These papers provide some more information on the subject.

This paper discusses how VOs can be used in formation type of situations.

This paper discusses how the time horizon (how much the cone is clamped) can be adjusted.

Zvi Shiller was a co-author of the well-known velocity obstacle paper and has papers and videos on his page regarding the subject. Those non-linear velocity obstacles look pretty cool!

Thursday, November 5, 2009

Hybrid Reciprocal Directive Circle

Or HRDC for short. That is my homage to the obstacle avoidance scene. The method combines the good stuff from HRVO with the ease of implementation of DC.

I got my implementation of the directive circle working better, and in many cases it works as nicely as my geometric implementation of HRVOs. I like how it makes it easy to combine different obstacles, and it looks like implementing static segment obstacles will be pretty easy.

DCs are faster to generate, so some form of look-ahead planning could be feasible (and it could be turned off as LOD).

Wednesday, November 4, 2009

Noodle Soup Considered Harmful

I just saw a report of the enhancements they are planning for Blender 2.5. One of the enhancements was a node graph to create game logic in the Blender Game Engine. While the proposal is definitely an improvement over what they have in there, I think that in general boxes and arrows are a super horrible way to create game logic. Let me elaborate.

This idea has been bubbling under for quite a few years now. I've been in two projects where the game logic was built using some sort of flow graph. While it is an intuitive model for creating something a bit more complicated than "when this, do that", the final noodle soups look really horrible and are impossible for anyone to debug.

I have been trying to put the frustration of using the boxes-and-arrows editors into words, and finally, when I read a book called The Back of the Napkin, I realized what the problem was. The problem is that the designers are forced to answer all the questions with one kind of answer.

The Back of the Napkin is about visual problem solving. One of its ideas is that certain types of pictures answer certain types of questions best. If the only means to create game mechanics is flowgraphs, then the designer is most of the time answering the question How. If your level editor also supports some sort of timelines, you may be able to answer the When question too. Trigger volumes give the opportunity to answer a few Where questions as well.

In all the systems I've seen so far, these questions are answered using separate tools. This breaks the big picture. You do something in the timeline editor and then something somewhere else, even though technically you could be editing the game in one unified view, zooming in when necessary, and building the best picture (and representation) to describe the required task.

Mashing up these different representations is really important to let the designers get their work done. Trapcode Soundkeys is a good example of such a mash-up. It kind of goes against all programmer reason. Draw boxes to get the output of an audio analyzer? That's a lot of bullnoodle right there! But it works, it's intuitive, and you immediately get it. It answers the questions Where and How Much really well, and you can adjust it easily based on what you see coming in.

That is why the idea of logic blocks and flowgraphs as the main means of creating gameplay logic must die. Creating gameplay logic is not about logic only, it is about answering complex questions with understandable "pictures"!

P.S. Yes, the MAX/MSP/PD crowd gets some of these things right, but not all.

Tuesday, November 3, 2009

Making Velocity Selection More Stable

I have done a couple of tests to get my velocity obstacle prototype to reduce the flicker in the velocity selection. Here are a couple of things that seem to work well.
  1. Iterations: Just like you can improve collision detection with iterations, you can help the velocity selection with iterations too. Adjusting the sensed velocity slowly towards the selected velocity allows the other agents to react to the gradual change. Even after this change I often get jitter when two agents are about to start avoiding each other.
  2. Sample Selection: I originally used the sample selection presented in the HRVO code, but I noticed that I get more stable results if I just use the samples on the max velocity circle. This sometimes results in full stops, but that might not be a bad solution at all. It should be possible to use a bit of higher-level logic to choose between several states based on how crowded the situation is. This should help the animation side too.
  3. Finite Time Threshold: Truncating the tip of the VO cone helps certain situations a lot, especially approaching the goal location. There seems to be some correlation between how well the avoidance works and how far out the time threshold is, but it varies quite a lot from situation to situation. I think it is possible to adjust the threshold and get better results.
  4. Velocity Obstacle Shape: The shape of the velocity cone has a big impact on how smooth the velocity selection is. It is a no-brainer, but it definitely was not the first thing on my list to try to get the system running smoother. Making the tip of the truncated cone sharper helps the first-contact velocity selection a lot (the problem that was left unsolved by the iterations).
I think my further prototyping will concentrate on investigating how the shape of the velocity obstacle affects the avoidance. For example what happens if the shape of the VO is more like a parabola, or what if it is more like a trumpet. Another thing to prototype is how different avoidance region types affect the movement.
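
The iteration idea from point 1 can be sketched as a simple blend: instead of snapping to the newly selected velocity, move only a fraction towards it each iteration, so neighboring agents can react to the gradual change instead of a sudden flip. A minimal illustrative helper:

```cpp
#include <cassert>
#include <cmath>

// Nudge the current velocity a fraction k (0..1) of the way towards the
// newly selected one. Repeated calls converge on vsel while keeping the
// per-step change small, which damps the avoidance flicker.
void blendVelocity(float* vcur, const float* vsel, float k)
{
    vcur[0] += (vsel[0] - vcur[0]) * k;
    vcur[1] += (vsel[1] - vcur[1]) * k;
}
```

With k around 0.25 the velocity reaches most of the way to the target within a handful of iterations, yet no single step is large enough to flip the side-choosing decision of a nearby agent.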

I also did a quick prototype of the directive circle approach based on this paper: Robot Motion Planning in Dynamic Environments with Moving Obstacles and Target. It uses a similar max velocity selection method, which I found more stable, and it has an interesting trick to combine the VOs. Sampling artifacts aside, it seems to have some problems when approaching the goal location. I will test it a little more. The way the obstacles are combined in that approach allows me to test whether adding more clearance might help the steering too.

Friday, October 30, 2009

Velocity Obstacle Prototype

I made a quick velocity obstacle prototype last night. The video above shows some example scenarios.

I used velocity sampling and a side bias similar to HRVO. It is cool how some test cases produce almost symmetrical results. The last bit in the video uses more random start locations and parameters, which makes the movement more random too.

I'll release the sources once I get it a bit more cleaned up.

In order to get smooth results, the prototype clamps the max acceleration (which is the reason why the agents sometimes collide) and also slightly smooths out the perceived velocity which is used to calculate the VOs.

The test cases do not exhibit too much flickering in the velocity selection, but I think it is a problem. I will do a couple of other protos to see if there could be a simple way to make the velocity selection more coherent with some simple extra logic.

Thursday, October 29, 2009

A Random Thought to Extend VOs to Gameplay Animations

Whilst pondering LogicalError's question to my previous post and sipping that second cup of coffee this morning, I got a pretty interesting idea.

One thing that adds a lot of realism to NPC locomotion animation, and can be quite frustrating to implement, is what is usually called idle-to-move and move-to-idle animations. The idea is that when you start moving, special animations are played before the locomotion animations kick in. This allows the animators to create cool animations which can do turns and have anticipation in them.

The usual implementation problem is: how to make sure those animations can be played so that the AI moves in the right direction and doesn't bump into things? I've seen all kinds of solutions to this, ranging from simply checking whether the target point is in a navigable area, to batches of raycasts, to iterative collision sweeps.

Navigation meshes help quite a bit in that process, making the line checks cheaper. The problem is that the check can be too exact. What if you adjusted your target by just 0.745 degrees and would then be able to play your nice animation? Some more random samples? An even harder case is those other, potentially moving agents. Usually the grand solution is postponed until it is time to ship the game, and the feature drops onto the cut-features list.

The idea I got this morning was that, if the trajectory of the animation can be considered relatively straight, we could actually use velocity obstacles to validate potential directions and lengths of animations and choose the one that fits best.

The velocity obstacles can be scaled to take the maximum time into account. If the max time is infinity, the VO will look like a cone, but as the time horizon is scaled down, the tip will become more rounded and move further away from the apex (FOVT). You can check more details in the Clear Path paper. This is useful if you need to calculate collision-free movement for a certain duration of time.

In the following paragraphs I assume that the multiple velocity obstacle contains all nearby moving agents as well as solid navmesh edges as obstacles.

So based on the animation we would calculate how long it takes, and based on the end location we could calculate the velocity during the animation. Then we could calculate the multiple velocity obstacle with max time set to the length of the animation, and try to find a direction which is closest to the direction we wish to move in and which allows exactly the velocity we are requesting.

In practice this means that if the target velocity is inside the MVO, then we pick a point at the intersection of the VO edges and a circle whose radius is the length of our velocity. Voilà, problem solved!
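
As a sketch of the basic feasibility check (a single neighbor, straight constant-velocity motion, illustrative names): derive the implied velocity from the animation's displacement and duration, then test whether the swept circles collide within the animation window.

```cpp
#include <cassert>
#include <cmath>

// Is a candidate animation (net displacement over 'duration' seconds,
// approximated as straight constant-velocity motion) clear of one moving
// neighbor for its whole length?
bool animationIsClear(const float* pos, const float* disp, float duration, float radius,
                      const float* npos, const float* nvel, float nradius)
{
    const float vx = disp[0] / duration, vy = disp[1] / duration; // implied velocity
    const float sx = npos[0] - pos[0], sy = npos[1] - pos[1];     // relative position
    const float rvx = nvel[0] - vx, rvy = nvel[1] - vy;           // relative velocity
    const float r = radius + nradius;
    const float c = sx*sx + sy*sy - r*r;
    if (c < 0.0f) return false;               // already overlapping
    const float a = rvx*rvx + rvy*rvy;
    if (a < 1e-8f) return true;               // no relative motion, stays clear
    const float b = sx*rvx + sy*rvy;
    const float d = b*b - a*c;
    if (d < 0.0f) return true;                // paths never intersect
    const float t = (-b - std::sqrt(d)) / a;
    return (t < 0.0f || t > duration);        // hit falls outside the window
}
```

Looping this over every nearby agent (and treating solid navmesh edges as obstacles too) gives a yes/no answer per candidate animation, and the best-fitting clear one can be picked.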

Move-to-idles are usually even harder, since we need to hit a point a certain distance away from the target we want to reach. Potentially it might be possible to use VOs for that too. An idea could be to place the agent at the target location and maybe invert all the velocities of the other agents, or something weird like that, to project the set of points from which the target can be reached obstruction-free. Or you could just trigger the move-to-idle at a certain distance when the VO tells you that the path is clear for the duration of the animation.

Given the idle-to-move idea, of course nothing prevents you from extending it to stuff like jumps and rolls and other gameplay animations.

The idea is a bit sketchy still, but I see a certain potential to it. Let's use VOs to solve all our problems! ;)

Wednesday, October 28, 2009

Choosing the Right Velocity

I browsed through the RVO library code while checking the differences that HRVO added to it and I noticed that they smooth out the velocity quite a lot. I have to comb through the literature again to see if someone has mentioned anything about it. I remember seeing a mention in some paper that they assume that the velocities are valid at least a certain amount of time.

HRVO velocity selection is pretty nice and simple. It looks a lot like the ST (strategy) from Fiorini's paper. Basically it generates potential sample points at:
  1. all intersection points of the velocity obstacles
  2. all the intersection points of the velocity obstacle edges and the circle which has radius of max speed
  3. plus the desired velocity projected past the apex of a VO (if I understood that bit of the code correctly).
Next, all the points are validated so that they lie outside the multiple velocity obstacle, and the sample point that is closest to the preferred velocity is chosen.
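
The validate-and-pick step can be sketched as follows, with a deliberately simplified VO representation (a plain cone with an apex and two edge directions, nothing like the full HRVO construction; all names are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Simplified velocity obstacle: apex plus left/right edge directions.
struct VOCone { float apex[2]; float left[2]; float right[2]; };

static float cross2(float ax, float ay, float bx, float by) { return ax*by - ay*bx; }

// A velocity is inside the cone if, seen from the apex, it lies clockwise
// of the left edge and counter-clockwise of the right edge.
bool insideVO(const VOCone& vo, const float* v)
{
    const float dx = v[0] - vo.apex[0], dy = v[1] - vo.apex[1];
    return cross2(vo.left[0], vo.left[1], dx, dy) < 0.0f &&
           cross2(vo.right[0], vo.right[1], dx, dy) > 0.0f;
}

// Keep the candidate velocities (packed x0,y0,x1,y1,...) that lie outside
// every VO and return the index of the one closest to the preferred
// velocity, or -1 if all candidates are blocked.
int pickVelocity(const std::vector<VOCone>& vos,
                 const std::vector<float>& cand, const float* vpref)
{
    int best = -1; float bestD = 1e30f;
    for (size_t i = 0; i*2 + 1 < cand.size(); ++i)
    {
        const float* v = &cand[i*2];
        bool valid = true;
        for (size_t j = 0; j < vos.size(); ++j)
            if (insideVO(vos[j], v)) { valid = false; break; }
        if (!valid) continue;
        const float dx = v[0]-vpref[0], dy = v[1]-vpref[1];
        const float d = dx*dx + dy*dy;
        if (d < bestD) { bestD = d; best = (int)i; }
    }
    return best;
}
```

The real HRVO code does considerably more work generating the candidate set (the intersection points listed above), but the selection itself is just this closest-valid-point scan.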

The above image shows a scenario of sample points. The valid sample points are blue and the invalid ones reddish.

A critical point is what happens after that. Instead of applying the whole velocity to steer the agent, they clamp the delta from the current velocity to the new velocity by the max acceleration. Sounds like the obvious thing to do, but I did not expect it to be such a dominating factor in making velocity obstacles work. Games often fiddle with the velocity directly; it could be that I was making false assumptions.
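
A minimal sketch of that clamping step (illustrative names, not the RVO library code): the change towards the selected velocity is limited to maxAccel * dt per step.

```cpp
#include <cassert>
#include <cmath>

// Steer towards the selected velocity, but clamp the per-step change by
// the maximum acceleration. This implicit smoothing is what keeps the
// resulting motion from snapping between velocity choices.
void applyVelocity(float* vel, const float* vsel, float maxAccel, float dt)
{
    float dx = vsel[0] - vel[0], dy = vsel[1] - vel[1];
    const float d = std::sqrt(dx*dx + dy*dy);
    const float maxDelta = maxAccel * dt;
    if (d > maxDelta && d > 1e-6f)
    {
        dx *= maxDelta / d;   // shrink the delta to the allowed length
        dy *= maxDelta / d;
    }
    vel[0] += dx;
    vel[1] += dy;
}
```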

I have to give the RVO library another spin and see how much the smoothing actually affects the results.

Tuesday, October 27, 2009

Not Bumping Into Things pt.2

I stumbled upon this paper again yesterday:

The Obstacle-Restriction Method (ORM) for Robot Obstacle Avoidance in Difficult Environments

I like how they form sub-goals to overcome the obstacles. I think that is what I meant when I mentioned the coherent silhouettes in my previous post.

So my gut feeling is that coherent sub-goals are the key to making VOs a notch more reliable. Let's see if I can squeeze in some prototyping time later this week to prove my point :)

I think this sort of thinking is generally really good in game AI. If something is way too complicated to tackle with one system, or too flickery, make it hierarchical. If there are decisions that can be made at a lower pace, do that, and feed the results to the next system so that it has an easier time solving its sub-problem.

Monday, October 26, 2009

Not Bumping Into Things

Usually when I do research to solve some problem, I get fond of some particular method, try it out, and get disappointed because it did not quite solve all my cases. Then I try to track back towards the root of the idea, to figure out what the original motivation behind the solution might have been, and to find a better-suited method from there.

My relationship with velocity obstacles has reached the point that I have gotten a bit disappointed with the solution. I have a feeling it has gotten certain things right, but it does not quite deliver.

It looks like the basic idea has been reinvented in the literature over and over again. For example, one of my Google searches led me back to the old Craig Reynolds (Mr. Boids) paper from SIGGRAPH 88 called: Not Bumping Into Things.

I must have read that paper numerous times, but it did not strike me until today that it is actually describing velocity obstacles. The left side of the picture at the top of the post is from Reynolds' paper, and the scribble to the right of it is my drawing from another angle.

I also found Fiorini's dissertation, Robot Motion Planning in Dynamic Environments, which goes a bit deeper into certain details and includes the list of references that is missing from the paper usually available on the net. I wonder if his website, along with a complete list of publications, is available somewhere? The one at NASA does not respond anymore.

One reason which made me revisit Fiorini's work is that, unlike many more recent papers, he tries to attach meaning to the different safe velocity areas. One thing I noticed while testing VOs is that it is really easy to get a lot of noise in the input data, that is, the cones jump around quite a bit, and that seems to be the greatest cause of any sort of oscillation problems. Shit in, shit out, surprise, surprise!

Just take a look at this YouTube video. The flashing red rectangle at the top right corner shows the velocity obstacles of the agent. I have witnessed similar behavior in my tests. Also, because of the feedback nature of the system, once you get noise, it will amplify. One way to get rid of it (and, as a side effect, inject a horde of other problems) is to lowpass filter either the velocity that is passed to the VO calculation or the change in velocity. RVO is kinda-sorta doing that by adjusting the velocity towards the safe velocity just by a fraction.

So my gut feeling is that it is possible to build a better method to choose a good safe velocity by analyzing the arrangement of the VO structure. I also think it is possible to get better temporal behavior using tricks similar to Coherent Stylized Silhouettes. The better temporal behavior should fix the usual pitfalls of VOs, such as oscillations and choosing which side to pass, and still keep the system responsive.

Friday, October 23, 2009

Recast Detail Mesh Fixes

I just updated the detail mesh generation code in SVN, which should improve the possible bad cases of the detail mesh generation.

One case where the detail mesh generation failed previously was the Delaunay triangulation, which is used to triangulate the per-polygon detail surface mesh. This should now be fixed.

Next time around I will design a system which will dodge any kind of triangulation.

The algorithm is a little slower than the previous one and has worse big-O behavior too, so large areas will take longer to compute. If you get really horrible performance, I suggest using the tiled mesh generation so that larger polys get split because of the tiling.

Another case where the detail polymesh was sometimes bugging out was how the height samples were picked from the voxelized representation. This should be improved a bit now, but I was not able to create a simple fix for the case where the radius is zero. It is a surprisingly complicated task to gather all the height samples which fall under a polygon. Most of the time it works OK, but some corner cases are just a nightmare to get right.

So, if you have had problems with the detail meshes, grab the latest source from the SVN and give it a go.

[EDIT] I also added a few convenience versions of rcRasterizeTriangles: one which accepts indices as unsigned shorts and another which can be used to rasterize non-indexed meshes.

Friday, October 16, 2009

Delaunay Triangulations

I have a love-hate relationship with all kinds of triangulations. They are super useful tools for many problems, but at the same time they are super annoying to make robust. I have been pounding my head against the wall trying to get one Delaunay triangulator to work properly.

Bourke's code is broadly used and seems quite robust too, but it fails to deliver one feature, the convex hull, which is really important for my use. This seems to be a common problem with Delaunay triangulators which rely on a super triangle as the initial state of the triangulator.

After browsing through countless other implementations and papers, I finally found an O(n^2) algorithm last night which seems to suit my needs very well (I'm talking about the O(N^2) algorithm of the Java applet). I'm not sure which algorithm it is; it does not look like any of the usual suspects. But it does not require a super triangle, and there is a case which handles the hull too. Even better, the algorithm behaves well even if I manually set the hull edges myself. And finally, as an added bonus, the idea of the algorithm is easy to understand visually, hence I'm able to debug it myself when I run into problems with it again.

Most (if not all) of the existing Delaunay triangulation implementations I could find out there fail sooner or later if your input data has points which lie along the same line. That is a condition which the majority of my input points satisfy.
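
The collinear case is degenerate by construction: the circumcircle computation at the heart of the Delaunay condition divides by a determinant that is twice the signed area of the triangle, which is exactly zero for collinear points. A sketch with an explicit guard (illustrative, not any particular library's code):

```cpp
#include <cassert>
#include <cmath>

// Circumcircle of triangle (a,b,c). Returns false when the points are
// (nearly) collinear: the divisor 'det' below is twice the signed triangle
// area, which vanishes for collinear input, so there is no finite circle.
bool circumCircle(const float* a, const float* b, const float* c,
                  float* center, float* radius)
{
    const float det = 2.0f * ((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]));
    if (std::fabs(det) < 1e-6f) return false;
    const float bb = (b[0]-a[0])*(b[0]-a[0]) + (b[1]-a[1])*(b[1]-a[1]);
    const float cc = (c[0]-a[0])*(c[0]-a[0]) + (c[1]-a[1])*(c[1]-a[1]);
    center[0] = a[0] + ((c[1]-a[1])*bb - (b[1]-a[1])*cc) / det;
    center[1] = a[1] + ((b[0]-a[0])*cc - (c[0]-a[0])*bb) / det;
    const float dx = center[0]-a[0], dy = center[1]-a[1];
    *radius = std::sqrt(dx*dx + dy*dy);
    return true;
}
```

An implementation that never checks this determinant happily divides by (nearly) zero on collinear points and produces a bogus, enormous circumcircle, which is one common way such triangulators blow up on exactly this kind of input.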

I'm yet to do a full set of tests, but so far it looks good. The Java implementation has a couple of bugs in it, especially in how it handles face labels.

Looks like I can finally move the detail mesh generation off the TODO list...

[EDIT] The algorithm in question seems to be a gift wrapping variant of Delaunay triangulation. See A Comparison of Sequential Delaunay Triangulation Algorithms, the papers that paper refers to seem to be pretty hard to find.

Monday, October 12, 2009

Blast from the Past pt. 2

I was browsing my old hard drive last night while trying to find some old material in preparation for a lecture at a local game dev school. I found a couple of good docs I wrote in 2006, which I thought would be worth sharing. The grammar in the docs is the same awful stuff you are used to from reading this blog.

Moppi Flower Explained
During the years, quite a few people have begged for the source code of our demo from BP'04. Instead of sharing the code, I thought it was better to share the knowledge.

Proto Watch
I got a nice and shiny wrist watch as a Christmas present in 2006. It had the most horrible UI ever! Since I did not have internet at home during the Christmas break (my roommate had taken the ADSL modem with him), I thought I'd try to see if I could figure out a better UI for a wrist watch.

For the sake of proving my point, I also wrote a simple prototype of the concept. All the functionality was written in C using only unsigned bytes. I wanted to implement it on a micro-controller too, but then the company I was working for at the time decided it was time for yet another crunch. So we crunched instead.

Thursday, October 8, 2009

Blast from the Past

I just found this nice repository of old papers from the International Joint Conference on Artificial Intelligence. Lots of good stuff all the way from 1969.

One of my favorite findings was this paper from 1989 (if I'm not mistaken, the classic paper on velocity obstacles was published a decade later):

A Maneuvering-Board Approach to Path Planning with Moving Obstacles
Lou Tychonievich, David Zaret, John Mantegna, Robert Evans, Eric Muehle and Scott Martin

The year 1989 was like 20 years ago, and yet we still manage to produce games where NPCs have trouble avoiding walls and other kinds of minor things. So nice that we have Google nowadays.

One quite important trick present in that '89 paper, which is missing from the Fiorini paper (IIRC), is cone truncation. Applying the truncation helps a lot in many practical cases, especially when trying to reach your final destination. Gladly, it is well represented in Reciprocal n-body Collision Avoidance.

Tuesday, October 6, 2009

Status Update

I decided to put the area stuff on hold for a while. The update was becoming too heavy, so I need to revise my plan of attack and probably simplify a few things too.

I'm thinking about dropping the statNavMesh, keeping just the tileNavMesh, and adding helper functions to make its use as simple as using the stat mesh. It is already quite tedious to keep the different generation processes as well as runtimes up to date when a new feature is added. In the same vein, I was thinking about using only the monotone area partitioning. As it is way simpler than the watershed partitioning code, it is much easier to add new features to it, such as making it aware of the areas.

The current SVN version has several fixes which should make Recast compile on Linux and 64-bit systems. There are a couple of good bug fixes there too, which were kindly pointed out to me by August Gustavsson. One was in the tile navmesh path finder and another was a crash in the border vertex removal process.

The detail mesh has raised its hairy head again. The Delaunay triangulation algorithm I used to generate the triangles for the detail mesh does not always create a mesh which includes the convex hull of the points. Some of the Delaunay triangulators I have used in the past do that, though. I have tried a couple of alternative methods, but the ones which create the necessary convex hull usually create extra vertices along the tessellated edges, or they crash on some sliver triangles. Boy, do I hate triangulations.

I will try to see if I can hack some sort of constraining code for the outer edges by flipping a few triangles around after the triangulation, so that removing the super triangle will not eat away the useful triangles.

I will fix the detail mesh edge vertex explosion at radius zero, too.

Sunday, September 20, 2009

Tiles, Tiles, Tiles...

This is what I have been up to recently. The lack of updates the past couple of weeks is mostly due to my involvement in renovating my bathroom. It's been a lot of fun, but I think the romanticism of hard work is slowly wearing off and I'd like to do some coding for a change.

I have an apartment in an old building and none of the walls are straight, so I had to cut a lot of custom-fit tiles around the walls. One interesting thing I noticed was that in order to optimize that repetitive work, I started to use fuzzy measures, such as "a little less than 5 cm" or "a bit more than 3 and a half cm", instead of 52 mm or 38 mm.

In Finland we use the metric system, and everything construction-related is always expressed in mm or cm (except two-by-fours and two-by-twos). But in places where imperial units are used you often see fractional units; for example, something measuring 2"1/4 is not that uncommon. Even though I like that we have a universal system of units, I think it is not very practical in many cases of everyday life.

I want fractional measures back in the metric system too. Humans do a lot of binary search when estimating measures. "Five centimeters and a half and a little less and a nudge to the right" is more accurate than a millimeter!

Despite being engaged in that physical exercise, the work on Recast has not stopped. The area stuff has progressed pretty well; I have the generation side working pretty OK. It needs quite a lot of clean-up and API massaging to be released.

Quite a few people have also contributed patches to make Recast compile and work under Linux and 64-bit. I was silly enough to make those changes in the same codebase where I did my area work, so I cannot submit them quite yet. But for those who are waiting, those changes are coming.

Once I get all the hackery peeled off, I will release an intermediate release which includes some of the area progress as well as the Linux fixes.

Thursday, September 3, 2009

Area Progress

I have made some pretty good progress on the areas. I have been concentrating on marking the areas. Volume-based areas were pretty simple to do. So far I only have an axis-aligned box, but I do not expect the rest to be too complicated.
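Marking an axis-aligned box boils down to stamping an area id into every cell whose center falls inside the box. Here is a minimal sketch on a flat 2D cell grid; the `Box` struct and `markBoxArea` are made-up illustration names, and a real heightfield version would also test the box against each span's height range.

```cpp
#include <vector>

struct Box { float minx, minz, maxx, maxz; };

// Stamp areaId into every cell of a w*h grid (cell size cs) whose
// center lies inside the box.
void markBoxArea(std::vector<unsigned char>& areas, int w, int h,
                 float cs, const Box& box, unsigned char areaId) {
    for (int z = 0; z < h; ++z)
        for (int x = 0; x < w; ++x) {
            const float cx = (x + 0.5f) * cs;  // cell center
            const float cz = (z + 0.5f) * cs;
            if (cx >= box.minx && cx <= box.maxx &&
                cz >= box.minz && cz <= box.maxz)
                areas[z * w + x] = areaId;
        }
}
```

Other primitives (cylinders, convex polygons) only change the point-inside test; the stamping loop stays the same, which is why the remaining shapes should not be too complicated.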

The height-based areas were a bit more challenging. In the above picture, the blue area is marked by the black box, the yellow area is where the character can stand, the red areas are where the character can crouch, and in the blue areas the character needs to go "prone". Those are dummy categories just to test that I can handle more than one height value.

That kind of sampling tends to generate a lot of small areas here and there. For example, if a wall is slightly tilted, you may get one voxel worth of proning and three voxels worth of crouching. I tried several kinds of filtering methods to get rid of those artifacts.

First I tried a median filter, but I either had to do too many passes or the filter ate away certain small bridge areas which should have been left unaffected. After some fiddling, I tried a filtering method similar to the one I used for region merging, and it turned out much better. The filter checks adjacency, so it will not remove small areas which connect two neighboring regions (such as low doors), and it does a bit smarter merging, so that a tiny 'crouch' area can be merged into a 'prone' area but not into a 'stand' area, or else the character collision would fail.
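As a rough illustration of the merging direction rule, here is a simplified sketch on a 2D label grid. This is not the actual Recast filter: the names (`Area`, `canMerge`, `mergeSmallAreas`) are made up, and the adjacency check that preserves bridge areas like low doors is not reproduced here; only the "merge into something at least as restrictive" rule is shown.

```cpp
#include <vector>
#include <queue>

enum Area : unsigned char { STAND = 1, CROUCH = 2, PRONE = 3 };

// Merging must never make the space "taller" than it really is: CROUCH may
// become PRONE (more restrictive is safe), but nothing may become STAND.
// Assumes ids are ordered from least to most restrictive.
static bool canMerge(unsigned char from, unsigned char to) {
    return to >= from;
}

// Relabel connected components smaller than minSize into a compatible
// neighboring area, if one exists.
void mergeSmallAreas(std::vector<unsigned char>& area, int w, int h, int minSize) {
    const int dx[4] = {1, -1, 0, 0}, dz[4] = {0, 0, 1, -1};
    std::vector<int> comp(w * h, -1);
    std::vector<std::vector<int>> cells;
    int ncomp = 0;
    for (int i = 0; i < w * h; ++i) {
        if (comp[i] != -1) continue;
        // BFS flood fill of 4-connected cells sharing the same area id.
        std::queue<int> q; q.push(i); comp[i] = ncomp;
        cells.push_back({i});
        while (!q.empty()) {
            int c = q.front(); q.pop();
            int cx = c % w, cz = c / w;
            for (int k = 0; k < 4; ++k) {
                int nx = cx + dx[k], nz = cz + dz[k];
                if (nx < 0 || nz < 0 || nx >= w || nz >= h) continue;
                int n = nz * w + nx;
                if (comp[n] == -1 && area[n] == area[i]) {
                    comp[n] = ncomp; q.push(n); cells.back().push_back(n);
                }
            }
        }
        ++ncomp;
    }
    for (int ci = 0; ci < ncomp; ++ci) {
        if ((int)cells[ci].size() >= minSize) continue;
        unsigned char cur = area[cells[ci][0]];
        unsigned char target = 0;
        // Pick any adjacent, compatible area id to merge into.
        for (int c : cells[ci]) {
            int cx = c % w, cz = c / w;
            for (int k = 0; k < 4; ++k) {
                int nx = cx + dx[k], nz = cz + dz[k];
                if (nx < 0 || nz < 0 || nx >= w || nz >= h) continue;
                unsigned char na = area[nz * w + nx];
                if (na != cur && canMerge(cur, na)) target = na;
            }
        }
        if (target)
            for (int c : cells[ci]) area[c] = target;
    }
}
```

So a one-voxel crouch sliver next to a prone area disappears, while a tiny crouch area surrounded only by stand space is left alone rather than unsafely promoted.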

Next up is to enable the partition system to understand the areas. I expect that to be the most challenging part of this whole feature.

Sunday, August 30, 2009

Monotone Regions

I have been working recently on the area generation improvements. One thing I have been researching is how to generate and store the different areas. It looks like there is a need for two kinds of areas: the ones that describe the basic structure of the level, such as danger spots, and the ones that are modifiers to the previous ones, such as encoding movement type into the mesh.

Additionally, some areas may need dilation and others may require erosion. For example, the general walkable space needs to be eroded by the agent radius (if desired), while an area marked for crouching should be dilated by at least the agent radius; the same applies to danger locations too.
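Erosion by the agent radius can be pictured as morphological erosion on the walkable grid. The brute-force sketch below is only an illustration (the `erodeWalkable` name is made up); a production version, like Recast's own erosion, would use a distance transform instead of scanning a disc per cell.

```cpp
#include <vector>

// Erode the walkable area: a cell stays walkable only if every cell within
// 'radius' cells of it is walkable too. Brute force, O(n * r^2).
std::vector<unsigned char> erodeWalkable(const std::vector<unsigned char>& walk,
                                         int w, int h, int radius) {
    std::vector<unsigned char> out(walk);
    for (int z = 0; z < h; ++z)
        for (int x = 0; x < w; ++x) {
            bool keep = walk[z * w + x] != 0;
            for (int dz = -radius; keep && dz <= radius; ++dz)
                for (int dx = -radius; keep && dx <= radius; ++dx) {
                    if (dx * dx + dz * dz > radius * radius) continue;
                    int nx = x + dx, nz = z + dz;
                    // Cells outside the grid count as unwalkable.
                    if (nx < 0 || nz < 0 || nx >= w || nz >= h ||
                        !walk[nz * w + nx]) keep = false;
                }
            out[z * w + x] = keep ? 1 : 0;
        }
    return out;
}
```

Dilation is the same operation with the test inverted: a cell becomes marked if any cell within the radius is marked.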

The area generation code will most likely get quite a bit more complicated. That is one reason I've been looking into some alternative ways to partition the area into simpler pieces.

I like watershed partitioning because it generates nice regions, but a much faster alternative is monotone partitioning. One thing that is really nasty about monotone partitioning is that in practice it tends to create a lot of long sliver regions, which is not good for navigation.

The above picture shows a scene where the walkable area has been partitioned into monotone regions. The region building process is almost an order of magnitude faster (my favorite kind of performance improvement), and the total build time is about twice as fast. The normal single-pass process suffers a lot from long sliver polygons, but the tiled process is almost unaffected.

It is going to be a challenge to implement the area stuff efficiently. I'm not too worried about actually tagging the areas, but any further processing such as dilation or other kinds of filtering can become a bottleneck. Not to mention how to set up the API, and how the Detour API is going to cope with it all without exploding.

Tags and flags are nice, but sometimes they can end up bloating your compact data.

Thursday, August 27, 2009

Font Stash

Rendering strings without OS support is always a pain in the ass. Add different font faces, font sizes and languages into the mix and it is not funny any more. I like typography and I like having good font rendering in my apps. Sean's awesome stb_truetype was such a relief for me. No more big and ugly FreeType to be included with your project!

Now that actually generating the glyphs was not a problem anymore, I thought I'd try to create a font cache which would allow me to load a few font faces and render strings at any font size I please. The glyphs are rendered and packed into a texture as they are needed. There are a few recent OpenGL font rendering projects which inspired me along the way (that pango/cairo/clutter OpenGL thing and one example written in D).
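The "packed into a texture as needed" part can be done with a very simple shelf packer. This is a generic sketch, not Font Stash's actual code (the `Packer` struct is a made-up name): glyphs go left to right on the current row, and when one does not fit, a new row starts below the tallest glyph so far.

```cpp
// A minimal "shelf" packer for caching glyph bitmaps in a texture atlas.
struct Packer {
    int width, height;   // atlas dimensions
    int px = 0, py = 0;  // cursor position
    int rowH = 0;        // height of the current row
    Packer(int w, int h) : width(w), height(h) {}

    // Returns true and writes the glyph's position, or false if the atlas
    // is full (a real cache would then evict old glyphs or grow the texture).
    bool pack(int gw, int gh, int& x, int& y) {
        if (px + gw > width) {           // glyph does not fit: start a new row
            px = 0; py += rowH; rowH = 0;
        }
        if (py + gh > height) return false;  // out of vertical space
        x = px; y = py;
        px += gw;
        if (gh > rowH) rowH = gh;
        return true;
    }
};
```

Shelf packing wastes some space when glyph heights vary a lot within a row, but for glyphs rendered at a handful of sizes it is usually good enough and trivially fast.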

Another thing that was important for me was to be able to render localized strings. I like UTF-8 for its simplicity, and after googling around for a while I found Bjoern Hoehrmann's Flexible and Economical UTF-8 Decoder.
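For reference, here is what UTF-8 decoding boils down to. This is a plain length-byte sketch for illustration only, not Hoehrmann's decoder; his DFA version is both smaller and, unlike this sketch, rejects malformed input.

```cpp
#include <string>
#include <vector>
#include <cstdint>

// Decode a UTF-8 string into Unicode codepoints. Assumes well-formed input:
// the lead byte's high bits give the sequence length, continuation bytes
// each contribute 6 low bits.
std::vector<uint32_t> decodeUtf8(const std::string& s) {
    std::vector<uint32_t> out;
    for (size_t i = 0; i < s.size(); ) {
        unsigned char c = s[i];
        uint32_t cp; int extra;
        if      (c < 0x80)        { cp = c;        extra = 0; }  // ASCII
        else if ((c >> 5) == 0x6) { cp = c & 0x1F; extra = 1; }  // 110xxxxx
        else if ((c >> 4) == 0xE) { cp = c & 0x0F; extra = 2; }  // 1110xxxx
        else                      { cp = c & 0x07; extra = 3; }  // 11110xxx
        ++i;
        for (int k = 0; k < extra && i < s.size(); ++k, ++i)
            cp = (cp << 6) | (s[i] & 0x3F);  // append 6 continuation bits
        out.push_back(cp);
    }
    return out;
}
```

The glyph cache then keys on the decoded codepoint (plus font face and size), so the same lookup path serves ASCII and any localized text.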

Putting all the pieces together is Font Stash. It's Xcode only, with a simple SDL example, but it should compile on VC without much tinkering. Performance should be ok-ish; there are spikes when new glyphs are rasterized, and the running-out-of-cache behavior is not too nice. This release is not nice and it's not clean, but it might evolve into something more usable later on.

Download Font Stash 1.0

Monday, August 24, 2009

Recast and Detour 1.4 Released

The detail height mesh is finally here! I eventually managed to make it work with all the possible combinations of usage cases currently supported by the toolkit. That led me to wonder whether I should drop one of the variations in future releases. If managing all of them gets too complicated, that might happen.

This update added some changes to the build procedure too. You now need to create a detail mesh even if you do not want to use the subdivision feature. Search for rcBuildPolyMeshDetail in the samples to see how to call it. The function takes the poly mesh, the compact heightfield and two parameters as input.

The technique creates a small local triangle mesh for each navmesh polygon. Each polygon is processed in turn. First the outline of the polygon is tessellated at the specified sample distance. Then each sample point receives a new height value from the heightfield, and finally redundant vertices are removed from the outline. The edges are processed so that matching edges of different polygons will create the same curve.
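The "remove redundant vertices" step on an edge can be sketched as a Douglas-Peucker style simplification over the sampled heights. This is an illustration under simplified assumptions (equally spaced samples along one edge, made-up function names), not the actual Recast code:

```cpp
#include <vector>
#include <cmath>

// Find the sample between a and b that deviates most from the straight
// line a-b; if it exceeds maxError, keep it and recurse on both halves.
static void simplifySpan(const std::vector<float>& h, int a, int b,
                         float maxError, std::vector<int>& kept) {
    int worst = -1; float worstErr = 0.0f;
    for (int i = a + 1; i < b; ++i) {
        float t = float(i - a) / float(b - a);
        float lerp = h[a] + (h[b] - h[a]) * t;
        float err = std::fabs(h[i] - lerp);
        if (err > worstErr) { worstErr = err; worst = i; }
    }
    if (worst != -1 && worstErr > maxError) {
        simplifySpan(h, a, worst, maxError, kept);
        kept.push_back(worst);
        simplifySpan(h, worst, b, maxError, kept);
    }
}

// Given heights sampled at equal intervals along a polygon edge, return the
// indices of the samples that must be kept so that interpolating between
// kept samples stays within maxError of every sample.
std::vector<int> simplifyEdge(const std::vector<float>& h, float maxError) {
    std::vector<int> kept;
    kept.push_back(0);
    simplifySpan(h, 0, (int)h.size() - 1, maxError, kept);
    kept.push_back((int)h.size() - 1);
    return kept;
}
```

Because the result depends only on the edge's endpoint order and the sampled heights, two polygons sharing an edge end up keeping the same vertices, which is what makes the matching edges produce the same curve.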

Once the outline has been processed, the points are triangulated. Then additional sample points are generated inside the original polygon. If a sample point is too far from the triangulated height surface, it is added to the triangle mesh. This procedure is repeated until all sample points have been added or all sample points are within a certain distance of the detail mesh.

The parameter which controls the sample distance is called detailSampleDist, and the parameter which describes the maximum allowed error when the mesh or poly outline is simplified is called detailSampleMaxError. If you set both parameters to zero, you will get a simple compact mesh which does not add any additional points to the surface (that is, the old behavior).

Ideally I would have liked to have just one parameter, the max error. In practice it turned out that sharp features in the heightfield would add an excessive amount of extra vertices. The goal was to add as little data as possible while getting a somewhat better height representation of the navmesh. The sample distance acts as a low-pass filter. In order not to run into similar problems as with the texture approach, the sampling is point sampling. This allows the technique to produce detail meshes which align perfectly at polygon boundaries.

The sample distance is also a good trade-off between time and quality. The algorithm is not particularly well optimized, but it does not add more than 10% to the generation time when you use a large enough sample distance (around 4-8 cells).

The single-pass navmesh generation did not change much, but as I posted earlier, the tiled process was not as easy as I predicted. Initially I had a lot of trouble finding a way to create seamless detail meshes at the tile boundaries. Finally I decided to remove the extra vertices which are generated by the region partitioning. This turned out to work fine, even if the code which does it is not super clean.

This obviously changed the tiled generation code quite a bit. I think it is much cleaner now. The changes are not that complicated, take a look at the sample for more details.

If you are using the tile navmesh, the change to your pipeline is as simple as with the single-pass method. Again, the sample is your friend when it comes to details.

I think the Detour API for using the height values is not as full-fledged as it will eventually be. Once I get to implement the path following code I will add more useful features there. If you have requests already, let me know.

The new version can be downloaded from the usual address. Here's a direct download link for your convenience:

I think I might tackle the jump links or areas next.

Thursday, August 20, 2009

Navmesh Height Accuracy pt. 5

So fitting the detail mesh into the tiled preprocess was not as painless as I hoped. The problem is that the detail mesh requires the edges of the polygons to align perfectly, and it requires the heightfield to be present to pull the data from.

While the tiled process has all kinds of good properties, one thing that was quite complicated to get right was how to handle T-junctions between the tiles. The region partitioning system can create extra vertices along the tile edges. Just imagine an open field with one obstacle. Most of the tiles will be just one region, whilst the tile which has the obstacle will be split into several regions. That extra region creates T-junctions along the tile edges.

The way I handled this earlier was that I first rasterized all the tiles, one at a time, and stored the region contours, which is just a small amount of data per tile. After all tiles had been processed, I would post-process all the neighboring contours so that they have matching vertices along the tile edges. After that, I would build the polygon mesh from the contours.

That was all fine back then. But now the detail mesh generation requires the heightfield and the matching contours to be present at the same time, and keeping the heightfield in memory is not an option since there just is not enough memory in some cases.

Well, for the past day or two I have been working on a slightly different approach to the per-tile generation. We will see how it turns out. I have had quite a few dead ends recently, so I'm prepared for it to fail :) The idea is that first I will detect the extra vertices along the tile edges, then remove them from the polymesh, and then generate the detail mesh. Since there is a minimum number of vertices per tile edge, there are no T-junctions. That way I do not need to fix up the contours in a second pass and all is good again.
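The detection step can be sketched as follows, under simplifying assumptions: the vertices below are sorted along one straight tile-border edge (x is the along-edge axis), and the `Vtx` struct and function name are made up for illustration, not the actual Recast code.

```cpp
#include <vector>
#include <cmath>

struct Vtx { float x, y, z; };

// Along a straight tile-border edge, region partitioning can leave extra
// vertices that the neighboring tile does not have. An interior vertex is
// redundant if it lies (within 'tol') on the segment between the previously
// kept vertex and the next one, so removing it does not change the border.
std::vector<Vtx> removeRedundantBorderVerts(const std::vector<Vtx>& v, float tol) {
    std::vector<Vtx> out;
    out.push_back(v.front());
    for (size_t i = 1; i + 1 < v.size(); ++i) {
        const Vtx& a = out.back();
        const Vtx& b = v[i + 1];
        const Vtx& p = v[i];
        // Interpolate a-b at p's position along the edge.
        float t = (p.x - a.x) / (b.x - a.x);
        float ly = a.y + (b.y - a.y) * t;
        float lz = a.z + (b.z - a.z) * t;
        if (std::fabs(p.y - ly) > tol || std::fabs(p.z - lz) > tol)
            out.push_back(p);  // keep: this vertex carries real shape
    }
    out.push_back(v.back());
    return out;
}
```

The harder part, which this sketch dodges entirely, is patching the triangles of the polymesh after such a vertex is removed while keeping the mesh valid.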

It took me a while to come up with this solution, even though it sounds like the obvious thing to do. I guess I'm just subconsciously trying to dodge every solution which requires adding or removing a vertex from a triangle mesh while keeping the resulting mesh valid.

I don't want to make a quad-edge mesh thing-a-magic just for the few vertices that need to be removed, and the code to handle vertex removal and triangle patching on simple flat mesh structures is just a pain in the arse. If someone has good pointers on how to remove a vertex from a 2D trimesh and keep the remainder of the mesh in good shape (no overlaps, etc.), let me know.

Adding the detail mesh has been quite a journey. There have been many more detours along the way than I expected. It's been an awesome ride, nonetheless.