Tuesday, July 13, 2010

Recast Area Types Per Triangle

I just committed a change (R185) which allows you to pass an area type per triangle for rasterization.

API Changes

From Flags to Areas

Firstly, the flags are now gone from the rasterization functions and an area type is used instead. The same concept of marking things walkable or not is now used throughout the pipeline.

If you want to make a triangle non-passable, use the area type RC_NULL_AREA. If you don't care about area types at all, use the general RC_WALKABLE_AREA.

rcMarkWalkableTriangles() uses that area type to keep the existing behavior. I also added rcClearUnwalkableTriangles(), which does the opposite: it sets the area type of a triangle to RC_NULL_AREA if the triangle slope is too steep. Handy if you have an array of area types to start with.

The change to the rasterization methods is minimal: instead of passing flags, you now pass area types.
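To make the new flow concrete, here is a rough sketch. The variable names (verts, nverts, tris, ntris, solid, walkableSlopeAngle, walkableClimb) are placeholders, and exact parameter lists may differ slightly between revisions, so check Recast.h in your checkout:

    unsigned char* triAreas = new unsigned char[ntris];
    memset(triAreas, RC_NULL_AREA, ntris*sizeof(unsigned char));

    // Mark triangles whose slope is below the limit as RC_WALKABLE_AREA...
    rcMarkWalkableTriangles(walkableSlopeAngle, verts, nverts, tris, ntris, triAreas);
    // ...or, if you already filled triAreas yourself, clear the too-steep ones instead:
    // rcClearUnwalkableTriangles(walkableSlopeAngle, verts, nverts, tris, ntris, triAreas);

    // Rasterization now takes the per-triangle area array instead of a flags value.
    rcRasterizeTriangles(verts, nverts, tris, triAreas, ntris, solid, walkableClimb);

    delete [] triAreas;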

Compacting Heightfield

When the heightfield is processed into a compact heightfield, there used to be a flags parameter. This is gone now, so the flags parameter has been removed from rcGetHeightFieldSpanCount() and rcBuildCompactHeightfield(). Every voxel that has an area type other than RC_NULL_AREA will be included in the span count as well as in the compact heightfield.
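In code this simply means the flag argument disappears from the call; roughly like this (argument names are placeholders and the exact parameter list depends on your revision):

    // No walkable-flag argument any more; every span whose area type is not
    // RC_NULL_AREA is included in 'chf'.
    rcBuildCompactHeightfield(walkableHeight, walkableClimb, solid, chf);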

Filtering

The area eroding function used to carry a trace of some old ideas of mine, which required you to pass an area type to it. That area type parameter is removed now. I changed the function name to rcErodeWalkableArea() to better reflect what it is actually doing.

Aliasing

The new feature comes with a new set of problems. One problem is aliasing. In many ways voxelization works the same way as a z-buffer. When surfaces intersect each other at a low angle, or are almost parallel, we get the aliasing error usually referred to as "z-fighting".

This can cause nasty noise at the boundaries of different area types. One simple way to fight this is to apply a median filter to remove the speckles. I have added rcMedianFilterWalkableArea() just for that.

Median filtering will smooth things out a bit, so if you want to keep the outlines from rcMarkConvexPolyArea() as sharp as possible, you may want to apply the filtering before you mark the areas. Otherwise I recommend applying the filtering just before you build the distance field and the regions.
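To illustrate the two orderings (just a sketch of the call order; argument names are placeholders and the exact signatures may differ between revisions):

    // Option A: keep rcMarkConvexPolyArea() outlines sharp - filter first, then mark.
    rcMedianFilterWalkableArea(chf);
    rcMarkConvexPolyArea(areaVerts, nAreaVerts, hmin, hmax, areaId, chf);

    // Option B: otherwise filter as late as possible, just before the distance
    // field and region build.
    rcMedianFilterWalkableArea(chf);
    rcBuildDistanceField(chf);
    rcBuildRegions(chf, borderSize, minRegionSize, mergeRegionSize);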

Median filtering is not super expensive, but not free either. On average it adds 1-2% to the generation time (that is, around 10 ms on a 1000 ms generation process).

Accuracy

In order not to increase the memory footprint of the voxelization process, I took some bits from the height range and moved them to the area type in the rcSpan structure. This basically reduces the height range from 32k to 8k.

For the majority of users this is fine. One of my test levels, which goes 6 stories above and below the ground, uses a max height of 1450 (human-sized character, cell height 0.2).

If you run out of bits there, you can adjust the bit counts in the rcSpan struct (remember to also change RC_SPAN_HEIGHT_BITS). This is likely to increase the rcSpan size from 8 bytes to 12 bytes (on 32-bit machines), which will show up in memory consumption.
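For reference, the packed span looks roughly like this (a sketch based on the description above; check Recast.h for the exact layout and bit counts in your revision):

    static const int RC_SPAN_HEIGHT_BITS = 13;  // 2^13 = 8k height units

    struct rcSpan
    {
        unsigned int smin : RC_SPAN_HEIGHT_BITS;  // lower bound of the span
        unsigned int smax : RC_SPAN_HEIGHT_BITS;  // upper bound of the span
        unsigned int area : 6;                    // area type of the span
        rcSpan* next;                             // next span higher up in the column
    };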


Sorry for the API breakage, but I hope this will make things more flexible for many people!

Monday, July 12, 2010

Sumo vs. Ninja

Steering behaviors are an interesting method for creating life-like behaviors for agents. It is tempting to use them for local avoidance. They are simple and intuitive to understand, but they come with one really fatal drawback: when they break, and they will, it is ultimately impossible to tune them to get the desired behavior.

Velocity-based collision avoidance methods, on the other hand, are a bit harder to get up and running, but they get past the point where steering-behavior-based local avoidance fails. They sure come with their own share of problems too, like all sorts of jittering and feedback effects. But those problems are manageable and tunable to a greater depth than anything based on steering behaviors.

One way to describe the difference between these two methods is that steering-behavior-based avoidance is basically a sumo suit your character wears. As you bump into something, you are gradually pushed away. But when things get tight, you get stuck and just tumble around.

On the other hand, you want your characters to be suave like a ninja, slipping through even the tightest of encounters.

Eventually you want the agents to be as forward looking as an octopus, so that they can plan their avoidance velocities even better.

Friday, July 9, 2010

Custom Memory Allocator for Recast and Detour pt. 2

Ok, the update to the last post got a bit out of hand. I did not expect to get all of this done today.

The custom memory allocators for both Recast and Detour are submitted in R178 in SVN. The latest update also fixes the Visual Studio project and one array allocation bug which surfaced in VC.

As highlighted in the previous post, the API has changed in how the objects are required to be constructed. There are now alloc and free functions for each object (dtNavMesh, rcHeightfield, etc.). I used the same pattern all over the place, so it should be easy to spot.

[EDIT] A few more bugs were still there. Folks living on the SVN edge, please try R183.

Custom Memory Allocator for Detour

I just checked in custom memory allocator code for Detour (R176, or once googlecode stops giving me error 500).

I opted for a similar method to the one used by other open source libraries such as Bullet Physics or Box2D. I initially wanted to use an interface which would be passed to each of the allocation functions, but the implementation became really ugly and confusing for those who don't care about it.

If you have a strong case why it should not be handled this way and a good example of how to do it better (or a pointer to some lib which does it well), let me know.
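In practice this means registering global alloc/free hooks once at startup. A minimal sketch, assuming the hook names and callback signatures match the rcAllocSetCustom/dtAllocSetCustom declarations in RecastAlloc.h and DetourAlloc.h; check the headers in your revision for the exact typedefs:

    #include <stdlib.h>
    #include "RecastAlloc.h"
    #include "DetourAlloc.h"

    // Example hooks that simply forward to malloc/free; a real game would
    // route these into its own memory system.
    static void* myRecastAlloc(size_t size, rcAllocHint /*hint*/) { return malloc(size); }
    static void myRecastFree(void* ptr) { free(ptr); }

    static void* myDetourAlloc(size_t size, dtAllocHint /*hint*/) { return malloc(size); }
    static void myDetourFree(void* ptr) { free(ptr); }

    void setupNavMemory()
    {
        rcAllocSetCustom(myRecastAlloc, myRecastFree);
        dtAllocSetCustom(myDetourAlloc, myDetourFree);
    }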

Thanks to Cameron Hart for the initial patch and discussion.

I really don't like how C++ handles memory allocation. It is so half-heartedly built into the language. Even the standard library has its own way of circumventing it.

I'm planning to give the same treatment to Recast soon too.

[EDIT] Added similar support for Recast in R177.

NOTE: From now on the Recast objects (i.e. rcHeightfield, rcPolyMesh, etc.) are required to be allocated and freed using special functions (rcAllocHeightfield() and rcFreeHeightField(), respectively). The samples are up to date; in case of confusion, check them out.
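The pattern looks roughly like this (a sketch; the Detour objects use the matching dtAlloc*/dtFree* pair, as mentioned above):

    // Allocate through the library so a custom allocator sees the memory.
    rcHeightfield* solid = rcAllocHeightfield();
    if (!solid)
        return false;  // out of memory

    // ... rcCreateHeightfield(), rasterization, and the rest of the pipeline ...

    // Free through the matching function instead of delete.
    rcFreeHeightField(solid);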


Wednesday, July 7, 2010

My Paris Game AI Conference Presentation


I have now successfully vacated my vacation, purged the most urgent work-related queue, and finally got some time to clean up the "slides" from my AIGD10 presentation. Without further ado, here's a link to the presentation code and executables for Windows and OSX:
Use the right mouse button to navigate the presentation. Some of the diagrams are interactive; I have included some usage notes below them. It is not meant to be intuitive, just a quickly hacked version of what I wish PowerPoint would be :)

The code is quite ugly at points and probably not as efficient as you might like. I have tried to add comments around the important chunks of code and note where things need improvement. I left some things unoptimized for the sake of readability.

I'm still wondering what might be the best way to transcribe the presentation. Here's a quick overview of the navigation loop idea.

Navigation Loop
The idea of the navigation loop is to cover everything from A* to velocity selection that is needed to make a character move in the world. One of the main goals was to make agent movement as flexible as possible, so that you can concentrate on movement styles or even multi-agent navigation instead of spending all your energy trying to stay on a path (that is the naked tightrope dancer act).

Your system will get horribly, horribly, horribly convoluted when you try to follow a (linear) spline path, and especially when you mix and match collision representations!

The path to the goal is represented all the time as a linear array of connected polygons. We never find a straight, string-pulled path to the goal location. This linear array of polygons is found before we start to move the character. You can use your favorite search method to do that; it will probably be A*, but I use breadth-first search in the demo.
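For illustration, here is a minimal sketch of finding that polygon corridor with breadth-first search over a hypothetical adjacency list of polygon indices; real code would walk the navmesh polygon links instead:

    #include <algorithm>
    #include <queue>
    #include <vector>

    // Breadth-first search over a hypothetical polygon adjacency list.
    // Returns the linear array of polygon indices from start to end,
    // or an empty vector if no path exists.
    std::vector<int> findPolygonCorridor(const std::vector<std::vector<int> >& neighbors,
                                         const int startPoly, const int endPoly)
    {
        std::vector<int> parent(neighbors.size(), -1);
        std::queue<int> open;
        parent[startPoly] = startPoly;
        open.push(startPoly);

        while (!open.empty())
        {
            const int cur = open.front();
            open.pop();
            if (cur == endPoly)
                break;
            for (size_t i = 0; i < neighbors[cur].size(); ++i)
            {
                const int next = neighbors[cur][i];
                if (parent[next] == -1)
                {
                    parent[next] = cur;
                    open.push(next);
                }
            }
        }

        std::vector<int> corridor;
        if (parent[endPoly] == -1)
            return corridor; // goal not reached

        // Walk back from the goal to build the polygon corridor.
        for (int p = endPoly; p != startPoly; p = parent[p])
            corridor.push_back(p);
        corridor.push_back(startPoly);
        std::reverse(corridor.begin(), corridor.end());
        return corridor;
    }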

Foreach (Simulation Step) {

At each simulation step you first calculate the next corner to move towards. This can be implemented efficiently using the funnel algorithm. Instead of trying to stay on a path, we can always query the next sub-goal towards the target using the funnel algorithm.

The next step is to apply steering. The demo contains a couple of different styles, from robotic straight-path movement, to smooth path movement, to drunken movement.

The next step is to move the agent. This is done so that the agent is always constrained to the navmesh. The result of this movement step is a linear array of the polygons we passed through when moving the agent [1].

The final step is to fix up the corridor based on the visited array. Most of the time this fix-up will shrink the path corridor, but in case the movement led the character outside the corridor, new polygons will be added to the beginning of the array.

}

The result of this loop is that no matter how we steer, we can always query the next subgoal. If the steering is not totally stupid, the agent will reach the target with a certain probability.
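Put together, the loop looks roughly like this. This is only a structural sketch: PathCorridor, findNextCorner(), steerTowards(), moveAlongNavmesh() and fixupCorridor() are hypothetical helpers standing in for the demo code, not actual Recast/Detour API:

    #include <vector>

    struct Vec3 { float x, y, z; };
    typedef unsigned int PolyRef;

    struct PathCorridor
    {
        std::vector<PolyRef> polys;  // linear array of polygons from agent to goal
        Vec3 pos;                    // current agent position, constrained to the navmesh
    };

    // Hypothetical helpers, declared here only to make the sketch self-contained.
    Vec3 findNextCorner(const PathCorridor& corridor, const Vec3& target);
    Vec3 steerTowards(const Vec3& pos, const Vec3& corner, float dt);
    Vec3 moveAlongNavmesh(const Vec3& pos, const Vec3& desiredVel, float dt,
                          std::vector<PolyRef>& visited);
    void fixupCorridor(std::vector<PolyRef>& polys, const std::vector<PolyRef>& visited);

    void updateAgent(PathCorridor& corridor, const Vec3& target, float dt)
    {
        // 1) Query the next sub-goal with the funnel algorithm.
        const Vec3 corner = findNextCorner(corridor, target);

        // 2) Steering: pick a desired velocity towards the corner
        //    (robotic, smooth, drunken... whatever style you like).
        const Vec3 desiredVel = steerTowards(corridor.pos, corner, dt);

        // 3) Move the agent constrained to the navmesh and record the
        //    polygons visited along the way.
        std::vector<PolyRef> visited;
        corridor.pos = moveAlongNavmesh(corridor.pos, desiredVel, dt, visited);

        // 4) Fix up the corridor: usually shrinks it, but prepends new
        //    polygons if the movement left the current corridor.
        fixupCorridor(corridor.polys, visited);
    }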

Later in the presentation you can find out how to extend the loop to contain a velocity planning stage.

Data Dependencies

The navigation loop idea assumes certain things about the data. The dynamic obstacles are always assumed to be traversable in one way or another. The system tries its best to resolve collision conflicts, but in certain cases you need to detect deadlocks and figure out alternative ways to handle them, e.g. disable collisions or use special animations to switch places.

The local velocity planning cannot deal with local minima (pockets, U-shapes, etc.). If you have larger or more complicated obstacles, they need to be represented in the navigation graph.

Conclusion

I'd like to thank David Miles and Thomas Young for laying down the groundwork in their earlier articles and talks. I have not seen a similar complete system presented before, but I don't claim it to be particularly unique. Such holistic systems are quite common in the motion planning literature.

The sort of "anti-points" of this presentation were: 1) A* != navigation, 2) (Linear)Spline is horrible representation of a path. Unfortunately I know that the old fragments of wisdom on navigation will stick with us for years to come .

___

[1] I have blogged a couple of times about constrained movement along navigation meshes. I think the idea is sound, but I'm still figuring out a couple of implementation details.

Firstly, the idea works really well on triangles, but not so well on polygons.

Secondly, there are certain cases where the demo implementation fails: when the next clamped location is at a vertex and both of the neighbouring edges are valid exit edges. I need a better heuristic for this. The system always resolves it on the next update, but it is annoying nonetheless.

Tuesday, June 29, 2010

Greetings from Paris Game AI Conference 2010


The title image is a bit of an insider joke for those who had the pleasure of experiencing the technical problems during my talk.

I spent last week in Paris. Three of those days I spent with fellow game AI devs at the Paris Game AI Conference. It is a rare situation to meet so many talented and enthusiastic people. I also gave a talk at the conference. I will post more about that soon, along with the demo and the slides. I think I will write more detailed posts on the topic too.

Bjoern has good coverage of the talks on other blogs and sites; I'm just going to give you a few observations.

Empathy and AI

I had the feeling that this year there were more veteran devs there: Bruce Bloomberg, Ken Perlin, Noah Falstein, just to name a few. My big takeaway from them was to design with the user experience in mind and engineer the simplest possible solution to achieve it.

That is, rather than asking whether you should use a behavior tree or a finite state machine, you should ask what it is you want the user to experience and then work out the easiest solution from there. It is not easy, though. I think such problem-solving skill comes with experience. You sort of need to have a feeling for the potential solutions and their strengths. I think Ken's advice is a really good rule of thumb: create the simplest possible solution first and iterate from there.

You get a good head start by thinking about the problem from the user's point of view. For example, what kind of feedback is needed to understand the AI, or what kind of interactions do you want to have with your AI?

Mikael Hedberg gave an interesting talk on how that thinking evolved in the case of Battlefield: Bad Company. For example, they measured that the average screen time of an AI is about 5 seconds. If the death of an AI opponent takes approximately 2 seconds, it is worth putting quite a bit of effort into getting that part right.

Good death and hit reactions are the key to making the AI opponent feel like a good interactive toy. When working on shooters, just make that awesome first. And while the designers are having a good time killing all the foes, add some magic to make them fight back better.

Bruce Bloomberg's talk was also a great example of this mindset. I don't think I could ever be as honest as they have been about simplifying the content creation process and the technology based on how the player perceives things.

A few months back, there was a video of a magician called Jamy Ian Swiss circulating the net. He used the word empathy to describe this property of user-experience-driven design. The video is worth watching.

Gameplay vs. AI

Another really welcome development I saw was in how people categorize AI programming. In the past there has usually been a really clear cut between what belongs to AI and what belongs to gameplay. It seems that a lot of people, from designers to programmers, now consider game AI to be a broad concept.

And that is true! In current games the concepts from AI are being applied very broadly across all the gameplay-related things, from audio to animation. I think this is a very good change. The biggest advantage is that solutions that were previously only "available" to AI programmers can now be used to solve other problems too. Sometimes the barriers are in our minds.


It was also interesting to note that this year most of the people were using really simple solutions to their problems. It may also have been the result of Alex's great choice of talks. Most people were using (hierarchical) finite state machines instead of more complex solutions. The reason could be that many talks were biased towards game characters, but it also resonates with the aim to find the simplest possible solutions.


All in all, it was an awesome conference; I can't wait to get there next year! Big thanks to Alex and Petra for all the arrangements. I'm going to enjoy the rest of my vacation and work hard next week to put the demo and code online.

Thursday, June 17, 2010

Stealth Mode


I have been a bit in stealth mode the past months. It has been less about doing something I cannot tell you about, and more that I have put a lot of time and energy into finishing up my local navigation research and preparing a (hopefully!) kick-ass presentation about it for the Paris Game AI Conference next week. The image above is a little teaser screenshot from my presentation.

I will make the presentation available after the conference along with the demo code, so that might be something to look forward to. More about that in a few weeks.

I'm also planning to update Project Cane at some point this summer after I clean things up a bit. Project Cane was basically my testbed for testing different local avoidance ideas. It will also come with the local navigation grid code I blogged about earlier. It'll be a bunch of proto code, but maybe someone will find it useful too.

Then at some point I will start working on multi-agent navigation, which will be part of Recast & Detour. My current idea is that it will be a separate library, just like Recast and Detour are separate but related. There will be some glue code in the form of examples which will show you how to use them together.

I hope to see many of you in Paris next week, if not, stay tuned for the presentation :)