Monday, July 20, 2009

Movement Cost Along Polygons

Somehow this topic keeps on haunting me. While doing some home improvements over the weekend, I remembered a paper from a few years ago:

Theta*: Any-Angle Path Planning on Grids
http://idm-lab.org/bib/abstracts/papers/aaai07a.pdf

It looks like there is a more recent paper on the topic too:

Incremental Phi*: Incremental Any-Angle Path Planning on Grids
http://ijcai.org/papers09/Papers/IJCAI09-303.pdf

The rough takeaway from both papers is that they incorporate line-of-sight tests into the cost calculation. So instead of accumulating the movement cost as you follow wiggly edge midpoints or polygon centroids, you use the shortest path to the most distant visible parent to calculate the cost.
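The core trick can be sketched roughly as follows. The node layout and the always-true line-of-sight stub are placeholders of mine, not the papers' actual code; a real test would walk the grid cells between the two points:

```cpp
#include <cmath>

struct Node {
    float x, y;    // position
    float g;       // accumulated cost from the start
    Node* parent;  // back-pointer along the current best path
};

// Placeholder line-of-sight test; a real implementation walks the grid
// cells between the two points (see the papers for the details).
static bool lineOfSight(const Node*, const Node*) { return true; }

static float dist(const Node* a, const Node* b)
{
    const float dx = b->x - a->x, dy = b->y - a->y;
    return std::sqrt(dx*dx + dy*dy);
}

// Theta*-style relaxation: when expanding 'cur', try to connect 'next'
// straight to cur's parent if it is visible. This is what removes the
// grid-edge wiggle from the accumulated cost.
static void relax(Node* cur, Node* next)
{
    if (cur->parent && lineOfSight(cur->parent, next)) {
        const float g = cur->parent->g + dist(cur->parent, next);
        if (g < next->g) { next->g = g; next->parent = cur->parent; }
    } else {
        const float g = cur->g + dist(cur, next);
        if (g < next->g) { next->g = g; next->parent = cur; }
    }
}
```

When the parent is visible, the cost of the one-step detour through the current node never makes it into g, so the accumulated cost approaches the true Euclidean length.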

It looks like the string pulling method used in Detour could be used in a similar way. That is, it should be possible to create a partial result of the shortest path up to the current polygon in the open list and use that to make a better guess of the cost up to a certain edge. The downside is that the state takes too much memory per node (around 28 bytes), so I don't think the trade-off is good. I might give it a spin at some point just to see if it is indeed possible.

Thursday, July 16, 2009

Cost and Heuristics Take 2


I had a little time today to take another look at the travel costs and heuristics, inspired by the discussion with Steve in the previous post.

Now Detour uses the polygon edge midpoints instead of polygon centroids to calculate the cost and heuristics. The start and end points are also taken into account, so there is a slight API change.

The above screenshot shows a comparison between the old and the new. In the process I also found out that Detour was not handling the A* open list properly (the nodes never got the closed flag, argh!); that is now fixed too. Overestimating the heuristic helps quite a bit to find a straighter path, but making it too large will "wall hug" when finding a way around obstacles.
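The overestimation knob is just a scale on an otherwise admissible estimate; a minimal sketch, where the weight value is illustrative and not what Detour actually ships with:

```cpp
#include <cmath>

// Scaling an admissible distance estimate by a factor > 1 biases the
// search toward the goal: straighter-looking paths and fewer node
// expansions, but too large a weight makes the search hug walls when
// routing around obstacles. The value here is illustrative.
const float H_SCALE = 1.25f;

float heuristic(float x, float y, float goalX, float goalY)
{
    const float dx = goalX - x, dy = goalY - y;
    return H_SCALE * std::sqrt(dx*dx + dy*dy);
}
```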

The stuff is in SVN now; I will optimize it a bit and make a new release soonish.

Tuesday, July 14, 2009

Recast 1.3 Released


Recast 1.3 is here! This is a bit of a black-triangle release. The largest chunk of work has gone into bringing the tile-based navigation mesh into Detour. The demo is not very extensive yet.

I also went and changed the Detour names a bit. The tiny migration guide is that the static mesh types are now prefixed with dtStat instead of dt; for example, dtPoly is now dtStatPoly. You also need to include DetourNode.h/cpp as well as DetourCommon.h/cpp. The first contains the node pool and queue, and the second contains common utilities shared between the two different implementations.

The demo currently works so that when you are using the Create Tile tool (it's the default tool), you can create (click) or remove (shift-click) navmesh tiles on the fly. Do not forget to press Build once before you start clicking! The unexciting-looking trick there is that the tiles are generated on the fly. It is kind of obvious if you just click around, but it is magic, trust me ;)

The magic of the tile navmesh is that it links adjacent tiles together on the fly in quite an efficient way, without much calculation or memory allocation. Currently the tiles in the demo come from the Recast build process, but it is possible to create them beforehand and stream them in as you need 'em. It is also possible to calculate the tiles for certain world states (bridge intact and destroyed) and swap them, or calculate new tiles on the fly as your world changes. I will write more about this in the next post.

I also did some revamping of the GUI; the OpenGL-specific stuff now lives in a separate file, so maybe one day it will be possible to put the GUI code into a library too. I also switched to Sean Barrett's awesome stb_truetype.

Download Recast 1.3

Wednesday, July 8, 2009

Tiled Mesh Progress Pt. 2


The first successful path query on the tiled mesh got done today!

I initially thought that I would use a similar structure of simply connected polygons as I used for the static meshes, and make the connections between the tiles a special case. It turned out that with my target tile size (32-64 voxels) the opposite was actually true: there are more tile-to-tile connections than intra-tile connections. So I went and changed my data structure a bit, and got rid of a few levels of indirection. Cleaner code and fewer conditionals, at the expense of a bit more memory.

The tiled mesh is going to be a bit slower than the static mesh, but I expect it to be quite swift still. Once I get the initial full version done, I will experiment a little with different packed data formats, maybe even limiting the vertices to the range 0-255 (the height will need a few more bits), to get the data size as low as possible. A lot of the calculations can be done in integer coordinates anyway, so that should not be a problem.
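A sketch of the kind of quantization I mean; the struct layout and helper are illustrative, not a final format — x/z squeezed into a byte relative to the tile bounds, and y kept wider since the height range needs the extra bits:

```cpp
#include <cstdint>

// Hypothetical packed vertex: x/z quantized to a byte within the tile's
// bounding box, y kept at 16 bits since the height range needs more.
struct PackedVert {
    uint8_t x, z;
    uint16_t y;
};

// Quantize a world-space coordinate into 0..maxq over [bmin..bmax].
unsigned quantize(float v, float bmin, float bmax, unsigned maxq)
{
    const float t = (v - bmin) / (bmax - bmin);
    int q = (int)(t * (float)maxq + 0.5f);
    if (q < 0) q = 0;
    if (q > (int)maxq) q = (int)maxq;
    return (unsigned)q;
}
```

With a 32-64 voxel tile, a byte per horizontal axis leaves a few sub-voxel bits of precision to spare, and the dequantized coordinates stay exact enough for the integer-based calculations.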

One design decision that is constantly biting me in the ass is that I wanted the changes to tiles to be as local as possible. For that reason I allow axis-aligned T-junctions at the connections at the tile edges. That adds a bit more work at runtime, as I need to clamp some coordinates when an edge at the tile border is processed, but I think in the end it is worth it. This also forced me to add support for more links per polygon than there are edges, so this work should be useful later when I add support for non-polygonal links (jump/animation links).
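The clamping boils down to a 1D interval intersection along the border axis. A sketch under my own naming, not the actual code:

```cpp
#include <algorithm>

// With axis-aligned T-junctions allowed at tile borders, two polygon
// edges sharing a border may only partially overlap. The usable portal
// is the intersection of the two edge intervals along the border axis,
// and coordinates get clamped into it when the edge is processed.
bool portalOverlap(float amin, float amax, float bmin, float bmax,
                   float* omin, float* omax)
{
    *omin = std::max(amin, bmin);
    *omax = std::min(amax, bmax);
    return *omin < *omax;  // false when the edges do not overlap
}
```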

Tuesday, July 7, 2009

Tiled Mesh Progress



Looks like the tiled mesh is getting quite a bit more complicated than the regular static one. Especially the polygon ids are getting a bit tricky. I'm using salted handles to reference the polygons, so the system is able to delete a tile and the user will then need to react to that when using the data. In general I prefer this method, as it does not kill the cache with random callbacks when something changes.
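The salted handle idea in a nutshell: pack a per-tile-slot salt counter into the id itself, so that when a tile is deleted and its slot reused, the salt no longer matches and stale handles held by the user stop resolving instead of silently pointing at new data. The bit widths below are illustrative, not the actual layout:

```cpp
#include <cstdint>

typedef uint32_t PolyRef;

// Illustrative bit budget: salt | tile index | polygon index.
const unsigned SALT_BITS = 10, TILE_BITS = 12, POLY_BITS = 10;

inline PolyRef encodeRef(unsigned salt, unsigned tile, unsigned poly)
{
    return ((PolyRef)salt << (TILE_BITS + POLY_BITS))
         | ((PolyRef)tile << POLY_BITS)
         | (PolyRef)poly;
}

inline void decodeRef(PolyRef ref, unsigned* salt, unsigned* tile, unsigned* poly)
{
    *salt = (ref >> (TILE_BITS + POLY_BITS)) & ((1u << SALT_BITS) - 1);
    *tile = (ref >> POLY_BITS) & ((1u << TILE_BITS) - 1);
    *poly = ref & ((1u << POLY_BITS) - 1);
}
```

Resolving a handle then means decoding it and comparing the stored salt against the tile slot's current salt; a mismatch tells the user the tile is gone.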

So far I have implemented dynamic tile generation, tile stitching and polygon queries. In the demo the user can create, delete and recreate navmesh tiles by clicking around the level, and then use the usual tools to find paths or locate the nearest polygons.

There is a bit too much indirection in the data when following the neighbour polygons. The code currently accesses first the 'from' polygon, then a portal, then another portal and then the 'to' polygon. By some clever misuse of the salted handles I might be able to leave the portals out of the equation completely. One thing that will always cause indirect accesses is that when following a salted handle, I need to check whether the tile the handle points to still exists. Well, once the first implementation is ready it is time to fire up the profiler.

I'll try to get a rough version into SVN as soon as possible, so I can start to polish it and finally make a release.

Thursday, July 2, 2009

Costs and Heuristics


Not good enough yet, but an improvement.

I did a small update to Detour yesterday. The heuristics of the Detour pathfinder have been really bad, and I thought about improving them a bit. The pathfinder should now perform much better, especially with meshes generated using the tiled process. I'm pretty sure Mr. Ericson would not approve of my changes, though. I have to admit that I do not understand the problem well enough yet.

I settled on something that does not over-expand the open list and that is visually pleasing as well as coherent.

I noticed that the problem is not only the heuristic, but also the cost function. It is a pain in the ass to create a good cost function which works with many types of polygons. Additionally, the tiled process sometimes generates regular squares, which even further messes up any cost function you come up with.

The cost function I ended up with actually measures the distance from the previous polygon to the next polygon, instead of from the current to the next. It sort of emulates 8-connected links in a 4-connected environment. I think using midpoints for anything related to polygons and pathfinding is super bad, especially trying to follow a path by following the midpoints! Edge midpoints may be one notch better, but they are a bad estimate too.
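One way to read the prev-to-next idea as code (my interpretation, with an illustrative point type): the cost of reaching the next polygon is measured from the previous polygon on the path, so the current polygon only routes the connection. On a grid of square polygons, a north-then-east move then costs roughly sqrt(2) instead of 2, which is exactly the 8-connected behaviour:

```cpp
#include <cmath>

struct Pt { float x, z; };

float dist(const Pt& a, const Pt& b)
{
    const float dx = b.x - a.x, dz = b.z - a.z;
    return std::sqrt(dx*dx + dz*dz);
}

// Cost of reaching 'next' via the current polygon, measured from the
// previous polygon on the path: g(next) = g(prev) + dist(prev, next).
// The current polygon routes the connection but contributes no length
// of its own, emulating a diagonal step across two 4-connected links.
float costPrevToNext(float gPrev, const Pt& prev, const Pt& next)
{
    return gPrev + dist(prev, next);
}
```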

Visually, the results that Dijkstra produces look a lot like distance field building problems. I wonder if one of those algorithms could spark some ideas. The CDA algorithms are close to my prev-to-next distance calculation in the sense that they use a larger neighbourhood to make better guesses.


Image from paper: George J. Grevera, The ‘‘dead reckoning’’ signed distance transform, 2004.

Getting to Know My Data


In order to move forward with the tiled navigation, I decided to take a more detailed look at what kind of data Recast creates per tile. I knew roughly how much time it takes per tile, and I could say that there were not that many polygons per tile. I like this part of the job; it always gives me such insights.

I ran my tests with 32x32 tiles (which means 40x40 rasterization when the padding is added, resulting in roughly 50% over-rasterization) and a maximum of 6 vertices per polygon. The above scenario was generated in 13.5 ms and touched 11750 triangles during rasterization. I'm sure a lot of unnecessary triangles are being processed.

My current input data is one huge mesh (the above one contains a few million triangles), and I divide it into chunks of less than 512 polygons using an AABB-tree-like algorithm. This is sort of a balance between memory and speed. In a real-world example the data would usually be a database of objects, which creates much nicer partitioning in practice.

After running the tests I noticed that almost half of the tiles in my test data contain just one polygon! The most complex tiles I got were around 32 polygons. This means that if I chose to use HPA* for pathfinding, I could just store a small lookup table inside each tile to find paths within that tile.
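With at most ~32 polygons per tile, such a lookup table would be tiny. A sketch under my assumptions (nothing like this is implemented yet): a next-hop matrix of byte-sized polygon indices is only 1 KB per tile, and intra-tile path following becomes repeated table lookups:

```cpp
#include <cstdint>

const int MAX_TILE_POLYS = 32;

struct TilePathTable {
    // nextHop[a][b] = first polygon to step to when going from a to b.
    uint8_t nextHop[MAX_TILE_POLYS][MAX_TILE_POLYS];
};

// Walk the table from 'from' to 'to', writing the polygon sequence.
int followPath(const TilePathTable& t, int from, int to,
               uint8_t* out, int maxOut)
{
    int n = 0;
    int cur = from;
    while (cur != to && n < maxOut) {
        out[n++] = (uint8_t)cur;
        cur = t.nextHop[cur][to];
    }
    if (n < maxOut) out[n++] = (uint8_t)to;
    return n;
}
```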

Most of the time the technique is able to create a tile in about 15 ms, and even the most complex cases do not exceed 50 ms. The build is currently rasterization bound, with rasterization taking over 50% of the build time. This is actually good news, since I think there is a lot of room for improvement there.

The results from different levels show similar trends in build speed and polygon counts.


Thanks to John Ratcliff for the test data!