Over the past months I have tested several different methods for local obstacle avoidance; a little less actively recently, though, as I have been finishing a game. There is something I really like about the Uncharted way of mixing and matching navmeshes for global search and a dynamic grid for more local path queries.
There is one potentially nasty thing about that combination: what to do when the local grid says that the path is blocked, but the global navmesh disagrees. I'm going to sidestep that question by assuming that everything not represented in the global navmesh can be traversed one way or another, be it by moving around it, kicking it, or setting it on fire.
Even if the method has some problems, I thought it was worth a try. I limited my implementation to a small grid which follows the agent. The idea is that the grid only tries to solve the pathfinding problem between two global navigation nodes. In the case of navmeshes, this area would be the area of the current navigation polygon; if you use waypoints, it could be the area between two waypoints.
Simplifying the Grid
Grids make it really simple to create obstacles. The horrible side effect of grids is that they create huge graphs, and that shows in performance. Since we are using a small grid (something between 20x20 and 64x64 probably suffices), this is not a huge problem, but 64x64 is still 4096 nodes to be searched with A*.
There is one simple trick we can do to reduce the node count drastically. After the grid has been built, we simplify our search representation by storing the grid as RLE spans (oh yeah, I'm a big fan of RLE!). In practice this reduces the node count from N^2 to roughly 2N. In our grid size range (N = 20..64), that means at least an order of magnitude fewer nodes!
(I do remember reading some GDC slides which described the same trick used by some RTS, but I could not find the reference.)
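To make the span idea concrete, here is a small Python sketch (the function and names are mine, not from the actual implementation) that turns a 1-bit occupancy grid into per-row spans of free cells:

```python
def build_spans(grid):
    """Encode each row of a boolean occupancy grid as RLE spans of free cells.

    grid: list of rows, each a list of 0 (free) / 1 (blocked) cells.
    Returns one list per row of (start, end) spans covering the free
    cells, with end exclusive. Instead of N*N cells, the search graph
    then only has on the order of 2N span nodes.
    """
    rows = []
    for row in grid:
        spans = []
        x = 0
        n = len(row)
        while x < n:
            if row[x] == 0:
                start = x
                # Extend the span over the whole free run.
                while x < n and row[x] == 0:
                    x += 1
                spans.append((start, x))
            else:
                x += 1
        rows.append(spans)
    return rows
```

Two spans in adjacent rows are neighbours in the search graph if their cell ranges overlap, which is what keeps the graph cheap to build incrementally.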
The side effect is that the cost calculations become nasty and messy. But let's not worry about that yet. Instead, let's dumb our search down even further by making more assumptions.
Simplifying the Search
If we were not using the local grid, the path through the area would be a straight line. So instead of building a full-fledged A* pathfinder for the local grid, we can use this information to create something that will do a good job when a straight path exists, and in any other case will just get us out of the way of the obstacles and local minima.
I chose to use breadth-first search for this. Because of the search space representation, it is quite efficient. The open list never gets really big; in practice its size is usually around 4, and it hardly ever peaks past 8 items.
In order to favor that straight path, the neighbour nodes which are about to be added to the open list are sorted so that the node closest to the straight line from the start to the end point is added first. This bias makes sure that the nodes close to the center line are visited first. The open list itself is never sorted, just the new nodes that are added to it. It's not pretty, but it works really well in practice.
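A minimal Python sketch of that biased breadth-first search over the span graph might look like the following (the overlap-based adjacency and all names are my assumptions, not the prototype's code):

```python
from collections import deque

def line_dist(px, py, ax, ay, bx, by):
    """Perpendicular distance from point (px, py) to the line a-b."""
    dx, dy = bx - ax, by - ay
    length = (dx * dx + dy * dy) ** 0.5 or 1.0  # guard degenerate line
    return abs(dx * (py - ay) - dy * (px - ax)) / length

def biased_bfs(rows, start, goal):
    """Breadth-first search over free-space spans.

    rows: per-row lists of (start, end) free spans (end exclusive).
    start, goal: (x, y) cell coordinates; start must lie on a span.
    New neighbours are sorted so the span whose centre is closest to
    the straight start-goal line enters the open list first; the open
    list itself is never re-sorted. Returns a parent map over span
    ids (row, span_index), with None marking the start span.
    """
    def span_at(x, y):
        for i, (s, e) in enumerate(rows[y]):
            if s <= x < e:
                return (y, i)
        return None

    sx, sy = start
    gx, gy = goal
    root = span_at(sx, sy)
    parent = {root: None}
    queue = deque([root])
    while queue:
        y, i = queue.popleft()
        s, e = rows[y][i]
        nbrs = []
        for ny in (y - 1, y + 1):
            if 0 <= ny < len(rows):
                for j, (ns, ne) in enumerate(rows[ny]):
                    # Overlapping spans in adjacent rows are connected.
                    if ns < e and s < ne and (ny, j) not in parent:
                        cx = (ns + ne) * 0.5
                        nbrs.append((line_dist(cx, ny, sx, sy, gx, gy), (ny, j)))
        # The center-line bias: closest-to-line neighbours are pushed first.
        for _, node in sorted(nbrs):
            parent[node] = (y, i)
            queue.append(node)
    return parent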
Finding the Final Path
During the search an index to the parent node is stored, and the pathfind basically consists of finding the node where the end location lies and tracing back the nodes until we reach the start location. When finding the end node, I choose the closest one that was visited during the breadth-first search. This makes sure that I get a path to follow towards the target, even if the way is temporarily blocked.
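As a sketch, the trace-back could look like this (the parent and centre maps are hypothetical stand-ins for whatever the search produces):

```python
def trace_path(parent, centers, goal):
    """Trace the search result back into a start-to-end node path.

    parent: node -> parent node (None for the start node), as produced
            by the search.
    centers: node -> (x, y) representative point of the node.
    Picks the visited node closest to the goal, so a path comes back
    even when the goal itself was never reached.
    """
    def dist2(node):
        x, y = centers[node]
        return (x - goal[0]) ** 2 + (y - goal[1]) ** 2

    end = min(parent, key=dist2)
    path = []
    node = end
    while node is not None:
        path.append(node)
        node = parent[node]
    path.reverse()
    return path
```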
Finally, the Funnel Algorithm is used to find a straight path through the regions. The Funnel Algorithm just needs a series of segments which define the left and right edges of the constraints while moving along the funnel. In our case the portals are the shared edges between two regions.
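For reference, a compact Python version of the funnel pass over (left, right) portal pairs could look like this (a sketch of the standard algorithm, not the prototype's code; all names are mine):

```python
def cross(a, b, c):
    """2D cross product; positive if c lies to the left of ray a->b."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def funnel(start, end, portals):
    """String-pull a path through (left, right) portal segments.

    portals: list of (left_point, right_point) tuples, i.e. the shared
    edges between consecutive regions, ordered along the path.
    Returns the list of corner points from start to end.
    """
    pts = [(start, start)] + list(portals) + [(end, end)]
    path = [start]
    apex = portal_left = portal_right = start
    apex_i = left_i = right_i = 0
    i = 1
    while i < len(pts):
        left, right = pts[i]
        # Try to tighten the right side of the funnel.
        if cross(apex, portal_right, right) >= 0:
            if apex == portal_right or cross(apex, portal_left, right) < 0:
                portal_right, right_i = right, i
            else:
                # Right crossed over left: the left point is a corner.
                path.append(portal_left)
                apex, apex_i = portal_left, left_i
                portal_left = portal_right = apex
                left_i = right_i = apex_i
                i = apex_i + 1
                continue
        # Try to tighten the left side of the funnel.
        if cross(apex, portal_left, left) <= 0:
            if apex == portal_left or cross(apex, portal_right, left) > 0:
                portal_left, left_i = left, i
            else:
                # Left crossed over right: the right point is a corner.
                path.append(portal_right)
                apex, apex_i = portal_right, right_i
                portal_left = portal_right = apex
                left_i = right_i = apex_i
                i = apex_i + 1
                continue
        i += 1
    if path[-1] != end:
        path.append(end)
    return path
```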
In my prototype the slowest part was updating the grid, which was stored at 1 bit per cell. Updating the grid takes on average 0.026 ms; this includes about 6 segments and 6 agents. I'm sure you could squeeze that number down by vectorizing the rasterization phase. A path query, including string pulling, takes about 0.012 ms on average. All results are on a 2.8 GHz Core Duo. One path query uses around 1.5 kB of memory to store the required search space representation and the final path.
[EDIT] Updated time measurements above, the originals were measured in debug build.
It was interesting to try the local grid method. I think it has great potential, but the problem of syncing failures from the local grid back to the global structure is a nasty side effect. I think it might be possible to create an "interface of assumptions", for example the one I explained in the opening, to improve it.
I've been thinking about dynamic obstacle avoidance lately as well. But I like the idea of using a visibility graph more than a grid rasterization. In particular, I think the Oriented Visibility Graph might work well (you mentioned this previously at AiGameDev.com). But the same problem exists, which is maintaining consistency between the static and dynamic representation. The OVG is O(V^3), where V is the number of vertices of all objects being considered, and edges can be incrementally added and removed quite quickly as vertices change. An additional benefit is that no path smoothing is required when taking this approach, and if you can run this procedure often enough, you wouldn't need any steering code either. I'm wondering if it would be possible to use the OVG for handling all movement within a single navigation cluster, and then use a more traditional path planner to solve a path on the abstract graph, so as to find which clusters to travel between. My hope is that I could run the OVG algorithm nearly every frame, and have no steering or string pulling code, which would effectively combine steering and pathfinding.
I don't think that would work, but don't let it stop you trying!
I see two problems.
Firstly, flickering. You will get a lot of flickering if you do pathfinding often. It kind of helps if you can use the previous results as a bias, but still the paths in different configurations end up being vastly different.
Secondly, when you have a crowd of agents, you will end up in situations where the path is blocked. For example, in my example picture, when the topmost agent goes through the door, the path will be blocked.
The example images above may be a bit misleading in the sense that I usually don't use the moving objects as blockers, but use velocity planning to avoid them instead.
Any solution based on planning seems like it will struggle with dynamic obstacles. Christian Gyrling mentioned this in his interview: they used the 2D grid approach in Uncharted, and it shows problems when the bots are following dynamic obstacles and try to plan paths around them when they could just follow them.
Any ideas how to resolve this? Using an A* that also updates the obstacles seems ridiculously expensive, though. Maybe an A* that has some kind of predicted future position based on the distance to that point? (It's getting similar to RVO now.)
It's not as easy as it initially seems :) There are a couple of multi-agent pathfinding papers out there (e.g. Tractable Multi-Agent Path Planning on Grid Maps). They are generally global searches, that is, all agents are handled at the same time, which IMHO makes them scale really badly. They also all require a coarse grid (cell size = agent size), sometimes a 3D grid (x, y, time). It does not sound like something I would be willing to do in realtime.
http://vimeo.com/10404328 (not converted yet as of writing this, should be available in 10 mins or so)
Here's a video of my testbed which uses just the simple "local" pathfinder from the above blog post. This pathfinder has the advantage that it only tries to solve the problem locally, and does not spend too much time on a full replan, which would be super wasteful since most of the path is thrown away all the time anyway. If it does not find a good path, it will return... something that gets as close to the target as possible. That keeps the agents going even if the path is temporarily broken.
Multi-agent pathfinding is partially a scheduling problem, and normal A* does not know anything about time. I have tried expanding the agents towards the movement direction and moving them forward based on distance to the caller, but the results are not very encouraging: horrible flicker and 50-50 dead-locks all over the place.
One simple and silly feature that most local methods exhibit is separation (either directly or indirectly). This is pretty much the simplest way to handle scheduling... that is, pushing each other around until someone gets through.
With a pathfinder, you could add more cost to the areas around other agents, but this will not solve dead-locks like the one in the video, since the A* will eventually return the nearest location to the target.
I got the best results with this type of method when I added a random 1-6 frame delay between the pathfinds.
Also, maintaining flow is essential, especially in crowded scenes. With pathfinding, the flow is always broken in crowds. A simple way to "fix" this could be to reduce the radii of the obstacles so that the pathfinder would find a path in between the agents, and handle the rest using some kind of repulsion force. But then again you run into oscillation and overreaction problems as reported by many papers.
It looks like a simple problem but it is really hard... I've heard it could be something like NP-hard :)
Just realized... I've posted two long "defence" posts in a couple of days to people whose work I respect a lot. Makes me wonder if I might be tilting at windmills after all ;)
As a disclaimer, I reserve the right to be wrong.
p.s. I just got a well-behaving 150-sample version of my RVO stuff working, and it looks almost good :)