Inspired by the relative success of the sampled hRVO thing, I thought I'd dust off my old ClearPath prototype and see how it compares with the sampling-based method.
I have to admit that the recent ORCA explanation video added some spark too. I think I finally got the ORCA idea after seeing it in that video.
Without further ado, here's the video.
There are a couple of tricks I added on top of ClearPath. I use a side bias similar to HRVO's: based on the velocities and locations of the agents, one side of the VO is expanded. This helps the agent commit to one side and reduces side-selection flickering.
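Roughly, the side bias can be sketched like this. It is a simplification, not the exact HRVO construction: I use the plain VO apex (the other agent's velocity) and an arbitrary `bias` angle, and simply rotate one leg of the cone outward on the side the agent is not currently passing on.

```python
import math

def vo_cone(p_a, p_b, r, apex):
    """Velocity obstacle induced on agent A by agent B: a cone with the
    given apex velocity and two unit-length leg directions."""
    dx, dy = p_b[0] - p_a[0], p_b[1] - p_a[1]
    dist = math.hypot(dx, dy)
    half = math.asin(min(1.0, r / dist))   # half-angle subtended by the combined disc
    ang = math.atan2(dy, dx)
    left = (math.cos(ang + half), math.sin(ang + half))
    right = (math.cos(ang - half), math.sin(ang - half))
    return apex, left, right

def side_biased_cone(p_a, p_b, v_a, v_b, r, bias=0.15):
    """Widen one leg of the VO so the agent keeps passing B on the side
    it is already committed to. The apex here is the plain VO apex
    (B's velocity) and `bias` is an arbitrary extra angle in radians;
    both are simplifications of what HRVO actually does."""
    apex, left, right = vo_cone(p_a, p_b, r, v_b)
    dx, dy = p_b[0] - p_a[0], p_b[1] - p_a[1]
    rvx, rvy = v_a[0] - v_b[0], v_a[1] - v_b[1]
    # 2D cross product: which side of the line towards B does the relative
    # velocity point? Widen the opposite leg to discourage flip-flopping.
    if dx * rvy - dy * rvx > 0.0:
        ang = math.atan2(right[1], right[0]) - bias
        right = (math.cos(ang), math.sin(ang))
    else:
        ang = math.atan2(left[1], left[0]) + bias
        left = (math.cos(ang), math.sin(ang))
    return apex, left, right
```

Once the agent has drifted to one side, the widened opposite leg keeps pushing candidate velocities further onto that side, so the choice sticks between frames.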
Another trick I added is that I clamp the VO cone using a spike instead of a butt end like they do in the ClearPath paper. This makes the avoidance much smoother, as can be seen in the head-to-head examples. It adds the much-needed prediction Clodéric mentioned in the comments of my last post.
I use different time horizons for static and moving obstacles, which allows the agents to move close to the walls while still avoiding each other. This is quite important for good behavior in crowded locations.
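One way to read that truncation is purely in terms of time-to-collision: a candidate velocity is only forbidden if it actually leads to a collision within the horizon, and the horizon simply differs per obstacle type. A minimal sketch of that idea (the tau values below are arbitrary placeholders, not the numbers I use):

```python
import math

def time_to_collision(px, py, vx, vy, r):
    """First t >= 0 at which |p - v*t| == r, i.e. when an agent moving
    with relative velocity v touches the disc of combined radius r at
    relative position p. Returns inf if that never happens."""
    a = vx * vx + vy * vy
    b = -2.0 * (px * vx + py * vy)
    c = px * px + py * py - r * r
    if c < 0.0:
        return 0.0              # the discs already overlap
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return math.inf         # no relative motion, or passing clear
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else math.inf

def forbidden(px, py, vx, vy, r, static_obstacle,
              tau_static=0.5, tau_moving=2.5):
    """A velocity is inside the truncated VO only if it hits within the
    horizon; walls get a much shorter horizon than agents, so an agent
    may skim along a wall it would never be allowed to approach if the
    wall used the long horizon."""
    tau = tau_static if static_obstacle else tau_moving
    return time_to_collision(px, py, vx, vy, r) < tau
```

The short static horizon is what lets agents hug walls: a velocity sliding along a wall collides with it eventually, but not within the half-second window, so it stays admissible.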
The sudden movements are caused by thin sliver sections of the velocity space opening and closing as the neighbours move.
Geometric vs. Sampled
In many cases the geometric method is much smoother than any sampling method. I think its biggest drawback is that the result is binary: you're either in or out. But that is also its strength, since it makes the results accurate. The binary result also means that it is possible to get into situations where there is no admissible velocity at all.
Another con of the geometric method is the velocity selection. A common way is to simply move the velocity to the nearest point on the border of the velocity obstacle. In crowded cases this results in deadlocks, as can be seen in the video too. I'm pretty sure that if the floating point results were perfectly accurate, the deadlocks would never resolve.
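For concreteness, that nearest-point policy looks roughly like this against a single cone (real code has to handle the union of several VOs, which is where it gets hairy):

```python
import math

def clamp_outside_cone(vx, vy, ax, ay, ldir, rdir):
    """If the velocity (vx, vy) lies inside the VO cone (apex (ax, ay),
    unit leg directions ldir and rdir), move it to the nearest point on
    the cone's boundary; otherwise return it unchanged. This is the
    plain 'nearest point' selection discussed above."""
    wx, wy = vx - ax, vy - ay
    # inside iff w is clockwise of the left leg and counter-clockwise
    # of the right leg (2D cross products)
    inside = (ldir[0] * wy - ldir[1] * wx) < 0.0 and \
             (rdir[0] * wy - rdir[1] * wx) > 0.0
    if not inside:
        return (vx, vy)
    # project w onto each leg (clamped at the apex), keep the closer hit
    tl = max(0.0, wx * ldir[0] + wy * ldir[1])
    tr = max(0.0, wx * rdir[0] + wy * rdir[1])
    pl = (ax + tl * ldir[0], ay + tl * ldir[1])
    pr = (ax + tr * rdir[0], ay + tr * rdir[1])
    dl = (vx - pl[0]) ** 2 + (vy - pl[1]) ** 2
    dr = (vx - pr[0]) ** 2 + (vy - pr[1]) ** 2
    return pl if dl <= dr else pr
```

The deadlock-prone part is visible right in the `dl <= dr` tie-break: two agents facing each other sit exactly on the tie, and only floating point noise ever nudges them to one side.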
You can see similar clogging, which is present in my video, in the ClearPath video too (at the point where they turn the speed up to 10x).
The sampled VO looks smoother too, especially around the tip of the cone. In practice this means that you are less likely to run into situations where there are no admissible velocities. You get a high penalty for velocities that would collide with another agent, but sampling allows you to take the risk.
Sampling also allows combining multiple different weights when calculating the best velocity, which enables more interesting methods to smooth out the velocity selection.
I'd love to see an academic paper that compares the pros and cons of these two methods and draws an educated conclusion. Too bad most papers present their new method as the best thing since sliced bread and fail to address the cons properly.
The sampling method looks really good and smooth when I have 129x129 samples (like in the example picture above), but that amount of samples is totally impractical. I think sample counts around 200 start to become acceptable, though I'd love to use fewer. I will try to see if I can cook up some wacky stable adaptive version to reduce the sample count.
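One naive shape such an adaptive scheme could take is coarse-to-fine hill-climbing: sample a ring of candidates around the current best, halve the radius, and repeat. No idea yet whether this can be made stable frame-to-frame; this is just the general idea, with made-up parameters:

```python
import math

def adaptive_best(cost, center, radius, levels=3, dirs=8):
    """Refine a velocity estimate by repeatedly sampling `dirs`
    candidates on a ring around the current best and shrinking the
    ring. Uses levels * dirs samples instead of a dense grid.
    `cost` is any velocity-cost function; parameters are placeholders."""
    best = center
    best_c = cost(best)
    for _ in range(levels):
        cx, cy = best   # freeze the ring center for this level
        for j in range(dirs):
            a = 2.0 * math.pi * j / dirs
            cand = (cx + radius * math.cos(a),
                    cy + radius * math.sin(a))
            c = cost(cand)
            if c < best_c:
                best, best_c = cand, c
        radius *= 0.5
    return best
```

With the defaults that is 24 samples instead of hundreds, at the obvious risk of locking onto a local minimum; the frame-to-frame stability problem is exactly that the chosen minimum can jump between frames.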
I think the geometric method could be improved quite a bit if I could figure out a better method to clamp the velocity to the boundary of the VO. The nearest point is evidently not good enough.
I do like both methods, and I will continue trying to fix the cons of both as time permits. I have a funny feeling that one of these methods might be good enough for triple-A quality agent movement. Well, that would still lack the animation bit, but that is another story.