Sunday, March 20, 2011

Simulating Human Collision Avoidance

There were many good collision avoidance papers published last year. One trend I noticed already while preparing my Paris Game AI Conference presentation last year was that the next step in human-like collision avoidance will come from inspecting motion capture data.

One of my favorites from last year was A Velocity-Based Approach for Simulating Human Collision Avoidance by Karamouzas & Overmars. Technically their solution is very close to sampling-based RVO, but there is one very important difference; quoting the paper:
Our analysis, though, focuses on the predicted time to collision between interacting participants and the deviation from their desired velocities, whereas they studied the effect that the minimum predicted distance has on the participants’ accelerations.
In practice this means that they did a bunch of measurements with real people and noticed that the velocity sampling range depends on the predicted time of impact.

That is, if the agent thinks it will hit something 3 seconds in the future, it is likely to adjust its speed and angle just a tiny amount, but if the collision is imminent, the agent may adjust its velocity a lot. The plot at the top of the post shows how the sampling range changes based on the predicted time of impact.

This is a tiny detail, but a very important one. The resulting animations (accessible via the link above) look pretty good too.
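That scaling could be sketched roughly like this (a minimal sketch; the function name, the linear falloff, and all constants are my own assumptions, not values from the paper):

```cpp
#include <algorithm>

// How much an agent is allowed to deviate from its desired velocity.
struct SampleRange {
    float maxSpeedDelta;  // allowed change in speed (m/s)
    float maxAngleDelta;  // allowed change in heading (radians)
};

// The admissible sampling range shrinks as the predicted time to
// collision grows: imminent collisions permit large adjustments,
// distant ones only tiny corrections.
SampleRange sampleRangeForTTC(float ttc, float maxTTC = 8.0f)
{
    // Normalize: 0 = collision imminent, 1 = far in the future.
    const float t = std::min(std::max(ttc, 0.0f), maxTTC) / maxTTC;
    SampleRange r;
    r.maxSpeedDelta = (1.0f - t) * 1.5f;               // up to 1.5 m/s
    r.maxAngleDelta = (1.0f - t) * (3.14159f * 0.5f);  // up to 90 degrees
    return r;
}
```

The exact shape of the falloff is what the mocap measurements in the paper pin down; the point is only that the range is a function of the predicted time of impact, not a constant.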


  1. The movement in that presentation looks really good! The only thing that I thought was a little unnatural was that some people started to walk so close together that it would be uncomfortable in real life (especially considering that there was enough room to not walk so close together). What was also missing was people looking at each other while trying not to walk into each other, but that would be an animation system issue, not navigation.

  2. How is that time to impact different from Reynolds' approach though? It's basically scaling the steering force in proportion to the distance to impact.

    I think it's a reasonably intuitive thing to do, but I think in some situations it would break down (a single agent moving towards a crowd, etc.).

    I'd love to see some raw footage of crowds being put on the internet as a first step in testing crowd simulations. Mocap is good for fine detail, but I think having top down footage of real crowds would show us where more of the problems are.

    I've not seen anything like that though. Have you?

  3. @LogicaError, they have a follow-up paper which deals with groups. They also use a personal space radius in addition to the agent radius, so that distance should be controllable.

    @Phil, the difference is that Reynolds' method uses just the first collision to try to solve the whole collision avoidance problem, whilst Karamouzas & Overmars use the first collision to control the amount of allowed change in speed and angle, and then use a sampled RVO-like algorithm to calculate the actual avoidance.

    There are a couple of papers/videos out there which use rotoscoping to match their simulation to actual footage.

    I'd love to see motion capture + video + extracted hip/chest/head data.
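    The sampled, RVO-like half of that split could look something like this toy (not the paper's algorithm; the disc obstacle, cost weights, and names are mine): the predicted collision only feeds a penalty term, and the avoidance velocity is whichever candidate scores best.

    ```cpp
    #include <cmath>
    #include <vector>

    struct Vec2 { float x, y; };

    // Time until a point agent moving at 'vel' hits a static disc at 'c'
    // with radius 'r'; returns a large value if there is no collision.
    float ttcDisc(Vec2 vel, Vec2 c, float r)
    {
        const float a = vel.x*vel.x + vel.y*vel.y;
        const float b = -2.0f * (vel.x*c.x + vel.y*c.y);
        const float cc = c.x*c.x + c.y*c.y - r*r;
        const float disc = b*b - 4.0f*a*cc;
        if (a < 1e-6f || disc < 0.0f) return 1e9f;
        const float t = (-b - std::sqrt(disc)) / (2.0f*a);
        return t >= 0.0f ? t : 1e9f;
    }

    // Score candidate velocities: prefer small deviation from the desired
    // velocity and a late (or no) predicted collision.
    Vec2 pickVelocity(const std::vector<Vec2>& cands, Vec2 desired,
                      Vec2 obstacle, float radius)
    {
        Vec2 best = desired;
        float bestCost = 1e9f;
        for (const Vec2& v : cands)
        {
            const float dx = v.x - desired.x, dy = v.y - desired.y;
            const float deviation = std::sqrt(dx*dx + dy*dy);
            const float ttc = ttcDisc(v, obstacle, radius);
            const float cost = deviation + 2.0f / (ttc + 0.1f);
            if (cost < bestCost) { bestCost = cost; best = v; }
        }
        return best;
    }
    ```

    In the Karamouzas & Overmars scheme, as I understand it, the candidate set itself would additionally be restricted by the first predicted time to impact, per the post above.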

  4. Quite frankly, I'm already trying to wrap my head around using the velocity-based approach for steering in a space that is solely defined by what Recast offers. For animation I would still correct the z-axis based on the actual geometry data. I'd be glad to have some pointers on how to efficiently extract the data the algorithm needs from the navmesh. The math is so much easier on a 2D plane..

  5. @Thomas, dtPathCorridor::movePosition() adjusts the elevation of the agent using the approximate getPolyHeight(). If you want to position the character exactly on the ground, you should use a physics cast.

    For example, place the start location of the cast at the current agent position raised by agentMaxClimb, and cast down up to 2*agentMaxClimb. A raycast might be ok, but I would use a sphere cast where the sphere radius is half the agent radius.
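    That cast might look something like this sketch; sphereCastDown here is a stand-in for your engine's query (for illustration it just hits a flat ground plane at y = 0):

    ```cpp
    struct Vec3 { float x, y, z; };

    // Stand-in for an engine sphere cast: casts a sphere of 'radius'
    // from height 'fromY' straight down, in a world that is just a flat
    // ground plane at y = 0. Returns true and the travel distance when
    // the sphere bottom touches the ground within 'maxDist'.
    bool sphereCastDown(float fromY, float radius, float maxDist, float* hitDist)
    {
        const float d = fromY - radius;
        if (d < 0.0f || d > maxDist) return false;
        *hitDist = d;
        return true;
    }

    // Start the cast at the agent position raised by agentMaxClimb,
    // cast down up to 2*agentMaxClimb, with the sphere radius set to
    // half the agent radius, as described in the comment above.
    Vec3 snapAgentToGround(Vec3 pos, float agentRadius, float agentMaxClimb)
    {
        const float startY = pos.y + agentMaxClimb;
        float hit = 0.0f;
        if (sphereCastDown(startY, agentRadius * 0.5f, 2.0f * agentMaxClimb, &hit))
            pos.y = startY - hit;  // rest the sphere on the hit surface
        return pos;
    }
    ```

    With a real engine you would swap sphereCastDown for its sweep/cast query and keep the same start height and distance bounds.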

  6. Thx for your reply. As for the z-correction, I just wanted to state that I will still be using that.
    The other problem is much more interesting, actually. I will see if simply sampling future positions will yield the colliding agents.
    If you wonder why I make such a fuss about it, since you already offer an RVO implementation to peek at for inspiration.. well, I've only just started working with your code. So actually I just wanted to tell you that I'm going to mate your work with that of Karamouzas and marvel at whatever might come out of it :-)

  7. The paper is now here: