Friday, April 16, 2010

Adaptive RVO Sampling


I spent a little time trying to slim down the sample count of the sampled RVO method, while still trying to get really good accuracy and quality. The above pictures show two cases, the brute-force and adaptive versions of a scene where the accuracy and quality of the resulting movement are comparable. One takes 890 samples (though I think 500 would be a fair comparison in terms of accuracy too) and the other 50 samples. An order of magnitude optimization, my favorite!
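
Since the adaptive method itself is saved for the presentation, here is only a generic sketch, for context, of how sampling-based velocity-obstacle methods typically score a candidate velocity: penalize deviation from the desired velocity plus imminent collisions. The names, weights, and the plain-VO simplification below are illustrative assumptions, not the actual implementation.

```cpp
// Generic sketch of sampled velocity scoring (illustrative, not the code
// behind the numbers above). Lower penalty is better.
#include <cmath>
#include <cfloat>
#include <vector>

struct Vec2 { float x, y; };
static Vec2 sub(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
static float dot(Vec2 a, Vec2 b) { return a.x*b.x + a.y*b.y; }

struct Agent { Vec2 pos, vel; float radius; };

// Earliest t >= 0 at which |p - v*t| == r, i.e. when a point with relative
// velocity v hits a disc of radius r at relative offset p; FLT_MAX if never.
static float timeToCollision(Vec2 p, Vec2 v, float r) {
    const float a = dot(v, v);
    if (a < 1e-6f) return FLT_MAX;
    const float b = dot(p, v);
    const float c = dot(p, p) - r*r;
    const float d = b*b - a*c;
    if (d < 0.0f) return FLT_MAX;            // the ray misses the disc
    const float t = (b - std::sqrt(d)) / a;
    return t >= 0.0f ? t : FLT_MAX;          // the hit is behind us
}

static float scorePenalty(Vec2 vcand, Vec2 vdes, const Agent& self,
                          const std::vector<Agent>& neighbors,
                          float wDes, float wToi, float horizon) {
    const Vec2 dv = sub(vcand, vdes);
    float penalty = wDes * std::sqrt(dot(dv, dv)); // straying from desired velocity
    for (const Agent& nb : neighbors) {
        // Plain VO for brevity; RVO would share the avoidance effort by
        // blending the agent's current velocity into the test.
        const float toi = timeToCollision(sub(nb.pos, self.pos),
                                          sub(vcand, nb.vel),
                                          self.radius + nb.radius);
        if (toi < horizon)
            penalty += wToi * (1.0f - toi / horizon); // sooner hit, bigger penalty
    }
    return penalty;
}
```

The brute-force version evaluates such a score over a dense disc of candidate velocities; the adaptive version gets away with far fewer evaluations.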

The method now takes somewhere between 0.01–0.04 ms per agent to calculate (2.4 GHz Core Duo), depending of course on the number of obstacles. My goal is to handle around 25 agents in 0.5 ms per frame, so it is just about there! There are still quite a lot of potential optimizations left in the sampling code (removing square roots and divisions, vectorization, etc.), so I'm quite happy with it so far.
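
As a generic illustration of the square-root removal mentioned above (not code from the post), range tests can usually be rewritten against squared quantities:

```cpp
// Compare squared distances to avoid a square root per neighbor test.
static bool withinRange(float dx, float dy, float range) {
    return dx*dx + dy*dy < range*range; // instead of: sqrtf(dx*dx + dy*dy) < range
}
```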

It is possible to run the simulation at a lower rate too. I'm currently testing at 20 Hz, but the method works well down to 5 Hz. I have not yet investigated adapting the sample count based on the surroundings, so I think that with some cleverness and LOD tricks it is possible to trim the timings even further.

One more thing to note in the above picture is the sampling area. Instead of the full circle around the agent, the sampling area is biased towards the desired movement direction, and the radius is reduced too. The rationale behind this is that the more evasive movements happen at lower speeds. It cuts down the sample count quite dramatically, by around 40%.
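
A biased sample pattern could be generated along these lines; this is a sketch of the idea only, with guessed parameter names, not the actual implementation: shrink the candidate disc and shift its center along the desired velocity, so sideways and backwards candidates exist only at lower speeds.

```cpp
// Sketch of a sampling area biased towards the desired movement direction.
// bias = 0 gives the full disc of speeds up to vmax centered on the agent;
// larger values shift the center along vdes and shrink the radius.
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

std::vector<Vec2> biasedSamples(Vec2 vdes, float vmax,
                                int rings, int samplesPerRing, float bias) {
    const float radius = vmax * (1.0f - bias);            // reduced radius
    const Vec2 center = { vdes.x * bias, vdes.y * bias }; // shifted center
    std::vector<Vec2> out;
    out.push_back(center);
    for (int r = 1; r <= rings; ++r) {
        const float rad = radius * (float)r / (float)rings;
        for (int i = 0; i < samplesPerRing; ++i) {
            const float a = 6.2831853f * (float)i / (float)samplesPerRing;
            out.push_back({ center.x + std::cos(a) * rad,
                            center.y + std::sin(a) * rad });
        }
    }
    return out;
}
```

The saving comes from the smaller radius: fewer rings are needed to keep the sample spacing constant, which is presumably where the roughly 40% reduction comes from.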

Well, I think I'll save some details for my upcoming presentation at the Game AI Conference 2010 too :)

6 comments:

  1. This comment has been removed by a blog administrator.

  2. Hi Mikko,

    I've been following this blog for a while now; I am Phil Carlisle's student, working on crowd simulations and personality models, and your work has been very interesting to me.

    I saw this post just now, just after implementing this same idea myself. My method brings my sample count down from 250–350 samples per agent to around 40, which is a big improvement.

    I know you said you're going to save the implementation details for the AI conference, but I'm interested to hear what your method was.

    My method is to test velocity samples on the edge of the circle surrounding the agent, with the circle's radius equal to the agent's desired velocity. I then record which sample scored best and repeat using a circle with half the radius of the previous one, centered halfway between the agent's position and the previous best-scoring velocity (see the sketch after this comment).

    So far I'm getting not only a speed increase, but also much better local obstacle avoidance than with my previous sampling method.

    I liked your idea to bias the sampling area towards the movement direction and implemented this as well; it was only a small modification, involving setting the initial center of the search at a point between the agent's position and its desired velocity.

    Keep up the good work

    Scott Bevin
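
    A minimal sketch of the refinement loop Scott describes, with hypothetical names (my reading of the description, not his actual code; the score callback stands in for whatever rates a candidate velocity, lower being better):

```cpp
// Iterative refinement: sample the rim of a circle, keep the best-scoring
// velocity, then halve the radius and re-center halfway between the agent's
// position (zero velocity here) and that best sample.
#include <cmath>
#include <cfloat>
#include <functional>

struct Vec2 { float x, y; };

Vec2 refineVelocity(Vec2 vdes, int samplesPerRing, int iterations,
                    const std::function<float(Vec2)>& score) {
    float radius = std::sqrt(vdes.x*vdes.x + vdes.y*vdes.y); // desired speed
    Vec2 center = { 0.0f, 0.0f };  // the first circle surrounds the agent
    Vec2 best = center;
    for (int it = 0; it < iterations; ++it) {
        float bestScore = FLT_MAX;
        for (int i = 0; i < samplesPerRing; ++i) {
            const float a = 6.2831853f * (float)i / (float)samplesPerRing;
            const Vec2 v = { center.x + std::cos(a) * radius,
                             center.y + std::sin(a) * radius };
            const float s = score(v);
            if (s < bestScore) { bestScore = s; best = v; }
        }
        center = { best.x * 0.5f, best.y * 0.5f }; // halfway to the winner
        radius *= 0.5f;
    }
    return best;
}
```

    With, say, 10 samples per ring and 4 iterations this evaluates 40 candidates, matching the counts mentioned above.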

  3. Hi Mikko,

    I am the author of the original RVO paper. It is great to see you put it into practice! Also check out our new work on ORCA; that should increase quality at lower computational cost.

    http://gamma.cs.unc.edu/CA/

    Jur van den Berg

  4. Hi Jur,

    Thank you for writing those papers, I'm a big fan! :) I'm following your work really keenly.

    I have tried ORCA, but I was not able to get it to work well. Some parts of the ORCA implementation fall outside my comfort zone as a programmer (i.e. the 3D linear programming). It could be that my implementation was not quite right, either.

    The biggest problem I had with the method was that I often ended up in situations where the region of permitted velocities became empty.

    It often looked like that could have been avoided if I had chosen to pass some agents on the other side. I tried to resolve those situations by setting vopt = 0, but it happened too often, and in practice that leads to an avoidance strategy which is pretty much equivalent to Fiorini & Shiller's "to goal" strategy, which eventually makes everyone stop if they have conflicting goals.

    There is something quite elegant about ORCA; when time permits, I will give it another go.

    I would love to love the geometric methods, but I have many practical cases where they result in deadlocks because of the nearest-velocity selection.

    For example, in the Geometric Methods for Multi-agent Collision Avoidance video [1], ORCA seems to have trouble solving the 5-agent case when the parameters are symmetric.

    I made a video which compares the hard vs. soft boundary of an RVO implementation [2]. In looser situations they both perform about equally, but the smoother version has some subtle properties which make it select better velocities in tight spots (a minimal illustration follows at the end of this comment).

    I claim (without proof) that the method of selecting a velocity outside the VO defines the agent's behavior. I think selecting the nearest velocity outside the VO is not always the correct behavior.

    Pettré & co. have shown, based on empirical studies, that the right choice seems to be something between Fiorini & Shiller's TG and MV strategies. In many cases selecting the nearest velocity comes really close, but as the VO becomes more obtuse, nearest-velocity selection favors TG more.

    --mikko

    [1] http://www.youtube.com/watch?v=oleR0ovxgN8
    [2] http://www.vimeo.com/10715717
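
    To make the hard vs. soft distinction concrete, a minimal illustration (assumed names, not the code behind the video): a hard boundary forbids every velocity inside the VO outright, while a soft one penalizes by how close to collision a velocity gets.

```cpp
// Hard vs. soft VO boundary when scoring a candidate velocity by its
// predicted time to collision toi (illustrative only).
float hardPenalty(float toi, float horizon) {
    return (toi < horizon) ? 1e6f : 0.0f; // inside the VO: effectively forbidden
}
float softPenalty(float toi, float horizon) {
    if (toi >= horizon) return 0.0f;
    return 1.0f - toi / horizon;          // smooth falloff towards the boundary
}
```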

    I am a big fan of Jur's as well, having come across his website and RVO work about a year ago. Now I am a big fan of yours too. Great work and a fascinating read.

    I broke down Jur's code and semi-successfully translated it into Unity JavaScript (more like C# than JavaScript; http://unity3d.com). I am not a programmer by trade (self-taught), but I got it to work minus the kd-tree part. A great learning experience for me.

    Anyway, wishing you guys all the best with this. I'll be following your progress.

    Mitch

  6. Hi all,

    I've been very impressed by the elegance of ORCA, so I have implemented a demo in C# / Unity. Source code is available at http://rvo-unity.chezslan.fr

    Credits should go mainly to Jur et al.
