## Tuesday, November 3, 2009

### Making Velocity Selection More Stable

I have run a couple of tests on my velocity obstacle prototype to reduce the flicker in the velocity selection. Here are a couple of things that seem to work well.
1. Iterations: Just like you can improve collision detection with iterations, you can help the velocity selection with iterations too. Adjusting the sensed velocity gradually towards the selected velocity allows the other agents to react to the gradual change. Even after this change I often get jitter when two agents are about to start avoiding each other.
2. Sample Selection: I originally used the sample selection presented in the HRVO code, but I noticed that I get more stable results if I just use the samples on the max velocity circle. This sometimes results in full stops, but that might not be a bad solution at all. It should be possible to use a bit of higher-level logic, which could choose between several states based on how crowded the situation is. This should help the animation playback too.
3. Finite Time Threshold: Truncating the tip of the VO cone helps a lot in certain situations, especially when approaching the goal location. There seems to be some correlation between how well the avoidance works and how far out the time threshold is, but it varies quite a lot from situation to situation. I think it is possible to adjust the threshold and get better results.
4. Velocity Obstacle Shape: The shape of the velocity cone has a big impact on how smooth the velocity selection is. It is a no-brainer, but it definitely was not the first thing on my list of things to try to get the system running more smoothly. Making the tip of the truncated cone sharper helps the first-contact velocity selection a lot (the problem that was left unsolved by the iterations).
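Items 1–3 above could be sketched roughly like this (a minimal toy, not the actual prototype code; all function names and constants here are made up): sample candidate velocities on the max-speed circle, penalize samples whose predicted time to collision falls under a horizon (the truncation idea), and blend the current velocity toward the winner over several frames (the iteration idea).

```python
import math

def time_to_collision(pos, vel, other_pos, other_vel, combined_radius):
    """First time two discs touch under constant velocities, or None."""
    px, py = other_pos[0] - pos[0], other_pos[1] - pos[1]
    vx, vy = vel[0] - other_vel[0], vel[1] - other_vel[1]
    a = vx * vx + vy * vy
    b = -2.0 * (px * vx + py * vy)
    c = px * px + py * py - combined_radius * combined_radius
    if a < 1e-9:
        return None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None

def pick_velocity(pos, vel, desired_vel, max_speed, neighbors,
                  radius, time_horizon=4.0, samples=32, blend=0.25):
    """Sample the max-speed circle, penalize samples that would collide
    within the time horizon, then blend toward the winner (iterations)."""
    best, best_cost = vel, float('inf')
    for i in range(samples):
        ang = 2.0 * math.pi * i / samples
        cand = (max_speed * math.cos(ang), max_speed * math.sin(ang))
        # Preference term: stay close to the desired velocity.
        cost = math.hypot(cand[0] - desired_vel[0], cand[1] - desired_vel[1])
        for n_pos, n_vel, n_rad in neighbors:
            t = time_to_collision(pos, cand, n_pos, n_vel, radius + n_rad)
            if t is not None and t < time_horizon:
                # Truncation: only collisions sooner than the horizon
                # count, and nearer collisions are penalized more.
                cost += (time_horizon - t) * 10.0
        if cost < best_cost:
            best, best_cost = cand, cost
    # Iteration trick: move only gradually toward the selected velocity.
    return (vel[0] + blend * (best[0] - vel[0]),
            vel[1] + blend * (best[1] - vel[1]))
```

In free space this drifts straight toward the desired velocity; with an oncoming agent the penalized straight-ahead samples lose to slightly turned ones, which is exactly the first-contact case discussed above.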
I think my further prototyping will concentrate on investigating how the shape of the velocity obstacle affects the avoidance. For example, what happens if the shape of the VO is more like a parabola, or more like a trumpet? Another thing to prototype is how different avoidance region types affect the movement.

I also did a quick prototype of the directive circle approach based on this paper: Robot Motion Planning in Dynamic Environments with Moving Obstacles and Target. It uses a similar max velocity selection method, which I found more stable, and it has an interesting trick for combining the VOs. Sampling artifacts aside, it seems to have some problems when approaching the goal location. I will test it a little more. The way the obstacles are combined in that approach lets me test whether adding more clearance might help the steering too.
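For illustration, here is a very simplified directive-circle-style heading pick (my own sketch, not the paper's algorithm; it handles only static disc obstacles, whereas the paper handles moving ones): each obstacle blocks an arc of headings on the circle, and the free heading closest to the goal bearing wins.

```python
import math

def free_heading(pos, goal, obstacles, max_dist, samples=64):
    """Pick a heading on the 'directive circle': block an angular
    interval per static disc obstacle, choose the free heading with
    the smallest deviation from the goal bearing (None if all blocked)."""
    blocked = []
    for o_pos, o_rad in obstacles:
        dx, dy = o_pos[0] - pos[0], o_pos[1] - pos[1]
        d = math.hypot(dx, dy)
        if d >= max_dist or d <= o_rad:
            continue
        # Tangent half-angle of the blocked arc around the bearing.
        half = math.asin(min(1.0, o_rad / d))
        blocked.append((math.atan2(dy, dx), half))
    goal_ang = math.atan2(goal[1] - pos[1], goal[0] - pos[0])
    best, best_dev = None, float('inf')
    step = 2.0 * math.pi / samples
    for i in range(samples):
        ang = goal_ang + (i - samples // 2) * step
        # Wrap-safe angular distance to each blocked-arc center.
        if any(abs(math.atan2(math.sin(ang - c), math.cos(ang - c))) < h
               for c, h in blocked):
            continue
        dev = abs(ang - goal_ang)
        if dev < best_dev:
            best, best_dev = ang, dev
    return best
```

With nothing in the way it returns the goal bearing; an obstacle straight ahead forces the heading just outside its blocked arc, which is where the sampling-resolution artifacts mentioned above come from.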

1. The jittering you mention, could this be a feedback effect? Where one agent moves in one direction, forcing another agent to change its path, which forces the original agent to adjust its path again?
It *might* help if you add a preference for turning in a particular direction, like turning to the right, with a certain epsilon value, to all agents.
Then maybe they'll be less likely to jump back and forth between turning left and turning right.

2. That is one source of the jitter, and the first situation you described can be fixed to a certain degree using iterations, as I explained in the post. The idea is that if the change is small, the others react less to it, so the system converges to something more stable.

The other kind of problem comes from the fact that the VO is symmetrical. Imagine that you are approaching someone head-on: both samples are equally good, and you usually end up choosing one in one frame and the other in the next frame.

My prototype solves that using the hybrid velocity obstacle, which means that one side of the obstacle plane is moved to create a slightly asymmetric VO shape.
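A toy illustration of the tie and the skew (this is not the actual HRVO construction, just the idea in miniature): with a perfectly symmetric cone, mirrored samples cost exactly the same, so the argmin can flip every frame; moving one edge of the cone out slightly makes one side win deterministically.

```python
import math

def symmetric_cost(v, desired, threat_dir, half_angle):
    """Toy cost: prefer the desired velocity, forbid headings inside a
    symmetric cone of half-angle `half_angle` around `threat_dir`."""
    ang = math.atan2(v[1], v[0]) - threat_dir
    ang = math.atan2(math.sin(ang), math.cos(ang))  # wrap to [-pi, pi]
    penalty = 100.0 if abs(ang) < half_angle else 0.0
    return math.hypot(v[0] - desired[0], v[1] - desired[1]) + penalty

def skewed_cost(v, desired, threat_dir, half_angle, skew=0.2):
    """Same cost, but one cone edge is pushed out by `skew` radians,
    giving the slightly asymmetric shape described above."""
    ang = math.atan2(v[1], v[0]) - threat_dir
    ang = math.atan2(math.sin(ang), math.cos(ang))
    inside = -half_angle < ang < half_angle + skew  # left edge moved out
    penalty = 100.0 if inside else 0.0
    return math.hypot(v[0] - desired[0], v[1] - desired[1]) + penalty

# Head-on: the desired velocity points straight at the threat.
left = (math.cos(0.5), math.sin(0.5))
right = (math.cos(-0.5), math.sin(-0.5))
c_l = symmetric_cost(left, (1.0, 0.0), 0.0, 0.4)
c_r = symmetric_cost(right, (1.0, 0.0), 0.0, 0.4)
assert c_l == c_r  # exact tie -> the selection can flip every frame
# With the skewed edge, the left sample falls inside and the right wins.
assert skewed_cost(right, (1.0, 0.0), 0.0, 0.4) < \
       skewed_cost(left, (1.0, 0.0), 0.0, 0.4)
```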

If you look at the picture in the post closely, you can see that the sharp V shapes at the ends of the obstacles are not symmetrical. That is the HRVO skewing in action.

I have tried a turning preference too; while it works in certain cases, you get a lot of deadlocks, where two agents both try to pass each other in the same direction, and they travel side by side far, far away (and live happily ever after ;).

If you follow the trail of papers from different authors (like VOs or VFHs), eventually they all suggest using lookahead planning.

That is, for each sample, simulate the system a few iterations into the future and see which one is eventually the best (for more details, check Fiorini's thesis or the VFH* paper).

That method sounds a bit too complex to do in realtime for multiple agents, though. It might be worth a try if I can get the system otherwise fast and simple.
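For what it's worth, a brute-force version of that lookahead could look like this (a hypothetical sketch, not any paper's exact method: each candidate velocity is held constant while the neighbors are extrapolated, and the rollout is scored by predicted collisions plus remaining distance to the goal):

```python
import math

def rollout_cost(pos, cand_vel, goal, neighbors,
                 steps=8, dt=0.25, radius=0.5):
    """Score one candidate velocity by simulating a few steps ahead.
    The agent holds cand_vel; neighbors hold their current velocities.
    Lower is better; predicted collisions dominate the score."""
    p = [pos[0], pos[1]]
    ns = [([n_pos[0], n_pos[1]], n_vel, n_rad)
          for n_pos, n_vel, n_rad in neighbors]
    cost = 0.0
    for _ in range(steps):
        p[0] += cand_vel[0] * dt
        p[1] += cand_vel[1] * dt
        for n_pos, n_vel, n_rad in ns:
            n_pos[0] += n_vel[0] * dt
            n_pos[1] += n_vel[1] * dt
            if math.hypot(p[0] - n_pos[0], p[1] - n_pos[1]) < radius + n_rad:
                cost += 100.0  # predicted collision on this rollout
    cost += math.hypot(goal[0] - p[0], goal[1] - p[1])  # remaining distance
    return cost

def lookahead_pick(pos, goal, max_speed, neighbors, samples=16):
    """Pick the max-speed-circle sample whose rollout scores best."""
    cands = [(max_speed * math.cos(2.0 * math.pi * i / samples),
              max_speed * math.sin(2.0 * math.pi * i / samples))
             for i in range(samples)]
    return min(cands, key=lambda v: rollout_cost(pos, v, goal, neighbors))
```

The cost here is samples × steps × neighbors per agent per frame, which makes the realtime concern above concrete: it grows multiplicatively with every parameter you raise.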