I had a little time today to do some more experiments with obstacle avoidance. It is starting to shape up nicely, I hope that in next update I can share some interactive examples too.
Today's experiment was to solve an unfortunate side effect of the sampling method I chose, namely initial collision. If two agents are overlapping, the sampling will always return blocked and the two agents will get stuck. I previously sidestepped this problem by simply ignoring the initial collision.
The collision most likely happens because of floating point inaccuracy, or because the agent movement code did something slightly different than what the local navigation told it to do. Some situations in games can make that happen, so it is good to be able to handle it.
First I tried to adjust the time-of-impact values based on the initial collision, but that did not work out as well as I hoped. Eventually my solution was to calculate both the time-of-impact (TOI) and time-of-exit (TOE) values per sampling direction. The same code is actually able to return both values easily: both come from a ray-circle intersection, where TOI is the first hit and TOE is the second.
My final solution was to add yet another component to the sample scoring calculation (remember, the sample direction with the smallest score is selected). Previously the score, or penalty, was calculated as follows:
score = Wa*da + Wt*1/toi
Where da is the angle difference between the desired direction and the sample direction, and toi is the time of impact in that direction. Wa and Wt are weights; I have used the values 2 and 1 respectively in my tests.
If there is a collision, I'd like the agent to move in the direction which resolves the collision as quickly as possible. The best direction is the one where the time-of-exit is the smallest. TOE is zero if there is no initial collision. For each sample, the largest TOE among all the hits is selected. The modified score calculation looks as follows; I used We=100 in my tests.
score = Wa*da + Wt*1/toi + We*toe^2
The result is that the agent prefers to move away from the collision, but does not select an overly stupid direction because of that unfortunate state.
Another thing I tested was how the min and max time horizons affect the behavior of the agents. You may remember from earlier blog posts that the max time horizon specifies how early the agent starts to change its movement direction, and the min time horizon specifies how early the agent starts to change its movement speed.
Setting the min time horizon close to the max time horizon results in much more varying and smoother movement, while a smaller min time horizon results in faster and more aggressive movement. It is pretty much as simple as that. I just had not played around with it earlier.
As expected, more aggressive settings make the crowd spread out much more in congested situations, while less aggressive settings make the crowd settle into a more swirl-like solution.
I'm quite happy with how few parameters there are to tune in this local navigation solution. I also like how readable the effect of changing them is. The overall behavior is mostly controlled by how the min and max time horizons are set. The weights allow further tuning of how much the agent should favor moving towards the goal (Wa higher) versus moving fast (Wt higher).
Mikko, check out "Composite Agents", which was presented at SCA last year. It treats both individual agents and grouped agents in a similar manner to create some interesting social variations for movement. I'd paste the URL here but Firefox won't let me; they have a website though.
(I've noticed that the blogger text field works better after I go to preview mode and come back.)
Thanks for the link. I think composite agents should work quite well with my method too. Velocity obstacles and similar methods are often a bit too optimal, and the behavior they create can seem rude compared to how humans move. That is, there is no concept of personal space (especially in front of the agent).
Social Force Model is another similar method. I think the most often referenced paper is "Simulating dynamical features of escape panic" by Helbing et al. (it seems to be in the Composite Agents paper's references too). Not sure how that compares to the CA method.
There's a lot of interesting research in the field. I have to hold my horses all the time so that I can finish even the simple ground-level stuff :)
Hi Mikko, you have done an amazing job of visualizing the VO concept! Can we also check out the source code of the demo on SVN? If so, that would be very generous! -- Allen
Project Cane has source for the obstacle avoidance explained above: http://code.google.com/p/cane/
I have not submitted my HRVO code, since it lacks support for segment obstacles.
I'm working more on obstacle avoidance this spring, and the results will be available as part of the Project Cane sources. I'm likely to blog about that stuff too.