Comments on Digesting Duck: Geometric vs. Sampling

Mikko Mononen (2010-03-26 06:03):

Thanks, Bruce! Looks like a book worth having. I did a quick hack yesterday where I just start with a rectangle and chip off each ORCA plane from it.

One thing I noticed was that it becomes really important how you choose the planes. At first I just tried to use the polygon I created for the ClearPath proto, and while it worked with a couple of agents, I got really bad results when there were many agents. The planes started to flicker around and often created a null admissible set. I will try a smooth boundary next.

How do you handle the case when the admissible set is null? Or do you use 3D linear programming?

I was thinking of testing a version that uses sampling together with the ORCA constraints. Sort of like rendering gradients using max blending mode, and then finding local minima after that. This might allow combining other weights too.

As for your side note, I think if you squint your eyes, my previous TOI sampling can be thought of as belonging to that category. Basically I was rendering a 1px high "depth map" around the agent and steering based on that.

Bruce (2010-03-26 05:41):

I used the LP algorithm on page 76 of Computational Geometry by de Berg et al.
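Mikko's "start with a rectangle and chip off each ORCA plane" idea a few lines up can be sketched as a standard convex-polygon clip against half-planes. This is an illustrative sketch, not code from either implementation; representing each ORCA plane as a (point, normal) pair, with the admissible side where `dot(v - point, normal) >= 0`, is my own assumption.

```python
def clip_polygon(poly, point, normal):
    """Sutherland-Hodgman style clip of a convex polygon against one half-plane.

    poly   -- list of (x, y) vertices in order
    point  -- a point on the ORCA plane
    normal -- normal pointing into the admissible half-plane
    Returns the clipped polygon; an empty list means a null admissible set.
    """
    out = []
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        # Signed distance of each endpoint from the plane.
        da = (a[0] - point[0]) * normal[0] + (a[1] - point[1]) * normal[1]
        db = (b[0] - point[0]) * normal[0] + (b[1] - point[1]) * normal[1]
        if da >= 0:
            out.append(a)
        if (da >= 0) != (db >= 0):  # edge crosses the plane: keep the crossing
            t = da / (da - db)
            out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return out
```

Starting from a rectangle centered on the agent's max speed range and folding in one plane per neighbor reproduces the "chipping" behavior; if the result ever comes back empty, that is exactly the null-admissible-set case asked about above.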
The one they use in the paper actually comes later in the same chapter, but since I was doing a proof of concept I stuck with the simplest algorithm. The algorithm is pretty intuitive once you get the intuition. The trick in the implementation, I think, is to have a really robust Boundary class that can tell you:

1. Given a point and a boundary, which side the point is on, and the distance.

2. The intersection point between two boundaries.

One practical issue with ORCA is handling the case when two entities are inside the safety zone, i.e. |B - A| < radiusAB (the combined radius). I wound up clamping |B - A| to radiusAB + epsilon and that seemed to work ok. I also did the real boundary calculation rather than the ClearPath approximation.

On a different (and speculative) note, years ago (1995) I did a synthetic-vision-based collision avoidance scheme that worked surprisingly well. I had an autonomous animated dog that could "navigate" in the DOOM environment (he could move around on a level, go through openings, avoid walls, etc.). The idea was really simple: I rendered the scene from the dog's head and calculated the motion energy between successive images at each pixel (I used a 64 x 64 or 32 x 32 array, as I remember). The motion energy is just how much the color changed. To stay in the middle of a corridor, you have a simple control law that balances the motion energy on the right and left halves of the image (this is actually how bees do it in nature). I had special cases for handling approaching a wall head-on, and a few other special cases as I remember. It did require that surfaces be textured. This was all before GPUs, etc.

I am not proposing this as a method necessarily, but rather to suggest that approaching CA as a vision problem might be a profitable direction to explore.
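The motion-energy control law Bruce describes can be sketched in a few lines. This is a toy reconstruction under my own assumptions, not his original implementation: frames are small 2D grayscale arrays, motion energy is the squared intensity change per pixel, and the agent turns away from the "hotter" half of the image (nearby surfaces produce larger image motion).

```python
def motion_energy(prev, curr):
    """Per-pixel motion energy: squared intensity change between two frames."""
    return [[(c - p) ** 2 for p, c in zip(pr, cr)] for pr, cr in zip(prev, curr)]

def steer(prev, curr, gain=1.0):
    """Balance motion energy between image halves.

    Returns a turn command in [-gain, gain]: positive means the left half is
    'hotter' (turn right), negative means turn left, zero means balanced.
    """
    e = motion_energy(prev, curr)
    w = len(e[0])
    left = sum(sum(row[: w // 2]) for row in e)
    right = sum(sum(row[w // 2 :]) for row in e)
    # Normalize so the command is independent of overall image contrast.
    return gain * (left - right) / max(left + right, 1e-9)
```

On an untextured surface both halves report zero energy and the controller goes blind, which matches the caveat above that surfaces had to be textured.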
I think this might be especially true when you are dealing with something like a pack or group as opposed to a large crowd, where the fine details of the motion actually matter and where the analytical approximations of the characters' shapes start having visible artifacts. Just speculation.

-bb

Mikko Mononen (2010-03-25 05:24):

Bruce, nice to hear that it works well.
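The two Boundary queries Bruce lists in his comment above (point side/distance, and boundary-boundary intersection) can be sketched as a small directed-line class. The class and method names are my own, not from his code; a boundary is assumed to be an origin plus a unit direction, with the admissible side on the left.

```python
import math

class Boundary:
    """A directed line: origin (ox, oy) plus unit direction (dx, dy)."""

    def __init__(self, ox, oy, dx, dy):
        l = math.hypot(dx, dy)
        self.ox, self.oy = ox, oy
        self.dx, self.dy = dx / l, dy / l  # normalize the direction

    def side(self, px, py):
        """Signed distance of a point: > 0 left of the direction, < 0 right."""
        return self.dx * (py - self.oy) - self.dy * (px - self.ox)

    def intersect(self, other):
        """Intersection point of two boundaries, or None if (near) parallel."""
        denom = self.dx * other.dy - self.dy * other.dx
        if abs(denom) < 1e-9:
            return None
        t = ((other.ox - self.ox) * other.dy
             - (other.oy - self.oy) * other.dx) / denom
        return (self.ox + t * self.dx, self.oy + t * self.dy)
```

With these two primitives, the incremental 2D LP from de Berg et al. is mostly bookkeeping: when a new constraint invalidates the current optimum, the candidate optimum is searched along the new boundary between its intersections with the previously processed ones.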
How did you implement the linear programming? I've never implemented one, and I could not find any simple pointers to it either. I will probably just clip a rectangle with the ORCA planes if I cannot figure out anything smarter.

On a side note, I did some profiling of my sampled vs. geometric method, and the geometric method is over 2x faster than sampling with 150 samples. The way I imagine implementing ORCA, it will be at least another 2x faster, if not more.

That should get me to my target, which is 20+ agents in less than 0.5 ms.

Bruce (2010-03-25 03:59):

I smiled at your comment about finally getting ORCA. It took me some noodling as well, but once I did and implemented it, I have been quite happy with it. I am using it for quadrupeds and am pleased with the results.

You made a comment several weeks ago which is quite true: at best, collision techniques tell you impermissible velocities rather than good velocities, so CA needs to work in conjunction with some pathing/nav approach.

bruce

Mikko Mononen (2010-03-23 11:06):

My thought process is: what can I take away to fix it, rather than what needs to be added to make it better ;)

I hear your point, though, and I believe it is well justified too. There is really cool emerging research, both at Gamma [1] and at IRISA [2], which is trying to find new solutions to crowd simulation based on data recorded from humans.
I'm keeping a keen eye on this research.

This actually led to quite an interesting insight the other day. The big problem with velocity selection is that we can easily find the velocities which are just super bad and physically impossible (i.e. any derivative of VOs), but selecting the velocity amongst the admissible velocities is actually defining the behavior of your agent!

That makes, for example, comparison between different methods impossible, since the results are subjective. There is always some example of someone stupid or drunk enough to compare against so that you can describe the motion as "human like".

That is the reason I like the above-mentioned research: they are pitching their algorithms against plain cold data, and they put a lot of effort into creating as good data as possible.

So that is sort of the stuff I'm aiming at on the distant horizon. In the meanwhile, I want to enable the community with better tools than currently exist.

Currently the accepted standard is "let's trace the polygon edge midpoints and add some forces to avoid the others". This is just unacceptable. I want to first come up with a solution that is robust and grounded in something we are already accustomed to, then pursue the greater goal.

If the solution is too drastic and requires a lot of work to get it implemented and integrated into existing production pipelines, no one will follow it and the sorry state of agent movement will remain.

The problem I see with waiting (without any other good means of avoidance) is that obstacle avoidance then becomes a scheduling problem, which is not really easy either. Choosing when to stop and when to move again can become as complicated as joining the merry-go-round with the others.
I did some experiments at some point with just stopping and waiting, and while it worked in some cases, most of the time it resulted in deadlocks.

I'd love to see some research and a prototype on that topic, though!

One thing that is visible in the Geometric Methods of Multi-agent Collision Avoidance video [3], and which supports your hypothesis, is that they actually use random desired movement speeds to solve some of the shortcomings of their avoidance method.

I could go as far as using the behavior to alter the speed in different kinds of situations. Desired speed, and change in desired speed, expresses quite a bit of behavior already.

One thing I really hate is how different disciplines have named this problem differently. Some call it multi-agent navigation, some call it virtual walkers, and what not. It makes Googling quite a frustrating job. I think "virtual walkers" might be most relevant to your work.

[1] http://gamma.cs.unc.edu/RCAP/
[2] http://www.irisa.fr/bunraku/Julien.Pettre/dataset1.html
[3] http://www.youtube.com/watch?v=oleR0ovxgN8

Phil Carlisle (2010-03-23 10:01):

Mikko: The more I think about it, the more I think this is the wrong way to do agent locomotion. My problem is that individuals simply don't treat the world as a set of moving vehicles. They often negotiate a contract with other agents if they see that moving forward would actually cause both of them to stop moving.
Your previous video actually showed some of that behavior (the funnel case at the end where some of the agents simply move out of the way).

I think adding some super-simple personality modelling on top of these methods would solve some of the crazy edge cases where agents start clumping in a group. It would allow individual agents to essentially adapt themselves out of the steering problems. For instance, imagine the case where you have agents pathing to the other side of a circle: instead of the current "everyone circles round to the right" behavior, some of them simply stay on the periphery and walk around the outside of the group, while others make a direct line.

This feels like the kind of problem where you could actually try sampling the behaviour of real humans (by finding a local university, getting people to walk to the same kinds of points, and videoing the result). Maybe someday I'll try doing that with a group of students and film it for you!

Phil.