
STD Heaps: reheaping

Started by Cirrus, April 11, 2003 05:27 PM
8 comments, last by Cirrus 21 years, 4 months ago
I was unsure of where to post this; General Programming would've fit, but since this is pertinent to a pathfinding algorithm I thought it'd fit here too. I've implemented the A* algorithm using std heaps. I've got it working fine, but I'm curious just how efficient it is. In particular, when I come across a node which is already in the heap (the open list), I relink its parent and recalculate F and G. The heap then needs to be reheaped to make sure the best F is on top (which may not be the case after this step). I'm currently resending my vector to std::make_heap. It works, but is it overkill? Is there a better solution? Thanks for reading. [edited by - Cirrus on April 11, 2003 6:29:10 PM]
RealitySlip - http://www.RealitySlip.com
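The approach described in the question can be sketched as follows. This is a minimal illustration, not the poster's actual code: the Node struct, the field names, and the helper names (worse, repath_and_reheap, pop_best) are all hypothetical stand-ins.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical node type; field and function names are illustrative only.
struct Node {
    float f = 0.0f;       // total estimated cost (G + heuristic)
    float g = 0.0f;       // cost from the start node
    Node* parent = nullptr;
};

// std::*_heap builds a max-heap by default, so compare with > to keep
// the node with the lowest F on top.
bool worse(const Node* a, const Node* b) { return a->f > b->f; }

// The step from the question: after relinking a node's parent and
// recalculating F and G, rebuild the whole heap with make_heap.
void repath_and_reheap(std::vector<Node*>& open, Node* node,
                       Node* new_parent, float new_g, float h) {
    node->parent = new_parent;
    node->g = new_g;
    node->f = new_g + h;
    std::make_heap(open.begin(), open.end(), worse);  // full O(n) rebuild
}

// Taking the best node off the open list is the usual pop_heap/pop_back pair.
Node* pop_best(std::vector<Node*>& open) {
    std::pop_heap(open.begin(), open.end(), worse);
    Node* best = open.back();
    open.pop_back();
    return best;
}
```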
push_heap may be more to your liking; read the docs for it.

How appropriate. You fight like a cow.
Hi Sneftel,

thanks for the reply, but push_heap won't do, as it only orders the last element in the heap. What I'd have to do to use push_heap is swap the last element in the heap with the one I've modified. That would work (since I only need the top of the heap to be sorted, I only need the first element to have the lowest F value), but I'm not sure it's more efficient than reheaping (it definitely should be). That may work, actually; I just don't know if it'll mess up the heap.

/* edit */
Actually, I can't do that as I don't have the index of the element within the heap (so I can't swap with the last element). Getting the index is out of the question (O(n) search...).

[edited by - Cirrus on April 11, 2003 7:12:24 PM]
You're going to have to make a decision... do you want to pay the cost (each iteration) of keeping an ordered structure, or pay the cost of searching for the lowest cost in an unordered structure?

The conventional wisdom suggests choosing the latter if you expect that you will be inserting more objects than you are removing, which is certainly the case in a branching search.

Since you seem to have chosen the former, don't be afraid that it is going to cost you some computation to reorder your heap each iteration. I'm not very familiar with the std library... does make_heap implement a Heapsort, or a Shellsort?

Timkin
/* redit:

post content got messed with last edit! doh!
make sure you read the post after this as my reasoning's wrong!

*/

/* edit:

From what I've read, it uses a heapsort. It only needs to be done once when starting the pathfinding, when there's just the path start in the list, meaning hardly any performance hit. Unless you do what I'm currently doing and reheap using make_heap when you hit a better path (I think I've found a good solution to this though; read on).

*/

I don't keep the list fully ordered; each iteration only needs to do a maximum of 8 insertions/reheaps (depending on whether the node already had a path or not, and if it was better or worse). Inserting an element should be O(log n), while reheaping (when a better path for an existing node is found) should be O(n log n) according to some documentation I've found. The problem is that it could be any permutation of those two, meaning in a worst-case scenario (though pretty rare, maybe impossible?) it'd be 8 * O(n log n) in a single iteration.

So if I went for a naive search every iteration, it'd be a minimum of O(n) for the best-element search. This would mean no paths need correction; we simply insert 8 elements. In such a case the heap is far more efficient, running in 8 * O(log n). Now in the worst-case scenario the naive approach would _still_ take O(n) and would only need to relink parents and recalculate values when a better path is found (no need to reiterate through the list at all), while the worst case for the heap would be 8 * O(n log n).

With large sets, the heap would be far more rewarding than the naive approach, especially since the algorithm inserts more often than it adjusts paths. In the worst case the heap performs considerably worse, but the gap is still not as large as the one between the naive approach's best case and the heap's best case.

Now to go even further: what if, instead of reheaping, I did a search for the element in the list and swapped it with the last one? That would take O(n) for the search and O(log n) for resorting the last element. So what I need to know is whether O(n) + O(log n) < O(n log n). Since that's true, I might as well do the swap method! I just answered my own question.

I'll post it anyway so someone can check whether my reasoning's right, and just in case someone has a similar question in the future.

So my answer would be: instead of reheaping via make_heap when a better path to a node is found, search the heap for the current node and swap it with the last one. Then simply push_heap.

[edited by - Cirrus on April 16, 2003 1:55:29 PM]
quote: Original post by Cirrus
So my answer would be: instead of reheaping via make_heap when a better path to a node is found, search the heap for the current node and swap it with the last one. Then simply push_heap.

push_heap requires that you already have a heap (excluding the last element). The swapping of elements could break this guarantee.

There is no quick optimisation to a heap that you can get by swapping, as the element you're removing might be at or near the head of the heap, which means potentially half the heap needs to be resorted before you can add anything. Heap removal can't really be optimised very much.

Personally I doubt there's a problem since you're already using a fairly efficient algorithm. I just use an std::map and have no complaints.
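Kylotan's std::map suggestion could look something like the sketch below. The thread doesn't show his code, so this is an assumed shape: a std::multimap keyed by F (multimap, since several nodes can share the same F), with the Node type and helper names invented for illustration.

```cpp
#include <cassert>
#include <map>

// Stand-in node type; the real one would carry position, G, parent, etc.
struct Node {
    float g = 0.0f;
};

// Open list as a multimap keyed by F: the best node is always *begin().
using OpenList = std::multimap<float, Node*>;

// Taking the best node is O(log n): grab begin() and erase it.
Node* pop_best(OpenList& open) {
    auto it = open.begin();      // smallest F, since the map is key-ordered
    Node* best = it->second;
    open.erase(it);
    return best;
}

// A better path just erases the old entry and reinserts under the new F,
// which is the map equivalent of the "reheap" step discussed above.
void repath(OpenList& open, OpenList::iterator old_entry,
            Node* node, float new_f) {
    open.erase(old_entry);
    open.insert({new_f, node});
}
```

The trade-off versus the heap is a constant-factor one: every insert and removal is O(log n) with node allocations, but updating a node's key is also O(log n) with no full rebuild.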



[ MSVC Fixes | STL Docs | SDL | Game AI | Sockets | C++ Faq Lite | Boost
Asking Questions | Organising code files | My stuff | Tiny XML | STLPort]
/* edit: something got majorly messed up with the source code box... */

Very good point, I really didn't think that through enough...

Let's assume I could do something else, like find the element in the heap and push it all the way down the heap (it would still be O(n), although you'd have a lot of writing to memory).

Something like:

      // repathed_node is a node pointer and working_list is the
      // heaped vector/deque of node pointers.

      // find the node
      unsigned int i;
      for (i = 0; i < working_list.size() && working_list[i] != repathed_node; i++) {}

      // push it down the list, which should guarantee things are still
      // ordered, although this may break the binary structure?
      for (; i < working_list.size() - 1; i++) {
          working_list[i] = working_list[i + 1];
      }
      working_list[i] = repathed_node;

      std::push_heap(working_list.begin(), working_list.end());


The limited knowledge I have of heaps sends off alarms at doing that, though: wouldn't this bust the linear binary tree used to store the heap, as parents and children would be misaligned in the tree?

So I assume the best thing (when considering a near-realtime, meaning not-every-frame, implementation of A*) is to make_heap when a better path is found, or simply accept a non-optimal path and not move the node (in the heap) at all. I've seen a few implementations do the latter, but you could end up choosing a really bad path this way (which seems to defeat the purpose).

Hope I thought things through right this time.

[edited by - Cirrus on April 16, 2003 1:52:24 PM]
quote: Original post by Cirrus
The limited knowledge I have on heaps sends off alarms at doing that though, wouldn't this bust the linear binary tree used to store the heap as parents and children would be unaligned in the tree?

Very much so! It's not just an ordered list, as you know, so you can't just 'shuffle' everything up. A heap looks a bit like this:
          1
      2       3
    4   5   6   7
       etc...

So if you wanted to remove node 3 by shuffling elements 4 to 7 upwards, you break the tree by moving element 4 to the right-hand branch of node 1 (among other problems). Even if you were careful to 'bubble' the elements up the correct branches, you can be left with holes in the structure, such as if you tried to remove node 2 above: node 4 or 5 would have to move up, leaving a gap where 4 or 5 once was. It's not impossible, but you need to have an understanding of the heap structure before you can code this.
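The tree structure just described maps onto the array in a fixed way: node i's children live at indices 2i+1 and 2i+2, and its parent at (i-1)/2 (0-based). The thread never shows this, but the standard decrease-key technique follows directly from it: since a better path only ever *lowers* a node's F, the node can only need to move toward the root, so sifting it up restores the heap in O(log n). A minimal sketch over raw F values (the sift_up name and layout are assumptions, not code from the thread):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Min-heap stored in an array: parent of i is (i - 1) / 2,
// children of i are 2i + 1 and 2i + 2 (0-based indices).
//
// After element i's value has *decreased*, swap it with its parent
// until the min-heap property holds again. O(log n), and no other
// part of the heap is disturbed.
void sift_up(std::vector<float>& f, std::size_t i) {
    while (i > 0) {
        std::size_t parent = (i - 1) / 2;
        if (f[parent] <= f[i]) break;   // heap property restored
        std::swap(f[parent], f[i]);
        i = parent;
    }
}
```

The catch, as noted earlier in the thread, is knowing the node's index i without an O(n) search; a real implementation would store each node's heap index on the node itself and keep it updated inside the swap.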

quote: So I assume the best thing (when considering a near semi-realtime, meaning not every frame, implementation of A*) is make_heap when a better path is found or simply accept a non-optimal path and not move the node (in the heap) at all.

I'd just make_heap. Forming a heap is very fast, because it never has to perform a full sort.


[edited by - Kylotan on April 16, 2003 5:36:06 PM]
Thought so; it was worth a shot.

Thanks for the time and explanations! Good to know that it's just about as near optimal as you can get.
...and one final point... if at any time you feel your method is worse than O(n) for each iteration, then you are better off taking the naive approach of searching an unordered list for the best node to expand... you'll also have to perform an O(n) search of the open and branch lists to confirm that you haven't visited this path before when you generate each successor... but it's still all O(n)!
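The naive unordered-list approach Timkin describes can be sketched like this (Node and pop_best are illustrative stand-ins, not code from the thread):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in node type carrying only the F cost.
struct Node {
    float f = 0.0f;
};

// Unordered open list: pay one O(n) scan per iteration to find the
// lowest F, then remove it with swap-and-pop since order is irrelevant.
// The payoff is that a "better path" update is free: just overwrite the
// node's F and parent in place, with no structure to maintain.
Node* pop_best(std::vector<Node*>& open) {
    if (open.empty()) return nullptr;
    std::size_t best = 0;
    for (std::size_t i = 1; i < open.size(); i++)
        if (open[i]->f < open[best]->f) best = i;
    Node* result = open[best];
    open[best] = open.back();   // swap-and-pop: O(1) removal
    open.pop_back();
    return result;
}
```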

Cheers,

Timkin

