====Persistent data structure method====
What really takes time in the naïve method is that whenever we move from one strip to the next, we take a full snapshot of the data structure that keeps the segments in sorted order. But once we have the segments that intersect <math>s_{i}</math>, moving to <math>s_{i+1}</math> means that exactly one segment enters or one segment leaves. Since consecutive strips differ by only a single insertion or deletion, copying everything from <math>s_{i}</math> to <math>s_{i+1}</math> is wasteful: the trick is to copy only the parts that change.

Let us assume that we have a tree rooted at {{mvar|T}}. When we insert a key {{mvar|k}} into the tree, we create a new leaf containing {{mvar|k}}, and any rotations performed to rebalance the tree modify only the nodes on the path from {{mvar|k}} to {{mvar|T}}. Before inserting the key {{mvar|k}}, we therefore copy all the nodes on that path. We now have two versions of the tree: the original one, which does not contain {{mvar|k}}, and the new one, which contains {{mvar|k}} and whose root is a copy of the root of {{mvar|T}}. Since copying the path from {{mvar|k}} to {{mvar|T}} increases the insertion time by at most a constant factor, insertion in the persistent data structure still takes <math>O(\log(n))</math> time.

For deletion, we first find which nodes will be affected. For each affected node {{mvar|v}}, we copy the path from the root to {{mvar|v}}, which yields a new tree whose root is a copy of the root of the original tree, and we perform the deletion on the new tree. Again we end up with two versions of the tree: the original one, which contains {{mvar|k}}, and the new one, which does not. Since a deletion only modifies the path from the root to {{mvar|v}}, and any appropriate deletion algorithm runs in <math>O(\log(n))</math> time, deletion in the persistent data structure also takes <math>O(\log(n))</math> time.

Every sequence of insertions and deletions thus creates a sequence of dictionaries, or versions, or trees <math>S_{1}, S_{2}, \dots, S_{i}</math>, where each <math>S_{i}</math> is the result of applying the first {{mvar|i}} operations. If each <math>S_{i}</math> contains {{mvar|m}} elements, then a search in <math>S_{i}</math> takes <math>O(\log(m))</math> time. Using this persistent data structure, we can solve the next-element search problem with <math>O(\log(n))</math> query time and <math>O(n \cdot \log(n))</math> space, instead of the <math>O(n^{2})</math> space required by the naïve method. The source code below gives an example related to the next-element search problem.
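A minimal sketch of the path-copying idea in Python, assuming an unbalanced binary search tree for brevity (the names <code>Node</code>, <code>insert</code>, and <code>successor</code> are illustrative, not from any library). Because rebalancing is omitted, the <math>O(\log(n))</math> bounds hold only for a balanced variant such as a persistent red-black tree; the sharing behaviour, however, is exactly as described above: each insertion copies only the root-to-leaf path and shares every other node with the previous version.

```python
# Sketch of a persistent BST via path copying. Rebalancing is omitted,
# so per-operation cost is O(log n) only on average; a balanced variant
# gives the worst-case bound from the text.

class Node:
    __slots__ = ("key", "left", "right")

    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def insert(root, key):
    """Return the root of a NEW version containing key.

    Only the nodes on the path from the root down to the new leaf are
    copied; all other nodes are shared with the old version.
    """
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)
    return Node(root.key, root.left, insert(root.right, key))

def successor(root, key):
    """Next-element search: smallest stored key >= key in this version."""
    best = None
    while root is not None:
        if root.key >= key:
            best = root.key
            root = root.left
        else:
            root = root.right
    return best

# Build versions S_1, S_2, ...: each insert produces a new root while
# every earlier version remains intact and searchable.
versions = [None]
for k in [5, 1, 9, 3]:
    versions.append(insert(versions[-1], k))
```

Any past version can then be queried directly, e.g. `successor(versions[2], 2)` searches the version holding only {5, 1} and returns 5, while the same query on the final version returns 3; no version was ever copied wholesale.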