{{Short description|Self-adjusting binary search tree}}
{{Use dmy dates|date=January 2022}}
{{Infobox data structure-amortized
|name=Splay tree
|type=Tree
|invented_year=1985
|invented_by=[[Daniel Dominic Sleator]] and [[Robert Endre Tarjan]]
|space_avg=O(''n'')
|search_avg=O(log ''n'')<ref name="SleatorTarjan" />{{rp|659}}
|search_worst=O(''n'')<ref name=BrinkmannDegraerDeLoof />{{rp|1}}
|insert_avg=O(log ''n'')<ref name="SleatorTarjan" />{{rp|659}}
|insert_worst=O(''n'')
|delete_avg=O(log ''n'')<ref name="SleatorTarjan" />{{rp|659}}
|delete_worst=O(''n'')
}}
A '''splay tree''' is a [[binary search tree]] with the additional property that recently accessed elements are quick to access again. Like [[self-balancing binary search tree]]s, a splay tree performs basic operations such as insertion, look-up and removal in [[big O notation|O]](log ''n'') [[amortized analysis|amortized]] time. For random access patterns drawn from a non-uniform random distribution, the amortized time can be faster than logarithmic, proportional to the [[Entropy (information theory)|entropy]] of the access pattern. Splay trees can also take better than logarithmic time for many patterns of non-random operations, without requiring advance knowledge of the pattern. According to the unproven dynamic optimality conjecture, their performance on all access patterns is within a constant factor of the best possible performance that could be achieved by any other self-adjusting binary search tree, even one selected to fit that pattern. The splay tree was invented by [[Daniel Sleator]] and [[Robert Tarjan]] in 1985.<ref name="SleatorTarjan">{{harvnb|Sleator|Tarjan|1985}}.</ref>

All normal operations on a binary search tree are combined with one basic operation, called ''splaying''. Splaying the tree for a certain element rearranges the tree so that the element is placed at the root of the tree. One way to do this with the basic search operation is to first perform a standard binary tree search for the element in question, and then use [[tree rotation]]s in a specific fashion to bring the element to the top. Alternatively, a top-down algorithm can combine the search and the tree reorganization into a single phase.

== Advantages ==
Good performance for a splay tree depends on the fact that it is self-optimizing, in that frequently accessed nodes will move nearer to the root where they can be accessed more quickly. The worst-case height—though unlikely—is O(''n''), with the average being O(log ''n''). Having frequently-used nodes near the root is an advantage for many practical applications (also see [[locality of reference]]), and is particularly useful for implementing [[cache (computing)|cache]]s and [[Garbage collection (computer science)|garbage collection]] algorithms.

Advantages include:
* Comparable performance: Average-case performance is as efficient as other trees.<ref>{{harvnb|Goodrich|Tamassia|Goldwasser|2014}}.</ref>
* Small [[memory footprint]]: Splay trees do not need to store any bookkeeping data.

== Disadvantages ==
The most significant disadvantage of splay trees is that the height of a splay tree can be linear.<ref name="BrinkmannDegraerDeLoof" />{{rp|1}} For example, this will be the case after accessing all ''n'' elements in non-decreasing order. Since the height of a tree corresponds to the worst-case access time, this means that the actual cost of a single operation can be high. However, the [[amortized]] access cost of this worst case is logarithmic, O(log ''n'').
Also, the expected access cost can be reduced to O(log ''n'') by using a randomized variant.<ref>{{harvnb|Albers|Karpinski|2002}}.</ref>

The representation of splay trees can change even when they are accessed in a 'read-only' manner (i.e. by ''find'' operations). This complicates the use of such splay trees in a multi-threaded environment. Specifically, extra management is needed if multiple threads are allowed to perform ''find'' operations concurrently. This also makes them unsuitable for general use in purely functional programming, although even there they can be used in limited ways to implement priority queues.

Finally, when the access pattern ''is'' random, the additional splaying overhead adds a significant constant factor to the cost compared to less-dynamic alternatives.

== Operations ==

=== Splaying ===
When a node ''x'' is accessed, a splay operation is performed on ''x'' to move it to the root. A splay operation is a sequence of ''splay steps'', each of which moves ''x'' closer to the root. By performing a splay operation on the node of interest after every access, the recently accessed nodes are kept near the root and the tree remains roughly balanced, so it provides the desired amortized time bounds.

Each particular step depends on three factors:
* Whether ''x'' is the left or right child of its parent node, ''p'',
* whether ''p'' is the root or not, and if not
* whether ''p'' is the left or right child of ''its'' parent, ''g'' (the ''grandparent'' of ''x'').

There are three types of splay steps, each of which has two symmetric variants: left- and right-handed. For the sake of brevity, only one of these two is shown for each type. (In the following diagrams, circles indicate nodes of interest and triangles indicate sub-trees of arbitrary size.) The three types of splay steps are:

'''Zig step:''' this step is done when ''p'' is the root. The tree is rotated on the edge between ''x'' and ''p''. A zig step exists to deal with the parity issue: it is done only as the last step in a splay operation, and only when ''x'' has odd depth at the beginning of the operation.

[[File:splay tree zig.svg|center|350px]]

'''Zig-zig step:''' this step is done when ''p'' is not the root and ''x'' and ''p'' are either both right children or are both left children. The picture below shows the case where ''x'' and ''p'' are both left children. The tree is rotated on the edge joining ''p'' with ''its'' parent ''g'', then rotated on the edge joining ''x'' with ''p''. Zig-zig steps are the only thing that differentiates splay trees from the ''rotate to root'' method introduced by Allen and Munro<ref name="AllenMunro">{{harvnb|Allen|Munro|1978}}.</ref> prior to the introduction of splay trees.

[[Image:Zigzig.gif|center]]

'''Zig-zag step:''' this step is done when ''p'' is not the root and ''x'' is a right child and ''p'' is a left child or vice versa (''x'' is left, ''p'' is right). The tree is rotated on the edge between ''p'' and ''x'', and then rotated on the resulting edge between ''x'' and ''g''.

[[Image:Zigzag.gif|center]]

=== Join ===
Given two trees S and T such that all elements of S are smaller than the elements of T, the following steps can be used to join them into a single tree (a sketch in C++ follows the list):
* Splay the largest item in S. Now this item is at the root of S and has a null right child.
* Set the right child of the new root to T.
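The following is a minimal sketch of the join operation for a pointer-based node with <code>left</code>, <code>right</code> and <code>parent</code> fields. It assumes a <code>splay(node*)</code> helper, such as the bottom-up routine in the implementation further below, that rotates its argument to the root of its tree; the names are illustrative only.

<syntaxhighlight lang="cpp">
// Sketch only: joins two splay trees s and t, where every key in s is
// assumed to be smaller than every key in t. Returns the root of the result.
node* join(node* s, node* t) {
    if (!s) return t;
    if (!t) return s;
    node* largest = s;
    while (largest->right)      // walk to the maximum of s
        largest = largest->right;
    splay(largest);             // assumed helper: largest becomes the root of s
    largest->right = t;         // the maximum has no right child, so attach t there
    t->parent = largest;
    return largest;
}
</syntaxhighlight>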
=== Split ===
Given a tree and an element ''x'', return two new trees: one containing all elements less than or equal to ''x'' and the other containing all elements greater than ''x''. This can be done in the following way:
* Splay ''x''. Now it is at the root, so the tree to its left contains all elements smaller than ''x'' and the tree to its right contains all elements larger than ''x''.
* Split the right subtree from the rest of the tree.

=== Insertion ===
To insert a value ''x'' into a splay tree:
* Insert ''x'' as with a normal [[binary search tree]].
* Perform a splay on ''x''. As a result, the newly inserted node ''x'' becomes the root of the tree.

Alternatively:
* Use the split operation to split the tree at the value of ''x'' into two sub-trees: S and T.
* Create a new tree in which ''x'' is the root, S is its left sub-tree and T its right sub-tree.

=== Deletion ===
To delete a node ''x'', use the same method as with a binary search tree:
* If ''x'' has two children:
** Swap its value with that of either the rightmost node of its left sub-tree (its in-order predecessor) or the leftmost node of its right sub-tree (its in-order successor).
** Remove that node instead.

In this way, deletion is reduced to the problem of removing a node with 0 or 1 children. Unlike in a plain binary search tree, after the deletion we splay the parent of the removed node to the top of the tree.

Alternatively:
* The node to be deleted is first splayed, i.e. brought to the root of the tree, and then deleted. This leaves the tree with two sub-trees.
* The two sub-trees are then joined using a "join" operation.

== Implementation and variants ==
Splaying, as mentioned above, is performed during a second, bottom-up pass over the access path of a node. It is possible to record the access path during the first pass for use during the second, but that requires extra space during the access operation. Another alternative is to keep a parent pointer in every node, which avoids the need for extra space during access operations but may reduce overall time efficiency because of the need to update those pointers.<ref name="SleatorTarjan" />

Another method is to restructure the tree on the way down the access path instead of making a second pass. This top-down splaying routine uses three sets of nodes – left tree, right tree and middle tree. The first two contain all items of the original tree known to be less than or greater than the current item, respectively. The middle tree consists of the sub-tree rooted at the current node. These three sets are updated down the access path while keeping the splay operations in check (a sketch of this approach appears below).

Another method, semisplaying, modifies the zig-zig case to reduce the amount of restructuring done in all operations.<ref name="SleatorTarjan" /><ref name="Lucas" />
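As an illustration of the top-down method described above, the following is a minimal sketch of top-down splaying for integer keys on a bare node type with only <code>left</code> and <code>right</code> pointers. The standalone <code>Node</code> type and the function name are illustrative and are not part of the bottom-up implementation that follows.

<syntaxhighlight lang="cpp">
// Sketch only: top-down splaying with the left, right and middle trees
// described above. Brings the node whose key is closest to `key` to the
// root of the tree and returns it.
struct Node {
    int key;
    Node *left, *right;
};

Node* top_down_splay(Node* t, int key) {
    if (!t) return nullptr;
    Node header{};                     // header.right / header.left are the roots
    Node *l = &header, *r = &header;   // of the left / right trees; l and r are their tails
    for (;;) {
        if (key < t->key) {
            if (!t->left) break;
            if (key < t->left->key) {  // zig-zig going left: rotate right first
                Node* y = t->left;
                t->left = y->right;
                y->right = t;
                t = y;
                if (!t->left) break;
            }
            r->left = t;               // link: t and its right subtree join the right tree
            r = t;
            t = t->left;
        } else if (key > t->key) {
            if (!t->right) break;
            if (key > t->right->key) { // zig-zig going right: rotate left first
                Node* y = t->right;
                t->right = y->left;
                y->left = t;
                t = y;
                if (!t->right) break;
            }
            l->right = t;              // link: t and its left subtree join the left tree
            l = t;
            t = t->right;
        } else {
            break;                     // found the key
        }
    }
    l->right = t->left;                // reassemble: hang the middle tree's subtrees
    r->left = t->right;                // off the left and right trees
    t->left = header.right;            // the left tree becomes the new root's left subtree
    t->right = header.left;            // the right tree becomes the new root's right subtree
    return t;
}
</syntaxhighlight>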
Below is an implementation of splay trees in C++, which uses pointers to represent each node of the tree. This implementation is based on the bottom-up splaying version and uses the second method of deletion on a splay tree. Also, unlike the above definition, this C++ version does ''not'' splay the tree on finds – it only splays on insertions and deletions, and the find operation therefore has linear worst-case time complexity.

<syntaxhighlight lang="cpp">
#include <functional>

#ifndef SPLAY_TREE
#define SPLAY_TREE

template<typename T, typename Comp = std::less<T>>
class splay_tree {
private:
  Comp comp;
  unsigned long p_size;

  struct node {
    node *left, *right;
    node *parent;
    T key;
    node(const T& init = T()) : left(nullptr), right(nullptr), parent(nullptr), key(init) { }
    ~node() { }
  } *root;

  void left_rotate(node *x) {
    node *y = x->right;
    if (y) {
      x->right = y->left;
      if (y->left) y->left->parent = x;
      y->parent = x->parent;
    }

    if (!x->parent) root = y;
    else if (x == x->parent->left) x->parent->left = y;
    else x->parent->right = y;
    if (y) y->left = x;
    x->parent = y;
  }

  void right_rotate(node *x) {
    node *y = x->left;
    if (y) {
      x->left = y->right;
      if (y->right) y->right->parent = x;
      y->parent = x->parent;
    }

    if (!x->parent) root = y;
    else if (x == x->parent->left) x->parent->left = y;
    else x->parent->right = y;
    if (y) y->right = x;
    x->parent = y;
  }

  // Bottom-up splaying: repeatedly applies zig, zig-zig or zig-zag steps
  // until x becomes the root.
  void splay(node *x) {
    while (x->parent) {
      if (!x->parent->parent) {
        // zig step: the parent is the root
        if (x->parent->left == x) right_rotate(x->parent);
        else left_rotate(x->parent);
      } else if (x->parent->left == x && x->parent->parent->left == x->parent) {
        // zig-zig step: x and its parent are both left children
        right_rotate(x->parent->parent);
        right_rotate(x->parent);
      } else if (x->parent->right == x && x->parent->parent->right == x->parent) {
        // zig-zig step: x and its parent are both right children
        left_rotate(x->parent->parent);
        left_rotate(x->parent);
      } else if (x->parent->left == x && x->parent->parent->right == x->parent) {
        // zig-zag step
        right_rotate(x->parent);
        left_rotate(x->parent);
      } else {
        // zig-zag step (mirror image)
        left_rotate(x->parent);
        right_rotate(x->parent);
      }
    }
  }

  // Replaces the subtree rooted at u with the subtree rooted at v.
  void replace(node *u, node *v) {
    if (!u->parent) root = v;
    else if (u == u->parent->left) u->parent->left = v;
    else u->parent->right = v;
    if (v) v->parent = u->parent;
  }

  node* subtree_minimum(node *u) {
    while (u->left) u = u->left;
    return u;
  }

  node* subtree_maximum(node *u) {
    while (u->right) u = u->right;
    return u;
  }

public:
  splay_tree() : root(nullptr), p_size(0) { }

  void insert(const T &key) {
    node *z = root;
    node *p = nullptr;

    while (z) {
      p = z;
      if (comp(z->key, key)) z = z->right;
      else z = z->left;
    }

    z = new node(key);
    z->parent = p;

    if (!p) root = z;
    else if (comp(p->key, z->key)) p->right = z;
    else p->left = z;

    splay(z);
    p_size++;
  }

  node* find(const T &key) {
    node *z = root;
    while (z) {
      if (comp(z->key, key)) z = z->right;
      else if (comp(key, z->key)) z = z->left;
      else return z;
    }
    return nullptr;
  }

  void erase(const T &key) {
    node *z = find(key);
    if (!z) return;

    splay(z);

    if (!z->left) replace(z, z->right);
    else if (!z->right) replace(z, z->left);
    else {
      node *y = subtree_minimum(z->right);
      if (y->parent != z) {
        replace(y, y->right);
        y->right = z->right;
        y->right->parent = y;
      }
      replace(z, y);
      y->left = z->left;
      y->left->parent = y;
    }

    delete z;
    p_size--;
  }

  /* // the alternative implementation
  void erase(const T &key) {
    node *z = find(key);
    if (!z) return;

    splay(z);

    node *s = z->left;
    node *t = z->right;
    delete z;

    node *sMax = NULL;
    if (s) {
      s->parent = NULL;
      sMax = subtree_maximum(s);
      splay(sMax);
      root = sMax;
    }
    if (t) {
      if (s) sMax->right = t;
      else root = t;
      t->parent = sMax;
    }

    p_size--;
  } */

  const T& minimum() { return subtree_minimum(root)->key; }
  const T& maximum() { return subtree_maximum(root)->key; }

  bool empty() const { return root == nullptr; }
  unsigned long size() const { return p_size; }
};

#endif // SPLAY_TREE
</syntaxhighlight>
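For illustration, a brief usage sketch of the class above follows; the header name <code>splay_tree.h</code> is hypothetical and stands for a file containing the listing, and the commented output assumes the keys shown.

<syntaxhighlight lang="cpp">
#include <iostream>
#include "splay_tree.h"  // hypothetical header holding the splay_tree class above

int main() {
    splay_tree<int> tree;
    const int keys[] = {5, 1, 9, 3, 7};
    for (int k : keys)
        tree.insert(k);                              // each insertion splays the new node to the root

    std::cout << tree.size() << '\n';                // 5
    std::cout << tree.minimum() << ' '
              << tree.maximum() << '\n';             // 1 9
    std::cout << (tree.find(3) != nullptr) << '\n';  // 1 (found; this version does not splay on find)

    tree.erase(9);                                   // splays 9 to the root, then removes it
    std::cout << tree.size() << '\n';                // 4
    return 0;
}
</syntaxhighlight>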
== Analysis ==
A simple [[amortized analysis]] of static splay trees can be carried out using the [[potential method]]. Define:
* size(''r'') = the number of nodes in the sub-tree rooted at node ''r'' (including ''r'').
* rank(''r'') = log<sub>2</sub>(size(''r'')).
* Φ = the sum of the ranks of all the nodes in the tree.

Φ will tend to be high for poorly balanced trees and low for well-balanced trees.

To apply the [[potential method]], we first calculate ΔΦ: the change in the potential caused by a splay operation. We check each case separately. Denote by rank' the rank function after the operation. ''x'', ''p'' and ''g'' are the nodes affected by the rotation operation (see figures above).

=== Zig step ===
:{|
|-
| ΔΦ || = rank'(''p'') − rank(''p'') + rank'(''x'') − rank(''x'')
| [since only p and x change ranks]
|-
| || = rank'(''p'') − rank(''x'')
| [since rank'(''x'') = rank(''p'')]
|-
| || ≤ rank'(''x'') − rank(''x'')
| [since rank'(''p'') < rank'(''x'')]
|}

=== Zig-zig step ===
:{|
| ΔΦ ||colspan=2| = rank'(''g'') − rank(''g'') + rank'(''p'') − rank(''p'') + rank'(''x'') − rank(''x'')
|-
| || = rank'(''g'') + rank'(''p'') − rank(''p'') − rank(''x'')
| [since rank'(''x'') = rank(''g'')]
|-
| || ≤ rank'(''g'') + rank'(''x'') − 2 rank(''x'')
| [since rank(''x'') < rank(''p'') and rank'(''x'') > rank'(''p'')]
|-
| || ≤ 3(rank'(''x'') − rank(''x'')) − 2
| [due to the concavity of the log function]
|}

=== Zig-zag step ===
:{|
| ΔΦ ||colspan=2| = rank'(''g'') − rank(''g'') + rank'(''p'') − rank(''p'') + rank'(''x'') − rank(''x'')
|-
| || ≤ rank'(''g'') + rank'(''p'') − 2 rank(''x'')
| [since rank'(''x'') = rank(''g'') and rank(''x'') < rank(''p'')]
|-
| || ≤ 3(rank'(''x'') − rank(''x'')) − 2
| [due to the concavity of the log function]
|}

The amortized cost of any operation is ΔΦ plus the actual cost. The actual cost of any zig-zig or zig-zag operation is 2 since there are two rotations to make. Hence:
:{|
| amortized-cost || = cost + ΔΦ
|-
| || ≤ 3(rank'(''x'') − rank(''x''))
|}

When summed over the entire splay operation, this [[telescoping series|telescopes]] to 1 + 3(rank(root) − rank(''x'')), which is O(log ''n''), since the zig operation is used at most once and its amortized cost is at most 1 + 3(rank'(''x'') − rank(''x'')). So now we know that the total ''amortized'' time for a sequence of ''m'' operations is:
:<math>T_\mathrm{amortized}(m) = O(m \log n)</math>

To go from the amortized time to the actual time, we must add the decrease in potential from the initial state before any operation is done (Φ<sub>''i''</sub>) to the final state after all operations are completed (Φ<sub>''f''</sub>):
:<math>\Phi_i - \Phi_f = \sum_x{\mathrm{rank}_i(x) - \mathrm{rank}_f(x)} = O(n \log n)</math>
where the [[big O notation]] can be justified by the fact that for every node ''x'', the minimum rank is 0 and the maximum rank is log(''n'').

Now we can finally bound the actual time:
:<math>T_\mathrm{actual}(m) = O(m \log n + n \log n)</math>

=== Weighted analysis ===
The above analysis can be generalized in the following way.
* Assign to each node ''r'' a weight ''w''(''r'').
* Define size(''r'') = the sum of weights of nodes in the sub-tree rooted at node ''r'' (including ''r'').
* Define rank(''r'') and Φ exactly as above.

The same analysis applies and the amortized cost of a splaying operation is again:
:<math>1 + 3(\mathrm{rank}(root)-\mathrm{rank}(x)) \leq 1 + 3\log_2{\frac{W}{w(x)}},</math>
where ''W'' is the sum of all weights, since rank(root) = log<sub>2</sub>(''W'') and rank(''x'') ≥ log<sub>2</sub>(''w''(''x'')).

The decrease from the initial to the final potential is bounded by:
:<math>\Phi_i - \Phi_f \leq \sum_{x\in tree}{\log{\frac{W}{w(x)}}}</math>
since the maximum size of any single node is ''W'' and the minimum is ''w(x)''.
Hence the actual time is bounded by:
:<math>O\left(\sum_{x \in sequence}{\left(1 + 3\log{\frac{W}{w(x)}}\right)} + \sum_{x \in tree}{\log{\frac{W}{w(x)}}}\right) = O\left(m + \sum_{x \in sequence}{\log{\frac{W}{w(x)}}} + \sum_{x \in tree}{\log{\frac{W}{w(x)}}}\right)</math>

== Performance theorems ==
There are several theorems and conjectures regarding the worst-case runtime for performing a sequence ''S'' of ''m'' accesses in a splay tree containing ''n'' elements.

{{Math theorem|name=Balance Theorem|The cost of performing the sequence ''S'' is <math>O\left[m \log n + n\log n\right]</math>.
{{Math proof|Take a constant weight, e.g. {{tmath|1=w(x)=1}} for every node ''x''. Then {{tmath|1=W=n}}.}}
This theorem implies that splay trees perform as well as static balanced binary search trees on sequences of at least ''n'' accesses.<ref name="SleatorTarjan" />}}

{{Math theorem|name=Static Optimality Theorem|Let <math>q_x</math> be the number of times element ''x'' is accessed in ''S''. If every element is accessed at least once, then the cost of performing ''S'' is <math>O\left[m + \sum_{x\in tree} q_x\log\frac{m}{q_x}\right]</math>
{{Math proof|Let <math>w(x)=q_x</math>. Then <math>W=m</math>.}}
This theorem implies that splay trees perform as well as an optimum static binary search tree on sequences of at least ''n'' accesses.<ref>{{harvnb|Knuth|1997|p=478}}</ref> They spend less time on the more frequent items.<ref name="SleatorTarjan" /> Another way of stating the same result is that, on input sequences where the items are drawn independently at random from a non-uniform [[probability distribution]] on ''n'' items, the amortized expected ([[average-case analysis|average case]]) cost of each access is proportional to the [[Entropy (information theory)|entropy]] of the distribution.{{sfnp|Grinberg|Rajagopalan|Venkatesan|Wei|1995}}}}

{{Math theorem|name=Static Finger Theorem|Assume that the items are numbered from 1 through ''n'' in ascending order. Let ''f'' be any fixed element (the 'finger'). Then the cost of performing ''S'' is <math>O\left[m + n\log n + \sum_{x\in sequence} \log(|x-f| + 1)\right]</math>.
{{Math proof|Let <math>w(x)=1/(|x-f|+1)^2</math>. Then {{tmath|1=W=O(1)}}. The net potential drop is ''O''(''n'' log ''n'') since the weight of any item is at least {{tmath|1=1/n^2}}.<ref name="SleatorTarjan" />}}
}}

{{Math theorem|name=Dynamic Finger Theorem|Assume that the 'finger' for each step accessing an element ''y'' is the element accessed in the previous step, ''x''. The cost of performing ''S'' is <math>O\left[m + n + \sum_{x,y\in sequence}^m \log(|y-x| + 1)\right]</math>.<ref name="ColeEtAl">{{harvnb|Cole|Mishra|Schmidt|Siegel|2000}}.</ref><ref name="Cole">{{harvnb|Cole|2000}}.</ref>}}

{{Math theorem|name=Working Set Theorem|At any time during the sequence, let <math>t(x)</math> be the number of distinct elements accessed before the previous time element x was accessed. The cost of performing ''S'' is <math>O\left[m + n\log n + \sum_{x\in sequence} \log(t(x) + 1)\right]</math>
{{Math proof|Let <math>w(x)=1/(t(x)+1)^2</math>. Note that here the weights change during the sequence. However, the sequence of weights is still a permutation of {{tmath|1=1, \tfrac 1 4, \tfrac 1 9, \cdots, \tfrac 1 {n^2} }}. So as before {{tmath|1=W=O(1)}}. The net potential drop is ''O''(''n'' log ''n'').}}
This theorem is equivalent to splay trees having [[key-independent optimality]].<ref name="SleatorTarjan" />}}

{{Math theorem|name=Scanning Theorem|Also known as the '''Sequential Access Theorem''' or the '''Queue theorem'''.
Accessing the ''n'' elements of a splay tree in symmetric order takes ''O''(''n'') time, regardless of the initial structure of the splay tree.<ref name="Tarjan">{{harvnb|Tarjan|1985}}.</ref> The tightest upper bound proven so far is <math>4.5n</math>.<ref name="Elmasry">{{harvnb|Elmasry|2004}}.</ref>}}

== Dynamic optimality conjecture ==
{{Main article|Optimal binary search tree}}
{{unsolved|computer science|Do splay trees perform as well as any other binary search tree algorithm?}}
In addition to the proven performance guarantees for splay trees, there is an unproven conjecture of great interest from the original Sleator and Tarjan paper. This conjecture is known as the ''dynamic optimality conjecture'', and it claims that splay trees perform as well as any other binary search tree algorithm, up to a constant factor.

:'''Dynamic Optimality Conjecture:<ref name="SleatorTarjan" />''' Let <math>A</math> be any binary search tree algorithm that accesses an element <math>x</math> by traversing the path from the root to <math>x</math> at a cost of <math>d(x)+1</math>, where <math>d(x)</math> is the depth of <math>x</math>, and that between accesses can make any rotations in the tree at a cost of 1 per rotation. Let <math>A(S)</math> be the cost for <math>A</math> to perform the sequence <math>S</math> of accesses. Then the cost for a splay tree to perform the same accesses is <math>O[n + A(S)]</math>.

There are several corollaries of the dynamic optimality conjecture that remain unproven:

:'''Traversal Conjecture:<ref name="SleatorTarjan" />''' Let <math>T_1</math> and <math>T_2</math> be two splay trees containing the same elements. Let <math>S</math> be the sequence obtained by visiting the elements in <math>T_2</math> in preorder (i.e., depth first search order). The total cost of performing the sequence <math>S</math> of accesses on <math>T_1</math> is <math>O(n)</math>.

:'''Deque Conjecture:<ref name="Tarjan" /><ref name="Pettie">{{harvnb|Pettie|2008}}.</ref><ref name="Sundar">{{harvnb|Sundar|1992}}.</ref>''' Let <math>S</math> be a sequence of <math>m</math> [[double-ended queue]] operations (push, pop, inject, eject). Then the cost of performing <math>S</math> on a splay tree is <math>O(m + n)</math>.

:'''Split Conjecture:<ref name="Lucas">{{harvnb|Lucas|1991}}.</ref>''' Let <math>S</math> be any permutation of the elements of the splay tree. Then the cost of deleting the elements in the order <math>S</math> is <math>O(n)</math>.

== Variants ==
In order to reduce the number of restructuring operations, it is possible to replace the splaying with ''semi-splaying'', in which an element is splayed only halfway towards the root.<ref name="SleatorTarjan"/><ref name=BrinkmannDegraerDeLoof>{{harvnb|Brinkmann|Degraer|De Loof|2009}}.</ref>

Another way to reduce restructuring is to do full splaying, but only in some of the access operations – only when the access path is longer than a threshold, or only in the first ''m'' access operations.<ref name="SleatorTarjan"/>

The CBTree augments the splay tree with access counts at each node and uses them to restructure only infrequently. A variant of the CBTree called the LazyCBTree does at most one rotation on each lookup.
This is used along with an optimistic hand-over-hand validation scheme to make a concurrent self-adjusting tree.<ref name="AfekEtAl">{{harvnb|Afek|Kaplan|Korenfeld|Morrison|Tarjan|2014}}</ref>

Using pointer-compression techniques,<ref name="BenderEtAl">{{harvnb|Bender|Conway|Farach-Colton|Kuszmaul|Tagliavini|2023}}.</ref> it is possible to construct a [[Succinct data structure|succinct]] splay tree.

== See also ==
* [[AVL tree]]
* [[B-tree]]
* [[Finger tree]]
* [[Geometry of binary search trees]]
* [[Iacono's working set structure]]
* [[Link/cut tree]]
* [[List of data structures]]
* [[Scapegoat tree]]
* [[Splaysort]], a sorting algorithm using splay trees
* [[T-tree]]
* [[Treap]]
* [[Tree rotation]]
* [[tree data structure|Trees]]
* [[Zipper (data structure)]]

== Notes ==
{{Reflist}}

== References ==
*{{cite journal |last1=Afek |first1=Yehuda |last2=Kaplan |first2=Haim |last3=Korenfeld |first3=Boris |last4=Morrison |first4=Adam |last5=Tarjan |first5=Robert E. |title=The CB tree: a practical concurrent self-adjusting search tree |journal=Distributed Computing |date=2014 |volume=27 |issue=6 |pages=393–417 |doi=10.1007/s00446-014-0229-0 |url=http://arks.princeton.edu/ark:/88435/pr14z66 }}
*{{cite journal | first1 = Susanne | last1 = Albers | first2 = Marek | last2 = Karpinski | title = Randomized Splay Trees: Theoretical and Experimental Results | journal = [[Information Processing Letters]] | volume = 81 | issue = 4 | pages = 213–221 | date = 28 February 2002 | url = http://www14.in.tum.de/personen/albers/papers/ipl02.pdf | doi = 10.1016/s0020-0190(01)00230-7 }}
*{{cite journal | first1 = Brian | last1 = Allen | first2 = Ian | last2 = Munro | title = Self-organizing binary search trees | journal = [[Journal of the ACM]] | volume = 25 | pages = 526–535 | date = October 1978 | issue = 4 | doi = 10.1145/322092.322094 | s2cid = 15967344 | doi-access = free }}
*{{cite journal | last1 = Brinkmann | first1 = Gunnar | last2 = Degraer | first2 = Jan | last3 = De Loof | first3 = Karel | title = Rehabilitation of an unloved child: semi-splaying | journal = Software: Practice and Experience | volume = 39 | issue = 1 | pages = 33–45 | date = January 2009 | doi = 10.1002/spe.v39:1 | hdl = 11382/102133 | citeseerx = 10.1.1.84.790 | url = http://caagt.ugent.be/preprints/splay_spe.pdf | quote = The results show that semi-splaying, which was introduced in the same paper as splaying, performs better than splaying under almost all possible conditions. This makes semi-splaying a good alternative for all applications where normally splaying would be applied. The reason why splaying became so prominent while semi-splaying is relatively unknown and much less studied is hard to understand. }}
*{{cite journal | first1 = Richard | last1 = Cole | first2 = Bud | last2 = Mishra | first3 = Jeanette | last3 = Schmidt | first4 = Alan | last4 = Siegel | title = On the Dynamic Finger Conjecture for Splay Trees. Part I: Splay Sorting log n-Block Sequences | journal = [[SIAM Journal on Computing]] | volume = 30 | issue = 1 | pages = 1–43 | date = January 2000 | doi = 10.1137/s0097539797326988 | citeseerx = 10.1.1.36.4558 }}
*{{cite journal | first = Richard | last = Cole | title = On the Dynamic Finger Conjecture for Splay Trees.
Part II: The Proof | journal = [[SIAM Journal on Computing]] | volume = 30 | issue = 1 | pages = 44–85 | date = January 2000 | doi = 10.1137/S009753979732699X | citeseerx = 10.1.1.36.2713 }}
*{{cite journal | first = Amr | last = Elmasry | title = On the sequential access theorem and Deque conjecture for splay trees | journal = Theoretical Computer Science | volume = 314 | issue = 3 | pages = 459–466 | date = April 2004 | doi = 10.1016/j.tcs.2004.01.019 | url = https://www.researchgate.net/publication/220150614 | doi-access = }}
*{{cite book | first1 = Michael | last1 = Goodrich | first2 = Roberto | last2 = Tamassia | first3 = Michael | last3 = Goldwasser | title = Data Structures and Algorithms in Java | publisher = Wiley | page = 506 | edition = 6 | year = 2014 | isbn = 978-1-118-77133-4 | language = en }}
*{{cite conference | last1 = Grinberg | first1 = Dennis | last2 = Rajagopalan | first2 = Sivaramakrishnan | last3 = Venkatesan | first3 = Ramarathnam | last4 = Wei | first4 = Victor K. | editor-last = Clarkson | editor-first = Kenneth L. | contribution = Splay trees for data compression | contribution-url = https://dl.acm.org/citation.cfm?id=313651.313812 | quote = Average depth of access in a splay tree is proportional to the entropy. | pages = 522–530 | publisher = ACM/SIAM | title = Proceedings of the Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, 22–24 January 1995. San Francisco, California, USA | year = 1995}}
*{{cite book | first = Donald | last = Knuth | author-link = Donald Knuth | title = [[The Art of Computer Programming]] | volume = 3: Sorting and Searching | edition = 3rd | publisher = Addison-Wesley | year = 1997 | isbn = 0-201-89685-0 | page = 478 <!-- section 6.2.3 --> | quote = The time needed to access data in a splay tree is known to be at most a small constant multiple of the access time of a statically optimum tree, when amortized over any series of operations. }}
*{{cite book | first = Joan M. | last = Lucas | contribution = On the Competitiveness of Splay Trees: Relations to the Union-Find Problem | title = On-line Algorithms: Proceedings of a DIMACS Workshop, February 11–13, 1991 | publisher = [[DIMACS|Center for Discrete Mathematics and Theoretical Computer Science]] | series = Series in Discrete Mathematics and Theoretical Computer Science | volume = 7 | pages = 95–124 | date = 1991 | isbn = 0-8218-7111-0 }}
*{{cite conference | first = Seth | last = Pettie | title = Splay Trees, Davenport-Schinzel Sequences, and the Deque Conjecture | journal = Proc. 19th ACM-SIAM Symposium on Discrete Algorithms | pages = 1115–1124 | year = 2008 | bibcode = 2007arXiv0707.2160P | volume = 0707 | arxiv = 0707.2160 | url = http://web.eecs.umich.edu/~pettie/papers/Deque.pdf }}
*{{cite journal | first1 = Daniel D. | last1 = Sleator | author1-link = Daniel Sleator | first2 = Robert E. | last2 = Tarjan | author2-link = Robert Tarjan | title = Self-Adjusting Binary Search Trees | journal = [[Journal of the ACM]] | volume = 32 | issue = 3 | pages = 652–686 | year = 1985 | url = https://www.cs.cmu.edu/~sleator/papers/self-adjusting.pdf | doi = 10.1145/3828.3835 | s2cid = 1165848 }}
*{{cite journal | first = Rajamani | last = Sundar | title = On the Deque conjecture for the splay algorithm | journal = Combinatorica | volume = 12 | pages = 95–124 | year = 1992 | issue = 1 | doi = 10.1007/BF01191208 | s2cid = 27422556 }}
*{{cite journal | first = Robert E.
| last = Tarjan | author-link = Robert Tarjan | title = Sequential access in splay trees takes linear time | journal = Combinatorica | volume = 5 | issue = 4 | pages = 367–378 | year = 1985 | doi = 10.1007/BF02579253 | s2cid = 34757821 }}
*{{cite journal | first1 = Michael A. | last1 = Bender | first2 = Alex | last2 = Conway | first3 = Martin | last3 = Farach-Colton | first4 = William | last4 = Kuszmaul | first5 = Guido | last5 = Tagliavini | title = Tiny Pointers | journal = Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA) | pages = 477–508 | isbn = 978-1-61197-755-4 | year = 2023 | doi = 10.1137/1.9781611977554.ch21 | s2cid = 244709005 }}

== External links ==
* [https://xlinux.nist.gov/dads/HTML/splaytree.html NIST's Dictionary of Algorithms and Data Structures: Splay Tree]
* [http://www.link.cs.cmu.edu/link/ftp-site/splaying/ Implementations in C and Java (by Daniel Sleator)]
* [http://wiki.algoviz.org/search/node/splay Pointers to splay tree visualizations]
* [https://github.com/fbuihuu/libtree Fast and efficient implementation of Splay trees]
* [https://github.com/cpdomina/SplayTree Top-Down Splay Tree Java implementation]
* [https://arxiv.org/abs/1003.0139 Zipper Trees]

{{CS-Trees}}
{{Data structures}}

{{DEFAULTSORT:Splay Tree}}
[[Category:Binary trees]]
[[Category:Search trees]]
[[Category:Amortized data structures]]