== Variants ==
[[Gap buffer]]s are similar to dynamic arrays but allow efficient insertion and deletion operations clustered near the same arbitrary location. Some [[deque]] implementations use [[Deque#Implementations|array deques]], which allow amortized constant-time insertion and removal at both ends, instead of at just one end.

Goodrich<ref>{{Citation | title=Tiered Vectors: Efficient Dynamic Arrays for Rank-Based Sequences | first1=Michael T. | last1=Goodrich | author1-link=Michael T. Goodrich | first2=John G. | last2=Kloss II | year=1999 | url=https://archive.org/details/algorithmsdatast0000wads/page/205 | journal=[[Workshop on Algorithms and Data Structures]] | pages=[https://archive.org/details/algorithmsdatast0000wads/page/205 205–216] | doi=10.1007/3-540-48447-7_21 | volume=1663 | series=Lecture Notes in Computer Science | isbn=978-3-540-66279-2 | url-access=registration }}</ref> presented a dynamic array algorithm called ''tiered vectors'' that provides ''O''(''n''<sup>1/''k''</sup>) performance for insertions and deletions from anywhere in the array, and ''O''(''k'') get and set, where ''k'' ≥ 2 is a constant parameter.

The [[hashed array tree]] (HAT) is a dynamic array algorithm published by Sitarski in 1996.<ref name="sitarski96">{{Citation | title=HATs: Hashed array trees | department=Algorithm Alley | journal=Dr. Dobb's Journal | date=September 1996 | first1=Edward | last1=Sitarski | volume=21 | issue=11 | url=http://www.ddj.com/architect/184409965?pgno=5}}</ref> A hashed array tree wastes on the order of ''n''<sup>1/2</sup> storage space, where ''n'' is the number of elements in the array, and has ''O''(1) amortized performance when appending a series of objects to the end of the array; an indexing sketch is given below.

In a 1999 paper,<ref name="brodnik">{{Citation | title=Resizable Arrays in Optimal Time and Space | type=Technical Report CS-99-09 | url=http://www.cs.uwaterloo.ca/research/tr/1999/09/CS-99-09.pdf | year=1999 | first1=Andrej | last1=Brodnik | first2=Svante | last2=Carlsson | first3=Robert | last3=Sedgewick | author3-link=Robert Sedgewick (computer scientist) | first4=JI | last4=Munro | first5=ED | last5=Demaine | publisher=Department of Computer Science, University of Waterloo}}</ref> Brodnik et al. describe a tiered dynamic array data structure that wastes only ''n''<sup>1/2</sup> space for ''n'' elements at any point in time, and they prove a lower bound showing that any dynamic array must waste this much space if the operations are to remain amortized constant time. Additionally, they present a variant in which growing and shrinking the buffer takes not only amortized but worst-case constant time.

Bagwell (2002)<ref>{{Citation | title=Fast Functional Lists, Hash-Lists, Deques and Variable Length Arrays | first1=Phil | last1=Bagwell | year=2002 | publisher=EPFL | url=http://citeseer.ist.psu.edu/bagwell02fast.html}}</ref> presented the VList algorithm, which can be adapted to implement a dynamic array.
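In a hashed array tree, both the top-level directory and the equally sized leaf blocks hold roughly ''n''<sup>1/2</sup> elements, and the leaf size is a power of two, so an index splits into a leaf number and an offset with a shift and a mask. The following C sketch shows only this indexing step; the names <code>hat</code> and <code>hat_get</code> are illustrative rather than taken from the original paper, and allocation, appending, and the periodic rebuilding with a larger leaf size are omitted.

<syntaxhighlight lang="c">
#include <stddef.h>

/* Illustrative sketch of hashed array tree indexing (hypothetical names).
   Elements live in equally sized leaf blocks whose size is a power of two,
   so an index splits into a leaf number and an offset by shift and mask. */
typedef struct {
    int **leaves;     /* top-level directory of pointers to leaf blocks */
    size_t leaf_bits; /* log2 of the leaf size */
    size_t length;    /* number of elements currently stored */
} hat;

/* O(1) element access: one shift, one mask, two memory reads. */
static int hat_get(const hat *h, size_t i) {
    size_t leaf   = i >> h->leaf_bits;
    size_t offset = i & (((size_t)1 << h->leaf_bits) - 1);
    return h->leaves[leaf][offset];
}
</syntaxhighlight>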
Naïve resizable arrays, sometimes called "the worst implementation" of resizable arrays, keep the allocated size of the array exactly large enough for the data it contains, for example by calling [[realloc]] for each item added to the array. Naïve resizable arrays are the simplest way of implementing a resizable array in C: they waste no memory, but appending to the end of the array always takes Θ(''n'') time.<ref name="sitarski96" /><ref>Mike Lam. [https://w3.cs.jmu.edu/lam2mo/cs240_2015_08/files/04-dyn_arrays.pdf "Dynamic Arrays"].</ref><ref>[https://users.cs.northwestern.edu/~jesse/course/cs214-fa19/lec/17-amortized.pdf "Amortized Time"].</ref><ref>[https://iq.opengenus.org/hashed-array-tree/ "Hashed Array Tree: Efficient representation of Array"].</ref><ref>[https://people.ksp.sk/~kuko/gnarley-trees/Complexity2.html "Different notions of complexity"].</ref> Linearly growing arrays pre-allocate ("waste") Θ(1) space every time they resize the array, which makes them many times faster than naïve resizable arrays: appending to the end of the array still takes Θ(''n'') time, but with a much smaller constant. Naïve resizable arrays and linearly growing arrays may be useful when a space-constrained application needs many small resizable arrays; they are also commonly used as an educational example leading to exponentially growing dynamic arrays.<ref>Peter Kankowski. [https://www.strchr.com/dynamic_arrays "Dynamic arrays in C"].</ref>
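The following C sketch contrasts the two approaches; the names <code>naive_array</code>, <code>naive_append</code>, <code>linear_array</code> and <code>linear_append</code> are illustrative only, and error handling is kept to a minimum.

<syntaxhighlight lang="c">
#include <stdlib.h>

/* Naïve resizable array: the allocation is kept exactly as large as the
   contents, so every append calls realloc and may copy all n elements. */
typedef struct {
    int *data;
    size_t length;
} naive_array;

/* Append one element in Θ(n) time (realloc may move the whole block). */
int naive_append(naive_array *a, int value) {
    int *p = realloc(a->data, (a->length + 1) * sizeof *a->data);
    if (p == NULL)
        return -1;              /* allocation failed; array is unchanged */
    a->data = p;
    a->data[a->length++] = value;
    return 0;
}

/* Linearly growing array: grow the allocation by a fixed chunk (here 16
   elements), trading Θ(1) wasted space for many fewer reallocations. */
typedef struct {
    int *data;
    size_t length;
    size_t capacity;
} linear_array;

int linear_append(linear_array *a, int value) {
    if (a->length == a->capacity) {
        size_t new_capacity = a->capacity + 16;   /* constant growth step */
        int *p = realloc(a->data, new_capacity * sizeof *a->data);
        if (p == NULL)
            return -1;
        a->data = p;
        a->capacity = new_capacity;
    }
    a->data[a->length++] = value;
    return 0;
}
</syntaxhighlight>

Both structures are assumed to start zero-initialized (for example, <code>naive_array a = {0};</code>), in which case the first call to <code>realloc</code> behaves like <code>malloc</code>.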