{{short description|Type of data structure}}
{{About|the byte-layout-level structure|the abstract data type|Array (data type)}}
{{Use dmy dates|date=November 2020}}
{{More citations needed|date=September 2008}}
In [[computer science]], an '''array''' is a [[data structure]] consisting of a collection of ''elements'' ([[value (computer science)|values]] or [[variable (programming)|variables]]), each of the same memory size, identified by at least one ''array index'' or ''key''; a collection of indices may form a [[tuple]], known as an index tuple. An array is stored such that the position (memory address) of each element can be computed from its index tuple by a mathematical formula.<ref>{{cite web|url=https://xlinux.nist.gov/dads/HTML/array.html|title=array|last=Black|first=Paul E.|date=13 November 2008|work=[[Dictionary of Algorithms and Data Structures]]|publisher=[[National Institute of Standards and Technology]]|access-date=22 August 2010}}</ref><ref name="andres">{{cite arXiv |eprint=1008.2909 |author1=Bjoern Andres |author2=Ullrich Koethe |author3=Thorben Kroeger |author4=Hamprecht |title=Runtime-Flexible Multi-dimensional Arrays and Views for C++98 and C++0x |class=cs.DS |year=2010}}</ref><ref name="garcia">{{Cite journal|last1=Garcia|first1=Ronald |first2=Andrew |last2=Lumsdaine|year=2005|title=MultiArray: a C++ library for generic programming with arrays|journal=Software: Practice and Experience|volume=35|issue=2|pages=159–188|issn=0038-0644|doi=10.1002/spe.630|s2cid=10890293 }}</ref>

The simplest type of data structure is a linear array, also called a one-dimensional array. For example, an array of ten [[32-bit]] (4-byte) integer variables, with indices 0 through 9, may be stored as ten [[Word (data type)|words]] at memory addresses 2000, 2004, 2008, ..., 2036 (in [[hexadecimal]]: <code>0x7D0</code>, <code>0x7D4</code>, <code>0x7D8</code>, ..., <code>0x7F4</code>), so that the element with index ''i'' has the address 2000 + (''i'' × 4).<ref>David R. Richardson (2002), The Book on Data Structures. iUniverse, 1112 pages. {{ISBN|0-595-24039-9}}, {{ISBN|978-0-595-24039-5}}.</ref> The memory address of the first element of an array is called the first address, foundation address, or base address.

Because the mathematical concept of a [[matrix (mathematics)|matrix]] can be represented as a two-dimensional grid, two-dimensional arrays are also sometimes called "matrices". In some cases the term "vector" is used in computing to refer to an array, although [[tuple]]s rather than [[vector space|vectors]] are the more mathematically correct equivalent. [[table (information)|Table]]s are often implemented in the form of arrays, especially [[lookup table]]s; the word "table" is sometimes used as a synonym of array.

Arrays are among the oldest and most important data structures, and are used by almost every program. They are also used to implement many other data structures, such as [[list (computing)|list]]s and [[string (computer science)|string]]s. They effectively exploit the addressing logic of computers. In most modern computers and many [[external storage]] devices, the memory is a one-dimensional array of words, whose indices are their addresses. [[Central processing unit|Processors]], especially [[vector processor]]s, are often optimized for array operations.

Arrays are useful mostly because the element indices can be computed at [[Run time (program lifecycle phase)|run time]].
Among other things, this feature allows a single iterative [[statement (programming)|statement]] to process arbitrarily many elements of an array. For that reason, the elements of an array data structure are required to have the same size and should use the same data representation. The set of valid index tuples and the addresses of the elements (and hence the element addressing formula) are usually,<ref name="garcia" /><ref name="veldhuizen">{{cite conference |first1=Todd L. |last1=Veldhuizen |title=Arrays in Blitz++ |publisher=Springer |location=Berlin |conference=Computing in Object-Oriented Parallel Environments |date=December 1998 |isbn=978-3-540-65387-5 |pages=223–230 |series=Lecture Notes in Computer Science |volume=1505 |doi=10.1007/3-540-49372-7_24 }}{{dead link|date=November 2023}}</ref> but not always,<ref name="andres" /> fixed while the array is in use.

The term "array" may also refer to an [[array data type]], a kind of [[data type]] provided by most [[high-level programming language]]s that consists of a collection of values or variables that can be selected by one or more indices computed at run-time. Array types are often implemented by array structures; however, in some languages they may be implemented by [[hash table]]s, [[linked list]]s, [[search tree]]s, or other data structures. The term is also used, especially in the description of [[algorithm]]s, to mean [[associative array]] or "abstract array", a [[theoretical computer science]] model (an [[abstract data type]] or ADT) intended to capture the essential properties of arrays.

==History==
The first digital computers used machine-language programming to set up and access array structures for data tables, vector and matrix computations, and for many other purposes. [[John von Neumann]] wrote the first array-sorting program ([[merge sort]]) in 1945, during the building of the [[EDVAC|first stored-program computer]].<ref>{{TAOCP|volume=3|page=159}}</ref> Array indexing was originally done by [[self-modifying code]], and later using [[index register]]s and [[Addressing mode|indirect addressing]]. Some mainframes designed in the 1960s, such as the [[Burroughs large systems|Burroughs B5000]] and its successors, used [[memory segmentation]] to perform index-bounds checking in hardware.<ref>{{citation|title=Capability-based Computer Systems|first=Henry M.|last=Levy|publisher=Digital Press|year=1984|isbn=9780932376220|page=22}}.</ref>

Assembly languages generally have no special support for arrays, other than what the machine itself provides. The earliest high-level programming languages, including [[Fortran|FORTRAN]] (1957), [[Lisp (programming language)|Lisp]] (1958), [[COBOL]] (1960), and [[ALGOL|ALGOL 60]] (1960), had support for multi-dimensional arrays, as did [[C (programming language)|C]] (1972). In [[C++]] (1983), class templates exist for multi-dimensional arrays whose dimension is fixed at runtime<ref name="garcia" /><ref name="veldhuizen" /> as well as for runtime-flexible arrays.<ref name="andres" />

==Applications==
Arrays are used to implement mathematical [[coordinate vector|vectors]] and [[matrix (mathematics)|matrices]], as well as other kinds of rectangular tables. Many [[database]]s, small and large, consist of (or include) one-dimensional arrays whose elements are [[record (computer science)|record]]s.
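
For illustration, a minimal C sketch of a one-dimensional array whose elements are records (the <code>struct</code> and its field names are purely illustrative, not drawn from any particular database):

<syntaxhighlight lang="c">
#include <stdio.h>

/* A "record" with a few fields; the array below is a one-dimensional
   array whose elements are these records. */
struct employee {
    int  id;
    char name[32];
};

int main(void) {
    /* Ten records stored contiguously, indexed 0 through 9. */
    struct employee staff[10] = {
        {1, "Ada"}, {2, "Grace"}
    };

    printf("%d %s\n", staff[1].id, staff[1].name);  /* prints: 2 Grace */
    return 0;
}
</syntaxhighlight>

Because every record has the same size, the address of <code>staff[i]</code> can still be computed directly from the index <code>i</code>.
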
Arrays are used to implement other data structures, such as lists, [[heap (data structure)|heaps]], [[hash table]]s, [[double-ended queue|deque]]s, [[queue (data structure)|queue]]s, [[stack (data structure)|stacks]], [[String (computer science)|strings]], and VLists. Array-based implementations of other data structures are frequently simple and space-efficient ([[implicit data structure]]s), requiring little space [[Overhead (computing)|overhead]], but may have poor space complexity, particularly when modified, compared to tree-based data structures (compare a [[sorted array]] to a [[search tree]]).

One or more large arrays are sometimes used to emulate in-program [[dynamic memory allocation]], particularly [[memory pool]] allocation. Historically, this has sometimes been the only way to allocate "dynamic memory" portably.

Arrays can be used to determine partial or complete [[control flow]] in programs, as a compact alternative to (otherwise repetitive) multiple <code>IF</code> statements. They are known in this context as [[control table]]s and are used in conjunction with a purpose-built interpreter whose [[control flow]] is altered according to values contained in the array. The array may contain [[subroutine]] [[Pointer (computer programming)|pointers]] (or relative subroutine numbers that can be acted upon by [[Switch statement|SWITCH]] statements) that direct the path of the execution.

==Element identifier and addressing formulas==
When data objects are stored in an array, individual objects are selected by an index that is usually a non-negative [[scalar (computing)|scalar]] [[integer]]. Indexes are also called subscripts. An index ''maps'' the array value to a stored object.

There are three ways in which the elements of an array can be indexed:
; 0 (''[[zero-based numbering|zero-based indexing]]''): The first element of the array is indexed by subscript of 0.<ref>{{cite web | access-date = 8 April 2011 | publisher = Computer Programming Web programming Tips | title = Array Code Examples - PHP Array Functions - PHP code | quote = In most computer languages array index (counting) starts from 0, not from 1. Index of the first element of the array is 0, index of the second element of the array is 1, and so on. In array of names below you can see indexes and values. | url = http://www.configure-all.com/arrays.php | archive-url = https://web.archive.org/web/20110413142103/http://www.configure-all.com/arrays.php | archive-date = 13 April 2011 | url-status = dead }}</ref>
; 1 (''one-based indexing''): The first element of the array is indexed by subscript of 1.
; n (''n-based indexing''): The base index of an array can be freely chosen. Usually, programming languages allowing ''n-based indexing'' also allow negative index values, and other [[scalar (computing)|scalar]] data types like [[enumerated type|enumerations]] or [[character (computing)|characters]] may be used as an array index.

Using zero-based indexing is the design choice of many influential programming languages, including [[C (programming language)|C]], [[Java (programming language)|Java]] and [[Lisp (programming language)|Lisp]]. This leads to a simpler implementation where the subscript refers to an offset from the starting position of an array, so the first element has an offset of zero.

Arrays can have multiple dimensions; thus it is not uncommon to access an array using multiple indices.
For example, a two-dimensional array <code>A</code> with three rows and four columns might provide access to the element at the 2nd row and 4th column by the expression <code>A[1][3]</code> in the case of a zero-based indexing system. Thus two indices are used for a two-dimensional array, three for a three-dimensional array, and ''n'' for an ''n''-dimensional array.

The number of indices needed to specify an element is called the dimension, dimensionality, or [[rank (computer programming)|rank]] of the array.

In standard arrays, each index is restricted to a certain range of consecutive integers (or consecutive values of some [[enumerated type]]), and the address of an element is computed by a "linear" formula on the indices.

===One-dimensional arrays===
[[File:1D array diagram.svg|thumb|Diagram of a typical 1D array]]
A one-dimensional array (or single-dimension array) is a type of linear array. Accessing its elements involves a single subscript, which can represent either a row or a column index.

As an example, consider the C declaration <code>int anArrayName[10];</code>, which declares a one-dimensional array of ten integers. Here, the array can store ten elements of type <code>int</code>. This array has indices starting from zero through nine. For example, the expressions <code>anArrayName[0]</code> and <code>anArrayName[9]</code> are the first and last elements respectively.

For a vector with linear addressing, the element with index ''i'' is located at the address {{nowrap|''B'' + ''c'' · ''i''}}, where ''B'' is a fixed ''base address'' and ''c'' a fixed constant, sometimes called the ''address increment'' or ''stride''.

If the valid element indices begin at 0, the constant ''B'' is simply the address of the first element of the array. For this reason, the [[C (programming language)|C programming language]] specifies that array indices always begin at 0; and many programmers will call that element "[[zero-based numbering|zeroth]]" rather than "first".

However, one can choose the index of the first element by an appropriate choice of the base address ''B''. For example, if the array has five elements, indexed 1 through 5, and the base address ''B'' is replaced by {{nowrap|''B'' + 30''c''}}, then the indices of those same elements will be 31 to 35. If the numbering does not start at 0, the constant ''B'' may not be the address of any element.

[[File:2D array diagram.svg|thumb|Diagram of a typical 2D array]]

===Multidimensional arrays===
[[File:3D array diagram.svg|thumb|Diagram of a typical 3D array]]
For a multidimensional array, the element with indices ''i'',''j'' would have address ''B'' + ''c'' · ''i'' + ''d'' · ''j'', where the coefficients ''c'' and ''d'' are the ''row'' and ''column address increments'', respectively.

More generally, in a ''k''-dimensional array, the address of an element with indices ''i''<sub>1</sub>, ''i''<sub>2</sub>, ..., ''i''<sub>''k''</sub> is
: ''B'' + ''c''<sub>1</sub> · ''i''<sub>1</sub> + ''c''<sub>2</sub> · ''i''<sub>2</sub> + … + ''c''<sub>''k''</sub> · ''i''<sub>''k''</sub>.

For example, consider the C declaration <code>int a[2][3];</code>. This means that array ''a'' has 2 rows and 3 columns, and the array is of integer type. Here we can store 6 elements; they are stored linearly in memory, starting with the first row and continuing with the second, so the array is laid out as a<sub>11</sub>, a<sub>12</sub>, a<sub>13</sub>, a<sub>21</sub>, a<sub>22</sub>, a<sub>23</sub>.

This formula requires only ''k'' multiplications and ''k'' additions, for any array that can fit in memory.
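
For illustration, the following C sketch (variable names are illustrative) applies this formula to the <code>int a[2][3];</code> declaration above and recovers the same element that the built-in subscripting yields:

<syntaxhighlight lang="c">
#include <stdio.h>

/* Row-major addressing for a 2-dimensional array with 2 rows and 3 columns
   of int: element (i, j) lives at offset c1*i + c2*j from the base address,
   with c2 = sizeof(int) and c1 = 3 * sizeof(int). */
int main(void) {
    int a[2][3] = {{11, 12, 13}, {21, 22, 23}};

    size_t c2 = sizeof(int);       /* column address increment */
    size_t c1 = 3 * sizeof(int);   /* row address increment    */

    char *base = (char *)a;        /* base address B */
    int i = 1, j = 2;              /* second row, third column */

    int *computed = (int *)(base + c1 * i + c2 * j);
    printf("%d %d\n", *computed, a[1][2]);   /* both print 23 */
    return 0;
}
</syntaxhighlight>
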
Moreover, if any coefficient is a fixed power of 2, the multiplication can be replaced by [[bitwise operation|bit shifting]].

The coefficients ''c''<sub>''k''</sub> must be chosen so that every valid index tuple maps to the address of a distinct element.

If the minimum legal value for every index is 0, then ''B'' is the address of the element whose indices are all zero. As in the one-dimensional case, the element indices may be changed by changing the base address ''B''. Thus, if a two-dimensional array has rows and columns indexed from 1 to 10 and 1 to 20, respectively, then replacing ''B'' by {{nowrap|''B'' + ''c''<sub>1</sub> − 3''c''<sub>2</sub>}} will cause them to be renumbered from 0 through 9 and 4 through 23, respectively. Taking advantage of this feature, some languages (like FORTRAN 77) specify that array indices begin at 1, as in mathematical tradition, while other languages (like Fortran 90, Pascal and Algol) let the user choose the minimum value for each index.

===Dope vectors===
{{Main|Dope vector}}
The addressing formula is completely defined by the dimension ''d'', the base address ''B'', and the increments ''c''<sub>1</sub>, ''c''<sub>2</sub>, ..., ''c''<sub>''k''</sub>. It is often useful to pack these parameters into a record called the array's descriptor, stride vector, or [[dope vector]].<ref name="andres" /><ref name="garcia" /> The size of each element, and the minimum and maximum values allowed for each index may also be included in the dope vector. The dope vector is a complete [[handle (computing)|handle]] for the array, and is a convenient way to pass arrays as arguments to [[subroutine|procedures]]. Many useful [[array slicing]] operations (such as selecting a sub-array, swapping indices, or reversing the direction of the indices) can be performed very efficiently by manipulating the dope vector.<ref name="andres" />

===Compact layouts===
{{Main|Row- and column-major order}}
Often the coefficients are chosen so that the elements occupy a contiguous area of memory. However, that is not necessary. Even if arrays are always created with contiguous elements, some array slicing operations may create non-contiguous sub-arrays from them.

[[File:Row_and_column_major_order.svg|thumb|upright|Illustration of row- and column-major order]]

There are two systematic compact layouts for a two-dimensional array. For example, consider the matrix
:<math>A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}. </math>

In the row-major order layout (adopted by C for statically declared arrays), the elements in each row are stored in consecutive positions and all of the elements of a row have a lower address than any of the elements of a consecutive row:

:{| class="wikitable"
|-
| 1 || 2 || 3 || 4 || 5 || 6 || 7 || 8 || 9
|}

In column-major order (traditionally used by Fortran), the elements in each column are consecutive in memory and all of the elements of a column have a lower address than any of the elements of a consecutive column:

:{| class="wikitable"
|-
| 1 || 4 || 7 || 2 || 5 || 8 || 3 || 6 || 9
|}

For arrays with three or more indices, "row major order" puts in consecutive positions any two elements whose index tuples differ only by one in the ''last'' index. "Column major order" is analogous with respect to the ''first'' index.

In systems which use [[processor cache]] or [[virtual memory]], scanning an array is much faster if successive elements are stored in consecutive positions in memory, rather than sparsely scattered.
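
The difference between the two layouts can be illustrated with a short C sketch (identifiers are illustrative) that flattens the example matrix into a one-dimensional buffer in each order:

<syntaxhighlight lang="c">
#include <stdio.h>

/* Flatten the 3x3 matrix A from the example above into one-dimensional
   buffers using the two compact layouts. */
int main(void) {
    int A[3][3] = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
    int row_major[9], col_major[9];

    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            row_major[i * 3 + j] = A[i][j];  /* offset = ncols * i + j */
            col_major[j * 3 + i] = A[i][j];  /* offset = nrows * j + i */
        }
    }

    /* row_major: 1 2 3 4 5 6 7 8 9   col_major: 1 4 7 2 5 8 3 6 9 */
    for (int k = 0; k < 9; k++) printf("%d ", row_major[k]);
    printf("\n");
    for (int k = 0; k < 9; k++) printf("%d ", col_major[k]);
    printf("\n");
    return 0;
}
</syntaxhighlight>
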
Storing successive elements at consecutive positions in memory is known as spatial locality, which is a type of [[locality of reference]]. Many algorithms that use multidimensional arrays will scan them in a predictable order. A programmer (or a sophisticated compiler) may use this information to choose between row- or column-major layout for each array. For example, when computing the product ''A''·''B'' of two matrices, it would be best to have ''A'' stored in row-major order, and ''B'' in column-major order.

===Resizing===
{{Main|Dynamic array}}
Static arrays have a size that is fixed when they are created and consequently do not allow elements to be inserted or removed. However, by allocating a new array and copying the contents of the old array to it, it is possible to effectively implement a ''dynamic'' version of an array; see [[dynamic array]]. If this operation is done infrequently, insertions at the end of the array require only amortized constant time.

Some array data structures do not reallocate storage, but do store a count of the number of elements of the array in use, called the count or size. This effectively makes the array a [[dynamic array]] with a fixed maximum size or capacity; [[Pascal string]]s are examples of this.

===Non-linear formulas===
More complicated (non-linear) formulas are occasionally used. For a compact two-dimensional [[triangular array]], for instance, the addressing formula is a polynomial of degree 2.

==Efficiency==
Both ''store'' and ''select'' take (deterministic worst case) [[constant time]]. Arrays take linear ([[Big-O notation|O]](''n'')) space in the number of elements ''n'' that they hold.

In an array with element size ''k'' and on a machine with a cache line size of ''B'' bytes, iterating through an array of ''n'' elements requires a minimum of ⌈''nk''/''B''⌉ cache misses, because its elements occupy contiguous memory locations. This is roughly a factor of ''B''/''k'' better than the number of cache misses needed to access ''n'' elements at random memory locations. As a consequence, sequential iteration over an array is noticeably faster in practice than iteration over many other data structures, a property called [[locality of reference]] (this does not mean, however, that using a [[Perfect hash function|perfect hash]] or [[hash function#Trivial hash function|trivial hash]] within the same (local) array cannot be even faster, and achievable in [[constant time]]). Libraries provide low-level optimized facilities for copying ranges of memory (such as [[String.h|memcpy]]) which can be used to move [[Contiguous data storage|contiguous]] blocks of array elements significantly faster than can be achieved through individual element access. The speedup of such optimized routines varies by array element size, architecture, and implementation.

Memory-wise, arrays are compact data structures with no per-element [[Computational overhead|overhead]]. There may be a per-array overhead (e.g., to store index bounds), but this is language-dependent. It can also happen that elements stored in an array require ''less'' memory than the same elements stored in individual variables, because several array elements can be stored in a single [[Word (data type)|word]]; such arrays are often called ''packed'' arrays. An extreme (but commonly used) case is the [[bit array]], where every bit represents a single element. A single [[octet (computing)|octet]] can thus hold up to 256 different combinations of up to 8 different conditions, in the most compact form.
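
A minimal C sketch of such a packed bit array, with eight conditions stored in one octet (the helper functions are illustrative, not a standard library API):

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

/* A packed bit array: 8 boolean "conditions" stored in a single octet. */
static void set_bit(uint8_t *byte, int i)   { *byte |=  (uint8_t)(1u << i); }
static void clear_bit(uint8_t *byte, int i) { *byte &= (uint8_t)~(1u << i); }
static int  get_bit(uint8_t byte, int i)    { return (byte >> i) & 1u; }

int main(void) {
    uint8_t flags = 0;        /* all 8 elements start as 0 */
    set_bit(&flags, 0);
    set_bit(&flags, 5);
    clear_bit(&flags, 0);
    printf("%d %d %d\n", get_bit(flags, 0), get_bit(flags, 5), get_bit(flags, 7));
    /* prints: 0 1 0 */
    return 0;
}
</syntaxhighlight>
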
Array accesses with statically predictable access patterns are a major source of [[data parallelism]].

===Comparison with other data structures===
{{List data structure comparison}}
[[Dynamic array]]s or growable arrays are similar to arrays but add the ability to insert and delete elements; adding and deleting at the end is particularly efficient. However, they reserve linear ([[Big-O notation#Family of Bachmann–Landau notations|Θ]](''n'')) additional storage, whereas arrays do not reserve additional storage.

[[Associative array]]s provide a mechanism for array-like functionality without huge storage overheads when the index values are sparse. For example, an array that contains values only at indexes 1 and 2 billion may benefit from using such a structure. Specialized associative arrays with integer keys include [[Radix tree|Patricia trie]]s, [[Judy array]]s, and [[van Emde Boas tree]]s.

[[Self-balancing binary search tree|Balanced trees]] require O(log ''n'') time for indexed access, but also permit inserting or deleting elements in O(log ''n'') time,<ref>{{cite web|url=http://www.chiark.greenend.org.uk/~sgtatham/algorithms/cbtree.html|title=Counted B-Trees}}</ref> whereas growable arrays require linear (Θ(''n'')) time to insert or delete elements at an arbitrary position.

[[Linked list]]s allow constant time removal and insertion in the middle but take linear time for indexed access. Their memory use is typically worse than arrays, but is still linear.

[[Image:Array of array storage.svg|120px|left|A two-dimensional array stored as a one-dimensional array of one-dimensional arrays (rows).]]
An [[Iliffe vector]] is an alternative to a multidimensional array structure. It uses a one-dimensional array of [[reference (computer science)|references]] to arrays of one dimension less. For two dimensions, in particular, this alternative structure would be a vector of pointers to vectors, one for each row (in C or C++, an array of pointers, one pointer per row). Thus an element in row ''i'' and column ''j'' of an array ''A'' would be accessed by double indexing (''A''[''i''][''j''] in typical notation). This alternative structure allows [[jagged array]]s, where each row may have a different size, or, in general, where the valid range of each index depends on the values of all preceding indices. It also saves one multiplication (by the column address increment), replacing it by a bit shift (to index the vector of row pointers) and one extra memory access (fetching the row address), which may be worthwhile in some architectures.

==Dimension==
The ''dimension'' of an array is the number of indices needed to select an element. Thus, if the array is seen as a function on a set of possible index combinations, it is the dimension of the space of which its domain is a discrete subset. Thus a one-dimensional array is a list of data, a two-dimensional array is a rectangle of data,<ref>{{Cite web|title=Two-Dimensional Arrays \ Processing.org|url=https://processing.org/tutorials/2darray/|website=processing.org|access-date=2020-05-01}}</ref> a three-dimensional array a block of data, etc.

This should not be confused with the dimension of the set of all matrices with a given domain, that is, the number of elements in the array. For example, an array with 5 rows and 4 columns is two-dimensional, but such matrices form a 20-dimensional space. Similarly, a three-dimensional vector can be represented by a one-dimensional array of size three.
==See also==
{{Portal|Computer programming}}
{{Div col|colwidth=20em}}
* [[Dynamic array]]
* [[Parallel array]]
* [[Variable-length array]]
* [[Bit array]]
* [[Array slicing]]
* [[Offset (computer science)]]
* [[Row- and column-major order]]
* [[Stride of an array]]
{{Div col end}}

== References ==
{{Reflist}}

== External links ==
{{Commons category|Array data structure}}
{{Wiktionary|array}}
* {{Wikibooks inline|Data Structures/Arrays}}

{{Clear}}
{{Data structures}}
{{Parallel computing}}
{{Authority control}}

{{DEFAULTSORT:Array Data Structure}}
[[Category:Arrays|*]]