If you're learning data structures and algorithms, arrays are the first thing you need to truly understand. Almost every other data structure either builds on arrays or exists to fix one of their weaknesses.
This post breaks down what arrays are, how they work under the hood, and why their performance characteristics matter — especially for technical interviews.
What is an array (or list)?
An array is the most basic way for your program to store a list of values.
Some everyday examples include:
A list of TODOs
A list of friends
A sequence of numbers from input
Conceptually, it is just a container that stores multiple values in a specific order.
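In Python, for example, the built-in list plays this role. A trivial sketch (the values are just placeholders):

```python
# Ordered containers, Python-style: list is Python's built-in array type.
todos = ["buy milk", "write post", "call Alice"]
numbers = [7, 4, 9, 1, 6]

print(numbers[0])   # 7 -> elements keep their order
print(numbers[4])   # 6 -> and are reachable by position
```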
Creating an array reserves a contiguous block of memory
When you create an array, the computer allocates a single contiguous block of memory large enough to hold all of its elements.
That detail — contiguous — is the most important thing to remember about arrays.
Memory:

```
[ 7 ] [ 4 ] [ 9 ] [ 1 ] [ 6 ]
  ^
  base address
```
Every element sits right next to the previous one in memory.
This is what enables fast access, but it also explains why inserts and deletes can be expensive.
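You can see the contiguity concretely with Python's stdlib array module, which stores its elements in a single C buffer. A minimal sketch (the exact addresses printed will vary per run):

```python
from array import array

arr = array('i', [7, 4, 9, 1, 6])   # one contiguous block of machine ints
base, length = arr.buffer_info()    # (base address, element count)

print(f"base address : {hex(base)}")
print(f"element size : {arr.itemsize} bytes")

# Each element sits exactly itemsize bytes after the previous one, no gaps:
for i in range(length):
    print(f"arr[{i}] lives at {hex(base + i * arr.itemsize)}")
```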
Indexing an array is O(1) time
Arrays support constant-time access.
Why?
Because once the program knows:
the base address of the array
the size of each element
It can jump directly to any index using simple math:
```
address = base + index * element_size
```
No loops. No traversal. Just arithmetic.
That’s why:
arr[0]
arr[5]
arr[n-1]
are all O(1) operations.
Most modern languages use 0-based indexing, meaning the first element lives at index 0. The index is simply an offset from the base memory address.
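You can even check the formula by hand. A minimal, CPython-only sketch using the stdlib array and ctypes modules to read raw memory at a computed address (don't do this in real code):

```python
import ctypes
from array import array

arr = array('i', [7, 4, 9, 1, 6])
base, _ = arr.buffer_info()

index = 3
address = base + index * arr.itemsize             # base + index * element_size
value = ctypes.c_int.from_address(address).value  # read memory at that address

print(value, arr[index])   # 1 1 -> same element, found with pure arithmetic
```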
Most programming languages provide a dynamic array implementation
In practice, you rarely work with "raw" arrays.
Higher-level languages give you dynamic arrays:
Python → list
Java → ArrayList
C++ → vector
These still use contiguous memory under the hood — but they automatically resize for you.
When you append an element and the array runs out of space, the language will:
Allocate a larger contiguous block (typically 1.5-2× the size, depending on the language)
Copy all existing elements into the new block
Free the old memory
That copy step is expensive — but it doesn’t happen often.
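Here is a toy version of that mechanism, sketched in Python. The class name and doubling factor are illustrative choices; real implementations like CPython's list do this in C with more tuning:

```python
import ctypes

class DynamicArray:
    """Toy dynamic array illustrating the resize-and-copy mechanism."""

    def __init__(self):
        self._n = 0                          # elements actually stored
        self._capacity = 1                   # slots in the current block
        self._block = self._make_block(self._capacity)

    def _make_block(self, capacity):
        # A contiguous block of object slots (a stand-in for raw memory).
        return (capacity * ctypes.py_object)()

    def append(self, value):
        if self._n == self._capacity:        # out of room?
            self._resize(2 * self._capacity) # allocate a ~2x larger block
        self._block[self._n] = value
        self._n += 1

    def _resize(self, new_capacity):
        new_block = self._make_block(new_capacity)
        for i in range(self._n):             # copy every element: O(n)
            new_block[i] = self._block[i]
        self._block = new_block              # old block is freed by the GC
        self._capacity = new_capacity

    def __len__(self):
        return self._n

    def __getitem__(self, i):
        if not 0 <= i < self._n:
            raise IndexError(i)
        return self._block[i]
```

Appending n items to this class triggers only about log2(n) resizes, which is the key to the amortized analysis in the next section.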
Inserting at the end is O(1) amortized
Appending to the end of a dynamic array is:
Usually O(1) — just write into the next free slot
Occasionally O(n), when a resize is required
Because resizes happen infrequently, the average cost per append is still constant.
This is called amortized O(1) time.
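You can watch this happen in CPython: sys.getsizeof reports the list's allocated size, which only jumps when a resize occurs. (Note that CPython over-allocates by a mild factor rather than a full 2×, but the principle is the same.)

```python
import sys

lst = []
prev = sys.getsizeof(lst)
for i in range(64):
    lst.append(i)
    cur = sys.getsizeof(lst)
    if cur != prev:   # allocation grew -> a resize (and copy) happened
        print(f"resize at len={len(lst)}: {prev} -> {cur} bytes")
        prev = cur
```

Most of the 64 appends land in an already-allocated slot; only a handful trigger a copy, which is exactly why the average cost per append stays constant.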
Inserting anywhere else is O(n)
If you insert an element anywhere other than the end, you run into a problem.
To preserve contiguity and ordering, the array must: