DSA

Arrays

January 28, 2026 · 5 min read

If you're learning data structures and algorithms, arrays are the first thing you need to truly understand. Almost every other data structure either builds on arrays or exists to fix one of their weaknesses.

This post breaks down what arrays are, how they work under the hood, and why their performance characteristics matter — especially for technical interviews.


What is an array (or list)?

An array is the most basic way for your program to store a list of values.

Some everyday examples include:

  • A list of TODOs

  • A list of friends

  • A sequence of numbers from input

Conceptually, it is just a container that stores multiple values in a specific order.


Creating an array reserves a contiguous block of memory

When you create an array, the computer allocates a single contiguous block of memory large enough to hold all of its elements.

That detail — contiguous — is the most important thing to remember about arrays.

Memory
[ 7 ][ 4 ][ 9 ][ 1 ][ 6 ]
  ^
  base address

Every element sits right next to the previous one in memory.

This is what enables fast access, but it also explains why inserts and deletes can be expensive.


Indexing an array is O(1) time

Arrays support constant-time access.

Why?

Because once the program knows:

  • the base address of the array

  • the size of each element

it can jump directly to any index using simple math:
address = base + index * element_size

No loops. No traversal. Just arithmetic.

That’s why:

  • arr[0]
  • arr[5]
  • arr[n-1]

are all O(1) operations.

Most modern languages use 0-based indexing, meaning the first element lives at index 0. The index is simply an offset from the base memory address.
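
Here's a toy sketch in Python of that address arithmetic — the base address and element size are made-up numbers, just to show that a lookup is a single calculation:

# Toy illustration of the offset formula (hypothetical numbers).
base = 1000          # pretend the array starts at address 1000
element_size = 4     # pretend each element takes 4 bytes (a 32-bit int)

def address_of(index):
    # One multiply and one add — no traversal, no loops.
    return base + index * element_size

print(address_of(0))   # 1000
print(address_of(5))   # 1020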


Most programming languages provide a dynamic array implementation

In practice, you rarely work with "raw" arrays.

Higher-level languages give you dynamic arrays:

  • Python → list
  • Java → ArrayList
  • C++ → vector

These still use contiguous memory under the hood — but they automatically resize for you.

When you append an element and the array runs out of space, the language will:

  1. Allocate a larger contiguous block (usually ~2× the size)
  2. Copy all existing elements into the new block
  3. Free the old memory

That copy step is expensive — but it doesn’t happen often.
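
To make the resize concrete, here's a minimal sketch of a dynamic array's append. It's not how any real language implements it exactly (growth factors and bookkeeping vary), but the three steps are the same:

class DynamicArray:
    def __init__(self):
        self._capacity = 1
        self._size = 0
        self._data = [None] * self._capacity

    def append(self, value):
        if self._size == self._capacity:
            self._grow(2 * self._capacity)      # out of space: resize first
        self._data[self._size] = value          # write into the next free slot
        self._size += 1

    def _grow(self, new_capacity):
        new_data = [None] * new_capacity        # 1. allocate a larger block
        for i in range(self._size):             # 2. copy existing elements over
            new_data[i] = self._data[i]
        self._data = new_data                   # 3. the old block is dropped
        self._capacity = new_capacity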


Inserting at the end is O(1) amortized

Appending to the end of a dynamic array is:

  • Usually O(1) — just write into the next free slot
  • Occasionally O(n) — when a resize is required

Because resizes happen infrequently, the average cost per append is still constant.

This is called amortized O(1) time.
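
You can actually watch CPython over-allocate: the byte size of a list only jumps every so often as you append. (The exact numbers are an implementation detail and vary by version.)

import sys

arr = []
last = sys.getsizeof(arr)
for i in range(32):
    arr.append(i)
    size = sys.getsizeof(arr)
    if size != last:                   # a resize just happened
        print(f"len={len(arr):>2}  bytes={size}")
        last = size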


Inserting anywhere else is O(n)

If you insert an element anywhere other than the end, the array has extra work to do.

To preserve contiguity and ordering, the array must:

  • Shift every element to the right
  • Make space for the new value

Insert at index 1:

Before: [ 3 ][ 5 ][ 7 ][ 9 ]
After:  [ 3 ][ X ][ 5 ][ 7 ][ 9 ]

That shift costs O(n) time in the worst case.
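
This is roughly what arr.insert(i, x) has to do for you (Python does the shifting in C, but the work is the same):

def insert_at(arr, i, x):
    arr.append(None)                     # make room for one more element
    for j in range(len(arr) - 1, i, -1):
        arr[j] = arr[j - 1]              # shift everything right of i by one slot
    arr[i] = x                           # drop the new value into the gap

nums = [3, 5, 7, 9]
insert_at(nums, 1, 4)
print(nums)   # [3, 4, 5, 7, 9]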


Deleting from the end is O(1)

Removing the last element is cheap.

The array simply:

  • Decrements its logical size
  • Leaves the memory as-is

No shifting required.
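
In Python that's just pop() with no argument:

nums = [3, 5, 7, 9]
last = nums.pop()    # O(1): nothing has to move
print(last, nums)    # 9 [3, 5, 7]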


Deleting from anywhere else is O(n)

Just like insertion, deleting from the middle requires work.

All elements to the right must be shifted one position left to fill the gap.

That’s an O(n) operation.
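
Here's a sketch of the shifting a middle delete implies — it's the work del arr[i] or arr.pop(i) does for you under the hood:

def delete_at(arr, i):
    for j in range(i, len(arr) - 1):
        arr[j] = arr[j + 1]     # pull each element one slot to the left
    arr.pop()                   # drop the now-duplicated last slot

nums = [3, 5, 7, 9]
delete_at(nums, 1)
print(nums)   # [3, 7, 9]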


Searching for an element is O(n)… unless the array is sorted

If the array is unsorted, the only option is a linear scan:

  • Check each element one by one
  • Worst case: examine all n elements

That’s O(n) time.

If the array is sorted, you can do much better.

Using binary search, you can locate an element in O(log n) time.
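
Both are easy to write in Python — a linear scan works on any list, while binary search (here via the standard bisect module) needs the list to be sorted first:

from bisect import bisect_left

def linear_search(arr, target):          # O(n): check each element in turn
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

def binary_search(sorted_arr, target):   # O(log n): halve the range each step
    i = bisect_left(sorted_arr, target)
    if i < len(sorted_arr) and sorted_arr[i] == target:
        return i
    return -1

nums = [1, 4, 6, 7, 9]
print(linear_search(nums, 7))   # 3
print(binary_search(nums, 7))   # 3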


Sorting an array is usually O(n log n)

Most general-purpose sorting algorithms run in:

  • O(n log n) time

Examples include:

  • QuickSort
  • MergeSort
  • HeapSort
  • Timsort

Quadratic algorithms like:

  • Bubble Sort
  • Selection Sort

exist mainly for teaching purposes and are rarely used in practice.
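
In Python you almost never implement these yourself — the built-in sort is Timsort and runs in O(n log n):

nums = [9, 1, 6, 4, 7]
print(sorted(nums))   # returns a new sorted list: [1, 4, 6, 7, 9]
nums.sort()           # sorts in place, also O(n log n)
print(nums)           # [1, 4, 6, 7, 9]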


Sometimes you can sort in O(n)

If you know extra information about the data — for example:

  • All values are integers
  • Values fall within a small range

you can use specialized algorithms like counting sort to achieve O(n) time.

This insight can be the difference between:

  • Passing a LeetCode problem
  • Getting TLE
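
Here's a minimal counting sort sketch — it only makes sense when the values are small non-negative integers with a known upper bound:

def counting_sort(arr, max_value):
    counts = [0] * (max_value + 1)
    for x in arr:                             # O(n): tally how often each value appears
        counts[x] += 1
    result = []
    for value, count in enumerate(counts):    # O(max_value): rebuild in sorted order
        result.extend([value] * count)
    return result

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], max_value=9))
# [1, 1, 2, 3, 4, 5, 6, 9]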

Know the common array operations in your language

For Python lists:

  • len(arr) → O(1)
  • arr.append(x) → O(1) amortized
  • arr.insert(i, x) → O(n)
  • arr.pop() → O(1)
  • arr.remove(x) → O(n)
  • min(arr) / max(arr) → O(n)

Knowing these costs lets you reason about performance before you write code.


2D arrays (matrices)

A 2D array is just an array of arrays.

You’ll see them everywhere:

  • Grids and mazes
  • Minesweeper boards
  • Game maps
  • Images and screens

Access is still O(1):

grid[row][col]

But nested loops mean runtime grows quickly — something to always watch for.
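
A quick Python example — build a grid, touch one cell in O(1), then scan every cell with nested loops:

ROWS, COLS = 3, 4
grid = [[0] * COLS for _ in range(ROWS)]   # build rows with a comprehension,
                                           # not [[0]*COLS]*ROWS (that aliases one row)

grid[1][2] = 7                  # O(1): two index calculations

for row in range(ROWS):         # nested loops: O(rows * cols) total work
    for col in range(COLS):
        print(grid[row][col], end=" ")
    print()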


Final takeaway

Arrays are fast, simple, and memory-efficient — as long as you use them correctly.

Understanding:

  • why indexing is O(1)
  • why inserts and deletes can be O(n)
  • how amortized costs work

will give you a huge advantage in interviews and algorithmic problem solving.

Master arrays first — everything else builds on top of them.