Introduction to Data Structures

4 minute read

Filed under Data Structures

Understand how data structures turn raw bytes into meaningful patterns. Explore contiguous vs scattered memory, linear vs non-linear structures, and the tradeoffs that shape your code's performance.

ADTs vs Data Structures

It's important to understand two fundamental concepts that often get confused:

Abstract Data Types (ADTs)

Define what you can do - which operations exist and what behavior they promise. An ADT is the interface contract, the abstract specification. It says "you can push and pop", but not how that works under the hood.

Data Structures

Define how to do it - the concrete implementation in memory. It's the practical realization of the concept. A data structure shows exactly how bytes are organized and manipulated.

Practical example

A "stack" is an ADT that defines LIFO behavior. You can implement it using arrays or linked lists - different data structures, same abstract behavior. The structure choice affects performance, but the ADT remains the same.
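A minimal sketch of this idea in Python: the same push/pop contract backed by two different data structures. The class and method names here are illustrative, not from any particular library.

```python
class ArrayStack:
    """Stack backed by a contiguous dynamic array (a Python list)."""
    def __init__(self):
        self._items = []

    def push(self, value):
        self._items.append(value)   # amortized O(1)

    def pop(self):
        return self._items.pop()    # O(1)


class _Node:
    """A single linked-list node: a value plus a pointer to the next node."""
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node


class LinkedStack:
    """Stack backed by a singly linked list; the head is the top."""
    def __init__(self):
        self._head = None

    def push(self, value):
        self._head = _Node(value, self._head)  # O(1): new head points at old head

    def pop(self):
        node = self._head
        self._head = node.next                 # O(1): unlink the head
        return node.value


# Same ADT, same observable behavior, different internals:
for stack in (ArrayStack(), LinkedStack()):
    stack.push(1)
    stack.push(2)
    assert stack.pop() == 2  # LIFO: last in, first out
```

Callers can swap one implementation for the other without changing their code; only the performance profile changes.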

Logical data organization

How elements relate to each other defines the structure's fundamental nature:

Linear structures

Establish a clear sequential order - there's a first element, a last one, and every intermediate element has exactly one predecessor and one successor. Like a line of people: you know who's in front of you and who's behind you.

Linear structure examples

Arrays, linked lists, stacks, and queues are classic examples. In a stack, you pile plates on top of each other - you only access the top one. In a queue, it's like a bank line - first come, first served. Linearity is in the relationship between elements, not necessarily in physical layout.
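The plate-stack and bank-line analogies map directly onto Python builtins; a small sketch (the element names are made up for illustration):

```python
from collections import deque

# Queue: like a bank line - first come, first served (FIFO).
line = deque()
line.append("alice")
line.append("bob")
first_served = line.popleft()   # "alice" arrived first, so she leaves first

# Stack: like a pile of plates - you only touch the top one (LIFO).
plates = []
plates.append("bottom plate")
plates.append("top plate")
taken = plates.pop()            # the top plate comes off first
```

Note that both use the same kind of underlying storage here; what makes one a queue and the other a stack is the access discipline, not the memory layout.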

Non-linear structures

Break this strict sequential order. Here, an element can connect to several others at once, forming hierarchies or complex networks. Like a family tree or a social network.

Non-linear structure examples

Trees organize data hierarchically - like folders and files on your computer. Graphs represent arbitrary networks - like routes between cities or social network connections. Hash tables use mathematical functions to spread data and access it instantly, like a supercharged book index.
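A rough sketch of those three shapes using plain Python builtins (all names and values below are invented for illustration):

```python
# Hash table: Python's dict spreads keys with a hash function,
# giving average O(1) lookups - the "supercharged book index".
index = {"arrays": 12, "graphs": 47}
assert index["graphs"] == 47

# Tree: nested dicts model a folder hierarchy, like files on disk.
fs = {"home": {"docs": {"notes.txt": None}, "pics": {}}}
assert "notes.txt" in fs["home"]["docs"]

# Graph: an adjacency list maps each node (city) to its neighbors.
routes = {"A": ["B", "C"], "B": ["C"], "C": []}
assert "C" in routes["A"]       # there is a direct route from A to C
```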

Memory perspective

Beyond logical organization, there's the physical question: how are bytes arranged in RAM?

Contiguous memory

Elements stored in adjacent and sequential positions in memory. Like a street with sequentially numbered houses.

RAM:

  Address   Value   Index
  0x1000    42      0
  0x1004    17      1
  0x1008    93      2
  0x100C    28      3

Array: [42, 17, 93, 28]

Arrays work this way. If each element takes 4 bytes, element array[2] sits at base_address + (2 × 4). Instant O(1) access! But insert in the middle? You need to shift all following elements - imagine reorganizing an entire bookshelf because you want to squeeze one book in.
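The indexing math above can be checked directly; a sketch using the made-up addresses from the diagram, plus the shifting cost of a mid-array insert:

```python
# Address of array[i] with 4-byte elements: base + i * 4.
# The base address 0x1000 is the invented one from the diagram.
base = 0x1000
size = 4

def addr(i):
    return base + i * size

assert addr(2) == 0x1008   # array[2] lives 8 bytes past the base

# Inserting mid-array shifts every element after the slot:
arr = [42, 17, 93, 28]
arr.insert(1, 50)          # O(n): 17, 93, and 28 all move one slot right
assert arr == [42, 50, 17, 93, 28]
```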

Non-contiguous memory

Elements scattered in different memory regions, connected by pointers. Like a treasure hunt where each clue points to the next.

RAM (scattered):

  0x1000:  data: 10   next → 0x5FA0
  0x5FA0:  data: 20   next → 0x2C10
  0x2C10:  data: 30   next → NULL

Linked List: 10 → 20 → 30

Linked lists work this way. Each node carries its data and a pointer to the next. Insert in the middle? Easy - just redirect two pointers. But access element 100? You need to follow 100 clues sequentially - O(n).
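A minimal sketch mirroring the diagram: build 10 → 20 → 30, do the cheap two-pointer insert, then pay the sequential cost to access by position. Python references play the role of the memory addresses.

```python
class Node:
    """One linked-list node: its data plus a pointer to the next node."""
    def __init__(self, data, next_node=None):
        self.data = data
        self.next = next_node

head = Node(10, Node(20, Node(30)))   # 10 -> 20 -> 30

# Insert 15 right after the head: just redirect two pointers, O(1).
head.next = Node(15, head.next)       # now 10 -> 15 -> 20 -> 30

def at(head, i):
    """Access by position means following the chain: O(n)."""
    node = head
    for _ in range(i):
        node = node.next
    return node.data

assert at(head, 2) == 20
```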

What makes them different

Each data structure makes tradeoffs:

  • Memory layout - Contiguous (arrays) vs scattered (linked lists)
  • Access patterns - Random access vs sequential traversal
  • Operation costs - Fast insertions but slow searches, or vice versa
  • Space overhead - Minimal metadata vs pointer-heavy structures

The right choice depends on your specific use case. Need fast searches? Hash tables. Maintain order? Binary search trees. LIFO behavior? Stacks.
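Those use cases map onto Python builtins roughly like this (Python has no built-in binary search tree, so a sorted list kept in order with bisect stands in for "maintain order" here):

```python
import bisect

# Fast searches: a hash table (dict) gives average O(1) lookup.
fast_lookup = {"key": "value"}
assert "key" in fast_lookup

# Maintain order: insert while keeping the collection sorted.
ordered = [3, 10, 42]
bisect.insort(ordered, 17)            # stays sorted after every insert
assert ordered == [3, 10, 17, 42]

# LIFO behavior: a stack.
history = []
history.append("page1")
history.append("page2")
assert history.pop() == "page2"       # most recent page comes back first
```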