Cache Simulator

Adjust the configuration below and run memory access patterns to visualize cache behavior.

The page is organized into panels: run controls, a statistics readout (hits, misses, prefetches, hit rate, and simulated time in ns), a cache-line table (valid bit, tag, word offsets 0–3, and LRU value for each line), a Hit vs. Miss view, a RAM overview with a hit/miss/prefetch legend, and an access log.

Synthetic Access Patterns

This tool doesn’t execute real code (e.g. for i in range(10): print(i)); instead, it generates one of four abstract memory-access sequences and replays them against the simulated cache.
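
For illustration only (the four built-in patterns aren't listed here), synthetic address-stream generators often look something like this sketch; the function names and parameters are hypothetical, not the simulator's:

  import random

  def sequential(start, length):
      """Consecutive word addresses: strong spatial locality."""
      return [start + i for i in range(length)]

  def strided(start, stride, count):
      """Every stride-th word: stresses block size and set mapping."""
      return [start + i * stride for i in range(count)]

  def random_walk(limit, count, seed=0):
      """Uniformly random addresses: little locality, so mostly misses."""
      rng = random.Random(seed)
      return [rng.randrange(limit) for _ in range(count)]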

Memory vs. Cache

Main Memory
A large, relatively slow array of words, each addressed by a unique number.
Cache
A smaller, faster store divided into fixed-size blocks (lines), each holding several consecutive words.
Set-Associativity
Splits the cache into “sets”; each memory block maps to exactly one set, but within that set it may occupy any of the set’s lines, or “ways” (see the geometry sketch below).
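
A minimal sketch of how these quantities relate, assuming word-addressed memory and power-of-two sizes (the function and parameter names are illustrative, not the simulator's configuration fields):

  def cache_geometry(total_lines, words_per_block, ways):
      """Derive the set count and address-field widths of a set-associative cache."""
      sets = total_lines // ways                      # lines are grouped into sets
      offset_bits = words_per_block.bit_length() - 1  # log2(words per block)
      index_bits = sets.bit_length() - 1              # log2(number of sets)
      return sets, offset_bits, index_bits

  # Example: 16 lines of 4 words each, 2-way set-associative
  sets, offset_bits, index_bits = cache_geometry(16, 4, 2)
  # -> 8 sets, 2 offset bits, 3 index bits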

How an Access Works

  1. Break the requested address into three fields (the full lookup is sketched in code after this list):
    • Tag: Identifies which block is in the cache line.
    • Set index: Chooses which small group (set) of lines to search.
    • Offset: Picks which word within the block to use.
  2. If a matching tag is found in that set → cache hit, return the data.
  3. If not → cache miss, load the block from RAM, evicting the least-recently-used line if needed.
  4. Optionally, prefetch future blocks on a miss to reduce upcoming misses.
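
Putting the four steps together, here is a hedged sketch of the lookup path in Python; the class and parameter names are illustrative, and the simulator's internals may differ:

  from collections import OrderedDict

  class Cache:
      """Set-associative cache over word addresses with LRU replacement
      and optional next-block prefetch on a miss."""

      def __init__(self, num_sets, ways, block_words, prefetch=False):
          self.num_sets = num_sets
          self.ways = ways
          self.block_words = block_words
          self.prefetch = prefetch
          # One OrderedDict per set, mapping tag -> block data; insertion
          # order doubles as LRU order (front = least recently used).
          self.sets = [OrderedDict() for _ in range(num_sets)]

      def _split(self, address):
          offset = address % self.block_words
          block = address // self.block_words
          return block // self.num_sets, block % self.num_sets, offset  # tag, index, offset

      def access(self, address):
          tag, index, _offset = self._split(address)
          lines = self.sets[index]
          if tag in lines:
              lines.move_to_end(tag)              # hit: refresh LRU position
              return "hit"
          self._fill(index, tag)                  # miss: load the block from RAM
          if self.prefetch:                       # optionally pull in the next block
              n_tag, n_index, _ = self._split(address + self.block_words)
              if n_tag not in self.sets[n_index]:
                  self._fill(n_index, n_tag)
          return "miss"

      def _fill(self, index, tag):
          lines = self.sets[index]
          if len(lines) >= self.ways:
              lines.popitem(last=False)           # evict the least-recently-used line
          lines[tag] = "block data"               # placeholder for the fetched words

An OrderedDict stands in for a set's lines here; the cache-line table instead shows an explicit LRU value per line, which the next section covers.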

Replacement Policy (LRU)

When you need to make room, evict the line that hasn’t been accessed for the longest time. This maximizes the chance that “hot” data stays resident.
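
The cache-line table shows an LRU value per line; a common way to maintain such counters is the age scheme sketched below (the dict-based line representation is an assumption, not the tool's data model):

  def touch(set_lines, way):
      """Refresh LRU ages after the line at `way` is hit: 0 = newest, larger = older."""
      old_age = set_lines[way]["lru"]
      for line in set_lines:
          if line["valid"] and line["lru"] < old_age:
              line["lru"] += 1            # lines that were newer age by one step
      set_lines[way]["lru"] = 0           # the touched line becomes the newest

  def victim(set_lines):
      """Pick an invalid line if one exists, otherwise the oldest line."""
      for way, line in enumerate(set_lines):
          if not line["valid"]:
              return way
      return max(range(len(set_lines)), key=lambda w: set_lines[w]["lru"])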

Performance Metrics

The statistics panel counts hits, misses, and prefetches, and from them reports the hit rate and the total simulated access time in nanoseconds.
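
A rough sketch of how such a summary can be computed; the fixed per-access latencies are placeholders, since the simulator's timing model isn't specified here:

  def summarize(hits, misses, hit_ns=1, miss_ns=10):
      """Aggregate hit rate and simulated time from raw counts."""
      total = hits + misses
      hit_rate = hits / total if total else 0.0
      time_ns = hits * hit_ns + misses * miss_ns
      return {"hit rate": f"{hit_rate:.0%}", "time": f"{time_ns} ns"}

  print(summarize(90, 10))   # {'hit rate': '90%', 'time': '190 ns'}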

Visualizations

You get three synchronized views:
  • Cache Lines: the state of every line in the table (valid bit, tag, the words at offsets 0–3, and LRU value).
  • Hit vs. Miss: how the accesses have resolved so far.
  • RAM Overview: main memory with blocks marked as hits, misses, or prefetches.

Play with block size, associativity, and prefetch distance to see in real time how they affect performance!
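
To get an offline feel for those trade-offs, you could sweep the same knobs over the sketches above (this reuses the hypothetical Cache class and sequential() generator from earlier, so it is illustrative only):

  # Compare a sequential sweep with and without next-block prefetching.
  for prefetch in (False, True):
      cache = Cache(num_sets=8, ways=2, block_words=4, prefetch=prefetch)
      results = [cache.access(addr) for addr in sequential(0, 128)]
      hit_rate = results.count("hit") / len(results)
      print(f"prefetch={prefetch}: hit rate {hit_rate:.0%}")
  # Expected with this sketch: 75% without prefetch, 88% with it.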