Virtual Memory & Cache Memory – Concepts, Working, Advantages & Differences
Simple, point-wise notes with diagrams-in-words, formulas and comparison tables.
1) Virtual Memory
Definition: A memory management technique that lets the system run programs larger than the available physical RAM by using a part of the disk/SSD as an extension of memory.
- Idea: Give each process a large, contiguous virtual address space, even if physical RAM is smaller.
- How it works: The program is divided into fixed-size pages; RAM is divided into same-size frames. Required pages are loaded into frames on demand. If RAM is full, a little-used page is moved back to disk (paged out).
- When a needed page is missing: a page fault occurs → the OS loads that page from disk into RAM (see the sketch after this list).
- Benefits: Run big apps; isolation between processes; better RAM utilization.
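A minimal demand-paging sketch (the reference string and frame count are hypothetical, and FIFO is just one possible eviction policy; a real OS tracks far more state):

```python
from collections import deque

def simulate_demand_paging(references, num_frames):
    """Count page faults for a page-reference string, evicting FIFO."""
    frames = set()      # pages currently resident in RAM
    order = deque()     # load order, oldest first (the FIFO victim queue)
    faults = 0
    for page in references:
        if page not in frames:              # page fault: page is "on disk"
            faults += 1
            if len(frames) == num_frames:   # RAM full -> page out the oldest
                frames.discard(order.popleft())
            frames.add(page)                # "load" the page into a frame
            order.append(page)
    return faults

# 3 frames, a classic textbook reference string: 10 faults with FIFO
print(simulate_demand_paging([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))
```

With more frames the fault count drops, which is the whole point of adding RAM.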
Pros:
- Programs larger than RAM can execute.
- Protection & isolation across processes.
- Efficient sharing of memory.
Cons:
- Disk access is slow → overall slowdown during heavy paging.
- Excessive page faults cause thrashing (CPU mostly waits for disk).
Analogy: RAM = desk; Disk = cupboard. If desk is small, keep extra books in cupboard and bring them when needed.
2) Cache Memory
Definition: A small, very fast memory between CPU and RAM that stores recently/frequently used instructions and data to speed up access.
- Locality principle: Temporal (recently used likely to be used again) & Spatial (nearby items likely to be used).
- Levels: L1 (smallest, fastest), L2, L3 (larger, a bit slower). L1 often split into I-cache (instructions) & D-cache (data).
- Hit/Miss: If the item is found → cache hit; otherwise → cache miss, and the item is fetched from the lower level (RAM). A toy simulation follows this list.
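A toy hit/miss counter using an LRU cache of abstract keys (real caches operate on addresses and lines, and this access pattern is made up), showing temporal locality paying off:

```python
from collections import OrderedDict

def count_hits(accesses, capacity):
    """Tiny fully-associative LRU cache: return (hits, misses)."""
    cache = OrderedDict()   # insertion order doubles as recency order
    hits = misses = 0
    for key in accesses:
        if key in cache:
            hits += 1
            cache.move_to_end(key)          # refresh recency on a hit
        else:
            misses += 1                     # miss: fetch from "RAM"
            cache[key] = True
            if len(cache) > capacity:
                cache.popitem(last=False)   # evict least recently used
    return hits, misses

# A loop that reuses the same few items hits almost every time:
print(count_hits(["a", "b", "a", "b", "a", "c", "a", "b"], capacity=3))  # (5, 3)
```

OrderedDict keeps recency order for free, which is why it is a common way to prototype LRU.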
Pros:
- Huge performance boost; CPU waits less.
- Lowers average memory access time.
Cons:
- Expensive (SRAM) → limited size.
- Design is complex (coherence, consistency, replacement policies).
Analogy: Cache = notes kept on your desk for quick use; RAM = books on nearby shelf.
3) Virtual Memory vs Cache Memory (Comparison)
| Feature | Virtual Memory | Cache Memory |
|---|---|---|
| Primary Goal | Increase addressable memory size | Increase speed of memory access |
| Backed by | Disk/SSD (swap/page file) | RAM/main memory |
| Managed by | Operating System + MMU | Hardware (cache controller/CPU) |
| Granularity | Pages (e.g., 4 KB) | Cache lines/blocks (e.g., 32–128 bytes) |
| Size | Large (GBs) | Small (KBs–MBs) |
| Speed | Slow (disk latency) | Very fast (SRAM, on-chip) |
| On miss | Page fault → disk access | Cache miss → fetch from RAM |
4) Deep Dive (Short & Sweet)
- Address translation (VM): Virtual address → (via page table) → physical frame + offset. The TLB (Translation Lookaside Buffer) caches recent translations to avoid slow page-table walks. (Sketched after this list.)
- Page replacement (VM): When RAM is full, choose a victim page; common policies are LRU, FIFO and Clock (Clock is sketched after this list).
- Cache mapping (a direct-mapped sketch, including write-back, follows this list):
- Direct-mapped: Each block has exactly one place to go (simple, fast, more conflicts).
- Fully associative: Block can go anywhere (lowest conflict, expensive).
- Set-associative: Middle ground (e.g., 4-way).
- Replacement in cache: LRU, Random, FIFO (depends on associativity and hardware budget).
- Write policies (cache): Write-through (write to cache + RAM immediately, simpler, more traffic) vs Write-back (mark dirty, write later, faster but complex).
- Types of cache misses: Compulsory (first access), Capacity (cache too small), Conflict (mapping collisions).
- Thrashing (VM): Too many page faults due to insufficient RAM or poor locality → performance collapses. Fix by tuning the working set or adding RAM.
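A minimal sketch of the address translation described above, assuming 4 KB pages and a made-up single-level page table (a real MMU uses multi-level tables in hardware, with a TLB in front):

```python
PAGE_SIZE = 4096                   # 4 KB pages -> the low 12 bits are the offset
page_table = {0: 5, 1: 9, 2: 3}    # hypothetical: virtual page -> physical frame

def translate(virtual_addr):
    vpn = virtual_addr // PAGE_SIZE        # virtual page number
    offset = virtual_addr % PAGE_SIZE      # offset is unchanged by translation
    if vpn not in page_table:
        raise RuntimeError("page fault: OS must load the page")  # trap to OS
    frame = page_table[vpn]
    return frame * PAGE_SIZE + offset      # physical address

print(hex(translate(0x1ABC)))   # VPN 1 -> frame 9, so 0x1ABC becomes 0x9ABC
```

A TLB is conceptually just a tiny cache sitting in front of these page-table lookups.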
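And a sketch of the Clock (second-chance) policy from the page-replacement bullet; the frames and reference bits below are hypothetical:

```python
def clock_evict(frames, ref_bits, hand):
    """Pick a victim frame: clear reference bits until one is already 0."""
    while True:
        if ref_bits[hand]:
            ref_bits[hand] = 0                  # second chance: clear and move on
            hand = (hand + 1) % len(frames)
        else:
            return hand, (hand + 1) % len(frames)   # victim found; advance hand

frames   = ["A", "B", "C", "D"]
ref_bits = [1, 0, 1, 1]                         # 1 = recently referenced
victim, hand = clock_evict(frames, ref_bits, hand=0)
print(frames[victim])                           # "B": first frame with bit 0
```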
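Finally, the cache-mapping and write-policy bullets in one sketch: a direct-mapped cache splits an address into tag | index | offset, and write-back is modeled as a dirty bit (block size and set count are made-up values):

```python
BLOCK_SIZE = 64      # bytes per line -> 6 offset bits
NUM_SETS   = 256     # direct-mapped: one line per set -> 8 index bits

lines = {}           # index -> {"tag": ..., "dirty": bool}

def split(addr):
    offset = addr % BLOCK_SIZE
    index  = (addr // BLOCK_SIZE) % NUM_SETS
    tag    = addr // (BLOCK_SIZE * NUM_SETS)
    return tag, index, offset

def access(addr, is_write=False):
    tag, index, _ = split(addr)
    line = lines.get(index)
    hit = line is not None and line["tag"] == tag
    if not hit:
        if line is not None and line["dirty"]:
            pass  # write-back: the dirty victim would be flushed to RAM here
        lines[index] = line = {"tag": tag, "dirty": False}  # fetch line from RAM
    if is_write:
        line["dirty"] = True          # write-back: defer the actual RAM write
    return hit

print(access(0x1234))                          # False (compulsory miss)
print(access(0x1234, is_write=True))           # True  (hit; line marked dirty)
print(access(0x1234 + BLOCK_SIZE * NUM_SETS))  # False (same index, new tag)
```

Note the third access is a conflict miss: same index, different tag, exactly the miss type listed above.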
5) Handy Formulas & Quick Examples
- Average Memory Access Time (Cache):
AMAT = HitTime + MissRate × MissPenalty
Example: HitTime 1 ns, MissRate 5% (0.05), MissPenalty 80 ns ⇒ AMAT = 1 + 0.05×80 = 5 ns.
- Effective Access Time (with TLB):
EAT = α × TLB_HitTime + (1 − α) × TLB_MissTime
Here TLB_MissTime includes a page-table walk (and possibly a page fault cost if the page isn’t in RAM).
- Effective Memory Size (VM): “Virtually” larger than RAM because infrequently used pages live on disk; actual speed depends on the page-fault rate.
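Both formulas as plain arithmetic, reusing the numbers from the AMAT example; the TLB figures (98% hit rate, 100 ns table walk) are hypothetical:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average Memory Access Time for a cache."""
    return hit_time + miss_rate * miss_penalty

def eat(alpha, tlb_hit_time, tlb_miss_time):
    """Effective Access Time with a TLB (alpha = TLB hit ratio)."""
    return alpha * tlb_hit_time + (1 - alpha) * tlb_miss_time

print(amat(1, 0.05, 80))    # 5.0 ns, matching the worked example above
print(eat(0.98, 1, 101))    # 3.0 ns: miss time = 1 ns TLB check + 100 ns walk
```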
6) Exam Tips (Score Fast)
- Write crisp definitions of VM and Cache first.
- Draw a tiny flow: CPU → Cache → RAM → Disk (even a text diagram works).
- Mention locality for cache and paging + page fault for VM.
- Give 3 pros / 3 cons for each.
- Add the comparison table and one AMAT formula line.
One-line conclusion: “Virtual Memory extends capacity; Cache Memory boosts speed — both rely on locality to work well.”