🧠 Virtual Memory Explained

Learn concepts visually for easy revision

🤔 What is Virtual Memory?

Memory Illusion: 4GB of real memory presented as 16GB of virtual memory
Simple Definition: Virtual memory tricks your computer into thinking it has more RAM than it actually does!

How? By using hard disk space as "fake RAM" when needed.

Real Example: Your phone has 4GB RAM but can run apps that need 8GB total.

📄 Demand Paging - The Smart Way

Page Loading Process:
📱 App Starts (only load what's needed) → 🔍 Page Needed (user clicks something) → 💾 Load from Disk (bring the page into RAM)
Think of it like a book:
• You don't read all pages at once
• You only open the page you need
• Other pages stay in the book (disk)
• Only current page is on your desk (RAM)
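
To make the analogy concrete, here is a tiny Python sketch (a toy LazyBook class, not real OS code) that only pulls a page off the "disk" the first time it is read:

```python
# Toy demand paging: pages stay on "disk" until they are actually read,
# then get cached in a small in-memory dict (the "desk").

class LazyBook:
    def __init__(self, pages_on_disk):
        self._disk = pages_on_disk      # every page lives here (the book)
        self._ram = {}                  # only pages we have opened (the desk)

    def read(self, page_num):
        if page_num not in self._ram:   # page fault: not loaded yet
            print(f"loading page {page_num} from disk")
            self._ram[page_num] = self._disk[page_num]
        return self._ram[page_num]

book = LazyBook({1: "Chapter 1", 2: "Chapter 2", 3: "Chapter 3"})
print(book.read(2))   # first access: loads from disk
print(book.read(2))   # second access: served from RAM, no disk I/O
```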

⚠️ Page Fault - When Things Go Wrong

Page Table Example

In this example page table, each page is either loaded in RAM, stored on disk, or invalid:

• Page 1: ✅ In RAM
• Page 2: 💾 On Disk
• Page 3: ✅ In RAM
• Page 4: ❌ Invalid
• Page 5: 💾 On Disk
• Page 6: ✅ In RAM
• Page 7: 💾 On Disk
• Page 8: ❌ Invalid
Accessing Page 2, 5, or 7 causes a page fault (the page must be brought in from disk); accessing Page 4 or 8 is an invalid reference, and the OS terminates the process.

🔧 Page Fault Handling Steps

1️⃣ Check if Valid: Is this a real page?
2️⃣ Find Free Space: Get an empty RAM frame
3️⃣ Load from Disk: Copy the page to RAM
4️⃣ Update Table: Mark the page as "in memory"
5️⃣ Continue: Resume execution
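
A minimal Python sketch of these five steps, using plain dicts as stand-ins for the real page table, RAM frames, and disk (the names are made up for illustration):

```python
def handle_page_fault(page_num, page_table, ram, free_frames, disk):
    # 1. Check if valid: does this page belong to the process at all?
    if page_num not in disk:
        raise MemoryError(f"invalid page {page_num}: segmentation fault")
    # 2. Find free space: grab an empty RAM frame
    #    (if none were free, a page replacement algorithm would pick a victim)
    frame = free_frames.pop()
    # 3. Load from disk: copy the page's contents into that frame
    ram[frame] = disk[page_num]
    # 4. Update table: mark the page as "in memory" at this frame
    page_table[page_num] = frame
    # 5. Continue: the CPU re-runs the instruction that faulted

disk = {0: "code", 1: "data", 2: "stack"}   # pages currently on disk
ram = {}                                    # frame number -> contents
free_frames = [0, 1, 2, 3]
page_table = {}                             # page number -> frame number

handle_page_fault(1, page_table, ram, free_frames, disk)
print(page_table, ram)                      # {1: 3} {3: 'data'}
```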

⚖️ Pros & Cons

✅ Advantages

  • Big Programs: Run apps larger than your RAM
  • More Apps: Run multiple programs together
  • Efficiency: Better CPU usage
  • Flexibility: Programs aren't limited by physical RAM

❌ Disadvantages

  • Slower: Disk access takes time
  • Thrashing: Too much swapping slows system
  • Complexity: More complex memory management
  • Overhead: Extra work for OS

🚀 Quick Revision Summary

• Virtual Memory: Illusion of more RAM using disk space
• Demand Paging: Load pages only when needed
• Page Fault: When a needed page isn't in RAM
• Lazy Loading: Don't load until absolutely necessary

🔄 Page Replacement Algorithms (Interview Favorite!)

Memory Full Scenario: RAM is full with 4 pages (A, B, C, D). ⚠️ New page E is needed. Which page should be replaced?
Common Algorithms:
• FIFO: Replace the oldest page (First In, First Out)
• LRU: Replace the least recently used page
• Optimal: Replace the page that won't be used for the longest time
• Random: Replace any page at random

Interview Tip: LRU is most commonly asked!
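
As a quick sketch (the reference string and frame count below are just example values), LRU can be simulated with an ordered dict that tracks recency:

```python
from collections import OrderedDict

def count_faults_lru(reference_string, frames):
    """Count page faults under LRU replacement."""
    memory = OrderedDict()                 # pages in RAM, ordered by recency
    faults = 0
    for page in reference_string:
        if page in memory:
            memory.move_to_end(page)       # hit: mark as most recently used
        else:
            faults += 1                    # page fault
            if len(memory) == frames:
                memory.popitem(last=False) # evict the least recently used page
            memory[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(count_faults_lru(refs, frames=3))    # 9 page faults
```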

🌪️ Thrashing - The Death Spiral

Thrashing Cycle: 📈 Too Many Processes → 💾 Frequent Page Faults → 🐌 More Disk I/O → 😵 System Slowdown
What is Thrashing?
When the system spends more time swapping pages in and out than executing programs.

Real Example: Your laptop with 4GB RAM running 20 Chrome tabs + video editor + games = everything becomes super slow!

Solutions:
• Increase RAM
• Reduce number of processes
• Better page replacement algorithms
• Working set model

🎯 Working Set Model (Advanced Interview Topic)

Definition: The set of pages that a process is actively using within a recent time window
Key Points:
• Working Set Size: Number of pages a process needs to run efficiently
• Locality of Reference: Programs tend to access nearby memory locations
• Temporal Locality: Recently accessed pages are likely to be accessed again
• Spatial Locality: Nearby pages are likely to be accessed together

Interview Answer: "Working set helps prevent thrashing by ensuring each process has enough pages in memory to run efficiently."
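
A minimal sketch of the idea, assuming a toy reference string and defining the working set as the pages touched in the last Δ (delta) references:

```python
def working_set(reference_string, t, delta):
    """Pages referenced in the window of `delta` references ending at time t."""
    start = max(0, t - delta + 1)
    return set(reference_string[start:t + 1])

refs = [1, 2, 1, 3, 4, 4, 4, 2, 2, 2]
print(sorted(working_set(refs, t=6, delta=4)))   # [3, 4]
print(sorted(working_set(refs, t=9, delta=4)))   # [2, 4]: locality has shifted toward page 2
```

If the working sets of all running processes fit in RAM at the same time, thrashing is unlikely; if their sum exceeds RAM, the OS should suspend or swap out whole processes instead of letting everyone fault constantly.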

🔧 Memory Management Techniques (Interview Deep Dive)

• Translation Lookaside Buffer (TLB): Cache for page table entries; speeds up address translation
• Copy-on-Write: Share pages until one side modifies them; saves memory for fork() operations
• Memory Mapped Files: Map files directly into memory; efficient file I/O
• Swap Space: Dedicated disk area for pages; traditionally sized at about 2x RAM
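
Memory-mapped files are easy to try from user space; this Python sketch (the file name is arbitrary) maps a 4KB file and modifies it through memory instead of read()/write() calls:

```python
import mmap

# Create a one-page (4 KB) file to map.
with open("demo.bin", "wb") as f:
    f.write(b"\x00" * 4096)

with open("demo.bin", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:   # map the whole file into memory
        mm[0:5] = b"hello"                 # writing to memory writes to the file
        print(bytes(mm[0:5]))              # b'hello'
```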

❓ Common Interview Questions

Q1: What happens when RAM is full and new page is needed?
A: A page replacement algorithm kicks in. The system selects a victim page using an algorithm like LRU, FIFO, or Optimal. The victim page is written back to disk if it was modified (dirty), then the new page is loaded into that frame.
Q2: Difference between Paging and Segmentation?
A: Paging divides memory into fixed-size blocks (pages), while segmentation divides it into variable-size logical units (segments). Paging eliminates external fragmentation but may cause internal fragmentation.
Q3: How to calculate effective memory access time?
A: EMAT = (1-p) × ma + p × page_fault_time
Where p = page fault rate, ma = memory access time
Q4: What is the difference between logical and physical address?
A: Logical address is generated by CPU (virtual), physical address is actual RAM location. MMU translates logical to physical using page tables.
Q5: Why is LRU better than FIFO?
A: LRU considers usage pattern (temporal locality), while FIFO only considers arrival time. LRU typically has fewer page faults in real-world scenarios.
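
To see Q5 in numbers, here is a FIFO counterpart to the LRU sketch above; on the same example reference string FIFO takes 10 faults versus 9 for LRU:

```python
from collections import deque

def count_faults_fifo(reference_string, frames):
    """Count page faults under FIFO replacement."""
    in_ram = set()
    arrival = deque()                      # pages in RAM, oldest first
    faults = 0
    for page in reference_string:
        if page not in in_ram:
            faults += 1                    # page fault
            if len(in_ram) == frames:
                in_ram.discard(arrival.popleft())   # evict the oldest page
            in_ram.add(page)
            arrival.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(count_faults_fifo(refs, frames=3))   # 10 page faults (LRU needs only 9)
```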

📊 Performance Metrics (Numbers Interviewers Love)

Key Formulas

  • Page Fault Rate: Page Faults / Total Memory References
  • Hit Ratio: (Total Accesses - Page Faults) / Total Accesses
  • Effective Access Time: (1-p) × ma + p × fault_time
  • Memory Utilization: (Used Pages / Total Pages) × 100%

Typical Values

  • RAM Access: 10-100 nanoseconds
  • Disk Access: 1-10 milliseconds
  • Page Fault Rate: 0.1-1% (good system)
  • TLB Hit Rate: 90-99%
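
Plugging illustrative numbers from the typical ranges above into the EMAT formula shows why even a tiny fault rate hurts:

```python
ma = 100e-9          # RAM access: 100 ns
fault_time = 8e-3    # page fault service: 8 ms (disk access dominates)
p = 0.001            # page fault rate: 0.1%

emat = (1 - p) * ma + p * fault_time
print(f"EMAT = {emat * 1e6:.2f} microseconds")   # about 8.1 us, ~80x slower than pure RAM
```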

⚙️ Implementation Details (System Design Questions)

Page Table Structure:
• Single-level: Simple but large for 64-bit systems
• Multi-level: Hierarchical, saves space
• Inverted: One entry per physical frame
• Hashed: Good for sparse address spaces

Page Size Trade-offs:
• Small Pages: Less internal fragmentation, larger page tables
• Large Pages: Smaller page tables, more internal fragmentation
• Typical Sizes: 4KB (x86), 4KB-64KB (ARM), up to 1GB (huge pages)

Modern Optimizations:
• Prefetching: Load pages before they're needed
• Clustering: Group related pages together
• Compression: Compress pages in memory
• NUMA Awareness: Consider memory locality in multi-processor systems
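
As a concrete illustration of how the page size fixes the address split, here is a sketch assuming 4KB pages and a made-up single-level page table (real x86-64 hardware walks a multi-level table, but the bit arithmetic at each level is the same idea):

```python
PAGE_SIZE = 4096          # 4 KB pages
OFFSET_BITS = 12          # log2(4096): the low 12 bits are the offset within the page

def split_address(vaddr):
    """Split a virtual address into (virtual page number, offset)."""
    return vaddr >> OFFSET_BITS, vaddr & (PAGE_SIZE - 1)

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0x12345: 0x00042}

def translate(vaddr):
    vpn, offset = split_address(vaddr)
    if vpn not in page_table:
        raise RuntimeError(f"page fault: VPN {hex(vpn)} not mapped")
    return (page_table[vpn] << OFFSET_BITS) | offset

print(hex(translate(0x12345ABC)))   # 0x42abc: frame 0x42, offset 0xABC
```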

📝 Key Points for Exams & Interviews

Remember: Virtual memory = Physical RAM + Disk space acting as RAM
Page Fault Steps: Check validity → Find free frame → Load from disk → Update page table → Continue
Valid/Invalid Bit: 1 = page in memory, 0 = page on disk or invalid
Pure Demand Paging: Start with NO pages in memory, load only when needed
Interview Favorites: LRU algorithm, Thrashing causes & solutions, Working set model, TLB concept