Caching and Tiering
• Caching and Tiering are techniques for improving the performance of a hierarchy of storage devices
• Caching copies data residing on a slow storage device to a temporary residence on a faster device
• Tiering moves data residing on a slow storage device to a new permanent residence on a faster device
• Caching and tiering use different policies to drive their operations, due to their nature and history
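The copy-versus-move distinction above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`cache_read`, `tier_up`, and plain dicts standing in for the fast and slow devices), not any real storage API:

```python
def cache_read(key, fast, slow):
    """Caching: copy data to the fast device; the slow copy remains."""
    if key not in fast:
        fast[key] = slow[key]      # copy up; the slow device still holds it
    return fast[key]

def tier_up(key, fast, slow):
    """Tiering: move data to the fast device; it now lives only there."""
    fast[key] = slow.pop(key)      # move up; the slow device no longer holds it
    return fast[key]

# Caching leaves the original behind
slow, fast = {"block7": b"data"}, {}
cache_read("block7", fast, slow)
print("block7" in slow)            # True

# Tiering relocates it
slow2, fast2 = {"block7": b"data"}, {}
tier_up("block7", fast2, slow2)
print("block7" in slow2)           # False
```

The practical consequence: a cached copy can simply be discarded on eviction (the slow tier is still authoritative), while tiered data must be moved back down, since the fast tier is now its only home.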
Storage Hierarchy Members (Ranked Fastest to Slowest)
A general rule for managing storage hierarchies: the smaller the performance ratio between two tiers, the more thought you should put into deciding whether to move or copy data between those tiers to improve overall performance.
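A quick effective-access-time calculation shows why the rule holds. The latencies below are illustrative assumptions (rough order-of-magnitude figures, not measurements):

```python
def effective_latency(hit_rate, fast_us, slow_us):
    """Average access time when hit_rate of accesses land on the fast tier."""
    return hit_rate * fast_us + (1 - hit_rate) * slow_us

hit = 0.9

# Large ratio (~1000x): DRAM (~0.1 us) in front of an NVMe SSD (~100 us)
big_ratio = effective_latency(hit, 0.1, 100)    # ~10 us vs 100 us: ~10x win

# Small ratio (2x): NVMe (~100 us) in front of a SATA SSD (~200 us)
small_ratio = effective_latency(hit, 100, 200)  # ~110 us vs 200 us: <2x win

print(big_ratio, small_ratio)
```

With the same 90% hit rate, the wide-ratio pair delivers roughly a 10x improvement, while the narrow-ratio pair delivers well under 2x, so the machinery of moving or copying data between close tiers may not pay for itself.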
History of Caching
• Introduced in the 1970s in CPU design
• SRAM cache in front of DRAM (or core) main memory
• SRAM was 10-20x the performance of main memory at the time
• Direct-mapped, write-through caches at first
• Write-back caches, limited associativity shipping by 1975
• Victim caches introduced with the DEC Alpha CPU in the 1990s
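The early design points above (direct-mapped placement, write-through updates) can be sketched concretely. This is a simplified, hypothetical model, word-granularity with one entry per set, not a description of any particular CPU:

```python
NUM_SETS = 4  # direct-mapped: each address maps to exactly one slot

class DirectMappedWriteThrough:
    def __init__(self, backing):
        self.backing = backing              # stands in for main memory
        self.slots = [None] * NUM_SETS      # each slot holds (addr, value)

    def read(self, addr):
        i = addr % NUM_SETS
        slot = self.slots[i]
        if slot is not None and slot[0] == addr:
            return slot[1]                  # hit
        value = self.backing[addr]          # miss: fetch from main memory
        self.slots[i] = (addr, value)       # fill, evicting any conflicting addr
        return value

    def write(self, addr, value):
        # Write-through: main memory is updated on every write,
        # so an evicted line never needs to be written back.
        self.backing[addr] = value
        self.slots[addr % NUM_SETS] = (addr, value)

mem = {0: 10, 4: 40}
c = DirectMappedWriteThrough(mem)
c.read(0)
c.read(4)        # addr 4 also maps to slot 0 and evicts addr 0 (conflict miss)
c.write(8, 80)   # write-through: visible in mem immediately
print(mem[8])    # 80
```

The conflict miss on addresses 0 and 4 is exactly the pathology that later limited-associativity designs, and victim caches, were introduced to soften.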
History of Tiering
▪ Tiering was originally done manually in the 1960s in the High Performance Computing community
• Load data from tape to disk, run the monster job, write modified data and results onto (new) tape
▪ IBM DFHSM (1970s): tiering between disk and tape
• IBM’s user community (SHARE/GUIDE) helped define it
• Moved files or groups of files between disk and tape
• Semi-automatic, based on user-defined policies and scripts
• Eventually morphed into an archiving facility
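The policy-driven, semi-automatic migration described above can be sketched as a periodic pass that moves idle files to the slower tier. All names and the 30-day threshold here are hypothetical, in the spirit of a DFHSM-style age policy rather than its actual rules:

```python
import time

MIGRATE_AFTER_DAYS = 30          # hypothetical user-defined policy threshold

def migration_pass(disk, tape, now=None):
    """Move files not accessed within the policy window to the slower tier."""
    now = now if now is not None else time.time()
    cutoff = now - MIGRATE_AFTER_DAYS * 86400
    for name in list(disk):
        data, last_access = disk[name]
        if last_access < cutoff:
            tape[name] = disk.pop(name)   # move, not copy: this is tiering

now = time.time()
disk = {
    "old.dat": (b"x", now - 40 * 86400),  # idle 40 days: migrates
    "hot.dat": (b"y", now),               # recently used: stays
}
tape = {}
migration_pass(disk, tape, now)
print(sorted(tape))   # ['old.dat']
```

Note the `pop`: as with all tiering, the file is relocated rather than duplicated, so a later access to `old.dat` would require recalling it from the tape tier.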