(30th October 2019)
The Autonomous Decentralized System has two cache nodes, and each node acts as either an L3 or an L4 cache. The CROSS system offers a clear advantage in setting up the policies of these two caches. The L3 cache uses a write-back policy with read-through mode, so it holds only blocks that were previously written (updated). It serves a read event only when the requested block is already present in the L3 cache.
The L4 cache uses a dedicated read-cache policy, staging data from the device and predicting the probability of read accesses. This combination is very advantageous for user push and pull services. A user pull is a read request event, and an L4 > L3 cache configuration maintains a higher cache gain than the L3 cache alone. A user push is a write event and is handled by the L3 write-back cache. If the same block address is re-read, the L3 cache serves that read with very high performance; other new read requests may be served by the L4 cache on the C-Node.
Thus, the combination of the L3 cache (write-back plus read-through, and write-through) with an L4 cache is a very effective multi-cache model.
Fig-1: Non-incremental Cache
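To make the policy split concrete, here is a minimal sketch in Python of the behaviour described above: an L3 write-back cache that holds only previously written blocks and answers re-reads of them, and an L4 read cache that fetches every other read from the device. The class names, the LRU eviction, and the fixed capacities are illustrative assumptions, not part of the CROSS implementation.

```python
from collections import OrderedDict

class L3WriteBackCache:
    """Holds only blocks that were written (user push); serves re-reads of them."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()          # block address -> dirty data

    def write(self, addr, data):
        self.blocks[addr] = data             # write-back: keep the updated block in cache
        self.blocks.move_to_end(addr)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block

    def read(self, addr):
        if addr in self.blocks:              # hit: the block was written before
            self.blocks.move_to_end(addr)
            return self.blocks[addr]
        return None                          # miss: L3 holds only previously written blocks


class L4ReadCache:
    """Dedicated read cache; stages blocks fetched from the device (user pull)."""
    def __init__(self, capacity, device):
        self.capacity = capacity
        self.device = device                 # backing store: addr -> data
        self.blocks = OrderedDict()

    def read(self, addr):
        if addr not in self.blocks:
            self.blocks[addr] = self.device[addr]   # fetch from the device on a miss
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)
        self.blocks.move_to_end(addr)
        return self.blocks[addr]


def handle_read(addr, l3, l4):
    """User pull: try L3 first (recently written block), otherwise go to L4."""
    data = l3.read(addr)
    return data if data is not None else l4.read(addr)


# Illustrative usage (hypothetical device contents and capacities):
device = {n: f"block-{n}" for n in range(16)}
l3, l4 = L3WriteBackCache(4), L4ReadCache(8, device)
l3.write(3, "updated-block-3")        # user push -> L3 write-back
print(handle_read(3, l3, l4))         # re-read of the written block: served by L3
print(handle_read(7, l3, l4))         # new read request: served by L4 from the device
```

In this sketch a user push lands in L3 and a later pull of the same address hits L3, while a pull of a fresh address falls through to L4, mirroring the push/pull split described above.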
Cache Translocation penalty
The size of a cache affects its translocation time through the momentum of its cache-behavior capacity; this cost is called the "Cache Translocation penalty".
Once the cache size exceeds its capacity, cache translocation becomes very slow, and the penalty of previous I/O accesses is compounded by this historical penalty.
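The article gives no formula for this penalty, but as a rough, hypothetical illustration the function below charges a linear cost for the data that fits within capacity and a much steeper cost for the excess once capacity is exceeded, standing in for the accumulated historical penalty. The transfer rate and the overflow factor are assumptions chosen only for the example.

```python
def translocation_penalty(cached_bytes, capacity_bytes,
                          bytes_per_second=1e9, overflow_factor=10.0):
    """Estimate the time (seconds) to translocate a cache of a given size.

    Below capacity the cost grows linearly with the data to move; once the
    cached data exceeds capacity, the excess is charged at a much higher rate,
    standing in for the historical penalty on earlier I/O accesses.
    (Both rates are illustrative assumptions, not measured values.)
    """
    base = min(cached_bytes, capacity_bytes) / bytes_per_second
    excess = max(cached_bytes - capacity_bytes, 0)
    return base + overflow_factor * excess / bytes_per_second
```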