DR.GEEK

Implementation of a low-latency operation node on C-Node

( 29th October 2019 )

We implement a low-latency operation node on the C-Node (contents node). The C-Node has an L4 cache device and keeps its frequently accessed transaction data and contents on the node. The L4 device is called the M-Cell; it has 1 GB of DRAM, and its transactions of content and data are managed by the 664bit DTS AI chip.
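The source does not specify how the M-Cell decides which content stays resident in its DRAM. As an illustrative sketch only, a least-recently-used (LRU) policy is a common choice for keeping "frequently accessed" blocks cached; the class and method names below are hypothetical, not part of the actual M-Cell design.

```python
from collections import OrderedDict

class MCellCache:
    """Toy LRU model of an L4 cache device keeping hot content
    blocks in DRAM. Capacity is measured in blocks for simplicity."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # block key -> content

    def get(self, key):
        if key in self.blocks:
            self.blocks.move_to_end(key)       # mark as most recently used
            return self.blocks[key]
        return None                            # miss: caller fetches from backing storage

    def put(self, key, content):
        self.blocks[key] = content
        self.blocks.move_to_end(key)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used block
```

With a capacity of two blocks, inserting a third block evicts whichever of the first two was touched least recently.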

C-Node

Each C-Node communicates with the other members of its C-Node trio group. The C-Node is the content and data node for all of the services. A C-Node has two storage partitions.

One partition of the P-Node utilizes the L4 block cache, and it has pre-fetching read technology to enhance the communication speed with the P-Node via the Content DF.
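The post does not detail the pre-fetching read mechanism, so the following is only a minimal sketch of the general idea: when block i is read, block i+1 is fetched ahead of time so a subsequent sequential read is served from a fast buffer instead of the slower backing partition. `storage` and `prefetch_buffer` are hypothetical dict-like stands-ins.

```python
def read_with_prefetch(storage, block_id, prefetch_buffer):
    """Read one block, pre-fetching the next sequential block.

    storage         -- dict-like mapping of block id -> data (backing partition)
    prefetch_buffer -- dict holding blocks fetched ahead of demand
    """
    if block_id in prefetch_buffer:
        data = prefetch_buffer.pop(block_id)         # served from prefetch buffer
    else:
        data = storage[block_id]                     # demand read from the partition
    next_id = block_id + 1
    if next_id in storage and next_id not in prefetch_buffer:
        prefetch_buffer[next_id] = storage[next_id]  # read ahead for sequential access
    return data
```

A sequential scan then pays the slow path only on the first block; every later block is already buffered.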


Fig-1: Contents node

The C-Node is the contents node, and it also has two partitioned storage spaces. The L4 cache is implemented in this node and improves the latency on one side of the storage partition space. The other partitioned storage space is the originally mounted device, which is still good for data integrity and availability.

The difference between a P-Node and a C-Node is its storage capacity and the latency seen by the requester node (SOA node).

The file-system policy, CROSS, might be implemented on IPFS (InterPlanetary File System). The data timeline is stored, so any version of a file can be retrieved later.

L3/L4 Cache advantages

Because data caches exploit locality, the L3 cache is dedicated to processing data and securely shared data, while the C-Node holds historical data in a historical blockchain. Latency is reduced by both caches according to the following access patterns over memory addresses.


Fig-2: Cache access localities

Temporal locality – the idea that if a memory address is accessed, it will be accessed again in the near future. For example, loops fetch the same instructions over and over. As another example, calling and returning from functions causes stack memory to be accessed repeatedly.

Spatial locality – the idea that if a memory address is accessed, nearby addresses will be accessed in the near future. In the absence of a change of flow (COF), such as a conditional branch, a program typically accesses memory in a sequential fashion. When processing an array of data, an access to array member i will very likely be followed by an access to member i+1, with both members located sequentially in memory.
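Both kinds of locality can be made concrete with a toy direct-mapped cache simulator. This is an illustrative model only (the line count and line size are arbitrary, not the C-Node's actual parameters): a loop re-accessing one address hits after the first miss (temporal locality), and a sequential scan hits on every address that shares a line with the previous one (spatial locality).

```python
def simulate_cache(addresses, cache_lines=4, line_size=4):
    """Count hits of an address trace in a toy direct-mapped cache.

    Each address maps to a cache line; a hit occurs when the line
    already holds the block containing that address.
    """
    lines = {}  # line index -> cached block number
    hits = 0
    for addr in addresses:
        block = addr // line_size        # which memory block the address is in
        index = block % cache_lines      # which cache line the block maps to
        if lines.get(index) == block:
            hits += 1                    # reuse: temporal or spatial locality
        else:
            lines[index] = block         # miss: fill the line
    return hits

# Temporal locality: the same address accessed 8 times -> 1 miss, 7 hits.
loop_trace = [100] * 8
# Spatial locality: 16 sequential addresses, 4 per line -> 4 misses, 12 hits.
scan_trace = list(range(16))
```

Running the two traces shows how each locality pattern converts repeated or neighboring accesses into cache hits.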

The cache hit probability and cache gain are given by the following formula:



Fig-3: Cache performance relation
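The formula in Fig-3 is not reproduced in the text. A standard formulation of these two quantities, which the figure presumably corresponds to, is the expected access time T_avg = h·T_cache + (1 − h)·T_mem for hit probability h, and the cache gain (speedup) T_mem / T_avg:

```python
def average_access_time(hit_prob, t_cache, t_mem):
    """Expected access time: hits cost t_cache, misses cost t_mem."""
    return hit_prob * t_cache + (1 - hit_prob) * t_mem

def cache_gain(hit_prob, t_cache, t_mem):
    """Speedup of the cached system relative to always accessing memory."""
    return t_mem / average_access_time(hit_prob, t_cache, t_mem)
```

For example, with a 90% hit rate, a 1-cycle cache, and a 100-cycle memory, the average access time is 10.9 cycles, a gain of roughly 9.2x over uncached access.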
