This technology is an improved cache line data structure that enables fast metadata processing with low storage costs.
A cache is used by a computer's CPU to retrieve data from main memory efficiently. Current caches are partitioned into ‘lines’, each carrying a ‘tag’ that records the corresponding main-memory address, and data is located by comparing the tag of the requested address against the stored tags. However, tag comparison is slow and tag storage overhead is high. Additionally, to preserve hit rate and latency, metadata processing is often offloaded to a separate metadata cache, further increasing storage and performance overhead.
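For illustration, a simplified C sketch of the conventional tag-compare lookup described above; the set-associative layout, field names, and parameters are assumptions for this example, not details from the technology itself.

#include <stdbool.h>
#include <stdint.h>

#define WAYS 4  /* illustrative associativity, not from the source */

typedef struct {
    bool     valid;
    uint64_t tag;      /* upper address bits stored per line: pure overhead */
    uint8_t  data[64]; /* 64-byte cache line payload */
} way_t;

/* Conventional lookup: compare the requested address's tag against every
 * stored tag in the set before any data can be returned. */
static int lookup(const way_t set[WAYS], uint64_t req_tag)
{
    for (int w = 0; w < WAYS; w++)
        if (set[w].valid && set[w].tag == req_tag)
            return w;   /* hit: data found in way w */
    return -1;          /* miss */
}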
This technology is an in-place, compact cache line data structure with associated algorithms for fast metadata processing and for object and sub-object bounds checking. The approach minimizes metadata storage and compute costs, offering a 64x reduction in overhead (1 bit per 64-byte cache line) by storing pointers within the cache line itself so that metadata can be accessed inline rather than through a separate cache. The technology integrates easily with existing memory security architectures and has the potential to reduce latency and storage overhead, ultimately improving system efficiency.
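A minimal C sketch of the inline-metadata idea, assuming a 64-byte line, a single per-line metadata-present bit, and a metadata pointer held in the last 8 bytes of the line; the structure names, layout, and bounds-check record are hypothetical and not taken from the source.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define LINE_BYTES 64

/* Hypothetical in-place cache line: one metadata-present bit per 64-byte
 * line (the claimed 1-bit-per-line overhead), with a metadata pointer
 * stored inline in the line's final 8 bytes when the bit is set. */
typedef struct {
    uint8_t has_metadata;        /* would be a single bit in hardware */
    uint8_t data[LINE_BYTES];    /* 64-byte cache line payload */
} cache_line_t;

/* Hypothetical metadata record used for object bounds checking. */
typedef struct {
    uintptr_t base;  /* object base address */
    size_t    size;  /* object size in bytes */
} bounds_md_t;

/* Recover the inline metadata pointer from the last 8 bytes of the line,
 * avoiding any lookup in a separate metadata cache. */
static bounds_md_t *line_metadata(const cache_line_t *line)
{
    if (!line->has_metadata)
        return NULL;
    bounds_md_t *md;
    memcpy(&md, &line->data[LINE_BYTES - sizeof md], sizeof md);
    return md;
}

/* Bounds-check an access against the inline metadata, if present. */
static int access_in_bounds(const cache_line_t *line, uintptr_t addr, size_t len)
{
    const bounds_md_t *md = line_metadata(line);
    if (md == NULL)
        return 1;  /* no metadata attached: access permitted */
    return addr >= md->base && addr + len <= md->base + md->size;
}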
IR CU18384
Licensing Contact: Greg Maskel