Research on Advanced Cache Management by Learning and Predicting the Line Access Behavior
Cache management is a microarchitectural technique that automatically keeps required data in fast cache memory inside the processor as a program executes. It is not only an immediate route to higher performance and efficiency on existing computers, but also an essential technology that yields insight into the use of the memory hierarchy. Caches are known to achieve a good hit ratio with simple algorithms based on access locality. However, with the advent of recent large-scale last-level caches (LLCs), the algorithms for exploiting this capacity continue to improve. Recently, algorithms that complement history-based heuristics with learning methods have been shown to be effective.
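As an illustration of locality-based management, the following is a minimal sketch of an LRU (least recently used) cache set in Python. The class name and interface are our own illustrative choices, not taken from any particular simulator:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU model of one cache set: lines touched recently stay,
    and the least recently used line is evicted when the set is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # address -> data; insertion order tracks recency
        self.hits = 0
        self.misses = 0

    def access(self, addr):
        if addr in self.lines:
            self.hits += 1
            self.lines.move_to_end(addr)  # promote to most recently used
            return True
        self.misses += 1
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)  # evict the LRU line
        self.lines[addr] = None
        return False
```

On a small looping access pattern that fits in the set, such a policy yields compulsory misses on the first pass and hits thereafter, which is the behavior the simple locality heuristic relies on.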
However, these new methods, which are roughly classified into prefetching and replacement algorithms, have effects that vary significantly from program to program, and they can cause unexpected side effects when introduced in combination. Designing an overall cache management scheme that achieves maximum performance is therefore complicated and poorly understood.
We are conducting research to improve cache management through two approaches. The first is a hybrid approach that covers existing methods and selects the optimal combination and parameters according to program behavior. In this approach, the major methods of recent years are implemented on a simulator; through comparative experiments, effective combinations are explored, and an efficient implementation of the chosen combination is proposed.
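One well-known hardware mechanism for selecting between competing policies at run time is set dueling, where a few "leader" sets are pinned to each policy and a saturating counter steers the remaining "follower" sets toward whichever policy is missing less. The sketch below is a simplified illustration of that idea, not the selection logic used in our work; the class name, leader assignment, and counter width are all assumptions for the example:

```python
class SetDuelingSelector:
    """Simplified set-dueling sketch: leader sets always run policy A or B,
    a saturating counter (PSEL) accumulates their relative miss counts,
    and follower sets adopt whichever policy the counter favors."""

    def __init__(self, num_sets, leaders_per_policy=2, ctr_bits=10):
        self.max_ctr = (1 << ctr_bits) - 1
        self.ctr = self.max_ctr // 2  # start neutral
        # Hypothetical leader assignment: first sets lead A, last sets lead B.
        self.leaders_a = set(range(leaders_per_policy))
        self.leaders_b = set(range(num_sets - leaders_per_policy, num_sets))

    def policy_for(self, set_index):
        if set_index in self.leaders_a:
            return "A"
        if set_index in self.leaders_b:
            return "B"
        # Followers: a low counter means B-leaders missed more, so use A.
        return "A" if self.ctr < self.max_ctr // 2 else "B"

    def on_miss(self, set_index):
        # A miss in an A-leader set is evidence for policy B, and vice versa.
        if set_index in self.leaders_a:
            self.ctr = min(self.ctr + 1, self.max_ctr)
        elif set_index in self.leaders_b:
            self.ctr = max(self.ctr - 1, 0)
```

The same counter structure generalizes to selecting parameter values (e.g., insertion positions or prefetch aggressiveness) by dueling more than two leader groups.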
The other approach is to design prefetching and replacement algorithms jointly and to propose unified line-selection methods. Focusing on the observation that prefetching disturbs a replacement algorithm's prediction of line priority, we propose a learning mechanism that solves this problem.
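To make the interference concrete, one simple, well-known mitigation (in the spirit of RRIP-style prefetch-aware insertion, not our proposed mechanism) is to insert prefetched lines with a distant re-reference prediction so that useless prefetches are evicted before demand-fetched lines. A sketch under those assumptions:

```python
# RRIP-style set with prefetch-aware insertion. Demand fills get an
# intermediate re-reference prediction value (RRPV); prefetch fills get
# the most distant prediction, so unused prefetches become the first
# eviction candidates instead of displacing demand-fetched lines.

RRPV_MAX = 3        # 2-bit RRPV
RRPV_DEMAND = 2     # intermediate prediction for demand fills
RRPV_PREFETCH = 3   # distant prediction for prefetch fills

class RRIPSet:
    def __init__(self, ways):
        self.ways = ways
        self.lines = {}  # addr -> rrpv

    def _victim(self):
        # Age all lines until one reaches RRPV_MAX, then evict it.
        while True:
            for addr, rrpv in self.lines.items():
                if rrpv == RRPV_MAX:
                    return addr
            for addr in self.lines:
                self.lines[addr] += 1

    def fill(self, addr, prefetch=False):
        if len(self.lines) >= self.ways:
            del self.lines[self._victim()]
        self.lines[addr] = RRPV_PREFETCH if prefetch else RRPV_DEMAND

    def access(self, addr):
        if addr in self.lines:
            self.lines[addr] = 0  # predict near-term reuse on a hit
            return True
        return False
```

Without the prefetch distinction, a prefetched-but-never-used line would carry the same priority as a demand line, which is exactly the kind of disturbance to priority prediction that motivates a jointly learned line-selection mechanism.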
- Hayato Nomura, Hiroyuki Katchi, Hidetsugu Irie, Shuichi Sakai: “Stubborn Strategy to Mitigate Remaining Cache Misses”, Int. Conf. on Computer Design, pp. 388–391, Oct., 2016.