Co-optimizing Memory-Level Parallelism and Cache-Level Parallelism
Minimizing cache misses has traditionally been the goal when optimizing cache performance with compiler-based techniques. However, continuously growing dataset sizes, combined with the large numbers of cache banks and memory banks connected by on-chip networks in emerging manycores and accelerators, make optimizing cache hit and miss latencies as important as minimizing the cache miss rate. In this paper, we propose compiler support that optimizes both the latencies of last-level cache (LLC) hits and the latencies of LLC misses. Our approach pursues this goal by increasing the parallelism exhibited by LLC hits and LLC misses; more specifically, it tries to maximize both cache-level parallelism (CLP) and memory-level parallelism (MLP). This paper presents different incarnations of our approach and evaluates them using a set of 12 multithreaded applications. Our results indicate that (i) optimizing MLP first and CLP later brings, on average, an 11.31% performance improvement over an approach that already minimizes the number of LLC misses, and (ii) optimizing CLP first and MLP later brings a 9.43% improvement. In comparison, balancing MLP and CLP brings a 17.32% performance improvement on average.
PLDI Research Papers. Authors: Xulong Tang (Penn State), Mahmut Taylan Kandemir (Pennsylvania State University, USA), Mustafa Karakoy (TOBB University of Economics and Technology, Turkey), Meenakshi Arunachalam (Intel, USA).
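The abstract does not describe the compiler algorithm itself, but the core idea, overlapping LLC misses that target different memory banks so they are serviced in parallel, can be illustrated with a small sketch. The C fragment below is our own hypothetical illustration, not the paper's technique: the bank count (NUM_BANKS) and the bank-interleave granularity (BANK_STRIDE), as well as the assumption that addresses are interleaved across banks at that granularity, are all made up for the example. An analogous restructuring over cache banks would target CLP.

```c
/* Hypothetical sketch of an MLP-oriented loop restructuring.
 * Assumption (not from the paper): physical addresses are interleaved
 * across NUM_BANKS memory banks in BANK_STRIDE-byte units, so accesses
 * that are BANK_STRIDE bytes apart fall into different banks and their
 * misses can be outstanding at the same time. */
#include <stddef.h>

#define NUM_BANKS 8        /* assumed number of memory banks */
#define BANK_STRIDE 4096   /* assumed bytes per bank-interleave unit */

/* Baseline: a linear walk. Long runs of consecutive misses can map to
 * the same bank and queue up behind one another (low MLP). */
void sum_linear(const double *a, size_t n, double *out) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    *out = s;
}

/* MLP-oriented variant: strip-mine the loop so that each innermost
 * iteration group touches NUM_BANKS strips that, under the assumed
 * interleaving, map to distinct banks. Consecutive misses then target
 * different banks and can be serviced concurrently. */
void sum_banked(const double *a, size_t n, double *out) {
    const size_t strip = BANK_STRIDE / sizeof(double);
    double s = 0.0;
    for (size_t base = 0; base < n; base += NUM_BANKS * strip)
        for (size_t off = 0; off < strip; off++)
            for (size_t b = 0; b < NUM_BANKS; b++) {
                size_t i = base + b * strip + off;
                if (i < n)          /* guard the final partial chunk */
                    s += a[i];
            }
    *out = s;
}
```

Both functions compute the same sum; only the order of accesses differs. The trade-off the abstract's three strategies (MLP-first, CLP-first, balanced) navigate is that the bank mapping of the memory system and that of the LLC generally differ, so an access order that spreads misses across memory banks need not also spread hits across cache banks.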