Snehasish Kumar
Staff Software Engineer at Google, USA
about
At Google, I work on performance analysis and optimization, leading initiatives in compiler-driven, profile-guided optimization of data layout. My work has led to significant improvements in the throughput and latency of Google’s largest datacenter workloads; the code layout work, for instance, was featured on Phoronix and discussed on Hacker News. I have also contributed to Propeller, a post-link optimization framework, which received a Distinguished Paper award at ASPLOS 2023.
I’m deeply involved in hardware-software co-design and am an active participant in the RISC-V community. As Vice-Chair of the RISC-V Performance Analysis SIG, I work with several task groups and have made significant contributions to the Control Transfer Records specification. I also contribute to other specifications initiated by the Performance Analysis SIG, such as Performance Event Sampling.
Most of my work is open-sourced and available through projects such as LLVM, DynamoRIO, and tcmalloc.
academic research
As a PhD student, I conducted research on cache memory systems, coherence protocols, workload characterization, and application-specific hardware specialization. The semiconductor industry specializes hardware for better performance and energy efficiency, but this creates challenges in deciding what to specialize and how to integrate specialized units; existing methods require substantial manual effort to restructure workloads. My research focused on automated compiler techniques for specialization: I developed program analysis techniques to address this problem and synthesized an accelerator workload suite to support other researchers. I also investigated ways to reduce the energy consumed by data movement and designed adaptive caching mechanisms. My academic research has been published at top-tier conferences such as HPCA’18, HPCA’17, IISWC’16, MICRO’16, ICS’16, ISCA’15, ICS’15, ISCA’13, and MICRO’12.