Biblio

Filters: Author is Oral, Sarp
2022-09-29
Tang, Houjun, Xie, Bing, Byna, Suren, Carns, Philip, Koziol, Quincey, Kannan, Sudarsun, Lofstead, Jay, Oral, Sarp.  2021.  SCTuner: An Autotuner Addressing Dynamic I/O Needs on Supercomputer I/O Subsystems. 2021 IEEE/ACM Sixth International Parallel Data Systems Workshop (PDSW). :29–34.
In high-performance computing (HPC), scientific applications often manage massive amounts of data using I/O libraries. These libraries provide convenient data model abstractions, help ensure data portability, and, most importantly, empower end users to improve I/O performance by tuning configurations across multiple layers of the HPC I/O stack. We propose SCTuner, an autotuner integrated within the I/O library itself that dynamically tunes both the I/O library and the underlying I/O stack at application runtime. To this end, we introduce a statistical benchmarking method that profiles the behavior of individual supercomputer I/O subsystems under varied configurations across I/O layers. We use the benchmarking results as the built-in knowledge in SCTuner, implement an I/O pattern extractor, and plan to implement an online performance tuner as the SCTuner runtime. We conducted a benchmarking analysis on the Summit supercomputer and its GPFS file system, Alpine. The preliminary results show that our method effectively extracts the consistent I/O behaviors of the target system under production load, laying the groundwork for I/O autotuning at application runtime.
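
The paper itself is not reproduced here, but as a rough, hypothetical sketch of the benchmarking-plus-selection idea the abstract describes (time writes under a few I/O-stack configurations, keep the median bandwidth per configuration as "built-in knowledge", and pick the best configuration at runtime), one could imagine something along these lines. The configuration parameters, function names, and workload sizes below are illustrative assumptions, not SCTuner's actual design or API.

```python
# Hypothetical sketch of SCTuner-style statistical benchmarking and
# configuration selection; not the authors' implementation.
import os
import statistics
import tempfile
import time

# Illustrative I/O-stack configurations (here just the write transfer size);
# a real system would tune MPI-IO / file-system knobs across layers.
CONFIGS = [
    {"transfer_size": 1 << 20},   # 1 MiB writes
    {"transfer_size": 4 << 20},   # 4 MiB writes
    {"transfer_size": 16 << 20},  # 16 MiB writes
]

def benchmark_configuration(config, total_bytes=64 << 20, repeats=5):
    """Time repeated writes under one configuration and return the median
    bandwidth (MiB/s), a simple stand-in for statistical profiling."""
    block = b"\0" * config["transfer_size"]
    bandwidths = []
    for _ in range(repeats):
        with tempfile.NamedTemporaryFile() as f:
            start = time.perf_counter()
            written = 0
            while written < total_bytes:
                f.write(block)
                written += len(block)
            f.flush()
            os.fsync(f.fileno())
            elapsed = time.perf_counter() - start
        bandwidths.append((total_bytes / (1 << 20)) / elapsed)
    return statistics.median(bandwidths)

def build_knowledge_base():
    """Profile every configuration; the results play the role of the
    'built-in knowledge' the abstract describes."""
    return {cfg["transfer_size"]: benchmark_configuration(cfg) for cfg in CONFIGS}

def select_configuration(knowledge_base):
    """Pick the configuration with the best median bandwidth."""
    return max(knowledge_base, key=knowledge_base.get)

if __name__ == "__main__":
    kb = build_knowledge_base()
    for size, bw in kb.items():
        print(f"transfer_size={size:>9}  median bandwidth={bw:8.1f} MiB/s")
    print("selected transfer_size:", select_configuration(kb))
```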
2022-02-25
Xie, Bing, Tan, Zilong, Carns, Philip, Chase, Jeff, Harms, Kevin, Lofstead, Jay, Oral, Sarp, Vazhkudai, Sudharshan S., Wang, Feiyi.  2021.  Interpreting Write Performance of Supercomputer I/O Systems with Regression Models. 2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS). :557–566.
This work seeks to advance the state of the art in HPC I/O performance analysis and interpretation. In particular, we demonstrate effective techniques to: (1) model output performance in the presence of I/O interference from production loads; (2) build features from write patterns and key parameters of the system architecture and configuration; (3) employ suitable machine learning algorithms to improve model accuracy. We train models with five popular regression algorithms and conduct experiments on two distinct production HPC platforms. We find that the lasso and random forest models predict output performance with high accuracy on both target systems. We also explore the use of the models to guide adaptation in I/O middleware systems, showing potential improvements of at least 15% from model-guided adaptation on 70% of samples, and up to 10x on some samples, for both target systems.
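
As an illustration of the modeling setup the abstract mentions (lasso and random forest regressors over write-pattern and system-configuration features), a minimal scikit-learn sketch on synthetic data follows. The feature names, synthetic labels, and hyperparameters are assumptions for demonstration only and do not reproduce the authors' feature engineering, data, or results.

```python
# Illustrative sketch of regression modeling for write performance,
# in the spirit of the paper; the data and features here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical features: aggregate write size, number of writers,
# stripe count, and a background-load proxy for production interference.
write_size_gib = rng.uniform(1, 512, n)
num_writers = rng.integers(1, 1024, n)
stripe_count = rng.integers(1, 64, n)
background_load = rng.uniform(0, 1, n)

X = np.column_stack([write_size_gib, num_writers, stripe_count, background_load])

# Synthetic "observed bandwidth": grows with stripe count and writers,
# degraded by interference, plus noise. Real labels would come from
# measured write performance under production load.
y = (stripe_count * 0.8 + np.log1p(num_writers) * 2.0) \
    * (1.0 - 0.5 * background_load) + rng.normal(0, 1.0, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [
    ("lasso", Lasso(alpha=0.1)),
    ("random forest", RandomForestRegressor(n_estimators=200, random_state=0)),
]:
    model.fit(X_train, y_train)
    print(f"{name:>13}: R^2 = {r2_score(y_test, model.predict(X_test)):.3f}")
```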