Title | An asynchronous distributed-memory optimization solver for two-stage stochastic programming problems |
Publication Type | Conference Paper |
Year of Publication | 2021 |
Authors | Wang, Jingyi, Chiang, Nai-Yuan, Petra, Cosmin G. |
Conference Name | 2021 20th International Symposium on Parallel and Distributed Computing (ISPDC) |
Keywords | Approximation algorithms, Laboratories, Optimization, parallelization, power grids, Programming, resilience, SCACOPF, Scalability, search problems, Stochastic Computing Security, Stochastic processes, Supercomputers |
Abstract | We present a scalable optimization algorithm and its parallel implementation for large-scale two-stage stochastic programming problems, particularly the security-constrained optimal power flow models routinely used in electrical power grid operations. Such problems can be prohibitively expensive to solve at industrial scale with traditional methods or in serial. The algorithm decomposes the problem into first-stage and second-stage optimization subproblems, which are then scheduled asynchronously for efficient evaluation in parallel. Asynchronous evaluation is crucial to achieving good load balancing and parallel efficiency because the second-stage optimization subproblems have highly varying execution times. The algorithm employs simple local second-order approximations of the second-stage optimal value functions, together with exact first- and second-order derivatives for the first-stage subproblems, to accelerate convergence. To reduce the number of evaluations of the computationally expensive second-stage subproblems required by the line search, we devised a flexible step-size control mechanism that can be tuned to improve performance for individual problem classes. The algorithm is implemented in C++ using MPI non-blocking calls to overlap computation with communication and boost parallel efficiency. Numerical experiments are conducted on the Summit and Lassen supercomputers at Oak Ridge and Lawrence Livermore National Laboratories, and scaling results show good parallel efficiency. |
DOI | 10.1109/ISPDC52870.2021.9521613 |
Citation Key | wang_asynchronous_2021 |