An asynchronous distributed-memory optimization solver for two-stage stochastic programming problems

Title: An asynchronous distributed-memory optimization solver for two-stage stochastic programming problems
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Wang, Jingyi; Chiang, Nai-Yuan; Petra, Cosmin G.
Conference Name: 2021 20th International Symposium on Parallel and Distributed Computing (ISPDC)
Keywords: Approximation algorithms, Laboratories, Optimization, parallelization, power grids, Programming, pubcrawl, resilience, Resiliency, SCACOPF, Scalability, search problems, Stochastic Computing Security, Stochastic processes, Supercomputers
Abstract: We present a scalable optimization algorithm and its parallel implementation for large-scale two-stage stochastic programming problems, particularly the security-constrained optimal power flow models routinely used in electrical power grid operations. Such problems can be prohibitively expensive to solve at industrial scale with traditional methods or in serial. The algorithm decomposes the problem into first-stage and second-stage optimization subproblems, which are then scheduled asynchronously for efficient evaluation in parallel. Asynchronous evaluation is crucial for achieving good load balancing and parallel efficiency because the second-stage optimization subproblems have highly varying execution times. The algorithm employs simple local second-order approximations of the second-stage optimal value functions, together with exact first- and second-order derivatives for the first-stage subproblems, to accelerate convergence. To reduce the number of evaluations of computationally expensive second-stage subproblems required by the line search, we devised a flexible step-size control mechanism that can be tuned to improve performance for individual classes of problems. The algorithm is implemented in C++ using MPI non-blocking calls to overlap computation with communication and boost parallel efficiency. Numerical experiments are conducted on the Summit and Lassen supercomputers at Oak Ridge and Lawrence Livermore National Laboratories, and the scaling results show good parallel efficiency.
DOI: 10.1109/ISPDC52870.2021.9521613
Citation Key: wang_asynchronous_2021