Biblio
The inherent heterogeneity in the uncertainty of variable generation (e.g., wind, solar, tidal, and wave power) in the electric grid, coupled with the dynamic, distributed architecture of its sub-systems and the need for information synchronization, makes resource allocation and monitoring a tremendous challenge for the next-generation smart grid. Unfortunately, the deployment of distributed algorithms across microgrids has been overlooked in the electric grid sector. In particular, centralized methods for managing resources and data may not be sufficient to monitor a complex electric grid. This paper discusses a decentralized constrained decomposition using Linear Programming (LP) that optimizes inter-area transfer across microgrids and thereby reduces the total generation cost of the grid. A test grid based on the IEEE 14-bus system is sectioned into two and three areas, and the effect of this partitioning on inter-area transfer is analyzed.
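To make the LP view of inter-area transfer concrete, the following is a minimal toy sketch (not the paper's actual decomposition) of a two-area dispatch with a tie-line transfer variable, solved with scipy.optimize.linprog; the cost coefficients, demands, and limits are invented for illustration.

```python
# Toy two-area dispatch with a tie-line transfer variable (illustrative only;
# the costs, demands, and limits below are assumptions, not from the paper).
from scipy.optimize import linprog

c = [20.0, 35.0, 0.0]          # $/MWh for g1, g2; the transfer t itself has no cost
# Power balance per area: g1 - t = d1,  g2 + t = d2
A_eq = [[1.0, 0.0, -1.0],
        [0.0, 1.0,  1.0]]
b_eq = [80.0, 60.0]            # area demands (MW)
bounds = [(0.0, 150.0),        # generator 1 capacity
          (0.0, 100.0),        # generator 2 capacity
          (-40.0, 40.0)]       # tie-line transfer limit

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x, res.fun)          # dispatch [g1, g2, t] and total generation cost
```

In this toy case the cheaper area-1 generator is pushed to serve part of area 2's demand until the tie-line limit binds, which is the mechanism by which inter-area transfer lowers total generation cost.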
This paper presents one-layer projection neural networks, based on projection operators, for solving constrained variational inequalities and related optimization problems. Sufficient conditions for global convergence of the proposed neural networks are provided based on Lyapunov stability. Compared with existing neural networks for variational inequalities and optimization, the proposed networks have lower model complexity. In addition, some improved criteria for global convergence are given. Compared with our previous work, a design parameter has been added to the projection neural network models, which results in improved performance. Simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural networks.
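A classical one-layer projection neural network for a variational inequality VI(F, Omega) has dynamics of the form x' = lam * (P_Omega(x - alpha * F(x)) - x). A minimal Euler-discretized sketch for a box-constrained example (the mapping F, the box, and the step sizes are illustrative assumptions, not taken from the paper) is:

```python
# Euler simulation of a one-layer projection neural network
#   x' = lam * ( P_Omega(x - alpha * F(x)) - x )
# for a box-constrained variational inequality (illustrative values only).
import numpy as np

def F(x):                                   # monotone mapping, e.g. gradient of a convex quadratic
    Q = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([-2.0, 1.0])
    return Q @ x + b

lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
project = lambda z: np.clip(z, lo, hi)      # P_Omega for a box constraint set

x, lam, alpha, dt = np.zeros(2), 5.0, 0.2, 0.01
for _ in range(5000):                       # forward-Euler integration of the network dynamics
    x = x + dt * lam * (project(x - alpha * F(x)) - x)

print(x)                                    # approximate solution of VI(F, Omega)
```

The scaling factor lam plays the role of the added design parameter mentioned in the abstract: it rescales the speed of the flow without changing its equilibria.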
Distributed optimization is an emerging research topic in which agents in a network solve a problem by exchanging information, reflecting how an optimization task is shared among many decision-makers in real life. In this paper, we introduce two continuous-time algorithms to solve distributed optimization problems with equality constraints, where the cost function is expressed as a sum of functions and each function is associated with an agent. We first construct a continuous dynamic system using the Lagrangian function and show that the algorithm is locally convergent and globally stable under certain conditions. Then, we modify the Lagrangian function and re-construct the dynamic system to prove that the new algorithm converges under more relaxed conditions. Finally, we present simulations to verify our theoretical results.
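A standard continuous-time Lagrangian (saddle-point) flow for minimizing a sum of local costs under an equality constraint runs gradient descent on the primal variables and gradient ascent on the multiplier. The sketch below uses an assumed resource-allocation constraint and quadratic local costs purely for illustration; it is not the paper's construction.

```python
# Primal-dual (saddle-point) gradient flow for  min sum_i f_i(x_i)  s.t.  sum_i x_i = d,
# with L(x, lam) = sum_i f_i(x_i) + lam * (sum_i x_i - d).
# Each agent i only needs grad f_i(x_i) and the shared multiplier (illustrative data).
import numpy as np

a = np.array([1.0, 2.0, 4.0])           # f_i(x_i) = 0.5 * a_i * x_i^2 (assumed local costs)
d = 10.0                                # coupling equality constraint: sum_i x_i = d

x, lam, dt = np.zeros(3), 0.0, 0.01
for _ in range(20000):                  # forward-Euler integration of the flow
    x_dot = -(a * x + lam)              #  x_i' = -dL/dx_i = -(f_i'(x_i) + lam)
    lam_dot = x.sum() - d               #  lam' = +dL/dlam = sum_i x_i - d
    x, lam = x + dt * x_dot, lam + dt * lam_dot

print(x, lam)                           # at the optimum the marginal costs equalize: a_i * x_i = -lam
```

For strongly convex local costs this flow converges to the unique saddle point of L, which is the type of equilibrium the abstract's convergence analysis concerns.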
In this paper, we consider a distributed algorithm based on a continuous-time multi-agent system for solving a constrained optimization problem. The global objective function is the sum of the agents' individual objective functions, subject to a group of convex inequality constraints. Because the local objective functions are not explicitly known to all the agents, the problem has to be solved in a distributed manner through cooperation between agents. We propose continuous-time distributed gradient dynamics, based on the KKT conditions and the Lagrangian multiplier method, to solve the optimization problem. We show that all the agents asymptotically converge to the same optimal solution with the help of a constructed Lyapunov function and the LaSalle invariance principle for hybrid systems.
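For an inequality-constrained problem, KKT-based primal-dual gradient dynamics typically keep the multiplier nonnegative via a projection. The following single-agent sketch illustrates that basic mechanism only; the objective, constraint, and step size are assumptions, and the multi-agent consensus part of the paper's method is omitted.

```python
# KKT-based primal-dual gradient flow for  min f(x)  s.t.  g(x) <= 0,
# with the multiplier mu kept in the nonnegative orthant by projection.
# Objective, constraint, and step size are illustrative assumptions.
import numpy as np

f_grad = lambda x: 2.0 * (x - np.array([2.0, 1.0]))   # f(x) = ||x - (2, 1)||^2
g      = lambda x: x.sum() - 1.0                       # g(x) = x1 + x2 - 1 <= 0
g_grad = lambda x: np.array([1.0, 1.0])

x, mu, dt = np.zeros(2), 0.0, 0.01
for _ in range(20000):
    x  = x - dt * (f_grad(x) + mu * g_grad(x))         # primal descent on the Lagrangian
    mu = max(0.0, mu + dt * g(x))                      # projected dual ascent keeps mu >= 0
    # complementary slackness emerges at the limit: mu * g(x) -> 0

print(x, mu)    # KKT point: x lands on x1 + x2 = 1 with an active multiplier mu > 0
```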