  • Essay / Bid Cost Minimization - 1104

    Summary: Most independent system operators (ISOs) adopt bid cost minimization (BCM) to select offers and their respective generation levels while minimizing the total bid cost. It has been shown that the customer payment costs resulting from the selected offers can differ significantly from the customer payments resulting from payment cost minimization (PCM), where payment costs are minimized directly. To solve the PCM in the dual space, the Lagrangian relaxation and surrogate optimization approach is frequently used. When standard optimization methods, such as branch-and-cut, become ineffective because of the large size of a problem, this approach provides a good solution in reasonable CPU time. The convergence of the standard surrogate subgradient approach, however, depends on the optimal dual value, which is generally unknown. Additionally, when the surrogate subgradient approach is used, the upper-bound property is lost, so additional conditions are needed to ensure convergence. The main objective of this paper is to develop a convergent variation of the surrogate subgradient method that does not invoke the optimal dual value, and to show the relevance and effectiveness of the new method for solving large constrained optimization problems such as the PCM.

    I. INTRODUCTION

    Currently, most ISOs in the United States adopt the bid cost minimization (BCM) settlement mechanism to minimize the total cost of bids. In this configuration, customer payment costs, which are determined by a mechanism that assigns uniform market clearing prices (MCPs), differ from the minimized supply costs. An alternative method for determining customer payment costs is ...

    ... middle of paper ...

    ... a suitable step size [8]. Perhaps the most recent and comprehensive study of subgradient methods for convex optimization is [2]. The Lagrangian relaxation and surrogate subgradient optimization approach is specifically addressed in [6] and [5]. The first article develops the surrogate subgradient method and proves its convergence; compared with subgradient and gradient methods, the surrogate subgradient approach finds better and smoother directions in less CPU time. The latter article extends the methodology to the solution of coupled problems. Since the optimal dual value and the optimal multipliers remain unknown, it is necessary to develop a surrogate subgradient method whose convergence does not depend on the optimal dual value. Whether the surrogate subgradient method converges with a dynamic or a constant step size remains an open question.
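
    To make the BCM/PCM gap concrete, here is a small, purely hypothetical single-hour instance; the offers, capacities, and demand below are illustrative assumptions, not data from the paper. Offer C is an all-or-nothing block, and that non-convexity is what allows the schedule with the lowest bid cost to carry a much higher uniform-price payment.

        # Hypothetical single-hour instance (illustrative numbers, not from the paper).
        # Offer C is an all-or-nothing block; this non-convexity is what lets the
        # lowest-bid-cost schedule carry a much higher uniform-price payment.
        offers = {
            "A": {"price": 5.0,  "cap": 90.0},    # flexible, 0-90 MW
            "P": {"price": 50.0, "cap": 10.0},    # flexible, 0-10 MW
            "C": {"price": 10.0, "cap": 100.0},   # all-or-nothing 100 MW block
        }
        D = 100.0  # demand in MW

        # The two feasible dispatch schedules for this instance
        # (committing extra units at zero output changes nothing).
        schedules = {
            "A(90) + P(10)": {"A": 90.0, "P": 10.0},
            "C(100) alone":  {"C": 100.0},
        }

        for name, sched in schedules.items():
            bid_cost = sum(offers[g]["price"] * q for g, q in sched.items())
            mcp = max(offers[g]["price"] for g in sched)   # uniform market clearing price
            payment = mcp * D                              # what customers pay at the MCP
            print(f"{name:15s} bid cost = {bid_cost:6.0f}   MCP = {mcp:4.0f}   payment = {payment:6.0f}")

    Under BCM the first schedule is selected (bid cost 950 versus 1000), yet at a uniform MCP of 50 $/MWh customers would pay 5000; PCM would instead select the block offer, and customers would pay 1000.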
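
    The dependence on the optimal dual value mentioned above comes from the Polyak-type step size used with the standard surrogate subgradient update. The sketch below is a minimal illustration on a toy single-constraint dispatch problem, assuming a round-robin approximate optimization of one subproblem per iteration and a step size that explicitly uses the optimal dual value q* (computed here by brute force only because the instance is tiny); it is not the convergent variation the paper develops.

        import numpy as np

        # Toy dispatch problem:  min  sum_i c[i]*x[i]
        #                        s.t. sum_i x[i] >= D,  0 <= x[i] <= cap[i]
        # The coupling constraint is relaxed with a multiplier lam >= 0.
        c   = np.array([5.0, 10.0, 50.0])     # illustrative offer prices
        cap = np.array([90.0, 100.0, 10.0])   # illustrative capacities
        D   = 100.0                           # demand

        def best_response(i, lam):
            """Exact minimizer of (c[i] - lam)*x over [0, cap[i]]."""
            return cap[i] if c[i] < lam else 0.0

        def relaxed_value(x, lam):
            """(Surrogate) dual value L(x, lam) = lam*D + sum_i (c[i]-lam)*x[i]."""
            return lam * D + np.sum((c - lam) * x)

        # The optimal dual value q* is found by brute force; the whole point of
        # the sketch is that the step-size rule below cannot do without it.
        grid   = np.linspace(0.0, 100.0, 10001)
        q_star = max(relaxed_value(np.array([best_response(i, l) for i in range(3)]), l)
                     for l in grid)

        lam, x, gamma = 0.0, np.zeros(3), 0.5
        for k in range(200):
            i = k % 3                        # approximate optimization: re-solve only
            x[i] = best_response(i, lam)     # one subproblem per iteration
            g = D - np.sum(x)                # surrogate subgradient of the dual
            if abs(g) < 1e-9:
                break
            q_tilde = relaxed_value(x, lam)  # surrogate dual value; the bounding
                                             # property of the exact dual is lost
            # Polyak-type step: needs the optimal dual value q*, which is unknown
            # in realistic instances -- the dependence the paper aims to remove.
            alpha = gamma * (q_star - q_tilde) / g**2
            lam = max(0.0, lam + alpha * g)  # projected multiplier update

        print(f"lam ~ {lam:.2f}  surrogate dual ~ {relaxed_value(x, lam):.1f}  q* = {q_star:.1f}")

    In a realistic PCM instance q* is not available, which is exactly why the step-size rule above cannot be applied as written and why a variation that avoids q* is needed.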