331

Optimal power flow via quadratic modeling

Tao, Ye 29 August 2011 (has links)
Optimal power flow (OPF) is the tool of choice for determining the optimal operating status of the power system by managing controllable devices. The importance of the OPF approach has increased due to rising energy prices and the availability of more control devices. Existing OPF approaches exhibit shortcomings. Current OPF algorithms can be classified into (a) nonlinear programming, (b) intelligent search methods, and (c) sequential algorithms. Nonlinear programming algorithms focus on the solution of the Kuhn-Tucker conditions; they require a feasible starting solution, and the model includes all constraints; these characteristics limit the robustness and efficiency of these methods. Intelligent search methods are first-order methods and are totally inefficient for large-scale systems. Traditional sequential algorithms require a feasible starting solution, a requirement that limits their robustness. Present implementations of sequential algorithms use traditional modeling that results in inefficient algorithms. The research described in this thesis overcomes these shortcomings by developing a robust and highly efficient algorithm. Robustness is defined as the ability to provide a solution for any system; the proposed approach achieves robustness by operating on suboptimal points and moving toward feasibility; it stops at a suboptimal solution if an optimum does not exist. Efficiency is achieved by (a) converting the nonlinear OPF problem to a quadratic problem and (b) limiting the size of the model; the quadratic model enables fast convergence, and an algorithm that identifies the active constraints limits the size of the model by including only the active constraints. A concise description of the method is as follows: the proposed method starts from an arbitrary state, which may be infeasible; model equations and system constraints are satisfied by introducing artificial mismatch variables at each bus. Mathematically, this is an optimal but infeasible point. At each iteration, the artificial mismatches are reduced while the solution point maintains optimality. When the mismatches reach zero, the solution becomes feasible and the optimum has been found; otherwise, the mismatch residuals are converted to load shedding and the algorithm provides a suboptimal but feasible solution. Therefore, the algorithm operates on infeasible but optimal points and moves toward feasibility. The proposed algorithm maximizes efficiency with two innovations: (a) quadratization, which converts the nonlinear model to a quadratic one with excellent convergence properties, and (b) minimization of model size by identifying the active constraints, which are the only constraints included in the model. Finally, sparsity techniques are utilized that provide the best computational efficiency for large systems. This dissertation demonstrates the proposed OPF algorithm on systems with up to three hundred buses and compares it with several well-known OPF software packages. The results show that the proposed algorithm converges fast and its runtime is competitive. Furthermore, the proposed method is extended to a three-phase OPF (TOPF) algorithm for unbalanced networks using the quadratized three-phase power system model. An example application of the TOPF is presented. Specifically, TOPF is utilized to address the problem of fault-induced delayed voltage recovery (FIDVR) phenomena, which lead to unwanted relay operations, stalling of motors, and load disruptions.
This thesis presents a methodology that will optimally enhance the distribution system to mitigate or eliminate the onset of FIDVR. The time-domain simulation method has been integrated with a TOPF model and a dynamic programming optimization algorithm to provide the optimal reinforcing strategy for the circuits.
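The quadratization idea can be sketched on a toy equation. The following is a minimal illustration (an assumed example, not the thesis's OPF model): an auxiliary variable converts a quartic residual into a system of quadratic equations, on which Newton's method shows the fast convergence the abstract refers to.

```python
# Minimal sketch of "quadratization" (illustrative only, not the thesis's OPF
# model): replace the quartic equation x**4 - 2 = 0 by the quadratic system
#   y - x**2 = 0,   y**2 - 2 = 0
# and solve it with Newton's method; every residual is at most quadratic, so
# the Jacobian is linear in the unknowns and convergence is fast.
import numpy as np

def residual(z):
    x, y = z
    return np.array([y - x**2, y**2 - 2.0])

def jacobian(z):
    x, y = z
    return np.array([[-2.0 * x, 1.0],
                     [0.0, 2.0 * y]])

z = np.array([1.0, 1.0])            # arbitrary (possibly "infeasible") start
for it in range(20):
    r = residual(z)
    if np.linalg.norm(r) < 1e-12:
        break
    z -= np.linalg.solve(jacobian(z), r)

print(f"x = {z[0]:.12f} (2**0.25 = {2**0.25:.12f}), iterations = {it}")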
332

On Optimal Link Activation with Interference Cancelation in Wireless Networking

Yuan, Di, Angelakis, Vangelis, Chen, Lei, Karipidis, Eleftherios, Larsson, Erik G. January 2013 (has links)
A fundamental aspect in performance engineering of wireless networks is optimizing the set of links that can be concurrently activated to meet given signal-to-interference-and-noise ratio (SINR) thresholds. The solution of this combinatorial problem is the key element in scheduling and cross-layer resource management. In this paper, we assume multiuser decoding receivers, which can cancel strongly interfering signals. As a result, in contrast to classical spatial reuse, links close to each other are more likely to be active concurrently. Our focus is to gauge the gain of successive interference cancellation (SIC), as well as the simpler, yet instructive, case of parallel interference cancellation (PIC), in the context of optimal link activation. We show that both problems are NP-hard and develop compact integer linear programming formulations that make it possible to approach global optimality. We provide an extensive numerical performance evaluation, indicating that for low to medium SINR thresholds the improvement is quite substantial, especially with SIC, whereas for high SINR thresholds the improvement diminishes and both schemes perform equally well.
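As a hedged sketch of the kind of formulation involved, the following builds the classical big-M integer linear program for maximum link activation without interference cancellation, i.e., the baseline against which the PIC/SIC gains are gauged. The instance data (gain matrix G, powers P, threshold gamma, noise sigma) are invented for illustration, and SciPy's MILP interface stands in for whatever solver the paper used.

```python
# Baseline sketch: maximum link activation under SINR constraints *without*
# interference cancellation, as a big-M integer linear program. Data are toy
# values, not from the paper.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
n = 6                                   # number of candidate links
G = rng.uniform(0.01, 0.1, (n, n))      # G[j, i]: gain from tx j to rx i
np.fill_diagonal(G, rng.uniform(0.5, 1.0, n))
P, gamma, sigma = np.ones(n), 2.0, 0.05

# Link i may be active (x_i = 1) only if
#   P_i*G[i,i] >= gamma*(sigma + sum_{j != i} P_j*G[j,i]*x_j).
# Big-M relaxation as a "<=" row:  M_i*x_i + gamma*sum_j P_j*G[j,i]*x_j <= rhs_i.
M = gamma * (sigma + P @ G)             # safe per-receiver big-M values
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    for j in range(n):
        if j != i:
            A[i, j] = gamma * P[j] * G[j, i]
    A[i, i] = M[i]
    b[i] = P[i] * G[i, i] - gamma * sigma + M[i]

res = milp(c=-np.ones(n),               # maximize the number of active links
           constraints=LinearConstraint(A, -np.inf, b),
           integrality=np.ones(n), bounds=Bounds(0, 1))
print("active links:", np.flatnonzero(res.x > 0.5))
```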
333

P-Cycle-based Protection in Network Virtualization

Song, Yihong 25 February 2013 (has links)
As the "network of networks", the Internet has played a central and crucial role in modern society, culture, knowledge, and business for over two decades by supporting a wide variety of network technologies and applications. However, due to its popularity and multi-provider nature, the future development of the Internet is limited to simple incremental updates. To address this challenge, network virtualization has been propounded as a potential candidate to provide the essential basis for the future Internet architecture. Network virtualization is capable of providing an open and flexible networking environment in which service providers are allowed to dynamically compose multiple coexisting heterogeneous virtual networks on a shared substrate network. Such a flexible environment will foster the deployment of diversified services and applications. A major challenge in network virtualization is Virtual Network Embedding (VNE), which aims to statically or dynamically allocate virtual nodes and virtual links onto substrate resources, i.e., physical nodes and paths. Making effective use of substrate resources requires highly efficient and survivable VNE techniques. The main contribution of this thesis is two high-performance p-Cycle-based survivable virtual network embedding approaches. These approaches take advantage of p-Cycle-based protection techniques that minimize the backup resources while providing a full VN protection scheme against link and node failures.
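The protection property that makes p-cycles attractive can be shown on a toy graph. A minimal sketch follows (illustrative only, not the thesis's embedding algorithm): a pre-configured cycle protects its on-cycle links with one backup path each and its straddling links (chords) with two.

```python
# Minimal sketch of the p-cycle principle on a toy graph: links on a
# pre-configured cycle get one backup path (the rest of the cycle), while
# "straddling" links (chords with both endpoints on the cycle) get two.
import networkx as nx

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 5), (5, 1),  # a 5-node ring
                  (1, 3), (2, 5)])                         # two chords

cycle = [1, 2, 3, 4, 5]                 # the p-cycle (assumed given)
on_cycle = {frozenset(e) for e in zip(cycle, cycle[1:] + cycle[:1])}

for u, v in G.edges():
    e = frozenset((u, v))
    if e in on_cycle:
        paths = 1                       # the remaining arc of the cycle
    elif u in cycle and v in cycle:
        paths = 2                       # both arcs of the cycle protect it
    else:
        paths = 0                       # not protected by this p-cycle
    print(f"link {u}-{v}: {paths} protection path(s)")
```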
334

Computational video: post-processing methods for stabilization, retargeting and segmentation

Grundmann, Matthias 05 April 2013 (has links)
In this thesis, we address a variety of challenges for analysis and enhancement of Computational Video. We present novel post-processing methods to bridge the gap between professionally produced videos and the casually shot videos mostly seen on online sites. Our research presents solutions to three well-defined problems: (1) video stabilization and rolling shutter removal in casually shot, uncalibrated videos; (2) content-aware video retargeting; and (3) spatio-temporal video segmentation to enable efficient video annotation. We showcase several real-world applications building on these techniques. We start by proposing a novel algorithm for video stabilization that generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions. We compute camera paths that are optimally partitioned into constant, linear, and parabolic segments, mimicking the camera motions employed by professional cinematographers. To achieve this, we propose a linear programming framework to minimize the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond conventional filtering, which only suppresses high-frequency jitter. An additional challenge in videos shot on mobile phones is rolling shutter distortion. Modern CMOS cameras capture the frame one scanline at a time, which results in non-rigid image distortions such as shear and wobble. We propose a solution based on a novel mixture model of homographies parametrized by scanline blocks to correct these rolling shutter distortions. Our method neither relies on a priori knowledge of the readout time nor requires prior camera calibration. Our novel video stabilization and calibration-free rolling shutter removal have been deployed on YouTube, where they have successfully stabilized millions of videos. We also discuss several extensions to the stabilization algorithm and present technical details behind the widely used YouTube Video Stabilizer. We address the challenge of changing the aspect ratio of videos by proposing algorithms that retarget videos to fit the form factor of a given device without stretching or letter-boxing. Our approaches use all of the screen's pixels, while striving to deliver as much of the original video content as possible. First, we introduce a new algorithm that uses discontinuous seam-carving in both space and time for resizing videos. Our algorithm relies on a novel appearance-based temporal coherence formulation that allows for frame-by-frame processing and results in temporally discontinuous seams, as opposed to geometrically smooth and continuous seams. Second, we present a technique that builds on the above-mentioned video stabilization approach. We effectively automate classical pan-and-scan techniques by smoothly guiding a virtual crop window via saliency constraints. Finally, we introduce an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. We begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a "region graph" over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high-quality segmentations and allows subsequent applications to choose from varying levels of granularity. We demonstrate the use of spatio-temporal segmentation as users interact with the video, enabling efficient annotation of objects within the video.
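The L1-optimal camera path computation lends itself to a compact linear program. Below is a hedged one-dimensional sketch (the weights, crop radius r, and synthetic shaky path are assumptions; the published formulation also handles 2-D paths, crop-window inclusion, and saliency constraints): slack variables turn the L1 norms of the first three path derivatives into linear constraints.

```python
# Minimal 1-D sketch of an L1-optimal camera path: minimize weighted L1 norms
# of the 1st/2nd/3rd derivatives of the smooth path p while keeping p within
# +/- r of the shaky input path c (a proxy for the crop-window constraint).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
T, r = 60, 8.0
c = np.cumsum(rng.normal(0, 2, T))      # synthetic shaky camera path

def diff_matrix(order, n):
    D = np.eye(n)
    for _ in range(order):
        D = D[1:] - D[:-1]              # forward differences
    return D

Ds = [diff_matrix(k, T) for k in (1, 2, 3)]
weights = [10.0, 1.0, 100.0]            # assumed derivative weights
m = [D.shape[0] for D in Ds]
nvar = T + sum(m)                       # path values + slack variables

# Objective: only the slack variables are penalized.
cost = np.concatenate([np.zeros(T)] + [w * np.ones(k) for w, k in zip(weights, m)])

# |D_k p| <= e_k  becomes  D_k p - e_k <= 0  and  -D_k p - e_k <= 0.
rows = []
off = T
for D, k in zip(Ds, m):
    S = np.zeros((k, nvar)); S[:, off:off + k] = np.eye(k); off += k
    P = np.zeros((k, nvar)); P[:, :T] = D
    rows += [P - S, -P - S]
A_ub = np.vstack(rows)
b_ub = np.zeros(A_ub.shape[0])

bounds = [(ci - r, ci + r) for ci in c] + [(0, None)] * sum(m)
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
p = res.x[:T]                           # path with sparse derivative changes
print("max deviation from shaky path:", np.abs(p - c).max())
```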
335

Numerical Stability in Linear Programming and Semidefinite Programming

Wei, Hua January 2006 (has links)
We study numerical stability for interior-point methods applied to Linear Programming (LP) and Semidefinite Programming (SDP). We analyze the difficulties inherent in current methods and present robust algorithms.

We start with the error bound analysis of the search directions for the normal equation approach for LP. Our error analysis explains the surprising fact that the ill-conditioning is not a significant problem for the normal equation system. We also explain why most of the popular LP solvers have a default stop tolerance of only 10^-8 when the machine precision on a 32-bit computer is approximately 10^-16.

We then propose a simple alternative approach for the normal equation based interior-point method. This approach has better numerical stability than the normal equation based method. Although our approach is not competitive in terms of CPU time for the NETLIB problem set, we do obtain higher accuracy. In addition, we obtain significantly smaller CPU times compared to the normal equation based direct solver when we solve well-conditioned, huge, and sparse problems using our iterative linear solver. Additional techniques discussed are: crossover; purification step; and no backtracking.

Finally, we present an algorithm to construct SDP problem instances with prescribed strict complementarity gaps. We then introduce two measures of strict complementarity gaps. We empirically show that: (i) these measures can be evaluated accurately; (ii) the size of the strict complementarity gaps correlates well with the number of iterations for the SDPT3 solver, as well as with the local asymptotic convergence rate; and (iii) large strict complementarity gaps, coupled with the failure of Slater's condition, correlate well with loss of accuracy in the solutions. In addition, the numerical tests show that there is no correlation between the strict complementarity gaps and the geometrical measure used in [31], or with Renegar's condition number.
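A small numerical experiment (toy data, not from the thesis) illustrates the conditioning phenomenon analyzed in the first part: the normal-equation matrix A D^2 A^T becomes increasingly ill-conditioned as the interior-point iterates approach optimality, even though, as the error analysis explains, this need not harm the computed search directions.

```python
# Toy illustration of why the normal-equation system A D^2 A^T dy = rhs in
# interior-point LP solvers becomes ill-conditioned near optimality: the
# scaling D^2 = diag(x/s) polarizes as complementary pairs (x_i, s_i) split
# toward 0 and O(1).
import numpy as np

rng = np.random.default_rng(2)
m, n = 20, 60
A = rng.normal(size=(m, n))

for mu in (1e0, 1e-4, 1e-8):            # barrier parameter shrinking to 0
    # Near-optimal iterate: "basic" x_i -> O(1), "nonbasic" x_i -> mu, with
    # x_i * s_i = mu as on the central path.
    x = np.where(np.arange(n) < m, 1.0, mu)
    s = mu / x
    D2 = x / s                           # diagonal of D^2
    K = (A * D2) @ A.T                   # A D^2 A^T
    print(f"mu = {mu:.0e}:  cond(A D^2 A^T) = {np.linalg.cond(K):.2e}")
```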
336

A Multiple-objective ILP based Global Routing Approach for VLSI ASIC Design

Yang, Zhen January 2008 (has links)
A VLSI chip can today contain hundreds of millions of transistors and is expected to contain more than 1 billion transistors in the next decade. In order to handle this rapid growth in integration technology, the design procedure is divided into a sequence of design steps. Circuit layout is the design step in which a physical realization of a circuit is obtained from its functional description. Global routing is one of the key subproblems of circuit layout; it involves finding an approximate path for the wires connecting the elements of the circuit without violating resource constraints. The global routing problem is NP-hard; therefore, heuristics capable of producing high-quality routes with little computational effort are required as we move into the Deep Sub-Micron (DSM) regime. In this thesis, different approaches to the global routing problem are first reviewed. The advantages and disadvantages of these approaches are also summarized. Following this literature review, several mathematical programming based global routing models are fully investigated. The quality of the solutions obtained by these models is then compared with that of the traditional maze routing technique. The experimental results show that the proposed model can optimize several global routing objectives simultaneously and effectively. Also, it is easy to incorporate new objectives into the proposed global routing model. To speed up the computation time of the proposed ILP based global router, several hierarchical methods are combined with the flat ILP based global routing approach. The experimental results indicate that the bottom-up global routing method can reduce the computation time effectively with a slight increase in maximum routing density. In addition to wire area, routability, and vias, performance and low power are also important goals in global routing, especially in deep submicron designs. Previous efforts that focused on power optimization for global routing are hindered by excessively long run times or the routing of only a subset of the nets. Accordingly, a power efficient multi-pin global routing technique (PIRT) is proposed in this thesis. This integer linear programming based technique strives to find a power efficient global routing solution. The results indicate that average power savings as high as 32% for the 130-nm technology can be achieved with no impact on the maximum chip frequency.
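A stripped-down, path-based ILP conveys the flavor of such global routing models. In this hedged sketch (hypothetical nets, candidate routes, and capacities; the thesis's multi-objective model is considerably richer), each net picks exactly one candidate route, total wirelength is minimized, and edge capacities are respected.

```python
# Sketch of a path-based ILP for global routing on a toy instance: choose one
# candidate route per net, minimize total wirelength, respect edge capacities.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical data: 2 nets, each with candidate routes given as edge lists.
edges = ["e0", "e1", "e2", "e3"]
routes = {                              # net -> list of candidate routes
    "n0": [["e0", "e1"], ["e2"]],
    "n1": [["e1", "e3"], ["e2", "e3"]],
}
capacity = {"e0": 1, "e1": 1, "e2": 1, "e3": 2}

flat = [(n, k) for n in routes for k in range(len(routes[n]))]
nv = len(flat)
cost = np.array([len(routes[n][k]) for n, k in flat], float)  # wirelength

# One route per net (equality rows) and per-edge capacity (<= rows).
A_net = np.array([[1.0 if n == net else 0.0 for n, _ in flat]
                  for net in routes])
A_cap = np.array([[1.0 if e in routes[n][k] else 0.0 for n, k in flat]
                  for e in edges])
cons = [LinearConstraint(A_net, 1, 1),
        LinearConstraint(A_cap, 0, [capacity[e] for e in edges])]

res = milp(cost, constraints=cons, integrality=np.ones(nv), bounds=Bounds(0, 1))
for (n, k), v in zip(flat, res.x):
    if v > 0.5:
        print(f"net {n} uses route {routes[n][k]}")
```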
337

Capacity Pricing in Electric Generation Expansion

Pirnia, Mehrdad January 2009 (has links)
The focus of this thesis is to explore a new mechanism to give added incentive to invest in new capacity in deregulated electricity markets. There is considerable concern in energy markets regarding the lack of sufficient private-sector investment in new electricity generation capacity. Although some markets use mechanisms to reward these investments directly, e.g., by governmental subsidies for renewable sources such as wind or solar, there is not much theory to guide the process of setting the reward levels. The proposed mechanism involves a long-term planning model that maximizes social welfare, measured as consumers' plus producers' surplus, by choosing new generation capacities which, along with still-existing capacities, can meet demand. Much previous research in electricity capacity planning has also solved optimization models, usually with continuous variables only, as linear or nonlinear programs. However, these approaches can be misleading when capacity additions must be either zero or of a large size, e.g., the building of a nuclear reactor or a large wind farm. Therefore, this research includes binary variables for the building of large new facilities in the optimization problem, i.e., the model becomes a mixed integer linear or nonlinear program. It is well known that, when binary variables are included in such a model, the resulting commodity prices may give insufficient incentive for private investment in the optimal new capacities. The new mechanism is intended to overcome this difficulty with a capacity price in addition to the commodity price: an auxiliary mathematical program calculates the minimum capacity price that is necessary to ensure that all firms investing in new capacities are satisfied with their profit levels. In order to test the applicability of this approach, the result of the suggested model is compared with the Ontario Integrated Power System Plan (IPSP), which recommends new generation capacities for the next 20 years, based on historical data and the costs of different sources of electricity generation, given a fixed forecast of demand.
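A toy capacity-expansion model shows the kind of mixed integer program involved. The following sketch (invented plant data, a single demand period; not the thesis's or the IPSP model) minimizes build plus operating cost with a binary build decision and a continuous dispatch variable per candidate plant.

```python
# Toy capacity-expansion MILP: binary build decisions plus continuous
# dispatch, minimizing build + operating cost while meeting demand.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Candidates: capacity (MW), build cost, operating cost per MW (made up).
cap = np.array([400.0, 300.0, 200.0])
fix = np.array([800.0, 500.0, 250.0])
var = np.array([1.0, 2.0, 5.0])
demand = 550.0
n = len(cap)

# Variables: [y_0..y_2 (build, binary), g_0..g_2 (dispatch, continuous)].
c = np.concatenate([fix, var])
integrality = np.concatenate([np.ones(n), np.zeros(n)])

# g_i <= cap_i * y_i (dispatch only from built plants); sum_i g_i >= demand.
A_link = np.hstack([-np.diag(cap), np.eye(n)])
A_dem = np.concatenate([np.zeros(n), np.ones(n)])[None, :]
cons = [LinearConstraint(A_link, -np.inf, 0),
        LinearConstraint(A_dem, demand, np.inf)]

res = milp(c, constraints=cons, integrality=integrality,
           bounds=Bounds(0, np.concatenate([np.ones(n), cap])))
print("build:", res.x[:n].round(), "dispatch:", res.x[n:].round(1))
```

At the resulting optimum, the commodity price alone may not cover the lumpy build costs, which is exactly the pricing difficulty the thesis's capacity-price mechanism addresses.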
338

Cardinality Constrained Robust Optimization Applied to a Class of Interval Observers

McCarthy, Philip James January 2013 (has links)
Observers are used in the monitoring and control of dynamical systems to deduce the values of unmeasured states. Designing an observer requires an accurate model of the plant; if the model parameters are characterized imprecisely, the observer may not provide reliable estimates. An interval observer, which comprises an upper and a lower observer, bounds the plant's states from above and below, given the range of values of the imprecisely characterized parameters, i.e., it defines an interval in which the plant's states must lie at any given instant. We propose a linear programming-based method of interval observer design for two cases: 1) only the initial conditions of the plant are uncertain; 2) the dynamical parameters are also uncertain. In the former, we optimize the transient performance of the interval observers, in the sense that the volume enclosed by the interval is minimized. In the latter, we optimize the steady-state performance of the interval observers, in the sense that the norm of the width of the interval is minimized at steady state. Interval observers are typically designed to characterize the widest interval that bounds the states. This thesis proposes an interval observer design method that utilizes additional, but still incomplete, information that enables the designer to identify tighter bounds on the uncertain parameters under certain operating conditions. The number of bounds that can be refined defines a class of systems. The definition of this class is independent of the specific parameters whose bounds are refined. Applying robust optimization techniques under a cardinality constrained model of uncertainty, we design a single observer for an entire class of systems. These observers guarantee a minimum level of performance with respect to the aforementioned metrics, as we optimize the worst-case performance over a given class of systems. The robust formulation allows the designer to tune the level of uncertainty in the model. If many of the uncertain parameter bounds can be refined, the nominal performance of the observer can be improved; however, if few or none of the parameter bounds can be refined, the nominal performance of the observer can be designed to be more conservative.
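One standard LP-based recipe for such designs can be sketched as follows (a sufficient condition on an assumed toy system, not necessarily the thesis's exact formulation): choose the observer gain L so that A - LC is Metzler, i.e., has nonnegative off-diagonal entries, and has negative row sums, hence is Hurwitz; both requirements are linear in L.

```python
# Hedged sketch of an LP feasibility problem for an interval-observer gain:
# find L with A - L*C Metzler (nonnegative off-diagonals) and negative row
# sums (Hurwitz by Gershgorin). Toy system matrices, not from the thesis.
import numpy as np
from scipy.optimize import linprog

A = np.array([[-1.0, 2.0],
              [0.5, -3.0]])
C = np.array([[1.0, 0.0]])
n, p = A.shape[0], C.shape[0]
eps = 0.1

rows, rhs = [], []
# Metzler: (A - LC)[i, j] >= 0 for i != j  <=>  (LC)[i, j] <= A[i, j].
for i in range(n):
    for j in range(n):
        if i != j:
            row = np.zeros(n * p)
            row[i * p:(i + 1) * p] = C[:, j]
            rows.append(row); rhs.append(A[i, j])
# Stability: row sums of A - LC <= -eps  <=>  -(L row) . C_rowsums <= -eps - rowsum(A).
for i in range(n):
    row = np.zeros(n * p)
    row[i * p:(i + 1) * p] = -C.sum(axis=1)
    rows.append(row); rhs.append(-eps - A[i].sum())

res = linprog(np.zeros(n * p),          # pure feasibility problem
              A_ub=np.vstack(rows), b_ub=np.array(rhs),
              bounds=[(-10, 10)] * (n * p), method="highs")
L = res.x.reshape(n, p)
print("gain L =", L.ravel(), " eigs:", np.linalg.eigvals(A - L @ C))
```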
339

Time-efficient Computation with Near-optimal Solutions for Maximum Link Activation in Wireless Communication Systems

Geng, Qifeng January 2012 (has links)
In a generic wireless network where the activation of a transmission link is subject to a signal-to-noise-and-interference ratio (SINR) constraint, one of the most fundamental and yet challenging problems is to find the maximum number of simultaneous transmissions. In this thesis, we consider and study in detail the problem of maximum link activation in wireless networks based on the SINR model. Integer linear programming is used as the main tool for the design of algorithms, and fast algorithms are proposed for the time-efficient delivery of near-optimal results. With the state-of-the-art Gurobi optimization solver, both the conventional approach, which includes all the SINR constraints explicitly, and the recently developed exact algorithm based on cutting planes have been implemented in the thesis. Based on those implementations, new solution algorithms are proposed for the fast delivery of solutions. Instead of considering interference from all other links, an interference range is proposed. Two scenarios are considered, namely an optimistic case and a pessimistic case. The optimistic case ignores interference originating outside the interference range, while the pessimistic case treats the interference from outside the range as a common large value. Together with the algorithms, further enhancement procedures based on data analysis are also proposed to facilitate the computation in the solver.
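The interference-range idea amounts to a preprocessing step on the gain matrix from which the SINR constraints are built, such as those in the big-M sketch shown for the earlier link activation entry. A hedged illustration (toy positions and path-gain model; the constant used in the pessimistic case is an assumption):

```python
# Sketch of the interference-range preprocessing on a toy gain matrix:
# outside a radius R, the optimistic variant drops the interference term,
# while the pessimistic variant replaces it by a common large constant.
import numpy as np

rng = np.random.default_rng(3)
n, R = 8, 0.4
pos = rng.uniform(0, 1, (n, 2))         # transmitter/receiver positions
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
G = 1.0 / (0.01 + dist**3)              # toy path-gain model

far = dist > R                          # interferers outside the range
G_opt = np.where(far, 0.0, G)           # optimistic: ignore far interference
G_pes = np.where(far, G[far].max(), G)  # pessimistic: assumed common value

print("interference coefficients simplified:", int(far.sum()))
```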
