251

Congestion Removal in the Next Generation Internet

Suryasaputra, Robert, rsuryasaputra@gmail.com January 2007 (has links)
The ongoing development of new and demanding Internet applications requires the Internet to deliver service levels significantly better than the best-effort service it currently provides and was built for. These improved service levels include guaranteed delay, jitter and bandwidth. Extensive research into Quality of Service and Differentiated Services (DiffServ) has made it possible to provide guaranteed services; however, this turns out to be inadequate without the application of Traffic Engineering methodologies and principles. Traffic Engineering is an integral part of network operation. Its major goal is to extract the best performance from an existing service provider's network resources and, at the same time, to enhance the customer's view of network performance. In this thesis, several traffic engineering methods for optimising the operation of native IP networks and of IP networks employing MPLS are proposed. A feature of these new methods is their fast run times, which opens the way to applying them in an online traffic engineering environment. For native IP networks running shortest-path-based routing protocols, we show that an LP-based optimisation built on the well-known multi-commodity flow problem can be effective in removing network congestion. Recognising that Internet service providers are now migrating their networks to MPLS, we have also formulated optimisation methods that traffic engineer MPLS networks by selecting suitable routing paths and exploiting the explicit routing feature of MPLS. Although MPLS is capable of delivering traffic engineering across different classes of traffic, network operators still prefer to rely on the proven and simple IP routing protocols for best-effort traffic and use MPLS only for traffic requiring special forwarding treatment. Based on this fact, we propose a method that optimises the routing patterns applicable to different classes of traffic based on their bandwidth requirements. A traffic engineering comparison study evaluating the performance of a neural-network-based method for MPLS networks and an LP-based weight-setting approach for shortest-path networks has been performed using the well-known open-source network simulator ns2. The comparative evaluation is based upon packet loss probability. The final chapter of the thesis describes the development of a network management application called OptiFlow, which integrates techniques described in earlier chapters, including the LP-based weight-setting optimisation methodology; it also uses the traffic matrix estimation techniques required as input to the weight-setting models that have been devised. The motivation for developing OptiFlow was to provide a prototype set of tools meeting the congestion management needs of the networking industry (ISPs and telcos).
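As a rough illustration of the kind of LP this congestion-removal work builds on, the sketch below solves a tiny path-based multi-commodity flow problem that minimises the maximum link utilisation. The topology, capacities, demands and candidate paths are invented, and SciPy's linprog merely stands in for whatever solver the thesis used; it is not the thesis formulation itself.

```python
from scipy.optimize import linprog

links = ["AB", "BC", "AC"]                     # directed links
capacity = {l: 10.0 for l in links}

# demand (src, dst) -> (volume, candidate paths as lists of links)
demands = {
    ("A", "C"): (12.0, [["AC"], ["AB", "BC"]]),
    ("A", "B"): (4.0,  [["AB"]]),
}

flow_vars = [(d, i) for d, (_, paths) in demands.items() for i in range(len(paths))]
n = len(flow_vars) + 1                         # +1 for theta, the max utilisation

c = [0.0] * len(flow_vars) + [1.0]             # minimise theta

A_eq, b_eq = [], []                            # each demand fully routed over its paths
for d, (vol, _) in demands.items():
    A_eq.append([1.0 if fd == d else 0.0 for (fd, _) in flow_vars] + [0.0])
    b_eq.append(vol)

A_ub, b_ub = [], []                            # per link: carried flow <= theta * capacity
for l in links:
    row = [1.0 if l in demands[d][1][i] else 0.0 for (d, i) in flow_vars]
    A_ub.append(row + [-capacity[l]])
    b_ub.append(0.0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n)
print("max link utilisation:", round(res.x[-1], 3))   # 0.8 for this toy instance
```

In a weight-setting context, the optimal flow split produced by an LP like this is what the IGP link weights would then be tuned to approximate.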
252

Optimal Drill Assignment for Multi-Boom Jumbos

Michael Champion Unknown Date (has links)
Development drilling is used in underground mining to create access tunnels. A common method involves using a drilling rig, known as a jumbo, to drill holes into the face of a tunnel. Jumbo drill rigs have two or more articulated arms with drills as end-effectors that extend outwards from a vehicle. Once drilled, the holes are charged with explosives and fired to advance the tunnel. There is an ongoing imperative within the mining industry to reduce development times, and reducing time spent drilling is seen as the best opportunity for achieving this. Although three-boom jumbos have been available for some years, the industry has maintained a preference for jumbo rigs with two drilling booms. Three-boom machines have the potential to reduce drilling time by as much as one third, but they have proven difficult to operate and, in practice, this benefit has not been realized. The key difficulty lies in manoeuvring the booms within the tight confines of the tunnel and in sequencing the drilling of holes so that each boom spends maximum time drilling. This thesis addresses the problem of optimally sequencing multi-boom jumbo drill rigs to minimize the overall time to drill a blast-hole pattern, taking into account the various constraints on the problem, including the geometric constraints restricting motion of the booms. The specific aims of the thesis are to: (i) develop the algorithmic machinery needed to determine minimum- or near-minimum-time drill assignments for multi-boom jumbos, suitable for "real-time" implementation; (ii) use this drill-pattern assignment algorithm to quantify the benefits of optimal drill-pattern assignment with three-boom jumbos; and (iii) investigate the management of unplanned events, such as boom breakdowns, and assess the potential of the algorithm to assist a human operator with the forward planning of drill-hole selection. Jumbo drill task assignment is a combinatorial optimization problem. A methodology based around receding-horizon mixed integer programming is developed to solve it. At any time the set of drill-holes available to a boom is restricted by the location of the other booms as well as by the tunnel perimeter, and importantly these constraints change as the problem evolves. The methodology builds these constraints into the problem through a feasibility tensor that encodes the moves available to each boom given the configurations of the other booms. The feasibility tensor is constructed off-line using a rapidly exploring random tree algorithm. Simulations conducted using the sequencing algorithm predict, for a standard drill-hole pattern, a 10-22% reduction in drilling time with the three-boom rig relative to two-boom machines. The algorithms developed in this thesis have two intended applications. The first is for automated jumbo drill rigs, where the capability to plan drilling sequences algorithmically is a prerequisite; automated drill rigs are still some years from being a reality. The second, and more immediate, application is in providing decision support for drill-rig operators. It is envisaged that the algorithms described here might form the basis of an operator-assist system that provides guidance on which holes to drill next with each boom, adapting this plan as circumstances change.
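To give a flavour of the assignment side of the problem, the sketch below solves a one-shot mixed integer program that assigns every hole to exactly one boom so that the busiest boom finishes as early as possible. It deliberately ignores the boom-interference and sequencing constraints that the feasibility tensor captures in the thesis; drill times are invented and SciPy 1.9+ is assumed for scipy.optimize.milp.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# drill_time[b, h]: minutes for boom b to drill hole h (invented numbers)
drill_time = np.array([[3.0, 2.5, 4.0, 3.5, 2.0],
                       [2.8, 3.0, 3.2, 2.6, 2.4],
                       [3.5, 2.2, 2.9, 3.1, 2.7]])
B, H = drill_time.shape
n = B * H + 1                                   # x[b, h] flattened, plus makespan T

c = np.zeros(n); c[-1] = 1.0                    # minimise T

A_assign = np.zeros((H, n))                     # every hole drilled by exactly one boom
for h in range(H):
    for b in range(B):
        A_assign[h, b * H + h] = 1.0

A_time = np.zeros((B, n))                       # per boom: total drill time <= T
for b in range(B):
    A_time[b, b * H:(b + 1) * H] = drill_time[b]
    A_time[b, -1] = -1.0

constraints = [LinearConstraint(A_assign, 1.0, 1.0),
               LinearConstraint(A_time, -np.inf, 0.0)]
integrality = np.ones(n); integrality[-1] = 0   # x binary (with 0/1 bounds), T continuous
bounds = Bounds(np.zeros(n), np.append(np.ones(B * H), np.inf))

res = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
assignment = res.x[:-1].reshape(B, H).round().astype(int)
print("makespan (minutes):", round(res.x[-1], 2))
print("hole-to-boom assignment:\n", assignment)
```

A receding-horizon version would re-solve a model like this repeatedly over a short look-ahead window, with the feasibility tensor restricting which (boom, hole) pairs are admissible at each step.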
253

Uncertainty Propagation in Model-Based Recognition

Jacobs, D.W., Alter, T.D. 01 February 1995 (has links)
Building robust recognition systems requires a careful understanding of the effects of error in sensed features. Error in these image features results in a region of uncertainty in the possible image location of each additional model feature. We present an accurate, analytic approximation for this uncertainty region when model poses are based on matching three image and model points, for both Gaussian and bounded error in the detection of image points, and for both scaled-orthographic and perspective projection models. This result applies to objects that are fully three-dimensional, where past results considered only two-dimensional objects. Further, we introduce a linear programming algorithm to compute the uncertainty region when poses are based on any number of initial matches. Finally, we use these results to extend, from two-dimensional to three-dimensional objects, robust implementations of alignment, interpretation-tree search, and transformation clustering.
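The linear programming idea can be illustrated for a simplified full-affine camera: bounded error on each matched point gives linear constraints on the eight projection parameters, and the extreme image coordinates of an additional model point are found by a pair of LPs per axis. The model points, matches and error bound below are invented, and the affine camera is a simplification of the paper's scaled-orthographic and perspective models.

```python
import numpy as np
from scipy.optimize import linprog

eps = 2.0                                       # bounded pixel error on each match
model_pts = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10.0]])   # matched 3D points
image_pts = np.array([[100, 100], [180, 105], [95, 185], [110, 90.0]])    # their detections
extra = np.array([5.0, 5.0, 5.0])               # model point whose image location we bound

# variables: 8 projection parameters [a11 a12 a13 tx | a21 a22 a23 ty]
def coeff(P, axis):                             # linear coefficients of the projected coordinate
    r = np.zeros(8)
    r[4 * axis:4 * axis + 3] = P
    r[4 * axis + 3] = 1.0
    return r

A_ub, b_ub = [], []                             # |projection(P) - q| <= eps, per axis
for P, q in zip(model_pts, image_pts):
    for axis in range(2):
        A_ub.append(coeff(P, axis));  b_ub.append(q[axis] + eps)
        A_ub.append(-coeff(P, axis)); b_ub.append(eps - q[axis])

free = [(None, None)] * 8
for axis, name in [(0, "x"), (1, "y")]:
    lo = linprog(coeff(extra, axis),  A_ub=A_ub, b_ub=b_ub, bounds=free)
    hi = linprog(-coeff(extra, axis), A_ub=A_ub, b_ub=b_ub, bounds=free)
    print(f"{name}-extent of the uncertainty region: [{lo.fun:.1f}, {-hi.fun:.1f}]")
```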
254

On Models and Methods for Global Optimization of Structural Topology

Stolpe, Mathias January 2003 (has links)
This thesis consists of an introduction and seven independent, but closely related, papers which all deal with problems in structural optimization. In particular, we consider models and methods for global optimization of problems in topology design of discrete and continuum structures. In the first four papers of the thesis the nonconvex problem of minimizing the weight of a truss structure subject to stress constraints is considered. First it is shown that a certain subclass of these problems can equivalently be cast as linear programs and thus efficiently solved to global optimality. Thereafter, the behavior of a certain well-known perturbation technique is studied. It is concluded that, in practice, this technique cannot guarantee that a global minimizer is found. Finally, a convergent continuous branch-and-bound method for global optimization of minimum weight problems with stress, displacement, and local buckling constraints is developed. Using this method, several problems taken from the literature are solved with a proof of global optimality for the first time. The last three papers of the thesis deal with topology optimization of discretized continuum structures. These problems are usually modeled as mixed or pure nonlinear 0-1 programs. First, the behavior of certain often used penalization methods for minimum compliance problems is studied. It is concluded that these methods may fail to produce a zero-one solution to the considered problem. To remedy this, a material interpolation scheme based on a rational function, such that compliance becomes a concave function, is proposed. Finally, it is shown that a broad range of nonlinear 0-1 topology optimization problems, including stress- and displacement-constrained minimum weight problems, can equivalently be modeled as linear mixed 0-1 programs. This result implies that any of the standard methods available for general linear integer programming can now be used on topology optimization problems. Keywords: topology optimization, global optimization, stress constraints, linear programming, mixed integer programming, branch-and-bound.
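The flavour of the LP reformulation for the single-load-case, stress-constrained problem is the classical minimum-weight plastic design LP in member forces, sketched below on a tiny invented ground structure; it only illustrates the style of reformulation analysed in the first papers, not their general result.

```python
import numpy as np
from scipy.optimize import linprog

free_node = np.array([1.0, 0.0])
supports = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
load = np.array([0.0, -1.0])               # external load on the free node
sigma = 1.0                                # allowable stress (same in tension and compression)

dirs = supports - free_node
lengths = np.linalg.norm(dirs, axis=1)
units = dirs / lengths[:, None]            # unit vector of each bar at the free node
m = len(lengths)

# variables: q+ and q- (tension / compression parts of the member forces), length 2m
# objective: volume = sum_i l_i * (q+_i + q-_i) / sigma
c = np.concatenate([lengths, lengths]) / sigma
# equilibrium at the free node: sum_i (q+_i - q-_i) * u_i = -load
A_eq = np.hstack([units.T, -units.T])      # 2 x 2m
b_eq = -load

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (2 * m))
forces = res.x[:m] - res.x[m:]
print("minimum volume:", round(res.fun, 3))
print("member forces (+ tension, - compression):", forces.round(3))
```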
255

Algorithms For Piecewise Linear Knapsack Problems With Applications In Electronic Commerce

Kameshwaran, S 08 1900 (has links) (PDF)
No description available.
256

Product Costing for Sawmill Business Management

Johansson, Mats January 2007 (has links)
Several Swedish sawmill groups have recently developed product-costing systems, partly to compensate for diminishing production knowledge in their recently centralized market organizations. The concept of product costs is challenging in sawmilling, since production is joint: each log typically yields many products. The newly developed costing systems rely on traditional accounting-based methods and are of little use in decision-making, because the resulting cost figures do not normally estimate actual cost changes. This thesis develops an alternative, theoretically defensible method based on linear programming and tests it on a pine sawmill. Computer simulations are used to compare the suggested method with traditional methods and to analyze the effects of managing the salesmen by the product costs of the suggested method. The thesis draws on the joint-cost accounting discourse of the 1980s, which was abandoned before any essential application was found. Such an application has now emerged through changes in the sawmill industry, and the discourse is here revived both practically and theoretically. The sawmills are modeled with relative capacity restrictions and with constraints on the flexibility of their timber supply. Sales decisions based on product costs from the suggested method appear to progressively improve company profit. To be successful, the product costs have to be recalculated regularly. Analyses indicate that with flexibility in purchasing timber and a low cost difference between buying scarce products and selling surplus products externally, both the necessary length of the recalculation period and the usefulness of the suggested method increase markedly.
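One way to picture an LP-based costing approach of this kind: solve a profit-maximising sawing LP with delivery commitments and read product-cost signals off the dual values of the commitment rows. The patterns, yields, prices, capacities and commitments below are invented, the dual extraction assumes SciPy's HiGHS-backed linprog, and this is a sketch of the general idea rather than the thesis model.

```python
import numpy as np
from scipy.optimize import linprog

#                 planks  boards  (m3 of product per log, per sawing pattern)
yield_per_log = np.array([[0.12, 0.06],     # pattern 1
                          [0.08, 0.10],     # pattern 2
                          [0.05, 0.14]])    # pattern 3
hours_per_log = np.array([0.020, 0.025, 0.030])
price = np.array([210.0, 170.0])            # revenue per m3 of product
log_cost = 18.0                             # cost per log
log_supply, saw_hours = 9000.0, 220.0
commitment = np.array([700.0, 800.0])       # m3 of each product that must be delivered

profit_per_log = yield_per_log @ price - log_cost
c = -profit_per_log                         # linprog minimises

A_ub = np.vstack([np.ones((1, 3)),          # log supply
                  hours_per_log[None, :],   # sawing capacity
                  -yield_per_log.T])        # delivery commitments (>= turned into <=)
b_ub = np.concatenate([[log_supply, saw_hours], -commitment])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print("profit:", round(-res.fun, 1))
print("pattern mix (logs):", res.x.round(1))
# With the HiGHS solver the duals are reported as marginals; the rows belonging to
# the commitments carry the LP-based product-cost signals (sign convention per SciPy).
print("commitment duals:", res.ineqlin.marginals[2:])
```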
257

Optimal power flow via quadratic modeling

Tao, Ye 29 August 2011 (has links)
Optimal power flow (OPF) is the tool of choice for determining the optimal operating status of the power system by managing controllable devices. The importance of the OPF approach has increased due to increasing energy prices and the availability of more control devices. Existing OPF approaches exhibit shortcomings. Current OPF algorithms can be classified into (a) nonlinear programming, (b) intelligent search methods, and (c) sequential algorithms. Nonlinear programming algorithms focus on the solution of the Kuhn-Tucker conditions; they require a starting feasible solution and the model includes all constraints, characteristics that limit the robustness and efficiency of these methods. Intelligent search methods are first-order methods and are totally inefficient for large-scale systems. Traditional sequential algorithms require a starting feasible solution, a requirement that limits their robustness, and present implementations use traditional modeling that results in inefficient algorithms. The research described in this thesis overcomes these shortcomings by developing a robust and highly efficient algorithm. Robustness is defined as the ability to provide a solution for any system; the proposed approach achieves robustness by operating on infeasible points and moving toward feasibility, stopping at a suboptimal solution if an optimum does not exist. Efficiency is achieved by (a) converting the nonlinear OPF problem to a quadratic problem and (b) limiting the size of the model; the quadratic model enables fast convergence, and an algorithm that identifies the active constraints limits the size of the model by including only those constraints. A concise description of the method is as follows. The proposed method starts from an arbitrary state which may be infeasible; model equations and system constraints are satisfied by introducing artificial mismatch variables at each bus. Mathematically this is an optimal but infeasible point. At each iteration, the artificial mismatches are reduced while the solution point maintains optimality. When the mismatches reach zero, the solution becomes feasible and the optimum has been found; otherwise, the mismatch residuals are converted to load shedding and the algorithm provides a suboptimal but feasible solution. Therefore, the algorithm operates on infeasible but optimal points and moves towards feasibility. The proposed algorithm maximizes efficiency with two innovations: (a) quadratization, which converts the nonlinear model to a quadratic one with excellent convergence properties, and (b) minimization of model size by identifying the active constraints, which are the only constraints included in the model. Finally, sparsity techniques are utilized to provide the best computational efficiency for large systems. This dissertation demonstrates the proposed OPF algorithm on systems of up to three hundred buses and compares it with several well-known OPF software packages. The results show that the proposed algorithm converges fast and its runtime is competitive. Furthermore, the proposed method is extended to a three-phase OPF (TOPF) algorithm for unbalanced networks using the quadratized three-phase power system model. An example application of the TOPF is presented. Specifically, TOPF is utilized to address the problem of fault-induced delayed voltage recovery (FIDVR) phenomena, which lead to unwanted relay operations, stalling of motors and load disruptions.
This thesis presents a methodology that optimally enhances the distribution system to mitigate or eliminate the onset of FIDVR. The time-domain simulation method has been integrated with a TOPF model and a dynamic programming optimization algorithm to provide the optimal reinforcing strategy for the circuits.
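To make the "optimal but infeasible, then drive the mismatches to zero or convert them to load shedding" idea concrete, the sketch below adds an artificial mismatch variable at every bus of a deliberately tiny DC-style network and penalises it heavily. This is not the quadratized AC formulation or the iterative algorithm of the thesis; all network data, costs and the penalty weight are invented.

```python
import numpy as np
from scipy.optimize import minimize

load = np.array([0.0, 0.0, 2.5])                      # p.u. demand at buses 1..3
lines = {(0, 1): 10.0, (0, 2): 10.0, (1, 2): 10.0}    # susceptance per line
flow_limit, gen_max, W = 1.2, np.array([1.5, 1.5]), 1e3

# decision vector: [Pg1, Pg2, theta2, theta3, m1, m2, m3]  (theta1 = 0, mismatches m >= 0)
def angles(x):                                        # bus 1 is the angle reference
    return np.array([0.0, x[2], x[3]])

def balance(x):                                       # Pg + mismatch - load - net outflow = 0
    th, out = angles(x), np.zeros(3)
    for (i, j), b in lines.items():
        f = b * (th[i] - th[j])
        out[i] += f; out[j] -= f
    return np.array([x[0], x[1], 0.0]) + x[4:7] - load - out

cons = [{"type": "eq", "fun": balance}]
for (i, j), b in lines.items():                       # -limit <= line flow <= limit
    cons.append({"type": "ineq", "fun": lambda x, i=i, j=j, b=b:
                 flow_limit - b * (angles(x)[i] - angles(x)[j])})
    cons.append({"type": "ineq", "fun": lambda x, i=i, j=j, b=b:
                 flow_limit + b * (angles(x)[i] - angles(x)[j])})

bounds = [(0, gen_max[0]), (0, gen_max[1]), (None, None), (None, None)] + [(0, None)] * 3
x0 = np.concatenate([np.zeros(4), load])              # start optimal but infeasible: all demand as mismatch
obj = lambda x: 1.0 * x[0]**2 + 1.2 * x[1]**2 + W * x[4:7].sum()
res = minimize(obj, x0, bounds=bounds, constraints=cons, method="SLSQP")
print("generation:", res.x[:2].round(3), " residual mismatch (load shed):", res.x[4:7].round(3))
```

Increasing W, or iteratively reducing the mismatches as the thesis does, trades residual mismatch against generation cost until either feasibility is reached or the residual is reported as load shedding.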
258

On Optimal Link Activation with Interference Cancelation in Wireless Networking

Yuan, Di, Angelakis, Vangelis, Chen, Lei, Karipidis, Eleftherios, Larsson, Erik G. January 2013 (has links)
A fundamental aspect in performance engineering of wireless networks is optimizing the set of links that can be concurrently activated while meeting given signal-to-interference-and-noise ratio (SINR) thresholds. The solution of this combinatorial problem is the key element in scheduling and cross-layer resource management. In this paper, we assume multiuser decoding receivers, which can cancel strongly interfering signals. As a result, in contrast to classical spatial reuse, links that are close to each other are more likely to be active concurrently. Our focus is to gauge the gain of successive interference cancellation (SIC), as well as the simpler, yet instructive, case of parallel interference cancellation (PIC), in the context of optimal link activation. We show that both problems are NP-hard and develop compact integer linear programming formulations that make it possible to approach global optimality. We provide an extensive numerical performance evaluation, indicating that for low to medium SINR thresholds the improvement is quite substantial, especially with SIC, whereas for high SINR thresholds the improvement diminishes and both schemes perform equally well.
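A compact big-M integer linear program for the baseline problem without interference cancellation is sketched below; the paper's PIC and SIC formulations extend this kind of model with cancellation decisions. The gain matrix, transmit power, noise level and SINR threshold are invented, and scipy.optimize.milp (SciPy 1.9+) stands in for a full MILP solver.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

G = np.array([[1.0, 0.08, 0.03],           # G[k, l]: gain from transmitter k to receiver l
              [0.09, 1.0, 0.07],
              [0.02, 0.09, 1.0]])
P, noise, gamma = 1.0, 0.1, 5.0            # tx power, noise power, SINR threshold
L = G.shape[0]
M = gamma * (noise + P * G.sum(axis=0))    # a valid big-M per link

A = np.zeros((L, L))
b_u = np.zeros(L)
for l in range(L):
    # if link l is active: P*G[l,l] >= gamma * (noise + interference); big-M relaxes it otherwise
    for k in range(L):
        A[l, k] = gamma * P * G[k, l] if k != l else M[l]
    b_u[l] = M[l] + P * G[l, l] - gamma * noise

res = milp(c=-np.ones(L),                  # maximise the number of concurrently active links
           constraints=LinearConstraint(A, -np.inf, b_u),
           integrality=np.ones(L),
           bounds=Bounds(0, 1))
print("active links:", np.flatnonzero(res.x.round() == 1))   # two of the three for this instance
```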
259

P-Cycle-based Protection in Network Virtualization

Song, Yihong 25 February 2013 (has links)
As the "network of network", the Internet has been playing a central and crucial role in modern society, culture, knowledge, businesses and so on in a period of over two decades by supporting a wide variety of network technologies and applications. However, due to its popularity and multi-provider nature, the future development of the Internet is limited to simple incremental updates. To address this challenge, network virtualization has been propounded as a potential candidate to provide the essential basis for the future Internet architecture. Network virtualization is capable of providing an open and flexible networking environment in which service providers are allowed to dynamically compose multiple coexisting heterogeneous virtual networks on a shared substrate network. Such a flexible environment will foster the deployment of diversified services and applications. A major challenge in network virtualization area is the Virtual Network Embedding (VNE), which aims to statically or dynamically allocate virtual nodes and virtual links on substrate resources, physical nodes and paths. Making effective use of substrate resources requires high-efficient and survivable VNE techniques. The main contribution of this thesis is two high-performance p-Cycle-based survivable virtual network embedding approaches. These approaches take advantage of p-Cycle-based protection techniques that minimize the backup resources while providing a full VN protection scheme against link and node failures.
260

Computational video: post-processing methods for stabilization, retargeting and segmentation

Grundmann, Matthias 05 April 2013 (has links)
In this thesis, we address a variety of challenges in the analysis and enhancement of Computational Video. We present novel post-processing methods to bridge the gap between professionally produced videos and the casually shot videos mostly seen on online sites. Our research presents solutions to three well-defined problems: (1) video stabilization and rolling shutter removal in casually shot, uncalibrated videos; (2) content-aware video retargeting; and (3) spatio-temporal video segmentation to enable efficient video annotation. We showcase several real-world applications building on these techniques. We start by proposing a novel algorithm for video stabilization that generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions. We compute camera paths that are optimally partitioned into constant, linear and parabolic segments, mimicking the camera motions employed by professional cinematographers. To achieve this, we propose a linear programming framework that minimizes the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond conventional filtering, which only suppresses high-frequency jitter. An additional challenge in videos shot from mobile phones is rolling shutter distortion. Modern CMOS cameras capture the frame one scanline at a time, which results in non-rigid image distortions such as shear and wobble. We propose a solution based on a novel mixture model of homographies parametrized by scanline blocks to correct these rolling shutter distortions. Our method does not rely on a priori knowledge of the readout time, nor does it require prior camera calibration. Our novel video stabilization and calibration-free rolling shutter removal have been deployed on YouTube, where they have successfully stabilized millions of videos. We also discuss several extensions to the stabilization algorithm and present technical details behind the widely used YouTube Video Stabilizer. We address the challenge of changing the aspect ratio of videos by proposing algorithms that retarget videos to fit the form factor of a given device without stretching or letter-boxing. Our approaches use all of the screen's pixels, while striving to deliver as much of the original video content as possible. First, we introduce a new algorithm that uses discontinuous seam-carving in both space and time for resizing videos. Our algorithm relies on a novel appearance-based temporal coherence formulation that allows for frame-by-frame processing and results in temporally discontinuous seams, as opposed to geometrically smooth and continuous seams. Second, we present a technique that builds on the above-mentioned video stabilization approach: we effectively automate classical pan-and-scan techniques by smoothly guiding a virtual crop window via saliency constraints. Finally, we introduce an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. We begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a "region graph" over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high-quality segmentations and allows subsequent applications to choose from varying levels of granularity.
We demonstrate the use of spatio-temporal segmentation as users interact with the video, enabling efficient annotation of objects within the video.
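A one-dimensional toy of the L1-optimal camera-path idea: find a new path that stays within a fixed crop radius of the shaky input while minimising the L1 norms of its first, second and third differences, all in a single LP with slack variables for the absolute values. The synthetic path, weights and crop radius are invented, and this ignores the 2-D parametric camera paths and saliency constraints of the actual system.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
T = 60
shaky = np.cumsum(rng.normal(0, 2.0, T)) + np.linspace(0, 30, T)  # shaky pan (1-D)
crop = 6.0                                  # how far the crop window may move from the input
w = [10.0, 1.0, 100.0]                      # weights on |D1 p|, |D2 p|, |D3 p|

def diff_matrix(T, order):                  # (T - order) x T finite-difference operator
    D = np.eye(T)
    for _ in range(order):
        D = np.diff(D, axis=0)
    return D

Ds = [diff_matrix(T, k) for k in (1, 2, 3)]
sizes = [D.shape[0] for D in Ds]
n = T + sum(sizes)                          # variables: path p, then slacks e1, e2, e3

c = np.zeros(n)
offset = T
for wk, s in zip(w, sizes):
    c[offset:offset + s] = wk
    offset += s

A_ub, b_ub = [], []
offset = T
for D, s in zip(Ds, sizes):                 # |D p| <= e   ->   D p - e <= 0  and  -D p - e <= 0
    E = np.zeros((s, n)); E[:, offset:offset + s] = np.eye(s)
    Pm = np.zeros((s, n)); Pm[:, :T] = D
    A_ub += [Pm - E, -Pm - E]; b_ub += [np.zeros(s), np.zeros(s)]
    offset += s
I = np.zeros((T, n)); I[:, :T] = np.eye(T)  # |p - shaky| <= crop
A_ub += [I, -I]; b_ub += [shaky + crop, crop - shaky]

res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
              bounds=[(None, None)] * n)
smooth = res.x[:T]
print("max deviation from the input path:", np.abs(smooth - shaky).max().round(2))
```

In the full system the same construction is applied to parametric 2-D camera paths, with the crop-window inclusion constraint playing the role of the simple radius constraint used here.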
