191

Solving the Generalized Assignment Problem by column enumeration based on Lagrangian reduced costs

Brommesson, Peter January 2006 (has links)
In this thesis a method for solving the Generalized Assignment Problem (GAP) is described. It is based on a reformulation of the original problem into a Set Partitioning Problem (SPP), in which the columns represent partial solutions to the original problem. This problem is solved by column generation with systematic overgeneration of columns. Conditions are given that guarantee that an optimal solution to a restricted SPP is also optimal in the original problem. To satisfy these conditions, not only the columns with the most negative Lagrangian reduced costs need to be generated but also others; this observation motivates the overgeneration of columns. The Generalized Assignment Problem has been shown to be NP-hard, so efficient algorithms are needed, especially for large instances. The proposed method decomposes GAP into several knapsack problems via Lagrangian relaxation and enumerates solutions to each of these problems. The knapsack solutions form the columns of a Set Partitioning Problem, which combines one solution from each knapsack problem into a solution of the original problem. The algorithm has been tested on problems with 10 agents and 60 jobs, which yield 10 knapsack problems, each with 60 variables.
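The decomposition described above can be sketched in a few lines: relaxing the assignment constraints with multipliers lam[j] splits GAP into one independent 0-1 knapsack per agent, and each knapsack solution is a candidate SPP column. This is an illustrative sketch under simplifying assumptions (integer weights, a single capacity per agent), not the thesis code; all names and data are made up.

```python
def knapsack(values, weights, capacity):
    """0-1 knapsack by dynamic programming; returns (best value, chosen items)."""
    best = [0.0] * (capacity + 1)
    choice = [[False] * len(values) for _ in range(capacity + 1)]
    for i, (v, w) in enumerate(zip(values, weights)):
        for c in range(capacity, w - 1, -1):  # descending so each item is used once
            if best[c - w] + v > best[c]:
                best[c] = best[c - w] + v
                choice[c] = choice[c - w][:]
                choice[c][i] = True
    return best[capacity], [i for i, used in enumerate(choice[capacity]) if used]

def lagrangian_bound(cost, weight, capacity, lam):
    """Relax sum_i x_ij = 1 with multipliers lam[j]; the min-cost GAP then splits
    into independent knapsacks: agent i maximizes sum_j (lam[j] - cost[i][j]) x_ij.
    Returns a lower bound on the GAP cost plus one candidate column per agent."""
    total = sum(lam)  # constant term of the Lagrangian dual function
    columns = []
    for ci, wi, cap in zip(cost, weight, capacity):
        profit = [lam[j] - ci[j] for j in range(len(ci))]
        val, items = knapsack(profit, wi, cap)  # empty solution (value 0) allowed
        total -= val
        columns.append(items)
    return total, columns
```

For a 2-agent, 2-job instance with costs [[2, 3], [3, 1]], unit weights, capacities [1, 1] and multipliers [3, 2], the bound equals the optimal cost 3 and the two columns together partition the jobs.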
192

Lagrangian decomposition of the Hadley Cells

Kjellsson, Joakim January 2009 (has links)
The Lagrangian trajectory code TRACMASS is extended to the atmosphere to examine the tropical Hadley Cells using fields from the ERA-Interim reanalysis dataset. The analysis uses pressure, temperature and specific humidity as vertical coordinates. By letting each trajectory represent a mass transport and tracing millions of trajectories in a domain between latitudes 15°N and 15°S, a mass stream function based on trajectories is obtained (the Lagrangian stream function). By separating the trajectories into classes depending on their starting point and destination ("North-to-North", "North-to-South", "South-to-North" and "South-to-South"), the mass stream function is decomposed into four paths. This cannot be done if the stream function is calculated directly from the velocity fields (the Eulerian stream function). Using this technique, the mass transports recirculating within the cells are compared to the mass transports between the cells, giving further insight into the structure of the Hadley Circulations. The magnitudes of the mass stream functions are presented by converting the volume flux unit Sverdrup into a mass flux unit. It is found that the recirculating transports of the northern and southern cells are 473 Sv and 508 Sv respectively, while the inter-hemispheric mass transports are 126 Sv northward and 125 Sv southward. It is also found that by no means all trajectories follow paths similar to the stream lines, since the stream lines are zonal and temporal means while the particle trajectories are chaotic.
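The four-path bookkeeping described above can be illustrated with a toy sketch: each trajectory carries a fixed mass transport and is binned by where it starts and ends relative to the equator. The function names and sample numbers are hypothetical, not taken from TRACMASS.

```python
def classify(start_lat, end_lat):
    """Label a trajectory by its start and destination hemisphere."""
    a = "North" if start_lat >= 0 else "South"
    b = "North" if end_lat >= 0 else "South"
    return f"{a}-to-{b}"

def decompose_transport(trajectories):
    """trajectories: iterable of (start_lat, end_lat, mass_transport).
    Returns the total transport per class, mirroring the four-path
    decomposition of the Lagrangian stream function."""
    totals = {"North-to-North": 0.0, "North-to-South": 0.0,
              "South-to-North": 0.0, "South-to-South": 0.0}
    for lat0, lat1, m in trajectories:
        totals[classify(lat0, lat1)] += m
    return totals
```

The Eulerian stream function sees only the net of these four totals at each point, which is why the recirculating and inter-hemispheric branches can only be separated trajectory by trajectory.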
193

Recovering Data with Group Sparsity by Alternating Direction Methods

Deng, Wei 06 September 2012 (has links)
Group sparsity reveals underlying sparsity patterns and contains rich structural information in data. Hence, exploiting group sparsity will facilitate more efficient techniques for recovering large and complicated data in applications such as compressive sensing, statistics, signal and image processing, machine learning and computer vision. This thesis develops efficient algorithms for solving a class of optimization problems with group sparse solutions, where arbitrary group configurations are allowed and the mixed L21-regularization is used to promote group sparsity. Such optimization problems can be quite challenging to solve due to the mixed-norm structure and possible grouping irregularities. We derive algorithms based on a variable splitting strategy and the alternating direction methodology. Extensive numerical results are presented to demonstrate the efficiency, stability and robustness of these algorithms, in comparison with the previously known state-of-the-art algorithms. We also extend the existing global convergence theory to allow more generality.
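One reason the alternating direction strategy works well here is that, after variable splitting, the subproblem attached to the mixed L21 regularizer has a closed form: group-wise soft-thresholding, which shrinks each group's l2 norm by the threshold. The sketch below shows only that proximal step, with illustrative (non-overlapping) groups; it is not the thesis implementation.

```python
import math

def group_soft_threshold(x, groups, tau):
    """Proximal operator of tau * sum_g ||x_g||_2 for a partition of indices:
    each group is scaled toward zero, and vanishes entirely when its l2 norm
    falls below tau (this is what promotes group sparsity)."""
    out = list(x)
    for g in groups:
        norm = math.sqrt(sum(x[i] ** 2 for i in g))
        scale = max(0.0, 1.0 - tau / norm) if norm > 0 else 0.0
        for i in g:
            out[i] = scale * x[i]
    return out
```

For x = [3, 4, 0.5] with groups {0, 1} and {2} and tau = 1, the first group (norm 5) shrinks to [2.4, 3.2] while the second (norm 0.5) is set exactly to zero.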
194

Cosmological Results and Implications in Effective DGP

Chow, Lik-Neng Nathan January 2009 (has links)
We study a simple extension of the decoupling limit of boundary effective actions for the Dvali-Gabadadze-Porrati model, by covariantizing the π Lagrangian and coupling to gravity in the usual way. This extension agrees with DGP to leading order in Mpl^−1 and simplifies the cosmological analysis. It is also shown to softly break the shift symmetry while remaining consistent with solar system observations. The generally covariant equations of motion for π and the metric are derived, and the cosmology is then developed under the Cosmological Principle. Three analytic solutions are found and their stability is studied. Interesting DGP phenomenology is reproduced, and we consider one of the stable solutions. The cosmological analogue of the Vainshtein effect is reproduced, and the effective equation of state, w_π, is shown to be bounded from above by −1. This solution is additionally shown to be an attractor in an expanding universe. We evolve π numerically, reproduce these properties, and show that the universe will go through a contraction phase due to this π field. We then place the constraint r_c ≥ 10^29 cm, given recent WMAP5 data. This lower bound on r_c gives an upper bound on the anomalous perihelion precession of the moon of ∼ 1 × 10^−13, two orders of magnitude below current experimental precision.
196

Green Supply Chain Design: A Lagrangian Approach

Merrick, Ryan J. 21 May 2010 (has links)
The expansion of supply chains into global networks has drastically increased the distances travelled along shipping lanes in a logistics system. Inherently, the increase in travel distances produces increased carbon emissions from transport vehicles. When increased emissions are combined with a carbon tax or emissions trading system, the result is a supply chain with increased costs attributable to the emissions generated on the transportation routes. Most traditional supply chain design models do not take emissions and carbon costs into account. Hence, there is a need to incorporate emission costs into a supply chain optimization model to see how the optimal supply chain configuration may be affected by the additional expenses. This thesis presents a mathematical programming model for the design of green supply chains. The costs of carbon dioxide (CO2) emissions were incorporated in the objective function, along with the fixed and transportation costs that are typically modeled in traditional facility location models. The model also determined the unit flows between the various nodes of the supply chain, with the objective of minimizing the total cost of the system by strategically locating warehouses throughout the network. The literature shows that CO2 emissions produced by a truck depend on the weight of the vehicle and can be modeled using a concave function. Hence, the carbon emissions produced along a shipping lane depend on the number of units and the weight of each unit travelling between the two nodes. Due to the concave nature of the emissions, the addition of the emission costs to the problem formulation created a nonlinear mixed integer programming (MIP) model. A solution algorithm was developed to evaluate the new problem formulation.
Lagrangian relaxation was used to decompose the problem by echelon and by potential warehouse site, resulting in a problem that required less computational effort to solve and allowed for much larger problems to be evaluated. A method was then suggested to exploit a property of the relaxed formulation and transform the problem into a linear MIP problem. The solution method computed the minimum cost for a complete network that would satisfy all the needs of the customers. A primal heuristic was introduced into the Lagrangian algorithm to generate feasible solutions. The heuristic utilized data from the Lagrangian subproblems to produce good feasible solutions. Due to the many characteristics of the original problem that were carried through to the subproblems, the heuristic produced very good feasible solutions that were typically within 1% of the Lagrangian bound. The proposed algorithm was evaluated through a number of tests. The rigidity of the problem and cost breakdown were varied to assess the performance of the solution method in many situations. The test results indicated that the addition of emission costs to a network can change the optimal configuration of the supply chain. As such, this study concluded that emission costs should be considered when designing supply chains in jurisdictions with carbon costs. Furthermore, the tests revealed that in regions without carbon costs it may be possible to significantly reduce the emissions produced by the supply chain with only a small increase in the cost to operate the system.
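Lagrangian algorithms like the one described above typically alternate between solving the relaxed subproblems and updating the multipliers by a subgradient step, with the step length driven by the gap between the best feasible (heuristic) cost and the current dual value. The sketch below shows only that generic multiplier update; it is a common textbook form, not the thesis code.

```python
def subgradient_step(lam, subgrad, best_ub, dual_val, scale=1.0):
    """Polyak-style update lam <- lam + t * g with step
    t = scale * (best_ub - dual_val) / ||g||^2, where g is the vector of
    violations of the relaxed constraints at the subproblem solution."""
    norm_sq = sum(g * g for g in subgrad)
    if norm_sq == 0:
        return list(lam)  # relaxed constraints already satisfied; keep multipliers
    t = scale * (best_ub - dual_val) / norm_sq
    return [l + t * g for l, g in zip(lam, subgrad)]
```

When the primal heuristic produces feasible solutions within 1% of the Lagrangian bound, as reported above, the gap term (best_ub - dual_val) shrinks and the step lengths decay accordingly.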
197

Minimizing Multi-zone Orders in the Correlated Storage Assignment Problem

Garfinkel, Maurice 14 January 2005 (has links)
A fundamental issue in warehouse operations is the storage location of the products the warehouse contains. Placing products intelligently within the system can greatly reduce order-pick costs. This is essential because order picking is a major cost of warehouse operations; for example, a study by Drury conducted in the UK found that 63% of warehouse operating costs are due to order picking. When orders contain a single item, the COI rule of Heskett is an optimal storage policy. This is not true when orders contain multiple line items, because no information is used about which products are ordered together. In this situation, products that are frequently ordered together should be stored together. This is the basis of the correlated storage assignment problem. Several previous researchers have considered how to form such clusters of products with the ultimate objective of minimizing travel time. In this dissertation, we focus on the alternative objective of minimizing multi-zone orders. We present a mathematical model and discuss properties of the problem. A Lagrangian relaxation solution approach is discussed. In addition, we both develop and adapt several heuristics from the literature to give upper bounds for the model. A cyclic exchange improvement method is also developed. This exponentially large neighborhood can be searched in polynomial time. Even for poor initial solutions, this method finds solutions that outperform the best approaches from the literature. Different product sizes, stock splitting, and rewarehousing are problem features that our model can handle. The cyclic exchange algorithm is also modified to allow these operating modes. In particular, stock splitting is a difficult issue that most previous research on correlated storage ignores. All of our algorithms are implemented and tested on data from a functioning warehouse.
For all data sets, the cyclic exchange algorithm outperforms COI, the standard industry approach, by an average of 15%.
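For context on the baseline being beaten: the COI (cube-per-order index) rule ranks each product by required storage space divided by pick frequency and assigns the lowest-ratio products to the most accessible locations. A minimal sketch with made-up product data (this is the standard industry rule, not the dissertation's cyclic exchange algorithm):

```python
def coi_ranking(products):
    """products: dict name -> (storage_space, picks_per_period).
    Returns names ordered by cube-per-order index (space / picks), so the
    first entry should receive the most accessible storage location."""
    return sorted(products, key=lambda p: products[p][0] / products[p][1])
```

Because COI looks at each product in isolation, it ignores which products appear on the same order, which is exactly the information the correlated storage assignment problem exploits.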
198

On The Algebraic Structure Of Relative Hamiltonian Diffeomorphism Group

Demir, Ali Sait 01 January 2008 (has links) (PDF)
Let M be a closed smooth symplectic manifold and L a closed Lagrangian submanifold of M. It was shown by Ozan that Ham(M,L), the group of relative Hamiltonian diffeomorphisms of M fixing the Lagrangian submanifold L setwise, is a subgroup equal to the kernel of the restriction of the flux homomorphism to the universal cover of the identity component of the relative symplectomorphisms. In this thesis we show that Ham(M,L) is a non-simple perfect group, by adapting a technique due to Thurston, Herman, and Banyaga. This technique requires the diffeomorphism group to act transitively, a property that fails in our case.
199

RELAXATION HEURISTICS FOR THE SET COVERING PROBLEM

Umetani, Shunji, Yagiura, Mutsunori, 柳浦, 睦憲 12 1900 (has links) (PDF)
No description available.
200

Multi-beam-interference-based methodology for the fabrication of photonic crystal structures

Stay, Justin L. 23 October 2009 (has links)
A variety of techniques are available for fabricating photonic crystal structures. Multi-beam-interference lithography (MBIL) is a relatively new technique which offers many advantages over more traditional means of fabrication. Unlike the more common fabrication methods such as optical and electron-beam lithography, MBIL can produce both two- and three-dimensional large-area photonic crystal structures for use in the infrared and visible light regimes. While multi-beam-interference lithography represents a promising methodology for the fabrication of PC structures, the understanding of MBIL itself has been incomplete. The research in this thesis focuses on providing a more complete, systematic description of MBIL in order to demonstrate its full capabilities. Both three- and four-beam interference are analyzed and described in terms of contrast and crystallography. The concept of a condition for primitive-lattice-vector-direction equal contrasts is introduced in this thesis. These conditions are developed as nonlinear constraints when optimizing absolute contrast to produce lithographically useful interference patterns (meaning high contrast and localized intensity extrema). By understanding the richness of possibilities within MBIL, a number of useful interference patterns are found that can be created in a straightforward manner. These patterns can be both lithographically useful and structurally useful (providing interference contours that can define wide-bandgap photonic crystals). This investigation includes theoretical calculations of band structures for photonic crystals that can be fabricated through MBIL. The resulting calculations show that most MBIL-defined structures exhibit performance characteristics similar to conventionally designed photonic crystal structures, and in some cases MBIL-defined structures show a significant increase in bandgap size.
Using the results of this analysis, a number of hexagonal photonic crystals are fabricated under a variety of process conditions. It is shown that both rod- and hole-type photonic crystal structures can be fabricated using processes based on both positive and negative photoresist. The "light-field" and "dark-field" interference patterns used to define the hexagonal photonic crystal structures are quickly interchanged by properly adjusting each beam's intensity and polarization. The resulting structures, including a large-area (~1 cm², 1 × 10⁹ lattice points) photonic crystal, are imaged using a scanning electron microscope. Multi-beam-interference lithography provides an enabling initial step for the wafer-scale, cost-effective integration of impressive PC-based devices into manufacturable DIPCS. While multi-beam-interference lithography represents a promising methodology for the fabrication of PC structures, it lacks the ability to produce PC-based integrated photonic circuits. Future research will target the lack of a large-scale, cost-effective fabrication methodology for photonic crystal devices. By utilizing diffractive elements, a photo-mask will be able to combine both MBIL and conventional lithography techniques into a single fabrication technology while taking advantage of the inherent positive attributes of both.
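The absolute contrast optimized in this work is a simple functional of the multi-beam intensity pattern. As a hedged illustration (scalar fields only, ignoring polarization, with an arbitrary beam geometry), the pattern and its contrast can be evaluated numerically:

```python
import cmath
import math

def intensity(point, wavevectors, amplitudes):
    """|sum_j a_j exp(i k_j . r)|^2 at a 2-D point r (scalar-field model)."""
    field = sum(a * cmath.exp(1j * (k[0] * point[0] + k[1] * point[1]))
                for k, a in zip(wavevectors, amplitudes))
    return abs(field) ** 2

def absolute_contrast(wavevectors, amplitudes, n=50, span=2.0):
    """V = (Imax - Imin) / (Imax + Imin), sampled on an n x n grid over a
    span x span region; the quantity to be maximized for lithographic use."""
    vals = [intensity((span * i / n, span * j / n), wavevectors, amplitudes)
            for i in range(n) for j in range(n)]
    hi, lo = max(vals), min(vals)
    return (hi - lo) / (hi + lo)
```

Two equal-amplitude beams give unit contrast; with three or four beams the contrast depends on the amplitude and polarization choices, which is where the equal-contrast conditions above come in.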
