191

Lagrangian decomposition of the Hadley Cells

Kjellsson, Joakim January 2009 (has links)
The Lagrangian trajectory code TRACMASS is extended to the atmosphere to examine the tropical Hadley Cells using fields from the ERA-Interim reanalysis dataset. The analysis is made using pressure, temperature, and specific humidity as vertical coordinates. By letting a trajectory represent a mass transport and tracing millions of trajectories in a domain between the latitudes 15°N and 15°S, a mass stream function based on trajectories is obtained (the Lagrangian stream function). By separating the trajectories into classes depending on their starting point and destination ("North-to-North", "North-to-South", "South-to-North" and "South-to-South"), the mass stream function is decomposed into four paths. This cannot be done if the stream function is calculated directly from the velocity fields (the Eulerian stream function). Using this technique, the mass transports recirculating within the cells are compared to the mass transports between the cells, giving further insight into the structure of the Hadley Circulations. The magnitudes of the mass stream functions are presented by converting the volume flux unit Sverdrup into a mass flux unit. It is found that the recirculating transports of the northern and southern cells are 473 Sv and 508 Sv respectively. The inter-hemispheric mass transports are 126 Sv northward and 125 Sv southward. It is also found that by no means all trajectories follow paths similar to the streamlines, since the streamlines are zonal and temporal means while the particle trajectories are chaotic.
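To illustrate the decomposition described above, here is a minimal sketch (not the TRACMASS code) of how trajectories carrying a fixed mass transport could be sorted into the four start/destination classes and their transports summed; all names and values are illustrative assumptions.

```python
import numpy as np

def classify(start_lat, end_lat):
    """Assign a trajectory to one of the four Lagrangian classes
    based on the hemisphere of its start and end points."""
    a = "North" if start_lat >= 0.0 else "South"
    b = "North" if end_lat >= 0.0 else "South"
    return f"{a}-to-{b}"

def class_transports(start_lats, end_lats, transports):
    """Sum the mass transport (kg/s) carried by each trajectory class."""
    totals = {}
    for s, e, m in zip(start_lats, end_lats, transports):
        c = classify(s, e)
        totals[c] = totals.get(c, 0.0) + m
    return totals

# Illustrative usage with synthetic launch/arrival latitudes:
rng = np.random.default_rng(0)
n = 100_000
start = rng.uniform(-15.0, 15.0, n)   # launch latitude (degrees)
end = rng.uniform(-15.0, 15.0, n)     # final latitude (degrees)
mass = np.full(n, 1.0e6)              # assumed transport per trajectory (kg/s)
print(class_transports(start, end, mass))
```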
192

Recovering Data with Group Sparsity by Alternating Direction Methods

Deng, Wei 06 September 2012 (has links)
Group sparsity reveals underlying sparsity patterns and contains rich structural information in data. Hence, exploiting group sparsity facilitates more efficient techniques for recovering large and complicated data in applications such as compressive sensing, statistics, signal and image processing, machine learning, and computer vision. This thesis develops efficient algorithms for solving a class of optimization problems with group-sparse solutions, where arbitrary group configurations are allowed and the mixed ℓ2,1-regularization is used to promote group sparsity. Such optimization problems can be quite challenging to solve due to the mixed-norm structure and possible grouping irregularities. We derive algorithms based on a variable-splitting strategy and the alternating direction methodology. Extensive numerical results are presented to demonstrate the efficiency, stability and robustness of these algorithms in comparison with previously known state-of-the-art algorithms. We also extend the existing global convergence theory to allow more generality.
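As a concrete instance of the variable-splitting/alternating-direction idea, the sketch below solves the textbook non-overlapping group lasso, minimizing 0.5||Ax − b||² + λ Σ_g ||x_g||₂, by ADMM with block soft-thresholding. This is a standard baseline under simplifying assumptions (non-overlapping groups, unit group weights), not the thesis's more general algorithm.

```python
import numpy as np

def group_soft_threshold(v, groups, kappa):
    """Block soft-thresholding: prox of kappa * sum_g ||v_g||_2."""
    z = np.zeros_like(v)
    for g in groups:
        ng = np.linalg.norm(v[g])
        if ng > kappa:
            z[g] = (1.0 - kappa / ng) * v[g]
    return z

def group_lasso_admm(A, b, groups, lam=0.1, rho=1.0, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2 via ADMM
    with the splitting x = z (non-overlapping groups assumed)."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Factor (A^T A + rho I) once; it is reused in every x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # (A^T A + rho I) x = rhs
        z = group_soft_threshold(x + u, groups, lam / rho)
        u = u + x - z
    return z

# Illustrative usage: recover a signal with one active group of four entries.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 12))
x_true = np.zeros(12)
x_true[4:8] = rng.standard_normal(4)
b = A @ x_true + 0.01 * rng.standard_normal(30)
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
x_hat = group_lasso_admm(A, b, groups)
```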
193

Cosmological Results and Implications in Effective DGP

Chow, Lik-Neng Nathan January 2009 (has links)
We study a simple extension of the decoupling limit of boundary effective actions for the Dvali-Gabadadze-Porrati model, obtained by covariantizing the π Lagrangian and coupling it to gravity in the usual way. This extension agrees with DGP to leading order in M_Pl^−1 and simplifies the cosmological analysis. It is also shown to softly break the shift symmetry while remaining consistent with solar system observations. The generally covariant equations of motion for π and the metric are derived, and the cosmology is then developed under the Cosmological Principle. Three analytic solutions are found and their stability is studied. Interesting DGP phenomenology is reproduced, and we consider one of the stable solutions. The cosmological analogue of the Vainshtein effect is reproduced, and the effective equation of state, w_π, is shown to be bounded above by −1. This solution is additionally shown to be an attractor in an expanding universe. We evolve π numerically, reproduce these properties, and show that the universe will go through a contraction phase due to this π field. We then place the constraint r_c ≥ 10^29 cm, given recent WMAP5 data. This lower bound on r_c gives an upper bound on the anomalous perihelion precession of the Moon of ∼1 × 10^−13, two orders of magnitude below current experimental precision.
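For orientation, the decoupling-limit π Lagrangian that such extensions start from is commonly quoted in the DGP literature in the form below; sign and normalization conventions vary between references, and this expression is supplied here for context rather than taken from the thesis itself.

```latex
\mathcal{L}_{\pi} \;=\; 3\,(\partial\pi)^{2}
\;-\; \frac{1}{\Lambda^{3}}\,(\partial\pi)^{2}\,\Box\pi
\;+\; \frac{1}{M_{\mathrm{Pl}}}\,\pi\,T^{\mu}{}_{\mu},
\qquad
\Lambda^{3} \,=\, \frac{M_{\mathrm{Pl}}}{r_{c}^{2}} .
```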
195

Green Supply Chain Design: A Lagrangian Approach

Merrick, Ryan J. 21 May 2010 (has links)
The expansion of supply chains into global networks has drastically increased the distances travelled along shipping lanes in a logistics system. Inherently, the increase in travel distances produces increased carbon emissions from transport vehicles. When increased emissions are combined with a carbon tax or an emissions trading system, the result is a supply chain with increased costs attributable to the emissions generated on the transportation routes. Most traditional supply chain design models do not take emissions and carbon costs into account. Hence, there is a need to incorporate emission costs into a supply chain optimization model to see how the optimal supply chain configuration may be affected by the additional expenses.
This thesis presents a mathematical programming model for the design of green supply chains. The costs of carbon dioxide (CO2) emissions were incorporated in the objective function, along with the fixed and transportation costs that are typically modeled in traditional facility location models. The model also determined the unit flows between the various nodes of the supply chain, with the objective of minimizing the total cost of the system by strategically locating warehouses throughout the network. The literature shows that the CO2 emissions produced by a truck depend on the weight of the vehicle and can be modeled using a concave function. Hence, the carbon emissions produced along a shipping lane depend on the number of units and the weight of each unit travelling between the two nodes. Due to the concave nature of the emissions, adding the emission costs to the problem formulation created a nonlinear mixed integer programming (MIP) model.
A solution algorithm was developed to evaluate the new problem formulation. Lagrangian relaxation was used to decompose the problem by echelon and by potential warehouse site, resulting in a problem that required less computational effort to solve and allowed much larger problems to be evaluated; a generic sketch of this approach follows the abstract. A method was then suggested to exploit a property of the relaxed formulation and transform the problem into a linear MIP problem. The solution method computed the minimum cost for a complete network that would satisfy all the needs of the customers. A primal heuristic was introduced into the Lagrangian algorithm to generate feasible solutions. The heuristic utilized data from the Lagrangian subproblems, and because many characteristics of the original problem carry through to the subproblems, it produced very good feasible solutions that were typically within 1% of the Lagrangian bound.
The proposed algorithm was evaluated through a number of tests. The rigidity of the problem and the cost breakdown were varied to assess the performance of the solution method in many situations. The test results indicated that the addition of emission costs to a network can change the optimal configuration of the supply chain. As such, this study concluded that emission costs should be considered when designing supply chains in jurisdictions with carbon costs. Furthermore, the tests revealed that in regions without carbon costs it may be possible to significantly reduce the emissions produced by the supply chain with only a small increase in the cost to operate the system.
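Below is a minimal, generic sketch of the subgradient loop typically used to optimize a Lagrangian dual bound for a relaxed minimization MIP. The thesis's echelon/warehouse decomposition and primal heuristic are not reproduced; `solve_relaxed` is a hypothetical callback and all names are illustrative.

```python
import numpy as np

def subgradient_dual_ascent(solve_relaxed, n_constraints, n_iter=200, step0=1.0):
    """Maximize the Lagrangian dual bound by projected subgradient ascent.

    solve_relaxed(lmbda) is assumed to solve the relaxed subproblems for
    multipliers lmbda and return (dual_value, violation), where violation[i]
    is the amount by which relaxed constraint i is violated -- this vector
    is a subgradient of the dual function at lmbda.
    """
    lmbda = np.zeros(n_constraints)
    best_bound = -np.inf
    for k in range(n_iter):
        dual_value, g = solve_relaxed(lmbda)
        best_bound = max(best_bound, dual_value)   # dual values lower-bound the MIP optimum
        step = step0 / (k + 1.0)                   # diminishing step size
        lmbda = np.maximum(0.0, lmbda + step * g)  # project onto lmbda >= 0
    return best_bound, lmbda
```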
196

Minimizing Multi-zone Orders in the Correlated Storage Assignment Problem

Garfinkel, Maurice 14 January 2005 (has links)
A fundamental issue in warehouse operations is the storage location of the products the warehouse contains. Placing products intelligently within the system can allow for great reductions in order-picking costs. This is essential because order picking is a major cost of warehouse operations; for example, a study by Drury conducted in the UK found that 63% of warehouse operating costs are due to order picking. When orders contain a single item, the cube-per-order index (COI) rule of Heskett is an optimal storage policy; a sketch of this baseline follows the abstract. This is not true when orders contain multiple line items, because the rule uses no information about which products are ordered together. In that situation, products that are frequently ordered together should be stored together, which is the basis of the correlated storage assignment problem. Several previous researchers have considered how to form such clusters of products with the ultimate objective of minimizing travel time. In this dissertation, we focus on the alternate objective of minimizing multi-zone orders. We present a mathematical model and discuss properties of the problem. A Lagrangian relaxation solution approach is discussed. In addition, we both develop and adapt several heuristics from the literature to give upper bounds for the model. A cyclic exchange improvement method is also developed: although its neighborhood is of exponential size, it can be searched efficiently in polynomial time. Even for poor initial solutions, this method finds solutions which outperform the best approaches from the literature. Different product sizes, stock splitting, and rewarehousing are problem features that our model can handle, and the cyclic exchange algorithm is also modified to allow these operating modes. In particular, stock splitting is a difficult issue which most previous research in correlated storage ignores. All of our algorithms are implemented and tested on data from a functioning warehouse. For all data sets, the cyclic exchange algorithm outperforms COI, the standard industry approach, by an average of 15%.
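For context, a minimal sketch of the single-item-optimal COI baseline mentioned above: rank products by (storage space required)/(pick frequency) and give the lowest-COI products the cheapest pick locations. Data and names are illustrative assumptions.

```python
def coi_assignment(space, frequency, locations_by_cost):
    """Cube-per-order-index rule: products with the lowest ratio of
    required storage space to pick frequency get the best locations."""
    coi = {p: space[p] / frequency[p] for p in space}
    ranked = sorted(coi, key=coi.get)             # ascending COI
    return dict(zip(ranked, locations_by_cost))   # cheapest locations first

# Illustrative data: storage space (slots) and picks per period.
space = {"A": 2.0, "B": 1.0, "C": 4.0}
frequency = {"A": 50, "B": 10, "C": 40}
slots = ["slot_near", "slot_mid", "slot_far"]     # ordered by pick cost
print(coi_assignment(space, frequency, slots))
# -> {'A': 'slot_near', 'B': 'slot_mid', 'C': 'slot_far'}
```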
197

On The Algebraic Structure Of Relative Hamiltonian Diffeomorphism Group

Demir, Ali Sait 01 January 2008 (has links) (PDF)
Let M be a smooth, closed symplectic manifold and L a closed Lagrangian submanifold of M. It was shown by Ozan that Ham(M,L), the group of relative Hamiltonian diffeomorphisms of M fixing the Lagrangian submanifold L setwise, is a subgroup equal to the kernel of the restriction of the flux homomorphism to the universal cover of the identity component of the relative symplectomorphisms. In this thesis we show that Ham(M,L) is a non-simple perfect group, by adapting a technique due to Thurston, Herman, and Banyaga. This technique requires that the diffeomorphism group act transitively, a property that fails to hold in our case.
198

Relaxation Heuristics for the Set Covering Problem

Umetani, Shunji, Yagiura, Mutsunori, 柳浦, 睦憲 12 1900 (has links) (PDF)
No description available.
199

Multi-beam-interference-based methodology for the fabrication of photonic crystal structures

Stay, Justin L. 23 October 2009 (has links)
A variety of techniques are available for the fabrication of photonic crystal (PC) structures. Multi-beam-interference lithography (MBIL) is a relatively new technique that offers many advantages over more traditional means of fabrication. Unlike the more common fabrication methods such as optical and electron-beam lithography, MBIL can produce both two- and three-dimensional large-area photonic crystal structures for use in the infrared and visible regimes. While MBIL represents a promising methodology for the fabrication of PC structures, the understanding of MBIL itself has been incomplete. The research in this thesis focuses on providing a more complete, systematic description of MBIL in order to demonstrate its full capabilities. Both three- and four-beam interference are analyzed and described in terms of contrast and crystallography. The concept of a condition for primitive-lattice-vector-direction equal contrasts is introduced in this thesis. These conditions are developed as nonlinear constraints when optimizing absolute contrast to produce lithographically useful interference patterns (meaning high contrast and localized intensity extrema). By exploring the richness of possibilities within MBIL, a number of useful interference patterns are found that can be created in a straightforward manner; a simplified numerical illustration follows this abstract. These patterns can be both lithographically useful and structurally useful (providing interference contours that can define wide-bandgap photonic crystals). This investigation includes theoretical calculations of band structures for photonic crystals that can be fabricated through MBIL. The resulting calculations show that most MBIL-defined structures exhibit performance characteristics similar to those of conventionally designed photonic crystal structures, and that in some cases MBIL-defined structures show a significant increase in bandgap size. Using the results from this analysis, a number of hexagonal photonic crystals are fabricated using a variety of process conditions. It is shown that both rod- and hole-type photonic crystal structures can be fabricated using processes based on both positive and negative photoresist. The "light-field" and "dark-field" interference patterns used to define the hexagonal photonic crystal structures are quickly interchanged by proper adjustment of each beam's intensity and polarization. The resulting structures, including a large-area (~1 cm², 1 x 10⁹ lattice points) photonic crystal, are imaged using a scanning electron microscope. Multi-beam-interference lithography provides an enabling initial step for the wafer-scale, cost-effective integration of impressive PC-based devices into manufacturable DIPCS. However, MBIL alone cannot produce PC-based integrated photonic circuits. Future research will therefore target the lack of a large-scale, cost-effective fabrication methodology for photonic crystal devices: by utilizing diffractive elements, a photo-mask will be able to combine both MBIL and conventional lithography techniques into a single fabrication technology while taking advantage of the inherent positive attributes of both.
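The simplified illustration referenced above computes the interference intensity of several plane waves and its contrast on a 2-D grid. It uses a scalar-field approximation, whereas the analysis in the thesis must track each beam's polarization; the wavelength, angles, and amplitudes here are illustrative assumptions.

```python
import numpy as np

def interference_intensity(k_vectors, amplitudes, X, Y, Z=0.0):
    """Total intensity |sum_i a_i exp(i k_i . r)|^2 on a grid
    (scalar model; polarization effects are ignored)."""
    field = np.zeros_like(X, dtype=complex)
    for k, a in zip(k_vectors, amplitudes):
        field += a * np.exp(1j * (k[0] * X + k[1] * Y + k[2] * Z))
    return np.abs(field) ** 2

def contrast(I):
    """Interference contrast V = (Imax - Imin) / (Imax + Imin)."""
    return (I.max() - I.min()) / (I.max() + I.min())

# Three symmetric "umbrella" beams producing a hexagonal pattern in z = 0.
lam = 0.5                          # wavelength in micrometres (illustrative)
k0 = 2 * np.pi / lam
theta = np.deg2rad(30.0)           # common incidence angle (illustrative)
phis = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
ks = [k0 * np.array([np.sin(theta) * np.cos(p),
                     np.sin(theta) * np.sin(p),
                     np.cos(theta)]) for p in phis]
x = np.linspace(0.0, 2.0, 256)
X, Y = np.meshgrid(x, x)
I = interference_intensity(ks, [1.0, 1.0, 1.0], X, Y)
print(f"contrast = {contrast(I):.3f}")
```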
200

Fixed-scale statistics and the geometry of turbulent dispersion at high Reynolds number via numerical simulation

Hackl, Jason F. 17 May 2011 (has links)
The relative dispersion of one fluid particle with respect to another is fundamentally related to the transport and mixing of contaminant species in turbulent flows. The most basic consequence of Kolmogorov's 1941 similarity hypotheses for relative dispersion, the Richardson-Obukhov law that mean-square pair separation distance grows with the cube of time at intermediate times in the inertial subrange, is notoriously difficult to observe in the environment, the laboratory, and direct numerical simulations (DNS). Inertial-subrange scaling in size parameters like the mean-square pair separation requires careful adjustment for the initial conditions of the dispersion process as well as a very wide range of scales (high Reynolds number) in the flow being studied. However, the statistical evolution of the shapes of clusters of more than two particles has already exhibited statistical invariance at intermediate times in existing DNS. This invariance is identified with inertial-subrange scaling and is more readily observed than inertial-subrange scaling for seemingly simpler quantities such as the mean-square pair separation.
Results from dispersion of clusters of four particles (called tetrads) in large-scale DNS at grid resolutions up to 4096 points in each of three directions and Taylor-scale Reynolds numbers from 140 to 1000 are used to explore the question of statistical universality in measures of the size and shape of tetrahedra in homogeneous isotropic turbulence, in distinct scaling regimes at very small times (ballistic), intermediate times (inertial), and very late times (diffusive). Derivatives of fractional powers of the mean-square pair separation with respect to time, normalized by the characteristic time scale at the initial tetrad size, constitute a powerful technique for isolating cubic time scaling in the mean-square pair separation; the corresponding relations are written out below. This technique is applied to the eigenvalues of a moment-of-inertia-like tensor formed from the separation vectors between particles in the tetrad. Estimates of the proportionality constant "g" in the Richardson-Obukhov law from DNS at a Taylor-scale Reynolds number of 1000 converge towards the value g=0.56 reported in previous studies. The exit time taken by a particle pair to first reach successively larger thresholds of fixed separation distance is also briefly discussed; it is found to have an unexplained dependence on initial separation distance for negative moments, but good inertial-range scaling for positive moments. The use of diffusion models of relative dispersion in the inertial subrange to connect mean exit time to "g" is also tested and briefly discussed for these simulations.
Mean values and probability density functions of shape parameters, including the triangle aspect ratio "w," the tetrahedron volume-to-gyration-radius ratio, and normalized moment-of-inertia eigenvalues, are all found to approach invariant forms in the inertial subrange for a wider range of initial separations than size parameters such as the mean-square gyration radius. These results constitute the clearest evidence to date that turbulence has a tendency to distort and elongate multiparticle configurations more severely in the inertial subrange than it does in the diffusive regime at asymptotically late time. Triangle statistics are found to be independent of initial shape for all time beyond the ballistic regime.
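For reference, the Richardson-Obukhov law and the fractional-power derivative diagnostic referenced above can be written as follows (a standard formulation with ε the mean energy dissipation rate, supplied for context rather than copied from the thesis):

```latex
\langle r^{2}(t) \rangle = g\,\varepsilon\,t^{3},
\qquad \eta \ll r \ll L,
\qquad\Longrightarrow\qquad
\frac{d}{dt}\,\langle r^{2}(t) \rangle^{1/3} = (g\,\varepsilon)^{1/3} = \text{const}.
```

A plateau in the cube-root derivative thus identifies the inertial-subrange regime and yields an estimate of g.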
The development and testing of different schemes for parallelizing the cubic-spline interpolation procedure used to obtain the particle velocities needed to track particles in DNS is also covered. A "pipeline" method that moves batches of particles from processor to processor is adopted for its low memory overhead, but achieving good performance scaling remains challenging.
