  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Mass transport due to surface waves in a water-mud system

Huang, Lingyan. January 2005 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2006. / Title proper from title frame. Also available in printed format.
112

Methodology to analyse three dimensional droplet dispersion applicable to Icing Wind Tunnels

Sorato, Sebastiano January 2009 (has links)
This dissertation presents a methodology to simulate the dispersion of water droplets in the airflow typical of an icing wind tunnel. It is based on understanding the physical parameters that influence the uniformity and distribution of the cloud of droplets in the airflow, and on connecting them with analytical parameters that may be used to describe the dispersion process. Specifically, it investigates the main geometrical and physical parameters contributing to droplet dispersion at different tunnel operating conditions, finding a consistent numerical approach to reproduce the local droplet dynamics, quantifying the possible limits of commercial CFD methods, extracting the empirical parameters/constants needed to simulate the local conditions properly, and validating the results with calibrated experiments. An overview of the turbulence and multiphase flow theories considered relevant to the icing tunnel environment is presented, as well as basic concepts and terminology of particle dispersion. Taylor's theory of particle dispersion is taken as the starting point to explain subsequent historical developments in discrete-phase dispersion. Common methods incorporated in commercial CFD software are explained and their shortcomings underlined. The local aerodynamic conditions within the tunnel, which are required to integrate the Lagrangian particle equations of motion, are generated numerically using different turbulence models and are compared to the classical k-ε model. Verification of the calculation is performed with grid-independence studies. Stochastic Separated Flow methods are applied to compute the particle trajectories. The Discrete Random Walk, as described in the literature, is used to perform the particle dispersion analysis. Numerical settings in the code are related to the characteristics of the local turbulent conditions, such as turbulence intensity and length scales. Cont/d.
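The Discrete Random Walk mentioned above is an eddy-interaction model: each droplet sees a Gaussian velocity fluctuation sampled from the local turbulence and held for one eddy lifetime before being resampled. A minimal single-particle sketch, assuming a massless tracer and the commonly used (but here merely illustrative) time-scale constant C_L = 0.15; all function names are ours, not the thesis's:

```python
import numpy as np

def discrete_random_walk(u_mean, k, eps, x0, dt, n_steps, rng=None):
    """Single-droplet Discrete Random Walk (eddy-interaction) sketch.

    u_mean : callable x -> mean fluid velocity, shape (3,)
    k, eps : callables x -> turbulent kinetic energy / dissipation rate
    """
    rng = rng or np.random.default_rng(0)
    C_L = 0.15                       # assumed Lagrangian time-scale constant
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    u_fluct = np.zeros(3)
    t_eddy = 0.0                     # remaining lifetime of the current eddy
    for _ in range(n_steps):
        if t_eddy <= 0.0:
            # new eddy: isotropic fluctuation, std dev sqrt(2k/3) per component
            sigma = np.sqrt(2.0 * k(x) / 3.0)
            u_fluct = rng.normal(0.0, sigma, size=3)
            t_eddy = C_L * k(x) / max(eps(x), 1e-12)
        x = x + (u_mean(x) + u_fluct) * dt   # advect (droplet inertia neglected)
        t_eddy -= dt
        path.append(x.copy())
    return np.array(path)
```

Dispersion statistics would then be gathered by averaging many such trajectories; adding droplet inertia would replace the direct advection with a drag-law equation of motion.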
113

Sur la topologie des sous-variétés lagrangiennes monotones de l'espace projectif complexe / A topological constraint for monotone Lagrangians in the complex projective space

Schatz, Simon 26 September 2016 (has links)
Maximal isotropic submanifolds in symplectic geometry are called Lagrangians; among these, one distinguishes the monotone Lagrangians. Historically, their definition was motivated in part by the construction of Lagrangian Floer homology; they thus form a more rigid, less extensive class of Lagrangians. This manuscript establishes a constraint on the fundamental group of certain monotone Lagrangians, which applies in particular when the ambient symplectic manifold is the complex projective space. One consequence of the main theorem is to exclude a whole class of classical examples of Lagrangians, due to L. Polterovich, from the monotone case. It also leads to a discussion of the possible topologies in dimension 3. / This thesis establishes a topological constraint on the fundamental group of some monotone Lagrangians. One useful consequence is to rule out a class of examples of Lagrangians due to L. Polterovich as monotone ones. It also leads to a discussion of the possible topologies in dimension 3.
114

Moving Source Identification in an Uncertain Marine Flow: Mediterranean Sea Application

Hammoud, Mohamad Abed ElRahman 03 1900 (has links)
Identifying marine pollutant sources is essential in order to assess, contain and minimize their risk. We propose a Lagrangian Particle Tracking (LPT) algorithm to study the transport of passive tracers continuously released from fixed and moving sources and to identify their source in a backward mode. The LPT is designed to operate with uncertain flow fields, described by an ensemble of realizations of the sea currents. Starting from a region of high probability, reverse tracking is used to generate inverse maps. A probability-weighted distance between the resulting inverse maps and the source trajectory is then minimized to identify the likely source of pollution. We conduct realistic simulations to demonstrate the efficiency of the proposed algorithm in the Mediterranean Sea using ocean data available from the Copernicus Marine Environment Monitoring Service. Passive tracers are released along the path of a ship and propagated with an ensemble of flow fields forward in time to generate a probability map, which is then used for the inverse problem of source identification. Our experiments suggest that the algorithm is able to efficiently capture the release time and source, with some test cases successfully pinpointing the release time and source up to two weeks back in time.
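The forward and reverse tracking described above can be sketched with a simple midpoint (RK2) integrator applied to each member of the flow ensemble; a negative time step gives the backward tracking used to build the inverse maps. Function names and the integrator choice are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def track(x0, velocity, t0, t1, dt):
    """Advect one passive tracer with an explicit midpoint (RK2) step.
    velocity : callable (x, t) -> velocity vector.  A negative dt tracks
    the particle backward in time (reverse tracking)."""
    x, t = np.array(x0, dtype=float), t0
    for _ in range(int(round((t1 - t0) / dt))):
        k1 = velocity(x, t)
        k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)  # midpoint evaluation
        x = x + dt * k2
        t += dt
    return x

def end_positions(x0, ensemble, t0, t1, dt):
    """End positions over an ensemble of current realizations; histogramming
    them approximates the probability map used for source identification."""
    return np.array([track(x0, v, t0, t1, dt) for v in ensemble])
```

In the realistic setting each `velocity` member would interpolate a gridded CMEMS current field in space and time rather than being an analytic function.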
115

Eulerian on Lagrangian Cloth Simulation

Piddington, Kyle C 01 June 2017 (has links)
This thesis introduces a novel Eulerian-on-Lagrangian (EoL) approach for simulating cloth. This approach allows for the simulation of traditionally difficult cloth scenarios, such as draping and sliding cloth over sharp features like the edge of a table. A traditional Lagrangian approach models cloth as a series of connected nodes. These nodes are free to move in 3D space but have difficulty sliding over hard edges: the cloth cannot always bend smoothly around these edges, as motion can only occur at existing nodes. An EoL approach adds flexibility to a Lagrangian approach by constructing special Eulerian-on-Lagrangian nodes (EoL nodes), through which cloth material can pass at a fixed point. On contact with the edge of a box, EoL nodes are introduced directly on the edge. These nodes allow the cloth to bend exactly at the edge and to pass smoothly over the area while sliding. Using this Eulerian-on-Lagrangian discretization, a set of rules for introducing and constraining EoL nodes, and an adaptive remesher, the simulator allows cloth to move in a sliding motion over sharp edges. The current implementation is limited to cloth collision with static boxes, but the method presented can be expanded to include contact with more complicated meshes and dynamic rigid bodies.
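In one dimension the EoL idea reduces to a node whose spatial position is pinned at the edge while its material coordinate remains free, so cloth material flows through a fixed point in space. A deliberately tiny sketch of that bookkeeping for an assumed unit-length strand (nothing here is taken from the thesis's implementation):

```python
def slide_over_edge(s_edge, ds):
    """A unit-length strand hangs over a fixed edge point.  s_edge is the
    material coordinate of the EoL node sitting on the edge (0 <= s_edge <= 1).
    Sliding by ds moves material through the edge: the edge point itself
    never moves, but the rest length on each side of it changes."""
    s_new = min(max(s_edge + ds, 0.0), 1.0)   # clamp to the strand's ends
    left, right = s_new, 1.0 - s_new          # material on each side of the edge
    return s_new, left, right
```

A full implementation couples this Eulerian coordinate to the Lagrangian equations of motion and remeshes as EoL nodes are created and removed.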
116

Study of flow and heat transfer features of nanofluids using multiphase models : Eulerian multiphase and discrete Lagrangian approaches

Mahdavi, Mostafa January 2016 (has links)
Choosing correct boundary conditions and flow field characteristics, and employing the right thermophysical fluid properties, can affect the simulation of convective heat transfer using nanofluids. Nanofluids have shown higher heat transfer performance in comparison with conventional heat transfer fluids. The suspension of the nanoparticles in nanofluids creates a larger interaction surface-to-volume ratio. Therefore, they can be distributed uniformly to bring about the most effective enhancement of heat transfer without causing a considerable pressure drop. These advantages make nanofluids a desirable heat transfer fluid in the cooling and heating industries. The thermal effects of nanofluids in both forced and free convection flows have interested researchers to a great extent in the last decade. Investigating the interaction mechanisms occurring between nanoparticles and the base fluid is the main goal of the study. These mechanisms can be explained via different approaches through theoretical and numerical methods. Two common approaches regarding particle-fluid interactions are Eulerian-Eulerian and Eulerian-Lagrangian; the dominant concepts in each are slip velocity and interaction forces, respectively. The mixture multiphase model, as part of the Eulerian-Eulerian approach, deals with slip mechanisms and, to some extent, mass diffusion from the nanoparticle phase to the fluid phase. The slip velocity can be induced by a pressure gradient, buoyancy, virtual mass, and attraction and repulsion between particles. Some of the diffusion processes can be caused by gradients of temperature and concentration. The discrete phase model (DPM) is a part of the Eulerian-Lagrangian approach. The interactions between the solid and liquid phases are represented as forces such as drag, the pressure-gradient force, the virtual-mass force, gravity, electrostatic forces, and thermophoretic and Brownian forces. 
The energy transfer from particle to continuous phase can be introduced through both convection and conduction terms on the surface of the particles. A study of both approaches was conducted for laminar and turbulent forced convection as well as natural convection in a cavity. The cases included horizontal and vertical pipes and a rectangular cavity. An experimental study was conducted for the cavity flow for comparison with the simulation results, while the forced convection results were evaluated against data from the literature. Alumina and zinc oxide nanoparticles of different sizes were used in the cavity experiments and, likewise, in the simulations. All the equations, slip mechanisms and forces were implemented in ANSYS-Fluent through user-defined functions. The comparison showed good agreement between experiments and numerical results. The Nusselt number and pressure drop, the heat transfer and flow features of the nanofluid, were found to lie within the accuracy range of the experimental measurements. The findings of the two approaches differed somewhat, especially regarding the concentration distribution: the mixture model provided a more uniform distribution in the domain than the DPM. Due to the Lagrangian frame of the DPM, the simulation time of this model was much longer. The method proposed in this research could also be a useful tool for other areas of particulate systems. / Thesis (PhD)--University of Pretoria, 2016. / Mechanical and Aeronautical Engineering / PhD / Unrestricted
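A single explicit integration step of such a discrete-phase force balance, reduced here to Stokes drag plus gravity with buoyancy (the thesis's model also carries pressure-gradient, virtual-mass, electrostatic, thermophoretic and Brownian terms), might look like the following sketch; the names and the symplectic-Euler update are our assumptions:

```python
import numpy as np

def dpm_step(xp, up, u_fluid, dp, rho_p, rho_f, mu, dt,
             g=np.array([0.0, 0.0, -9.81])):
    """One explicit step of a simplified discrete-phase force balance.

    xp, up  : particle position and velocity (3,)
    u_fluid : fluid velocity seen at the particle location (3,)
    dp      : particle diameter; rho_p, rho_f: particle/fluid densities
    mu      : fluid dynamic viscosity
    """
    tau_p = rho_p * dp**2 / (18.0 * mu)     # Stokes relaxation time
    a_drag = (u_fluid - up) / tau_p         # Stokes drag acceleration
    a_buoy = g * (1.0 - rho_f / rho_p)      # gravity with buoyancy correction
    up_new = up + dt * (a_drag + a_buoy)
    xp_new = xp + dt * up_new               # semi-implicit (symplectic) Euler
    return xp_new, up_new
```

In a two-way-coupled simulation the momentum removed from the fluid by the drag term would be fed back to the continuous-phase equations as a source term.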
117

Nondifferentiable Optimization of Lagrangian Dual Formulations for Linear Programs with Recovery of Primal Solutions

Lim, Churlzu 15 July 2004 (has links)
This dissertation is concerned with solving large-scale, ill-structured linear programming (LP) problems via Lagrangian dual (LD) reformulations. A principal motivation for this work arises in the context of solving mixed-integer programming (MIP) problems where LP relaxations, sometimes in higher dimensional spaces, are widely used for bounding and cut-generation purposes. Often, such relaxations turn out to be large-sized, ill-conditioned problems for which simplex as well as interior point based methods can tend to be ineffective. In contrast, Lagrangian relaxation or dual formulations, when applied in concert with suitable primal recovery strategies, have the potential for providing quick bounds as well as enabling useful branching mechanisms. However, the objective function of the Lagrangian dual is nondifferentiable, and hence, we cannot apply popular gradient or Hessian-based optimization techniques that are commonly used in differentiable optimization. Moreover, the subgradient methods that are popularly used are typically slow to converge and tend to stall while yet remote from optimality. On the other hand, more advanced methods, such as the bundle method and the space dilation method, involve additional computational and storage requirements that make them impractical for large-scale applications. Furthermore, although we might derive an optimal or near-optimal solution for LD, depending on the dual-adequacy of the methodology used, a primal solution may not be available. While some algorithmically simple primal solution recovery schemes have been developed in theory to accompany Lagrangian dual optimization, their practical performance has been disappointing. Rectifying these inadequacies is a challenging task that constitutes the focal point for this dissertation. 
Many practical applications dealing with production planning and control, engineering design, and decision-making in different operational settings fall within the purview of this context and stand to gain by advances in this technology. With this motivation, our primary interests in the present research effort are to develop effective nondifferentiable optimization (NDO) methods for solving Lagrangian duals of large-sized linear programs, and to design practical primal solution recovery techniques. This contribution would then facilitate the derivation of quick bounds/cuts and branching mechanisms in the context of branch-and-bound/cut methodologies for solving mixed-integer programming problems. We begin our research by adapting the Volume Algorithm (VA) of Barahona and Anbil (2000) developed at IBM as a direction-finding strategy within the variable target value method (VTVM) of Sherali et al. (2000). This adaptation makes VA resemble a deflected subgradient scheme in contrast with the bundle type interpretation afforded by the modification of VA as proposed by Bahiense et al. (2002). Although VA was originally developed in order to recover a primal optimal solution, we first present an example to demonstrate that it might indeed converge to a nonoptimal primal solution. However, under a suitable condition on the geometric moving average factor, we establish the convergence of the proposed algorithm in the dual space. A detailed computational study reveals that this approach yields a competitive procedure as compared with alternative strategies including the average direction strategy (ADS) of Sherali and Ulular (1989), a modified Polyak-Kelley cutting-plane strategy (PKC) designed by Sherali et al. (2001), and the modified Volume Algorithm routines RVA and BVA proposed by Bahiense et al. (2002), all embedded within the same VTVM framework. 
As far as CPU times are concerned, the VA strategy consumed the least computational effort for most problems to attain a near-optimal objective value. Moreover, the VA, ADS, and PKC strategies revealed considerable savings in CPU effort over a popular commercial linear program solver, CPLEX Version 8.1, when used to derive near-optimal solutions. Next, we consider two variable target value methods, the Level Algorithm of Brännlund (1993) and VTVM, which require no prior knowledge of upper bounds on the optimal objective value while guaranteeing convergence to an optimal solution. We evaluate these two algorithms in conjunction with various direction-finding and step-length strategies such as PS, ADS, VA, and PKC. Furthermore, we generalize the PKC strategy by further modifying the cut's right-hand-side values and additionally performing sequential projections onto some previously generated Polyak-Kelley's cutting-planes. We call this a generalized PKC (GPKC) strategy. Moreover, we point out some latent deficiencies in the two aforementioned variable target value algorithms in regard to their target value update mechanisms, and we suggest modifications in order to alleviate these shortcomings. We further explore an additional local search procedure to strengthen the performance of the algorithms. Noting that no related convergence analyses have been presented, we prove the convergence of the Level Algorithm when used in conjunction with the ADS, VA, or GPKC schemes. We also establish the convergence of VTVM when employing GPKC. Based on our computational study, the modified VTVM algorithm produced the best quality solutions when implemented with the GPKC strategy, where the latter performs sequential projections onto the four most recently generated Polyak-Kelley cutting-planes as available. Also, we demonstrate that the proposed modifications and the local search technique significantly improve the overall performance. 
Moreover, the VTVM procedure was observed to consistently outperform the Level Algorithm as well as a popular heuristic subgradient method of Held et al. (1974) that is widely used in practice. As far as CPU times are concerned, the modified VTVM procedure in concert with the GPKC strategy revealed the best performance, providing near-optimal solutions in, on average, about 27.84% of the effort required by CPLEX 8.1 to produce the same quality solutions. We next consider the Lagrangian dual of a bounded-variable equality constrained linear programming problem. We develop two novel approaches for solving this problem, which attempt to circumvent or obviate the nondifferentiability of the objective function. First, noting that the Lagrangian dual function is differentiable almost everywhere, whenever the NDO algorithm encounters a nondifferentiable point, we employ a proposed perturbation technique (PT) in order to detect a differentiable point in the vicinity of the current solution from which a further search can be conducted. In a second approach, called the barrier-Lagrangian dual reformulation (BLR) method, the primal problem is reformulated by constructing a barrier function for the set of bounding constraints such that an optimal solution to the original problem can be recovered by suitably adjusting the barrier parameter. However, instead of solving the barrier problem itself, we dualize the equality constraints to formulate a Lagrangian dual function, which is shown to be twice differentiable. Since differentiable pathways are made available via these two proposed techniques, we can advantageously utilize differentiable optimization methods along with popular conjugate gradient schemes. Based on these constructs, we propose an algorithmic procedure that consists of two sequential phases. 
In Phase I, the PT and BLR methods along with various deflected gradient strategies are utilized, and then, in Phase II, we switch to the modified VTVM algorithm in concert with GPKC (VTVM-GPKC) that revealed the best performance in the previous study. We also designed two target value initialization methods to commence Phase II, based on the output from Phase I. The computational results reveal that Phase I indeed helps to significantly improve the algorithmic performance as compared with implementing VTVM-GPKC alone, even though the latter was run for twice as many iterations as used in the two-phase procedures. Among the implemented procedures, the PT method in concert with certain prescribed deflection and Phase II initialization schemes yielded the best overall quality solutions and CPU time performance, consuming only 3.19% of the effort required by CPLEX 8.1 to produce comparable solutions. Moreover, we also tested some ergodic primal recovery strategies with and without employing BLR as a warm-start, and observed that an initial BLR phase can significantly enhance the convergence of such primal recovery schemes. Having observed that the VTVM algorithm requires the fine-tuning of several parameters for different classes of problems in order to improve its performance, our next research investigation focuses on developing a robust variable target value framework that entails the management of only a few parameters. We therefore design a novel algorithm, called the Trust Region Target Value (TRTV) method, in which a trust region is constructed in the dual space, and its center and size are adjusted in a manner that eventually induces a dual optimum to lie at the center of the hypercube trust region. A related convergence analysis has also been conducted for this procedure. We additionally examined a variation of TRTV, where the hyperrectangular trust region is more effectively adjusted for normalizing the effects of the dual variables. 
In our computational study, we compared the performance of TRTV with that of the VTVM-GPKC procedure. For four direction-finding strategies (PS, VA, ADS, and GPKC), the TRTV algorithm consistently produced better quality solutions than did VTVM-GPKC. The best performance was obtained when TRTV was employed in concert with the PS strategy. Moreover, we observed that the solution quality produced by TRTV was consistently better than that obtained via VTVM, hence lending a greater degree of robustness. As far as computational effort is concerned, the TRTV-PS combination consumed, on average, only 4.94% of the CPU time required by CPLEX 8.1 to find comparable quality solutions. Therefore, based on our extensive set of test problems, it appears that TRTV along with the PS strategy is the best and the most robust procedure among those tested. Finally, we explore an outer-linearization (or cutting-plane) method along with a trust region approach for refining available dual solutions and recovering a primal optimum in the process. This method enables us to escape from a jamming phenomenon experienced at a non-optimal point, which commonly occurs when applying NDO methods, as well as to refine the available dual solution toward a dual optimum. Furthermore, we can recover a primal optimal solution when the resulting dual solution is close enough to a dual optimum, without generating a potentially excessive set of constraints. In our computational study, we tested two such trust region strategies, the Box-step (BS) method of Marsten et al. (1975) and a new Box Trust Region (BTR) approach, both appended to the foregoing TRTV-PS dual optimization methodology. Furthermore, we also experimented with deleting nonbinding constraints when the number of cuts exceeds a prescribed threshold value. This proposed refinement was able to further improve the solution quality, reducing the near-zero relative optimality gap for TRTV-PS by 20.6-32.8%. 
The best strategy turned out to be using the BTR method while deleting nonbinding constraints (BTR-D). As far as the recovery of optimal solutions is concerned, the BTR-D scheme resulted in the best measure of primal feasibility, and although it was terminated after it had accumulated only 50 constraints, it revealed a better performance than the ergodic primal recovery scheme of Shor (1985) that was run for 2000 iterations while also assuming knowledge of the optimal objective value in the dual scheme. In closing, we mention that there exist many optimization methods for complex systems such as communication network design, semiconductor manufacturing, and supply chain management, that have been formulated as large-sized mixed-integer programs, but for which deriving even near-optimal solutions has been elusive due to their exorbitant computational requirements. Noting that the computational effort for solving mixed-integer programs via branch-and-bound/cut methods strongly depends on the effectiveness with which the underlying linear programming relaxations can be solved, applying theoretically convergent and practically effective NDO methods in concert with efficient primal recovery procedures to suitable Lagrangian dual reformulations of these relaxations can significantly enhance the overall computational performance of these methods. We therefore hope that the methodologies designed and analyzed in this research effort will have a notable positive impact on analyzing such complex systems. / Ph. D.
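For concreteness, the nondifferentiable object at the heart of this dissertation can be illustrated with the plain subgradient scheme that VA, VTVM and TRTV refine. For min c'x subject to Ax = b, 0 ≤ x ≤ u, dualizing Ax = b gives a concave, piecewise-linear dual whose inner minimization is solvable in closed form. A sketch using the classical diminishing 1/k step rule, which is one textbook choice and not one of the thesis's methods:

```python
import numpy as np

def lagrangian_dual_subgradient(c, A, b, u, iters=500, rho=2.0):
    """Plain subgradient ascent on the Lagrangian dual of
        min c'x  s.t.  Ax = b,  0 <= x <= u,
    dualizing Ax = b.  theta(lam) = lam'b + min_{0<=x<=u} (c - A'lam)'x
    is evaluated in closed form; g = b - Ax is a subgradient."""
    lam = np.zeros(A.shape[0])
    best = -np.inf
    for k in range(1, iters + 1):
        rc = c - A.T @ lam                 # reduced costs
        x = np.where(rc < 0, u, 0.0)       # inner minimizer (bang-bang on the box)
        theta = lam @ b + rc @ x           # dual function value at lam
        best = max(best, theta)
        g = b - A @ x                      # subgradient of theta at lam
        if np.allclose(g, 0.0):            # x is primal feasible -> lam optimal
            break
        lam = lam + (rho / k) * g          # diminishing step (1/k rule)
    return best, lam
```

On kinked regions of theta this iteration zigzags and stalls, which is exactly the slow-convergence behavior that motivates the deflected, bundle and trust-region variants studied above.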
118

Exact Calculations for the Lagrangian Velocity

Schneider, Eduardo da Silva 23 April 2019 (has links)
No description available.
119

A VIRTUAL FINITE ELEMENT METHOD FOR CONTACT PROBLEMS

Underhill, William Roy Clare 09 1900 (has links)
An algorithm is presented for the solution of mechanical contact problems using the displacement-based Finite Element Method. The corrections are applied as forces at the global level, together with any corrections for other nonlinearities, without having to nominate either body as target or contactor. The technique requires statically reducing the global stiffness matrices to each degree of freedom involved in contact. Nodal concentrated forces are redistributed as continuous tractions. These tractions are re-integrated over the element domains of the opposing body. This creates a set of virtual elements which are assembled to provide a convenient mesh of the properties of the opposing body, no matter what its actual discretization into elements. Virtual nodal quantities are used to calculate corrective forces that are optimal to first order. The work also presents a derivation of referential strain tensors. This sheds new light on the updated Lagrangian formulation, gives a complete and correct incremental form for the Lagrangian strain tensor, and illustrates the role of the reference configuration and what occurs when it is changed. / Thesis / Doctor of Philosophy (PhD)
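The static reduction described above is a Schur complement of the global stiffness matrix onto the contact degrees of freedom. A dense-algebra sketch (a production finite element code would exploit sparsity; the function and variable names are ours):

```python
import numpy as np

def condense_to_contact(K, contact_dofs):
    """Statically condense a global stiffness matrix K onto the listed
    contact DOFs via the Schur complement
        K_cond = Kcc - Kci * inv(Kii) * Kic,
    where c = contact DOFs and i = remaining (interior) DOFs."""
    n = K.shape[0]
    c = np.asarray(contact_dofs)
    i = np.setdiff1d(np.arange(n), c)          # interior (non-contact) DOFs
    Kcc = K[np.ix_(c, c)]
    Kci = K[np.ix_(c, i)]
    Kic = K[np.ix_(i, c)]
    Kii = K[np.ix_(i, i)]
    # solve Kii X = Kic rather than forming inv(Kii) explicitly
    return Kcc - Kci @ np.linalg.solve(Kii, Kic)
```

The condensed matrix gives the stiffness each body exhibits at its contact DOFs, which is what the virtual-element assembly needs from the opposing body.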
