471

Viskoelasticita polymerních skel / Viscoelasticity of polymer glasses

Ondreáš, František January 2014 (has links)
This work focuses on the relaxation behavior of polymer glasses, with polymethylmethacrylate chosen as a typical representative. Relaxation processes were studied by dynamic mechanical spectroscopy, with differential scanning calorimetry used as a supplementary analysis. A relaxation process above Tg and high values of the rubber-like plateau modulus were observed in the thermomechanical spectra. The high-temperature relaxation transition was studied with respect to its thermal history, frequency, and axial stress dependence, and the influence of molecular structure was also investigated. Apparent activation energies of the studied processes, and their axial stress dependence, were determined for polymethylmethacrylate. On the basis of the obtained data, a hypothesis was developed that connects the high-temperature relaxation process with the molecular process responsible for strain hardening.
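The abstract does not state how the apparent activation energies were obtained; a common approach in dynamic mechanical spectroscopy, sketched here with generic symbols (f, f_0, E_a, and T_max are not taken from the thesis), is an Arrhenius analysis of the frequency dependence of the relaxation peak temperature:

```latex
% Generic Arrhenius analysis of a relaxation peak (assumed, not from the
% thesis): the measurement frequency f shifts the loss-peak temperature
% T_max, and the apparent activation energy follows from the slope of
% ln f plotted against 1/T_max.
\[
  f = f_0 \exp\!\left(-\frac{E_a}{R\,T_{\max}}\right)
  \quad\Longrightarrow\quad
  E_a = -R\,\frac{\mathrm{d}(\ln f)}{\mathrm{d}(1/T_{\max})} .
\]
```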
472

Urbanistický rozvoj města Brna v lokalitě Brno - Žebětín / Urban development of the city of Brno in the Brno-Žebětín locality

Fučíková, Jana January 2014 (has links)
The aim of this Master's thesis is a new use for the former collective farm and its surroundings. The area is located in the Brno-Žebětín town district and is one of Brno's brownfield sites. The purpose of the urban design was to create a functional living organism that respects the needs of society in this area, its values, topography, and existing development, and that is adequately connected to the transport and technical infrastructure. The resulting solution revitalizes the Vrbovec Stream crossing the area, which thus forms an imaginary axis around which all activities take place. At the heart of the area is a large central space that serves for relaxation and for hosting various cultural events. This central area connects the existing Kamechy settlement with the suburb of Žebětín and is linked to the rest of the territory by footpaths. The existing greenery, another important element of the area, will be enriched with newly planted deciduous trees.
473

Relaxation Seminars, Ten one-hour sessions

Webb, Melessia D. 01 September 2002 (has links)
No description available.
474

Skakeling met transistors in die lawinegebied (Afrikaans) / Switching with transistors in the avalanche region

Taute, Willem Jacobus 12 June 2013 (has links)
Switching with diffusion-type transistors at high collector voltages is analysed. The region in which avalanche multiplication is obtained and the switching speed increases considerably was investigated by means of a relaxation oscillator. Avalanche-mode switching of this kind is exploited chiefly in pulse generators. Using a charge-control model, expressions for rise time, peak current and fall time are obtained in terms of transistor parameters. These expressions were verified experimentally with 2N414 transistors. The influence of external components and supplies is also considered. / Dissertation (MEng)--University of Pretoria, 1969. / Electrical, Electronic and Computer Engineering / unrestricted
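As background on the measurement circuit, a standard textbook result for a simple relaxation oscillator (generic symbols; this is not the dissertation's derivation): a capacitor C charges through R toward the supply V_s until breakdown at V_BR triggers the avalanche discharge, so the charging interval is

```latex
% Generic RC timing of an avalanche relaxation oscillator (standard
% textbook result; symbols assumed, not from the dissertation):
\[
  v_C(t) = V_s\!\left(1 - e^{-t/RC}\right)
  \quad\Longrightarrow\quad
  t_{\text{charge}} = RC\,\ln\!\frac{V_s}{V_s - V_{BR}} ,
\]
% the time for the capacitor to charge from 0 V to the breakdown voltage
% V_BR, which sets the oscillator's repetition period.
```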
475

Tortuosity estimate through paramagnetic gas diffusion in rock saturated with two fluids using T2(z,t) low-field NMR

Shikhov, Igor, Arns, Christoph H. 11 September 2018 (has links)
Petrophysical interpretation of 1H NMR relaxation responses from saturated rocks is complicated by paramagnetic species present in the fluids. Oxygen dissolved in liquids is one common example. Dipolar interactions of oxygen's unpaired electron spins with the magnetic moments of fluid nuclei provide a strong relaxation mechanism known as paramagnetic relaxation enhancement (PRE). As a result, even low concentrations of dioxygen in its common triplet ground state significantly shorten the longitudinal and transverse relaxation times of host fluids. This effect may be employed, like any standard tracer technique, to study pore connectivity in porous media by detecting the change of oxygen concentration due to diffusion, resolved in time and space. Since the relaxation enhancement effect is likely stronger in the non-wetting phase than in the wetting one (where the surface relaxation process dominates), this difference can be utilized to study wettability in immiscible multiphase systems. We use the relaxation time contrast between air-saturated and oxygen-free fluids to evaluate the oxygen concentration change within the two fluid phases saturating the rock, to estimate the time required to establish equilibrium concentration, and to calculate a mutual diffusion coefficient of oxygen. A spatially and time-resolved T2(z,t) experiment provides the time-dependent oxygen concentration change along the fully and partially saturated carbonate core plug exposed to air-saturated oil at its inlet. We derive an effective mutual diffusion coefficient of oxygen and, accordingly, a tortuosity estimate as a function of position along the core and of rock saturation. The spatially resolved oxygen-diffusion-based tortuosity is compared to a simulated conductivity-based tortuosity. The latter is calculated on a high-resolution micro-tomographic image of Mount Gambier limestone by solving the Laplace equation for conductivity.
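The abstract does not give the fitting details; a hedged sketch of one plausible workflow is shown below: fit a one-dimensional semi-infinite diffusion profile to the oxygen concentrations inferred from the T2(z,t) contrast, then form a tortuosity estimate. The function names, data values, and the convention tau = D_bulk / D_eff are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: infer an effective O2 diffusion coefficient from a
# concentration profile obtained via T2(z,t) contrast, then a tortuosity.
# All names, values, and the convention tau = D_bulk / D_eff are assumptions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def profile(z, length, c0):
    """Semi-infinite 1D diffusion from a constant-concentration inlet:
    C(z) = c0 * erfc(z / length), where length = 2*sqrt(D_eff * t)."""
    return c0 * erfc(z / length)

D_BULK = 2.0e-9                  # m^2/s: assumed bulk diffusivity of O2
t = 86400.0                      # s: assumed exposure time (one day)
z = np.linspace(0.0, 0.04, 20)   # m: positions along the core plug

# Synthetic placeholder data standing in for the NMR-derived concentrations.
rng = np.random.default_rng(0)
c_obs = profile(z, 2.0 * np.sqrt(0.5e-9 * t), 1.0) + 0.02 * rng.normal(size=z.size)

(length, c0), _ = curve_fit(profile, z, c_obs, p0=(0.01, 1.0))
D_eff = length**2 / (4.0 * t)
print(f"D_eff ~ {D_eff:.2e} m^2/s, tortuosity ~ {D_BULK / D_eff:.2f}")
```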
476

Material Characterization Using Nuclear Magnetic Resonance

Pope, Giovanna Marcella 23 February 2022 (has links)
Nuclear magnetic resonance techniques can provide highly accurate information about the local environment in both liquid and solid samples. In the first half of this dissertation research, solid-state NMR provided experimental evidence for turbostratic disorder in layered covalent organic solids; additionally, comparison with candidate structures allowed a proposed correction to the accepted structure of Covalent Organic Framework-5. The second half of the dissertation work emphasized liquid-state NMR spectroscopy applied to doped iron oxides (IOs). In particular, the effect of IOs on water proton T2 relaxation times was determined as a measure of contrast agent efficacy. Both types of data support structure elucidation aimed at material efficiency.
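As context for the T2 measurements, a hedged sketch of how contrast-agent efficacy is commonly quantified: the transverse relaxivity r2 is the slope of 1/T2 against agent concentration. The values below are placeholders, not the dissertation's data.

```python
# Hedged sketch: transverse relaxivity r2 as the slope of 1/T2 versus
# contrast-agent concentration. Values are placeholders, not the data.
import numpy as np

conc = np.array([0.0, 0.1, 0.2, 0.4, 0.8])       # agent concentration, mM
T2   = np.array([2.00, 0.95, 0.62, 0.36, 0.20])  # measured water T2, s

r2, base = np.polyfit(conc, 1.0 / T2, 1)         # 1/T2 = 1/T2_0 + r2 * C
print(f"r2 ~ {r2:.1f} / (mM*s); baseline 1/T2_0 ~ {base:.2f} / s")
```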
477

Nondifferentiable Optimization of Lagrangian Dual Formulations for Linear Programs with Recovery of Primal Solutions

Lim, Churlzu 15 July 2004 (has links)
This dissertation is concerned with solving large-scale, ill-structured linear programming (LP) problems via Lagrangian dual (LD) reformulations. A principal motivation for this work arises in the context of solving mixed-integer programming (MIP) problems, where LP relaxations, sometimes in higher-dimensional spaces, are widely used for bounding and cut-generation purposes. Often, such relaxations turn out to be large, ill-conditioned problems for which simplex as well as interior-point methods tend to be ineffective. In contrast, Lagrangian relaxations or dual formulations, when applied in concert with suitable primal recovery strategies, have the potential to provide quick bounds as well as enable useful branching mechanisms. However, the objective function of the Lagrangian dual is nondifferentiable, and hence we cannot apply the popular gradient- or Hessian-based techniques that are commonly used in differentiable optimization. Moreover, the subgradient methods that are popularly used are typically slow to converge and tend to stall while still remote from optimality. On the other hand, more advanced methods, such as the bundle method and the space dilation method, involve additional computational and storage requirements that make them impractical for large-scale applications. Furthermore, although we might derive an optimal or near-optimal solution for the LD, depending on the dual-adequacy of the methodology used, a primal solution may not be available. While some algorithmically simple primal solution recovery schemes have been developed in theory to accompany Lagrangian dual optimization, their practical performance has been disappointing. Rectifying these inadequacies is a challenging task that constitutes the focal point of this dissertation. Many practical applications dealing with production planning and control, engineering design, and decision-making in different operational settings fall within the purview of this context and stand to gain from advances in this technology. With this motivation, our primary interests in the present research effort are to develop effective nondifferentiable optimization (NDO) methods for solving Lagrangian duals of large linear programs, and to design practical primal solution recovery techniques. This contribution would then facilitate the derivation of quick bounds/cuts and branching mechanisms in the context of branch-and-bound/cut methodologies for solving mixed-integer programming problems.

We begin our research by adapting the Volume Algorithm (VA) of Barahona and Anbil (2000), developed at IBM, as a direction-finding strategy within the variable target value method (VTVM) of Sherali et al. (2000). This adaptation makes the VA resemble a deflected subgradient scheme, in contrast with the bundle-type interpretation afforded by the modification of the VA proposed by Bahiense et al. (2002). Although the VA was originally developed in order to recover a primal optimal solution, we first present an example demonstrating that it might indeed converge to a nonoptimal primal solution. However, under a suitable condition on the geometric moving-average factor, we establish the convergence of the proposed algorithm in the dual space. A detailed computational study reveals that this approach yields a competitive procedure compared with alternative strategies, including the average direction strategy (ADS) of Sherali and Ulular (1989), a modified Polyak-Kelley cutting-plane strategy (PKC) designed by Sherali et al. (2001), and the modified Volume Algorithm routines RVA and BVA proposed by Bahiense et al. (2002), all embedded within the same VTVM framework. As far as CPU times are concerned, the VA strategy consumed the least computational effort for most problems to attain a near-optimal objective value. Moreover, the VA, ADS, and PKC strategies revealed considerable savings in CPU effort over a popular commercial linear programming solver, CPLEX Version 8.1, when used to derive near-optimal solutions.

Next, we consider two variable target value methods, the Level Algorithm of Brännlund (1993) and the VTVM, which require no prior knowledge of upper bounds on the optimal objective value while guaranteeing convergence to an optimal solution. We evaluate these two algorithms in conjunction with various direction-finding and step-length strategies such as PS, ADS, VA, and PKC. Furthermore, we generalize the PKC strategy by further modifying the cuts' right-hand-side values and additionally performing sequential projections onto some previously generated Polyak-Kelley cutting planes; we call this the generalized PKC (GPKC) strategy. Moreover, we point out some latent deficiencies in the two aforementioned variable target value algorithms with regard to their target value update mechanisms, and we suggest modifications to alleviate these shortcomings. We further explore an additional local search procedure to strengthen the performance of the algorithms. Noting that no related convergence analyses have been presented, we prove the convergence of the Level Algorithm when used in conjunction with the ADS, VA, or GPKC schemes, and we also establish the convergence of the VTVM when employing GPKC. Based on our computational study, the modified VTVM algorithm produced the best-quality solutions when implemented with the GPKC strategy, where the latter performs sequential projections onto the four most recently generated Polyak-Kelley cutting planes, as available. We also demonstrate that the proposed modifications and the local search technique significantly improve the overall performance. Moreover, the VTVM procedure was observed to consistently outperform the Level Algorithm as well as the popular heuristic subgradient method of Held et al. (1974) that is widely used in practice. As far as CPU times are concerned, the modified VTVM procedure in concert with the GPKC strategy revealed the best performance, providing near-optimal solutions in, on average, about 27.84% of the effort required by CPLEX 8.1 to produce solutions of the same quality.

We next consider the Lagrangian dual of a bounded-variable, equality-constrained linear programming problem. We develop two novel approaches for solving this problem, which attempt to circumvent or obviate the nondifferentiability of the objective function. First, noting that the Lagrangian dual function is differentiable almost everywhere, whenever the NDO algorithm encounters a nondifferentiable point, we employ a proposed perturbation technique (PT) in order to detect a differentiable point in the vicinity of the current solution from which a further search can be conducted. In a second approach, called the barrier-Lagrangian dual reformulation (BLR) method, the primal problem is reformulated by constructing a barrier function for the set of bounding constraints such that an optimal solution to the original problem can be recovered by suitably adjusting the barrier parameter. However, instead of solving the barrier problem itself, we dualize the equality constraints to formulate a Lagrangian dual function, which is shown to be twice differentiable. Since differentiable pathways are made available via these two proposed techniques, we can advantageously utilize differentiable optimization methods along with popular conjugate gradient schemes. Based on these constructs, we propose an algorithmic procedure consisting of two sequential phases. In Phase I, the PT and BLR methods along with various deflected gradient strategies are utilized; then, in Phase II, we switch to the modified VTVM algorithm in concert with GPKC (VTVM-GPKC), which revealed the best performance in the previous study. We also designed two target value initialization methods to commence Phase II, based on the output from Phase I. The computational results reveal that Phase I indeed helps to significantly improve the algorithmic performance compared with implementing VTVM-GPKC alone, even when the latter is run for twice as many iterations as used in the two-phase procedures. Among the implemented procedures, the PT method, in concert with certain prescribed deflection and Phase II initialization schemes, yielded the best overall solution quality and CPU time performance, consuming only 3.19% of the effort required by CPLEX 8.1 to produce comparable solutions. Moreover, we also tested some ergodic primal recovery strategies with and without employing BLR as a warm start, and observed that an initial BLR phase can significantly enhance the convergence of such primal recovery schemes.

Having observed that the VTVM algorithm requires the fine-tuning of several parameters for different classes of problems in order to improve its performance, our next research investigation focuses on developing a robust variable target value framework that entails the management of only a few parameters. We therefore design a novel algorithm, called the Trust Region Target Value (TRTV) method, in which a trust region is constructed in the dual space, and its center and size are adjusted in a manner that eventually induces a dual optimum to lie at the center of the hypercube trust region. A related convergence analysis has also been conducted for this procedure. We additionally examined a variation of TRTV in which the hyperrectangular trust region is adjusted more effectively to normalize the effects of the dual variables. In our computational study, we compared the performance of TRTV with that of the VTVM-GPKC procedure. For four direction-finding strategies (PS, VA, ADS, and GPKC), the TRTV algorithm consistently produced better-quality solutions than did VTVM-GPKC, with the best performance obtained when TRTV was employed in concert with the PS strategy. Moreover, we observed that the solution quality produced by TRTV was consistently better than that obtained via VTVM, hence lending a greater degree of robustness. As far as computational effort is concerned, the TRTV-PS combination consumed, on average, only 4.94% of the CPU time required by CPLEX 8.1 to find solutions of comparable quality. Therefore, based on our extensive set of test problems, it appears that TRTV along with the PS strategy is the best and most robust procedure among those tested.

Finally, we explore an outer-linearization (or cutting-plane) method, along with a trust region approach, for refining available dual solutions and recovering a primal optimum in the process. This method enables us to escape the jamming phenomenon experienced at a nonoptimal point, which commonly occurs when applying NDO methods, as well as to refine the available dual solution toward a dual optimum. Furthermore, we can recover a primal optimal solution when the resulting dual solution is close enough to a dual optimum, without generating a potentially excessive set of constraints. In our computational study, we tested two such trust region strategies, the Box-step (BS) method of Marsten et al. (1975) and a new Box Trust Region (BTR) approach, both appended to the foregoing TRTV-PS dual optimization methodology. Furthermore, we also experimented with deleting nonbinding constraints when the number of cuts exceeds a prescribed threshold value. This proposed refinement was able to further improve the solution quality, reducing the near-zero relative optimality gap for TRTV-PS by 20.6-32.8%. The best strategy turned out to be using the BTR method while deleting nonbinding constraints (BTR-D). As far as the recovery of optimal solutions is concerned, the BTR-D scheme yielded the best measure of primal feasibility, and although it was terminated after accumulating only 50 constraints, it revealed better performance than the ergodic primal recovery scheme of Shor (1985), which was run for 2000 iterations while also assuming knowledge of the optimal objective value in the dual scheme.

In closing, we mention that there exist many optimization problems in complex systems such as communication network design, semiconductor manufacturing, and supply chain management that have been formulated as large mixed-integer programs, but for which deriving even near-optimal solutions has been elusive due to their exorbitant computational requirements. Noting that the computational effort for solving mixed-integer programs via branch-and-bound/cut methods strongly depends on the effectiveness with which the underlying linear programming relaxations can be solved, applying theoretically convergent and practically effective NDO methods, in concert with efficient primal recovery procedures, to suitable Lagrangian dual reformulations of these relaxations can significantly enhance the overall computational performance of these methods. We therefore hope that the methodologies designed and analyzed in this research effort will have a notable positive impact on the analysis of such complex systems. / Ph. D.
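As a reader's aid, the following minimal sketch illustrates the core machinery discussed above: deflected subgradient ascent on the Lagrangian dual of a bounded-variable, equality-constrained LP. The update rules, parameter values, and the simple 1/k step length are generic textbook choices, not the dissertation's VA, VTVM, or TRTV implementations.

```python
# Minimal sketch of deflected subgradient ascent on the Lagrangian dual of
#   min c'x  s.t.  Ax = b,  0 <= x <= u
# (generic choices throughout; not the dissertation's method).
import numpy as np

def deflected_subgradient(A, b, c, u, iters=500, alpha=0.7):
    """Maximize L(lam) = min_{0<=x<=u} c'x + lam'(b - Ax)."""
    m, _ = A.shape
    lam = np.zeros(m)
    d = np.zeros(m)
    best = -np.inf
    for k in range(1, iters + 1):
        red = c - A.T @ lam                # reduced costs c - A'lam
        x = np.where(red < 0.0, u, 0.0)    # inner minimum is separable over the box
        dual_val = c @ x + lam @ (b - A @ x)
        best = max(best, dual_val)
        g = b - A @ x                      # a subgradient of the dual at lam
        d = alpha * g + (1.0 - alpha) * d  # deflection, volume-algorithm flavor
        lam = lam + (1.0 / k) * d          # diminishing step length
    return best, lam

# Tiny demo: min x1 + 2 x2 + 3 x3  s.t.  x1 + x2 + x3 = 2,  0 <= x <= 1.5
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([2.0])
c = np.array([1.0, 2.0, 3.0])
u = np.full(3, 1.5)
best, lam = deflected_subgradient(A, b, c, u)
print(f"best dual value ~ {best:.3f} (the primal optimum is 2.5)")
```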
478

A Study on Integrated Transportation and Facility Location Problem

Oyewole, Gbeminiyi John January 2019 (has links)
The focus of this thesis is the development and solution of problems that simultaneously involve planning the location of facilities and transportation decisions from such facilities to consumers. These have been termed integrated distribution planning problems, with practical applications in logistics and manufacturing. The integration involves planning horizons of short, medium and long term, and sub-optimal decisions are likely when the planning horizons are considered separately. Two categories of problems were considered under the integrated distribution models. The first is referred to as the Step-Fixed Charge Location and Transportation Problem (SFCLTP). The second is termed the Fixed Charge Solid Location and Transportation Problem (FCSLTP). In these models, the facility location problem is considered to be a strategic, or long-term, decision. The short- to medium-term decisions considered are the Step-Fixed Charge Transportation Problem (SFCTP) and the Fixed Charge Solid Transportation Problem (FCSTP). Both the SFCTP and the FCSTP are extensions of the classical transportation problem, requiring a trade-off between fixed and variable costs along the transportation routes to minimize total transportation costs. Linearization and subsequent local improvement search techniques were developed to solve the SFCLTP. The first search technique involved the development of a hands-on solution, including a numerical example. In this solution technique, linearization was employed as the primal solution, following which a structured perturbation logic was developed to improve on the initial solution. The second search technique also used the linearization principle as a base solution, in addition to some heuristics, to construct transportation problems. The resulting transportation problems were solved to arrive at solutions competitive in effectiveness (solution value) with those obtainable from standard solvers such as CPLEX. The FCSLTP is formulated and solved using the CPLEX commercial optimization suite. A Lagrange relaxation heuristic (LRH) and a hybrid genetic algorithm (HGA) solution of the FCSLTP are presented as alternatives. Comparative studies between the FCSTP and the FCSLTP formulations are also presented. The LRH is demonstrated with a numerical example and is also extended in the hope of generating improved upper bounds. The CPLEX solution generated better lower and upper bounds than the extended LRH; however, as problem size increased, the solution time of CPLEX increased exponentially. The FCSTP was recommended as a possible starting solution for solving the FCSLTP, owing to its lower solution time and its generation of feasible solutions, as illustrated through experimentation. The HGA developed integrates cost relaxation, a greedy heuristic and a modified stepping-stone method into the GA framework to further explore the solution search space. Comparative studies were also conducted to test the performance of the HGA against the classical Lagrange heuristics developed and CPLEX. The results obtained suggest that the performance of the HGA is competitive with that obtainable from a commercial solver such as CPLEX. / Thesis (PhD)--University of Pretoria, 2019. / Industrial and Systems Engineering / PhD / Unrestricted
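To make the linearization idea concrete, here is a hedged sketch of the classical fixed-charge linearization often used as a starting primal solution: each fixed cost f_ij is folded into the unit cost as c_ij + f_ij/m_ij with m_ij = min(supply_i, demand_j), and the resulting plain transportation LP is solved. The data, names, and use of scipy's linprog are illustrative assumptions, not the thesis's code.

```python
# Sketch of a classical fixed-charge linearization for a transportation
# problem (illustrative data; not the thesis's implementation).
import numpy as np
from scipy.optimize import linprog

supply = np.array([30.0, 40.0])
demand = np.array([20.0, 25.0, 25.0])
c = np.array([[4.0, 6.0, 8.0], [5.0, 4.0, 3.0]])         # variable unit costs
f = np.array([[10.0, 30.0, 20.0], [25.0, 10.0, 15.0]])   # fixed route costs

m_cap = np.minimum.outer(supply, demand)  # route flow bound m_ij
c_lin = c + f / m_cap                     # linearized unit costs

# Flatten into an LP: min c_lin.x with row sums = supply, column sums = demand.
n_s, n_d = c.shape
A_eq, b_eq = [], []
for i in range(n_s):                      # supply constraints
    row = np.zeros(n_s * n_d); row[i * n_d:(i + 1) * n_d] = 1.0
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n_d):                      # demand constraints
    col = np.zeros(n_s * n_d); col[j::n_d] = 1.0
    A_eq.append(col); b_eq.append(demand[j])

res = linprog(c_lin.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(n_s, n_d)
true_cost = (c * x).sum() + f[x > 1e-9].sum()  # evaluate under the original objective
print(f"linearized LP flow:\n{x.round(1)}\ntrue fixed-charge cost: {true_cost:.1f}")
```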
479

A Comparison Between Desensitization and Relaxation Training in the Treatment of Primary Dysmenorrhea

Carcelli, Susan Myrna Jones 01 May 1985 (has links)
The use of relaxation, desensitization, and relaxation plus desensitization in the treatment of primary dysmenorrhea was investigated in this study. Subjects were 45 university women who experienced either congestive or spasmodic dysmenorrhea. Each subject was individually treated in four one-hour sessions during the first 20 days of her menstrual cycle. Subjects were divided into three groups: group 1 received four hours of progressive relaxation training, group 2 was asked to self-relax while being administered scenes from a standardized menstrual hierarchy, and group 3 received both relaxation training and desensitization. Type of dysmenorrhea was assessed by the Menstrual Symptom Questionnaire (MSQ). Symptom intensity and duration were assessed by the Retrospective Symptom Scale, the Menstrual Semantic Differential, the Menstrual Activities Scale, and the Menstrual Behavior Scale, administered at pretest, posttest, and three-month follow-up. Skin temperature during session 4 was recorded to evaluate the level of relaxation. Differences among treatment groups were analyzed using a one-way analysis of variance; t-tests for correlated samples were used to analyze within-group changes from pretreatment to posttreatment. Results suggest that all three treatments were equally effective in reducing symptoms, negative attitudes, pain-mitigating behaviors, and invalid hours. Symptom relief was not associated with skin temperature increases. The possibility of a placebo effect playing a role in these results cannot be ruled out. Finally, the division of primary dysmenorrhea into spasmodic and congestive types by the MSQ is inaccurate, most probably due to the confounding nature of its scoring system.
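For illustration, a minimal sketch of the reported analysis pattern (a one-way ANOVA across the three treatment groups and a correlated-samples t-test within a group); all numbers below are placeholder values, not the study's data.

```python
# Placeholder sketch of the reported analyses: one-way ANOVA across the
# three treatment groups, plus a correlated-samples t-test within a group.
# All numbers are fabricated placeholders, not the study's data.
from scipy import stats

relaxation      = [12, 9, 14, 8, 11]   # posttest symptom scores, group 1
desensitization = [10, 13, 9, 12, 8]   # group 2
combined        = [9, 11, 10, 7, 12]   # group 3

F, p = stats.f_oneway(relaxation, desensitization, combined)
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.3f}")

pretest = [18, 16, 20, 15, 17]         # same subjects as group 1, pretreatment
t_stat, p_t = stats.ttest_rel(pretest, relaxation)
print(f"correlated-samples t-test: t = {t_stat:.2f}, p = {p_t:.3f}")
```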
480

Magnetic Resonance Parameter Assessment from a Second Order Time-Dependent Linear Model

January 2019 (has links)
This dissertation develops a second order accurate approximation to the magnetic resonance (MR) signal model used in the PARSE (Parameter Assessment by Retrieval from Single Encoding) method to recover information about the reciprocal of the spin-spin relaxation time function (R2*) and the frequency offset function (w), in addition to the usual steady-state transverse magnetization (M), from single-shot magnetic resonance imaging (MRI) scans. Sparse regularization on an approximation to the edge map is used to solve the associated inverse problem. Several studies are carried out for both one- and two-dimensional test problems, including comparisons to the first order approximation method, as well as to the first order approximation method with joint sparsity enforced across multiple time windows. The second order accurate model provides increased accuracy while reducing the amount of data required to reconstruct an image when compared to piecewise-constant-in-time models. A key component of the proposed technique is the use of fast transforms for the forward evaluation. It is determined that the second order model is capable of providing accurate single-shot MRI reconstructions, but requires adequate coverage of k-space to do so. Alternative data sampling schemes are investigated in an attempt to improve reconstruction from single-shot data, as current trajectories do not provide ideal k-space coverage for the proposed method. / Dissertation/Thesis / Doctoral Dissertation Mathematics 2019
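The abstract does not spell out the model, but a hedged sketch of the structure it describes (notation assumed here, with omega written for the abstract's w; this is not the dissertation's exact formulation): the single-shot signal integrates a magnetization M(x) decaying at rate R2*(x) and precessing at offset omega(x), and a second order accurate in-time approximation presumably expands that exponential to second order within each readout window.

```latex
% Assumed notation; a sketch of the model structure, not the dissertation's
% exact formulation. Single-shot signal at readout time t:
\[
  s(t) \;=\; \int M(x)\,
      e^{\left(-R_2^*(x) + i\,\omega(x)\right)t}\,
      e^{-2\pi i\,k(t)\,x}\,\mathrm{d}x .
\]
% A piecewise-constant-in-time model freezes the decay/phase factor within
% each readout window; a second order accurate alternative expands it
% about a window center t_0, with lambda = -R_2^* + i*omega:
\[
  e^{\lambda t} \;\approx\; e^{\lambda t_0}
  \left( 1 + \lambda\,(t - t_0) + \tfrac{1}{2}\,\lambda^{2}\,(t - t_0)^{2} \right).
\]
```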
