  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

A transient solver for current density in thin conductors for magnetoquasistatic conditions

Petersen, Todd H. January 1900
Doctor of Philosophy / Department of Electrical and Computer Engineering / Kenneth H. Carpenter / A computer simulation of transient current density distributions in thin conductors was developed using a time-stepped implementation of the integral equation method on a finite element mesh. Current distributions in thin conductors were first studied using AC analysis; the resulting AC current density distributions were used to develop a circuit-theory model of the thin conductor, which in turn determined the nature of its transient response. This model supported the design and evaluation of the transient current density solver. A circuit model for strip lines was built using the Partial Inductance Method to allow simulation with the SPICE circuit solver. Magnetic probes were designed and tested to physically measure the voltages induced by the magnetic field generated by the current distributions in the strip line. The measured voltages were compared with simulated values from SPICE to validate the SPICE model, which in turn was used to validate the finite-integration model of the same strip line. The transient current density distribution problem is formulated by superposing a source current and an eddy current distribution on the same space. The mathematical derivation and implementation of the time-stepping algorithm in the finite element model are shown explicitly for a surface mesh with triangular elements. A C++ computer program was written to solve for the total current density in a thin conductor by implementing the time-stepping integral formulation. The finite element implementation was evaluated with respect to mesh size: meshes of increasing node density were simulated for the same structure until a smooth current density distribution profile was observed. 
The transient current density solver was validated by comparing simulations with AC conduction and transient response simulations of the SPICE model. Transient responses are compared for inputs at different frequencies and for varying time steps. This program is applied to thin conductors of irregular shape.
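The time-stepped circuit-theory view described above can be sketched with a backward-Euler step of the coupled branch equations L di/dt + R i = v(t), where a partial-inductance matrix couples the branches. This is a minimal illustration only; the matrix entries, resistances and time step below are placeholders, not data from the thesis, and the actual solver operates on a finite element integral formulation in C++:

```python
import numpy as np

def step_currents(Lmat, R, v, i_prev, dt):
    """One backward-Euler step of L di/dt + R i = v(t) for coupled branches.

    Lmat: partial-inductance matrix (n x n) [H], R: branch resistances (n,)
    [ohm], v: source voltages at t_{n+1} (n,), i_prev: currents at t_n (n,).
    Solves (L/dt + R) i_{n+1} = v + (L/dt) i_n.
    """
    A = Lmat / dt + np.diag(R)
    b = v + Lmat @ i_prev / dt
    return np.linalg.solve(A, b)

# Illustrative two-branch strip-line model (placeholder values)
Lmat = np.array([[1.0e-6, 0.3e-6],
                 [0.3e-6, 1.0e-6]])   # self and mutual partial inductances [H]
R = np.array([0.05, 0.05])            # branch resistances [ohm]
dt = 1e-7                             # time step [s]
i = np.zeros(2)
for _ in range(1000):                 # step response to a 1 V source on both branches
    i = step_currents(Lmat, R, np.array([1.0, 1.0]), i, dt)
```

After 1000 steps the currents approach the DC limit v/R = 20 A per branch with the time constant set by the effective inductance (L + M)/R.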
52

MODULAR FAST DIRECT ANALYSIS USING NON-RADIATING LOCAL-GLOBAL SOLUTION MODES

Xu, Xin 01 January 2008
This dissertation proposes a modular fast direct (MFD) analysis method for a class of problems involving a large fixed platform region and a smaller, variable design region. A modular solution algorithm is obtained by first decomposing the problem geometry into platform and design regions. The two regions are effectively detached from one another using basic equivalence concepts. Equivalence principles allow the total system model to be constructed in terms of independent interaction modules associated with the platform and design regions. These modules include interactions with the equivalent surface that bounds the design region. This dissertation discusses how to analyze (fill and factor) each of these modules separately and how to subsequently compose the solution to the original system using the separately analyzed modules. The focus of this effort is on surface integral equation formulations of electromagnetic scattering from conductors and dielectrics. In order to treat large problems, it is necessary to work with sparse representations of the underlying system matrix and other, related matrices. Fortunately, a number of such representations are available. In the following, we will primarily use the adaptive cross approximation (ACA) to fill the multilevel simply sparse method (MLSSM) representation of the system matrix. The MLSSM provides a sparse representation that is similar to the multilevel fast multipole method. Solutions to the linear systems obtained using the modular analysis strategies described above are obtained using direct methods based on the local-global solution (LOGOS) method. In particular, the LOGOS factorization provides a data sparse factorization of the MLSSM representation of the system matrix. In addition, the LOGOS solver also provides an approximate sparse factorization of the inverse of the system matrix. The availability of the inverse eases the development of the MFD method. 
Because the behavior of the LOGOS factorization is critical to the development of the proposed MFD method, a significant part of this dissertation is devoted to providing additional analyses, improvements, and characterizations of LOGOS-based direct solution methods. These further developments of the LOGOS factorization algorithms and their application to the development of the MFD method comprise the most significant contributions of this dissertation.
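The adaptive cross approximation mentioned above builds a low-rank factorization of an admissible interaction block from a few of its rows and columns. A minimal sketch follows, using a full pivot search over an explicitly formed matrix for clarity, whereas practical ACA implementations use partial pivoting and evaluate matrix entries on demand; the kernel and cluster geometry are illustrative:

```python
import numpy as np

def aca(A, tol=1e-10, max_rank=50):
    """Cross approximation of A: returns U, V with A ~ U @ V.

    Repeatedly peels off the rank-one cross through the largest residual
    entry until the residual drops below tol.
    """
    R = A.astype(float)            # residual matrix
    U_cols, V_rows = [], []
    for _ in range(max_rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)  # pivot entry
        piv = R[i, j]
        if abs(piv) < tol:
            break
        u = R[:, j].copy()
        v = R[i, :].copy() / piv
        R -= np.outer(u, v)        # rank-one update of the residual
        U_cols.append(u)
        V_rows.append(v)
    return np.column_stack(U_cols), np.vstack(V_rows)

# Smooth (admissible) interaction block: 1/|x - y| for well-separated clusters
x = np.linspace(0.0, 1.0, 60)
y = np.linspace(5.0, 6.0, 60)
A = 1.0 / np.abs(x[:, None] - y[None, :])
U, V = aca(A)
err = np.linalg.norm(A - U @ V) / np.linalg.norm(A)
```

For well-separated clusters the numerical rank is far below the block dimension, which is what makes the sparse MLSSM-style representations affordable.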
53

Parallelization of the HIROMB ocean model

Wilhelmsson, Tomas January 2002
No description available.
54

Numerical Modelling of Transient and Droplet Transport for Pulsed Pressure - Chemical Vapour Deposition (PP-CVD) Process

Lim, Chin Wai January 2012
The objective of this thesis is to develop an easy-to-use and computationally economical numerical tool to investigate the flow field in the Pulsed Pressure Chemical Vapour Deposition (PP-CVD) reactor. The PP-CVD process is a novel thin film deposition technique with some advantages over traditional CVD methods. The numerical modelling of the PP-CVD flow field is carried out using the Quiet Direct Simulation (QDS) method, a flux-based kinetic-theory approach. Two approaches are considered for the flux reconstruction: a true directional manner and a directional splitting method. Both the true directional and the directionally decoupled QDS codes are validated against various numerical methods, including EFM, direct simulation, a Riemann solver and the Godunov method. Both two-dimensional and axisymmetric test problems are considered. Simulations are conducted to investigate the PP-CVD reactor flow field at 1 Pa and 1 kPa reactor base pressures. A droplet flash evaporation model is presented to model the evaporation and transport of the injected liquid droplets. The solution of the droplet flash evaporation model is used as the inlet condition for the QDS gas phase solver. The droplet model is found to reproduce the pressure rise in the reactor at the predicted rate. A series of parametric studies is conducted for the PP-CVD process. The numerical study confirms the hypothesis that the flow field uniformity is insensitive to the reactor geometry. However, a sufficient distance from the injection inlet is required to allow the injected precursor solution to diffuse uniformly before reaching the substrate. It is also recommended that the substrate not be placed on the reactor’s centre axis.
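Droplet evaporation models of the kind fed into the gas-phase solver are often built on the classical d²-law, in which the droplet's squared diameter shrinks linearly in time. The sketch below is a drastically simplified stand-in for the thesis's flash-evaporation model (which also tracks superheat-driven mass transfer); the diameter and evaporation constant are illustrative values:

```python
import math

def droplet_lifetime(d0, K):
    """Evaporation time under the d^2-law: d(t)^2 = d0^2 - K*t.

    d0: initial droplet diameter [m]; K: evaporation constant [m^2/s].
    """
    return d0**2 / K

def diameter(d0, K, t):
    """Droplet diameter at time t, clipped to zero after full evaporation."""
    d2 = d0**2 - K * t
    return math.sqrt(d2) if d2 > 0.0 else 0.0

# A 20-micron droplet with an illustrative evaporation constant
d0, K = 20e-6, 1e-8
t_life = droplet_lifetime(d0, K)   # time for the droplet to vanish [s]
```

The history of d(t) over a spray of such droplets is what sets the transient mass inflow seen by the reactor.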
55

Optimisation and computational methods to model the oculomotor system with focus on nystagmus

Avramidis, Eleftherios January 2015
Infantile nystagmus is a condition that causes involuntary, bilateral and conjugate oscillations of the eyes, which are predominately restricted to the horizontal plane. In order to investigate the cause of nystagmus, computational models and nonlinear dynamics techniques have been used to model and analyse the oculomotor system. Computational models are important in making predictions and creating a quantitative framework for the analysis of the oculomotor system. Parameter estimation is a critical step in the construction and analysis of these models. A nonlinear dynamics model proposed by Broomhead et al. [1], with a preliminary parameter estimation, has been shown to simulate both normal rapid eye movements (i.e. saccades) and nystagmus oscillations. The application of nonlinear analysis to experimental jerk nystagmus recordings has shown that the local dimensions number of the oscillation varies across the phase angle of the nystagmus cycle. It has been hypothesised that this is due to the impact of signal dependent noise (SDN) on the neural commands in the oculomotor system. The main aims of this study were: (i) to develop parameter estimation methods for the Broomhead et al. [1] model in order to explore its predictive capacity by fitting it to experimental recordings of nystagmus waveforms and saccades; (ii) to develop a stochastic oculomotor model and examine the hypothesis that noise on the neural commands could be the cause of the behavioural characteristics measured from experimental nystagmus time series using nonlinear analysis techniques. In this work, two parameter estimation methods were developed, one for fitting the model to the experimental nystagmus waveforms and one for fitting it to saccades. By using the former method, we successfully fitted the model to experimental nystagmus waveforms. This fit allowed us to find the specific parameter values that set the model to generate these waveforms. 
The types of waveform that we successfully fitted were asymmetric pseudo-cycloid, jerk and jerk with extended foveation. Fits to other types of nystagmus waveform were not examined in this work. Moreover, the results showed which waveforms the model can generate almost perfectly, and the characteristics of a number of jerk waveforms which it cannot exactly generate; these characteristics belong to a specific type of jerk nystagmus waveform with a very extreme fast phase. The latter parameter estimation method allowed us to explore whether the model can generate horizontal saccades of different amplitudes with the same behaviour as observed experimentally. The results suggest that the model can generate the experimental saccadic velocity profiles of different saccadic amplitudes. However, the best fits of the model to the experimental data were obtained when different model parameter values were used for each saccadic amplitude. Our parameter estimation methods are based on multi-objective genetic algorithms (MOGA), which have the advantage of optimising biological models with a multi-objective, high-dimensional and complex search space. However, the integration of these models, for a wide range of parameter combinations, is very computationally intensive for a single central processing unit (CPU). To overcome this obstacle, we accelerated the parameter estimation method by utilising the parallel capabilities of a graphics processing unit (GPU). Depending on the GPU model, this could provide a speedup of 30 times compared to a midrange CPU. The stochastic model that we developed is based on the Broomhead et al. [1] model, with signal dependent noise (SDN) and constant noise (CN) added to the neural commands. We fitted the stochastic model to saccades and jerk nystagmus waveforms. 
It was found that SDN and CN can cause similar variability to the local dimensions number of the oscillation as found in the experimental jerk nystagmus waveforms and in the case of saccade generation the saccadic variability recorded experimentally. However, there are small differences in the simulated behaviour compared to the nystagmus experimental data. We hypothesise that these could be caused by the inability of the model to simulate exactly key jerk waveform characteristics. Moreover, the differences between the simulations and the experimental nystagmus waveforms indicate that the proposed model requires further expansion, and this could include other oculomotor subsystem(s).
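The core of any multi-objective genetic algorithm such as the MOGA used above is Pareto dominance: a candidate survives if no other candidate is at least as good on every objective and strictly better on one. A minimal sketch (the objective pairs are hypothetical, not fitted values from the thesis):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (waveform-fit error, secondary-objective) pairs for five candidates
pop = [(0.10, 3.0), (0.20, 1.0), (0.15, 2.0), (0.30, 3.0), (0.10, 4.0)]
front = pareto_front(pop)
```

In a full MOGA this non-dominated sorting drives selection each generation; evaluating the objectives (integrating the oculomotor model per candidate) is the expensive part that the GPU parallelisation targets.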
56

Real-time operational response methodology for reducing failure impacts in water distribution systems

Mahmoud, Herman Abdulqadir Mahmoud January 2018
Interruption to water services and low water pressure are commonly observed problems in water distribution systems (WDSs). Of particular concern are unplanned events, such as pipe bursts. Current regulation in the UK requires water utilities to provide a reliable water service to consumers, with as few interruptions as possible and of as short a duration as possible. All this pushes water utilities toward developing and using smarter responses to these events, based on advanced tools and solutions, with the aim of changing the network management style from reactive to proactive, reducing water losses, optimizing energy use and providing better services for consumers. This thesis presents a novel methodology for an efficient and effective short-term operational response to an unplanned failure event (such as a pipe burst) in a WDS. The proposed automated, near real-time operational response methodology consists of isolating the failure event and then recovering the affected system area by restoring flows and pressures to normal conditions. Isolation is typically achieved by manipulating the relevant on/off valves located close to the event. Recovery involves selecting an optimal combination of suitable operational network interventions, chosen from a number of possible options with the aim of reducing the negative impact of the failure over a pre-specified time horizon. The intervention options considered here include isolation valve manipulations, changing the pressure reducing valve’s (PRV) outlet pressure, and the installation and use of temporary overland bypasses from nearby hydrants in an adjacent, unaffected part of the network. 
The optimal mix of interventions is identified by using a multi-objective optimization approach driven by the minimization of the negative impact on the consumers and the minimization of the corresponding number of operational interventions (which acts as a surrogate for operational costs). The negative impact of a failure event was quantified here as a volume of water undelivered to consumers and was estimated by using a newly developed pressure-driven model (PDM) based hydraulic solver. The PDM based hydraulic solver was validated on a number of benchmark and real-life networks under different flow conditions. The results obtained clearly demonstrate its advantages when compared to a number of existing methods. The key advantages include the simplicity of its implementation and the ability to predict network pressures and flows in a consistently accurate, numerically stable and computationally efficient manner under both pressure-deficient and normal-flow conditions and in both steady-state and extended period simulations. The new real-time operational response methodology was applied to a real world water distribution network of D-Town. The results obtained demonstrate the effectiveness of the proposed methodology in identifying the Pareto optimal network type intervention strategies that could be ultimately presented to the control room operator for making a suitable decision in near real-time.
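Pressure-driven demand models of the kind underlying the hydraulic solver above commonly use a Wagner-type nodal relation: full demand is delivered above a reference head, nothing below a minimum head, and a power-law fraction in between. A minimal sketch with illustrative parameter defaults (not the calibrated values of the thesis's solver):

```python
def pdm_outflow(q_req, p, p_min=0.0, p_ref=20.0, alpha=0.5):
    """Wagner-type pressure-driven nodal outflow.

    q_req: required demand, p: nodal pressure head [m]. Delivers q_req
    above p_ref, zero at or below p_min, and a fractional power-law flow
    in between. p_min, p_ref and alpha here are illustrative defaults.
    """
    if p <= p_min:
        return 0.0
    if p >= p_ref:
        return q_req
    return q_req * ((p - p_min) / (p_ref - p_min)) ** alpha

# At half the pressure range the node receives about 71% of its demand
flow = pdm_outflow(q_req=10.0, p=10.0)
```

Embedding this relation in the network equations is what lets the solver remain stable and physically meaningful in pressure-deficient conditions, where a demand-driven model would predict negative pressures.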
57

Automated reasoning over string constraints

Liang, Tianyi 01 December 2014
An increasing number of applications in verification and security rely on, or could benefit from, automatic solvers that can check the satisfiability of constraints over a rich set of data types that includes character strings. Unfortunately, most string solvers today are standalone tools that can reason only about some fragment of the theory of strings and regular expressions, sometimes with strong restrictions on the expressiveness of their input language (such as length bounds on all string variables). These specialized solvers reduce string problems to satisfiability problems over specific data types, such as bit vectors, or to automata decision problems. On the other hand, despite their power and success as back-end reasoning engines, general-purpose Satisfiability Modulo Theories (SMT) solvers have so far provided minimal or no native support for string reasoning. This thesis presents a deductive calculus describing a new algebraic approach that allows solving constraints over the theory of unbounded strings and regular expressions natively, without reduction to other problems. We provide proofs of refutation soundness and solution soundness of our calculus, and of solution completeness under a fair proof strategy. Moreover, we show that our calculus is a decision procedure for the theory of regular language membership with length constraints. We have implemented our calculus as a string solver for the theory of (unbounded) strings with concatenation, length, and membership in regular languages, and incorporated it into the SMT solver CVC4 to expand its already large set of built-in theories. This work makes CVC4 the first SMT solver able to accept and process a rich set of mixed constraints over strings, integers, reals, arrays and other data types. In addition, our initial experimental results show that, on string problems, CVC4 is highly competitive with specialized string solvers with a comparable input language. 
We believe that the approach we described in this thesis provides a new idea for string-based formal methods.
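The kind of mixed constraint in question combines concatenation, length and regular membership. The brute-force check below is purely illustrative of the constraint class: it enumerates bounded strings, which is exactly the restriction the calculus above avoids by deciding such constraints natively over unbounded strings:

```python
import itertools
import re

def bounded_sat(check, alphabet="ab", max_len=4):
    """Brute-force a model for a constraint over two string variables x, y.

    Enumerates all strings over `alphabet` up to length `max_len` in
    length-lexicographic order; returns the first (x, y) satisfying
    `check`, or None if no model exists within the bound.
    """
    words = [''.join(w) for n in range(max_len + 1)
             for w in itertools.product(alphabet, repeat=n)]
    for x in words:
        for y in words:
            if check(x, y):
                return x, y
    return None

# A mixed constraint: x ++ y in (ab)*  and  |x| = |y|  and  x != y
model = bounded_sat(lambda x, y: re.fullmatch(r"(ab)*", x + y)
                    and len(x) == len(y) and x != y)
```

Enumeration finds the smallest model x = "a", y = "b"; a native calculus derives such models (or refutations) symbolically, with no a-priori length bound.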
58

Modelling and Exploiting Structures in Solving Propositional Satisfiability Problems

Pham, Duc Nghia January 2006
Recent research has shown that it is often preferable to encode real-world problems as propositional satisfiability (SAT) problems and then solve using a general purpose SAT solver. However, much of the valuable information and structure of these realistic problems is flattened out and hidden inside the corresponding Conjunctive Normal Form (CNF) encodings of the SAT domain. Recently, systematic SAT solvers have been progressively improved and are now able to solve many highly structured practical problems containing millions of clauses. In contrast, state-of-the-art Stochastic Local Search (SLS) solvers still have difficulty in solving structured problems, apparently because they are unable to exploit hidden structure as well as the systematic solvers. In this thesis, we study and evaluate different ways to effectively recognise, model and efficiently exploit useful structures hidden in realistic problems. A summary of the main contributions is as follows: 1. We first investigate an off-line processing phase that applies resolution-based pre-processors to input formulas before running SLS solvers on these problems. We report an extensive empirical examination of the impact of SAT pre-processing on the performance of contemporary SLS techniques. It emerges that while all the solvers examined do indeed benefit from pre-processing, the effects of different pre-processors are far from uniform across solvers and across problems. Our results suggest that SLS solvers need to be equipped with multiple pre-processors if they are ever to match the performance of systematic solvers on highly structured problems. [Part of this study was published at the AAAI-05 conference]. 2. We then look at potential approaches to bridging the gap between SAT and constraint satisfaction problem (CSP) formalisms. 
One approach has been to develop a many-valued SAT formalism (MV-SAT) as an intermediate paradigm between SAT and CSP, and then to translate existing highly efficient SAT solvers to the MV-SAT domain. In this study, we follow a different route, developing SAT solvers that can automatically recognise CSP structure hidden in SAT encodings. This allows us to look more closely at how constraint weighting can be implemented in the SAT and CSP domains. Our experimental results show that a SAT-based mechanism to handle weights, together with a CSP-based method to instantiate variables, is superior to other combinations of SAT and CSP-based approaches. In addition, SLS solvers based on this many-valued weighting approach outperform other existing approaches to handle many-valued CSP structures. [Part of this study was published at the AAAI-05 conference]. 3. Finally, we propose and evaluate six different schemes to encode temporal reasoning problems, in particular the Interval Algebra (IA) networks, into SAT CNF formulas. We then empirically examine the performance of local search as well as systematic solvers on the new temporal SAT representations, in comparison with solvers that operate on native IA representations. Our empirical results show that zChaff (a state-of-the-art complete SAT solver) together with the best IA-to-SAT encoding scheme, can solve temporal problems significantly faster than existing IA solvers working on the equivalent native IA networks. [Part of this study was published at the CP-05 workshop].
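A tiny example of the kind of resolution-based pre-processing paired with SLS solvers above is unit propagation: repeatedly assert unit clauses and simplify the formula. This is a minimal sketch on DIMACS-style integer literals, not any specific pre-processor evaluated in the thesis:

```python
def unit_propagate(clauses):
    """Simplify a CNF formula by unit propagation.

    Clauses are collections of nonzero ints (DIMACS-style literals).
    Returns (residual clauses, implied literals); residual is None if a
    conflict (empty clause) is derived.
    """
    assignment = set()
    clauses = [set(c) for c in clauses]
    while True:
        unit = next((min(c) for c in clauses if len(c) == 1), None)
        if unit is None:
            return [sorted(c) for c in clauses], sorted(assignment, key=abs)
        assignment.add(unit)
        simplified = []
        for c in clauses:
            if unit in c:
                continue                   # clause satisfied, drop it
            c = c - {-unit}                # remove the falsified literal
            if not c:
                return None, sorted(assignment, key=abs)  # conflict
            simplified.append(c)
        clauses = simplified

# (x1) AND (-x1 OR x2) AND (-x2 OR x3 OR x4)  ->  propagates x1, x2
residual, implied = unit_propagate([[1], [-1, 2], [-2, 3, 4]])
```

Pre-processors such as resolution-based simplifiers go further, but even this step shows how implied structure can be made explicit before an SLS solver ever sees the formula.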
59

SPH Modeling of Solitary Waves and Resulting Hydrodynamic Forces on Vertical and Sloping Walls

El-Solh, Safinaz 04 February 2013
Currently, the impact of an extreme wave on infrastructure located near shore is difficult to assess, and there is a lack of established methods to accurately quantify it. Extreme waves, such as tsunamis, generate through breaking extremely powerful hydraulic bores that impact and significantly damage coastal structures and buildings located close to the shoreline. The damage induced by such hydraulic bores is often due to structural failure. Examples of devastating coastal disasters are the 2004 Indian Ocean tsunami, Hurricane Katrina in 2005 and, most recently, the 2011 Tohoku, Japan tsunami. As a result, more advanced research is needed to estimate the magnitude of the forces exerted on structures by such bores. This research presents results of a numerical model based on the Smoothed Particle Hydrodynamics (SPH) method, which is used to simulate the impact of extreme hydrodynamic forces on shore protection walls. Typically, fluids are modeled numerically using a Lagrangian approach, an Eulerian approach or a combination of the two. Many of the problems that commonly arise with more traditional techniques, such as computational efficiency and complexity of implementation, can be avoided through the use of SPH-based models. The SPH method models water particles individually, each with its own characteristics, and thereby accurately depicts the behavior and properties of the flow field. An open source code, known as SPHysics, was used to run the simulations presented in this thesis. The cases analysed consist of hydraulic bores impacting a flat vertical wall as well as a sloping seawall. The analysis includes comparisons of the numerical results with published experimental data. The model is shown to accurately reproduce the formation of solitary waves as well as their propagation and breaking. 
The impacting bore profiles as well as the resulting pressures are also efficiently simulated using the model.
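At the heart of any SPH code is a compactly supported smoothing kernel and the summation density it induces. The sketch below uses the standard cubic spline kernel with 2-D normalisation; particle positions, masses and smoothing length are illustrative, and SPHysics itself uses other kernels and corrections as well:

```python
import math

def cubic_spline_w(r, h):
    """Standard cubic spline SPH kernel W(r, h) with 2-D normalisation."""
    q = r / h
    sigma = 10.0 / (7.0 * math.pi * h * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0                         # compact support: W = 0 beyond 2h

def density(positions, masses, h):
    """SPH summation density: rho_i = sum_j m_j * W(|r_i - r_j|, h)."""
    rho = []
    for xi, yi in positions:
        s = 0.0
        for (xj, yj), mj in zip(positions, masses):
            s += mj * cubic_spline_w(math.hypot(xi - xj, yi - yj), h)
        rho.append(s)
    return rho

# Two equal particles half a smoothing length apart (illustrative values)
rho = density([(0.0, 0.0), (0.05, 0.0)], masses=[1.0, 1.0], h=0.1)
```

Because each particle carries its own mass and velocity, pressure and momentum follow from similar kernel-weighted sums, which is what lets SPH track breaking bores without a fixed mesh.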
60

Analysis and verification of routing effects on signal integrity for high-speed digital stripline interconnects in multi-layer PCB designs

Frejd, Andreas January 2010
The way printed circuit board interconnects for high-speed digital signals are designed ultimately determines the performance that can be achieved for a certain interface, and thus has a profound impact on whether the complete communication channel will comply with the desired standard specification. A good understanding of this behaviour, and methods for anticipating and verifying it through computer simulations and practical measurements, are therefore essential. Characterization of an interconnect can be performed either in the time domain or in the frequency domain. Regardless of the domain chosen, a method for unobtrusively connecting to the test object is required. After several attempts it could be concluded that frequency domain measurements using a vector network analyzer together with microwave probes provide the best measurement fidelity and ease of use; in turn, this method requires the test object to be prepared for the measurement. Advanced computer simulation software is available, but comes with the drawback of dramatically increasing requirements on computational resources as accuracy is improved. In general, these simulators can be configured to show good agreement with measurements at frequencies as high as ten gigahertz. For ideal interconnects, the simplest and thus fastest methods provide good enough accuracy. These simple methods should be complemented with results from more accurate simulations in cases where the physical structure is complex or otherwise deviates from the ideal. Several practical routing situations were found to introduce severe signal integrity issues. Through appropriate use of the methods developed in this thesis, these can be identified in the design process and thereby avoided.
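The simplest and fastest class of methods referred to above is closed-form impedance estimation. A common IPC-2141-style formula for a symmetric stripline is sketched below; the geometry and permittivity values are illustrative placeholders, not the thesis's test boards, and the formula is only a rough estimate valid for narrow traces (roughly w/b < 0.35):

```python
import math

def stripline_z0(w, t, b, er):
    """Closed-form symmetric-stripline characteristic impedance estimate.

    w: trace width, t: trace thickness, b: plane-to-plane spacing (all in
    the same units), er: relative permittivity of the dielectric.
    Z0 = 60/sqrt(er) * ln(4b / (0.67*pi*(0.8w + t))), an IPC-2141-style
    approximation for w/b < ~0.35.
    """
    return 60.0 / math.sqrt(er) * math.log(
        4.0 * b / (0.67 * math.pi * (0.8 * w + t)))

# Illustrative FR-4-like geometry: 120 um trace, 35 um copper, 600 um spacing
z0 = stripline_z0(w=0.12e-3, t=0.035e-3, b=0.60e-3, er=4.3)
```

For complex structures (vias, plane cutouts, bends) such formulas break down, which is where the full-wave simulations and probe-based VNA measurements discussed above take over.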
