131. A macroscopic approach to model rarefied polyatomic gas behavior
Rahimi, Behnam, 02 May 2016
A high-order macroscopic model for the accurate description of rarefied polyatomic gas flows is introduced, based on a simplified kinetic equation. The different energy exchange processes are accounted for with a two-term collision model. The order-of-magnitude method is applied to the primary moment equations to obtain optimized moment definitions and the final scaled set of Grad's 36 moment equations for polyatomic gases. The proposed kinetic model, an extension of the S-model, predicts the correct relaxation of higher moments, delivers the correct Prandtl (Pr) number, and has a proven H-theorem. At first order, a modification of the Navier-Stokes-Fourier (NSF) equations is obtained, which shows a considerably extended range of validity compared with the classical NSF equations in modeling sound waves. At third order of accuracy, a set of 19 regularized PDEs (R19) is obtained. Furthermore, the terms associated with the internal degrees of freedom yield various intermediate orders of accuracy, 13 different orders in total. Attenuation and speed of linear waves are studied as the first application of these sets of equations. For frequencies where the internal degrees of freedom are effectively frozen, the equations reproduce the behavior of monatomic gases. Thereafter, boundary conditions for the proposed macroscopic model are introduced. Unsteady heat conduction in a gas at rest and steady Couette flow are studied numerically and analytically as examples of boundary value problems. Results for different gases are given, and the effects of Knudsen number, degrees of freedom, accommodation coefficients, and temperature-dependent properties are investigated. In some cases the higher-order effects are dominant, and the widely used first-order NSF equations fail to accurately capture the gas behavior and should be replaced by a higher-order set of equations.
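To make the model hierarchy concrete, the sketch below maps a Knudsen number to a plausible choice of equation set. This is an illustration only: the regime cutoffs are conventional rule-of-thumb values assumed here, not results from the thesis.

```python
# Illustrative sketch: choosing a model order by Knudsen number Kn = lambda/L.
# The thresholds below are textbook rule-of-thumb values (an assumption), not
# boundaries derived in the work described above.

def knudsen_number(mean_free_path_m: float, length_scale_m: float) -> float:
    """Kn: ratio of molecular mean free path to characteristic length."""
    return mean_free_path_m / length_scale_m

def suggest_model(kn: float) -> str:
    """Map a Knudsen number to a plausible equation set (illustrative cutoffs)."""
    if kn < 0.01:
        return "classical NSF (near-equilibrium)"
    elif kn < 0.1:
        return "first-order modified NSF (slip regime)"
    elif kn < 1.0:
        return "higher-order moment equations, e.g. R19 (transition regime)"
    return "kinetic solver (free-molecular regime)"

# Air-like mean free path (~68 nm) in a 1-micron channel: Kn ~ 0.068.
print(suggest_model(knudsen_number(6.8e-8, 1.0e-6)))
```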

132. Efficient numerical modelling of wave-structure interaction
Siddorn, Philip David, January 2012
Offshore structures are required to survive in extreme wave environments. Historically, the design of these offshore structures and vessels has relied on wave-tank experiments and linear theory. Today, with advances in computing power, it is becoming feasible to supplement these methods of analysis with fully nonlinear numerical simulation. This thesis is concerned with the development of an efficient method to perform this numerical modelling, in the context of potential flow theory. The interaction of a steep ocean wave with a floating body involves a moving free surface and a wide range of length scales. Attempts to reduce the size of the simulation domain cause problems with wave reflection from the domain edge and with the accurate creation of incident waves. A method of controlling the wave field around a structure is presented. The ability to effectively damp an outgoing wave in a very short distance is demonstrated. Steep incident waves are generated without the requirement for the wave to evolve over a large time or distance before interaction with the body. This enables a general wave-structure interaction problem to be modelled in a small tank, and behave as if it were surrounded by a large expanse of water. The suitability of the boundary element method for performing this modelling is analysed. Potential improvements are presented with respect to accuracy, robustness, and computational complexity. Evidence of third order diffraction is found for an FPSO model.
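The wave-damping idea can be illustrated with the standard "numerical beach" construction: an artificial damping term, ramped up over an absorbing zone, relaxes the free surface toward rest. The quadratic ramp and all values below are common textbook choices assumed for illustration, not the specific control scheme developed in the thesis.

```python
import numpy as np

def damping_coefficient(x, x_start, zone_length, nu_max):
    """Quadratic ramp: 0 at the zone entrance, nu_max at the far wall."""
    xi = np.clip((x - x_start) / zone_length, 0.0, 1.0)
    return nu_max * xi**2

# In a time-stepping scheme the damping enters the free-surface equations as
# an extra relaxation term, e.g. d(eta)/dt += -nu(x) * eta.
x = np.linspace(0.0, 10.0, 201)            # tank coordinate [m]
eta = 0.1 * np.sin(2.0 * np.pi * x / 2.0)  # sample surface elevation [m]
nu = damping_coefficient(x, x_start=8.0, zone_length=2.0, nu_max=5.0)
deta_dt_damping = -nu * eta                # damping contribution to d(eta)/dt
```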

133. Dynamical system decomposition and analysis using convex optimization
Anderson, James David, January 2012
This thesis is concerned with investigating new methods for the analysis of large-scale dynamical systems using convex optimization. The proposed methodology is based on composite Lyapunov theory and is computationally implemented using polynomial programming techniques. The main result of this work is a system decomposition framework that makes it possible to analyze systems of a scale that traditional methods cannot cope with. We begin by addressing the problem of model invalidation. A barrier certificate method for invalidating models in the presence of uncertain data is presented for both continuous- and discrete-time models. It is shown how a re-parameterization of the time-dependent variables can improve the numerical conditioning of the underlying optimization problem. The main contribution of this thesis is an automated dynamical system decomposition framework that permits us to verify the stability of systems whose state dimension is large enough to render traditional computational methods intractable. The underlying idea is to decompose a system into a set of lower-order subsystems connected in feedback, in such a manner that composite methods for stability verification may be employed. What is unique about the algorithm presented is that it takes into account both the dynamics and the topology of the interconnection graph. The methodology is first illustrated with an ecological network and a primal Internet congestion control scheme. The versatility of the decomposition framework is also highlighted: applied to a model of the EGF-MAPK signaling pathway, it identifies biologically relevant subsystems in addition to verifying stability. Finally, we introduce stability metrics for interconnected dynamical systems based on the theory of dissipativity, and conclude by outlining a clustering-based decomposition algorithm that explicitly takes the input and output dynamics into account when determining the system decomposition.
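The computational core of such analyses appears in miniature below: Lyapunov stability verification posed as a convex (semidefinite) program for a linear system x' = Ax, written with the cvxpy package. The thesis works with polynomial (sum-of-squares) programs and composite Lyapunov functions for nonlinear interconnections; this LMI, with an invented system matrix, is only the simplest instance of the idea.

```python
import cvxpy as cp
import numpy as np

# Find P > 0 with A^T P + P A < 0; existence certifies global asymptotic
# stability of x' = A x. The matrix A here is an arbitrary stable example.
A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])
n = A.shape[0]
eps = 1e-6  # strictness margin for the matrix inequalities

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve()
print("certified stable" if prob.status == cp.OPTIMAL else "no certificate")
```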

134. Plasma vertical position control in the COMPASS-D tokamak
Vyas, Parag, January 1996
The plasma vertical position system on the COMPASS-D tokamak is studied in this thesis. An analogue P+D controller is used to regulate the plasma vertical position, which is open-loop unstable. Measurements from inside the vessel are used for the derivative component of the control signal, and external measurements for the proportional component. Two main sources of disturbance are observed on COMPASS-D. One is 600 Hz noise from the thyristor power supplies, which causes large oscillations at the control amplifier output. The other is impulse-like disturbances due to ELMs (edge localized modes), which can occasionally lead to loss of control when the control amplifier saturates. Models of the plasma open-loop dynamics were obtained through system identification, with experimental data used to fit the coefficients of a mathematical model. The frequency response of the model depends strongly on the shape of the plasma, and the shielding of external measurements by the vessel wall, relative to internal measurements, is also observed. The models were used to predict gain margins and phase crossover frequencies, which were found to be in good agreement with measured values. The harsh reactor conditions on the proposed ITER tokamak preclude the use of internal measurements, and on COMPASS-D the stability margins of the loop decrease when only external flux loops are used. High-order controllers were designed to stabilize the system using only external measurements and to reduce the effect of the 600 Hz noise on the control amplifier voltage. The controllers were tested on COMPASS-D and demonstrated improved performance over the simple P+D controller. ELMs cause impulse-like disturbances on the plasma position. The optimal controller minimizing the peak of the impulse response can be calculated analytically for COMPASS-D. A multiobjective controller combining a small peak impulse response with robust stability and noise attenuation can be obtained by numerical search.
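A minimal sketch of the P+D structure described above follows, with illustrative gains and sample time rather than COMPASS-D values: the proportional term acts on the externally measured position, the derivative term on the internal measurement.

```python
# Sketch of the P+D law in discrete form; kp, kd and dt are invented values.
class PDVerticalController:
    def __init__(self, kp: float, kd: float, dt: float):
        self.kp, self.kd, self.dt = kp, kd, dt
        self._z_internal_prev = 0.0

    def update(self, z_external: float, z_internal: float) -> float:
        """One sample: proportional on external, derivative on internal."""
        dz_dt = (z_internal - self._z_internal_prev) / self.dt
        self._z_internal_prev = z_internal
        return -(self.kp * z_external + self.kd * dz_dt)

ctrl = PDVerticalController(kp=50.0, kd=0.5, dt=1e-5)
u = ctrl.update(z_external=1.0e-3, z_internal=1.2e-3)  # amplifier demand
```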

135. Machine learning in multi-frame image super-resolution
Pickup, Lyndsey C., January 2007
Multi-frame image super-resolution is a procedure that takes several noisy low-resolution images of the same scene, acquired under different conditions, and processes them together to synthesize one or more high-quality super-resolution images, with higher spatial-frequency content and less noise and blur than any of the original images. The inputs can take the form of medical images, surveillance footage, digital video, satellite terrain imagery, or images from many other sources. This thesis focuses on Bayesian methods for multi-frame super-resolution, which use a prior distribution over the super-resolution image. The goal is to produce outputs which are as accurate as possible, and this is achieved through three novel super-resolution schemes presented in this thesis. Previous approaches obtained the super-resolution estimate by first computing and fixing the imaging parameters (such as image registration), and then computing the super-resolution image with this registration. In the first of the approaches taken here, superior results are obtained by optimizing over both the registrations and image pixels, giving a fully simultaneous algorithm. Additionally, parameters for the prior distribution are learnt automatically from data, rather than being set by trial and error. In the second approach, uncertainty in the values of the imaging parameters is dealt with by marginalization. In a previous Bayesian image super-resolution approach, the marginalization was over the super-resolution image, necessitating the use of an unfavorable image prior. By integrating over the imaging parameters rather than the image, the novel method presented here allows for more realistic prior distributions, and also reduces the dimension of the integral considerably, removing the main computational bottleneck of that algorithm. Finally, a domain-specific image prior, based upon patches sampled from other images, is presented. For certain types of super-resolution problems where it is applicable, this sample-based prior gives a significant improvement in the super-resolution image quality.
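The generative model underlying these Bayesian schemes is compact: each low-resolution frame y_k is a warped, blurred, decimated view of the high-resolution image x plus noise, y_k = W_k x + n_k, and with a Gaussian smoothness prior the MAP estimate reduces to linear least squares. In the sketch below the W_k are random stand-ins for real registration/blur operators, and the joint optimisation over registrations is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hr, n_lr, n_frames = 64, 16, 10
x_true = rng.random(n_hr)                  # 1D stand-in for the HR image

# Random stand-ins for the warp/blur/decimate operators W_k (an assumption).
Ws = [rng.random((n_lr, n_hr)) / n_hr for _ in range(n_frames)]
ys = [W @ x_true + 0.01 * rng.standard_normal(n_lr) for W in Ws]

lam = 0.1                                  # prior strength
D = np.eye(n_hr) - np.eye(n_hr, k=1)       # first-difference smoothness prior
A = sum(W.T @ W for W in Ws) + lam * D.T @ D
b = sum(W.T @ y for W, y in zip(Ws, ys))
x_map = np.linalg.solve(A, b)              # MAP super-resolution estimate
```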

136. Bayesian statistical models of shape and appearance for subcortical brain segmentation
Patenaude, Brian Matthew, January 2007
Our motivation is to develop an automated technique for the segmentation of subcortical human brain structures from MR images. To this end, models of shape and appearance are constructed and fit to new image data. The statistical models are trained from 317 manually labelled T1-weighted MR images. Shape is modelled using a surface-based point distribution model (PDM), such that the shape space is constrained to linear combinations of the mean shape and eigenvectors of the vertex coordinates. In addition, to model intensity at the structural boundary, intensities are sampled along the surface normal from the underlying image. We propose a novel Bayesian appearance model whereby the relationship between shape and intensity is modelled via the conditional distribution of intensity given shape. Our fully probabilistic approach eliminates the need for arbitrary weightings between shape and intensity, as well as for tuning parameters that specify the relative contributions of shape constraints and intensity information. Leave-one-out cross-validation is used to validate the model and fitting for 17 structures. The PDM for shape requires surface parameterizations of the volumetric manual labels such that vertices retain a one-to-one correspondence across the training subjects. Surface parameterizations with correspondence are generated through the use of deformable models under constraints that embed the correspondence criterion within the deformation process. A novel force that favours equal-area triangles throughout the mesh is introduced. The force adds stability to the mesh, so that minimal smoothing or within-surface motion is required. The use of the PDM for segmentation across a series of subjects results in a set of surfaces that retain point correspondence, which facilitates landmark-based shape analysis. Amongst other metrics, vertex-wise multivariate statistics and discriminant analysis are used to investigate local and global size and shape differences between groups. The model is fit, and shape analysis is applied, to two clinical datasets.
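The PDM at the heart of this approach can be sketched in a few lines: a shape is the mean plus a linear combination of the leading eigenvectors of the training vertex coordinates. The training matrix below is random; in the work above it comes from the 317 surface-parameterised manual labels.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_coords = 30, 3 * 100               # 100 vertices, (x, y, z) each
X = rng.standard_normal((n_subjects, n_coords))  # stand-in training shapes

mean_shape = X.mean(axis=0)
_, svals, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
n_modes = 5
Phi = Vt[:n_modes].T                          # leading eigenvector (mode) matrix
b = np.array([1.0, -0.5, 0.0, 0.0, 0.2])      # shape parameters (mode weights)
new_shape = mean_shape + Phi @ b              # stays within the PDM shape space
```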

137. Automotive combustion modelling and control
Fussey, Peter Michael, January 2014
This thesis seeks to bring together advances in control theory, modelling and controller hardware and apply them to automotive powertrains. Automotive powertrain control is dominated by PID controllers, look-up tables and their derivatives. These controllers have been constantly refined over the last two decades and now perform acceptably well. However, they are becoming excessively complicated and time-consuming to calibrate, while the industry faces ever increasing pressure to improve fuel consumption, reduce emissions and provide driver responsiveness. The challenge is to apply more sophisticated control approaches which address these issues and are at the same time intuitive and straightforward for calibration engineers to tune for good performance. This research is based on a combustion model which, whilst simplified, facilitates an accurate estimate of the harmful NOx and soot emissions. The combustion model combines a representation of the fuel spray and mixing with charge air to give a time-varying distribution of the in-cylinder air and fuel mixture, which is used to calculate flame temperatures and the subsequent emissions. A combustion controller was developed, initially in simulation, using the combustion model to minimise emissions during transient manoeuvres. The control approach was implemented on an FPGA, exploiting parallel computations that allow the algorithm to run in real time. The FPGA was integrated into a test vehicle and tested over a number of standard test cycles, demonstrating that the combustion controller can reduce NOx emissions by over 10% during the US06 test cycle. A further use of the combustion model was in the optimisation of fuel injection parameters to minimise fuel consumption whilst delivering the required torque and respecting constraints on cylinder pressure (to preserve engine integrity) and rate of increase of cylinder pressure (to reduce noise).
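To make the modelling chain concrete, the sketch below evaluates a thermal-NO proxy over an assumed in-cylinder mixture distribution, using the Arrhenius-type exp(-Ta/T) temperature dependence that dominates the extended Zeldovich mechanism. The mixture distribution, flame-temperature curve, and constants are all illustrative assumptions, not the calibrated model from the thesis.

```python
import numpy as np

phi = np.linspace(0.5, 1.5, 11)                  # local equivalence-ratio bins
mass_frac = np.exp(-((phi - 1.0) / 0.25) ** 2)   # assumed mixture distribution
mass_frac /= mass_frac.sum()

# Assumed flame-temperature curve peaking slightly rich of stoichiometric [K].
T_flame = 2300.0 - 1200.0 * (phi - 1.05) ** 2

Ta = 38000.0                                     # activation temperature [K]
nox_rate = mass_frac * np.exp(-Ta / T_flame)     # relative formation rate/bin
nox_proxy = nox_rate.sum()                       # single-cycle emissions proxy
```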

138. Modeling the homeschool timetabling problem using integer programming
Srinivasan, Subhashini, 14 June 2011
Home schooling has been steadily increasing over the past decade; according to a 2007 survey, about 2.5 million children were being home schooled in the US. Typically, parents provide education in their own home, and in some cases an instructor is appointed. The Home School Timetabling Problem (HSTP) deals with assigning subjects, timeslots and rooms to every student while satisfying certain hard and specialty constraints. Integer programming (IP) has been used to solve the HSTP, as it has the advantage of providing information about the relative significance of each constraint with respect to the objective. A prototype GUI has been built in which a parent can enter each student's name, his/her subjects, the duration, days and time for each subject, the availability times of the parent, and so on. This data is then fed into the IP model, which generates a feasible timetable satisfying all of the constraints. When a solution is found, it is formatted into a weekly timetable for each student individually, as well as a complete timetable for all students each day.
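A minimal sketch of the IP structure, with invented data and only the simplest constraints (binary assignment variables, required hours per subject, one subject per slot), might look as follows in PuLP. The real model also covers rooms, multiple students, and parent availability.

```python
import pulp

subjects = {"math": 3, "reading": 2, "science": 2}  # required slots per week
slots = list(range(8))                              # available timeslots

prob = pulp.LpProblem("homeschool_timetable", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (list(subjects), slots), cat="Binary")

# Objective: pack lessons into the earliest slots (any feasible plan would do).
prob += pulp.lpSum(t * x[s][t] for s in subjects for t in slots)

for s, need in subjects.items():                    # hard: required hours met
    prob += pulp.lpSum(x[s][t] for t in slots) == need
for t in slots:                                     # hard: one subject per slot
    prob += pulp.lpSum(x[s][t] for s in subjects) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
timetable = {t: s for s in subjects for t in slots if x[s][t].value() == 1}
print(dict(sorted(timetable.items())))
```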

139. Model-based segmentation methods for analysis of 2D and 3D ultrasound images and sequences
Stebbing, Richard, January 2014
This thesis describes extensions to 2D and 3D model-based segmentation algorithms for the analysis of ultrasound images and sequences. Starting from a common 2D+t "track-to-last" algorithm, it is shown that the typical method of searching for boundary candidates perpendicular to the model contour is unnecessary if, for each boundary candidate, its corresponding position on the model contour is optimised jointly with the model contour geometry. With this observation, two 2D+t segmentation algorithms, which accurately recover boundary displacements and are capable of segmenting arbitrarily long sequences, are formulated and validated. Generalising to 3D, subdivision surfaces are shown to be natural choices for continuous model surfaces, and the algorithms necessary for joint optimisation of the correspondences and model surface geometry are described. Three applications of 3D model-based segmentation for ultrasound image analysis are subsequently presented and assessed: skull segmentation for fetal brain image analysis; face segmentation for shape analysis; and single-frame left ventricle (LV) segmentation from echocardiography images for volume measurement. A framework to perform model-based segmentation of multiple 3D sequences, while jointly optimising an underlying linear basis shape model, is subsequently presented for the challenging application of right ventricle (RV) segmentation from 3D+t echocardiography sequences. Finally, an algorithm to automatically select boundary candidates independent of a model surface estimate is described and presented for the task of LV segmentation. Although motivated by challenges in ultrasound image analysis, the conceptual contributions of this thesis are general and applicable to model-based segmentation problems in many domains. Moreover, the components are modular, enabling straightforward construction of application-specific formulations for new clinical problems as they arise.
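The key observation, that correspondence positions and contour geometry can be optimised jointly, is sketched below for a closed 2D polyline fitted to noisy boundary points by least squares. The data, parameterisation, and regularisation weight are all invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def contour_point(c, u):
    """Point on a closed polyline c at parameter u (periodic in [0, len(c)))."""
    n = len(c)
    i = np.floor(u).astype(int) % n
    frac = (u - np.floor(u))[:, None]
    return (1.0 - frac) * c[i] + frac * c[(i + 1) % n]

def residuals(params, y, n_ctrl, lam):
    c = params[:2 * n_ctrl].reshape(n_ctrl, 2)       # contour control points
    u = params[2 * n_ctrl:]                          # per-point correspondences
    fit = (contour_point(c, u) - y).ravel()          # data term
    smooth = lam * (np.roll(c, -1, axis=0) - c).ravel()  # geometry regulariser
    return np.concatenate([fit, smooth])

rng = np.random.default_rng(2)
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
y = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((40, 2))

n_ctrl = 8
angles = np.linspace(0.0, 2.0 * np.pi, n_ctrl, endpoint=False)
c0 = 0.8 * np.c_[np.cos(angles), np.sin(angles)]    # initial contour guess
u0 = np.linspace(0.0, n_ctrl, 40, endpoint=False)   # initial correspondences
res = least_squares(residuals, np.r_[c0.ravel(), u0], args=(y, n_ctrl, 0.1))
```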

140. Drivers of Dengue Within-Host Dynamics and Virulence Evolution
Ben-Shachar, Rotem, January 2016
Dengue is an important vector-borne virus that infects on the order of 400 million individuals per year. Infection with one of the virus's four serotypes (denoted DENV-1 to DENV-4) may be silent, result in symptomatic dengue 'breakbone' fever, or develop into the more severe dengue hemorrhagic fever/dengue shock syndrome (DHF/DSS). Extensive research has therefore focused on identifying factors that influence dengue infection outcomes. It has been well documented through epidemiological studies that DHF is most likely to result from a secondary heterologous infection, and that individuals experiencing a DENV-2 or DENV-3 infection are typically more likely to present with more severe dengue disease than individuals experiencing a DENV-1 or DENV-4 infection. However, a mechanistic understanding of how these risk factors affect disease outcomes, and further, how the virus's ability to evolve these mechanisms will affect disease severity patterns over time, is lacking. In the second chapter of my dissertation, I formulate mechanistic mathematical models of primary and secondary dengue infections that describe how the dengue virus interacts with the immune response and the effect of this interaction on the risk of developing severe dengue disease. I show that only the innate immune response is needed to reproduce characteristic features of a primary infection, whereas the adaptive immune response is needed to reproduce characteristic features of a secondary dengue infection. I then add to these models a quantitative measure of disease severity that assumes immunopathology, and analyze the effectiveness of virological indicators of disease severity. In the third chapter of my dissertation, I statistically fit these mathematical models to viral load data from dengue patients to understand the mechanisms that drive variation in viral load. I specifically consider the roles that immune status, clinical disease manifestation, and serotype may play in explaining the viral load variation observed across patients. With this analysis, I show that there is statistical support for the theory of antibody-dependent enhancement in the development of severe disease in secondary dengue infections, and for serotype-specific differences in viral infectivity rates, with the infectivity rates of DENV-2 and DENV-3 exceeding those of DENV-1. In the fourth chapter of my dissertation, I integrate these within-host models with a vector-borne epidemiological model to understand the potential for virulence evolution in dengue. Critically, I show that dengue is expected to evolve towards intermediate virulence, and that the optimal virulence of the virus depends strongly on the number of serotypes that co-circulate. Together, these dissertation chapters show that dengue viral load dynamics provide insight into the within-host mechanisms driving differences in dengue disease patterns, and that these mechanisms have important implications for dengue virulence evolution.
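The class of within-host model used in the second chapter can be sketched as a small ODE system: susceptible target cells, infected cells, free virus, and an innate (interferon-like) response. The structure below is a generic target-cell-limited model, and all parameter values are illustrative, not the dissertation's fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

def dengue_rhs(t, y, beta, k, delta, p, c, q, d):
    S, I, V, N = y
    dS = -beta * S * V                          # infection of target cells
    dI = beta * S * V - delta * I - k * I * N   # loss boosted by innate response
    dV = p * I - c * V                          # virion production and clearance
    dN = q * I - d * N                          # innate response driven by infection
    return [dS, dI, dV, dN]

y0 = [1e7, 0.0, 10.0, 0.0]                      # initial cells, virus, response
params = (3e-9, 1e-3, 0.5, 1e3, 5.0, 1e-6, 0.1) # illustrative rate constants
sol = solve_ivp(dengue_rhs, (0.0, 14.0), y0, args=params, dense_output=True)
viral_load = sol.sol(np.linspace(0.0, 14.0, 141))[2]  # V(t) over two weeks
```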