1

Peer production in the U.S. Navy: enlisting Coase's Penguin

Koszarek, William E. January 2009 (has links) (PDF)
Thesis (M.S. in Systems Engineering)--Naval Postgraduate School, December 2009. / Thesis Advisor(s): Langford, Gary O. ; Franck, Raymond. Second Reader: Rummel, John P. "December 2009." Description based on title screen as viewed on January 27, 2010. Author(s) subject terms: Peer Production, Crowd Sourcing, Collaboration, Web 2.0, Peer to peer, P2P. Includes bibliographical references (p. 139-140). Also available in print.
2

Signal processing methods for the analysis of cerebral blood flow and metabolism

Peng, Tingying January 2009 (has links)
An important protective feature of the cerebral circulation is its ability to maintain sufficient cerebral blood flow (CBF) and oxygen supply in accordance with the energy demands of the brain, despite variations in a number of external factors such as arterial blood pressure, heart rate and respiration rate. If cerebral autoregulation is impaired, abnormally low or high CBF can lead to cerebral ischaemia, intracranial hypertension or even capillary damage, thus contributing to the onset of cerebrovascular events. The control and regulation of cerebral blood flow is a dynamic, multivariate phenomenon, and sensitive techniques are required to monitor and process experimental data concerning cerebral blood flow and metabolic rate in a clinical setting. This thesis presents a model simulation study and four related signal processing studies concerned with CBF regulation. The first study models the regulation of the cerebral vasculature in response to systemic changes in blood pressure, dissolved blood gas concentration and neural activation within an integrated haemodynamic system. The model simulations show that the three pathways generally thought to be independent (pressure, CO₂ and activation) greatly influence each other; it is therefore vital to consider parallel changes in unmeasured variables when performing a single-pathway study. The second study shows how simultaneously measured blood gas concentration fluctuations can improve the accuracy of an existing frequency-domain technique for recovering cerebral autoregulation dynamics from spontaneous fluctuations in blood pressure and cerebral blood flow velocity. The third study shows how the continuous wavelet transform can recover both time and frequency information about dynamic autoregulation, including the contribution of blood gas concentration. The fourth study shows how the discrete wavelet transform can be used to investigate frequency-dependent coupling between cerebral and systemic cardiovascular dynamics. The final study then uses these techniques to investigate systemic effects on resting BOLD variability. The general approach taken in this thesis is a combined one of modelling and data analysis. Physiologically based models encapsulate hypotheses about features of CBF regulation, particularly those features that may be difficult to recover using existing analysis methods, and thus provide the motivation for developing both new analysis methods and criteria to evaluate them. Conversely, the statistical features extracted directly from experimental data can be used to validate and improve the models.
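
As a rough illustration of the frequency-domain approach mentioned above (a sketch of the generic technique, not this thesis's implementation), the transfer function between spontaneous arterial blood pressure (ABP) and cerebral blood flow velocity (CBFV) fluctuations can be estimated from their cross- and auto-spectra; all signals below are synthetic stand-ins:

    import numpy as np
    from scipy.signal import csd, welch

    fs = 10.0                      # sampling rate in Hz (assumed)
    t = np.arange(0, 300, 1 / fs)  # five minutes of synthetic data

    # Synthetic ABP and CBFV traces standing in for real recordings.
    abp = np.sin(2 * np.pi * 0.1 * t) + 0.5 * np.random.randn(t.size)
    cbfv = np.sin(2 * np.pi * 0.1 * t + 0.8) + 0.5 * np.random.randn(t.size)

    # Transfer function H(f) = S_xy(f) / S_xx(f): gain and phase of the
    # pressure-to-flow relationship; phase lead at low frequency is a
    # classic marker of intact dynamic autoregulation.
    f, s_xy = csd(abp, cbfv, fs=fs, nperseg=1024)
    _, s_xx = welch(abp, fs=fs, nperseg=1024)
    h = s_xy / s_xx
    gain, phase = np.abs(h), np.angle(h)
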
3

Motion correction and parameter estimation in DCE-MRI sequences : application to colorectal cancer

Bhushan, Manav January 2014 (has links)
Cancer is one of the leading causes of premature death across the world today, and there is an urgent need for imaging techniques that can help in early diagnosis and treatment planning for cancer patients. In the last four decades, magnetic resonance imaging (MRI) has emerged as one of the leading modalities for non-invasive imaging of tumours. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) can acquire information about the perfusion and vascularity of tumours, which can help in predicting response to treatment. Many factors complicate the analysis of DCE-MRI data and make clinical predictions based on it unreliable. During data acquisition there are many sources of uncertainty and error, especially patient motion, which results in the same image position representing many different anatomical locations across time. Apart from motion, there are other inherent uncertainties and noise associated with the measurement of DCE-MRI parameters, which contribute to the model-fitting error observed when applying pharmacokinetic (PK) models to the data. In this thesis, a probabilistic, model-based registration and parameter estimation (MoRPE) framework for motion correction and PK-parameter estimation in DCE-MRI sequences is presented. The MoRPE framework is first compared with conventional motion correction methods on simulated data, and then applied to data from a clinical trial involving twenty colorectal cancer patients. On the clinical data, the ability of MoRPE to discriminate between responders and non-responders to combined chemo- and radiotherapy is tested, and found to be superior to that of other methods. The effect of incorporating different arterial input functions within MoRPE is also assessed. Following this, a quantitative analysis of the uncertainties associated with the different PK parameters is performed using a variational Bayes framework. This analysis quantifies the extent to which motion correction affects the uncertainties associated with different parameters. Finally, the importance of estimating the spatial heterogeneity of PK parameters within tumours is assessed. The efficacies of different measures of spatial heterogeneity in predicting response to therapy from the pre-therapy scan alone are compared, and the prognostic value of a newly derived PK parameter, the 'acceleration constant', is investigated. Integrating the uncertainty estimates of different DCE-MRI parameters into the calculation of their heterogeneity measures is also shown to improve the prediction of response to therapy.
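
For context, pharmacokinetic analysis of DCE-MRI commonly fits the standard Tofts model to each voxel's contrast-concentration curve. The sketch below shows a plain nonlinear least-squares fit on toy data; MoRPE itself couples this estimation with registration in a probabilistic framework, which is not reproduced here:

    import numpy as np
    from scipy.optimize import curve_fit

    def tofts(t, ktrans, ve, aif):
        """Standard Tofts model: C_t(t) = Ktrans * (AIF conv exp(-Ktrans/ve * t))."""
        dt = t[1] - t[0]
        kernel = np.exp(-(ktrans / ve) * t)
        return ktrans * np.convolve(aif, kernel)[: t.size] * dt

    t = np.linspace(0, 5, 100)                    # minutes
    aif = 5.0 * t * np.exp(-2.0 * t)              # toy arterial input function
    true_curve = tofts(t, 0.25, 0.4, aif)
    data = true_curve + 0.01 * np.random.randn(t.size)  # noisy voxel curve

    popt, _ = curve_fit(lambda tt, kt, ve: tofts(tt, kt, ve, aif),
                        t, data, p0=[0.1, 0.3], bounds=(1e-6, [2.0, 1.0]))
    ktrans_hat, ve_hat = popt
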
4

An anatomical model of the cerebral vasculature and blood flow

Lucas, Claire January 2013 (has links)
The brain accounts for around 2% of adult human bodyweight but consumes 20% of the resting oxygen available to the whole body. The brain is dependent on a constant supply of oxygen to tissue, transported from the heart via the vasculature and carried in blood. An interruption to flow can lead to ischaemia (a reduced oxygen supply), and prolonged interruption may result in tissue death and permanent brain damage. The cerebral vasculature consists of many densely packed micro-vessels with a very large total surface area. Oxygen dissolved in blood enters tissue by passive diffusion through the micro-vessel walls. Imaging shows bursts of metabolic activity and flow in localised brain areas coordinated with brain activity (such as raising a hand). An appropriate level of oxygenation, according to physiological demand, is maintained via autoregulation: a set of response pathways in the brain which cause upstream or downstream vessels to expand or contract in diameter as necessary to provide sufficient oxygen to every region of the brain. Autoregulation is also evident in the response to pressure changes in the vasculature: the perfusing pressure can vary over a wide range from the basal state with only a small effect on flow, owing to the constriction or dilation of vessels. Presented here is a new vasculature model in which diameter and length are calculated to match the available data for flow velocity and blood pressure in different sized vessels. These vessels are arranged in a network of six generations each of bifurcating arterioles and venules, and a set of capillary beds. The input pressure and number of generations are the only specifications required to describe the network. The number of vessels, and therefore the vessel geometry, is governed by how many generations are chosen, and this can be altered to create simpler or more complex networks. The flow and oxygen concentrations are calculated from the vessel resistances, which follow from the geometry, using Kirchhoff's circuit laws. The passive and active length-tension characteristics of the vasculature are established using an approximation of the network at the upper and lower autoregulation limits. An activation model is described, with an activation factor which governs the contributions of elastic and muscle tension to the total vessel tension. This tension balances the circumferential tension due to pressure and diameter, and the change in activation sets the vessel diameter. The mass transport equation for oxygen is used to calculate the concentration of oxygen at every point in the network, using data for oxygen saturation to establish a relationship between the permeability of the vessel wall to oxygen and the geometry and flow in individual vessels. A tissue compartment is introduced which enables the modelling of metabolic control. There is evidence for a coordinated response by surrounding vessels to local changes; a signal is therefore proposed, based on oxygen demand, which can be conducted upstream. This signal decays exponentially with vessel length but also accumulates with the signals added from other vessels. The activation factor is thus set by weighted signals proportional to changes in tissue concentration, circumferential tension, shear stress and conducted oxygen demand. The model is able to reproduce the autoregulation curve, whereby a change in pressure has only a small effect on flow. The model is also able to replicate experimental results for diameter and tissue concentration following an increase in oxygen demand.
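
A minimal sketch of the Kirchhoff-style calculation described above, assuming a toy four-node network with Hagen-Poiseuille resistances (the geometry and parameter values are illustrative, not taken from the thesis):

    import numpy as np

    def poiseuille_resistance(length, radius, mu=3.5e-3):
        """Hagen-Poiseuille resistance R = 8*mu*L / (pi*r^4) for laminar flow."""
        return 8 * mu * length / (np.pi * radius**4)

    # Toy 4-node network: node 0 = arterial inlet, node 3 = venous outlet,
    # with two parallel paths 0-1-3 and 0-2-3 (a single bifurcation).
    edges = [(0, 1), (1, 3), (0, 2), (2, 3)]
    lengths = [1e-3, 1e-3, 1e-3, 1e-3]       # m (assumed)
    radii = [50e-6, 40e-6, 50e-6, 40e-6]     # m (assumed)
    g = [1 / poiseuille_resistance(L, r) for L, r in zip(lengths, radii)]

    # Kirchhoff's current law: assemble the conductance (Laplacian) matrix.
    n = 4
    G = np.zeros((n, n))
    for (i, j), gij in zip(edges, g):
        G[i, i] += gij; G[j, j] += gij
        G[i, j] -= gij; G[j, i] -= gij

    # Fix boundary pressures (Pa) at inlet and outlet, solve interior nodes.
    p = np.full(n, np.nan)
    p[0], p[3] = 12000.0, 2000.0
    interior = [1, 2]
    A = G[np.ix_(interior, interior)]
    b = -G[np.ix_(interior, [0, 3])] @ p[[0, 3]]
    p[interior] = np.linalg.solve(A, b)
    flows = [gij * (p[i] - p[j]) for (i, j), gij in zip(edges, g)]
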
5

Efficient numerical modelling of wave-structure interaction

Siddorn, Philip David January 2012 (has links)
Offshore structures are required to survive in extreme wave environments. Historically, the design of these offshore structures and vessels has relied on wave-tank experiments and linear theory. Today, with advances in computing power, it is becoming feasible to supplement these methods of analysis with fully nonlinear numerical simulation. This thesis is concerned with the development of an efficient method to perform this numerical modelling, in the context of potential flow theory. The interaction of a steep ocean wave with a floating body involves a moving free surface and a wide range of length scales. Attempts to reduce the size of the simulation domain cause problems with wave reflection from the domain edge and with the accurate creation of incident waves. A method of controlling the wave field around a structure is presented. The ability to effectively damp an outgoing wave in a very short distance is demonstrated. Steep incident waves are generated without the requirement for the wave to evolve over a large time or distance before interaction with the body. This enables a general wave-structure interaction problem to be modelled in a small tank, and behave as if it were surrounded by a large expanse of water. The suitability of the boundary element method for performing this modelling is analysed. Potential improvements are presented with respect to accuracy, robustness, and computational complexity. Evidence of third-order diffraction is found for an FPSO model.
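
One common way to damp outgoing waves in a short distance is a "numerical beach": an absorption term with a smoothly ramped coefficient added to the free-surface update. A one-dimensional sketch of the idea (an assumed generic formulation, not the thesis's boundary element implementation):

    import numpy as np

    # 1-D sketch of a numerical beach: an absorption term -nu(x)*eta is
    # added to the free-surface update, with nu(x) ramped smoothly from
    # zero so outgoing waves are damped over a short zone instead of
    # reflecting from the domain edge. Tank length, zone size and damping
    # strength below are all assumed values.
    L, nx = 10.0, 500
    x = np.linspace(0.0, L, nx)
    x0 = 8.0                                   # beach occupies the last 20%
    nu = np.where(x > x0, 3.0 * ((x - x0) / (L - x0)) ** 2, 0.0)

    def damped_step(eta, rhs, dt):
        """Advance the surface elevation eta one time step, where rhs is
        whatever the undamped free-surface dynamics contribute; inside the
        beach the extra -nu*eta term absorbs the wave."""
        return eta + dt * (rhs - nu * eta)
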
6

Dynamical system decomposition and analysis using convex optimization

Anderson, James David January 2012 (has links)
This thesis is concerned with investigating new methods for the analysis of large-scale dynamical systems using convex optimization. The proposed methodology is based on composite Lyapunov theory and is computationally implemented using polynomial programming techniques. The main result of this work is the development of a system decomposition framework that makes it possible to analyze systems of a scale with which traditional methods cannot cope. We begin by addressing the problem of model invalidation. A barrier certificate method for invalidating models in the presence of uncertain data is presented for both continuous- and discrete-time models. It is shown how a re-parameterization of the time-dependent variables can improve the numerical conditioning of the underlying optimization problem. The main contribution of this thesis is the development of an automated dynamical system decomposition framework that permits us to verify the stability of systems whose state dimension is large enough to render traditional computational methods intractable. The underlying idea is to decompose a system into a set of lower-order subsystems connected in feedback, in such a manner that composite methods for stability verification may be employed. What is unique about the algorithm presented is that it takes into account both the dynamics and the topology of the interconnection graph. In the first instance we illustrate the methodology with an ecological network and a primal Internet congestion control scheme. The versatility of the decomposition framework is also highlighted when it is applied to a model of the EGF-MAPK signaling pathway, where it identifies biologically relevant subsystems in addition to verifying stability. Finally we introduce stability metrics for interconnected dynamical systems based on the theory of dissipativity. We conclude by outlining a clustering-based decomposition algorithm that explicitly takes into account the input and output dynamics when determining the system decomposition.
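
For a linear subsystem, the Lyapunov search underlying this kind of analysis reduces to a semidefinite feasibility problem; the thesis works with polynomial (sum-of-squares) programs, but the linear case sketched below with cvxpy (an assumed stand-in for the SOS tooling) shows the shape of the computation:

    import cvxpy as cp
    import numpy as np

    # Stability certificate for a linear (sub)system x' = A x: find P > 0
    # with A'P + PA < 0. This is the LMI analogue of the polynomial
    # (sum-of-squares) programs used for the nonlinear case.
    A = np.array([[-1.0, 2.0],
                  [0.0, -3.0]])
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    eps = 1e-6
    constraints = [P >> eps * np.eye(n),
                   A.T @ P + P @ A << -eps * np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    feasible = prob.status == cp.OPTIMAL   # feasibility certifies stability
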
7

Plasma vertical position control in the COMPASS–D tokamak

Vyas, Parag January 1996 (has links)
The plasma vertical position system on the COMPASS–D tokamak is studied in this thesis. An analogue P+D controller is used to regulate the plasma vertical position, which is open-loop unstable. Measurements from inside the vessel are used for the derivative component of the control signal, and external measurements for the proportional component. Two main sources of disturbance are observed on COMPASS–D. One source is 600 Hz noise from the thyristor power supplies, which causes large oscillations at the control amplifier output. Another source is impulse-like disturbances due to ELMs (Edge Localized Modes), which can occasionally lead to loss of control when the control amplifier saturates. Models of the plasma open-loop dynamics were obtained using system identification: experimental data are used to fit the coefficients of a mathematical model. The frequency response of the model is strongly dependent on the shape of the plasma. The effect of shielding by the vessel wall on external measurements, when compared with internal measurements, is also observed. The models were used to predict values of gain margins and phase crossover frequencies, which were found to be in good agreement with measured values. The harsh reactor conditions on the proposed ITER tokamak preclude the use of internal measurements. On COMPASS–D the stability margins of the loop decrease when using only external flux loops. High-order controllers were designed to stabilize the system using only external measurements and to reduce the effect of the 600 Hz noise on the control amplifier voltage. The controllers were tested on COMPASS–D and demonstrated the improved performance of high-order controllers over the simple P+D controller. ELMs cause impulse-like disturbances on the plasma position. The optimal controller minimizing the peak of the impulse response can be calculated analytically for COMPASS–D. A multiobjective controller which combines a small peak impulse response with robust stability and noise attenuation can be obtained using a numerical search.
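
As a toy illustration of why a P+D law can stabilize an open-loop-unstable vertical position (a generic second-order model with assumed parameters, not the COMPASS–D dynamics):

    import numpy as np

    # Toy open-loop-unstable plant G(s) = b / (s^2 - a), a > 0, under
    # P+D feedback u = -(Kp + Kd*s) y. Closed-loop characteristic
    # polynomial: s^2 + b*Kd*s + (b*Kp - a). The proportional gain must
    # overcome the instability (Kp > a/b) and the derivative term
    # supplies the damping.
    a, b = 50.0, 1.0                 # assumed growth rate and input gain

    def closed_loop_poles(Kp, Kd):
        return np.roots([1.0, b * Kd, b * Kp - a])

    for Kp, Kd in [(40.0, 5.0), (80.0, 0.0), (80.0, 5.0)]:
        poles = closed_loop_poles(Kp, Kd)
        verdict = "stable" if np.all(poles.real < 0) else "unstable"
        print(f"Kp={Kp:5.1f} Kd={Kd:4.1f} poles={poles} -> {verdict}")
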
8

Machine learning in multi-frame image super-resolution

Pickup, Lyndsey C. January 2007 (has links)
Multi-frame image super-resolution is a procedure which takes several noisy low-resolution images of the same scene, acquired under different conditions, and processes them together to synthesize one or more high-quality super-resolution images, with higher spatial frequency, and less noise and image blur than any of the original images. The inputs can take the form of medical images, surveillance footage, digital video, satellite terrain imagery, or images from many other sources. This thesis focuses on Bayesian methods for multi-frame super-resolution, which use a prior distribution over the super-resolution image. The goal is to produce outputs which are as accurate as possible, and this is achieved through three novel super-resolution schemes presented in this thesis. Previous approaches obtained the super-resolution estimate by first computing and fixing the imaging parameters (such as image registration), and then computing the super-resolution image with this registration. In the first of the approaches taken here, superior results are obtained by optimizing over both the registrations and image pixels, creating a complete simultaneous algorithm. Additionally, parameters for the prior distribution are learnt automatically from data, rather than being set by trial and error. In the second approach, uncertainty in the values of the imaging parameters is dealt with by marginalization. In a previous Bayesian image super-resolution approach, the marginalization was over the super-resolution image, necessitating the use of an unfavorable image prior. By integrating over the imaging parameters rather than the image, the novel method presented here allows for more realistic prior distributions, and also reduces the dimension of the integral considerably, removing the main computational bottleneck of the other algorithm. Finally, a domain-specific image prior, based upon patches sampled from other images, is presented. For certain types of super-resolution problems where it is applicable, this sample-based prior gives a significant improvement in the super-resolution image quality.
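
The MAP estimate at the heart of Bayesian super-resolution reduces, for a Gaussian prior, to a regularized least-squares problem over the high-resolution pixels. A toy one-dimensional sketch with an assumed blur-and-downsample observation model (registrations taken as known, so none of the thesis's novel marginalization appears here):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    # Toy 1-D "scene" and K low-resolution observations y_k = W_k x + noise,
    # where each W_k blurs, shifts and downsamples the high-res signal x.
    n_hi, factor, K = 64, 4, 5
    x_true = np.sin(np.linspace(0, 4 * np.pi, n_hi)) \
        + (np.linspace(0, 1, n_hi) > 0.5)

    def observation_matrix(shift):
        W = np.zeros((n_hi // factor, n_hi))
        for i in range(W.shape[0]):
            centre = i * factor + shift
            for j in range(max(0, centre - 2), min(n_hi, centre + 3)):
                W[i, j] = 1.0      # crude box blur around the sample point
            W[i] /= W[i].sum()
        return W

    Ws = [observation_matrix(k % factor) for k in range(K)]
    ys = [W @ x_true + 0.02 * rng.standard_normal(W.shape[0]) for W in Ws]

    # MAP objective: data misfit plus a Gaussian smoothness prior on x.
    lam = 0.05
    D = np.diff(np.eye(n_hi), axis=0)          # first-difference operator

    def neg_log_post(x):
        data = sum(np.sum((y - W @ x) ** 2) for W, y in zip(Ws, ys))
        return data + lam * np.sum((D @ x) ** 2)

    x_map = minimize(neg_log_post, np.zeros(n_hi), method="L-BFGS-B").x
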
9

Bayesian statistical models of shape and appearance for subcortical brain segmentation

Patenaude, Brian Matthew January 2007 (has links)
Our motivation is to develop an automated technique for the segmentation of subcortical human brain structures from MR images. To this purpose, models of shape and appearance are constructed and fitted to new image data. The statistical models are trained on 317 manually labelled T1-weighted MR images. Shape is modelled using a surface-based point distribution model (PDM), such that the shape space is constrained to linear combinations of the mean shape and eigenvectors of the vertex coordinates. In addition, to model intensity at the structural boundary, intensities are sampled along the surface normal from the underlying image. We propose a novel Bayesian appearance model whereby the relationship between shape and intensity is modelled via the conditional distribution of intensity given shape. Our fully probabilistic approach eliminates the need for arbitrary weightings between shape and intensity, as well as for tuning parameters that specify the relative contribution of shape constraints and intensity information. Leave-one-out cross-validation is used to validate the model and fitting for 17 structures. The PDM for shape requires surface parameterizations of the volumetric manual labels such that vertices retain a one-to-one correspondence across the training subjects. Surface parameterizations with correspondence are generated through the use of deformable models under constraints that embed the correspondence criterion within the deformation process. A novel force that favours equal-area triangles throughout the mesh is introduced. The force adds stability to the mesh such that minimal smoothing or within-surface motion is required. The use of the PDM for segmentation across a series of subjects results in a set of surfaces that retain point correspondence. The correspondence facilitates landmark-based shape analysis. Amongst other metrics, vertex-wise multivariate statistics and discriminant analysis are used to investigate local and global size and shape differences between groups. The model is fitted, and shape analysis applied, to two clinical datasets.
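
A minimal sketch of the point distribution model idea: training shapes in vertex correspondence are reduced by PCA, and any model shape is the mean plus a linear combination of eigenvectors (random stand-in data here, not the 317 labelled images):

    import numpy as np

    # Toy PDM: each training shape is a flattened vector of vertex
    # coordinates (100 vertices in 3-D), in correspondence across subjects.
    rng = np.random.default_rng(1)
    n_subjects, n_vertices = 317, 100
    shapes = rng.standard_normal((n_subjects, n_vertices * 3))  # stand-in

    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape

    # Eigenvectors of the vertex-coordinate covariance via SVD.
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    eigvecs = Vt                        # modes of shape variation
    eigvals = s**2 / (n_subjects - 1)

    # Keep enough modes to explain 95% of the variance; any shape in the
    # model space is mean_shape + eigvecs[:k].T @ b for coefficients b.
    k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95)) + 1
    b = rng.standard_normal(k) * np.sqrt(eigvals[:k])  # plausible coeffs
    new_shape = mean_shape + eigvecs[:k].T @ b
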
10

Automotive combustion modelling and control

Fussey, Peter Michael January 2014 (has links)
This thesis seeks to bring together advances in control theory, modelling and controller hardware and apply them to automotive powertrains. Automotive powertrain control is dominated by PID controllers, look-up tables and their derivatives. These controllers have been constantly refined over the last two decades and now perform acceptably well. However, they are now becoming excessively complicated and time consuming to calibrate. At the same time the industry faces ever increasing pressure to improve fuel consumption, reduce emissions and provide driver responsiveness. The challenge is to apply more sophisticated control approaches which address these issues and at the same time are intuitive and straightforward for calibration engineers to tune for good performance. This research is based on a combustion model which, whilst simplified, facilitates an accurate estimate of the harmful NOₓ and soot emissions. The combustion model combines a representation of the fuel spray and mixing with charge air to give a time-varying distribution of the in-cylinder air and fuel mixture, which is used to calculate flame temperatures and the subsequent emissions. A combustion controller was developed, initially in simulation, using the combustion model to minimise emissions during transient manoeuvres. The control approach was implemented on an FPGA, exploiting parallel computations that allow the algorithm to run in real time. The FPGA was integrated into a test vehicle and tested over a number of standard test cycles, demonstrating that the combustion controller can be used to reduce NOₓ emissions by over 10% during the US06 test cycle. A further use of the combustion model was in the optimisation of fuel injection parameters to minimise fuel consumption, whilst delivering the required torque and respecting constraints on cylinder pressure (to preserve engine integrity) and rate of increase in cylinder pressure (to reduce noise).
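
The injection-parameter optimisation described in the last paragraph is, in outline, a constrained minimisation. The sketch below uses hypothetical surrogate functions for fuel consumption, torque and peak cylinder pressure purely to show the structure of the problem; none of these models come from the thesis:

    import numpy as np
    from scipy.optimize import minimize

    # Decision variables: injection timing (deg BTDC) and duration (ms).
    # All model functions below are hypothetical surrogates.
    def fuel_consumption(u):
        timing, duration = u
        return duration + 0.01 * (timing - 10.0) ** 2

    def torque(u):
        timing, duration = u
        return 80.0 * duration - 0.5 * (timing - 12.0) ** 2

    def peak_pressure(u):
        timing, duration = u
        return 60.0 + 20.0 * duration + 1.5 * timing

    res = minimize(
        fuel_consumption,
        x0=np.array([10.0, 2.0]),
        constraints=[
            # Meet the torque demand (N m, assumed).
            {"type": "ineq", "fun": lambda u: torque(u) - 150.0},
            # Respect the cylinder pressure limit (bar, assumed).
            {"type": "ineq", "fun": lambda u: 180.0 - peak_pressure(u)},
        ],
        bounds=[(0.0, 30.0), (0.5, 5.0)],
        method="SLSQP",
    )
    timing_opt, duration_opt = res.x
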
