1

Mathematical modeling of the transmission dynamics of malaria in South Sudan

Mukhtar, Abdulaziz Yagoub Abdelrahman January 2019 (has links)
Philosophiae Doctor - PhD / Malaria is a common infection in tropical areas, transmitted between humans through the bites of female Anopheles mosquitoes as they seek blood meals for egg production. The infection poses a direct threat to the lives of many people in South Sudan. Reports show that malaria accounts for a large proportion of morbidity and mortality in the fledgling nation, 20% to 40% of morbidity and 20% to 25% of mortality, with the majority of those affected being children and pregnant women. In this thesis, we construct and analyze mathematical models for malaria transmission in the South Sudanese context, incorporating the national malaria control strategic plan. In addition, we investigate important factors, such as climatic conditions and population mobility, that may drive malaria in South Sudan. Furthermore, we study a stochastic version of the deterministic model obtained by introducing white noise.
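The thesis's actual equations are not reproduced in this record, but a minimal deterministic sketch in the Ross-Macdonald tradition illustrates the kind of human-mosquito transmission system such models build on; all parameter values below are invented for illustration.

```python
# Minimal Ross-Macdonald-style malaria sketch (illustrative only; the
# thesis's South Sudan model, control terms, and parameters differ).
import numpy as np
from scipy.integrate import solve_ivp

def malaria_odes(t, y, a=0.3, b=0.5, c=0.5, r=0.05, g=0.1, m=10.0):
    """x: infected fraction of humans, z: infected fraction of mosquitoes.
    a: biting rate, b/c: infection probabilities per bite,
    r: human recovery rate, g: mosquito death rate, m: mosquitoes per human."""
    x, z = y
    dx = m * a * b * z * (1.0 - x) - r * x   # new human infections minus recoveries
    dz = a * c * x * (1.0 - z) - g * z       # new mosquito infections minus deaths
    return [dx, dz]

sol = solve_ivp(malaria_odes, (0.0, 365.0), [0.01, 0.01], max_step=1.0)
print("human prevalence after one year: %.3f" % sol.y[0, -1])
```

A stochastic version of the kind the abstract mentions would perturb these rates with white noise, for example via an Euler-Maruyama discretisation of the corresponding stochastic differential equations.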
2

Active Sonar Tracking Under Realistic Conditions

Liu, Ben January 2019 (has links)
This thesis focuses on the problem of underwater target tracking under realistic conditions using active sonar. It addresses the following specific problems: 1) underwater detection in three-dimensional (3D) space using multipath detections and an uncertain sound speed profile in heavy clutter; 2) tracking a group of divers whose motions are mutually dependent, using sonar detections corrupted by unknown structured background clutter; 3) extended target tracking (ETT) with a high-resolution sonar in the presence of multipath detections and measurement origin uncertainty. Unrealistic assumptions about the environmental conditions may degrade the performance of underwater tracking algorithms. Hence, underwater target tracking under realistic conditions is addressed by integrating the environment-induced uncertainties or constraints into the trackers. First, an iterated Bayesian framework is formulated using a ray-tracing model and an extension of the Maximum Likelihood Probabilistic Data Association (ML-PDA) algorithm to make use of multipath information. With the ray-tracing model, the algorithm can handle a more realistic sound speed profile (SSP) instead of the commonly assumed constant-velocity or isogradient SSP. Also, the iterated framework allows the SSP and the target state to be estimated simultaneously in uncertain multipath environments. Second, a new diver dynamic motion (DDM) model is integrated into the Probability Hypothesis Density (PHD) filter to track divers whose motions depend on one another. The algorithm is implemented with Gaussian mixtures (GM) to ensure low computational complexity. The DDM model captures not only inter-target interactions but also environmental influences (e.g., water flow). Furthermore, a log-Gaussian Cox process (LGCP) model is seamlessly integrated into the proposed filter to distinguish target-originated measurements from false alarms. The final topic of interest is the ETT problem with multipath detections and clutter, which is practically relevant but barely addressed in the literature. An improved filter, MP-ET-PDA, combining the classical probabilistic data association (PDA) filter with random matrices (RM), is proposed; by considering all possible association events, it provides optimal estimates. To deal with the high computational load resulting from the data association, a variational Bayesian (VB) clustering-aided MP-ET-PDA is proposed to provide near real-time processing capability. The traditional Cramér-Rao lower bound (CRLB), the inverse of the Fisher information matrix (FIM), quantifies the best achievable accuracy of the estimates; for each estimation problem, the corresponding theoretical bounds are derived for performance evaluation under realistic underwater conditions. / Thesis / Doctor of Philosophy (PhD)
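As a small illustration of the performance-bound machinery mentioned at the end of the abstract, the sketch below computes the CRLB as the inverse of the Fisher information matrix for a toy problem: localising a static target from noisy range (sonar-like) measurements at known sensor positions. The geometry and noise level are invented; the thesis derives bounds for far richer multipath conditions.

```python
# Illustrative CRLB sketch: best achievable accuracy for a 2-D target
# position from Gaussian range measurements (not the thesis's bounds).
import numpy as np

sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # sensor xy (m)
target = np.array([40.0, 60.0])                               # true target xy (m)
sigma = 2.0                                                   # range noise std (m)

fim = np.zeros((2, 2))
for s in sensors:
    d = target - s
    r = np.linalg.norm(d)
    grad = d / r                      # gradient of range w.r.t. target position
    fim += np.outer(grad, grad) / sigma**2

crlb = np.linalg.inv(fim)             # lower-bounds any unbiased estimator's covariance
print("position RMSE lower bound: %.2f m" % np.sqrt(np.trace(crlb)))
```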
3

Bayesian point process modelling of ecological communities

Nightingale, Glenna Faith January 2013 (has links)
The modelling of biological communities is important for furthering our understanding of species coexistence and the mechanisms involved in maintaining biodiversity. This involves considering not only interactions between individual biological organisms but also the incorporation of covariate information, if available, into the modelling process. This thesis explores the use of point processes to model interactions in bivariate point patterns within a Bayesian framework and, where applicable, in conjunction with covariate data. Specifically, we distinguish between symmetric and asymmetric species interactions and model these using appropriate point processes. We consider both pairwise interaction point processes, which allow for inhibitory interactions, and area-interaction point processes, which allow for both inhibitory and attractive interactions. It is envisaged that the analyses and innovations presented in this thesis will contribute to the parsimonious modelling of biological communities.
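The record gives no model details, but a toy Metropolis sampler for a Strauss (pairwise-inhibition) point process shows the kind of density such interaction models place on point patterns; the window, interaction range, and inhibition parameter below are invented, and the thesis's bivariate, area-interaction, and Bayesian machinery goes well beyond this.

```python
# Illustrative sketch: Metropolis sampling from a Strauss process with a
# fixed number of points on the unit square. Density ~ gamma**s(x), where
# s(x) counts pairs closer than r; gamma < 1 penalises close pairs.
import numpy as np

rng = np.random.default_rng(0)
n, r, gamma, n_iter = 50, 0.05, 0.2, 20000  # points, range, inhibition, sweeps

def n_close_pairs(pts, r):
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return (d[np.triu_indices(len(pts), k=1)] < r).sum()

pts = rng.random((n, 2))
s = n_close_pairs(pts, r)
for _ in range(n_iter):
    i = rng.integers(n)
    prop = pts.copy()
    prop[i] = rng.random(2)                   # propose moving one point uniformly
    s_new = n_close_pairs(prop, r)
    if rng.random() < gamma ** (s_new - s):   # Strauss density ratio
        pts, s = prop, s_new
print("close pairs in final pattern:", s)
```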
4

Nonlinear design of geophysical surveys and processing strategies

Guest, Thomas January 2010 (has links)
The principal aim of all scientific experiments is to infer knowledge about a set of parameters of interest through the process of data collection and analysis. In the geosciences, large sums of money are spent on the data analysis stage but much less attention is focussed on the data collection stage. Statistical experimental design (SED), a mature field of statistics, uses mathematically rigorous methods to optimise the data collection stage so as to maximise the amount of information recorded about the parameters of interest. The uptake of SED methods in geophysics has been limited: the majority of SED research is based on linear and linearised theories, whereas most geophysical methods are highly nonlinear, and so the developed methods are not robust. Nonlinear SED methods are computationally demanding, and hence the methods that exist to date either restrict designs to be very simplistic or are computationally infeasible, and therefore cannot be used in an industrial setting. In this thesis, I first show that it is possible to design industry-scale experiments for highly nonlinear problems within a computationally tractable time frame. Using an entropy-based method constructed on a Bayesian framework, I introduce an iteratively-constructive method that reduces the computational demand by introducing one new datum at a time for the design. The method reduces the multidimensional design space to a single-dimensional space at each iteration by fixing the experimental setup of the previous iteration. Both a synthetic experiment using a highly nonlinear parameter-data relationship and a seismic amplitude versus offset (AVO) experiment are used to illustrate that the results produced by the iteratively-constructive method closely match the results of a global design method at a fraction of the computational cost. This new method thus extends the class of iterative design methods to nonlinear problems, and makes fully nonlinear design methods applicable to higher-dimensional, industrial-scale problems. Using the new iteratively-constructive method, I show how optimal trace profiles for processing amplitude versus angle (AVA) surveys that account for all prior petrophysical information about the target reservoir can be generated using fully nonlinear methods. I examine how the optimal selections change as our prior knowledge of the rock parameters and reservoir fluid content changes, and assess which of the prior parameters has the largest effect on the selected traces. The results show that optimal profiles are far more sensitive to prior information about reservoir porosity than to information about saturating fluid properties. By applying ray tracing methods, the AVA results can be used to design optimal processing profiles from seismic datasets, for multiple targets each with different prior model uncertainties. Although the iteratively-constructive method can be used to design the data collection stage, it has been used here to select optimal data subsets post-survey. Using a nonlinear Bayesian SED method, I show how industrial-scale amplitude versus offset (AVO) data collection surveys can be constructed to maximise the information content contained in AVO crossplots, the principal source of petrophysical information from seismic surveys.
The results show that the optimal design is highly dependent on the model parameters when a low number of receivers is used, but that a single optimal design exists for the complete range of parameters once the number of receivers is increased above a threshold value. However, when acquisition and processing costs are considered, I find that, in the case of AVO experiments, a design with constant spatial receiver separation is close to optimal. This explains why regularly-spaced 2D seismic surveys have performed so well historically, not only from the point of view of noise attenuation and imaging, in which homogeneous data coverage confers distinct advantages, but also in providing data to constrain subsurface petrophysical information. Finally, I discuss the implications of the new methods developed and assess which areas of geophysics would benefit from applying SED methods during the design stage.
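As a hedged illustration of the iteratively-constructive idea (not the thesis's seismic implementation), the sketch below grows a design one datum at a time, scoring each candidate with a nested Monte Carlo estimate of expected information gain for a toy nonlinear model y = exp(theta * d) + noise; the model, prior, and noise level are all invented.

```python
# Greedy entropy-based design sketch: at each iteration the previous
# design is fixed and only the new datum's position is searched, so the
# multidimensional design problem becomes a 1-D search per iteration.
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.1
cands = np.linspace(0.0, 2.0, 21)            # candidate design points
thetas = rng.normal(0.5, 0.2, size=300)      # prior samples of theta

def loglik(y, theta, design):
    mu = np.exp(np.outer(theta, design))     # predictions, shape (n_theta, n_d)
    return -0.5 * np.sum((y - mu) ** 2, axis=1) / sigma**2

def eig(design):
    # nested MC: E[ log p(y|theta) - log E_theta' p(y|theta') ]
    total = 0.0
    for th in thetas[:100]:
        y = np.exp(th * design) + rng.normal(0, sigma, size=len(design))
        ll = loglik(y, thetas, design)
        log_evidence = np.log(np.mean(np.exp(ll - ll.max()))) + ll.max()
        total += loglik(y, np.array([th]), design)[0] - log_evidence
    return total / 100

design = []
for _ in range(3):                            # add three data, one at a time
    scores = [eig(np.array(design + [c])) for c in cands]
    design.append(cands[int(np.argmax(scores))])
    print("design so far:", design)
```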
5

Uncertainty Analysis in Upscaling Well Log Data by Markov Chain Monte Carlo Method

Hwang, Kyubum 16 January 2010 (has links)
Exploring economically valuable reservoirs has become more difficult because most reservoirs discovered since the beginning of subsurface seismic exploration have already been developed. In order to efficiently analyze heterogeneous fine-scale properties in subsurface layers, one ongoing challenge is accurately upscaling fine-scale (high-frequency) logging measurements to coarse-scale data, such as surface seismic images. In addition, numerically efficient modeling cannot use models defined on the scale of log data; we therefore need an upscaling method that replaces the small-scale data with simple large-scale models. However, numerous unavoidable uncertainties remain in the upscaling process, and these problems have been an important emphasis in geophysics for years. Such uncertainties, both predictable and unpredictable, arise from the choice of averaging method, the upscaling algorithm, the analysis of results, and so forth. To minimize them, a Bayesian framework is a useful tool, providing posterior information that yields a better estimate for a chosen model through its conditional probability. In addition, the likelihood in a Bayesian framework plays an important role in quantifying misfits between the measured data and the calculated parameters. Bayesian methodology can therefore provide a good solution for the quantification of uncertainties in upscaling. When analyzing the many uncertainties in porosities, wave velocities, densities, and thicknesses of rocks through upscaling well log data, the Markov chain Monte Carlo (MCMC) method is a potentially beneficial tool: it uses randomly generated parameters within a Bayesian framework to produce the posterior information. The method also provides reliable model parameters for estimating the economic value of hydrocarbon reservoirs, even though log data include numerous unknown factors due to geological heterogeneity. In this thesis, finely layered well log data from the North Sea, spanning depths from 1600 m to 1740 m, were selected for upscaling using an MCMC implementation. The results allow us to automatically identify important depths where interfaces should be located, along with quantitative estimates of uncertainty in the data. Specifically, interfaces in this example are required near depths of 1650 m, 1695 m, 1710 m, and 1725 m. The number and location of blocked layers can therefore be effectively quantified in spite of the uncertainties in upscaling log data.
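A minimal sketch of the MCMC idea, assuming a toy two-layer log with a single interface: a random-walk Metropolis sampler over the interface depth yields a posterior over where the blocked model should break. The thesis handles many interfaces and full physical properties; the synthetic log below is invented.

```python
# Metropolis sketch for locating one interface in a blocked ("upscaled")
# model of a noisy log; layer means are profiled out via their MLEs.
import numpy as np

rng = np.random.default_rng(2)
depth = np.linspace(1600.0, 1740.0, 280)          # measurement depths (m)
true_log = np.where(depth < 1695.0, 2.1, 2.5)     # true two-layer property
data = true_log + rng.normal(0, 0.05, depth.size) # noisy log samples

def log_post(z):
    if not (depth[0] < z < depth[-1]):
        return -np.inf                            # flat prior inside the well
    upper, lower = data[depth < z], data[depth >= z]
    resid = np.concatenate([upper - upper.mean(), lower - lower.mean()])
    return -0.5 * np.sum(resid**2) / 0.05**2

z, lp = 1650.0, log_post(1650.0)
samples = []
for _ in range(5000):
    z_new = z + rng.normal(0, 5.0)                # random-walk proposal
    lp_new = log_post(z_new)
    if np.log(rng.random()) < lp_new - lp:
        z, lp = z_new, lp_new
    samples.append(z)
post = np.array(samples[1000:])                   # discard burn-in
print("interface depth: %.1f m +/- %.1f m" % (post.mean(), post.std()))
```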
6

Bayesian Parameter Estimation for Hyperelastic Constitutive Models of Soft Tissue under Non-homogeneous Deformation

Kenja, Krishna January 2017 (has links)
No description available.
7

Machine learning, data mining, and the World Wide Web : design of special-purpose search engines

Kruger, Andries F 04 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2003. / ENGLISH ABSTRACT: We present DEADLINER, a special-purpose search engine that indexes conference and workshop announcements and extracts a range of academic information from the Web. SVMs provide an efficient and highly accurate mechanism for obtaining relevant web documents. DEADLINER currently extracts speakers, locations (e.g. countries), dates, paper submission (and other) deadlines, topics, program committees, abstracts, and affiliations. Complex and detailed searches are possible on these fields. The niche search engine was constructed by employing a methodology for the rapid implementation of specialised search engines: Bayesian integration of simple extractors, which avoids complex hand-tuned text extraction methods. The simple extractors exploit loose formatting and keyword conventions. The Bayesian framework further produces a search engine in which each user can control each field's false-alarm rate in an intuitive and rigorous fashion, thus providing easy-to-use metadata. / AFRIKAANSE OPSOMMING (translated): We introduce DEADLINER: a search engine that catalogues conference and workshop announcements and will ultimately monitor and extract a wide range of academic event material from the Web. DEADLINER currently recognises and extracts speakers, locations (e.g. country names), dates, deadlines (among others, for the submission of academic proceedings), topics, programme committees, overviews or abstracts, and affiliations. Thorough searches are possible across these fields. The niche search engine was built using a methodology for the rapid construction of specialised search engines. The methodology avoids complex hand-tuning of the text extractors by making use of Bayesian integration of simple extractors. The extractors exploit stylistic and conventional keywords. The Bayesian framework thereby creates a search engine that allows users to control each field's false-selection rate in an intuitive and rigorous manner.
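A toy version of the Bayesian-integration idea (not DEADLINER's actual code): simple regex extractors vote on a line, the votes are combined naive-Bayes style in log-odds, and the decision threshold controls the field's false-alarm rate. The extractor reliabilities and prior below are invented.

```python
# Naive-Bayes combination of weak extractors for one field ("deadline").
import math
import re

extractors = [
    # (pattern, P(fire | deadline line), P(fire | other line)) -- invented rates
    (re.compile(r"deadline", re.I), 0.9, 0.05),
    (re.compile(r"\b(submission|submit)\b", re.I), 0.7, 0.10),
    (re.compile(r"\b\d{1,2}\s+(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)", re.I), 0.8, 0.20),
]
prior = 0.05  # P(a line announces a deadline)

def posterior(line):
    log_odds = math.log(prior / (1 - prior))
    for pat, p_hit, p_fa in extractors:
        fired = bool(pat.search(line))
        p1 = p_hit if fired else 1 - p_hit   # likelihood under "deadline"
        p0 = p_fa if fired else 1 - p_fa     # likelihood under "other"
        log_odds += math.log(p1 / p0)
    return 1 / (1 + math.exp(-log_odds))

line = "Paper submission deadline: 15 Mar 2003"
p = posterior(line)
threshold = 0.8  # raising this lowers the field's false-alarm rate
print("P(deadline) = %.3f -> %s" % (p, "extract" if p > threshold else "skip"))
```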
8

Bayesian methods for joint modelling of survival and longitudinal data: applications and computing

Sabelnykova, Veronica 20 December 2012 (has links)
Multi-state models are useful for modelling the progression of a disease, where states represent the health status of a subject under study. In practice, patients may be observed only when necessity strikes, implying that the disease and observation processes are not independent. Often, clinical visits are postponed or advanced by the clinician or by patients themselves based on the health status of the patient. In such situations, the frequency and timing of the examinations can depend on the latent transition times, and the observation process may be informative. We consider the case where the exact times of transitions between health states are not observed, so that the disease process is interval-censored. We model the disease and observation processes jointly to ensure valid inference. The transition intensities are modelled assuming proportional hazards, and we link the two processes via random effects. Using a Bayesian framework, we apply our joint model to the analysis of a large study examining the cancer trajectories of palliative care patients. / Graduate
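A small simulation sketch, under invented rates, of the phenomenon the joint model addresses: a shared random effect drives both the latent disease transition and the visit frequency, so the observation times themselves carry information about the interval-censored disease process.

```python
# Simulate one patient whose disease onset and clinic-visit rate share a
# random effect, producing an informative observation scheme.
import numpy as np

rng = np.random.default_rng(3)

def simulate_patient(horizon=10.0):
    u = rng.normal(0, 0.5)                 # shared random effect
    lam01 = 0.3 * np.exp(u)                # healthy -> ill transition intensity
    t_ill = rng.exponential(1 / lam01)     # latent (unobserved) transition time
    visits, t = [], 0.0
    while t < horizon:
        # ill patients (and high-u patients) visit more often
        rate = (1.0 if t < t_ill else 3.0) * np.exp(u)
        t += rng.exponential(1 / rate)
        if t < horizon:
            visits.append((round(t, 2), int(t >= t_ill)))  # (time, observed state)
    return t_ill, visits

t_ill, visits = simulate_patient()
print("latent transition at %.2f; (visit time, state):" % t_ill, visits)
```

The transition is only known to lie between the last healthy visit and the first ill visit, which is exactly the interval censoring the joint model must handle.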
9

Capturing random utility maximization behavior in continuous choice data : application to work tour scheduling

Lemp, Jason David 06 November 2012 (has links)
Recent advances in travel demand modeling have concentrated on adding behavioral realism by focusing on an individual’s activity participation. And, to account for trip-chaining, tour-based methods are largely replacing trip-based methods. Alongside these advances and innovations in dynamic traffic assignment (DTA) techniques, however, time-of-day (TOD) modeling remains an Achilles’ heel. As congestion worsens and operators turn to variable road pricing, sensors are added to networks, cell phones are GPS-enabled, and DTA techniques become practical, accurate time-of-day forecasts become critical. In addition, most models highlight tradeoffs between travel time and cost, while neglecting variations in travel time. Research into stated and revealed choices suggests that travel time variability can be highly consequential. This dissertation introduces a method for imputing travel time variability information as a continuous function of time-of-day, while utilizing an existing method for imputing average travel times (by TOD). The methods employ ordinary least squares (OLS) regression techniques, and rely on reported travel time information from survey data (typically available to researchers), as well as travel time and distance estimates by origin-destination (OD) pair for free-flow and peak-period conditions from network data. This dissertation also develops two models of activity timing that recognize the imputed average travel times and travel time variability. Both models are based in random utility theory and both recognize potential correlations across time-of-day alternatives. In addition, both models are estimated in a Bayesian framework using Gibbs sampling and Metropolis-Hastings (MH) algorithms, and model estimation relies on San Francisco Bay Area data collected in 2000. The first model is the continuous cross-nested logit (CCNL) and represents tour outbound departure time choice in a continuous context (rather than discretizing time) over an entire day. The model is formulated as a generalization of the discrete cross-nested logit (CNL) for continuous choice and represents the first random utility maximization model to incorporate the ability to capture correlations across alternatives in a continuous choice context. The model is then compared to the continuous logit, which represents a generalization of the multinomial logit (MNL) for continuous choice. Empirical results suggest that the CCNL out-performs the continuous logit in terms of predictive accuracy and reasonableness of predictions for three tolling policy simulations. Moreover, while this dissertation focuses on time-of-day modeling, the CCNL could be used in a number of other continuous choice contexts (e.g., location/destination, vehicle usage, trip durations, and profit-maximizing production). The second model is a bivariate multinomial probit (BVMNP) model. While the model relies on discretization of time (into 30-minute intervals), it captures both key dimensions of a tour’s timing (rather than just one, as in this dissertation’s application of the CCNL model), which is important for tour- and activity-based models of travel demand. The BVMNP’s ability to capture correlations across scheduling alternatives is something no existing two-dimensional choice models of tour timing can claim. Both models represent substantial contributions for continuous choice modeling in transportation, business, biology, and various other fields. 
In addition, the empirical results of the models evaluated here enhance our understanding of individuals’ time-of-day decisions. For instance, average travel time and its variance are estimated to have a negative effect on workers’ utilities, as expected, but are not found to be that practically relevant here, probably because most workers are rather constrained in their activity scheduling and/or work hours. However, correlations are found to be rather strong in both models, particularly for home-to-work journeys, suggesting that if models fail to accommodate such correlations, biased application results may emerge. / text
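As a hedged baseline for the models described above, the sketch below evaluates a plain continuous logit over departure time, where the choice density is exp(V(t)) normalised over the day; the utility specification is invented, and the dissertation's CCNL additionally captures correlation across nearby times, which this baseline omits.

```python
# Continuous logit departure-time density: P(t) = exp(V(t)) / integral exp(V).
import numpy as np

t = np.linspace(0.0, 24.0, 2881)            # time of day on a 30-second grid (h)
travel_time = 0.5 + 0.4 * np.exp(-0.5 * ((t - 8.0) / 1.5) ** 2)  # AM peak (h)
schedule_penalty = 0.6 * np.abs(t - 8.5)    # penalty around an 8:30 work start

V = -2.0 * travel_time - schedule_penalty   # systematic utility of departing at t
density = np.exp(V) / np.trapz(np.exp(V), t)   # normalise over the whole day

peak = t[np.argmax(density)]
mask = (t >= 7) & (t <= 9)
print("most likely departure: %.1fh; P(depart 7-9am) = %.2f"
      % (peak, np.trapz(density[mask], t[mask])))
```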
10

Capturing patterns of spatial and temporal autocorrelation in ordered response data : a case study of land use and air quality changes in Austin, Texas

Wang, Xiaokun, 1979- 05 May 2015 (has links)
Many databases involve ordered discrete responses in a temporal and spatial context, including, for example, land development intensity levels, vehicle ownership, and pavement conditions. An appreciation of such behaviors requires rigorous statistical methods that recognize spatial effects and dynamic processes. This dissertation develops a dynamic spatial ordered probit (DSOP) model in order to capture patterns of spatial and temporal autocorrelation in ordered categorical response data. This model is estimated in a Bayesian framework using Gibbs sampling and data augmentation, in order to generate all autocorrelated latent variables. The specifications, methodologies, and applications undertaken here advance the field of spatial econometrics while enhancing our understanding of land use and air quality changes. The proposed DSOP model incorporates spatial effects in an ordered probit model by allowing for inter-regional spatial interactions and heteroskedasticity, along with random effects across regions (where "region" describes any cluster of observational units). The model assumes an autoregressive, AR(1), process across latent response values, thereby recognizing time-series dynamics in panel data sets. The model code and estimation approach are first tested on simulated data sets in order to reproduce known parameter values and provide insights into estimation performance. Root mean squared error (RMSE) is used to evaluate the accuracy of estimates, and the deviance information criterion (DIC) is used for model comparisons. The DSOP model is found to yield much more accurate estimates than standard, non-spatial techniques. As for model selection, even considering the penalty for using more parameters, the DSOP model is clearly preferred to the standard OP, dynamic OP, and spatial OP models. The model and methods are then used to analyze both land use and air quality (ozone) dynamics in Austin, Texas. In analyzing Austin's land use intensity patterns over a 4-point panel, the observational units are 300 m × 300 m grid cells derived from satellite images (at 30 m resolution). The sample contains 2,771 such grid cells, spread among 57 clusters (zip code regions), covering about 10% of the overall study area. In this analysis, temporal and spatial autocorrelation effects are found to be significantly positive. In addition, increases in travel times to the region's central business district (CBD) are estimated to substantially reduce land development intensity. The observational units for the ozone variation analysis are 4 km × 4 km grid cells, and all 132 observations falling in the study area are used. While variations in ozone concentration levels are found to exhibit strong patterns of temporal autocorrelation, they appear strikingly random in a spatial context (after controlling for local land cover, transportation, and temperature conditions). While transportation and land cover conditions appear to influence ozone levels, their effects are neither as instantaneous nor as practically significant as the impact of temperature. The proposed and tested DSOP model is felt to be a significant contribution to the field of spatial econometrics, where binary applications (for discrete response data) have been seen as the cutting edge. The Bayesian framework and Gibbs sampling techniques used here permit such complexity in a world of two-dimensional autocorrelation. / text
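A minimal sketch of the estimation machinery named in the abstract, stripped of the spatial and dynamic structure: Gibbs sampling with data augmentation for a plain ordered probit, where latent responses are drawn truncated-normal given the observed category and the coefficient is drawn from its conjugate normal (fixed cutpoints, flat prior, all data simulated).

```python
# Gibbs-with-data-augmentation sketch for a plain ordered probit
# (the DSOP model adds spatially and temporally autocorrelated latents).
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)
n, beta_true = 500, 1.2
cut = np.array([-np.inf, 0.0, 1.0, np.inf])   # fixed cutpoints, 3 categories
x = rng.normal(size=n)
y = np.digitize(x * beta_true + rng.normal(size=n), cut[1:-1])  # categories 0,1,2

beta, draws = 0.0, []
for it in range(2000):
    # 1) data augmentation: z_i | y_i, beta ~ N(x_i * beta, 1) truncated
    #    to the interval implied by the observed category
    lo, hi = cut[y] - x * beta, cut[y + 1] - x * beta
    z = x * beta + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # 2) conjugate update: beta | z ~ N(m, v) under a flat prior
    v = 1.0 / (x @ x)
    m = v * (x @ z)
    beta = rng.normal(m, np.sqrt(v))
    draws.append(beta)
print("posterior mean beta: %.2f (true %.2f)" % (np.mean(draws[500:]), beta_true))
```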
