  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A spatial sampling scheme for a road network

Reynolds, Hayley January 2017 (has links)
Rabies has been reported in Tanzania, mainly in the southern highland regions, since 1954. To date, rabies is endemic in all districts of Tanzania, and efforts are being made to contain the disease. Mass vaccination of at least 70% of an animal population has been found to be the most effective and economical way to reduce rabies transmission. The current approach to vaccination in Tanzanian villages takes some features from the EPI method but is rather basic and unreliable. This mini-dissertation proposes a sampling technique that incorporates the spatial component of the village data and minimises the walking distance between the sampled houses while ensuring 70% coverage of the animal population. / Mini Dissertation (MSc)--University of Pretoria, 2017. / STATOMET; The Centre for Artificial Intelligence Research (CAIR); National Research Foundation of South Africa (NRF CSUR grant number 90315) / Statistics / MSc / Unrestricted
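The nearest-household walk this abstract refers to can be sketched in a few lines. This is a minimal illustration of the greedy routing idea, not the dissertation's actual algorithm; the village coordinates, starting household, and quota below are hypothetical.

```python
import math

def nearest_neighbour_route(households, start_index, quota):
    """Greedy nearest-neighbour walk: from a starting household, repeatedly
    visit the closest unvisited household until the interview quota is met.
    `households` is a list of (x, y) coordinates."""
    route = [start_index]
    visited = {start_index}
    while len(route) < quota:
        cx, cy = households[route[-1]]
        # Pick the closest household not yet visited.
        nxt = min(
            (i for i in range(len(households)) if i not in visited),
            key=lambda i: math.dist((cx, cy), households[i]),
        )
        route.append(nxt)
        visited.add(nxt)
    return route

# Toy village: four households on a line; walk from index 0 with quota 3.
village = [(0.0, 0.0), (1.0, 0.0), (5.0, 0.0), (2.0, 0.0)]
print(nearest_neighbour_route(village, 0, 3))  # [0, 1, 3]
```

A spatial sampling scheme like the one proposed would additionally constrain which households are eligible, so that coverage targets are met rather than simply minimising distance.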
2

The Gander search engine for personalized networked spaces

Michel, Jonas Reinhardt 05 March 2013 (has links)
The vision of pervasive computing is one of a personalized space populated with vast amounts of data that can be exploited by humans. Such Personalized Networked Spaces (PNetS) and the requisite support for general-purpose, expressive spatiotemporal search of the “here” and “now” have eluded realization, due primarily to the complexities of indexing, storing, and retrieving relevant information within a vast collection of highly ephemeral data. This thesis presents the Gander search engine, founded on a novel conceptual model of search in PNetS and targeted at environments characterized by large volumes of highly transient data. We give an overview of this model and provide a realization of it via the architecture and implementation of the Gander search engine. Gander connects formal notions of sampling a search space to expressive, spatiotemporally aware protocols that perform distributed query processing in situ. This thesis evaluates Gander through a user study that examines the perceived usability and utility of our mobile application, and benchmarks the performance of Gander in large PNetS through network simulation.
3

An Analysis of Equally Weighted and Inverse Probability Weighted Observations in the Expanded Program on Immunization (EPI) Sampling Method

Reyes, Maria 11 1900 (has links)
Performing health surveys in developing countries and humanitarian emergencies can be challenging because resources in these settings are often quite limited and information needs to be gathered quickly. The Expanded Program on Immunization (EPI) sampling method provides one way of selecting subjects for a survey. It involves having field workers proceed on a random walk, guided by a path of nearest household neighbours, until they have met their quota of interviews. Due to its simplicity, the EPI sampling method has been used in many surveys. However, concerns have been raised over the quality of estimates resulting from such samples because of possible selection bias inherent in the sampling procedure. We present an algorithm for obtaining the probability of selecting a household from a cluster under several variations of the EPI sampling plan. These probabilities are used to assess the sampling plans and compute estimator properties. In addition to the typical estimator for a proportion, we also investigate the Horvitz-Thompson (HT) estimator, which assigns weights to individual responses. We conduct our study on computer-generated populations having different settlement types, different prevalence rates for the characteristic of interest and different spatial distributions of that characteristic. Our results indicate that within a cluster, selection probabilities can vary greatly from household to household. The largest probability was over 10 times greater than the smallest in 78% of the scenarios tested. Despite this, the properties of the estimator with equally weighted observations (EQW) were similar to what would be expected from simple random sampling (SRS), provided that cases of the characteristic of interest were evenly distributed throughout the cluster area. When this was not true, we found absolute biases as large as 0.20.
While the HT estimator was always unbiased, the trade-off was a substantial increase in the variability of the estimator, with the design effect relative to SRS reaching a high of 92. Overall, the HT estimator did not perform better than the EQW estimator under EPI sampling, and it involves calculations that may be difficult to carry out in actual surveys. Although we recommend continuing to use the EQW estimator, caution should be taken when cases of the characteristic of interest are potentially concentrated in certain regions of the cluster. In these situations, alternative sampling methods should be sought. / Thesis / Master of Science (MSc)
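The contrast between the two estimators in this abstract can be made concrete with a small sketch. The responses, inclusion probabilities, and population size below are made-up numbers, chosen only to show how inverse-probability weighting changes the estimate when selection probabilities are unequal.

```python
def eqw_proportion(responses):
    """Equally weighted (EQW) estimate: the plain sample proportion."""
    return sum(responses) / len(responses)

def ht_proportion(responses, inclusion_probs, population_size):
    """Horvitz-Thompson estimate of a proportion: each response is weighted
    by the inverse of its inclusion probability, then scaled by the
    population size."""
    total = sum(y / p for y, p in zip(responses, inclusion_probs))
    return total / population_size

# Hypothetical cluster of 10 households; three sampled with unequal
# inclusion probabilities, as an EPI-style random walk can induce.
responses = [1, 0, 1]        # does the household have the characteristic?
probs = [0.6, 0.3, 0.15]     # unequal inclusion probabilities
print(eqw_proportion(responses))            # ≈ 0.667
print(ht_proportion(responses, probs, 10))  # (1/0.6 + 1/0.15)/10 ≈ 0.833
```

Here the positive response from the hard-to-reach household (probability 0.15) is up-weighted, which removes the selection bias at the cost of higher variance — the trade-off the abstract describes.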
4

Extension of Particle Image Velocimetry to Full-Scale Turbofan Engine Bypass Duct Flows

George, William Mallory 10 July 2017 (has links)
Fan system efficiency for modern aircraft engine designs is increasing to the point that bypass duct geometry is becoming a significant contributor to losses and could ultimately become a limiting factor. To investigate this, a number of methods are available to provide qualitative and quantitative analysis of the flow around the loss mechanisms present in the duct. Particle image velocimetry (PIV) is a strong candidate among experimental techniques to address this challenge. Its use has been documented in many other locations within the engine, and it can provide high-spatial-resolution data over large fields of view. In this work it is shown that these characteristics allow the PIV user to reduce the spatial sampling error associated with sparsely spaced point measurements in a large measurement region with high-order gradients and small-scale flow phenomena. A synthetic flow featuring such attributes was generated by computational fluid dynamics (CFD) and was sampled by a virtual PIV system and a virtual generic point measurement system. The PIV sampling technique estimated the average integrated velocity field about five times more accurately than the point measurement sampling due to the large errors that existed between the point measurement locations. Despite its advantages, implementation of PIV can be a significant challenge, especially for internal measurements where optical access is limited. To reduce the time and cost associated with iterating through experiment designs, a software package was developed which incorporates basic optics principles and fundamental PIV relationships, and calculates experimental output parameters of interest such as camera field of view and the amount of scattered light reaching the camera sensor. The program can be used to judge the likelihood of success of a proposed PIV experiment design by comparing the output parameters with those calculated from benchmark experiments.
The primary experiment in this work focused on the wake structure of the aft support strut in the bypass duct of the Pratt & Whitney Canada JT15D-1 and comprised three parts: a simulated engine environment was created to provide a proof of concept of the PIV experiment design; the PIV experiment was repeated in the full-scale engine at four fan speeds ranging from engine idle up to 80% of the maximum corrected fan speed; and, finally, a CFD simulation was performed with simplifying assumptions to provide insight into the formation of the wake structures observed in the PIV data. Both computational and experimental results illustrate a non-uniform wake structure downstream of the support strut and support the hypothesis that the junction of the strut and the engine core wall creates a wake structure separate from that created by the strut main body. The PIV data also show that the wake structure moves in the circumferential direction at higher fan speeds, possibly due to bulk swirl present in the engine or a pressure differential created by the support strut. The experiment highlights the advantages of using PIV, but also illustrates a number of implementation challenges, most notably those associated with consistently providing a sufficient number of seeding particles in the measurement region. The experiment is also, to the author's knowledge, the first to document the use of PIV in a full-scale turbofan engine bypass duct. / Master of Science
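The spatial sampling error that this abstract attributes to sparse point measurements can be illustrated with a toy one-dimensional analogue. The synthetic "flow" below is invented for illustration (the actual study used a CFD-generated flow field): a mean velocity plus a small-scale oscillation that a handful of point probes aliases, while a dense PIV-like grid resolves it.

```python
import math

def mean_velocity(sample_xs, field):
    """Average of field values at the sampled locations — a simple
    estimate of the spatially integrated mean velocity."""
    return sum(field(x) for x in sample_xs) / len(sample_xs)

# Synthetic 1-D "flow" with small-scale structure a sparse probe misses.
field = lambda x: 10.0 + math.sin(40 * x)
# Exact spatial mean of the field over [0, 1].
true_mean = 10.0 + (1 - math.cos(40)) / 40

dense = [i / 1000 for i in range(1000)]  # PIV-like dense sampling grid
sparse = [i / 4 for i in range(4)]       # four point-probe locations

err_dense = abs(mean_velocity(dense, field) - true_mean)
err_sparse = abs(mean_velocity(sparse, field) - true_mean)
print(err_dense < err_sparse)  # True: dense sampling tracks the mean better
```

The sparse estimate is dominated by where the probes happen to land relative to the small-scale structure, which is the same mechanism behind the roughly fivefold accuracy advantage reported for the virtual PIV system.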
5

Spatial scale, plant identity and management effects on the diversity-productivity relationship in a semi-natural grassland

From, Tatiana 16 May 2013 (has links)
No description available.
6

Fluorescence Molecular Tomography: A New Volume Reconstruction Method

Shamp, Stephen Joseph 06 July 2010 (has links)
Medical imaging is critical for the detection and diagnosis of disease, guided biopsies, assessment of therapies, and administration of treatment. While computerized tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasound (US) are the more familiar modalities, interest in other modalities continues to grow. Among the motivations are reduction of cost, avoidance of ionizing radiation, and the search for new information, including biochemical and molecular processes. Fluorescence Molecular Tomography (FMT) is one such emerging technique and, like other techniques, has its advantages and limitations. FMT can reconstruct the distribution of fluorescent molecules in vivo using near-infrared or visible-band light to illuminate the subject. FMT is very safe, since non-ionizing radiation is used, and inexpensive, due to the comparatively low cost of the imaging system. This should make it particularly well suited to small-animal studies for research. A broad range of cell activity can be identified by FMT, making it a potentially valuable tool for cancer screening, drug discovery and gene therapy. Since FMT imaging is scattering dominated, reconstruction of volume images is significantly more computationally intensive than for CT. For instance, to reconstruct a 32x32x32 image, a flattened matrix with approximately 10¹⁰ (10 billion) elements must be dealt with in the inverse problem, requiring more than 100 GB of memory. To reduce the error introduced by noisy measurements, significantly more measurements are needed, leading to a proportionally larger matrix. The computational complexity of reconstructing FMT images, along with inaccuracies in photon propagation models, has heretofore limited the resolution and accuracy of FMT. To surmount these problems, we decompose the forward problem into a Khatri-Rao product.
Inversion of this model is shown to lead to a novel reconstruction method that significantly reduces the computational complexity and memory requirements for overdetermined datasets. Compared to the well-known SVD approach, this new reconstruction method decreases computation time by a factor of up to 25, while simultaneously reducing the memory requirement by up to three orders of magnitude. Using this method, we have reconstructed images up to 32x32x32. Also outlined is a two-step approach which would enable imaging larger volumes; however, it remains a topic for future research. In achieving the above, the author studied the physics of FMT, developed an extensive set of original computer programs, performed COMSOL simulations of photon diffusion, and, unavoidably, developed visual displays.
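The Khatri-Rao (column-wise Kronecker) structure mentioned in this abstract is what makes the memory savings possible. A minimal sketch, with tiny hypothetical matrices standing in for the forward-model factors: the Gram matrix of a Khatri-Rao product equals the Hadamard (elementwise) product of the factors' Gram matrices, so the normal equations for an overdetermined least-squares problem can be formed without ever materializing the large product matrix.

```python
import numpy as np

def khatri_rao(a, b):
    """Column-wise Kronecker (Khatri-Rao) product: column j of the result
    is kron(a[:, j], b[:, j]); row counts multiply, column count is kept."""
    assert a.shape[1] == b.shape[1]
    return np.column_stack(
        [np.kron(a[:, j], b[:, j]) for j in range(a.shape[1])]
    )

# Toy factors standing in for the decomposed forward model.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[0.0, 1.0], [1.0, 0.0]])
kr = khatri_rao(a, b)  # shape (4, 2)

# Key identity: the Gram matrix of the Khatri-Rao product is the Hadamard
# product of the factors' Gram matrices, so it can be computed from the
# small factors alone — this is the source of the memory reduction.
gram_direct = kr.T @ kr
gram_factored = (a.T @ a) * (b.T @ b)
print(np.allclose(gram_direct, gram_factored))  # True
```

For an m×n and a p×n factor, the full product has mp·n entries while the factored Gram computation touches only (m+p)·n, which is where the orders-of-magnitude savings for large overdetermined systems come from.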
7

Spatial sampling and prediction

Schelin, Lina January 2012 (has links)
This thesis discusses two aspects of spatial statistics: sampling and prediction. In spatial statistics, we observe some phenomenon in space. Space is typically of two or three dimensions, but can be of higher dimension. Questions that come to mind include: What is the total amount of gold in a gold mine? How much precipitation can we expect at a specific unobserved location? What is the total tree volume in a forest area? In spatial sampling the aim is to estimate global quantities, such as population totals, based on samples of locations (papers III and IV). In spatial prediction the aim is to estimate local quantities, such as the value at a single unobserved location, with a measure of uncertainty (papers I, II and V). In papers III and IV, we propose sampling designs for selecting representative probability samples in the presence of auxiliary variables. If the phenomena under study have clear trends in the auxiliary space, estimation of population quantities can be improved by using representative samples. Such samples also enable estimation of population quantities in subspaces and are especially needed for multi-purpose surveys, when several target variables are of interest. In papers I and II, the objective is to construct valid prediction intervals for the value at a new location, given observed data. Prediction intervals typically rely on the kriging predictor having a Gaussian distribution. In paper I, we show that the distribution of the kriging predictor can be far from Gaussian, even asymptotically. This motivated us to propose a semiparametric method that does not require distributional assumptions; prediction intervals are constructed from the plug-in ordinary kriging predictor. In paper V, we consider prediction in the presence of left-censoring, where observations falling below a minimum detection limit are not fully recorded. We review existing methods and propose a semi-naive method.
The semi-naive method is compared to one model-based method and two naive methods, all based on variants of the kriging predictor.
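The plug-in ordinary kriging predictor that recurs throughout this abstract can be sketched directly from its defining linear system. This is a minimal illustration under an assumed, hypothetical exponential covariance function with no nugget; a real application would estimate the covariance from data, which is exactly what the "plug-in" qualifier refers to.

```python
import numpy as np

def ordinary_kriging(coords, values, target, cov):
    """Ordinary kriging predictor at `target`, given observation
    coordinates, observed values, and a covariance function `cov(h)`
    of separation distance. Solves [C 1; 1' 0][w; mu] = [c0; 1]."""
    n = len(values)
    K = np.ones((n + 1, n + 1))
    K[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            K[i, j] = cov(np.linalg.norm(coords[i] - coords[j]))
    rhs = np.ones(n + 1)
    rhs[:n] = [cov(np.linalg.norm(c - target)) for c in coords]
    sol = np.linalg.solve(K, rhs)
    w = sol[:n]  # kriging weights; they sum to one by construction
    return float(w @ values)

# Hypothetical exponential covariance and three observed locations.
cov = lambda h: np.exp(-h)
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
values = np.array([1.0, 2.0, 3.0])
pred = ordinary_kriging(coords, values, np.array([0.5, 0.5]), cov)
print(round(pred, 3))
```

With no nugget effect the predictor interpolates exactly at observed locations; prediction intervals would then be built around `pred`, which is where the thesis departs from the usual Gaussian assumption.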
8

Using Geospatial Techniques to Assess Responses of Black Bear Populations to Anthropogenically Modified Landscapes: Conflict & Recolonization

McFadden, Jamie Elizabeth 14 December 2018 (has links)
The convergence of three young scientific disciplines (ecology, geospatial sciences, and remote sensing) has generated unique advancements in wildlife research by connecting ecological data with remote sensing data through the application of geospatial techniques. Ecological datasets may contain spatial and sampling biases; applied with geospatial techniques, however, such datasets can reveal landscape-scale (e.g., statewide) trends for wildlife populations, such as population recovery and human-wildlife interactions. Specifically, black bear populations across North America vary greatly in the stability of their distributions: the black bear population in Michigan may be considered stable or secure, whereas the population in Missouri is currently recolonizing. The focus of the research in this dissertation is to examine the ecological and anthropogenic impacts 1) on human-black bear interactions in Michigan (see Chapter 2) and 2) on black bear presence in Missouri (see Chapter 3), using black bear reports provided by the public to the state wildlife agencies. Using generalized linear modeling (GLM) and maximum entropy (MaxEnt), I developed spatial distribution models of the probability of occurrence/presence for the two study areas (Michigan and Missouri). For the Missouri study, I quantified the spatiotemporal shifts in the probability of bear presence statewide. The results from my statewide studies corroborate previous local-scale research based on rigorous data collection. Overall, human-black bear interactions (e.g., wildlife sightings, conflicts), while very dynamic, appear greatest in forested and rural areas where the preferred habitat for black bears (i.e., forest) intersects with low-density anthropogenic activities. As both human and black bear populations continue to expand, it is reasonable to expect human-black bear interactions to increase spatiotemporally across both study areas.
The results from my studies provide wildlife managers with information critical to management decisions such as harvest regulations and habitat conservation actions across the landscape and through time. The ability to detect and monitor ecological changes through the use of geospatial techniques can lead to insights about the stressors and drivers of population-level change, further facilitating the development of proactive, cause-focused management strategies.
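The GLM side of the modeling described in this abstract amounts to a logistic link between landscape covariates and the probability of bear presence. A minimal sketch follows; the two covariates and all coefficient values are invented for illustration and are not fitted values from the dissertation.

```python
import math

def occurrence_probability(forest_cover, human_density, beta):
    """Logistic (GLM) probability of presence from two hypothetical,
    0-1 scaled landscape covariates: forest cover and human density."""
    b0, b1, b2 = beta
    eta = b0 + b1 * forest_cover + b2 * human_density
    return 1 / (1 + math.exp(-eta))

# Illustrative coefficients: forest raises presence, human density lowers it.
beta = (-2.0, 3.0, -1.5)
rural_forest = occurrence_probability(0.9, 0.1, beta)  # forested, rural cell
urban_open = occurrence_probability(0.2, 0.9, beta)    # open, high-density cell
print(rural_forest > urban_open)  # True
```

Evaluating such a fitted model over every grid cell of a state yields the statewide probability-of-presence surfaces the dissertation compares across time.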
9

Statistical Improvements for Ecological Learning about Spatial Processes

Dupont, Gaetan L 20 October 2021 (has links) (PDF)
Ecological inquiry is rooted fundamentally in understanding population abundance, both to develop theory and improve conservation outcomes. Despite this importance, estimating abundance is difficult due to the imperfect detection of individuals in a sample population. Further, accounting for space can provide more biologically realistic inference, shifting the focus from abundance to density and encouraging the exploration of spatial processes. To address these challenges, Spatial Capture-Recapture (“SCR”) has emerged as the most prominent method for estimating density reliably. The SCR model is conceptually straightforward: it combines a spatial model of detection with a point process model of the spatial distribution of individuals, using data collected on individuals within a spatially referenced sampling design. These data are often coarse in spatial and temporal resolution, though, motivating research into improving the quality of the data available for analysis. Here I explore two related approaches to improve inference from SCR: sampling design and data integration. Chapter 1 describes the context of this thesis in more detail. Chapter 2 presents a framework to improve sampling design for SCR through the development of an algorithmic optimization approach. Compared to pre-existing recommendations, these optimized designs perform just as well but with far more flexibility to account for available resources and challenging sampling scenarios. Chapter 3 presents one of the first methods of integrating an explicit movement model into the SCR model using telemetry data, which provides information at a much finer spatial scale. The integrated model shows significant improvements over the standard model to achieve a specific inferential objective, in this case: the estimation of landscape connectivity. In Chapter 4, I close by providing two broader conclusions about developing statistical methods for ecological inference. 
First, simulation-based evaluation is integral to this process, but the circularity of its use is, unfortunately, easy to understate. Second, and often underappreciated: statistical solutions should be as intuitive as possible to facilitate their adoption by a diverse pool of potential users. These novel approaches to sampling design and data integration represent essential steps in advancing SCR and offer intuitive opportunities to advance ecological learning about spatial processes.
