51

Dark Spot Detection from SAR Intensity Imagery with Spatial Density Thresholding for Oil Spill Monitoring

Shu, Yuanming 28 January 2010 (has links)
Since the 1980s, satellite-borne synthetic aperture radar (SAR) has been investigated for early warning and monitoring of marine oil spills to permit effective satellite surveillance of the marine environment. Automated detection of oil spills from satellite SAR intensity imagery consists of three steps: 1) detection of dark spots; 2) extraction of features from the detected dark spots; and 3) classification of the dark spots into oil spills and look-alikes. Marine oil spill detection remains a difficult and challenging task, and open questions exist in each of the three stages. This thesis focuses on the first stage, dark spot detection. An efficient and effective dark spot detection method is critical and fundamental for developing an automated oil spill detection system. A novel method for this task is presented. The key to the method is the use of a spatial density feature to enhance the separability of dark spots and the background. After an adaptive intensity thresholding, a spatial density thresholding is further used to differentiate dark spots from the background. The proposed method was applied to an evaluation dataset of 60 RADARSAT-1 ScanSAR Narrow Beam intensity images containing oil spill anomalies. The experimental results obtained from the test dataset demonstrate that the proposed method for dark spot detection is fast, robust and effective. Recommendations are given for future research to take this procedure beyond the prototype stage toward practical application.
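The abstract names the two thresholding stages but not their details. A minimal sketch of one plausible reading is shown below in Python: a moving-window intensity threshold followed by a density filter over the candidate pixels. All window sizes and cutoffs (`win`, `k`, `density_win`, `density_thresh`) are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def detect_dark_spots(img, win=51, k=1.5, density_win=21, density_thresh=0.4):
    """Two-stage dark spot detection sketch: adaptive intensity thresholding
    followed by spatial density thresholding. Parameter names and defaults
    are illustrative, not taken from the thesis."""
    img = img.astype(np.float64)
    # Local mean and standard deviation via moving-average filters.
    local_mean = uniform_filter(img, size=win)
    local_sq = uniform_filter(img * img, size=win)
    local_std = np.sqrt(np.maximum(local_sq - local_mean**2, 0.0))
    # Stage 1: flag pixels substantially darker than their neighbourhood.
    candidates = img < local_mean - k * local_std
    # Stage 2: keep candidates only where the local fraction of candidate
    # pixels is high, suppressing isolated speckle-induced detections.
    density = uniform_filter(candidates.astype(np.float64), size=density_win)
    return candidates & (density > density_thresh)
```

The density stage is what separates coherent slick-like regions from scattered speckle noise that also passes the intensity threshold.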
52

Interval Censoring and Longitudinal Survey Data

Pantoja Galicia, Norberto January 2007 (has links)
Being able to explore a relationship between two life events is of great interest to scientists from different disciplines. Some issues of particular concern are, for example, the connection between smoking cessation and pregnancy (Thompson and Pantoja-Galicia 2003), the interrelation between entry into marriage for individuals in a consensual union and first pregnancy (Blossfeld and Mills 2003), and the association between job loss and divorce (Charles and Stephens 2004, Huang 2003, and Yeung and Hofferth 1998). Establishing causation in observational studies is seldom possible. Nevertheless, if one of two events tends to precede the other closely in time, a causal interpretation of an association between these events can be more plausible. The role of longitudinal surveys is then crucial, since they allow sequences of events for individuals to be observed. Thompson and Pantoja-Galicia (2003) discuss in this context several notions of temporal association and ordering, and propose an approach to investigate a possible relationship between two lifetime events. In longitudinal surveys, individuals might be asked questions about two specific lifetime events, so the joint distribution of the two event times is of direct interest. In follow-up studies, however, interval-censored data may arise for several reasons; for example, actual dates of events might not have been recorded, or may be missing, for a subset of (or all) the sampled population, and can be established only to within specified intervals. Along with the notions of temporal association and ordering, Thompson and Pantoja-Galicia (2003) also discuss the concept of one type of event "triggering" another, and outline the construction of tests for these temporal relationships. The aim of this thesis is to implement some of these notions using interval-censored data from longitudinal complex surveys, and we present tools proposed for this purpose. The dissertation is divided into five chapters. The first chapter presents a notion of a temporal relationship along with a formal nonparametric test; the mechanisms of right censoring, interval censoring and left truncation are also reviewed, and issues in complex survey designs are discussed at the end of the chapter. Because the formal nonparametric test requires estimation of a joint density, the second chapter provides a nonparametric approach for bivariate density estimation with interval-censored survey data. The third chapter is devoted to modelling shorter-term triggering using complex survey bivariate data; the semiparametric models in Chapter 3 consider both noncensoring and interval-censoring situations. The fourth chapter presents applications using data from the National Population Health Survey and the Survey of Labour and Income Dynamics from Statistics Canada. The fifth chapter gives an overall discussion and addresses topics for future research.
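The estimation machinery underlying such analyses can be illustrated with the standard self-consistency (EM) algorithm for interval-censored data, shown below in a simplified univariate, grid-based form. The thesis's bivariate, survey-weighted estimator is more involved; this is only a sketch of the underlying idea, and the fixed grid stands in for the exact Turnbull-interval construction.

```python
import numpy as np

def interval_censored_npmle(left, right, grid, n_iter=500, tol=1e-8):
    """Self-consistency (EM) estimate of an event-time distribution from
    interval-censored observations (left_i, right_i]. Simplified sketch:
    mass is placed on a user-supplied grid, which is assumed dense enough
    that every censoring interval contains at least one grid point."""
    left = np.asarray(left, float)[:, None]
    right = np.asarray(right, float)[:, None]
    grid = np.asarray(grid, float)[None, :]
    # Indicator: grid point j lies inside observation i's censoring interval.
    inside = (grid > left) & (grid <= right)
    p = np.full(grid.shape[1], 1.0 / grid.shape[1])  # initial mass function
    for _ in range(n_iter):
        # E-step: expected share of each observation's mass at each grid point.
        num = inside * p
        denom = num.sum(axis=1, keepdims=True)
        # M-step: new mass at a grid point is the average conditional share.
        p_new = (num / denom).mean(axis=0)
        if np.abs(p_new - p).max() < tol:
            p = p_new
            break
        p = p_new
    return p  # estimated probability mass at each grid point
```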
54

Choosing a Kernel for Cross-Validation

Savchuk, Olga 14 January 2010 (has links)
The statistical properties of cross-validation bandwidths can be improved by choosing an appropriate kernel, different from the kernels traditionally used for cross-validation purposes. In light of this idea, we developed two new methods of bandwidth selection, termed Indirect cross-validation and Robust one-sided cross-validation. The kernels used in the Indirect cross-validation method yield an improvement in the relative bandwidth rate to n^{-1/4}, which is substantially better than the n^{-1/10} rate of the least squares cross-validation method. The robust kernels used in the Robust one-sided cross-validation method eliminate the bandwidth bias for the case of regression functions with discontinuous derivatives.
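For reference, the baseline that these selectors improve on, least squares cross-validation (LSCV) with a Gaussian kernel, can be written in a few lines using its closed-form criterion. This sketch shows standard LSCV only, not the thesis's indirect or one-sided selection kernels.

```python
import numpy as np

def lscv_bandwidth(x, h_grid):
    """Least squares cross-validation bandwidth for a Gaussian-kernel KDE:
    minimize int(fhat^2) - (2/n) * sum of leave-one-out density values.
    Both terms have closed forms for the Gaussian kernel."""
    x = np.asarray(x, float)
    n = len(x)
    d = x[:, None] - x[None, :]                         # pairwise differences
    best_h, best_score = None, np.inf
    for h in h_grid:
        u = d / h
        k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)    # N(0,1) kernel values
        k2 = np.exp(-0.25 * u**2) / np.sqrt(4 * np.pi)  # kernel convolved with itself: N(0,2)
        int_f2 = k2.sum() / (n**2 * h)                  # closed form of integral of fhat^2
        diag = n / np.sqrt(2 * np.pi)                   # diagonal K(0) terms to exclude
        loo = 2.0 * (k.sum() - diag) / (n * (n - 1) * h)  # leave-one-out cross term
        score = int_f2 - loo
        if score < best_score:
            best_h, best_score = h, score
    return best_h

# Illustrative use:
# rng = np.random.default_rng(1); x = rng.normal(size=200)
# print(lscv_bandwidth(x, np.linspace(0.05, 1.0, 60)))
```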
55

Deconvolution in Random Effects Models via Normal Mixtures

Litton, Nathaniel A. August 2009 (has links)
This dissertation describes a minimum distance method for density estimation when the variable of interest is not directly observed. It is assumed that the underlying target density can be well approximated by a mixture of normals. The method compares a density estimate of the observable data with the density of the observable data induced by assuming the target density can be written as a mixture of normals. The goal is to choose the parameters in the normal mixture that minimize the distance between the density estimate of the observable data and the induced density from the model. The method is applied to the deconvolution problem to estimate the density of $X_{i}$ when the variable $Y_{i}=X_{i}+Z_{i}$, $i=1,\ldots,n$, is observed and the density of $Z_{i}$ is known. Additionally, it is applied to a location random effects model to estimate the density of $Z_{ij}$ when the observable quantities are $p$ data sets of size $n$ given by $X_{ij}=\alpha_{i}+\gamma Z_{ij}$, $i=1,\ldots,p$, $j=1,\ldots,n$, where the densities of $\alpha_{i}$ and $Z_{ij}$ are both unknown. The performance of the minimum distance approach in the measurement error model is compared with the deconvoluting kernel density estimator of Stefanski and Carroll (1990). In the location random effects model, the minimum distance estimator is compared with the explicit characteristic function inversion method of Hall and Yao (2003). In both models, the methods are compared using simulated and real data sets. In the simulations, performance is evaluated using an integrated squared error criterion. Results indicate that the minimum distance methodology is comparable to the deconvoluting kernel density estimator and outperforms the explicit characteristic function inversion method.
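Under the normal-mixture assumption, the induced density of the observable $Y_i$ is itself a normal mixture with component variances inflated by the known noise variance, which makes the minimum distance criterion straightforward to sketch. The parameterization, the Nelder-Mead optimizer, and the L2-on-a-grid distance below are illustrative choices, not necessarily those made in the dissertation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gaussian_kde, norm

def fit_deconvolution(y, sigma_z, grid, k=2):
    """Minimum-distance deconvolution sketch: model latent X as a
    k-component normal mixture, so Y = X + Z with Z ~ N(0, sigma_z^2) is
    a normal mixture with inflated variances. Fit by minimizing the L2
    distance on a grid between the induced density of Y and a KDE of
    the observed Y."""
    f_hat = gaussian_kde(y)(grid)  # density estimate of the observable data

    def unpack(theta):
        w = np.exp(theta[:k]); w /= w.sum()   # softmax weights sum to 1
        mu = theta[k:2*k]
        sd = np.exp(theta[2*k:])              # positive component scales
        return w, mu, sd

    def induced_density(theta):
        w, mu, sd = unpack(theta)
        sd_y = np.sqrt(sd**2 + sigma_z**2)    # convolution with known noise
        return sum(wk * norm.pdf(grid, m, s) for wk, m, s in zip(w, mu, sd_y))

    def l2_distance(theta):
        return np.trapz((induced_density(theta) - f_hat) ** 2, grid)

    theta0 = np.concatenate([np.zeros(k),
                             np.quantile(y, np.linspace(0.25, 0.75, k)),
                             np.log(np.full(k, np.std(y) / 2))])
    res = minimize(l2_distance, theta0, method="Nelder-Mead")
    w, mu, sd = unpack(res.x)
    return w, mu, sd   # estimated mixture for the latent X
```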
56

Bayesian Econometrics for Auction Models

KIM, DONG-HYUK January 2010 (has links)
This dissertation develops Bayesian methods to analyze data from auctions and produce policy recommendations for auction design. The first essay, "Auction Design Using Bayesian Methods," proposes a decision theoretic method to choose a reserve price in an auction using data from past auctions. Our method formally incorporates parameter uncertainty and the payoff structure into the decision procedure. When the sample size is modest, it produces higher expected revenue than the plug-in methods; Monte Carlo evidence for this is provided. The second essay, "Flexible Bayesian Analysis of First Price Auctions Using Simulated Likelihood," develops an empirical framework that fully exploits all the shape restrictions arising from economic theory: bidding monotonicity and density affiliation. We directly model the valuation density so that bidding monotonicity is automatically satisfied, and restrict the parameter space to rule out all nonaffiliated densities. Our method uses a simulated likelihood to allow for a very flexible specification, but the posterior analysis is exact for the chosen likelihood. Our method controls the smoothness and tail behavior of the valuation density and provides a decision theoretic framework for auction design. We reanalyze a dataset of auctions for drilling rights in the Outer Continental Shelf that has been widely used in past studies. Our approach gives significantly different policy prescriptions on the choice of reserve price than previous methods, suggesting the importance of the theoretical shape restrictions. Lastly, in the essay "Simple Approximation Methods for Bayesian Auction Design," we propose simple approximation methods for Bayesian decision making in auction design problems. Asymptotic posterior distributions, typically a Gaussian model (second price auction) or a shifted exponential model (first price auction), replace the true posteriors in the Bayesian decision framework. Our method first approximates the posterior payoff using the limiting models and then maximizes the approximate posterior payoff. Both the approximate and exact Bayes rules converge to the true revenue-maximizing reserve price under certain conditions. Monte Carlo studies show that our method closely approximates the exact procedure even for fairly small samples.
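The decision-theoretic reserve-price idea in the first essay, averaging expected revenue over posterior draws rather than plugging in a point estimate, can be sketched as follows. The lognormal valuation family, the second-price format, and all numbers are illustrative assumptions, not the essay's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_revenue(reserve, mu, sigma, n_bidders, n_sim=2000):
    """Monte Carlo expected revenue of a second-price auction with a reserve,
    assuming lognormal(mu, sigma) valuations and n_bidders >= 2."""
    v = rng.lognormal(mu, sigma, size=(n_sim, n_bidders))
    v.sort(axis=1)
    top, second = v[:, -1], v[:, -2]
    revenue = np.where(top < reserve, 0.0,                  # no sale
              np.where(second < reserve, reserve, second))  # reserve binds or not
    return revenue.mean()

def bayes_reserve(posterior_draws, n_bidders, r_grid):
    """Decision-theoretic reserve: maximize revenue averaged over posterior
    draws, so parameter uncertainty enters the decision directly instead
    of being collapsed into a plug-in point estimate."""
    payoff = [np.mean([expected_revenue(r, mu, s, n_bidders)
                       for mu, s in posterior_draws]) for r in r_grid]
    return r_grid[int(np.argmax(payoff))]

# Illustrative use: 50 hypothetical posterior draws of (mu, sigma).
draws = [(rng.normal(0.0, 0.1), abs(rng.normal(0.5, 0.05))) for _ in range(50)]
print(bayes_reserve(draws, n_bidders=4, r_grid=np.linspace(0.5, 3.0, 26)))
```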
57

Geologic Factors Affecting Hydrocarbon Occurrence in Paleovalleys of the Mississippian-Pennsylvanian Unconformity in the Illinois Basin

London, Jeremy Taylor 01 May 2014 (has links)
Paleovalleys associated with the Mississippian-Pennsylvanian unconformity have been identified as potential targets for hydrocarbon exploration in the Illinois Basin. Though there is little literature addressing the geologic factors controlling hydrocarbon accumulation in sub-Pennsylvanian paleovalleys basin-wide, much work has been done to identify the Mississippian-Pennsylvanian unconformity, characterize the Chesterian and basal Pennsylvanian lithology, map the sub-Pennsylvanian paleogeology and delineate the pre-Pennsylvanian paleovalleys in the Illinois Basin. This study uses Geographic Information Systems (GIS) to determine the geologic factors controlling the distribution of hydrocarbon-bearing sub-Pennsylvanian paleovalley fill in the Illinois Basin. A methodology was developed to identify densely-drilled areas without associated petroleum occurrence in basal Pennsylvanian paleovalley fill. Kernel density estimation was used to approximate drilling activity throughout the basin and identify “hotspots” of high well density. Pennsylvanian oil and gas fields were compared to the hotspots to identify which areas were most likely unrelated to Pennsylvanian production. Those hotspots were then compared to areas with known hydrocarbon accumulations in sub-Pennsylvanian paleovalleys to determine what varies geologically amongst these locations. Geologic differences provided insight regarding the spatial distribution of hydrocarbon-bearing sub-Pennsylvanian paleovalleys in the Illinois Basin. It was found that the distribution of hydrocarbon-bearing paleovalleys in the Illinois Basin follows structural features and faults. In the structurally dominated portions of the Illinois Basin, especially in eastern Illinois along the La Salle Anticlinal Belt, hydrocarbons migrate into paleovalleys from underlying hydrocarbon-rich sub-Pennsylvanian paleogeology. Along the fault-dominated areas, such as the Wabash, Rough Creek and Pennyrile Fault Zones, migration occurs upwards along faults from deeper sources. Cross sections were made to gain a better understanding of the paleovalley reservoir and to assess the utility of using all the data collected in this study to locate paleovalley reservoirs. The Main Consolidated Field in Crawford County, Illinois, was chosen as the best site for subsurface mapping due to its high well density, associated Pennsylvanian production, and locally incised productive Chesterian strata. Four cross sections revealed a complex paleovalley reservoir with many potential pay zones. The methodology used to locate this paleovalley reservoir can be applied to other potential sites within the Illinois Basin and to other basins as well.
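The well-density "hotspot" step can be illustrated with a kernel density estimate over well coordinates. The quantile cutoff below is an illustrative assumption, since the abstract does not state the study's actual thresholding rule.

```python
import numpy as np
from scipy.stats import gaussian_kde

def drilling_hotspots(x, y, grid_x, grid_y, quantile=0.95):
    """Kernel-density hotspot sketch: estimate drilling intensity from well
    coordinates and flag grid cells above a high quantile of the density
    surface. Returns the density grid and a boolean hotspot mask."""
    kde = gaussian_kde(np.vstack([x, y]))
    gx, gy = np.meshgrid(grid_x, grid_y)
    density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    return density, density > np.quantile(density, quantile)
```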
58

On the Modifiable Areal Unit Problem and kernel home range analyses: the case of woodland caribou (Rangifer tarandus caribou)

Kilistoff, Kristen 10 September 2014 (has links)
There are myriad studies of animal habitat use that employ the notion of “home range”. Aggregated information on animal locations provides insight into a geographically discrete unit that represents the use of space by an animal. Among the various methods to delineate home ranges is the commonly used Kernel Density Estimation (KDE). The KDE method delineates home ranges based on an animal’s Utilization Distribution (UD). Specifically, a UD estimates a three-dimensional surface representing the probability or intensity of habitat use by an animal based on known locations. The choice of bandwidth (i.e., kernel radius) in KDE determines the level of smoothing and thus ultimately circumscribes the size and shape of an animal’s home range. The bounds of interest in a home range can then be delineated using different volume contours of the UD (e.g., 95% or 50%). Habitat variables can then be assessed within the chosen UD contour(s) to ascertain selection for certain habitat characteristics. Home range analyses that utilize the KDE method, and indeed all methods of home range delineation, are subject to the Modifiable Areal Unit Problem (MAUP), whereby changes in the scale at which data (e.g., habitat variables) are analysed can alter the outcome of statistical analyses and the resulting ecological inferences. There are two components to MAUP: the scale and zoning effects. The scale effect refers to changes to the data, and consequently the outcome of analyses, that result from aggregating data to coarser spatial units of analysis. The aggregation of data can result in a loss of fine-scale detail as well as change the observed spatial patterns. The zone effect refers to how, when holding scale constant, the delineation of areal units in space can alter data values and ultimately the results of analyses. For example, habitat features captured within 1 km² gridded sampling units may change if 1 km² hexagonal units are used instead. This thesis holds that there are three “modifiable” factors in home range analyses that render them subject to the MAUP. The first two relate specifically to the use of the KDE method, namely the choice of bandwidth and UD contour. The third is the grain (e.g., resolution) at which habitat variables are aggregated, which applies to KDE but also more broadly to other quantitative methods of home range delineation. In the following chapters we examine the changes in values of elevation and slope that result from changes to KDE bandwidth (Chapter 2), UD contour (Chapter 3) and DEM resolution (Chapter 4). In each chapter we also examine how the observed effect of altering each individual parameter of scale (e.g., bandwidth) changes when different scales of the other two parameters are considered (e.g., contour and resolution). We expected that the scale of each parameter examined would change the observed effect of the other parameters; for example, that the homogenization of data at coarser resolutions would reduce the degree of difference in variable values between the UD contours of each home range. To explore the potential effects of MAUP on home range analyses we used as a model population 13 northern woodland caribou (Rangifer tarandus caribou). We created seasonal home ranges (winter, calving, summer, rut and fall) for each caribou using three different KDE bandwidths. Within each home range we delineated four contours based on differing levels of an animal’s UD.
We then calculated values of elevation and slope (mean, standard deviation and coefficient of variation) within the contours of each seasonal home range, using a Digital Elevation Model (DEM) aggregated to four different resolutions. We found that each parameter of scale significantly changed the values of elevation and slope within the home ranges of the model caribou population. The magnitude as well as the direction of change in slope and elevation often varied depending on the specific contour or season. There was a greater decrease in the variability of elevation within the fall and winter seasons at smaller KDE bandwidths. The topographic variables were significantly different between all contours of caribou home ranges, and the differences between contours were, in general, significantly higher in fall and winter (elevation) or calving and summer (slope). The mean and SD of slope decreased at coarser resolutions in all caribou home ranges, whereas there was no change in elevation. We also found interactive effects of all three parameters of scale, although these were not always as direct as initially anticipated. Each parameter examined (bandwidth, contour and resolution) may potentially alter the outcome of northern woodland caribou habitat analyses. We conclude that home range analyses that utilize the KDE method may be subject to MAUP by virtue of the ability to modify the spatial dimensions of the units of analysis. As such, in habitat analyses using KDE, careful consideration should be given to the choice of bandwidth, UD contour and habitat variable resolution.
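Two of the three "modifiable" factors, the KDE bandwidth and the UD volume contour, can be made concrete with a short sketch. Here `bw_method` is SciPy's kernel scaling factor, an illustrative stand-in for the thesis's bandwidth choices.

```python
import numpy as np
from scipy.stats import gaussian_kde

def utilization_distribution(x, y, bandwidth, grid_x, grid_y):
    """Kernel UD sketch: the bandwidth directly controls the smoothing
    that the thesis identifies as one 'modifiable' factor."""
    kde = gaussian_kde(np.vstack([x, y]), bw_method=bandwidth)
    gx, gy = np.meshgrid(grid_x, grid_y)
    ud = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    return ud / ud.sum()   # normalize to probability mass per grid cell

def volume_contour(ud, level=0.95):
    """Cells inside the `level` volume contour of the UD: the smallest set
    of cells whose summed probability reaches the level."""
    order = np.argsort(ud.ravel())[::-1]            # densest cells first
    cum = np.cumsum(ud.ravel()[order])
    cutoff = ud.ravel()[order][np.searchsorted(cum, level)]
    return ud >= cutoff    # boolean mask of the home-range cells
```

Habitat variables would then be summarized inside the 95% or 50% masks, so any change to the bandwidth or contour level propagates directly into the summarized values.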
59

Statistical gas distribution modelling for mobile robot applications

Reggente, Matteo January 2014 (has links)
In this dissertation, we present and evaluate algorithms for statistical gas distribution modelling in mobile robot applications. We derive a representation of the gas distribution in natural environments using gas measurements collected with mobile robots. The algorithms fuse different sensor readings (gas, wind and location) to create 2D or 3D maps. Throughout this thesis, the Kernel DM+V algorithm plays a central role in modelling the gas distribution. The key idea is the spatial extrapolation of the gas measurements using a Gaussian kernel. The algorithm produces four maps: the weight map shows the density of the measurements; the confidence map shows areas in which the model is considered trustworthy; the mean map represents the modelled gas distribution; and the variance map represents the spatial structure of the variance of the mean estimate. The Kernel DM+V/W algorithm incorporates wind measurements in the computation of the models by modifying the shape of the Gaussian kernel according to the local wind direction and magnitude. The Kernel 3D-DM+V/W algorithm extends the previous algorithm to the third dimension using a tri-variate Gaussian kernel. Ground-truth evaluation is a critical issue for gas distribution modelling with mobile platforms. We propose two methods to evaluate gas distribution models. First, we create a ground-truth gas distribution using a simulation environment and compare the models with this ground-truth distribution. Second, considering that a good model should explain the measurements and accurately predict new ones, we evaluate the models according to their ability to infer unseen gas concentrations. We evaluate the algorithms by carrying out experiments in different environments, starting with a simulated environment and ending in urban applications, in which we integrated gas sensors on robots designed for urban hygiene. We found that the models that incorporate wind information typically outperform those that do not.
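The four maps of the 2D Kernel DM+V step can be sketched directly from the description above. The confidence scaling constant `sigma_w` and the nearest-cell variance computation are illustrative assumptions.

```python
import numpy as np

def kernel_dmv(px, py, conc, grid_x, grid_y, sigma=1.0, sigma_w=0.5):
    """Sketch of the Kernel DM+V idea: extrapolate point gas measurements
    with a Gaussian kernel into weight, confidence, mean and variance maps."""
    gx, gy = np.meshgrid(grid_x, grid_y)

    def kernel(x, y):
        return np.exp(-((gx - x)**2 + (gy - y)**2) / (2 * sigma**2))

    # Weight map and kernel-weighted mean of the concentration readings.
    weights = [kernel(x, y) for x, y in zip(px, py)]
    weight = np.sum(weights, axis=0)
    mean = np.sum([w * c for w, c in zip(weights, conc)], axis=0)
    mean = mean / np.maximum(weight, 1e-12)
    # Confidence map: approaches 1 where many measurements fall nearby.
    confidence = 1.0 - np.exp(-weight / sigma_w)
    # Variance map: kernel-weighted squared deviation of each reading from
    # the mean estimate at the grid cell nearest to that reading.
    ix = [np.abs(np.asarray(grid_x) - x).argmin() for x in px]
    iy = [np.abs(np.asarray(grid_y) - y).argmin() for y in py]
    dev2 = [(c - mean[j, i])**2 for c, i, j in zip(conc, ix, iy)]
    var = np.sum([w * d for w, d in zip(weights, dev2)], axis=0)
    var = var / np.maximum(weight, 1e-12)
    return weight, confidence, mean, var
```

The DM+V/W variant would additionally stretch the kernel along the local wind vector, which only changes the `kernel` helper.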
60

ILLINOIS STATEWIDE HEALTHCARE AND EDUCATION MAPPING

KC, Binita 01 December 2010 (has links)
Illinois statewide infrastructure mapping provides a basis for economic development of the state. As part of infrastructure mapping, this study focused on mapping healthcare and education services for Illinois. Over 4,337 K-12 schools and 1,331 hospitals and long-term care facilities were used in analyzing healthcare and education services. Education service was measured as the ratio of population to teachers, and healthcare service as the ratio of population to beds. Both services were mapped using three techniques: choropleth mapping, Thiessen polygons, and Kernel Density Estimation. The mapping was also conducted at three scales: county, census tract, and ZIP code area. The resulting maps were compared by visual interpretation and statistical correlation analysis. Moreover, spatial pattern analysis of the maps was conducted using global and local Moran's I, high/low clustering, and hotspot analysis methods. In addition, multivariate mapping was carried out to demonstrate the spatial distributions of multiple variables and their relationships. The results showed that both the choropleth and Thiessen polygon methods produced service levels that were homogeneous throughout the polygons and changed abruptly at the boundaries, thereby ignoring the cross-boundary flow of people for healthcare and education services; in addition, they do not reflect the distance decay of services. Kernel density mapping quantified the continuous and variable healthcare and educational services and has the potential to provide more accurate estimates of both. Moreover, the county-scale maps are more reliable than the census tract and ZIP code area maps. In addition, the multivariate map, obtained through a legend design that combined the values of multiple variables, effectively demonstrated the spatial distributions of healthcare and education services along with per capita income, and the relationships between them. Overall, Morgan, Wayne, Mason, and Ford counties had higher service levels for both education and healthcare, whereas Champaign, Johnson, and Perry had lower service levels. Generally, cities and the areas close to them have better healthcare and educational services than other areas because of higher per capita income. In addition to the numbers of hospitals and schools, healthcare and education service levels were also affected by populations and per capita income. Other factors may also influence service levels but were not taken into account in this study because of limited time and data.
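The kernel-density service-mapping idea, a continuous service surface rather than per-polygon ratios, can be sketched as a ratio of weighted density estimates. The weighted-KDE formulation and the rescaling by totals are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np
from scipy.stats import gaussian_kde

def service_ratio_surface(pop_xy, pop_counts, fac_xy, fac_beds, grid_x, grid_y):
    """Continuous population-per-bed surface: weighted KDE of population
    points (e.g., block centroids weighted by counts) divided by a
    weighted KDE of facility capacity. Coordinate inputs are 2 x n arrays."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    pts = np.vstack([gx.ravel(), gy.ravel()])
    pop = gaussian_kde(pop_xy, weights=pop_counts)(pts).reshape(gx.shape)
    beds = gaussian_kde(fac_xy, weights=fac_beds)(pts).reshape(gx.shape)
    # Rescale the unit-mass densities by totals so the ratio approximates
    # persons per bed rather than a ratio of normalized densities.
    return (pop * np.sum(pop_counts)) / np.maximum(beds * np.sum(fac_beds), 1e-9)
```

Unlike a choropleth, this surface varies smoothly across polygon boundaries and naturally encodes distance decay through the kernel bandwidth.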
