201 |
Analysis of Modeling, Training, and Dimension Reduction Approaches for Target Detection in Hyperspectral Imagery. Farrell, Michael D., Jr. 03 November 2005 (has links)
Whenever a new sensor or system comes online, the engineers and analysts responsible for processing its measurements turn first to methods that are tried and true on existing systems. This is a natural, if not wholly logical, approach, and it is exactly what has happened with the advent of hyperspectral imagery (HSI) exploitation. However, a closer look at the assumptions made by the approaches published in the literature has not been undertaken.
This thesis analyzes three key aspects of HSI exploitation: statistical data modeling, covariance estimation from training data, and dimension reduction. These items are part of standard processing schemes, and it is worthwhile to understand and quantify the impact that various assumptions for these items have on target detectability and detection statistics.
First, the accuracy and applicability of the standard Gaussian (i.e., Normal) model is evaluated, and it is shown that the elliptically contoured t-distribution (EC-t) sometimes offers a better statistical model for HSI data. A finite mixture approach for EC-t is developed in which all parameters are estimated simultaneously without a priori information. Then the effects of making a poor covariance estimate are shown by including target samples in the training data. Multiple test cases with ground targets are explored. They show that the magnitude of the deleterious effect of covariance contamination on detection statistics depends on algorithm type and target signal characteristics. Next, the two most widely used dimension reduction approaches are tested. It is demonstrated that, in many cases, significant dimension reduction can be achieved with only a minor loss in detection performance.
In addition, a concise development of key HSI detection algorithms is presented, and the state-of-the-art in adaptive detectors is benchmarked for land mine targets. Methods for detection and identification of airborne gases using hyperspectral imagery are discussed, and this application is highlighted as an excellent opportunity for future work.
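The covariance-contamination effect described above can be illustrated with a minimal synthetic sketch (Gaussian background and a hypothetical target signature are assumptions of this sketch, not the thesis's data or experiments): including target-like pixels in the training set inflates the estimated covariance along the target direction and shrinks the matched-filter statistic.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 10, 2000                          # spectral bands, background training pixels
bg_cov = np.diag(np.linspace(1.0, 3.0, p))
background = rng.multivariate_normal(np.zeros(p), bg_cov, size=n)
target = np.full(p, 2.0)                 # hypothetical target signature

def mf_snr(cov, s):
    """Matched-filter SNR term s' C^{-1} s for signature s."""
    return float(s @ np.linalg.solve(cov, s))

clean_cov = np.cov(background, rowvar=False)

# Contaminate the training set with 50 target-like pixels
contaminated = np.vstack([background, target + 0.1 * rng.standard_normal((50, p))])
contam_cov = np.cov(contaminated, rowvar=False)

print(mf_snr(clean_cov, target) > mf_snr(contam_cov, target))  # True
```

Even with only ~2.4% contamination, the statistic drops noticeably, consistent with the thesis's observation that the magnitude of the effect depends on the target signal characteristics.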
|
202 |
Minimally Supported D-optimal Designs for Response Surface Models with Spatially Correlated Errors. Hsu, Yao-chung 05 July 2012 (has links)
In this work, minimally supported D-optimal designs for response surface models with spatially correlated errors are studied. The spatially correlated errors describe the correlation between two measurements, depending on their distance d, through the covariance function C(d) = exp(-rd). In a one-dimensional design space, the minimally supported D-optimal designs for polynomial models with spatially correlated errors include the two end points and are symmetric about the center of the design region. Exact solutions for simple linear and quadratic regression models are presented; for models of third or higher order, numerical solutions are given. In a two-dimensional design space, the minimally supported D-optimal designs are invariant under translation, rotation, and reflection. Numerical results show that a regular triangle inscribed in a circular experimental region is a minimally supported D-optimal design for the first-order response surface model.
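The one-dimensional endpoint result can be checked numerically for simple linear regression (a hypothetical illustration with r = 1 on the design region [-1, 1]; not code from the thesis): among symmetric two-point designs {-a, a}, D-optimality maximizes det(X' Σ⁻¹ X) with Σ built from C(d) = exp(-rd).

```python
import numpy as np

r = 1.0  # decay rate in C(d) = exp(-r*d); value is illustrative

def log_det_info(points):
    """log det of X' Sigma^{-1} X for simple linear regression y = b0 + b1*x."""
    x = np.asarray(points, dtype=float)
    X = np.column_stack([np.ones_like(x), x])
    Sigma = np.exp(-r * np.abs(x[:, None] - x[None, :]))
    _, ld = np.linalg.slogdet(X.T @ np.linalg.solve(Sigma, X))
    return ld

# Among symmetric two-point designs {-a, a} on [-1, 1], the endpoints win
grid = np.linspace(0.1, 1.0, 10)
best = max(grid, key=lambda a: log_det_info([-a, a]))
print(best)  # 1.0
```

For this two-point case the criterion reduces in closed form to 4a² / (1 - exp(-4ra)), which is increasing in a, so the endpoints ±1 are optimal among such designs, in line with the result quoted above.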
|
203 |
Ensemble Statistics and Error Covariance of a Rapidly Intensifying Hurricane. Rigney, Matthew C. 16 January 2010 (has links)
This thesis presents an investigation of ensemble Gaussianity, the effect of non-Gaussianity on covariance structures, storm-centered data assimilation techniques, and the relationship between commonly used data assimilation variables and the underlying dynamics for the case of Hurricane Humberto. Using an Ensemble Kalman Filter (EnKF), a comparison of data assimilation results in storm-centered and Eulerian coordinate systems is made. In addition, the extent of the non-Gaussianity of the model ensemble is investigated and quantified. The effect of this non-Gaussianity on covariance structures, which play an integral role in the EnKF data assimilation scheme, is then explored. Finally, the correlation structures calculated from a Weather Research and Forecasting (WRF) ensemble forecast of several state variables are investigated in order to better understand the dynamics of this rapidly intensifying cyclone.
Hurricane Humberto rapidly intensified in the northwestern Gulf of Mexico from a tropical disturbance to a strong category one hurricane with 90 mph winds in 24 hours. Numerical models did not capture the intensification of Humberto well. This could be due in large part to initial condition error, which data assimilation schemes can address. Because the EnKF scheme is a linear theory developed on the assumption that the ensemble distribution is normal, non-Gaussianity in the ensemble distribution could affect the EnKF update. It is shown, through an inspection of statistical moments, that multiple state variables do indeed exhibit significant non-Gaussianity.
In addition, storm-centered data assimilation schemes present an alternative to traditional Eulerian schemes by emphasizing the centrality of the cyclone to the assimilation window. This allows for an update that is most effective in the vicinity of the storm center, which is of most concern in mesoscale events such as Humberto.
Finally, the effect of non-Gaussian distributions on covariance structures is examined through data transformations of normal distributions. Various standard transformations of two Gaussian distributions are made, and the skewness, kurtosis, and correlation between the two distributions are computed before and after each transformation. The results show a relationship between changes in skewness and kurtosis and the correlation between the distributions. These effects are then taken into consideration as the dynamics contributing to the rapid intensification of Humberto are explored through correlation structures.
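The transformation experiment described above can be sketched in a few lines (a minimal synthetic sketch with an exponential transformation, which is one standard choice; not the thesis's actual transformations or data): exponentiating a correlated Gaussian pair induces skewness and changes the correlation between the two series.

```python
import numpy as np

rng = np.random.default_rng(1)
cov = [[1.0, 0.8], [0.8, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=100_000).T

def skew(z):
    """Sample skewness (third standardized moment)."""
    return float(np.mean((z - z.mean()) ** 3) / z.std() ** 3)

# Exponentiating a Gaussian yields a lognormal: strong positive skew,
# and the correlation between the pair is weakened by the transformation.
u, v = np.exp(x), np.exp(y)
print(skew(x), skew(u))                    # near 0 vs. clearly positive
print(np.corrcoef(x, y)[0, 1], np.corrcoef(u, v)[0, 1])
```

The drop in correlation after the nonlinear transformation mirrors the thesis's point that departures from Gaussianity alter the covariance structures the EnKF relies on.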
|
204 |
An analysis of Texas rainfall data and asymptotic properties of space-time covariance estimators. Li, Bo 02 June 2009 (has links)
This dissertation includes two parts. Part 1 develops a geostatistical method to calibrate Texas NEXRAD rainfall estimates using rain gauge measurements. Part 2 explores the asymptotic joint distribution of sample space-time covariance estimators. The following two paragraphs briefly summarize these two parts, respectively.
Rainfall is one of the most important hydrologic model inputs and is considered a random process in time and space. Rain gauges generally provide good quality data; however, they are usually too sparse to capture the spatial variability. Radar estimates provide a better spatial representation of rainfall patterns, but they are subject to substantial biases. Our calibration of radar estimates, using gauge data, takes season, rainfall type, and rainfall amount into account, and is accomplished via a combination of threshold estimation, bias reduction, regression techniques, and geostatistical procedures. We explore a varying-coefficient model to adapt to the temporal variability of rainfall. The methods are illustrated using Texas rainfall data from 2003, which include WSR-88D radar-reflectivity data and the corresponding rain gauge measurements. Simulation experiments are carried out to evaluate the accuracy of our methodology. The proposed method is superior in estimating total rainfall as well as point rainfall amounts.
We study the asymptotic joint distribution of sample space-time covariance estimators of stationary random fields. We do this without any marginal or joint distributional assumptions other than mild moment and mixing conditions. We consider several situations depending on whether the observations are regularly or irregularly spaced, and whether one part or the whole domain of interest is fixed or increasing. A simulation experiment illustrates the asymptotic joint normality and the asymptotic covariance matrix of sample space-time covariance estimators as derived. An extension of this part develops a nonparametric test for full symmetry, separability, Taylor's hypothesis, and isotropy of space-time covariances.
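A sample space-time covariance estimator of the kind studied in Part 2 can be sketched as follows (white-noise placeholder data on a one-dimensional transect; the layout and names are assumptions of this sketch). Full symmetry, one of the properties tested above, would require C(h, u) = C(h, -u).

```python
import numpy as np

rng = np.random.default_rng(2)
S, T = 20, 200                       # spatial sites on a transect, time points
Z = rng.standard_normal((S, T))      # placeholder white-noise field

def st_cov(Z, h, u):
    """Sample space-time covariance at spatial lag h and temporal lag u."""
    Zc = Z - Z.mean()
    S, T = Z.shape
    if u >= 0:
        a, b = Zc[:S - h, :T - u], Zc[h:, u:]
    else:
        a, b = Zc[:S - h, -u:], Zc[h:, :T + u]
    return float(np.mean(a * b))

# Full symmetry would mean C(h, u) == C(h, -u); for white noise
# both lagged estimates are near zero and the variance is near one.
print(st_cov(Z, 0, 0), st_cov(Z, 1, 1), st_cov(Z, 1, -1))
```

For real data, comparing the estimates at (h, u) and (h, -u) across many lags is the intuition behind a nonparametric full-symmetry test.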
|
205 |
Identification of stochastic systems: Subspace methods and covariance extension. Dahlen, Anders January 2001 (has links)
No description available.
|
206 |
Evaluating SLAM algorithms for Autonomous Helicopters. Skoglund, Martin January 2008 (has links)
Navigation with unmanned aerial vehicles (UAVs) requires good knowledge of the current position and other states. A UAV navigation system often uses GPS and inertial sensors in a state estimation solution. If the GPS signal is lost or corrupted, state estimation must still be possible, and this is where simultaneous localization and mapping (SLAM) provides a solution. SLAM considers the problem of incrementally building a consistent map of a previously unknown environment while simultaneously localizing the vehicle within this map; a solution therefore does not require position from the GPS receiver.
This thesis presents a visual-feature-based SLAM solution using a low-resolution video camera, a low-cost inertial measurement unit (IMU), and a barometric pressure sensor. State estimation is made with an extended information filter (EIF), where sparseness in the information matrix is enforced with an approximation.
The implementation is evaluated on real flight data and compared to an EKF-SLAM solution. Results show that both solutions provide similar estimates but that the EIF is over-confident. The sparse structure is exploited, though possibly not fully, making the solution nearly linear in time; storage requirements are linear in the number of features, which enables evaluation over longer periods.
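The information-filter update at the heart of the EIF can be sketched in its linear form (made-up numbers, a single scalar measurement; the thesis's filter is the extended, sparsified version): the measurement adds H' R⁻¹ H to the information matrix and H' R⁻¹ z to the information vector.

```python
import numpy as np

# Information-form measurement update: Y += H' R^{-1} H, y += H' R^{-1} z.
Y = np.eye(2) * 0.1                    # prior information matrix (sparse in an EIF)
y = np.zeros(2)                        # prior information vector
H = np.array([[1.0, 0.0]])             # observe the first state only
R = np.array([[0.5]])                  # measurement noise covariance
z = np.array([2.0])                    # measurement

Ri = np.linalg.inv(R)
Y_new = Y + H.T @ Ri @ H               # update is additive and local
y_new = y + H.T @ Ri @ z

# Recovering the state estimate requires solving the linear system
x_est = np.linalg.solve(Y_new, y_new)
print(x_est)
```

The additivity and locality of the update are what make the information form attractive for SLAM: unobserved states (here the second component) are untouched.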
|
207 |
Bayesian parsimonious covariance estimation for hierarchical linear mixed models. Frühwirth-Schnatter, Sylvia, Tüchler, Regina January 2004 (has links) (PDF)
We consider a non-centered parameterization of the standard random-effects model, which is based on the Cholesky decomposition of the variance-covariance matrix. The regression-type structure of the non-centered parameterization allows us to choose a simple, conditionally conjugate normal prior on the Cholesky factor. Based on the non-centered parameterization, we search for a parsimonious variance-covariance matrix by identifying the non-zero elements of the Cholesky factors using Bayesian variable selection methods. With this method we are able to learn from the data, for each effect, whether it is random or not, and whether covariances among random effects are zero or not. An application in marketing shows a substantial reduction in the number of free elements of the variance-covariance matrix. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
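A toy numerical sketch of the idea that zeroing Cholesky-factor elements yields a parsimonious covariance (the numbers are made up, and variable selection is done by hand here rather than by the Bayesian machinery of the paper):

```python
import numpy as np

# Lower-triangular Cholesky factor of the random-effects covariance Q = C C'.
# Zeroing an entire row of C (as variable selection might) switches the
# corresponding effect from random to fixed: its variance becomes zero.
C = np.array([[0.9, 0.0, 0.0],
              [0.4, 0.7, 0.0],
              [0.0, 0.0, 0.0]])   # third effect selected as non-random
Q = C @ C.T
print(Q[2, 2])  # 0.0 -> the third random effect has zero variance
```

Zeroing individual off-diagonal elements of C instead prunes covariances among the remaining random effects, which is how the parsimonious structure arises.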
|
208 |
Statistical models in environmental and life sciences. Rajaram, Lakshminarayan 01 June 2006 (has links)
The dissertation focuses on developing statistical models in environmental and life sciences. The Generalized Extreme Value distribution is used to model annual monthly maximum rainfall data from 44 locations in Florida. Time dependence of the rainfall data is incorporated into the model by assuming the location parameter to be a function of time, both linear and quadratic. Estimates and confidence intervals are obtained for return levels of return periods of 10, 20, 50, and 100 years. Locations are grouped into statistical profiles based on the similarities in their return level graphs, for all locations and for locations within each climatic zone. A family of extreme value distributions is applied to model simulated maximum drug concentration (Cmax) data of an anticoagulant drug. For small samples (n ≤ 100), the data exhibited bimodality. Investigating a mixture of two extreme value distributions to model such bimodal data, using the two-parameter Gumbel, Pareto, and Weibull distributions, concluded that a mixture of two Weibull distributions is the only suitable model. For large samples, Cmax data are modeled using the Generalized Extreme Value, Gumbel, Weibull, and Pareto distributions. These results concluded that the Generalized Extreme Value distribution is the only suitable model. A system of random differential equations is used to investigate the drug concentration behavior in a three-compartment pharmacokinetic model which describes coumermycin's disposition. The rate constants used in the differential equations are assumed to have a trivariate distribution and hence are simulated from the trivariate truncated normal probability distribution. Numerical solutions are developed under different combinations of the covariance structure and the nonrandom initial conditions. We study the dependence effect that such a pharmacokinetic system has among the three compartments, as well as the effect of variance in identifying the concentration behavior in each compartment.
We identify the time delays in each compartment and extend these models to incorporate the identified time delays. We provide a graphical display of the time-delay effects on the drug concentration behavior, a comparison of the deterministic behavior with and without the time delay, and the effect of different sets of time delays on the deterministic and stochastic behaviors.
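The return-level computation behind the 10-, 20-, 50-, and 100-year estimates can be sketched from the standard GEV quantile formula (the parameter values here are purely illustrative, not the fitted Florida values):

```python
import numpy as np

def gev_return_level(mu, sigma, xi, period):
    """GEV return level z_p exceeded on average once per `period` years (xi != 0):
    z_p = mu - (sigma/xi) * (1 - y^{-xi}), with y = -log(1 - 1/period)."""
    y = -np.log(1.0 - 1.0 / period)
    return float(mu - (sigma / xi) * (1.0 - y ** (-xi)))

# Illustrative parameter values only -- not fitted values from the dissertation
for T in (10, 20, 50, 100):
    print(T, round(gev_return_level(mu=6.0, sigma=1.5, xi=0.1, period=T), 2))
```

With a time-varying location parameter mu(t), as in the dissertation, the same formula yields return levels that drift with time.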
|
209 |
Data-driven approach for control performance monitoring and fault diagnosis. Yu, Jie 28 August 2008 (links)
Not available.
|
210 |
Bandwidth and power efficient wireless spectrum sensing networks. Kim, Jaeweon 17 June 2011 (has links)
Opportunistic spectrum reuse is a promising solution to the two main causes of spectrum scarcity: most of the radio frequency (RF) bands are allocated by static licensing, and many of them are underutilized. The frequency spectrum can be utilized more efficiently by allowing communication systems to find unoccupied spectrum and use it without harming the licensed users. Reliable sensing of these spectral opportunities is perhaps the most essential element of this technology. Despite significant work on spectrum sensing, further performance improvement is needed to approach its full potential.
In this dissertation, wireless spectrum sensing networks (WSSNs) are investigated for reliable detection of the primary (licensed) users, which enables efficient spectrum utilization and minimal power consumption in communications. Reliable spectrum sensing is studied in depth in two parts: a single-sensor algorithm and then a cooperative sensing scheme are proposed based on spectral covariance sensing (SCS). The first novel contribution exploits the different statistical correlations of the received signal and noise in the frequency domain. This detector is analyzed theoretically and verified through realistic simulations using actual digital television signals captured in the US. The proposed SCS detector achieves significant improvement over existing solutions in terms of sensitivity and robustness to noise uncertainty. Second, SCS is extended to a distributed WSSN architecture to allow cooperation between two or more sensors. Theoretical limits of cooperative white space sensing under correlated shadowing are investigated. We analyze the probability of a false alarm when each node in the WSSN detects the white space using SCS detection and the base station combines the individual results to make the final decision. The detection performance is improved compared with that of the cooperative energy detector, and fewer sensor nodes are needed to achieve the same sensitivity.
Third, we propose a low-power source coding and modulation scheme for power-efficient communication between the sensor nodes in a WSSN. A complete analysis shows that the proposed scheme not only minimizes total power consumption in the network but also improves the bit error rate (BER).
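The intuition behind covariance-based sensing can be sketched with a generic sample-covariance test statistic (a minimal sketch in the spirit of covariance detection, not the SCS algorithm itself; the signal, lag length, and threshold-free comparison are assumptions of this sketch): a correlated primary signal inflates the off-diagonal entries of the sample covariance matrix relative to pure noise.

```python
import numpy as np

rng = np.random.default_rng(3)

def cov_stat(x, L=8):
    """Ratio of total to diagonal |covariance| over L lags.
    Near 1 for white noise; larger when the signal is correlated."""
    X = np.lib.stride_tricks.sliding_window_view(x, L)
    R = X.T @ X / X.shape[0]
    return float(np.abs(R).sum() / np.abs(np.diag(R)).sum())

noise = rng.standard_normal(10_000)
tone = noise + 2.0 * np.cos(0.1 * np.arange(10_000))  # correlated primary signal
print(cov_stat(tone) > cov_stat(noise))  # True: the signal raises the statistic
```

Comparing the statistic against a threshold set by the noise-only distribution gives a detector that, like SCS, needs no knowledge of the noise power, which is the source of the robustness to noise uncertainty noted above.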
|