About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

An artificial intelligence approach to the processing of radar return signals for target detection

Li, Vincent Yiu Fai January 1999 (has links)
Most operating vessel traffic management systems experience problems, such as track loss and track swap, which may cause confusion to the traffic regulators and lead to potential hazards in harbour operation. This is mainly due to the limited adaptive capabilities of the algorithms used in the detection process. The decision on whether a target is present is usually based on the magnitude of the returning echoes. Such a method has a low efficiency in discriminating between target and clutter, especially when the signal-to-noise ratio is low. The performance of radar target detection depends on the features that can be used to discriminate between clutter and targets. To achieve a significant improvement in the detection of weak targets, more obvious discriminating features must be identified and extracted. This research investigates conventional Constant False Alarm Rate (CFAR) algorithms and introduces the approach of applying artificial intelligence methods to the target detection problem. Previous research has been undertaken to improve the detection capability of radar systems in heavy clutter environments, and many new CFAR algorithms based on amplitude information only have been developed. This research studies these algorithms and proposes that it is feasible to design and develop an advanced target detection system capable of discriminating targets from clutter by learning the different features extracted from radar returns. The approach adopted for this further work into target detection was the use of neural networks. Results presented show that such a network is able to learn particular features of specific radar return signals, e.g. rain clutter, sea clutter and targets, and to decide whether a target is present in a finite window of data. The work includes a study of the characteristics of radar signals and identification of the features that can be used in the process of effective detection. The use of a general purpose marine radar has allowed the collection of live signals from Plymouth harbour for analysis, training and validation. Using data from the real environment has enabled the developed detection system to be exposed to real clutter conditions that cannot be obtained with simulated data. The performance of the neural network detection system is evaluated with further recorded data and the results are compared with the conventional CFAR algorithms. It is shown that the neural system can learn the features of specific radar signals and provide superior performance in detecting targets in clutter. Areas for further research and development are presented; these include the use of a sophisticated recording system, high-speed processors and the potential for target classification.
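For context, a minimal sketch of a conventional cell-averaging CFAR detector of the kind this thesis compares against is shown below; the window sizes and false-alarm probability are illustrative choices, not values from the thesis.

```python
import numpy as np

def ca_cfar(signal, num_train=16, num_guard=4, pfa=1e-4):
    """Cell-averaging CFAR: flag cells whose power exceeds a threshold
    scaled from the mean of the surrounding training cells.

    Minimal sketch of the conventional detector, not the thesis code."""
    n = len(signal)
    # Scaling factor for the desired false-alarm probability under
    # exponentially distributed (square-law detected) noise.
    alpha = num_train * (pfa ** (-1.0 / num_train) - 1.0)
    half = num_train // 2
    hits = np.zeros(n, dtype=bool)
    for i in range(half + num_guard, n - half - num_guard):
        lead = signal[i - num_guard - half : i - num_guard]
        lag = signal[i + num_guard + 1 : i + num_guard + 1 + half]
        noise_level = np.mean(np.concatenate([lead, lag]))
        hits[i] = signal[i] > alpha * noise_level
    return hits

# Example: a weak target embedded in exponential clutter.
rng = np.random.default_rng(0)
clutter = rng.exponential(scale=1.0, size=1000)
clutter[500] += 15.0                      # injected target
print(np.flatnonzero(ca_cfar(clutter)))
```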
2

Development of a Low False-Alarm-Rate Fall-Down Detection System Based on Machine Learning for Senior Health Care

Sui, Yongkun 19 October 2015 (has links)
No description available.
3

Detection and Tracking of Human Targets using Ultra-Wideband Radar

Östman, Andreas January 2016 (has links)
The purpose of this thesis was to assess the plausibility of using two Ultra-Wideband radars for detecting and tracking human targets. Detection has been performed by two different types of methods: constant false-alarm rate methods and a type of CLEAN algorithm. For tracking the targets, multiple hypothesis tracking has been studied. Particle filtering has been used for the state prediction to account for the significant uncertainty in the motion model used in this thesis project. The detection and tracking methods have been implemented in MATLAB. Tracking in the cases of a single target and multiple targets has been investigated in simulation and experiment. The simulation results in these cases were compared with accurate ground truth data obtained using a VICON optical tracking system. The detection methods showed poor performance when using data that had been collected by the two radars and post-processed to enhance target features. For single targets, the detections were accurate enough to continuously track a target moving randomly in a controlled area. In the multiple-target cases the tracker was not able to distinguish the multiple moving subjects.
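The abstract mentions particle filtering for state prediction under an uncertain motion model; the sketch below shows one predict/update/resample cycle of a generic bootstrap particle filter. The motion model, noise levels and resampling rule are assumptions for illustration, not the thesis's MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, detection, dt=0.1,
                         process_std=0.5, meas_std=0.2):
    """One predict/update cycle of a bootstrap particle filter with a
    near-constant-velocity motion model and a 2-D position measurement.

    particles: (N, 4) array of [x, y, vx, vy]; modified in place."""
    # Predict: propagate with the motion model plus random velocity noise.
    particles[:, 0:2] += particles[:, 2:4] * dt
    particles[:, 2:4] += rng.normal(0.0, process_std, particles[:, 2:4].shape)
    # Update: reweight by the likelihood of the detected position.
    d2 = np.sum((particles[:, 0:2] - detection) ** 2, axis=1)
    weights *= np.exp(-0.5 * d2 / meas_std ** 2)
    weights += 1e-300                       # avoid an all-zero weight vector
    weights /= weights.sum()
    # Systematic resampling when the effective sample size collapses.
    n = len(weights)
    if 1.0 / np.sum(weights ** 2) < 0.5 * n:
        positions = (rng.random() + np.arange(n)) / n
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
        particles[:] = particles[idx]
        weights[:] = 1.0 / n
    return np.average(particles[:, 0:2], axis=0, weights=weights)

# Usage sketch: seed particles around an initial detection, then call
# particle_filter_step once per new detection to obtain a position estimate.
```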
4

On Development and Performance Evaluation of Some Biosurveillance Methods

Zheng, Hongzhang 09 August 2011 (has links)
This study examines three applications of control charts used for monitoring syndromic data with different characteristics. The first part develops a seasonal autoregressive integrated moving average (SARIMA) based surveillance chart and compares it with the CDC Early Aberration Reporting System (EARS) W2c method using both authentic and simulated data. After successfully removing the long-term trend and the seasonality involved in syndromic data, the performance of the SARIMA approach is shown to be better than that of the EARS method in terms of two key surveillance characteristics: the false alarm rate and the average time to detect outbreaks. In the second part, we propose a generalized likelihood ratio (GLR) control chart to detect a wide range of shifts in the mean of Poisson distributed biosurveillance data. The application of a sign function to the original GLR chart statistics leads to downward-sided, upward-sided, and two-sided GLR chart statistics in a unified framework. To facilitate the use of such charts in practice, we provide detailed guidance on developing and implementing the GLR chart. Under the steady-state framework, this study indicates that the overall GLR chart performance in detecting a range of shifts of interest is superior to the performance of traditional control charts, including the EARS method, Shewhart charts, EWMA charts, and CUSUM charts. Health care related data often involve an excessive number of zeros, and zero-inflated Poisson (ZIP) models are more appropriate than Poisson models for describing such data. The last part of the dissertation considers the GLR chart for ZIP data under a research framework similar to the second part. Because small sample sizes may influence the estimation of ZIP parameters, the efficiency of the maximum likelihood estimators (MLEs) is investigated in depth, followed by suggestions for improvement. Numerical approaches to solving for the MLEs are discussed as well. Statistics for a set of GLR charts are derived, followed by modifications changing them from two-sided statistics to one-sided statistics. Although this is not a complete study of GLR charts for ZIP processes, due to limited time and resources, suggestions for future work are proposed at the end of this dissertation. / Ph. D.
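As a concrete illustration of the GLR idea for Poisson counts described above, the sketch below computes an upward-sided GLR statistic by maximizing the log-likelihood ratio over candidate change points; the in-control mean, window length and signalling limit are illustrative assumptions rather than values from the dissertation.

```python
import numpy as np

def poisson_glr_upward(counts, mu0, window=50):
    """Upward-sided GLR chart statistic for a shift in a Poisson mean.

    For each time t, maximize the log-likelihood ratio over candidate
    change points within the last `window` observations."""
    counts = np.asarray(counts, dtype=float)
    stats = np.zeros(len(counts))
    for t in range(len(counts)):
        best = 0.0
        for k in range(max(0, t - window + 1), t + 1):
            seg = counts[k:t + 1]
            mu_hat = seg.mean()
            if mu_hat <= mu0:              # consider upward shifts only
                continue
            llr = np.sum(seg * np.log(mu_hat / mu0) - (mu_hat - mu0))
            best = max(best, llr)
        stats[t] = best
    return stats

# An outbreak would be signalled when the statistic exceeds a limit chosen
# (e.g. by simulation) to give the desired in-control average run length.
```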
5

Dynamic Probability Control Limits for Risk-Adjusted Bernoulli Cumulative Sum Charts

Zhang, Xiang 12 December 2015 (has links)
The risk-adjusted Bernoulli cumulative sum (CUSUM) chart developed by Steiner et al. (2000) is an increasingly popular tool for monitoring clinical and surgical performance. In practice, however, use of a fixed control limit for the chart leads to quite variable in-control average run length (ARL) performance for patient populations with different risk score distributions. To overcome this problem, simulation-based dynamic probability control limits (DPCLs) are determined patient by patient for the risk-adjusted Bernoulli CUSUM chart in this study. By maintaining the probability of a false alarm at a constant level conditional on no false alarm for previous observations, the risk-adjusted CUSUM charts with DPCLs have consistent in-control performance at the desired level, with approximately geometrically distributed run lengths. Simulation results demonstrate that the proposed method does not rely on any information or assumptions about the patients' risk distributions. The use of DPCLs for risk-adjusted Bernoulli CUSUM charts allows each chart to be designed for the particular sequence of patients for a surgeon or hospital. The effect of estimation error on the performance of the risk-adjusted Bernoulli CUSUM chart with DPCLs is also examined. Our simulation results show that the in-control performance of the risk-adjusted Bernoulli CUSUM chart with DPCLs is affected by estimation error. The most influential factors are the specified desired in-control average run length, the Phase I sample size and the overall adverse event rate. However, the effect of estimation error is uniformly smaller for the risk-adjusted Bernoulli CUSUM chart with DPCLs than for the corresponding chart with a constant control limit under various realistic scenarios. In addition, there is a substantial reduction in the standard deviation of the in-control run length when DPCLs are used. Therefore, use of DPCLs has yet another advantage when designing a risk-adjusted Bernoulli CUSUM chart. This research is the result of joint work with Dr. William H. Woodall (Department of Statistics, Virginia Tech). Moreover, DPCLs are adapted to design the risk-adjusted CUSUM charts for multiresponses developed by Tang et al. (2015). It is shown that the in-control performance of the charts with DPCLs can be controlled for different patient populations because these limits are determined for each specific sequence of patients. Thus, the risk-adjusted CUSUM chart for multiresponses with DPCLs is more practical and should be applied to effectively monitor surgical performance by hospitals and healthcare practitioners. This research is the result of joint work with Dr. William H. Woodall (Department of Statistics, Virginia Tech) and Mr. Justin Loda (Department of Statistics, Virginia Tech). / Ph. D.
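To make the DPCL idea concrete, the sketch below implements the Steiner et al. (2000) risk-adjusted Bernoulli CUSUM weight and a simulation-based estimate of patient-by-patient probability limits; the false-alarm probability, detectable odds ratio and number of simulated paths are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

def cusum_weight(p, y, ra=2.0):
    """Risk-adjusted CUSUM weight for a patient with pre-operative risk p and
    outcome y (1 = adverse event), testing an odds-ratio shift from 1 to ra."""
    base = np.log(1.0 - p + ra * p)
    return (np.log(ra) - base) if y == 1 else -base

def dynamic_probability_limits(risks, alpha=0.001, n_sim=10000, ra=2.0, seed=0):
    """Simulation-based DPCLs: for each patient in the observed risk sequence,
    the limit is the (1 - alpha) quantile of the in-control CUSUM over
    simulated paths that have not yet signalled."""
    rng = np.random.default_rng(seed)
    s = np.zeros(n_sim)                     # simulated in-control chart paths
    alive = np.ones(n_sim, dtype=bool)
    limits = np.zeros(len(risks))
    for t, p in enumerate(risks):
        y = rng.random(n_sim) < p           # in-control outcomes (odds ratio 1)
        w = np.where(y, np.log(ra) - np.log(1 - p + ra * p),
                        -np.log(1 - p + ra * p))
        s[alive] = np.maximum(0.0, s[alive] + w[alive])
        limits[t] = np.quantile(s[alive], 1.0 - alpha)
        alive &= ~(s > limits[t])           # drop paths that would have signalled
    return limits

# To run the chart on observed outcomes: s_t = max(0, s_{t-1} + cusum_weight(p_t, y_t));
# signal if s_t exceeds limits[t] for that patient.
```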
6

Employing Multiple Kernel Support Vector Machines for Counterfeit Banknote Recognition

Su, Wen-pin 29 July 2008 (has links)
Finding an efficient method to detect counterfeit banknotes is imperative. In this study, we propose a multiple kernel weighted support vector machine for counterfeit banknote recognition. A variation of SVM that optimizes the false alarm rate, called FARSVM, is proposed to minimize both the false negative rate and the false positive rate. Each banknote is divided into m × n partitions, and each partition comes with its own kernels. The optimal weight for each kernel matrix in the combination is obtained through the semidefinite programming (SDP) learning method. The amount of time and space required by the original SDP is very demanding. We focus on this framework and adopt two strategies to reduce the time and space requirements. The first strategy is to assume the non-negativity of kernel weights, and the second strategy is to set the sum of weights equal to 1. Experimental results show that regions with zero kernel weights are easy to imitate with today's digital imaging technology, while regions with nonzero kernel weights are difficult to imitate. In addition, these results show that the proposed approach outperforms single kernel SVM and standard SVM with SDP on Taiwanese banknotes.
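A minimal sketch of the multiple-kernel idea follows: per-partition kernels are combined with non-negative weights that sum to one and fed to an SVM with a precomputed kernel. The weights here are fixed, illustrative values rather than weights learned by SDP, and the feature matrices are hypothetical placeholders.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
from sklearn.svm import SVC

def combined_kernel(Xa, Xb, weights=(0.6, 0.4), gamma=0.5):
    """Convex combination of an RBF and a linear kernel over banknote
    partition features (weights are non-negative and sum to 1)."""
    w_rbf, w_lin = weights
    return w_rbf * rbf_kernel(Xa, Xb, gamma=gamma) + w_lin * linear_kernel(Xa, Xb)

# Hypothetical feature matrices: one row per banknote, columns are
# per-partition features; y = 1 genuine, 0 counterfeit.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 32)), rng.integers(0, 2, 200)
X_test = rng.normal(size=(20, 32))

clf = SVC(kernel="precomputed", C=1.0)
clf.fit(combined_kernel(X_train, X_train), y_train)
pred = clf.predict(combined_kernel(X_test, X_train))
```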
7

CONSTANT FALSE ALARM RATE PERFORMANCE OF SOUND SOURCE DETECTION WITH TIME DELAY OF ARRIVAL ALGORITHM

Wang, Xipeng 01 January 2017 (has links)
Time Delay of Arrival (TDOA) based algorithms and Steered Response Power (SRP) based algorithms are the two most commonly used methods for sound source detection and localization. SRP is more robust under high reverberation and multi-target conditions, while TDOA is less computationally intensive. This thesis introduces a modified TDOA algorithm, TDOA delay table search (TDOA-DTS), that has more stable performance than the original TDOA and requires only 4% of the SRP computation load for a 3-dimensional space of a typical room. A 2-step adaptive thresholding procedure is used, based on a Weibull distribution for the noise peaks of the cross-correlations and a binomial distribution for combining potential peaks over all microphone pairs for the final detection. The first threshold limits the potential target peaks in the microphone-pair cross-correlations with a user-defined false-alarm (FA) rate. The initial false-positive peak rate can be set to a higher level than desired for the final FA target rate, so that high accuracy is not required of the probability distribution model (where model errors do not greatly impact FA rates, as they matter most for thresholds set deep into the tail of the curve). The final FA rate can be lowered to the desired value using an M out of N (MON) rule on significant correlation peaks from different microphone pairs associated with a point in the space of interest. The algorithm is tested with simulated and real recorded data to verify that the resulting FA rates are consistent with the user-defined rates down to 10^-6.
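To illustrate the first-stage thresholding and the M-out-of-N fusion described above, the sketch below fits a Weibull model to cross-correlation magnitudes from one microphone pair and combines per-pair detections; the fitting strategy, false-alarm rate and fusion threshold are illustrative assumptions, not the exact TDOA-DTS procedure.

```python
import numpy as np
from scipy.signal import correlate
from scipy.stats import weibull_min

def pair_detections(sig_a, sig_b, fa_rate=1e-3):
    """Cross-correlate one microphone pair and flag lags whose correlation
    magnitude exceeds a threshold set from a Weibull fit to the noise peaks;
    fa_rate is the user-defined per-pair false-alarm rate."""
    cc = np.abs(correlate(sig_a, sig_b, mode="full"))
    # Fit a Weibull model to the bulk of the correlation magnitudes
    # (treated as noise) and threshold at the 1 - fa_rate quantile.
    c, loc, scale = weibull_min.fit(cc[cc > 0], floc=0.0)
    threshold = weibull_min.ppf(1.0 - fa_rate, c, loc=loc, scale=scale)
    return cc > threshold

def m_of_n_fusion(pair_hits, m=3):
    """Declare a final detection for a candidate point only if at least m of
    the n microphone pairs produced a significant peak at the lags that
    correspond to that point (pair_hits: n_pairs x n_candidates booleans)."""
    return np.sum(pair_hits, axis=0) >= m
```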
8

Radar Target Detection In Non-gaussian Clutter

Doyuran, Ulku 01 September 2007 (has links) (PDF)
In this study, novel methods for high-resolution radar target detection in a non-Gaussian clutter environment are proposed. Two approaches are used to solve the problem: non-coherent detection, which operates on the envelope-detected signal for thresholding, and coherent detection, which performs clutter suppression, Doppler processing and thresholding at the same time. The proposed non-coherent detectors, which are designed to operate in non-Gaussian and range-heterogeneous clutter, yield higher performance than the conventional methods designed for either Gaussian or heterogeneous clutter. The proposed coherent detector exploits the information in all the range cells and pulses and performs the clutter reduction and thresholding simultaneously. The design is performed for uncorrelated, partially correlated and fully correlated clutter among range cells. The performance analysis indicates the superiority of the designed methods over the classical ones in fully correlated and partially correlated situations. In addition, by designing detectors for multiple targets and making corrections to the conventional methods, the target-masking problem of the classical detectors is alleviated.
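For comparison with the conventional detectors mentioned above, the sketch below shows an ordered-statistic CFAR, a classical non-coherent scheme often used in heterogeneous clutter; the window sizes, order statistic and scaling factor are illustrative assumptions, not the detectors designed in the thesis.

```python
import numpy as np

def os_cfar(signal, num_train=16, num_guard=2, k=12, alpha=8.0):
    """Ordered-statistic CFAR: the noise-level estimate for each cell under
    test is the k-th smallest of the surrounding training cells, which makes
    the detector robust to interfering targets and clutter edges.

    alpha would normally be derived from the desired false-alarm probability
    and the assumed clutter model."""
    n = len(signal)
    half = num_train // 2
    hits = np.zeros(n, dtype=bool)
    for i in range(half + num_guard, n - half - num_guard):
        train = np.concatenate([
            signal[i - num_guard - half : i - num_guard],
            signal[i + num_guard + 1 : i + num_guard + 1 + half],
        ])
        noise = np.sort(train)[k - 1]       # k-th order statistic
        hits[i] = signal[i] > alpha * noise
    return hits
```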
9

Techniques statistiques de détection de cibles dans des images infrarouges inhomogènes en milieu maritime / Statistical techniques for target detection in inhomogeneous infrared images in a maritime environment

Vasquez, Emilie 11 January 2011 (has links)
Statistical techniques are developed for detecting point targets in the sky or resolved targets in the sea in panoramic infrared surveillance images. These techniques are adapted to the inhomogeneities present in this kind of image. They rely only on analysis of the spatial information and aim to control the false alarm rate in each image. For sky areas, a joint segmentation and detection technique adapted to spatial variations of the mean luminosity is developed, and the performance improvement it yields is analyzed. For sea areas, an edge detector with a constant false alarm rate in the presence of inhomogeneities and spatial correlations of the grey levels is developed and characterized. In each case, accounting for the inhomogeneities in the statistical algorithms proves essential to control the false alarm rate and improve the detection performance.
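As a generic illustration of an edge detector whose response does not depend on the local mean level, the sketch below computes a ratio-of-means edge map; this is a textbook-style example of the idea, not the detector developed in the thesis, and the window size and final threshold would be set from the desired false alarm rate.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ratio_edge_map(image, w=5):
    """Ratio-of-means edge detector (horizontal direction): for each pixel,
    compare the mean of the w columns to its left with the mean of the w
    columns to its right; max(r, 1/r) is large on edges and roughly
    insensitive to the local mean grey level."""
    img = image.astype(float) + 1e-9          # avoid division by zero
    means = uniform_filter(img, size=(1, w))  # local row-wise mean
    left = np.roll(means, w // 2 + 1, axis=1)      # window left of the pixel
    right = np.roll(means, -(w // 2 + 1), axis=1)  # window right of the pixel
    # Note: np.roll wraps at the image borders; a real detector would mask them.
    ratio = left / right
    return np.maximum(ratio, 1.0 / ratio)
```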
10

Contributions to Profile Monitoring and Multivariate Statistical Process Control

Williams, James Dickson 14 December 2004 (has links)
The content of this dissertation is divided into two main topics: 1) nonlinear profile monitoring and 2) an improved approximate distribution for the T² statistic based on the successive differences covariance matrix estimator.

Part 1: Nonlinear Profile Monitoring

In an increasing number of cases the quality of a product or process cannot adequately be represented by the distribution of a univariate quality variable or the multivariate distribution of a vector of quality variables. Rather, a series of measurements are taken across some continuum, such as time or space, to create a profile. The profile determines the product quality at that sampling period. We propose Phase I methods to analyze profiles in a baseline dataset where the profiles can be modeled through either a parametric nonlinear regression function or a nonparametric regression function. We illustrate our methods using data from Walker and Wright (2002) and dose-response data from DuPont Crop Protection.

Part 2: Approximate Distribution of T²

Although the T² statistic based on the successive differences estimator has been shown to be effective in detecting a shift in the mean vector (Sullivan and Woodall (1996) and Vargas (2003)), the exact distribution of this statistic is unknown. An accurate upper control limit (UCL) for the T² chart based on this statistic depends on knowing its distribution. Two approximate distributions have been proposed in the literature. We demonstrate the inadequacy of these two approximations and derive useful properties of this statistic. We give an improved approximate distribution and recommendations for its use. / Ph. D.
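As background for Part 2, a minimal sketch of the T² statistics computed with the successive-differences covariance estimator of Sullivan and Woodall (1996) is given below; the upper control limit itself is not shown, since choosing it is precisely what the improved approximate distribution addresses.

```python
import numpy as np

def t2_successive_differences(X):
    """Phase I T² statistics using the successive-differences covariance
    estimator S_D = (1 / (2(m-1))) * sum_i (x_{i+1} - x_i)(x_{i+1} - x_i)^T.

    X is an (m, p) array of individual multivariate observations."""
    X = np.asarray(X, dtype=float)
    m, p = X.shape
    diffs = np.diff(X, axis=0)                      # (m-1, p) successive differences
    s_d = diffs.T @ diffs / (2.0 * (m - 1))
    centered = X - X.mean(axis=0)
    # T²_i = (x_i - x̄)' S_D^{-1} (x_i - x̄) for each observation i.
    t2 = np.einsum("ij,jk,ik->i", centered, np.linalg.inv(s_d), centered)
    return t2

# An upper control limit for these statistics would come from the improved
# approximate distribution derived in the dissertation (or from simulation).
```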
