41

MONITORING AUTOCORRELATED PROCESSES

Tang, Weiping (has links)
This thesis was submitted by Weiping Tang on August 2, 2011. / Several control schemes for monitoring process mean shifts, including the cumulative sum (CUSUM), weighted cumulative sum (WCUSUM), adaptive cumulative sum (ACUSUM) and exponentially weighted moving average (EWMA) schemes, perform well in detecting constant process mean shifts. However, a variety of dynamic mean shifts frequently occur, and few control schemes work efficiently in these situations because of the limited window for catching shifts, particularly when the mean decreases rapidly. This is precisely the case when one uses the residuals from autocorrelated data to monitor the process mean, a pattern often referred to as forecast recovery. This thesis focuses on detecting a shift in the mean of a time series when a forecast-recovery dynamic pattern is observed in the mean of the residuals. Specifically, we examine in detail several particular cases of the autoregressive integrated moving average (ARIMA) time series models. We introduce a new upper-sided control chart based on the EWMA scheme combined with the fast initial response (FIR) feature. To assess chart performance we use the well-established average run length (ARL) criterion, and a non-homogeneous Markov chain method is developed to compute the ARL of the proposed chart. We show numerically that the proposed procedure performs as well as or better than the WCUSUM chart introduced by Shu, Jiang and Tsui (2008), and better than the conventional CUSUM, ACUSUM and generalized likelihood ratio test (GLRT) charts. The methods are illustrated on molecular weight data from a polymer manufacturing process. / Master of Science (MSc)
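As an illustration of the kind of chart this abstract describes, the following Python sketch runs an upper-sided EWMA statistic with an FIR headstart on residuals that follow the forecast-recovery pattern of an AR(1) process after a step mean shift, and estimates the ARL by Monte Carlo simulation rather than by the thesis's non-homogeneous Markov chain method. The reflected-at-zero form of the one-sided statistic and all parameter values (lam, h, headstart, phi, delta) are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

def arl_upper_ewma_fir(lam=0.1, h=0.5, headstart=0.25, phi=0.5, delta=1.0,
                       n_rep=2000, max_len=100000, seed=0):
    """Monte Carlo ARL of an upper-sided EWMA chart with an FIR headstart,
    run on AR(1) residuals after a step mean shift of size delta at t = 1."""
    rng = np.random.default_rng(seed)
    run_lengths = []
    for _ in range(n_rep):
        z = headstart  # FIR feature: start the statistic above zero
        for t in range(1, max_len + 1):
            # Forecast recovery: the first residual carries the full shift,
            # later residuals only delta * (1 - phi).
            mu_t = delta if t == 1 else delta * (1.0 - phi)
            e_t = mu_t + rng.standard_normal()
            z = max(0.0, (1.0 - lam) * z + lam * e_t)  # one-sided, reflected at 0
            if z > h:
                run_lengths.append(t)
                break
        else:
            run_lengths.append(max_len)
    return float(np.mean(run_lengths))

print(arl_upper_ewma_fir())  # smaller values = faster detection of the shift
```

Setting delta=0 in the same routine estimates the in-control ARL, so the threshold h can be calibrated before the out-of-control comparison is made.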
42

Classification non supervisée de données spatio-temporelles multidimensionnelles : Applications à l’imagerie / Multidimensional spatio-temporal data clustering, with applications to imaging

Mure, Simon 02 December 2016 (has links)
Avec l'augmentation considérable d'acquisitions de données temporelles dans les dernières décennies, comme les systèmes GPS, les séquences vidéo ou les suivis médicaux de pathologies, le besoin en algorithmes de traitement et d'analyse efficaces d'acquisitions longitudinales n'a fait qu'augmenter. Dans cette thèse, nous proposons une extension du formalisme mean-shift, classiquement utilisé en traitement d'images, pour le groupement de séries temporelles multidimensionnelles. Nous proposons aussi un algorithme de groupement hiérarchique des séries temporelles basé sur la mesure de dynamic time warping, afin de prendre en compte les déphasages temporels. Ces choix ont été motivés par la nécessité d'analyser des images acquises en imagerie par résonance magnétique sur des patients atteints de sclérose en plaques. Cette maladie est encore très méconnue, tant dans sa genèse que dans les causes des handicaps qu'elle peut induire. De plus, aucun traitement efficace n'est connu à l'heure actuelle. Le besoin de valider des hypothèses sur les lésions de sclérose en plaques nous a conduits à proposer des méthodes de groupement de séries temporelles ne nécessitant pas d'a priori sur le résultat final, méthodes encore peu développées en traitement d'images. / Due to the dramatic increase of longitudinal acquisitions in the past decades, such as video sequences, global positioning system (GPS) tracking or medical follow-up, many applications for time-series data mining have been developed. Unsupervised time-series data mining has thus become highly relevant, with the aim of automatically detecting and identifying similar temporal patterns between time series. In this work, we propose a new spatio-temporal filtering scheme based on the mean-shift procedure, a state-of-the-art approach in the field of image processing, which clusters multivariate spatio-temporal data. We also propose a hierarchical time-series clustering algorithm based on the dynamic time warping measure that identifies similar but asynchronous temporal patterns. Our choices were motivated by the need to analyse magnetic resonance images acquired on people affected by multiple sclerosis. The genetic and environmental factors triggering and governing the disease's evolution, as well as the occurrence and evolution of individual lesions, are still mostly unknown and under intense investigation. There is therefore a strong need for new methods allowing automatic extraction and quantification of lesion characteristics. This motivated our work on time-series clustering methods, which are not yet widely used in image processing and allow image sequences to be processed without prior knowledge of the final results.
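To make the clustering idea concrete, here is a minimal Python sketch of hierarchical clustering of time series under the dynamic time warping measure: it groups two asynchronous sinusoids together while separating a flat series. It illustrates the general technique only, not the thesis's implementation, and omits the spatio-temporal mean-shift extension entirely; the toy signals and linkage method are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

# Toy data: two time-shifted copies of the same pattern plus a flat series.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
series = [np.sin(2 * np.pi * t), np.sin(2 * np.pi * (t - 0.1)), np.zeros_like(t)]
series = [s + 0.05 * rng.standard_normal(t.size) for s in series]

dist = np.zeros((3, 3))
for i in range(3):
    for j in range(i + 1, 3):
        dist[i, j] = dist[j, i] = dtw(series[i], series[j])

Z = linkage(squareform(dist), method="average")   # hierarchical clustering
print(fcluster(Z, t=2, criterion="maxclust"))     # e.g. [1 1 2]: shifted sines merge
```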
43

On the formulation of the alternative hypothesis for geodetic outlier detection / Über die Formulierung der Alternativhypothese für die geodätische Ausreißererkennung

Lehmann, Rüdiger 24 July 2014 (has links) (PDF)
The concept of outlier detection by statistical hypothesis testing in geodesy is briefly reviewed. The performance of such tests can only be measured or optimized with respect to a proper alternative hypothesis. Firstly, we discuss the important question of whether gross errors should be treated as non-random quantities or as random variables. In the first case, the alternative hypothesis must be based on the common mean shift model, while in the second case, the variance inflation model is appropriate. Secondly, we review possible formulations of alternative hypotheses (inherent, deterministic, slippage, mixture) and discuss their implications. As measures of the optimality of an outlier detection method, we propose the premium and the protection, which are briefly reviewed. Finally, we work out a practical example, the fit of a straight line, which demonstrates the impact of the choice of alternative hypothesis on outlier detection. / Das Konzept der Ausreißererkennung durch statistische Hypothesentests in der Geodäsie wird kurz überblickt. Die Leistungsfähigkeit solch eines Tests kann nur in Bezug auf eine geeignete Alternativhypothese gemessen oder optimiert werden. Als Erstes diskutieren wir die wichtige Frage, ob grobe Fehler als nicht-zufällige oder zufällige Größen behandelt werden sollten. Im ersten Fall muss die Alternativhypothese auf das Mean-Shift-Modell gegründet werden, im zweiten Fall ist das Variance-Inflation-Modell passend. Als Zweites stellen wir mögliche Formulierungen von Alternativhypothesen zusammen und diskutieren ihre Implikationen. Als Optimalitätsmaß schlagen wir das Premium-Protection-Maß vor, welches kurz überblickt wird. Schließlich arbeiten wir ein praktisches Beispiel aus, die Anpassung einer ausgleichenden Geraden; es zeigt die Auswirkung der Wahl der Alternativhypothese auf die Ausreißererkennung.
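The mean-shift alternative can be illustrated with a standard data-snooping test on the paper's straight-line example. The sketch below assumes unit weights and a known a-priori standard deviation sigma0; it is a generic normalized-residual (w-)test illustration, not the paper's premium/protection computation.

```python
import numpy as np
from scipy.stats import norm

def mean_shift_outlier_test(x, y, sigma0=1.0, alpha=0.001):
    """w-test for a single gross error in a straight-line fit under the
    mean shift model, with unit weights and known sigma0."""
    A = np.column_stack([np.ones_like(x), x])   # design matrix of the line
    H = A @ np.linalg.solve(A.T @ A, A.T)       # hat matrix
    v = y - H @ y                               # least-squares residuals
    qvv = 1.0 - np.diag(H)                      # redundancy parts (unit weights)
    w = v / (sigma0 * np.sqrt(qvv))             # ~ N(0, 1) under H0
    return np.flatnonzero(np.abs(w) > norm.ppf(1.0 - alpha / 2.0)), w

x = np.arange(10.0)
y = 2.0 + 0.5 * x + np.random.default_rng(1).normal(0.0, 1.0, 10)
y[4] += 6.0                                     # inject one gross error
flagged, w = mean_shift_outlier_test(x, y)
print(flagged)                                  # observation 4 should be flagged
```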
44

Contributions to Mean Shift filtering and segmentation: Application to MRI ischemic data / Contributions au filtrage Mean Shift et à la segmentation : application à l’ischémie cérébrale en imagerie IRM

Li, Thing 04 April 2012 (has links)
De plus en plus souvent, les études médicales utilisent simultanément de multiples modalités d'acquisition d'images, produisant ainsi des données multidimensionnelles comportant beaucoup d'informations supplémentaires, dont l'interprétation et le traitement deviennent délicats. Par exemple, les études sur l'ischémie cérébrale qui se basent sur la combinaison de plusieurs images IRM, provenant de différentes séquences d'acquisition, pour prédire l'évolution de la zone nécrosée donnent de bien meilleurs résultats que celles basées sur une seule image. Ces approches nécessitent cependant l'utilisation d'algorithmes plus complexes pour réaliser les opérations de filtrage, de segmentation et de clustering. Une approche robuste pour répondre à ces problèmes de traitement de données multidimensionnelles est le Mean Shift, qui est basé sur l'analyse de l'espace des caractéristiques et l'estimation non paramétrique par noyau de la densité de probabilité. Dans cette thèse, nous étudions les paramètres qui influencent les résultats du Mean Shift et nous cherchons à optimiser leur choix. Nous examinons notamment l'effet du bruit et du flou dans l'espace des caractéristiques et la manière dont le Mean Shift doit être paramétré pour être optimal pour le débruitage et la réduction du flou. Le grand succès du Mean Shift est principalement dû au réglage intuitif des paramètres de la méthode, qui représentent l'échelle à laquelle le Mean Shift analyse chacune des caractéristiques. En nous basant sur la méthode du Plug-In (PI) monodimensionnel, fréquemment utilisée pour le filtrage Mean Shift et permettant, dans le cadre de l'estimation non paramétrique par noyau, d'approximer le paramètre d'échelle optimal, nous proposons l'utilisation du PI multidimensionnel pour le filtrage Mean Shift. Nous évaluons l'intérêt des matrices d'échelle diagonales et pleines calculées à partir des règles du PI sur des images de synthèse et naturelles. Enfin, nous proposons une méthode de segmentation automatique et volumique combinant le filtrage Mean Shift et la croissance de région, ainsi qu'une optimisation basée sur les cartes de probabilité. Cette approche est d'abord étudiée sur des images IRM synthétisées. Des tests sur des données réelles issues d'études sur l'ischémie cérébrale chez le rat et l'homme sont aussi conduits pour déterminer l'efficacité de l'approche à prédire l'évolution de la zone de pénombre plusieurs jours après l'accident vasculaire, à partir des IRM réalisées peu de temps après sa survenue. Par rapport aux segmentations manuelles réalisées par des experts médicaux plusieurs jours après l'accident, les résultats obtenus par notre approche sont mitigés : alors qu'une segmentation parfaite conduirait à un coefficient DICE de 1, le coefficient est de 0.8 pour l'étude chez le rat et de 0.53 pour l'étude chez l'homme. Toujours en utilisant le coefficient DICE, nous déterminons la combinaison d'images IRM conduisant à la meilleure prédiction. / Medical studies increasingly use multi-modality imaging, producing multidimensional data that bring additional information but are also challenging to process and interpret. For example, for predicting salvageable tissue, ischemic studies that combine several MRI modalities (DWI, PWI) produce more conclusive results than studies using a single modality. However, the multi-modality approach requires more advanced algorithms to perform otherwise routine image processing tasks such as filtering, segmentation and clustering. A robust method for addressing these multidimensional data processing problems is Mean Shift, which is based on feature space analysis and non-parametric kernel density estimation and can be used for multidimensional filtering, segmentation and clustering. In this thesis, we seek to optimize the Mean Shift process by analyzing the factors that influence it and optimizing its parameters. We examine the effect of noise and blur in the feature space and how Mean Shift can be tuned for optimal de-noising and blur reduction. The large success of Mean Shift is mainly due to the intuitive tuning of its bandwidth parameters, which describe the scale at which each feature is analyzed. Based on the univariate Plug-In (PI) bandwidth selector of kernel density estimation, we propose a bandwidth matrix estimation method based on the multivariate PI for Mean Shift filtering, and we study the interest of diagonal and full bandwidth matrices in experiments on synthesized and natural images. We then propose a new automatic volume-based segmentation framework combining Mean Shift filtering, region growing segmentation and probability map optimization. The framework is first evaluated on synthesized MRI images, on which it yields a perfect segmentation with a DICE similarity value of 1. Testing is then extended to real MRI data from rats and patients, with the aim of predicting the evolution of the ischemic penumbra several days after the onset of ischemia using only information from the very first scans. Compared with reference images manually segmented by a team of medical experts, the results are an average DICE of 0.8 for the rat scans and 0.53 for the patient scans. In addition, the most relevant combination of MRI modalities is determined.
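The DICE similarity coefficient used to score these segmentations is easy to state and compute: DICE = 2|A∩B| / (|A| + |B|) for binary masks A and B. A small sketch with toy masks (not the thesis's MRI data):

```python
import numpy as np

def dice(seg, ref):
    """DICE similarity of two binary masks: 1.0 means perfect overlap."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True       # predicted lesion mask
expert = np.zeros((8, 8), bool); expert[3:7, 2:6] = True   # expert reference mask
print(round(dice(pred, expert), 3))                        # 0.75 for these toy masks
```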
45

Detekce a sledování objektů pomocí význačných bodů / Object Detection and Tracking Using Interest Points

Bílý, Vojtěch January 2012 (has links)
This paper deals with object detection and tracking using interest points. Existing approaches are reviewed, and a novel method based on the generalized Hough transform and iterative Hough-space searching is proposed. The generality of the proposed detector is demonstrated on various types of objects. Object tracking is designed as frame-by-frame detection.
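As a sketch of the underlying technique, the following Python code builds a Generalized Hough transform R-table and votes in Hough space; finding the accumulator peak, suppressing its neighbourhood, and repeating corresponds to the iterative Hough-space searching mentioned above. The circle template, bin count, and coordinate conventions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def build_r_table(points, ref, orientations, n_bins=36):
    """R-table: for each quantized edge orientation, store the displacement
    vectors from the edge point to the reference point."""
    table = [[] for _ in range(n_bins)]
    for (x, y), theta in zip(points, orientations):
        b = int(theta / (2.0 * np.pi) * n_bins) % n_bins
        table[b].append((ref[0] - x, ref[1] - y))
    return table

def ght_vote(points, orientations, table, shape, n_bins=36):
    """Accumulate votes for the reference-point location in the target image."""
    acc = np.zeros(shape, dtype=np.int32)
    for (x, y), theta in zip(points, orientations):
        b = int(theta / (2.0 * np.pi) * n_bins) % n_bins
        for dx, dy in table[b]:
            u, v = x + dx, y + dy
            if 0 <= u < shape[1] and 0 <= v < shape[0]:
                acc[v, u] += 1
    return acc

# Toy demo: a circle template; the edge orientation at a point is its radial angle.
angles = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
template = [(int(50 + 20 * np.cos(a)), int(50 + 20 * np.sin(a))) for a in angles]
table = build_r_table(template, (50, 50), angles)
target = [(x + 15, y + 10) for (x, y) in template]     # same shape, shifted
acc = ght_vote(target, angles, table, (120, 120))
print(np.unravel_index(acc.argmax(), acc.shape))       # peak (row, col) = (60, 65)
# Iterative Hough-space search: take the peak, suppress its neighbourhood, repeat.
```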
47

Large Eddy Simulation/Transported Probability Density Function Modeling of Turbulent Combustion: Model Advancement and Applications

Pei Zhang 16 August 2019 (has links)
Studies of turbulent combustion have in the past mainly focused on single-regime combustion. In practical combustion systems, however, combustion rarely occurs in a single regime, and different regimes of combustion can be observed in the same system. This creates a significant gap between our existing knowledge of single-regime combustion and the practical need to model multi-regime combustion. In this work, we aim to extend traditional single-regime combustion models to problems involving different regimes of combustion. Among the existing modeling methods, the transported Probability Density Function (PDF) method is attractive for its intrinsic closure of detailed chemical kinetics and has been demonstrated to be promising in predicting low-probability but practically important combustion events such as local extinction and re-ignition. In this work, we focus on the assessment and advancement of the Large Eddy Simulation (LES)/PDF method for predicting turbulent multi-regime combustion.

Two combustion benchmark problems are considered for the model assessment. One is a recently designed turbulent piloted jet flame featuring statistically transient processes, the Sydney turbulent pulsed piloted jet flame. A direct comparison of the predicted and measured time series of the axial velocity demonstrates a satisfactory prediction of the flow and turbulence fields of the pulsed jet flame by the employed LES/PDF modeling method. A comparison of the PLIF-OH images and the predicted OH mass fraction contours at a few selected times shows that the method captures the different combustion stages in the statistically transient process, including healthy burning, significant extinction, and the re-establishment of healthy burning. The temporal history of the conditional PDF of OH mass fraction and temperature near stoichiometric conditions at different axial locations suggests that the method predicts the extinction and re-establishment timings accurately at upstream locations but less accurately at downstream locations, with a delay in the re-establishment of burning. The other test case is a unified series of existing turbulent piloted flames. To facilitate model assessment across different combustion regimes, we develop a model validation framework by unifying several existing pilot-stabilized turbulent jet flames in different combustion regimes. The characteristic similarities and differences of the employed piloted flames are examined, including the Sydney piloted flames L, B, and M, the Sandia piloted flames D, E, and F, a series of piloted premixed Bunsen flames, and the Sydney/Sandia inhomogeneous-inlet piloted jet flames. Proper parameterization and a regime diagram are introduced to characterize the pilot-stabilized flames covering non-premixed, partially premixed, and premixed flames. A preliminary model assessment examines the simultaneous performance of the LES/PDF method for the piloted jet flames across the different combustion regimes.

The assessment on these two test cases shows that the LES/PDF method can predict statistically transient and multi-regime combustion reasonably well, but some modeling limitations are also identified, so further model advancement is needed. In this work, we focus on two model advancement studies related to molecular diffusion and sub-filter-scale mixing in turbulent combustion. The first study deals with differential molecular diffusion (DMD) among different species. The importance of DMD effects on combustion has been observed in many applications, yet most previous combustion models assume equal molecular diffusivities. To incorporate the DMD effects accurately, we develop a model called the Variance Consistent Mean Shift (VCMS) model. The second study focuses on sub-filter-scale mixing in high-Karlovitz-number (high-Ka) turbulent combustion. We analyze DNS data of a Sandia high-Ka premixed jet flame to gain insight into the modeling of sub-filter-scale mixing. A sub-filter-scale mixing time scale is analyzed with respect to the filter size to examine the validity of a power-law scaling model for the mixing time scale.
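The power-law scaling check mentioned at the end amounts to fitting an exponent in log-log coordinates. A minimal sketch on synthetic data (the values and the exponent are made up for illustration; the thesis fits statistics extracted from filtered DNS fields):

```python
import numpy as np

delta = np.array([2.0, 4.0, 8.0, 16.0, 32.0])    # filter widths (arbitrary units)
tau = 0.3 * delta ** 0.7                          # synthetic mixing time scales
tau *= np.exp(np.random.default_rng(2).normal(0.0, 0.02, delta.size))  # noise

n_hat, logC = np.polyfit(np.log(delta), np.log(tau), 1)  # fit tau = C * delta^n
print(f"fitted exponent n = {n_hat:.3f}, prefactor C = {np.exp(logC):.3f}")
# Systematic curvature in the log-log residuals would mean a single power law
# does not hold across the whole range of filter sizes.
```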
48

從貝氏觀點診斷離群值及具有影響力之觀察值 / Some diagnostics for outliers and influential observations from Bayesian point of view

謝季英, Shieh, Jih Ing Unknown Date (has links)
在線性迴歸分析中，資料的不適當常導致研究者選擇了不當的模式。為避免此缺失，在分析資料前須先做好診斷工作。本文中將從貝氏觀點提出一些不同的診斷方法以供參考。首先推導出均數移動參數 a = (a_1, …, a_k)′ 的事後分配，並利用 a′a/k 的事後均數診斷出不當資料點。接著，考慮在個別模式下以 β 事後分配之總變異及廣義變異為標準，診斷出離群值及具有潛在影響力之觀測值。最後，分別利用 (i) β 的事後分配、(ii) σ² 的事後分配、(iii) (β, σ²) 的聯合事後分配，推導出對應的對稱均方差以做為診斷標準。 / In this thesis, some diagnostic methodologies for outliers and influential observations are proposed from a Bayesian point of view. We first derive the marginal posterior distribution of the mean-shift parameter a = (a_1, …, a_k)′ and use the posterior mean of a′a/k to detect spurious data items. Secondly, we use the posterior total variance and generalized variance of β as diagnostic criteria for outliers and influential observations. Finally, we utilize (i) the posterior distribution of β, (ii) the posterior distribution of σ², and (iii) the joint posterior distribution of (β, σ²) to derive their corresponding symmetric mean square differences, which can be used as diagnostic criteria.
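A hedged sketch of the second diagnostic idea follows: case-deletion influence measured through the posterior total variance (trace) and generalized variance (determinant) of β, under a vague prior with a plug-in error variance. This is an illustration of the criterion, not the thesis's exact derivation.

```python
import numpy as np

def posterior_variance_diagnostics(X, y):
    """For a normal linear model with a vague prior, approximately
    beta | y ~ N(beta_hat, s2 * (X'X)^{-1}).  For each deleted case i,
    report the trace (total variance) and determinant (generalized
    variance) of the posterior covariance from the remaining data."""
    n, p = X.shape
    stats = []
    for i in range(n):
        keep = np.arange(n) != i
        Xi, yi = X[keep], y[keep]
        beta = np.linalg.lstsq(Xi, yi, rcond=None)[0]
        s2 = np.sum((yi - Xi @ beta) ** 2) / (n - 1 - p)  # plug-in variance
        cov = s2 * np.linalg.inv(Xi.T @ Xi)
        stats.append((np.trace(cov), np.linalg.det(cov)))
    return np.array(stats)  # rows that stand out flag influential cases

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(20), rng.normal(size=20)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=20)
y[0] += 8.0                          # inject a suspicious observation
d = posterior_variance_diagnostics(X, y)
print(d[0], d[1:].mean(axis=0))      # deleting case 0 shrinks both variances sharply
```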
49

Contributions to quality improvement methodologies and computer experiments

Tan, Matthias H. Y. 16 September 2013 (has links)
This dissertation presents novel methodologies for five problem areas in modern quality improvement and computer experiments: selective assembly, robust design with computer experiments, multivariate quality control, model selection for split-plot experiments, and construction of minimax designs.

Selective assembly has traditionally been used to achieve tight specifications on the clearance of two mating parts. Chapter 1 proposes generalizations of the selective assembly method to assemblies with any number of components and any assembly response function, called generalized selective assembly (GSA). Two variants of GSA are considered: direct selective assembly (DSA) and fixed bin selective assembly (FBSA). In DSA and FBSA, the problem of matching a batch of N components of each type to give N assemblies that minimize quality cost is formulated as an axial multi-index assignment problem and a transportation problem, respectively. Realistic examples show that GSA can significantly improve the quality of assemblies.

Chapter 2 proposes methods for robust design optimization with time-consuming computer simulations. Gaussian process models are widely employed for modeling responses as a function of control and noise factors in computer experiments. In these experiments, robust design optimization is often based on the average quadratic loss computed as if the posterior mean were the true response function, which can give misleading results. We propose optimization criteria derived by taking the expectation of the average quadratic loss with respect to the posterior predictive process, and methods based on the Lugannani-Rice saddlepoint approximation for constructing accurate credible intervals for the average loss. These quantities allow response surface uncertainty to be taken into account in the optimization.

Chapter 3 proposes a Bayesian method for identifying mean shifts in multivariate normally distributed quality characteristics. Multivariate quality characteristics are often monitored using a few summary statistics; however, to determine the causes of an out-of-control signal, information about which means shifted and in which directions is often needed. We propose a Bayesian approach that gives this information. For each mean, an indicator variable indicates whether the mean shifted upwards, shifted downwards, or remained unchanged. Default prior distributions are proposed, and mean shift identification is based on the modes of the posterior distributions of the indicators, which are determined via Gibbs sampling.

Chapter 4 proposes a Bayesian method for model selection in fractionated split-plot experiments. We employ a Bayesian hierarchical model that takes into account the split-plot error structure, and derive expressions for the posterior model probability and other important posterior quantities that require evaluation of at most two one-dimensional integrals. A novel algorithm, called combined global and local search, is proposed to find models with high posterior probabilities and to estimate posterior model probabilities. The method is illustrated with the analysis of three real robust design experiments, and simulation studies demonstrate that it performs well.

The problem of choosing a design that is representative of a finite candidate set is important in computer experiments. The minimax criterion measures the degree of representativeness because it is the maximum distance of a candidate point to the design. Chapter 5 proposes algorithms for finding minimax designs for finite design regions. We establish the relationship between minimax designs and the classical set covering location problem in operations research, which is a binary linear program. We prove that the set of minimax distances is the set of discontinuities of the function that maps the covering radius to the optimal objective function value, and that optimal solutions at the discontinuities are minimax designs. These results are used to design efficient procedures for finding globally optimal minimax and near-minimax designs.
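For the classical two-component case, the DSA formulation reduces to a linear assignment problem, which the following sketch solves with scipy's Hungarian-algorithm routine. The dimensions, tolerances, and quadratic quality-loss cost are illustrative assumptions; with more than two component types the problem becomes the harder axial multi-index assignment mentioned above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(4)
N, target = 8, 0.05                            # assemblies wanted, target clearance
shafts = 10.00 + rng.normal(0.0, 0.01, N)      # measured shaft diameters
holes = 10.05 + rng.normal(0.0, 0.01, N)       # measured hole diameters

# Quadratic quality loss of pairing shaft i with hole j.
cost = (holes[None, :] - shafts[:, None] - target) ** 2
rows, cols = linear_sum_assignment(cost)       # optimal one-to-one matching
print("total quality cost:", cost[rows, cols].sum())
print("random matching, for comparison:", cost[rows, rng.permutation(N)].sum())
```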
50

Real-time Detection And Tracking Of Human Eyes In Video Sequences

Savas, Zafer 01 September 2005 (has links) (PDF)
Robust, non-intrusive human eye detection has been a fundamental and challenging problem in computer vision. Not only is it a problem in its own right; it can also ease the task of locating other facial features for recognition and human-computer interaction purposes. Many previous works can determine the locations of the human eyes, but the main task in this thesis is not only a vision system with eye detection capability: our aim is to design a real-time, robust, scale-invariant eye tracker that indicates human eye movement using the movements of the eye pupil. Our eye tracking algorithm is implemented using the Continuously Adaptive Mean Shift (CAMSHIFT) algorithm proposed by Bradski and the Eigenface method proposed by Turk & Pentland. Previous works on scale-invariant object detection with the Eigenface method mostly depend on a limited number of user-predefined scales, which causes speed problems; to avoid this, an adaptive Eigenface method using information extracted from the CAMSHIFT algorithm is implemented to obtain fast, scale-invariant eye tracking. First, the human face in the input image captured by the camera is detected using the CAMSHIFT algorithm, which tracks the outline of an irregularly shaped object that may change size and shape during tracking, based on the object's color. The face area is passed through a number of preprocessing steps, such as color space conversion and thresholding, to obtain better results during the eye search. After these preprocessing steps, search areas for the left and right eyes are determined using the geometrical properties of the human face, and in order to locate each eye individually the training images are resized using the width information supplied by the CAMSHIFT algorithm. The search regions for the left and right eyes are passed individually to the eye detection algorithm to determine the exact location of each eye. After the eyes are detected, each eye area is passed to the pupil detection and eye area detection algorithms, which are based on the active contours method, to delineate the pupil and the eye area. Finally, by comparing the geometrical location of the pupil with the eye area, human gaze information is extracted. As a result of this thesis, a software package named “TrackEye” has been built, with a user interface that indicates the locations of the eye areas and pupils, various output screens for human-computer interaction, and controls for testing the effects of color space conversions and thresholding types during object tracking.
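For reference, the core CAMSHIFT tracking stage can be sketched with OpenCV's built-in routines as below. This is the generic hue-histogram/back-projection loop, not the TrackEye implementation itself (which adds adaptive eigenface eye detection and active-contour pupil extraction on top); the camera index, initial window, and histogram settings are assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # assumed webcam index
ok, frame = cap.read()
x, y, w, h = 300, 200, 100, 100                # assumed initial face window
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
# Hue histogram of the tracked region, masked to reasonably saturated pixels.
mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))
hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # CamShift adapts the window's size and orientation as the target moves.
    box, window = cv2.CamShift(backproj, window, term)
    cv2.polylines(frame, [np.intp(cv2.boxPoints(box))], True, (0, 255, 0), 2)
    cv2.imshow("camshift", frame)
    if cv2.waitKey(30) & 0xFF == 27:           # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```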
