
Uncertainty in radar emitter classification and clustering

Revillon, Guillaume 18 April 2019
In Electronic Warfare, radar signal identification is a major asset for decision making in military tactical situations. By providing information about the presence of threats, classification and clustering of radar signals play a significant role in ensuring that countermeasures against enemies are well chosen and in enabling the detection of unknown radar signals so that databases can be updated. Most of the time, Electronic Support Measures systems receive mixtures of signals from different radar emitters present in the electromagnetic environment. Hence a radar signal, described by a pulse-to-pulse modulation pattern, is often only partially observed, due to missing measurements and measurement errors. The identification process relies on statistical analysis of basic measurable parameters of a radar signal, which constitute both quantitative and qualitative data. Many general and practical approaches based on data fusion and machine learning have been developed; they traditionally proceed through feature extraction, dimensionality reduction, and classification or clustering. However, these algorithms cannot handle missing data, and imputation methods are required before they can be applied.
Hence, the main objective of this work is to define a classification/clustering framework that handles both outliers and missing values for any type of data. An approach based on mixture models is developed, since mixture models provide a mathematically grounded, flexible, and meaningful framework for a wide variety of classification and clustering requirements. The proposed approach centres on the introduction of latent variables that make it possible to handle the model's sensitivity to outliers and to allow a less restrictive modelling of missing data. A Bayesian treatment is adopted for model learning, supervised classification, and clustering, and inference proceeds through a variational Bayesian approximation, since the joint posterior distribution of the latent variables and parameters is intractable. Numerical experiments on synthetic and real data show that the proposed method provides more accurate results than standard algorithms.
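The thesis's variational-Bayes algorithm is not reproduced here, but the core idea of clustering without imputation — a mixture model whose E-step simply marginalises unobserved dimensions, so missing pulse parameters never need to be filled in — can be sketched briefly. The following is a minimal EM sketch under assumed diagonal-covariance Gaussian components; it is not the thesis's method, and all names and parameter values are illustrative.

```python
import numpy as np

def em_gmm_missing(X, K, n_iter=100, seed=0):
    """EM for a diagonal-covariance Gaussian mixture that tolerates missing
    entries (NaN): responsibilities use each point's observed dimensions
    only, so no imputation step is needed."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    obs = ~np.isnan(X)                      # mask of observed cells
    Xf = np.where(obs, X, 0.0)              # zero-fill so sums vectorise
    mu = np.nanmean(X, axis=0) + rng.normal(scale=0.5, size=(K, d))
    var = np.ones((K, d))
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: log-likelihood summed over observed dimensions only
        log_r = np.tile(np.log(pi), (n, 1))
        for k in range(K):
            ll = -0.5 * (np.log(2 * np.pi * var[k]) + (Xf - mu[k]) ** 2 / var[k])
            log_r[:, k] += np.where(obs, ll, 0.0).sum(axis=1)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: per-dimension weights count only observed cells
        for k in range(K):
            w = r[:, k][:, None] * obs
            wsum = w.sum(axis=0) + 1e-9
            mu[k] = (w * Xf).sum(axis=0) / wsum
            var[k] = (w * (Xf - mu[k]) ** 2).sum(axis=0) / wsum + 1e-6
        pi = r.mean(axis=0)
    return r.argmax(axis=1), mu

# two well-separated clusters with 20% of entries knocked out
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(4, 1, (100, 4))])
X[rng.random(X.shape) < 0.2] = np.nan
labels, centres = em_gmm_missing(X, K=2)
print(np.round(centres, 2))
```

The thesis replaces this point-estimate EM with a variational approximation of the joint posterior, which additionally yields latent outlier indicators; the marginalise-the-missing-dimensions device in the E-step is the part the sketch illustrates.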

Evolution of brain size in bats (Chiroptera)

Králová, Zuzana January 2010
According to the prevailing doctrine, brain size has mainly increased throughout the evolution of mammals, and reductions in brain size have been rare. On the other hand, the energetic costs of developing and maintaining a big brain are high, so brain size reduction should occur whenever the respective selective pressure is present. Modern phylogenetic methods make it possible to test for the presence of an evolutionary trend and to infer ancestral values of the trait in question from a phylogeny and the trait values of recent species. However, this approach has so far rarely been applied to the study of brain evolution. In this thesis, I focus on bats (Chiroptera). Bats are a suitable group for demonstrating the importance of brain size reductions. Given their energetically demanding mode of locomotion, they are likely to have been under selection pressure for brain reduction. Furthermore, a large amount of data on the body and brain mass of recent species is available. Finally, phylogenetic relationships among bats are relatively well resolved. My study is based on the body and brain masses of 334 recent bat species (Baron et al., 1996) and on a phylogeny obtained by adjusting an existing bat supertree (Jones et al., 2002) according to recent molecular studies. Analysing the data for...
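As a concrete illustration of the ancestral-value inference the abstract refers to, the sketch below runs Felsenstein's pruning pass for a trait evolving under Brownian motion on a toy four-species tree. The topology, branch lengths, and trait values are invented for illustration; an actual analysis would use the adjusted bat supertree and log-transformed brain masses.

```python
# Felsenstein's pruning for the root-state estimate under Brownian motion.
# Each tip carries an observed trait value; each internal pass replaces two
# children by their precision-weighted average plus an inflated branch.

def prune(node):
    """Return (estimate, variance-equivalent branch length) for a subtree."""
    if "value" in node:                              # tip: observed trait
        return node["value"], node["length"]
    (x1, v1), (x2, v2) = (prune(c) for c in node["children"])
    est = (x1 / v1 + x2 / v2) / (1 / v1 + 1 / v2)    # precision-weighted mean
    extra = (v1 * v2) / (v1 + v2)                    # variance passed upward
    return est, node.get("length", 0.0) + extra

# toy tree ((A:1,B:1):0.5,(C:1,D:1):0.5); values are fake log brain masses
tree = {
    "children": [
        {"length": 0.5, "children": [
            {"value": 2.1, "length": 1.0},           # species A
            {"value": 2.4, "length": 1.0},           # species B
        ]},
        {"length": 0.5, "children": [
            {"value": 1.2, "length": 1.0},           # species C
            {"value": 1.5, "length": 1.0},           # species D
        ]},
    ]
}
root_estimate, _ = prune(tree)
print(f"estimated ancestral log brain mass: {root_estimate:.3f}")
```

Comparing such reconstructed ancestral values along each branch is what lets a trend test count decreases as well as increases in brain size.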

Inference for Generalized Multivariate Analysis of Variance (GMANOVA) Models and High-dimensional Extensions

Jana, Sayantee 11 1900
A Growth Curve Model (GCM) is a multivariate linear model used for analyzing longitudinal data with short to moderate time series. It is a special case of the Generalized Multivariate Analysis of Variance (GMANOVA) model. Analysis using the GCM involves comparing mean growth among groups. The classical GCM, however, has some limitations: it makes distributional assumptions, it assumes an identical degree of polynomial for all groups, and it requires a sample size larger than the number of time points. In this thesis, we relax some of the assumptions of the traditional GCM and develop appropriate inferential tools for its analysis, with the aim of reducing bias, improving precision, and gaining power, as well as overcoming the limitations of high dimensionality.

Existing methods for estimating the parameters of the GCM assume that the underlying distribution of the error terms is multivariate normal. In practical problems, however, we often come across skewed data, and estimation techniques developed under the normality assumption may not be optimal. Simulation studies conducted in this thesis show, in fact, that existing methods are sensitive to the presence of skewness: the estimators suffer increased bias and mean square error (MSE) when the normality assumption is violated. Methods appropriate for skewed distributions are therefore required. In this thesis, we relax the distributional assumption of the GCM and provide estimators for the mean and covariance matrices of the GCM under the multivariate skew normal (MSN) distribution. An estimator for the additional skewness parameter of the MSN distribution is also provided. The estimators are derived using the expectation maximization (EM) algorithm, and extensive simulations are performed to examine their performance. Comparisons show that our estimators perform better than existing ones when the underlying distribution is multivariate skew normal. An illustration using a real data set is also provided, wherein triglyceride levels from the Framingham Heart Study are modelled over time.

The GCM assumes the same degree of polynomial for each group; when group means follow polynomials of different shapes, the GCM cannot accommodate this difference in one model. We consider an extension of the GCM, the Extended Growth Curve Model (EGCM), in which mean responses from different groups can have different shapes, represented by polynomials of different degrees. We extend our work on the GCM to the EGCM and develop estimators for the mean and covariance matrices under MSN errors. We adopt the Restricted Expectation Maximization (REM) algorithm, which is based on the multivariate Newton-Raphson (NR) method and Lagrangian optimization. However, the multivariate NR method, and hence the existing REM algorithm, applies to vector parameters, whereas the parameters of interest in this study are matrices. We therefore extend the NR approach, and consequently the REM algorithm, to matrix parameters. The performance of the proposed estimators is examined using extensive simulations, and a motivating real data example illustrates their application.

Finally, this thesis deals with the high-dimensional application of the GCM.
Existing methods for the GCM are developed under the "small p, large n" assumption (n >> p) and are not appropriate for analyzing high-dimensional longitudinal data, owing to the singularity of the sample covariance matrix. In a previous work, we used the Moore-Penrose generalized inverse to overcome this challenge; however, that method has limitations near singularity, when p is close to n. In this thesis, a Bayesian framework is used to derive a test of a linear hypothesis on the mean parameter of the GCM that is applicable in high-dimensional situations. Extensive simulations investigate the performance of the test statistic and establish its optimality characteristics. Results show that the test performs well under different conditions, including the near-singularity zone. Sensitivity of the test to mis-specification of the parameters of the prior distribution is also examined empirically. A numerical example illustrates the usefulness of the proposed method in practical situations.
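To make the classical starting point concrete, the sketch below computes Khatri's maximum-likelihood estimator for the Potthoff-Roy growth curve model E[Y] = XBZ' on synthetic data. This is the n >> p baseline that the thesis relaxes, not the skew-normal or high-dimensional estimators developed in it; all dimensions and parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, g, q = 60, 5, 2, 3        # subjects, time points, groups, polynomial terms

# between-subject design X (group indicators) and within-subject design Z
X = np.zeros((n, g)); X[: n // 2, 0] = 1; X[n // 2 :, 1] = 1
t = np.linspace(0, 1, p)
Z = np.vander(t, q, increasing=True)      # p x q polynomial basis: 1, t, t^2

B_true = np.array([[1.0, 2.0, -1.0],      # quadratic mean growth, group 1
                   [0.5, 1.0,  0.5]])     # group 2
Y = X @ B_true @ Z.T + rng.normal(scale=0.3, size=(n, p))

# Khatri's MLE: B_hat = (X'X)^{-1} X'Y S^{-1} Z (Z' S^{-1} Z)^{-1},
# where S = Y'(I - X(X'X)^{-1}X')Y. Note S is singular when p > n,
# which is exactly the high-dimensional obstacle discussed above.
P = X @ np.linalg.solve(X.T @ X, X.T)
S = Y.T @ (np.eye(n) - P) @ Y
Si = np.linalg.inv(S)
B_hat = np.linalg.solve(X.T @ X, X.T @ Y) @ Si @ Z @ np.linalg.inv(Z.T @ Si @ Z)
print(np.round(B_hat, 2))                 # should be close to B_true
```

Replacing the normal error model with a multivariate skew normal, or letting p exceed n, breaks this closed form, which is what motivates the EM-based and Bayesian machinery of the thesis.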

Risk-averse periodic preventive maintenance optimization

Singh, Inderjeet, 1978- 21 December 2011
We consider a class of periodic preventive maintenance (PM) optimization problems for a single piece of equipment that deteriorates with time or use and can be repaired upon failure through corrective maintenance (CM). We develop analytical and simulation-based optimization models that seek an optimal periodic PM policy minimizing the sum of the expected total cost of PMs and the risk-averse cost of CMs over a finite planning horizon. In the simulation-based models, we assume that both types of maintenance actions are imperfect, whereas our analytical models consider imperfect PMs with minimal CMs. The effectiveness of maintenance actions is modeled using age reduction factors: for a repairable unit of equipment, its virtual age, not its calendar age, determines the associated failure rate. Two sets of parameters are therefore critical to our models, one describing the effectiveness of maintenance actions and the other defining the underlying failure rate of the equipment. Under a given maintenance policy, these two sets of parameters and a virtual-age-based age-reduction model completely define the failure process of a piece of equipment.

In practice, the true failure rate and the exact quality of the maintenance actions cannot be determined and are often estimated from the equipment failure history. We use a Bayesian approach to parameter estimation, in which a random-walk-based Gibbs sampler provides posterior estimates for the parameters of interest. Our posterior estimates for several datasets from the literature are consistent with published results. Furthermore, our computational results demonstrate that the Gibbs sampler is clearly preferable to a general rejection-sampling-based parameter estimation method for this class of problems.

We present a general simulation-based periodic PM optimization model that uses the posterior estimates to simulate the number of operational equipment failures under a given periodic PM policy. Optimal periodic PM policies under the classical maximum likelihood (ML) and Bayesian estimates are obtained for several datasets. Limitations of the ML approach are revealed for a dataset from the literature in which using the ML parameter estimates in the maintenance optimization model fails to capture a trivial optimal PM policy.

Finally, we introduce single-stage and two-stage formulations of the risk-averse periodic PM optimization model with imperfect PMs and minimal CMs. Such models apply to a class of complex equipment with many parts, whose operational failures are addressed by replacing or repairing a few parts, thereby not affecting the failure rate of the equipment under consideration. For general values of the PM age reduction factors, we provide sufficient conditions establishing the convexity of the first and second moments of the number of failures, and of the risk-averse expected total maintenance cost, over a finite planning horizon. For increasing Weibull rates and a general class of increasing and convex failure rates, we show that these convexity results are independent of the PM age reduction factors. In general, the optimal periodic PM policy under the single-stage model is no better than the optimal two-stage policy; if PMs are assumed perfect, however, we establish that the single-stage and two-stage optimization models are equivalent.
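The simulation-based evaluation underpinning such models can be illustrated with a small Monte-Carlo sketch: periodic imperfect PMs shrink the virtual age by an age-reduction factor, minimal CMs leave it unchanged, and failures between PMs follow a non-homogeneous Poisson process with a Weibull (power-law) intensity. The mean-plus-standard-deviation risk charge and every parameter value below are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def policy_cost(tau, horizon, beta, eta, alpha, c_pm, c_cm,
                risk_weight=1.0, n_rep=2000, seed=0):
    """Monte-Carlo cost of a periodic PM policy: PMs every tau hours shrink
    the virtual age v by factor alpha (imperfect PM); failures between PMs
    are minimal repairs, i.e. an NHPP with cumulative Weibull intensity
    L(v) = (v / eta) ** beta. Returns mean PM cost plus a mean-plus-std
    risk-averse charge on CM cost."""
    rng = np.random.default_rng(seed)
    pm_costs = np.empty(n_rep)
    cm_costs = np.empty(n_rep)
    for r in range(n_rep):
        v = t = pm = cm = 0.0
        next_pm = tau
        while t < horizon:
            dt = min(next_pm, horizon) - t
            m = ((v + dt) / eta) ** beta - (v / eta) ** beta  # expected failures
            cm += c_cm * rng.poisson(m)       # minimal CMs leave v unchanged
            t += dt
            v += dt
            if t < horizon:
                pm += c_pm
                v *= alpha                    # PM rejuvenates the virtual age
                next_pm += tau
        pm_costs[r], cm_costs[r] = pm, cm
    return pm_costs.mean() + cm_costs.mean() + risk_weight * cm_costs.std()

# crude grid search over PM periods; all numbers are invented for illustration
for tau in (50, 100, 200, 400):
    print(tau, round(policy_cost(tau, horizon=1000, beta=2.5, eta=300.0,
                                 alpha=0.4, c_pm=1.0, c_cm=5.0), 2))
```

Short PM periods inflate PM cost while long ones inflate the risk-charged CM cost, so the evaluated cost is roughly convex in tau, which is what makes the grid (or a convexity-exploiting) search for an optimal period meaningful.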
