About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Security and robustness of a modified parameter modulation communication scheme

Liang, Xiyin 07 April 2009 (has links)
Please read the abstract in the front section of this document / Thesis (PhD)--University of Pretoria, 2009. / Electrical, Electronic and Computer Engineering / Unrestricted
42

Adaptive Kernel Functions and Optimization Over a Space of Rank-One Decompositions

Wang, Roy Chih Chung January 2017 (has links)
The representer theorem from the reproducing kernel Hilbert space theory is the origin of many kernel-based machine learning and signal modelling techniques that are popular today. Most kernel functions used in practical applications behave in a homogeneous manner across the domain of the signal of interest, and they are called stationary kernels. One open problem in the literature is the specification of a non-stationary kernel that is computationally tractable. Some recent works solve large-scale optimization problems to obtain such kernels, and they often suffer from non-identifiability issues in their optimization problem formulation. Many practical problems can benefit from using application-specific prior knowledge on the signal of interest. For example, if one can adequately encode the prior assumption that edge contours are smooth, one does not need to learn a finite-dimensional dictionary from a database of sampled image patches that each contains a circular object in order to up-convert images that contain circular edges. In the first portion of this thesis, we present a novel method for constructing non-stationary kernels that incorporates prior knowledge. A theorem is presented that ensures the result of this construction yields a symmetric and positive-definite kernel function. This construction does not require one to solve any non-identifiable optimization problems. It does require one to manually design some portions of the kernel while deferring the specification of the remaining portions to when an observation of the signal is available. In this sense, the resultant kernel is adaptive to the data observed. We give two examples of this construction technique via the grayscale image up-conversion task where we chose to incorporate the prior assumption that edge contours are smooth. Both examples use a novel local analysis algorithm that summarizes the p-most dominant directions for a given grayscale image patch. 
The non-stationary properties of these two types of kernels are empirically demonstrated on the Kodak image database that is popular within the image processing research community. Tensors and tensor decomposition methods are gaining popularity in the signal processing and machine learning literature, and most of the recently proposed tensor decomposition methods are based on the tensor power and alternating least-squares algorithms, which were both originally devised over a decade ago. The algebraic approach for the canonical polyadic (CP) symmetric tensor decomposition problem is an exception. This approach exploits the bijective relationship between symmetric tensors and homogeneous polynomials. The solution of a CP symmetric tensor decomposition problem is a set of p rank-one tensors, where p is fixed. In this thesis, we refer to such a set of tensors as a rank-one decomposition with cardinality p. Existing works show that the CP symmetric tensor decomposition problem is non-unique in the general case, so there is no bijective mapping between a rank-one decomposition and a symmetric tensor. However, a proposition in this thesis shows that a particular space of rank-one decompositions, SE, is isomorphic to a space of moment matrices that are called quasi-Hankel matrices in the literature. Optimization over Riemannian manifolds is an area of optimization literature that is also gaining popularity within the signal processing and machine learning community. Under some settings, one can formulate optimization problems over differentiable manifolds where each point is an equivalence class. Such manifolds are called quotient manifolds. This type of formulation can reduce or eliminate some of the sources of non-identifiability issues for certain optimization problems. 
An example is the learning of a basis for a subspace by formulating the solution space as a type of quotient manifold called the Grassmann manifold, whereas the conventional formulation is to optimize over a space of full-column-rank matrices. The second portion of this thesis is about the development of a general-purpose numerical optimization framework over SE. A general-purpose numerical optimizer can solve different approximations or regularized versions of the CP decomposition problem, and it can be applied to tensor-related applications that do not use a tensor decomposition formulation. The proposed optimizer uses many concepts from the Riemannian optimization literature. We present a novel formulation of SE as an embedded differentiable submanifold of the space of real-valued matrices with full column rank, and as a quotient manifold. Riemannian manifold structures and tangent-space projectors are derived as well. The CP symmetric tensor decomposition problem is used to empirically demonstrate that the proposed scheme is indeed a numerical optimization framework over SE. Future investigations will concentrate on extending the proposed optimization framework to handle decompositions that correspond to non-symmetric tensors.
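The thesis constructs its non-stationary kernels in its own way, but a classic illustration of what "non-stationary" means here is Gibbs's kernel, whose lengthscale varies with the input while the kernel stays symmetric and positive semi-definite. The sketch below (with a made-up lengthscale function, not one from the thesis) checks both properties numerically on a grid:

```python
import numpy as np

def gibbs_kernel(x1, x2, lengthscale):
    """Gibbs non-stationary kernel with input-dependent lengthscale l(x)."""
    l1, l2 = lengthscale(x1), lengthscale(x2)
    s = l1 ** 2 + l2 ** 2
    return np.sqrt(2.0 * l1 * l2 / s) * np.exp(-((x1 - x2) ** 2) / s)

# Illustrative lengthscale: short near x = 0 (an "edge"), long elsewhere.
ell = lambda x: 0.2 + 1.0 * np.abs(x)

xs = np.linspace(-2, 2, 60)
K = np.array([[gibbs_kernel(a, b, ell) for b in xs] for a in xs])

# Symmetry and positive semi-definiteness (up to numerical tolerance).
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > -1e-10
```

A theorem like the one in the thesis guarantees these properties by construction; the numerical check merely illustrates what the guarantee asserts.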
43

Development of a new control model of the glucose metabolism applied to type 1 diabetic patients

Ben Abbes, Ilham 28 June 2013 (has links)
The development of new control models to represent more accurately the plasma glucose-insulin dynamics in type 1 diabetes mellitus (T1DM) is needed for efficient closed-loop algorithms. In this PhD thesis, we propose a new nonlinear model of five continuous-time state equations, with the aim of identifying its parameters from easily available real patient data (i.e. data from the insulin pump and the glucose monitoring system). Its design is based on two assumptions. First, two successive remote compartments, one for insulin and one for glucose issued from the meal, are introduced to account for the distribution of insulin and glucose in the organism. Second, the insulin action in glucose disappearance is modelled through an original nonlinear form. The mathematical properties of this model have been studied, and we prove that a unique, positive and bounded solution exists for a fixed initial condition. The model is also shown to be locally accessible, so it can be used as a control model. We prove the structural identifiability of the model and propose a new method, based on the Kullback-Leibler divergence, to test its practical identifiability. The parameters of the model were estimated from real patient data using a robust estimation methodology based on a Huber criterion. The obtained mean fit indicates a good approximation of the glucose metabolism of real patients, and the model's predictions accurately approximate the glycemia of the studied patients over several hours. These results validate the relevance of the new model for use in closed-loop control algorithms.
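The abstract does not reproduce the thesis's five-state model, so the sketch below is only an illustrative caricature of the remote-compartment idea: a Bergman-style two-state model in which a remote insulin compartment X mediates glucose disappearance. All parameter values are invented for illustration.

```python
# Illustrative two-state sketch (NOT the thesis's five-state model):
# plasma glucose G and remote insulin action X.  Forward-Euler integration.
def simulate(G0=9.0, X0=0.0, insulin=0.1, p1=0.02, p2=0.03, p3=1e-3,
             Gb=5.0, dt=1.0, steps=600):
    G, X = G0, X0
    for _ in range(steps):
        dG = -p1 * (G - Gb) - X * G    # glucose clearance + insulin action
        dX = -p2 * X + p3 * insulin    # remote insulin compartment
        G, X = G + dt * dG, X + dt * dX
    return G

# With insulin on board, glucose settles below its insulin-free level.
assert simulate(insulin=0.1) < simulate(insulin=0.0)
```

The bilinear term X*G is the kind of nonlinearity that makes such models interesting for identifiability analysis; the thesis uses an original nonlinear form instead.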
44

Bayesian exploratory factor analysis

Conti, Gabriella, Frühwirth-Schnatter, Sylvia, Heckman, James J., Piatek, Rémi 27 June 2014 (has links) (PDF)
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements. (authors' abstract)
45

Solutions of inverse problems in the area of material and heat exchangers

Kůdelová, Tereza January 2014 (has links)
This master’s thesis deals with the dynamic behaviour of heat exchangers, which is described by a system of differential equations. In this connection, it contains general information about heat transfer, heat exchangers and their arrangements. The main aim of the thesis is to solve the inverse problem for the antiparallel arrangement and to discuss the controllability, observability and identifiability of its parameters.
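For a linearized exchanger model, controllability and observability can be checked with the standard Kalman rank tests. The sketch below uses illustrative matrices for a toy two-compartment system, not the thesis's model:

```python
import numpy as np

# Toy coupled-temperature system x' = A x + B u, y = C x (illustrative only).
A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])   # coupling between two temperature states
B = np.array([[1.0], [0.0]])  # heating acts on the first compartment only
C = np.array([[0.0, 1.0]])    # only the second temperature is measured

def ctrb(A, B):
    """Kalman controllability matrix [B, AB, ..., A^(n-1)B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    """Kalman observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

assert np.linalg.matrix_rank(ctrb(A, B)) == 2  # controllable
assert np.linalg.matrix_rank(obsv(A, C)) == 2  # observable
```

Full rank of both matrices means every state can be steered by the input and reconstructed from the output, the prerequisites for the parameter-identification questions the thesis discusses.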
46

Development of Statistical Methods for the Identification of Genes of Interest in the Presence of Relatedness and Dominance, with Application to Maize Genetics

Laporte, Fabien 13 March 2018 (has links)
The detection of genes is a first step towards understanding the impact of an individual's genetic information on its phenotype. During my PhD, I studied statistical methods to perform genome-wide association studies, with maize hybrids as the application case. First, I studied the inference of relatedness coefficients between individuals from biallelic marker data. This estimation is based on a parametric mixture model. I studied the identifiability of this model in the general case, but also in the specific case of mating designs where the observed individuals are obtained by crossing lines, which is representative of the mating designs classically used in plant genetics. I then studied the inference of the parameters of mixed models with several variance components, and in particular the performance of algorithms for testing the effects of very many markers. I compared existing programs and optimized a Min-Max algorithm. The relevance of the developed methods is finally illustrated for the detection of QTLs through a genome-wide association analysis of a panel of maize hybrids.
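For context, a common alternative way to quantify relatedness from biallelic markers is a VanRaden-style genomic relationship matrix; the thesis instead infers relatedness coefficients through a parametric mixture model, so this is just a neighbouring technique sketched on simulated 0/1/2 genotypes:

```python
import numpy as np

# VanRaden-style genomic relationship matrix (GRM) from biallelic markers
# coded 0/1/2 (one relatedness estimator among several; illustrative data).
rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(6, 500)).astype(float)  # 6 individuals x 500 SNPs

p = M.mean(axis=0) / 2.0                  # estimated allele frequencies
Z = M - 2.0 * p                           # centred genotypes
G = Z @ Z.T / (2.0 * np.sum(p * (1 - p))) # normalized relationship matrix

assert G.shape == (6, 6)
assert np.allclose(G, G.T)
```

In a GWAS mixed model, a matrix like G typically parameterizes the polygenic variance component whose algorithms (e.g. the Min-Max algorithm mentioned above) must scale to very many marker tests.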
47

Using Sequential Sampling Models to Detect Selective Influences: Pitfalls and Recommendations

Park, Joonsuk January 2019 (has links)
No description available.
48

Qualitative Adaptive Identification for Powertrain Systems. Powertrain Dynamic Modelling and Adaptive Identification Algorithms with Identifiability Analysis for Real-Time Monitoring and Detectability Assessment of Physical and Semi-Physical System Parameters

Souflas, Ioannis January 2015 (has links)
A complete chain of analysis and synthesis system identification tools is presented for the detectability assessment and adaptive identification of parameters with physical interpretation that are commonly found in control-oriented powertrain models. This research is motivated by the fact that future powertrain control and monitoring systems will depend increasingly on physically oriented system models to reduce the complexity of existing control strategies and open the road to new environmentally friendly technologies. At the outset of this study, a physics-based control-oriented dynamic model of a complete transient engine testing facility, consisting of a single-cylinder engine, an alternating-current dynamometer and a coupling-shaft unit, is developed to investigate the functional relationships of the inputs, outputs and parameters of the system. Having understood these, algorithms for identifiability analysis and adaptive identification of parameters with physical interpretation are proposed. The efficacy of the recommended algorithms is illustrated with three novel practical applications: the development of an on-line health monitoring system for engine-dynamometer coupling shafts based on recursive estimation of the shaft's physical parameters; the sensitivity analysis and adaptive identification of engine friction parameters; and the non-linear recursive parameter estimation, with parameter estimability analysis, of physical and semi-physical cyclic engine torque model parameters. The findings of this research suggest that the combination of physics-based control-oriented models with adaptive identification algorithms can lead to the development of component-based diagnosis and control strategies. Ultimately, this work contributes to the area of on-line fault diagnosis, fault-tolerant control and adaptive control for vehicular systems.
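On-line recursive estimation of physical parameters of the kind used in the shaft-monitoring application is typically built on recursive least squares (RLS) with a forgetting factor. The thesis's estimator is more elaborate (physical shaft model, estimability checks), so the sketch below is only a generic RLS on simulated regression data:

```python
import numpy as np

def rls(phis, ys, n, lam=0.99):
    """Recursive least squares with forgetting factor lam for y = phi @ theta."""
    theta = np.zeros(n)
    P = 1e4 * np.eye(n)                           # large initial covariance
    for phi, y in zip(phis, ys):
        k = P @ phi / (lam + phi @ P @ phi)       # gain vector
        theta = theta + k * (y - phi @ theta)     # innovation update
        P = (P - np.outer(k, phi @ P)) / lam      # covariance update
    return theta

rng = np.random.default_rng(1)
true_theta = np.array([2.0, -0.5])                # "physical" parameters
phis = rng.normal(size=(400, 2))
ys = phis @ true_theta + 0.01 * rng.normal(size=400)

est = rls(phis, ys, 2)
assert np.allclose(est, true_theta, atol=0.05)
```

The forgetting factor lam < 1 discounts old data, which is what allows such a monitor to track slow drift in the shaft's physical parameters and flag deterioration.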
49

Indirect System Identification for Unknown Input Problems: With Applications to Ships

Linder, Jonas January 2017 (has links)
System identification is used in engineering sciences to build mathematical models from data. A common issue in system identification problems is that the true inputs to the system are not fully known. In this thesis, existing approaches to unknown input problems are classified and some of their properties are analyzed.  A new indirect framework is proposed to treat system identification problems with unknown inputs. The effects of the unknown inputs are assumed to be measured through possibly unknown dynamics. Furthermore, the measurements may also be dependent on other known or measured inputs and can in these cases be called indirect input measurements. Typically, these indirect input measurements can arise when a subsystem of a larger system is of interest and only a limited set of sensors is available. Two examples are when it is desired to estimate parts of a mechanical system or parts of a dynamic network without full knowledge of the signals in the system. The input measurements can be used to eliminate the unknown inputs from a mathematical model of the system through algebraic manipulations. The resulting indirect model structure only depends on known and measured signals and can be used to estimate the desired dynamics or properties. The effects of using the input measurements are analyzed in terms of identifiability, consistency and variance properties. It is shown that cancelation of shared dynamics can occur and that the resulting estimation problem is similar to errors-in-variables and closed-loop estimation problems because of the noisy inputs used in the model. In fact, the indirect framework unifies a number of already existing system identification problems that are contained as special cases. For completeness, an instrumental variable method is proposed as one possibility for estimating the indirect model. 
It is shown that multiple datasets can be used to overcome certain identifiability issues, and two approaches, the multi-stage and the joint identification approach, are suggested for utilizing multiple datasets in model estimation. Furthermore, the benefits of using the indirect model in filtering and for control synthesis are briefly discussed. To show its applicability, the framework is applied to the roll dynamics of a ship for tracking of the loading conditions. The roll dynamics is very sensitive to changes in these conditions, and a worst-case scenario is that the ship capsizes. It is assumed that only motion measurements from an inertial measurement unit (IMU), together with measurements of the rudder angle, are available. The true inputs are thus not available, but the measurements from the IMU can be used to form an indirect model from a well-established ship model. It is shown that only a subset of the unknown parameters can be estimated simultaneously. Data was collected in experiments with a scale ship model in a basin, and the joint identification approach was selected for this application due to the properties of the model. The approach was applied to the collected data and gave promising results. / Unlike many other industries, where advanced control systems have developed extensively over recent decades, control systems in the shipping and marine industry have not evolved to the same extent. It is mainly during the last ten years that legislation and rising operating costs have increased the interest in efficiency and safety through the use of control systems. Shipping companies and the marine industry are now interested in how the advanced control systems used in other fields can be applied for marine purposes.
The main goal is typically to reduce the total energy consumption, and thereby also the fuel consumption, by continuously re-planning how the ship is to be used based on new information, and by controlling the ship and its subsystems in a way that maximizes efficiency. For many of these advanced control systems, a good understanding of the behaviour of the system to be controlled is fundamental. Mathematical models of the system are often used for this purpose. Such models can be created by observing how the system reacts to external influences, and by using these observations to find, or estimate, the model that best describes them. The observations are measurements made with sensors, and the technique of building models from measurements is called system identification. This is a challenging problem in itself, and it becomes even harder if the required sensors are unavailable or too expensive to install. In this thesis, a new technique is proposed in which the available measurements are used in a new and different way. This can reduce the number of sensors needed, or enable the use of alternative sensors, in the model-building process. With this technique, simple sensors can be used to estimate a mathematical model of part of the ship in a way that is not possible with traditional methods. By estimating this model, physical properties of the ship, such as its mass and how the mass is distributed over the ship, can be monitored to detect changes. These two properties have a large influence on the ship's behaviour, and if the ship is incorrectly loaded it may, in the worst case, capsize. Besides improving efficiency, knowledge of these physical properties can thus be used to warn the crew, or to influence the control systems so that dangerous manoeuvres are avoided. To show that the technique works in practice, it has been applied to measurements collected from a scale-model ship.
The experiments were performed in a basin, and the results show that the technique works. This new technique is not specific to marine use but can also be useful in other types of applications, where it likewise enables the use of fewer or alternative sensors for model estimation. It can be especially useful when a model of a system or process operating in a network of many systems is of interest, which is also discussed in the thesis.
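The errors-in-variables character noted in the abstract, where the model's inputs are themselves noisy measurements, is exactly the setting in which ordinary least squares is biased and an instrumental variable (IV) method is natural. A minimal sketch with made-up scalar signals (not the ship model) shows OLS attenuating while the IV estimate stays consistent:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
u = rng.normal(size=n)                   # true (unmeasured) input
y = 1.5 * u + 0.1 * rng.normal(size=n)   # output; true gain is 1.5
u_meas = u + 0.5 * rng.normal(size=n)    # noisy input measurement
z = u + 0.5 * rng.normal(size=n)         # instrument: correlated with u,
                                         # independent of u_meas's noise

ols = (u_meas @ y) / (u_meas @ u_meas)   # biased towards zero (attenuation)
iv = (z @ y) / (z @ u_meas)              # consistent IV estimate

assert abs(iv - 1.5) < 0.1
assert ols < iv                          # OLS is biased low here
```

In the thesis's setting, signals from a second dataset or another part of the system can play the role of the instrument z, which is one reason multiple datasets help with identifiability.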
50

Analysis of annual floods by generalized distribution

Queiroz, Manoel Moisés Ferreira de 02 July 2002 (has links)
Frequency analysis of floods by the generalized extreme value (GEV) probability distribution has become increasingly common in recent years. The estimation of high-quantile floods is commonly practiced by extrapolating the fit, represented by one of the three inverse forms of the GEV distribution, to return periods much greater than the period of observation. Hydrologic events occur in nature with finite values, such that their maximum values follow the asymptotic form of the bounded GEV distribution. This work studies the identifiability of the GEV distribution by LH-moments, using annual flood series of different characteristics and lengths obtained from daily flow series generated by various methods. First, stochastic sequences of daily flows were obtained from the bounded distribution underlying the bounded GEV distribution. The results of the LH-moment parameter estimation show that fitting the GEV distribution to annual flood samples of fewer than 100 values may indicate any form of extreme value distribution, and not only the bounded form as one would expect. Great uncertainty was also observed in the parameters estimated from 50 series generated from the same distribution. Fitting the GEV distribution to annual flood series, obtained from daily flow series generated by four stochastic models available in the literature and calibrated to data from the Paraná and dos Patos rivers, indicated the Gumbel form. A daily flow generation model is proposed that simulates high-flow pulses using the bounded distribution. Fitted to the daily flows of the Paraná river, the new model reproduced the daily, monthly and annual statistics as well as the extreme values of the historical series. Furthermore, the long-duration annual flood series was adequately described by the bounded form of the GEV distribution.
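The thesis uses LH-moments, a generalization of L-moments that weights the larger sample values more heavily. As a simpler related sketch, plain sample L-moments combined with Hosking's rational approximation recover GEV parameters; here the "sample" is the exact GEV quantiles at plotting positions, so the fit should land close to the true parameters:

```python
import math
import numpy as np

# GEV fit via sample L-moments (Hosking's approximation for the shape k).
# Quantile convention: x(F) = xi + alpha * (1 - (-ln F)**k) / k, k != 0.
def fit_gev_lmom(x):
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)
    # Unbiased probability-weighted moments b0, b1, b2.
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    t3 = l3 / l2                                  # L-skewness
    c = 2.0 / (3.0 + t3) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c ** 2              # Hosking's approximation
    alpha = l2 * k / ((1 - 2.0 ** (-k)) * math.gamma(1 + k))
    xi = l1 - alpha * (1 - math.gamma(1 + k)) / k
    return xi, alpha, k

# Deterministic "sample": exact quantiles of GEV(xi=10, alpha=2, k=0.15).
F = (np.arange(1, 5001) - 0.5) / 5000
sample = 10.0 + 2.0 * (1 - (-np.log(F)) ** 0.15) / 0.15

xi, alpha, k = fit_gev_lmom(sample)
assert abs(k - 0.15) < 0.02
```

With k > 0 in this convention the distribution is bounded above, the "limited" asymptotic form the thesis argues annual floods should follow; LH-moments sharpen this kind of fit by emphasizing the upper tail.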
