31.
Développement d'un nouveau modèle dédié à la commande du métabolisme glucidique appliqué aux patients diabétiques de type 1 / Development of a new control model of the glucose metabolism applied to type 1 diabetic patients. Ben Abbes, Ilham, 28 June 2013.
The regulation of blood glucose concentration is essential to the functioning of red blood cells and of cells in general, including those of the muscles and the brain. This regulation involves several organs as well as the hormonal system, and one hormone in particular, insulin. Type 1 diabetes is a disease in which the insulin-producing cells of the pancreas are destroyed. To compensate for this lost insulin production, treatment requires the patient to determine an insulin dose to inject, based on blood glucose measurements and on factors involved in glucose regulation (meals, physical activity, stress, ...). This thesis contributes to the automation of this treatment: the development of new control models that represent more accurately the plasma glucose-insulin dynamics in type 1 diabetes mellitus (T1DM) is needed for efficient closed-loop algorithms.
We propose a new nonlinear model of five continuous-time state equations designed so that its parameters can be identified from easily available real patients' data (i.e. data from the insulin pump and the glucose monitoring system). Its design rests on two assumptions. First, two successive remote compartments, one for insulin and one for glucose coming from the meal, account for the distribution of insulin and glucose in the organism. Second, the action of insulin on glucose disappearance is modeled through an original nonlinear form. Studying the mathematical properties of this model, we prove that a unique, positive and bounded solution exists for a fixed initial condition and that the model is locally accessible, so it can serve as a control model. We prove its structural identifiability and propose a new method, based on the Kullback-Leibler divergence, to test its practical identifiability. The parameters were then estimated from real patients' data using a robust estimation methodology based on a Huber criterion. The obtained mean fit indicates a good approximation of the glucose metabolism of real patients, and the model's predictions accurately approximate the glycemia of the studied patients over a few hours. These results validate the relevance of the new model as a control model for closed-loop algorithms.
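The existence result (a unique, positive, bounded trajectory) can be illustrated numerically. The sketch below integrates a deliberately simplified two-state glucose-insulin system, not the thesis's five-state model; the equations, parameter values, and the name `glucose_insulin` are invented for illustration only.

```python
# Illustrative two-state glucose-insulin ODE (a minimal sketch, NOT the
# thesis's five-state model; all parameter values are arbitrary assumptions).
import numpy as np
from scipy.integrate import solve_ivp

def glucose_insulin(t, y, p1, si, u_basal, gb):
    """dG/dt: return toward basal glucose gb plus insulin-dependent uptake;
    dI/dt: constant infusion minus first-order clearance."""
    G, I = y
    dG = -p1 * (G - gb) - si * I * G
    dI = u_basal - 0.1 * I
    return [dG, dI]

sol = solve_ivp(glucose_insulin, (0.0, 300.0), [180.0, 10.0],
                args=(0.02, 0.001, 1.0, 90.0), dense_output=True)
G_final = sol.y[0, -1]
# The trajectory stays positive and bounded, mirroring (for this toy system)
# the existence property proved for the thesis model.
```

With these rates the glucose state relaxes toward an equilibrium near 60 and both states remain strictly positive.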
32.
Modélisation et analyse statistique de la formation des prix à travers les échelles, Market impact / Statistical modelling and analysis of price formation across scales, market impact. Iuga, Relu Adrian, 11 December 2014.
The development of organized electronic markets puts constant pressure on academic research in finance. A central issue is market impact, i.e. the impact on the price of a transaction involving a large quantity of shares over a short period of time. Monitoring and controlling market impact is of great interest for practitioners; its modeling has thus become a central topic of quantitative finance research. Historically, stochastic calculus gradually imposed itself in finance, under the implicit assumption that asset prices follow diffusive dynamics. But this assumption does not hold at the level of "price formation", i.e. at the fine time scales of market participants, and new mathematical techniques from the statistics of point processes are needed. The observables (last trade price, mid-price) appear as events on a discrete grid, the order book, at very short time scales (a few tens of milliseconds). Prices viewed as Brownian diffusions satisfying equilibrium conditions become rather a macroscopic description of the complex price formation process. In the first chapter, we review the properties of electronic markets, recall the limits of diffusive models, and introduce Hawkes processes. In particular, we survey the market impact literature and present the contributions of this thesis.
In the second part, we introduce a new market impact model, in continuous time and living on a discrete space, based on Hawkes processes. We show that this model takes the market microstructure into account and is able to reproduce recent empirical results such as the concavity of the temporary impact. In the third chapter, we investigate the impact of large orders on the price formation process at the intraday scale and at a larger scale (several days after the meta-order execution). We also use our model to discuss new stylized facts discovered in our database. In the fourth part, we focus on non-parametric estimation for univariate Hawkes processes. The method relies on the link between the auto-covariance function and the kernel of the Hawkes process. In particular, we study the performance of the estimator in squared error loss over Sobolev spaces and over a certain class containing "very" smooth functions.
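A univariate Hawkes process with an exponential kernel, the basic building block referred to above, can be simulated with Ogata's thinning algorithm. The sketch below is illustrative only; the parameter values are arbitrary and unrelated to the order-book models of the thesis.

```python
# Simulate a univariate Hawkes process with exponential kernel by Ogata's
# thinning algorithm (illustrative sketch; mu, alpha, beta are arbitrary).
import numpy as np

def simulate_hawkes(mu, alpha, beta, t_max, rng):
    """Intensity: lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta (t - t_i))."""
    events, t, lam_star = [], 0.0, mu
    while True:
        t += rng.exponential(1.0 / lam_star)      # candidate inter-arrival time
        if t > t_max:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_star:     # accept with prob lam_t/lam*
            events.append(t)
        lam_star = lam_t + alpha                  # valid (conservative) upper
                                                  # bound for the next step
    return np.array(events)

rng = np.random.default_rng(0)
ev = simulate_hawkes(mu=0.5, alpha=0.4, beta=1.0, t_max=500.0, rng=rng)
# Stationary mean intensity is mu / (1 - alpha/beta) = 0.5 / 0.6, so roughly
# 400+ events are expected on [0, 500].
```

The branching ratio here is alpha/beta = 0.4 < 1, which keeps the process stationary; larger ratios produce the strong clustering seen in order-book event data.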
33.
Estimação de parâmetros de modelos compartimentais para tomografia por emissão de pósitrons / Parameter estimation of compartmental models for positron emission tomography. Silva, João Eduardo Maeda Moreira da, 23 April 2010.
This work has as its goals the study, simulation, parameter identification and statistical comparison of compartmental models used in positron emission tomography (PET). We propose to use the methodology of sensitivity equations and the Levenberg-Marquardt method to estimate the characteristic parameters of the differential equations describing such systems. For model comparison, Akaike's information criterion is applied. We consider three compartmental structures composed, respectively, of two compartments with two characteristic constants, three compartments with four characteristic constants, and four compartments with six characteristic constants. The data considered in this work were synthesized to reproduce the key features of a real tomography exam, such as the type and level of noise and the morphology of the input function of the system. To this end, we used exams of patients from the Nuclear Medicine sector of the Heart Institute of the Faculty of Medicine, University of São Paulo. Applying the proposed methodology at three noise levels (low, medium and high), we obtained agreement on the best model to strong and considerable degrees (with Kappa indexes equal to 0.95, 0.93 and 0.63, respectively). We observed that, with a high noise level and more complex models (four compartments), the classification deteriorates due to the small amount of data available for the decision. Programs and a graphical interface were developed that can be used in the investigation, design, simulation and parameter identification of compartmental models, supporting clinical diagnosis and scientific practice.
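The estimation-plus-comparison pipeline can be sketched on a toy curve. The snippet below fits a mono-exponential tissue curve with `scipy`'s Levenberg-Marquardt solver and computes Akaike's information criterion for Gaussian residuals; the model, data, and parameter values are invented and far simpler than the thesis's compartmental ODE systems.

```python
# Levenberg-Marquardt fit of a toy mono-exponential curve y = K1*exp(-k2*t),
# followed by Akaike's information criterion (sketch only; values invented).
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 60.0, 120)
true = np.array([0.8, 0.15])                       # K1, k2 (arbitrary truth)
rng = np.random.default_rng(1)
y = true[0] * np.exp(-true[1] * t) + rng.normal(0.0, 0.01, t.size)

def residuals(p):
    K1, k2 = p
    return K1 * np.exp(-k2 * t) - y

fit = least_squares(residuals, x0=[1.0, 0.5], method="lm")
rss = float(np.sum(fit.fun ** 2))
n, k = t.size, fit.x.size
aic = n * np.log(rss / n) + 2 * k                  # AIC for Gaussian residuals
```

Refitting candidate models of increasing size and keeping the one with the smallest AIC mirrors the comparison of the two-, three- and four-compartment structures described above.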
34.
Distribuição empírica dos autovalores associados à matriz de interação dos modelos AMMI pelo método bootstrap não-paramétrico / Empirical distribution of the eigenvalues associated with the interaction matrix of AMMI models by the non-parametric bootstrap method. Hongyu, Kuang, 25 January 2012.
The genotype × environment (G×E) interaction was defined by Shelbourne (1972) as the variation among genotypes in response to different environmental conditions. Its magnitude in the phenotypic expression of a trait can reduce the correlation between phenotype and genotype, inflating the genetic variance and, in turn, the parameters that depend on it, such as heritability and genetic gain with selection. Studies of phenotypic adaptability and stability make it possible to particularize the effects of the G×E interaction at the level of genotype and environment, identifying the relative contribution of each to the total interaction. Several statistical methodologies have been proposed for interpreting the G×E interaction arising from a group of cultivars tested in several environments. Among them, the AMMI models (Additive Main Effects and Multiplicative Interaction Model) stand out, having gained wide applicability in recent years. The AMMI model is a uni-multivariate method that combines an analysis of variance for the main effects, the effects of genotypes (G) and environments (E), with multiplicative effects for the genotype × environment interaction, obtained through the singular value decomposition (SVD). This multivariate technique is based on the eigenvalues and eigenvectors of the G×E interaction matrix.
Araujo and Dias (2005) identified a problem of overestimation and underestimation of eigenvalues estimated in the conventional way. Efron (1979) proposed a numerical resampling technique called the bootstrap to assess such uncertainties. The bootstrap approximates the distribution of a function of the observations from the empirical distribution of the data; with it, the standard error of an estimate and confidence intervals can be obtained, in order to make inferences about the parameters in question. The aim of this work is to study the effect of the G×E interaction, evaluate the adaptability and stability of genotypes in different environments through the AMMI model with biplot graphical analysis, find the empirical distribution of the eigenvalues, and compute confidence intervals using the non-parametric bootstrap. The study of the empirical distribution of the eigenvalues will make it possible to validate the hypothesis tests proposed in the literature to identify the number of IPCA (Incremental Principal Component Analysis) axes for the selection of AMMI models, and to propose a test for model selection.
35.
Récepteur radiofréquence basé sur l'échantillonnage parcimonieux pour de l'extraction de caractéristiques dans les applications de radio cognitive / Radiofrequency receiver based on compressive sampling for feature extraction in cognitive radio applications. Marnat, Marguerite, 29 November 2018.
This work deals with radiofrequency receivers based on compressive sampling for feature extraction in cognitive radio. Compressive sampling is a paradigm shift in analog-to-digital conversion that bypasses the Nyquist sampling frequency. In this work, estimations are carried out directly on the compressed samples, given the prohibitive cost of reconstructing the input signal. First, the receiver architecture is considered, in particular through the choice of the mixing codes of the Modulated Wideband Converter (MWC). A high-level analysis of the properties of the sensing matrix (coherence, to reduce the number of measurements, and isometry, for robustness to noise) is carried out and validated on a simulation platform. Finally, parametric estimation from the compressed samples is tackled through the Cramér-Rao lower bound on the variance of unbiased estimators. A closed-form expression of the Fisher information matrix is established under certain assumptions and makes it possible to dissociate the effects of compression and of diversity creation. The influence of the compressive acquisition process on estimation bounds, in particular the coupling between parameters and spectral leakage, is illustrated by example.
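Under a linear-Gaussian observation model, the Fisher matrix after compression takes a simple form that separates the sensing matrix from the signal Jacobian. The sketch below computes it for an invented two-parameter sinusoid observed through a random sensing matrix; it is not the MWC model analyzed in the thesis.

```python
# Cramér-Rao bound for parameters observed through random compression:
# y = Phi s(theta) + noise, so F = J^T Phi^T Phi J / sigma^2, with J the
# Jacobian of s. Signal model and all sizes here are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n, m, sigma = 256, 64, 0.1
t = np.arange(n) / n
Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))   # random sensing matrix

def s(a, f):                                           # amplitude, frequency
    return a * np.sin(2 * np.pi * f * t)

a0, f0, eps = 1.0, 12.0, 1e-6
J = np.column_stack([                                  # numerical Jacobian
    (s(a0 + eps, f0) - s(a0, f0)) / eps,
    (s(a0, f0 + eps) - s(a0, f0)) / eps,
])
F = J.T @ Phi.T @ Phi @ J / sigma**2                   # Fisher information
crb = np.diag(np.linalg.inv(F))                        # variance lower bounds
```

The off-diagonal entry of `F` quantifies the coupling between the two parameters that compression can introduce, the phenomenon the abstract refers to.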
36.
"Métodos de estimação na teoria de resposta ao item" / Estimation methods in item response theory. Azevedo, Caio Lucidius Naberezny, 27 February 2003.
In this work we present the most important estimation methods for some classes of item response models (both dichotomous and polytomous) and discuss some properties of these methods. To compare the performance of the methods, we conducted appropriate simulations.
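As a concrete instance of one such method, maximum-likelihood estimation of a respondent's ability under the two-parameter logistic (2PL) dichotomous model can be sketched as below; the item parameters and responses are invented for illustration.

```python
# Maximum-likelihood ability estimation under the two-parameter logistic
# (2PL) IRT model, a dichotomous model of the kind surveyed in the work.
# Item parameters and the response pattern below are invented.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0])      # discriminations (assumed known)
b = np.array([-1.0, 0.0, 0.5, 1.0])     # difficulties (assumed known)
u = np.array([1, 1, 1, 0])              # observed item responses

def neg_loglik(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL response probability
    return -np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))

# The 2PL log-likelihood is concave in theta, so a bounded scalar search
# finds the MLE.
theta_hat = minimize_scalar(neg_loglik, bounds=(-4.0, 4.0),
                            method="bounded").x
```

In practice item parameters are unknown too, and joint or marginal maximum likelihood (often via EM) is used; the scalar search above is only the ability step.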
38.
Parametric estimation of randomly compressed functions. Mantzel, William, 20 September 2013.
Within the last decade, a new type of signal acquisition has emerged, called Compressive Sensing, that has proven especially useful in providing a recoverable representation of sparse signals. This thesis presents similar results for compressive parametric estimation. Here, signals known to lie on some unknown parameterized subspace may be recovered via randomized compressive measurements, provided the number of compressive measurements exceeds the product of the parametric dimension and the subspace dimension by a small factor, up to an additional logarithmic term. Beyond potential applications that simplify the acquisition hardware, there is also the potential to reduce the computational burden in other applications, and we explore one such application in depth in this thesis.
Source localization by matched-field processing (MFP) generally involves solving a number of computationally intensive partial differential equations. We introduce a technique that mitigates this computational workload by "compressing" these computations. Drawing on key concepts from the recently developed field of compressed sensing, we show how a low-dimensional proxy for the Green's function can be constructed by backpropagating a small set of random receiver vectors. The source can then be located by performing a number of short correlations between this proxy and the projection of the recorded acoustic data in the compressed space. Numerical experiments in a Pekeris ocean waveguide demonstrate that this compressed version of MFP is as effective as traditional MFP even when the compression is significant. The results are particularly promising in the broadband regime, where using as few as two random backpropagations per frequency performs almost as well as traditional broadband MFP, with the added benefit of generic applicability: the computationally intensive backpropagations may be computed offline, independently of the received signals, and may be reused to locate any source within the search grid area.
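The compressed correlation idea can be sketched with generic linear algebra. Below, a random matrix plays the role of the backpropagated receiver vectors and a toy matrix of unit-norm columns stands in for the Green's functions; several random draws are summed, echoing the broadband averaging described above. All sizes and data are invented, and no waveguide physics is modeled.

```python
# Compressed matched-field correlation sketch: project the received data and
# the "Green's function" replicas into a low-dimensional random space, then
# locate the source by short correlations there. Toy field, invented sizes.
import numpy as np

rng = np.random.default_rng(4)
n_rx, n_grid, d, n_draws = 64, 100, 24, 8
G = rng.normal(size=(n_rx, n_grid))                    # toy replica vectors
G /= np.linalg.norm(G, axis=0)                         # unit-norm columns
true_idx = 37
data = G[:, true_idx] + 0.05 * rng.normal(size=n_rx)   # noisy received field

ambiguity = np.zeros(n_grid)
for _ in range(n_draws):                               # several random
    A = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, n_rx))  # "backprojections"
    ambiguity += np.abs((A @ G).T @ (A @ data))        # short correlations
est_idx = int(np.argmax(ambiguity))
```

Each correlation costs O(d) instead of O(n_rx), and the matrices `A @ G` can be precomputed offline and reused for any received data vector, which is the computational saving the abstract describes.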
This thesis also introduces a round-robin approach for multi-source localization based on Matched-Field Processing. Each new source location is estimated from the ambiguity function after nulling from the data vector the current source location estimates using a robust projection matrix. This projection matrix effectively minimizes mean-square energy near current source location estimates subject to a rank constraint that prevents excessive interference with sources outside of these neighborhoods. Numerical simulations are presented for multiple sources transmitting through a generic Pekeris ocean waveguide that illustrate the performance of the proposed approach which compares favorably against other previously published approaches. Furthermore, the efficacy with which randomized back-propagations may also be incorporated for computational advantage (as in the case of compressive parametric estimation) is also presented.
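A plain orthogonal-complement version of the nulling step can be sketched as follows; the thesis's robust, rank-constrained projection is deliberately replaced here by the simplest possible projector, and the toy replica matrix and source amplitudes are invented.

```python
# Round-robin nulling sketch: after estimating one source, project its
# replica out of the data and re-run the correlation to find the next one.
# NOTE: a plain orthogonal complement is used here instead of the thesis's
# robust rank-constrained projection; data and sizes are invented.
import numpy as np

rng = np.random.default_rng(5)
n_rx, n_grid = 64, 100
G = rng.normal(size=(n_rx, n_grid))
G /= np.linalg.norm(G, axis=0)                     # unit-norm replicas
idx1, idx2 = 20, 70
data = 2.0 * G[:, idx1] + 1.0 * G[:, idx2]         # two sources, one stronger

first = int(np.argmax(np.abs(G.T @ data)))         # strongest source first
g1 = G[:, first]
P = np.eye(n_rx) - np.outer(g1, g1)                # null the first estimate
residual = P @ data
second = int(np.argmax(np.abs(G.T @ residual)))    # next source from residual
```

Iterating this estimate-then-null loop over the remaining residual is the round-robin idea; the rank constraint in the thesis serves to keep the projector from also suppressing sources near, but outside, the estimated neighborhoods.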
39.
Regresiniai ir degradaciniai modeliai patikimumo teorijoje ir išgyvenamumo analizėje / Regression and degradation models in reliability theory and survival analysis. Masiulaitytė, Inga, 27 May 2010.
In this doctoral thesis, redundant systems and degradation models are considered. To ensure high reliability of important elements of a system, stand-by units can be used; these units are switched in and operate in place of the failed main unit. Stand-by units can function under different conditions: "hot", "cold" or "warm" reserving. The thesis analyzes systems with "warm" stand-by units. Hypotheses of smooth commuting are formulated and goodness-of-fit tests for these hypotheses are constructed. Non-parametric and parametric point and interval estimation procedures are given. The thesis also considers fairly general degradation models, which describe the failure intensity of units as a function both of the applied stresses and of the degradation level, the latter being modeled by stochastic processes. Modeling and statistical estimation of the reliability of systems from failure time and degradation data are considered.
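The warm stand-by mechanism can be illustrated with a small Monte Carlo sketch. Thanks to the memoryless property of the exponential law, a unit that survives its warm period can be given a fresh full-rate lifetime at commutation; all rates below are arbitrary assumptions, not estimates from the thesis.

```python
# Monte Carlo sketch of a system with one main unit and one "warm" stand-by
# unit: the stand-by ages at a reduced exponential rate until commutation,
# then at the full rate. All rates are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(6)
lam_main, lam_warm, lam_full, n = 1.0, 0.2, 1.0, 100_000

t1 = rng.exponential(1.0 / lam_main, n)            # main unit lifetime
warm_fail = rng.exponential(1.0 / lam_warm, n)     # stand-by life in warm mode
after = rng.exponential(1.0 / lam_full, n)         # residual life once commuted
# If the stand-by failed while warm, the system dies with the main unit;
# otherwise, by memorylessness of the exponential law, it runs for an extra
# full-rate lifetime after commutation.
system = np.where(warm_fail > t1, t1 + after, t1)
reliability_at_1 = float(np.mean(system > 1.0))
```

For these rates the survival probability at t = 1 works out analytically to about 0.70, roughly double that of the unreserved main unit (e^-1 ≈ 0.37), which is the gain warm reserving is meant to buy.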