31 |
Probabilistic Model Checking for Temporal Logics in Weighted Structures. Wunderlich, Sascha, 23 September 2024
Model checking is a well-established method for automatic system verification. Besides the extensively studied qualitative case, there is also increasing interest in the quantitative analysis of system properties. Many important quantities can be formalised as accumulated values of weight functions. These measures include resource usage, such as energy consumption, and performance metrics, such as the cost-utility ratio or reliability guarantees. Different kinds of accumulation, like summation, averaging and ratios, are necessary to cover this diverse spectrum of quantities.
This work provides a general framework for the formalisation and verification of system models and property specifications with accumulative values.
On the modelling side, we rely on weighted extensions of well-known modelling formalisms. Besides weighted transition systems, we investigate weighted probabilistic models such as Markov chains and Markov decision processes (MDPs). The weights in this sense are functions, mapping each state or transition in the model to a value, e.g., a rational vector.
For the specification side, we provide a language in the form of an extension of temporal logic with new modalities that impose restrictions on the accumulated weight along path fragments. These fragments are regular and can be characterised by finite automata, so-called monitors. Specifically, we extend linear temporal logic (LTL) and (probabilistic) computation tree logic (CTL) with such constraints.
The framework also accommodates weaker formalisms, such as non-negative or integral weight functions and bounded accumulation. We chart the border of decidability of the model-checking problem for different combinations of these restrictions and give complexity results and algorithms for the decidable fragment.
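To make the accumulation idea concrete, here is a minimal Python sketch, not the thesis's formalism or tooling: a toy weighted transition system with vector-valued weights, a function that sums weights along a path fragment, and a check of a threshold constraint on one component. All states, weights, and bounds are invented.

```python
# Toy weighted transition system; all states, weights, and bounds invented.
# Weight function mapping each transition to a rational vector (energy, cost):
weight = {
    ("s0", "s1"): (2.0, 1.0),
    ("s1", "s2"): (3.0, 0.5),
    ("s1", "s0"): (-1.0, 0.0),
    ("s2", "s2"): (0.0, 0.0),
}

def accumulate(path):
    """Sum the weight vectors along the consecutive transitions of a path."""
    total = [0.0, 0.0]
    for src, dst in zip(path, path[1:]):
        total[0] += weight[(src, dst)][0]
        total[1] += weight[(src, dst)][1]
    return tuple(total)

def holds(path, bound):
    """Accumulation constraint: total energy along the fragment stays <= bound."""
    energy, _cost = accumulate(path)
    return energy <= bound

print(accumulate(["s0", "s1", "s2"]))        # (5.0, 1.5)
print(holds(["s0", "s1", "s2"], bound=6.0))  # True
```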
An implementation of the model-checking algorithms on top of the popular probabilistic model checker PRISM is provided. We also investigate several optimization techniques that can be applied to a broad range of formula patterns. The practical behaviour of the implementation and its optimization methods is put to the test by a set of scaling experiments for each model type.

1. Introduction
1.1. Goal of the Thesis
1.2. Main Contributions
1.3. Related Work
1.4. Outline
1.5. Resources
2. Preliminaries
2.1. Modeling Formalisms
2.2. Finite Automata
2.3. Propositional Logic
2.4. Temporal Logics
2.4.1. Linear Temporal Logic
2.4.2. Computation Tree Logic
2.5. Model-Checking Problems
2.5.1. Markov Decision Processes
2.5.2. Markov Chains
2.5.3. Transition Systems
2.5.4. Calculate Probabilities
3. Specifications with Weight Accumulation
3.1. Weight Constraints
3.1.1. Syntax of Weight Constraints
3.1.2. Weighted Models
3.1.3. Interpretation of Weight Constraints
3.1.4. Properties of Weight Constraints
3.2. Monitor Automata
3.2.1. Automata Classes
3.2.2. Observing WMDP Paths
3.3. Variants
3.3.1. Weight Ratios
3.3.2. Other Linear Accumulation Operators
3.3.3. Other Weight Combinations
3.3.4. Filtered Semantics
4. Linear Temporal Logic with Accumulation
4.1. Syntax and Semantics of AccLTL
4.1.1. Syntax of AccLTL
4.1.2. Semantics of AccLTL
4.1.3. Past Variant
4.1.4. Transformation of Weight Functions
4.1.5. Examples for AccLTL Formulae
4.2. Decidability Results for Accumulation LTL
4.2.1. Encoding the Post Correspondence Problem
4.2.2. Reduction of the AccLTL Model-Checking Problem
4.3. Complexity Results for Bounded Accumulation LTL
4.3.1. Transformation to Unweighted MDP and LTL
4.3.2. Reduction to LTL Model-Checking Problems
4.3.3. Algorithm
4.4. Decidability Results for Conic Accumulation LTL and RMDPs
4.4.1. Transformation to Unweighted MDP and LTL
4.4.2. Simple Weight Constraints
4.4.3. 1-dimensional Weight Constraints
4.5. NP-hard and coNP-hard Formulae for WTS and WMCs
4.5.1. Formulae for WTS
4.5.2. Formulae for WMC
4.6. Efficiently Decidable Patterns
4.7. Summary
5. Computation Tree Logic with Accumulation
5.1. Syntax and Semantics
5.1.1. Syntax and Semantics of AccCTL
5.1.2. Syntax and Semantics of AccPCTL
5.2. Decidability Results for Accumulation (P)CTL
5.3. Complexity Results for Bounded Accumulation (P)CTL
5.3.1. Weighted Markov Decision Processes
5.3.2. Weighted Markov Chains
5.3.3. Weighted Transition Systems
5.4. Decidability Results for Conic Accumulation (P)CTL and RMDPs
5.5. Summary
6. Implementation and Experiments
6.1. Implementation Details
6.1.1. Formula Expression
6.1.2. Model Construction
6.2. Optimizations
6.2.1. Single Track Method
6.2.2. Rewriting Without Until
6.2.3. Monitor Filtering
6.2.4. Detection of Optimization Methods
6.3. Scaling Experiments
6.3.1. Scaling Dimensions
6.3.2. Setting
6.3.3. Model Description
6.3.4. Input Size
6.3.5. Optimization Effects
6.3.6. Filtering
7. Conclusions
7.1. Summary
7.2. Outlook and Future Work
A. Bibliography
B. Material for the experiments
B.1. Environment for the Experiments
B.1.1. Container Image
B.1.2. Model Definitions
|
32 |
Formal methods for the analysis of wireless network protocols. Fruth, Matthias, January 2011
In this thesis, we present novel software technology for the analysis of wireless networks, an emerging area of computer science. To address the widely acknowledged lack of formal foundations in this field, probabilistic model checking, a formal method for verification and performance analysis, is used. In contrast to testing and simulation, it systematically explores the full state space and therefore allows reasoning about all possible behaviours of a system. This thesis contributes to the design, modelling, and analysis of ad-hoc networks and randomised distributed coordination protocols. First, we present a new hybrid approach that effectively combines probabilistic model checking with state-of-the-art models from the simulation community in order to improve the reliability of design and analysis of wireless sensor networks and their protocols. We describe algorithms for the automated generation of models for both analysis methods and their implementation in a tool. Second, we study spatial properties of wireless sensor networks, mainly with respect to Quality of Service and energy properties. Third, we investigate the contention resolution protocol of the networking standard ZigBee. We build a generic stochastic model for this protocol and analyse its Quality of Service and energy properties. Furthermore, we assess the applicability of different interference models. Fourth, we explore slot allocation protocols, which serve as a bandwidth allocation mechanism for ad-hoc networks. We build a generic model for this class of protocols, study real-world protocols, and optimise protocol parameters with respect to Quality of Service and energy constraints. We combine this with the novel formalisms for wireless communication and interference models, and finally we optimise local (node) and global (network) routing policies. This is the first application of probabilistic model checking both to protocols of the ZigBee standard and to protocols for slot allocation.
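As a flavor of what exhaustive probabilistic analysis means in contrast to simulation (a generic sketch, not one of the thesis's PRISM models): the complete transition matrix of a tiny invented contention round, two stations picking among K slots, is analysed by linear algebra, so the computed probabilities account for every behaviour rather than a sampled subset.

```python
import numpy as np

# Invented model: each round, two stations independently pick one of K
# slots; distinct slots mean success, a shared slot means another round.
K = 4
p_success = (K - 1) / K          # probability the two picks differ

# States: 0 = contending, 1 = done (absorbing).
P = np.array([[1 - p_success, p_success],
              [0.0,           1.0]])

dist = np.array([1.0, 0.0])      # start in the contending state
for n in range(1, 6):
    dist = dist @ P              # exact distribution after n rounds
    print(f"P(success within {n} rounds) = {dist[1]:.4f}")
```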
|
33 |
Optimization of statistical potentials for a structurally constrained evolutionary model. Bonnard, Cécile, 5 January 2010
In the field of molecular evolution, so-called structurally constrained (SC) models have been developed. Expressed at the codon level, they explicitly separate the mutation factor (applied to the nucleotide sequence) from the selection factor (applied to the encoded protein sequence). The selection factor is described as a function linking the structure and the sequence of the protein, via a statistical potential. However, the whole evolutionary model depends on the expression of this potential, and one can ask whether one potential would be better suited than another. This thesis develops a probabilistic framework for optimizing statistical potentials, especially meant for protein design, using a maximum likelihood approach. The statistical potential used here is composed of a contact potential and a solvent-accessibility potential, but the probabilistic framework generalizes easily to more complex potentials. First the framework is defined, then an algorithmic enhancement is proposed, and finally the framework is extended to take misfolded structures (decoys) into account. The framework defined in this thesis and in related work makes it possible to compare different optimization methods of statistical potentials for SC models, using cross-validation and Bayes factor comparisons.
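As a loose illustration of the maximum-likelihood setup (a toy, not the thesis's potentials or data): under a Boltzmann model p(sequence | structure) proportional to exp(-E), with E given by a pairwise contact potential, the likelihood can be maximized directly when the alphabet and chain are tiny enough to enumerate the partition function exactly. A small ridge term keeps the toy problem well-posed; all counts and contacts are invented.

```python
import itertools

import numpy as np
from scipy.optimize import minimize

L_SEQ, A = 4, 2                         # chain length, alphabet size
contacts = [(0, 2), (1, 3)]             # invented contact map
data = [(0, 1, 0, 1), (0, 1, 1, 1)]     # invented "native" sequences

def energy(seq, eps):
    # Pairwise contact energy of a sequence under potential eps.
    return sum(eps[seq[i], seq[j]] for i, j in contacts)

def objective(flat_eps):
    # Negative log-likelihood of the data plus a small ridge term.
    eps = flat_eps.reshape(A, A)
    logZ = np.log(sum(np.exp(-energy(s, eps))
                      for s in itertools.product(range(A), repeat=L_SEQ)))
    nll = sum(energy(s, eps) + logZ for s in data)
    return nll + 0.01 * float(np.sum(flat_eps ** 2))

res = minimize(objective, np.zeros(A * A))
print(res.x.reshape(A, A))              # fitted pairwise parameters
```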
|
34 |
Efficient variable screening method and confidence-based method for reliability-based design optimization. Cho, Hyunkyoo, 1 May 2014
The objectives of this study are (1) to develop an efficient variable screening method for reliability-based design optimization (RBDO) and (2) to develop a new RBDO method incorporated with the confidence level for limited input data problems. The current research effort involves: (1) development of a partial output variance concept for variable screening; (2) development of an effective variable screening sequence; (3) development of estimation method for a confidence level of a reliability output; and (4) development of a design sensitivity method for the confidence level.
In the RBDO process, surrogate models are frequently used to reduce the number of simulations because the analysis of a simulation model takes a great deal of computational time. On the other hand, to obtain accurate surrogate models, we have to limit the dimension of the RBDO problem and thus mitigate the curse of dimensionality. Therefore, it is desirable to develop an efficient and effective variable screening method to reduce the dimension of the RBDO problem. In this study, it is found that output variance is critical for identifying important variables in the RBDO process. A partial output variance, an efficient approximation based on the univariate dimension reduction method (DRM), is proposed to calculate output variance efficiently. For variable screening, the variables that have larger partial output variances are selected as important variables. To determine important variables, hypothesis testing is used so that possible errors are contained at a user-specified error level. An appropriate number of samples for calculating the partial output variance is also proposed, and a quadratic interpolation method is studied in detail to make the variance calculation efficient. Using numerical examples, the performance of the proposed variable screening method is verified: it finds important variables efficiently and effectively.
Reliability analysis and RBDO require an exact input probabilistic model to obtain an accurate reliability output and RBDO optimum design. However, often only limited input data are available to generate the input probabilistic model in practical engineering problems. The insufficient input data induce uncertainty in the input probabilistic model, and this uncertainty causes the RBDO optimum to lose its confidence level. Therefore, it is necessary to treat the reliability output, defined as the probability of failure, as following a probability distribution. The probability of the reliability output is obtained with consecutive conditional probabilities of the input distribution type and parameters using a Bayesian approach. The approximate conditional probabilities are obtained under reasonable assumptions, and Monte Carlo simulation is applied to calculate the probability of the reliability output in practice. A confidence-based RBDO (C-RBDO) problem is formulated using the derived probability of the reliability output. In the C-RBDO formulation, the probabilistic constraint is modified to include both the target reliability output and the target confidence level. Finally, the design sensitivity of the confidence level, the new probabilistic constraint, is derived to support an efficient optimization process. Using numerical examples, the accuracy of the developed design sensitivity is verified, and it is confirmed that C-RBDO optimum designs incorporate appropriate conservativeness according to the given input data.
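As a rough illustration of the confidence idea, not the thesis's C-RBDO formulation: the sketch below estimates a probability of failure by Monte Carlo, places a Beta posterior on it, and reads off the confidence that the true value meets a target. The limit state, input distributions, and target are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented limit state: failure when g(X) = 3 - x1 - x2 < 0,
# with x1, x2 ~ Normal(1, 0.5), independent.
n = 10_000
x = rng.normal(1.0, 0.5, size=(n, 2))
failures = int(np.sum(3.0 - x[:, 0] - x[:, 1] < 0.0))
print(f"Monte Carlo estimate of P_f: {failures / n:.4f}")

# Jeffreys-prior Beta posterior on the failure probability:
posterior = stats.beta(0.5 + failures, 0.5 + n - failures)

# Confidence that the true P_f meets an (invented) target of 10%:
target = 0.10
print(f"P(P_f <= {target}) = {posterior.cdf(target):.3f}")
```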
|
35 |
A probabilistic model of flower fertility and factors affecting seed production in winter oilseed rape. Wang, Xiujuan, 8 June 2011
The number of pods (siliques) per plant and the number of seeds per pod are the yield components of winter oilseed rape that show the greatest variability. The production of a seed results from the combination of several physiological processes, namely the formation of ovules and pollen grains, the fertilization of ovules, and the development of young embryos. A problem at any of these stages can lead to the abortion of seeds or of the pod. The potential number of ovules per pod and the number of seeds reaching maturity appear to depend on the position of the pod within the plant architecture and on its time of appearance, but the complex developmental pattern of oilseed rape makes the analysis of causes and effects difficult. In this study, the variability of the following yield components is investigated: (a) the number of ovules per pod, (b) the number of seeds per pod, and (c) the number of pods per axis, as functions of, on the one hand, the position of the flower within the inflorescence and the position of the inflorescence on the stem, and, on the other hand, the time of appearance of the pod, which affects assimilate availability. Based on the biological processes of flower fertility, a probabilistic model is developed to simulate seed development. The number of pollen grains per flower can be inferred from the model, as can the factors that influence yield. Field experiments were conducted in 2008 and 2009. The number and position of opening flowers in the inflorescence were recorded from observations every two to three days during the flowering season. Different trophic states were created by pruning the main stem or the branches to study the effect of assimilate competition. The results show that the amount of available assimilates was the main determinant of seed and pod production. Assimilate allocation was significantly affected by the position of the pod within an inflorescence and by the position of the inflorescence on the stem. Furthermore, the parameter of the pollen number distribution indicated that seed production could be limited by pollination. Reduced ovule viability could explain the decrease in the number of pods and in the number of seeds per pod at the tip of the inflorescence. The proposed model could serve as a tool to study yield-improvement strategies in flowering plants.
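A very rough sketch of the kind of probabilistic fertility model described, with invented distributions and parameters: pollen arrival is taken as Poisson, ovule fertilization as a binomial depending on available pollen, and embryo survival as an independent thinning; lowering the pollen mean then reproduces a pollination-limited seed set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented toy version of a flower-fertility model: Poisson pollen arrival,
# binomial fertilization of ovules, then binomial survival of young embryos.
def simulate_flower(mean_pollen=60, n_ovules=30, p_fert=0.02, p_survive=0.7):
    pollen = rng.poisson(mean_pollen)
    # An ovule is fertilized if at least one of `pollen` grains succeeds:
    p_ovule = 1.0 - (1.0 - p_fert) ** pollen
    fertilized = rng.binomial(n_ovules, p_ovule)
    return rng.binomial(fertilized, p_survive)   # seeds reaching maturity

yields = [simulate_flower() for _ in range(10_000)]
print(f"mean seeds per pod: {np.mean(yields):.2f}")

# Pollination limitation: lowering mean pollen cuts the seed set.
yields_low = [simulate_flower(mean_pollen=15) for _ in range(10_000)]
print(f"mean seeds per pod (low pollen): {np.mean(yields_low):.2f}")
```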
|
36 |
Study of the compressive fatigue behavior of fiber reinforced concrete. Medeiros, Arthur, 24 August 2018
This work presents the results of a theoretical-experimental study performed in cooperation between the Pontifícia Universidade Católica do Rio de Janeiro and the Universidad de Castilla-La Mancha in Spain. The main goal was to verify the influence of the loading frequency on the compressive fatigue behavior of plain and fiber reinforced concrete (FRC). The motivation comes from the intention of building wind-energy generator towers one hundred meters in height using high-performance concrete as a cheaper alternative to steel. These towers are subjected to load and unload cycles at frequencies from 0.01 Hz to 0.3 Hz. The addition of fibers improves concrete properties such as tensile strength, reducing cracking. In the experimental study, three types of concrete were produced from the same matrix: a plain concrete and two FRCs, one with polypropylene fibers and one with steel fibers. One hundred twenty-four compressive fatigue tests were performed on cubic specimens with 100 mm edge length, divided into twelve series: three types of concrete and four frequencies (4 Hz, 1 Hz, 0.25 Hz and 0.0625 Hz). Comparing the number of cycles to failure shows that the loading frequency influences the compressive fatigue behavior and that the addition of fibers improves fatigue performance only at the lower frequencies. The steel fibers perform considerably better than the polypropylene ones. A probabilistic model was proposed to relate the fatigue parameters to the loading frequency, considering the statistical distributions of both the fatigue tests and the concrete mechanical properties. The model agrees well with the experimental results. In terms of the number of cycles N or of the strain history (through the secondary strain rate), the rupture is probabilistic, and there is a direct relation between N and the secondary strain rate. In practical terms, this relation makes it possible to estimate the number of cycles to failure without breaking the specimen.
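As a loose numerical illustration of the last point, with invented data rather than the thesis's model: if cycles to failure N and the secondary strain rate follow a power law, a log-log fit lets one estimate N from a strain-rate measurement taken well before rupture.

```python
import numpy as np

# Invented fatigue data: cycles to failure N and secondary strain rate
# for a handful of specimens, assumed to follow N proportional to rate^b.
rate = np.array([1e-6, 3e-6, 1e-5, 5e-5, 2e-4])
N = np.array([9.0e5, 2.8e5, 1.1e5, 2.0e4, 6.0e3])

# Least-squares fit of log10(N) = a + b * log10(rate):
b, a = np.polyfit(np.log10(rate), np.log10(N), 1)
print(f"fitted exponent b = {b:.2f}")

# Estimate cycles to failure for a new specimen from its measured
# secondary strain rate, without testing it to rupture:
new_rate = 2e-5
print(f"estimated N ~ {10 ** (a + b * np.log10(new_rate)):.0f}")
```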
|
37 |
Priors for discrete probabilistic models in agrarian sciences. Saraiva, Cristiane Almeida, 30 March 2007
With the purpose of choosing priors better suited to discrete data, we study techniques for prior determination, namely Laplace's, Jeffreys's and Haldane's methods, which yield conjugate priors. A sample of ten poultry farms among the fifty-three in the State of Pernambuco was taken to estimate the probability of a commercial (large-type) egg. Since eggs are graded as industrial, small, medium, large, extra and jumbo, we grouped the industrial, small and medium grades as small, and the large, extra and jumbo grades as large. Assuming the sample data follow a binomial distribution and using priors determined by the methods above, we computed the mean, standard deviation, 95% credible interval and its width with the software WinBUGS 1.4. For each method, 20,000 iterations were used, the first 10,000 being discarded as burn-in; the chain reached equilibrium at around 12,500 iterations. The estimated parameter, approximately p = 0.664, was similar under the Laplace, Jeffreys and Haldane methods.
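For context, a generic textbook sketch rather than the thesis's WinBUGS code: for binomial data these three priors are the conjugate Beta distributions Beta(1, 1) (Laplace), Beta(1/2, 1/2) (Jeffreys) and the improper Beta(0, 0) (Haldane), so the posterior is Beta(a + k, b + n - k) in closed form and the reported quantities can be computed without MCMC. The counts below are invented, chosen only so the posterior mean lands near the reported p = 0.664.

```python
from scipy import stats

# Invented counts: k large eggs out of n sampled.
n, k = 500, 332

priors = {
    "Laplace  Beta(1, 1)":     (1.0, 1.0),
    "Jeffreys Beta(0.5, 0.5)": (0.5, 0.5),
    "Haldane  Beta(0, 0)":     (0.0, 0.0),  # improper; posterior proper since 0 < k < n
}

for name, (a, b) in priors.items():
    post = stats.beta(a + k, b + n - k)     # conjugate Beta posterior
    lo, hi = post.ppf([0.025, 0.975])
    print(f"{name}: mean={post.mean():.3f}, sd={post.std():.4f}, "
          f"95% CI=({lo:.3f}, {hi:.3f}), width={hi - lo:.3f}")
```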
|
38 |
Stochastic modeling, in continuum mechanics, of the inclusion-matrix interphase from molecular dynamics simulations. Le, Tien-Thinh, 21 October 2015
This work is concerned with the stochastic modeling and identification of the elastic properties in the so-called interphase region surrounding the inclusions in nanoreinforced composites. For the sake of illustration, a prototypical nanocomposite made up of a model polymer matrix filled with a silica nanoinclusion is considered. Molecular Dynamics (MD) simulations are first performed in order to gain physical insight into the local conformation of the polymer chains in the vicinity of the inclusion surface. In addition, a virtual mechanical testing procedure is proposed to estimate realizations of the apparent stiffness tensor associated with the MD simulation box. An information-theoretic probabilistic representation is then proposed as a surrogate model mimicking the spatial fluctuations of the elasticity field within the interphase. The hyperparameters defining this model are subsequently calibrated by solving, in a sequential manner, two inverse problems involving a computational homogenization scheme. The first problem, related to the mean model, is formulated in a deterministic framework, whereas the second involves a statistical metric allowing the dispersion parameter and the spatial correlation lengths to be estimated. It is shown in particular that the spatial correlation length in the radial direction is roughly equal to the interphase thickness, hence showing that the scales under consideration are not well separated. The calibration results are finally refined by taking into account, by means of a random matrix model, the MD finite-sampling noise.
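A minimal sketch of the kind of object involved, not the thesis's information-theoretic model: sampling a one-dimensional Gaussian random field with exponential covariance and a prescribed correlation length, the parameter whose calibrated value turns out to be comparable to the interphase thickness. Grid and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample a 1D stationary Gaussian random field with exponential covariance
# C(r) = sigma^2 * exp(-|r| / L_c) on a grid across an "interphase" region.
n, thickness = 200, 2.0            # grid points; interphase thickness (invented)
x = np.linspace(0.0, thickness, n)
sigma, L_c = 1.0, 2.0              # std dev; correlation length ~ thickness

r = np.abs(x[:, None] - x[None, :])
C = sigma**2 * np.exp(-r / L_c)

# Draw a realization via the Cholesky factor of the covariance matrix:
chol = np.linalg.cholesky(C + 1e-10 * np.eye(n))  # jitter for stability
field = chol @ rng.standard_normal(n)

# With L_c comparable to the thickness, values across the region stay
# strongly correlated: the scales are not separated.
print(f"corr(end points) ~ {np.exp(-thickness / L_c):.2f}")
```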
|
39 |
Integration of uncertainties in fatigue crack harmfulness quantification. Boutet, Pierre, 15 December 2015
In industrial plants, regular inspections are planned to assess the internal state of installations. If cracks are revealed, it is desirable to know whether the structure can still be used or whether a degraded mode of operation should be considered. Starting from a linear elastic fracture mechanics model, the work presented studies the scatter of the remaining life of cracked parts due to uncertainties in the parameters of the prediction model. The initial crack size, the material properties and the input parameters of Paris' law were considered as random variables, and their distributions were experimentally identified and fitted with suitable statistical laws. Time-of-flight diffraction (TOFD) ultrasonic testing and a field-measurement technique based on digital image correlation (DIC) were used to monitor the propagation of a crack initiated from a notch in specimens subjected to uniaxial cyclic loading. Experimental crack-length results were used to initialize the computations and to validate the numerical results. Both the distribution of crack length after a given number of loading cycles and the distribution of the number of cycles leading to a given crack length were obtained from a Monte Carlo method applied to the prediction model. Fitting these distributions with log-normal laws provided analytical tools for probabilistic crack-propagation assessment, allowing risk mapping and the evaluation of the studied component's reliability evolution. Last, the effect of updating the knowledge of the crack length during the component's life was studied, in terms of assessment uncertainty and predicted extension of residual life. In particular, to limit the cost of non-destructive inspection campaigns in industrial cases, a reliability-based strategy was proposed to optimize this knowledge updating.
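A compact sketch of the kind of Monte Carlo propagation described, with all distributions and constants invented and the stress intensity factor taken as the classical K = Y * sigma * sqrt(pi * a): each sample draws the initial crack size and the Paris parameters, integrates da/dN = C (dK)^m in blocks of cycles up to a critical length, and a log-normal is fitted to the resulting lives.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Invented inputs: Paris' law da/dN = C * dK^m, dK = Y * dS * sqrt(pi * a).
Y, dS = 1.12, 100.0           # geometry factor; stress range (MPa)
a_crit = 0.025                # critical crack length (m)

def cycles_to_failure(a0, C, m, block=1000):
    """Integrate Paris' law in blocks of cycles until a reaches a_crit."""
    a, N = a0, 0
    while a < a_crit:
        dK = Y * dS * np.sqrt(np.pi * a)    # MPa * sqrt(m)
        a += block * C * dK**m
        N += block
    return N

# Monte Carlo over uncertain initial crack size and Paris parameters:
samples = []
for _ in range(2000):
    a0 = rng.lognormal(mean=np.log(2e-3), sigma=0.2)   # ~2 mm initial crack
    C = rng.lognormal(mean=np.log(5e-12), sigma=0.3)   # Paris coefficient
    m = rng.normal(3.0, 0.1)                           # Paris exponent
    samples.append(cycles_to_failure(a0, C, m))

shape, loc, scale = stats.lognorm.fit(samples, floc=0)
print(f"median life ~ {scale:.0f} cycles, log-std ~ {shape:.2f}")
```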
|
40 |
Tolerance analysis of complex mechanisms - Manufacturing imperfections modeling for a realistic and robust geometrical behavior modeling of the mechanisms. Goka, Edoh, 12 June 2019
Tolerance analysis aims to verify, during the design phase, the impact of individual tolerances on the assembly and functional requirements of a mechanical system. Manufactured products have several types of contacts and their geometry is imperfect, which may lead to assembly and functional failures. Traditional methods for tolerance analysis do not consider form defects. This thesis proposes a new tolerance-analysis procedure that takes form defects and the geometrical behavior of the different types of contacts into account. A method is first proposed to model the form defects, so as to make the simulations more realistic. These form defects are then integrated into the geometrical behavior model of an over-constrained mechanical system, considering the different types of contacts; indeed, the contacts behave differently once the imperfections are considered. Monte Carlo simulation coupled with an optimization technique is the method chosen to perform the tolerance analysis. This method, however, demands excessive computational effort. To overcome this problem, an approach using probabilistic models built with kernel density estimation is proposed, which reduces the computation time significantly.
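A small sketch of the kernel-density idea in this setting, with an invented stand-in for the costly gap computation: a modest number of expensive Monte Carlo evaluations of a functional characteristic feed a KDE, and the cheap surrogate density is then reused to estimate a non-conformity rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Invented stand-in for the expensive step: the functional gap of an
# assembly computed from part deviations (in the real procedure, an
# optimization over contact configurations sits here).
def expensive_gap(n):
    d1 = rng.normal(0.0, 0.010, n)                        # part-1 deviation (mm)
    d2 = rng.normal(0.0, 0.015, n)                        # part-2 deviation (mm)
    form = 0.005 * np.sin(rng.uniform(0, 2 * np.pi, n))   # form-defect term
    return 0.1 + d1 - d2 + form                           # nominal gap 0.1 mm

# A modest number of expensive runs, then a cheap KDE surrogate:
gap = expensive_gap(300)
kde = stats.gaussian_kde(gap)

# Non-conformity rate P(gap < 0.06 mm) from the surrogate density:
print(f"estimated P(gap < 0.06) ~ {kde.integrate_box_1d(-np.inf, 0.06):.4f}")
```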
|