31

A probabilistic model of flower fertility and factors influencing seed production in winter oilseed rape

Xiujuan, Wang 08 June 2011
The number of pods per plant and the number of seeds per pod are the yield components of winter oilseed rape that show the greatest variability. Producing a seed results from the combination of several physiological processes, namely the formation of ovules and pollen grains, the fertilization of ovules, and the development of young embryos. A failure at any of these stages can lead to the abortion of seeds or of the pod. The potential number of ovules per pod and the number of seeds reaching maturity appear to depend on the position of the pod within the plant architecture and on its time of appearance, but the complex developmental pattern of oilseed rape makes the analysis of causes and effects difficult. In this study, the variability of the following yield components is investigated: (a) the number of ovules per pod, (b) the number of seeds per pod, and (c) the number of pods per axis, as a function of, on the one hand, the location of the flower within the inflorescence and the position of the inflorescence on the stem, and, on the other hand, the time of appearance of the pod, both of which affect assimilate availability. Based on the biological processes of flower fertility, a probabilistic model is developed to simulate seed development. The number of pollen grains per flower can be inferred from the model, as can the factors that influence yield. Field experiments were conducted in 2008 and 2009. The number and position of the flowers opening in each inflorescence were recorded from observations made every two to three days during the flowering season. Different trophic states were created by pruning the main stem or the branches in order to study the effect of competition for assimilates. The results show that the amount of available assimilates was the main determinant of seed and pod production. Assimilate allocation was significantly affected by the location of the pod within an inflorescence and by the location of the inflorescence on the rapeseed stem. In addition, the parameter of the pollen number distribution indicated that seed production could be limited by pollination. Reduced ovule viability could explain the decrease in the number of pods and in the number of seeds per pod at the end of the inflorescence. The proposed model could serve as a tool to study yield improvement strategies in flowering plants.
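
The abstract gives the model's logic rather than its equations: a seed is set when a viable ovule is fertilized, and pollen supply per flower enters as a distribution parameter. A minimal sketch of that logic as a simulation, with every parameter value a hypothetical placeholder rather than a figure from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: none of these values come from the thesis.
N_OVULES = 30          # potential ovules per pod
LAMBDA_POLLEN = 60.0   # mean pollen grains per flower (Poisson assumption)
P_VIABLE = 0.9         # probability that an ovule is viable
P_FERT = 0.05          # chance a given pollen grain fertilizes a given ovule

def seeds_per_pod():
    """Simulate one flower: viable ovules that receive at least one pollen grain."""
    pollen = rng.poisson(LAMBDA_POLLEN)
    # P(a given ovule is fertilized by at least one of the `pollen` grains)
    p_fertilized = 1.0 - (1.0 - P_FERT) ** pollen
    viable = rng.random(N_OVULES) < P_VIABLE
    fertilized = rng.random(N_OVULES) < p_fertilized
    return int(np.sum(viable & fertilized))

samples = [seeds_per_pod() for _ in range(10_000)]
print(f"mean seeds per pod: {np.mean(samples):.1f}")
```

In a model of this shape, lowering LAMBDA_POLLEN reproduces the pollination-limited regime the abstract describes, and lowering P_VIABLE reproduces the pod abortion observed at the end of the inflorescence.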
32

STUDY OF THE COMPRESSIVE FATIGUE BEHAVIOR OF FIBER REINFORCED CONCRETE

ARTHUR MEDEIROS 24 August 2018
This work presents the results of a theoretical-experimental study performed in cooperation between the Pontifícia Universidade Católica do Rio de Janeiro and the Universidad de Castilla-La Mancha in Spain. The main goal was to verify the influence of the loading frequency on the compressive fatigue behavior of plain and fiber reinforced concrete (FRC). The motivation comes from the intention of building wind energy generator towers one hundred meters in height using high-performance concrete as a cheaper alternative to steel. These towers are subjected to load and unload cycles at frequencies from 0.01 Hz to 0.3 Hz. The addition of fibers improves concrete properties such as tensile strength, reducing cracking. In the experimental study, three types of concrete were produced from the same matrix: a plain concrete and two FRCs, one with polypropylene fibers and one with steel fibers. One hundred twenty-four compressive fatigue tests were performed on cubic specimens with 100 mm edge length, divided into twelve series: three types of concrete and four frequencies (4 Hz, 1 Hz, 0.25 Hz, and 0.0625 Hz). Comparing the number of cycles to failure, it is clear that the loading frequency influences the compressive fatigue behavior and that the addition of fibers improves fatigue performance only at the lower frequencies. The steel fibers performed considerably better than the polypropylene ones. A probabilistic model was proposed to relate the fatigue parameters to the loading frequency, considering the statistical distributions of both the fatigue tests and the concrete's mechanical properties. There is good agreement between the model and the experimental results. Failure is probabilistic in terms of the number of cycles N or of the strain history (through the secondary strain rate), and there is a direct relation between N and the secondary strain rate. This relation makes it possible to estimate the number of cycles to failure without breaking the specimen.
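
The practical claim here is the direct relation between N and the secondary strain rate, which lets N be estimated without running a specimen to rupture. A minimal sketch of how such a relation is typically exploited, a linear fit in log-log space, is given below on synthetic numbers; the thesis's actual model and data are not reproduced:

```python
import numpy as np

# Synthetic illustration only, not the thesis data.
rng = np.random.default_rng(1)
eps_rate = 10.0 ** rng.uniform(-6, -4, 40)       # secondary strain rate [1/s]
log_n = -0.9 * np.log10(eps_rate) + 0.5 + rng.normal(0, 0.15, 40)

# Fit log10(N) = a * log10(strain_rate) + b, the usual linear relation
# observed between cycles to failure and secondary strain rate.
a, b = np.polyfit(np.log10(eps_rate), log_n, 1)

def predict_cycles(rate):
    """Estimate cycles to failure from a measured secondary strain rate."""
    return 10.0 ** (a * np.log10(rate) + b)

print(f"fit: log10(N) = {a:.2f} * log10(rate) + {b:.2f}")
print(f"predicted N at rate 1e-5 1/s: {predict_cycles(1e-5):.0f}")
```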
33

Priors for discrete probabilistic models in agricultural sciences

SARAIVA, Cristiane Almeida 30 March 2007
Aiming to select more suitable priors for discrete data, we study techniques for prior determination, namely Laplace's method, Jeffreys' method, and Haldane's method, all of which yield conjugate priors. A sample of ten poultry farms was taken from the 53 in the State of Pernambuco in order to estimate the probability of commercial (large) eggs. Since eggs are graded as industrial, small, medium, large, extra, and jumbo, the grades were collapsed into two classes: industrial, small, and medium eggs were treated as small, and large, extra, and jumbo eggs as large. Assuming the sample data follow a binomial distribution and using priors determined by the methods above, the posterior mean, standard deviation, 95% credible interval, and interval width were computed with the WinBUGS 1.4 software. For each method, 20,000 iterations were run, of which the first 10,000 were discarded; the chain was observed to reach equilibrium by 12,500 iterations. The estimated mean of the parameter p was similar under the Laplace, Jeffreys, and Haldane methods, at approximately p = 0.664.
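
All three priors discussed are conjugate Beta distributions for the binomial likelihood, so the posterior is available in closed form and the reported quantities (posterior mean, standard deviation, 95% credible interval, and its width) can be checked without MCMC. A sketch with made-up counts chosen only to land near the reported p = 0.664:

```python
from scipy import stats

# Hypothetical counts: the abstract reports only the posterior mean (~0.664).
n, x = 1000, 664   # eggs sampled, eggs graded "large"

# Beta(a, b) priors: Laplace = uniform, Jeffreys = Beta(1/2, 1/2),
# Haldane = improper Beta(0, 0); all are conjugate to the binomial.
priors = {"Laplace": (1.0, 1.0), "Jeffreys": (0.5, 0.5), "Haldane": (0.0, 0.0)}

for name, (a, b) in priors.items():
    post = stats.beta(a + x, b + n - x)        # conjugate posterior update
    lo, hi = post.interval(0.95)               # 95% credible interval
    print(f"{name:8s} mean={post.mean():.4f} "
          f"95% CI=({lo:.4f}, {hi:.4f}) width={hi - lo:.4f}")
```

With the Haldane prior the posterior is proper as long as both successes and failures are observed, which is the case here; for any moderately large sample all three posteriors nearly coincide, consistent with the abstract's finding.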
34

Stochastic modeling, in continuum mechanics, of the inclusion-matrix interphase from molecular dynamics simulations

Le, Tien-Thinh 21 October 2015
This work is concerned with the stochastic modeling and identification of the elastic properties in the so-called interphase region surrounding the inclusions in nanoreinforced composites. For the sake of illustration, a prototypical nanocomposite made up of a model polymer matrix filled with a silica nanoinclusion is considered. Molecular Dynamics (MD) simulations are first performed in order to gain physical insight into the local conformation of the polymer chains in the vicinity of the inclusion surface. In addition, a virtual mechanical testing procedure is proposed so as to estimate realizations of the apparent stiffness tensor associated with the MD simulation box. An information-theoretic probabilistic representation is then proposed as a surrogate model for mimicking the spatial fluctuations of the elasticity field within the interphase. The hyperparameters defining the aforementioned model are subsequently calibrated by solving, in a sequential manner, two inverse problems involving a computational homogenization scheme. The first problem, related to the mean model, is formulated in a deterministic framework, whereas the second one involves a statistical metric allowing the dispersion parameter and the spatial correlation lengths to be estimated. It is shown in particular that the spatial correlation length in the radial direction is roughly equal to the interphase thickness, hence showing that the scales under consideration are not well separated. The calibration results are finally refined by taking into account, by means of a random matrix model, the MD finite-sampling noise.
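
The thesis models a tensor-valued elasticity field; as a scalar caricature, the key ingredients (a positive stiffness field, a dispersion parameter, and a radial correlation length comparable to the interphase thickness) can be sketched as a log-normal random field. All numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical values: an illustration of the modeling idea, not the thesis model.
n_pts = 200
r = np.linspace(0.0, 4.0, n_pts)   # radial coordinate [nm]
ell = 1.0                          # correlation length ~ interphase thickness [nm]
delta = 0.2                        # dispersion parameter (coefficient of variation)
c_mean = 5.0                       # mean stiffness in the interphase [GPa]

# Gaussian field with exponential covariance, sampled via Cholesky factorization.
cov = delta**2 * np.exp(-np.abs(r[:, None] - r[None, :]) / ell)
g = np.linalg.cholesky(cov + 1e-10 * np.eye(n_pts)) @ rng.standard_normal(n_pts)

# Log-normal transform keeps the stiffness field positive.
c_field = c_mean * np.exp(g - 0.5 * delta**2)
print(f"stiffness field: mean={c_field.mean():.2f} GPa, std={c_field.std():.2f} GPa")
```

With ell of the same order as the interphase thickness, a single realization shows no clear scale separation between fluctuation and domain, which is the situation the calibration in the thesis identifies.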
35

Integration of uncertainties in fatigue cracks hazardness quantification

Boutet, Pierre 15 December 2015
In industrial plants, regular inspections are planned to assess the internal state of installations. If cracks are revealed, it is desirable to know whether the structure can still be used or whether a degraded mode of operation should be considered. Starting from a linear elastic fracture mechanics model, the work presented here studies the scatter in the remaining life of such cracked parts due to uncertainties in the parameters of the prediction model. The initial crack size, the material properties, and the input parameters of Paris' law were treated as random variables, and their distributions were identified experimentally and fitted with suitable statistical laws. Time Of Flight Diffraction (TOFD) ultrasonic testing and full-field measurements based on Digital Image Correlation (DIC) were used to monitor the propagation of a crack initiated from a notch in a specimen subjected to uniaxial cyclic loading. The experimental crack lengths were used to initialize the computations and to validate the numerical results. Both the distribution of crack length after a given number of loading cycles and the distribution of the number of cycles leading to a given crack length were obtained by applying a Monte Carlo method to the prediction model. Fitting these distributions with log-normal laws provided analytical tools for probabilistic crack-propagation assessment. This allowed risk mapping and the evaluation of the studied component's reliability evolution. Finally, the effect of updating the knowledge of crack length during the component's life was studied, in terms of both assessment uncertainty and predicted extension of residual life. In particular, to limit the cost of non-destructive inspection campaigns in industrial cases, a reliability-based strategy was proposed for optimizing when that knowledge is updated.
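
A minimal sketch of the Monte Carlo step described above: propagate a random initial crack length and random Paris law parameters through a closed-form integration of Paris' law, then fit the resulting lives with a log-normal distribution. The distributions and constants below are hypothetical placeholders, not the experimentally identified ones:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical distributions; the thesis identifies these experimentally.
N_SIM = 20_000
a0 = rng.lognormal(np.log(1e-3), 0.10, N_SIM)   # initial crack length [m]
C = rng.lognormal(np.log(5e-12), 0.30, N_SIM)   # Paris coefficient (K in MPa*sqrt(m))
m = rng.normal(3.0, 0.10, N_SIM)                # Paris exponent
A_CRIT = 10e-3                                   # critical crack length [m]
DSIGMA = 120.0                                   # stress range [MPa]
Y = 1.12                                         # geometry factor, taken constant

# Integrate da/dN = C * (Y * DSIGMA * sqrt(pi * a))^m in closed form
# (valid for m != 2 and constant Y), giving cycles to reach A_CRIT.
e = 1.0 - m / 2.0
n_fail = (A_CRIT**e - a0**e) / (e * C * (Y * DSIGMA * np.sqrt(np.pi)) ** m)

# A log-normal fit, as in the thesis, gives an analytical life distribution.
mu, sigma = np.log(n_fail).mean(), np.log(n_fail).std()
print(f"median life ~ {np.exp(mu):.3e} cycles, log-std {sigma:.2f}")
```

Quantiles of the fitted log-normal then serve directly for risk mapping: the probability that the crack exceeds a critical size before a planned inspection is one evaluation of the fitted distribution function.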
36

Tolerance analysis of complex mechanisms - Manufacturing imperfections modeling for a realistic and robust geometrical behavior modeling of the mechanisms

Goka, Edoh 12 June 2019
Tolerance analysis aims to verify, during the design phase, the impact of individual tolerances on the assembly and functional requirements of a mechanical system. Manufactured products involve several types of contacts and their geometry is imperfect, which may lead to assembly or functional failure. Traditional tolerance-analysis methods do not consider form defects. This thesis proposes a new tolerance-analysis procedure that takes form defects into account and models the geometrical behavior of the different types of contacts. A method is first proposed to model form defects, making the simulations more realistic. These form defects are then integrated into the geometrical behavior model of an over-constrained mechanical system, considering the different types of contacts; indeed, the contacts behave differently once the imperfections are taken into account. Monte Carlo simulation coupled with an optimization technique is chosen to perform the tolerance analysis. This method is, however, computationally expensive. To overcome this problem, probabilistic models built with kernel density estimation are proposed, which reduce the computation time significantly.
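
A sketch of the kernel density estimation idea: a modest sample of the functional characteristic, obtained from the expensive Monte Carlo and optimization loop, is turned into a smooth surrogate distribution that can then be queried cheaply for failure probabilities. The response function and threshold below are stand-ins, not the thesis's mechanism models:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

# Stand-in for the costly Monte Carlo + optimization loop: each call would
# normally solve an assembly optimization; here, a cheap synthetic response.
def functional_gap(defects):
    return 0.5 + defects @ np.array([0.8, -0.3, 0.5]) + 0.05 * rng.standard_normal()

# A small "expensive" sample of the functional characteristic...
X = rng.normal(0.0, 0.02, size=(500, 3))            # form-defect parameters
y = np.array([functional_gap(x) for x in X])

# ...then a KDE surrogate of its distribution, cheap to query thereafter.
kde = gaussian_kde(y)

threshold = 0.45                                     # hypothetical requirement
p_fail = kde.integrate_box_1d(-np.inf, threshold)    # P(gap < threshold)
print(f"estimated non-functional probability: {p_fail:.4f}")
```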
37

A Study Of Localization And Latency Reduction For Action Recognition

Masood, Syed Zain 01 January 2012 (has links)
The success of recognizing periodic actions in single-person, simple-background datasets such as Weizmann and KTH has created a need for more complex datasets to push the performance of action recognition systems. In this work, we create a new synthetic action dataset and use it to highlight weaknesses in current recognition systems. Experiments show that introducing background complexity to action video sequences causes a significant degradation in recognition performance, and that this degradation cannot be fixed by fine-tuning system parameters or by selecting better feature points. Instead, we show that the problem lies in the spatio-temporal cuboid volume extracted from the interest point locations. Having identified the problem, we show how improved results can be achieved by simple modifications to the cuboids. The above method, however, requires near-perfect localization of the action within a video sequence. To achieve this objective, we present a two-stage weakly supervised probabilistic model for simultaneous localization and recognition of actions in videos. Different from previous approaches, our method is novel in that it (1) eliminates the need for manual annotations in the training procedure and (2) does not require any human detection or tracking in the classification stage. The first stage of our framework is a probabilistic action localization model which extracts the most promising sub-windows in a video sequence where an action can take place. We use a non-linear classifier in the second stage of our framework for the final classification task. We show the effectiveness of our proposed model on two well-known real-world datasets: the UCF Sports and UCF11 datasets. Another application of the weakly supervised probabilistic model proposed above is in the gaming environment. An important aspect of designing interactive, action-based interfaces is reliably recognizing actions with minimal latency. High latency causes the system's feedback to lag behind and thus significantly degrades the interactivity of the user experience. With slight modification to the weakly supervised probabilistic model we proposed for action localization, we show how it can be used to reduce latency when recognizing actions in Human-Computer Interaction (HCI) environments. This latency-aware learning formulation trains a logistic regression-based classifier that automatically determines distinctive canonical poses from the data and uses these to robustly recognize actions in the presence of ambiguous poses. We introduce a novel (publicly released) dataset for the purpose of our experiments. Comparisons of our method against both a Bag of Words and a Conditional Random Field (CRF) classifier show improved recognition performance for both pre-segmented and online classification tasks.
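
A toy sketch of the latency-reduction idea: classify online and commit as soon as a frame looks like a distinctive canonical pose, rather than waiting for the whole sequence. The features and weights below are random placeholders, not the learned model from the thesis:

```python
import numpy as np

rng = np.random.default_rng(5)

# Placeholder per-frame pose features and classifier weights. In the thesis,
# the weights are learned so that distinctive "canonical" poses score high
# while ambiguous poses stay close to uniform probability.
W = rng.normal(0, 1, size=(16, 3))          # 16-dim pose feature, 3 actions

def online_classify(frames, threshold=0.9):
    """Emit a label as soon as any action's probability passes the threshold."""
    for t, f in enumerate(frames):
        logits = f @ W
        p = np.exp(logits - logits.max())
        p /= p.sum()                          # softmax over actions
        if p.max() >= threshold:              # confident canonical pose seen
            return t, int(p.argmax())         # low-latency decision
    return len(frames) - 1, int(p.argmax())   # fall back to the last frame

frames = rng.normal(0, 1, size=(60, 16))      # one synthetic sequence
t, label = online_classify(frames)
print(f"decided action {label} at frame {t}")
```

The design trade-off is explicit: a lower threshold cuts latency but risks committing on an ambiguous pose, which is exactly what the latency-aware training is meant to control.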
38

Syngas ash deposition for a three-row film-cooled leading-edge turbine vane

Sreedhran, Sai Shrinivas 10 August 2010 (has links)
Coal gasification and combustion can introduce contaminants in the solid or molten state, depending on the gas clean-up procedures used, the coal composition, and the operating conditions. Combined with high temperatures and high gas-stream velocities, these byproducts can cause Deposition, Erosion, and Corrosion (DEC) of turbine components downstream of the combustor section. The objective of this dissertation is to use computational techniques to investigate the dynamics of ash deposition on a film-cooled leading-edge vane geometry. Large Eddy Simulation (LES) is used to model the flow field of the coolant jet-mainstream interaction, and the deposition of syngas ash in the leading-edge region of a turbine vane is modeled in a Lagrangian framework. The three-row leading-edge vane geometry is modeled as a symmetric semi-cylinder with a flat afterbody. One row of coolant holes is located along the stagnation line; the other two rows are located at ±21.3° from the stagnation line. The coolant is injected at 45° to the vane surface with 90° compound-angle injection. The coolant-to-mainstream density ratio is set to unity, and the freestream Reynolds number based on leading-edge diameter is 32,000. Coolant-to-mainstream blowing ratios (B.R.) of 0.5, 1.0, 1.5, and 2.0 are investigated. It is found that the stagnation cooling jets penetrate much further into the mainstream, both in the normal and lateral directions, than the off-stagnation jets at all blowing ratios. Jet dilution is characterized by turbulent diffusion and entrainment, and the strength of both mechanisms increases with blowing ratio. The adiabatic effectiveness in the stagnation region initially increases with blowing ratio but then generally decreases as the blowing ratio increases further. Immediately downstream of off-stagnation injection, the adiabatic effectiveness is highest at B.R. = 0.5. However, in spite of the larger jet penetration and dilution at higher blowing ratios, the larger mass of injected coolant increases the effectiveness with blowing ratio further downstream of the injection location. A novel deposition model, which integrates different sources of published experimental data into a holistic numerical model, is developed to predict ash deposition. The model computes ash sticking probabilities as a function of particle temperature and ash composition, and is validated against available experimental results on a flat plate inclined at 45°. Subsequently, the model is used to study ash deposition on the film-cooled leading-edge vane geometry at coolant-to-mainstream blowing ratios of 0.5, 1.0, 1.5, and 2.0, with ash particle sizes of 5, 7, and 10 μm. Under the conditions of the current simulations, the ash particles have Stokes numbers of order unity or less and hence are strongly affected by the flow and thermal fields generated by the coolant's interaction with the mainstream. Because of this, the stagnation coolant jets succeed in pushing and/or cooling the particles away from the surface, minimizing deposition and erosion in the stagnation region. Capture efficiencies for eight different ash compositions are investigated; among all the ash samples, the ND ash sample shows the highest capture efficiency due to its low softening temperature. A trend common to all particle sizes is that the percentage capture efficiency is lowest at a blowing ratio of 1.5, where the coolant succeeds in pushing the particles away from the surface. On further increasing the blowing ratio to 2.0, however, the percentage capture efficiency increases, as more particles are transported to the surface by strong mainstream entrainment by the coolant jets.
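
A caricature of the deposition model's core mechanism: the sticking probability rises as particle temperature approaches the composition-dependent softening temperature, and crossing coolant lowers particle temperature on the way to the wall. Everything below (temperatures, ramp shape, mixing model) is hypothetical and intended only to show the structure, not the calibrated model:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical temperatures [K]; illustrative only, not the thesis values.
T_SOFT = 1500.0     # ash softening temperature (composition dependent)
T_MAIN = 1700.0     # mainstream gas temperature
T_COOL = 900.0      # coolant temperature

def sticking_probability(t_particle, t_soft=T_SOFT):
    """Linear ramp: fully molten particles always stick, cold ones never.
    The thesis instead builds this curve from published experimental data."""
    return np.clip((t_particle - 0.8 * t_soft) / (0.2 * t_soft), 0.0, 1.0)

def capture_efficiency(blowing_ratio, n=100_000):
    """Lagrangian caricature: a particle's temperature depends on how much
    coolant it crossed; the mixing fraction is skewed by the blowing ratio."""
    mix = rng.beta(2.0, 2.0 * blowing_ratio, n)        # 1 = pure mainstream
    t_particle = T_COOL + mix * (T_MAIN - T_COOL)
    hits = rng.random(n) < 0.3                         # fraction reaching the wall
    return np.mean(sticking_probability(t_particle) * hits)

for br in (0.5, 1.0, 1.5, 2.0):
    print(f"B.R. {br}: capture efficiency ~ {capture_efficiency(br):.3f}")
```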
39

Tools and Techniques for Evaluating Reliability Trade-offs for Nano-Architectures

Bhaduri, Debayan 20 May 2004 (has links)
It is expected that nano-scale devices and interconnections will introduce unprecedented levels of defects in the substrates, and architectural designs need to accommodate the uncertainty inherent at such scales. This consideration motivates the search for new architectural paradigms based on redundancy-based defect-tolerant designs. However, redundancy is not always a solution to the reliability problem, and too much or too little redundancy may itself cause degradation in reliability. The key challenge lies in determining the granularity at which defect tolerance is designed, and the level of redundancy needed to achieve a specific level of reliability. Analytical probabilistic models for evaluating such reliability-redundancy trade-offs are error prone and cumbersome, and do not scale well for complex networks of gates. In this thesis we develop tools and techniques that can evaluate the reliability measures of combinational circuits and can be used to analyze reliability-redundancy trade-offs for different defect-tolerant architectural configurations. In particular, we have developed two tools: NANOPRISM, based on probabilistic model checking, and NANOLAB, a MATLAB-based tool. We also illustrate the effectiveness of our reliability analysis tools by pointing out certain anomalies that are counter-intuitive but easily discovered by these tools, thereby providing better insight into defect-tolerant design decisions. We believe that these tools will help further research and pedagogical interests in this area, expedite the reliability analysis process, and enhance the accuracy of establishing reliability-redundancy trade-off points.
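
The anomaly mentioned, that more redundancy can hurt, is easy to reproduce with a back-of-the-envelope model of cascaded triple modular redundancy (TMR) with an imperfect majority voter. This is a hand-rolled sketch, not output from NANOPRISM or NANOLAB:

```python
def tmr_reliability(p_gate, levels=1):
    """Reliability of cascaded TMR with an imperfect majority voter.
    Hypothetical single-gate model, not a reproduction of the thesis tools."""
    r = 1.0 - p_gate                        # reliability of one unreplicated gate
    voter = 1.0 - p_gate                    # the voter itself can also fail
    for _ in range(levels):
        r = (3 * r**2 - 2 * r**3) * voter   # majority vote over three copies
    return r

# More redundancy does not always help: with a noisy voter, TMR can score
# below the plain gate, and extra cascading can lower reliability further.
for p in (0.01, 0.05, 0.10):
    plain = 1.0 - p
    tmr = "  ".join(f"TMRx{k}={tmr_reliability(p, k):.4f}" for k in (1, 2, 3))
    print(f"p_fail={p}: plain={plain:.4f}  {tmr}")
```

In this toy model a single TMR level already falls below the unreplicated gate once the voter is as unreliable as the gates, which is precisely the kind of counter-intuitive trade-off point the tools are built to locate automatically.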
40

Computing Quantiles in Markov Reward Models

Ummels, Michael, Baier, Christel 10 July 2014
Probabilistic model checking mainly concentrates on techniques for reasoning about the probabilities of certain path properties or expected values of certain random variables. For the quantitative system analysis, however, there is also another type of interesting performance measure, namely quantiles. A typical quantile query takes as input a lower probability bound p ∈ ]0,1] and a reachability property. The task is then to compute the minimal reward bound r such that with probability at least p the target set will be reached before the accumulated reward exceeds r. Quantiles are well-known from mathematical statistics, but to the best of our knowledge they have not been addressed by the model checking community so far. In this paper, we study the complexity of quantile queries for until properties in discrete-time finite-state Markov decision processes with nonnegative rewards on states. We show that qualitative quantile queries can be evaluated in polynomial time and present an exponential algorithm for the evaluation of quantitative quantile queries. For the special case of Markov chains, we show that quantitative quantile queries can be evaluated in pseudo-polynomial time.
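
A sketch of the pseudo-polynomial scheme for the special case of Markov chains: tabulate, for increasing reward budgets r, the probability of reaching the target without the accumulated reward exceeding r, and return the first r whose probability meets the bound p. The chain below is a toy example with unit rewards on non-target states, not one from the paper:

```python
import numpy as np

# Toy Markov chain: states 0..3, target state 3, hypothetical numbers.
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.2, 0.0, 0.8],
              [0.3, 0.0, 0.4, 0.3],
              [0.0, 0.0, 0.0, 1.0]])
REWARD = np.array([1, 1, 1, 0])     # nonnegative state rewards, positive off-target
TARGET = 3

def quantile(start, p, r_max=1000):
    """Minimal r with P(reach TARGET, accumulated reward <= r) >= p.
    Dynamic programming over reward budgets: pseudo-polynomial in r_max,
    matching the complexity shown in the paper for quantitative queries."""
    n = len(P)
    x = np.zeros((r_max + 1, n))    # x[r, s] = P(reach TARGET from s, budget r)
    x[:, TARGET] = 1.0
    for r in range(1, r_max + 1):
        for s in range(n):
            if s != TARGET and REWARD[s] <= r:
                # spend REWARD[s] in s, then continue with the remaining budget
                x[r, s] = P[s] @ x[r - REWARD[s]]
        if x[r, start] >= p:
            return r
    return None                     # probability bound not met within r_max

print(quantile(start=0, p=0.9))
```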
