11

Acceleration for statistical model checking / Accélérations pour le model checking statistique

Barbot, Benoît 20 November 2014 (has links)
Ces dernières années, l'analyse de systèmes complexes critiques est devenue de plus en plus importante. En particulier, l'analyse quantitative de tels systèmes est nécessaire afin de pouvoir garantir que leur probabilité d'échec est très faible. La difficulté de l'analyse de ces systèmes réside dans le fait que leur espace d'état est très grand et que la probabilité recherchée est extrêmement petite, de l'ordre d'une chance sur un milliard, ce qui rend les méthodes usuelles inopérantes. Les algorithmes de Model Checking quantitatif sont les algorithmes classiques pour l'analyse de systèmes probabilistes. Ils prennent en entrée le système et son comportement attendu et calculent la probabilité avec laquelle les trajectoires du système correspondent à ce comportement. Ces algorithmes de Model Checking ont été largement étudiés depuis leur création. Deux familles d'algorithmes existent : le Model Checking numérique, qui réduit le problème à la résolution d'un système d'équations et permet de calculer précisément de petites probabilités, mais souffre du problème d'explosion combinatoire ; et le Model Checking statistique, basé sur la méthode de Monte-Carlo, qui se prête bien à l'analyse de très gros systèmes mais ne permet pas de calculer de petites probabilités. La contribution principale de cette thèse est le développement d'une méthode combinant les avantages des deux approches et qui renvoie un résultat sous forme d'intervalles de confiance. Cette méthode s'applique à la fois aux systèmes discrets et continus pour des propriétés bornées ou non bornées temporellement. Cette méthode est basée sur une abstraction du modèle qui est analysée à l'aide de méthodes numériques, puis le résultat de cette analyse est utilisé pour guider une simulation du modèle initial. Ce modèle abstrait doit à la fois être suffisamment petit pour être analysé par des méthodes numériques et suffisamment précis pour guider efficacement la simulation. Dans le cas général, cette abstraction doit être construite par le modélisateur. Cependant, une classe de systèmes probabilistes a été identifiée dans laquelle le modèle abstrait peut être calculé automatiquement. Cette approche a été implémentée dans l'outil Cosmos et des expériences sur des modèles de référence ainsi que sur une étude de cas ont été effectuées, qui montrent l'efficacité de la méthode. / In the past decades, the analysis of complex critical systems subject to uncertainty has become more and more important. In particular, the quantitative analysis of these systems is necessary to guarantee that their probability of failure is very small. As their state space is extremely large and the probability of interest is very small, typically less than one in a billion, classical methods do not apply to such systems. Model Checking algorithms are used for the analysis of probabilistic systems; they take as input the system and its expected behaviour, and compute the probability with which the system behaves as expected. These algorithms have been broadly studied. They can be divided into two main families: Numerical Model Checking and Statistical Model Checking. The former computes small probabilities accurately by solving linear equation systems, but does not scale to very large systems due to the state space explosion problem. The latter is based on Monte Carlo simulation and scales well to big systems, but cannot deal with small probabilities. The main contribution of this thesis is the design and implementation of a method combining the two approaches and returning a confidence interval for the probability of interest. This method applies to systems in both continuous and discrete time settings, for time-bounded and time-unbounded properties. All the variants of this method rely on an abstraction of the model; this abstraction is analysed by a numerical model checker and the result is used to steer Monte Carlo simulations on the initial model. The abstraction should be small enough to be analysed by numerical methods and precise enough to improve the simulation. It can be built by the modeller, or alternatively a class of systems can be identified in which an abstraction can be automatically computed. This approach has been implemented in the tool Cosmos, and the method was successfully applied to classical benchmarks and a case study.
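As a companion to the abstract above, here is a minimal, hypothetical sketch (in Python, not the thesis's Cosmos tool) of crude Monte Carlo statistical model checking with a Chernoff-Hoeffding confidence interval. It makes concrete why plain simulation cannot reach probabilities around one in a billion, which is what motivates steering the simulation with a numerically analysed abstraction.

```python
import math
import random

def monte_carlo_smc(simulate_once, n_samples, delta=0.01):
    """Crude statistical model checking: estimate P(property holds)
    from n_samples independent runs, with a (1 - delta) confidence
    interval derived from the Chernoff-Hoeffding bound."""
    successes = sum(simulate_once() for _ in range(n_samples))
    p_hat = successes / n_samples
    # Hoeffding: P(|p_hat - p| >= eps) <= 2 * exp(-2 * n * eps^2)
    eps = math.sqrt(math.log(2 / delta) / (2 * n_samples))
    return p_hat, (max(0.0, p_hat - eps), min(1.0, p_hat + eps))

# Toy stand-in for one trajectory of a system: a biased coin.
# With p = 1e-9, almost every batch of 10^6 runs observes 0 successes,
# so the returned interval is uselessly wide relative to p.
p_true = 1e-9
estimate, ci = monte_carlo_smc(lambda: random.random() < p_true, 1_000_000)
print(estimate, ci)
```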
12

Design and Analysis of Allocation Methods for Peer Assessment in Education / 相互評価における生徒への答案割り当て手法の開発と分析

Ohashi, Hideaki 23 March 2020 (has links)
Kyoto University / 0048 / New-system doctoral course / Doctor of Informatics / Kō No. 22576 / Jōhaku No. 713 / Shinsei||Jō||122 (University Library) / Department of Social Informatics, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Masatoshi Yoshikawa, Professor Hisashi Kashima, Professor Keishi Tajima / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
13

A statistical model to forecast short-term Atlantic hurricane intensity

Law, Kevin T. 08 August 2006 (has links)
No description available.
14

Vérification de la validité du concept de surface somme par une approche statistique du contact élastique entre deux surfaces rugueuses / Validity study of the sum surface concept using a statistical approach of elastic contact between two rough surfaces

Tran, Ich tach 26 January 2015 (has links)
Les propriétés de surface, particulièrement microgéométriques, jouent un rôle essentiel dans tous les systèmes tribologiques. L'analyse de la répartition des efforts de contact dans l'interface entre surfaces rugueuses est indispensable à la prédiction du frottement, de l'usure, de l'adhérence, des résistances de contact électrique et thermique… De nombreux modèles ont été proposés ces dernières décennies pour prédire les efforts entre aspérités de surfaces rugueuses. Parmi ces modèles, les modèles statistiques sont majoritairement développés en considérant le contact entre une surface rugueuse équivalente, la surface somme - qui tient compte des microgéométries des deux surfaces en contact ainsi que de leur matériau - et un plan lisse. Cependant la validité de cette modélisation n'a pas été clairement démontrée. L'objectif de notre étude a été de développer un modèle statistique de contact entre deux surfaces rugueuses isotropes aléatoires, puis de comparer les résultats obtenus pour ces deux surfaces avec ceux obtenus en considérant la surface somme définie classiquement à partir des deux surfaces rugueuses et un plan lisse. Les différences entre les résultats nous ont amenés à proposer une nouvelle modélisation de la surface somme. / Surface properties, particularly micro-geometry, play a key role in all tribological systems. The analysis of the distribution of contact forces at the interface between rough surfaces is essential for the prediction of friction, wear, adhesion, and electrical and thermal contact resistance. Many models have been proposed over the last decades to predict the forces between asperities of rough surfaces. Among these models, statistical models are mainly based on the contact between an equivalent rough surface, the sum surface - which combines the micro-geometries of the two surfaces in contact and their materials - and a smooth plane. However, the validity of this model has not been clearly demonstrated. The aim of our study was to develop a statistical model of the contact between two random isotropic rough surfaces and then compare the results with those obtained by considering the classically defined sum surface against a smooth plane. The differences between the results have led us to propose a new definition of the sum surface.
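For context, the classical sum-surface construction whose validity the thesis examines is conventionally stated as follows; this is the textbook definition, not the new model proposed in the work.

```latex
% Classical sum surface: the contact of two rough surfaces z_1, z_2 is
% replaced by one equivalent rough surface pressed against a smooth plane.
z_s(x,y) = z_1(x,y) + z_2(x,y), \qquad \sigma_s^2 = \sigma_1^2 + \sigma_2^2
% with the two materials combined into an equivalent elastic modulus E^*:
\frac{1}{E^*} = \frac{1-\nu_1^2}{E_1} + \frac{1-\nu_2^2}{E_2}
```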
15

"Determinação de um modelo para a taxa de carbonização transversal a grã para a madeira de E. citriodora e E. grandis" / Determination of model to the charring rate transversal to gran to E. citriodora and E. grandis wood.

Pinto, Edna Moura 01 August 2005 (has links)
A taxa na qual a madeira se converte em carvão é determinante para a avaliação da resistência ao fogo, pois o colapso de elementos estruturais de madeira e de seus derivados quando expostos ao fogo ocorre principalmente pela redução da área resistente da seção, devido à formação de carvão. A resistência ao fogo depende, portanto, das dimensões da seção transversal que é reduzida gradualmente ao ser exposta ao fogo. Vários países têm investido em pesquisas para a caracterização e determinação da taxa de carbonização com base em diferentes espécies. Nesse trabalho são apresentados dois modelos de taxa de carbonização para a madeira de Eucalyptus das espécies citriodora e grandis, de grande interesse para aplicação estrutural e plantadas no Brasil. Um para peças de madeira com pequena dimensão (17,2 cm x 17,2 cm x 6,0 cm) e outro para as vigas estruturais (0,16 m x 0,26 m x 2,00 m). A curva de aquecimento adotada foi a recomendada pela ASTM E-119. / The rate at which wood converts into char is decisive for evaluating fire resistance, because the failure of wooden structural elements and their derivatives exposed to fire occurs mainly through the reduction of the load-bearing cross-sectional area due to char formation. Fire resistance therefore depends on the cross-section dimensions, which are gradually reduced during fire exposure. Several countries have invested in research to characterize and determine the charring rate for different species. This work presents two charring rate models for the Eucalyptus species citriodora and grandis, which are of great structural interest and are planted in Brazil: one for small wood pieces (17.2 cm x 17.2 cm x 6.0 cm) and the other for structural beams (0.16 m x 0.26 m x 2.00 m). The adopted heating curve was the one recommended by ASTM E-119.
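For reference, the notional one-dimensional charring model that underlies such studies (stated here in the spirit of standard fire-design practice, not as the specific models fitted in this work) reads:

```latex
% One-dimensional charring: the char depth grows linearly with time,
x_{\mathrm{char}}(t) = \beta\, t
% so a section of initial width b_0 and height h_0, exposed to fire on
% all four sides, retains the residual load-bearing dimensions
b(t) = b_0 - 2\beta t, \qquad h(t) = h_0 - 2\beta t
% where \beta (mm/min) is the charring rate estimated for each species.
```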
16

Model Based Speech Enhancement and Coding

Zhao, David Yuheng January 2007 (has links)
In mobile speech communication, adverse conditions, such as noisy acoustic environments and unreliable network connections, may severely degrade the intelligibility and naturalness of the received speech and increase the listening effort. This thesis focuses on countermeasures based on statistical signal processing techniques. The main body of the thesis consists of three research articles, targeting two specific problems: speech enhancement for noise reduction and flexible source coder design for unreliable networks. Papers A and B consider speech enhancement for noise reduction. New schemes based on an extension to the auto-regressive (AR) hidden Markov model (HMM) for speech and noise are proposed. Stochastic models for speech and noise gains (excitation variance from an AR model) are integrated into the HMM framework in order to improve the modeling of energy variation. The extended model is referred to as a stochastic-gain hidden Markov model (SG-HMM). The speech gain describes the energy variations of the speech phones, typically due to differences in pronunciation and/or different vocalizations of individual speakers. The noise gain improves the tracking of the time-varying energy of non-stationary noise, e.g., due to movement of the noise source. In Paper A, it is assumed that prior knowledge of the noise environment is available, so that a pre-trained noise model is used. In Paper B, the noise model is adaptive and the model parameters are estimated on-line from the noisy observations using a recursive estimation algorithm. Based on the speech and noise models, a novel Bayesian estimator of the clean speech is developed in Paper A, and an estimator of the noise power spectral density (PSD) in Paper B. It is demonstrated that the proposed schemes achieve more accurate models of speech and noise than traditional techniques, and as part of a speech enhancement system provide improved speech quality, particularly for non-stationary noise sources. In Paper C, a flexible entropy-constrained vector quantization scheme based on Gaussian mixture model (GMM), lattice quantization, and arithmetic coding is proposed. The method allows for changing the average rate in real-time, and facilitates adaptation to the currently available bandwidth of the network. A practical solution to the classical issue of indexing and entropy-coding the quantized code vectors is given. The proposed scheme has a computational complexity that is independent of rate, and quadratic with respect to vector dimension. Hence, the scheme can be applied to the quantization of source vectors in a high-dimensional space. The theoretical performance of the scheme is analyzed under a high-rate assumption. It is shown that, at high rate, the scheme approaches the theoretically optimal performance, if the mixture components are located far apart. The practical performance of the scheme is confirmed through simulations on both synthetic and speech-derived source vectors. / QC 20100825
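To illustrate the kind of processing involved, here is a deliberately simplified, hypothetical sketch of frame-wise noise PSD tracking combined with a Wiener gain. It is a generic stand-in for the SG-HMM estimators of Papers A and B; the thesis's actual algorithms are not reproduced here, and the smoothing constant and clamping factor are illustrative choices.

```python
import numpy as np

def enhance_frame(noisy_psd, noise_psd, alpha=0.98):
    """One simplified enhancement step: recursively track the noise
    PSD and apply a Wiener gain to the noisy spectrum. A crude
    substitute for the on-line recursive estimation of Paper B."""
    # Exponential smoothing of the noise PSD; the clamp limits how
    # much speech energy can leak into the noise estimate.
    noise_psd = alpha * noise_psd + (1 - alpha) * np.minimum(noisy_psd, 4 * noise_psd)
    # Wiener gain from the estimated a priori SNR per frequency bin.
    snr = np.maximum(noisy_psd / np.maximum(noise_psd, 1e-12) - 1.0, 0.0)
    gain = snr / (snr + 1.0)
    return gain * noisy_psd, noise_psd

# Example: one frame of a 129-bin spectrum with unit-level noise.
noisy = np.abs(np.random.randn(129)) ** 2 + 1.0
noise = np.ones(129)
enhanced, noise = enhance_frame(noisy, noise)
```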
17

Intentionality as Methodology

Hochstein, Eric 05 December 2011 (has links)
In this dissertation, I examine the role that intentional descriptions play in our scientific study of the mind. Behavioural scientists often use intentional language in their characterization of cognitive systems, making reference to “beliefs”, “representations”, or “states of information”. What is the scientific value gained from employing such intentional terminology? I begin the dissertation by contrasting intentional descriptions with mechanistic descriptions, as these are the descriptions most commonly used to provide explanations in the behavioural sciences. I then examine the way that intentional descriptions are employed in various scientific contexts. I conclude that while mechanistic descriptions characterize the underlying structure of systems, intentional descriptions allow us to generate predictions of systems while remaining agnostic as to their mechanistic underpinnings. Having established this, I then argue that intentional descriptions share much in common with statistical models in the way they characterize systems. Given these similarities, I theorize that intentional descriptions are employed within scientific practice as a particular type of phenomenological model. Phenomenological models are used to study, characterize, and predict the phenomena produced by mechanistic systems without describing their underlying structure. I demonstrate why such models are integral to our scientific discovery, and understanding, of the mechanisms that make up the brain. With my account on the table, I then look back at previous accounts of intentional language that philosophers have offered in the past. I highlight insights that each brought to our understanding of intentional language, and point out where each ultimately goes astray. I conclude the dissertation by examining the ontological implications of my theory. I demonstrate that my account is compatible with versions of both realism, and anti-realism, regarding the existence of intentional states.
18

Object-oriented software development effort prediction using design patterns from object interaction analysis

Adekile, Olusegun 15 May 2009 (has links)
Software project management is arguably the most important activity in modern software development projects. In the absence of realistic and objective management, the software development process cannot be managed effectively. Software development effort estimation is one of the most challenging and researched problems in project management. With the advent of object-oriented development, there have been studies to transpose some of the existing effort estimation methodologies to the new development paradigm. However, no holistic approach to estimation exists that allows an initial estimate produced in the requirements-gathering phase to be refined through to the design phase. A SysML Point methodology is proposed that is based on a common, structured and comprehensive modeling language (OMG SysML) and factors the models corresponding to the primary phases of object-oriented development into the effort estimate. This dissertation presents a Function Point-like approach, named Pattern Point, conceived to estimate the size of object-oriented products using the design patterns found in object interaction modeling from the late OO analysis phase. In particular, two measures are proposed (PP1 and PP2) that are theoretically validated, showing that they satisfy well-known properties necessary for size measures. An initial empirical validation is performed to assess the usefulness and effectiveness of the proposed measures in predicting the development effort of object-oriented systems. Moreover, a comparative analysis is carried out, taking into account several other size measures. The experimental results show that the Pattern Point measure can be effectively used during the OOA phase to predict effort values with a high degree of confidence. The PP2 metric yielded the best results with an aggregate PRED(0.25) = 0.874.
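For readers unfamiliar with the reported metric, PRED(q) is conventionally defined as the fraction of projects whose estimate falls within 100q% of the actual effort; a minimal sketch of that standard definition (not code from the dissertation):

```python
def pred(actuals, estimates, q=0.25):
    """PRED(q): fraction of estimates within 100*q percent of the
    actual effort. An aggregate PRED(0.25) = 0.874 therefore means
    87.4% of the predictions fell within 25% of the observed effort."""
    within = sum(
        abs(est - act) / act <= q
        for act, est in zip(actuals, estimates)
        if act > 0  # skip undefined relative error
    )
    return within / len(actuals)
```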
19

Risk Assessment of Driving Safety in Long Scaled Bridge under Severe Weather Conditions

Chen, Shengdi 01 January 2013 (has links)
Weather conditions have certain impacts on roadway traffic operations, especially traffic safety. Bridges differ from most surface streets and highways in terms of their physical properties and operational characteristics. This research first assesses driving risk under different weather conditions through focus groups, then develops a multi-ordered discrete choice model to analyze and evaluate driving risks under both single and dual weather conditions. The data are derived from an extensive questionnaire survey in Shanghai; the questionnaire covers factors related to the roadway, drivers, vehicles, and traffic that may have significant impacts on traffic safety under severe weather conditions. Considering the actual situation, these variables, except driver's gender, are selected as independent variables for risk evaluation. As a result, different risk levels and their corresponding probabilities are calculated, which is important for optimizing emergency resource allocation and devising reasonable emergency measures. Moreover, in order to reduce severe bridge-related crashes, the research develops an ordered probit model to analyze the factors contributing to bridge-related crash severity and to predict the probabilities of different severity levels under rainy conditions.
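As an illustration of the second model, an ordered probit can be fitted with statsmodels; a minimal sketch follows, in which the predictor names and synthetic data are hypothetical stand-ins, not the study's actual survey variables.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical predictors standing in for the survey-derived roadway,
# driver, vehicle, and traffic factors used in the study.
rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "rain_intensity": rng.uniform(0, 1, n),
    "wind_speed": rng.uniform(0, 1, n),
    "driver_age": rng.uniform(20, 70, n),
})
# Ordered probit assumes an ordinal response driven by a latent
# normal index; simulate one and discretize it into severity levels.
latent = 2.0 * X["rain_intensity"] + 1.5 * X["wind_speed"] + rng.normal(size=n)
severity = pd.cut(latent, bins=[-np.inf, 0.5, 1.5, np.inf],
                  labels=["low", "medium", "high"], ordered=True)

model = OrderedModel(severity, X, distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
print(result.predict(X.iloc[:5]))  # per-level severity probabilities
```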
20

Coupling distances between Lévy measures and applications to noise sensitivity of SDE

Gairing, Jan, Högele, Michael, Kosenkova, Tetiana, Kulik, Alexei January 2013 (has links)
We introduce the notion of coupling distances on the space of Lévy measures in order to quantify rates of convergence towards a limiting Lévy jump diffusion in terms of its characteristic triplet, in particular in terms of the tail of the Lévy measure. The main result yields an estimate of the Wasserstein-Kantorovich-Rubinstein distance on path space between two Lévy diffusions in terms of the coupling distances. We want to apply this to obtain precise rates of convergence for Markov chain approximations and a statistical goodness-of-fit test for low-dimensional conceptual climate models with paleoclimatic data.
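As a small illustration of the distance being controlled (not of the paper's coupling-distance construction itself), the one-dimensional Wasserstein-Kantorovich-Rubinstein distance between two empirical jump samples can be computed with scipy; the Pareto tails below are illustrative stand-ins for truncated, normalized Lévy jump measures.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two samples standing in for jump-size distributions with different
# tail indices; the paper's coupling distances refine this kind of
# comparison at the level of Lévy measures.
rng = np.random.default_rng(1)
jumps_a = rng.pareto(a=2.5, size=10_000)  # heavier tail
jumps_b = rng.pareto(a=3.0, size=10_000)  # lighter tail
print(wasserstein_distance(jumps_a, jumps_b))
```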
