601

The development and assessment of techniques for daily rainfall disaggregation in South Africa.

Knoesen, Darryn Marc. January 2005 (has links)
The temporal distribution of rainfall, viz. the distribution of rainfall intensity during a storm, is an important factor affecting the timing and magnitude of peak flow from a catchment and hence the flood-generating potential of rainfall events. It is also one of the primary inputs into hydrological models used for hydraulic design purposes. The use of short-duration rainfall data inherently accounts for the temporal distribution of rainfall; however, there is a relative paucity of short-duration data compared to the more abundantly available daily data. One method of overcoming this is to disaggregate coarser-scale data to a finer resolution, e.g. daily to hourly. A daily-to-hourly rainfall disaggregation model developed by Boughton (2000b) in Australia has been modified and applied in South Africa. The primary part of the model is the distribution of R, which is the fraction of the daily total that occurs in the hour of maximum rainfall. A random number is used to sample from the distribution of R at the site of interest. The sampled value of R determines the other 23 values, which then undergo a clustering procedure. This clustered sequence is then arranged into 1 of 24 possible temporal arrangements, depending on the hour in which the maximum rainfall occurs. The structure of the model allows for the production of 480 different temporal distributions, varying between uniform and non-uniform rainfall. The model was then regionalised to allow for application at sites where daily rainfall data, but no short-duration data, were available. The model was evaluated at 15 locations in differing climatic regions of South Africa. At each location, observed hourly rainfall data were aggregated to yield 24-hour values, which were then disaggregated using the methodology. Results show that the model is able to retain the daily total and most of the characteristics of the hourly rainfall at the site, both when at-site and when regional information is used. The model is, however, less capable of simulating statistics related to the sequencing of hourly rainfalls, e.g. autocorrelations. The model also tends to over-estimate design rainfalls, particularly for the shorter durations. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2005.
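
A minimal sketch of the disaggregation idea described above, assuming a hypothetical empirical sample of R and a crude geometric-decay stand-in for the clustering step (the actual Boughton-type procedure differs in detail):

```python
import numpy as np

rng = np.random.default_rng(42)

def disaggregate_daily(daily_total, r_samples):
    """Split a daily rainfall total into 24 hourly values.

    r_samples: an empirical sample of R, the fraction of the daily total
    falling in the hour of maximum rainfall (at-site or regionalised).
    The clustering step here is a crude geometric decay around the peak;
    the real procedure differs and constrains the arrangement further.
    """
    R = rng.choice(r_samples)                    # sample R for this day
    weights = 0.5 ** np.arange(1, 24)            # decay away from the peak
    weights = (1.0 - R) * weights / weights.sum()
    peak_hour = rng.integers(0, 24)              # 1 of 24 arrangements
    hourly = np.zeros(24)
    hourly[peak_hour] = R
    # Fill the other 23 hours in order of distance from the peak.
    order = np.argsort(np.abs(np.arange(24) - peak_hour))[1:]
    hourly[order] = weights
    return daily_total * hourly

hourly = disaggregate_daily(40.0, np.array([0.45, 0.6, 0.8]))
assert abs(hourly.sum() - 40.0) < 1e-9           # daily total preserved
```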
602

Linear and non-linear boundary crossing probabilities for Brownian motion and related processes

Wu, Tung-Lung Jr 12 1900 (has links)
We propose a simple and general method to obtain the boundary crossing probability for Brownian motion. The method extends easily to higher-dimensional Brownian motion and also covers certain classes of stochastic processes associated with Brownian motion. The basic idea is to construct a finite Markov chain such that the boundary crossing probability of Brownian motion is obtained as the limiting probability of the finite Markov chain entering a set of absorbing states induced by the boundary. Numerical results are given to illustrate our method.
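
The chain construction is easy to sketch for the simplest case: a one-sided flat boundary and a symmetric random walk on a grid matched to the time step, with states at or above the boundary made absorbing. This is a simplified illustration, not the thesis' general construction:

```python
import numpy as np
from scipy.stats import norm

def crossing_prob(b=1.0, T=1.0, n=2000):
    """Approximate P(Brownian motion hits level b before time T) with a
    symmetric random walk; states at or above b are absorbing."""
    dt = T / n
    h = np.sqrt(dt)                  # space step matched to the time step
    m = int(np.ceil(b / h))          # barrier index: absorbing at m*h >= b
    lo = -8 * m                      # truncate the state space far below 0
    p = np.zeros(m - lo)             # mass on transient states lo..m-1
    p[-lo] = 1.0                     # start at position 0
    absorbed = 0.0
    for _ in range(n):
        q = np.zeros_like(p)
        q[1:] += 0.5 * p[:-1]        # step up by h
        q[:-1] += 0.5 * p[1:]        # step down by h
        q[0] += 0.5 * p[0]           # reflect at the (distant) lower cutoff
        absorbed += 0.5 * p[-1]      # an up-step from m-1 enters the barrier
        p = q
    return absorbed

# The reflection principle gives the exact answer for a flat boundary;
# the walk approximation converges to it (slightly from below) as n grows.
print(crossing_prob())                    # approximation
print(2.0 * (1.0 - norm.cdf(1.0)))        # exact: 2(1 - Phi(b / sqrt(T)))
```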
603

Stable iterated function systems

Gadde, Erland January 1992 (has links)
The purpose of this thesis is to generalize the growing theory of iterated function systems (IFSs). Earlier, hyperbolic IFSs with finitely many functions have been studied extensively, as have hyperbolic IFSs with infinitely many functions. In this thesis, more general IFSs are studied. The Hausdorff pseudometric, a generalization of the Hausdorff metric, is studied. Wide and narrow limit sets are studied; these are two types of limits of sequences of sets in a complete pseudometric space. Stable iterated function systems, a kind of generalization of hyperbolic IFSs, are defined. Some different, but closely related, types of stability for the IFSs are considered. It is proved that IFSs with the most general type of stability have unique attractors. Invariant sets, addressing, and periodic points for stable IFSs are also studied. Hutchinson's metric (also called Vasershtein's metric) is generalized from being defined on a space of probability measures into a class of norms, the £-norms, on a space of real measures (on certain metric spaces). Under rather general conditions, it is proved that these norms, when restricted to positive measures, give rise to complete metric spaces whose metric topology coincides with the weak*-topology. Then, IFSs with probabilities (IFSPs) are studied, in particular stable IFSPs. The £-norm results are used to prove that, as in the case of hyperbolic IFSPs, IFSPs with the most general kind of stability have unique invariant measures. These measures are "attractive". An invariant measure is also constructed by first "lifting" the IFSP to the code space. Finally, it is proved that the Random Iteration Algorithm will, in a sense, "work" for some stable IFSPs. / Diss. Umeå : Umeå universitet, 1992 / digitalisering@umu
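
The Random Iteration Algorithm mentioned at the end is easy to illustrate for a classical hyperbolic IFS with probabilities; a minimal sketch drawing the Sierpinski-triangle attractor:

```python
import numpy as np

# Three affine contractions w_i(x) = 0.5 * x + 0.5 * v_i pulling points
# toward the vertices of a triangle: a hyperbolic IFS whose attractor is
# the Sierpinski triangle.
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
probs = np.array([1 / 3, 1 / 3, 1 / 3])   # an IFSP: one weight per map

rng = np.random.default_rng(0)
x = np.array([0.1, 0.1])                  # arbitrary starting point
points = np.empty((20_000, 2))
for k in range(len(points)):
    i = rng.choice(3, p=probs)            # pick a map at random
    x = 0.5 * x + 0.5 * vertices[i]       # apply the chosen contraction
    points[k] = x

# After a short burn-in, the orbit is distributed (almost surely) like the
# invariant measure; scatter-plotting `points` draws the attractor.
```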
605

Analysis Of Stochastic And Non-stochastic Volatility Models

Ozkan, Pelin 01 September 2004 (has links) (PDF)
Changes in variance, or volatility, over time can be modeled as deterministic by using autoregressive conditional heteroscedastic (ARCH) type models, or as stochastic by using stochastic volatility (SV) models. This study compares these two kinds of models, estimated on Turkish/USA exchange rate data. First, a GARCH(1,1) model is fitted to the data using the EViews package, and then a Bayesian estimation procedure is used to estimate an appropriate SV model with the help of Ox code. To compare the models, the LR test statistic for non-nested hypotheses is calculated.
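
For the deterministic-volatility side, a self-contained sketch of a GARCH(1,1) fit by Gaussian maximum likelihood on simulated returns (the thesis uses EViews and Ox on exchange-rate data; the recursion is the same):

```python
import numpy as np
from scipy.optimize import minimize

def garch11_negloglik(params, r):
    """Negative Gaussian log-likelihood of a GARCH(1,1) model:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf  # enforce positivity and covariance stationarity
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()  # a common initialization for the recursion
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r**2 / sigma2)

# Simulated returns stand in for the exchange-rate series here.
rng = np.random.default_rng(1)
omega, alpha, beta = 0.05, 0.10, 0.85
n = 3000
r = np.empty(n)
s2 = omega / (1 - alpha - beta)  # unconditional variance
for t in range(n):
    r[t] = rng.normal(0.0, np.sqrt(s2))
    s2 = omega + alpha * r[t] ** 2 + beta * s2

fit = minimize(garch11_negloglik, x0=[0.1, 0.05, 0.8], args=(r,),
               method="Nelder-Mead")
print(fit.x)  # should recover (omega, alpha, beta) approximately
```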
606

封閉式等候網路機率分配之估計與分析 / Estimation of Probability Distributions on Closed Queueing Networks

莊依文 Unknown Date (has links)
In this thesis, we are concerned with the properties of a two-stage closed queueing network in which the service times are identically of phase type. We first conjecture that the Laplace-Stieltjes transforms (LSTs) of the service time distributions satisfy a system of equations. We then show that the stationary probabilities of the non-boundary states can be written as a linear combination of product forms, each component of which can be expressed in terms of the roots of the system of equations. From the non-boundary stationary probabilities, the probabilities of the boundary states can be obtained. Finally, we establish an algorithm for computing all the stationary probabilities; it reduces the complexity of the computation and is expected to work well when the number of customers in the system is relatively large.
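
The product form is easiest to see in the exponential special case of a two-stage cyclic closed network, where the stationary distribution can be normalized directly; a sketch under that simplifying assumption (the thesis treats the more general phase-type case):

```python
import numpy as np

def two_stage_closed(mu1, mu2, N):
    """Stationary distribution of a two-stage cyclic closed network with
    exponential rates mu1, mu2 and N circulating customers.  Product form:
    pi(n, N - n) is proportional to (1/mu1)**n * (1/mu2)**(N - n)."""
    rho = mu2 / mu1                       # relative load of stage 1
    weights = rho ** np.arange(N + 1)     # unnormalized product-form terms
    return weights / weights.sum()        # divide by normalization constant

pi = two_stage_closed(mu1=2.0, mu2=1.0, N=5)
print(pi)          # pi[n] = P(n customers at stage 1); sums to 1
```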
607

Modélisation hiérarchique bayésienne des amas stellaires jeunes / Bayesian hierarchical modelling of young stellar clusters

Olivares Romero, Javier 19 October 2017 (has links)
The origin and evolution of stellar populations is one of the greatest challenges in modern astrophysics. It is known that the majority of stars originate in stellar clusters (Carpenter 2000; Porras et al. 2003; Lada & Lada 2003). However, fewer than one tenth of these clusters remain gravitationally bound after the first few hundred million years (Lada & Lada 2003); stellar clusters must therefore be studied before they dissolve into the galaxy. The project Dynamical Analysis of Nearby Clusters (DANCe, Bouy et al. 2013), of which the present work is part, provides the scientific framework for the analysis of Nearby Young Clusters (NYC) in the solar neighbourhood (< 500 pc). The carefully designed DANCe observations of the well-known Pleiades cluster provide the perfect case study for the development and testing of statistical tools aimed at the early phases of cluster evolution. The statistical tool developed here is a probabilistic intelligent system that performs Bayesian inference for the parameters governing the probability density functions (PDFs) of the cluster population (PDFCP). It has been benchmarked with the Pleiades photometric and astrometric data of the DANCe survey. As in any Bayesian framework, priors must be set; to avoid the subjectivity of these choices, the intelligent system establishes them using the Bayesian Hierarchical Model (BHM) approach, in which the parameters of the prior distributions, themselves inferred from the data, are drawn from other distributions in a hierarchical way. In this BHM intelligent system, the true values of the PDFCP are specified by stochastic and deterministic relations representing the state of knowledge of the NYC. To perform the parametric inference, the likelihood of the data, given these true values, accounts for the properties of the data set, especially its heteroscedasticity and objects with missing values. By properly accounting for these properties, the intelligent system (i) increases the size of the usable data set relative to previous studies restricted to fully observed objects, and (ii) avoids the biases associated with fully observed data sets and with restrictions to low-uncertainty objects (sigma-clipping procedures). The BHM returns the posterior PDFs of the parameters in the PDFCPs, particularly of the spatial, proper-motion and luminosity distributions, which are the final scientific objectives of the DANCe project. In the BHM, each object in the data set contributes to the PDFs of the parameters proportionally to its likelihood; the PDFCPs are therefore free of the sampling biases that result from typical selections above a more or less arbitrary membership-probability threshold. As a by-product, the BHM also gives the PDF of the cluster membership probability for each object in the data set. These PDFs, together with an optimal probability classification threshold obtained from synthetic data sets, allow the classification of objects into cluster and field populations. This by-product classifier shows excellent results when applied to synthetic data sets (with an area under the ROC curve of 0.99), from which the expected contamination rate of the PDFCPs is estimated at only 5.8 ± 0.2%. The most important astrophysical results of the BHM applied to the Pleiades cluster are the following. First, used as a classifier, it finds ~200 new candidate members, representing 10% new discoveries; nevertheless, it shows outstanding agreement (99.6% of the 10⁵ objects in the data set) with previous results from the literature, providing an important external validation of the method. Second, the derived present-day system mass distribution (PDSMD) is in general agreement with the previous results of Bouy et al. (2015), but with the invaluable advantage of much more robust uncertainties than those of previous methods. Thus, by better modelling the data set and eliminating unnecessary restrictions and simplifying assumptions, the new intelligent system developed and tested in the present work represents the state of the art for the statistical analysis of NYC populations.
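
A toy sketch of the membership computation: with assumed cluster and field densities (here one-dimensional Gaussians in proper motion) and per-star heteroscedastic errors, each star's membership probability follows from Bayes' rule. All numbers are hypothetical, and in the real BHM the population parameters are themselves inferred hierarchically:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical proper motions (mas/yr) with star-by-star uncertainties.
mu_obs = np.array([20.1, 19.4, 5.0, 21.2, -3.3, 18.8])
sig_obs = np.array([0.5, 1.5, 2.0, 0.8, 1.0, 2.5])   # heteroscedastic errors

# Assumed population models; in the real BHM these are inferred from data.
cluster_mu, cluster_sd, prior_cluster = 20.0, 1.0, 0.2
field_mu, field_sd = 0.0, 15.0

# Each measurement error convolves with the intrinsic population spread.
like_c = norm.pdf(mu_obs, cluster_mu, np.hypot(cluster_sd, sig_obs))
like_f = norm.pdf(mu_obs, field_mu, np.hypot(field_sd, sig_obs))
post = prior_cluster * like_c / (prior_cluster * like_c
                                 + (1 - prior_cluster) * like_f)
print(post.round(3))   # membership probability, one per star
```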
608

Nuclear reactions inside the water molecule

Dicks, Jesse 30 June 2005 (has links)
A scheme, analogous to the linear combination of atomic orbitals (LCAO), is used to calculate reaction rates for the fusion of nuclei confined in molecules. As an example, the possibility of nuclear fusion in rotationally excited H2O molecules of angular momentum 1⁻ is estimated for the p + p + ¹⁶O → ¹⁸Ne*(4.522, 1⁻) nuclear transition. Due to the practically exact agreement between the energy of the Ne resonance and the p + p + ¹⁶O threshold, the possibility of an enhanced transition probability is investigated. / Physics / M.Sc.
609

Management de l'incertitude pour les systèmes booléens complexes - Application à la maintenance préventive des avions / Uncertainty Management for Boolean Complex Systems Application to Preventive Maintenance of Aircrafts

Jacob, Christelle 25 February 2014 (has links)
Standard approaches to reliability analysis rely on a probabilistic analysis of critical events based on fault tree representations, which describe these events as logical combinations of more basic events (complex Boolean formulas). Quantitative analyses assume that the occurrence probabilities of these basic events are known. In practice, however, and especially for preventive maintenance tasks, these probabilities are seldom precisely known. The aim of this thesis is to study the impact of epistemic uncertainty on the probabilities of elementary events such as failures, and the propagation of this uncertainty to higher-level critical events. The fundamental problem addressed is thus to compute the probability interval for a Boolean proposition representing a failure condition, given the probability intervals of its atomic propositions. When stochastic independence is assumed, this is a problem of interval analysis, which is NP-hard in general. We provide an original algorithm that computes the output probability interval exactly, exploiting the monotonicity of the obtained function in some of its variables to reduce the uncertainty. We also consider the evolution of the probability interval over time, assuming the parameters of the reliability function to be imprecisely known. Besides, taking advantage of the fact that a probability interval on a binary space can be modelled by a belief function, we solve the same problem under a different assumption, namely the independence of the information sources. While the belief and plausibility of a complex Boolean proposition are even harder to compute, we show that in practical situations, such as usual fault trees, the additivity condition of probability theory still holds, which simplifies the calculation. A prototype has been developed to compute the probability interval for a complex Boolean proposition.
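
A sketch of the interval computation for a small, hypothetical coherent fault tree with independent basic events: the top-event probability is monotone in each basic-event probability, so the exact interval is attained at the interval endpoints (for non-monotone formulas one must search over endpoint combinations, which is where the NP-hardness bites):

```python
from itertools import product

def top_event_prob(p1, p2, p3):
    """Hypothetical fault tree: TOP = (E1 AND E2) OR E3, all independent.
    P(TOP) = 1 - (1 - p1*p2) * (1 - p3)."""
    return 1 - (1 - p1 * p2) * (1 - p3)

intervals = [(0.01, 0.05), (0.10, 0.20), (0.001, 0.01)]  # imprecise P(Ei)

# Brute force over all endpoint combinations (2**k evaluations).
values = [top_event_prob(*combo) for combo in product(*intervals)]
lo, hi = min(values), max(values)

# For a coherent (monotone) tree, the bounds coincide with evaluating at
# all lower endpoints and at all upper endpoints, respectively.
assert lo == top_event_prob(0.01, 0.10, 0.001)
assert hi == top_event_prob(0.05, 0.20, 0.01)
print(lo, hi)   # exact probability interval for the top event
```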
610

Cadeias de Markov ocultas / Hidden Markov chains

Medeiros, Sérgio da Silva January 2017 (has links)
Advisor: Prof. Dr. Daniel Miranda Machado / Master's dissertation - Universidade Federal do ABC, Programa de Pós-Graduação em Mestrado Profissional em Matemática em Rede Nacional, 2017. / The main focus of this work is the study of Markov chains and hidden Markov chains. Markov chains provide a practical setting for the study of probabilistic and matrix concepts. We seek to apply matrix products and powers in a contextualized way, with the aid of the GeoGebra software. In addition to the examples, learning exercises are included, with the goal of making them valuable aids to learning about this topic.
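
The matrix-power computation that the dissertation illustrates in GeoGebra is equally direct in code; a minimal sketch with a hypothetical two-state chain:

```python
import numpy as np

# Hypothetical two-state Markov chain (rows sum to 1): P[i, j] is the
# probability of moving from state i to state j in one step.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi0 = np.array([1.0, 0.0])                 # start in state 0

# The distribution after n steps is pi0 @ P**n.
print(pi0 @ np.linalg.matrix_power(P, 5))

# The powers converge to the stationary distribution pi = pi @ P.
print(pi0 @ np.linalg.matrix_power(P, 100))   # ~ [0.8333, 0.1667]
```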
