621 |
Contribuições ao dimensionamento de redes sem fio / Contributions on the dimensioning of wireless networks
Mello, Renata Valverde, 07 March 2009
Advisor: Michel Daoud Yacoub / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Previous issue date: 2009 / Abstract: This work tackles the problem of dimensioning wireless networks through the analysis of the outage probability, aiming to accommodate systems with multiple classes of service. This is an intricate task, since it must account for fading and interference in the wireless channel as well as multiservice traffic. First, an ad hoc network with a single class of service is analysed through the joint outage probability. Then, a novel closed-form expression for the outage probability in multiservice systems is derived and validated through discrete-event simulation. The proposed formulation can therefore be used for dimensioning multiservice wireless networks without resorting to simulation, offering easy implementation and low computational effort, and yielding a fast and accurate dimensioning tool. / Master's degree in Electrical Engineering (Telecommunications and Telematics)
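The closed-form expression itself is not reproduced in the abstract. As a minimal illustration of the quantity being dimensioned, the sketch below estimates the outage probability of a single link by Monte Carlo, assuming Rayleigh fading (exponentially distributed received powers), a fixed number of equal-power interferers and a single SIR threshold; these assumptions are for illustration only and do not reflect the thesis's multiservice formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_probability(n_interferers, mean_signal_power, mean_interf_power,
                       sir_threshold_db, n_samples=200_000):
    """Monte Carlo estimate of P(SIR < threshold) for a Rayleigh-faded link."""
    threshold = 10 ** (sir_threshold_db / 10)            # dB -> linear
    s = rng.exponential(mean_signal_power, n_samples)    # desired signal power
    i = rng.exponential(mean_interf_power,
                        (n_samples, n_interferers)).sum(axis=1)  # total interference
    return np.mean(s / i < threshold)

# Example: 5 interferers, 10 dB mean power advantage, 6 dB SIR target.
print(outage_probability(n_interferers=5, mean_signal_power=10.0,
                         mean_interf_power=1.0, sir_threshold_db=6.0))
```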
|
622 |
Simulação de multidões e planejamento probabilístico para otimização dos tempos de semáforos / Crowd simulation and probabilistic planning for traffic light optimization
Coelho, Renato Schattan Pereira, 1987-, 03 February 2012
Advisors: Siome Klein Goldenstein, Jacques Wainer / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação
Previous issue date: 2013 / Abstract: Traffic is an ever-increasing problem in cities, draining resources and aggravating pollution; in São Paulo, financial losses caused by traffic amount to about R$33 billion a year. In this master's work we developed a system that brings together the areas of crowd simulation and probabilistic planning to optimize fixed-time traffic lights. Both areas offer algorithms that allow efficient solutions, but their application still depends largely on the intervention of specialists, whether to describe the probabilistic planning problem or to interpret the data returned by the simulator. Our system reduces this dependence by using cellular automata to simulate traffic and generate the information that is then used to describe the probabilistic planning problem. This allows us to (i) reduce the need for data collection, since the data are generated by the simulator, and (ii) produce good plans for fixed-time traffic light control without requiring specialists to analyse the data. In the two tests performed, the solution proposed by the system reduced the average travel time by 18.51% and 13.51%, respectively. / Master's degree in Computer Science
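The abstract does not say which cellular-automaton rule the simulator uses; the sketch below uses the classic Nagel-Schreckenberg single-lane model purely as an illustration of how a cellular automaton can generate the traffic statistics (here, mean speeds) that a planning stage could consume. All parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def nagel_schreckenberg(length=200, n_cars=40, v_max=5, p_slow=0.3, steps=500):
    """Single-lane cellular-automaton traffic model on a ring road.

    Returns the mean vehicle speed at each time step."""
    pos = np.sort(rng.choice(length, size=n_cars, replace=False))
    vel = np.zeros(n_cars, dtype=int)
    mean_speed = []
    for _ in range(steps):
        gaps = (np.roll(pos, -1) - pos - 1) % length  # empty cells ahead of each car
        vel = np.minimum(vel + 1, v_max)              # 1) accelerate
        vel = np.minimum(vel, gaps)                   # 2) brake to avoid the car ahead
        slow = rng.random(n_cars) < p_slow            # 3) random slowdown
        vel[slow] = np.maximum(vel[slow] - 1, 0)
        pos = (pos + vel) % length                    # 4) move
        mean_speed.append(vel.mean())
    return np.array(mean_speed)

# Mean speed over the last 100 steps, a crude proxy for congestion at this density.
print(nagel_schreckenberg()[-100:].mean())
```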
|
623 |
Lambdas-théories probabilistes / Probabilistic lambda-theories
Leventis, Thomas, 08 December 2016
The lambda-calculus is a formalization of the notion of computation. In this thesis we are interested in variants that introduce non-determinism, and we focus on the probabilistic case. The probabilistic lambda-calculus has been studied for some time, but its probabilistic behaviour has always been treated as a side effect. Our purpose is to give a more equational presentation of this calculus by handling the probabilities inside the reduction rather than as a side effect. To begin with, we give a deterministic and contextual operational semantics for the call-by-name probabilistic lambda-calculus. To express the probabilistic behaviour of the sum we introduce a syntactic equivalence in our calculus, and we show that it has little impact on the calculus: reducing modulo equivalence amounts to reducing and then looking at the result modulo equivalence. We also prove a standardization theorem. Using this operational semantics we then define a notion of equational theory for the probabilistic lambda-calculus. We extend some of the usual notions to this setting, in particular the sensibility of a theory; this notion is quite simple in a deterministic setting but becomes more involved for probabilistic computation. Finally, we prove a generalization of the equality between observational equivalence, Böhm-tree equality and the maximal coherent sensible lambda-theory. We define a notion of probabilistic Böhm trees, prove that they form a model of the probabilistic lambda-calculus, and then prove a separability result stating that two terms with different Böhm trees are separable, i.e. not observationally equivalent.
|
624 |
Insurance portfolios with dependent risks
Badran, Rabih, 23 January 2014
This thesis deals with insurance portfolios with dependent risks in risk theory.

The first chapter treats models with equicorrelated risks. We propose a mathematical structure that leads to a particular probability generating function (pgf) proposed by Tallis; this pgf involves equicorrelated variables. We then study the effect of this type of dependence on quantities of interest in the actuarial literature, such as the distribution function of the aggregate claim amount, stop-loss premiums and finite-horizon ruin probabilities. We use the proposed structure to correct errors in the literature stemming from the fact that several authors acted as if the sum of equicorrelated random variables necessarily had the pgf proposed by Tallis.

In the second chapter, we propose a model that combines shock models and common-mixture models by introducing a variable that controls the level of the shock. Within this new model we consider two applications, in which we generalize the Bernoulli model with shock and the Poisson model with shock. In both applications we study the effect of dependence on the distribution function of the claim amounts, stop-loss premiums and finite- and infinite-horizon ruin probabilities. For the second application we propose a copula-based construction that allows the level of dependence to be controlled through the level of the shock.

In the third chapter, we propose a generalization of the classical Poisson model in which claim amounts and inter-claim times are assumed to be dependent. We compute the Laplace transform of the survival probabilities. In the particular case where claim amounts are exponentially distributed we obtain explicit formulas for the survival probabilities.

In the fourth chapter we generalize the classical Poisson model by introducing dependence between inter-claim times. We use the link between fluid queues and the risk process to model the dependence. We compute the survival probabilities using a numerical algorithm and treat the case where claim amounts and inter-claim times have phase-type distributions. / Doctorate in Sciences / info:eu-repo/semantics/nonPublished
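None of the dependent models can be reconstructed from the abstract alone, but the classical (independent) compound Poisson baseline that the thesis generalizes admits a compact illustration: a Monte Carlo estimate of the finite-horizon ruin probability, compared with the well-known closed-form infinite-horizon ruin probability for exponential claims. Parameter values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def ruin_prob_mc(u, lam, mu, theta, horizon, n_paths=20_000):
    """Monte Carlo finite-horizon ruin probability in the classical Poisson risk model.

    u: initial surplus, lam: claim arrival rate, mu: mean exponential claim size,
    theta: safety loading, horizon: time horizon."""
    c = (1 + theta) * lam * mu                     # premium rate
    ruined = 0
    for _ in range(n_paths):
        t, total_claims = 0.0, 0.0
        while True:
            t += rng.exponential(1 / lam)          # next claim instant
            if t > horizon:
                break
            total_claims += rng.exponential(mu)
            if u + c * t - total_claims < 0:       # surplus only drops at claim instants
                ruined += 1
                break
    return ruined / n_paths

def ruin_prob_exact(u, mu, theta):
    """Closed-form infinite-horizon ruin probability for exponential claims."""
    return np.exp(-theta * u / (mu * (1 + theta))) / (1 + theta)

print(ruin_prob_mc(u=10.0, lam=1.0, mu=1.0, theta=0.2, horizon=200.0))
print(ruin_prob_exact(u=10.0, mu=1.0, theta=0.2))
```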
|
625 |
Mikroskopické jaderné modely pro jádra s nezaplněnými slupkami / Microscopic nuclear models for open-shell nuclei
Herko, Jakub, January 2017
Title: Microscopic nuclear models for open-shell nuclei Author: Jakub Herko Institute: Institute of Particle and Nuclear Physics Supervisor: Mgr. František Knapp, Ph.D., Institute of Particle and Nuclear Physics Abstract: Since the nucleus is a quantum many-body system consisting of constituents whose mutual interaction is not satisfactorily known, it is necessary to use approximate methods when describing the nucleus. Basic approximate approaches in the microscopic theory of the nucleus are the Hartree-Fock theory, the Tamm-Dancoff approximation and the random phase approximation. They are described in the first chapter of this thesis. The main aim was to develop microscopic models for open-shell nuclei with two valence particles or holes. They are described in the second chapter, which contains detailed derivations of the relevant formulae. These methods have been numerically implemented. The results of the calculations of the nuclear spectra and the electromagnetic transition probabilities are presented in the third chapter. Keywords: Tamm-Dancoff approximation, random phase approximation, open-shell nuclei, nuclear spectra, electromagnetic transition probabilities
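As a purely schematic illustration of the two approximations named in the abstract (not of the thesis's models for nuclei with two valence particles or holes), the sketch below sets up a small random particle-hole space and solves the Tamm-Dancoff problem (diagonalizing the A matrix alone) and the RPA problem (the non-Hermitian [[A, B], [-B, -A]] eigenvalue problem); the matrices are random stand-ins, not a nuclear interaction.

```python
import numpy as np

rng = np.random.default_rng(3)

# Schematic Tamm-Dancoff (TDA) and random-phase-approximation (RPA) problems
# in a small particle-hole basis with random stand-in matrices.
n = 6
A = rng.normal(size=(n, n))
A = (A + A.T) / 2 + 3.0 * np.eye(n)     # symmetric, shifted to keep the problem stable
B = 0.2 * rng.normal(size=(n, n))
B = (B + B.T) / 2                       # symmetric coupling (ground-state correlations)

# TDA: diagonalize A alone.
tda_energies = np.linalg.eigvalsh(A)

# RPA: solve [[A, B], [-B, -A]] (X, Y)^T = w (X, Y)^T;
# physical excitation energies are the positive eigenvalues.
rpa_energies = np.linalg.eigvals(np.block([[A, B], [-B, -A]]))
rpa_positive = np.sort(rpa_energies.real[rpa_energies.real > 1e-9])

print("TDA energies:", np.round(tda_energies, 3))
print("RPA energies:", np.round(rpa_positive, 3))
```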
|
626 |
Intelligent design and biology
Ramsden, Sean, January 2003
The thesis is that, contrary to the received popular wisdom, the combination of David Hume's sceptical enquiry and Charles Darwin's provision of an alternative theoretical framework to the then current paradigm of natural theology did not succeed in defeating the design argument. I argue that William Paley's work best represented the status quo in the philosophy of biology circa 1800 and that, with the logical mechanisms provided to us by William Dembski in his seminal work on probability, there is a strong argument for the work of Michael Behe to stand in a similar position today to that of Paley two centuries ago. The argument runs as follows: In Sections 1 and 2 of Chapter 1 I introduce the issues. In Section 3 I argue that William Paley's exposition of the design argument was archetypical of the natural theology school and that, given Hume's already published criticism of the argument, Paley for one did not feel the design argument to be done for. I further argue in Section 4 that Hume in fact did no such thing and that neither did he see himself as having done so, but that the design argument was weak rather than fallacious. In Section 5 I outline the demise of natural theology as the dominant school of thought in the philosophy of biology, ascribing this to the rise of Darwinism and subsequently neo-Darwinism. I argue that design arguments were again not defeated but went into abeyance with the rise of a new paradigm associated with Darwinism, namely methodological naturalism. In Chapter 2 I advance the project by a discussion of William Dembski's formulation of design inferences, demonstrating their value in both everyday and technical usage. This is stated in Section 1. In Sections 2 and 3 I discuss Dembski's treatment of probability, whilst in Section 4 I examine Dembski's tying of different levels of probability to different mechanisms of explanation used in explicating the world. Section 5 is my analysis of the logic of the formal statement of the design argument according to Dembski. In Section 6 I encapsulate objections to Dembski. I conclude the chapter (with Section 7) by claiming that Dembski forwards a coherent model of design inferences that can be used to demonstrate that there is little difference between the way Paley came to his conclusions two centuries ago and how modern philosophers of biology (such as I take Michael Behe to be, albeit that by profession he is a scientist) come to theirs when offering design explanations. Inference to the best explanation is demonstrated as lying at the crux of design arguments. In Chapter 3 I draw together the work of Michael Behe and Paley, showing through the mechanism of Dembski's work that they are closely related in many respects and that neither position is to be lightly dismissed. Section 1 introduces this. In Section 2 I introduce Behe's concept of irreducible complexity in the light of (functional) explanation. Section 3 is a detailed analysis of irreducible complexity. Section 4 raises and covers objections to Behe, the general theme being that (neo-)Darwinians beg the question against him. In Section 4 I apply the Dembskian machinery directly to Behe's work. I argue that Behe does not quite meet the Dembskian criteria he needs to in order for his argument to stand as anything other than defeasible. However, in Section 5 I conclude by arguing that this is exactly what we are to expect from Behe's and similar theories, even within competing paradigms in the philosophy of biology, given that inference to the best explanation is the logical lever at work.
|
627 |
Modèles prudents en apprentissage statistique supervisé / Cautious models in supervised machine learning
Yang, Gen, 22 March 2016
In some areas of supervised machine learning (e.g. medical diagnosis, computer vision), predictive models are evaluated not only on their accuracy but also on their ability to provide a more reliable representation of the data and of the knowledge induced from them, so as to support cautious decision making. This is the problem studied in this thesis. Specifically, we examined two existing approaches from the literature for making models and predictions more cautious and more reliable: the framework of imprecise probabilities and that of cost-sensitive learning. Both aim to make learning models and inferences more reliable and cautious, yet few existing studies have attempted to bridge them, owing to both theoretical and practical problems. Our contributions clarify and resolve these problems. On the theoretical side, few existing studies have addressed how to quantify the different classification errors when set-valued predictions are produced and when the costs of mistakes are unequal (in terms of consequences). Our first contribution is therefore to establish general properties and guidelines for quantifying misclassification costs for set-valued predictions. These properties led us to derive a general formula, the generalized discounted cost (GDC), which allows classifiers to be compared whatever the form of their predictions (singleton or set-valued) in the light of a risk-aversion parameter. On the practical side, most classifiers based on imprecise probabilities fail to integrate generic misclassification costs efficiently, because the computational complexity increases by an order of magnitude (or more) when non-unitary costs are used. This problem led to our second contribution: a classifier that can handle the probability intervals produced by imprecise probabilities together with generic error costs, with the same order of complexity as when standard probabilities and unitary costs are used. This is achieved with a binary decomposition technique, nested dichotomies, whose properties and prerequisites we study in detail. In particular, nested dichotomies are applicable to any imprecise probabilistic model and reduce the imprecision level of the model without loss of predictive power. Various experiments were conducted throughout the thesis to support these contributions. We characterized the behaviour of the GDC on ordinal data sets; these experiments highlighted the differences between a model based on the standard probability framework that produces indeterminate predictions and a model based on imprecise probabilities. The latter is generally more competent, because it distinguishes two sources of indeterminacy (ambiguity and lack of information), although the joint use of both types of model is also of particular interest, as it can help the decision-maker improve the data or the classifiers. In addition, experiments on a wide variety of data sets showed that the use of nested dichotomies significantly improves the predictive power of an indeterminate model with generic costs.
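The GDC formula itself is not given in the abstract. The sketch below only illustrates the general idea being scored: building a set-valued prediction from per-class probability intervals (here with a simple interval-dominance rule) and evaluating it with a discounted, risk-aversion-weighted utility. Both the dominance rule and the utility are stand-in assumptions, not the thesis's GDC.

```python
import numpy as np

def set_prediction(prob_intervals):
    """Set-valued prediction by interval dominance: a class is kept unless some
    other class's lower probability bound exceeds this class's upper bound."""
    lower, upper = prob_intervals[:, 0], prob_intervals[:, 1]
    n = len(lower)
    return [k for k in range(n)
            if all(upper[k] >= lower[j] for j in range(n) if j != k)]

def discounted_utility(pred_set, true_label, aversion=1.0):
    """Illustrative discounted utility: zero if the true label is missed, otherwise
    a reward shrinking with the set size; `aversion` penalizes imprecision."""
    if true_label not in pred_set:
        return 0.0
    return 1.0 / (len(pred_set) ** aversion)

# Three classes with [lower, upper] probability bounds.
intervals = np.array([[0.30, 0.55],
                      [0.25, 0.50],
                      [0.05, 0.20]])
pred = set_prediction(intervals)
print(pred)                                      # [0, 1]: classes 0 and 1 retained
print(discounted_utility(pred, true_label=0))    # 0.5
```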
|
628 |
Controlled Semi-Markov Processes With Partial Observation
Goswami, Anindya, 03 1900
No description available.
|
629 |
Transição de fase para um modelo de percolação dirigida na árvore homogênea / Phase transition for a directed percolation model on homogeneous trees
Utria Valdes, Jaime Antonio, 1988-, 27 August 2018
Advisor: Élcio Lebensztayn / Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Previous issue date: 2015 / Abstract: The abstract is available in the full electronic version of the thesis. / Master's degree in Statistics
|
630 |
Nosologie et probabilités. Une histoire épistémologique de la méthode numérique en médecine / Nosology and Probability. A Historical Epistemology of the Numerical Method in Medicine
Corteel, Mathieu, 13 December 2017
In The Birth of the Clinic, Michel Foucault highlights the emergence in the 19th century of a medical gaze that, by silencing theory at the patient's bedside, tries to speak the foreign language of the disease in the depth of organic tissues. With the development of anatomo-pathology, a form of medical nominalism progressively appears in opposition to the essentialist nosographies of the 18th century. This clinical medicine is traversed by an often forgotten concept that takes shape in the shadow of its knowledge and prefigures its supersession: the concept of "probability". Although the concept is inscribed in clinical medicine, the application of the calculus of probabilities fails to become integrated into it. The 19th century is the scene of a genuine conflict over conjecture, opposing the "numerists" to clinicians of Hippocratic obedience: the orthodoxy of the Ecole de Paris is confronted with the emergence of the numerical method. The resulting theoretical dispute problematizes the application of the calculus of probabilities in medicine: from the probable, can anything other than the probable be known? Throughout the 19th century the method is rejected on epistemological grounds, as it does not fit the positivity of the medical sciences. It is public hygiene that makes use of it, to compensate for the clinic's inability to deal with epidemics, endemic diseases and epizootics. This conflictual encounter between the individual and the collective in medicine gives rise to a new form of nosology in the 20th century, whose emergence this thesis seeks to understand.
|