About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

The State Space of Complex Systems

Heilmann, Frank 14 October 2005 (has links) (PDF)
This thesis develops a description of Monte Carlo methods for solving complex optimization problems by means of Markov chains. After a brief introduction, the solution set of such problems is identified with the physical state space of complex systems. First, the dynamics of random walkers in state space is modeled using master equations. By introducing performance criteria, different optimization strategies can be compared quantitatively. In particular, the method Extremal Optimization is presented, which can likewise be understood as a Markov process. It is proven that a best implementation, in the sense of the stated criteria, exists. Since this implementation depends on a so-called fitness schedule, the schedule is computed explicitly for small example systems. Subsequently, the density of states of complex systems is considered. After a brief survey of existing methods, a detailed investigation of the Wang-Landau algorithm follows. Numerical and analytical evidence is given that this algorithm is probably optimal within its class. A new method for approximating the density of states is presented that is particularly suited to the study of complex systems. The thesis closes with an outlook on future work.
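The Wang-Landau flat-histogram idea examined in the thesis can be sketched in a few lines. The example below is a toy, not the thesis code: it estimates the density of states of N = 10 non-interacting spins (energy = number of up spins), where the exact answer is the binomial coefficient C(N, E); the sweep length and the final modification factor are arbitrary choices.

```python
import math
import random

random.seed(0)
N = 10                                  # non-interacting spins; energy = number of "up" spins
lng = {E: 0.0 for E in range(N + 1)}    # running estimate of log g(E)
hist = {E: 0 for E in range(N + 1)}

state = [0] * N
E = 0
f = 1.0                                 # log modification factor
while f > 1e-4:
    for _ in range(20000):
        i = random.randrange(N)
        E_new = E + (1 - 2 * state[i])  # energy change from flipping spin i
        # accept with probability min(1, g(E)/g(E_new)): pushes toward rare energies
        if random.random() < math.exp(min(0.0, lng[E] - lng[E_new])):
            state[i] ^= 1
            E = E_new
        lng[E] += f                     # penalize the energy level just visited
        hist[E] += 1
    f /= 2.0                            # sharpen the estimate; flatness check omitted
    hist = {E: 0 for E in hist}

# normalize so that the total number of states is 2^N, then compare to C(N, E)
shift = max(lng.values())
Z = sum(math.exp(v - shift) for v in lng.values())
g = {E: math.exp(v - shift) / Z * 2 ** N for E, v in lng.items()}
```

A production implementation would also enforce the histogram flatness criterion before reducing f, which is the part the thesis analyzes in detail.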

Dynamic Modeling for Simulating Forest Cover Changes in the Serras do Sudeste and Campanha Meridional of Rio Grande do Sul State

Benedetti, Ana Caroline Paim 02 June 2010 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Significant processes of conversion in land use patterns have been observed in the State of Rio Grande do Sul in recent years due to the incorporation of forest areas. In this context, this work aims to establish methodological guidelines for analyzing the dynamics of these patterns in the Serras do Sudeste and Campanha Meridional, micro-regions located in the southern half of the state. Thematic maps for the years 2000, 2004, and 2008 were produced from the NDVI (Normalized Difference Vegetation Index) product of the MODIS sensor so as to relate this index to land use classes. The resulting maps were used to drive a dynamic model, which made it possible to quantify the conversion rates between classes by means of Markov matrices. The simulation was designed using the probabilistic weights-of-evidence method, which allowed the contribution of the influential variables to forest cover change in both micro-regions to be assessed. The main results are simulations based on the Cellular Automata (CA) paradigm, in which forest areas are quantified and spatially distributed up to the year 2016. The modeled predictions, based on the variables considered, indicate that forest will expand over areas previously devoted to agricultural activities and extensive cattle grazing, both through the introduction of exotic species and through natural regeneration. Forest cover in the Serras do Sudeste will increase from 8.6% to 16.2% by the end of the period; in the Campanha Meridional it will rise from 11.1% to 12.5%, and in both micro-regions the expansion tends toward stability within the period observed.
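The Markov step of such a land-cover model — applying class-to-class conversion rates to cover shares — can be sketched as follows. The transition matrix and base-year shares below are illustrative stand-ins, not the thesis data derived from the MODIS/NDVI maps.

```python
import numpy as np

# Hypothetical 3-class example (forest, agriculture, grazing)
classes = ["forest", "agriculture", "grazing"]
P = np.array([            # P[i, j]: fraction of class i converting to j per 4-year step
    [0.95, 0.03, 0.02],
    [0.06, 0.90, 0.04],
    [0.05, 0.05, 0.90],
])
assert np.allclose(P.sum(axis=1), 1.0)     # rows of a stochastic matrix sum to 1

shares = np.array([0.086, 0.45, 0.464])    # base-year cover shares (illustrative)
for step in range(2):                      # two 4-year steps, e.g. 2008 -> 2016
    shares = shares @ P

forest_2016 = shares[0]
```

In the actual model the cellular-automata layer then allocates these aggregate quantities spatially, weighted by the evidence variables; the matrix product above only reproduces the aggregate Markov projection.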

On Ergodic Theorems for Cesàro Convergence of Spherical Averages for Fuchsian Groups: Geometric Coding via Fundamental Domains

Drygajlo, Lars 04 November 2021 (has links)
The thesis is organized as follows: First we state basic ergodic theorems in Section 2 and introduce the notation of Cesàro averages for multiple operators in Section 3. We state a general theorem in Section 3 for groups that can be represented by a finite alphabet and a transition matrix. In the second part we show that finitely generated Fuchsian groups, with certain restrictions to the fundamental domain, admit such a representation. To develop the representation we give an introduction into Möbius transformations (Section 4), hyperbolic geometry (Section 5), the concept of Fuchsian groups and their action in the hyperbolic plane (Section 6) and fundamental domains (Section 7). As hyperbolic geometry calls for visualization we included images at various points to make the definitions and statements more approachable. With those tools at hand we can develop a geometrical coding for Fuchsian groups with respect to their fundamental domain in Section 8. Together with the coding we state in Section 9 the main theorem for Fuchsian groups. The last chapter (Section 10) is devoted to the application of the main theorem to three explicit examples. 
We apply the developed method to the free group F3 and to the fundamental group of a compact manifold of genus two, and we show why the main theorem does not hold for the modular group PSL(2, Z).

Table of contents:
1 Introduction
2 Ergodic Theorems
   2.1 Mean Ergodic Theorems
   2.2 Pointwise Ergodic Theorems
   2.3 The Limit in Ergodic Theorems
3 Cesàro Averages of Sphere Averages
   3.1 Basic Notation
   3.2 Cesàro Averages as Powers of an Operator
   3.3 Convergence of Cesàro Averages
   3.4 Invariance of the Limit
   3.5 The Limit of Cesàro Averages
   3.6 Ergodic Theorems for Strictly Markovian Groups
4 Möbius Transformations
   4.1 Introduction and Properties
   4.2 Classes of Möbius Transformations
5 Hyperbolic Geometry
   5.1 Hyperbolic Metric
   5.2 Upper Half Plane and Poincaré Disc
   5.3 Topology
   5.4 Geodesics
   5.5 Geometry of Möbius Transformations
6 Fuchsian Groups and Hyperbolic Space
   6.1 Discrete Groups
   6.2 The Group PSL(2, R)
   6.3 Fuchsian Group Actions on H
   6.4 Fuchsian Group Actions on D
7 Geometry of Fuchsian Groups
   7.1 Fundamental Domains
   7.2 Dirichlet Domains
   7.3 Locally Finite Fundamental Domains
      7.3.1 Sides of Locally Finite Fundamental Domains
      7.3.2 Side Pairings for Locally Finite Fundamental Domains
      7.3.3 Finite Sided Fundamental Domains
   7.4 Tessellations of Hyperbolic Space
   7.5 Example Fundamental Domains
8 Coding for Fuchsian Groups
   8.1 Geometric Alphabet
      8.1.1 Alphabet Map
   8.2 Transition Matrix
      8.2.1 Irreducibility of the Transition Matrix
      8.2.2 Strict Irreducibility of the Transition Matrix
9 Ergodic Theorem for Fuchsian Groups
10 Example Constructions
   10.1 The Free Group with Three Generators
      10.1.1 Transition Matrix
   10.2 Example of a Surface Group
      10.2.1 Irreducibility of the Transition Matrix
      10.2.2 Strict Irreducibility of the Transition Matrix
   10.3 Example of PSL(2, Z)
      10.3.1 Irreducibility of the Transition Matrix
      10.3.2 Strict Irreducibility of the Transition Matrix
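The irreducibility property that the thesis checks for its transition matrices is a reachability condition on a 0/1 matrix, which is easy to test mechanically. The two 3x3 matrices below are made-up examples, not the coding matrices of the thesis.

```python
import numpy as np

def is_irreducible(T):
    """True iff every symbol can reach every other symbol, i.e. all entries
    of (I + T)^(n-1) are positive. Integer powers suffice for small n
    (entries grow quickly, so a boolean closure is preferable at scale)."""
    n = T.shape[0]
    M = np.linalg.matrix_power(np.eye(n, dtype=int) + (T > 0).astype(int), n - 1)
    return bool((M > 0).all())

# Toy 3-symbol alphabets: in T_bad no transition ever enters symbol 2,
# so states 0 and 1 form a closed set and the matrix is reducible.
T_good = np.array([[0, 1, 1],
                   [1, 0, 1],
                   [1, 1, 0]])
T_bad = np.array([[0, 1, 0],
                  [1, 0, 0],
                  [1, 1, 0]])
```

Strict irreducibility, as used in the main theorem, is a stronger condition on pairs of transitions; the sketch above covers only the plain irreducibility check of Section 8.2.1.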

Oscillation of a compressible fluid in a cylindrical space

Smolík, Adam January 2016 (has links)
The thesis consists of two main parts. The first is a theoretical analysis, i.e. a literature review, focused on questions of pressure pulsation and possible damping elements. The second part covers the construction of a mathematical model of the transfer matrix of a gas accumulator, the assembly of the hydraulic system model by the transfer matrix method, the derivation of the natural and forced responses, and the mode shapes of a compressible liquid in a cylindrical space.

Measuring Skill Importance in Women's Soccer and Volleyball

Allan, Michelle L. 11 March 2009 (has links) (PDF)
The purpose of this study is to demonstrate how to measure skill importance for two sports: soccer and volleyball. A division I women's soccer team filmed each home game during a competitive season. Every defensive, dribbling, first touch, and passing skill was rated and recorded for each team. It was noted whether each sequence of plays led to a successful shot. A hierarchical Bayesian logistic regression model is implemented to determine how the performance of the skill affects the probability of a successful shot. A division I women's volleyball team rated each skill (serve, pass, set, etc.) and recorded rally outcomes during home games in a competitive season. The skills were only rated when the ball was on the home team's side of the net. Events followed one of these three patterns: serve-outcome, pass-set-attack-outcome, or dig-set-attack-outcome. We analyze the volleyball data using two different techniques, Markov chains and Bayesian logistic regression. These sequences of events are assumed to be first-order Markov chains. This means the quality of the current skill only depends on the quality of the previous skill. The count matrix is assumed to follow a multinomial distribution, so a Dirichlet prior is used to estimate each row of the count matrix. Bayesian simulation is used to produce the unconditional posterior probability (e.g., a perfect serve results in a point). The volleyball logistic regression model uses a Bayesian approach to determine how the performance of the skill affects the probability of a successful outcome. The posterior distributions produced from each of the models are used to calculate importance scores. The soccer data importance scores revealed that passing, first touch, and dribbling skills are the most important to the primary team. The Markov chain model for the volleyball data indicates setting 3–5 feet off the net increases the probability of a successful outcome. 
The logistic regression model for the volleyball data reveals that serves have a high importance score because of their steep slope. Importance scores can be used to assist coaches in allocating practice time, developing new strategies, and analyzing each player's skill performance.
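The Dirichlet-multinomial step described for the volleyball data — a Dirichlet prior on each row of the count matrix, then simulation of the transition probabilities — can be sketched as follows. The counts and the three quality levels are hypothetical, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical transition counts from pass quality (rows: poor/ok/perfect)
# to set quality (columns: poor/ok/perfect); not the study's data.
counts = np.array([
    [30, 15,  5],
    [10, 40, 20],
    [ 2, 18, 60],
])
prior = np.ones(3)                   # flat Dirichlet prior on each row

# each row's posterior is Dirichlet(counts + prior); draw transition probabilities
draws = np.stack([rng.dirichlet(row + prior, size=5000) for row in counts])

# posterior mean probability that a perfect pass is followed by a perfect set
p_perfect = draws[2, :, 2].mean()
```

Chaining such row-wise draws through the pass-set-attack sequence yields the unconditional posterior probabilities (e.g., that a perfect serve results in a point) from which the importance scores are computed.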

Quantum computers for nuclear physics

Yusf, Muhammad F 08 December 2023 (has links) (PDF)
We explore the paradigm shift in quantum computing and quantum information science, emphasizing the synergy between hardware advances and algorithm development. Despite a century of quantum mechanics, only recent advances in quantum computing hardware have unveiled this untapped potential, and innovative algorithms are required to exploit it fully. Project 1 addresses quantum applications in radiative reactions, overcoming challenges in many-fermion physics due to imaginary time evolution, stochastic methods such as Monte Carlo simulations, and the associated sign problem. The methodology introduces the Electromagnetic Transition System and a general two-level system for computing radiative capture reactions. Project 2 uses the Variational Quantum Eigensolver (VQE) to address the difficulties of adiabatic quantum computation, highlighting the role of Singular Value Decomposition (SVD) in quantum computing. The results demonstrate an accurate ground-state wavefunction match with only a 0.016% energy error. These projects advance quantum algorithm design, error mitigation, and SVD integration, showcasing quantum computing's transformative potential in computational science.
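The variational principle behind VQE can be illustrated classically for a two-level system: minimize the energy expectation of a 2x2 Hamiltonian over a one-parameter ansatz. The Hamiltonian and the grid-search "optimizer" below are illustrative simplifications, not the project's quantum circuit or nuclear Hamiltonian.

```python
import numpy as np

# Illustrative 2x2 Hermitian Hamiltonian for a two-level system
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta):
    """Expectation value <psi|H|psi> for the real ansatz psi = (cos t, sin t)."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi

# a grid search stands in for the classical optimizer of the VQE loop
thetas = np.linspace(0.0, np.pi, 2001)
E_var = min(energy(t) for t in thetas)
E_exact = np.linalg.eigvalsh(H)[0]            # exact ground-state energy
rel_err = abs(E_var - E_exact) / abs(E_exact)
```

Because the ground state of a real symmetric Hamiltonian is real, this one-parameter ansatz can reach the exact ground-state energy up to grid resolution; on hardware, the same minimization is performed over circuit parameters with measured expectation values.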

Intra-Household Decision Making

Mohemkar-Kheirandish, Reza 27 October 2008 (has links)
This dissertation consists of three essays. In the first (Chapter 3), "Gains and Losses from Household Formation," I introduce a general equilibrium model in which a household may consist of more than one member, each with their own preferences and endowments. In these models, individuals first form households; collective decisions (or bargaining) within the household then specify the consumption plans of household members; finally, competition across households determines a feasible allocation of resources. I consider a model with two types of individuals and pure group externalities, and investigate the competitive equilibrium allocation and the stability of the equilibrium in that setting. Specifically, I show that under a certain set of assumptions a competitive equilibrium with free exit is also a competitive equilibrium with free household formation. Similar results are obtained for a special case of consumption externality. Illustrative examples, in which prices may change as household structures change, show how a general equilibrium model with variable household structure works; some further results are discussed at the end of the essay.

In the second essay (Chapter 4), "Effects of the Price System on Household Labor Supply," I introduce leisure and labor into the two-type economy framework constructed in the first essay. The main objective is to investigate the effects of exogenous prices on labor supply decisions and to analyze fully the partial equilibrium outcomes in a two-type economy. I assume a wage gap and explore its effect on labor supply. The core of the essay analyzes how changes in wages, the price of the private good, the power of each individual in the household, the relative importance of private consumption compared to leisure, and the level of altruism affect each individual's decisions about how much of the private good and leisure to consume. The effect of a relative price change on labor supply, private consumption, and utility is also investigated. Moreover, a variation of Spence's signaling model is borrowed to explain why higher education of women in Iran does not necessarily translate into higher female labor force participation. Finally, a fixed point theorem is used to compute the power (or, alternatively, the labor supply) of individuals in the household endogenously for the two-type economy with labor.

In the third essay (Chapter 5), "Dynamics of Poverty in Iran: What Are the Determinants of the Probability of Being Poor?," I investigate the dynamics of poverty in Iran during 1992-95. A central issue in economic policy making is the evaluation of anti-poverty programs, which requires tracking households over time. I am especially interested in the characteristics of households that fall below the poverty line and stay there, as well as those that later climb out; if policy-makers want efficient policies to reduce poverty, they should target the former group. I decompose poverty in Iran into chronic and transient components and investigate the relation of each component to household characteristics. I also study mobility in this period, with an emphasis on movements into and out of poverty, and review the main characteristics of the growth in household expenditure. / Ph. D.
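One standard way to carry out the chronic/transient split mentioned in the third essay is the Jalan-Ravallion decomposition: chronic poverty is poverty evaluated at each household's time-averaged expenditure, and transient poverty is the remainder. A sketch with a made-up three-household panel follows; the dissertation's own method and data may differ.

```python
import numpy as np

z = 100.0                            # poverty line, illustrative units

def sq_gap(y):
    """Squared poverty gap: 0 above the line, ((z - y)/z)^2 below it."""
    return np.maximum(0.0, (z - y) / z) ** 2

# made-up expenditure panel: 3 households observed in 4 survey years
panel = np.array([
    [ 80.0,  90.0,  85.0,  95.0],    # always poor: mostly chronic poverty
    [ 95.0, 120.0,  90.0, 130.0],    # crosses the line: purely transient
    [150.0, 160.0, 155.0, 170.0],    # never poor
])

total = sq_gap(panel).mean(axis=1)       # time-averaged poverty per household
chronic = sq_gap(panel.mean(axis=1))     # poverty at time-averaged expenditure
transient = total - chronic              # Jalan-Ravallion transient component
```

The convexity of the squared-gap measure guarantees a nonnegative transient component, which is why this decomposition is usually paired with the squared poverty gap rather than the headcount.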

The Credit Risk Model with Infectious Effects and the Continuous-Time Migration Matrix

許柏園, Hsu, Po-Yuan Unknown Date (has links)
Although interest income from loans is a major source of profit for commercial banks, lending inevitably exposes banks to default risk. Banks should employ risk management methods to compute economic capital and set aside sufficient reserves against expected and unexpected losses. Moreover, loss severity may well be underestimated if the correlation between default events is disregarded. Therefore, to calculate economic capital while taking default correlation into account, we start from the Merton (1974) model, identify defaults via a factor model for portfolio credit risk, and then determine portfolio losses. To simplify the analysis, loss given default is assumed to be 100%. To intensify correlation, default contagion is introduced into the factor model, and we investigate which model results in larger losses. When determining default, we use rating transition matrices to obtain the unconditional probability of default. Transition matrices published by credit rating agencies, however, omit considerable information; we correct this flaw by means of a transition matrix estimated from continuous-time observations, which yields a different unconditional probability of default. Through Monte Carlo simulation, loss distributions are obtained from the two factor models for a portfolio of 537 Taiwan listed and OTC companies. We find that expected and unexpected losses are larger, and the loss distribution more right-skewed, when infectious effects exist.
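The baseline factor-model simulation (without the contagion extension) can be sketched as a one-factor Gaussian model: an obligor defaults when a correlated latent asset variable falls below the threshold implied by its unconditional default probability. All parameter values below are illustrative, not calibrated to the 537-company portfolio.

```python
import numpy as np

rng = np.random.default_rng(7)

n_obligors, n_sims = 537, 10000
rho = 0.2                       # asset correlation with the common factor (illustrative)
threshold = -2.0537             # approx. Phi^{-1}(0.02): unconditional PD of 2%

Z = rng.standard_normal(n_sims)                     # systematic factor, one per scenario
eps = rng.standard_normal((n_sims, n_obligors))     # idiosyncratic shocks
assets = np.sqrt(rho) * Z[:, None] + np.sqrt(1.0 - rho) * eps

defaults = assets < threshold   # LGD = 100%, unit exposures: loss = default count
losses = defaults.sum(axis=1)

EL = losses.mean()                              # expected loss
UL = np.quantile(losses, 0.99) - EL             # unexpected loss at the 99% level
```

A contagion variant would raise an obligor's conditional default probability when its neighbors default, fattening the right tail of the loss distribution — the effect the thesis measures.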

On the use of multi-state models to measure and manage the risks of an insurance contract

Guibert, Quentin 07 December 2015 (has links)
With the implementation of the Solvency II framework, actuaries must examine the adequacy between models and data. This thesis studies several statistical approaches, often ignored by practitioners, that enable the use of multi-state methods to model and manage individual risks in insurance. Chapter 1 presents the general context of the thesis and positions its main contributions. The basic tools for using multi-state models in insurance are introduced, and classical inference techniques, adapted to insurance data with and without the Markov assumption, are presented. Finally, an extension of these models to credit risk is outlined. Chapter 2 focuses on nonparametric inference methods for building incidence tables for long-term care insurance contracts. Since several entry causes into disability states are of interest to actuaries, an inference method for competing risks data, seen as a Markov multi-state model in continuous time, is used. These estimators are then compared with those conventionally used by practitioners, based on survival analysis methods. The second approach may involve significant bias because the interaction between entry causes cannot be appropriately captured; in particular, it assumes that latent failure times are independent, a hypothesis that cannot be tested with competing risks data. Our approach measures the error made by practitioners when they build incidence tables. Finally, a numerical application is considered using data from a long-term care insurer.
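The nonparametric competing-risks estimator discussed in Chapter 2 (the Aalen-Johansen cumulative incidence) can be sketched for a single entry cause as follows. The data are invented for illustration, and ties between event times are processed sequentially here, a simplification of the usual simultaneous treatment.

```python
import numpy as np

def cumulative_incidence(times, causes, cause):
    """Aalen-Johansen cumulative incidence for one cause.
    causes: 0 = censored, otherwise the competing cause label."""
    order = np.argsort(times, kind="stable")
    times, causes = times[order], causes[order]
    surv = 1.0                   # overall survival just before the current time
    cif = 0.0
    at_risk = len(times)
    for t, c in zip(times, causes):
        if c == cause:
            cif += surv / at_risk          # hazard increment for this cause
        if c != 0:
            surv *= 1.0 - 1.0 / at_risk    # any event depletes overall survival
        at_risk -= 1
    return cif

# invented exits from the "autonomous" state: cause 1 = dependency, 2 = death
times = np.array([2.0, 3.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
causes = np.array([1, 2, 1, 0, 1, 2, 0, 1])

cif_dependency = cumulative_incidence(times, causes, 1)
cif_death = cumulative_incidence(times, causes, 2)
```

Unlike the naive "one minus Kaplan-Meier per cause" construction used by some practitioners, the cause-specific incidences above share one overall survival curve, so they cannot sum to more than 1 — which is precisely the bias the chapter quantifies.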
