About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Makroekonomická analýza pomocí DSGE modelů / The Macroeconomic Analysis with DSGE Models

Průchová, Anna January 2012 (has links)
Dynamic stochastic general equilibrium (DSGE) models are derived from microeconomic principles and retain the hypothesis of rational expectations under policy changes, which makes them robust to the Lucas critique. DSGE modelling has become closely associated with New Keynesian thinking, and the basic New Keynesian model is studied in this thesis. Its three equations are the dynamic IS curve, the Phillips curve and a monetary policy rule. Blanchard and Kahn's approach is introduced as the solution strategy for the linearized model. Two methods for evaluating DSGE models are presented -- calibration and Bayesian estimation. Calibrated parameters are used to fit the model to the Czech economy, the results of the numerical experiments are compared with empirical data from the Czech Republic, and the model's suitability for monetary policy analysis is evaluated.
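
For reference, a sketch of the canonical log-linearized three-equation New Keynesian system referred to above; the notation (output gap x_t, inflation pi_t, nominal interest rate i_t, natural rate r_t^n) follows the textbook convention and is not necessarily the thesis's.

```latex
% Canonical log-linearized three-equation New Keynesian model (textbook form).
\begin{align}
  x_t   &= \mathbb{E}_t x_{t+1} - \tfrac{1}{\sigma}\left(i_t - \mathbb{E}_t \pi_{t+1} - r_t^n\right)
          && \text{(dynamic IS curve)} \\
  \pi_t &= \beta\, \mathbb{E}_t \pi_{t+1} + \kappa\, x_t
          && \text{(New Keynesian Phillips curve)} \\
  i_t   &= \phi_\pi \pi_t + \phi_x x_t + \varepsilon_t
          && \text{(monetary policy rule)}
\end{align}
```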
32

Inverse Uncertainty Quantification using deterministic sampling: An intercomparison between different IUQ methods

Andersson, Hjalmar January 2021 (has links)
In this thesis, two novel methods for Inverse Uncertainty Quantification are benchmarked against the more established methods of Monte Carlo sampling of output parameters (MC) and Maximum Likelihood Estimation (MLE). Inverse Uncertainty Quantification (IUQ) is the process of estimating the values of the input parameters of a simulation, and the uncertainty of that estimate, given a measurement of the output parameters. The two new methods are Deterministic Sampling (DS) and Weight Fixing (WF). Deterministic Sampling uses a set of sample points chosen so that the set has the same statistics as the output; for each such point, the corresponding input point is found, and the statistics of the input are calculated from these input points. Weight Fixing draws random samples from a rough region around the input and sets up a linear problem of finding weights such that the weighted output reproduces the measured output statistics. The benchmarking of the four methods shows that DS and WF are comparable in accuracy to MC and MLE in most of the cases tested in this thesis. It was also found that DS and WF use approximately the same number of function calls as MLE, and that all three use far fewer calls to the simulation than MC. It was further observed that WF is not always able to find a solution, probably because the sub-methods used within WF are not optimal for their task; finding more suitable methods for WF is something that could be investigated further.
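
The abstract does not spell out how the deterministic sample set is constructed; as a rough illustration of the idea, the sketch below uses a standard sigma-point (unscented-transform style) construction, in which a small set of equally weighted points reproduces a given mean and covariance exactly and is then pushed through the simulation. In the inverse setting of the thesis, the corresponding input point would be sought for each output point; the `model`, `mu` and `C` here are hypothetical stand-ins.

```python
import numpy as np

def deterministic_samples(mean, cov):
    """Symmetric sigma-point set: 2d equally weighted points whose sample
    mean and covariance reproduce `mean` and `cov` exactly."""
    mean = np.asarray(mean, dtype=float)
    d = mean.size
    L = np.linalg.cholesky(cov)                 # cov = L @ L.T
    offsets = np.sqrt(d) * L.T                  # row i is sqrt(d) * i-th column of L
    return np.vstack([mean + offsets, mean - offsets])

def propagate(model, points):
    """Push each deterministic point through the simulation and summarise
    the outputs by their mean and covariance (ddof=0 matches equal weights)."""
    outputs = np.array([model(p) for p in points])
    return outputs.mean(axis=0), np.cov(outputs, rowvar=False, ddof=0)

# Hypothetical toy model standing in for the real simulation code.
model = lambda x: np.array([x[0] + 0.5 * x[1], x[0] * x[1]])
mu = np.array([1.0, 2.0])
C = np.array([[0.04, 0.01],
              [0.01, 0.09]])
out_mean, out_cov = propagate(model, deterministic_samples(mu, C))
print(out_mean)
print(out_cov)
```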
33

Computer Model Emulation and Calibration using Deep Learning

Bhatnagar, Saumya January 2022 (has links)
No description available.
34

Towards Reliable Hybrid Human-Machine Classifiers

Sayin Günel, Burcu 26 September 2022 (has links)
In this thesis, we focus on building reliable hybrid human-machine classifiers to be deployed in cost-sensitive classification tasks. The objective is to assess ML quality in hybrid classification contexts and design the appropriate metrics, so that we know whether we can trust the model predictions and can identify the subset of items on which the model is well calibrated and trustworthy. We start by discussing the key concepts, research questions, challenges, and architecture needed to design and implement an effective hybrid classification service. We then present a deeper investigation of each service component along with our solutions and results. We mainly contribute to cost-sensitive hybrid classification, selective classification, model calibration, and active learning. We highlight the importance of model calibration in hybrid classification services and propose novel approaches to improve the calibration of human-machine classifiers. In addition, we argue that the current accuracy-based metrics are misaligned with the actual value of machine learning models and propose a novel metric, "value". We further test the performance of SOTA machine learning models on NLP tasks in a cost-sensitive hybrid classification context and show that their performance drops significantly when they are evaluated according to value rather than accuracy. Finally, we investigate the quality of hybrid classifiers in active learning scenarios: we review existing active learning strategies, evaluate their effectiveness, and propose a novel value-aware active learning strategy to improve the performance of selective classifiers in the active learning of cost-sensitive tasks.
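
The abstract does not give the formal definition of the "value" metric; the sketch below is one plausible cost-sensitive reading of it for a selective (hybrid) classifier, with hypothetical gains and costs rather than the thesis's actual definition.

```python
import numpy as np

def value_metric(y_true, y_pred, confidence, threshold,
                 gain_correct=1.0, cost_error=5.0, cost_human=0.3):
    """Cost-sensitive 'value' of a selective (hybrid) classifier.

    Items whose confidence falls below `threshold` are deferred to a human
    (paying `cost_human` each); accepted items earn `gain_correct` if right
    and pay `cost_error` if wrong.  The gains and costs here are hypothetical.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accepted = np.asarray(confidence) >= threshold
    correct = accepted & (y_pred == y_true)
    wrong = accepted & (y_pred != y_true)
    total = (gain_correct * correct.sum()
             - cost_error * wrong.sum()
             - cost_human * (~accepted).sum())
    return total / len(y_true)          # average value per item

# Example: a high threshold defers uncertain items instead of risking errors.
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
conf   = [0.95, 0.90, 0.55, 0.80, 0.99]
print(value_metric(y_true, y_pred, conf, threshold=0.6))
```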
35

American Monte Carlo option pricing under pure jump Lévy models

West, Lydia 03 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2013. / ENGLISH ABSTRACT: We study Monte Carlo methods for pricing American options where the stock price dynamics follow exponential pure jump Lévy models. Only stock price dynamics for a single underlying are considered. The thesis begins with a general introduction to American Monte Carlo methods. We then consider two classes of these methods. The first class involves regression: we briefly consider the regression method of Tsitsiklis and Van Roy [2001] and analyse in detail the least squares Monte Carlo method of Longstaff and Schwartz [2001]. The variance reduction techniques of Rasmussen [2005] applicable to the least squares Monte Carlo method are also considered. The stochastic mesh method of Broadie and Glasserman [2004] falls into the second class we study. Furthermore, we consider the dual method, independently studied by Andersen and Broadie [2004], Rogers [2002] and Haugh and Kogan [March 2004], which generates a high-bias estimate from a stopping rule. The rules we consider are estimates of the boundary between the continuation and exercise regions of the option. We analyse in detail how to obtain such an estimate in the least squares Monte Carlo and stochastic mesh methods. These methods are implemented using both a pseudo-random number generator and the preferred choice of a quasi-random number generator with bridge sampling. As a base case, the methods are implemented where the stock price process follows geometric Brownian motion. However, the focus of the thesis is to implement the Monte Carlo methods for two pure jump Lévy models, namely the variance gamma and the normal inverse Gaussian models. We first provide a broad discussion of some of the properties of Lévy processes, followed by a study of the variance gamma model of Madan et al. [1998] and the normal inverse Gaussian model of Barndorff-Nielsen [1995]. We also provide an implementation of a variation of the calibration procedure of Cont and Tankov [2004b] for these models. We conclude with an analysis of results obtained from pricing American options using these models.
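
As an illustration of the regression class above, here is a minimal sketch of the Longstaff and Schwartz [2001] least squares Monte Carlo method for an American put, written for the geometric Brownian motion base case rather than the Lévy models; the parameters and the polynomial regression basis are illustrative choices, not the thesis's.

```python
import numpy as np

def lsm_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     steps=50, paths=100_000, seed=0):
    """Longstaff-Schwartz least squares Monte Carlo price of an American put
    under geometric Brownian motion (Levy dynamics would only change how the
    paths are simulated)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    disc = np.exp(-r * dt)

    # Simulate GBM paths: S has shape (steps + 1, paths).
    z = rng.standard_normal((steps, paths))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.vstack([np.zeros(paths), np.cumsum(increments, axis=0)]))

    # Cash flows if exercised at maturity.
    cashflow = np.maximum(K - S[-1], 0.0)

    # Backward induction: regress the discounted continuation value on basis
    # functions of the current price, using in-the-money paths only.
    for t in range(steps - 1, 0, -1):
        cashflow *= disc
        itm = K - S[t] > 0.0
        if not np.any(itm):
            continue
        x = S[t, itm]
        basis = np.column_stack([np.ones_like(x), x, x**2])   # simple polynomial basis
        coeff, *_ = np.linalg.lstsq(basis, cashflow[itm], rcond=None)
        continuation = basis @ coeff
        exercise_now = (K - x) > continuation
        idx = np.where(itm)[0][exercise_now]
        cashflow[idx] = K - S[t, idx]

    return disc * cashflow.mean()

print(round(lsm_american_put(), 4))   # about 6 for these parameters (European put ~5.57)
```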
36

EXPERIMENTALLY VALIDATED CRYSTAL PLASTICITY MODELING OF TITANIUM ALLOYS AT MULTIPLE LENGTH-SCALES BASED ON MATERIAL CHARACTERIZATION, ACCOUNTING FOR RESIDUAL STRESSES

Kartik Kapoor (7543412) 30 October 2019 (has links)
There is a growing need to understand the deformation mechanisms in titanium alloys due to their widespread use in the aerospace industry (especially within gas turbine engines), variation in their properties and performance based on their microstructure, and their tendency to undergo premature failure due to dwell and high cycle fatigue well below their yield strength. Crystal plasticity finite element (CPFE) modeling is a popular computational tool used to understand deformation in these polycrystalline alloys. With the advancement in experimental techniques such as electron backscatter diffraction, digital image correlation (DIC) and high-energy x-ray diffraction, more insights into the microstructure of the material and its deformation process can be attained. This research leverages data from a number of experimental techniques to develop well-informed and calibrated CPFE models for titanium alloys at multiple length-scales and use them to further understand the deformation in these alloys.

The first part of the research utilizes experimental data from high-energy x-ray diffraction microscopy to initialize grain-level residual stresses and capture the correct grain morphology within CPFE simulations. Further, another method to incorporate the effect of grain-level residual stresses via geometrically necessary dislocations obtained from 2D material characterization is developed and implemented within the CPFE framework. Using this approach, grain-level information about residual stresses obtained spatially over the region of interest, directly from the EBSD and high-energy x-ray diffraction microscopy, is utilized as an input to the model.

The second part of this research involves calibrating the CPFE model based upon a systematic and detailed optimization routine utilizing experimental data in the form of macroscopic stress-strain curves coupled with lattice strains on different crystallographic planes for the α and β phases, obtained from high-energy X-ray diffraction experiments for multiple material pedigrees with varying β volume fractions. This fully calibrated CPFE model is then used to gain a comprehensive understanding of the deformation behavior of Ti-6Al-4V, specifically the effect of the relative orientation of the α and β phases within the microstructure.

In the final part of this work, large and highly textured regions, referred to as macrozones or microtextured regions (MTRs), with sizes up to several orders of magnitude larger than that of the individual grains, found in dual-phase titanium alloys, are modeled using a reduced order simulation strategy. This is done to overcome the computational challenges associated with modeling macrozones. The reduced order model is then used to investigate the strain localization within the microstructure and the effect of varying the misorientation tolerance on the localization of plastic strain within the macrozones.
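
The abstract does not detail the optimization routine; purely as an illustration of the calibration pattern (minimise the misfit between simulated and measured macroscopic stress-strain curves), here is a sketch using scipy.optimize.least_squares with a hypothetical surrogate standing in for the actual CPFE solve.

```python
import numpy as np
from scipy.optimize import least_squares

# Measured macroscopic stress-strain curve (hypothetical data, stress in MPa).
strain_exp = np.linspace(0.0, 0.02, 21)
stress_exp = 110e3 * strain_exp / (1.0 + (110e3 * strain_exp / 900.0) ** 8) ** (1 / 8)

def surrogate_cpfe(params, strain):
    """Placeholder for the CPFE simulation: a smooth elastic-to-plateau response
    parameterised by an effective modulus E and strength sigma0.  In the real
    workflow this call would run the crystal plasticity model."""
    E, sigma0 = params
    return E * strain / (1.0 + (E * strain / sigma0) ** 8) ** (1 / 8)

def residuals(params):
    # Misfit between simulated and measured stress; lattice-strain data for the
    # alpha/beta phases could be appended here as additional residual terms.
    return surrogate_cpfe(params, strain_exp) - stress_exp

fit = least_squares(residuals, x0=[90e3, 700.0],
                    bounds=([10e3, 100.0], [300e3, 3e3]))
print("calibrated E, sigma0:", fit.x)
```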
37

Automated Calibration Of Water Distribution Networks

Apaydin, Oncu 01 February 2013 (has links) (PDF)
Water distribution network models are widely used for various purposes such as long-range planning, design, operation and water quality management. Before these models are used for a specific study, they should be calibrated by adjusting model parameters such as pipe roughness values and nodal demands so that the models yield results compatible with site observations (basically, pressure readings). Many methods have been developed to calibrate water distribution networks. In this study, Darwin Calibrator, a software tool that uses a genetic algorithm, is used to calibrate the N8.3 pressure zone model of the Ankara water distribution network; in this case study the network is calibrated on the basis of the roughness parameter, the Hazen-Williams coefficient, for the sake of simplicity. It is understood that various parameters contribute to the uncertainties in water distribution network modelling and the calibration process. Computer software tools are valuable for solving water distribution network problems and for calibrating network models accurately and quickly using automated calibration techniques. Furthermore, there are many important aspects that should be considered during automated calibration, such as pipe roughness grouping. In this study, the influence of flow velocity on pipe roughness grouping is examined. Roughness coefficients of pipes have been estimated in the range of 70-140.
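
Darwin Calibrator itself is a commercial tool; the sketch below only illustrates the general shape of genetic-algorithm calibration of grouped Hazen-Williams coefficients against measured pressures, with a hypothetical simulate_pressures function standing in for the hydraulic solver.

```python
import numpy as np

rng = np.random.default_rng(1)
N_GROUPS = 4                                          # pipe roughness groups
obs_pressures = np.array([52.0, 48.5, 46.0, 50.2])    # hypothetical field readings (m)

def simulate_pressures(roughness):
    """Placeholder for the hydraulic solver (e.g. an EPANET run) that returns
    nodal pressures for a candidate set of group roughness coefficients."""
    return 30.0 + 0.2 * roughness[:len(obs_pressures)]   # toy stand-in model

def fitness(roughness):
    return -np.sum((simulate_pressures(roughness) - obs_pressures) ** 2)

# Simple generational GA over Hazen-Williams coefficients constrained to [70, 140].
pop = rng.uniform(70, 140, size=(40, N_GROUPS))
for generation in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]                  # keep the best half
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(N_GROUPS) < 0.5, a, b)   # uniform crossover
        child += rng.normal(0.0, 2.0, N_GROUPS)              # mutation
        children.append(np.clip(child, 70, 140))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("calibrated Hazen-Williams coefficients:", np.round(best, 1))
```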
38

Hydrological and Sediment Yield Modelling in Lake Tana Basin, Blue Nile, Ethiopia

Setegn, Shimelis Gebriye January 2008 (has links)
Land and water resources degradation are the major problems in the Ethiopian highlands. Poor land use practices and improper management systems have played a significant role in causing high soil erosion rates, sediment transport and loss of agricultural nutrients. So far only limited measures have been taken to combat these problems. In this study a physically based watershed model, SWAT2005, was applied to the northern highlands of Ethiopia for modelling of the hydrology and sediment yield. The main objective of this study was to test the performance and feasibility of the SWAT2005 model in examining the influence of topography, land use, soil and climatic condition on streamflows, soil erosion and sediment yield. The model was calibrated and validated on four tributaries of Lake Tana as well as the Anjeni watershed using the SUFI-2, GLUE and ParaSol algorithms. SWAT and a GIS-based decision support system (MCE analysis) were also used to identify the most erosion-prone areas in the Lake Tana Basin. Streamflows are more sensitive to the hydrological response unit definition thresholds than to subbasin discretization, while prediction of sediment yield is highly sensitive to subbasin size and slope discretization. Baseflow is an important component of the total discharge within the study area and contributes more than the surface runoff. There is good agreement between the measured and simulated flows and sediment yields, with high values of the coefficient of determination and the Nash-Sutcliffe efficiency. The annual average measured sediment yield in the Anjeni watershed was 24.6 tonnes/ha; the annual average simulated sediment yield was 27.8 and 29.5 tonnes/ha for the calibration and validation periods, respectively. The SWAT model indicated that 18.5% of the Lake Tana Basin consists of potential erosion areas, whereas the MCE result indicated 25.5% of the basin. The calibrated model can be used for further analysis of the effect of climate and land use change, as well as other management scenarios, on streamflows and soil erosion. The results of the study could help different stakeholders to plan and implement appropriate soil and water conservation strategies.
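
For reference, the two goodness-of-fit statistics mentioned above can be computed as follows; the streamflow series in the example are hypothetical, not the thesis data.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is no
    better than predicting the observed mean, negative values are worse."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def coefficient_of_determination(observed, simulated):
    """Squared Pearson correlation between observed and simulated series."""
    return np.corrcoef(observed, simulated)[0, 1] ** 2

# Hypothetical monthly streamflow series (m^3/s).
obs = [12.1, 30.5, 55.0, 80.3, 60.2, 25.4, 14.0, 9.8]
sim = [10.9, 28.0, 58.1, 75.6, 63.0, 27.2, 12.5, 10.4]
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}, R^2 = {coefficient_of_determination(obs, sim):.3f}")
```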
39

Model Validation and Discovery for Complex Stochastic Systems

Jha, Sumit Kumar 02 July 2010 (has links)
In this thesis, we study two fundamental problems that arise in the modeling of stochastic systems: (i) Validation of stochastic models against behavioral specifications such as temporal logics, and (ii) Discovery of kinetic parameters of stochastic biochemical models from behavioral specifications. We present a new Bayesian algorithm for Statistical Model Checking of stochastic systems based on a sequential version of Jeffreys’ Bayes Factor test. We argue that the Bayesian approach is more suited for application domains like systems biology modeling, where distributions on nuisance parameters and priors may be known. We prove that our Bayesian Statistical Model Checking algorithm terminates for a large subclass of prior probabilities. We also characterize the Type I/II errors associated with our algorithm. We experimentally demonstrate that this algorithm is suitable for the analysis of complex biochemical models like those written in the BioNetGen language. We then argue that i.i.d. sampling based Statistical Model Checking algorithms are not an effective way to study rare behaviors of stochastic models and present another Bayesian Statistical Model Checking algorithm that can incorporate non-i.i.d. sampling strategies. We also present algorithms for synthesis of chemical kinetic parameters of stochastic biochemical models from high level behavioral specifications. We consider the setting where a modeler knows facts that must hold on the stochastic model but is not confident about some of the kinetic parameters in her model. We suggest algorithms for discovering these kinetic parameters from facts stated in appropriate formal probabilistic specification languages. Our algorithms are based on our theoretical results characterizing the probability of a specification being true on a stochastic biochemical model. We have applied this algorithm to discover kinetic parameters for biochemical models with as many as six unknown parameters.
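
A minimal sketch of the sequential Bayes-factor test at the core of this style of Bayesian statistical model checking, for the hypothesis H0: p >= theta that the model satisfies a property with probability at least theta; the Beta prior, acceptance threshold and toy trace sampler are illustrative choices rather than the thesis's experimental setup.

```python
import random
from scipy.stats import beta

def bayes_factor(successes, n, theta, a=1.0, b=1.0):
    """Bayes factor for H0: p >= theta vs H1: p < theta, with a Beta(a, b)
    prior on p: the ratio of posterior odds to prior odds of H0."""
    post_h1 = beta.cdf(theta, a + successes, b + n - successes)   # P(p < theta | data)
    prior_h1 = beta.cdf(theta, a, b)                              # P(p < theta)
    posterior_odds = (1.0 - post_h1) / post_h1
    prior_odds = (1.0 - prior_h1) / prior_h1
    return posterior_odds / prior_odds

def sequential_smc(sample_trace, theta=0.9, threshold=100.0, max_samples=100_000):
    """Draw i.i.d. traces until the Bayes factor accepts or rejects H0."""
    successes = 0
    for n in range(1, max_samples + 1):
        successes += int(sample_trace())     # 1 if the trace satisfies the property
        bf = bayes_factor(successes, n, theta)
        if bf > threshold:
            return "accept H0", n
        if bf < 1.0 / threshold:
            return "reject H0", n
    return "undecided", max_samples

# Toy stochastic model: the property holds on roughly 95% of traces.
print(sequential_smc(lambda: random.random() < 0.95))
```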
40

Utilização de mercados artificiais com formadores de mercado para análise de estratégias / Using artificial markets with market makers for strategy analysis

Odriozola, Fernando Reis 24 August 2015 (has links)
In the modelling of complex systems, the traditional analytical approach based on differential equations sometimes results in intractable solutions. An alternative is Agent-Based Modelling, a complementary tool with which a system is modelled from its constituent parts and their interactions. Financial markets are good examples of complex systems, so Agent-Based Models are an appropriate approach. This work implements an Artificial Financial Market composed of market makers, information broadcasters and a set of heterogeneous agents that trade an asset through a Continuous Double Auction mechanism. Several aspects of the simulation are investigated to consolidate their understanding and thus contribute to the design of such models, among them: differences between the discrete and the continuous double auction; implications of the Market Maker's spread settings; effects of Budget Constraints on the agents; and an analysis of price formation in order submission. To assess the adherence of the model to the reality of the Brazilian market, a method called Inverse Simulation is used to calibrate the input parameters so that the simulated price trajectories match historical market price series.
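
To make the trading mechanism concrete, here is a minimal sketch of a continuous double auction order book: incoming unit-size limit orders match immediately against the best opposite quote when prices cross, otherwise they rest in the book. It is a generic illustration, not the thesis's implementation (which also includes market makers and information broadcasters).

```python
import heapq

class ContinuousDoubleAuction:
    """Minimal limit order book: unit-size orders match on arrival if prices cross."""

    def __init__(self):
        self.bids = []     # max-heap via negated prices: (-price, order_id)
        self.asks = []     # min-heap: (price, order_id)
        self.trades = []   # (trade_price, buyer_id, seller_id)

    def submit(self, side, price, order_id):
        """Submit a unit-size limit order; trade at the resting order's price."""
        if side == "buy":
            if self.asks and self.asks[0][0] <= price:       # best ask at or below bid
                ask_price, ask_id = heapq.heappop(self.asks)
                self.trades.append((ask_price, order_id, ask_id))
            else:
                heapq.heappush(self.bids, (-price, order_id))
        else:
            if self.bids and -self.bids[0][0] >= price:      # best bid at or above ask
                bid_price, bid_id = heapq.heappop(self.bids)
                self.trades.append((-bid_price, bid_id, order_id))
            else:
                heapq.heappush(self.asks, (price, order_id))

book = ContinuousDoubleAuction()
book.submit("buy", 10.0, "b1")
book.submit("sell", 10.5, "a1")
book.submit("sell", 9.9, "a2")     # crosses the resting bid -> trade at 10.0
print(book.trades)                 # [(10.0, 'b1', 'a2')]
```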
