261

A model-based statistical approach to functional MRI group studies

Bothma, Adel January 2010 (has links)
Functional Magnetic Resonance Imaging (fMRI) is a noninvasive imaging method that reflects local changes in brain activity. fMRI group studies involve the analysis of the functional images acquired for each subject in a group under the same experimental conditions. We propose a spatial marked point-process model for the activation patterns of the subjects in a group study. Each pattern is described as the sum of individual centres of activation. The marked point-process that we propose allows the researcher to enforce repulsion between all pairs of centres of an individual subject that are within a specified minimum distance of each other. It also allows the researcher to enforce attraction between similarly-located centres from different subjects. This attraction helps to compensate for the misalignment of corresponding functional areas across subjects and is a novel method of addressing the problem of imperfect inter-subject registration of functional images. We use a Bayesian framework and choose prior distributions according to current understanding of brain activity. Simulation studies and exploratory studies of our reference dataset are used to fine-tune the prior distributions. We perform inference via Markov chain Monte Carlo. The fitted model gives a summary of the activation in terms of its location, height and size. We use this summary both to identify brain regions that were activated in response to the stimuli under study and to quantify the discrepancies between the activation maps of subjects. Applied to our reference dataset, our measure successfully separates out those subjects with activation patterns that do not agree with the overall group pattern. In addition, our measure is sensitive to subjects with a large number of activation centres relative to the other subjects in the group. The activation summary given by our model makes it possible to pursue a range of inferential questions that cannot be addressed with ease by current model-based approaches.
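The within-subject repulsion and between-subject attraction described above amount to a pairwise-interaction term in the point-process density. The sketch below is a rough illustration only: it evaluates such a term for a set of candidate activation centres, and the distance thresholds, interaction strengths and function names are hypothetical rather than taken from the thesis.

```python
import numpy as np

def log_interaction_term(centres, subject_ids, r_min=8.0, r_attr=6.0,
                         beta_rep=-2.0, beta_attr=1.5):
    """Pairwise-interaction contribution to an unnormalised log-density.

    centres     : (n, 3) array of activation-centre coordinates (mm).
    subject_ids : length-n array giving the subject each centre belongs to.
    Pairs from the SAME subject closer than r_min are penalised (repulsion);
    pairs from DIFFERENT subjects closer than r_attr are rewarded (attraction).
    All parameter values here are illustrative, not those of the thesis.
    """
    n = len(centres)
    log_p = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(centres[i] - centres[j])
            if subject_ids[i] == subject_ids[j] and d < r_min:
                log_p += beta_rep    # discourage crowded centres within one subject
            elif subject_ids[i] != subject_ids[j] and d < r_attr:
                log_p += beta_attr   # encourage alignment of centres across subjects
    return log_p
```

In a full model this term would be added to the data likelihood and explored with birth/death and move steps inside the Markov chain Monte Carlo sampler.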
262

On auxiliary variables and many-core architectures in computational statistics

Lee, Anthony January 2011 (has links)
Emerging many-core computer architectures provide an incentive for computational methods to exhibit specific types of parallelism. Our ability to perform inference in Bayesian statistics is often dependent upon our ability to approximate expectations of functions of random variables, for which Monte Carlo methodology provides a general purpose solution using a computer. This thesis is primarily concerned with exploring the gains that can be obtained by using many-core architectures to accelerate existing population-based Monte Carlo algorithms, as well as providing a novel general framework that can be used to devise new population-based methods. Monte Carlo algorithms are often concerned with sampling random variables taking values in X whose density is known up to a normalizing constant. Population-based methods typically make use of collections of interacting auxiliary random variables, each of which is in X, in specifying an algorithm. Such methods are good candidates for parallel implementation when the collection of samples can be generated in parallel and their interaction steps are either parallelizable or negligible in cost. The first contribution of this thesis is in demonstrating the potential speedups that can be obtained for two common population-based methods, population-based Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC). The second contribution of this thesis is in the derivation of a hierarchical family of sparsity-inducing priors in regression and classification settings. Here, auxiliary variables make possible the implementation of a fast algorithm for finding local modes of the posterior density. SMC, accelerated on a many-core architecture, is then used to perform inference for a range of prior specifications to gain an understanding of sparse association signal in the context of genome-wide association studies. The third contribution is in the use of a new perspective on reversible MCMC kernels that allows for the construction of novel population-based methods. These methods differ from most existing methods in that one can make the resulting kernels define a Markov chain on X. A further development is that one can define kernels in which the number of auxiliary variables is given a distribution conditional on the values of the auxiliary variables obtained so far. This is perhaps the most important methodological contribution of the thesis, and the adaptation of the number of particles used within a particle MCMC algorithm provides a general purpose algorithm for sampling from a variety of complex distributions.
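Since the central point above is that the per-particle work in population-based samplers parallelises naturally, here is a minimal sketch of one reweight, resample and move step of a tempered SMC sampler. NumPy vectorisation stands in for a many-core implementation, and the target, step size and temperature schedule are illustrative assumptions rather than anything from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def smc_step(particles, log_pi, tau_prev, tau_new, step=0.5):
    """One tempered SMC step moving particles from pi^tau_prev towards pi^tau_new.

    Assumes the incoming weights are equal, as they are right after a resampling
    step. Every operation below is independent across particles, which is what
    makes the method a good fit for many-core hardware.
    """
    # Incremental importance weights for the change of temperature.
    logw = (tau_new - tau_prev) * log_pi(particles)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Multinomial resampling.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    x = particles[idx]
    # Random-walk Metropolis move targeting pi^tau_new, applied to all particles at once.
    prop = x + step * rng.standard_normal(x.shape)
    log_alpha = tau_new * (log_pi(prop) - log_pi(x))
    accept = np.log(rng.uniform(size=len(x))) < log_alpha
    x[accept] = prop[accept]
    return x
```

On a GPU the reweighting and move steps map directly onto per-particle kernels, with only the resampling step requiring cross-particle communication.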
263

Addressing Issues in the Detection of Gene-Environment Interaction Through the Study of Conduct Disorder

Prom, Elizabeth Chin 01 January 2007 (has links)
This work addresses issues in the study of gene-environment interaction (GxE) through research on conduct disorder (CD) among adolescents and extends the recent report of significant GxE and subsequent replication studies. A sub-sample of 1,299 individual participants (649 twin pairs) and their parents from the Virginia Twin Study of Adolescent and Behavioral Development was used, for whom Monoamine Oxidase A (MAOA) genotype, diagnosis of CD, maternal antisocial personality symptoms, and household neglect were obtained. This dissertation (1) tested for GxE by gender using MAOA and childhood adversity, using multiple approaches to CD measurement and model assessment, (2) determined whether other mechanisms would explain differences in GxE by gender, and (3) identified and assessed other genes and environments related to the interaction of MAOA and childhood adversity. Using a multiple regression approach, a main effect of the low/low MAOA genotype remained in females after controlling for other risk factors. However, the effects of GxE were modest and were removed by transforming the environmental measures. In contrast, there was no significant effect of the low-activity MAOA allele in males, although significant GxE was detected and remained after transformation. The sign of the interaction for males was opposite to that for females, indicating that genetic sensitivity to childhood adversity may differ by gender. Upon further investigation, gender differences in GxE were due to genotype-sex interaction and may involve MAOA. A Markov chain Monte Carlo approach incorporating a genetic Item Response Theory model treated CD as a trait with continuous liability, since false detection of GxE may result from how the phenotype is measured. In both males and females, the inclusion of GxE while controlling for the other covariates was appropriate, but it yielded little improvement in model fit and the effect sizes of GxE were small. Other candidate genes functioning in the serotonin and dopamine neurotransmitter systems were tested for interaction with MAOA affecting risk for CD. Main genetic effects of the dopamine transporter genotype and of MAOA in the presence of comorbidity were detected. No epistatic effects were detected. Random forests were used to assess the environmental measures systematically and highlighted several environments of interest that will require more thoughtful consideration before incorporation into a model testing GxE.
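As a generic illustration of the multiple-regression approach to testing a gene-by-environment interaction, the sketch below fits a logistic regression with a MAOA-by-adversity product term on simulated data. The variable names, effect sizes and data are invented for the example and are not the study's.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: a low-activity MAOA indicator, a childhood-adversity score,
# and a binary conduct-disorder outcome. None of this is the study's data.
rng = np.random.default_rng(1)
n = 500
maoa_low = rng.integers(0, 2, n)
adversity = rng.normal(size=n)
logit_p = -1.0 + 0.3 * maoa_low + 0.5 * adversity + 0.4 * maoa_low * adversity
cd = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# Design matrix with main effects and the GxE product term.
X = sm.add_constant(np.column_stack([maoa_low, adversity, maoa_low * adversity]))
fit = sm.Logit(cd, X).fit(disp=0)
print(fit.params)        # last coefficient is the GxE interaction effect
print(fit.pvalues[-1])   # Wald p-value for the interaction term
```

Sensitivity of such a test to the scale of the environmental measure is exactly why the abstract notes that transforming the measures can remove an apparent interaction.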
264

Seismic interferometry and non-linear tomography

Galetti, Erica January 2015 (has links)
Seismic records contain information that allows geoscientists to make inferences about the structure and properties of the Earth’s interior. Traditionally, seismic imaging and tomography methods require wavefields to be generated and recorded by identifiable sources and receivers, and use these directly-recorded signals to create models of the Earth’s subsurface. However, in recent years the method of seismic interferometry has revolutionised earthquake seismology by allowing unrecorded signals between pairs of receivers, pairs of sources, and source-receiver pairs to be constructed as Green’s functions using either cross-correlation, convolution or deconvolution of wavefields. In all of these formulations, seismic energy is recorded and emitted by surrounding boundaries of receivers and sources, which need not be active and impulsive but may even constitute continuous, naturally-occurring seismic ambient noise. In the first part of this thesis, I provide a comprehensive overview of seismic interferometry, its background theory, and examples of its application. I then test the theory and evaluate the effects of approximations that are commonly made when the interferometric formulae are applied to real datasets. Since errors resulting from some approximations can be subtle, these tests must be performed using almost error-free synthetic data produced with an exact waveform modelling method. To make such tests challenging the method and associated code must be applicable to multiply-scattering media. I developed such a modelling code specifically for interferometric tests and applications. Since virtually no errors are introduced into the results from modelling, any difference between the true and interferometric waveforms can safely be attributed to specific origins in interferometric theory. I show that this is not possible when using other, previously available methods: for example, the errors introduced into waveforms synthesised by finite-difference methods due to the modelling method itself, are larger than the errors incurred due to some (still significant) interferometric approximations; hence that modelling method can not be used to test these commonly-applied approximations. I then discuss the ability of interferometry to redatum seismic energy in both space and time, allowing virtual seismograms to be constructed at new locations where receivers may not have been present at the time of occurrence of the associated seismic source. I present the first successful application of this method to real datasets at multiple length scales. Although the results are restricted to limited bandwidths, this study demonstrates that the technique is a powerful tool in seismologists’ arsenal, paving the way for a new type of ‘retrospective’ seismology where sensors may be installed at any desired location at any time, and recordings of seismic events occurring at any other time can be constructed retrospectively – even long after their energy has dissipated. Within crustal seismology, a very common application of seismic interferometry is ambient-noise tomography (ANT). ANT is an Earth imaging method which makes use of inter-station Green’s functions constructed from cross-correlation of seismic ambient noise records. It is particularly useful in seismically quiescent areas where traditional tomography methods that rely on local earthquake sources would fail to produce interpretable results due to the lack of available data. 
Once constructed, interferometric Green’s functions can be analysed using standard waveform analysis techniques, and inverted for subsurface structure using more or less traditional imaging methods. In the second part of this thesis, I discuss the development and implementation of a fully non-linear inversion method which I use to perform Love-wave ANT across the British Isles. Full non-linearity is achieved by allowing both raypaths and model parametrisation to vary freely during inversion in Bayesian, Markov chain Monte Carlo tomography, the first time that this has been attempted. Since the inversion produces not only one, but a large ensemble of models, all of which fit the data to within the noise level, statistical moments of different order, such as the mean or average model, or the standard deviation of seismic velocity structures across the ensemble, may be calculated: while the ensemble average map provides a smooth representation of the velocity field, a measure of model uncertainty can be obtained from the standard deviation map. In a number of real-data and synthetic examples, I show that the combination of variable raypaths and model parametrisation is key to the emergence of previously-unobserved, loop-like uncertainty topologies in the standard deviation maps. These uncertainty loops surround low- or high-velocity anomalies. They indicate that, while the velocity of each anomaly may be fairly well reconstructed, its exact location and size tend to remain uncertain; the loops parametrise this location uncertainty, and hence constitute a fully non-linearised, Bayesian measure of spatial resolution. The uncertainty in anomaly location is shown to be due mainly to the fact that the locations of the raypaths used to constrain the anomaly are themselves only known approximately. The emergence of loops is therefore related to the variation in raypaths with velocity structure, and hence to second- and higher-order wave physics. Thus, loops can only be observed using non-linear inversion methods such as the one described herein, explaining why these topologies have never been observed previously. I then present the results of fully non-linearised Love-wave group-velocity tomography of the British Isles in different frequency bands. At all of the analysed periods, the group-velocity maps show a good correlation with the known geology of the region, and also robustly detect novel features. The shear-velocity structure with depth across the Irish Sea sedimentary basin is then investigated by inverting the Love-wave group-velocity maps, again fully non-linearly using Markov chain Monte Carlo inversion, showing an approximate depth to basement of 5 km. Finally, I discuss the advantages and current limitations of the fully non-linear tomography method implemented in this project, and provide guidelines and suggestions for its improvement.
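Because the inversion returns an ensemble of velocity models rather than a single map, the ensemble mean and standard deviation described above are simple pointwise statistics. The sketch below shows how they might be computed from a stack of sampled maps; the grid size, velocities and threshold are placeholder values, not output from the thesis.

```python
import numpy as np

# Hypothetical ensemble: n_samples velocity maps on an ny-by-nx grid, standing in
# for the post burn-in output of a trans-dimensional Markov chain Monte Carlo run.
rng = np.random.default_rng(2)
n_samples, ny, nx = 5000, 60, 80
ensemble = 3.0 + 0.3 * rng.standard_normal((n_samples, ny, nx))  # km/s, stand-in values

mean_map = ensemble.mean(axis=0)  # smooth "average model" of the velocity field
std_map = ensemble.std(axis=0)    # pointwise uncertainty; loop-like highs would ring anomalies

# Cells whose posterior spread exceeds, say, twice the median spread could be
# flagged as poorly constrained before interpretation.
poorly_constrained = std_map > 2 * np.median(std_map)
```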
265

Nonparametric Mixture Modeling on Constrained Spaces

Putu Ayu G Sudyanti (7038110) 16 August 2019 (has links)
Mixture modeling is a classical unsupervised learning method with applications to clustering and density estimation. This dissertation studies two challenges in modeling data with mixture models. The first part addresses problems that arise when modeling observations lying on constrained spaces, such as the boundaries of a city or a landmass. It is often desirable to model such data through the use of mixture models, especially nonparametric mixture models. Specifying the component distributions and evaluating normalization constants raise modeling and computational challenges. In particular, the likelihood forms an intractable quantity, and Bayesian inference over the parameters of these models results in posterior distributions that are doubly intractable. We address this problem via a model based on rejection sampling and an algorithm based on data augmentation. Our approach is to specify such models as restrictions of standard, unconstrained distributions to the constraint set, with measurements from the model simulated by a rejection sampling algorithm. Posterior inference proceeds by Markov chain Monte Carlo, first imputing the rejected samples given the mixture parameters and then resampling the parameters given all samples. We study two modeling approaches, mixtures of truncated Gaussians and truncated mixtures of Gaussians, along with Markov chain Monte Carlo sampling algorithms for both. We also discuss variations of the models, as well as approximations to improve mixing, reduce computational cost, and lower variance.

The second part of this dissertation explores the application of mixture models to estimating contamination rates in matched tumor and normal samples. Bulk sequencing of tumor samples is prone to contamination from normal cells, which leads to difficulties and inaccuracies in determining the mutational landscape of the cancer genome. In such instances, a matched normal sample from the same patient can be used as a control for germline mutations. Probabilistic models are popular in this context due to their flexibility. We propose a hierarchical Bayesian model to denoise the contamination in such data and detect somatic mutations in tumor cell populations. We explore the use of a Dirichlet prior on the contamination level and extend this to a framework of Dirichlet processes. We discuss MCMC schemes to sample from the joint posterior distribution and evaluate their performance on both synthetic experiments and publicly available data.
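To make the rejection-sampling construction above concrete, here is a minimal sketch that draws from a two-component Gaussian mixture restricted to a constraint region; a unit disc stands in for, say, a city boundary, and the weights, means and covariance are illustrative assumptions. In the data-augmentation MCMC, the rejected draws are exactly the samples that get imputed as latent variables.

```python
import numpy as np

rng = np.random.default_rng(3)

def in_region(x):
    # Illustrative constraint set: the unit disc.
    return np.sum(x**2) <= 1.0

def sample_constrained_mixture(weights, means, cov, n):
    """Draw n points from a Gaussian mixture restricted to the constraint set.

    Proposals falling outside the region are rejected; they are returned as well
    because the data-augmentation sampler treats them as latent variables.
    """
    accepted, rejected = [], []
    while len(accepted) < n:
        k = rng.choice(len(weights), p=weights)          # pick a mixture component
        x = rng.multivariate_normal(means[k], cov)       # unconstrained proposal
        (accepted if in_region(x) else rejected).append(x)
    return np.array(accepted), rejected

weights = np.array([0.6, 0.4])
means = [np.array([0.3, 0.0]), np.array([-0.4, 0.4])]
cov = 0.1 * np.eye(2)
samples, rejected = sample_constrained_mixture(weights, means, cov, 200)
```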
266

Dias trabalháveis para a colheita mecanizada da cana-de-açúcar no estado de São Paulo, com base em critérios agrometeorológicos / Workable days for sugarcane harvesting in the State of São Paulo, Brazil, based on agrometeorological criteria

Vieira, Luciano Henrique de Souza 17 August 2017 (has links)
Determining workable days for agricultural field operations is extremely important for sizing agricultural machinery fleets. This is especially true for sugarcane, whose field operations extend throughout the year; harvesting, which runs for roughly 8 to 10 months, is the operation that can do the most damage to soils when carried out under unsuitable conditions. Based on this, the present study had the following objectives: to define criteria for determining the number of workable days (NWD) for mechanized sugarcane harvesting in the state of São Paulo; to determine the NWD of different regions of the state based on a general criterion and on criteria specific to each region; to determine the probability of sequences of workable days by means of a Markov chain; and to build a spreadsheet model for sizing the harvesting fleet based on the NWD generated by the region-specific and general criteria. To define the NWD criteria, milling-interruption data from 30 mills in the state of São Paulo, covering periods of two to five harvest seasons, were used. The criteria tested for interrupting the harvest operation were minimum precipitation (PREC), soil water holding capacity (SWHC) and a threshold on the ratio between soil water storage and SWHC (ARM/SWHC). With the PREC, SWHC and ARM/SWHC criteria for each region, plus a general criterion, NWD maps were prepared for the state of São Paulo. Conditional probabilities were then defined on the basis of the Markov chain and, from these, the probabilities of sequences of workable days for each ten-day period of the year. Finally, an Excel® spreadsheet was built for sizing the sugarcane harvester fleet based on the NWD. The results showed that the criteria defining the NWD varied among the regions of the state. However, a single general criterion for the whole state, with PREC of 3 mm, SWHC of 40 mm and ARM/SWHC of 90%, provided very similar results. The NWD map generated from the region-specific criteria had a mean error of 24.9 days per year, whereas the map generated from the general criterion had a mean error of 4.4 days per year, making the latter more suitable for determining the average NWD for mechanized sugarcane harvesting. Regarding the probability of sequences of workable days, the western, northwestern and northern regions of the state showed, on average, the highest probabilities of workable days; the highest probabilities occurred between April and September, with the first ten-day period of July having the largest values. The probability of a day being workable, given that the previous day was workable, always remained between a minimum of about 50% and a maximum close to 90% in all regions evaluated. Finally, the variation in NWD as a function of the criterion used (region-specific or general) had no significant impact on production or total costs, although it affected the total investment. It is concluded that the correct determination of the NWD is fundamental for the design of sugarcane harvesting systems.
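As a generic illustration of the Markov-chain step described above, the sketch below estimates the conditional (transition) probabilities from a daily workable/non-workable series and uses them to compute the probability of a run of consecutive workable days. The input series is synthetic and the run length arbitrary, so none of the numbers correspond to the thesis results.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic daily series: 1 = workable, 0 = not workable (stand-in for a decade of records).
workable = (rng.uniform(size=3650) > 0.3).astype(int)

# Transition probabilities estimated by counting consecutive-day pairs.
prev, curr = workable[:-1], workable[1:]
p11 = np.mean(curr[prev == 1])   # P(workable today | workable yesterday)
p01 = np.mean(curr[prev == 0])   # P(workable today | not workable yesterday)
p1 = workable.mean()             # unconditional probability of a workable day

# Probability of at least k consecutive workable days under a first-order Markov chain.
k = 5
p_run = p1 * p11 ** (k - 1)
print(f"P(workable | workable) = {p11:.2f}, P(run of {k} workable days) = {p_run:.2f}")
```

In the study these quantities are computed per ten-day period and per region, which simply means repeating the counting above on the corresponding subsets of days.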
267

Random Function Iterations for Stochastic Feasibility Problems

Hermer, Neal 24 January 2019 (has links)
No description available.
268

Mudança de regime markoviana em modelos DSGE : uma estimação do pass-through de câmbio para inflação brasileira durante o período 2000 a 2015 / Markov regime switching in DSGE models: an estimation of the exchange rate pass-through to Brazilian inflation over the period 2000-2015

Marodin, Fabrizio Almeida January 2016 (has links)
This research investigates the non-linear behaviour of the exchange rate pass-through in the Brazilian economy during the floating exchange rate period (2000-2015), using a Markov-switching dynamic stochastic general equilibrium (MS-DSGE) model. We apply the methodology proposed by Baele et al. (2015) to a basic New Keynesian model, adding new elements to the aggregate supply curve and a new equation for the exchange rate dynamics. We find evidence of two distinct regimes for the exchange rate pass-through and for the variance of shocks to inflation. In the regime labelled "Normal", the long-run pass-through to inflation is estimated at 0.0092 percentage points for a 1% exchange rate shock, against 0.1302 percentage points in the "Crisis" regime. The MS-DSGE model proves superior to the fixed-parameter model according to several comparison criteria.
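As a generic illustration of a two-regime Markov-switching process, the sketch below simulates an inflation response whose pass-through and shock volatility depend on a hidden regime. The transition probabilities and volatilities are invented, and the two long-run pass-through figures quoted above are reused purely as placeholder regime parameters; this is not the estimated MS-DSGE model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two regimes: 0 = "Normal", 1 = "Crisis". Transition probabilities are illustrative.
P = np.array([[0.95, 0.05],
              [0.20, 0.80]])
pass_through = np.array([0.0092, 0.1302])   # p.p. of inflation per 1% exchange-rate shock
sigma_eps = np.array([0.1, 0.3])            # regime-dependent inflation-shock volatility

T = 200
regime = np.zeros(T, dtype=int)
inflation_response = np.zeros(T)
exchange_shock = rng.standard_normal(T)      # stand-in for % exchange-rate shocks

for t in range(1, T):
    regime[t] = rng.choice(2, p=P[regime[t - 1]])          # hidden Markov regime
    inflation_response[t] = (pass_through[regime[t]] * exchange_shock[t]
                             + sigma_eps[regime[t]] * rng.standard_normal())
```

Estimation then amounts to inferring the regime path and the regime-specific parameters jointly, which is what the MS-DSGE machinery does within a full general-equilibrium model.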
269

Google matrix analysis of Wikipedia networks

El Zant, Samer 06 July 2018 (has links)
This thesis concentrates on the analysis of the large directed network representation of Wikipedia. Wikipedia stores valuable fine-grained dependencies among articles by linking webpages together for diverse types of interactions. Our focus is to capture fine-grained and realistic interactions between a subset of webpages in this Wikipedia network. Therefore, we propose to leverage a novel Google matrix representation of the network called the reduced Google matrix. This reduced Google matrix (GR) is derived for the subset of webpages of interest (i.e. the reduced network). As for the regular Google matrix, one component of GR captures the probability of two nodes of the reduced network being directly connected in the full network. But unique to GR, another component accounts for the probability of having both nodes indirectly connected through all possible paths in the full network. In this thesis, we demonstrate with several case studies that GR offers a reliable and meaningful representation of the direct and indirect (hidden) links of the reduced network. We show that GR analysis is complementary to the well-known PageRank analysis and can be leveraged to study the influence of a link variation on the rest of the network structure. Case studies are based on Wikipedia networks originating from different language editions. Interactions between several groups of interest are studied in detail: painters, countries and terrorist groups. For each study, a reduced network is built, and direct and indirect interactions are analysed and confronted with historical, geopolitical or scientific facts. A sensitivity analysis is conducted to understand the influence of the ties in each group on other nodes (e.g. countries in our case). From our analysis, we show that it is possible to extract valuable interactions between painters, countries or terrorist groups. The network of painters derived from GR captures art-historical facts such as the grouping of painters into major movements. Well-known interactions between major EU countries, and worldwide, are also highlighted in our results. Similarly, the networks of terrorist groups show relevant ties in line with their ideologies and their historical or geopolitical relationships. We conclude this study by showing that reduced Google matrix analysis is a novel and powerful analysis method for large directed networks. We argue that this approach can also find useful application for different types of datasets constituted by the exchange of dynamic content. It offers new possibilities for analysing effective interactions within a group of nodes embedded in a large directed network.
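To make the construction concrete, the sketch below builds the Google matrix of a small toy directed graph and forms GR for a three-node subset using the block expression GR = Grr + Grs (I - Gss)^(-1) Gsr, which is how the indirect-path component arises in the reduced-Google-matrix literature. The toy graph, damping factor and node subset are illustrative assumptions, not data from the thesis.

```python
import numpy as np

def google_matrix(A, alpha=0.85):
    """Column-stochastic Google matrix of adjacency A (A[i, j] = 1 for a link j -> i)."""
    n = A.shape[0]
    S = A.astype(float).copy()
    col_sums = S.sum(axis=0)
    dangling = col_sums == 0
    S[:, ~dangling] /= col_sums[~dangling]
    S[:, dangling] = 1.0 / n                      # dangling nodes link to everyone
    return alpha * S + (1 - alpha) / n * np.ones((n, n))

def reduced_google_matrix(G, subset):
    """Reduced Google matrix GR = Grr + Grs (I - Gss)^(-1) Gsr for the nodes in `subset`.

    The second term folds in all indirect paths through the complementary
    ("scattering") nodes of the full network.
    """
    r = np.array(subset)
    s = np.setdiff1d(np.arange(G.shape[0]), r)
    Grr, Grs = G[np.ix_(r, r)], G[np.ix_(r, s)]
    Gsr, Gss = G[np.ix_(s, r)], G[np.ix_(s, s)]
    return Grr + Grs @ np.linalg.solve(np.eye(len(s)) - Gss, Gsr)

# Toy directed network of 6 pages; the subset of interest is pages {0, 1, 2}.
A = np.zeros((6, 6))
for j, i in [(0, 1), (1, 2), (2, 0), (3, 0), (4, 3), (5, 4), (2, 5)]:
    A[i, j] = 1.0
GR = reduced_google_matrix(google_matrix(A), [0, 1, 2])
```

Entries of GR that are large despite the absence of a direct link between the corresponding pages are the "hidden" interactions the thesis analyses.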
270

Exponenciální řízení homogenních markovských procesů / Exponential control of homogeneous Markov processes

Stanek, Pavol January 2012 (has links)
Title: Exponential control of homogeneous Markov processes Author: Pavol Stanek Department: Department of Probability and Mathematical Statistics, MFF UK Supervisor: Mgr. Peter Dostál, Ph.D., Department of Probability and Mathematical Statistics, MFF UK Abstract: This master's thesis concerns the exponential control of Markov decision chains. An iterative algorithm is developed for finding a control that maximizes the long-term growth rate of expected utility, where utility is measured by an exponential utility function. The algorithm is derived for both discrete-time and continuous-time chains. Subsequently, the results are applied to the problem of optimally managing a portfolio with proportional transaction costs. The dynamics of the investor's position is derived and the resulting process is approximated by a Markov chain. Using the iterative algorithm, the optimal trading strategy is found numerically. Keywords: exponential control, Markov chain, portfolio optimization, proportional transaction costs
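As a generic illustration of the exponential-utility (risk-sensitive) criterion, the sketch below evaluates the long-run growth rate attained by a fixed policy on a small Markov chain via the Perron eigenvalue of a reward-twisted transition matrix. The chain, rewards and risk parameter are invented for the example, and this is a standard risk-sensitive-control identity rather than the thesis's specific iterative algorithm.

```python
import numpy as np

# Illustrative finite chain under a fixed policy: P[i, j] = transition probability,
# r[i, j] = one-step log-return earned on the transition i -> j. Not the thesis's model.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
r = np.array([[0.02, -0.01],
              [0.01,  0.03]])
gamma = 2.0   # risk parameter of the exponential utility

# Twisted kernel: Q[i, j] = P[i, j] * exp(gamma * r[i, j]).
Q = P * np.exp(gamma * r)

# For an irreducible chain, E[exp(gamma * cumulative reward over n steps)] grows
# geometrically like rho**n, where rho is the Perron (largest) eigenvalue of Q;
# log(rho) / gamma is the long-run certainty-equivalent growth rate that the
# control problem maximizes over policies.
rho = np.max(np.real(np.linalg.eigvals(Q)))
growth_rate = np.log(rho) / gamma
print(f"long-run growth rate per step: {growth_rate:.4f}")
```

A policy-iteration or value-iteration scheme for this criterion repeats such an evaluation while improving the action chosen in each state.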
