1

Probabilistic modelling of replication fidelity in eukaryotic genomes

Mamun, Mohammed Al January 2016 (has links)
Eukaryotic DNA replication comprises a complex array of molecular biological activities, compounded by the pressure for faithful replication to maintain genetic and genomic integrity. Understanding the constraints governing DNA replication biology is of fundamental importance for discerning the degree of replication error and the strategies organisms employ to counter the threats such errors pose to replication fidelity. We apply a simple conceptual model, formalised using probability theory and statistics, to discern the fundamental pressures and constraints that optimise complete DNA replication in genomes of different size scales (10 Megabases to 10 Gigabases), spanning the whole of Eukaryota. We show in yeasts (genome size ~10 Megabases) that replication origins (sites on DNA where replication can be initiated) are biased towards equal spacing along the genome, that the largest gap between adjacent origins is smaller than expected by chance, and that origins are placed very close to the telomeric ends, all of which minimises replication errors arising from occasional irreversible failures of replication forks. Replication origin mapping data from five different yeasts conform to all of these predictions. We derive an estimate of ~5.8×10⁻⁸ for the fork stalling rate per nucleotide, the one unknown parameter in our theory, which agrees with previous experimental estimates. We show in higher eukaryotes (genome size 100 Megabases to 10 Gigabases) that the bias towards equal origin spacing is absent, that larger origin gaps contribute more to the errors while the permissible origin separations are restricted by the per-nucleotide fork stalling rate, and that in larger genomes (> 100 Megabases) errors become increasingly inevitable, although the net number of events remains low, following a Poisson distribution with a small mean. We show that in very large genomes, e.g. the human genome, the larger gaps contributing most to the error are distributed as a power law, spreading the risk of damage from the error, and that post-replicative error-correction mechanisms are necessary to contain the inevitable errors. Replication origin mapping data from yeast, Arabidopsis, Drosophila and human cell lines, as well as experimental observations of post-replicative error markers, validate these predictions. We show that replication errors can be quantified from the nucleosome-scale minimum inter-origin distance permissible under the known DNA structure, and we propose a universal replication constant maintained across all eukaryotes, independent of their architectural complexity. We show that this molecular biological constant relates genome length to the developmental robustness of organisms, as confirmed by early embryonic mortality rates from different organisms. The good agreement between the biological data and the model predictions in all cases suggests that our model efficiently captures the biological complexity involved in containing errors in the DNA replication process. Conceptually, the model thus illustrates how simple ideas can help us make sense of complex biology and of the continuously growing body of biological detail.
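
To make the quantitative argument concrete, the sketch below works through a double-fork-stall style calculation under simplifying assumptions; it is not the thesis's exact formalism. Each fork's travel before an irreversible stall is approximated as exponential with the per-nucleotide stall rate quoted in the abstract, an inter-origin gap fails if the two converging forks stall before meeting, and the genome-wide failure count is treated as Poisson with mean equal to the sum of per-gap failure probabilities. The genome length, origin count and gap sizes are placeholders.

```python
import numpy as np

P_STALL = 5.8e-8   # per-nucleotide fork stalling rate quoted in the abstract

def gap_failure_prob(gap_len, p=P_STALL):
    """Probability that an inter-origin gap of gap_len nucleotides is left
    unreplicated because both converging forks stall before meeting.
    Continuous approximation: each fork's travel before an irreversible stall
    is ~Exponential(p), so the combined travel is Erlang(2, p)."""
    x = p * np.asarray(gap_len, dtype=float)
    return 1.0 - np.exp(-x) * (1.0 + x)

def expected_failures(gap_lengths, p=P_STALL):
    """Sum of per-gap failure probabilities; with many small probabilities the
    genome-wide number of failures is approximately Poisson with this mean."""
    return float(np.sum(gap_failure_prob(gap_lengths, p)))

# Toy comparison: a 12 Mb "yeast-like" genome with 400 origins, either equally
# spaced or with one oversized 300 kb gap (total length held fixed).
even_gaps = np.full(400, 12_000_000 / 400)
uneven_gaps = np.append(np.full(399, (12_000_000 - 300_000) / 399), 300_000)

mean_even = expected_failures(even_gaps)
mean_uneven = expected_failures(uneven_gaps)
print(f"Poisson mean, equal spacing:  {mean_even:.3e}")
print(f"Poisson mean, one 300 kb gap: {mean_uneven:.3e}")
print(f"P(at least one failure), equal spacing: {1.0 - np.exp(-mean_even):.3e}")
```

Running this toy example shows a single oversized gap dominating the total failure probability even though genome length and origin count are unchanged, in line with the abstract's claim that larger gaps contribute most to the error and that near-equal spacing minimises it.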
2

Delta hedge com custos de transação: uma análise comparativa (Delta hedging with transaction costs: a comparative analysis)

Ino, Naio 18 January 2013 (has links)
Among the assumptions of the Black-Scholes model, widely used for option pricing, is that the option payoff can be replicated by continuously rebalancing a portfolio containing the underlying asset and the risk-free asset over the life of the option. Not only is continuous-time rebalancing impossible in practice, but even if it were possible, in a market with transaction costs rebalancing the replicating strategy at very short time intervals would see its payoff eroded by the high total transaction costs involved. This work compares delta-hedging methodologies for a short European call in a market with transaction costs. The study is divided into two parts: in the first, we choose three stocks from the Brazilian market and, using Monte Carlo simulations for each stock, evaluate different delta-hedging models by the replication error of the overall portfolio in a mean × variance framework, taking transaction costs into account. In the second part, for the same group of stocks, we test the practical results of the models using market data. The results indicate that move-based models outperform time-based models.
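
As an illustration of the Monte Carlo part of such a comparison, the sketch below is a minimal time-based delta hedge of a short European call under geometric Brownian motion with proportional transaction costs; it is not the dissertation's implementation, and the cost rate, volatility and other parameter values are placeholders chosen only for the example.

```python
import numpy as np
from scipy.stats import norm

def bs_call_delta(S, K, tau, r, sigma):
    """Black-Scholes price and delta of a European call with time to maturity tau."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    price = S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)
    return price, norm.cdf(d1)

def hedge_short_call(n_paths=20_000, n_steps=50, S0=100.0, K=100.0, T=0.25,
                     r=0.10, sigma=0.30, k=0.002, seed=0):
    """Time-based delta hedge of a short European call under GBM, with a
    proportional transaction cost k per unit of stock value traded.
    Returns the per-path replication error (hedge portfolio minus payoff) at T."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    price, delta = bs_call_delta(S, K, T, r, sigma)
    # Receive the premium, buy delta shares, pay costs on the initial trade.
    cash = price - delta * S - k * np.abs(delta) * S
    for i in range(1, n_steps + 1):
        z = rng.standard_normal(n_paths)
        S = S * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        cash = cash * np.exp(r * dt)                    # cash accrues at the risk-free rate
        if i < n_steps:                                 # rebalance to the new delta
            _, new_delta = bs_call_delta(S, K, T - i * dt, r, sigma)
            trade = new_delta - delta
            cash -= trade * S + k * np.abs(trade) * S
            delta = new_delta
    payoff = np.maximum(S - K, 0.0)
    # Liquidate the remaining shares (paying costs) and settle the written call.
    return cash + delta * S - k * np.abs(delta) * S - payoff

err = hedge_short_call()
print(f"mean replication error: {err.mean():.4f}")
print(f"std  replication error: {err.std():.4f}")
```

The mean and standard deviation of the returned replication error are the kind of mean × variance summary such a comparison relies on; a move-based rule would replace the fixed-interval rebalance with a trigger on the change in delta or in the asset price.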
3

Estratégias de hedge dinâmico: um estudo comparativo (Dynamic hedging strategies: a comparative study)

Heilbrun, Daniel Montero 03 August 2017 (has links)
Several theoretical works have been developed over the last five decades proposing delta-hedging strategies for when the assumptions of the Black and Scholes (1973) model are relaxed. More recently, studies comparing these strategies have appeared, notably Zakamouline (2009) and Ino (2013). As an alternative to the model used by Ino (2013) to describe the dynamics of the stocks studied, but following the same methodology, this work sets out to find the best delta-hedging strategy in the presence of transaction costs when the stock price series follows a GARCH(1,1) process. Four different delta-hedging strategies are analysed and compared: Black and Scholes (1973), modified volatility (Leland (1985)), tolerance bands on the underlying asset price (Henrotte (1993)) and variable tolerance bands around the delta (Whalley and Wilmott (1997)).
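
As a sketch of two of the ingredients named above, the code below simulates a GARCH(1,1) daily return series and computes Leland's (1985) modified volatility for a given cost level and rebalancing interval; it is not the thesis's implementation, and the GARCH parameters, cost rate and horizon are placeholders.

```python
import numpy as np

def simulate_garch_returns(n_paths=10_000, n_steps=63, mu=0.0, omega=2e-6,
                           alpha=0.08, beta=0.90, seed=0):
    """Simulate daily log-returns from a GARCH(1,1) process:
        r_t = mu + e_t,  e_t = sqrt(h_t) * z_t,  z_t ~ N(0, 1)
        h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}
    Returns arrays of shape (n_paths, n_steps): returns and conditional variances."""
    rng = np.random.default_rng(seed)
    h0 = omega / (1.0 - alpha - beta)                # unconditional variance
    h = np.full(n_paths, h0)
    e_prev = np.sqrt(h0) * rng.standard_normal(n_paths)
    rets = np.empty((n_paths, n_steps))
    cond_var = np.empty((n_paths, n_steps))
    for t in range(n_steps):
        h = omega + alpha * e_prev**2 + beta * h
        e_prev = np.sqrt(h) * rng.standard_normal(n_paths)
        rets[:, t] = mu + e_prev
        cond_var[:, t] = h
    return rets, cond_var

def leland_volatility(sigma, k, dt):
    """Leland (1985) modified volatility for hedging a short option with
    proportional cost k and rebalancing interval dt (in years):
        sigma_L^2 = sigma^2 * (1 + sqrt(2/pi) * k / (sigma * sqrt(dt)))"""
    leland_number = np.sqrt(2.0 / np.pi) * k / (sigma * np.sqrt(dt))
    return sigma * np.sqrt(1.0 + leland_number)

# Example: annualised volatility implied by the simulated GARCH returns, and
# the volatility Leland's rule would plug into the Black-Scholes formulas for
# daily rebalancing with 0.2% proportional costs.
rets, _ = simulate_garch_returns()
sigma_annual = rets.std() * np.sqrt(252)
print(f"annualised GARCH volatility: {sigma_annual:.3f}")
print(f"Leland-adjusted volatility:  {leland_volatility(sigma_annual, 0.002, 1/252):.3f}")
```

In a strategy comparison, the Leland-adjusted volatility would replace the plain volatility in the Black-Scholes delta of a time-based hedge, while the Henrotte and Whalley-Wilmott rules would instead trigger rebalancing only when the asset price or the delta moves outside a tolerance band.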
