21 |
Inverse problems and data assimilation methods applied on protein polymerisation / Problèmes inverses et méthodes d’assimilation de données appliquées à la polymérisation de protéines
Armiento, Aurora, 13 January 2017
The aim of this PhD thesis is to set up a mathematical strategy to investigate the physical process of protein aggregation. The study of this largely unknown process is particularly important since it has been identified as a key feature of a wide class of incurable diseases, called amyloid diseases. Prion diseases belong to this class and are caused by the aggregation of a misfolded configuration of the prion protein. Our work contributes to the research on prion diseases by focusing on two kinds of aggregates: oligomers and fibrils.

Oligomers, which are suspected of being the most toxic aggregates, are studied in the first part of this thesis. We base our work on the analysis of two types of experimental data. On the one hand, we consider Static Light Scattering (SLS) data, which can be interpreted biologically as a measurement of the average oligomer size and mathematically as the second moment of the aggregate concentration. On the other hand, we consider oligomer size distribution data collected at several instants using Size Exclusion Chromatography (SEC). Our study leads to the important conclusion that at least two different types of oligomers are present. Moreover, we describe the interaction between these oligomers by proposing, for the first time, a two-species model. The model consists of a set of ODEs with the kinetic rates as parameters. The qualitative description provided by this model is coupled to the information contained in the noisy experimental SLS data in a data assimilation framework: by means of the extended Kalman filter, we solve a non-linear inverse problem and thereby estimate the kinetic coefficients associated with the experimental data. To validate the model we compared our estimation to the experimental SEC data, observing a very good agreement between the two. Our characterisation of the oligomer species may lead to new strategies for designing a first targeted treatment for prion diseases.

The methodology applied to the study of oligomers can be seen as a first step in the analysis of fibrils. Because of the physical properties of these aggregates, fewer and less precise experiments can be performed, so a mathematical approach can provide a valuable contribution to their study. Our contribution is to propose a general strategy for estimating the initial condition of a fibril system. Inspired by the Lifshitz-Slyozov theory, we describe this system by a transport equation coupled with an integral equation. The estimation makes use of empirical observations of the system; we consider the general case of observing a moment of order $n$. It is indeed possible to measure the first moment by Thioflavin T fluorescence or the second moment by SLS. We provide a theoretical and numerical solution of the initial-condition estimation problem in the linear case of a depolymerising system. In particular, for constant depolymerisation rates we propose a kernel regularisation strategy, which provides a first characterisation of the estimate. For variable depolymerisation rates we outline the variational data assimilation method 4d-Var and the sequential data assimilation approach based on Kalman filtering; both methods are more general and can easily be adapted to treat different problems. This inverse problem is particularly interesting since it can also be applied in other fields, such as the cell cycle or dust formation.
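To make the estimation step concrete, the sketch below applies an extended Kalman filter to a toy two-species kinetic scheme whose state is augmented with its rate constants and observed through a single scalar, SLS-like signal. The kinetic scheme, the rate values and the observation weights are hypothetical placeholders for illustration; they are not the model identified in the thesis.

```python
# Minimal sketch of joint state/parameter estimation with an extended Kalman
# filter: a toy two-species kinetic scheme, augmented with its rate constants,
# observed through one scalar signal. All rates and weights are assumptions.
import numpy as np

def f(x, dt):
    """One Euler step of the augmented state x = [u1, u2, k1, k2]."""
    u1, u2, k1, k2 = x
    du1 = -k1 * u1               # species 1 consumed (assumed scheme)
    du2 = k1 * u1 - k2 * u2      # species 2 produced, then degraded
    return np.array([u1 + dt * du1, u2 + dt * du2, k1, k2])

def F_jac(x, dt):
    """Jacobian of f with respect to x, used in the EKF prediction step."""
    u1, u2, k1, k2 = x
    return np.array([
        [1 - dt * k1, 0.0,         -dt * u1,  0.0     ],
        [dt * k1,     1 - dt * k2,  dt * u1, -dt * u2 ],
        [0.0,         0.0,          1.0,      0.0     ],
        [0.0,         0.0,          0.0,      1.0     ],
    ])

H = np.array([[1.0, 4.0, 0.0, 0.0]])     # linear stand-in for a size-weighted observable
Q = np.diag([1e-6, 1e-6, 1e-5, 1e-5])    # process noise (lets the filter adjust the rates)
R = np.array([[1e-3]])                   # measurement noise of the SLS-like signal

def ekf(y_obs, x0, P0, dt):
    x, P = x0.copy(), P0.copy()
    for y in y_obs:
        Fk = F_jac(x, dt)                        # prediction
        x, P = f(x, dt), Fk @ P @ Fk.T + Q
        S = H @ P @ H.T + R                      # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (y - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P                                  # x[2:] holds the estimated rates

# usage on synthetic data generated with "true" rates k1 = 0.5, k2 = 0.1
rng = np.random.default_rng(1)
truth, dt = [np.array([1.0, 0.0, 0.5, 0.1])], 0.05
for _ in range(200):
    truth.append(f(truth[-1], dt))
y_obs = [H @ x + rng.normal(scale=0.03) for x in truth[1:]]
x_hat, _ = ekf(y_obs, np.array([1.0, 0.0, 0.1, 0.05]), np.eye(4) * 0.1, dt)
print("estimated rates:", x_hat[2:])
```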
|
22 |
Confronting Theory with Data: the Case of DSGE Modeling
Poudyal, Niraj, 07 December 2012
The primary objective of this work is to confront the DSGE model (Ireland, 2011) with data in an attempt to evaluate its empirical adequacy. The perspective used for this evaluation is based on unveiling the statistical model (a structural VAR) behind the DSGE model, with a view to testing its probabilistic assumptions vis-à-vis the data. It is shown that the implicit statistical model is seriously misspecified, and the information from mis-specification (M-S) testing is then used to respecify the original structural VAR in an attempt to achieve statistical adequacy. The latter provides a precondition for the reliability of any inference based on the statistical model. Once the statistical adequacy of the respecified model is secured through thorough M-S testing, inferences such as the likelihood-ratio test for the overidentifying restrictions, forecasting, and impulse response analysis are applied to the original DSGE model to evaluate its empirical adequacy. Finally, the same inferential procedure is applied to the CAPM model. / Ph. D.
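As a rough illustration of the confrontation step, the snippet below sketches a likelihood-ratio test for overidentifying restrictions, comparing a restricted (DSGE-implied) likelihood with that of an unrestricted, statistically adequate VAR. The log-likelihood values and the number of restrictions are invented purely for the example.

```python
# Sketch of the likelihood-ratio (LR) test for overidentifying restrictions.
# All numbers below are hypothetical and serve only to show the computation.
from scipy.stats import chi2

loglik_unrestricted = -512.4   # hypothetical log-likelihood of the respecified VAR
loglik_restricted = -534.9     # hypothetical log-likelihood under the DSGE restrictions
n_restrictions = 12            # hypothetical number of overidentifying restrictions

lr_stat = 2.0 * (loglik_unrestricted - loglik_restricted)
p_value = chi2.sf(lr_stat, df=n_restrictions)
print(f"LR = {lr_stat:.2f}, p-value = {p_value:.4f}")
# A small p-value rejects the overidentifying restrictions, i.e. the data do
# not support the DSGE structure imposed on the statistically adequate VAR.
```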
|
23 |
Fiscal Policy in Sweden: Analyzing the Effectiveness of Fiscal Policy During the Recent Business Cycle
Antonevich, Konstantin, January 2010
The economic downturn of 2008-2010 has encouraged many economists and politicians to reconsider the role of fiscal policy. Whereas there is a broadly accepted model which describes the influence of monetary policy on the economy, there is no such consensus concerning fiscal policy. This paper aims to study the effectiveness of fiscal policy actions in Sweden over the past 15 years, from the end of the banking crisis of 1992-93 to date, with a specific focus on the measures introduced in 2007-2010, and employs both qualitative and quantitative analyses.

The qualitative analysis investigates different expansionary fiscal measures, inter alia the earned income tax credit, the new legislation for crisis management of banks, the guarantee program and the establishment of a stability fund.

The quantitative analysis is based on a 4-variable Vector Autoregression model which helps to identify the influence of general government expenditure, revenue and central government debt on GDP fluctuations over the past 15 years. The results demonstrate a positive response of GDP to an increase in government expenditure, with the maximum response reached after 8 quarters. GDP also grows in response to a positive shock to central government debt, which is in line with the macroeconomic theory of expansionary fiscal policy. The positive response to an increase in revenue is somewhat contradictory and could become a topic for further in-depth research.
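A minimal sketch of the quantitative exercise is given below, assuming a statsmodels workflow; the file name, column names and lag settings are illustrative placeholders rather than the thesis's exact specification.

```python
# Sketch of a 4-variable VAR and an impulse-response plot in the spirit of the
# quantitative analysis above. Data file and column names are assumptions.
import pandas as pd
from statsmodels.tsa.api import VAR

# quarterly series: GDP, general government expenditure, revenue, central
# government debt (e.g. log-differenced); 'fiscal_sweden.csv' is a placeholder
df = pd.read_csv("fiscal_sweden.csv", index_col=0, parse_dates=True)

model = VAR(df[["gdp", "gov_expenditure", "gov_revenue", "central_gov_debt"]])
results = model.fit(maxlags=8, ic="aic")       # lag order chosen by AIC

irf = results.irf(12)                          # responses over 12 quarters
irf.plot(orth=True, impulse="gov_expenditure", response="gdp")
# With orthogonalised shocks, the plotted path traces how GDP responds to a
# government-expenditure shock; the abstract reports a peak after ~8 quarters.
```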
|
24 |
none
Yen, Shun-li, 14 June 2004
none
|
25 |
A Study on SPAN's Risk-measuring Methodology For Portfolio That Include Options-Apply Diagonal Model
Tsai, Huei-Chen, 11 July 2003
none
|
27 |
Évolution socio-économique dans le Chasséen de la grotte de l'Église supérieure, Var : apport de l'analyse fonctionnelle des industries lithiques
Gassin, Bernard, January 1996
Revised text of a university thesis in archaeology, Paris 10. / The work erroneously bears: ISSN 1151-5358. Bibliography pp. 269-289.
|
28 |
Retornos e riscos na comercialização de milho no estado do Paraná: uma aplicação do modelo Value-at-Risk / Marketing returns and risks for corn in the state of Paraná, Brazil: an application of Value-at-Risk
Leismann, Edison Luiz, 10 July 2002
Marketing is the last and one of the most important parts of any agricultural system. This research therefore evaluates both the risks and the returns of corn marketing for farmers and for marketing intermediaries (co-ops and grain elevators) in the State of Paraná, Brazil; the marketing strategies analysed are those most widely adopted in the state. The analysis is carried out in two steps, the first involving farmers' strategies and the second the intermediaries' strategies. Each year was divided into 52 weeks, and separate analyses were carried out for the winter and summer harvests over the period 1994-2001. Risk was measured with Value-at-Risk (VaR) procedures following three empirical approaches: Delta-Normal, historical simulation and Monte Carlo simulation.

Returns were weighted by risk through a modified version of the Sharpe index, in which the VaR estimates are used as the measure of risk. The results suggest that storage is not profitable for farmers, especially once returns are weighted by the risks involved, so farmers do better selling their corn just after harvest. Storage was not profitable for the marketing intermediaries either. The short-selling strategy, on the other hand, offers larger returns and lower risk than storage; but once the risk factor is included in the analysis, the returns fail to compensate for the risk in all strategies except the simultaneous buying-and-selling (SBS) strategy. A major limitation of the study is the short time series, chosen to avoid the period before the Real Plan (1994); in the future this limitation can be removed and the accuracy of the estimates improved. Finally, the application shows that the risk measures adopted here can serve as managerial tools for corn traders and for traders of other products as well.
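To reproduce the flavour of the risk calculations, the sketch below estimates a one-period VaR with the three approaches named in the abstract and forms the VaR-scaled return ratio; the synthetic return series, the 95% confidence level and the return figures are assumptions, not the thesis's data.

```python
# Sketch of the three VaR estimators named above (Delta-Normal, historical
# simulation, Monte Carlo) applied to a series of weekly returns, plus a
# VaR-scaled return ratio used to rank strategies. All inputs are assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
weekly_returns = rng.normal(loc=0.001, scale=0.03, size=8 * 52)  # placeholder weekly data
alpha = 0.95

# Delta-Normal: assume normal returns and read the loss quantile off mu and sigma
mu, sigma = weekly_returns.mean(), weekly_returns.std(ddof=1)
var_delta_normal = -(mu + sigma * norm.ppf(1 - alpha))

# Historical simulation: empirical quantile of the observed returns
var_historical = -np.quantile(weekly_returns, 1 - alpha)

# Monte Carlo: simulate from a fitted distribution (here, the fitted normal)
sims = rng.normal(mu, sigma, size=100_000)
var_monte_carlo = -np.quantile(sims, 1 - alpha)

print(f"VaR(95%): delta-normal {var_delta_normal:.4f}, "
      f"historical {var_historical:.4f}, Monte Carlo {var_monte_carlo:.4f}")

# VaR-weighted return ratio in the spirit of the modified Sharpe index above:
# excess return over the risk-free rate scaled by VaR; a strategy is taken as
# viable when the ratio exceeds one (figures below are hypothetical).
risk_free, strategy_return = 0.002, 0.035
ratio = (strategy_return - risk_free) / var_historical
print(f"VaR-weighted return ratio: {ratio:.2f}")
```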
|
29 |
Přelévání volatility v nově členských státech Evropské unie: Bayesovský model / Volatility Spillovers in New Member States: A Bayesian Model
Janhuba, Radek, January 2012
Volatility spillovers in stock markets have become an important phenomenon, especially in times of crisis. Mechanisms of shock transmission from one market to another matter for international portfolio diversification. Our thesis examines impulse responses and variance decompositions of the main stock indices in emerging Central European markets (Czech Republic, Poland, Slovakia and Hungary) over the period January 2007 to August 2009. Two models are used: a vector autoregression (VAR) model with constant residual variance and a time-varying-parameter vector autoregression (TVP-VAR) model with stochastic volatility. In contrast to other comparable studies, Bayesian methods are used to estimate both models. Our results confirm the presence of volatility spillovers among all markets. Interestingly, we find significant opposite transmission of shocks from the Czech Republic to Poland and Hungary, suggesting that investors see the Central European exchanges as separate markets.
Bibliographic record: Janhuba, R. (2012): Volatility Spillovers in New Member States: A Bayesian Model. Master thesis, Charles University in Prague, Faculty of Social Sciences, Institute of Economic Studies. Supervisor: doc. Roman Horváth Ph.D.
JEL classification: C11, C32, C58, G01, G11, G14
Keywords: Volatility spillovers,...
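As an illustration of the variance-decomposition exercise, the sketch below uses a constant-parameter VAR from statsmodels as a simple stand-in for the thesis's Bayesian TVP-VAR with stochastic volatility; the file name, index column names and lag choice are assumptions.

```python
# Sketch of a forecast-error variance decomposition (FEVD) for four Central
# European index return series. This constant-parameter VAR is only a stand-in
# for the Bayesian TVP-VAR of the thesis; inputs below are placeholders.
import pandas as pd
from statsmodels.tsa.api import VAR

returns = pd.read_csv("ce_index_returns.csv", index_col=0, parse_dates=True)
model = VAR(returns[["PX", "WIG20", "SAX", "BUX"]])   # Prague, Warsaw, Bratislava, Budapest
results = model.fit(maxlags=5, ic="bic")

fevd = results.fevd(10)   # share of each index's forecast-error variance that is
fevd.summary()            # explained by shocks originating in the other markets
```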
|
30 |
Essai empirique sur les conséquences de l’expansion de la liquidité globale dans les pays destinataires / Empirical essay on the global liquidity spillovers on receiving countries
Rapelanoro, Nady, 12 July 2017
Since the seminal paper by Baks and Kramer (1999), the concept of global liquidity has once again attracted attention, because the factors behind its expansion are considered in the literature to have contributed to the build-up of vulnerabilities prior to the global financial crisis. Given the importance of global liquidity issues, the literature has largely focused on financial stability in the issuing countries. In contrast, the research developed in this Ph.D. thesis adopts the perspective of the receiving countries, particularly emerging countries. To address the central question of identifying global liquidity spillovers into the receiving countries, the thesis analyses the phenomenon in three chapters. First, we generalise the analysis of financial stability concerns to emerging countries. Second, we analyse how reserve accumulation in the receiving countries affects global liquidity conditions in the main issuing country. Third, we examine the behaviour of national monetary authorities seeking to insulate their economies from the effects of the global liquidity expansion.
|