11. Extrapolative Beliefs and the Value Premium — Zhaojing Chen (22 July 2021)
In models of stock returns where investors hold extrapolative beliefs about future stock prices (e.g., Barberis and Shleifer (2003)), price momentum and the value premium both arise naturally. The key insight from these models is that the strength and timing of these cross-sectional return anomalies are conditional on the degree of extrapolative bias: a higher (lower) degree of over-extrapolation leads to a stronger value premium (stronger momentum).

Using the time-series variation in the degree of over-extrapolation documented in Cassella and Gulen (2018), I first directly test the hypothesis that both value and momentum stem from over-extrapolation in financial markets. I find that the average momentum return is 1.00% (0.10%) per month when the degree of over-extrapolation is low (high), whereas the average value premium is 0.51% (1.29%) per month following low (high) levels of over-extrapolation.

Furthermore, I extend the model in Barberis and Shleifer (2003) by including both within-equity extrapolators and across-asset-class extrapolators. The extension is based on the idea that when extrapolators move capital in and out of the equity market, they disproportionately buy growth stocks in good times and sell value stocks in bad times. The model predicts that the cross-sectional value premium should be stronger following states of large market-wide over- or undervaluation, owing to the additional extrapolative demand to buy or sell. I test this prediction empirically and find strong support for it: the value premium is 3.42% per month following market-wide undervaluation and 1.70% per month following market-wide overvaluation. In the remaining 60% to 80% of the sample, when the market is neither significantly over- nor undervalued, there is no significant value premium at a monthly horizon, and the value premium is only 0.54% per month at an annual horizon. I provide suggestive evidence on portfolio return dynamics, investor expectation errors, and fund flows that supports the extrapolative demand channel. Overall, this work examines more closely the effect of extrapolative beliefs on the cross-section of asset prices and offers support for extrapolation-based asset-pricing theories.
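The conditional test described in this abstract amounts to sorting months by a measure of over-extrapolation and averaging each anomaly's long-short return within each regime. A minimal sketch in Python, using simulated data; the column names and numbers are illustrative, not the author's actual series:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly data: long-short momentum and value returns plus a
# measure of the degree of over-extrapolation (standing in for the
# Cassella-Gulen (2018) series); all names and values are invented.
rng = np.random.default_rng(0)
months = pd.date_range("1970-01-01", periods=600, freq="MS")
df = pd.DataFrame({
    "momentum_ret": rng.normal(0.005, 0.04, 600),
    "value_ret": rng.normal(0.004, 0.035, 600),
    "over_extrapolation": rng.uniform(0.0, 1.0, 600),
}, index=months)

# Conditional test: average each anomaly's return in months following
# low versus high degrees of over-extrapolation.
high = df["over_extrapolation"] > df["over_extrapolation"].median()
print(df.groupby(high)[["momentum_ret", "value_ret"]].mean())
```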
12. Methods of Extrapolating Low Cycle Fatigue Data to High Stress Amplitudes — David Charles Radonovich (1 January 2007)
Modern gas turbine component design devotes considerable effort to the prediction and avoidance of fatigue. Advances in the prediction of low-cycle fatigue (LCF) cracks, which have the potential to cause component failure, will reduce the repair and replacement costs of turbine components. Regression modeling of LCF test data is typically restricted to use over the range of the test data, and it is often difficult to characterize the plastic-strain curve-fit constants when the plastic strain is a small fraction of the total strain, as is often the case for high-strength, moderate-ductility Ni-base superalloys. The intent of this project is to identify the optimal technique for extrapolating LCF test results to stress amplitudes approaching the ultimate strength. The proposed approach is to find appropriate upper and lower bounds for the cyclic stress-strain and strain-life equations. Techniques investigated include monotonic test data anchor points, strain compatibility, and temperature independence of the Coffin-Manson relation. A Ni-base superalloy (IN738 LC) data set of fully reversed fatigue tests at several elevated temperatures, with minimal plastic strain relative to the total strain range, was used to model several options for representing the upper and lower bounds of material behavior. Several high-strain LCF tests were performed with stress amplitudes approaching the ultimate strength, and an augmented data set was developed by combining the high-strain data with the original data set. The effectiveness of the bounding equations is judged by comparing their results on the base data set with those of a linear regression model using the augmented data set.
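The cyclic stress-strain and strain-life equations referred to here are typically the Ramberg-Osgood and Coffin-Manson (with Basquin's elastic term) forms. A minimal sketch of how they are evaluated, with illustrative constants that are not IN738 LC properties:

```python
# Illustrative material constants only -- not IN738 LC values.
E = 200e3         # elastic modulus, MPa
K_prime = 1500.0  # cyclic strength coefficient, MPa
n_prime = 0.1     # cyclic strain hardening exponent
sigma_f = 1800.0  # fatigue strength coefficient, MPa
b = -0.08         # fatigue strength exponent
eps_f = 0.5       # fatigue ductility coefficient
c = -0.6          # fatigue ductility exponent

def cyclic_strain(stress_amp):
    """Ramberg-Osgood cyclic curve: total strain amplitude as the sum of
    elastic and plastic parts at a given stress amplitude (MPa)."""
    return stress_amp / E + (stress_amp / K_prime) ** (1.0 / n_prime)

def strain_life(reversals):
    """Coffin-Manson strain-life relation: strain amplitude sustained
    for a given number of reversals 2N."""
    return sigma_f / E * reversals ** b + eps_f * reversals ** c

# Example: strain amplitude at 1e4 reversals, and total strain at a
# stress amplitude of 900 MPa.
print(strain_life(1e4), cyclic_strain(900.0))
```

The difficulty the abstract describes shows up directly in the second term of `cyclic_strain`: when plastic strain is a tiny fraction of the total, the fitted `K_prime` and `n_prime` are poorly identified, which is what motivates bounding rather than naive regression.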
13. Posology optimization of antiepileptic drugs in children using adult and pediatric pharmacokinetic data — Christelle Rodrigues (28 November 2018)
Children differ from adults not only in body size but also physiologically: developmental and maturation processes occur throughout growth. These processes are not linear and induce pharmacokinetic and pharmacodynamic differences. Thus, contrary to common practice, it is not appropriate to scale pediatric doses directly and linearly from adult doses. Studying pharmacokinetics in children is therefore essential to determine pediatric dosages. The standard methodology is population analysis through non-linear mixed-effects models, which allows the analysis of sparse and unbalanced data; in return, the lack of individual data must be compensated by including more individuals. This is a problem when the indication is a rare disease, as are the epileptic syndromes of childhood. In that case, extrapolating adult pharmacokinetic models to the pediatric population can be advantageous. The objective of this thesis was to evaluate dosage recommendations for antiepileptic drugs both when pediatric pharmacokinetic data are informative enough to build a model and when they are not, by adequately extrapolating adult information. First, a parent-metabolite model of oxcarbazepine and its monohydroxy derivative (MHD) was developed in epileptic children aged 2 to 12 years. This model showed that younger children, as well as patients co-treated with enzyme inducers, require higher doses. A model was also developed for epileptic children aged 1 to 18 years treated with a sustained-release microsphere formulation of valproic acid; it accounted mechanistically for the flip-flop kinetics associated with the formulation and for the non-linear relationship between clearance and dose caused by saturable protein binding. Again, the need for higher doses in younger children was highlighted. Then, an adult model of vigabatrin was extrapolated to children to determine the doses achieving exposures similar to those of adults in resistant focal-onset seizures. From the results obtained, which agree with the conclusions of clinical trials, we proposed an ideal maintenance dose for this indication. Finally, we studied the relevance of extrapolation by theoretical allometry in a context of non-linearity, using stiripentol as an example. We concluded that this method seems to give good predictions from the age of 8 years, whereas for drugs with linear elimination it seems adequate from the age of 5. In conclusion, we tested and compared different approaches to help determine dosing recommendations in children. The study of pediatric pharmacokinetics in dedicated trials remains essential for the proper use of drugs.
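Theoretical allometry, evaluated above with vigabatrin and stiripentol, scales adult clearance to a child's body weight with a fixed exponent of 0.75; under linear kinetics, the dose needed to match adult exposure scales the same way. A minimal sketch with invented numbers, not values from the thesis:

```python
def allometric_clearance(cl_adult, weight_child, weight_adult=70.0):
    """Theoretical allometry: clearance scales with (weight ratio)^0.75."""
    return cl_adult * (weight_child / weight_adult) ** 0.75

# Example: a hypothetical adult clearance of 10 L/h scaled to a 20 kg child.
cl_child = allometric_clearance(10.0, 20.0)
print(f"{cl_child:.2f} L/h")  # ~3.91 L/h, i.e. well above linear scaling (2.86 L/h)
```

The gap between 3.91 L/h and the 2.86 L/h a linear weight ratio would give is exactly why per-kilogram doses are typically higher in younger children.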
14. Toward the upscaling of the impact of overshoots on the stratospheric water budget at a continental scale — Abhinna Behera (12 February 2018)
This dissertation lays the groundwork for upscaling the impact of stratospheric overshooting convection (SOC) on the water-vapor budget of the tropical tropopause layer (TTL) and lower stratosphere to a continental scale. To do so, we take advantage of measurements from the TRO-Pico field campaign, held at Bauru, Brazil, during two wet/convective seasons in 2012 and 2013, and perform several numerical simulations of the TTL over a domain encompassing a large part of South America with the BRAMS mesoscale model. First, we simulate a full wet season without considering SOC. This simulation is then evaluated against key features of the TTL (TTL temperature, water vapor, cloud tops, gravity waves). In the absence of SOC, and before upscaling its impact, we demonstrate that the model reproduces the main characteristics of the TTL reasonably well. The importance of large-scale upwelling relative to finite-scale deep convective processes is then discussed. Second, from fine-scale BRAMS simulations of SOC cases observed during TRO-Pico, we deduce physical quantities (ice mass flux, ice mass budget, SOC size) that are used to define a nudging of the overshoot impact in large-scale simulations. A typical maximum impact of about 2 kt of water vapor and 6 kt of ice per SOC cell is computed; this estimate is 30% lower for another microphysical setup of the model. We also show that the stratospheric hydration by SOC is mainly due to two types of hydrometeors in the model.
15. Compressive sampling meets seismic imaging — Felix J. Herrmann (January 2007)
No description available.
16. Compressed wavefield extrapolation with curvelets — Tim T. Y. Lin and Felix J. Herrmann (January 2007)
An explicit algorithm for the extrapolation of one-way wavefields is proposed which combines recent developments in information theory and theoretical signal processing with the physics of wave propagation. Because of excessive memory requirements, explicit formulations for wave propagation have proven to be a challenge in 3-D. By using ideas from "compressed sensing", we are able to formulate the (inverse) wavefield extrapolation problem on small subsets of the data volume, thereby reducing the size of the operators. According to compressed sensing theory, signals can successfully be recovered from an incomplete set of measurements when the measurement basis is incoherent with the representation in which the wavefield is sparse. In this new approach, the eigenfunctions of the Helmholtz operator are recognized as a basis that is incoherent with curvelets, which are known to compress seismic wavefields. By casting the wavefield extrapolation problem in this framework, wavefields can successfully be extrapolated in the modal domain via a computationally cheaper operation. A proof of principle for the compressed sensing method is given for wavefield extrapolation in 2-D. The results show that our method is stable and produces results identical to the direct application of the full extrapolation operator.
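For orientation: in a laterally homogeneous medium the Helmholtz eigenfunctions mentioned above are Fourier modes, and one-way depth extrapolation reduces to a phase shift in the horizontal-wavenumber domain (Gazdag's classical method). A bare-bones 2-D sketch of that baseline operator, without the curvelet/compressed-sensing machinery that is the paper's actual contribution:

```python
import numpy as np

def phase_shift_extrapolate(u, dz, v, omega, dx):
    """One-way phase-shift depth extrapolation of a monochromatic 2-D
    wavefield u(x) over a step dz in a constant-velocity medium v.
    Propagating modes get a phase rotation; evanescent modes decay
    (the complex sqrt yields a positive imaginary kz for them)."""
    kx = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)   # horizontal wavenumbers
    kz = np.sqrt(((omega / v) ** 2 - kx ** 2).astype(complex))
    return np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * dz))

# Example: a 30 Hz Gaussian-tapered wavefield stepped 10 m deeper.
x = np.arange(256) * 4.0                            # 4 m receiver spacing
u0 = np.exp(-((x - x.mean()) / 50.0) ** 2).astype(complex)
u1 = phase_shift_extrapolate(u0, dz=10.0, v=2000.0,
                             omega=2 * np.pi * 30.0, dx=4.0)
```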
17. Estimating multivariate tail probabilities — Mohamed Néjib Dalhoumi (25 September 2017)
This PhD thesis presents contributions to the modelling of multivariate extreme values. We introduce a new tail model for multivariate distributions with Pareto margins, inspired by that of Wadsworth and Tawn (2013). A new non-standard multivariate regular variation, whose index is a function of two variables, is introduced to generalize the modeling approaches proposed by Ramos and Ledford (2009) and by Wadsworth and Tawn (2013). Building on this new approach, we propose a new class of semi-parametric models allowing multivariate extrapolation along trajectories covering the entire first positive quadrant. We also consider parametric models built from a non-negative measure satisfying a constraint that generalizes that of Ramos and Ledford (2009). These new models are flexible and valid in situations of both asymptotic dependence and asymptotic independence.
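For context, a standard formulation from the Ledford-Tawn line of work that these models generalize can be written as follows; the second display is only a schematic rendering of the abstract's "index equal to a function of two variables", not the thesis's exact definition:

```latex
% Ledford--Tawn hidden regular variation for (X, Y) with standard
% Pareto margins: L slowly varying, \eta \in (0, 1] the coefficient of
% tail dependence (\eta = 1: asymptotic dependence possible;
% \eta < 1: asymptotic independence).
\[
  \Pr(X > t,\; Y > t) \;=\; L(t)\, t^{-1/\eta}, \qquad t \to \infty .
\]
% Replacing the scalar index 1/\eta by a function of two variables
% lets the joint tail be extrapolated along any ray in the first
% quadrant -- schematically (illustrative notation only):
\[
  \Pr(X > t x,\; Y > t y) \;=\; L(t;\, x, y)\, t^{-\lambda(x, y)},
  \qquad x, y > 0 .
\]
```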
18. Long term extrapolation and hedging of the South African yield curve — Michael Patrick Thomas (17 June 2009)
The South African fixed interest rate market has historically had very little liquidity beyond 15-20 years. Most financial institutions are currently prepared to quote and trade interest rate risk up to a maximum term of 30 years; trades beyond 30 years usually attract very onerous spreads and raise questions about an appropriate level of mid-rates. However, many South African entities, such as life insurance companies and pension funds, take on exposure to interest rates beyond 30 years. These entities have historically used very traditional approaches to hedging their interest rate exposures across the whole term structure and have typically done little to gain any further protection. The problems faced by any entity exposed to long-term interest rate risk in South Africa can be generalised as:

1. The inadequacy of traditional matching methods (i.e., immunisation and bucketing) to cope with long-term interest rate risks.
2. The non-observability of interest rate data beyond the maximum term of the yield curve, and the associated inability to adequately quantify interest rate risk.
3. The lack of liquidity in long-term interest rate markets, and the associated inability to adequately hedge long-term interest rate risk.

We examine various traditional approaches to matching/hedging interest rate risk using information available at observable/tradable terms on the nominal yield curve, and discuss why these approaches are not suitable for hedging long-term interest rate risk. Some modern methods to forecast and hedge long-term interest rate risks are then examined, and the possibility of their use in managing long-term interest rate risk is explored. On the back of these investigations, we propose a number of possible yield curve extrapolation procedures, together with a methodology for performing calibrations. Using some general theoretical hedging results, we perform a case study analysing the performance of various theoretical hedges over a historical period from October 2001 to March 2007. The results indicate that extrapolation and hedging of the yield curve can significantly reduce the Value-at-Risk of long-term interest rate exposures. A second case study analyses the performance of the various theoretical hedges using out-of-sample simulated yield curve data. We find a significant benefit to the use of yield curve extrapolation techniques, specifically when used in conjunction with a hedging strategy. In some cases the simpler extrapolation techniques actually increase risk (significantly) when used in conjunction with hedging, whereas some of the more advanced techniques reduce risk significantly. For an entity looking to manage long-term interest rate risk, the choice of extrapolation technique and hedging strategy go hand in hand; for this reason the cost of hedging and the reduction in risk are strongly correlated. Our results suggest that the benefits must be weighed against the cost of hedging, and this cost seems to increase with increasing reduction in risk. The research and results presented here are related to the paper "Long Term Forecasting and Hedging of the South African Yield Curve" presented by Thomas and Maré at the 2007 Actuarial Convention in South Africa. Dissertation (MSc), University of Pretoria, 2009.
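As a toy illustration of the extrapolation step only: the sketch below fits a Nelson-Siegel curve (a standard parametric form chosen here for illustration; not necessarily one of the procedures the thesis proposes) to observable tenors and reads off rates beyond 30 years. All numbers are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(t, beta0, beta1, beta2, tau):
    """Nelson-Siegel zero curve; beta0 is the asymptotic long rate,
    so extrapolated yields flatten toward it beyond the data."""
    x = t / tau
    loading = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * loading + beta2 * (loading - np.exp(-x))

# Observable tenors up to 30 years (yields in %, invented numbers).
tenors = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
yields = np.array([7.1, 7.4, 7.8, 8.3, 8.6, 8.7, 8.6, 8.5, 8.4])

params, _ = curve_fit(nelson_siegel, tenors, yields, p0=[8.0, -1.0, 1.0, 2.0])
for t in (40.0, 50.0):           # beyond the last liquid tenor
    print(f"{t:.0f}y: {nelson_siegel(t, *params):.3f}%")
```

A hedging strategy built on such a curve inherits its long-end behaviour, which is why the thesis finds the choice of extrapolation technique and hedging strategy to be inseparable.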
19. Étude de procédés d'extrapolation en analyse numérique (Study of extrapolation methods in numerical analysis) — Pierre-Jean Laurent (15 June 1964)
No description available.
20. Statistics of amplitude and fluid velocity of large and rare waves in the ocean — Il Ho Suh (June 1900)
The understanding of large and rare ocean waves is becoming more important as these rare events become more commonly observed. To design a marine structure or vehicle to withstand such a potentially devastating phenomenon, the designer must have knowledge of extreme waves with return periods of 50 and 100 years. Based on satellite radar altimeter data, researchers have successfully predicted extreme significant wave heights with return periods of 50 and 100 years. This thesis extends that research by estimating the most probable extreme wave heights and other wave statistics based on spectral analysis. The same technique used for extreme significant wave height prediction is applied to the extrapolation of the corresponding mean wave periods, and these are used to construct two-parameter spectra representing storm sea conditions. The prediction of the most probable extreme wave heights, as well as other statistical data, is based on linear theory and short-term order statistics. Sufficient knowledge of second-order effects on wave generation exists and could be applied as a logical progression of the simulation approach in this thesis; however, because this greatly increases computation time, and because the kinematics of deep-sea spilling breakers are not yet fully understood (substantial new research being required), nonlinear effects are not included. Spectral analysis can provide valuable statistical information in addition to extreme wave height data, and preliminary results show good agreement with other prediction methods, including wave simulation based on the Pierson-Moskowitz spectrum. (Contract number: N662271-97-G-0025; CIVINS; US Navy author.)
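The short-term order statistics referred to here follow from the Rayleigh distribution of wave heights in a narrow-banded linear sea: with significant height Hs and n waves in a storm, the most probable maximum solves n·exp(-2h²/Hs²) = 1, i.e. h = Hs·sqrt(ln(n)/2). A sketch using a two-parameter Pierson-Moskowitz-type spectrum with invented storm values, not the thesis's 50- or 100-year estimates:

```python
import numpy as np

def pierson_moskowitz(omega, hs, tp):
    """Two-parameter Pierson-Moskowitz-type spectral density, written in
    terms of significant wave height hs (m) and peak period tp (s)."""
    wp = 2.0 * np.pi / tp
    return (5.0 / 16.0) * hs**2 * wp**4 / omega**5 \
        * np.exp(-1.25 * (wp / omega) ** 4)

# Zeroth spectral moment m0 by simple numerical integration
# (analytically, m0 = hs^2 / 16 for this spectrum).
omega = np.linspace(0.1, 4.0, 4000)
hs, tp = 12.0, 14.0                       # invented storm parameters
m0 = np.sum(pierson_moskowitz(omega, hs, tp)) * (omega[1] - omega[0])

# Rayleigh short-term order statistics: Hs = 4*sqrt(m0), and the most
# probable extreme height in n waves solves n*exp(-2 h^2 / Hs^2) = 1.
hs_spec = 4.0 * np.sqrt(m0)
n_waves = 3 * 3600 / tp                   # rough wave count in a 3-hour storm
h_mp = hs_spec * np.sqrt(np.log(n_waves) / 2.0)
print(f"Hs = {hs_spec:.2f} m, most probable extreme = {h_mp:.2f} m")
```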