221 |
EMPIRICAL EVIDENCE ON PREDICTABILITY OF EXCESS RETURNS: CONTRARIAN STRATEGY, DOLLAR COST AVERAGING, TACTICAL ASSET ALLOCATION BASED ON A THICK MODELING STRATEGY
BORELLO, GIULIANA, 15 March 2010 (has links)
This thesis consists of three papers that confirm the predictability of excess returns over the market using simple equity portfolio strategies available both to the ordinary saver and to the institutional investor.
The first chapter analyses the profitability of the contrarian strategy in the Italian stock market. The literature has already amply shown that stock returns exhibit negative autocorrelation in the short run and mean reversion in the long run. The contrarian strategy is used to profit from this negative serial correlation of stock returns: by selling the stocks that were past winners (in terms of return) and buying the past "losers", abnormal profits are obtained.
The second paper focuses on the portfolio strategy known as Dollar Cost Averaging (DCA). Dollar Cost Averaging is a simple portfolio methodology in which a fixed sum of money is invested in a risky asset at equal time intervals over the whole predetermined investment horizon. The work compares the advantages of this strategy, in terms of a substantial reduction in risk, from the point of view of an ordinary saver. In the last chapter, taking the perspective of an institutional investor who receives a large amount of information and many forecasts every day, I investigate how all of this information can be used to decide promptly how best to allocate the fund's assets. The investor normally tries to identify the best possible forecast, but almost never manages to identify the exact process driving the underlying prices. This observation has led many researchers to use numerous explanatory factors to obtain a good forecast. The paper supports the existing literature that uses a new approach to transform return forecasts into portfolio management choices that can deliver better portfolio performance. Starting from the model-uncertainty framework of Pesaran and Timmermann (1996), I consider a large number of macroeconomic factors in order to identify a predictive model that allows me to forecast market movements, taking into account the main economic and financial indicators and the fact that their predictive power changes over time. / This thesis is composed of three papers that confirm the predictability of excess returns using simple portfolio strategies, considered from different points of view (i.e. a generic saver and an institutional investor).
In the first chapter, I investigate the profitability of the contrarian strategy in the Italian stock market.
Empirical research has shown that asset returns tend to exhibit some form of negative autocorrelation in the short term and mean reversion over long horizons. The contrarian strategy exploits this negative serial correlation in stock returns: selling past winners and buying past losers generates abnormal profits.
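As a rough illustration of the trading rule described above (and not of the thesis's actual empirical design, sample or weighting scheme), the following sketch forms a zero-cost contrarian portfolio on synthetic data: at each period it sells the previous period's winners and buys its losers. The ranking window, portfolio size and return-generating process are all assumed for the example.

```python
# Illustrative sketch of a simple contrarian rule: rank stocks on past returns,
# buy the losers and sell the winners, then hold for one period.
# The universe, portfolio size and toy data are hypothetical, not the thesis's design.
import numpy as np

def contrarian_returns(returns: np.ndarray, n_extreme: int = 10) -> np.ndarray:
    """returns: (T periods x N stocks) matrix of simple returns."""
    T, N = returns.shape
    profits = []
    for t in range(1, T):
        past = returns[t - 1]
        order = np.argsort(past)                 # ascending: losers first
        losers, winners = order[:n_extreme], order[-n_extreme:]
        # equal-weighted long losers, short winners (zero-cost portfolio)
        profits.append(returns[t, losers].mean() - returns[t, winners].mean())
    return np.array(profits)

rng = np.random.default_rng(0)
rets = rng.normal(0.0, 0.05, size=(60, 100))     # toy data: 60 months, 100 stocks
print(contrarian_returns(rets).mean())
```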
The second chapter focuses on another classic portfolio strategy, Dollar Cost Averaging (DCA). Dollar Cost Averaging refers to an investment methodology in which a fixed dollar amount is invested in a risky asset at equal intervals over the holding period. The paper compares the advantages and risks of this strategy from the point of view of a saver.
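The DCA rule itself is simple to state in code. The sketch below, on a synthetic price path, invests a fixed amount each period and compares the terminal value with a lump-sum investment; the prices, budget and horizon are illustrative assumptions, not the thesis's data.

```python
# Sketch of Dollar Cost Averaging: invest a fixed amount at equal intervals,
# versus investing everything at once. Prices and amounts are toy values.
import numpy as np

def dca_units(prices: np.ndarray, amount_per_period: float) -> float:
    """Total units bought when investing a fixed sum at each price."""
    return float(np.sum(amount_per_period / prices))

rng = np.random.default_rng(1)
prices = 100 * np.cumprod(1 + rng.normal(0.002, 0.04, size=24))  # 24 periods

budget = 24_000.0
dca = dca_units(prices, budget / len(prices)) * prices[-1]
lump_sum = (budget / prices[0]) * prices[-1]
print(f"DCA terminal value:      {dca:,.0f}")
print(f"Lump-sum terminal value: {lump_sum:,.0f}")
```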
Lastly, taking the perspective of an institutional investor who has a large amount of information and forecasts available every day, I examine how he can use all of it to decide promptly how to allocate the fund's portfolio.
When a wide set of forecasts of some future economic event is available, decision makers usually attempt to discover which forecast is best, but in almost all cases the true underlying process cannot be identified ex ante. This observation has led researchers to introduce several sources of uncertainty into forecasting exercises. Supporting the existing literature, the paper employs a novel approach to transform predicted returns into portfolio asset allocations and evaluates their relative performance. Dealing first with model uncertainty, as in Pesaran and Timmermann (1996), I consider a richer parameterization of the forecasting model and find that the predictive power of various economic and financial factors over excess returns changes through time.
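One way to make the time-varying predictive power concrete is a recursive best-subset exercise in the spirit of Pesaran and Timmermann: at each forecast date, every small subset of candidate predictors is refitted and the specification preferred by an information criterion is recorded. The sketch below uses synthetic factors and BIC; it is only a stand-in for the thesis's thick-modelling procedure, not a reproduction of it.

```python
# Sketch of recursive model selection: at each date, refit every small subset
# of candidate predictors on past data and keep the one with the lowest BIC.
# Predictors and data are synthetic; this is not the thesis's exact procedure.
from itertools import combinations
import numpy as np

def bic(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    return n * np.log(resid @ resid / n) + k * np.log(n)

rng = np.random.default_rng(2)
T, K = 200, 6
factors = rng.normal(size=(T, K))                 # candidate macro predictors
excess_ret = 0.3 * factors[:, 0] - 0.2 * factors[:, 3] + rng.normal(scale=1.0, size=T)

window = 120
for t in range(window, T):
    y, X_all = excess_ret[1:t], factors[: t - 1]  # predictors lagged one period
    best = min(
        (combo for r in range(1, 4) for combo in combinations(range(K), r)),
        key=lambda c: bic(y, np.column_stack([np.ones(t - 1), X_all[:, list(c)]])),
    )
    # the selected model changes through time as predictive power shifts
    if t % 20 == 0:
        print(t, "selected predictors:", best)
```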
222 |
Sequential experimental design under competing prior knowledge
Vastola, Justin Timothy, 11 December 2012 (has links)
This research focuses on developing a comprehensive framework for designing and modeling experiments in the presence of multiple sources of competing prior knowledge. In particular, methodology is proposed for process optimization in high-cost, low-resource experimental settings where the underlying response function can be highly non-linear. In the first part of this research, an initial experimental design criterion is proposed for optimization problems by combining multiple, potentially competing, sources of prior information: engineering models, expert opinion, and data from past experimentation on similar, non-identical systems. New methodology is provided for incorporating and combining conjectured models and data in both the initial modeling and design stages. The second part of this research focuses on the development of a batch sequential design procedure for optimizing high-cost, low-resource experiments with complicated response surfaces. The success of the proposed approach lies in melding a flexible, sequential design algorithm with a powerful local modeling approach. Batch experiments are designed sequentially to balance space-filling properties against the search for the optimal operating condition. Local model calibration and averaging techniques are introduced to easily allow the incorporation of statistical models and engineering knowledge, even if such knowledge pertains to only subregions of the complete design space. The overall process iterates between adapting designs, adapting models, and updating engineering knowledge over time. Applications to nanomanufacturing are provided throughout.
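As a simple illustration of combining competing prior sources (the thesis develops its own design and modeling criteria, so the inverse-variance pooling below is merely an assumed stand-in), predictions from an engineering model, an expert and past data on a similar system can be pooled pointwise by precision weighting:

```python
# Sketch: pooling competing prior predictions of a response surface by
# precision (inverse-variance) weighting. Sources and variances are hypothetical;
# the thesis develops its own design criterion rather than this simple pooling.
import numpy as np

def pool_priors(means: np.ndarray, variances: np.ndarray):
    """Inverse-variance weighted combination of competing prior predictions."""
    w = 1.0 / variances
    w /= w.sum(axis=0)
    pooled_mean = (w * means).sum(axis=0)
    pooled_var = 1.0 / (1.0 / variances).sum(axis=0)
    return pooled_mean, pooled_var

x = np.linspace(0.0, 1.0, 5)                       # candidate design points
means = np.vstack([
    2.0 * x,               # engineering model
    1.5 * x + 0.3,         # expert opinion
    2.2 * x - 0.1,         # fit to data from a similar, non-identical system
])
variances = np.vstack([np.full(5, 0.4), np.full(5, 0.9), np.full(5, 0.2)])
print(pool_priors(means, variances))
```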
223 |
Morphing arquitectónico: transformaciones entre las casas usonianas de Frank Lloyd Wright
Herrera Velazco, Rodrigo, 16 February 2012 (has links)
This thesis investigates the process of transformation of architectural form, analysing a specific technique called morphing. The morphing technique is used in computer graphics to transform shape between two or more given objects. From a technical point of view, the existing methodologies and applications are reviewed and updated, together with their specific characteristics and their implications for architecture. From a practical point of view, a series of models of Frank Lloyd Wright's Usonian houses is used in order to experiment with the technique and see what uses can be obtained from its design logic. As a result of this analysis, a generic methodology for the procedure of architectural morphing is obtained. / This thesis investigates the transformation of architectural form, analyzing a specific technique called morphing.
Morphing is a technique used in computer graphics to transform a shape between two or more given objects. From a technical point of view, the existing techniques are reviewed and updated, as well as their specific characteristics and their impact on architecture. From a practical point of view, a number of models of Frank Lloyd Wright's Usonian houses are used to experiment with the technique and see what uses can be derived from its design logic. As a result of this analysis, a generic methodology for the process of architectural morphing is obtained.
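At its core, geometric morphing interpolates between corresponding elements of a source and a target form. The sketch below shows only that interpolation step on two toy house footprints; establishing the correspondence between forms, which is the substantive problem addressed in the thesis, is assumed to be given.

```python
# Minimal sketch of shape morphing: linearly interpolate between corresponding
# vertices of a source and a target form. The vertex correspondence (the hard
# part of architectural morphing) is assumed to be given here.
import numpy as np

def morph(source: np.ndarray, target: np.ndarray, t: float) -> np.ndarray:
    """Intermediate form at parameter t in [0, 1] (0 = source, 1 = target)."""
    return (1.0 - t) * source + t * target

# toy footprints of two houses as 2-D vertex lists (same vertex count and order)
house_a = np.array([[0, 0], [10, 0], [10, 6], [0, 6]], dtype=float)
house_b = np.array([[0, 0], [12, 0], [12, 4], [2, 8]], dtype=float)

for t in (0.0, 0.5, 1.0):
    print(f"t={t}:", morph(house_a, house_b, t).tolist())
```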
224 |
Adaptive Random Search Methods for Simulation Optimization
Prudius, Andrei A., 26 June 2007 (has links)
This thesis is concerned with identifying the best decision among a set of possible decisions in the presence of uncertainty. We are primarily interested in situations where the objective function value at any feasible solution needs to be estimated, for example via a "black-box" simulation procedure. We develop adaptive random search methods for solving such simulation optimization problems. The methods are adaptive in the sense that they use information gathered during previous iterations to decide how simulation effort is expended in the current iteration. We consider random search because such methods assume very little about the structure of the underlying problem, and hence can be applied to solve complex simulation optimization problems with little expertise required from an end-user. Consequently, such methods are suitable for inclusion in simulation software.
We first identify desirable features that algorithms for discrete simulation optimization need to possess to exhibit attractive empirical performance. Our approach emphasizes maintaining an appropriate balance between exploration, exploitation, and estimation. We also present two new and almost surely convergent random search methods that possess these desirable features and demonstrate their empirical attractiveness.
Second, we develop two frameworks for designing adaptive and almost surely convergent random search methods for discrete simulation optimization. Our frameworks involve averaging, in that all decisions that require estimates of the objective function values at various feasible solutions are based on the averages of all observations collected at these solutions so far. We present two new and almost surely convergent variants of simulated annealing and demonstrate the empirical effectiveness of averaging and adaptivity in the context of simulated annealing.
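To make the averaging idea concrete, the sketch below runs a noisy simulated annealing search in which every acceptance decision uses the running average of all observations collected so far at the two solutions being compared. The objective, neighbourhood and cooling schedule are illustrative assumptions, not the variants analysed in the thesis.

```python
# Sketch of simulated annealing for noisy (simulation) optimization where every
# estimate is the running average of all observations collected at a solution.
# The cooling schedule, neighbourhood and test function are illustrative only.
import math, random
from collections import defaultdict

def noisy_f(x):                       # true objective x**2, observed with noise
    return x * x + random.gauss(0.0, 1.0)

sums, counts = defaultdict(float), defaultdict(int)

def averaged_estimate(x):
    sums[x] += noisy_f(x)
    counts[x] += 1
    return sums[x] / counts[x]        # average of ALL observations at x so far

current = 10
for k in range(1, 2001):
    temp = 10.0 / math.log(k + 1)     # logarithmic cooling (illustrative)
    candidate = max(-20, min(20, current + random.choice([-1, 1])))
    delta = averaged_estimate(candidate) - averaged_estimate(current)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        current = candidate
print("estimated optimum:", current)
```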
Finally, we present three random search methods for solving simulation optimization problems with uncountable feasible regions. One of the approaches is adaptive, while the other two are based on pure random search. We provide conditions under which the three methods are convergent, both in probability and almost surely. Lastly, we include a computational study that demonstrates the effectiveness of the methods when compared to some other approaches available in the literature.
225 |
Efficient Index Structures For Video Databases
Acar, Esra, 01 February 2008 (has links) (PDF)
Content-based retrieval of multimedia data is still an active research area, and efficient retrieval of video data has proven to be a difficult task for content-based video retrieval systems. This thesis presents a Content-Based Video Retrieval (CBVR) system that adapts two different index structures, namely Slim-Tree and BitMatrix, for efficiently retrieving videos based on low-level features such as color, texture, shape and motion. The system represents the low-level features of video data with MPEG-7 descriptors, extracted from video shots using the MPEG-7 reference software and stored in a native XML database. The low-level descriptors used in the study are Color Layout (CL), Dominant Color (DC), Edge Histogram (EH), Region Shape (RS) and Motion Activity (MA). An Ordered Weighted Averaging (OWA) operator in Slim-Tree and BitMatrix aggregates these features to find the final similarity between any two objects. The system supports three different types of queries: exact match queries, k-NN queries and range queries. The experiments in this study cover index construction, index update, query response time and retrieval efficiency, using the ANMRR performance metric and precision/recall scores. The experimental results show that using BitMatrix together with the Ordered Weighted Averaging method is superior for content-based video retrieval systems.
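As an illustration of the aggregation step, the sketch below applies an Ordered Weighted Averaging operator to a vector of per-descriptor distances (CL, DC, EH, RS, MA) between two shots. The distance values and the weight vector are made up; the thesis's actual MPEG-7 distance measures and weights are not reproduced here.

```python
# Sketch of Ordered Weighted Averaging (OWA): per-descriptor distances between
# two video shots are sorted and then combined with position-based weights.
# The descriptor distances and weight vector below are illustrative only.
import numpy as np

def owa(values: np.ndarray, weights: np.ndarray) -> float:
    """OWA aggregation: weights apply to the sorted (descending) values."""
    assert np.isclose(weights.sum(), 1.0)
    return float(np.sort(values)[::-1] @ weights)

# normalized distances for Color Layout, Dominant Color, Edge Histogram,
# Region Shape and Motion Activity between a query shot and a database shot
distances = np.array([0.12, 0.40, 0.25, 0.70, 0.05])
weights = np.array([0.1, 0.15, 0.2, 0.25, 0.3])   # emphasis on smaller distances

print("aggregated distance:", owa(distances, weights))
```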
226 |
Validation des modèles statistiques tenant compte des variables dépendantes du temps en prévention primaire des maladies cérébrovasculaires
Kis, Loredana, 07 1900 (has links)
The main interest of this research is the validation of a statistical method in pharmacoepidemiology. More precisely, we will compare the results of a previous study carried out with a case-control design nested in the cohort, used to account for the average exposure to treatment:
– with the results obtained in a cohort design using the time-varying exposure variable, without adjusting for the time elapsed since exposure;
– with the results obtained using the cumulative exposure weighted by the recent past;
– with the results obtained using the Bayesian method.
The covariates will be estimated by the classical approach as well as by the nonparametric Bayesian approach. For the latter, Bayesian model averaging will be used to model the uncertainty surrounding the choice of model. The technique used in the Bayesian approach was proposed in 1997 but, to our knowledge, it has not been used with a time-dependent variable. In order to model the cumulative effect of the time-varying exposure, in the classical approach the function assigning weights according to the recent past will be estimated using regression splines.
In order to compare the results with a previously conducted study, a cohort of people with a diagnosis of hypertension will be constructed using the RAMQ and Med-Echo databases.
The Cox model including two time-varying variables will be used. The time-varying variables considered in this thesis are the dependent variable (first cerebrovascular event) and one of the independent variables, namely the exposure. / The main interest of this research is the validation of a statistical method
in pharmacoepidemiology. Specifically, we will compare the results of a previous study performed with a nested case-control design, which took into account the average exposure to treatment, to:
– the results obtained in a cohort study, using the time-dependent exposure, with no adjustment for time since exposure;
– the results obtained using the cumulative exposure weighted by the recent past;
– the results obtained using Bayesian model averaging.
Covariates are estimated by the classical approach and by using a nonparametric Bayesian approach. In the latter, Bayesian model averaging will be used to model the uncertainty in the choice of models. To model the cumulative effect of exposure which varies over time, in the classical approach the function assigning weights according to recency will be estimated using regression splines.
In order to compare the results with previous studies, a cohort of people diagnosed with hypertension will be constructed using the databases of the RAMQ and Med-Echo.
The Cox model including two time-varying variables will be used. The time-dependent variables considered in this paper are the dependent variable (first stroke event) and one of the independent variables, namely the exposure.
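A weighted cumulative exposure of the kind referred to above can be sketched as follows: each past dose contributes to the covariate at time t according to a recency weight. The exponential weight below is an arbitrary stand-in for the spline-estimated weight function, and the dispensing record is hypothetical.

```python
# Sketch of a weighted cumulative exposure: past exposures contribute to the
# covariate at time t according to a recency weight. The exponential weight
# below is a stand-in for the spline-estimated weight function in the text.
import numpy as np

def weighted_cumulative_exposure(doses, times, t, half_life=30.0):
    """Sum of past doses weighted by the time elapsed before t (days)."""
    doses, times = np.asarray(doses, float), np.asarray(times, float)
    past = times <= t
    lag = t - times[past]
    weights = 0.5 ** (lag / half_life)          # more recent exposure counts more
    return float(np.sum(weights * doses[past]))

# hypothetical dispensing record: dose of 1 on each prescription fill day
fill_days = [0, 30, 60, 150]
for t in (10, 90, 200):
    print(t, round(weighted_cumulative_exposure([1, 1, 1, 1], fill_days, t), 3))
```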
227 |
Analysis of the spatial heterogeneity of land surface parameters and energy flux densities / Analyse der räumlichen Heterogenität von Landoberflächenparametern und Energieflussdichten
Tittebrand, Antje, 10 June 2010 (has links)
This work was written as a cumulative doctoral thesis based on reviewed publications.
Climate projections are mainly based on the results of numerical simulations from global or regional climate models. Up to now, the processes between the atmosphere and the land surface are only rudimentarily understood, which is one of the major sources of uncertainty in existing models. In order to reduce parameterisation uncertainties and to find a reasonable description of sub-grid heterogeneities, the determination and evaluation of parameterisation schemes for modelling require as many datasets from different spatial scales as possible. This work contributes to this topic by employing different datasets from different platforms. Its objective was to analyse the spatial heterogeneity of land surface parameters and energy flux densities obtained from both satellite observations with different spatial and temporal resolutions and in-situ measurements. The investigations were carried out for two target areas in Germany. First, satellite data for the years 2002 and 2003 from the LITFASS-area (Lindenberg Inhomogeneous Terrain - Fluxes between Atmosphere and Surface: a longterm Study) were analysed and validated. Second, the data from the experimental field sites of the FLUXNET cluster around Tharandt from the years 2006 and 2007 were used to determine the NDVI (Normalised Difference Vegetation Index, used for identifying vegetated areas and their "condition").
The core of the study was the determination of land surface characteristics and hence radiant and energy flux densities (net radiation, soil heat flux, sensible and latent heat flux) using the three optical satellite sensors ETM+ (Enhanced Thematic Mapper), MODIS (Moderate Resolution Imaging Spectroradiometer) and AVHRR 3 (Advanced Very High Resolution Radiometer) with different spatial (30 m – 1 km) and temporal (1 day – 16 days) resolution. Different sensor characteristics and different datasets for land use classification can both lead to deviations in the resulting energy fluxes between the sensors. Thus, sensor differences were quantified, sensor adaptation methods were implemented and a quality analysis for land use classifications was performed. The result is a single parameterisation scheme that allows for the determination of the energy fluxes from all three different sensors.
The main focus was the derivation of the latent heat flux (L.E) using the Penman-Monteith (P-M) approach. Satellite data provide measurements of spectral reflectance and surface temperatures. The P-M approach requires further surface parameters not offered by satellite data. These parameters include, for example, the NDVI, Leaf Area Index (LAI), wind speed, relative humidity, vegetation height and roughness length. They were derived indirectly from the available satellite or in-situ measurements. If no data were available, so-called default values from the literature were used. The quality of these parameters strongly influences the accuracy of the radiation and energy fluxes. Sensitivity studies showed that NDVI is one of the most important parameters for the determination of evaporation. In contrast, parameters such as vegetation height and measurement height were shown to have only a minor influence on L.E, which justifies the use of default values for them.
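For reference, the textbook form of the Penman-Monteith combination equation that such an approach typically rests on is given below; the thesis's specific parameterisation of the resistances from satellite-derived quantities (NDVI, LAI) is not reproduced here.

```latex
% Standard Penman-Monteith combination equation (textbook form, assumed here)
\lambda E \;=\; \frac{\Delta\,(R_n - G) \;+\; \rho_a c_p\,\dfrac{e_s - e_a}{r_a}}
                     {\Delta \;+\; \gamma\left(1 + \dfrac{r_s}{r_a}\right)}
% \lambda E : latent heat flux, R_n : net radiation, G : soil heat flux,
% \Delta : slope of the saturation vapour pressure curve, \gamma : psychrometric
% constant, e_s - e_a : vapour pressure deficit, \rho_a c_p : volumetric heat
% capacity of air, r_a / r_s : aerodynamic and (bulk) surface resistance.
```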
Due to the key role of NDVI, a field study was carried out investigating the spatial variability and sensitivity of NDVI over five different land use types (winter wheat, corn, grass, beech and spruce). Methods to determine this parameter not only from space (spectral), but also from in-situ tower measurements (broadband) and spectrometer data (spectral) were compared. The best agreement between the methods was found for the winter wheat and grass measurements in 2006; for these land use types the results differed by less than 10 % and 15 %, respectively. Larger differences were obtained for the forest measurements. The correlations between the daily MODIS-NDVI data and the in-situ NDVI inferred from the spectrometer and the broadband measurements were r=0.67 and r=0.51, respectively.
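The index itself is a simple band ratio. The sketch below computes NDVI from red and near-infrared reflectances; the values are illustrative, not measurements from the field study, and the spectral-versus-broadband differences discussed above are not modelled.

```python
# Sketch: NDVI from red and near-infrared reflectance, the band ratio behind
# both the satellite (spectral) and tower (broadband) estimates compared above.
# The reflectance values are illustrative, not measurements from the field study.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

# toy reflectances for winter wheat, grass and spruce pixels
nir = np.array([0.45, 0.40, 0.35])
red = np.array([0.06, 0.08, 0.05])
print(np.round(ndvi(nir, red), 2))   # dense vegetation yields high NDVI (here 0.67-0.76)
```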
Subsequently, the spatial variability of land surface parameters and fluxes was analysed. The different spatial resolutions of the satellite sensors can be used to describe subscale heterogeneity from one scale to another and to study the effects of spatial averaging. Therefore, land-use-dependent parameters and fluxes were investigated to find typical distribution patterns of land surface properties and energy fluxes. Incorporating the distribution patterns found here for albedo and NDVI from ETM+ data into models has high potential for calculating representative energy flux distributions on a coarser scale. The distribution patterns were expressed as probability density functions (PDFs). First results of applying the PDFs of albedo, NDVI, relative humidity, and wind speed to the L.E computation are encouraging and show the high potential of this method.
Summing up, the method of satellite-based surface parameter and energy flux determination has been shown to work reliably on different temporal and spatial scales. The data are useful for detailed analyses of the spatial variability of a landscape and for the description of sub-grid heterogeneity, as needed in model applications. Their usability as input parameters for modelling on different scales is the second important result of this work. The derived vegetation parameters, e.g. LAI and plant cover, have realistic values and were used as model input for the Lokalmodell of the German Weather Service, which significantly improved the model results for L.E. Additionally, thermal parameter fields, e.g. surface temperature from ETM+ at 30 m spatial resolution, were used as input for SVAT modelling (Soil-Vegetation-Atmosphere-Transfer scheme). Thus, more realistic L.E results were obtained, providing highly resolved areal information. / This thesis was written as a cumulative dissertation based on peer-reviewed publications.
Climate projections are generally based on the results of numerical simulations with global or regional climate models. One of the decisive uncertainties of existing models lies in the still insufficient understanding of the interaction processes between the atmosphere and land surfaces and the resulting lack of corresponding parameterisations. To address the problem of uncertain model parameterisation and, for example, to describe sub-grid heterogeneity in a way that makes it usable for models, as many datasets as possible are needed for the determination and evaluation of model parameterisation approaches. This thesis contributes to this topic through the use of different datasets from different platforms. The aim of the study was to determine the spatial heterogeneity of land surface parameters and energy flux densities from satellite data of different spatial and temporal resolution and from in-situ data. The investigations were carried out for two target areas in Germany. For the LITFASS area (Lindenberg Inhomogeneous Terrain - Fluxes between Atmosphere and Surface: a longterm Study), satellite data of the years 2002 and 2003 were analysed and validated. In addition, an NDVI study (Normalised Difference Vegetation Index: a measure for detecting vegetated areas, their vitality and density) was carried out within this work on the test sites of the FLUXNET cluster around Tharandt in 2006 and 2007.
The basis of the thesis was the determination of land surface properties and the resulting energy fluxes from three optical sensors (ETM+ (Enhanced Thematic Mapper), MODIS (Moderate Resolution Imaging Spectroradiometer) and AVHRR 3 (Advanced Very High Resolution Radiometer)) with different spatial (30 m – 1 km) and temporal (1 – 16 days) resolutions. Different sensor characteristics, as well as the use of different and partly inaccurate datasets for land use classification, lead to deviations in the results of the individual sensors. By quantifying the sensor differences, adapting the results of the sensors to each other and performing a quality analysis of different land use classifications, a basis was created for a comparable parameterisation of the surface parameters and hence also of the energy fluxes calculated from them.
The focus was on the determination of the latent heat flux (L.E) using the Penman-Monteith approach (P-M). Satellite data provide measurements of spectral reflectance and surface temperature. The P-M equation requires further surface parameters, such as the NDVI, the Leaf Area Index (LAI), wind speed, relative humidity, vegetation height or roughness length, which cannot be determined from the satellite data. They must be derived indirectly from the satellite measurements mentioned above or from in-situ measurements. If no data are available from these sources either, so-called default values from the literature can be used. The quality of these parameters has a large influence on the determination of the radiation and energy fluxes. Sensitivity studies within this work show the importance of the NDVI as one of the most important parameters in the P-M estimation of evaporation. In contrast, it became clear that, for example, vegetation height and measurement height have a relatively small influence on L.E, so that the use of default values for these parameters is justified.
Because of the key role the NDVI plays in the determination of evaporation, a field study investigated the NDVI over five different land use types (winter wheat, maize, grass, beech and spruce) with respect to its spatial variability and sensitivity. Several determination methods were tested, in which the NDVI is derived not only from satellite data (spectral) but also from in-situ tower measurements (broadband) and spectrometer measurements (spectral). The best agreement of the results was found for winter wheat and grass in 2006. For these land use types the maximum differences between the three methods were 10 % and 15 %, respectively. Larger differences were found for the forest sites. The correlation between satellite and spectrometer measurements was r=0.67; for satellite and tower measurements a value of r=0.5 was obtained.
Based on this preparatory work, the spatial variability of land surface parameters and fluxes was investigated. The different spatial resolutions of the satellites can be used both to describe sub-grid heterogeneity and to test the effect of spatial averaging procedures. For this purpose, parameters and energy fluxes were analysed as a function of land use class in order to find typical distribution patterns of these quantities. The use of the distribution patterns (in the form of probability density functions, PDFs) found for albedo and NDVI from ETM+ data offers high potential as model input for obtaining representative PDFs of the energy fluxes on coarser scales. The first results of using the PDFs of albedo, NDVI, relative humidity and wind speed for the determination of L.E were very encouraging and showed the high potential of the method.
In summary, the method of deriving surface parameters and energy fluxes from satellite data provides reliable data on different temporal and spatial scales. The data are suitable for a detailed analysis of the spatial variability of the landscape and for the description of sub-grid heterogeneity, as is often needed in model applications. Their usability as input parameters in models on different scales is the second important result of this work. Vegetation parameters derived from satellite data, such as the LAI or plant cover, yield realistic results that could, for example, be implemented as model input into the Lokalmodell of the German Weather Service and significantly improved the model results for L.E. Thermal parameters, such as the surface temperature from ETM+ data at 30 m resolution, were also used as input parameters of a Soil-Vegetation-Atmosphere-Transfer (SVAT) model. This yields more realistic results for L.E that offer highly resolved areal information.
228 |
Fractional Stochastic Dynamics in Structural Stability Analysis
Deng, Jian, January 2013 (has links)
The objective of this thesis is to develop a novel methodology of fractional stochastic dynamics to study the stochastic stability of viscoelastic systems under stochastic loadings.
Numerous structures in civil engineering are driven by dynamic forces, such as seismic and wind loads, which can be described satisfactorily only by using probabilistic models, such as white noise processes, real noise processes, or bounded noise processes. Viscoelastic materials exhibit time-dependent stress relaxation and creep; it has been shown that fractional calculus provides a unique and powerful mathematical tool to model such a hereditary property. Investigation of the stochastic stability of viscoelastic systems with fractional calculus frequently leads to a parametrized family of fractional stochastic differential equations of motion. Parametric excitation may cause parametric resonance or instability, which is more dangerous than ordinary resonance as it is characterized by exponential growth of the response amplitudes even in the presence of damping.
The Lyapunov exponents and moment Lyapunov exponents provide not only information about the stability or instability of stochastic systems, but also how rapidly the response grows or diminishes with time. Lyapunov exponents characterize sample stability or instability. However, sample stability cannot assure moment stability. Hence, to obtain a complete picture of the dynamic stability, it is important to study both the top Lyapunov exponent and the moment Lyapunov exponent. Unfortunately, it is very difficult to obtain accurate values of these two exponents, and one has to resort to numerical and approximate approaches.
The main contributions of this thesis are: (1) A new numerical simulation method is proposed to determine moment Lyapunov exponents of fractional stochastic systems, involving three steps: discretization of fractional derivatives, numerical solution of the fractional equation, and an algorithm for calculating Lyapunov exponents from small data sets. (2) A higher-order stochastic averaging method is developed and applied to investigate the stochastic stability of fractional viscoelastic single-degree-of-freedom structures under white noise, real noise, or bounded noise excitation. (3) For two-degree-of-freedom coupled non-gyroscopic and gyroscopic viscoelastic systems under random excitation, the Stratonovich equations of motion are set up and then decoupled into four-dimensional Ito stochastic differential equations, by making use of the method of stochastic averaging for the non-viscoelastic terms and the method of Larionov for the viscoelastic terms. An elegant scheme for formulating the eigenvalue problems is presented by using Khasminskii's and Wedig's mathematical transformations of the decoupled Ito equations. Moment Lyapunov exponents are approximately determined by solving the eigenvalue problems through Fourier series expansion. Stability boundaries, critical excitations, and a stability index are obtained. The effects of various parameters on the stochastic stability of the system are discussed, and parametric resonances are studied in detail. Approximate analytical results are confirmed by numerical simulations.
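The first step of the proposed simulation method, discretization of fractional derivatives, is commonly done with a Grünwald-Letnikov scheme; whether the thesis uses exactly this scheme is not stated in the abstract, so the sketch below is only one plausible realisation, verified against the known half-derivative of f(t) = t.

```python
# Sketch of a Grünwald-Letnikov discretisation of a fractional derivative of
# order mu on a uniform grid, one common choice for the "discretisation of
# fractional derivatives" step; the thesis's exact scheme may differ.
import numpy as np

def gl_weights(mu: float, n: int) -> np.ndarray:
    """Recursive Grünwald-Letnikov coefficients w_k = (-1)^k C(mu, k)."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (mu + 1.0) / k)
    return w

def gl_derivative(f_values: np.ndarray, mu: float, h: float) -> np.ndarray:
    """Approximate D^mu f at each grid point from the history of f."""
    n = len(f_values)
    w = gl_weights(mu, n)
    return np.array([np.dot(w[: j + 1], f_values[j::-1]) for j in range(n)]) / h**mu

t = np.linspace(0.0, 1.0, 101)
approx = gl_derivative(t, 0.5, t[1] - t[0])          # D^0.5 of f(t) = t
exact = 2.0 * np.sqrt(t / np.pi)                     # known closed form
print("max abs error:", float(np.max(np.abs(approx - exact))))
```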
229 |
Modelling Primary Energy Consumption under Model Uncertainty
Csereklyei, Zsuzsanna; Humer, Stefan, 11 1900 (links) (PDF)
This paper examines the long-term relationship between primary energy consumption and other key macroeconomic variables, including real GDP, labour force, capital stock and technology, using a panel dataset for 64 countries over the period 1965-2009. Deploying panel error correction models, we find that there is a positive relationship running from physical capital, GDP, and population to primary energy consumption. We observe however a negative relationship between total factor productivity and primary energy usage. Significant differences arise in the magnitude of the cointegration coefficients, when we allow for differences in geopolitics and wealth levels. We also argue that inference on the basis of a single model without taking model uncertainty into account can lead to biased conclusions. Consequently, we address this problem by applying simple model averaging techniques to the estimated panel cointegration models. We find that tackling the uncertainty associated with selecting a single model with model averaging techniques leads to a more accurate representation of the link between energy consumption and the other macroeconomic variables, and to a significantly increased out-of-sample forecast performance. (authors' abstract) / Series: Department of Economics Working Paper Series
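As a toy illustration of the model-averaging idea (the paper averages panel cointegration models, which the ordinary least squares specifications below do not attempt to reproduce), several candidate regressions can be combined with Akaike weights:

```python
# Sketch of simple model averaging: combine coefficient estimates from several
# candidate specifications using Akaike weights. The data and specifications
# are synthetic; the paper averages panel cointegration models instead.
import numpy as np

def fit_ols(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    aic = n * np.log(resid @ resid / n) + 2 * k
    return beta, aic

rng = np.random.default_rng(3)
n = 300
capital, gdp, tfp = rng.normal(size=(3, n))
energy = 0.5 * capital + 0.8 * gdp - 0.3 * tfp + rng.normal(scale=0.5, size=n)

# every spec puts GDP first so beta[1] is always the GDP coefficient
specs = {"GDP": [gdp], "GDP+K": [gdp, capital], "GDP+K+TFP": [gdp, capital, tfp]}
fits = {name: fit_ols(energy, np.column_stack([np.ones(n), *cols]))
        for name, cols in specs.items()}

aics = np.array([a for _, a in fits.values()])
weights = np.exp(-0.5 * (aics - aics.min()))
weights /= weights.sum()
for (name, (beta, _)), w in zip(fits.items(), weights):
    print(f"{name:10s} weight={w:.2f} GDP coefficient={beta[1]:.2f}")
print("model-averaged GDP coefficient:",
      round(float(sum(w * fits[name][0][1] for name, w in zip(fits, weights))), 2))
```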
230 |
Bayesian Methods for Genetic Association Studies
Xu, Lizhen, 08 January 2013 (has links)
We develop statistical methods for tackling two important problems in genetic association studies. First, we propose a Bayesian approach to overcome the winner's curse in genetic studies. Second, we consider a Bayesian latent variable model for analyzing longitudinal family data with pleiotropic phenotypes.
The winner's curse in genetic association studies refers to the estimation bias of the reported odds ratios (OR) for an associated genetic variant from the initial discovery samples. It is a consequence of the sequential procedure in which the estimated effect of an associated genetic marker must first pass a stringent significance threshold. We propose a hierarchical Bayes method in which a spike-and-slab prior is used to account for the possibility that the significant test result may be due to chance. We examine the robustness of the method using different priors corresponding to different degrees of confidence in the testing results and propose a Bayesian model averaging procedure to combine estimates produced by different models. The Bayesian estimators yield smaller variance compared to the conditional likelihood estimator and outperform the latter in low-power studies. We investigate the performance of the method with simulations and applications to four real data examples.
Pleiotropy occurs when a single genetic factor influences multiple quantitative or qualitative phenotypes, and it is present in many genetic studies of complex human traits. Longitudinal family studies combine the features of longitudinal studies in individuals and cross-sectional studies in families, and therefore provide more information about the genetic and environmental factors associated with the trait of interest. We propose a Bayesian latent variable modeling approach to model multiple phenotypes simultaneously in order to detect pleiotropic effects, allowing for longitudinal and/or family data. An efficient MCMC algorithm is developed to obtain the posterior samples by using hierarchical centering and parameter expansion techniques. We apply spike-and-slab prior methods to test whether the phenotypes are significantly associated with the latent disease status. We compute Bayes factors using path sampling and discuss their application in testing the significance of factor loadings and the indirect fixed effects. We examine the performance of our methods via extensive simulations and apply them to blood pressure data from a genetic study of type 1 diabetes (T1D) complications.
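A minimal sketch of the spike-and-slab shrinkage idea behind the winner's-curse correction: the reported log odds ratio is modelled as normal around the true effect, and the prior mixes a near-zero spike with a diffuse slab, so the posterior mean pulls a barely significant estimate toward zero. All priors and numbers below are illustrative assumptions, not the thesis's hierarchical model.

```python
# Sketch of spike-and-slab shrinkage of a "winner's curse"-prone estimate:
# the reported log odds ratio is treated as normal around the true effect, and
# the prior mixes a near-zero spike with a diffuse slab. All numbers are toy.
import numpy as np
from scipy.stats import norm

def spike_slab_posterior_mean(beta_hat, se, p_spike=0.5, tau_spike=0.01, tau_slab=0.5):
    comps = []
    for p, tau in ((p_spike, tau_spike), (1.0 - p_spike, tau_slab)):
        marg = norm.pdf(beta_hat, 0.0, np.hypot(se, tau))   # marginal likelihood
        post_var = 1.0 / (1.0 / se**2 + 1.0 / tau**2)
        post_mean = post_var * beta_hat / se**2             # prior mean is zero
        comps.append((p * marg, post_mean))
    weights = np.array([w for w, _ in comps])
    weights /= weights.sum()
    return float(sum(w * m for w, (_, m) in zip(weights, comps)))

# a marker that just passed a stringent threshold: OR = 1.35, z of about 2
beta_hat, se = np.log(1.35), np.log(1.35) / 2.0
shrunk = spike_slab_posterior_mean(beta_hat, se)
print("reported OR:", round(np.exp(beta_hat), 2), "-> shrunk OR:", round(np.exp(shrunk), 2))
```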