291 |
Analysis for Real Estate Investment of China : Based on the Warning System of Monitoring Macro Economy Prosperity. Shu, Jingying; Song, Jiawei. January 2011 (has links)
The real estate industry plays a significant role in China's rapid economic development. However, with increasingly high housing prices and scarce land resources, real estate development is caught in a vicious circle. A large number of families cannot afford housing, yet housing prices show no tendency to decrease, which widens the gap between rich and poor and indirectly destabilizes society. Creating a healthy and stable real estate investment market is therefore extremely urgent. The purpose of this thesis is to examine the relationship between the leading index of macro-economic prosperity and real estate investment. We find that the leading indicator Granger-causes real estate investment and that real estate investment simultaneously Granger-causes the leading indicator. Building on this result, the thesis also forecasts real estate investment with VAR models over the following seven years, a horizon shown to correspond to one cycle of the real estate market. Finally, targeted policy suggestions are drawn from the analysis.
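As a rough illustration of the estimation strategy this abstract describes, the sketch below runs bidirectional Granger-causality tests and a VAR forecast in Python. The data are synthetic and the variable names (leading_index, investment) are placeholders, not the thesis's actual series or code.

```python
# Illustrative sketch (synthetic data): bidirectional Granger tests and a VAR forecast
# between a macro-prosperity leading index and real estate investment.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 120  # e.g. ten years of monthly observations (synthetic)
leading_index = np.cumsum(rng.normal(0.2, 1.0, n))
investment = 0.6 * np.roll(leading_index, 3) + np.cumsum(rng.normal(0.1, 1.0, n))
df = pd.DataFrame({"leading_index": leading_index, "investment": investment}).diff().dropna()

# Each call tests whether the second column Granger-causes the first
grangercausalitytests(df[["investment", "leading_index"]], maxlag=4)
grangercausalitytests(df[["leading_index", "investment"]], maxlag=4)

# VAR model and an out-of-sample forecast, analogous to the multi-year forecast in the thesis
model = VAR(df)
results = model.fit(maxlags=4, ic="aic")
forecast = results.forecast(df.values[-results.k_ar:], steps=12)
print(forecast[:, 1])  # forecast path of the (differenced) investment series
```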
|
293 |
Causal Discovery Algorithms for Context-Specific Models / Kausala Upptäckts Algoritmer för Kontext-Specifika Modeller. Ibrahim, Mohamed Nazaal. January 2021 (has links)
Despite having a philosophical grounding in empiricism that spans several centuries, the algorithmization of causal discovery started only a few decades ago. This formalization of studying causal relationships relies on connections between graphs and probability distributions. In this setting, the task of causal discovery is to recover the graph that best describes the causal structure underlying the available data. A particular class of causal discovery algorithms, called constraint-based methods, relies on Directed Acyclic Graphs (DAGs) as an encoding of Conditional Independence (CI) relations that carry some level of causal information. However, a CI relation such as X and Y being independent conditioned on Z assumes that the independence holds for all possible values Z can take, which can be unrealistic in practice, where causal relations are often context-specific. In this thesis we develop constraint-based algorithms to learn causal structure from Context-Specific Independence (CSI) relations in the discrete setting, where the independence relations are of the form X and Y being independent given Z and C = a for some value a. This is done by using context-specific trees, or CStrees for short, which can encode CSI relations.
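A minimal sketch of the kind of context-specific independence the thesis works with: a chi-square CI test of X against Y given Z, evaluated on all data versus only within the context C = 1. The data are synthetic and this is not the thesis's CStree learning algorithm, only an illustration of a CSI relation.

```python
# Hedged sketch: X and Y are dependent only when C == 0, so the CI relation
# "X independent of Y given Z" holds only in the context C = 1.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n = 5000
C = rng.integers(0, 2, n)
Z = rng.integers(0, 2, n)
X = rng.integers(0, 2, n)
Y = np.where(C == 0, X, rng.integers(0, 2, n))  # copies X when C == 0, pure noise when C == 1
df = pd.DataFrame({"X": X, "Y": Y, "Z": Z, "C": C})

def ci_pvalues(data, x, y, z):
    """Chi-square test of x versus y within each stratum of z; returns per-stratum p-values."""
    return {val: chi2_contingency(pd.crosstab(grp[x], grp[y]))[1]
            for val, grp in data.groupby(z)}

print("X vs Y given Z, all data:     ", ci_pvalues(df, "X", "Y", "Z"))
print("X vs Y given Z, context C = 1:", ci_pvalues(df[df["C"] == 1], "X", "Y", "Z"))
```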
|
294 |
An Analysis of the Finance Growth Nexus in Nigeria. Chetty, Roheen. 07 July 2021 (has links)
This study empirically examines the relationship between financial development and economic growth in Nigeria. It employs statistical techniques such as the Autoregressive Distributed Lag (ARDL) approach as well as short- and long-run Granger causality tests on time series data spanning 1960 to 2016. The empirical results reveal that the financial development indicators have a long-run relationship with economic growth in Nigeria, and both unidirectional and bidirectional Granger causality were also found. The study recommends that policy be geared towards promoting financial development in the country, as well as encouraging greater financial depth and openness, in order to foster economic growth in Nigeria.
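A hedged sketch of the estimation strategy named in the abstract, using statsmodels' ARDL class and a Granger test on synthetic annual data; the variable names (findev, growth) are placeholders and the specification is not the study's actual model.

```python
# Illustrative ARDL fit plus a short-run Granger test on synthetic 1960-2016 data.
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ARDL
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
years = np.arange(1960, 2017)
findev = pd.Series(np.cumsum(rng.normal(0.5, 1.0, len(years))), index=years, name="findev")
growth = pd.Series(0.3 * findev.shift(1).fillna(0.0) + rng.normal(2.0, 1.0, len(years)),
                   index=years, name="growth")

# ARDL(2, 2): two lags of growth and two lags of the financial-development indicator
model = ARDL(growth, lags=2, exog=findev.to_frame(), order=2, trend="c")
res = model.fit()
print(res.summary())

# Short-run Granger test: does financial development help predict growth?
grangercausalitytests(pd.concat([growth, findev], axis=1).diff().dropna(), maxlag=2)
```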
|
295 |
The environmental Kuznets curve : Investigating the relationship between renewable energy and economic growth. Forsén, Emil. January 2020 (has links)
The environmental Kuznets curve (EKC) hypothesis describes the relationship between economic growth and environmental degradation as an inverted U-shape: environmental degradation first increases with economic growth, then stagnates and declines once economic growth passes a certain threshold. The aim of this study is to investigate the EKC hypothesis when environmental degradation is measured through a country's renewable energy implementation. This is achieved through multiple scatterplots and a Granger causality test. The key findings are (1) that there is no consensus regarding the relationship between economic growth and energy consumption, (2) that countries seem to significantly increase their consumption of renewable energy between US$ 30 000 and US$ 50 000 of real GDP per capita, (3) that the theoretical shape of the EKC holds for most countries, (4) a unidirectional causality running from economic growth to fossil fuel consumption for a panel of developing countries, and (5) a unidirectional causality running from economic growth to both fossil fuel and renewable energy consumption, as well as a unidirectional causality running from renewable energy consumption to fossil fuel consumption, for a panel of developed countries. When the EKC is measured through a country's renewable energy implementation, the hypothesis seems to hold for most countries. However, the decrease in environmental degradation is so far limited to developed countries with smaller economies and populations. These countries also need to ensure that decreases in environmental degradation result from underlying mechanisms such as energy-efficiency improvements and not from more counterproductive behaviors such as outsourcing and deindustrialization.
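As a worked illustration of how an EKC-style inverted U can be checked, the sketch below regresses a synthetic environmental indicator on GDP per capita and its square; a positive linear and a negative quadratic coefficient imply a turning point at -b1/(2*b2). This is not the study's data or panel specification.

```python
# Minimal EKC-style check on synthetic data with hypothetical variable names.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
gdp_pc = rng.uniform(1_000, 80_000, 300)                          # real GDP per capita (US$)
degradation = 5 + 0.004 * gdp_pc - 5e-8 * gdp_pc ** 2 + rng.normal(0, 10, 300)

X = sm.add_constant(np.column_stack([gdp_pc, gdp_pc ** 2]))
fit = sm.OLS(degradation, X).fit()
b0, b1, b2 = fit.params
print(fit.summary())
if b1 > 0 and b2 < 0:
    print(f"Inverted U; estimated turning point near US$ {-b1 / (2 * b2):,.0f} per capita")
```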
|
296 |
Apprentissage statistique pour séquences d'évènements à l'aide de processus ponctuels / Learning from Sequences with Point Processes. Achab, Massil. 09 October 2017 (has links)
The guiding principle of this thesis is to show how the arsenal of recent optimization methods can help solve challenging new estimation problems on event models. While the classical framework of supervised learning treats the observations as a collection of independent couples of features and labels, event models focus on arrival timestamps to extract information from the source of data. These timestamped events are chronologically ordered and cannot be regarded as independent. This mere statement motivates the use of a particular mathematical object called a point process to learn some patterns from events. Two examples of point processes are treated in this thesis. The first is the point process behind the Cox proportional hazards model: its conditional intensity function allows one to define the hazard ratio, a fundamental quantity in the survival analysis literature. The Cox regression model relates the duration before an event called failure to some covariates. This model can be reformulated in the framework of point processes. The second is the Hawkes process, which models how past events increase the probability of future events. Its multivariate version enables encoding a notion of causality between the different nodes. The thesis is divided into three parts. The first focuses on a new optimization algorithm we developed to estimate the parameter vector of the Cox regression in the large-scale setting. Our algorithm is based on stochastic variance reduced gradient descent (SVRG) and uses Markov Chain Monte Carlo to estimate one costly term in the descent direction. We proved the convergence rates and showed its numerical performance on both simulated and real-world datasets. The second part shows how the Hawkes causality can be retrieved in a nonparametric fashion from the integrated cumulants of the multivariate point process. We designed two methods to estimate the integrals of the Hawkes kernels without any assumption on the shape of the kernel functions. Our methods are faster and more robust towards the shape of the kernels compared to state-of-the-art methods. We proved the statistical consistency of the first method, and turned the second into a convex optimization problem. The last part provides new insights from order book data using the first nonparametric method developed in the second part. We used data from the EUREX exchange, designed new order book models (based on the previous works of Bacry et al.) and ran the estimation method on these point processes. The results are very insightful and consistent with an econometric analysis. Such work is a proof of concept that our estimation method can be used on complex data like high-frequency financial data.
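The Cox proportional hazards setting mentioned above can be illustrated with the lifelines package; this is the standard small-scale estimator on synthetic covariates, not the thesis's SVRG-with-MCMC solver for the large-scale case.

```python
# Hedged illustration of a Cox proportional hazards fit on synthetic survival data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 1000
age = rng.normal(60, 10, n)
treatment = rng.integers(0, 2, n)
# Failure times drawn so that the hazard increases with age and decreases with treatment
hazard = np.exp(0.03 * (age - 60) - 0.5 * treatment)
duration = rng.exponential(1.0 / hazard)
event = (rng.uniform(size=n) < 0.8).astype(int)   # crude ~20% censoring indicator, for illustration only

df = pd.DataFrame({"age": age, "treatment": treatment,
                   "duration": duration, "event": event})
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()   # exp(coef) gives the estimated hazard ratios
```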
|
297 |
Machine learning based on Hawkes processes and stochastic optimization / Apprentissage automatique avec les processus de Hawkes et l'optimisation stochastique. Bompaire, Martin. 05 July 2019 (has links)
The common thread of this thesis is the study of Hawkes processes. These point processes decrypt the cross-causality that occurs across several event series. Namely, they retrieve the influence that the events of one series have on the future events of all series. For example, in the context of social networks, they describe how likely an action of a certain user (such as a Tweet) is to trigger reactions from the others. The first chapter consists of a general introduction to point processes followed by a focus on Hawkes processes and, more specifically, on the properties of the widely used exponential kernel parametrization.
In the following chapter, we introduce an adaptive penalization technique to model, with Hawkes processes, the information propagation on social networks. This penalization is able to take into account prior knowledge of the social network characteristics, such as the sparse interactions between users or the community structure, and to reflect them in the estimated model. Our technique uses data-driven weighted penalties induced by a careful analysis of the generalization error. Next, we focus on convex optimization and recall the recent progress made with stochastic first-order methods using variance reduction techniques. The fourth chapter is dedicated to an adaptation of these techniques to optimize the most commonly used goodness-of-fit of Hawkes processes. Indeed, this goodness-of-fit does not meet the gradient-Lipschitz assumption that is required by the latest first-order methods. Thus, we work under another smoothness assumption and obtain a linear convergence rate for a shifted version of Stochastic Dual Coordinate Ascent that improves on the current state of the art. Besides, such objectives include many linear constraints that are easily violated by classic first-order algorithms, but in the Fenchel-dual problem these constraints are easier to deal with. Hence, our algorithm's robustness is comparable to that of second-order methods, which are very expensive in high dimensions. Finally, the last chapter introduces a new statistical learning library for Python 3 with a particular emphasis on time-dependent models, tools for generalized linear models, and survival analysis. Called tick, this library relies on a C++ implementation and state-of-the-art optimization algorithms to provide very fast computations in a single-node multi-core setting. Open-sourced and published on GitHub, this library has been used all along this thesis to perform benchmarks and experiments.
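A minimal usage sketch of the tick library introduced above: simulate a two-dimensional Hawkes process with exponential kernels and refit it. The sketch assumes tick is installed; the parameter values are arbitrary and the exact API may differ between tick versions.

```python
# Hedged sketch: simulate and fit a 2-dimensional exponential-kernel Hawkes process with tick.
import numpy as np
from tick.hawkes import SimuHawkesExpKernels, HawkesExpKern

baseline = np.array([0.4, 0.3])
adjacency = np.array([[0.2, 0.0],
                      [0.4, 0.1]])        # events of node 0 excite node 1, not the reverse
decays = 3.0 * np.ones((2, 2))

simu = SimuHawkesExpKernels(adjacency=adjacency, decays=decays, baseline=baseline,
                            end_time=10000, verbose=False, seed=42)
simu.simulate()
events = simu.timestamps                  # one array of timestamps per node

learner = HawkesExpKern(decays=decays, penalty="l2", C=1e3)
learner.fit(events)
print("estimated baseline:", learner.baseline)
print("estimated adjacency:\n", learner.adjacency)
```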
|
298 |
Does the price development on housing in Stockholm make sense? : An empirical analysis of a possible price bubble on the housing market of Stockholm. Hedberg, Rebecca. January 2021 (has links)
The indebtedness of Swedish households has more than doubled in recent decades despite the implementation of a mortgage ceiling and stricter amortization requirements. This study investigates how debt related to housing can keep rising while new regulations against it have been introduced, and how housing prices can continue to increase when lending is supposed to be harder. The analysis estimates whether there are indications of an existing price bubble in the housing market of Stockholm. This is done by testing fundamental economic factors against the price index of housing in Stockholm, to see whether they support the price development. If the analysis shows that housing prices cannot be predicted by the fundamental economic factors, it is possible that the price is a self-running series, which could be an indicator of a price bubble. If the fundamental factors used as control variables follow the same trend as the price development of the housing market, the speculation of a price bubble will be rejected.
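A hedged sketch of the testing idea described here: regress changes in a (synthetic) housing price index on changes in fundamental factors and inspect the fit. Variable names are placeholders, not the thesis's actual data or specification.

```python
# Illustrative check: do fundamentals explain the price development, or does it look self-running?
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 80  # e.g. quarterly observations
income = np.cumsum(rng.normal(0.5, 0.5, n))
mortgage_rate = 5 - 0.04 * np.arange(n) + rng.normal(0, 0.2, n)
price_index = np.cumsum(rng.normal(1.0, 1.5, n))   # synthetic prices, unrelated to the fundamentals

df = pd.DataFrame({"price": price_index, "income": income, "rate": mortgage_rate}).diff().dropna()
X = sm.add_constant(df[["income", "rate"]])
fit = sm.OLS(df["price"], X).fit()
print(fit.summary())
# Insignificant fundamentals and a low R-squared would point toward a self-running price series.
```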
|
299 |
Biomarker discovery and statistical modeling with applications in childhood epilepsy and Angelman syndrome. Spencer, Elizabeth Rose Stevens. 04 February 2022 (has links)
Biomarker discovery and statistical modeling reveal the brain activity that supports brain function and dysfunction. Detecting abnormal brain activity is critical for developing biomarkers of disease, elucidating disease mechanisms and evolution, and ultimately improving disease course. In this thesis, we develop statistical methodology to characterize neural activity in disease from noisy electrophysiological recordings.
First, we develop a modification of a classic statistical modeling approach - multivariate Granger causality - to infer coordinated activity between brain regions. Assuming the signaling dependencies vary smoothly, we propose to write the history terms in autoregressive models of the signals using a lower-dimensional spline basis. This procedure requires fewer parameters than the standard approach, thus increasing the statistical power. We show that this procedure accurately estimates brain dynamics in simulations and in examples of physiological recordings from a patient with pharmacoresistant epilepsy. This work provides a statistical framework for understanding alterations in coordinated brain activity in disease.
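A hedged sketch of the spline-basis idea in this first contribution, shown on a synthetic univariate autoregressive signal: the p lag coefficients are constrained to a low-dimensional B-spline basis B, so only a handful of parameters are estimated and the smooth lag profile is recovered as B @ beta. The full method is multivariate and embeds this in a Granger-causality comparison of full versus reduced models; this is only an illustration of the parameter reduction.

```python
# Hedged sketch: represent p lag coefficients of an AR history term as B @ beta,
# where B is a low-dimensional B-spline basis over lags 1..p.
import numpy as np
from numpy.linalg import lstsq
from patsy import dmatrix

rng = np.random.default_rng(6)
p, n = 40, 2000                                           # long history, many samples
true_coefs = 0.15 * np.exp(-np.arange(1, p + 1) / 5.0)    # smoothly decaying lag influence
x = np.zeros(n)
for t in range(p, n):
    x[t] = true_coefs @ x[t - p:t][::-1] + rng.normal(0, 1)

# Lagged design matrix: row t holds x[t-1], ..., x[t-p]
X = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
y = x[p:]

# Low-dimensional B-spline basis over the lag index: p coefficients become B @ beta
B = np.asarray(dmatrix("bs(lag, df=6, include_intercept=True) - 1",
                       {"lag": np.arange(1, p + 1)}))
beta, *_ = lstsq(X @ B, y, rcond=None)
coef_hat = B @ beta                                       # recovered smooth lag-coefficient curve
print("free parameters:", B.shape[1], "instead of", p)
print("max abs error vs true lag coefficients:", np.abs(coef_hat - true_coefs).max())
```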
Second, we demonstrate that sleep spindles, thalamically driven neural rhythms (9-15 Hz) associated with sleep-dependent learning, are a reliable biomarker for Rolandic epilepsy. Rolandic epilepsy is the most common form of childhood epilepsy and is characterized by nocturnal focal epileptic discharges as well as neurocognitive deficits. We show that sleep spindle rate is reduced regionally across cortex and correlated with poor cognitive performance in epilepsy. These results provide evidence for a regional disruption to the thalamocortical circuit in Rolandic epilepsy, and a potential mechanistic explanation for the cognitive deficits observed.
Finally, we develop a procedure to utilize delta rhythms (2-4 Hz), a sensitive biomarker for Angelman syndrome, as a non-invasive measure of treatment efficacy in clinical trials. Angelman syndrome is a rare neurodevelopmental disorder caused by reduced expression of the UBE3A protein. Many disease-modifying treatments are being developed to reinstate UBE3A expression. To aid in clinical trials, we propose a procedure that detects therapeutic improvements in delta power outside of the natural variability over age by developing a longitudinal natural history model of delta power.
These results demonstrate the utility of biomarker discovery and statistical modeling for elucidating disease course and mechanisms with the long-term goal of improving patient outcomes.
|
300 |
The Relationship Between Servant Leadership, Other Orientation, and Autonomous Causality Orientation. Bamber, Mary Beth. January 2020 (has links)
No description available.
|