591

La spontanéité en français parlé : caractérisation de l'élan énonciatif à travers différents types de corpus / Spontaneity in spoken French: characterization of enunciative impulse in different types of corpora

Stabarin, Isabelle 25 November 2019 (has links)
What is spontaneity in language? How does it manifest in different types of oral discourse, and within what limits? This study sets out to identify the formal markers of spontaneity in corpora of different types. We start from the hypothesis that a speaker tends to produce syntactically more complex utterances when speaking spontaneously than when monitoring his/her speech, and that this linguistic complexity is manifested in syntactic reduction. The reduction of utterances or of their component forms, combined with other criteria such as their predicative character and the role of intonation in their completeness, is indeed observed in utterances where the impulse of spontaneity is patent. This impulse is scalar: it varies even within a single speaking turn, as self-adjustments show. Its variation within a single speaker is manifested by intrapersonal linguistic variations. These correlated variations (impulse and language) can be observed thanks to the specific corpus compiled for this study: during informal interactions, an elicitation device checks the speaker's impulse, prompting a reformulation of the utterance with more attention to form. The semantic equivalents produced in this context are compared, revealing the effect of the degree of spontaneity on the grammar of utterances. The study confirms that reduction is indeed a marker of spontaneity. Above all, this reduction affects all levels of language, and the concomitance of reduced elements is not only compatible with the impulse of spontaneity but feeds it.
592

Entwicklung und Qualifizierung eines neuen Bohrsystems für die Tiefbohrtechnik auf der Basis des Elektro-Impuls-Verfahrens / Development and qualification of a new drilling system for deep drilling based on the electro-impulse method

Lehmann, Franziska 19 January 2022 (has links)
The economically viable design of geothermal systems is central to the energy transition. The electro-impulse method (EIV, from the German Elektro-Impuls-Verfahren) offers great potential for significantly reducing the economic risk of sinking a hard-rock borehole for deep geothermal energy: it can increase the rate of penetration while considerably extending the service life of the "bit". The method exploits the destructive effect of electrical discharges; its main advantage is that there is almost no mechanical wear, and the erosion of the electrode tips caused by the electrical impulses is negligible. The aim of this work was to investigate whether, and under which conditions, this novel EIV-based drilling system can be used in deep drilling, and in particular for sinking deep geothermal wells. A review of the state of the art showed that R&D projects applying the EIV to deep drilling already exist, but that none of the systems developed so far has reached market maturity. To secure this important step for the system presented here, all normative and regulatory requirements were compiled, assessed, and observed throughout every development step. The laboratory tests performed with the EIV drilling system were evaluated with respect to specific energy and borehole quality, and the results were compared with values from industrial practice. The energy required to break the rock and the borehole quality achieved proved comparable to conventional drilling methods, satisfying an important prerequisite for use in deep drilling. The economics were additionally examined for an example well: extending bit life, and thereby reducing non-productive time, allows cost savings of up to 30%. Finally, a field test of the laboratory prototype in a shallow borehole showed that the EIV can be operated under real conditions and achieves rock removal. The evaluation of this field test demonstrates that the stated objective was met and that the EIV can be deployed economically.
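The comparison with conventional drilling rests on specific energy, the input energy per unit volume of rock removed. A minimal sketch of that bookkeeping for an impulse-based system, with purely illustrative operating values (not figures from the thesis):

```python
# Specific energy of an electro-impulse drill: input energy per volume of rock removed.
# All numbers below are illustrative assumptions, not values from the thesis.
import math

def specific_energy(pulse_energy_j, pulse_rate_hz, hole_diameter_m, rop_m_per_h):
    """Specific energy in J/m^3 = input power / (hole cross-section * rate of penetration)."""
    power_in = pulse_energy_j * pulse_rate_hz          # W
    area = math.pi * (hole_diameter_m / 2) ** 2        # m^2
    rop = rop_m_per_h / 3600.0                         # m/s
    return power_in / (area * rop)                     # J/m^3

# Hypothetical operating point: 500 J impulses at 10 Hz, 6-inch hole, 2 m/h penetration.
se = specific_energy(500.0, 10.0, 0.1524, 2.0)
print(f"specific energy ~ {se / 1e6:.0f} MJ/m^3")
```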
593

Assessment of the dopamine system in addiction using positron emission tomography

Albrecht, Daniel Strakis January 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Drug addiction is a behavioral disorder characterized by impulsive behavior and continued drug intake in the face of adverse consequences. Millions of people suffer the financial and social consequences of addiction, and yet many of the current therapies for addiction treatment have limited efficacy. Therefore, there is a critical need to characterize the neurobiological substrates of addiction in order to formulate better treatment options. In the first chapter, the striatal dopamine system is interrogated with [11C]raclopride PET to assess differences between chronic cannabis users and healthy controls. The results of this chapter indicate that chronic cannabis use is not associated with a reduction in striatal D2/D3 receptor availability, unlike many other drugs of abuse. Additionally, recent cannabis consumption in chronic users was negatively correlated with D2/D3 receptor availability. Chapter 2 describes a retrospective analysis in which striatal D2/D3 receptor availability is compared between three groups of alcohol-drinking and tobacco-smoking subjects: nontreatment-seeking alcoholic smokers, social-drinking smokers, and social-drinking non-smokers. Results showed that smokers had reduced D2/D3 receptor availability throughout the striatum, independent of drinking status. The results of the first two chapters suggest that some combustion product of marijuana and tobacco smoke may have an effect on striatal dopamine concentration. Furthermore, they serve to highlight the effectiveness of using baseline PET imaging to characterize dopamine dysfunction in addictions. The final chapter explores the use of [18F]fallypride PET in a proof-of-concept study to determine whether changes in cortical dopamine can be detected during a response inhibition task. We were able to detect several cortical regions of significant dopamine changes in response to the task, and the amount of change in three regions was significantly associated with task performance. Overall, the results of Chapter 3 validate the use of [18F]fallypride PET to detect cortical dopamine changes during an impulse control task. In summary, the results reported in the current document demonstrate the effectiveness of PET imaging as a tool for probing resting and activated dopamine systems in addiction. Future studies will expand on these results and incorporate additional methods to further elucidate the neurobiology of addiction.
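For context, the "D2/D3 receptor availability" compared across these chapters is the nondisplaceable binding potential BPND, which at pseudo-equilibrium can be roughly approximated as a target-to-reference activity ratio minus one. A minimal sketch under that simplification — the tissue curves and time window below are synthetic, and real analyses fit kinetic models rather than this crude ratio:

```python
import numpy as np

# BP_ND ~ (striatal activity / cerebellar reference activity) - 1 at pseudo-equilibrium.
# A crude tissue-ratio approximation; actual studies fit kinetic models (e.g., SRTM).
def bp_nd(target_tac, reference_tac, frames):
    """target_tac, reference_tac: activity per frame; frames: slice near equilibrium."""
    return np.mean(target_tac[frames]) / np.mean(reference_tac[frames]) - 1.0

# Synthetic time-activity curves (arbitrary units), 20 frames.
t = np.arange(20)
cerebellum = 100 * np.exp(-0.10 * t)                          # reference: no specific binding
striatum = cerebellum * (1 + 2.5 * (1 - np.exp(-0.3 * t)))    # specific binding builds up
print(f"BP_ND ~ {bp_nd(striatum, cerebellum, slice(10, 20)):.2f}")  # approaches 2.5
```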
594

Theoretical and computational considerations of quasi-free (p,2p) reactions using the distorted-wave impulse approximation and Monte Carlo simulations in Geant4

Lisa, Nyameko 09 1900 (has links)
This work re-implements the Distorted-Wave Impulse Approximation (DWIA), originally formulated in FORTRAN by N.S. Chant and P.G. Roos, in a portable Python environment, complemented by a GEANT4 detector-simulation application. These two techniques are used to model the quasi-free (p,2p) proton knock-out reaction 40Ca(p,2p)39K populating the 1/2+ first excited state at 2.52 MeV, at an intermediate incident energy of 150 MeV. This study is a test bed that lays the foundation for an interactive workbench and toolkit in GEANT4 which: (i) accurately models an accelerator-detector experimental set-up, such as those found at iThemba LABS, and (ii) incorporates the DWIA formalism as a built-in physics process within the GEANT4 framework. Furthermore, the Python modules developed for the specific proton knock-out reaction studied here can be generalized to an arbitrary set of nuclear scattering reactions and packaged as a suite of scientific Python codes. / Theoretical and Computational Nuclear Physics / M. Sc. (Theoretical and Computational Nuclear Physics)
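Schematically, the factorized DWIA cross section such a code evaluates has the standard form from the Chant-Roos literature (quoted from the general formalism, not transcribed from this dissertation):

```latex
\frac{d^{3}\sigma}{dE_{1}\, d\Omega_{1}\, d\Omega_{2}}
  = S_{\alpha}\, F_{\mathrm{kin}}
    \left.\frac{d\sigma}{d\Omega}\right|_{pp}
    \left| \int \chi_{1}^{(-)*}(\mathbf{r})\, \chi_{2}^{(-)*}(\mathbf{r})\,
           \phi_{\alpha}(\mathbf{r})\, \chi_{0}^{(+)}(\mathbf{r})\, d^{3}r \right|^{2}
```

where S_α is the spectroscopic factor, F_kin a kinematic factor, (dσ/dΩ)_pp the half-off-shell proton-proton cross section, φ_α the bound-proton wavefunction, and the χ's the distorted waves for the incoming and the two outgoing protons.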
595

Fundamentals of molecular communication over microfluidic channels

Bicen, Ahmet Ozan 27 May 2016 (has links)
The interconnection of molecular machines with different functionalities to form molecular communication systems can increase the number of design possibilities and overcome the limited reliability of the individual molecular machines. Artificial information exchange using molecular signals would also expand the capabilities of single engineered cell populations by providing them a way to cooperate across heterogeneous cell populations for the applications of synthetic biology and lab-on-a-chip systems. The realization of molecular communication systems necessitates analysis and design of the communication channel, where the information-carrying molecular signal is transported from the transmitter to the receiver. In this thesis, significant progress towards the use of microfluidic channels to interconnect molecular transmitter and receiver pairs is presented. A system-theoretic analysis of the microfluidic channels is performed, and a finite impulse response (FIR) filter is designed using microfluidic channels. The spectral density of the propagation noise is studied and an additive white Gaussian noise channel model is developed. Memory due to inter-diffusion of the transmitted molecular signals is also modeled. Furthermore, interference modeling is performed for multiple transmitters, and its impact on the communication capacity is shown. Finally, the efficient sampling of the signal transduction by engineered bacterial receivers connected to a microfluidic channel is investigated for the detection of pulse-amplitude-modulated molecular signals. This work lays the foundation for molecular communication over microfluidic channels, which will enable the interconnection of engineered molecular machines.
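A minimal sketch of the signal chain the abstract describes — pulse-amplitude-modulated symbols passed through a finite impulse response channel with additive white Gaussian noise; the tap weights and noise level are illustrative assumptions, not the thesis's derived values:

```python
import numpy as np

rng = np.random.default_rng(0)

# 4-level pulse-amplitude modulation: each symbol is a molecule-concentration level.
symbols = rng.choice([0.0, 1.0, 2.0, 3.0], size=100)

# FIR channel: taps model dispersion/inter-diffusion in the microfluidic channel
# (illustrative values; the thesis derives them from the channel's physics).
h = np.array([0.6, 0.25, 0.1, 0.05])
received = np.convolve(symbols, h)[: len(symbols)]

# Additive white Gaussian noise models the propagation noise.
noisy = received + rng.normal(0.0, 0.1, size=received.shape)

# Naive detector: quantize to the nearest PAM level after crude gain correction.
detected = np.clip(np.round(noisy / h.sum()), 0, 3)
print("symbol error rate:", np.mean(detected != symbols))  # nonzero: channel memory (ISI)
```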
596

After Two Decades of Integration: How Interdependent are Eastern European Economies and the Euro Area?

Prettner, Catherine, Prettner, Klaus 03 1900 (has links) (PDF)
This article investigates the interrelations between the initial members of the Euro area and five important Central and Eastern European economies. We set up a theoretical open-economy model to derive the Purchasing Power Parity, the Interest Rate Parity, the Fisher Inflation Parity, and an output gap relation. After taking convergence into account, these are used as restrictions on the cointegration space of a structural vector error correction model. We then employ generalized impulse response analysis to assess the dynamic effects of shocks in output and interest rates on the respective other area, as well as the implications of shocks in the exchange rate and in relative prices on both areas. The results show a high degree of interconnectedness between the two economies. There are strong positive spillovers in output to the respective other region, with the magnitude of the impact being similarly strong in both areas. Furthermore, we find a multiplier effect present in Eastern Europe and some evidence for the European Central Bank's commitment to price stability. (author's abstract) / Series: Department of Economics Working Paper Series
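The generalized impulse responses referred to here follow Pesaran and Shin's formulation, which, unlike Cholesky orthogonalization, is invariant to the ordering of the variables. For a model with moving-average coefficient matrices A_n and innovation covariance Σ = (σ_ij), the response at horizon n to a one-standard-deviation shock in variable j is

```latex
\psi_{j}^{g}(n) = \sigma_{jj}^{-1/2}\, A_{n}\, \Sigma\, e_{j}
```

with e_j the j-th selection vector.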
597

Alumina Thin Films : From Computer Calculations to Cutting Tools

Wallin, Erik January 2008 (has links)
The work presented in this thesis deals with experimental and theoretical studies related to alumina thin films. Alumina, Al2O3, is a polymorphic material utilized in a variety of applications, e.g., in the form of thin films. However, controlling thin film growth of this material, in particular at low substrate temperatures, is not straightforward. The aim of this work is to increase the understanding of the basic mechanisms governing alumina growth and to investigate novel ways of synthesizing alumina coatings. The thesis can be divided into two main parts, where the first part deals with fundamental studies of mechanisms affecting alumina growth and the second part with more application-oriented studies of high power impulse magnetron sputtering (HiPIMS) deposition of the material. In the first part, it was shown that the thermodynamically stable α phase, which normally is synthesized at substrate temperatures of around 1000 °C, can be grown using reactive sputtering at a substrate temperature of merely 500 °C by controlling the nucleation surface. This was done by predepositing a Cr2O3 nucleation layer. Moreover, it was found that an additional requirement for the formation of the α phase is that the depositions are carried out at low enough total pressure and high enough oxygen partial pressure. Based on these observations, it was concluded that energetic bombardment, plausibly originating from energetic oxygen, is necessary for the formation of α-alumina (in addition to the effect of the chromia nucleation layer). Moreover, the effects of residual water on the growth of crystalline films were investigated by varying the partial pressure of water in the ultra-high-vacuum (UHV) chamber. Films deposited onto chromia nucleation layers exhibited a columnar structure and consisted of crystalline α-alumina if deposited under UHV conditions. However, as water to a partial pressure of 1×10⁻⁵ Torr was introduced, the columnar α-alumina growth was disrupted. Instead, a microstructure consisting of small, equiaxed grains was formed, and the γ-alumina content was found to increase with increasing film thickness. To gain a better understanding of the atomistic processes occurring on the surface, density-functional-theory-based computational studies of adsorption and diffusion of Al, O, AlO, and O2 on different α-alumina (0001) surfaces were also performed. The results give possible reasons for the difficulties in growing the α phase at low temperatures through the identification of several metastable adsorption sites, and also show how adsorbed hydrogen might inhibit further growth of α-alumina crystallites. In addition, it was shown that the Al surface diffusion activation energies are unexpectedly low, suggesting that limited surface diffusivity is not the main obstacle for low-temperature α-alumina growth. Instead, it is suggested to be more important to find ways of reducing the amount of impurities, especially hydrogen, in the process and to facilitate α-alumina nucleation when designing new processes for low-temperature deposition of α-alumina. In the second part of the thesis, reactive HiPIMS deposition of alumina was studied. In HiPIMS, a high-density plasma is created by applying very high power to the sputtering magnetron at a low duty cycle.
It was found, both from experiments and modeling, that the use of HiPIMS drastically influences the characteristics of the reactive sputtering process, causing reduced target poisoning and thereby reduced or eliminated hysteresis effects and relatively high deposition rates of stoichiometric alumina films. This is not only of importance for alumina growth, but for reactive sputter deposition in general, where hysteresis effects and loss of deposition rate pose a substantial problem. Moreover, it was found that the energetic and ionized deposition flux in the HiPIMS discharge can be used to lower the deposition temperature of α-alumina. Coatings predominantly consisting of the α phase were grown at temperatures as low as 650 °C directly onto cemented carbide substrates without the use of nucleation layers. Such coatings were also deposited onto cutting inserts and were tested in a steel turning application. The coatings were found to increase the crater wear resistance compared to a benchmark TiAlN coating, and the process consequently shows great potential for further development towards industrial applications.
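The remark above that limited Al surface diffusivity is not the obstacle can be made concrete with a transition-state-theory estimate, where hop rates scale as ν·exp(−Ea/kBT). A minimal sketch with assumed barrier heights (illustrative values, not the thesis's DFT results):

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant, eV/K

def hop_rate(ea_ev, temp_k, attempt_hz=1e13):
    """Transition-state-theory hop rate: nu * exp(-Ea / (kB * T))."""
    return attempt_hz * np.exp(-ea_ev / (KB * temp_k))

# Illustrative barriers (low vs. high) at 500 C and 1000 C substrate temperatures.
for ea in (0.3, 1.0):
    for t in (773.0, 1273.0):
        print(f"Ea = {ea} eV, T = {t:.0f} K: {hop_rate(ea, t):.2e} hops/s")
```

Even at 500 °C, a low barrier yields on the order of 10^11 hops per second, which is why the thesis points to impurities and nucleation, rather than adatom mobility, as the limiting factors.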
598

Factor models, VARMA processes and parameter instability with applications in macroeconomics

Stevanovic, Dalibor 05 1900 (has links)
As information technology improves, the availability of economic and financial time series grows in both the time and cross-section dimensions. However, a large amount of information can lead to the curse of dimensionality when standard time-series tools are used. Since most of these series are highly correlated, at least within some categories, their co-variability pattern and informational content can be approximated by a smaller number of (constructed) variables. A popular way to address this issue is factor analysis. This framework has received a lot of attention since the late 1990s and is known today as large-dimensional approximate factor analysis. Given the availability of data and computational improvements, a number of empirical and theoretical questions arise. What are the effects and transmission of structural shocks in a data-rich environment? Does the information from a large number of economic indicators help in properly identifying monetary policy shocks, with respect to a number of empirical puzzles found using traditional small-scale models? Motivated by the recent financial turmoil, can we identify financial market shocks and measure their effect on the real economy? Can we improve the existing method and incorporate another dimension-reduction approach such as VARMA modeling? Does it help in forecasting macroeconomic aggregates and in impulse response analysis? Finally, can we apply the same factor-analysis reasoning to time-varying parameters? Is there only a small number of common sources of time instability in the coefficients of empirical macroeconomic models? This thesis concentrates on structural factor analysis and VARMA modeling and answers these questions through five articles. The first two articles study the effects of monetary policy and credit shocks in a data-rich environment. 
The third article proposes a new framework that combines factor analysis and VARMA modeling, while the fourth article applies this method to measure the effects of credit shocks in Canada. The contribution of the final chapter is to impose a factor structure on the time-varying parameters of popular macroeconomic models, and to show that there are few sources of this time instability. The first article analyzes the monetary transmission mechanism in Canada using a factor-augmented vector autoregression (FAVAR) model. For small open economies like Canada, uncovering the transmission mechanism of monetary policy using VARs has proven to be an especially challenging task. Such studies on Canadian data have often documented the presence of anomalies such as price, exchange rate, delayed overshooting, and uncovered interest rate parity puzzles. We estimate a FAVAR model using large sets of monthly and quarterly macroeconomic time series. We find that the information summarized by the factors is important to properly identify the monetary transmission mechanism and contributes to mitigating the puzzles mentioned above, suggesting that more information does help. Finally, the FAVAR framework allows us to check impulse responses for all series in the informational data set, and thus provides the most comprehensive picture to date of the effect of Canadian monetary policy. As the recent financial crisis and the ensuing global economic downturn have illustrated, the financial sector plays an important role in generating and propagating shocks to the real economy. Financial variables thus contain information that can predict future economic conditions. In this paper we examine the dynamic effects and the propagation of credit shocks using a large data set of U.S. economic and financial indicators in a structural factor model. Identified credit shocks, interpreted as unexpected deteriorations of credit market conditions, immediately increase credit spreads, decrease rates on Treasury securities, and cause large and persistent downturns in the activity of many economic sectors. Such shocks are found to have important effects on real activity measures, aggregate prices, leading indicators, and credit spreads. In contrast to other recent papers, our structural shock identification procedure does not require any timing restrictions between the financial and macroeconomic factors, and yields an interpretation of the estimated factors without relying on a constructed measure of credit market conditions from a large set of individual bond prices and financial series. In the third article, we study the relationship between VARMA and factor representations of a vector stochastic process, and propose a new class of factor-augmented VARMA (FAVARMA) models. We start by observing that in general multivariate series and the associated factors do not both follow a finite-order VAR process. Indeed, we show that when the factors are obtained as linear combinations of observable series, their dynamic process is generally a VARMA and not a finite-order VAR as usually assumed in the literature. Second, we show that even if the factors follow a finite-order VAR process, this implies a VARMA representation for the observable series. As a result, we propose the FAVARMA framework, which combines two parsimonious methods to represent the dynamic interactions between a large number of time series: factor analysis and VARMA modeling. We apply our approach in two pseudo-out-of-sample forecasting exercises using large U.S. 
and Canadian monthly panels taken from Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that VARMA factors help in predicting several key macroeconomic aggregates relative to standard factor forecasting models. Finally, we estimate the effect of monetary policy using the data and the identification scheme of Bernanke, Boivin and Eliasz (2005). We find that impulse responses from a parsimonious 6-factor FAVARMA(2,1) model give an accurate and comprehensive picture of the effect and the transmission of monetary policy in the U.S. To obtain similar responses from a standard FAVAR model, the Akaike information criterion selects a lag order of 14. Hence, only 84 coefficients governing the factor dynamics need to be estimated in the FAVARMA framework, compared with 510 VAR parameters in the FAVAR model. In the fourth article, we are interested in identifying and measuring the effects of credit shocks in Canada in a data-rich environment. In order to incorporate information from a large number of economic and financial indicators, we use the structural factor-augmented VARMA model. In the theoretical framework of the financial accelerator, we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent economic slowdown in Canada; the Canadian external finance premium rises immediately, while interest rates and credit measures decline. From the variance decomposition analysis, we observe that the credit shock has an important effect on several real activity measures, price indicators, leading indicators, and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium shows no significant effect in Canada. Indeed, our results suggest that the effects of credit shocks in Canada are essentially caused by unexpected changes in foreign credit market conditions. Finally, given the identification procedure, we find that our structural factors do have an economic interpretation. The behavior of economic agents and of the economic environment may vary over time (monetary policy strategy shifts, stochastic volatility), implying parameter instability in reduced-form models. Standard time-varying parameter (TVP) models usually assume independent stochastic processes for all TVPs. In the final article, I show that the number of underlying sources of parameters' time variation is likely to be small, and provide empirical evidence on the factor structure among the TVPs of popular macroeconomic models. To test for the presence of, and estimate, low-dimensional sources of time variation in parameters, I apply the factor time-varying parameter (Factor-TVP) model, proposed by Stevanovic (2010), to a standard monetary TVP-VAR model. I find that one factor explains most of the variability in the VAR coefficients, while the stochastic volatility parameters vary in an idiosyncratic way. The common factor is highly and positively correlated with the unemployment rate. To incorporate the recent financial crisis, the same exercise is conducted with data updated to 2010Q3. The VAR parameters present an important change after 2007, and the procedure suggests two factors. When applied to a large-dimensional structural factor model, I find that four dynamic factors govern the time instability in almost 700 coefficients.
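A minimal sketch of the FAVAR idea running through these chapters — compress a large panel into a few principal-component factors, then fit a VAR on the factors augmented with an observed policy variable. The data here are synthetic; the actual applications use the Boivin-Giannoni-Stevanovic panels and richer identification:

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)

# Synthetic panel: T=200 periods, N=100 indicators driven by K=3 latent AR(1) factors.
T, N, K = 200, 100, 3
latent = np.zeros((T, K))
for t in range(1, T):
    latent[t] = 0.8 * latent[t - 1] + rng.normal(scale=0.5, size=K)
panel = latent @ rng.normal(size=(K, N)) + rng.normal(scale=0.5, size=(T, N))

# Step 1: extract factors as principal components of the standardized panel.
X = (panel - panel.mean(axis=0)) / panel.std(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
factors = X @ vt[:K].T

# Step 2: augment the factors with an observed policy variable and fit a VAR.
policy = np.zeros((T, 1))
for t in range(1, T):
    policy[t] = 0.9 * policy[t - 1] + rng.normal(scale=0.1)
favar = np.hstack([factors, policy])
res = VAR(favar).fit(maxlags=2)
irf = res.irf(24)  # impulse responses of factors and policy variable, horizon 24
print(res.summary())
```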
599

Bubliny na akciových trzích: identifikace a efekty měnové politiky / Stock Price Bubbles: Identification and the Effects of Monetary Policy

Koza, Oldřich January 2014 (has links)
This thesis studies bubbles in the U.S. stock market and how they are influenced by the monetary policy pursued by the Fed. Using Kalman filtering, the log real price of the S&P 500 is decomposed into a market-fundamentals component and a bubble component. The market-fundamentals component depends on the expected future dividends and the required rate of return, while the bubble component is treated as an unobserved state vector in the state-space model. The results suggest that, mainly in recent decades, the bubble has accounted for a substantial portion of S&P 500 price dynamics and might have played a significant role during major bull and bear markets. The innovation of this thesis is that it goes one step further and investigates the effects of monetary policy on both estimated components of the S&P 500. For this purpose, a block-restriction VAR model is employed. The findings indicate that decreasing interest rates have a significant short-term positive effect on the market-fundamentals component but not on the bubble. On the other hand, quantitative easing seems to have a positive effect on the bubble but not on the market-fundamentals component. Finally, the results suggest that the Fed has not been successful at distinguishing between stock price movements due to fundamentals and those due to price misalignment.
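A minimal sketch of this kind of state-space decomposition — log price as the sum of a random-walk fundamental and an explosive AR(1) bubble, recovered with a Kalman filter. All coefficients are illustrative, not the thesis's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate: log price = fundamental (random walk with drift) + bubble (explosive AR(1)).
T, rho = 300, 1.02
fund = np.cumsum(0.005 + rng.normal(0.0, 0.01, T))
bubble = np.zeros(T)
for t in range(1, T):
    bubble[t] = rho * bubble[t - 1] + rng.normal(0.0, 0.01)
y = fund + bubble

# Kalman filter for state x_t = [fundamental_t, bubble_t], observation y_t = [1 1] x_t.
F = np.array([[1.0, 0.0], [0.0, rho]])   # state transition
H = np.array([[1.0, 1.0]])               # observation matrix
Q = np.diag([0.01**2, 0.01**2])          # state innovation covariance
R = np.array([[1e-8]])                   # tiny observation noise: price observed exactly
x, P = np.array([y[0], 0.0]), np.eye(2)
est = np.zeros((T, 2))
for t in range(T):
    x, P = F @ x, F @ P @ F.T + Q                   # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    x = x + K @ (np.array([y[t]]) - H @ x)          # update
    P = (np.eye(2) - K @ H) @ P
    est[t] = x
print("corr(filtered bubble, true bubble):",
      round(float(np.corrcoef(est[:, 1], bubble)[0, 1]), 3))
```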
600

Škálování arteriální vstupní funkce v DCE-MRI / Scaling of arterial input function in DCE-MRI

Holeček, Tomáš Unknown Date (has links)
Perfusion magnetic resonance imaging is a modern diagnostic method used mainly in oncology. A contrast agent is injected into the subject, and the time course of its concentration in the affected area is then continuously monitored. Correct determination of the arterial input function (AIF) is very important for perfusion analysis. One possibility is to estimate the AIF by multichannel blind deconvolution, but the estimated AIF must then be scaled. This master's thesis focuses on describing scaling methods and their influence on the perfusion parameters in different tissues, depending on the AIF model used.
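One common scaling strategy of the kind examined in such work is to rescale the blind-deconvolution AIF so that a reference quantity, such as its area under the curve, matches a population-based AIF, restoring absolute units to the perfusion parameters. A minimal sketch under that assumption (curve shapes and values are illustrative):

```python
import numpy as np

def scale_aif(aif_est, t, reference_auc):
    """Rescale a blind-deconvolution AIF so its area matches a reference AUC.
    Blind deconvolution recovers the AIF shape only up to a multiplicative
    constant; the perfusion parameters inherit the inverse of that constant."""
    k = reference_auc / np.trapz(aif_est, t)
    return k * aif_est, k

# Illustrative curves: time in minutes, gamma-variate-like first-pass bolus.
t = np.linspace(0, 5, 301)
population_aif = 5.0 * t * np.exp(-2.0 * t)   # assumed population AIF (a.u.)
estimated_aif = 0.37 * population_aif         # blind estimate: right shape, wrong scale
scaled_aif, k = scale_aif(estimated_aif, t, np.trapz(population_aif, t))
print(f"scale factor k = {k:.2f}")            # ~2.70 here, i.e. 1/0.37
```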
