241

A framework for exchange-based trading of cloud computing commodities

Watzl, Johannes 07 April 2014 (has links) (PDF)
Cloud computing is a paradigm for using IT services with characteristics such as flexible and scalable service usage, on-demand availability, and pay-as-you-go billing. The respective services are called cloud services, and their nature usually motivates a differentiation into three layers: Infrastructure as a Service (IaaS) for cloud services offering the functionality of hardware resources in a virtualised way, Platform as a Service (PaaS) for services acting as execution platforms, and Software as a Service (SaaS) for applications provided in a cloud computing way. Each of these services is offered with the illusion of unlimited scalability. The infinity suggested by this illusion implies the need for some kind of regulation mechanism to manage supply and demand. Today's static pricing mechanisms are limited in their ability to adapt to dynamic characteristics of cloud environments such as changing workloads. A solution is a dynamic pricing approach comparable to that of today's exchanges. This requires comparability of cloud services and standardised access to avoid vendor lock-in. To achieve comparability, a classification of cloud services is introduced, in which classes of cloud services representing tradable goods are expressed by the minimum requirements for a certain class. The main result of this work is a framework for exchange-based trading of cloud computing commodities, which is composed of four core components derived from existing exchange marketplaces. An exchange component accepts orders from buyers and sellers and determines the price for the goods. A clearing component is responsible for the financial closing of a trade. The settlement component takes care of the delivery of the cloud service. A rating component monitors cloud services and logs service level agreement breaches to calculate provider ratings, especially for reliability, which is an important factor in cloud computing. The framework establishes a new basis for using cloud services and enables more advanced business models. Additionally, an overview of selected economic aspects is provided, including ideas for derivative financial instruments such as futures, options, insurances, and more complex ones. A first version of the framework is currently being implemented and in use at Deutsche Börse Cloud Exchange AG. / Cloud computing represents a new kind of IT service with characteristics such as flexibility, scalability, permanent availability, and usage-based (pay-as-you-go) billing. IT services with these characteristics are called cloud services and can be divided into three layers: Infrastructure as a Service (IaaS), which provides virtual hardware resources; Platform as a Service (PaaS), which represents an execution environment; and Software as a Service (SaaS), which denotes the provision of applications as cloud services. Cloud services are offered with the illusion of unlimited scalability. This infinity requires a mechanism capable of regulating supply and demand. The pricing mechanisms currently in use are limited in their ability to adjust to the dynamics of cloud environments, such as rapidly changing resource demands. A possible solution is a dynamic pricing mechanism based on the model of today's exchanges. This requires the standardisation and comparability of cloud services and a standardised means of access. To achieve comparability, cloud services are divided into classes, each of which represents a good tradable on the exchange. The result of this work is a framework for exchange-based trading of cloud computing commodities, consisting of four core components that can be derived from existing exchanges and commodity trading venues. The exchange component accepts buy and sell orders and determines the current prices of the tradable cloud commodities. The clearing component ensures the financial settlement of a trade, the settlement component is responsible for the actual delivery, and the rating component monitors the cloud services with respect to breaches of service level agreements and, above all, their reliability, which is an important factor in cloud computing. The framework establishes a new basis for cloud usage and enables more advanced business models. The thesis also gives an overview of economic aspects, including ideas for derivative financial instruments on cloud computing commodities. This framework is currently being implemented at Deutsche Börse Cloud Exchange AG and is already in use in a first version.
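To illustrate the exchange component's role, below is a minimal order-matching sketch for a single standardised commodity class under price-time priority; the class CloudOrder, the function match_orders, and the execute-at-resting-ask rule are illustrative assumptions, not the framework's actual interfaces.

```python
from dataclasses import dataclass

# Hypothetical order for one standardised commodity class (e.g. an IaaS
# class defined by minimum CPU, RAM and storage requirements).
@dataclass
class CloudOrder:
    price: float      # limit price per instance-hour
    quantity: int     # number of instances
    timestamp: int    # arrival rank, for price-time priority

def match_orders(bids, asks):
    """Single matching pass under price-time priority: trades execute while
    the best bid meets or exceeds the best ask, at the resting ask price."""
    bids.sort(key=lambda o: (-o.price, o.timestamp))
    asks.sort(key=lambda o: (o.price, o.timestamp))
    trades = []
    while bids and asks and bids[0].price >= asks[0].price:
        bid, ask = bids[0], asks[0]
        qty = min(bid.quantity, ask.quantity)
        trades.append((ask.price, qty))
        bid.quantity -= qty
        ask.quantity -= qty
        if bid.quantity == 0:
            bids.pop(0)
        if ask.quantity == 0:
            asks.pop(0)
    return trades

# Example: two providers offer capacity, one buyer takes 30 instances.
bids = [CloudOrder(price=0.12, quantity=30, timestamp=3)]
asks = [CloudOrder(price=0.10, quantity=20, timestamp=1),
        CloudOrder(price=0.11, quantity=20, timestamp=2)]
print(match_orders(bids, asks))   # [(0.1, 20), (0.11, 10)]
```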
242

Nearcritical percolation and crystallisation

Aumann, Simon 26 November 2014 (has links) (PDF)
This thesis contains results on the singularity of nearcritical percolation scaling limits, on a rigidity estimate, and on spontaneous rotational symmetry breaking. First, it is shown that, on the triangular lattice, the laws of scaling limits of nearcritical percolation exploration paths with different parameters are singular with respect to each other. This generalises a result of Nolin and Werner, using a similar technique. As a corollary, the singularity can even be detected from an infinitesimal initial segment. Moreover, nearcritical scaling limits of exploration paths are mutually singular under scaling maps. Second, full scaling limits of planar nearcritical percolation are investigated in the quad-crossing topology introduced by Schramm and Smirnov. It is shown that two nearcritical scaling limits with different parameters are singular with respect to each other. This result holds for percolation models on rather general lattices, including bond percolation on the square lattice and site percolation on the triangular lattice. Third, a rigidity estimate for 1-forms with non-vanishing exterior derivative is proven. It generalises a theorem on geometric rigidity of Friesecke, James and Müller. Finally, this estimate is used to prove a kind of spontaneous breaking of rotational symmetry for some models of crystals, which allow almost all kinds of defects, including unbounded defects as well as edge, screw and mixed dislocations, i.e. defects with Burgers vectors.
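For intuition about the objects being studied, a small sketch of sampling nearcritical site percolation on the triangular lattice at density p = 1/2 + λ n^(-3/4) (the critical density is 1/2, and n^(-3/4) is the nearcritical window); the square-grid-with-one-diagonal embedding of the triangular lattice is standard, and all names are chosen here for illustration.

```python
import random
from collections import deque

def largest_cluster(n, lam, seed=0):
    """Sample site percolation on an n x n piece of the triangular lattice
    at the nearcritical density p = 1/2 + lam * n**(-3/4) and return the
    size of the largest open cluster."""
    random.seed(seed)
    p = 0.5 + lam * n ** (-0.75)
    open_site = [[random.random() < p for _ in range(n)] for _ in range(n)]
    # Triangular lattice embedded in Z^2: each site also neighbours (1,1), (-1,-1).
    nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]
    seen = [[False] * n for _ in range(n)]
    best = 0
    for i in range(n):
        for j in range(n):
            if open_site[i][j] and not seen[i][j]:
                size, queue = 0, deque([(i, j)])
                seen[i][j] = True
                while queue:
                    x, y = queue.popleft()
                    size += 1
                    for dx, dy in nbrs:
                        u, v = x + dx, y + dy
                        if 0 <= u < n and 0 <= v < n and open_site[u][v] and not seen[u][v]:
                            seen[u][v] = True
                            queue.append((u, v))
                best = max(best, size)
    return best

# The parameter lam tilts the model sub- or supercritical within the window:
print([largest_cluster(200, lam) for lam in (-2.0, 0.0, 2.0)])
```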
243

Program extraction from coinductive proofs and its application to exact real arithmetic

Miyamoto, Kenji 27 December 2013 (has links) (PDF)
Program extraction originated in the field of constructive mathematics, and nowadays it attracts interest not only from mathematicians but also from computer scientists. From a mathematical viewpoint, its aim is to uncover the computational meaning of proofs, while from a computer-science viewpoint, its aim is the study of a method to obtain correct programs. Therefore, it is natural to have both theoretical results and a practical computer system for developing executable programs via program extraction. In this thesis we study the computational interpretation of constructive proofs involving inductive and coinductive reasoning. We interpret proofs by translating the computational content of proofs into executable program code. This translation is the procedure we call program extraction, and it is given through Kreisel's modified realizability. Here we study a proof-theoretic foundation for program extraction, enriching the proof assistant Minlog based on this theoretical improvement. Once a proof of a formula is written in Minlog, a program can be extracted from the proof by the system itself, and the extracted program can be executed in Minlog. Moreover, extracted programs are provably correct with respect to the proven formula, due to a soundness theorem which we prove. We put program extraction into practice by elaborating some case studies from exact real arithmetic within our formal theory. Although these case studies have been studied elsewhere, here we offer a formalization of them in Minlog, as well as machine extraction of the corresponding programs. / The method of program extraction has its origin in the field of constructive mathematics and has recently attracted much interest not only from mathematicians but also from computer scientists. From the standpoint of mathematics, its aim is to read off the computational meaning of proofs, while from the standpoint of computer science its aim is the study of a method for obtaining provably correct programs. It is therefore natural to have, besides theoretical results, a practical computer system with whose help executable programs can be developed via program extraction. In this doctoral thesis, a computational interpretation of constructive proofs with inductive and coinductive definitions is given and studied. The interpretation works by translating the computational content of proofs into a programming language. This translation is called program extraction; it is based on Kreisel's modified realizability. We study the proof-theoretic foundations of program extraction and extend the proof assistant Minlog on the basis of the theoretical results obtained. Once a formula has been formally proven in Minlog, a program can be extracted from the proof, and this extracted program can be executed in Minlog. Furthermore, extracted programs are provably correct with respect to the corresponding formula by virtue of a soundness theorem, which we prove. Within our formal theory we work through several case studies from exact real arithmetic known from the literature. We develop a complete formalization of the corresponding proofs and discuss the programs automatically extracted by Minlog.
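To give a flavour of the kind of corecursive programs that arise in exact real arithmetic, here is a classic example independent of Minlog: the average of two reals in [-1, 1], each represented as an infinite stream of signed digits -1, 0, 1, computed with a bounded carry. This is an illustrative sketch of the well-known algorithm, not a program extracted by the system.

```python
def average(x, y):
    """Signed-digit average (x + y) / 2 of two reals in [-1, 1], each given
    as an infinite digit stream (x = sum of a_i * 2**-(i+1), a_i in {-1,0,1}).
    Corecursive, with a carry c kept in {-2, ..., 2}."""
    c = next(x) + next(y)
    while True:
        s = 2 * c + next(x) + next(y)   # invariant: value = (s + tails)/8 ...
        if s >= 2:
            d, c = 1, s - 4             # emit 1, carry stays in range
        elif s <= -2:
            d, c = -1, s + 4            # emit -1
        else:
            d, c = 0, s                 # emit 0
        yield d

def constant(digit):
    """The real with all digits equal (e.g. constant(1) represents 1)."""
    while True:
        yield digit

def to_float(stream, n=30):
    """Evaluate the first n digits of a stream as a float approximation."""
    return sum(next(stream) * 2.0 ** -(i + 1) for i in range(n))

print(to_float(average(constant(1), constant(-1))))   # (1 + -1)/2 = 0.0
print(to_float(average(constant(1), constant(1))))    # (1 + 1)/2  = 1.0
```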
244

Forest-fire models and their critical limits

Graf, Robert 15 December 2014 (has links) (PDF)
Forest-fire processes were first introduced in the physics literature as a toy model for self-organized criticality. The term self-organized criticality describes interacting particle systems which are governed by local interactions and are inherently driven towards a perpetual critical state. As in equilibrium statistical physics, the critical state is characterized by long-range correlations, power laws, fractal structures and self-similarity. We study several different forest-fire models, whose common features are the following: All models are continuous-time processes on the vertices of some graph. Every vertex can be "vacant" or "occupied by a tree". We start with some initial configuration. Then the process is governed by two competing random mechanisms: On the one hand, vertices become occupied according to rate 1 Poisson processes, independently of one another. On the other hand, occupied clusters are "set on fire" according to some predefined rule. In this case the entire cluster is instantaneously destroyed, i.e. all of its vertices become vacant. The self-organized critical behaviour of forest-fire models can only occur on infinite graphs such as planar lattices or infinite trees. However, in all relevant versions of forest-fire models, the destruction mechanism is a priori only well-defined for finite graphs. For this reason, one starts with a forest-fire model on finite subsets of an infinite graph and then takes the limit along increasing sequences of finite subsets to obtain a new forest-fire model on the infinite graph. In this thesis, we perform this kind of limit for two classes of forest-fire models and investigate the resulting limit processes.
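A minimal simulation sketch of such a process on a finite grid, with growth at rate 1 and, as one common destruction rule, lightning striking each occupied site at rate λ and instantly burning its cluster; the function name and parameter values are illustrative assumptions.

```python
import random
from collections import deque

def forest_fire(n, lam, t_max, seed=0):
    """Continuous-time forest-fire model on an n x n grid: vacant sites
    become occupied at rate 1; each occupied site is struck by lightning
    at rate lam > 0, which instantly burns its whole occupied cluster."""
    rng = random.Random(seed)
    occupied = set()
    sites = [(i, j) for i in range(n) for j in range(n)]
    t = 0.0
    while t < t_max:
        n_occ = len(occupied)
        total_rate = (n * n - n_occ) * 1.0 + n_occ * lam
        t += rng.expovariate(total_rate)
        if rng.random() * total_rate < (n * n - n_occ) * 1.0:
            # growth event: occupy a uniformly chosen vacant site
            while True:
                s = rng.choice(sites)
                if s not in occupied:
                    occupied.add(s)
                    break
        else:
            # lightning event: burn the cluster of a uniform occupied site
            while True:
                s = rng.choice(sites)
                if s in occupied:
                    break
            queue, cluster = deque([s]), {s}
            while queue:
                x, y = queue.popleft()
                for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if nb in occupied and nb not in cluster:
                        cluster.add(nb)
                        queue.append(nb)
            occupied -= cluster
    return occupied

final = forest_fire(n=50, lam=0.01, t_max=20.0)
print(f"final density: {len(final) / 50**2:.3f}")
```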
245

A network QoS management architecture for virtualization environments

Metzker, Martin 08 December 2014 (has links) (PDF)
Network quality of service (QoS) and its management are concerned with providing, guaranteeing and reporting properties of data flows within computer networks. For the past two decades, virtualization has been becoming a very popular tool in data centres, yet without network QoS management capabilities. With virtualization, the management focus shifts from physical components and topologies towards virtual infrastructures (VIs) and their purposes. VIs are designed and managed as independent, isolated entities. Without network QoS management capabilities, VIs cannot offer the same services and service levels as physical infrastructures can, leaving VIs at a disadvantage with respect to applicability and efficiency. This thesis closes this gap and develops a management architecture enabling network QoS management in virtualization environments. First, requirements are derived from real-world scenarios, yielding a validation reference for the proposed architecture. After that, a life cycle for VIs and a taxonomy for network links and virtual components are introduced to align the network QoS management task with the general management of virtualization environments and to enable the creation of technology-specific adaptors for integrating the technologies and sub-services used in virtualization environments. The core aspect shaping the proposed management architecture is a management loop and its corresponding strategy for identifying and ordering sub-tasks. Finally, a prototypical implementation showcases that the presented management approach is suited for network QoS management and enforcement in virtualization environments. The architecture fulfils its purpose, meeting all identified requirements. Ultimately, network QoS management is one amongst many aspects of management in virtualization environments, and the architecture presented herein exposes interfaces to other management areas, whose integration is left as future work. / Network QoS management tasks comprise providing, guaranteeing and reporting flow properties in computer networks. Over the past two decades, virtualization has developed into a key technology for data centres, so far without capabilities for managing network QoS. The use of virtualization shifts the focus of data centre operation away from physical components and networks, towards virtual infrastructures (VIs) and their purposes. VIs are developed and managed as independent, mutually isolated entities. Without network QoS, VIs cannot be deployed as versatilely and efficiently as physical setups. This thesis closes this gap by developing a management architecture for network QoS in virtualization environments. First, requirements are derived from real-world scenarios, against which architectures can be evaluated. To delimit the specific task of network QoS management within the general management problem, a life cycle model for VIs is then presented. The development of a taxonomy for links and components enables technology-specific adaptors for integrating the technologies used in virtualization environments. The core idea behind the developed architecture is a feedback loop and its accompanying method for structuring and ordering subproblems. Finally, a prototypical implementation shows that this approach is suitable for managing and enforcing network QoS in virtualization environments. The architecture fulfils its purpose and the stated requirements. Ultimately, network QoS is one of many areas in the operation of virtualization environments; the architecture exposes interfaces to other areas, whose integration is left to future work.
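A minimal sketch of the kind of management loop such an architecture is built around (monitor a virtual link, compare against its QoS target, react through a technology-specific adaptor); the class and function names and the simple adaptation rule are illustrative assumptions, not the thesis's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class QoSTarget:
    """Hypothetical QoS goal for one virtual link."""
    link_id: str
    max_latency_ms: float
    min_bandwidth_mbps: float

def management_loop(targets, measure, enforce, rounds=10):
    """Generic monitor-analyze-plan-execute loop over virtual links.
    measure(link_id) returns (latency_ms, bandwidth_mbps); enforce(link_id,
    bandwidth_mbps) reconfigures the underlying technology via an adaptor."""
    for _ in range(rounds):
        for t in targets:
            latency, bandwidth = measure(t.link_id)          # monitor
            if latency > t.max_latency_ms or bandwidth < t.min_bandwidth_mbps:
                # plan: request a higher reservation; execute via the adaptor
                enforce(t.link_id, max(t.min_bandwidth_mbps, bandwidth * 1.2))

# toy adaptors: constant measurements, print-only enforcement
targets = [QoSTarget("vlink-1", max_latency_ms=20.0, min_bandwidth_mbps=100.0)]
management_loop(targets,
                measure=lambda link: (35.0, 80.0),
                enforce=lambda link, bw: print(f"{link}: reserve {bw:.0f} Mbit/s"),
                rounds=2)
```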
246

Mathematical models for financial bubbles

Nedelcu, Sorin 22 December 2014 (has links) (PDF)
Financial bubbles have been present in the history of financial markets from the early days up to the modern age. An asset is said to exhibit a bubble when its market value exceeds its fundamental valuation. Although this phenomenon has been thoroughly studied in the economic literature, a mathematical martingale theory of bubbles, based on the absence of arbitrage, has only recently been developed. In this dissertation, we aim to contribute further to the development of this theory. In the first part we construct a model that allows us to capture the birth of a financial bubble and to describe its behavior as an initial submartingale in the build-up phase, which then turns into a supermartingale in the collapse phase. To this purpose we construct a flow in the space of equivalent martingale measures and we study the shifting perception of the fundamental value of a given asset. In the second part of the dissertation, we study the formation of financial bubbles in the valuation of defaultable claims in a reduced-form setting. In our model a bubble is born due to investor heterogeneity. Furthermore, our study shows how changes in the dynamics of the defaultable claim's market price may lead to a different selection of the martingale measure used for pricing. In this way we are able to unify the classical martingale theory of bubbles with a constructive approach to the study of bubbles, based on the interactions between investors. / Financial bubbles have been present from the beginnings of financial markets up to the present day. An asset is said to exhibit a bubble as soon as its market value exceeds its fundamental valuation. Although this phenomenon has been treated extensively in the economic literature, a mathematical martingale theory of bubbles, based on the absence of arbitrage opportunities, has only recently been developed. The goal of this dissertation is to contribute to the further development of this theory. In the first part, we construct a model with which the birth of a financial bubble can be captured and its behaviour described, initially as a submartingale in the build-up phase, which then becomes a supermartingale in the collapse phase. To this end, we develop a flow in the space of equivalent martingale measures and study the corresponding shift of the asset's fundamental value. The second part of the dissertation deals with the formation of financial bubbles in the valuation of defaultable claims in a reduced-form setting. In our model, the birth of a bubble is a consequence of investor heterogeneity. Furthermore, our investigations show how changes in the dynamics of the market price of a defaultable claim can lead to a change of the martingale measure used for pricing. We are thereby able to unify the classical martingale theory of bubbles with a constructive approach to the study of bubbles based on the interactions between market participants.
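In the martingale theory of bubbles alluded to here, the bubble is the gap between market and fundamental value. A standard formulation (assuming zero interest rates, in the spirit of the Jarrow-Protter line of work, and not necessarily this thesis's exact definitions) reads:

```latex
% Market price S, fundamental value S^* under a risk-neutral measure Q:
% the fundamental value discounts the remaining cash flows D and the
% terminal payoff X_T; the bubble is the difference.
\begin{align*}
  S_t^* &= \mathbb{E}_Q\!\left[\int_t^T \mathrm{d}D_u + X_T \,\middle|\, \mathcal{F}_t\right],\\
  \beta_t &= S_t - S_t^*, \qquad \beta_t \ge 0,\ \beta_T = 0.
\end{align*}
% A strict bubble exists exactly when S is a strict local martingale
% (a local martingale that is not a true martingale) under Q.
```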
247

Integrating prior knowledge into factorization approaches for relational learning

Jiang, Xueyan 16 December 2014 (has links) (PDF)
An efficient way to represent domain knowledge is relational data, where information is recorded in the form of relationships between entities. Relational data has become ubiquitous for knowledge representation over the years, due to the fact that much real-world data is inherently interlinked. Some well-known examples of relational data are: the World Wide Web (WWW), a system of interlinked hypertext documents; the Linked Open Data (LOD) cloud of the Semantic Web, a collection of published data and their interlinks; and finally the Internet of Things (IoT), a network of physical objects with internal states and communication abilities. Relational data has been addressed by many different machine learning approaches, the most promising ones being in the area of relational learning, which is the focus of this thesis. While conventional machine learning algorithms consider entities as independent instances randomly sampled from some statistical distribution and represented as data points in a vector space, relational learning takes into account the overall network environment when predicting the label of an entity, an attribute value of an entity, or the existence of a relationship between entities. An important feature is that relational learning can exploit contextual information that is more distant in the relational network. As the volume and structural complexity of relational data increase constantly in the era of Big Data, scalability and modeling power become crucial for relational learning algorithms. Previous relational learning algorithms either provide an intuitive representation of the model, such as Inductive Logic Programming (ILP) and Markov Logic Networks (MLNs), or assume a set of latent variables to explain the observed data, such as the Infinite Hidden Relational Model (IHRM), the Infinite Relational Model (IRM) and factorization approaches. Models with intuitive representations often involve some form of structure learning, which leads to scalability problems due to a typically large search space. Factorizations are among the best-performing approaches for large-scale relational learning, since the algebraic computations can easily be parallelized and since they can exploit data sparsity. Previous factorization approaches exploit only patterns in the relational data itself; the focus of this thesis is to investigate how additional prior information (comprehensive information), either in the form of unstructured data (e.g., texts) or structured patterns (e.g., rules), can be incorporated into factorization approaches. The goal is, on the one hand, to enhance the predictive power of factorization approaches by involving prior knowledge in the learning and, on the other hand, to reduce the model complexity for efficient learning. This thesis contains two main contributions: The first contribution presents a general and novel framework for predicting relationships in multirelational data using a set of matrices describing the various instantiated relations in the network. The instantiated relations, derived or learnt from prior knowledge, are integrated as entities' attributes or entity-pairs' attributes into different adjacency matrices for the learning. All the information available is then combined in an additive way. Efficient learning is achieved using an alternating least squares approach exploiting sparse matrix algebra and low-rank approximation.
As an illustration, several algorithms are proposed to include information extraction, deductive reasoning and contextual information in matrix factorizations for the Semantic Web scenario and for recommendation systems. Experiments on various data sets are conducted for each proposed algorithm to show the improvement in predictive power gained by combining matrix factorizations with prior knowledge in a modular way. In contrast to a matrix, a 3-way tensor is a more natural representation for multirelational data, where entities are connected by different types of relations. A 3-way tensor is a three-dimensional array which represents the multirelational data by using the first two dimensions for entities and the third dimension for the different types of relations. In the thesis, an analysis of the computational complexity of tensor models shows that the decomposition rank is key to the success of an efficient tensor decomposition algorithm, and that the factorization rank can be reduced by including observable patterns. Based on these theoretical considerations, the second contribution of this thesis develops a novel tensor decomposition approach, the Additive Relational Effects (ARE) model, which combines the strengths of factorization approaches and prior knowledge in an additive way to discover different relational effects in the relational data. As a result, ARE consists of a decomposition part, which derives the strong relational learning effects from the highly scalable tensor decomposition approach RESCAL, and a Tucker-1 tensor, which integrates the prior knowledge as instantiated relations. An efficient least squares approach is proposed to compute the combined model ARE. The additive model contains weights that reflect the degree of reliability of the prior knowledge, as evaluated by the data. Experiments on several benchmark data sets show that the inclusion of prior knowledge can lead to better-performing models at a low tensor rank, with significant benefits for run-time and storage requirements. In particular, the results show that ARE outperforms state-of-the-art relational learning algorithms, including intuitive models such as MRC, an approach based on Markov Logic with structure learning; factorization approaches such as Tucker, CP, Bayesian Clustered Tensor Factorization (BCTF), the Latent Factor Model (LFM) and RESCAL; and other latent models such as the IRM. A final experiment on a Cora data set for paper topic classification shows the improvement of ARE over RESCAL in both predictive power and runtime performance, since ARE requires a significantly lower rank.
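A toy numpy sketch of the additive idea behind ARE: each relation slice X_k is approximated by a RESCAL-style factor term A R_k Aᵀ plus a weighted prior-knowledge slice w_k F_k, where the weights w_k reflect the reliability of the prior. The sketch fits the model by plain gradient descent instead of the thesis's alternating least squares, and all names are illustrative assumptions.

```python
import numpy as np

def are_fit(X, F, rank, steps=500, lr=0.01, seed=0):
    """Fit X[k] ~ A @ R[k] @ A.T + w[k] * F[k] by gradient descent on the
    squared Frobenius error.  X: (K, n, n) observed relation slices;
    F: (K, n, n) prior-knowledge slices (e.g. instantiated rules)."""
    rng = np.random.default_rng(seed)
    K, n, _ = X.shape
    A = rng.normal(scale=0.1, size=(n, rank))
    R = rng.normal(scale=0.1, size=(K, rank, rank))
    w = np.zeros(K)
    for _ in range(steps):
        for k in range(K):
            E = A @ R[k] @ A.T + w[k] * F[k] - X[k]   # residual
            # gradients of 0.5 * ||E||_F^2 with respect to each block
            gA = E @ A @ R[k].T + E.T @ A @ R[k]
            gR = A.T @ E @ A
            gw = np.sum(E * F[k])
            A -= lr * gA
            R[k] -= lr * gR
            w[k] -= lr * gw
    return A, R, w

# toy data: 2 relations over 10 entities; prior slice = transposed relation
n, K = 10, 2
rng = np.random.default_rng(1)
X = (rng.random((K, n, n)) < 0.2).astype(float)
F = np.transpose(X, (0, 2, 1))        # hypothetical prior knowledge
A, R, w = are_fit(X, F, rank=4)
print(w)   # learned reliability weights of the prior slices
```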
248

New challenges for interviewers when innovating social surveys

Korbmacher, Julie M. 15 December 2014 (has links) (PDF)
The combination of survey data with more objective information, such as administrative records, is a promising innovation within social science research. The advantages of such projects are manifold, but implementing them also bears challenges that must be considered. For example, survey respondents typically have to consent to the linking of external data sources, and interviewers have to feel comfortable with this task. This dissertation investigates whether and to what extent interviewers have an influence on the willingness of respondents to participate in two new projects within the Survey of Health, Ageing and Retirement in Europe (SHARE). Both projects had the goal of reducing the burden on respondents and increasing data quality by linking the survey data with additional, more objective data. Both linkages required the interviewers to collect respondents' written consent during the interview. The starting point of this dissertation is the question of what influences respondents' decisions to consent to linking their survey answers with administrative data. Three different areas are considered: characteristics of the respondents, of the interviewers, and of the interaction between respondents and interviewers. The results suggest that although respondent and household characteristics are important, a large part of the variation is explained by the interviewers. However, the information available about interviewers in SHARE is limited to a few demographic characteristics. Therefore, it is difficult to identify key interviewer characteristics that influence the consent process. To close this research gap, a detailed interviewer survey was developed and implemented in SHARE. This survey covers four dimensions of interviewer characteristics: interviewers' attitudes, their own behavior, their experiences with surveys and special measurements, and their expectations regarding their success. These dimensions are applied to several aspects of the survey process, such as unit and item nonresponse as well as the specific projects of the corresponding SHARE questionnaire. The information collected in the interviewer survey is then used to analyze interviewer effects on respondents' willingness to consent to the collection of blood samples, which are analyzed in a laboratory and the results linked with the survey data. Interviewers' experience and their expectations are of special interest, because these are two characteristics that can be influenced during interviewer training and selection. The results in this dissertation show that interviewers have a considerable effect on respondents' consent to the collection of biomarkers. Moreover, the information collected in the interviewer survey can explain most of the variance at the interviewer level. A motivation for linking survey data with more objective data is the assumption that survey data suffer from recall error. In the last step, the overlap between information collected in the survey and information provided in the administrative records is used to analyze recall error in the year of retirement. The comparison of the two datasets shows that most respondents remember the year they retired correctly. Nevertheless, a considerable proportion of respondents make recall errors. Characteristics can be identified which increase the likelihood of a misreport. However, the error seems to be unsystematic, meaning that no pattern of reporting the event of retirement too late or too early is found.
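To make "variation explained by the interviewers" concrete, a small simulation sketch: respondents nested within interviewers, consent driven by an interviewer-level random effect on the logit scale, and the interviewer-level variance share (the intraclass correlation on the latent scale) computed from the generating values. This uses purely illustrative synthetic data, not SHARE data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_interviewers, n_respondents = 100, 50    # respondents per interviewer

# latent-scale model: logit P(consent) = intercept + interviewer effect
sigma_u = 0.8                              # between-interviewer sd
u = rng.normal(0.0, sigma_u, n_interviewers)
logits = 0.5 + np.repeat(u, n_respondents)
consent = rng.random(logits.size) < 1 / (1 + np.exp(-logits))

# intraclass correlation on the latent logistic scale: var_u / (var_u + pi^2/3)
icc = sigma_u**2 / (sigma_u**2 + np.pi**2 / 3)
print(f"latent-scale ICC: {icc:.2f}")

# observable consequence: consent rates spread widely across interviewers
rates = consent.reshape(n_interviewers, n_respondents).mean(axis=1)
print(f"sd of interviewer-level consent rates: {rates.std():.2f}")
```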
249

Multivariate GARCH and dynamic copula models for financial time series

Grziska, Martin 30 September 2014 (has links) (PDF)
This thesis presents several non-parametric and parametric models for estimating the dynamic dependence between financial time series and evaluates their ability to precisely estimate risk measures. Furthermore, the different dependence models are used to analyze the integration of emerging markets into the world economy. In order to analyze numerous dependence structures and to discover possible asymmetries, two distinct model classes are investigated: multivariate GARCH and copula models. On the theoretical side, a new dynamic dependence structure for multivariate Archimedean copulas is introduced, which lifts the prevailing restriction to two dimensions and extends dynamic Archimedean copulas to more than two dimensions. On this basis, a new mixture copula is presented, combining the newly introduced multivariate dynamic dependence structure for Archimedean copulas with multivariate elliptical copulas. Simultaneously, a new process for modeling the time-varying weights of the mixture copula is introduced; this specification makes it possible to estimate various dependence structures within a single model. The empirical analysis of different portfolios shows that all equity portfolios and the bond portfolios of the emerging markets exhibit negative asymmetries, i.e. increasing dependence during market downturns. However, the portfolio consisting of developed-market bonds does not show any negative asymmetries. Overall, the analysis of the risk measures reveals that parametric models capture portfolio risk more precisely than non-parametric models. However, no single parametric model dominates all other models for all portfolios and risk measures. The investigation of dependence between equity and bond portfolios of developed countries, proprietary emerging markets, and secondary emerging markets reveals that secondary emerging markets are less integrated into the world economy than proprietary ones. Thus, secondary emerging markets are more suitable than proprietary ones for diversifying a portfolio consisting of developed equity or bond indices.
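A sketch of the mixture-copula idea in the bivariate case: a Clayton component (capturing lower-tail dependence, i.e. the negative asymmetry described above) mixed with a Gaussian component, with a time-varying weight. The logistic recursion for the weight is a placeholder assumption, not the process proposed in the thesis.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def clayton_density(u, v, theta):
    """Bivariate Clayton copula density (theta > 0, lower-tail dependence)."""
    return ((1 + theta) * (u * v) ** (-(1 + theta))
            * (u ** -theta + v ** -theta - 1) ** (-(2 + 1 / theta)))

def gaussian_density(u, v, rho):
    """Bivariate Gaussian copula density with correlation rho."""
    x, y = norm.ppf(u), norm.ppf(v)
    mvn = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
    return mvn.pdf([x, y]) / (norm.pdf(x) * norm.pdf(y))

def mixture_loglik(u, v, theta, rho, omega, beta):
    """Log-likelihood of a Clayton/Gaussian mixture copula whose Clayton
    weight lam_t evolves through a logistic recursion (toy dynamics)."""
    ll, lam = 0.0, 0.5
    for ut, vt in zip(u, v):
        c_t = (lam * clayton_density(ut, vt, theta)
               + (1 - lam) * gaussian_density(ut, vt, rho))
        ll += np.log(c_t)
        lam = 1.0 / (1.0 + np.exp(-(omega + beta * (lam - 0.5))))
    return ll

# toy pseudo-observations (in practice: GARCH-filtered, rank-transformed returns)
rng = np.random.default_rng(0)
u, v = rng.random(200), rng.random(200)
print(mixture_loglik(u, v, theta=2.0, rho=0.3, omega=0.0, beta=1.0))
```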
250

Penalized regression for discrete structures

Oelker, Margret-Ruth 08 January 2015 (has links) (PDF)
Penalized regression models offer a way to integrate the selection of covariates into the estimation of a model. Penalized approaches are particularly suited to accounting for complex structures in the covariates of a model. This thesis is concerned with various penalization approaches for discrete structures, where the term "discrete structure" denotes all kinds of categorical covariates, effect-modifying categorical covariates, and group-specific effects in hierarchically structured data. What they have in common is that they can lead to a comparatively large number of coefficients to be estimated. There is therefore particular interest in learning which categories of a covariate influence the response, and which categories have different or similar effects on the response. Categories with similar effects can be identified, for example, by fused Lasso penalties. However, some existing approaches are restricted to the linear model. The present thesis transfers these approaches to the class of generalized linear models, covering computational as well as theoretical aspects. Concretely, a fused Lasso penalty for effect-modifying categorical covariates in generalized linear models is proposed. It makes it possible to select covariates and to fuse categories of a covariate. Group-specific effects, which account for heterogeneity in hierarchically structured data, are a special case of such an effect-modifying categorical variable. Here the penalized approach offers two substantial advantages: (i) in contrast to mixed models, which make stronger assumptions, the degree of heterogeneity can easily be reduced; (ii) estimation is more efficient than in the unpenalized approach. In orthonormal settings, fused Lasso penalties can have conceptual disadvantages. As an alternative, an L0 penalty for discrete structures in generalized linear models is discussed, where the so-called L0 "norm" is an indicator function for non-zero arguments. As a penalty, this function is as interesting as it is challenging. If an approximation of the L0 norm is considered as a loss function, in the limit the conditional mode of a response is estimated. / Penalties are an established approach to stabilizing estimation and selecting predictors in regression models. Penalties are especially useful when discrete structures matter. In this thesis, the term "discrete structure" subsumes all kinds of categorical effects, categorical effect modifiers and group-specific effects for hierarchical settings. Discrete structures can be challenging, as they need to be coded and as they can result in a huge number of coefficients. Moreover, users are interested in which levels of a discrete covariate are to be distinguished with respect to the response of a model, or in whether some levels have the same impact on the response. One wants to detect non-influential coefficients and to allow for coefficients with the same estimates. That requires carefully tailored penalization, as provided, for example, by different variations of the fused Lasso. However, the reach of many existing methods is restricted, as the response is mostly assumed to be Gaussian. In this thesis, some efforts are made to extend these approaches.
The focus is on appropriate penalization strategies for discrete structures in generalized linear models (GLMs). Lasso-type penalties in GLMs require special estimation procedures. In a first step, an existing Fisher scoring algorithm that allows different types of penalties to be combined in one model is generalized. This algorithm provides the computational basis for the subsequent topics. In a second step, varying coefficients with categorical effect modifiers are considered, and existing methodology for linear models is extended to GLMs. In hierarchical settings, fixed effects models, which are also called group-specific models and which are a special case of categorical effect modifiers, are a common choice to account for heterogeneity in the data. Applying the proposed penalization techniques for categorical effect modifiers to hierarchical settings offers some benefits: in comparison to mixed models, the approach is able to fuse second-level units easily; in comparison to unpenalized group-specific models, efficiency is gained. In a third step, fused Lasso-type penalties for discrete structures are considered in more detail. Especially in orthonormal settings, Lasso-type penalties for categorical effects have some drawbacks regarding the clustering of the coefficients. To overcome these problems, an L0 penalty for discrete structures is proposed. Again, computational issues are met by a quadratic approximation. This approximation is useful not only in the context of penalized regression for discrete structures, but also when an approximation of the L0 norm is employed as a loss function, that is, for regression models that approximate the conditional mode of a response. For linear predictors, a close link to kernel methods allows showing that the proposed estimator is consistent and asymptotically normal. Regression models with semiparametric predictors are possible.
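A sketch of the computational idea behind such quadratic approximations: inside penalized Fisher scoring (IRLS) for a logistic GLM, a fused penalty λ Σ|(Dβ)_j| is locally approximated via |d| ≈ d²/√(d² + ε), so that each step solves a ridge-type system. The layout (dummy-coded levels, adjacent-difference matrix D) and all names are illustrative, not the thesis's actual implementation.

```python
import numpy as np

def fused_logistic(X, y, D, lam, eps=1e-6, n_iter=50):
    """Penalized Fisher scoring for a logistic model with a fused penalty
    lam * sum_j |(D beta)_j|, where D maps coefficients to differences of
    adjacent category effects.  The non-smooth penalty is handled by the
    local quadratic approximation |d| ~ d^2 / sqrt(d^2 + eps)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1 / (1 + np.exp(-eta))
        W = mu * (1 - mu)                              # IRLS weights
        z = eta + (y - mu) / np.maximum(W, 1e-10)      # working response
        d = D @ beta
        A = np.diag(lam / np.sqrt(d ** 2 + eps))       # penalty weights
        H = X.T @ (W[:, None] * X) + D.T @ A @ D       # ridge-type system
        beta = np.linalg.solve(H, X.T @ (W * z))
    return beta

# toy example: one categorical covariate with 5 dummy-coded levels,
# where levels 0-1 and levels 2-4 truly share an effect
rng = np.random.default_rng(0)
levels = rng.integers(0, 5, size=400)
X = np.eye(5)[levels]                                  # dummy coding
true = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
y = (rng.random(400) < 1 / (1 + np.exp(-X @ true))).astype(float)
D = np.diff(np.eye(5), axis=0)                         # adjacent differences
print(fused_logistic(X, y, D, lam=5.0).round(2))       # fused estimates
```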
