1

Cross-polity time-series data.

January 1971 (has links)
Assembled by Arthur S. Banks and the staff of the Center for Comparative Political Research, State University of New York at Binghamton. / Includes bibliographical references.
2

Estimation of covariance, correlation and precision matrices for high-dimensional data

Huang, Na January 2016 (has links)
The thesis concerns the estimation of large correlation and covariance matrices and their inverses, and proposes two new methods. First, tilting-based methods are proposed to estimate the precision matrix of a p-dimensional random variable X when p is possibly much larger than the sample size n. Each 2 by 2 block of the precision matrix indexed by (i, j) can be estimated by inverting the pairwise sample conditional covariance matrix of X_i and X_j, controlling for all the other variables. In the high-dimensional setting, however, including too many or irrelevant controlling variables may distort the results. To determine the controlling subsets, the tilting technique is applied to measure the contribution of each remaining variable to the covariance matrix of X_i and X_j, and only the (hopefully) highly relevant remaining variables are placed in the controlling subsets. Four types of tilting-based methods are introduced, their properties are demonstrated, and simulation results are presented under different scenarios for the underlying precision matrix. The second method, NOVEL Integration of the Sample and Thresholded covariance estimators (NOVELIST), performs shrinkage of the sample covariance (correlation) matrix towards its thresholded version. The sample covariance (correlation) component is non-sparse and can be low-rank in high dimensions; the thresholded component is sparse, and its addition ensures the stable invertibility of NOVELIST. The benefits of the NOVELIST estimator include simplicity, ease of implementation, computational efficiency, and the fact that its application avoids eigenanalysis. We obtain an explicit convergence rate in the operator norm over a large class of covariance (correlation) matrices when p and n satisfy log p/n → 0. In empirical comparisons with several popular estimators, NOVELIST performs well in estimating covariance and precision matrices over a wide range of models. An automatic algorithm for NOVELIST is developed, and comprehensive applications and real-data examples are presented.
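A minimal sketch of the shrink-towards-thresholded idea behind NOVELIST: combine the sample covariance with a thresholded copy of itself. The soft-thresholding rule and the fixed tuning parameters lam and delta below are illustrative assumptions; the thesis develops a data-driven, automatic choice that this toy does not reproduce.

```python
import numpy as np

def soft_threshold(a, lam):
    """Soft-threshold the entries of a matrix towards zero."""
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def novelist(X, lam=0.2, delta=0.5):
    """NOVELIST-style estimate: a convex combination of the sample
    covariance and its (off-diagonal) thresholded version.
    lam: threshold level; delta: shrinkage weight (assumed fixed here)."""
    S = np.cov(X, rowvar=False)           # p x p sample covariance
    T = soft_threshold(S, lam)            # sparse, thresholded component
    np.fill_diagonal(T, np.diag(S))       # leave variances untouched
    return (1.0 - delta) * S + delta * T

# toy usage: n = 50 observations of p = 100 variables (p > n)
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 100))
Sigma_hat = novelist(X, lam=0.3, delta=0.7)
print(Sigma_hat.shape)  # (100, 100)
```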
3

Schätzer des Artenreichtums bei speziellen Erscheinungshäufigkeiten / Species richness estimation

Englert, Stefan January 2009 (has links) (PDF)
In many problems where a population is divided into different classes, it is the number of classes rather than their relative sizes that matters. A biologist, for example, wants to know how many species a genus comprises; a numismatist, how many coins or mints existed in an era; a computer scientist, how many distinct entries a very large database contains; a programmer, how many bugs a piece of software has; and a Germanist, how large an author's vocabulary was or is. This species richness is the simplest and most intuitive way to characterize a population. However, only in collections where the total number of elements is known and relatively small can the number of distinct species be determined by complete enumeration; in all other cases, the species count must be estimated.
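As an illustration of estimating richness from occurrence frequencies, here is a sketch of the classical Chao1 estimator, which infers the number of unseen classes from the counts of singletons and doubletons. Chao1 is one well-known estimator of this kind, not necessarily one of those analyzed in the thesis.

```python
from collections import Counter

def chao1(labels):
    """Bias-corrected Chao1 lower-bound estimate of species richness.

    labels: iterable of observed class labels, one entry per sighting.
    Uses the number of singletons (f1) and doubletons (f2):
        S_hat = S_obs + f1 * (f1 - 1) / (2 * (f2 + 1))
    which stays defined even when f2 = 0."""
    counts = Counter(labels)
    s_obs = len(counts)
    f1 = sum(1 for c in counts.values() if c == 1)
    f2 = sum(1 for c in counts.values() if c == 2)
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

# toy usage: 12 sightings of 6 observed species
sample = ["a", "a", "b", "c", "c", "c", "d", "e", "e", "f", "a", "b"]
print(chao1(sample))  # 6 observed species plus an estimate of unseen ones
```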
4

High inter-individual diversity of point mutations, insertions, and deletions in human influenza virus nucleoprotein-specific memory B cells

Reiche, Sven, Dwai, Yamen, Bussmann, Bianca M., Horn, Susanne, Sieg, Michael, Jassoy, Christian January 2015 (has links)
The diversity of virus-specific antibodies and of B cells among different individuals is unknown. Using single-cell cloning of antibody genes, we generated recombinant human monoclonal antibodies from influenza nucleoprotein (NP)-specific memory B cells in four adult humans with and without preceding influenza vaccination. We examined the diversity of the antibody repertoires and found that NP-specific B cells used numerous immunoglobulin genes. The heavy chains (HCs) originated from 26 and the kappa light chains (LCs) from 19 different germline genes. Matching HCs and LCs gave rise to 43 genetically distinct antibodies that bound influenza NP. The median lengths of the CDR3 of the HC, kappa and lambda LC were 14, 9 and 11 amino acids, respectively. We identified changes at 13.6% of the amino acid positions in the V gene of the antibody heavy chain, at 8.4% in the kappa and at 10.6% in the lambda V gene. We identified somatic insertions or deletions in 8.1% of the variable genes. We also found several small groups of clonal relatives that were highly diversified. Our findings demonstrate broadly diverse memory B cell repertoires for the influenza nucleoprotein. We found extensive variation within individuals, with a high number of point mutations, insertions, and deletions, and extensive clonal diversification. Thus, structurally conserved proteins can elicit broadly diverse and highly mutated B-cell responses.
5

Advanced digital financial reporting formats: The determinants and consequences of HTML usage and XBRL adoption

Pieper, Hendrik 31 July 2023 (has links)
This dissertation comprises five essays on advanced digital financial reporting formats, their determinants, and their impact on capital markets. The first chapter outlines the dissertation, introduces the research context, and provides a framework for the topic of advanced digital reporting formats in research and practice. Chapters two and three analyze the voluntary usage and empirical determinants of online financial reports (OFR) based on HTML and their impact on the information environment in Europe. The fourth chapter analyzes the quality of OFR and its corporate governance determinants. Chapter five reviews the literature on the worldwide adoption of XBRL, its potential impact, and the respective influencing factors. The closing chapter presents a qualitative study based on semi-structured interviews with experts from large listed firms as well as auditing and advisory companies in Germany, conducted in the context of the mandatory adoption of digital reporting formats; it addresses organizational and process integration as well as financial reporting and communication considerations.
6

Le nombre de sujets dans les panels d'analyse sensorielle : une approche base de données / The number of subjects in sensory panels : a database approach

Mammasse, Nadra 22 March 2012 (has links)
The costs associated with sensory evaluation increase with the number of panelists enrolled, and panel size largely determines the cost of descriptive and hedonic sensory studies. Classical power computation can be used to derive the minimal number of subjects of a sensory panel so as to control both type I (α risk) and type II (β risk) errors. This computation, however, requires estimates of the size of the product effect to be detected and of the residual variability of the ANOVA model used, and both are difficult for the sensory analyst to specify a priori. This work provides estimates of these two parameters through the analysis of about a thousand descriptive and several hundred hedonic studies collected in two databases, SensoBase and PrefBase. The meta-analysis quantified both parameters and made the calculation of panel size possible: tables of recommended panel sizes are proposed for three levels each of product effect size, residual variability, and the two risks, separately for descriptive and hedonic tests. A second approach, based on resampling within numerous datasets, was applied to both descriptive and hedonic studies: k subjects are removed from the original panel of N and the resulting loss of information in product comparisons is measured. On average, descriptive panels could be reduced by a quarter, but this reduction depends strongly on the type of attribute considered. For hedonic panels, the adequate size varies widely, driven much more by the nature and size of the liking differences between the products compared, which we expect to reflect their sensory complexity, than by the heterogeneity of individual preferences. Finally, the resampling approach was applied to the need for replicates with trained sensory panels; the results suggest that replicates are no longer necessary at the testing phase, that is, once the panel is trained.
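A minimal sketch of the classical power computation the abstract alludes to, using a two-sided paired z-approximation. The thesis works with ANOVA-based designs and derives the effect size and residual variability empirically from SensoBase and PrefBase, so the formula and the example numbers below are simplifying assumptions.

```python
import math
from scipy.stats import norm

def panel_size(delta, sigma, alpha=0.05, beta=0.20):
    """Approximate panel size for a two-sided paired comparison:
    detect a product difference delta given residual std sigma,
    with type I risk alpha and type II risk beta.
        n ~ ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)^2"""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(1 - beta)
    return math.ceil(((z_a + z_b) * sigma / delta) ** 2)

# e.g. detect a 0.5-point difference on a 10-point scale, residual std 1.2
print(panel_size(delta=0.5, sigma=1.2))  # ~ 46 panelists
```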
7

Advances in applied nonlinear time series modeling

Khan, Muhammad Yousaf 17 June 2015 (has links) (PDF)
Time series modeling and forecasting are of vital importance in many real-world applications. Nonlinear time series models have recently gained much attention because linear models face various limitations in many empirical applications. In this thesis, a large variety of standard and extended linear and nonlinear time series models is considered in order to compare their out-of-sample forecasting performance. We examine the out-of-sample forecast accuracy of linear Autoregressive (AR), Heterogeneous Autoregressive (HAR), Autoregressive Conditional Duration (ACD), Threshold Autoregressive (TAR), Self-Exciting Threshold Autoregressive (SETAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR) and Artificial Neural Network (ANN) models, as well as the extended Heterogeneous Threshold Autoregressive (HTAR) and Heterogeneous Self-Exciting Threshold Autoregressive (HSETAR) models, on financial, economic and seismic time series. We extend previous studies by also using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and comparing their forecasting accuracy with that of linear models on the same series. Unlike previous studies, which typically specify threshold models with an internal threshold variable, we specify them with external transition variables and compare their out-of-sample forecasting performance against the linear benchmark HAR and AR models. To our knowledge, this is the first study of its kind to extend the use of linear and nonlinear time series models to seismology, utilizing seismic data from the Hindu Kush region of Pakistan. The question addressed is whether nonlinear models produce 1- through 4-step-ahead forecasts that improve upon linear models; the answer is that linear models mostly yield more accurate forecasts for the financial, economic and seismic series considered. Nevertheless, when modeling and forecasting the financial (DJIA, FTSE100, DAX and Nikkei), economic (USA GDP growth rate) and seismic (earthquake magnitudes, consecutive elapsed times and consecutive distances between earthquakes in the Hindu Kush region) series, using various external threshold variables in threshold models improves their out-of-sample forecasting performance, suggesting that constructing nonlinear models with external threshold variables has a positive effect on forecasting accuracy. Similarly, for the seismic series, TVAR and VAR models in some cases provide improved forecasts over the benchmark linear AR model. These findings could help bridge the analytical gap between statistics and seismology through the potential use of linear and nonlinear time series models.
Secondly, we extend the linear HAR model into a nonlinear framework, the Heterogeneous Threshold Autoregressive (HTAR) model, to model and forecast series that simultaneously exhibit nonlinearity and long-range dependence. Applied to the financial data (DJIA, FTSE100, DAX and Nikkei), the HTAR model improves on the 1-step-ahead forecasting performance of the linear HAR model for the DJIA, and for the DJIA the combination of forecasts from the HTAR and linear HAR models improves on the benchmark HAR model. Furthermore, we conduct a simulation study to assess the performance of the HAR and HSETAR models in the presence of spurious long-memory-type phenomena in a time series. The simulation results show that the HAR model is unable to discriminate between true and spurious long-memory-type phenomena, whereas the extended HSETAR model is capable of detecting the spurious kind; this provides evidence that the HSETAR model is preferable when the underlying series is suspected to contain spurious long-memory-type phenomena. In sum, this thesis offers a guide for researchers who must choose among the wide variety of models discussed here for modeling and forecasting economic, financial and, especially, seismic time series.
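To make the threshold idea concrete, here is a sketch that simulates a two-regime SETAR(2;1,1) process and recovers the threshold by a least-squares grid search. It uses the internal threshold variable y_{t-1}; the thesis's external-transition-variable specifications would replace y_{t-1} in the regime indicator with an exogenous series.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_setar(n=500, r=0.0, phi_low=0.6, phi_high=-0.4):
    """Two-regime SETAR(2;1,1): the AR(1) coefficient switches
    depending on whether y_{t-1} exceeds the threshold r."""
    y = np.zeros(n)
    for t in range(1, n):
        phi = phi_low if y[t - 1] <= r else phi_high
        y[t] = phi * y[t - 1] + rng.standard_normal()
    return y

def estimate_threshold(y, grid):
    """Grid-search the threshold: for each candidate r, fit an AR(1)
    by least squares in each regime and keep the r with minimal SSR."""
    x, z = y[:-1], y[1:]
    best_r, best_ssr = None, np.inf
    for r in grid:
        ssr = 0.0
        for mask in (x <= r, x > r):
            if mask.sum() < 10:        # require enough points per regime
                ssr = np.inf
                break
            phi = (x[mask] @ z[mask]) / (x[mask] @ x[mask])
            ssr += np.sum((z[mask] - phi * x[mask]) ** 2)
        if ssr < best_ssr:
            best_r, best_ssr = r, ssr
    return best_r

y = simulate_setar()
r_hat = estimate_threshold(y, grid=np.quantile(y, np.linspace(0.15, 0.85, 71)))
print(r_hat)  # should be close to the true threshold 0.0
```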
8

Extensions of semiparametric expectile regression

Schulze Waltrup, Linda 11 February 2015 (has links) (PDF)
Expectile regression can be seen as an extension of available (mean) regression models, as it describes more general properties of the response distribution. This thesis introduces expectile regression and presents new extensions of existing semiparametric regression models. The dissertation consists of four central parts. First, the one-to-one connection between expectiles, the cumulative distribution function (cdf) and quantiles is used to calculate the cdf and quantiles from a fine grid of expectiles; quantiles-from-expectiles estimates are introduced and compared with direct quantile estimates regarding efficiency. Second, a method to estimate non-crossing expectile curves based on splines is developed. The case of clustered or longitudinal observations is also handled by introducing random individual components, which extends mixed models to mixed expectile models. Third, quantiles-from-expectiles estimates are proposed in the framework of unequal probability sampling. All methods are implemented and available in the package expectreg for the open-source software R. Fourth, a description of the package expectreg is given at the end of the thesis.
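For intuition, a sketch of the sample tau-expectile computed by asymmetric least squares: the tau-expectile is the fixed point of a weighted mean with weights tau above and (1 - tau) below the current estimate. The thesis works with spline-based expectile curves in the R package expectreg; this standalone Python toy only illustrates the underlying loss.

```python
import numpy as np

def expectile(y, tau=0.5, tol=1e-10, max_iter=100):
    """tau-expectile of a sample via iteratively reweighted means:
    solves the first-order condition of the asymmetric squared loss."""
    m = np.mean(y)
    for _ in range(max_iter):
        w = np.where(y > m, tau, 1.0 - tau)
        m_new = np.sum(w * y) / np.sum(w)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

rng = np.random.default_rng(1)
y = rng.standard_normal(10_000)
print(expectile(y, 0.5))   # ~ the sample mean (the 0.5-expectile is the mean)
print(expectile(y, 0.95))  # upper-tail expectile, between mean and 0.95-quantile
```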
9

Die Auswirkung sozioökonomischer Faktoren auf die CO2-Bilanz im Zusammenhang zum ökologischen Fußabdruck

Nguyen, Phuong Anh 26 October 2021 (has links)
Interest in a more sustainable world grows from day to day. This topic can no longer be ignored in the 21st century: climate change and its consequences increasingly threaten life on Earth, and these effects are amplified by human activity, since consumption and the constant pursuit of economic growth use up the Earth's natural resources faster than its natural cycles can replenish them. This thesis therefore focuses on the ecological footprint, a sustainability indicator that measures human consumption of ecological resources. Humanity currently consumes 70 percent more resources than the Earth's natural capacity provides. To change this trend, significant influencing factors must be identified. Using multiple linear regression analysis, the size of a footprint is tested for association with various socioeconomic variables; the carbon footprint, a component of the ecological footprint, is included in the analysis as well, which allows consistent conclusions to be drawn. A central goal of this work is to examine the development between 2000 and 2017. Nine independent variables were analyzed and a total of eight regression models were formulated, with data covering 100 countries. The comparison revealed clear differences among the influencing factors: industrialized countries are responsible for the bulk of the current situation, and economic factors such as gross domestic product have the greatest influence on the size of the ecological footprint. The introduction and promotion of more environmentally friendly and efficient technologies should therefore be one of the main goals for decision-makers in politics and business in order to reduce the use of natural capital.
Contents: Chapter 1 Introduction; Chapter 2 The Ecological Footprint (concept, consumption categories, land categories, global hectares, overshoot, current status, goal); Chapter 3 Theoretical Foundations (multivariate analysis methods; regression analysis: model formulation, estimation of the regression function, global goodness of fit via R², F-statistic and standard error, tests of the regression coefficients, checks of the model assumptions); Chapter 4 Application of the Regression Analysis (data basis; the same regression steps applied); Chapter 5 Results and Interpretation; Chapter 6 Critical Appraisal and Conclusion; References; Appendix.
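A sketch of the kind of multiple linear regression the thesis applies, with the global (R², F-statistic) and local (coefficient t-tests) checks it describes. The variable names, coefficients and simulated data below are hypothetical stand-ins, not the thesis's nine regressors or its 100-country dataset.

```python
import numpy as np
import statsmodels.api as sm

# hypothetical country-level data: GDP per capita, urbanization rate,
# energy use, with the ecological footprint per capita as response
rng = np.random.default_rng(7)
n = 100  # the thesis's data cover 100 countries
gdp = rng.lognormal(9, 1, n)
urban = rng.uniform(20, 95, n)
energy = 0.002 * gdp + rng.normal(0, 5, n)
footprint = 0.0002 * gdp + 0.01 * urban + 0.05 * energy + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([gdp, urban, energy]))
model = sm.OLS(footprint, X).fit()
print(model.rsquared, model.fvalue)  # global goodness of fit: R^2, F-statistic
print(model.params, model.pvalues)   # local checks: coefficients and t-tests
```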
10

Temperaturmodellierung durch Shape Invariant Modeling

Bartl, Stine 05 February 2016 (has links)
Using temperature data from the Berlin-Tegel site, the temperature curve is modeled over a period of 15 years. Two different regression approaches are compared. First, the data are fitted by a time series analysis; this parametric procedure is then compared with the non-parametric method of shape invariant modeling. Both approaches rest on the method of least squares. The time series analysis, as a special case of regression analysis, is implemented with a Fourier series in order to capture the periodic course of the function. In the shape invariant model, the individual regression functions are derived from a reference curve, with a base day serving as the reference; the parameter estimates are obtained by axis transformations.
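A sketch of the parametric side of the comparison: least-squares fitting of a Fourier-series regression to a seasonal temperature series. The simulated data, the period and the number of harmonics are assumptions; the shape-invariant-model step (aligning daily curves to a reference day via axis transformations) is not reproduced here.

```python
import numpy as np

def fourier_design(t, period=365.25, harmonics=2):
    """Design matrix with intercept, linear trend, and sine/cosine
    pairs for the first few harmonics of the annual cycle."""
    cols = [np.ones_like(t), t]
    for k in range(1, harmonics + 1):
        w = 2 * np.pi * k * t / period
        cols += [np.sin(w), np.cos(w)]
    return np.column_stack(cols)

# toy daily temperatures over 15 years with an annual cycle plus noise
t = np.arange(15 * 365).astype(float)
rng = np.random.default_rng(3)
temp = 9 + 10 * np.sin(2 * np.pi * t / 365.25 - 1.9) + rng.normal(0, 3, t.size)

X = fourier_design(t)
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)  # least-squares fit
fitted = X @ beta
print(beta[:2])  # estimated mean level (~9) and near-zero trend
```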
