  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
361

Analýza a komparace inflace v ČR a SRN / Inflation analysis and its comparison in the Czech Republic and Germany

Maxa, Jan January 2012 (has links)
The aim of this paper is to analyse and compare inflation and its dynamics in two countries -- the Czech Republic and Germany -- by applying a particular class of econometric models. The first part of the paper is dedicated to the economic theory of inflation -- fundamental terms, measurement methods and inflation targeting. Monetary policy in the Czech Republic and Germany is also briefly introduced. The next chapter describes the econometric framework used in this paper -- the vector autoregression (VAR) model. In connection with VAR models, Granger causality, the impulse response function, cointegration and the error correction model are discussed as well. The empirical part applies the selected models to real time series of macroeconomic indicators. In addition to interpreting the results, forecasts are also presented.
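Since the abstract leans on the VAR framework, a minimal sketch may help; the series, coefficients and dimensions below are invented for illustration, not the thesis's actual inflation data:

```python
import numpy as np

# Hypothetical two-variable VAR(1): y_t = A y_{t-1} + e_t
rng = np.random.default_rng(0)
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])          # assumed "true" coefficient matrix
T = 2000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=2)

# OLS estimation: regress y_t on y_{t-1} (lstsq solves X B = Y, so B = A^T)
A_hat = np.linalg.lstsq(y[:-1], y[1:], rcond=None)[0].T

# Impulse responses at horizon h are simply powers of the estimated matrix
irf = [np.linalg.matrix_power(A_hat, h) for h in range(5)]
```

With enough observations, least squares recovers the coefficient matrix closely, which is what underpins the impulse-response and forecasting steps the abstract mentions.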
362

Zabezpečení přenosu dat proti dlouhým shlukům chyb / Protection of data transmission against long error bursts

Malach, Roman January 2008 (has links)
This Master's thesis discusses the protection of data transmission against long error bursts. The data is transmitted through a channel with defined error parameters: an error-free interval of 2000 bits and burst errors up to 250 bits long. One aim of this work is to compile a set of candidate methods for realizing a data-correction system. A preliminary selection is made from the best-known codes, which are divided into several categories; interleaving is employed as well. Only one code from each category advances to the final round of selection. The codes are then compared, and the best three are simulated in Matlab to verify correct operation. Of these three, one is chosen as optimal with regard to practical realization. Two implementation options exist, hardware and software; the latter proved more suitable here. Given the C language and current computer architectures, an 8-bit element size is convenient, which is why the code RS(255, 191), operating on 8-bit symbols, is optimal. A codec for this code, comprising the encoder and decoder, was therefore implemented in C. A further program simulates the error channel. Finally, the results are presented through several examples.
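The interleaving step can be sketched generically; the depth below is a plausible choice and the burst is measured in symbols for simplicity -- this is a schematic Python illustration, not the thesis's actual C codec:

```python
def interleave(symbols, depth):
    """Block interleaver: write `depth` codewords as rows, send column by column."""
    width = len(symbols) // depth
    rows = [symbols[i * width:(i + 1) * width] for i in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(symbols, depth):
    """Inverse operation: regroup the received stream back into codewords."""
    width = len(symbols) // depth
    cols = [symbols[i * depth:(i + 1) * depth] for i in range(width)]
    return [cols[c][r] for r in range(depth) for c in range(width)]

# With depth 8, a burst of 250 consecutive channel symbols is spread so that
# each 255-symbol codeword suffers at most ceil(250/8) = 32 errors -- exactly
# the symbol-correction capability of RS(255, 191).
depth, n = 8, 255
stream = interleave(list(range(depth * n)), depth)
hit = {stream[i] for i in range(100, 350)}   # a 250-symbol burst on the channel
codewords = [list(range(r * n, (r + 1) * n)) for r in range(depth)]
worst = max(sum(s in hit for s in cw) for cw in codewords)
```

Spreading the burst across codewords is what lets a moderately powerful block code survive bursts far longer than its per-codeword correction capability.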
363

Correction de données de séquençage de troisième génération / Error correction of third-generation sequencing data

Morisse, Pierre 26 September 2019 (has links)
Les objectifs de cette thèse s’inscrivent dans la large problématique du traitement des données issues de séquenceurs à très haut débit, et plus particulièrement des reads longs, issus de séquenceurs de troisième génération. Les aspects abordés dans cette problématique se concentrent principalement sur la correction des erreurs de séquençage, et sur l’impact de la correction sur la qualité des analyses sous-jacentes, plus particulièrement sur l’assemblage. Dans un premier temps, l’un des objectifs de cette thèse est de permettre d’évaluer et de comparer la qualité de la correction fournie par les différentes méthodes de correction hybride (utilisant des reads courts en complément) et d’auto-correction (se basant uniquement sur l’information contenue dans les reads longs) de l’état de l’art. Une telle évaluation permet d’identifier aisément quelle méthode de correction est la mieux adaptée à un cas donné, notamment en fonction de la complexité du génome étudié, de la profondeur de séquençage, ou du taux d’erreurs des reads. De plus, les développeurs peuvent ainsi identifier les limitations des méthodes existantes, afin de guider leurs travaux et de proposer de nouvelles solutions visant à pallier ces limitations. Un nouvel outil d’évaluation, proposant de nombreuses métriques supplémentaires par rapport au seul outil disponible jusqu’alors, a ainsi été développé. Cet outil, combinant une approche par alignement multiple à une stratégie de segmentation, permet également une réduction considérable du temps nécessaire à l’évaluation. À l’aide de cet outil, un benchmark de l’ensemble des méthodes de correction disponibles est présenté, sur une large variété de jeux de données, de profondeur de séquençage, de taux d’erreurs et de complexité variable, de la bactérie A. baylyi à l’humain. 
Ce benchmark a notamment permis d’identifier deux importantes limitations des outils existants : les reads affichant des taux d’erreurs supérieurs à 30%, et les reads de longueur supérieure à 50 000 paires de bases. Le deuxième objectif de cette thèse est alors la correction des reads extrêmement bruités. Pour cela, un outil de correction hybride, combinant différentes approches de l’état de l’art, a été développé afin de surmonter les limitations des méthodes existantes. En particulier, cet outil combine une stratégie d’alignement des reads courts sur les reads longs à l’utilisation d’un graphe de de Bruijn, ayant la particularité d’être d’ordre variable. Le graphe est ainsi utilisé afin de relier les reads alignés, et donc de corriger les régions non couvertes des reads longs. Cette méthode permet ainsi de corriger des reads affichant des taux d’erreurs atteignant jusqu’à 44%, tout en permettant un meilleur passage à l’échelle sur de larges génomes et une diminution du temps de traitement, par rapport aux méthodes de l’état de l’art les plus efficaces. Enfin, le troisième objectif de cette thèse est la correction des reads extrêmement longs. Pour cela, un outil utilisant cette fois une approche par auto-correction a été développé, en combinant, de nouveau, différentes méthodologies de l’état de l’art. Plus précisément, une stratégie de calcul des chevauchements entre les reads, puis une double étape de correction, par alignement multiple puis par utilisation de graphes de de Bruijn locaux, sont utilisées ici. Afin de permettre à cette méthode de passer efficacement à l’échelle sur les reads extrêmement longs, la stratégie de segmentation mentionnée précédemment a été généralisée. Cette méthode d’auto-correction permet ainsi de corriger des reads atteignant jusqu’à 340 000 paires de bases, tout en permettant un excellent passage à l’échelle sur des génomes plus complexes, tels que celui de l’humain. 
/ The aims of this thesis are part of the broad problematic of high-throughput sequencing data analysis. More specifically, this thesis deals with long reads from third-generation sequencing technologies. The aspects tackled in this topic mainly focus on error correction, and on its impact on downstream analyses such as de novo assembly. As a first step, one of the objectives of this thesis is to evaluate and compare the quality of the error correction provided by state-of-the-art tools, whether they employ a hybrid (using complementary short reads) or a self-correction (relying only on the information contained in the long read sequences) strategy. Such an evaluation makes it easy to identify which method is best tailored for a given case, according to the genome complexity, the sequencing depth, or the error rate of the reads. Moreover, developers can thus identify the limiting factors of the existing methods, in order to guide their work and propose new solutions to overcome these limitations. A new evaluation tool, providing a wide variety of metrics compared to the only tool previously available, was thus developed. This tool combines a multiple sequence alignment approach with a segmentation strategy, drastically reducing the evaluation runtime. With the help of this tool, we present a benchmark of all the state-of-the-art error correction methods, on various datasets from several organisms, spanning from the bacterium A. baylyi to human. This benchmark allowed us to spot two major limiting factors of the existing tools: reads displaying error rates above 30%, and reads longer than 50,000 base pairs. The second objective of this thesis is thus the error correction of highly noisy long reads. To this aim, a hybrid error correction tool, combining different strategies from the state of the art, was developed in order to overcome the limiting factors of existing methods. 
More precisely, this tool combines a strategy of aligning short reads onto the long reads with the use of a variable-order de Bruijn graph. This graph is used to link the aligned short reads, and thus correct the uncovered regions of the long reads. This method can process reads displaying error rates as high as 44%, and scales better to larger genomes while reducing the error correction runtime, compared to the most efficient state-of-the-art tools. Finally, the third objective of this thesis is the error correction of extremely long reads. To this aim, a self-correction tool was developed by combining, once again, different methodologies from the state of the art. More precisely, an overlapping strategy and a two-phase error correction process, using multiple sequence alignment and local de Bruijn graphs, are used. To allow this method to scale to extremely long reads, the aforementioned segmentation strategy was generalized. This self-correction method can process reads reaching up to 340,000 base pairs, and scales very well to complex organisms such as the human genome.
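As a much-simplified sketch of the graph machinery described above (a fixed-order graph with a greedy walk, whereas the actual tools use variable-order graphs and careful anchoring; the reads below are invented):

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Nodes are (k-1)-mers; each k-mer occurrence in a read adds edge weight."""
    edges = defaultdict(int)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            edges[(kmer[:-1], kmer[1:])] += 1
    return edges

def greedy_walk(edges, start, steps):
    """Extend from `start`, always following the best-supported outgoing edge."""
    seq, node = start, start
    for _ in range(steps):
        nexts = [(cnt, dst) for (src, dst), cnt in edges.items() if src == node]
        if not nexts:
            break
        _, node = max(nexts)      # highest coverage wins
        seq += node[-1]
    return seq

reads = ["ATGGCG", "TGGCGT", "GGCGTG", "GCGTGC", "CGTGCA"]  # toy overlapping reads
graph = de_bruijn_graph(reads, k=4)
```

Walking such a graph between two well-covered anchor regions is the basic idea behind using it to bridge and correct the uncovered stretches of a noisy long read.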
364

Trade openness and economic growth: experience from three SACU countries

Malefane, Malefa Rose 02 1900 (has links)
This study uses annual data for the period 1975-2014 for South Africa and Botswana, and 1979-2013 for Lesotho, to examine empirically the impact of trade openness on economic growth in these three Southern African Customs Union (SACU) countries. The motivation for this study is that SACU countries are governed by the common agreement for the union that oversees the movement of goods entering the SACU area. However, although these countries are in a common union, they have quite different levels of development: Lesotho is a lower middle-income and least developed country, whereas Botswana and South Africa are upper middle-income economies. These disparities in the levels of economic development of the SACU countries are expected to have different implications for the extent to which trade openness affects economic growth. It is against this background that the current study seeks to examine what impact trade openness has on economic growth in each of the three selected countries. To check the robustness of the empirical results, this study uses four equations, based on four different indicators of trade openness, to examine the linkage between trade openness and economic growth. While Equation 1, Equation 2 and Equation 3 employ trade-based indicators of openness, Equation 4 uses a modified version of the UNCTAD (2012a) trade openness index that incorporates differences in country size and geography. Using the autoregressive distributed lag (ARDL) bounds testing approach to cointegration and error-correction modelling, the study found that the impact of trade openness on economic growth varies across the three SACU countries. Based on the results for the first three equations, the study found that trade openness has a positive impact on economic growth in South Africa and Botswana, whereas it has no significant impact on economic growth in Lesotho. 
Based on the results for Equation 4, the study found that after taking the differences in country size and geography into account, trade openness has a positive impact on economic growth in Botswana, but an insignificant impact in South Africa and Lesotho. For South Africa and Botswana, the main recommendation from this study is that policy makers should pursue policies that promote total trade to increase economic growth in both the short and the long run. For Lesotho, the study recommends, among other things, the adoption of policies aimed at enhancing human capital and infrastructural development, as well as the broadening of exports, so as to enable the economy to grow to the threshold level necessary for realising significant gains from trade. / Economics
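The cointegration and error-correction logic invoked above can be sketched with simulated data. This is an Engle-Granger-style two-step illustration, simpler than the ARDL bounds test the study actually applies, and every number here is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000
x = np.cumsum(rng.normal(size=T))     # an I(1) driver, e.g. trade openness
y = 2.0 * x + rng.normal(size=T)      # cointegrated with x (long-run slope 2)

# Step 1: long-run regression; residuals are the equilibrium error
X = np.column_stack([np.ones(T), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
z = y - X @ b

# Step 2: error-correction regression: dy_t on dx_t and the lagged error z_{t-1}
dy, dx, zlag = np.diff(y), np.diff(x), z[:-1]
E = np.column_stack([np.ones(T - 1), dx, zlag])
g = np.linalg.lstsq(E, dy, rcond=None)[0]
# g[2] is the speed of adjustment; a negative value means deviations from the
# long-run equilibrium are corrected over time
```

The negative adjustment coefficient is what ties the short run to the long run: the bigger last period's deviation from equilibrium, the stronger the pull back toward it.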
365

Les dépenses en infrastructures publiques et la croissance économique : Le cas de la Mauritanie / Public infrastructure spending and economic growth : The case of Mauritania

El Moctar Ellah Taher, Mohamed 21 November 2017 (has links)
Si la majorité des études obtiennent des impacts positifs des infrastructures publiques sur l’activité économique, la problématique entre dépenses publiques et bonne affectation des ressources reste présente. Cette thèse empirique présente un travail inédit pour la Mauritanie et se limite à trois types d’infrastructures. En premier lieu, nous étudions le lien entre l’évolution du stock routier total et le PIB par tête à travers une fonction de production de type Cobb-Douglas. Notre résultat principal est le suivant : le stock routier en Mauritanie a bien impacté le PIB par tête de manière positive et significative. En second lieu, nous analysons la contribution du capital santé à la croissance économique. En estimant plusieurs modèles, trois principaux résultats émergent : 1) Le niveau des dépenses publiques de santé n’a pas d’effet significatif sur la croissance de l’espérance de vie, mais elles semblent avoir des impacts positifs sur la réduction de la mortalité brute pour 1000 personnes. 2) Les dépenses publiques de santé ont un effet positif sur le PIB global, mais cet effet devient non significatif lorsqu’il s’agit du PIB par tête. 3) L’espérance de vie initiale et sa croissance ont des effets positifs et significatifs sur le PIB par tête. Enfin, nous explorons l’impact des TIC sur la croissance économique. En étudiant une fonction de production et un modèle VAR, nous mettons en évidence à la fois que le capital TIC et l’évolution des abonnés au téléphone fixe ont stimulé significativement l’activité économique. 
/ While the majority of studies find positive impacts of public infrastructure on economic activity, the problem of matching public spending with good resource allocation remains. This empirical thesis presents an original study of Mauritania and is limited to three types of infrastructure. First, we study the relationship between the evolution of the total road stock and per capita GDP through a Cobb-Douglas production function. Our main result is that the road stock in Mauritania has impacted GDP per capita in a positive and significant way. Second, we analyze the contribution of health capital to economic growth. In estimating several models, three main results emerge: 1) The level of public health spending has no significant effect on the growth of life expectancy, but appears to have positive impacts on the reduction of gross mortality per 1000 people. 2) Public expenditure on health has a positive effect on overall GDP, but this effect becomes insignificant for GDP per capita. 3) Initial life expectancy and its growth have positive and significant effects on GDP per capita. Finally, we explore the impact of ICT on economic growth. By studying a production function and a VAR model, we show that ICT capital and the growth of fixed telephone subscriptions have significantly stimulated economic activity.
366

Closed-End Funds and their Net Asset Value over time : A study of the relationship between Swedish closed-end funds' market prices and their underlying assets over a period of time. / Investmentbolag och deras substansvärde över tid : En studie om förhållandet mellan svenska investmentbolags marknadspris och dess underliggande tillgångar över en tid.

Cederberg, Erik, Schnitzer, Linus January 2020 (has links)
Closed-end funds (CEFs) are popular investments amongst the Swedish population, as they provide diversification to investors and have in many cases historically outperformed the market. When deciding whether to invest in a CEF, the valuation method differs from the classical financial ratios used to value most companies, as the revenue-generating operations differ significantly. The Net Asset Value (NAV) per share is compared to the market price per share of a CEF to determine whether the share trades at a discount or a premium. The premise is that a share's market price and the value of the closed-end fund's underlying assets cannot drift too far apart: the discount cannot stray too far from its mean over time, as there would be upward pressure on the share price if the NAV-discount is large, and downward pressure on the share price if the premium is large. Tests of unit roots and cointegration are applied and analysed in the light of previous findings on discounts in CEFs. Our findings show that the majority of the selected CEFs' prices and NAVs have long-run equilibrium relationships. Additionally, the discount appears to be stationary over time for the majority of CEFs, supporting the notion of mean reversion in the discount. For certain Swedish CEFs, the findings allow investment decisions to be based on the deviation from the mean. This study contributes to previous research on mean reversion in financial markets, as it finds statistical evidence of a mean-reverting process for the NAV-discount of Swedish CEFs. The thesis also adds to the plethora of research in the financial field by specifying its findings to the Swedish market for CEFs.
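Mean reversion in the NAV-discount can be illustrated with a toy AR(1) process; the mean, persistence and noise level below are invented, not estimates for any Swedish CEF:

```python
import numpy as np

rng = np.random.default_rng(42)
T, mu, phi = 2000, -0.15, 0.8      # hypothetical mean discount of -15%
d = np.empty(T)
d[0] = mu
for t in range(1, T):
    d[t] = mu + phi * (d[t - 1] - mu) + rng.normal(scale=0.02)

# AR(1) regression: d_t on a constant and d_{t-1}; phi_hat < 1 => mean reversion
X = np.column_stack([np.ones(T - 1), d[:-1]])
a_hat, phi_hat = np.linalg.lstsq(X, d[1:], rcond=None)[0]

# Half-life of a deviation from the mean, in periods
half_life = np.log(0.5) / np.log(phi_hat)
```

A formal unit-root test (such as the ADF tests used in studies of this kind) distinguishes the stationary case phi < 1, which supports trading on deviations from the mean, from the random-walk case phi = 1, which does not.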
367

The impact of dividend policy on shareholders' wealth : evidence from the Vector Error Correction Model

Mvita, Mpinda Freddy 18 July 2013 (has links)
Dividend policy is widely researched in financial management, but determining whether it affects the market price per share is difficult. There has been much published on the subject, which presented theories such as the Modigliani, Miller, Gordon, Lintner, Walter and Richardson propositions and the relevance and irrelevance theories. However, little research has been done on the impact of dividend policy on shareholders’ wealth while considering the short- and long-run effects. The Vector Error Correction Model (VECM) was used to describe the short-run and long-run dynamics or the adjustment of the cointegrated variables towards their equilibrium values in South Africa. This study attempts to explain the effect of dividend policy on the market price per share. A sample of 46 companies listed on the Johannesburg Securities Exchange (JSE) was selected for the period 1995-2010. Three variables were used, namely the market price per share, the dividend per share and the earnings per share. The market price per share was used as a proxy in measuring shareholders’ wealth and the dividend per share was used as a proxy in measuring the dividend policy. Fixed and random effects models were applied to panel data to determine the relation between dividend policy and market price per share. The fixed effects method was used to control the stable characteristics of the companies over a fixed period. The random effects model was applied when the companies’ characteristics differed. Results for both models indicated that dividend yield is positively related to market price per share, while earnings per share do not have a significant impact on the market price per share. To test the strength of the long-run relationship, the VECM was applied. The coefficient for dividend per share in the co-integrating equation was positive, while the coefficient for earnings per share was negative. This confirms previous research findings. 
The results suggest that there is a long-run relationship between dividend per share and market price per share. The Granger causality test indicates bi-directional Granger causality between market price per share and dividend per share in South Africa. Dividend policy therefore has a significant long-run impact on the share price and thus provides a signal about the company's financial success. / Dissertation (MCom)--University of Pretoria, 2012. / Financial Management / Unrestricted
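The Granger-causality test mentioned above can be sketched with simulated series in which one direction of causality is built in by construction; none of this is the JSE data:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 1500
x = np.zeros(T)                     # stand-in for dividend per share
y = np.zeros(T)                     # stand-in for market price per share
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()  # lagged x predicts y

def rss(X, target):
    """Residual sum of squares of an OLS fit."""
    beta = np.linalg.lstsq(X, target, rcond=None)[0]
    e = target - X @ beta
    return e @ e

# Does adding lagged x improve the prediction of y beyond y's own lag?
Y = y[1:]
restricted = rss(np.column_stack([np.ones(T - 1), y[:-1]]), Y)
unrestricted = rss(np.column_stack([np.ones(T - 1), y[:-1], x[:-1]]), Y)
F = (restricted - unrestricted) / (unrestricted / (T - 4))
```

A large F statistic rejects the null that x does not Granger-cause y; running the same comparison with the roles of x and y swapped tests the other direction, which is how bi-directional causality is established.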
368

An Econometric Analysis of the Office Market Rent in Istanbul : Long-run Equilibrium Rent Estimation / En ekonometrisk analys av kontorsmarknadshyrorna i Istanbul : Långsiktiga jämviktslägen uppskattning

Karahan, Gözde January 2019 (has links)
The Istanbul metropolitan area represents the largest office investment market in Turkey. According to CBRE ERIX data, the total office stock in Istanbul exceeded 7 million sqm by the end of 2018, with approximately 1 million sqm more in the development pipeline. For office projects that are on hold or under construction, the biggest problem is that Turkey's economy rapidly affects office rents and tenants' selection criteria. In particular, high financing and construction costs increase the importance of forecasting rent levels in office investments. This degree project aims to contribute to the understanding of the mechanisms underlying the Istanbul office rental market. The office market data are analyzed for the period Q1 2005-Q4 2018. Long-run equilibrium rents are estimated for the Istanbul office market and the examined sub-markets. Using econometric analysis, the long-run causality between rent and employment, stock and vacancy is examined. Short-run estimates are made with an error correction model. / Istanbuls storstadsområde är den största kontorsinvesteringen i Turkiet. Enligt CBRE ERIX-data översteg det totala kontorslagret i Istanbul i slutet av 2018 7 miljoner kvm. Det finns cirka 1 miljon kvm rörledningsfigurer. Det största problemet för kontorsprojekt som i fasthållningsstatus och under byggnadsstatus, påverkar Turkiets ekonomi snabbt kontoruthyrningar och hyresgäster i urvalskriterierna. Höga finansieringskostnader och byggkostnader ökar i synnerhet vikten av att förutsäga hyrestalterna i kontorsinvesteringar. Detta examensarbete syftar till att bidra till förståelsen av de underliggande mekanismerna för hyreskontor i Istanbul. Kontorsmarknadsdata analyseras mellan F1 2005-F4 2018 perioden. Långsiktiga jämviktshyror kommer att nås för Istanbul-kontorsmarknaden och undersökta delmarknader. 
Med den ekonometriska analysmetoden kommer den långsiktiga orsaken till uthyrning med sysselsättning, lager och ledighet att undersökas. Kortfristiga uppskattningar kommer att göras med en felkorrigeringsmodell.
369

Ensemble Methods for Historical Machine-Printed Document Recognition

Lund, William B. 03 April 2014 (has links) (PDF)
The usefulness of digitized documents is directly related to the quality of the extracted text. Optical Character Recognition (OCR) has reached a point where well-formatted and clean machine-printed documents are easily recognizable by current commercial OCR products; however, older or degraded machine-printed documents present problems to OCR engines resulting in word error rates (WER) that severely limit either automated or manual use of the extracted text. Major archives of historical machine-printed documents are being assembled around the globe, requiring an accurate transcription of the text for the automated creation of descriptive metadata, full-text searching, and information extraction. Given document images to be transcribed, ensemble recognition methods with multiple sources of evidence from the original document image and information sources external to the document have been shown in this and related work to improve output. This research introduces new methods of evidence extraction, feature engineering, and evidence combination to correct errors from state-of-the-art OCR engines. This work also investigates the success and failure of ensemble methods in the OCR error correction task, as well as the conditions under which these ensemble recognition methods reduce the Word Error Rate (WER), improving the quality of the OCR transcription, showing that the average document word error rate can be reduced below the WER of a state-of-the-art commercial OCR system by between 7.4% and 28.6% depending on the test corpus and methods. This research on OCR error correction contributes within the larger field of ensemble methods as follows. Four unique corpora for OCR error correction are introduced: The Eisenhower Communiqués, a collection of typewritten documents from 1944 to 1945; The Nineteenth Century Mormon Articles Newspaper Index from 1831 to 1900; and two synthetic corpora based on the Enron (2001) and the Reuters (1997) datasets. 
The Reverse Dijkstra Heuristic is introduced as a novel admissible heuristic for the A* exact alignment algorithm. The impact of the heuristic is a dramatic reduction in the number of nodes processed during text alignment as compared to the baseline method. From the aligned text, the method developed here creates a lattice of competing hypotheses for word tokens. In contrast to much of the work in this field, the word token lattice is created from a character alignment, preserving split and merged tokens within the hypothesis columns of the lattice. This alignment method more explicitly identifies competing word hypotheses which may otherwise have been split apart by a word alignment. Lastly, this research explores, in order of increasing contribution to word error rate reduction: voting among hypotheses, decision lists based on an in-domain training set, ensemble recognition methods with novel feature sets, multiple binarizations of the same document image, and training on synthetic document images.
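The simplest of the ensemble techniques listed above, voting among hypotheses, can be sketched as follows; the strings are toy OCR outputs assumed to be already column-aligned, which glosses over the alignment lattice the dissertation builds:

```python
from collections import Counter

def vote(hypotheses):
    """Majority vote over column-aligned OCR hypotheses ('-' marks a gap)."""
    out = []
    for column in zip(*hypotheses):
        ch, _ = Counter(column).most_common(1)[0]
        if ch != '-':            # a winning gap means no character is emitted
            out.append(ch)
    return ''.join(out)

# Three noisy engine outputs for the same word, aligned column by column
corrected = vote(["recogn-tion", "recognition", "recoqnition"])
```

Real systems weight the votes with engine confidence scores or a language model, which is where the decision lists and feature-based ensemble methods described above improve on plain majority voting.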
370

THREE ESSAYS ON PRICING AND VOLUME DISTRIBUTIONS OF CROSS-LISTED STOCKS

Wang, Jing January 2014 (has links)
No description available.
