121

Essays on Wage and Price Formation in Sweden

Friberg, Kent January 2004 (has links)
Study I: Real Wage Determination in the Swedish Engineering Industry

This study uses the monopoly union model to examine the determination of real wages, and in particular the effects of active labour market programmes (ALMPs) on real wages, in the engineering industry. Quarterly data for the period 1970:1 to 1996:4 are used in a cointegration framework, utilising Johansen's maximum likelihood procedure. On the basis of the Johansen (trace) test results, vector error correction (VEC) models are created in order to model the determination of real wages in the engineering industry. The estimation results support the presence of a long-run wage-raising effect from rises in labour productivity, in the tax wedge, in the alternative real consumer wage and in real UI benefits. The estimation results also support the presence of a long-run wage-raising effect from positive changes in the participation rates of ALMPs, relief jobs and labour market training. This could be interpreted as meaning that the possibility of participating in an ALMP increases the utility for workers of not being employed in the industry, which in turn could increase real wages in the industry in the long run. Finally, the estimation results show evidence of a long-run wage-reducing effect from positive changes in the unemployment rate.

Study II: Intersectoral Wage Linkages in Sweden

The purpose of this study is to investigate whether the wage-setting in certain sectors of the Swedish economy affects the wage-setting in other sectors. The theoretical background is the Scandinavian model of inflation, which states that the wage-setting in the sectors exposed to international competition affects the wage-setting in the sheltered sectors of the economy. The Johansen maximum likelihood cointegration approach is applied to quarterly data on Swedish sector wages for the period 1980:1–2002:2. Different vector error correction (VEC) models are created, based on assumptions as to which sectors are exposed to international competition and which are not. The adaptability of wages between sectors is then tested by imposing restrictions on the estimated VEC models. Finally, Granger causality tests are performed on the different restricted/unrestricted VEC models to test for sector wage leadership. The empirical results indicate considerable adaptability in wages between manufacturing, construction, the wholesale and retail trade, the central government sector and the municipalities and county councils sector. This is consistent with the assumptions of the Scandinavian model. Further, the empirical results indicate a low level of adaptability in wages between the financial sector and manufacturing, and between the financial sector and the two public sectors. The Granger causality tests provide strong evidence for the presence of intersectoral wage causality, but no evidence, for any of the sectors, of the wage-leading role that the Scandinavian model assumes.

Study III: Wage and Price Determination in the Private Sector in Sweden

The purpose of this study is to analyse wage and price determination in the private sector in Sweden during the period 1980–2003. The theoretical background is a variant of the "imperfect competition model of inflation", which assumes imperfect competition in the labour and product markets. According to the model, wages and prices are determined as the result of a "battle of mark-ups" between trade unions and firms. The Johansen maximum likelihood cointegration approach is applied to quarterly Swedish data on consumer prices, import prices, private-sector nominal wages, private-sector labour productivity and the total unemployment rate for the period 1980:1–2003:3. The chosen cointegration rank of the estimated vector error correction (VEC) model is two; thus, two cointegration relations are assumed: one for private-sector nominal wage determination and one for consumer price determination. The estimation results indicate that an increase in consumer prices of one per cent lifts private-sector nominal wages by 0.8 per cent, while an increase in private-sector nominal wages of one per cent increases consumer prices by one per cent. An increase of one percentage point in the total unemployment rate reduces private-sector nominal wages by about 4.5 per cent. The long-run effects of private-sector labour productivity and import prices on consumer prices are about –1.2 and 0.3 per cent, respectively. The Rehnberg agreement of 1991–92 and the monetary policy shift in 1993 affected the determination of private-sector nominal wages, private-sector labour productivity, import prices and the total unemployment rate. The "offensive" devaluation of the Swedish krona by 16 per cent in 1982:4, and the move to a floating krona with its substantial depreciation at that time, affected the determination of import prices.
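The Johansen-then-VEC workflow that runs through all three studies maps directly onto standard econometrics tooling. A minimal sketch using statsmodels, assuming the quarterly series sit in a pandas DataFrame df whose column names are placeholders rather than the thesis's actual variables:

    import pandas as pd
    from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

    # df: quarterly levels, e.g. columns 'real_wage', 'productivity', 'tax_wedge'
    jres = coint_johansen(df, det_order=0, k_ar_diff=4)  # 4 lagged differences: quarterly data
    for i, (stat, cv) in enumerate(zip(jres.lr1, jres.cvt)):
        print(f"H0: rank <= {i}  trace = {stat:.2f}  5% critical value = {cv[1]:.2f}")

    # fit the VECM with the rank suggested by the trace test (rank 1 here is illustrative)
    res = VECM(df, k_ar_diff=4, coint_rank=1, deterministic="ci").fit()
    print(res.beta)   # long-run cointegrating relations
    print(res.alpha)  # error-correction (adjustment) coefficients

Statements such as "a long-run wage-raising effect from rises in labour productivity" are read off the signs of the estimated long-run coefficients in beta.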
122

Study the relationship between real exchange rate and interest rate differential – United States and Sweden

Wang, Zhiyuan January 2007 (has links)
This paper uses the co-integration method and an error-correction model to re-examine the relationship between the real exchange rate and expected interest rate differentials, including the cumulated current account balance, over floating exchange rate periods. Using the Johansen co-integration method, I find that there is a long-run relationship among the variables, as indicated by the dynamic model. The final conclusion is that the empirical evidence shows that our error-correction model leads to a good real exchange rate forecast.
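The paper applies the Johansen method; as a simpler illustration of the same error-correction form, a two-step Engle-Granger-style sketch is shown below, where rer, rate_diff and cum_ca are hypothetical stand-ins for the real exchange rate, the expected interest rate differential and the cumulated current account balance:

    import numpy as np
    import statsmodels.api as sm

    # step 1: long-run levels relation; its residuals form the error-correction term
    longrun = sm.OLS(rer, sm.add_constant(np.column_stack([rate_diff, cum_ca]))).fit()
    ect = longrun.resid

    # step 2: short-run dynamics; a negative, significant coefficient on the lagged
    # error-correction term means the exchange rate adjusts back toward the long run
    dX = sm.add_constant(np.column_stack([np.diff(rate_diff), np.diff(cum_ca), ect[:-1]]))
    ecm = sm.OLS(np.diff(rer), dX).fit()
    print(ecm.params)  # last coefficient: speed of adjustment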
123

The alleged negative consequence of higher productivity : An empirical analysis on the effect of relative productivity on terms of trade

Malmström, Anna January 2007 (has links)
That increased productivity improves the standard of living is not questioned at the global level, but does productivity growth necessarily lead to a higher standard of living at the national level? Supported by empirical results, this study suggests that high relative productivity growth is not always worth striving for, since it can translate into decreased welfare in the form of deteriorated terms of trade. The study examines the impact of relative productivity on the terms of trade in the OECD countries and in Sweden, using an error-correction model. The purpose is then extended to estimating the impact of increased relative productivity growth on welfare. The results suggest that the method used for measuring productivity has a great impact on the findings, but conclude that a 1% higher relative labour productivity growth is associated with a 0.23% decline in the terms of trade.
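The headline figure is an elasticity: the slope of a regression of terms-of-trade growth on relative productivity growth. A toy sketch, where tot and rel_prod are hypothetical arrays of terms-of-trade and relative-productivity levels (the study itself uses an error-correction model, so this static regression is only a reading aid for the -0.23% figure):

    import numpy as np
    import statsmodels.api as sm

    dtot = np.diff(np.log(tot))        # terms-of-trade growth
    dprod = np.diff(np.log(rel_prod))  # relative labour productivity growth
    fit = sm.OLS(dtot, sm.add_constant(dprod)).fit()
    print(fit.params[1])  # the study's comparable estimate is about -0.23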
124

Forecasting the Stock Market : A Neural Network Approach

Andersson, Magnus, Palm, Johan January 2009 (has links)
Forecasting the stock market is a complex task, partly because of the random walk behavior of stock price series. The task is further complicated by the noise, outliers and missing values that are common in financial time series. Despite this, the subject receives a fair amount of attention, which can probably be attributed to the potential rewards that follow from being able to forecast the stock market. Since artificial neural networks are capable of exploiting non-linear relations in the data, they are suitable for forecasting the stock market, and they are able to outperform the classic autoregressive linear models. The objective of this thesis is to investigate whether the stock market can be forecasted using the so-called error correction neural network. This is accomplished through the development of a method aimed at finding the optimal forecast model. The results of this thesis indicate that the developed method can be applied successfully when forecasting the stock market. All five stocks forecasted in this thesis, using forecast models based on the developed method, generated positive returns. This suggests that the stock market can be forecasted using neural networks.
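The error correction neural network of the thesis is a specialised recurrent architecture; as a generic stand-in, the sketch below shows the basic workflow of forecasting returns from lagged returns with a plain feedforward network (the data source and all parameters are illustrative):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # returns: 1-D array of historical stock returns (placeholder)
    lags = 5
    X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
    y = returns[lags:]
    split = int(0.8 * len(y))  # chronological train/test split

    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    net.fit(X[:split], y[:split])
    pred = net.predict(X[split:])
    hit_rate = np.mean(np.sign(pred) == np.sign(y[split:]))  # directional accuracy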
125

An Unsupervised Approach to Detecting and Correcting Errors in Text

Islam, Md Aminul 01 June 2011 (has links)
In practice, most approaches to text error detection and correction are based on a conventional domain-dependent background dictionary that represents a fixed and static collection of correct words of a given language; as a result, satisfactory correction can only be achieved if the dictionary covers most tokens of the underlying correct text. Moreover, most approaches to text correction handle only one, or at best a very few, types of errors. The purpose of this thesis is to propose an unsupervised approach to detecting and correcting text errors that can compete with supervised approaches, and to answer the following questions: Can an unsupervised approach efficiently detect and correct a text containing multiple errors of both a syntactic and a semantic nature? What is the magnitude of error coverage, in terms of the number of errors that can be corrected? We conclude that (1) an unsupervised approach can efficiently detect and correct a text containing multiple errors of both a syntactic and a semantic nature. Error types include real-word spelling errors, typographical errors, lexical choice errors, unwanted words, missing words, prepositional errors, article errors, punctuation errors, and many grammatical errors (e.g., errors in agreement and verb formation). (2) The magnitude of error coverage, in terms of the number of errors that can be corrected, is almost double the number of correct words of the text; although this is not the upper limit, it is what is practically feasible. We use engineering approaches to answer the first question and theoretical approaches to answer and support the second. We show that finding the inherent properties of a correct text using a corpus in the form of an n-gram data set is more appropriate and practical than other approaches to detecting and correcting errors. Instead of using rule-based approaches and dictionaries, we argue that a corpus can effectively be used to infer the properties of these types of errors, and to detect and correct them. We test the robustness of the proposed approach separately for some individual error types, and then for all types of errors together. The approach is language-independent: it can be applied to other languages as long as n-grams are available. The results of this thesis thus suggest that unsupervised approaches, which are often dismissed in favor of supervised ones in the context of many Natural Language Processing (NLP) tasks, may present an interesting array of NLP-related problem-solving strengths.
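The core mechanism, scoring candidate words by how often they occur in the surrounding n-gram context, can be illustrated in a few lines. A toy sketch, where ngram_count is a hypothetical mapping from trigrams to corpus frequencies:

    def best_candidate(left, word, right, candidates, ngram_count):
        # prefer the candidate that forms the most frequent trigram in context
        def score(w):
            return ngram_count.get((left, w, right), 0)
        best = max(candidates, key=score)
        # flag an error only if some candidate beats the observed word
        return best if score(best) > score(word) else word

    # e.g. best_candidate("a", "peace", "of", {"peace", "piece"}, ngram_count)
    # returns "piece" when the corpus trigram ("a", "piece", "of") dominates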
126

Evaluating forecast accuracy for Error Correction constraints and Intercept Correction

Eidestedt, Richard, Ekberg, Stefan January 2013 (has links)
This paper examines the forecast accuracy of an unrestricted Vector Autoregressive (VAR) model for GDP relative to a comparable Vector Error Correction (VEC) model that recognizes that the data are characterized by co-integration. In addition, an alternative forecast method, Intercept Correction (IC), is considered for further comparison. Recursive out-of-sample forecasts are generated for both models and both forecast techniques. The generated forecasts for each model are evaluated objectively using a selection of evaluation measures and tests of equal accuracy. The results show that the VEC models consistently outperform the VAR models. Further, IC enhances the forecast accuracy when applied to the VEC model, while there is no such indication when it is applied to the VAR model. For certain forecast horizons there is a significant difference in forecast ability between the VEC IC model and the VAR model.
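Intercept correction amounts to shifting a model forecast by the most recent residual so that the model starts "on track". A minimal sketch of the recursive comparison, assuming data is a (T, k) array of co-integrated series and t0 is a hypothetical start of the evaluation window:

    import numpy as np
    from statsmodels.tsa.vector_ar.vecm import VECM

    plain, ic = [], []
    for t in range(t0, len(data)):
        res = VECM(data[:t], k_ar_diff=2, coint_rank=1).fit()
        f = res.predict(steps=1)[0]    # one-step-ahead point forecast
        plain.append(f)
        ic.append(f + res.resid[-1])   # intercept correction: add the latest residual

    rmse = lambda fc: np.sqrt(np.mean((np.vstack(fc) - data[t0:]) ** 2, axis=0))
    print(rmse(plain), rmse(ic))

Tests of equal accuracy (e.g. Diebold-Mariano) would then be run on the two forecast-error series.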
127

The Relationship Between Students' Preference For Written Feedback And Improvement In Writing: Is The Preferred One The Best One?

Kagitci, Burcin 01 February 2013 (has links) (PDF)
This study aimed to investigate a) which type of written feedback (direct feedback or use of error codes) university prep-school EFL students at the elementary level of proficiency prefer to receive on their written texts, b) whether or not a (mis)match between students' preferences and the feedback received affects their level of improvement in writing, and c) to what extent the students' previous writing experience affects their preference for the type of written feedback. In order to determine the students' preferences for a specific type of feedback and to find out about their previous writing experiences, a questionnaire was designed. Moreover, the participants were given two subsequent writing tasks with the purpose of determining the level of improvement in their linguistic accuracy after receiving their preferred (or not preferred) type of feedback. The results show that the majority of the students in the elementary-level preparatory class prefer to receive error codes on their written texts; however, giving them what they ask for may not contribute to their improvement as would be expected. Moreover, some conclusions are drawn as to the relationship between the students' previous writing experience and their current practices.
129

Correcting Syntactic Annotation Errors Using a Synchronous Tree Substitution Grammar

Matsubara, Shigeki, Kato, Yoshihide 01 September 2010 (has links)
No description available.
130

Low-power 8-bit Pipelined ADC with current mode Multiplying Digital-to-Analog Converter (MDAC)

Shahzad, Khurram January 2009 (has links)
In order to convert analog information into the digital domain, the pipelined analog-to-digital converter (ADC) offers an optimum balance of resolution, speed, power consumption, size and design effort. In this thesis work we design and optimize an 8-bit pipelined ADC for low power. The ADC has a stage resolution of 1.5 bits and employs a current-mode multiplying digital-to-analog converter (MDAC). The main focus is to design and optimize the MDAC. Based on the analysis of current-mode circuits discussed in chapter 2, we design and optimize the MDAC circuit for the best possible effective number of bits (ENOB), speed and power consumption. Each of the first six stages, consisting of a sample-and-hold, a 1.5-bit flash ADC and an MDAC, is realized at the circuit level. The last stage, consisting of a 2-bit flash ADC, is also realized at the circuit level. The delay logic for synchronization is implemented in Verilog-A and MATLAB. A first-order digital error-correction algorithm is implemented in MATLAB. The design is simulated in UMC 0.18 um technology in the Cadence environment. The choice of technology follows from the target application for the ADC, an X-ray detector system, which is designed in the same technology. The simulation results obtained in terms of ENOB and power consumption are satisfactory for the target application.
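The first-order digital error correction in a 1.5-bit-per-stage pipeline is an overlap-and-add of the stage codes: each stage contributes one effective bit plus one redundant bit that absorbs comparator offsets. A software model of the alignment logic (a sketch of what the MATLAB implementation would compute, not the thesis's code):

    def correct(stage_codes, final_code):
        # stage_codes: six 1.5-bit codes (each 0, 1 or 2), MSB stage first
        # final_code: 2-bit code (0..3) from the last flash stage
        result = 0
        for d in stage_codes:
            result = (result << 1) + d   # 1-bit shift per stage: codes overlap by one bit
        return (result << 1) + final_code  # append the final flash; 8-bit output overall

    # sanity checks: correct([1]*6, 2) == 128 (mid-scale), correct([2]*6, 3) == 255 (full-scale)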
