111

Teoria de correção de erros quânticos durante operações lógicas e medidas de diagnóstico de duração finita / Quantum error-correction theory during logical gates and finite-time syndrome measurements

Castro, Leonardo Andreta de, 17 February 2012
In this work, we study the theory of quantum error correction, one of the main methods of preventing loss of information in a quantum computer. This method, however, is normally studied under ideal conditions in which the operation of the quantum gates that constitute the quantum algorithm does not interfere with the kind of error the system undergoes. Moreover, the syndrome measurements employed in the traditional method are considered instantaneous. Our aim in this work is to evaluate how altering these two assumptions would modify the quantum error correction process. With respect to the first objective, we verify that, for errors caused by external environments, the action of a logical gate simultaneously with the noise can produce errors that, in principle, may not be correctable by the code employed. We then propose a short-step correction method that can be used to render the uncorrectable errors negligible, besides being capable of reducing the probability of occurrence of correctable errors. For the second objective, we first study how finite-time measurements affect the decoherence of a single qubit, concluding that this kind of measurement can actually protect the state being measured. Motivated by that, we show that, in certain cases, finite-time syndrome measurements performed concurrently with the noise are capable of protecting the state of the qubits against errors more efficiently than if the measurements had been performed instantaneously at the end of the process.
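As a generic, hedged illustration of the kind of error-correcting code discussed above, the NumPy sketch below runs the textbook three-qubit bit-flip code with an idealized, instantaneous syndrome measurement; it is not the thesis's finite-time procedure, and all function and variable names are chosen here for illustration.

```python
import numpy as np

# Three-qubit bit-flip code: |0> -> |000>, |1> -> |111>.
# Basis state |q2 q1 q0> is stored at index (q2 << 2) | (q1 << 1) | q0,
# so qubit k corresponds to bit k of the index.

def encode(a, b):
    """Logical state a|000> + b|111> as an 8-dimensional state vector."""
    psi = np.zeros(8, dtype=complex)
    psi[0b000] = a
    psi[0b111] = b
    return psi

def apply_x(psi, qubit):
    """Bit-flip (X) error on one qubit of the 3-qubit register."""
    out = np.zeros_like(psi)
    for i in range(8):
        out[i ^ (1 << qubit)] = psi[i]
    return out

def syndrome(psi):
    """Parities of qubits (0,1) and (1,2), i.e. the Z0Z1 and Z1Z2 stabilizers.
    Assumes psi is a codeword up to a single X error, so the parities are sharp."""
    i = int(np.argmax(np.abs(psi)))     # any occupied basis state gives the same answer
    p01 = ((i >> 0) ^ (i >> 1)) & 1
    p12 = ((i >> 1) ^ (i >> 2)) & 1
    return p01, p12

def correct(psi):
    """Look up the single-qubit X correction from the syndrome and apply it."""
    table = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}
    flip = table[syndrome(psi)]
    return psi if flip is None else apply_x(psi, flip)

psi = encode(np.sqrt(0.7), np.sqrt(0.3))
corrupted = apply_x(psi, qubit=1)       # a single bit-flip error during idle noise
recovered = correct(corrupted)
print(np.allclose(recovered, psi))      # True: the error is corrected
```

The thesis's questions start where this sketch stops: what happens when a logical gate acts while the error occurs, and when the syndrome measurement itself takes finite time rather than being the instantaneous lookup shown here.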
112

Repasse cambial no Brasil: uma investigação a nível agregado a partir de um SVEC / Exchange-Rate pass-through in Brazil: a SVEC investigation

Godoi, Lucas Gonçalves, 14 June 2018
The impact of exchange-rate movements on price levels is of utmost importance for the formulation of economic policy. In this context, this work uses a new methodology for estimating the exchange-rate pass-through to different price indices over the period 2003–2017. Previous studies in this field either ignore the long-run relationships present in the system or do not use the restrictions implied by the system's cointegration structure. The identification of the structural shocks is therefore discussed on the basis of a separation between permanent and transitory shocks, grounded in theory and supported by statistical tests. In addition to this non-recursive structure, an alternative based on recursive Cholesky structures is estimated in order to make a comparison possible. Three different specifications are estimated so as to generate pass-through estimates for import, wholesale and consumer prices in Brazil. For the non-recursive structure, the pass-through to import prices ranges from 48 to 65% depending on the specification and is different from complete in the long run. The pass-through to wholesale prices ranges from 11 to 15% and is statistically different from zero in two of the three specifications. The pass-through to consumer prices ranges from 4 to 13% and is also statistically different from zero in two of the three specifications.
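The following Python sketch, using statsmodels, illustrates the reduced-form side of the kind of cointegrated system described above; the data file and column names are placeholders, and the structural (SVEC) identification step that distinguishes this thesis is not reproduced here.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Hypothetical monthly data in logs: nominal exchange rate, import prices,
# wholesale prices, consumer prices (file and column names are placeholders).
data = pd.read_csv("brazil_macro.csv", index_col=0, parse_dates=True)
y = data[["log_exchange_rate", "log_import_prices",
          "log_wholesale_prices", "log_consumer_prices"]]

# Johansen-type trace test to choose the cointegration rank.
rank = select_coint_rank(y, det_order=0, k_ar_diff=2, method="trace", signif=0.05)

# Reduced-form VECM; a structural (SVEC) step would impose further restrictions
# separating permanent from transitory shocks, which is omitted in this sketch.
model = VECM(y, k_ar_diff=2, coint_rank=rank.rank, deterministic="co")
res = model.fit()

# beta holds the cointegrating vectors; ratios of its elements give one crude
# reading of the long-run relation between the exchange rate and each price index.
print(res.beta)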
113

The effects of error correction with and without reinforcement on skill acquisition and preferences of children with autism spectrum disorder

Yuan, Chengan, 01 August 2018
Children with autism spectrum disorder (ASD) often require early intensive behavioral interventions (EIBI) to improve their skills in a variety of domains. Error correction is a common instructional component in EIBI programs because children with ASD tend to make persistent errors, and ineffective error correction can result in a lack of learning or undesirable behavior. To date, research has not systematically investigated the use of reinforcement during error correction for children with ASD. This study compared the effects of correcting errors with and without reinforcement and their impact on the preferences of young children with ASD. Four boys with ASD between 3 and 7 years old in China participated in this study. In the context of a repeated-acquisition design, each participant completed three sets of matching-to-sample tasks under the two error-correction procedures. In the error-correction-with-reinforcement condition, participants received reinforcers after correct responses prompted by the researcher following errors. In the without-reinforcement condition, participants did not receive any reinforcers after prompted responses. The number of sessions required to reach the mastery criterion under the two conditions varied among the participants, and visual analysis did not confirm a functional relation between the error-correction procedures and the number of sessions required to reach mastery. With regard to the children's preferences, three children preferred the with-reinforcement condition and one preferred the without-reinforcement condition. The findings have conceptual implications and suggest practical implications relating to treatment preference.
114

Broad-Band Space Conservative On Wafer Network Analyzer Calibrations With More Complex SOLT Definitions

Padmanabhan, Sathya, 29 March 2004
An improved Short-Open-Load-Thru (SOLT) on-wafer vector network analyzer calibration method for broad-band accuracy is proposed. Accurate measurement of on-wafer devices over a wide frequency range, from DC to high frequencies, with a minimum number of space-conservative standards has always been desirable. The work is therefore aimed at improving existing calibration methods and suggesting a best-practice strategy that can be adopted to obtain greater accuracy with a simplified procedure and calibration set. Quantitative and qualitative comparisons are made with existing calibration techniques, and the advantages and drawbacks of each calibration are analyzed. Prior work at the University of South Florida on an improved SOLT calibration is summarized; the presented work is a culmination and refinement of that prior USF work, which suggested that SOLT calibration improves with more complex definitions for the calibration standards. Modeling of the load and thru standards is shown to improve accuracy, since the frequency variation of these two standards can be significant. The load is modeled with a modified equivalent circuit that includes the high-frequency parasitics, and the model is physically verified on different substrates. The relation of load impedance to DC resistance is verified and its significance in SOLT calibrations is illustrated. The thru equation accounts for the reflections, phase shift and losses of a transmission line, including dielectric and conductor losses; these equations are important when a non-zero-length thru is assumed for the calibration. The complex definitions of the calibration standards are included in the calibration algorithm with LabVIEW and tested on two different VNAs, a Wiltron 360B and an Anritsu Lightning. The importance of including forward and reverse switch-term error correction in the algorithm is analyzed, and measurements that verify the improvement are shown. The concept of using calibration standards with the same footprint to simplify the calibration process is highlighted, with results to verify it. The proposed technique thus provides a calibration strategy that can overcome the low-frequency problems of TRL and retain TRL accuracy at high frequencies while enabling the use of a compact, common-footprint calibration set.
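To make the underlying error-correction arithmetic concrete, here is a minimal NumPy sketch of the one-port short-open-load portion of such a calibration, assuming idealized standard definitions (reflection coefficients of -1, +1 and 0); the thesis's point is precisely that replacing these idealized definitions with more complex models of the standards improves broadband accuracy. The function names and numerical values are illustrative only.

```python
import numpy as np

def solve_one_port_error_terms(gamma_std, gamma_meas):
    """Solve for the one-port error terms (e00, delta, e11) from three standards.
    Uses the model  Gm = e00 + delta*Ga + e11*Ga*Gm,  with delta = e01*e10 - e00*e11."""
    A = np.array([[1.0, ga, ga * gm] for ga, gm in zip(gamma_std, gamma_meas)],
                 dtype=complex)
    b = np.array(gamma_meas, dtype=complex)
    e00, delta, e11 = np.linalg.solve(A, b)
    return e00, delta, e11

def correct(gamma_meas_dut, e00, delta, e11):
    """Apply the error terms to a raw device-under-test measurement."""
    return (gamma_meas_dut - e00) / (delta + e11 * gamma_meas_dut)

# Idealized standard definitions at one frequency point (short, open, load);
# the "measured" values below are made-up numbers for illustration only.
gamma_std = [-1.0, 1.0, 0.0]
gamma_meas = [-0.93 + 0.05j, 0.96 - 0.02j, 0.04 + 0.01j]
e00, delta, e11 = solve_one_port_error_terms(gamma_std, gamma_meas)
print(correct(0.30 + 0.10j, e00, delta, e11))
```

In a full SOLT calibration the same idea is extended to the two-port (and switch-term) error model, and the constants in `gamma_std` are replaced by frequency-dependent models of the short, open, load and thru, which is where the "more complex SOLT definitions" of the title enter.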
115

Essays on Wage and Price Formation in Sweden

Friberg, Kent, January 2004
Study I: Real Wage Determination in the Swedish Engineering Industry

This study uses the monopoly union model to examine the determination of real wages, and in particular the effects of active labour market programmes (ALMPs) on real wages, in the engineering industry. Quarterly data for the period 1970:1 to 1996:4 are used in a cointegration framework, utilising Johansen's maximum likelihood procedure. On the basis of the Johansen (trace) test results, vector error correction (VEC) models are created in order to model the determination of real wages in the engineering industry. The estimation results support the presence of a long-run wage-raising effect from rises in labour productivity, in the tax wedge, in the alternative real consumer wage and in real UI benefits. The estimation results also support the presence of a long-run wage-raising effect from positive changes in the participation rates in ALMPs, relief jobs and labour market training. This could be interpreted as meaning that the possibility of participating in an ALMP increases the utility for workers of not being employed in the industry, which in turn could increase real wages in the industry in the long run. Finally, the estimation results show evidence of a long-run wage-reducing effect from positive changes in the unemployment rate.

Study II: Intersectoral Wage Linkages in Sweden

The purpose of this study is to investigate whether the wage-setting in certain sectors of the Swedish economy affects the wage-setting in other sectors. The theoretical background is the Scandinavian model of inflation, which states that the wage-setting in the sectors exposed to international competition affects the wage-setting in the sheltered sectors of the economy. The Johansen maximum likelihood cointegration approach is applied to quarterly data on Swedish sector wages for the period 1980:1–2002:2. Different vector error correction (VEC) models are created, based on assumptions as to which sectors are exposed to international competition and which are not. The adaptability of wages between sectors is then tested by imposing restrictions on the estimated VEC models. Finally, Granger causality tests are performed in the different restricted/unrestricted VEC models to test for sector wage leadership. The empirical results indicate considerable adaptability in wages between manufacturing, construction, the wholesale and retail trade, the central government sector and the municipalities and county councils sector, which is consistent with the assumptions of the Scandinavian model. Further, the empirical results indicate a low level of adaptability in wages between the financial sector and manufacturing, and between the financial sector and the two public sectors. The Granger causality tests provide strong evidence for the presence of intersectoral wage causality, but no evidence of a wage-leading role in line with the assumptions of the Scandinavian model for any of the sectors.

Study III: Wage and Price Determination in the Private Sector in Sweden

The purpose of this study is to analyse wage and price determination in the private sector in Sweden during the period 1980–2003. The theoretical background is a variant of the "imperfect competition model of inflation", which assumes imperfect competition in the labour and product markets; according to the model, wages and prices are determined as a result of a "battle of mark-ups" between trade unions and firms. The Johansen maximum likelihood cointegration approach is applied to quarterly Swedish data on consumer prices, import prices, private-sector nominal wages, private-sector labour productivity and the total unemployment rate for the period 1980:1–2003:3. The chosen cointegration rank of the estimated vector error correction (VEC) model is two; thus, two cointegration relations are assumed, one for private-sector nominal wage determination and one for consumer price determination. The estimation results indicate that an increase in consumer prices of one per cent lifts private-sector nominal wages by 0.8 per cent, while an increase in private-sector nominal wages of one per cent increases consumer prices by one per cent. An increase of one percentage point in the total unemployment rate reduces private-sector nominal wages by about 4.5 per cent. The long-run effects of private-sector labour productivity and import prices on consumer prices are about –1.2 and 0.3 per cent, respectively. The Rehnberg agreement of 1991–92 and the monetary policy shift in 1993 affected the determination of private-sector nominal wages, private-sector labour productivity, import prices and the total unemployment rate. The "offensive" devaluation of the Swedish krona by 16 per cent in 1982:4, as well as the move to a floating krona and its substantial depreciation at that time, affected the determination of import prices.
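For readers less familiar with the vector error correction (VEC) form used throughout these three studies, the generic Johansen specification can be sketched as follows; this is the standard textbook form, not an equation taken from the thesis.

```latex
\Delta y_t \;=\; \Pi y_{t-1} \;+\; \sum_{i=1}^{p-1} \Gamma_i \,\Delta y_{t-i} \;+\; \Phi D_t \;+\; \varepsilon_t,
\qquad \Pi = \alpha \beta'
```

Here y_t stacks the (log) wage, price, productivity and unemployment series, beta contains the long-run cointegrating relations whose number (the rank of Pi) is selected by the trace test, alpha measures the speed of adjustment back towards those relations, and D_t collects deterministic terms such as constants or event dummies. The restrictions and Granger causality tests mentioned in the studies are imposed and performed on this form.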
116

Forecasting the Stock Market: A Neural Network Approach

Andersson, Magnus; Palm, Johan, January 2009
Forecasting the stock market is a complex task, partly because of the random-walk behavior of stock price series. The task is further complicated by the noise, outliers and missing values that are common in financial time series. Despite this, the subject receives a fair amount of attention, which can probably be attributed to the potential rewards that follow from being able to forecast the stock market.

Since artificial neural networks are capable of exploiting non-linear relations in the data, they are suitable for forecasting the stock market. In addition, they are able to outperform the classic autoregressive linear models.

The objective of this thesis is to investigate whether the stock market can be forecast using the so-called error correction neural network. This is accomplished through the development of a method aimed at finding the optimal forecast model.

The results of this thesis indicate that the developed method can be applied successfully when forecasting the stock market. All five stocks that were forecast in this thesis, using forecast models based on the developed method, generated positive returns. This suggests that the stock market can be forecast using neural networks.
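As a rough, hypothetical illustration of the error-correction idea behind this family of networks, the NumPy sketch below feeds the previous one-step forecast error back into a small recurrent state update; it is not the exact architecture or training procedure used in the thesis, and the weights here are random and untrained, so the output only shows the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy price series and log returns (placeholder data, not real market prices).
prices = 100.0 * np.exp(np.cumsum(rng.normal(scale=0.01, size=300)))
returns = np.diff(np.log(prices))

n_state = 8
A = rng.normal(scale=0.3, size=(n_state, n_state))   # state transition
B = rng.normal(scale=0.3, size=(n_state, 1))         # input weight (last return)
D = rng.normal(scale=0.3, size=(n_state, 1))         # previous-error feedback weight
C = rng.normal(scale=0.3, size=(1, n_state))         # readout

def forecast(returns):
    """One-step-ahead forecasts; the previous forecast error enters the state update."""
    s = np.zeros((n_state, 1))
    err = 0.0
    preds = []
    for r in returns[:-1]:
        s = np.tanh(A @ s + B * r + D * err)
        pred = (C @ s).item()
        preds.append(pred)
        err = returns[len(preds)] - pred      # error on the step just forecast
    return np.array(preds)

print(forecast(returns)[:5])
```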
117

Low-power 8-bit Pipelined ADC with current mode Multiplying Digital-to-Analog Converter (MDAC)

Shahzad, Khurram, January 2009
In order to convert analog information into the digital domain, a pipelined analog-to-digital converter (ADC) offers an optimum balance of resolution, speed, power consumption, size and design effort.

In this thesis work we design and optimize an 8-bit pipelined ADC for low power. The ADC has a stage resolution of 1.5 bits and employs a current-mode multiplying digital-to-analog converter (MDAC), and the main focus is to design and optimize the MDAC. Based on the analysis of current-mode circuits discussed in Chapter 2, we design and optimize the MDAC circuit for the best possible effective number of bits (ENOB), speed and power consumption. Each of the first six stages, consisting of a sample-and-hold, a 1.5-bit flash ADC and an MDAC, is realized at the circuit level, as is the last stage, consisting of a 2-bit flash ADC. The delay logic for synchronization is implemented in Verilog-A and MATLAB, and a first-order digital error-correction algorithm is implemented in MATLAB.

The design is simulated in UMC 0.18 um technology in the Cadence environment; this technology was chosen because the target application for the ADC, an X-ray detector system, is designed in the same technology. The simulation results obtained in terms of ENOB and power consumption are satisfactory for the target application.
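As an aside on the last step mentioned above, the overlap-and-add digital error correction for a 1.5-bit-per-stage pipeline can be sketched as follows; this is the textbook redundant-signed-digit alignment for six 1.5-bit stages plus a final 2-bit flash, matching the architecture described, and is not the author's MATLAB implementation.

```python
def digital_error_correction(stage_codes, flash_code):
    """Combine six 1.5-bit stage codes (each in {0, 1, 2}) and the final 2-bit
    flash code (0..3) into an 8-bit output word.  Each stage's 2-bit code is
    shifted so that it overlaps the next stage's code by one bit; the carries
    produced by the additions absorb first-order comparator errors."""
    assert len(stage_codes) == 6
    out = 0
    for i, d in enumerate(stage_codes):
        out += d << (6 - i)          # stage 1 (i = 0) is the most significant
    out += flash_code
    return out                        # 0 .. 255 for the 8-bit pipeline

# Example: every 1.5-bit stage decides the middle code and the flash reads 2.
print(digital_error_correction([1, 1, 1, 1, 1, 1], 2))   # 128, i.e. mid-scale
```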
118

Meta-Model Guided Error Correction for UML Models

Bäckström, Fredrik; Ivarsson, Anders, January 2007
Modeling is a complex process which is quite hard to do in a structured and controlled way. Many companies provide a set of guidelines for model structure, naming conventions and other modeling rules. Using meta-models to describe these guidelines makes it possible to check whether a UML model follows the guidelines or not. Providing this error checking of UML models is only one step on the way to making modeling software an even more valuable and powerful tool.

Moreover, by providing correction suggestions and automatic correction of these errors, we try to give the modeler as much help as possible in creating correct UML models. Since the area of model correction based on meta-models has not been researched before, we have taken an explorative approach.

The aim of the project is to create an extension of the program MetaModelAgent, by Objektfabriken, which is a meta-modeling plug-in for IBM Rational Software Architect. The thesis shows that error correction of UML models based on meta-models is a feasible way to provide automatic checking of modeling guidelines. The developed prototype is able to give correction suggestions and automatic correction for many types of errors that can occur in a model.

The results imply that meta-model guided error correction techniques should be further researched and developed to enhance the functionality of existing modeling software.
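The following is a hypothetical miniature of the kind of guideline checking and correction described above, not MetaModelAgent's actual API: each rule inspects a model element and, when a guideline is violated, returns a correction suggestion that can optionally be applied automatically.

```python
from dataclasses import dataclass, field

@dataclass
class UmlClass:
    name: str
    stereotypes: list = field(default_factory=list)

def check_naming(element):
    """Guideline (made-up example rule): class names should be UpperCamelCase."""
    fixed = element.name[:1].upper() + element.name[1:]
    if fixed != element.name:
        return ("rename class to " + fixed, lambda: setattr(element, "name", fixed))
    return None

def check_and_correct(model, rules, auto_apply=False):
    """Run all rules over all elements; collect suggestions and optionally apply them."""
    suggestions = []
    for element in model:
        for rule in rules:
            hit = rule(element)
            if hit:
                suggestion, apply_fix = hit
                suggestions.append((element.name, suggestion))
                if auto_apply:
                    apply_fix()
    return suggestions

model = [UmlClass("orderHandler"), UmlClass("Invoice")]
print(check_and_correct(model, [check_naming], auto_apply=True))
print([c.name for c in model])   # ['OrderHandler', 'Invoice']
```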
119

Feedback and Error Corrections: On Swedish Students' Written English Assignments

Eriksson, Maria, January 2006
It is important to think about how to correct an essay and what the students should learn from it. My aim in this paper is to look into what different researchers have said about feedback on written assignments and to carry out a study of the kind of feedback that is actually used in secondary school today, and of what students and teachers think about it.

The results show that underlining is the marking technique most used in the secondary school where I did my investigation. This technique was also the one most preferred by the students. Two teachers were interviewed, and both said that they used underlining because experience has shown that this marking technique is the most effective one. Furthermore, the results from the essays differed when analyzing errors corrected with complete underlining, partial underlining, crossing out and giving the right answer: one marking technique got good results for one kind of error and worse results for others. My conclusion is that teachers need to vary their marking technique depending on the specific kind of error.

Also, the results from a questionnaire showed that most of the students would like to get feedback on every written assignment. Not many of them said that they were already getting it, although this was what both teachers claimed. To conclude, there are many different ways to deal with marking and feedback, and the key word seems to be variation. As long as teachers vary their ways of marking and giving feedback, they will eventually find one or two that are most effective. Involving the students in this decision can also be a good idea, if they are interested.
120

The alleged negative consequence of higher productivity: An empirical analysis of the effect of relative productivity on terms of trade

Malmström, Anna, January 2007
The relationship between increased productivity and an improved standard of living is not questioned at the global level, but does productivity growth necessarily lead to a higher standard of living at the national level? Supported by empirical results, it is suggested that high relative productivity growth is not always worth striving for, since it can translate into decreased welfare in the form of deteriorated terms of trade. This study attempts to examine the impact of relative productivity on the terms of trade in the OECD countries and in Sweden, using an error-correction model. The purpose is further extended in order to estimate the impact of increased relative productivity growth on welfare. The results suggest that the method used for measuring productivity has a great impact on the findings, but conclude that a 1% higher relative labour productivity growth is associated with a 0.23% decline in the terms of trade.
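As a hedged illustration of the model class referred to above, and not the exact specification estimated in the thesis, a single-equation error-correction model for the terms of trade could be written as:

```latex
\Delta \ln \mathrm{ToT}_t \;=\; \mu \;+\; \gamma\,\Delta \ln \mathrm{RelProd}_t
 \;-\; \lambda\bigl(\ln \mathrm{ToT}_{t-1} - \beta \ln \mathrm{RelProd}_{t-1}\bigr) \;+\; \varepsilon_t
```

In a specification of this form, the short-run coefficient gamma and the long-run coefficient beta separate the immediate and equilibrium responses of the terms of trade to relative productivity, and it is the long-run part that would carry the elasticity summarized above (on the order of -0.23), while lambda measures how quickly deviations from the long-run relation are corrected.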
