151

Effects of Error Correction During Assessment Probes on the Acquisition of Sight Words for Students with Moderate Intellectual Disabilities

Waugh, Rebecca E 25 June 2010 (has links)
Simultaneous prompting is an errorless learning strategy designed to reduce the number of errors students make; however, research has shown a disparity in the number of errors students make during instructional versus probe trials. This study directly examined the effects of error correction versus no error correction during probe trials on the effectiveness and efficiency of simultaneous prompting in the acquisition of sight words by three middle school students with moderate intellectual disabilities. A single-case adapted alternating treatments design (Sindelar, Rosenberg, & Wilson, 1985) was employed to examine whether error correction during probe trials reduced error rates. A functional relation between error correction during probe sessions and reduced error rates was established for two of the three students. Error correction during assessment probes required fewer sessions to criterion, produced fewer probe errors and a higher percentage of correct responding on the subsequent trial, and required less total probe time. For two of the three students, probes with error correction also resulted in a more rapid acquisition rate.
152

A vector error correction model for the relationship between public debt and inflation in Germany

Nastansky, Andreas, Mehnert, Alexander, Strohe, Hans Gerhard January 2014 (has links)
In this paper, the interaction between public debt and inflation, including mutual impulse responses, is analysed. The European sovereign debt crisis has once again brought into focus the consequences of public debt, in combination with an expansive monetary policy, for the development of consumer prices. Public deficits can lead to inflation if the money supply expands. The high level of national debt, not only in the Euro-crisis countries, and the strong increase in the total assets of the European Central Bank resulting from its unconventional monetary policy have raised fears that national debt will be inflated away. The transmission from public debt to inflation through the money supply and the long-term interest rate is shown in the paper. Based on these theoretical considerations, the variables public debt, consumer price index, money supply M3 and long-term interest rate are analysed within a vector error correction model estimated by the Johansen approach. In the empirical part of the article, quarterly data for Germany from 1991 to 2010 are examined.
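To make the estimation strategy concrete, the following is a minimal sketch of a Johansen-based VECM using the statsmodels VECM API, assuming the four quarterly series are available in a pandas DataFrame; the file name and column names are illustrative placeholders, not the authors' data set.

```python
# Minimal sketch of a Johansen-based VECM in Python (statsmodels assumed
# available). File name and column names are illustrative placeholders.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

df = pd.read_csv("germany_quarterly.csv",          # hypothetical data file
                 index_col="quarter", parse_dates=True)
endog = df[["public_debt", "cpi", "m3", "long_rate"]]

# Johansen trace test to choose the cointegration rank.
rank = select_coint_rank(endog, det_order=0, k_ar_diff=4, method="trace")
print("selected cointegration rank:", rank.rank)

# VECM with the selected rank; four lags in differences for quarterly data,
# constant restricted to the cointegration relation ("ci").
model = VECM(endog, k_ar_diff=4, coint_rank=rank.rank, deterministic="ci")
res = model.fit()
print(res.summary())

# Mutual impulse responses, e.g. the reaction of CPI to a public-debt shock.
res.irf(periods=12).plot()
```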
153

An Unsupervised Approach to Detecting and Correcting Errors in Text

Islam, Md Aminul 01 June 2011 (has links)
In practice, most approaches for text error detection and correction are based on a conventional domain-dependent background dictionary that represents a fixed and static collection of correct words of a given language and, as a result, satisfactory correction can only be achieved if the dictionary covers most tokens of the underlying correct text. Moreover, most approaches to text correction address only one or at best a very few types of errors. The purpose of this thesis is to propose an unsupervised approach to detecting and correcting text errors that can compete with supervised approaches and answer the following questions: Can an unsupervised approach efficiently detect and correct a text containing multiple errors of both syntactic and semantic nature? What is the magnitude of error coverage, in terms of the number of errors that can be corrected? We conclude that (1) an unsupervised approach can efficiently detect and correct a text containing multiple errors of both syntactic and semantic nature. Error types include: real-word spelling errors, typographical errors, lexical choice errors, unwanted words, missing words, prepositional errors, article errors, punctuation errors, and many of the grammatical errors (e.g., errors in agreement and verb formation). (2) The magnitude of error coverage, in terms of the number of errors that can be corrected, is almost double the number of correct words in the text. Although this is not the upper limit, it is what is practically feasible. We use engineering approaches to answer the first question and theoretical approaches to answer and support the second question. We show that finding inherent properties of a correct text using a corpus in the form of an n-gram data set is more appropriate and practical than other approaches to detecting and correcting errors. Instead of using rule-based approaches and dictionaries, we argue that a corpus can effectively be used to infer the properties of these types of errors, and to detect and correct them. We test the robustness of the proposed approach separately for some individual error types, and then for all types of errors. The approach is language-independent and can be applied to other languages as long as n-gram data are available. The results of this thesis thus suggest that unsupervised approaches, which are often dismissed in favor of supervised ones in the context of many Natural Language Processing (NLP) related tasks, may present an interesting array of NLP-related problem solving strengths.
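As a toy illustration of the corpus-driven idea (not the thesis's actual system or n-gram data set), the sketch below corrects a real-word error by preferring the confusable variant whose surrounding trigrams have far higher corpus support; the trigram counts and confusion set are illustrative placeholders.

```python
# Toy corpus-driven correction of a real-word error: prefer the confusable
# variant whose surrounding trigrams have far higher corpus support.
# The trigram counts and confusion set below are illustrative placeholders.
TRIGRAMS = {
    ("a", "piece", "of"): 50210,
    ("piece", "of", "cake"): 9120,
    ("a", "peace", "of"): 41,
    ("peace", "of", "cake"): 3,
}
CONFUSABLE = {"peace": {"piece"}, "piece": {"peace"}}

def score(tokens):
    """Total corpus support of all trigrams in the token sequence."""
    return sum(TRIGRAMS.get(tuple(tokens[i:i + 3]), 0)
               for i in range(len(tokens) - 2))

def correct(tokens):
    """Swap in a confusable variant if it makes the context much more likely."""
    best, best_score = list(tokens), score(tokens)
    for i, tok in enumerate(tokens):
        for alt in CONFUSABLE.get(tok, ()):
            cand = tokens[:i] + [alt] + tokens[i + 1:]
            if score(cand) > 2 * best_score:    # simple evidence threshold
                best, best_score = cand, score(cand)
    return best

print(correct("a peace of cake".split()))       # ['a', 'piece', 'of', 'cake']
```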
154

An intelligent spelling error correction system based on the results of an analysis which has established a set of phonological and sequential rules obeyed by misspellings

Fawthrop, David January 1984 (has links)
This thesis describes the analysis of over 1300 spelling and typing errors. It introduces and describes many empirical rules that these errors obey and shows that the vast majority of errors are variations on some 3000 basic forms. It also describes and tests an intelligent, knowledge-based spelling error correction algorithm based on the above work. Using the Shorter Oxford English Dictionary, it correctly identifies over 90% of typical spelling errors and over 80% of all spelling errors, where the correct word is in the dictionary. The methodology used is as follows: an error form is compared with each word in that small portion of the dictionary likely to contain the intended word, but examination of improbable words is rapidly abandoned using heuristic rules. Any differences between the dictionary word and the error form are compared with the basic forms. Any dictionary word that differs from the error form by only one or two basic forms is transferred to a separate list. The program then acts as an expert system in which each of the basic forms is a production, or rule, with a subjective Bayesian probability. A choice is then made by calculating the Bayesian probability for each word in this list. An interactive spelling error corrector using the concepts and methods developed here is operating on the Bradford University Cyber 170/720 computer, and was used to correct this thesis. The corrector also runs on VAX and Prime computers.
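The candidate-generation-plus-Bayesian-scoring idea can be sketched as follows; the dictionary, the single-character edit operations standing in for the "basic forms", and the rule probabilities are illustrative placeholders, not Fawthrop's empirical values.

```python
# Sketch of candidate generation plus subjective-Bayesian ranking.
# Dictionary, edit operations and rule probabilities are illustrative only.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
DICTIONARY = {"separate", "desperate", "separable"}

# Illustrative subjective probabilities attached to each basic error form.
RULE_PROB = {"substitution": 0.40, "omission": 0.25,
             "insertion": 0.20, "transposition": 0.15}

def edits(word):
    """Yield (candidate, rule) pairs one basic form away from the error form."""
    for i in range(len(word)):
        yield word[:i] + word[i + 1:], "insertion"            # extra letter typed
        for c in ALPHABET:
            yield word[:i] + c + word[i + 1:], "substitution"  # wrong letter
        if i < len(word) - 1:
            yield (word[:i] + word[i + 1] + word[i] + word[i + 2:],
                   "transposition")                             # swapped letters
    for i in range(len(word) + 1):
        for c in ALPHABET:
            yield word[:i] + c + word[i:], "omission"           # missing letter

def correct(error_form):
    """Rank dictionary words reachable by one basic form, best rule first."""
    scored = {}
    for cand, rule in edits(error_form):
        if cand in DICTIONARY:
            scored[cand] = max(scored.get(cand, 0.0), RULE_PROB[rule])
    return sorted(scored.items(), key=lambda kv: -kv[1])

print(correct("seperate"))                       # [('separate', 0.4)]
```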
155

The effects of direct and indirect written corrective feedback (CF) on English-as-a-second-language (ESL) students’ revision accuracy and writing skills

Karim, Khaled Mahmud Rezaul 10 January 2014 (has links)
Since the publication of Truscott's paper in 1996 arguing against the effectiveness of grammar correction in second language (L2) writing, there has been an ongoing debate about the effectiveness of written corrective feedback (WCF) in the field of second language acquisition (SLA). The debate has persisted because of conflicting results from research examining the short-term effects of WCF and a scarcity of research investigating its long-term effects (Ferris, 2004, 2006). Using a mixed-method research design, this study investigated the effects of direct and indirect WCF on students' revision accuracy within the same piece of writing as well as its transfer effects on new pieces of writing over time. The study also investigated the differential effects of direct and indirect CF on grammatical and non-grammatical errors. Using a stimulated recall strategy, it further explored students' perceptions of and attitudes toward the types of feedback they received. Fifty-three intermediate-level English-as-a-second-language (ESL) students were divided randomly into four groups: direct CF, underlining only, underlining plus metalinguistic CF, and a control group. Students produced three pieces of writing from three different picture prompts and revised them over a three-week period. To examine the delayed effects of feedback on students' writing skills, each group was also asked to produce a new piece of writing two weeks later. The results demonstrated that all three feedback groups significantly outperformed the control group with respect to revision accuracy in all three writing tasks. WCF did not have any significant delayed transfer effects on improving students' writing skills. Short-term transfer effects on overall accuracy, however, were found for underlining plus metalinguistic CF, but not for the other feedback types. In terms of grammatical and non-grammatical accuracy, only direct CF showed significant short-term transfer effects on improving grammatical accuracy. These findings suggest that while direct CF was successful in improving short-term grammatical accuracy, both direct and indirect CF have the potential to improve accuracy in writing. The findings also make clear that no single form of CF can be effective in addressing all types of linguistic errors. Findings from the qualitative study demonstrated that different aspects of direct and indirect CF helped learners in different ways to attend successfully to different types of CF. In the case of direct CF, learners who successfully corrected errors believed that the explicit information or correction was useful: it helped them understand what errors they had made and helped them remember the corrections. Learners who successfully corrected errors from indirect CF, whether underlining alone or underlining combined with metalinguistic CF, indicated that these two types of indirect CF helped them notice the errors, think about them, guess the correct form(s) or feature(s) and remember the correction. The findings also indicated that both grammatical and non-grammatical errors can be difficult for learners to correct from indirect CF if they do not have sufficient L2 proficiency.
Findings from the qualitative study also indicated that while learners considered both direct CF and the two forms of indirect CF useful, indirect CF in the form of underlining together with metalinguistic CF was preferred by a majority of learners because it provided valuable information about the errors made while also promoting thinking and better understanding.
156

The relationship between market value and book value for five selected Japanese firms

Omura, Teruyo January 2005 (has links)
Studies of the value relevance of accounting numbers in capital market research are consistent with the simple view that, in equilibrium, book values are equal to or have some long-term relationship with market values, and that market returns are related to book returns. This dissertation examines the value relevance of annually reported book values of net assets, earnings and dividends to the year-end market values of five Japanese firms between 1950 and 2004 (a period of 54 years). Econometric techniques are used to develop dynamic models of the relationship between market values, book values and a number of macro-economic variables. In constructing the models, the focus is on providing an accurate statistical description of the underlying relationships between market and book value. Such research is expected to add to the body of knowledge on the factors that influence Japanese stock prices. The significant findings of the study are as follows: 1) well-specified models of the data-generating process for market value, based on the information set used to derive the models, are log-linear in form; additive, linear models in untransformed variables are not well specified and forecast badly out of sample; 2) in the long run, the book value of net assets is relevant to market value in the five Japanese firms examined.
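As a hedged illustration of the specification issue the abstract raises (log-linear versus untransformed linear models), the following sketch fits both forms and compares out-of-sample forecasts; the data file and column names are hypothetical, and strictly positive values are assumed for the log transform.

```python
# Hedged sketch contrasting a log-linear and an untransformed linear model of
# market value on book value and earnings; file and column names are
# hypothetical and positive values are assumed for the log transform.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("firm_annual.csv")          # hypothetical annual firm data
train, test = df.iloc[:-5], df.iloc[-5:]     # hold out the last five years

def fit_and_forecast(transform):
    y = transform(train["market_value"])
    X = sm.add_constant(transform(train[["book_value", "earnings"]]))
    res = sm.OLS(y, X).fit()
    Xt = sm.add_constant(transform(test[["book_value", "earnings"]]))
    pred = res.predict(Xt)
    if transform is np.log:                  # undo the log for comparison
        pred = np.exp(pred)
    rmse = np.sqrt(np.mean((test["market_value"] - pred) ** 2))
    return res.rsquared, rmse

for name, tr in [("linear", lambda x: x), ("log-linear", np.log)]:
    r2, rmse = fit_and_forecast(tr)
    print(f"{name:10s}  in-sample R2={r2:.3f}  out-of-sample RMSE={rmse:.1f}")
```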
157

Investigating the relationship between market values and accounting numbers for 30 selected Australian listed companies

Clout, Victoria Jane January 2007 (has links)
In capital market research (CMR), studies of the value relevance of accounting numbers are founded upon the concept that, in equilibrium, book values are equal to or have some long-term relationship with market values and that market returns are related to book returns. This thesis seeks to address a gap in the CMR by examining 30 selected individual firms listed on the Australian stock market over the period 1950 to 2004, using equilibrium correction modelling techniques. The limited prior work in this area has used cross-sectional techniques rather than the long-run, time-series analysis used in this study. Moreover, dynamic analysis in the CMR has tended to focus on indexes or portfolio data rather than the firm-specific case study data of the type modelled here. No prior research has taken this approach using Australian data. The results of this thesis indicate that an equilibrium correction relationship between market values and book values for firms listed on the Australian Stock Exchange (ASX) can be determined using accounting and macroeconomic regressors. The findings are consistent with the literature in terms of the variables identified as important in firm valuation by the three main approaches: the analysts' (industry) approach, the finance and accounting theory (textbook) approach and the CMR literature approach. The earnings, dividends and book value variables are significant in their relationships with firms' market values. The models constructed were typically more informative and had better forecasting performance than the a priori models tested, which were based on theory and the literature.
158

The Australian Housing Market: Price Dynamics and Capital Stock Growth

Mikhailitchenko, Serguei, na January 2008 (has links)
This study was motivated by the desire to contribute to the understanding of the movement of house prices and the role of the so-called economic ‘fundamentals’ in the housing market, especially within an Australian context. The core objective of this thesis is to aid understanding of the economic and other mechanisms by which the Australian housing market operates. We do this by constructing an analytical framework, or model, that encompasses the most important characteristics of the housing market. This thesis examines two important aspects of the Australian housing market: movements of house prices and changes in the net capital stock of dwellings in Australia. Movements of house prices are modelled from two perspectives: firstly, using the ‘fundamental’ approach, which explains the phenomena by changes in such ‘fundamental’ explanatory variables as income, interest rates, population and prices of building materials, and secondly, by analysing spatial interdependence of house prices in Australian capital cities. Changes in stock of dwellings were also modelled on the basis of a ‘fundamental’ approach by states and for Australia as a whole...
159

Design of a retransmission strategy for error control in data communication networks

January 1976 (has links)
By Seyyed J. Golestaani. Bibliography: p. 107-108. Prepared under Grant NSF-ENG75-14103. Originally presented as the author's thesis (M.S.), M.I.T. Dept. of Electrical Engineering and Computer Science, 1976.
160

Error correction using RS-LDPC codes

Γκίκα, Ζαχαρούλα 07 June 2013 (has links)
Nowadays, almost every telecommunication system that aims to achieve high transmission rates has adopted error-correcting codes in order to increase its reliability while decreasing the required transmission power. The information signal is transmitted over a communication channel in the presence of noise, and error-correcting codes allow systems to detect and correct the errors that the channel introduces into the transmitted information. LDPC (Low Density Parity Check) codes form a large family of linear block error-correcting codes with excellent performance, close to the Shannon limit. In this thesis we study LDPC codes and the corresponding hardware architectures. LDPC codes are increasingly used in applications that require reliable and highly efficient transmission under strong noise. An LDPC code is fully defined by a sparse parity-check matrix, and decoding is performed with iterative belief-propagation algorithms. LDPC codes perform very well at high noise levels, but at very low noise levels they suffer from the "error floor" effect. We first present a thorough analysis of an algebraic method for constructing regular LDPC codes based on Reed-Solomon codes with two information symbols. This construction yields a parity-check matrix H whose Tanner graph is free of cycles of length 4 (so its girth is at least 6). Short cycles in the Tanner graph "trap" the decoder in states from which it cannot detect and correct the errors introduced during transmission. Using this method we can therefore construct simply structured codes which, combined with iterative decoding algorithms, lead to decoders with excellent error-correcting performance and an error floor at very low BER values. Furthermore, parity-check matrices of this type impose a specific structure on the generator matrix G used for encoding, so we also study how to construct a systematic generator matrix G, which greatly simplifies the encoding process. All of the above is applied to the construction of the (2048, 1723) RS-LDPC code, a rate-0.84 code adopted in the IEEE 802.3an standard for 10GBASE-T Ethernet, which is of particular interest because of its performance. For this code we propose a design for the encoder, the decoder and all the peripheral circuits required to build a complete system for transmitting, receiving and correcting data.
With this background in place, we implemented the design in VHDL, ran the necessary simulations (Modelsim), and then carried out synthesis (Xilinx ISE XST) and full implementation on an FPGA (Virtex 5 XC5VLX330T-1FF1738). This enabled very fast simulations, especially at low noise levels, compared with the corresponding software implementations (MATLAB). By running experiments in hardware we evaluate the error-correcting capability of the decoding algorithm and compare the results with those of the software implementations. We also study how the effectiveness of the decoding algorithm varies with the number of iterations it performs. Finally, we measure the decoder throughput so that, if a specific data-processing rate is required, the number of decoders needed can be estimated.
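To make the iterative-decoding idea concrete, here is a toy hard-decision bit-flipping decoder on a small hand-written parity-check matrix; it is only a sketch of syndrome-driven iterative decoding, not the (2048, 1723) RS-LDPC code or the decoder implemented in the thesis.

```python
# Toy hard-decision bit-flipping decoder driven by the syndrome of a
# parity-check matrix H. The (7,4) Hamming matrix below is a stand-in;
# it is not an RS-based LDPC code and not the thesis's hardware decoder.
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=int)

def bit_flip_decode(r, H, max_iters=10):
    """Flip, on each iteration, the bits involved in the most unsatisfied
    parity checks until the syndrome is zero or max_iters is reached."""
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r, True                    # all parity checks satisfied
        # Count, for every bit, how many failing checks it participates in.
        unsat = H[syndrome == 1].sum(axis=0)
        r[unsat == unsat.max()] ^= 1          # flip the most "suspicious" bits
    return r, False

codeword = np.zeros(7, dtype=int)             # the all-zero codeword
received = codeword.copy()
received[2] ^= 1                              # channel flips one bit
decoded, ok = bit_flip_decode(received, H)
print(ok, decoded)                            # -> True [0 0 0 0 0 0 0]
```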
