About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

國中生拼字錯誤中母音字母代換分析 / An Analysis on Vowel Substitution in Spelling Errors by Junior High School Students

胡明玉, Hu, Ming-yu Unknown Date (has links)
Master's thesis abstract, In-service Master's Program, Department of English, National Chengchi University. Thesis title: An Analysis of Vowel Letter Substitution in Junior High School Students' Spelling Errors. Advisor: Hsueh-Ying Yu. Graduate student: Hu Ming-yu. This study investigates junior high school students' spelling errors involving vowel letters, seeking the common principles or patterns hidden within errors that at first appear chaotic and groundless. The study observes students' vowel-letter substitutions through an experiment in two stages: the first stage collects student data and looks for common principles or patterns hidden in the misspellings; the second stage is designed to further confirm the findings of the first. The results show that spelling errors are strongly related to students' pronunciation. Four substitution patterns are most prominent: (1) the letter a replacing other letters; (2) the letter e replacing other letters; (3) the letter i replacing other letters; and (4) the letter o replacing other letters. The causes are: (1) confusion of similar sounds, for example among /e/, /æ/, and /ɛ/, whose tongue positions are close; (2) first-language influence, for example confusion of tense and lax vowels; and (3) confusion between different symbol systems, for example between phonetic symbols and letters. The final chapter offers pedagogical suggestions and directions for further research. / ABSTRACT This study investigates Chinese subjects' vowel-letter substitutions found in spelling errors. An empirical experiment designed in two stages is conducted to collect data from junior high school students. The first stage seeks hidden patterns behind the misspellings; data from the second stage then serve to confirm the findings of the first. The collected data show that pronunciation plays a significant role in students' spelling errors. In fact, spelling errors reflect subjects' development in spelling ability: they are not random and groundless; on the contrary, most are phonetically plausible. Major patterns found in vowel substitution involve substitution of (1) the letter a, (2) the letter e, (3) the letter o, and (4) the letter i. There are three main reasons for the substitutions: (1) confusion of similar sounds, which may result from nearby tongue positions; (2) L1 transfer, such as a lack of awareness of tense versus lax vowels; and (3) confusion of different representational systems, such as confusion of letter names with letter sounds and of alphabet forms with phonetic symbols. Finally, pedagogical implications and suggestions for further research are provided.
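The tally underlying such a substitution analysis can be sketched in a few lines of Python. This is a hypothetical illustration, not the thesis's instrument: the word pairs are invented, and the naive position-by-position comparison only handles same-length pairs, where a real study would align sequences properly.

```python
from collections import Counter

VOWELS = set("aeiou")

def tally_vowel_substitutions(pairs):
    """Count (intended letter -> written letter) vowel substitutions."""
    counts = Counter()
    for written, intended in pairs:
        if len(written) != len(intended):
            continue  # a real study would use proper sequence alignment
        for w, t in zip(written.lower(), intended.lower()):
            if w != t and t in VOWELS and w in VOWELS:
                counts[(t, w)] += 1
    return counts

# Hypothetical student data: (what was written, what was intended).
data = [("pan", "pen"), ("begin", "began"), ("hit", "hat"), ("rid", "red")]
for (intended, written), n in tally_vowel_substitutions(data).most_common():
    print(f"{intended} -> {written}: {n}")
```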
2

Diagnostika chyb v počítačových sítích založená na překlepech / Diagnosing Errors inside Computer Networks Based on the Typo Errors

Bohuš, Michal January 2020 (has links)
The goal of this diploma thesis is to create a system for diagnosing network data based on detecting and correcting spelling errors. The system is intended for network administrators as an additional diagnostic tool. In contrast to the primary use of spelling-error detection and correction in ordinary text, these methods are applied here to network data supplied by the user. The system works with NetFlow data, pcap files, or log files. Context is modeled through a set of purpose-built data categories, and the correctness of words is verified against dictionaries, one per category. Because finding a correction by edit distance alone yields many results, a heuristic for evaluating candidates was proposed to select the right one. The system was tested for both functionality and performance.
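A minimal sketch of the pipeline the abstract describes (per-category dictionaries, edit-distance candidates, and a heuristic to pick among them) follows. It is not the thesis implementation: the categories, word lists, frequency table, threshold, and the use of difflib's similarity ratio as a stand-in for edit distance are all assumptions.

```python
import difflib

DICTIONARIES = {  # each data category uses its own word list
    "dns": {"example.com", "intranet.local", "mail.example.com"},
    "url_path": {"index.html", "login", "static"},
}
FREQUENCY = {"example.com": 120, "mail.example.com": 30}  # seen in earlier traffic

def diagnose(token, category):
    """Return a likely correction for a misspelled token, or None if valid."""
    words = DICTIONARIES[category]
    if token in words:
        return None  # the token is correct for this category
    candidates = []
    for word in words:
        # difflib's similarity ratio stands in for an edit-distance measure
        distance = 1.0 - difflib.SequenceMatcher(None, token, word).ratio()
        if distance < 0.5:  # abandon improbable candidates early
            # heuristic: prefer close words that are frequent in prior traffic
            candidates.append((distance - 0.001 * FREQUENCY.get(word, 0), word))
    return min(candidates)[1] if candidates else None

print(diagnose("exmaple.com", "dns"))  # -> example.com (a likely typo)
```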
3

An Unsupervised Approach to Detecting and Correcting Errors in Text

Islam, Md Aminul 01 June 2011 (has links)
In practice, most approaches to text error detection and correction are based on a conventional domain-dependent background dictionary that represents a fixed, static collection of correct words of a given language; as a result, satisfactory correction can only be achieved if the dictionary covers most tokens of the underlying correct text. Moreover, most approaches to text correction handle only one, or at best a few, types of errors. The purpose of this thesis is to propose an unsupervised approach to detecting and correcting text errors that can compete with supervised approaches, and to answer the following questions: Can an unsupervised approach efficiently detect and correct a text containing multiple errors of both a syntactic and a semantic nature? What is the magnitude of error coverage, in terms of the number of errors that can be corrected? We conclude that (1) an unsupervised approach can efficiently detect and correct a text containing multiple errors of both a syntactic and a semantic nature. Error types include real-word spelling errors, typographical errors, lexical choice errors, unwanted words, missing words, prepositional errors, article errors, punctuation errors, and many grammatical errors (e.g., errors in agreement and verb formation). (2) The magnitude of error coverage, in terms of the number of errors that can be corrected, is almost double the number of correct words in the text. Although this is not the upper limit, it is what is practically feasible. We use engineering approaches to answer the first question and theoretical approaches to answer and support the second. We show that finding the inherent properties of a correct text using a corpus in the form of an n-gram data set is more appropriate and practical than other approaches to detecting and correcting errors. Instead of using rule-based approaches and dictionaries, we argue that a corpus can effectively be used to infer the properties of these types of errors, and to detect and correct them. We test the robustness of the proposed approach separately for some individual error types, and then for all types together. The approach is language-independent and can be applied to other languages, as long as n-gram data are available. The results of this thesis thus suggest that unsupervised approaches, which are often dismissed in favor of supervised ones in many Natural Language Processing (NLP) tasks, may present an interesting array of NLP-related problem-solving strengths.
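The core idea, letting corpus n-gram statistics rather than a dictionary flag a contextually unlikely word and rank replacements, can be sketched briefly. The trigram counts and the confusable-word list below are toy assumptions, not the thesis's n-gram data set.

```python
TRIGRAMS = {  # toy corpus counts standing in for a real n-gram data set
    ("piece", "of", "cake"): 900,
    ("peace", "of", "cake"): 1,
    ("piece", "of", "mind"): 40,
    ("peace", "of", "mind"): 800,
}
CONFUSABLES = {"peace": ["piece"], "piece": ["peace"]}  # real-word candidates

def correct(trigram):
    """Replace the first word if a confusable alternative fits the context better."""
    w1, w2, w3 = trigram
    best, best_count = w1, TRIGRAMS.get(trigram, 0)
    for cand in CONFUSABLES.get(w1, []):
        count = TRIGRAMS.get((cand, w2, w3), 0)
        if count > best_count:  # the context makes the candidate more plausible
            best, best_count = cand, count
    return (best, w2, w3)

print(correct(("peace", "of", "cake")))  # -> ('piece', 'of', 'cake')
print(correct(("peace", "of", "mind")))  # unchanged: context supports 'peace'
```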
4

Analysis of Third- and Fifth-Grade Spelling Errors on the Test of Written Spelling-4: Do Error Types Indicate Levels of Linguistic Knowledge?

Conway, Barbara Tenney 2011 August 1900 (has links)
A standardized test of spelling ability, the Test of Written Spelling – 4 (TWS-4), was used to explore the error patterns of Grade 3 and Grade 5 students in public and private schools in the southwestern United States. The study examined the relationship between the types of errors students make within a grade level (Grades 3 and 5 in this study) and their spelling proficiency. A qualitative analysis of errors on the TWS-4 produced distributions of errors categorized as phonological, phonetic, orthographic, etymological, and morphological. In both Grades 3 and 5, students in the lowest spelling achievement group made a higher proportion of phonological and phonetic errors. Students with higher standard spelling scores made a lower proportion of phonological and phonetic errors and a higher proportion of errors categorized as etymological and morphological. The Test of Silent Word Reading Fluency (TOSWRF; Mather, Hammill, Allen, & Roberts, 2004) was also administered to examine the relationship of these error types to literacy. The correlation between reading fluency standard scores and phonological and phonetic errors was negative, whereas the correlation between reading fluency and orthographic, etymological, and morphological error types was positive. This study underscores the value of treating spelling achievement as part of students' literacy profiles. It also highlights the importance of ensuring that students beyond the earliest years of reading and spelling development (Grades 3-5), especially those with low spelling proficiency, have phonological awareness and basic sound-symbol correspondences in place to support their spelling and reading, and that spelling is taught in a way that meets students' individual needs.
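The shape of the reported correlation can be illustrated with a short sketch. The student records below are invented, and reading the analysis as a correlation between each student's proportion of phonological and phonetic errors and a fluency standard score is an assumption; statistics.correlation requires Python 3.10 or later.

```python
from statistics import correlation  # available in Python 3.10+

# Invented records: (phonological + phonetic errors, total errors, fluency score).
students = [(8, 10, 82), (5, 10, 90), (3, 10, 101), (1, 10, 112)]

proportions = [p / total for p, total, _ in students]
fluency = [score for _, _, score in students]

# A negative coefficient mirrors the direction the study reports.
print(round(correlation(proportions, fluency), 3))
```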
5

An intelligent spelling error correction system based on the results of an analysis which has established a set of phonological and sequential rules obeyed by misspellings

Fawthrop, David January 1984 (has links)
This thesis describes the analysis of over 1300 spelling and typing errors. It introduces and describes many empirical rules that these errors obey and shows that the vast majority of errors are variations on some 3000 basic forms. It also describes and tests an intelligent, knowledge-based spelling error correction algorithm built on this analysis. Using the Shorter Oxford English Dictionary, it correctly identifies over 90% of typical spelling errors and over 80% of all spelling errors where the correct word is in the dictionary. The methodology is as follows: an error form is compared with each word in the small portion of the dictionary likely to contain the intended word, but examination of improbable words is rapidly abandoned using heuristic rules. Any differences between the dictionary word and the error form are compared with the basic forms, and any dictionary word that differs from the error form by only one or two basic forms is transferred to a separate list. The program then acts as an expert system in which each basic form is a production rule with a subjective Bayesian probability; a choice is made from the list by calculating the Bayesian probability for each word on it. An interactive spelling error corrector using the concepts and methods developed here operates on the Bradford University Cyber 170/720 computer and was used to correct this thesis. The corrector also runs on VAX and Prime computers.
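The ranking step described here (candidate words that differ from the error form by known basic error forms, each form carrying a subjective probability) might be sketched as follows. The two patterns, their probabilities, and the word lists are invented for illustration and stand in for the thesis's roughly 3000 basic forms.

```python
PATTERNS = {  # basic error form -> subjective probability that it caused the error
    "ie_for_ei": 0.30,
    "doubled_letter_dropped": 0.25,
}

def match_patterns(error, word):
    """Return the basic forms (if any) that turn `word` into `error`."""
    found = []
    if word.replace("ei", "ie") == error:
        found.append("ie_for_ei")
    if any(word[:i] + word[i + 1:] == error and word[i] == word[i - 1]
           for i in range(1, len(word))):
        found.append("doubled_letter_dropped")
    return found

def correct(error, dictionary):
    """Pick the dictionary word whose matching basic form is most probable."""
    scored = []
    for word in dictionary:
        if abs(len(word) - len(error)) > 2:
            continue  # heuristic: rapidly abandon improbable words
        for pattern in match_patterns(error, word):
            scored.append((PATTERNS[pattern], word))
    return max(scored)[1] if scored else None

print(correct("recieve", ["receive", "recipe", "relieve"]))  # -> receive
print(correct("ocur", ["occur", "our", "ochre"]))            # -> occur
```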
6

Effects of Online Company Review Valence and Quality on Organizational Attraction

Cooper, Ashley Elizabeth 07 September 2016 (has links)
No description available.
7

A Spelling Error Analysis of Words with Closed Syllables for At-risk Readers

Nolan, Susan K. 09 August 2007 (has links)
No description available.
