461

Minimal Interference from Possessor Phrases in the Production of Subject-Verb Agreement

Nicol, Janet L., Barss, Andrew, Barker, Jason E. 02 May 2016 (has links)
We explore the language production process by eliciting subject-verb agreement errors. Participants were asked to create complete sentences from sentence beginnings such as The elf's/elves' house with the tiny window/windows and The statue in the elf's/elves' gardens. These are subject noun phrases containing a head noun and controller of agreement (statue), and two nonheads, a "local noun" (window(s)/garden(s)), and a possessor noun (elf's/elves'). Past research has shown that a plural nonhead noun (an "attractor") within a subject noun phrase triggers the production of verb agreement errors, and further, that the nearer the attractor to the head noun, the greater the interference. This effect can be interpreted in terms of relative hierarchical distance from the head noun, or via a processing window account, which claims that during production, there is a window in which the head and modifying material may be co-active, and an attractor must be active at the same time as the head to give rise to errors. Using possessors attached at different heights within the same window, we are able to empirically distinguish these accounts. Possessors also allow us to explore two additional issues. First, case marking of local nouns has been shown to reduce agreement errors in languages with "rich" inflectional systems, and we explore whether English speakers attend to case. Second, formal syntactic analyses differ regarding the structural position of the possessive marker, and we distinguish them empirically by comparing the relative magnitude of errors produced by possessors and local nouns. Our results show that, across the board, plural possessors are significantly less disruptive to the agreement process than plural local nouns. Proximity to the head noun matters: a possessor directly modifying the head noun induced a significant number of errors, but a possessor within a modifying prepositional phrase did not, though the local noun did. These findings suggest that proximity to a head noun is independent of a "processing window" effect. They also support a noun phrase-internal, case-like analysis of the structural position of the possessive ending and show that even speakers of inflectionally impoverished languages like English are sensitive to morphophonological case-like marking.
462

Standardising written feedback on L2 student writing / Henk Louw

Louw, Henk January 2006 (has links)
The primary aim of this study is to determine whether it is possible to standardise written feedback on L2 student writing for use in a computerised marking environment. It forms part of a bigger project aimed at enhancing the feedback process as a whole. The study attempts to establish "best practice" with regard to feedback on writing by establishing from the literature what works and what should be avoided. In addition, an empirical study was launched to establish what lecturers focus on and what marking techniques they use. A set of randomly selected essays from the Tswana Learner English Corpus and the Afrikaans Learner English Corpus was sent to the English departments of different tertiary institutions across the country. The essays were marked by the English lecturers at the relevant institutions. The conclusion was that lecturers typically focus on surface structures and use ineffective marking techniques. The best practice (and data from the empirical study) was then used to create a set of standardised feedback comments (tag set) that can be used in a specially programmed software package in which students submit their texts electronically. Lecturers can then mark the student essays on the computer, hopefully speeding up the process, while at the same time giving much more detailed feedback. In later stages of the bigger project, students will get individualised exercises based on the feedback, and experiments are currently being run to try to automate certain parts of the marking process in order to take some strain off the lecturers when marking. The immense archiving abilities of the computer will also be utilised in order to create opportunities for longitudinal studies. The effectiveness of the feedback tag set was tested in comparison to the marking techniques used by the lecturers in the empirical study and a self-correcting exercise. The conclusion was that the feedback tag set is more effective than the other two techniques, but students seem to perform weakly overall when it comes to the revision of cohesive devices and supporting arguments. I argue that students are not used to revising these features, since lecturers seldom (if ever) comment on the structural elements of texts. However, the experiment proves that standardisation of written feedback is possible to an extent. The implications of the findings are discussed, and recommendations for further research are made. / Thesis (M.A. (English))--North-West University, Potchefstroom Campus, 2006
463

An Analysis of the Effect of Environmental and Systems Complexity on Information Systems Failures

Zhang, Xiaoni 08 1900 (has links)
Companies have invested large amounts of money in information systems development. Unfortunately, not all information systems developments are successful. Software project failure is frequent and lamentable. Surveys and statistical analysis results underscore the severity and scope of software project failure. Limited research relates software structure to information systems failures. Systematic study of failure provides insights into the causes of IS failure. More importantly, it contributes to better monitoring and control of projects and to enhancing the likelihood of the success of management information systems. The underlying theories and literature that contribute to the construction of the theoretical framework come from general systems theory, complexity theory, and failure studies. One hundred COBOL programs from a single company are used in the analysis. The program log clearly documents the date, time, and the reasons for changes to the programs. In this study the relationships among the variables of business requirements change, software complexity, program size and the error rate in each phase of the software development life cycle are tested. Interpretations of the hypothesis testing are provided as well. The data show that analysis errors and design errors occur more often than programming errors. Measurement criteria need to be developed at each stage of the software development cycle, especially in the early stages. The quality and reliability of software can be improved continuously. The findings from this study suggest that it is imperative to develop an adaptive system that can cope with changes to the business environment. Further, management needs to focus on processes that improve the quality of the system design stage.
464

Analysis of bounded distance decoding for Reed Solomon codes

Babalola, Oluwaseyi Paul January 2017 (has links)
Masters Report. A report submitted in fulfillment of the requirements for the degree of Master of Science (50/50) in the Centre for Telecommunication Access and Services (CeTAS), School of Electrical and Information Engineering, Faculty of Engineering and the Built Environment, February 2017 / Bounded distance decoding of Reed Solomon (RS) codes involves finding a unique codeword if there is at least one codeword within the given distance. A corrupted message with a number of errors less than or equal to half the minimum distance corresponds to a unique codeword, and will therefore be decoded correctly by the minimum distance decoder. However, increasing the decoding radius to slightly more than half of the minimum distance may result in multiple codewords within the Hamming sphere. The list decoding and syndrome extension methods provide a maximum error correcting capability whereby the radius of the Hamming ball can be extended for low rate RS codes. In this research, we study the probability of having unique codewords for (7, k) RS codes when the decoding radius is increased from the error correcting capability t to t + 1. Simulation results show a significant effect of the code rates on the probability of having unique codewords. They also show that the probability of having a unique codeword for low rate codes is close to one. / MT2017
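As a rough illustration of the question studied here (not code from the report), the following Python sketch uses a small Monte Carlo experiment on a (7, 3) RS code over GF(8) to estimate how often a word at distance t + 1 from a transmitted codeword lies within distance t + 1 of only that codeword. The field construction (primitive polynomial x^3 + x + 1), the evaluation-map encoder, the choice k = 3 and the trial count are all assumptions made for the sketch.

```python
# Hypothetical sketch: probability of a unique codeword when the decoding
# radius is enlarged from t to t+1 for a (7, 3) Reed-Solomon code over GF(8).
import itertools
import random

# GF(8) arithmetic with primitive polynomial x^3 + x + 1 (0b1011).
EXP, LOG = [0] * 14, [0] * 8
x = 1
for i in range(7):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0b1000:
        x ^= 0b1011
for i in range(7, 14):               # duplicate table to skip modular reduction
    EXP[i] = EXP[i - 7]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_pow(a, e):                    # a must be nonzero when e > 0
    return 1 if e == 0 else EXP[(LOG[a] * e) % 7]

n, k = 7, 3
t = (n - k) // 2                     # classical error correcting capability
alphas = [EXP[i] for i in range(n)]  # evaluation points: all nonzero elements

def eval_poly(coeffs, a):
    acc = 0
    for j, c in enumerate(coeffs):
        acc ^= gf_mul(c, gf_pow(a, j))   # XOR is addition in GF(8)
    return acc

def encode(msg):
    """Evaluation-map RS encoding: evaluate the message polynomial at all points."""
    return tuple(eval_poly(msg, a) for a in alphas)

codebook = [encode(m) for m in itertools.product(range(8), repeat=k)]

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

trials, unique = 3000, 0
for _ in range(trials):
    c = random.choice(codebook)
    r = list(c)
    for p in random.sample(range(n), t + 1):   # inject exactly t+1 symbol errors
        r[p] ^= random.randint(1, 7)
    r = tuple(r)
    # count codewords inside the enlarged Hamming ball of radius t+1
    inside = sum(1 for cw in codebook if hamming(cw, r) <= t + 1)
    unique += (inside == 1)

print(f"P(unique codeword at radius t+1) ~ {unique / trials:.3f}")
```

Running the same experiment for different k would show the rate dependence the abstract mentions.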
465

Error propagation analysis for remotely sensed aboveground biomass

Alboabidallah, Ahmed Hussein Hamdullah January 2018 (has links)
Above-Ground Biomass (AGB) assessment using remote sensing has been an active area of research since the 1970s. However, improvements in the reported accuracy of wide-scale studies remain relatively small. Therefore, there is a need to improve error analysis to answer the question: why is AGB assessment accuracy still in doubt? This project aimed to develop and implement a systematic quantitative methodology to analyse the uncertainty of remotely sensed AGB, including all perceptible error types, while reducing the associated costs and computational effort required in comparison to conventional methods. An accuracy prediction tool was designed based on previous study inputs and their outcome accuracy. The methodology included training a neural network tool to emulate human decision making for the optimal trade-off between cost and accuracy for forest biomass surveys. The training samples were based on outputs from a number of previous biomass surveys, including 64 optical data based studies, 62 Lidar data based studies, 100 Radar data based studies, and 50 combined data studies. The tool showed promising convergent results of medium prediction ability. However, it might take many years until enough studies are published to provide sufficient samples for accurate predictions. To provide field data for the next steps, 38 plots within six sites were scanned with a Leica ScanStation P20 terrestrial laser scanner. The Terrestrial Laser Scanning (TLS) data analysis used existing techniques such as 3D voxels and applied allometric equations, alongside exploring new features such as non-plane voxel layers, parent-child relationships between layers and skeletonising tree branches to speed up the overall processing time. The results were two maps for each plot, a tree trunk map and a branch map. An error analysis tool was designed to work in three stages. Stage 1 uses a Taylor method to propagate errors from remote sensing data for the products that were used as direct inputs to the biomass assessment process. Stage 2 applies a Monte Carlo method to propagate errors from the direct remote sensing and field inputs to the mathematical model. Stage 3 generates an error estimation model that is trained on the error behaviour of the training samples. The tool was applied to four biomass assessment scenarios, and the results show that the relative error of AGB, represented by the RMSE of the model fitting, was high (20-35% of the AGB) in spite of the relatively high correlation coefficients. About 65% of the RMSE is due to the remote sensing and field data errors, with the remaining 35% due to the ill-defined relationship between the remote sensing data and AGB. The error component with the largest influence was the remote sensing error (50-60% of the propagated error), with both the spatial and spectral error components having a clear influence on the total error. The influence of field data errors was close to that of the remote sensing data errors (40-50% of the propagated error), with contributions from both its spatial and non-spatial components. Overall, the study successfully traced the errors and applied certainty scenarios using the software tool designed for this purpose. The applied novel approach allowed for a relatively fast solution when mapping errors outside the fieldwork areas.
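A minimal Python sketch of the two propagation stages described above (first-order Taylor versus Monte Carlo), applied to a hypothetical allometric model AGB = a * H^b; the coefficients and the canopy height error below are assumptions for the sketch, not values from the thesis.

```python
# Compare first-order Taylor error propagation with Monte Carlo propagation
# through an assumed allometric biomass model AGB = a * H^b.
import numpy as np

a, b = 0.25, 1.9           # assumed allometric coefficients
H_mean, H_sd = 18.0, 1.5   # canopy height estimate (m) and its 1-sigma error

# Stage 1: first-order Taylor propagation, var(AGB) ~ (dAGB/dH)^2 * var(H)
agb = a * H_mean**b
d_agb_dH = a * b * H_mean**(b - 1)
taylor_sd = abs(d_agb_dH) * H_sd

# Stage 2: Monte Carlo propagation through the same model
rng = np.random.default_rng(0)
H_samples = rng.normal(H_mean, H_sd, size=100_000)
mc_sd = (a * H_samples**b).std()

print(f"AGB estimate         : {agb:.1f}")
print(f"Taylor-propagated sd : {taylor_sd:.2f}")
print(f"Monte Carlo sd       : {mc_sd:.2f}")
```

For a model this smooth the two stages agree closely; the Monte Carlo stage matters when several correlated inputs or strongly non-linear models make the linearisation inaccurate.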
466

USING A NUMERICAL ALGORITHM TO SEARCH FOR DECOHERENCE-FREE SUB-SYSTEMS

Thakre, Purva 01 December 2018 (has links)
In this paper, we discuss the need for quantum error correction. We also describe some basic techniques used in quantum error correction, including decoherence-free subspaces and subsystems. These subspaces and subsystems are described in detail. We also introduce a numerical algorithm that was used previously to search for these decoherence-free subspaces and subsystems under collective error. It is useful to search for them as they can be used to store quantum information. We use this algorithm in some specific examples involving qubits and qutrits. The results of this algorithm are then compared with the error algebra obtained using Young tableaux. We use these results to describe how the specific numerical algorithm can be used to search for approximate decoherence-free subspaces and subsystems and minimal noise subsystems.
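A standard textbook example of the idea being searched for (not the numerical algorithm from the thesis): under a collective dephasing error acting identically on two qubits, the singlet state is unaffected up to a global phase, so it spans a one-dimensional decoherence-free subspace. The short Python check below verifies this numerically.

```python
# Verify that the two-qubit singlet is invariant (up to global phase) under
# collective dephasing U = exp(-i*phi*(Z1 + Z2)/2).
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)
Z_collective = np.kron(Z, I) + np.kron(I, Z)   # same Z rotation on both qubits

# Singlet state (|01> - |10>) / sqrt(2)
singlet = np.zeros(4, dtype=complex)
singlet[1], singlet[2] = 1 / np.sqrt(2), -1 / np.sqrt(2)

for phi in (0.3, 1.0, 2.5):
    # Z_collective is diagonal, so the propagator is a diagonal phase matrix
    U = np.diag(np.exp(-1j * phi * np.diag(Z_collective) / 2))
    overlap = abs(np.vdot(singlet, U @ singlet))   # 1.0 means the state is preserved
    print(f"phi = {phi:>3}: |<singlet|U|singlet>| = {overlap:.6f}")
```

The overlap stays at 1 for every phi because the singlet lies in the zero-eigenvalue subspace of the collective error generator.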
467

Numerical model error in data assimilation

Jenkins, Siân January 2015 (has links)
In this thesis, we produce a rigorous and quantitative analysis of the errors introduced by finite difference schemes into strong constraint 4D-Variational (4D-Var) data assimilation. Strong constraint 4D-Var data assimilation is a method that solves a particular kind of inverse problem: given a set of observations and a numerical model for a physical system, together with a priori information on the initial condition, estimate an improved initial condition for the numerical model, known as the analysis vector. This method has many forms of error affecting the accuracy of the analysis vector, and is derived under the assumption that the numerical model is perfect, when in reality this is not true. Therefore it is important to assess whether this assumption is realistic and, if not, how the method should be modified to account for model error. Here we analyse how the errors introduced by finite difference schemes, used as the numerical model, affect the accuracy of the analysis vector. Initially the 1D linear advection equation is considered as our physical system. All forms of error, other than those introduced by finite difference schemes, are initially removed. The error introduced by "representative schemes" is considered in terms of numerical dissipation and numerical dispersion. A spectral approach is successfully implemented to analyse the impact on the analysis vector, examining the effects on unresolvable wavenumber components and the l2-norm of the error. Subsequently, a similar, also successful, analysis is conducted when observation errors are re-introduced to the problem. We then explore how the results can be extended to weak constraint 4D-Var. The 2D linear advection equation is then considered as our physical system, demonstrating how the results from the 1D problem extend to 2D. The linearised shallow water equations extend the problem further, highlighting the difficulties associated with analysing a coupled system of PDEs.
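As a simple illustration of the model error under discussion (an assumed set-up, not the thesis experiments), the Python sketch below advects a Gaussian pulse with a first-order upwind scheme for the 1D linear advection equation u_t + c u_x = 0 and compares it with the exact translated solution; the amplitude loss is the numerical dissipation that a perfect-model 4D-Var formulation would ignore.

```python
# First-order upwind scheme for 1D linear advection on a periodic domain,
# compared against the exact (translated) solution.
import numpy as np

nx, c, cfl = 200, 1.0, 0.8
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / c

u = np.exp(-200 * (x - 0.25) ** 2)      # initial Gaussian pulse
n_steps = 125
for _ in range(n_steps):
    u = u - c * dt / dx * (u - np.roll(u, 1))   # upwind update, periodic wrap

t_final = n_steps * dt                   # exact solution is the shifted pulse
u_exact = np.exp(-200 * (((x - c * t_final) % 1.0) - 0.25) ** 2)

print(f"exact peak  : {u_exact.max():.3f}")      # stays at 1.0
print(f"upwind peak : {u.max():.3f}")            # visibly damped (dissipation)
print(f"l2 error    : {np.sqrt(np.mean((u - u_exact) ** 2)):.4f}")
```

Higher-order schemes would trade this dissipation for dispersive (phase) error, which is the other component the thesis analyses spectrally.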
468

Tamper-Resistant Arithmetic for Public-Key Cryptography

Gaubatz, Gunnar 01 March 2007 (has links)
Cryptographic hardware has found its way into many ubiquitous and pervasive security devices with a small form factor, e.g. SIM cards, smart cards, electronic security tokens, and soon even RFIDs. With applications in banking, telecommunication, healthcare, e-commerce and entertainment, these devices use cryptography to provide security services like authentication, identification and confidentiality to the user. However, the widespread adoption of these devices into the mass market, and the lack of a physical security perimeter, have increased the risk of theft, reverse engineering, and cloning. Despite the use of strong cryptographic algorithms, these devices often succumb to powerful side-channel attacks. These attacks provide a motivated third party with access to the inner workings of the device and therefore the opportunity to circumvent the protection of the cryptographic envelope. Apart from passive side-channel analysis, which has been the subject of intense research for over a decade, active tampering attacks like fault analysis have recently gained increased attention from the academic and industrial research community. In this dissertation we address the question of how to protect cryptographic devices against this kind of attack. More specifically, we focus our attention on public key algorithms like elliptic curve cryptography and their underlying arithmetic structure. In our research we address challenges such as the cost of implementation, the level of protection, and the error model in an adversarial situation. The approaches that we investigated all apply concepts from coding theory, in particular the theory of cyclic codes. This seems intuitive, since both public key cryptography and cyclic codes share finite field arithmetic as a common foundation. The major contributions of our research are (a) a generalization of cyclic codes that allows embedding of finite fields into redundant rings under a ring homomorphism, (b) a new family of non-linear arithmetic residue codes with very high error detection probability, (c) a set of new low-cost arithmetic primitives for optimal extension field arithmetic based on robust codes, and (d) design techniques for tamper-resilient finite state machines.
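A toy Python illustration of the general idea behind arithmetic residue codes (deliberately much simpler than the dissertation's non-linear robust codes): carry a small residue check symbol alongside an arithmetic operation and use it to detect an injected fault. The check modulus and the single-bit-flip fault model below are assumptions for the sketch.

```python
# Detect an injected fault in a multiplication using a residue check.
import random

R = 2**16 - 1          # assumed check modulus (residue base)

def protected_mul(a, b, inject_fault=False):
    """Multiply a and b, returning the result plus a predicted residue check."""
    result = a * b
    check = (a % R) * (b % R) % R          # residue the correct result must have
    if inject_fault:
        result ^= 1 << random.randrange(result.bit_length())   # flip one bit
    return result, check

def verify(result, check):
    return result % R == check

a, b = random.getrandbits(256), random.getrandbits(256)

ok_result, ok_check = protected_mul(a, b)
bad_result, bad_check = protected_mul(a, b, inject_fault=True)

print("fault-free result passes check :", verify(ok_result, ok_check))
print("faulty result passes check     :", verify(bad_result, bad_check))
```

A single bit flip changes the product by a power of two, which is never divisible by R here, so this simple linear check always catches it; the dissertation's non-linear codes target the harder case of an adversary who chooses faults to slip past such checks.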
469

Error analysis for distributed fibre optic sensing technology based on Brillouin scattering

Mei, Ying January 2018 (has links)
This dissertation describes work conducted on error analysis for Brillouin Optical Time Domain Reflectometry (BOTDR), a distributed strain sensing technology used for monitoring the structural performance of infrastructure. Although BOTDR has recently been applied to many infrastructure monitoring applications, its measurement error has not yet been thoroughly investigated. The challenge in accurately monitoring structures using BOTDR sensors lies in the fact that the measurement error depends on the noise and the spatial resolution of the sensor as well as the non-uniformity of the monitored infrastructure strain conditions. To improve the reliability of this technology, measurement errors (including precision error and systematic error) need to be carefully investigated through fundamental analysis, lab testing, numerical modelling, and real site monitoring verification. The relationship between measurement error and sensor characteristics is first studied experimentally and theoretically. In the lab, different types of sensing cables are compared with regard to their measurement errors. The influences of factors including fibre diameter, polarization and cable jacket on measurement error are characterized. Based on the experimental characterization results, an optics model is constructed to simulate the Brillouin backscattering process. The basic principle behind this model is the convolution between the injected pulse and the intrinsic Brillouin spectrum. Using this model, parametric studies are conducted to theoretically investigate the impacts of noise, frequency step and spectrum bandwidth on the final strain measurement error. The measurement precision and systematic error are then investigated numerically and experimentally. Measurement results from field sites with installed optical fibres showed that a more complicated strain profile leads to a larger measurement error. Through extensive experimental and numerical verification using BOTDR, the dependence of precision error and systematic error on input strain was then characterized in the laboratory, and the results indicated that (a) the measurement precision error can be predicted using the analyser frequency resolution and the location determination error, and (b) the characteristics of the measurement systematic error can be described using the error-to-strain-gradient curve. This is significant because the current data interpretation process assumes that data quality is constant along the fibre, although the monitored strain in most site cases is non-uniformly distributed, which, as verified in this thesis, leads to varying data quality. A novel data quality quantification method is therefore proposed as a function of the measured strain shape. Although BOTDR has been extensively applied in infrastructure monitoring over the past decade, its data interpretation has proven to be nontrivial, due to the nature of field monitoring. Based on the measurement precision and systematic error characterization results, a novel data interpretation methodology is constructed using a regularization decomposing method, taking advantage of the measured data quality. Experimental results indicate that this algorithm can be applied to various strain shapes and levels, and that the accuracy of the reconstructed strain can be greatly improved.
The developed algorithm is finally applied to real site applications where BOTDR sensing cables were implemented in two load bearing piles to monitor the construction loading and ground heaving processes.
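A rough Python sketch of the optics-model principle mentioned above (the measured spectrum as the convolution of the injected pulse spectrum with the intrinsic Brillouin spectrum); the Brillouin shift, linewidth, pulse length and frequency grid are assumed values for illustration, not the thesis parameters.

```python
# Convolve an intrinsic Lorentzian Brillouin gain spectrum with the power
# spectrum of a rectangular probe pulse and compare the linewidths.
import numpy as np

f = np.linspace(10.60e9, 10.95e9, 4001)        # frequency axis (Hz)
df = f[1] - f[0]
f_B, dv_B = 10.8e9, 30e6                       # assumed Brillouin shift and FWHM
lorentzian = 1.0 / (1.0 + ((f - f_B) / (dv_B / 2)) ** 2)   # intrinsic spectrum

pulse_len = 10e-9                              # 10 ns pulse (~1 m spatial resolution)
pulse_spec = np.sinc((f - f.mean()) * pulse_len) ** 2      # sinc^2 pulse power spectrum

measured = np.convolve(lorentzian, pulse_spec, mode="same")
measured /= measured.max()

def fwhm(spectrum):
    above = np.where(spectrum >= 0.5 * spectrum.max())[0]
    return (above[-1] - above[0]) * df

print(f"intrinsic linewidth : {fwhm(lorentzian) / 1e6:.1f} MHz")
print(f"broadened linewidth : {fwhm(measured) / 1e6:.1f} MHz")
```

The broadening of the measured spectrum by the short pulse is what makes the fitted peak frequency, and hence the strain, more sensitive to noise, which is the trade-off the thesis quantifies.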
470

Bornhuetterova-Fergusonova metoda, odhadování parametrů a chyba predikce / The Bornhuetter-Ferguson method, parameter estimation and prediction error

Santnerová, Petra January 2012 (has links)
This diploma thesis describes the Bornhuetter-Ferguson method, which is used to calculate the IBNR reserve. It is divided into deterministic and stochastic parts. The deterministic part deals with the derivation of development pattern and ultimate loss amount, which are needed to calculate the reserve. The stochastic part deals with reserve estimation error and prediction error. The calculation results of the reserve estimate and its error are compared with the results of the chain ladder method. The last chapter deals with the problematic areas of the described method.
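A small numerical illustration of the deterministic Bornhuetter-Ferguson calculation described above; the development pattern, premiums and a priori loss ratios below are made-up example values, not data from the thesis.

```python
# Deterministic Bornhuetter-Ferguson IBNR: the unreported share (1 - beta_j)
# of the a priori ultimate is added to the claims observed to date.

# Cumulative development pattern beta_j: proportion of the ultimate assumed
# to be reported after j development years (e.g. derived from chain-ladder factors).
beta = [0.40, 0.65, 0.80, 0.90, 0.97, 1.00]

# One row per accident year:
# (earned premium, a priori expected loss ratio, claims to date, development years observed)
accident_years = [
    (1000.0, 0.75, 720.0, 6),   # fully developed
    (1100.0, 0.75, 700.0, 4),
    (1200.0, 0.80, 610.0, 3),
    (1300.0, 0.80, 440.0, 2),
    (1400.0, 0.85, 260.0, 1),
]

total_ibnr = 0.0
for premium, elr, claims_to_date, dev in accident_years:
    a_priori_ultimate = premium * elr
    ibnr = a_priori_ultimate * (1.0 - beta[dev - 1])   # BF reserve for this year
    ultimate = claims_to_date + ibnr
    total_ibnr += ibnr
    print(f"premium {premium:7.0f}  claims {claims_to_date:6.0f}  "
          f"IBNR {ibnr:7.1f}  BF ultimate {ultimate:8.1f}")

print(f"total IBNR reserve: {total_ibnr:.1f}")
```

Unlike the chain-ladder estimate, the BF ultimate is a blend of observed claims and the a priori expectation, which is why the thesis compares the two methods' reserve estimates and prediction errors.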
