
Data-driven Methods for Spoken Dialogue Systems: Applications in Language Understanding, Turn-taking, Error Detection, and Knowledge Acquisition

Meena, Raveesh (January 2016)
Spoken dialogue systems are application interfaces that enable humans to interact with computers using spoken natural language. A major challenge for these systems is dealing with the ubiquity of variability: in user behavior, in the performance of the various speech and language processing sub-components, and in the dynamics of the task domain. However, as the predominant methodology for dialogue system development is to handcraft the sub-components, these systems typically lack robustness in user interactions. Data-driven methods, on the other hand, have been shown to offer robustness to variability in various domains of computer science and are increasingly being used in dialogue systems research.

This thesis makes four novel contributions to data-driven methods for spoken dialogue system development. First, a method for interpreting the meaning contained in spoken utterances is presented. Second, an approach for determining when in a user's speech it is appropriate for the system to give a response is presented. Third, an approach for error detection and analysis in dialogue system interactions is reported. Finally, an implicitly supervised learning approach for knowledge acquisition through the interactive setting of spoken dialogue is presented.

The general approach taken in this thesis is to model dialogue system tasks as classification problems and to investigate features (e.g., lexical, syntactic, semantic, prosodic, and contextual) for training various classifiers on interaction data. The central hypothesis of this thesis is that the models for the aforementioned dialogue system tasks trained using the features proposed here perform better than their corresponding baseline models. The empirical validity of this claim has been assessed through both quantitative and qualitative evaluations, using both objective and subjective measures. / This thesis explores data-driven methods for the development of spoken dialogue systems. The motivation behind such methods is that dialogue systems must be able to handle great variability, in user behaviour as well as in the performance of the various speech and language technology sub-components. Traditional approaches, based on hand-crafted dialogue system components, often struggle to handle such variability robustly. Data-driven methods have been shown to be robust to variability in various problems in computer science and artificial intelligence, and have lately become popular in spoken dialogue systems research as well. This thesis presents four new contributions to data-driven methods for spoken dialogue system development. The first contribution is a data-driven method for semantic interpretation of spoken language. The proposed method has two key properties: robust handling of "ungrammatical" input (caused by the spontaneous nature of speech and by speech recognition errors), and preservation of the structural relations between concepts in the semantic representation. Earlier methods for semantic interpretation of spoken language have typically addressed only one of these two challenges. The second contribution is a data-driven method for turn-taking in dialogue systems. The proposed model exploits prosody, syntax, semantics, and dialogue context to decide when in the user's speech it is appropriate for the system to respond. The third contribution is a data-driven method for detecting errors and misunderstandings in dialogue systems. Where earlier work has focused on online error detection and has been tested only in single domains, models are presented here for analysing errors both offline and online, trained and evaluated on three distinct dialogue system corpora. Finally, a method is presented for how dialogue systems can acquire new knowledge through interaction with the user. The method is evaluated in a scenario where the system builds up a knowledge base in a geographic domain through crowdsourcing. The system starts with minimal knowledge and uses the spoken dialogue both to gather new information and to verify the knowledge it has acquired. The general approach in this thesis is to model various dialogue system tasks as classification problems and to investigate features in the discourse context that can be used to train classifiers. In the course of this work, various lexical, syntactic, prosodic, and contextual features have been examined. A general discussion of the contribution of these features to modelling the aforementioned tasks constitutes one of the main contributions of the thesis. Another central aspect of the thesis is training models that can be used directly in dialogue systems, which is why only automatically extractable features (requiring no manual annotation) are used to train the models. Furthermore, the models' performance is evaluated on both speech recognition output and transcriptions, to examine how robust the proposed methods are. The central hypothesis of this thesis is that models trained with the proposed contextual features perform better than a baseline model. The validity of this hypothesis has been assessed through both qualitative and quantitative evaluations, using both objective and subjective measures.

PSEUDO ERROR DETECTION IN SMART ANTENNA/DIVERSITY SYSTEMS

Haghdad, Mehdi; Feher, Kamilo (October 2003)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / An implementation of a Pseudo Error Detection (PSED) system is presented, and its performance in conjunction with smart antenna and smart diversity systems is tested and evaluated. Non-redundancy, instant response, and relative simplicity make pseudo error detectors excellent real-time error-monitoring systems for smart antenna and smart diversity systems. Because the error-detection mechanism in a pseudo error detector is non-redundant, error quality can be monitored without any coding or overhead. The output of the pseudo error detector in AWGN, selective fading, Doppler shift, and other interference environments is directly correlated with the BER and BLER. This direct correlation makes it a great tool for online error monitoring of a system and gives it numerous applications. In a PSED, the eye diagram from the demodulator is sampled once per symbol. By comparing the eye at the sampled instants against different thresholds, the detector determines whether an error has occurred; integrating this result over a period of time gives the averaged error level. The results provided in this paper were obtained and verified both by MATLAB simulations using dynamic simulation techniques and by hardware measurements over dynamic channels.
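The sampling-and-threshold idea lends itself to a compact simulation. The Python sketch below is an illustrative reconstruction only, not the authors' analog hardware: it assumes binary antipodal signalling and an arbitrary guard band of 0.5 around the decision threshold, counts a "pseudo error" whenever a once-per-symbol eye sample falls inside the guard band, and averages over the observation window.

```python
import numpy as np

def pseudo_error_rate(samples, decision_threshold=0.0, guard=0.5):
    """Estimate link quality from eye-diagram samples (one per symbol).

    A sample counts as a pseudo error when it falls inside the guard
    band around the decision threshold (i.e., the eye is nearly closed
    there), even though a hard decision is still made.
    """
    samples = np.asarray(samples, dtype=float)
    pseudo_errors = np.abs(samples - decision_threshold) < guard
    return pseudo_errors.mean()  # averaged over the observation window

# Binary antipodal symbols (+/-1) in noise: as the noise grows, the
# pseudo-error rate grows with it, tracking BER without any coding.
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=100_000)
for sigma in (0.2, 0.4, 0.6):
    rx = symbols + rng.normal(0.0, sigma, size=symbols.size)
    print(f"sigma={sigma}: pseudo-error rate {pseudo_error_rate(rx):.4f}")
```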

SMART ANTENNA (DIVERSITY) AND NON-FEEDBACK IF EQUALIZATION TECHNIQUES FOR LEO SATELLITE COMMUNICATIONS IN A COMPLEX INTERFERENCE ENVIRONMENT

Haghdad, Mehdi; Feher, Kamilo (October 2003)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / An improved smart diversity scheme was invented to improve signal performance in a combined selective fading, Additive White Gaussian Noise (AWGN), co-channel interference (CCI), and Doppler shift environment such as the LEO satellite channel. This system is also applicable to aeronautical and telemetry channels. Smart diversity is defined here as a mechanism that, at each moment, selects the best branch in an n-branch diversity system based on error quality, with no default branch and no prioritization. The principal novelty of this invention is the introduction of multi-level, analog-based Pseudo Error Detectors (PSED) in every branch. One advantage of the PSED is that it is a non-redundant error-detection system, with no overhead requirement and no need for additional valuable spectrum. This research was motivated by problems in LEO satellite systems due to the low orbit and high relative speed with respect to the ground stations. The system is independent of the modulation technique and is applicable to both coherent and non-coherent detection. The results from simulations using dynamic simulation techniques and from hardware measurements over dynamic channels show significant improvement in both the Bit Error Rate (BER) and the Block Error Rate (BLER).
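Branch selection on top of such a per-branch monitor can then be rendered schematically. The sketch below reuses the hypothetical pseudo_error_rate function from the previous fragment; the actual invention performs this with analog, multi-level detectors in hardware, so treat this purely as an illustration of "no default branch, no prioritization".

```python
def select_branch(branch_samples, guard=0.5):
    """Pick the diversity branch with the lowest pseudo-error rate.

    branch_samples: list of per-branch arrays of eye samples for the
    current observation window. Every branch is scored by its own
    pseudo-error monitor and the best one wins, with no default branch.
    """
    rates = [pseudo_error_rate(s, guard=guard) for s in branch_samples]
    return min(range(len(rates)), key=rates.__getitem__)
```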

The use of classification methods for gross error detection in process data

Gerber, Egardt (December 2013)
Thesis (MScEng), Stellenbosch University, 2013. / ENGLISH ABSTRACT: All process measurements contain some element of error. Typically, a distinction is made between random errors, with zero expected value, and gross errors with non-zero magnitude. Data Reconciliation (DR) and Gross Error Detection (GED) comprise a collection of techniques designed to attenuate measurement errors in process data in order to reduce the effect of the errors on subsequent use of the data. DR proceeds by finding the optimum adjustments so that reconciled measurement data satisfy imposed process constraints, such as material and energy balances. The DR solution is optimal under the assumed statistical random error model, typically Gaussian with zero mean and known covariance. The presence of outliers and gross errors in the measurements or imposed process constraints invalidates the assumptions underlying DR, so that the DR solution may become biased. GED is required to detect, identify, and remove or otherwise compensate for the gross errors. Typically, GED relies on formal hypothesis testing of constraint residuals or measurement-adjustment-based statistics derived from the assumed random error statistical model. Classification methodologies are methods by which observations are classified as belonging to one of several possible groups. For the GED problem, artificial neural networks (ANNs) have historically been applied to classify a data set as either containing or not containing a gross error. The hypothesis investigated in this thesis is that classification methodologies, specifically classification trees (CT) and linear or quadratic classification functions (LCF, QCF), may provide an alternative to the classical GED techniques. This hypothesis is tested via the modelling of a simple steady-state process unit with associated simulated process measurements. DR is performed on the simulated process measurements in order to satisfy one linear and two nonlinear material conservation constraints. Selected features from the DR procedure and process constraints are incorporated into two separate input vectors for classifier construction. The performance of the classification methodologies developed on each input vector is compared with the classical measurement test in order to address the posed hypothesis. General trends in the results are as follows:
- The power to detect and/or identify a gross error is a strong function of the gross error magnitude as well as its location, for all the classification methodologies as well as the measurement test.
- For some locations there exist large differences between the power to detect a gross error and the power to identify it correctly. This is consistent over all the classifiers and their associated measurement tests, and indicates significant smearing of gross errors.
- In general, the classification methodologies have higher power for equivalent type I error than the measurement test.
- The measurement test is superior for small-magnitude gross errors, and for specific locations, depending on which classification methodology it is compared with.
There is significant scope to extend the work to more complex processes and constraints, including dynamic processes with multiple gross errors in the system. Further investigation into the optimal selection of input vector elements for the classification methodologies is also required. / AFRIKAANSE OPSOMMING: All process measurements contain some degree of measurement error. The error component of a process measurement is often expressed as consisting of a random error with zero expected value, as well as a non-random (gross) error of significant magnitude. Data Reconciliation (DR) and Gross Error Detection (GED) are a collection of techniques aimed at reducing the effect of such errors in process data on the subsequent use of the data. DR is performed by making optimal adjustments to the original process measurements so that the adjusted measurements obey certain process models, typically mass and energy balances. The DR solution is optimal provided the statistical assumptions about the random error component in the process data are valid. It is typically assumed that the error component is normally distributed, with zero expected value and a given covariance matrix. When gross errors are present in the data, the DR results may be biased. GED is therefore needed to find (detection) and identify (identification) gross errors. GED usually relies on the statistical properties of the measurement adjustments made by the DR procedure, or of the residuals of the model equations, to test formal hypotheses about the presence of gross errors. Classification techniques are used to determine the class membership of observations. Regarding the GED problem, artificial neural networks (ANNs) have historically been applied to solve the detection and identification problems. The hypothesis of this thesis is that classification techniques, specifically classification trees (CT) and linear as well as quadratic classification functions (LCF and QCF), can successfully be applied to solve the GED problem. The hypothesis is investigated by means of a simulation of a simple steady-state process unit subject to one linear and two nonlinear equations. Artificial process measurements are created using random numbers so that the error component of each measurement is known. DR is applied to the artificial data, and the DR results are used to create two different input vectors for the classification techniques. The performance of the classification methods is compared with the measurement test of classical GED in order to answer the posed hypothesis. The underlying trends in the results are as follows:
- The ability to detect and identify a gross error is strongly dependent on the magnitude as well as the location of the error, for all the classification techniques as well as the measurement test.
- For certain gross error locations there is a large difference between the ability to detect the error and the ability to identify it, which indicates smearing of the error. All the classification techniques as well as the measurement test exhibit this property.
- In general, the classification methods show greater success than the measurement test.
- The measurement test is more successful for relatively small gross errors, as well as for certain gross error locations, depending on the classification technique in question.
There are several ways to extend the scope of this investigation. More complex, non-steady-state processes with strongly nonlinear process models and multiple gross errors could be investigated. The possibility also exists of improving the performance of the classification methods through the appropriate choice of input vector elements.
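For readers unfamiliar with the classical baseline, the measurement test can be sketched in a few lines. This is a generic textbook formulation, not the thesis's own code: it assumes linear constraints A x = 0 and a known diagonal covariance V, reconciles by weighted least squares, and flags measurements whose standardized adjustment exceeds a critical value. With a single splitter balance, the statistic comes out identical for all three streams, which is exactly the smearing effect the abstract describes.

```python
import numpy as np

def measurement_test(y, A, V, z_crit=1.96):
    """Weighted least-squares data reconciliation for linear
    constraints A @ x = 0, followed by the classical measurement test."""
    S = A @ V @ A.T                      # covariance of the residuals A @ y
    K = V @ A.T @ np.linalg.solve(S, A)  # maps measurements to adjustments
    a = K @ y                            # adjustments y - x_hat
    x_hat = y - a                        # reconciled values satisfy A @ x_hat = 0
    W = K @ V                            # covariance of the adjustments
    z = np.abs(a) / np.sqrt(np.diag(W))  # standardized adjustments
    return x_hat, z, z > z_crit

# Splitter mass balance x1 - x2 - x3 = 0; a gross error of +2 on stream 2.
A = np.array([[1.0, -1.0, -1.0]])
V = np.diag([0.1, 0.1, 0.1])
y = np.array([10.0, 6.0, 4.0]) + np.array([0.0, 2.0, 0.0])
x_hat, z, flagged = measurement_test(y, A, V)
print(z, flagged)  # identical z for all streams: detected, but smeared
```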

Metacognition in decision making

Boldt, Annika (January 2015)
Humans effortlessly and accurately judge their subjective probability of being correct in a given decision, leading to the view that metacognition is integral to decision making. This thesis reports a series of experiments assessing people's confidence and error-detection judgements. These two types of metacognitive judgement are methodologically very similar, but have largely been studied separately. I provide data indicating that these judgements are fundamentally linked and that they rely on shared cognitive and neural mechanisms. As a first step towards such a joint account of confidence and error detection, I present simulations from a computational model built on the notion that both judgements arise from the same underlying processes. I next focus on how metacognitive signals are utilised to enhance cognitive control by modulating information seeking. I report data from a study in which participants received performance feedback, testing the hypothesis that participants focus more on feedback when they are uncertain whether they were correct in the current trial, whilst ignoring feedback when they are certain about their accuracy. A final question addressed in this thesis is which information contributes internally to the formation of metacognitive judgements, given that most models of confidence still struggle to explain the precise mechanisms by which confidence reflects accuracy, the circumstances under which this correlation is reduced, and the role of other influences, such as the inherent reliability of a source of evidence. The results reported here suggest that multiple variables, such as response time and reliability of evidence, play a role in the generation of metacognitive judgements. Inter-individual differences in the utilisation of these cues to confidence are also tested. Taken together, my results suggest that metacognition is crucially involved in decision making and cognitive control.
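The shared-mechanism notion can be illustrated with a toy signal-detection simulation. This is an illustration of the idea only, not the thesis's actual computational model, and the drift value d and the report threshold of -0.5 are arbitrary assumptions: pre- and post-decision evidence sum onto a single metacognitive axis, whose upper range reads out as graded confidence and whose strongly negative range reads out as an explicit error report.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 100_000, 1.0
stim = rng.choice([-1, 1], size=n)       # true stimulus category
e1 = stim * d + rng.normal(size=n)       # pre-decision evidence
choice = np.sign(e1)                     # first-order decision
e2 = stim * d + rng.normal(size=n)       # post-decision evidence
meta = (e1 + e2) * choice                # signed evidence for the choice made

correct = choice == stim
# One axis, two read-outs: high values -> high confidence; strongly
# negative values -> an explicit "I made an error" report.
print("mean meta signal | correct:", meta[correct].mean())
print("mean meta signal | error:  ", meta[~correct].mean())
print("P(error detected | error): ", (meta[~correct] < -0.5).mean())
print("P(false alarm | correct):  ", (meta[correct] < -0.5).mean())
```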

A Visual-Aural Self-Instructional Program in Pitch-Error Detection for Student Choral Conductors

Michels, Walter Joseph, 1930-
This study seeks to develop and evaluate a program of self-instructional drill materials for improving the ability of students to detect pitch errors in choral singing. The specific purposes of the study are as follows: (1) to develop and validate a visual-aural test for pitch-error detection; (2) to develop a visual-aural, self-instructional program for improving the ability of students to detect pitch errors; and (3) to determine whether the program of self-instructional drill materials modifies the ability to detect pitch errors. In the first phase of this three-phase study, a body of testing materials was assembled, pilot-tested, edited, and judged reliable for use. In Phase II, a body of self-instructional, programmed drill materials was assembled, pilot-tested, corrected, and judged ready for evaluation. In Phase III, the procedures were as follows: (1) the subjects for whom the program was intended were administered a pretest of their pitch-error detection ability; (2) one group (A) worked through the programmed drill materials, while the other group (B) used no programmed materials; (3) both groups were administered a midtest to determine whether there was any change; (4) group B then worked through the programmed drill materials, while group A no longer used them; and (5) students in both groups were administered a posttest to determine the effectiveness of the programmed drill materials in developing the ability to detect pitch errors while reading the vocal score.

Detection and Correction of Inconsistencies in the Multilingual Treebank HamleDT

Mašek, Jan January 2015 (has links)
We studied the treebanks included in HamleDT and partially unified their label sets. Afterwards, we used a method based on variation n-grams to automatically detect errors in morphological and dependency annotation. We then used the output of a part-of-speech tagger / dependency parser trained on each treebank to correct the detected errors. The performance of both the detection and the correction of errors on both annotation levels was manually evaluated on randomly selected samples of suspected errors from several treebanks.
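A minimal sketch of the variation n-gram idea (Dickinson and Meurers' method, on which such detection is based) is shown below for part-of-speech tags; the thesis applies the same principle to morphological and dependency annotation, so this simplified version is illustrative only. A token is suspicious when the identical surrounding n-gram occurs elsewhere with a different tag.

```python
from collections import defaultdict

def variation_nuclei(sentences, n=3):
    """Find tokens that occur inside an identical word n-gram but carry
    different tags across the corpus: candidates for annotation errors.

    sentences: iterable of sentences, each a list of (word, tag) pairs.
    Returns {(ngram, nucleus_position): set_of_tags} for flagged nuclei.
    """
    contexts = defaultdict(set)
    for sent in sentences:
        words = [w for w, _ in sent]
        for i in range(len(sent) - n + 1):
            for j in range(i, i + n):          # each position is a nucleus
                key = (tuple(words[i:i + n]), j - i)
                contexts[key].add(sent[j][1])
    return {k: tags for k, tags in contexts.items() if len(tags) > 1}

# Toy corpus: "duck" is tagged inconsistently in an identical context.
corpus = [
    [("I", "PRON"), ("saw", "VERB"), ("her", "PRON"), ("duck", "NOUN")],
    [("I", "PRON"), ("saw", "VERB"), ("her", "PRON"), ("duck", "VERB")],
]
for (ngram, pos), tags in variation_nuclei(corpus).items():
    print(" ".join(ngram), "-> position", pos, "tagged", sorted(tags))
```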

Quaternary CLB: a fault-tolerant quaternary FPGA

Rhod, Eduardo Luis (January 2012)
The shrinking of transistor sizes keeps increasing the number of functions that electronic devices can perform. Despite the reduction in minimum transistor size, the maximum speed of circuits cannot keep up at the same rate. One of the main culprits pointed out by researchers is the interconnections between transistors and between components. The increase in the number of circuit interconnections brings with it a significant increase in energy consumption and signal propagation delay, as well as increased complexity and cost of integrated circuit design. As a possible solution to this problem, the use of multivalued logic is proposed, more specifically quaternary logic. FPGA devices are characterized mainly by the great flexibility they offer to digital system designers. However, with advances in integrated circuit manufacturing technologies and shrinking feature sizes, the problems related to the large number of interconnections are a concern for upcoming FPGA technologies. Technologies below 90 nm show a large increase in circuit error rates, in both combinational and sequential logic. Although some potential solutions have begun to be investigated by the community, the search for circuits tolerant to radiation-induced errors, without performance, area, or power penalties, is still an open research topic. This work proposes the use of quaternary circuits with modifications to tolerate faults arising from transient events. The main contribution of this work is the development of a quaternary CLB (Configurable Logic Block) able to withstand transient events and, should an error occur, avoid or correct it. / The decrease in transistor size is increasing the number of functions that can be performed by electronic devices. Despite this reduction in minimum transistor size, circuit speed does not increase at the same rate. One of the major reasons pointed out by researchers is the interconnections between the transistors and between the components. The increase in the number of circuit interconnections brings a significant increase in energy consumption and signal propagation delay, and an increase in the complexity and cost of IC designs in new technologies. As a possible solution to this problem, the use of multivalued logic is being proposed, more specifically quaternary logic. FPGA devices are characterized mainly by offering great flexibility to designers of digital systems. However, with the advance of IC manufacturing technologies and the reduced size of the minimum fabricated dimensions, the problems related to the large number of interconnections are a concern for future FPGA technologies. Sub-90 nm technologies show a large increase in the error rate of both combinational and sequential logic. Although potential solutions are being investigated by the community, the search for circuits tolerant to radiation-induced errors, without performance, area, or power penalties, is still an open research issue. This work proposes the use of quaternary circuits with modifications to tolerate faults from transient events. The main contribution of this work is the development of a quaternary CLB (Configurable Logic Block) able to withstand transient events and the occurrence of soft errors.
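To make the quaternary-CLB idea concrete, here is a highly simplified software model, assuming a plain LUT-based CLB and temporal majority voting; the thesis's actual contribution is a transistor-level circuit design, so treat this only as an illustration of quaternary LUT addressing and transient-error masking.

```python
QUAT = (0, 1, 2, 3)

def lut_read(lut, inputs):
    """Address a quaternary LUT: each input is a base-4 digit."""
    addr = 0
    for d in inputs:
        addr = addr * 4 + d
    return lut[addr]

def voted_read(read_once, reads=3):
    """Mask a single transient upset by majority-voting repeated reads.
    (In hardware the reads are separated in time, so one upset corrupts
    at most one of them; two simultaneous upsets defeat simple voting.)"""
    votes = [read_once() for _ in range(reads)]
    return max(set(votes), key=votes.count)

# A 2-input quaternary MIN gate as a 16-entry LUT.
min_lut = [min(a, b) for a in QUAT for b in QUAT]
assert lut_read(min_lut, (2, 3)) == 2

# One of three reads is corrupted by a simulated transient event;
# voting still returns the correct quaternary value.
faulty_reads = iter([2, 0, 2])
assert voted_read(lambda: next(faulty_reads)) == 2
```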

Error Detection and Recovery for Robot Motion Planning with Uncertainty

Donald, Bruce Randall (1 July 1987)
Robots must plan and execute tasks in the presence of uncertainty. Uncertainty arises from sensing errors, control errors, and uncertainty in the geometry of the environment. The last, which is called model error, has received little previous attention. We present a framework for computing motion strategies that are guaranteed to succeed in the presence of all three kinds of uncertainty. The motion strategies comprise sensor-based gross motions, compliant motions, and simple pushing motions.

Concurrent Error Detection in Finite Field Arithmetic Operations

Bayat Sarmadi, Siavash (January 2007)
With significant advances in wired and wireless technologies, and with the continued shrinking of VLSI circuits, many devices have grown very large because they must contain several large units. This large number of gates, and in turn of transistors, makes the devices more prone to faults. These faults, especially in sensitive and critical applications, may cause serious failures and hence should be avoided. On the other hand, some critical applications such as cryptosystems may also be subject to faults deliberately injected by malicious attackers. Some of these faults can produce erroneous results that reveal important secret information of the cryptosystem. Furthermore, yield improvement is always an important issue in VLSI design and fabrication processes. Digital systems such as cryptosystems and digital signal processors usually contain finite field operations. Therefore, error detection and correction for such operations have recently become an important issue. In most of the work reported so far, error detection and correction are achieved using redundancy in space (hardware), time, and/or information (coding theory). In this work, schemes based on these redundancies are presented to detect errors in important finite field arithmetic operations resulting from hardware faults. Finite fields are used in a number of practical cryptosystems and channel encoders/decoders. The schemes presented here can detect errors in arithmetic operations of finite fields represented in different bases, including polynomial, dual, and/or normal basis, and implemented in various architectures, including bit-serial, bit-parallel, and/or systolic arrays.
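As one concrete instance of the time-redundancy class mentioned above (not one of the thesis's specific schemes), recomputing with shifted operands detects faults in a GF(2^m) multiplier: since (x·a)(x·b) = x²·(a·b) in the field, a second pass with both operands multiplied by x must agree with the first pass after two further shifts. A sketch in the AES field GF(2^8):

```python
def gf_mul(a, b, poly=0x11B, m=8):
    """Carry-less multiply in GF(2^m), reduced by the field polynomial."""
    r = 0
    for _ in range(m):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):      # reduce when the degree reaches m
            a ^= poly
    return r

def xtimes(a, k, poly=0x11B, m=8):
    """Multiply a by x^k in GF(2^m)."""
    for _ in range(k):
        a <<= 1
        if a & (1 << m):
            a ^= poly
    return a

def checked_mul(a, b):
    """Recompute with shifted operands: (x*a)(x*b) must equal x^2*(a*b).
    A transient fault that hits the two passes differently breaks the
    equality and is reported as a concurrent error."""
    r1 = gf_mul(a, b)
    r2 = gf_mul(xtimes(a, 1), xtimes(b, 1))
    if r2 != xtimes(r1, 2):
        raise ArithmeticError("concurrent error detected")
    return r1

assert checked_mul(0x57, 0x83) == 0xC1  # known product in the AES field
```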
