181

On the status of contrast: evidence from the prosodic domain

Stavropoulou, Pepi January 2013 (has links)
Recent models of Information Structure (IS) identify a low-level contrast feature that operates within the topic and focus of the utterance. This study investigates the exact nature of this feature based on empirical evidence from a controlled read-speech experiment on the prosodic realization of different levels of contrast in Modern Greek. Results indicate that only correction is truly contrastive, and that it is realized similarly in both topic and focus, suggesting that contrast is an independent IS dimension. Non-default focus position is further identified as a parameter that triggers a prosodically marked rendition, similar to correction.
182

Error correction model estimation of the Canada-US real exchange rate

Ye, Dongmei 18 January 2008
Using an error correction model, we link the long-run behavior of the Canada-US real exchange rate to its short-run dynamics. The equilibrium real exchange rate is determined by energy and non-energy commodity prices over the period 1973Q1-1992Q1. However, such a single long-run relationship does not hold when the sample period is extended to 2004Q4. This breakdown can be explained by the break point we find at 1993Q3. At the break point, the effect of energy price shocks on Canada's real exchange rate turns from negative to positive, while the effect of non-energy commodity price shocks remains positive throughout. We find that after one year, 40.03% of the gap between the actual and equilibrium real exchange rate is closed. The Canada-US interest rate differential affects the real exchange rate only temporarily: Canada's real exchange rate depreciates immediately after a decrease in Canada's interest rate and appreciates in the following quarter, though not by as much as it depreciated.
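The two-step (Engle-Granger style) error correction estimation the abstract describes can be sketched briefly. The snippet below runs on synthetic series; the variable names and data are illustrative placeholders, not the thesis's quarterly commodity price data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 128  # hypothetical quarterly sample

# Synthetic stand-ins for log commodity prices and a cointegrated
# log real exchange rate.
p_energy = pd.Series(np.cumsum(rng.normal(0, 0.05, n)))
p_commod = pd.Series(np.cumsum(rng.normal(0, 0.05, n)))
rer = 0.3 * p_energy - 0.5 * p_commod + rng.normal(0, 0.02, n)

# Step 1: long-run (cointegrating) regression; the residual measures the
# gap between the actual and equilibrium real exchange rate.
X_lr = sm.add_constant(pd.DataFrame({"p_energy": p_energy, "p_commod": p_commod}))
long_run = sm.OLS(rer, X_lr).fit()
ect = long_run.resid

# Step 2: short-run dynamics with the lagged error-correction term.
X_sr = pd.DataFrame({
    "ect_lag1": ect.shift(1),        # speed-of-adjustment term
    "d_energy": p_energy.diff(),
    "d_commod": p_commod.diff(),
})
ecm = sm.OLS(rer.diff(), sm.add_constant(X_sr), missing="drop").fit()
print(ecm.params)
```

A significantly negative coefficient on the lagged error-correction term indicates adjustment back toward equilibrium; its magnitude is what underlies the thesis's finding that about 40% of the gap closes within a year.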
183

An iterative reconstruction algorithm for quantitative tissue decomposition using DECT / En iterativ rekonstruktions algoritm för kvantitativ vävnadsklassificering via DECT

Grandell, Oscar January 2012 (has links)
The introduction of dual energy CT (DECT) in medical healthcare has made it possible to extract more information about the scanned objects, which in turn has the potential to improve the accuracy of radiation therapy dose planning. One problem that remains before successful material decomposition can be achieved, however, is the presence of beam hardening and scatter artifacts in a scan. Methods currently in clinical use for removing beam hardening often bias the CT numbers, which limits the possibility of an appropriate tissue decomposition. Here, a method for successful decomposition as well as removal of the beam hardening artifact is presented. The method uses effective linear attenuations for five base materials (water, protein, adipose, cortical bone and marrow) to perform the decomposition on reconstructed simulated data. This is performed inside an iterative loop, together with the polychromatic x-ray spectra, to remove the beam hardening artifact.
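The decomposition step inside such a loop can be illustrated with a small least-squares sketch. The attenuation coefficients below are rough placeholders, not the thesis's spectrum-derived values, and the full method would alternate this step with a polychromatic forward projection to update the beam-hardening estimate:

```python
import numpy as np

# Placeholder effective linear attenuation coefficients (1/cm) for the
# five base materials at the low- and high-energy settings.
materials = ["water", "protein", "adipose", "cortical bone", "marrow"]
mu_low  = np.array([0.227, 0.240, 0.205, 0.573, 0.210])
mu_high = np.array([0.184, 0.195, 0.170, 0.380, 0.175])

# Rows: low-energy attenuation, high-energy attenuation, and a
# constraint that the volume fractions sum to one.
A = np.vstack([mu_low, mu_high, np.ones(5)])

def decompose(mu_voxel_low, mu_voxel_high):
    """Least-squares volume fractions of the base materials for one voxel,
    given its reconstructed attenuation at the two energies."""
    b = np.array([mu_voxel_low, mu_voxel_high, 1.0])
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(materials, fractions))

print(decompose(0.215, 0.178))  # a roughly water/adipose-like voxel
```

A real implementation would additionally constrain the fractions to be non-negative rather than take the unconstrained minimum-norm solution.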
184

Automated Pose Correction for Face Recognition

Godzich, Elliot J. 01 January 2012 (has links)
This paper describes my participation in a MITRE Corporation-sponsored computer science clinic project at Harvey Mudd College as my senior project. The goal of the project was to implement a landmark-based pose correction system as a component of a larger, existing face recognition system. My main contribution to the project was the implementation of the Active Shape Models (ASM) algorithm; the inner workings of ASM are explained, as well as how the pose correction system makes use of it. Included is the most recent draft (as of this writing) of the final report that my teammates and I produced, highlighting the year's accomplishments. Even though there are few quantitative results to show, because the clinic program is ongoing, our qualitative results are quite promising.
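The core of ASM, constraining candidate landmarks to a PCA model of plausible shapes, can be sketched compactly. The training shapes below are synthetic placeholders; a real system would use aligned, hand-annotated facial landmarks:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training set: 50 aligned shapes, each 20 (x, y) landmarks
# flattened into a 40-vector.
shapes = rng.normal(0, 1, (50, 40)) + np.linspace(0, 5, 40)

mean_shape = shapes.mean(axis=0)
cov = np.cov(shapes - mean_shape, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1][:5]         # keep the top 5 shape modes
P, lam = eigvecs[:, order], eigvals[order]

def constrain(candidate):
    """Project a candidate shape onto the model and clip each mode to
    +/- 3 standard deviations, the usual ASM plausibility bound."""
    b = P.T @ (candidate - mean_shape)
    b = np.clip(b, -3 * np.sqrt(lam), 3 * np.sqrt(lam))
    return mean_shape + P @ b

# One fitting iteration: landmarks suggested by a local image search
# (noisy here) are pulled back to the nearest plausible shape.
noisy = mean_shape + rng.normal(0, 2, 40)
plausible = constrain(noisy)
```

In the full algorithm this projection alternates with a local search around each landmark until the shape converges.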
185

Automatic segmentation of skin lesions from dermatological photographs

Glaister, Jeffrey Luc January 2013 (has links)
Melanoma is the deadliest form of skin cancer if left untreated. Incidence rates of melanoma have been increasing, especially among young adults, but survival rates are high if it is detected early. Unfortunately, the time and costs required for dermatologists to screen all patients for melanoma are prohibitive. There is a need for an automated system that assesses a patient's risk of melanoma using photographs of their skin lesions; dermatologists could use such a system to aid their diagnosis without the need for special or expensive equipment. One challenge in implementing such a system is locating the skin lesion in the digital image. Most existing skin lesion segmentation algorithms are designed for images taken with a special instrument called a dermatoscope, and illumination variation in ordinary digital images, such as shadows, complicates the task of finding the lesion. The goal of this research is to develop a framework that automatically corrects and segments the skin lesion in an input photograph. The first part of the research models illumination variation using a proposed multi-stage illumination modeling algorithm and uses that model to correct the original photograph. Second, a set of representative texture distributions is learned from the corrected photograph, and a texture distinctiveness metric is calculated for each distribution. Finally, a texture-based segmentation algorithm classifies regions in the photograph as normal skin or lesion based on the occurrence of the representative texture distributions. The resulting segmentation can be used as input to separate feature extraction and melanoma classification algorithms. The proposed segmentation framework is tested by comparing lesion segmentation and melanoma classification results to those of other state-of-the-art algorithms. The proposed framework has better segmentation accuracy than all other tested algorithms. The segmentation results produced by the tested algorithms are used to train an existing classification algorithm to identify lesions as melanoma or non-melanoma; using the proposed framework produces the highest classification accuracy and is tied for the highest sensitivity and specificity.
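As a rough illustration of the illumination-correction idea (not the thesis's multi-stage algorithm), a common simplification models illumination as a slowly varying multiplicative field that can be estimated with a heavy low-pass filter and divided out:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination(gray, sigma=50.0):
    """Estimate a slowly varying illumination map with a wide Gaussian
    blur and divide it out (multiplicative illumination model)."""
    gray = gray.astype(np.float64) + 1e-6   # avoid division by zero
    illumination = gaussian_filter(gray, sigma=sigma)
    reflectance = gray / illumination
    # Rescale to [0, 1] for a downstream texture-learning stage.
    reflectance -= reflectance.min()
    return reflectance / reflectance.max()

# Usage: corrected = correct_illumination(photo_gray)  # photo_gray: 2-D array
```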
186

On the learnability of Mildly Context-Sensitive languages using positive data and correction queries

Becerra Bonache, Leonor 06 March 2006 (has links)
With this dissertation, we bring together the theory of Grammatical Inference and studies of language acquisition, in pursuit of one final goal: a deeper understanding of how children acquire their first language, through the theory of inference of formal grammars. Our three main contributions are: 1. Introduction of a new class of languages called Simple p-dimensional external contextual (SEC). Although research in Grammatical Inference has focused on learning regular or context-free languages, we propose to focus these studies on classes of languages that are more relevant from a linguistic point of view (families of languages that occupy an orthogonal position in the Chomsky hierarchy and are Mildly Context-Sensitive, for example SEC). 2. Presentation of a new learning paradigm based on correction queries. One of the main results in formal learning theory is that deterministic finite automata (DFA) are efficiently learnable from membership queries and equivalence queries. Taking into account that the correction of errors can play an important role in first language acquisition, we introduce a novel learning model that replaces membership queries with correction queries. 3. Presentation of results based on the two previous contributions. First, we prove that SEC is learnable from positive data alone. Second, we prove that DFA can be learned from corrections and that the number of queries is reduced considerably. The results obtained in this dissertation are an important contribution to studies in Grammatical Inference, where research had previously concentrated mainly on the mathematical aspects of the models. Moreover, these results could be extended to highly active application areas such as machine learning, robotics, natural language processing, and bioinformatics.
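The notion of a correction query can be made concrete with a small oracle over a toy DFA. Under the definition commonly used in this line of work, the oracle returns the shortest string that, appended to the query, yields a string in the language (the DFA below is an invented example):

```python
from collections import deque

# Toy DFA over {a, b} accepting strings that end in "ab".
delta = {(0, "a"): 1, (0, "b"): 0,
         (1, "a"): 1, (1, "b"): 2,
         (2, "a"): 1, (2, "b"): 0}
accepting = {2}

def run(state, word):
    for ch in word:
        state = delta[(state, ch)]
    return state

def correction_query(word, alphabet="ab"):
    """Return a shortest z such that word + z is accepted
    ("" means word is already in the language)."""
    start = run(0, word)
    queue, seen = deque([(start, "")]), {start}
    while queue:
        state, z = queue.popleft()
        if state in accepting:
            return z
        for ch in alphabet:
            nxt = delta[(state, ch)]
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, z + ch))

print(correction_query("aa"))  # -> "b"
print(correction_query("ab"))  # -> ""
```

A learner receiving such corrections gets strictly more information per query than a yes/no membership answer, which is why the query count drops.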
187

El uso del reconocimiento vocal para la corrección fonética de la vibrante múltiple del español de los estudiantes francófonos

Zaldívar Turrent, Maria Fernanda January 2009 (has links) (PDF)
This thesis presents a comparative study of the teaching of the Spanish trill to francophone learners. This sound is known to be difficult for French speakers learning Spanish. To help such students with their phonetic correction, we carried out this study using two pieces of software: CAN8 and a speech recognition application. Their main difference is that the first works like a recorder: the student records his or her pronunciation and can listen to it as many times as desired, but without knowing whether it is correct. With the speech recognition software, by contrast, the student receives a score as feedback for each segment pronounced. We created exercises specifically for practising the Spanish trills and split the participants into two groups: one did the exercises with CAN8 and the other with the speech recognition prototype. In this way, we sought to determine whether teaching with software that gives feedback has an advantage over software that does not. Our results show only a slight difference between the two programs, but a clear improvement in the participants' pronunciation. AUTHOR'S KEYWORDS: Phonetic correction, Pronunciation, Speech recognition, Spanish.
188

On Constructing Low-Density Parity-Check Codes

Ma, Xudong January 2007 (has links)
This thesis focuses on designing Low-Density Parity-Check (LDPC) codes for forward error correction, with real-time multimedia communication over packet networks as the target application. We investigate two code design issues that are important in this scenario: designing LDPC codes with low decoding latency, and constructing capacity-approaching LDPC codes with very low error probabilities. On designing LDPC codes with low decoding latency, we present a framework for optimizing the code parameters so that decoding can be completed after only a small number of iterations. The brute-force approach to this optimization is numerically intractable, because it involves a difficult discrete optimization problem. In this thesis, we derive an asymptotic approximation to the number of decoding iterations and, based on it, propose an approximate optimization framework for finding near-optimal code parameters that minimize the number of decoding iterations. The approximate optimization is numerically tractable. Numerical results confirm that the proposed approach has excellent numerical properties, and that codes with excellent performance in terms of the number of decoding iterations can be obtained: codes designed by the proposed approach can require as few as one-fifth of the decoding iterations of some previously well-known codes. The numerical results also show that the proposed asymptotic approximation is generally tight, even for cases far from the asymptotic regime. On constructing capacity-approaching LDPC codes with very low error probabilities, we propose a new LDPC code construction scheme based on 2-lifts. Based on stopping-set distribution analysis, we propose design criteria for the resulting codes to have very low error floors. High error floors are the main problem with previously constructed capacity-approaching codes, preventing them from achieving very low error probabilities. Numerical results confirm that codes with very low error floors can be obtained with the proposed construction scheme and design criteria. Compared with codes from previous standard construction schemes, which have error floors at levels of 10^-3 to 10^-4, the codes from the proposed approach show no observable error floors at levels above 10^-7. Their error floors are also significantly lower than those of codes from previous approaches aimed at low error floors.
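The 2-lift operation underlying the second contribution is simple to sketch: each edge of the base Tanner graph is assigned either the identity permutation or the swap permutation across two copies of the graph. The base matrix below is a toy example, and the thesis's stopping-set design criteria for choosing the permutations are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy base parity-check matrix H (3 checks x 6 variables).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=int)

def random_2lift(H):
    """Return a parity-check matrix for a random 2-lift of H's Tanner
    graph: each edge stays within its copy or crosses to the other."""
    m, n = H.shape
    H2 = np.zeros((2 * m, 2 * n), dtype=int)
    for i, j in zip(*np.nonzero(H)):
        if rng.random() < 0.5:                 # identity permutation
            H2[i, j] = H2[i + m, j + n] = 1
        else:                                   # swap permutation
            H2[i, j + n] = H2[i + m, j] = 1
    return H2

H_lifted = random_2lift(H)   # 6 x 12, same degree profile as the base code
print(H_lifted.shape)
```

The lifted code doubles the block length while preserving the local degree structure; a careful (rather than random) choice of edge permutations is what removes the small stopping sets responsible for error floors.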
189

A comparative study of the existing methods for their suitability to beam stabilization in Storage Ring at Canadian Light Source

2013 August 1900 (has links)
The stabilization of the electron beam in the Storage Ring (SR) is an important task at 3rd generation synchrotron facilities worldwide. Deviations in the position and angle of the electron beam with respect to a desired orbit must be below 10% of the beam size, which corresponds to deviations of about 3 μm at the Canadian Light Source (CLS). Further, the higher the correction bandwidth, the better the stabilization. The correction bandwidth at CLS was expected to increase to 45 Hz or higher from the current operating rate of 18 Hz. In addition, there is a requirement to control the beam deviation at specific positions on the orbit. Meeting these requirements calls for a comparative study of the existing methods for stabilizing the electron beam in the SR, which is the main motivation of this thesis. The overall objective was to find the most suitable method for CLS so that the correction bandwidth can reach 45 Hz or higher. The study was conducted primarily by simulation, owing to the restrictions on performing experiments on the whole beamline. The transfer functions of three important devices in the storage ring, the Beam Position Monitor (BPM), the Orbit Correction Magnets (OCM) and the Vacuum Chamber (VC), were identified. Noise sources in the storage ring were also identified to improve the reliability of the simulation study. The existing methods for beam orbit correction, (1) Singular Value Decomposition (SVD), (2) the Eigenvector method with Constraints (EVC) and (3) SVD plus proportional-integral-derivative (PID) control, were compared in simulation. Several conclusions can be drawn: (1) there is no significant difference between the EVC and SVD methods in overall orbit correction performance, and both can meet the correction bandwidth of 45 Hz; the EVC method is, however, much better than the SVD method at correcting the orbit at specific positions; (2) the SVD plus PID method is much better than both the SVD and EVC methods in overall orbit correction performance, and its performance for specific-position orbit correction is comparable with that of EVC. Therefore, the SVD plus PID method is recommended for CLS. This study has made the following contributions to the problem of beam stabilization in storage rings: (1) models of the BPM and OCM, and a PID controller tailored to the specific BPM and OCM devices, which are useful to other synchrotron facilities; (2) knowledge of the performance of the SVD, EVC and SVD plus PID methods at one synchrotron facility, which is useful to other facilities in selecting the best method for electron orbit correction.
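The SVD baseline can be sketched in a few lines: corrector kicks are computed from the measured orbit error through a truncated pseudo-inverse of the orbit response matrix. The matrix and readings below are random placeholders, not CLS machine data:

```python
import numpy as np

rng = np.random.default_rng(7)

n_bpm, n_corr = 48, 24                     # hypothetical ring layout
R = rng.normal(0, 1, (n_bpm, n_corr))      # orbit response matrix
orbit_error = rng.normal(0, 3e-6, n_bpm)   # measured BPM deviations (m)

# Truncated SVD pseudo-inverse: discard small singular values, which
# would otherwise amplify BPM noise into large corrector kicks.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
keep = s > 0.05 * s[0]
R_pinv = (Vt[keep].T / s[keep]) @ U[:, keep].T

kicks = -R_pinv @ orbit_error              # corrector strengths to apply
residual = orbit_error + R @ kicks         # predicted orbit after correction
print(np.std(residual), np.std(orbit_error))
```

The EVC and SVD-plus-PID variants compared in the thesis modify, respectively, how the inversion weights specific orbit positions and how the computed kicks are applied over time.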
190

Post-Correction of Analog to Digital Converters

Gong, Pu, Guo, Hua January 2008 (has links)
With the rapid development of wireless communication systems and mobile video devices, integrated chips with low power consumption and high conversion efficiency are widely needed, and ADCs and DACs play an important role in these applications. The aim of this thesis is to verify a post-correction method used to improve the performance of an ADC. First, this report introduces the development and present status of ADCs and describes their important parameters in two classes (static performance and dynamic performance). Building on these fundamentals, the report then focuses on modeling the dynamic integral non-linearity of the ADC. With reference to this model, a post-correction method is described and verified. In essence, the method modifies the output samples that have been converted from analog to digital format by adding a correction term. The improvement made by the post-correction needs to be quantified, so a performance analysis, relying mainly on measures of total harmonic distortion and signal-to-noise-and-distortion ratio, is also included in this thesis.
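The general idea of additive post-correction can be shown with a toy lookup table of code-dependent correction terms estimated from a known reference signal. The INL profile below is invented, and a static table is a simplification: the thesis models dynamic integral non-linearity, which also depends on how fast the input changes:

```python
import numpy as np

BITS = 8
LEVELS = 2 ** BITS
rng = np.random.default_rng(3)

# Invented static INL profile (in LSB) standing in for a measured one.
inl = 0.8 * np.sin(np.linspace(0, 4 * np.pi, LEVELS)) + rng.normal(0, 0.1, LEVELS)

def adc(x):
    """Ideal quantizer of x in [0, 1] plus a code-dependent INL error."""
    code = np.clip(np.round(x * (LEVELS - 1)), 0, LEVELS - 1).astype(int)
    return np.clip(code + np.round(inl[code]).astype(int), 0, LEVELS - 1)

# Calibration on a known ramp: average the observed error per output code.
ramp = np.linspace(0, 1, 100_000)
codes = adc(ramp)
ideal = np.round(ramp * (LEVELS - 1)).astype(int)
table = np.zeros(LEVELS)
for c in range(LEVELS):
    mask = codes == c
    if mask.any():
        table[c] = np.mean(ideal[mask] - codes[mask])   # correction term

corrected = codes + table[codes]    # post-correction: add the term
print(np.std(codes - ideal), np.std(corrected - ideal))  # error before/after
```

The resulting improvement would then be verified with the dynamic metrics the thesis uses, total harmonic distortion and signal-to-noise-and-distortion ratio, on a sine-wave test signal.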
