  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Psalm 47 - how universal is its universalism? An intra-, inter- and extratextual analysis of the poem

Schader, Jo-Mari 10 March 2010 (has links)
The hypothesis of this study is as follows: if Psalm 47 is analysed intra-, inter- and extratextually, we will be able to gain greater insight into the cultural and historical context in which it originated, the cultic use of the psalm in later periods, and its general meaning. All this is done to determine whether there are any indications of universalism in Psalm 47; this has indeed been found to be the case on various grounds. Each chapter deals with one of these aspects. Chapter 1 forms the introduction, stipulating the research question and how the study will go about resolving it. Chapter 2 focuses on an intratextual analysis of Psalm 47 in an attempt to determine the interrelatedness of all textual features (morphology, syntax, poetic stratagems, structure, genre) on the literary level. This analysis aids the interpreter in establishing a structure of the text, suggesting one that could meet with relative consensus among exegetes. It, in turn, forms the framework for the socio-historical interpretation of the text. Other interpretation problems, such as its Gattung, Sitz im Leben and dating, are also discussed in this section. Chapter 3 investigates Psalm 47 from an intertextual perspective. Attention is paid to similarities with other texts in the immediate and more remote context of the psalm. An intertextual analysis is conducted between Psalm 47 and Psalms 46 and 48, and a brief overview of intertextual relations between Psalm 47 and the rest of the Korahite psalms is given. Here the study links up with a recent trend in Psalms research, namely to concentrate less on individual poems and their so-called Sitz im Leben and more on the composition and redaction of the Psalter as a book, especially by focusing on the concatenation of a psalm with the psalms that precede and follow it.
Attention is also given to a spatial reading of these texts to understand how they fit into, but also transcend, the Ancient Near Eastern spatial orientation. Chapter 4 consists of an extratextual analysis of Psalm 47 with three aims: first, to identify and explain terminology referring to patronage and how patron-client/vassal relationships functioned in the Ancient Near East, through a socio-scientific investigation of the poem in its social context, in order to understand the behaviour of the different role-players in the psalm; second, to identify and explain war terminology occurring in Psalm 47; third, to "illustrate" the psalm by investigating Ancient Near Eastern iconography and art. The main goal of this chapter is to gain a clearer understanding of the relationship between Israel and her neighbours: are the nations considered to be incorporated into Israel, or do they function merely as vassals to their patron in Psalm 47:10? Chapter 5 is a summary of the insights gained in the previous chapters. It critically discusses the results of the study, the conclusions reached, the contribution of this work to the field, areas opened for further research, and possible shortcomings in the researcher's own approach. Copyright / Dissertation (MA)--University of Pretoria, 2008. / Ancient Languages / unrestricted
12

Molecular Studies of South American Teiid Lizards (Teiidae: Squamata) from Deep Time to Shallow Divergences

Tucker, Derek B. 01 June 2016 (has links)
I focus on phylogenetic relationships of teiid lizards, beginning with genus- and species-level relationships within the family, continuing with a detailed biogeographical examination of the Caribbean genus Pholidoscelis, and ending with species boundaries and phylogeographic patterns of the widespread Giant Ameiva, Ameiva ameiva. Genomic data (488,656 bp of aligned nuclear DNA) recovered a well-supported phylogeny for Teiidae, showing monophyly for 18 genera, including those recently described using morphology and smaller molecular datasets. All three methods of phylogenetic estimation (two species-tree methods, one concatenation method) recovered identical topologies except for some relationships within the subfamily Tupinambinae (i.e., the position of Salvator and Dracaena) and species relationships within Pholidoscelis, but these were unsupported in all analyses. Phylogenetic reconstruction focused on Caribbean Pholidoscelis recovered novel relationships not reported in previous studies, which were based on significantly smaller datasets. Using fossil data, I improve upon divergence time estimates and hypotheses for the biogeographic history of the genus. It is proposed that Pholidoscelis colonized the Caribbean islands through the Lesser Antilles, based on biogeographic analysis, the directionality of ocean currents, and evidence that most Caribbean taxa originally colonized from South America. Genetic relationships among populations within the Ameiva ameiva species complex have been poorly understood as a result of its continental-scale distribution and an absence of molecular data for the group. Mitochondrial ND2 data for 357 samples from 233 localities show that A. ameiva may consist of up to six species, with pairwise genetic distances among these six groups ranging from 4.7–12.8%. An examination of morphological characters supports the molecular findings, with prediction accuracy for the six clades reaching 72.5% using the seven most diagnostic predictors.
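Pairwise divergences like the 4.7–12.8% quoted above are often computed as uncorrected p-distances between aligned sequences; the sketch below illustrates that idea on invented toy fragments (the study's actual distance model is not specified here).

```python
# Hypothetical sketch: uncorrected p-distance between two aligned
# sequences, i.e. the proportion of comparable sites that differ.

def p_distance(seq_a: str, seq_b: str) -> float:
    """Proportion of differing sites, ignoring alignment gaps."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != '-' and b != '-']
    if not pairs:
        raise ValueError("no comparable sites")
    diffs = sum(1 for a, b in pairs if a != b)
    return diffs / len(pairs)

# Toy aligned fragments (invented, not from the study).
clade1 = "ATGCCTAGTACGGA"
clade2 = "ATGACTAGTTCGGA"
print(f"p-distance: {p_distance(clade1, clade2):.3f}")  # 2 of 14 sites differ
```

A distance of 0.047–0.128 under this metric would correspond to the 4.7–12.8% range reported for the six clades.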
13

On Non-Binary Constellations for Channel Encoded Physical Layer Network Coding

Faraji-Dana, Zahra 18 April 2012 (has links)
This thesis investigates channel-coded physical-layer network coding, in which the relay directly transforms the noisy superimposed channel-coded packets received from the two end nodes into the network-coded combination of the source packets. This is in contrast to the traditional multiple-access problem, in which the goal is to obtain each message explicitly at the relay. Here, the end nodes $A$ and $B$ choose their symbols, $S_A$ and $S_B$, from a small non-binary field, $\mathbb{F}$, and use a non-binary PSK constellation mapper during the transmission phase. The relay then directly decodes the network-coded combination ${aS_A+bS_B}$ over $\mathbb{F}$ from the noisy superimposed channel-coded packets received from the two end nodes. Trying to obtain $S_A$ and $S_B$ explicitly at the relay is overly ambitious when the relay only needs $aS_A+bS_B$. For the binary case, the only possible network-coded combination, ${S_A+S_B}$ over the binary field, does not offer the best performance under several channel conditions. The advantage of working over non-binary fields is that it offers the opportunity to decode according to multiple decoding coefficients $(a,b)$. As only one of the network-coded combinations needs to be successfully decoded, a key advantage is a reduction in error probability obtained by attempting to decode against all choices of decoding coefficients. In this thesis, we compare different constellation mappers and prove that not all of them have distinct performance in terms of frame error rate. Moreover, we derive a lower bound on the frame-error-rate performance of decoding the network-coded combinations at the relay.
Simulation results show that if we adopt concatenated Reed-Solomon and convolutional coding, or low-density parity-check codes, at the two end nodes, our non-binary constellations can significantly outperform the binary case in terms of frame error rate; in particular, the ternary constellation has the best frame-error-rate performance among all considered cases.
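The role of the decoding coefficients $(a,b)$ can be illustrated with a toy sketch over GF(3), the ternary field reported as the best performer; the field size and symbol values below are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch of why non-binary fields admit multiple decoding
# coefficients: over GF(3) the relay may target any combination
# a*S_A + b*S_B with a, b nonzero, whereas GF(2) offers only S_A + S_B.

Q = 3  # ternary field (illustrative choice)

def network_combinations(s_a: int, s_b: int, q: int):
    """All combinations a*S_A + b*S_B over GF(q), with a, b nonzero."""
    combos = {}
    for a in range(1, q):
        for b in range(1, q):
            combos[(a, b)] = (a * s_a + b * s_b) % q
    return combos

combos = network_combinations(s_a=2, s_b=1, q=Q)
print(combos)        # four (a, b) choices over GF(3)
print(len(combos))   # vs a single choice (1, 1) over GF(2)
```

Since the relay only needs one of these combinations to decode successfully, trying all four choices lowers the overall error probability relative to the single binary combination.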
15

Study of unit selection text-to-speech synthesis algorithms / Étude des algorithmes de sélection d’unités pour la synthèse de la parole à partir du texte

Guennec, David 22 September 2016 (has links)
This PhD thesis focuses on the field of automatic speech synthesis, and more specifically on unit selection. A deep analysis and diagnosis of the unit selection algorithm (the lattice search algorithm) is provided. The importance of solution optimality is discussed, and a new unit selection implementation based on an A* algorithm is presented.
Three cost function enhancements are also presented. The first is a new way, in the target cost, to minimize important spectral differences by selecting sequences of candidate units that minimize a mean cost instead of an absolute one. This cost is tested on a phonemic duration distance but can be applied to others. Our second proposition is a target sub-cost addressing intonation, based on coefficients extracted through a generalized version of Fujisaki's command-response model. This model represents F0 with gamma functions called atoms. Finally, our third contribution concerns a penalty system that aims at enhancing the concatenation cost. It penalizes units according to classes defined by the risk that a concatenation artifact occurs when concatenating on a phone of that class. This system differs from others in the literature in that it is tempered by a fuzzy function that softens penalties for units presenting low concatenation costs.
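One way to read the mean-cost proposal above is that over- and under-shoots in duration can compensate across a candidate sequence rather than being penalized unit by unit. The sketch below illustrates that interpretation with invented durations; it is a hedged reading, not the thesis's exact cost formulation.

```python
# Two invented candidate sequences with the same per-unit duration error,
# only one of which compensates its errors across the sequence.

def per_unit_cost(durations, targets):
    """Classical target cost: sum of per-unit absolute duration distances."""
    return sum(abs(d - t) for d, t in zip(durations, targets))

def sequence_mean_cost(durations, targets):
    """Distance between mean candidate duration and mean target duration,
    letting over- and under-shoots cancel across the sequence."""
    mean_d = sum(durations) / len(durations)
    mean_t = sum(targets) / len(targets)
    return abs(mean_d - mean_t)

targets = [80.0, 120.0, 95.0]            # target phone durations (ms, invented)
seq_compensating = [70.0, 130.0, 95.0]   # errors cancel: -10 and +10
seq_one_sided = [90.0, 130.0, 95.0]      # errors accumulate: +10 and +10

print(per_unit_cost(seq_compensating, targets))       # 20.0
print(per_unit_cost(seq_one_sided, targets))          # 20.0
print(sequence_mean_cost(seq_compensating, targets))  # 0.0
print(sequence_mean_cost(seq_one_sided, targets))     # about 6.67
```

The per-unit cost cannot distinguish the two sequences, while the sequence-level mean cost prefers the one whose errors cancel.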
16

A Research Bed For Unit Selection Based Text To Speech Synthesis System

Konakanchi, Parthasarathy 02 1900 (has links) (PDF)
After trying the Festival Speech Synthesis System, we decided to develop our own TTS framework, suited to performing the research experiments needed to develop good-quality TTS for Indian languages. Most previous attempts at Indian-language TTS have no prosody model, no provision for handling foreign-language words, and no phrase-break prediction that would allow appropriate pauses to be introduced in the synthesized speech. Further, in the Indian context, there is a real need for a bilingual TTS involving English along with the Indian language. In fact, it may be desirable to also have a trilingual TTS, which can additionally handle the language of the neighbouring state, or Hindi. Thus, there is a need for a full-fledged TTS development framework that lends itself to experimentation involving all the above issues and more. This thesis is therefore an attempt to develop a modular, unit selection based TTS framework. The developed system has been tested for its effectiveness in creating intelligible speech in Tamil and Kannada, and has also been used to carry out two research experiments on TTS. The first part of the work is the design and development of a corpus-based concatenative Tamil speech synthesizer in Matlab and C. A synthesis database has been created with 1027 phonetically rich, pre-recorded sentences, segmented at the phone level. From the sentence to be synthesized, specifications of the required target units are predicted. During synthesis, database units are selected that best match the target specification according to a distance metric and a concatenation quality metric. To accelerate matching, the features of the end frames of the database units have been precomputed and stored. The selected units are concatenated to produce synthetic speech.
The high mean opinion scores obtained for the TTS output reveal that speech synthesized using our TTS is intelligible and acceptably natural, and could possibly be put to commercial use with some additional features. Experiments carried out by others using my TTS framework have shown that, whenever the required phonetic context is not available in the synthesis database, similar phones that are perceptually indistinguishable may be substituted. The second part of the work deals with the design and modification of the developed TTS framework to be embedded in mobile phones. Commercial GSM FR, EFR and AMR speech codecs are used for compressing our synthesis database. Perception experiments reveal that speech synthesized using a highly compressed database is reasonably natural. This holds promise for reading SMSs and emails on mobile phones in Indian languages in the future. Finally, we observe that incorporating prosody and pause models for Indian-language TTS would further enhance the quality of the synthetic speech. These are some of the potential, unexplored areas ahead for research in speech synthesis in Indian languages.
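The end-frame precomputation described above can be sketched as a join cost that reduces to a cheap vector distance at synthesis time; unit names and feature values below are invented for illustration.

```python
# Illustrative sketch: boundary-frame features of every database unit are
# precomputed once, so the concatenation quality metric at synthesis time
# is just a distance between stored vectors.
import math

# Precomputed first/last-frame feature vectors per unit (toy values).
unit_features = {
    "unit_17": {"first": [1.0, 0.5], "last": [0.9, 0.4]},
    "unit_42": {"first": [0.8, 0.3], "last": [2.0, 1.5]},
}

def concat_cost(left_unit: str, right_unit: str) -> float:
    """Euclidean distance between the left unit's last frame and the
    right unit's first frame; smaller means a smoother join."""
    a = unit_features[left_unit]["last"]
    b = unit_features[right_unit]["first"]
    return math.dist(a, b)

print(concat_cost("unit_17", "unit_42"))  # small cost: spectrally close frames
```

Because the boundary vectors are stored rather than recomputed from audio, scoring a candidate join costs only one distance evaluation.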
17

Modulation sur les canaux vocodés / Modulation over speech coded channels

Chmayssani, Toufic 03 May 2010 (has links)
Vocoded channels are communication channels dedicated to voice, in which the signal passes through various voice-transport equipment such as speech coders, voice activity detectors (VAD) and discontinuous transmission (DTX) systems. These may be wireline or mobile telephone systems (2G/3G cellular networks, INMARSAT satellites, ...) or voice over IP. The speech coders in recent standards for mobile telephone networks or voice over IP use compression algorithms derived from the CELP (Code Excited Linear Prediction) technique, which achieve bit rates on the order of ten Kb/s, well below those of wireline telephone networks (typically 64 or 32 Kb/s). These coders owe their efficiency to exploiting characteristics specific to speech signals and human hearing, so signals other than speech are generally strongly distorted by them. Transmitting data over vocoded channels can be attractive because channels dedicated to voice are widely available and because such communication is discreet (security). But the modulated signal transmitted over these channels is subject to the degradations caused by the speech coders, which constrains the type of modulation that can be used. This thesis concerns the design and evaluation of modulations for transmitting data over vocoded channels. Two modulation approaches are proposed, for applications with rather different achievable bit rates. The main application targeted by the thesis is the transmission of encrypted speech, in which the speech signal is digitized, compressed at a low bit rate by a speech coder, and then secured by an encryption algorithm.
For this application, we focus on communication networks using CELP coders at rates above about ten Kb/s, typically second- or third-generation mobile channels. The first proposed modulation approach targets this application. It uses digital modulations whose parameters are optimized to take the channel constraints into account and to reach bit rates and error probabilities compatible with encrypted speech transmission (typically a bit rate above 1200 b/s at a BER of about 10^-3). We show that an optimized QPSK modulation reaches this performance. A synchronization system is also studied and adapted to the needs and constraints of the vocoded channel. The performance of QPSK with the proposed synchronization system, as well as the quality of the transmitted secured speech, is evaluated by simulation and validated experimentally over a real GSM channel using a test bench developed in the thesis. The second modulation approach favours robustness of the modulated signal through any speech coder, even low-bit-rate coders such as the MELP coders at 2400 or 1200 b/s. To this end, we propose a modulation built by concatenating segments of natural speech, combined with a demodulation technique that segments the received signal and identifies the speech segments by dynamic programming with a high recognition rate. This modulation is evaluated by simulation over various speech coders and also tested over real GSM channels. The results show a very low error probability whatever the vocoded channel and the speech coder bit rate, but at relatively low achievable bit rates.
The foreseeable applications are restricted to bit rates typically below 200 b/s. Finally, we consider voice activity detectors, whose effect can be very damaging for data signals. We propose a method to defeat the VADs used in GSM networks. Its principle is to break the stationarity of the modulated signal's spectrum, on which the VAD relies to decide that the signal is not speech.
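As a rough illustration of the digital-modulation side of this work, the sketch below maps bit pairs to QPSK symbols; the Gray mapping is a common convention assumed here, not the thesis's exact optimized design, and the waveform shaping that survives the speech coder is not shown.

```python
# Gray-mapped QPSK: each bit pair becomes a unit-energy complex symbol.
import cmath
import math

GRAY_QPSK = {
    (0, 0): cmath.exp(1j * math.pi / 4),
    (0, 1): cmath.exp(1j * 3 * math.pi / 4),
    (1, 1): cmath.exp(-1j * 3 * math.pi / 4),
    (1, 0): cmath.exp(-1j * math.pi / 4),
}

def modulate(bits):
    """Map a flat bit sequence (even length) to QPSK symbols."""
    if len(bits) % 2 != 0:
        raise ValueError("bit sequence must have even length")
    return [GRAY_QPSK[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

symbols = modulate([0, 0, 1, 1, 0, 1])
print(len(symbols))                                   # 3 symbols for 6 bits
print(all(abs(abs(s) - 1) < 1e-9 for s in symbols))   # all unit energy
```

Two bits per symbol is what allows QPSK to clear the 1200 b/s threshold mentioned above within the limited symbol rate a vocoded channel tolerates.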
18

Minimizing memory requirements for deterministic test data in embedded testing

Ahlström, Daniel January 2010 (has links)
Embedded and automated tests reduce maintenance costs for embedded systems installed in remote locations. Testing multiple components of an embedded system, connected on a scan chain, using deterministic test patterns stored in the system provides high fault coverage but requires large system memory. This thesis presents an approach to reduce test data memory requirements through a test controller program, exploiting the observation that a system often contains multiple components of the same type. The program uses deterministic test patterns specific to each component type, stored in system memory, to create fully defined test patterns when needed. By storing deterministic test patterns per component type, the program can use the patterns for multiple tests and several times within the same test. The program can also test parts of a system without affecting the normal functional operation of the remaining components and without increasing test data memory requirements. Two experiments were conducted to determine how much the test data memory requirements are reduced using this approach. The results show up to a 26.4% reduction in test data memory requirements for the ITC'02 SOC test benchmarks and on average a 60% reduction for designs generated to gather statistical data.
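The per-type pattern reuse can be sketched as follows; component types, patterns, and the flat concatenation are invented illustrations of the storage-saving idea, not the thesis's controller design.

```python
# Minimal sketch: store one deterministic pattern per component *type*,
# then expand a fully defined scan-chain vector on the fly for a chain
# that contains repeated instances of those types.

type_patterns = {        # deterministic test patterns per component type
    "ctrlA": "1101",
    "ramB": "001",
}

chain = ["ctrlA", "ramB", "ctrlA"]   # three instances, two stored patterns

def expand(chain, patterns):
    """Concatenate per-type patterns into one full scan-chain vector."""
    return "".join(patterns[comp] for comp in chain)

vector = expand(chain, type_patterns)
print(vector)        # 11-bit vector shifted into the chain
print(len(vector))   # 11 bits expanded, but only 7 bits stored
```

The more instances of a type the chain contains, the larger the gap between stored bits and expanded bits, which is the source of the memory reductions reported above.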
19

Noise Reduction in Flash X-ray Imaging Using Deep Learning

Sundman, Tobias January 2018 (has links)
Recent improvements in deep learning architectures, combined with the strength of modern computing hardware such as graphics processing units, have led to significant results in the field of image analysis. In this thesis work, locally connected architectures are employed to reduce noise in flash X-ray diffraction images. The layers in these architectures use convolutional kernels, but without shared weights. This combines the lower model memory footprint of convolutional networks with the higher model capacity of fully connected networks. Since the camera used to capture the diffraction images has pixelwise unique characteristics, and thus lacks equivariance, this compromise can be beneficial. The background images of this thesis work were generated with an active laser but without injected samples. Artificial diffraction patterns were then added to these background images, allowing U-Net architectures to be trained to separate them. Architecture A achieved a performance of 0.187 on the test set, roughly translating to 35 fewer photon errors than a model similar to the state of the art. After smoothing the photon errors this performance increased to 0.285, since the U-Net architectures managed to remove flares where the state of the art could not. This can be taken as a proof of concept that locally connected networks are able to separate diffraction from background in flash X-ray imaging.
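The locally connected idea, convolution-style receptive fields with a distinct kernel per output position, can be sketched in one dimension; sizes and values below are illustrative, not the thesis's architecture.

```python
# 1-D sketch: a shared-kernel (convolutional) layer vs a locally connected
# layer with one kernel per output position. Both see the same local
# windows; only the weight sharing differs.
import random

random.seed(0)
n_in, k = 8, 3
n_out = n_in - k + 1  # number of valid window positions

x = [random.gauss(0, 1) for _ in range(n_in)]
shared_w = [random.gauss(0, 1) for _ in range(k)]  # conv: one kernel reused
local_w = [[random.gauss(0, 1) for _ in range(k)]  # locally connected:
           for _ in range(n_out)]                  # one kernel per position

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

conv_out = [dot(shared_w, x[i:i + k]) for i in range(n_out)]
local_out = [dot(local_w[i], x[i:i + k]) for i in range(n_out)]

print(len(conv_out), len(local_out))                 # same output size
print(len(shared_w), sum(len(w) for w in local_w))   # 3 vs 18 parameters
```

The unshared weights let each position learn its own response, matching a detector whose pixels behave differently, at the cost of more parameters than a convolution but far fewer than a fully connected layer.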
