  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

The development of accented English synthetic voices

Malatji, Promise Tshepiso January 2019 (has links)
Thesis (M.Sc. (Computer Science)) -- University of Limpopo, 2019 / A text-to-speech (TTS) synthesis system is a software system that receives text as input and produces speech as output. TTS synthesis can be used, among other applications, for language learning and for reading text aloud to people living with different disabilities, e.g., the physically challenged or visually impaired, by native and non-native speakers of the target language. Most people relate more easily to a second language spoken by a non-native speaker with whom they share a native language, yet most online English TTS synthesis systems are developed using native speakers of English. This study focuses on developing accented English synthetic voices as spoken by non-native speakers in the Limpopo province of South Africa. The Modular Architecture for Research on speech sYnthesis (MARY) TTS engine was used to develop the synthetic voices, and the Hidden Markov Model (HMM) method was used to train them. A secondary training text corpus was used to develop the training speech corpus by recording six speakers reading the text. The quality of the developed synthetic voices was measured in terms of intelligibility, similarity and naturalness using a listening test, with results classified by evaluators' occupation and gender as well as overall. The subjective listening test indicates that the developed synthetic voices have a high level of acceptance in terms of similarity and intelligibility. Speech analysis software was used to compare the synthesised speech with the human recordings; there is no significant difference in voice pitch between the speakers and the synthetic voices, except for one synthetic voice.
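The pitch comparison at the end of the abstract can be illustrated with a small autocorrelation-based estimator. This is a sketch only: the study used dedicated speech-analysis software, and the signals below are synthetic stand-ins, not the thesis recordings.

```python
import numpy as np

def estimate_pitch(signal, sr, fmin=50.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) via autocorrelation (a rough sketch)."""
    sig = signal - signal.mean()
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(corr[lo:hi])   # strongest periodicity in the voice range
    return sr / lag

# Synthetic stand-ins: a 120 Hz "human" tone vs. a 123 Hz "synthetic" tone
sr = 8000
t = np.arange(4000) / sr
human = np.sin(2 * np.pi * 120 * t)
synth = np.sin(2 * np.pi * 123 * t)
print(round(estimate_pitch(human, sr), 1), round(estimate_pitch(synth, sr), 1))
```

A small pitch difference between the two estimates, as here, would be consistent with the thesis finding of no significant difference in voice pitch.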
82

Energy-efficient digital design of reliable, low-throughput wireless biomedical systems

Tolbert, Jeremy Reynard 24 August 2012 (has links)
The main objective of this research is to improve the energy efficiency of low-throughput wireless biomedical systems through digital design techniques. The power consumed in conventional wireless EEG (biomedical) systems is dominated by the digital microcontroller and the radio-frequency (RF) transceiver. To reduce the power associated with the digital processor, data compression can reduce the volume of data transmitted. An adaptive data compression algorithm is proposed that ensures accurate representation of critical epileptic signals while conserving overall power. Further power reduction is achieved by designing a custom baseband processor for data compression. A functional system has been hardware-verified and ASIC-optimized, reducing power by over 9X compared to existing methods. The optimized processor can operate at 32 MHz with a near-threshold supply of 0.5 V in a conventional 45 nm technology. When attempting to reach high frequencies in the near-threshold regime, the probability of timing violations can reduce the robustness of the system; to further optimize the implementation, a low-voltage clock tree design is investigated to improve the reliability of the digital processor. By implementing the proposed clock tree design methodology, the digital processor improves its robustness (by reducing the probability of timing violations) while reducing overall power by more than 5 percent. Future work includes examining new architectures for low-throughput processing and investigating the proposed system's potential for a multi-channel EEG implementation.
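The core idea behind compressing low-throughput biosignals before transmission can be sketched in a few lines: slowly varying samples have small first differences, which need far fewer bits per block. This is an illustration only, not the thesis's adaptive, epilepsy-aware algorithm, and the sample values are hypothetical.

```python
def bits_needed(values):
    """Bits for a fixed-width sign+magnitude code over one block (a rough proxy)."""
    width = max(abs(v) for v in values).bit_length() + 1   # sign bit + magnitude
    return width * len(values)

# Hypothetical slowly drifting "EEG" samples from a 10-bit ADC
eeg = [512 + i // 8 for i in range(256)]
deltas = [b - a for a, b in zip(eeg, eeg[1:])]

raw_bits = bits_needed(eeg)             # transmit every sample at full width
delta_bits = 16 + bits_needed(deltas)   # one 16-bit seed sample + narrow residuals
print(raw_bits, delta_bits)             # differencing cuts the transmitted volume
```

Fewer transmitted bits translate directly into less time the RF transceiver, the dominant power consumer, must stay active.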
83

Discovering and Tracking Interesting Web Services

Rocco, Daniel J. (Daniel John) 01 December 2004 (has links)
The World Wide Web has become the standard mechanism for information distribution and scientific collaboration on the Internet. This dissertation research explores a suite of techniques for discovering relevant dynamic sources in a specific domain of interest and for managing Web data effectively. We first explore techniques for discovery and automatic classification of dynamic Web sources. Our approach utilizes a service class model of the dynamic Web that allows the characteristics of interesting services to be specified using a service class description. To promote effective Web data management, the Page Digest Web document encoding eliminates tag redundancy and places structure, content, tags, and attributes into separate containers, each of which can be referenced in isolation or in conjunction with the other elements of the document. The Page Digest Sentinel system leverages our unique encoding to provide efficient and scalable change monitoring for arbitrary Web documents through document compartmentalization and semantic change request grouping. Finally, we present XPack, an XML document compression system that uses a containerized view of an XML document to provide both good compression and efficient querying over compressed documents. XPack's queryable XML compression format is general-purpose, does not rely on domain knowledge or particular document structural characteristics for compression, and achieves better query performance than standard query processors using text-based XML. Our research expands the capabilities of existing dynamic Web techniques, providing superior service discovery and classification services, efficient change monitoring of Web information, and compartmentalized document handling. DynaBot is the first system to combine a service class view of the Web with a modular crawling architecture to provide automated service discovery and classification. 
The Page Digest Web document encoding represents Web documents efficiently by separating the individual characteristics of the document. The Page Digest Sentinel change monitoring system utilizes the Page Digest document encoding for scalable change monitoring through efficient change algorithms and intelligent request grouping. Finally, XPack is the first XML compression system that delivers compression rates similar to existing techniques while supporting better query performance than standard query processors using text-based XML.
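The containerized view described above can be loosely illustrated by splitting a document into separate tag, attribute, and text streams. This is a simplified sketch of the separation idea, not the actual Page Digest encoding; the sample document is hypothetical.

```python
from html.parser import HTMLParser

class PageDigest(HTMLParser):
    """Split a Web document into separate containers, loosely after Page Digest."""
    def __init__(self):
        super().__init__()
        self.tags, self.attrs, self.text = [], [], []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)       # structure container: tag names only
        self.attrs.extend(attrs)    # attribute container: (name, value) pairs

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())   # content container: text only

doc = "<html><body><p class='x'>Hello</p><p class='x'>World</p></body></html>"
d = PageDigest()
d.feed(doc)
print(d.tags)   # structure can now be examined without touching content
print(d.text)
```

Because each container can be referenced in isolation, a change monitor can, for instance, diff only the text stream while ignoring purely structural edits.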
84

High-performance memory system architectures using data compression

Baek, Seungcheol 22 May 2014 (has links)
The Chip Multi-Processor (CMP) paradigm has cemented itself as the archetypal philosophy of future microprocessor design. Rapidly diminishing technology feature sizes have enabled the integration of ever-increasing numbers of processing cores on a single die. This abundance of processing power has magnified the venerable processor-memory performance gap, known as the "memory wall". To bridge this gap, a high-performing memory structure is needed, and an attractive solution is to use compression in the memory hierarchy. In this thesis, to use compression techniques more efficiently, compressed-cacheline size information is studied, and size-aware cache management techniques and a hot-cacheline prediction technique for dynamic early decompression are proposed. The thesis also attempts to mitigate the limitations of phase-change memory (PCM), such as low write performance and limited long-term endurance. One promising solution is a hybridized memory architecture that fuses dynamic random access memory (DRAM) and PCM, combining the best attributes of each technology by using the DRAM as an off-chip cache. A dual-phase compression technique is proposed for high-performing DRAM/PCM hybrid environments, and a multi-faceted wear-leveling technique is proposed for the long-term endurance of compressed PCM. This thesis also includes a new compression-based hybrid multi-level cell (MLC)/single-level cell (SLC) PCM management technique that aims to combine the performance edge of SLCs with the higher capacity of MLCs in a hybrid environment.
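The notion of size-aware cache management can be sketched as a cache set that tracks each line's compressed size against a byte budget, rather than a fixed line count. This is an illustrative model only, with hypothetical sizes and a simple LRU victim policy, not the thesis's actual techniques.

```python
from collections import OrderedDict

class SizeAwareSet:
    """One cache set holding variably compressed lines under a byte budget (sketch)."""
    def __init__(self, budget=256):
        self.budget = budget
        self.lines = OrderedDict()          # addr -> compressed size, LRU order

    def used(self):
        return sum(self.lines.values())

    def insert(self, addr, size):
        self.lines.pop(addr, None)          # re-insert moves the line to MRU
        while self.used() + size > self.budget and self.lines:
            self.lines.popitem(last=False)  # evict LRU victims until the line fits
        self.lines[addr] = size

s = SizeAwareSet(budget=256)
for addr, size in [(0, 64), (1, 32), (2, 128), (3, 64)]:
    s.insert(addr, size)
print(list(s.lines), s.used())
```

Because well-compressed lines are small, such a set can hold more lines than fixed-size associativity would allow, which is the capacity benefit size-aware management exploits.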
85

Multiresolution strategies for the numerical solution of optimal control problems

Jain, Sachin 26 March 2008 (has links)
Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high-resolution (dense) uniform grid, but this requires a large amount of computational resources in terms of both CPU time and memory. Hence, to accurately capture any irregularities in the solution with fewer computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining it uniformly over the whole domain. To this end, a novel multiresolution scheme for data compression has been designed and shown to outperform similar data compression schemes; specifically, the proposed approach results in fewer grid points than a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D, and the examples demonstrate the stability and robustness of the proposed algorithm. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a nonlinear programming (NLP) problem that is solved using standard NLP codes. The novelty of the proposed approach hinges on the automatic calculation of a suitable nonuniform grid over which the NLP problem is solved, which tends to increase numerical efficiency and robustness. Control and/or state constraints are handled with ease, and without any additional computational complexity. The proposed algorithm is based on a simple and intuitive method to balance several conflicting objectives, such as accuracy of the solution, convergence, and speed of the computations. The benefits of the proposed algorithm over uniform-grid implementations are demonstrated with the help of several nontrivial examples.
Furthermore, two sequential multiresolution trajectory optimization algorithms for solving problems with moving targets and/or dynamically changing environments have been developed.
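The local-refinement idea above can be sketched with a recursive rule that keeps a grid point only where linear interpolation from its neighbours fails, so the mesh clusters around a control switching. This is a minimal illustration of multiresolution refinement in general, not the thesis's specific scheme; the step profile and tolerance are hypothetical.

```python
def refine(f, a, b, tol=1e-3, depth=10):
    """Recursively keep grid points only where linear interpolation fails (sketch)."""
    mid = 0.5 * (a + b)
    detail = abs(f(mid) - 0.5 * (f(a) + f(b)))   # wavelet-like detail coefficient
    if detail <= tol or depth == 0:
        return [a, b]
    left = refine(f, a, mid, tol, depth - 1)
    return left[:-1] + refine(f, mid, b, tol, depth - 1)   # drop duplicated mid

# A step-like control profile: the mesh should cluster near the switch at x = 0.5
step = lambda x: 0.0 if x < 0.5 else 1.0
grid = refine(step, 0.0, 1.0)
print(len(grid), grid)
```

A uniform grid with the same smallest spacing would need roughly 2^10 points; the adaptive grid keeps only a dozen, which is the saving the multiresolution scheme aims for.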
86

Compressão de dados ambientais em redes de sensores sem fio usando código de Huffman / Environmental data compression in wireless sensor networks using Huffman coding

Maciel, Marcos Costa 21 February 2013 (has links)
Fundação do Amparo à Pesquisa do Estado do Amazonas (FAPEAM) / In this master's thesis we present a lightweight lossless data compression method for wireless sensor networks (WSN). The method is based on conventional Huffman coding applied to a set of samples of monitored parameters that exhibit strong temporal correlation; a Huffman dictionary is generated from the resulting probabilities and can then be reused for other parameter sets with the same characteristics. Simulation results using temperature and relative humidity measurements show that the proposed method outperforms popular compression mechanisms designed specifically for wireless sensor networks.
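The dictionary-reuse idea above can be sketched with a minimal static Huffman coder: train a codebook on one set of quantized readings, then encode later readings with the same codebook. This is a generic sketch under hypothetical temperature values, not the thesis's exact method.

```python
import heapq
from collections import Counter

def huffman_dict(samples):
    """Build a Huffman codebook from one training set of quantized readings."""
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(samples).items())]
    heapq.heapify(heap)
    next_id = len(heap)                      # tiebreaker so dicts are never compared
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], next_id, merged])
        next_id += 1
    return heap[0][2]

# Train on one day of (hypothetical) temperature readings, reuse on the next day
train = [22, 22, 23, 22, 21, 22, 23, 22]
code = huffman_dict(train)
encoded = "".join(code[t] for t in [22, 23, 22, 22])
print(code, encoded)
```

Because the most frequent reading gets the shortest codeword, strongly correlated series like temperature compress well even when the dictionary was built from an earlier, similar series.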
87

Desenvolvimento de tecnicas quimiometricas de compressão de dados e de redução de ruido instrumental aplicadas a oleo diesel e madeira de eucalipto usando espectroscopia NIR / Development of chemometric techniques for data compression and instrumental noise reduction applied to diesel oil and eucalyptus wood using NIR spectroscopy

Dantas Filho, Heronides Adonias 16 March 2007 (has links)
Advisors: Celio Pasquini, Mario Cesar Ugulino de Araujo / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Quimica / This work describes two techniques for spectral variable and sample selection based on the Successive Projections Algorithm (SPA), aimed at the construction of multivariate regression models. The wavelet transform was also employed to reduce the noise associated with near-infrared (NIR) spectroscopic data, and was used in the construction of multivariate calibration models based on Multiple Linear Regression (MLR) to estimate quality parameters of diesel fuel and eucalyptus wood. The NIR transmission spectra of the diesel samples and the reflectance spectra of the wood samples were acquired with a NIR-Bomem instrument with an InGaAs detector. For the diesel application, the spectral regions 850-1100 nm, 1100-1570 nm and 1570-2500 nm were investigated; for the wood samples, the 1100-2500 nm region was employed. The results obtained with the variable- and sample-selection techniques and MLR demonstrate their simplicity compared with Partial Least Squares (PLS) models, which use the full spectral region and latent variables and are therefore harder to interpret. Sample selection also showed that recalibration and calibration-transfer procedures can use a reduced number of samples without significant loss of the predictive ability of the MLR models. Wavelet filtering proved effective both for multivariate calibration and for filtering NIR spectra obtained from individual scans. In most of the cases addressed in this work, and as a consequence of the chemometric techniques employed, the main contributions were reductions in the error (RMSEP) associated with quantifying the quality parameters and a decrease in the time required to acquire NIR spectral scans / Doctorate / Analytical Chemistry / Doctor of Sciences
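The wavelet-filtering step can be sketched with a one-level Haar transform that soft-thresholds the detail coefficients and inverts. This is a generic denoising sketch on a synthetic smooth band, not the thesis's actual filtering pipeline; the threshold and noise level are hypothetical.

```python
import numpy as np

def haar_denoise(spectrum, threshold):
    """One-level Haar transform, soft-threshold the details, invert (a sketch)."""
    even, odd = spectrum[0::2], spectrum[1::2]
    approx = (even + odd) / np.sqrt(2)
    detail = (even - odd) / np.sqrt(2)
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    out = np.empty_like(spectrum)
    out[0::2] = (approx + detail) / np.sqrt(2)   # inverse transform
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, np.pi, 256))       # smooth stand-in for one NIR band
noisy = clean + rng.normal(0, 0.05, 256)
denoised = haar_denoise(noisy, threshold=0.1)
print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())
```

Because a smooth spectrum has tiny detail coefficients while white noise spreads evenly across them, thresholding the details removes noise with little distortion of the band shape.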
88

Compressão de dados baseada nos modelos de Markov minimos / Data compression based on minimal Markov models

Yaginuma, Karina Yuriko 03 December 2010 (has links)
Advisor: Jesus Enrique Garcia / Master's thesis - Universidade Estadual de Campinas, Instituto de Matematica, Estatistica e Computação Cientifica / In this dissertation we propose a data compression methodology based on Minimal Markov Models (MMM). To this end we study Variable Length Markov Chains (VLMC) and MMM, and present an application of MMM to linguistic data. In parallel, we study the Minimum Description Length (MDL) principle and the data compression problem. We propose a data compression method using MMM and present an algorithm suitable for compression with MMM. Through simulation and application to real data, we compare the characteristics of data compression using full Markov chains, VLMC and MMM / Master's / Probability and Statistics / Master in Statistics
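The MDL principle invoked above can be sketched by scoring Markov models of increasing order with their empirical code length plus a parameter penalty, and picking the order that minimizes the total. This is a generic MDL illustration on a toy sequence, not the minimal-Markov-model construction itself.

```python
from collections import Counter
from math import log2

def code_length(seq, order, alphabet_size):
    """Empirical code length (bits) for a Markov model of the given order,
    plus an MDL-style 0.5*k*log2(n) penalty for its parameters (a sketch)."""
    ctx, joint = Counter(), Counter()
    for i in range(order, len(seq)):
        c = tuple(seq[i - order:i])
        ctx[c] += 1
        joint[c + (seq[i],)] += 1
    data_bits = -sum(n * log2(n / ctx[k[:-1]]) for k, n in joint.items())
    params = len(ctx) * (alphabet_size - 1)
    return data_bits + 0.5 * params * log2(len(seq))

# "aab" repeated is deterministic given two symbols of context: order 2 should win
seq = "aab" * 10
lengths = {k: code_length(seq, k, 2) for k in (0, 1, 2)}
best = min(lengths, key=lengths.get)
print(lengths, best)
```

The penalty term is what stops ever-higher orders from winning: extra contexts only pay off when they genuinely shorten the data description, which is the trade-off MMM and VLMC exploit by keeping only the contexts that matter.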
89

Compressão de dados de demanda elétrica em Smart Metering / Electricity demand data compression in smart metering

Flores Rodriguez, Andrea Carolina, 1987- 08 August 2014 (has links)
Advisor: Gustavo Fraidenraich / Master's thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Compressing recorded residential electricity-consumption data becomes essential in smart metering in order to cope with the large volumes of data generated by the meters. The main contribution of this thesis is a scheme for representing the recorded information in its most compact theoretical form, suggesting a way to approach the fundamental compression limit set by the source entropy for any compression technique available in the meter. The proposal transforms the data encoding through segmented processing: in time, at registration rates from 1/900 Hz to 1 Hz, and in the residential consumption values. The latter is subdivided into amplitude compression, which changes the granularity of the readings, and digital data compression, which represents consumption with as few bits as possible using PCM-Huffman, DPCM-Huffman and entropy coding under source distributions of different orders. The scheme is applied to data modeled by inhomogeneous Markov chains, which describe the activities of household members that influence electricity consumption, and to publicly available real data. The scheme is evaluated by analyzing the compression trade-off between high registration rates, the distortion introduced by digitizing the data, and the exploitation of correlation between consecutive samples. Several numerical examples illustrate the efficiency of the compression limits. The results reveal that the best data compression schemes are obtained by exploiting the correlation between samples / Master's / Telecommunications and Telematics / Master in Electrical Engineering
90

Novas abordagens para compressão de documentos XML / New approaches for compression of XML documents

Teixeira, Márlon Amaro Coelho 19 August 2018 (has links)
Advisor: Leonardo de Souza Mendes / Master's thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Currently, some of the factors that determine the success or failure of a corporation are tied to the speed and efficiency of its decision-making. For these requirements to be met, integrating legacy computational systems with new ones is of fundamental importance, creating the need for old and new technologies to interoperate. XML emerged as a solution to this problem: a self-descriptive, technology- and platform-independent language that has become a standard for communication between heterogeneous systems. Because it is self-descriptive, XML is redundant, which generates more information to be transferred and stored and demands more resources from computational systems. This work presents new compression approaches specific to XML, aiming to reduce the size of XML documents and thus the impact on network, storage and processing resources. Two new approaches are presented, together with the test cases that evaluate them, considering compression ratio, compression time and the methods' tolerance to low memory availability. The results are compared with the XML compression methods that stand out in the literature, and demonstrate that XML document compressors can considerably reduce the performance impact created by the language / Master's / Telecommunications and Telematics / Master in Electrical Engineering
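The tag redundancy that motivates XML-specific compression can be sketched by separating a document into a structure stream and a content stream, each of which can then be compressed or queried on its own. This is a generic illustration of the separation idea, not either of the thesis's two approaches; the log document is hypothetical.

```python
import xml.etree.ElementTree as ET
import zlib

def split_containers(xml_text):
    """Separate an XML document into a structure stream and a content stream."""
    root = ET.fromstring(xml_text)
    tags, texts = [], []
    for elem in root.iter():
        tags.append(elem.tag)                    # structure: tag names in order
        if elem.text and elem.text.strip():
            texts.append(elem.text.strip())      # content: text values in order
    return "\n".join(tags), "\n".join(texts)

doc = "<log>" + "".join(
    f"<entry><ts>t{i}</ts><msg>ok</msg></entry>" for i in range(50)
) + "</log>"
structure, content = split_containers(doc)
# Each stream is highly self-similar, so each compresses well on its own
print(len(zlib.compress(structure.encode())), len(zlib.compress(content.encode())))
```

Since the repeated tag names end up in one stream, their redundancy is concentrated where a dictionary coder can exploit it, and content queries can skip the structure stream entirely.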
