About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD).
Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
81

PBIW : um esquema de codificação baseado em padrões de instrução / PBIW : an encoding technique based on instruction patterns

Batistella, Rafael Fernandes 28 February 2008 (has links)
Orientador: Rodolfo Jardim de Azevedo / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / Made available in DSpace on 2018-08-11T00:49:37Z (GMT). No. of bitstreams: 1 Batistella_RafaelFernandes_M.pdf: 3411156 bytes, checksum: 7e6b46824189243405a180e949db65c6 (MD5) Previous issue date: 2008 / Resumo: Trabalhos não muito recentes já mostravam que o aumento de velocidade nas memórias DRAM não acompanha o aumento de velocidade dos processadores. Mesmo assim, pesquisadores na área de arquitetura de computadores continuam buscando novas abordagens para aumentar o desempenho dos processadores. Dentro do objetivo de minimizar essa diferença de velocidade entre memória e processador, este trabalho apresenta um novo esquema de codificação baseado em instruções codificadas e padrões de instruções - PBIW (Pattern Based Instruction Word). Uma instrução codificada não contém redundância de dados e é armazenada em uma I-cache. Os padrões de instrução, de forma diferente, são armazenados em uma nova cache, chamada Pattern cache (P-cache), e são utilizados pelo circuito decodificador na preparação da instrução que será repassada aos estágios de execução. Esta técnica se mostrou uma boa alternativa para estilos arquiteturais conhecidos como arquiteturas VLIW e EPIC. Foi realizado um estudo de caso da técnica PBIW sobre uma arquitetura de alto desempenho chamada de 2D-VLIW. O desempenho da técnica de codificação foi avaliado através de experimentos com programas dos benchmarks MediaBench, SPECint e SPECfp. Os experimentos estáticos avaliaram a eficiência da codificação PBIW no aspecto de redução de código. Nestes experimentos foram alcançadas reduções no tamanho dos programas de até 81% sobre programas codificados com a estratégia de codificação 2D-VLIW e reduções de até 46% quando comparados a programas utilizando o modelo de codificação EPIC. Experimentos dinâmicos mostraram que a codificação PBIW também é capaz de gerar ganhos com relação ao tempo de execução dos programas. Quando comparada à codificação 2D-VLIW, o speedup alcançado foi de até 96% e, quando comparada à EPIC, foi de até 69% / Abstract: Past work has shown that DRAM memory speed does not increase at the same rate as processor speed. Even so, computer architecture researchers keep searching for new approaches to enhance processor performance. In order to minimize this difference between processor and memory speed, this work presents a new encoding technique based on encoded instructions and instruction patterns - PBIW (Pattern Based Instruction Word). An encoded instruction contains no data redundancy and is stored in an I-cache. The instruction patterns, on the other hand, are stored in a new cache, named Pattern cache (P-cache), and are used by the decoder circuit to build the instruction to be executed in the execution stages. This technique has proven to be a suitable alternative to well-known architectural styles such as VLIW and EPIC architectures. A case study of this technique was carried out on a high-performance architecture called 2D-VLIW. The performance of the encoding technique has been evaluated through trace-driven experiments with MediaBench, SPECint and SPECfp programs. The static experiments evaluated the code reduction efficiency of PBIW encoding. In these experiments, PBIW encoding achieved up to 81% code reduction over 2D-VLIW encoded programs and up to 46% code reduction over EPIC encoded programs.
Dynamic experiments have shown that PBIW encoding can also improve processor performance. When compared to 2D-VLIW encoding, the speedup was up to 96%, while compared to EPIC it was up to 69% / Mestrado / Arquitetura de Computadores / Mestre em Ciência da Computação
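
To illustrate the general idea described in this abstract, the sketch below separates each instruction into a reusable pattern (playing the role of a P-cache entry) and a short encoded word holding only the non-redundant operands. The data layout and names are illustrative assumptions; the actual PBIW bit-level format defined in the dissertation differs.

    # Illustrative sketch of pattern-based instruction encoding (hypothetical format,
    # not the PBIW layout from the dissertation).

    def encode(instructions, pattern_table):
        """Split each instruction into (pattern_index, operands).

        The 'pattern' is the shape shared by many instructions; only the operands,
        which carry no redundancy, go into the encoded word stored in the I-cache.
        """
        encoded = []
        for opcode, operands in instructions:
            if opcode not in pattern_table:
                pattern_table[opcode] = len(pattern_table)   # new P-cache entry
            encoded.append((pattern_table[opcode], operands))
        return encoded

    def decode(encoded, pattern_table):
        """Rebuild full instructions, as a decoder would before the execute stages."""
        index_to_pattern = {i: p for p, i in pattern_table.items()}
        return [(index_to_pattern[i], ops) for i, ops in encoded]

    patterns = {}
    program = [("add", ("r1", "r2", "r3")), ("add", ("r4", "r5", "r6")), ("mul", ("r1", "r1", "r7"))]
    compact = encode(program, patterns)
    assert decode(compact, patterns) == program
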
82

Linear dimensionality reduction applied to SIFT and SURF feature descriptors / Redução linear de dimensionalidade aplicada aos descritores de características SIFT e SURF

González Valenzuela, Ricardo Eugenio, 1984- 24 August 2018 (has links)
Orientadores: Hélio Pedrini, William Robson Schwartz / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / Made available in DSpace on 2018-08-24T12:45:45Z (GMT). No. of bitstreams: 1 GonzalezValenzuela_RicardoEugenio_M.pdf: 22940228 bytes, checksum: 972bc5a0fac686d7eda4da043bbd61ab (MD5) Previous issue date: 2014 / Resumo: Descritores locais robustos normalmente compõem-se de vetores de características de alta dimensionalidade para descrever atributos discriminativos em imagens. A alta dimensionalidade de um vetor de características implica custos consideráveis em termos de tempo computacional e requisitos de armazenamento, afetando o desempenho de várias tarefas que utilizam descritores de características, tais como correspondência, recuperação e classificação de imagens. Para resolver esses problemas, pode-se aplicar algumas técnicas de redução de dimensionalidade, essencialmente construindo uma matriz de projeção que explique adequadamente a importância dos dados em outras bases. Esta dissertação visa aplicar técnicas de redução linear de dimensionalidade aos descritores SIFT e SURF. Seu principal objetivo é demonstrar que, mesmo com o risco de diminuir a precisão dos vetores de características, a redução de dimensionalidade pode resultar em um equilíbrio adequado entre tempo computacional e recursos de armazenamento. A redução linear de dimensionalidade é realizada por meio de técnicas como projeções aleatórias (RP), análise de componentes principais (PCA), análise linear discriminante (LDA) e mínimos quadrados parciais (PLS), a fim de criar vetores de características de menor dimensão. Este trabalho avalia os vetores de características reduzidos em aplicações de correspondência e de recuperação de imagens. O tempo computacional e o uso de memória são medidos por comparações entre os vetores de características originais e reduzidos / Abstract: Robust local descriptors usually consist of high-dimensional feature vectors to describe distinctive characteristics of images. The high dimensionality of a feature vector incurs considerable costs in terms of computational time and storage requirements, which affects the performance of several tasks that employ feature vectors, such as matching, image retrieval and classification. To address these problems, it is possible to apply dimensionality reduction techniques by building a projection matrix which adequately explains the importance of the data in another basis. This dissertation aims at applying linear dimensionality reduction to SIFT and SURF descriptors. Its main objective is to demonstrate that, even at the risk of decreasing the accuracy of the feature vectors, dimensionality reduction can result in a satisfactory trade-off between computational time and storage. We perform the linear dimensionality reduction through Random Projections (RP), Independent Component Analysis (ICA), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Partial Least Squares (PLS) in order to create lower-dimensional feature vectors. This work evaluates such reduced feature vectors in a matching application, as well as their distinctiveness in an image retrieval application. The computational time and memory usage are then measured by comparing the original and the reduced feature vectors / Mestrado / Ciência da Computação / Mestre em Ciência da Computação
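
As a concrete illustration of the kind of linear projection evaluated in this dissertation, the sketch below learns a PCA basis from 128-dimensional SIFT-like descriptors with NumPy and projects them to a lower dimension. The target dimension (32) and the random data are arbitrary stand-ins, not values taken from the dissertation.

    import numpy as np

    def pca_projection(descriptors, k):
        """Learn a k-dimensional PCA projection from a set of descriptors."""
        mean = descriptors.mean(axis=0)
        centered = descriptors - mean
        # Right singular vectors = principal directions, largest variance first
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return mean, vt[:k]                        # projection basis, shape (k, d)

    def reduce(descriptors, mean, basis):
        return (descriptors - mean) @ basis.T      # n x k reduced feature vectors

    rng = np.random.default_rng(0)
    sift_like = rng.random((1000, 128)).astype(np.float32)   # stand-in for SIFT descriptors
    mean, basis = pca_projection(sift_like, k=32)
    reduced = reduce(sift_like, mean, basis)                  # 1000 x 32, roughly 4x less storage
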
83

The development of accented English synthetic voices

Malatji, Promise Tshepiso January 2019 (has links)
Thesis (M. Sc. (Computer Science)) -- University of Limpopo, 2019 / A text-to-speech (TTS) synthesis system is a software system that receives text as input and produces speech as output. A TTS synthesis system can be used for, amongst others, language learning and reading out text for people living with different disabilities (physically challenged, visually impaired, etc.), by native and non-native speakers of the target language. Most people relate easily to a second language spoken by a non-native speaker with whom they share a native language. Most online English TTS synthesis systems are developed using native speakers of English. This research study focuses on developing accented English synthetic voices as spoken by non-native speakers in the Limpopo province of South Africa. The Modular Architecture for Research on speech sYnthesis (MARY) TTS engine is used to develop the synthetic voices, and the Hidden Markov Model (HMM) method was used to train them. A secondary text corpus is used to build the training speech corpus by recording six speakers reading the text. The quality of the developed synthetic voices is measured in terms of their intelligibility, similarity and naturalness using a listening test. The results are reported by evaluators' occupation and gender, as well as overall. The subjective listening test indicates that the developed synthetic voices have a high level of acceptance in terms of similarity and intelligibility. Speech analysis software is used to compare the synthesised speech with the human recordings; there is no significant difference in voice pitch between the speakers and the synthetic voices, except for one synthetic voice.
84

Energy-efficient digital design of reliable, low-throughput wireless biomedical systems

Tolbert, Jeremy Reynard 24 August 2012 (has links)
The main objective of this research is to improve the energy efficiency of low-throughput wireless biomedical systems by employing digital design techniques. The power consumed in conventional wireless EEG (biomedical) systems is dominated by the digital microcontroller and the radio frequency (RF) transceiver. To reduce the power associated with the digital processor, data compression can be used to reduce the volume of data transmitted. An adaptive data compression algorithm has been proposed to ensure accurate representation of critical epileptic signals while also conserving overall power. Further power reductions are achieved by designing a custom baseband processor for data compression. A functional system has been hardware-verified and ASIC-optimized to reduce the power by over 9X compared to existing methods. The optimized processor can operate at 32 MHz with a near-threshold supply of 0.5 V in a conventional 45 nm technology. While attempting to reach high frequencies in the near-threshold regime, the probability of timing violations can reduce the robustness of the system. To further optimize the implementation, a low-voltage clock tree design has been investigated to improve the reliability of the digital processor. By implementing the proposed clock tree design methodology, the digital processor can improve its robustness (by reducing the probability of timing violations) while reducing the overall power by more than 5 percent. Future work suggests examining new architectures for low-throughput processing and investigating the proposed systems' potential for a multi-channel EEG implementation.
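
The abstract does not spell out the compression algorithm, so the sketch below uses a generic stand-in: lossless delta coding with per-block bit packing, the kind of scheme often applied to slowly varying biomedical signals. It only illustrates why compression shrinks the data handed to the radio; it is not the adaptive algorithm proposed in the thesis, and the sample values are fabricated.

    # Toy lossless compressor for slowly varying signals (delta coding + bit packing).

    def delta_compress(samples, block=64):
        """Encode each block as (first_sample, delta_bit_width, deltas)."""
        blocks = []
        for i in range(0, len(samples), block):
            chunk = samples[i:i + block]
            deltas = [b - a for a, b in zip(chunk, chunk[1:])]
            width = max((abs(d).bit_length() + 1 for d in deltas), default=1)
            blocks.append((chunk[0], width, deltas))
        return blocks

    def compressed_bits(blocks, raw_width=16):
        # one raw sample + an 8-bit width field per block, then packed deltas
        return sum(raw_width + 8 + width * len(deltas) for _, width, deltas in blocks)

    eeg = [512, 514, 515, 513, 510, 511, 515, 520]          # fabricated ADC readings
    blocks = delta_compress(eeg)
    print(compressed_bits(blocks), "bits vs", 16 * len(eeg), "bits uncompressed")
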
85

Discovering and Tracking Interesting Web Services

Rocco, Daniel J. (Daniel John) 01 December 2004 (has links)
The World Wide Web has become the standard mechanism for information distribution and scientific collaboration on the Internet. This dissertation research explores a suite of techniques for discovering relevant dynamic sources in a specific domain of interest and for managing Web data effectively. We first explore techniques for discovery and automatic classification of dynamic Web sources. Our approach utilizes a service class model of the dynamic Web that allows the characteristics of interesting services to be specified using a service class description. To promote effective Web data management, the Page Digest Web document encoding eliminates tag redundancy and places structure, content, tags, and attributes into separate containers, each of which can be referenced in isolation or in conjunction with the other elements of the document. The Page Digest Sentinel system leverages our unique encoding to provide efficient and scalable change monitoring for arbitrary Web documents through document compartmentalization and semantic change request grouping. Finally, we present XPack, an XML document compression system that uses a containerized view of an XML document to provide both good compression and efficient querying over compressed documents. XPack's queryable XML compression format is general-purpose, does not rely on domain knowledge or particular document structural characteristics for compression, and achieves better query performance than standard query processors using text-based XML. Our research expands the capabilities of existing dynamic Web techniques, providing superior service discovery and classification services, efficient change monitoring of Web information, and compartmentalized document handling. DynaBot is the first system to combine a service class view of the Web with a modular crawling architecture to provide automated service discovery and classification. The Page Digest Web document encoding represents Web documents efficiently by separating the individual characteristics of the document. The Page Digest Sentinel change monitoring system utilizes the Page Digest document encoding for scalable change monitoring through efficient change algorithms and intelligent request grouping. Finally, XPack is the first XML compression system that delivers compression rates similar to existing techniques while supporting better query performance than standard query processors using text-based XML.
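
To make the "separate containers" idea more concrete, the rough sketch below splits an XML document into structure, tag names, attributes, and text content using Python's standard library. The container layout and function name are simplifying assumptions for illustration; the actual Page Digest encoding described in the dissertation is more elaborate.

    import xml.etree.ElementTree as ET

    def containerize(xml_text):
        """Split a document into separate containers, loosely in the spirit of Page Digest."""
        structure, tags, attrs, content = [], [], [], []

        def walk(node, depth=0):
            structure.append((depth, len(list(node))))     # tree shape only
            tags.append(node.tag)                          # tag names stored apart from content
            attrs.append(dict(node.attrib))
            content.append((node.text or "").strip())
            for child in node:
                walk(child, depth + 1)

        walk(ET.fromstring(xml_text))
        return {"structure": structure, "tags": tags, "attributes": attrs, "content": content}

    doc = "<paper year='2004'><title>Page Digest</title><author>Rocco</author></paper>"
    containers = containerize(doc)
    print(containers["tags"])        # ['paper', 'title', 'author']
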
86

High-performance memory system architectures using data compression

Baek, Seungcheol 22 May 2014 (has links)
The Chip Multi-Processor (CMP) paradigm has cemented itself as the archetypal philosophy of future microprocessor design. Rapidly diminishing technology feature sizes have enabled the integration of ever-increasing numbers of processing cores on a single chip die. This abundance of processing power has magnified the venerable processor-memory performance gap, known as the "memory wall". To bridge this performance gap, a high-performing memory structure is needed, and an attractive solution is to use compression in the memory hierarchy. In this thesis, to use compression techniques more efficiently, compressed cacheline size information is studied, and size-aware cache management techniques and a hot-cacheline prediction technique for dynamic early decompression are proposed. The work proposed in this thesis also attempts to mitigate the limitations of phase change memory (PCM), such as low write performance and limited long-term endurance. One promising solution is the deployment of hybridized memory architectures that fuse dynamic random access memory (DRAM) and PCM, combining the best attributes of each technology by using the DRAM as an off-chip cache. A dual-phase compression technique is proposed for high-performing DRAM/PCM hybrid environments, and a multi-faceted wear-leveling technique is proposed for the long-term endurance of compressed PCM. This thesis also includes a new compression-based hybrid multi-level cell (MLC)/single-level cell (SLC) PCM management technique that aims to combine the performance edge of SLCs with the higher capacity of MLCs in a hybrid environment.
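
As a rough illustration of why compressed cacheline size information matters, the toy sketch below packs variable-size compressed lines into a fixed byte budget and evicts by recency. The policy is a generic assumption for illustration only, not the size-aware management techniques proposed in the thesis.

    from collections import OrderedDict

    class CompressedSet:
        """Toy cache set holding variable-size compressed lines within a byte budget."""

        def __init__(self, capacity_bytes=256):
            self.capacity = capacity_bytes
            self.lines = OrderedDict()          # address -> compressed size, in LRU order

        def insert(self, addr, size):
            self.lines[addr] = size
            self.lines.move_to_end(addr)        # mark as most recently used
            # Evict least-recently-used lines until the compressed lines fit the budget
            while sum(self.lines.values()) > self.capacity and len(self.lines) > 1:
                self.lines.popitem(last=False)

    s = CompressedSet(capacity_bytes=128)
    for addr, size in [(0x100, 40), (0x140, 64), (0x180, 24), (0x1c0, 48)]:
        s.insert(addr, size)                    # smaller compressed lines let more of them coexist
    print(list(s.lines.items()))
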
87

Multiresolution strategies for the numerical solution of optimal control problems

Jain, Sachin 26 March 2008 (has links)
Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high-resolution (dense) uniform grid, which requires a large amount of computational resources in terms of both CPU time and memory. Hence, in order to accurately capture any irregularities in the solution with fewer computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining the mesh uniformly over the whole domain. To this end, a novel multiresolution scheme for data compression has been designed and is shown to outperform similar data compression schemes; specifically, we have shown that the proposed approach results in fewer grid points than a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples demonstrate the stability and robustness of the proposed algorithm. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a nonlinear programming (NLP) problem that is solved using standard NLP codes. The novelty of the proposed approach hinges on the automatic calculation of a suitable, nonuniform grid over which the NLP problem is solved, which tends to increase numerical efficiency and robustness. Control and/or state constraints are handled with ease and without any additional computational complexity. The proposed algorithm is based on a simple and intuitive method to balance several conflicting objectives, such as accuracy of the solution, convergence, and speed of the computations. The benefits of the proposed algorithm over uniform grid implementations are demonstrated with the help of several nontrivial examples. Furthermore, two sequential multiresolution trajectory optimization algorithms for solving problems with moving targets and/or dynamically changing environments have been developed.
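
To illustrate local (rather than uniform) refinement, the sketch below adds grid points only where a linear interpolant of the sampled function is inaccurate, so points cluster around a control-like switching. The interpolation-error test and the test function are generic stand-ins, not the multiresolution criterion used in the dissertation.

    import numpy as np

    def refine_locally(f, a, b, tol=1e-3, max_depth=12):
        """Build a nonuniform grid on [a, b]: split an interval only where the
        midpoint value deviates from linear interpolation by more than tol."""
        def split(x0, x1, depth):
            xm = 0.5 * (x0 + x1)
            interp = 0.5 * (f(x0) + f(x1))
            if depth >= max_depth or abs(f(xm) - interp) <= tol:
                return [x0]
            return split(x0, xm, depth + 1) + split(xm, x1, depth + 1)
        return np.array(split(a, b, 0) + [b])

    def bang_bang(t):
        # smooth stand-in for a control profile with a sharp switch at t = 0.5
        return np.tanh(200 * (t - 0.5))

    grid = refine_locally(bang_bang, 0.0, 1.0)
    print(len(grid), "points, clustered near the switching time")   # far fewer than a dense uniform grid
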
88

Compressão de dados ambientais em redes de sensores sem fio usando código de Huffman / Environmental data compression in wireless sensor networks using Huffman coding

Maciel, Marcos Costa 21 February 2013 (has links)
Fundação do Amparo à Pesquisa do Estado do Amazonas (FAPEAM) / Nesta dissertação de mestrado é apresentada a proposta de um método simples de compressão de dados sem perda para Redes de Sensores sem Fio (RSSF). Este método é baseado numa codificação Huffman convencional aplicada a um conjunto de amostras de parâmetros monitorados que possuam uma forte correlação temporal, fazendo com que seja gerado um dicionário Huffman a partir dessas probabilidades, o qual pode ser utilizado em outros conjuntos de parâmetros de mesma característica. Os resultados de simulação usando temperatura e umidade relativa mostram que este método supera alguns dos mais populares mecanismos de compressão projetados especificamente para RSSF. / In this master's thesis we present a lightweight lossless data compression method for wireless sensor networks (WSN). The method is based on conventional Huffman coding applied to a sample set of monitored parameters that have a strong temporal correlation, so that a Huffman dictionary is generated from these probabilities and can be reused for other sets of parameters with the same characteristics. Simulation results using temperature and relative humidity measurements show that the proposed method outperforms popular compression mechanisms designed specifically for wireless sensor networks.
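
A minimal version of the idea (build a Huffman dictionary once from the empirical distribution of a strongly correlated training trace and reuse it for similar data) can be sketched with the standard library. The quantization into tenth-of-a-degree deltas and the sample values are illustrative assumptions, not the parameters used in the dissertation.

    import heapq
    from collections import Counter

    def huffman_dictionary(samples):
        """Build {symbol: bitstring} from the empirical frequencies of a training set."""
        heap = [[w, [sym, ""]] for sym, w in Counter(samples).items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            lo, hi = heapq.heappop(heap), heapq.heappop(heap)
            for pair in lo[1:]:
                pair[1] = "0" + pair[1]
            for pair in hi[1:]:
                pair[1] = "1" + pair[1]
            heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
        return dict(heap[0][1:])

    train = [23.1, 23.2, 23.2, 23.3, 23.3, 23.4, 23.3, 23.3]            # made-up temperature trace
    deltas = [round((b - a) * 10) for a, b in zip(train, train[1:])]    # tenths of a degree
    book = huffman_dictionary(deltas)

    # The dictionary can be reused for other traces with the same statistics;
    # a real deployment would need an escape code for symbols never seen in training.
    encoded = "".join(book[d] for d in deltas)
    print(book, len(encoded), "bits")
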
89

Desenvolvimento de técnicas quimiométricas de compressão de dados e de redução de ruído instrumental aplicadas a óleo diesel e madeira de eucalipto usando espectroscopia NIR / Development of chemometric techniques for data compression and instrumental noise reduction applied to diesel oil and eucalyptus wood employing NIR spectroscopy

Dantas Filho, Heronides Adonias 16 March 2007 (has links)
Orientadores: Celio Pasquini, Mario Cesar Ugulino de Araujo / Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Química / Made available in DSpace on 2018-08-09T13:24:36Z (GMT). No. of bitstreams: 1 DantasFilho_HeronidesAdonias_D.pdf: 2337564 bytes, checksum: b5a44bf3eec3ce95ab683c5b2621b012 (MD5) Previous issue date: 2007 / Resumo: Neste trabalho foram desenvolvidas e aplicadas técnicas de seleção de amostras e de variáveis espectrais para calibração multivariada a partir do Algoritmo das Projeções Sucessivas (APS). Também foi utilizada a transformada wavelet para resolver problemas de redução de ruído associado a dados de espectroscopia NIR (Infravermelho Próximo), na construção de modelos de calibração multivariada baseados em Regressão Linear Múltipla (MLR) para estimativa de parâmetros de qualidade de óleo diesel combustível e também de madeira de eucalipto. Os espectros NIR de transmitância para óleo diesel e de reflectância para madeira de eucalipto foram registrados empregando-se um equipamento NIR-Bomem com detector de Arseneto de Gálio e Índio. Para a aplicação em óleo diesel, foram estudadas as regiões espectrais: 850 - 1.100 nm, 1.100 - 1.570 nm e 1.570 - 2.500 nm. Para as amostras de madeira de eucalipto foi empregada a região de 1.100 - 2.500 nm. Os resultados do uso de técnicas de seleção de variáveis e amostras por MLR comprovaram sua simplicidade frente aos modelos de regressão por mínimos quadrados parciais (PLS), que empregam toda a região espectral e transformação em variáveis latentes e são mais complexos de interpretar. O emprego de seleção de amostras demonstrou ainda a possibilidade de procedimentos de recalibração e transferência de calibração que utilizam um número reduzido de amostras, sem perda significativa da capacidade preditiva dos modelos MLR. O uso de filtragem wavelet também teve sua eficiência comprovada tanto no contexto da calibração multivariada quanto na filtragem de espectros NIR a partir de varreduras individuais. Na maioria dos casos de que trata esta tese, e por consequência das técnicas quimiométricas empregadas, melhorias quanto à minimização do erro (RMSEP) associado à quantificação dos parâmetros de qualidade, bem como redução do tempo empregado na aquisição de varreduras de espectros NIR, foram as principais contribuições fornecidas / Abstract: This work describes two techniques for spectral variable and sample selection based on the Successive Projections Algorithm (SPA), aiming at the construction of multivariate regression models. The wavelet transform was also employed to reduce the noise associated with spectroscopic data in the near-infrared (NIR) spectral region, and was used in the construction of multivariate calibration models based on Multiple Linear Regression (MLR) to estimate quality parameters of diesel fuel and eucalyptus wood. The NIR transmission spectra for diesel samples and the reflectance spectra obtained for wood samples were acquired using a NIR-Bomem instrument with an InGaAs detector. For the diesel application, the following spectral regions were investigated: 850 - 1100 nm, 1100 - 1570 nm and 1570 - 2500 nm. For wood samples, the spectral region employed was 1100 - 2500 nm. The results obtained using the variable selection techniques and MLR demonstrate their simplicity when compared with the Partial Least Squares (PLS) counterpart, which employs the full spectral region and latent variables and is therefore more difficult to interpret.
The use of wavelet filtering also proved efficient both for multivariate calibration and for filtering NIR spectra obtained from individual scans. In most of the cases addressed in this thesis, and as a consequence of the chemometric techniques employed, improvements in the error (RMSEP) associated with the quantification of the quality parameters, as well as a decrease in the time spent acquiring NIR spectral scans, were the main contributions of this work / Doutorado / Química Analítica / Doutor em Ciências
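
As an illustration of the wavelet filtering step, the sketch below denoises a synthetic NIR-like spectrum with PyWavelets using the common universal-threshold recipe. The wavelet family, threshold rule, and synthetic band are assumptions for the example and may differ from the choices made in the thesis.

    import numpy as np
    import pywt

    def wavelet_denoise(spectrum, wavelet="db4", level=4):
        """Soft-threshold the detail coefficients of a noisy spectrum (universal threshold)."""
        coeffs = pywt.wavedec(spectrum, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745              # noise estimate from finest scale
        thr = sigma * np.sqrt(2.0 * np.log(spectrum.size))
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: spectrum.size]

    x = np.linspace(850, 1100, 1024)                                # wavelength axis, nm
    clean = np.exp(-((x - 980) / 30) ** 2)                          # synthetic absorption band
    noisy = clean + 0.05 * np.random.default_rng(1).standard_normal(x.size)
    denoised = wavelet_denoise(noisy)
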
90

Compressão de dados baseada nos modelos de Markov mínimos / Data compression based on minimal Markov models

Yaginuma, Karina Yuriko 03 December 2010 (has links)
Orientador: Jesus Enrique Garcia / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Made available in DSpace on 2018-08-15T18:18:34Z (GMT). No. of bitstreams: 1 Yaginuma_KarinaYuriko_M.pdf: 793513 bytes, checksum: 80908040b7ddf985dbe851b78dc4f279 (MD5) Previous issue date: 2010 / Resumo: Nesta dissertação é proposta uma metodologia de compressão de dados usando Modelos de Markov Mínimos (MMM). Para tal fim estudamos cadeias de Markov de alcance variável (VLMC, Variable Length Markov Chains) e MMM. Apresentamos então uma aplicação dos MMM a dados linguísticos. Paralelamente estudamos o princípio MDL (Minimum Description Length) e o problema de compressão de dados. Propomos uma metodologia de compressão de dados utilizando MMM e apresentamos um algoritmo próprio para compressão usando MMM. Comparamos, mediante simulação e aplicação a dados reais, as características da compressão de dados utilizando as cadeias completas de Markov, VLMC e MMM / Abstract: In this dissertation we propose a methodology for data compression using Minimal Markov Models (MMM). To this end we study Variable Length Markov Chains (VLMC) and MMM, and then present an application of MMM to linguistic data. In parallel we study the MDL (Minimum Description Length) principle and the problem of data compression. We propose a data compression method using MMM and present an algorithm suitable for compression with MMM. We compare, through simulation and application to real data, the characteristics of data compression using complete Markov chains, VLMC and MMM / Mestrado / Probabilidade e Estatística / Mestre em Estatística
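
To make the link between a Markov model and compression concrete, the sketch below computes the number of bits an adaptive order-k Markov model would need for a sequence, using an add-1/2 (KT-style) estimator as a proxy for what an arithmetic coder would produce. This illustrates the principle only; it does not implement the Minimal Markov Model partition of contexts or the VLMC pruning studied in the dissertation.

    import math
    from collections import Counter, defaultdict

    def markov_code_length(seq, order=2):
        """Bits needed to encode seq with an adaptively estimated order-k Markov model."""
        alphabet = sorted(set(seq))
        counts = defaultdict(Counter)
        bits = 0.0
        for i, sym in enumerate(seq):
            ctx = tuple(seq[max(0, i - order):i])
            c = counts[ctx]
            p = (c[sym] + 0.5) / (sum(c.values()) + 0.5 * len(alphabet))   # add-1/2 smoothing
            bits += -math.log2(p)
            c[sym] += 1
        return bits

    text = "abracadabra" * 50
    for k in range(4):
        print(k, round(markov_code_length(text, order=k)))   # higher orders capture more structure
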
