81 |
Linear dimensionality reduction applied to SIFT and SURF feature descriptors / Redução linear de dimensionalidade aplicada aos descritores de características SIFT e SURF
González Valenzuela, Ricardo Eugenio, 1984-
24 August 2018 (has links)
Advisors: Hélio Pedrini, William Robson Schwartz
Master's thesis (Dissertação de mestrado), Universidade Estadual de Campinas, Instituto de Computação, 2014
Abstract: Robust local descriptors usually consist of high-dimensional feature vectors that describe distinctive characteristics of images. The high dimensionality of a feature vector incurs considerable costs in terms of computational time and storage, which affects the performance of several tasks that employ feature vectors, such as matching, image retrieval and classification. To address these problems, dimensionality reduction techniques can be applied, essentially by building a projection matrix that adequately explains the importance of the data in another basis. This dissertation applies linear dimensionality reduction to SIFT and SURF descriptors. Its main objective is to demonstrate that, even at the risk of decreasing the accuracy of the feature vectors, dimensionality reduction can yield a satisfactory trade-off between computational time and storage. The linear dimensionality reduction is performed with Random Projections (RP), Independent Component Analysis (ICA), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Partial Least Squares (PLS), in order to create lower-dimensional feature vectors. This work evaluates the reduced feature vectors in a matching application, as well as their distinctiveness in an image retrieval application. Computational time and memory usage are measured by comparing the original and the reduced feature vectors.
Degree: Mestre em Ciência da Computação (Computer Science)
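For a concrete sense of the kind of linear reduction evaluated in this dissertation, the sketch below (an illustrative assumption of this listing, not the author's code) projects synthetic 128-dimensional SIFT-like descriptors down to 32 dimensions with PCA computed via SVD and with a Gaussian random projection, then compares the storage footprint.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128)).astype(np.float32)  # stand-in for 128-D SIFT descriptors
k = 32                                               # target dimensionality

# PCA: project onto the top-k right singular vectors of the centered data.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
X_pca = (X - mu) @ Vt[:k].T                          # shape (1000, 32)

# Random projection: a Gaussian matrix scaled to roughly preserve distances.
R = rng.normal(scale=1.0 / np.sqrt(k), size=(128, k)).astype(np.float32)
X_rp = X @ R                                         # shape (1000, 32)

print("original bytes:", X.nbytes, "reduced bytes:", X_pca.nbytes)
```

Either projection yields the same four-fold storage reduction; the trade-off studied in the dissertation is how much matching and retrieval accuracy each technique gives up in exchange.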
82 |
The development of accented English synthetic voices
Malatji, Promise Tshepiso
January 2019 (has links)
Thesis (M.Sc. (Computer Science)), University of Limpopo, 2019
A text-to-speech (TTS) synthesis system is a software system that receives text as input and produces speech as output. TTS synthesis can be used for, among other things, language learning and reading out text for people living with disabilities (e.g., physically challenged or visually impaired users), by native and non-native speakers of the target language. Most people relate easily to a second language spoken by a non-native speaker with whom they share a native language, yet most online English TTS synthesis systems are developed using native speakers of English. This research study focuses on developing accented English synthetic voices as spoken by non-native speakers in the Limpopo province of South Africa. The Modular Architecture for Research on speech sYnthesis (MARY) TTS engine is used to develop the synthetic voices, and the Hidden Markov Model (HMM) method is used to train them. A secondary text corpus is used to build the training speech corpus by recording six speakers reading the text. The quality of the developed synthetic voices is measured in terms of intelligibility, similarity and naturalness using a listening test, with results broken down by the evaluators' occupation and gender as well as overall. The subjective listening test indicates that the developed synthetic voices have a high level of acceptance in terms of similarity and intelligibility. Speech analysis software is used to compare the synthesised speech with the human recordings; there is no significant difference in voice pitch between the speakers and the synthetic voices, except for one synthetic voice.
83 |
Energy-efficient digital design of reliable, low-throughput wireless biomedical systems
Tolbert, Jeremy Reynard
24 August 2012
The main objective of this research is to improve the energy efficiency of low-throughput wireless biomedical systems by employing digital design techniques. The power consumed in conventional wireless EEG (biomedical) systems is dominated by the digital microcontroller and the radio frequency (RF) transceiver. To reduce the power associated with the digital processor, data compression can be used to reduce the volume of data transmitted. An adaptive data compression algorithm is proposed that ensures accurate representation of critical epileptic signals while keeping overall power low. Further power reductions are achieved by designing a custom baseband processor for data compression. A functional system has been hardware-verified and ASIC-optimized, reducing power by over 9X compared to existing methods. The optimized processor can operate at 32 MHz with a near-threshold supply of 0.5 V in a conventional 45 nm technology. When attempting to reach high frequencies in the near-threshold regime, however, the probability of timing violations can reduce the robustness of the system. To further optimize the implementation, a low-voltage clock tree design is investigated to improve the reliability of the digital processor. By implementing the proposed clock tree design methodology, the digital processor improves its robustness (by reducing the probability of timing violations) while reducing overall power by more than 5 percent. Future work includes examining new architectures for low-throughput processing and investigating the proposed system's potential for a multi-channel EEG implementation.
84 |
Discovering and Tracking Interesting Web Services
Rocco, Daniel J. (Daniel John)
01 December 2004
The World Wide Web has become the standard mechanism for information distribution and scientific collaboration on the Internet. This dissertation research explores a suite of techniques for discovering relevant dynamic sources in a specific domain of interest and for managing Web data effectively. We first explore techniques for discovery and automatic classification of dynamic Web sources. Our approach utilizes a service class model of the dynamic Web that allows the characteristics of interesting services to be specified using a service class description.
To promote effective Web data management, the Page Digest Web document encoding eliminates tag redundancy and places structure, content, tags, and attributes into separate containers, each of which can be referenced in isolation or in conjunction with the other elements of the document. The Page Digest Sentinel system leverages our unique encoding to provide efficient and scalable change monitoring for arbitrary Web documents through document compartmentalization and semantic change request grouping.
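A rough sketch of the containerized idea described above, with invented container names rather than the actual Page Digest encoding: the document's tags, attributes and text are separated into distinct containers and each is hashed, so a subsequent fetch can be classified as a structural or a content-only change, in the spirit of Sentinel's compartmentalized change monitoring.

```python
import hashlib
import xml.etree.ElementTree as ET

def containers(xml_text):
    """Split a document into separate tag, attribute and text containers."""
    root = ET.fromstring(xml_text)
    tags, attrs, texts = [], [], []
    for el in root.iter():
        tags.append(el.tag)
        attrs.extend(f"{k}={v}" for k, v in sorted(el.attrib.items()))
        if el.text and el.text.strip():
            texts.append(el.text.strip())
    return {"tags": tags, "attrs": attrs, "text": texts}

def digests(xml_text):
    """One hash per container, so changes can be localized to a container."""
    return {name: hashlib.sha256("\n".join(c).encode()).hexdigest()
            for name, c in containers(xml_text).items()}

old = "<page><h1>News</h1><p id='1'>old text</p></page>"
new = "<page><h1>News</h1><p id='1'>updated text</p></page>"
d_old, d_new = digests(old), digests(new)
changed = [name for name in d_old if d_old[name] != d_new[name]]
print("changed containers:", changed)   # only the text container differs here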
Finally, we present XPack, an XML document compression system that uses a containerized view of an XML document to provide both good compression and efficient querying over compressed documents. XPack's queryable XML compression format is general-purpose, does not rely on domain knowledge or particular document structural characteristics for compression, and achieves better query performance than standard query processors using text-based XML.
Our research expands the capabilities of existing dynamic Web techniques, providing superior service discovery and classification services, efficient change monitoring of Web information, and compartmentalized document handling. DynaBot is the first system to combine a service class view of the Web with a modular crawling architecture to provide automated service discovery and classification. The Page Digest Web document encoding represents Web documents efficiently by separating the individual characteristics of the document. The Page Digest Sentinel change monitoring system utilizes the Page Digest document encoding for scalable change monitoring through efficient change algorithms and intelligent request grouping. Finally, XPack is the first XML compression system that delivers compression rates similar to existing techniques while supporting better query performance than standard query processors using text-based XML.
85 |
High-performance memory system architectures using data compression
Baek, Seungcheol
22 May 2014
The Chip Multi-Processor (CMP) paradigm has cemented itself as the archetypal philosophy of future microprocessor design. Rapidly diminishing technology feature sizes have enabled the integration of ever-increasing numbers of processing cores on a single chip die. This abundance of processing power has magnified the venerable processor-memory performance gap, known as the "memory wall". Bridging this gap requires a high-performing memory structure, and an attractive way to achieve it is to use compression in the memory hierarchy. To use compression techniques more efficiently, this thesis studies compressed-cacheline size information and proposes size-aware cache management techniques and a hot-cacheline prediction technique for dynamic early decompression. The work also attempts to mitigate the limitations of phase-change memory (PCM), such as low write performance and limited long-term endurance. One promising solution is the deployment of hybrid memory architectures that fuse dynamic random access memory (DRAM) and PCM, combining the best attributes of each technology by using the DRAM as an off-chip cache. A dual-phase compression technique is proposed for high-performing DRAM/PCM hybrid environments, and a multi-faceted wear-leveling technique is proposed to extend the long-term endurance of compressed PCM. This thesis also includes a new compression-based hybrid multi-level cell (MLC)/single-level cell (SLC) PCM management technique that aims to combine the performance edge of SLCs with the higher capacity of MLCs in a hybrid environment.
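As a loose software analogy for size-aware management of compressed cachelines (the thesis targets hardware caches, and this toy policy is an assumption made only for illustration), the sketch below keeps a byte budget and evicts the oldest entries until a newly inserted compressed line fits, so lines that compress well allow more entries to coexist.

```python
from collections import OrderedDict

class SizeAwareCache:
    """Toy cache whose capacity is a byte budget over compressed line sizes."""
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.lines = OrderedDict()          # tag -> compressed size in bytes
        self.used = 0

    def insert(self, tag, compressed_size):
        if tag in self.lines:
            self.used -= self.lines.pop(tag)
        while self.lines and self.used + compressed_size > self.budget:
            _, victim_size = self.lines.popitem(last=False)   # evict the oldest line
            self.used -= victim_size
        self.lines[tag] = compressed_size
        self.used += compressed_size

cache = SizeAwareCache(budget_bytes=256)
for tag, size in [("a", 64), ("b", 24), ("c", 64), ("d", 16), ("e", 128)]:
    cache.insert(tag, size)
print(dict(cache.lines), cache.used)   # more lines fit when they compress well
```

The point of the analogy is only that eviction decisions depend on compressed size rather than on a fixed line count, which is the essence of size-aware management.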
86 |
Multiresolution strategies for the numerical solution of optimal control problems
Jain, Sachin
26 March 2008
Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high-resolution (dense) uniform grid, but this requires a large amount of computational resources in terms of both CPU time and memory. Hence, to accurately capture irregularities in the solution with fewer computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining it uniformly over the whole domain. To this end, a novel multiresolution scheme for data compression has been designed and is shown to outperform similar data compression schemes; specifically, the proposed approach produces fewer grid points than a common multiresolution data compression scheme.
The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples have demonstrated the stability and robustness of the proposed algorithm. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a nonlinear programming (NLP) problem that is solved using standard NLP codes. The novelty of the proposed approach hinges on the automatic calculation of a suitable, nonuniform grid over which the NLP problem is solved, which tends to increase numerical efficiency and robustness. Control and/or state constraints are handled with ease, and without any additional computational complexity. The proposed algorithm is based on a simple and intuitive method to balance several conflicting objectives, such as accuracy of the solution, convergence, and speed of the computations. The benefits of the proposed algorithm over uniform grid implementations are demonstrated with the help of several nontrivial examples. Furthermore, two sequential multiresolution trajectory optimization algorithms for solving problems with moving targets and/or dynamically changing environments have been developed.
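A generic sketch of error-driven local grid refinement of the sort described above (the interpolation-error test used here is a common multiresolution criterion, not the thesis's exact scheme): an interval is split only where the midpoint value is poorly predicted by linear interpolation from its endpoints, so grid points concentrate near the switching discontinuity.

```python
import numpy as np

def f(t):
    return np.where(t < 0.5, 0.0, 1.0)          # control-like signal with a switch at t = 0.5

def refine(a, b, tol, depth=0, max_depth=10):
    """Return grid points on [a, b], splitting only where midpoint interpolation fails."""
    m = 0.5 * (a + b)
    interp_err = abs(f(m) - 0.5 * (f(a) + f(b)))
    if depth >= max_depth or interp_err <= tol:
        return [a, b]
    left = refine(a, m, tol, depth + 1, max_depth)
    right = refine(m, b, tol, depth + 1, max_depth)
    return left[:-1] + right                     # drop the duplicated midpoint

grid = np.array(refine(0.0, 1.0, tol=1e-3))
print(len(grid), "points; smallest spacing near the switch:", np.min(np.diff(grid)))
```

Running this produces a sparse grid away from the switch and dyadically finer spacing next to it, which is the behavior the multiresolution transcription exploits to keep the NLP small.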
87 |
Compressão de dados ambientais em redes de sensores sem fio usando código de Huffman / Compression of environmental data in wireless sensor networks using Huffman coding
Maciel, Marcos Costa
21 February 2013
Fundação do Amparo à Pesquisa do Estado do Amazonas (FAPEAM)
Abstract: In this master's thesis we present a lightweight lossless data compression method for wireless sensor networks (WSN). The method is based on conventional Huffman coding applied to a sample set of monitored parameters that exhibit strong temporal correlation: a Huffman dictionary is generated from the probabilities observed in that set and can then be reused for other sets of parameters with the same characteristics. Simulation results using temperature and relative humidity measurements show that the proposed method outperforms popular compression mechanisms designed specifically for wireless sensor networks.
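A minimal sketch of the general idea, with illustrative data and a textbook Huffman construction rather than the thesis's actual dictionary: a codebook is built once from the first differences of a temporally correlated training series and then reused to encode another series with similar statistics.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman codebook {symbol: bitstring} from observed symbol counts."""
    counts = Counter(symbols)
    if len(counts) == 1:
        return {next(iter(counts)): "0"}
    heap = [[n, i, [sym, ""]] for i, (sym, n) in enumerate(counts.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]     # extend codes on the low branch
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]     # extend codes on the high branch
        heapq.heappush(heap, [lo[0] + hi[0], next_id, *lo[2:], *hi[2:]])
        next_id += 1
    return dict(heapq.heappop(heap)[2:])

# Temporally correlated readings: encode first differences, which cluster near zero.
train = [21.0, 21.1, 21.1, 21.2, 21.2, 21.3, 21.2, 21.2, 21.1, 21.1]
diffs = [round(b - a, 1) for a, b in zip(train, train[1:])]
book = huffman_code(diffs)

other = [18.0, 18.1, 18.1, 18.0, 18.1, 18.2, 18.2]   # similar statistics, reused dictionary
bits = "".join(book[round(b - a, 1)] for a, b in zip(other, other[1:]))
print(book, len(bits), "bits vs", 32 * (len(other) - 1), "bits raw")
```

Because the dictionary is computed offline from a representative sample, the sensor node only performs table lookups at run time, which is what makes the scheme lightweight.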
88 |
Desenvolvimento de técnicas quimiométricas de compressão de dados e de redução de ruído instrumental aplicadas a óleo diesel e madeira de eucalipto usando espectroscopia NIR / Development of chemometric techniques for data compression and instrumental noise reduction applied to diesel oil and eucalyptus wood employing NIR spectroscopy
Dantas Filho, Heronides Adonias
16 March 2007 (has links)
Advisors: Celio Pasquini, Mario Cesar Ugulino de Araujo
Doctoral thesis (Tese de doutorado), Universidade Estadual de Campinas, Instituto de Química, 2007
Abstract: This work describes techniques for spectral variable and sample selection based on the Successive Projections Algorithm (SPA), aiming at the construction of multivariate regression models. The wavelet transform was also employed to reduce the noise associated with spectroscopic data in the near-infrared (NIR) region. Both were used to build multivariate calibration models based on Multiple Linear Regression (MLR) for estimating quality parameters of diesel fuel and of eucalyptus wood. The NIR transmission spectra of the diesel samples and the reflectance spectra of the wood samples were acquired with a NIR-Bomem instrument equipped with an InGaAs detector. For the diesel application, the spectral regions 850-1100 nm, 1100-1570 nm and 1570-2500 nm were investigated; for the wood samples, the region 1100-2500 nm was employed. The results obtained with the variable-selection techniques and MLR demonstrate their simplicity compared with Partial Least Squares (PLS) models, which employ the full spectral region and latent variables and are therefore harder to interpret. Sample selection also showed that recalibration and calibration transfer procedures can use a reduced number of samples without significant loss of the predictive ability of the MLR models. Wavelet filtering likewise proved efficient both in the multivariate calibration context and in filtering NIR spectra obtained from individual scans. In most of the cases addressed in this work, and as a consequence of the chemometric techniques employed, the main contributions were reductions in the prediction error (RMSEP) associated with the quality parameters and a reduction in the time needed to acquire NIR spectral scans.
Degree: Doutor em Ciências (Química Analítica)
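A rough sketch of wavelet-based noise filtering of a spectrum, using PyWavelets with soft thresholding; the wavelet family, decomposition level and threshold rule here are assumptions made for illustration, not the settings used in the thesis.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
wavelength = np.linspace(1100, 2500, 1024)                       # nm, NIR-like axis
clean = np.exp(-((wavelength - 1700) / 60) ** 2) + 0.5 * np.exp(-((wavelength - 2300) / 40) ** 2)
noisy = clean + rng.normal(scale=0.05, size=clean.size)          # single-scan-like noise

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745                   # noise estimate from finest level
thr = sigma * np.sqrt(2 * np.log(noisy.size))                    # universal threshold
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "db4")[:noisy.size]

print("RMS error vs clean signal, before:", np.sqrt(np.mean((noisy - clean) ** 2)),
      "after:", np.sqrt(np.mean((denoised - clean) ** 2)))
```

Filtering individual scans this way is attractive because it can replace part of the scan averaging usually needed to reach an acceptable signal-to-noise ratio, which is where the reported time savings come from.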
89 |
Compressão de dados baseada nos modelos de Markov mínimos / Data compression based on minimal Markov models
Yaginuma, Karina Yuriko
03 December 2010
Advisor: Jesus Enrique Garcia
Master's thesis (Dissertação de mestrado), Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica, 2010
Abstract: In this dissertation we propose a methodology for data compression using Minimal Markov Models (MMM). To this end we study Variable Length Markov Chains (VLMC) and MMM, and present an application of MMM to linguistic data. In parallel, we study the Minimum Description Length (MDL) principle and the data compression problem. We propose a data compression method using MMM and present an algorithm suitable for compression with MMM. The characteristics of data compression using complete Markov chains, VLMC and MMM are compared through simulation and through application to real data.
Degree: Mestre em Estatística (Probabilidade e Estatística)
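To make the link between a fitted Markov model and compression concrete, the sketch below (a generic illustration using a full fixed-order chain, not the Minimal Markov Model construction) fits transition probabilities on a training string and computes the ideal code length, the sum of -log2 p(symbol | context), that an arithmetic coder driven by the model would approach.

```python
import math
from collections import Counter, defaultdict

def code_length_bits(text, order):
    """Ideal code length of `text` under a fixed-order Markov model fit on the same text."""
    context_counts = defaultdict(Counter)
    for i in range(order, len(text)):
        context_counts[text[i - order:i]][text[i]] += 1
    alphabet = set(text)
    bits = 0.0
    for i in range(order, len(text)):
        ctx, sym = text[i - order:i], text[i]
        counts = context_counts[ctx]
        # Laplace smoothing keeps every symbol encodable.
        p = (counts[sym] + 1) / (sum(counts.values()) + len(alphabet))
        bits += -math.log2(p)
    return bits

text = "abracadabra " * 50
for order in (0, 1, 2):
    print(f"order {order}: {code_length_bits(text, order):.0f} bits "
          f"({code_length_bits(text, order) / (8 * len(text)):.2f} of raw size)")
```

Richer contexts shorten the code but enlarge the model that must also be described; MDL-based approaches such as VLMC and MMM trade these two terms off by merging contexts whose conditional distributions are equivalent.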
90 |
Novas abordagens para compressão de documentos XML / New approaches for compression of XML documents
Teixeira, Márlon Amaro Coelho
19 August 2018
Advisor: Leonardo de Souza Mendes
Master's thesis (Dissertação de mestrado), Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação, 2011
Abstract: Nowadays, some of the factors that determine the success or failure of a corporation are tied to the speed and efficiency of its decision making. For these requirements to be met, integrating legacy computational systems with new ones is of fundamental importance, which creates the need for old and new technologies to interoperate. XML emerges as a solution to this problem: a self-descriptive language, independent of technology and platform, that has become a standard for communication between heterogeneous systems. Because it is self-descriptive, XML is redundant, which generates more information to be transferred and stored and demands more resources from computational systems. This work presents new compression approaches specific to XML, with the goal of reducing document size and thereby lessening the impact on network, storage and processing resources. Two new approaches are presented, along with test cases that evaluate them in terms of compression ratio, compression time and tolerance of the methods to low memory availability. The results are compared with the XML compression methods that stand out in the literature and show that XML-aware compressors can considerably reduce the performance impact created by the language.
Degree: Mestre em Engenharia Elétrica (Telecomunicações e Telemática)
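A minimal sketch of one common XML-aware idea that exploits the redundancy mentioned above; it is hypothetical and not one of the two approaches proposed in the dissertation. Tags and attributes go into one stream and text content into another, and each stream is compressed separately with zlib so the compressor sees more homogeneous data; the reconstruction step is omitted for brevity.

```python
import zlib
import xml.etree.ElementTree as ET

def split_streams(xml_text):
    """Separate an XML document into a structure stream and a text-content stream."""
    root = ET.fromstring(xml_text)
    structure, content = [], []
    for el in root.iter():
        structure.append(el.tag + "".join(f" {k}={v}" for k, v in sorted(el.attrib.items())))
        if el.text and el.text.strip():
            content.append(el.text.strip())
    return "\n".join(structure).encode(), "\n".join(content).encode()

# Synthetic sensor log: highly repetitive markup around varying text values.
records = "".join(
    f'<reading sensor="s{i % 8}"><temp>{20 + i % 5}.{i % 10}</temp><hum>{40 + i % 7}</hum></reading>'
    for i in range(2000)
)
doc = f"<log>{records}</log>"

structure, content = split_streams(doc)
whole = zlib.compress(doc.encode(), 9)
split = zlib.compress(structure, 9) + zlib.compress(content, 9)
print("raw:", len(doc), "whole-doc zlib:", len(whole), "split containers zlib:", len(split))
```

Splitting by container is also what enables low-memory and query-friendly designs, since each stream can be processed independently instead of decompressing the whole document.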