91

Tópicos em seleção de modelos markovianos / Topics in selection of Markov models

Viola, Márcio Luis Lanfredi, 1978- 19 August 2018 (has links)
Advisor: Jesus Enrique Garcia / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Issue date: 2011 / Abstract: This thesis addresses the statistical problem of selecting a finite-order Markov model that fits a data set, in two different settings. In the first, we propose a methodology for estimating a context tree from independent samples, most of which are realizations of the same finite-memory variable-length Markov process while the remainder come from a different variable-length Markov process. The method uses the symmetrized relative entropy rate as a similarity measure to select the subset of samples that are realizations of the same law. The concept of asymptotic breakdown point is adapted to this selection problem in order to show that the proposed procedure is robust. In the second setting, for a multivariate finite-order Markov process, we propose a consistent methodology that returns the finest partition of the coordinates of the process such that its elements are conditionally independent. The method is based on the Bayesian Information Criterion (BIC); however, when the number of coordinates grows, the computational cost of the BIC becomes excessive, so we propose a computationally more efficient algorithm and prove its consistency. Both methodologies are evaluated through simulations and applied to linguistic data. / Doctorate in Statistics (Doutor em Estatística)
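As a rough illustration of how the BIC trades model fit against model size when selecting the order of a finite-alphabet Markov chain, here is a minimal sketch (a generic order-selection example, not the partition algorithm developed in the thesis; the function name and the toy sequence are made up):

```python
# Minimal sketch: select the order of a finite-alphabet Markov chain with
# BIC = -2*logL + k*log(n), where k is the number of free transition parameters.
from collections import Counter
from math import log

def bic_order_selection(seq, alphabet, max_order=3):
    n = len(seq)
    best = None
    for d in range(max_order + 1):
        ctx_counts, pair_counts = Counter(), Counter()
        for i in range(d, n):
            ctx = tuple(seq[i - d:i])              # the d symbols preceding position i
            ctx_counts[ctx] += 1
            pair_counts[(ctx, seq[i])] += 1
        # Maximum-likelihood log-likelihood of the order-d transition model.
        loglik = sum(c * log(c / ctx_counts[ctx]) for (ctx, _), c in pair_counts.items())
        k = (len(alphabet) ** d) * (len(alphabet) - 1)   # free parameters of the model
        bic = -2.0 * loglik + k * log(n)
        if best is None or bic < best[1]:
            best = (d, bic)
    return best                                          # (selected order, its BIC value)

# Toy usage on a synthetic symbol sequence.
print(bic_order_selection("abacabad" * 40, alphabet="abcd"))
```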
92

Learning Decentralized Goal-Based Vector Quantization

Gupta, Piyush 05 1900 (has links) (PDF)
No description available.
93

Modified VQ Coders For ECG

Narasimaham, M V S Phani 04 1900 (has links) (PDF)
No description available.
94

Design and evaluation of compact ISAs / Estudo e avaliação de conjuntos de instruções compactos

Lopes, Bruno Cardoso, 1985- 24 August 2018 (has links)
Advisor: Rodolfo Jardim de Azevedo / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação / Issue date: 2014 / Abstract: Modern embedded devices are composed of heterogeneous SoCs ranging from low- to high-end processor chips. Although RISC processors have been the traditional choice for these devices, the situation has changed recently: manufacturers are building embedded systems with both RISC (ARM and MIPS) and CISC (x86) processors. New functionality in embedded software requires more memory, an expensive and scarce resource in SoCs, so executable code size is critical: it directly affects the instruction-cache miss rate. CISC processors used to have higher code density than RISC processors, because variable-length encoding gives the most frequently used instructions the shortest encodings, yielding smaller programs. With the addition of new extensions and longer instructions, however, CISC density in recent applications has become similar to RISC.
In this thesis we investigate the compressibility of RISC and CISC processors, namely SPARC and x86. We propose a 16-bit extension to the SPARC processor, SPARC16, and provide the first methodology for generating 16-bit ISAs, evaluating the compression achieved against other 16-bit extensions. SPARC16 programs achieve better compression ratios than other ISAs, with results as low as 67%. SPARC16 also reduces instruction-cache miss rates by up to 9%, requiring smaller caches than SPARC processors to reach the same performance; the cache-size reduction can reach a factor of 16. Furthermore, we study how new extensions keep adding functionality to x86, bloating the ISA at the cost of a more complex microprocessor front end and higher area and energy consumption: the x86 ISA reached over 1300 distinct instructions in 2013. Analysis of x86 code from 5 Windows versions and 7 Linux distributions released between 1995 and 2012 shows that up to 57 instructions fall out of use over time. To address this, we propose a mechanism that recycles instruction opcodes through legacy-instruction emulation without breaking backward software compatibility. A case study re-encodes the AVX x86 SIMD instructions with shorter encodings taken from unused instructions, yielding up to 14% code-size reduction and 53% instruction-cache miss reduction in SPEC CPU2006 floating-point programs. Finally, our results show that up to 40% of x86 instructions can be removed with less than 5% overhead, without breaking any legacy code. / Doctorate in Computer Science (Doutor em Ciência da Computação)
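As a toy illustration of the opcode-recycling idea described above (the opcode names, encoding sizes, and instruction counts below are entirely hypothetical, and real x86 re-encoding involves the full instruction format rather than a lookup table):

```python
# Toy sketch of opcode recycling: reassign the short encodings of unused legacy
# instructions to frequently used instructions that currently need long encodings,
# and estimate the static code-size savings. All names and numbers are made up.

legacy_unused = {"AAM": 2, "BOUND": 2}                        # opcode -> encoding size (bytes)
hot_long = {"VADDPS": (5, 120_000), "VMULPS": (5, 90_000)}    # opcode -> (size, static count)

def estimate_savings(hot, recycled):
    """Greedily pair the most frequent long instructions with freed short encodings."""
    savings = 0
    free_slots = sorted(recycled.items(), key=lambda kv: kv[1])       # shortest encodings first
    targets = sorted(hot.items(), key=lambda kv: -kv[1][1])           # most frequent first
    for (_name, (old_size, count)), (_legacy, new_size) in zip(targets, free_slots):
        if new_size < old_size:
            savings += (old_size - new_size) * count
    return savings

print(f"estimated static code-size reduction: {estimate_savings(hot_long, legacy_unused)} bytes")
```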
95

An image delta compression tool: IDelta

Sullivan, Kevin Michael 01 January 2004 (has links)
The purpose of this thesis is to present "iDelta", a modified version of the algorithm used in the open-source differencing tool zdelta. The algorithm manages file data and is built specifically to difference images in the Photoshop file format.
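Delta compression of this kind expresses a target file as copy instructions against a reference file plus literal runs of new bytes. A minimal sketch of that idea, using Python's difflib for matching rather than zdelta's own hash-based matcher and output format:

```python
# Minimal sketch of delta encoding: express a target byte string as COPY ranges
# from a reference version plus ADD runs of literal bytes.
from difflib import SequenceMatcher

def make_delta(reference: bytes, target: bytes):
    ops = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, reference, target).get_opcodes():
        if tag == "equal":
            ops.append(("COPY", i1, i2 - i1))      # offset and length in the reference
        elif tag in ("replace", "insert"):
            ops.append(("ADD", target[j1:j2]))     # literal bytes taken from the target
        # "delete" needs no op: those reference bytes are simply not copied
    return ops

def apply_delta(reference: bytes, ops) -> bytes:
    out = bytearray()
    for op in ops:
        if op[0] == "COPY":
            _, off, length = op
            out += reference[off:off + length]
        else:
            out += op[1]
    return bytes(out)

ref = b"layer header AAAA pixels 000000 footer"
new = b"layer header AAAA pixels 111111 footer v2"
assert apply_delta(ref, make_delta(ref, new)) == new
```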
96

AZIP, audio compression system: Research on audio compression, comparison of psychoacoustic principles and genetic algorithms

Chen, Howard 01 January 2005 (has links)
The purpose of this project is to investigate the differences between psychoacoustic principles and genetic algorithms (GA). These are discussed separately, and the review also compares the compression ratio and the quality of the decompressed files produced by the two methods.
97

Universal homophonic coding

Stevens, Charles Cater 11 1900 (has links)
Redundancy in plaintext is a fertile source of attack in any encryption system. Compression before encryption reduces the redundancy in the plaintext, but this does not make a cipher more secure. The ciphertext is still susceptible to known-plaintext and chosen-plaintext attacks. The aim of homophonic coding is to convert a plaintext source into a random sequence by randomly mapping each source symbol into one of a set of homophones. Each homophone is then encoded by a source coder, after which it can be encrypted with a cryptographic system. The security of homophonic coding falls into the class of unconditionally secure ciphers. The main advantage of homophonic coding over pure source coding is that it provides security both against known-plaintext and chosen-plaintext attacks, whereas source coding merely protects against a ciphertext-only attack. The aim of this dissertation is to investigate the implementation of an adaptive homophonic coder based on an arithmetic coder. This type of homophonic coding is termed universal, as it is not dependent on the source statistics. / Computer Science / M.Sc. (Computer Science)
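As a rough sketch of the basic, non-adaptive idea, assuming the source distribution is known in advance (the dissertation's scheme is adaptive and built on an arithmetic coder): each symbol is given a pool of homophones roughly proportional to its probability, so a uniformly random choice of homophone makes the output stream close to uniform.

```python
# Sketch of basic homophonic substitution with a known source distribution.
import random

def build_tables(probs, total=16):
    """probs: symbol -> probability. Returns (encode map, decode map)."""
    encode, decode, next_id = {}, {}, 0
    for sym, p in probs.items():
        n = max(1, round(p * total))               # homophone pool size for this symbol
        encode[sym] = list(range(next_id, next_id + n))
        for h in range(next_id, next_id + n):
            decode[h] = sym
        next_id += n
    return encode, decode

def homophonic_encode(text, encode, rng=random):
    return [rng.choice(encode[c]) for c in text]   # random homophone per symbol

def homophonic_decode(codes, decode):
    return "".join(decode[h] for h in codes)

probs = {"a": 0.5, "b": 0.25, "c": 0.25}           # toy source statistics
enc, dec = build_tables(probs)
msg = "abacab"
codes = homophonic_encode(msg, enc)                # near-uniform sequence of homophone ids
assert homophonic_decode(codes, dec) == msg
```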
98

Constrained measurement systems of low-dimensional signals

Yap, Han Lun 20 December 2012 (has links)
The object of this thesis is the study of constrained measurement systems of signals having low-dimensional structure using analytic tools from Compressed Sensing (CS). Realistic measurement systems usually have architectural constraints that make them differ from their idealized, well-studied counterparts. Nonetheless, these measurement systems can exploit structure in the signals that they measure. Signals considered in this research have low-dimensional structure and can be broken down into two types: static or dynamic. Static signals are either sparse in a specified basis or lying on a low-dimensional manifold (called manifold-modeled signals). Dynamic signals, exemplified as states of a dynamical system, either lie on a low-dimensional manifold or have converged onto a low-dimensional attractor. In CS, the Restricted Isometry Property (RIP) of a measurement system ensures that distances between all signals of a certain sparsity are preserved. This stable embedding ensures that sparse signals can be distinguished one from another by their measurements and therefore be robustly recovered. Moreover, signal-processing and data-inference algorithms can be performed directly on the measurements instead of requiring a prior signal recovery step. Taking inspiration from the RIP, this research analyzes conditions on realistic, constrained measurement systems (of the signals described above) such that they are stable embeddings of the signals that they measure. Specifically, this thesis focuses on four different types of measurement systems. First, we study the concentration of measure and the RIP of random block diagonal matrices that represent measurement systems constrained to make local measurements. Second, we study the stable embedding of manifold-modeled signals by existing CS matrices. The third part of this thesis deals with measurement systems of dynamical systems that produce time series observations. While Takens' embedding result ensures that this time series output can be an embedding of the dynamical systems' states, our research establishes that a stronger stable embedding result is possible under certain conditions. The final part of this thesis is the application of CS ideas to the study of the short-term memory of neural networks. In particular, we show that the nodes of a recurrent neural network can be a stable embedding of sparse input sequences.
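For reference, the RIP of order s requires (1 - delta) * ||x||^2 <= ||Phi x||^2 <= (1 + delta) * ||x||^2 for every s-sparse x. The following small numerical experiment (an illustration with a dense Gaussian matrix, not the constrained block-diagonal systems analyzed in the thesis) checks how far random sparse vectors deviate from this isometry:

```python
# Empirical near-isometry check for sparse vectors under a random Gaussian
# measurement matrix Phi, with rows scaled so that E||Phi x||^2 = ||x||^2.
import numpy as np

rng = np.random.default_rng(0)
n, m, s, trials = 256, 64, 5, 2000                 # ambient dim, measurements, sparsity
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

worst = 0.0
for _ in range(trials):
    x = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)  # random s-sparse support
    x[support] = rng.standard_normal(s)
    ratio = np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2
    worst = max(worst, abs(ratio - 1.0))            # empirical isometry constant so far

print(f"largest observed deviation of ||Phi x||^2 / ||x||^2 from 1: {worst:.3f}")
```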
100

'n Masjienleerbenadering tot woordafbreking in Afrikaans / A machine-learning approach to hyphenation in Afrikaans

Fick, Machteld 06 1900 (has links)
Text in Afrikaans / The aim of this study was to determine the level of success achievable with a purely pattern-based approach to hyphenation in Afrikaans. The machine-learning techniques investigated were artificial neural networks, decision trees and the TEX algorithm, since all three can be trained on letter patterns from word lists to perform syllabification and decompounding. A lexicon of Afrikaans words was extracted from a corpus of electronic text. To obtain lists for syllabification and decompounding, words in the lexicon were syllabified and compound words were decomposed into their constituent parts. From each resulting list of ±183 000 words, ±10 000 words were reserved as testing data and the rest were used as training data. A recursive algorithm for decompounding was developed in which all words matching a reference list (the lexicon) are extracted by string matching from the beginning and end of words; splitting points are then determined from the lengths of the reassembled words. The algorithm was extended by addressing the shortcomings of this basic procedure.
Artificial neural networks and decision trees were trained, and variations of both were examined to find optimal syllabification and decompounding models. Patterns for the TEX algorithm were generated with the OPatGen program. Testing showed that the TEX algorithm performed best on both syllabification and decompounding, with 99,56% and 99,12% accuracy respectively, so it can be used for hyphenation in Afrikaans with little risk of hyphenation errors in printed text. The artificial neural network, with 98,82% and 98,42% accuracy for syllabification and decompounding respectively, is also usable for syllabification but riskier. The decision tree, with 97,91% accuracy on syllabification and 90,71% on decompounding, was found to be too risky for either task. A combined algorithm was developed in which words are first decompounded with the TEX algorithm, after which the results of syllabifying them with both the TEX algorithm and the neural network are combined. This algorithm made 1,3% fewer errors than the TEX algorithm alone, but missed more hyphens. A test on published Afrikaans text showed that the risk of hyphenation errors, for text with an average of ten words per line, is about 0,02%. / Decision Sciences / D. Phil. (Operational Research)
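The TEX algorithm referred to here encodes allowed break points as Liang-style numbered letter patterns: every pattern matching a word contributes digit values between letters, the highest digit wins at each position, and odd values permit a hyphen. A minimal sketch with a handful of made-up patterns (a real pattern set would be the one OPatGen generates from the Afrikaans training lists):

```python
# Minimal sketch of Liang-style pattern hyphenation as used by the TEX algorithm.
# The three patterns in the example are made up for illustration only.
import re

def parse(pattern):
    """Split a pattern like 'e1k' into its letters and inter-letter digit values."""
    letters = re.sub(r"\d", "", pattern)
    values = [0] * (len(letters) + 1)
    i = 0
    for ch in pattern:
        if ch.isdigit():
            values[i] = int(ch)            # digit sits in the gap before letters[i]
        else:
            i += 1
    return letters, values

def hyphenate(word, patterns, min_left=2, min_right=2):
    padded = "." + word.lower() + "."      # '.' marks the word boundary in patterns
    points = [0] * (len(padded) + 1)
    for pat in patterns:
        letters, values = parse(pat)
        start = padded.find(letters)
        while start != -1:                 # every occurrence of the pattern counts
            for k, v in enumerate(values):
                points[start + k] = max(points[start + k], v)
            start = padded.find(letters, start + 1)
    pieces, prev = [], 0
    for i in range(min_left, len(word) - min_right + 1):
        if points[i + 1] % 2 == 1:         # odd value allows a break before word[i]
            pieces.append(word[prev:i])
            prev = i
    pieces.append(word[prev:])
    return "-".join(pieces)

print(hyphenate("rekenaar", ["e1k", "e1n", "a2a"]))   # -> re-ke-naar
```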
