11

Análise de textura em imagens baseado em medidas de complexidade / Image Texture Analysis Based on Complexity Measures

Rayner Harold Montes Condori 30 November 2015
Texture analysis is one of the most basic and popular research areas in computer vision. It is also important in many other disciplines, such as the medical and biological sciences; a common texture analysis task, for example, is the detection of unhealthy tissue in Magnetic Resonance images of the lung. In this dissertation we propose a novel method for texture characterization based on complexity measures such as the Hurst exponent, the Lyapunov exponent and the Lempel-Ziv complexity. These measures are applied to samples taken from images in the frequency domain. Three sampling methods are proposed: radial sampling, circular sampling, and sampling by partially self-avoiding deterministic walks (CDPA sampling). Each sampling method produces one feature vector per complexity measure, containing a set of descriptors that characterize the processed image. Each image is therefore represented by nine feature vectors (three complexity measures over samples from three sampling methods), which are compared on texture classification tasks. Finally, we concatenate each feature vector obtained by computing the Lempel-Ziv complexity of the radial and circular samples with descriptors obtained through traditional texture analysis techniques, such as Local Binary Patterns (LBP), Gabor Wavelets (GW), Gray Level Co-occurrence Matrices (GLCM) and partially self-avoiding deterministic walks on graphs (CDPAg). This approach was tested on three image databases, Brodatz, USPtex and UIUC, each with its own well-known challenges. The success rates of all the traditional methods increased with the concatenation of relatively few Lempel-Ziv descriptors; in the LBP case, for example, the rate rose from 84.25% to 89.09% with the addition of only five descriptors. In fact, concatenating just five descriptors is enough to improve the success rate of every traditional method studied, whereas concatenating too many Lempel-Ziv descriptors (more than 40, for instance) generally brings no further improvement. Given the similar results obtained on the three databases, we conclude that the proposed method can be used to increase success rates in other texture classification tasks. Finally, CDPA sampling also yields promising results, which can be improved in future work.
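To make the pipeline concrete, a minimal Python sketch of one descriptor is given below: sample the magnitude of the image's Fourier spectrum along radial lines, binarize the samples, and compute the Lempel-Ziv complexity of the resulting bit string. This is not the dissertation's implementation; the sampling geometry, the median thresholding and the n/log2(n) normalization are assumptions made for illustration, and only the Lempel-Ziv parsing (the Kaspar-Schuster formulation of LZ76) is standard.

import numpy as np

def lz76_complexity(s):
    # Kaspar-Schuster formulation of the Lempel-Ziv (1976) complexity:
    # the number of phrases in the exhaustive-history parsing of sequence s.
    s = list(s)
    n = len(s)
    if n < 2:
        return n
    c, l, i, k, k_max = 1, 1, 0, 1, 1
    while True:
        if s[i + k - 1] != s[l + k - 1]:
            if k > k_max:
                k_max = k
            i += 1
            if i == l:          # no earlier substring reproduces the new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
        else:
            k += 1
            if l + k > n:
                c += 1
                break
    return c

def radial_sample(image, num_angles=64, num_radii=32):
    # Magnitude of the centred 2D Fourier spectrum, sampled along radial
    # lines from the centre outwards (assumed sampling geometry).
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    max_r = min(cy, cx) - 1
    samples = []
    for theta in np.linspace(0.0, np.pi, num_angles, endpoint=False):
        radii = np.linspace(1, max_r, num_radii)
        ys = np.round(cy + radii * np.sin(theta)).astype(int)
        xs = np.round(cx + radii * np.cos(theta)).astype(int)
        samples.append(spectrum[ys, xs])
    return np.concatenate(samples)

def lz_descriptor(image):
    # Binarize the samples around their median and return the Lempel-Ziv
    # complexity normalized by n / log2(n) (assumed normalization).
    values = radial_sample(image)
    bits = (values > np.median(values)).astype(int)
    n = len(bits)
    return lz76_complexity(bits) / (n / np.log2(n))

# Example: descriptor for a random 128 x 128 "texture".
print(lz_descriptor(np.random.rand(128, 128)))

Repeating the same construction with circular and CDPA samples, and with the Hurst and Lyapunov measures, would give the nine feature vectors described above.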
12

Computational Intelligence and Complexity Measures for Chaotic Information Processing

Arasteh, Davoud 16 May 2008
This dissertation investigates the application of computational intelligence methods to the analysis of nonlinear chaotic systems, in the framework of many known and newly designed complex systems, with parallel comparisons made between the methods. This provides insight into the difficult challenges of characterizing nonlinear systems and aids in developing a generalized algorithm for computing algorithmic complexity measures, Lyapunov exponents, information dimension and topological entropy. These metrics are implemented to characterize the dynamic patterns of discrete and continuous systems and make it possible to distinguish order from disorder in them. Steps required for computing Lyapunov exponents with a reorthonormalization method and a group-theory approach are formalized. Procedures for implementing the computational algorithms are designed and numerical results for each system are presented. An advance-time sampling technique is designed to overcome the scarcity of phase-space samples and the buffer-overflow problem in algorithmic complexity estimation for slow-dynamics feedback-controlled systems. It is proved analytically and tested numerically that for a quasiperiodic system such as a Fibonacci map, complexity grows logarithmically with the evolutionary length of the data block. It is concluded that a normalized algorithmic complexity measure can be used as a system classifier: the quantity approaches one for random sequences and a non-zero value less than one for chaotic sequences, while for periodic and quasi-periodic responses the normalized complexity approaches zero as the data strings grow, with a faster decreasing rate for periodic responses. Algorithmic complexity analysis is also performed on a class of rate-1/n convolutional encoders, and the degree of diffusion in their random-like output patterns is measured. Simulation evidence indicates that the algorithmic complexity associated with a particular class of rate-1/n codes increases with the encoder constraint length, in parallel with the increase in the error-correcting capacity of the decoder. Comparing groups of rate-1/n convolutional encoders, it is observed that as the encoder rate decreases from 1/2 to 1/7, the encoded data sequence exhibits smaller algorithmic complexity together with a larger free-distance value.
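As a small illustration of how such metrics separate order from disorder, the sketch below estimates the largest Lyapunov exponent of the logistic map from the analytic derivative along an orbit. For a one-dimensional map this reduces to averaging log|f'(x_n)|, so no reorthonormalization is needed; the map, parameters and iteration counts are illustrative choices, not taken from the dissertation.

import numpy as np

def logistic_lyapunov(r, x0=0.4, n_transient=1000, n_iter=100000):
    # Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    # estimated as the orbit average of log|f'(x)| = log|r*(1 - 2*x)|.
    x = x0
    for _ in range(n_transient):        # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        acc += np.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n_iter

print(logistic_lyapunov(3.2))   # negative: periodic (ordered) regime
print(logistic_lyapunov(4.0))   # approximately ln 2: chaotic regime

A negative exponent flags a periodic regime and a positive one flags chaos; for r = 4 the estimate converges to ln 2.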
13

Análise não linear de padrões encefalográficos de ratos normais e em status epilepticus submetidos a dieta normal e hiperlipídica / Nonlinear analysis of encephalographic patterns of normal rats and rats in status epilepticus fed normal and hyperlipidic diets

PESSOA, Daniella Tavares 28 February 2012
The increased consumption of hyperlipidic (high-fat) diets has been accompanied by rising obesity rates and higher serum cholesterol and triglyceride levels in a large part of the population, and has also been linked to the development of neurodegenerative diseases such as Alzheimer's disease. On the other hand, several studies have demonstrated the importance of lipids in brain structure and activity. Epilepsy is a pathology related to disordered brain activity with a high rate of refractoriness to conventional therapeutics; in such cases a hyperlipidic diet has been used as an alternative treatment. The investigation of possible interference from hyperlipidic diets in temporal lobe epilepsy (TLE) can therefore add new perspectives to the understanding and treatment of this pathology. In the present study we used computational mathematical methods to analyze the electrographic patterns of rats in pilocarpine-induced status epilepticus fed a hyperlipidic diet. The rats were analyzed through electrographic parameters from ECoG records, determining the energies of the power spectrum in the delta, theta, alpha and beta frequency bands, the Lempel-Ziv complexity, and the fractal dimension of the phase space. Status epilepticus induced changes in the encephalographic pattern as measured by the distribution of the main brain-wave energies in the power spectrum, the Lempel-Ziv complexity and the fractal dimension of the phase space. The hyperlipidic diet in normal rats also changed the brain-wave energy values in the power spectrum and the Lempel-Ziv complexity; the fractal dimension of the phase space, however, showed no significant differences due to the diet. Although the hyperlipidic diet reduced brain activity before pilocarpine administration, the nutritional status did not change the encephalographic pattern during status epilepticus. In conclusion, the hyperlipidic diet induced slower brain waves and decreased the complexity of brain activity, effects opposite to those of status epilepticus. The mathematical methods were therefore effective in detecting both the brain hyperactivity caused by status epilepticus and the reduced brain activity induced by the hyperlipidic diet.
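A sketch of the first of the three descriptors, the distribution of power-spectrum energy across the delta, theta, alpha and beta bands, is given below. The band limits, the use of Welch's method and the normalization to relative energies are assumptions made for illustration; the abstract does not specify how the spectrum was estimated.

import numpy as np
from scipy.signal import welch

# Conventional band limits in Hz; the exact values used in the thesis are
# not given in the abstract, so these are standard textbook choices.
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0),
         "alpha": (8.0, 12.0), "beta": (12.0, 30.0)}

def band_energies(ecog, fs):
    # Relative energy of each band in the Welch power spectrum of one channel.
    freqs, psd = welch(ecog, fs=fs, nperseg=min(len(ecog), 4 * int(fs)))
    total = np.trapz(psd, freqs)
    energies = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        energies[name] = np.trapz(psd[mask], freqs[mask]) / total
    return energies

# Synthetic example: a 5 Hz (theta-band) oscillation plus noise at 200 Hz.
fs = 200.0
t = np.arange(0, 60, 1.0 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
print(band_energies(x, fs))   # the theta band should dominate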
14

Correlation attacks on stream ciphers using convolutional codes

Bruwer, Christian S 24 January 2006
This dissertation investigates four methods for attacking stream ciphers that are based on nonlinear combining generators: two exhaustive-search correlation attacks, based on the binary derivative and the Lempel-Ziv complexity measure; a fast-correlation attack utilizing the Viterbi algorithm; and a decimation attack that can be combined with any of the other three attacks. These are ciphertext-only attacks that exploit the correlation between the ciphertext and an internal linear feedback shift register (LFSR) of a stream cipher. This leads to a so-called divide-and-conquer attack that is able to reconstruct the secret initial states of all the internal LFSRs within the stream cipher. The binary derivative attack and the Lempel-Ziv attack apply an exhaustive search to find the secret key that is used to initialize the LFSRs; the binary derivative and Lempel-Ziv complexity measures are used to discriminate between correct and incorrect solutions in order to identify the secret key. Both attacks are well suited to implementation on parallel processors. Experimental results show that the Lempel-Ziv correlation attack succeeds at correlation levels of p = 0.482, requiring approximately 62000 ciphertext bits, while the binary derivative attack succeeds at correlation levels of p = 0.47, using approximately 24500 ciphertext bits. The fast-correlation attack, utilizing the Viterbi algorithm, applies principles from convolutional coding theory to identify an embedded low-rate convolutional code in the pn-sequence generated by an internal LFSR. The embedded convolutional code can then be decoded with a low-complexity Viterbi algorithm. The algorithm operates in two phases: in the first phase a set of suitable parity-check equations is found, based on the feedback taps of the LFSR, which has to be done only once for a targeted system; in the second phase these parity-check equations are used in a Viterbi decoding algorithm to recover the transmitted pn-sequence, thereby obtaining the secret initial state of the LFSR. Simulation results for a 19-bit LFSR show that this attack can recover the secret key at correlation levels of p = 0.485, requiring an average of only 153,448 ciphertext bits. All three attacks investigated in this dissertation are capable of attacking LFSRs with a length of approximately 40 bits. However, they can be extended to much longer LFSRs by making use of the decimation attack, which reduces (decimates) the size of a targeted LFSR and can be combined with any of the three correlation attacks. / Dissertation (MEng (Electronic Engineering))--University of Pretoria, 2007.
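The exhaustive-search side of these attacks can be illustrated with a toy sketch: generate the output of a candidate LFSR initial state, measure how strongly it is correlated with the observed bit stream, and keep the state with the largest bias. For simplicity the discriminator below is the raw agreement rate rather than the binary-derivative or Lempel-Ziv statistics used in the dissertation, and the register layout, feedback taps and data sizes are illustrative only.

from itertools import product

def lfsr_sequence(state, taps, length):
    # Fibonacci LFSR: output state[0] each step, shift left, feed back the
    # XOR of the tapped positions. 'state' is a list of bits.
    state = list(state)
    out = []
    for _ in range(length):
        out.append(state[0])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = state[1:] + [fb]
    return out

def correlation_score(state, taps, observed):
    # Fraction of positions where the candidate LFSR output agrees with the
    # observed bit stream; the correct state deviates noticeably from 0.5.
    z = lfsr_sequence(state, taps, len(observed))
    agree = sum(1 for a, b in zip(z, observed) if a == b)
    return agree / len(observed)

def exhaustive_search(n, taps, observed):
    # Try all 2^n - 1 non-zero initial states and keep the most biased one.
    best_state, best_bias = None, -1.0
    for bits in product([0, 1], repeat=n):
        if not any(bits):
            continue
        bias = abs(correlation_score(list(bits), taps, observed) - 0.5)
        if bias > best_bias:
            best_state, best_bias = list(bits), bias
    return best_state, best_bias

With a small register and a few thousand observed bits, the correct initial state stands out because its bias stays near |p - 0.5| while wrong states drift toward zero.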
15

Fast Low Memory T-Transform: string complexity in linear time and space with applications to Android app store security.

Rebenich, Niko 27 April 2012
This thesis presents flott, the Fast Low Memory T-Transform, currently the fastest and most memory-efficient linear time and space algorithm available for computing the string complexity measure T-complexity. The flott algorithm uses 64.3% less memory and, in our experiments, runs asymptotically 20% faster than its predecessor. A full C implementation is provided and published under the Apache License 2.0. From the flott algorithm two deterministic information measures are derived and applied to Android app store security: the normalized T-complexity distance and the instantaneous T-complexity rate, which are used to detect, locate, and visualize unusual information changes in Android applications. The information measures introduced present a novel, scalable approach to assist with the detection of malware in app stores.
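The abstract does not detail the two derived measures, so the sketch below only illustrates the underlying idea with a stand-in: a normalized compression distance computed with zlib instead of the T-complexity produced by flott. Values near 0 mean two byte strings share most of their information, values near 1 mean they do not, which is the kind of signal used to flag unusual changes between application versions.

import os
import zlib

def ncd(x, y):
    # Normalized compression distance over byte strings: near 0 when the two
    # inputs share most of their information, near 1 when they do not.
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Two "versions" that share most of their bytes score low; unrelated data
# scores close to 1.
v1 = os.urandom(4096)
v2 = v1[:3800] + os.urandom(296)        # small, localized change
print(ncd(v1, v2))
print(ncd(v1, os.urandom(4096)))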
16

Modelos de compressão de dados para classificação e segmentação de texturas

Honório, Tatiane Cruz de Souza 31 August 2010
This work analyzes methods for texture image classification and segmentation using models from lossless data compression algorithms. Two compression algorithms are evaluated: Prediction by Partial Matching (PPM) and Lempel-Ziv-Welch (LZW), which had already been applied to texture classification in previous works. The textures are pre-processed using histogram equalization. The classification method is divided into two stages. In the learning (training) stage, the compression algorithm builds statistical models for the horizontal and vertical structures of each class. In the classification stage, the texture samples to be classified are compressed using the models built in the learning stage, sweeping each sample horizontally and vertically, and a sample is assigned to the class that obtains the highest average compression. The classifiers were tested on the Brodatz texture album for various context sizes (in the PPM case), numbers of samples and training sets; for some combinations of these parameters, the classifiers achieved 100% correct classification. Texture segmentation was performed only with PPM. Initially, the horizontal models are created using eight texture samples of 32 x 32 pixels per class, with a PPM context of maximum size 1. The images to be segmented are compressed by the class models, initially in blocks of 64 x 64 pixels. If none of the models achieves a compression ratio within a predetermined interval, the block is divided into four blocks of 32 x 32 pixels. The process is repeated until some model reaches a compression ratio within the range defined for the block size in question; if a block reaches size 4 x 4, it is classified as belonging to the class of the model that achieved the highest compression ratio.
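The classification stage lends itself to a short sketch. The Python code below trains an LZW phrase table per class and assigns a test sample to the class whose table compresses it best; it is a simplified, byte-oriented illustration that ignores the histogram equalization, the separate horizontal and vertical sweeps and the PPM variant described in the abstract.

import numpy as np

def train_lzw_model(training_bytes):
    # Build an LZW phrase table from a class's training data.
    table = {bytes([i]): i for i in range(256)}
    w = b""
    for b in training_bytes:
        wc = w + bytes([b])
        if wc in table:
            w = wc
        else:
            table[wc] = len(table)
            w = bytes([b])
    return table

def lzw_code_count(data, table):
    # Number of codes emitted when compressing 'data' against a frozen table;
    # fewer codes means the class model fits the sample better.
    w = b""
    count = 0
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc
        else:
            count += 1
            w = bytes([b])
    if w:
        count += 1
    return count

def classify(sample, models):
    # Assign the sample to the class whose model yields the highest
    # compression ratio (input bytes per emitted code).
    raw = np.asarray(sample, dtype=np.uint8).tobytes()
    ratios = {name: len(raw) / lzw_code_count(raw, table)
              for name, table in models.items()}
    return max(ratios, key=ratios.get)

Here models is assumed to map each class name to a table built by calling train_lzw_model on the concatenation of that class's training samples.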
17

Robust Techniques Of Language Modeling For Spoken Language Identification

Basavaraja, S V January 2007
Language Identification (LID) is the task of automatically identifying the language of a speech signal uttered by an unknown speaker. An N-language LID task is to classify an input speech utterance, spoken by an unknown speaker and of unknown text, as belonging to one of the N languages L1, L2, ..., LN. We present a new approach to spoken language modeling for language identification using the Lempel-Ziv-Welch (LZW) algorithm, with which we try to overcome the limitations of n-gram stochastic models by automatically identifying the valid set of variable-length patterns from the training data. However, since several patterns in a language pattern table are also shared by other language pattern tables, confusability arises in the LID task. To overcome this, three pruning techniques are proposed to make the pattern tables more language-specific. For LID with limited training data, we present another language modeling technique that compensates for language-specific patterns missing from the language-specific LZW pattern table. We develop two new discriminative measures for LID based on the LZW algorithm, viz., (i) the Compression Ratio Score (LZW-CRS) and (ii) the Weighted Discriminant Score (LZW-WDS). It is shown that for a 6-language LID task on the OGI-TS database, the new model (LZW-WDS) significantly outperforms the conventional bigram approach. With regard to the front end of the LID system, we develop a modified technique for modeling Acoustic Sub-Word Units (ASWU) and explore its effectiveness. The segmentation of the speech signal is done using an acoustic criterion (ML segmentation). However, we believe that consistency and discriminability among speech units are the key issues for the success of ASWU-based speech processing. We develop a new procedure for clustering and modeling the segments using sub-word GMMs. Because of the flexibility in choosing the labels for the sub-word units, we perform an iterative re-clustering and modeling of the segments. Using a consistency measure for labeling the acoustic segments, the convergence of the iterations is demonstrated. We show that the new ASWU-based front end combined with the new LZW-based back end outperforms the earlier reported PSWR-based LID system.
