21

Análise de textura em imagens baseado em medidas de complexidade / Image Texture Analysis based on complexity measures

Rayner Harold Montes Condori 30 November 2015 (has links)
Texture analysis is one of the most basic and popular research areas in computer vision. It is also important in many other disciplines, such as the medical and biological sciences. For example, detecting unhealthy tissue in lung Magnetic Resonance images is a common texture analysis task. We propose a novel method for texture characterization based on complexity measures such as the Lyapunov exponent, the Hurst exponent and Lempel-Ziv complexity. These measures were applied to samples taken from images in the frequency domain. Three sampling methods were proposed: radial sampling, circular sampling, and sampling by partially self-avoiding deterministic walks (CDPA sampling). Each sampling method produces one feature vector per complexity measure, containing a set of descriptors that characterize the processed image. Each image is therefore represented by nine feature vectors (three complexity measures over samples from three sampling methods), which are compared on texture classification tasks. Finally, we concatenate the Lempel-Ziv feature vectors from the radial and circular samplings with descriptors obtained through traditional texture analysis techniques, such as Local Binary Patterns (LBP), Gabor Wavelets (GW), Gray Level Co-occurrence Matrices (GLCM) and partially self-avoiding deterministic walks in graphs (CDPAg). This approach was tested on three datasets: Brodatz, USPtex and UIUC, each with its own well-known challenges. The success rates of all traditional methods increased with the addition of relatively few Lempel-Ziv descriptors; for LBP, for example, the rate went from 84.25% to 89.09% with the addition of only five descriptors. In fact, adding just five Lempel-Ziv descriptors is enough to raise the success rate of every traditional method studied. Conversely, adding too many Lempel-Ziv descriptors (more than 40, for example) generally does not improve results further. Given the similar results obtained across all three databases, we conclude that the approach can be used to increase success rates in many other texture classification tasks. Finally, CDPA sampling also yields promising results, which can be improved in future work.
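To make the pipeline concrete, here is a minimal Python sketch of one of the nine descriptors: the LZ76 phrase count (Kaspar-Schuster algorithm) computed on a radial sample of the image's frequency spectrum. The median threshold, the sample length and the normalization below are illustrative assumptions, not the thesis's exact choices.

    import numpy as np

    def lz76(s):
        # Kaspar-Schuster phrase count of the LZ76 parsing (assumes len(s) >= 2)
        i, k, l, c, k_max, n = 0, 1, 1, 1, 1, len(s)
        while True:
            if s[i + k - 1] == s[l + k - 1]:
                k += 1
                if l + k > n:        # current phrase runs to the end of s
                    return c + 1
            else:
                k_max = max(k, k_max)
                i += 1
                if i == l:           # no earlier match: close a phrase here
                    c += 1
                    l += k_max
                    if l + 1 > n:
                        return c
                    i, k, k_max = 0, 1, 1
                else:
                    k = 1

    def radial_lz_descriptor(img, angle_deg=0.0, n_samples=128):
        # sample the centered FFT magnitude along one radius of the spectrum
        spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
        cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
        r = np.linspace(0, min(cy, cx) - 1, n_samples)
        a = np.deg2rad(angle_deg)
        ys = (cy + r * np.sin(a)).astype(int)
        xs = (cx + r * np.cos(a)).astype(int)
        sample = spec[ys, xs]
        # binarize at the median, then normalize by n / log2(n), the
        # asymptotic LZ76 count of a random binary string of that length
        bits = ''.join('1' if v > np.median(sample) else '0' for v in sample)
        return lz76(bits) * np.log2(len(bits)) / len(bits)

Repeating this over several angles (and analogously over circles) gives a feature vector per complexity measure, matching the structure described above.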
22

Implementation of a Distributed Video Codec

Isik, Cem Vedat 01 February 2008 (has links) (PDF)
Current interframe video compression standards, such as MPEG-4 and H.264, require a high-complexity encoder for predictive coding to exploit the similarities among successive video frames. This requirement is acceptable when the video sequence to be transmitted is encoded once and decoded many times. However, some emerging applications, such as video-based sensor networks, power-aware surveillance and mobile video communication systems, require computational complexity to be shifted from the encoder to the decoder. Distributed Video Coding (DVC) is a new coding paradigm, based on two information-theoretic results, the Slepian-Wolf and Wyner-Ziv theorems, which allows source statistics to be exploited at the decoder only. This architecture therefore enables very simple encoders to be used in video coding. Wyner-Ziv video coding is the particular case of DVC that deals with lossy source coding where side information is available only at the decoder. In this thesis, we implemented a DVC codec based on the DISCOVER (DIStributed COding for Video sERvices) project and carried out a detailed analysis of each block. Several algorithms were implemented for each block, and the results are compared in terms of rate-distortion performance. The implemented architecture is intended to serve as a testbed for future studies.
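The encoder/decoder asymmetry described here rests on binning: the encoder sends only enough to disambiguate the source given side information the decoder builds for itself. Below is a toy scalar-coset sketch of that idea; the actual DISCOVER codec uses turbo/LDPC syndromes on quantized transform bands, not per-pixel moduli, so treat this purely as an illustration.

    import numpy as np

    M = 4  # coset modulus: each sample is binned into one of M cosets

    def wz_encode(x):
        # toy Wyner-Ziv encoder: transmit only the coset index of each sample
        return x % M

    def wz_decode(cosets, side_info):
        # decoder: within each coset, pick the value closest to the side
        # information (e.g. a motion-interpolated frame)
        base = side_info - ((side_info - cosets) % M)
        cand = np.stack([base, base + M])
        pick = np.argmin(np.abs(cand - side_info), axis=0)
        return np.take_along_axis(cand, pick[None], axis=0)[0]

    x  = np.array([100, 101, 98])   # Wyner-Ziv frame samples
    si = np.array([101, 100, 99])   # decoder-side estimate of the frame
    assert (wz_decode(wz_encode(x), si) == x).all()

Decoding succeeds whenever the side information deviates from the source by less than M/2, which is exactly why the decoder-side prediction quality governs the achievable rate.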
23

Coding with side information

Cheng, Szeming 01 November 2005 (has links)
Source coding and channel coding are two important problems in communications. Although side information is present in many everyday scenarios, conventional setups do not take its effect into account. In this thesis, we focus on the practical design of two interesting coding problems with side information: Wyner-Ziv coding (WZC; source coding with side information at the decoder) and Gelfand-Pinsker coding (GPC; channel coding with side information at the encoder). For WZC, we split the design problem into the two cases where the distortion of the reconstructed source is zero and where it is not. We review how the first case, commonly called Slepian-Wolf coding (SWC), can be implemented using conventional channel coding. Then, we detail SWC design using low-density parity-check (LDPC) codes. To facilitate SWC design, we justify a necessary requirement that SWC performance should be independent of the input source. We show that a sufficient condition for this requirement is that the hypothetical channel between the source and the side information satisfies a symmetry condition dubbed dual symmetry. Furthermore, under the dual symmetry condition, the SWC design problem can simply be treated as an LDPC code design problem over the hypothetical channel. When the distortion of the reconstructed source is non-zero, we propose a practical WZC paradigm called Slepian-Wolf coded quantization (SWCQ), which combines SWC and nested lattice quantization. We point out an interesting analogy between SWCQ and entropy-coded quantization in classic source coding. Furthermore, a practical SWCQ scheme using 1-D nested lattice quantization and LDPC codes is implemented. For GPC, since the actual design procedure relies on the more precise setting of the problem, we investigate GPC design in the form of a digital watermarking problem, as digital watermarking is the precise dual of WZC. We then introduce an enhanced version of the well-known spread-spectrum watermarking technique. Two applications related to digital watermarking are presented.
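The "SWC via conventional channel coding" step can be pictured with a tiny syndrome decoder. This sketch uses a hand-made parity-check matrix and naive bit flipping in place of a real LDPC code with belief propagation (which is what the thesis actually designs), so it is a minimal illustration only.

    import numpy as np

    H = np.array([[1, 1, 0, 1, 0, 0],   # toy parity-check matrix; a real
                  [0, 1, 1, 0, 1, 0],   # system would use a large sparse
                  [1, 0, 1, 0, 0, 1]])  # LDPC matrix here

    def sw_encode(x):
        # Slepian-Wolf encoder: send only the syndrome of the source word
        return H @ x % 2

    def sw_decode(syndrome, y, max_iter=10):
        # bit-flipping decoder: flip the bit of y involved in the most
        # currently-failing checks until H y matches the syndrome
        y = y.copy()
        for _ in range(max_iter):
            unsat = (H @ y + syndrome) % 2   # checks that currently fail
            if not unsat.any():
                return y
            votes = unsat @ H                # per-bit count of failed checks
            y[np.argmax(votes)] ^= 1
        return y

    x = np.array([1, 0, 1, 1, 0, 0])   # source word
    y = np.array([1, 1, 1, 1, 0, 0])   # side information: x with one bit flipped
    assert (sw_decode(sw_encode(x), y) == x).all()   # 3 bits sent instead of 6

The decoder recovers x from its 3-bit syndrome plus the correlated y, which is the compression mechanism the thesis builds on at scale.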
24

Computational Intelligence and Complexity Measures for Chaotic Information Processing

Arasteh, Davoud 16 May 2008 (has links)
This dissertation investigates the application of computational intelligence methods to the analysis of nonlinear chaotic systems, in the framework of many known and newly designed complex systems, with parallel comparisons made between the methods. This provides insight into the difficult challenges facing nonlinear systems characterization and aids in developing a generalized algorithm for computing algorithmic complexity measures, Lyapunov exponents, information dimension and topological entropy. These metrics are implemented to characterize the dynamic patterns of discrete and continuous systems, and they make it possible to distinguish order from disorder in these systems. The steps required for computing Lyapunov exponents with a reorthonormalization method and a group-theory approach are formalized. Procedures for implementing the computational algorithms are designed, and numerical results for each system are presented. An advance-time sampling technique is designed to overcome the scarcity of phase-space samples and the buffer-overflow problem in algorithmic complexity estimation for slow-dynamics feedback-controlled systems. It is proved analytically and verified numerically that for a quasiperiodic system such as the Fibonacci map, complexity grows logarithmically with the evolutionary length of the data block. It is concluded that a normalized algorithmic complexity measure can be used as a system classifier: this quantity turns out to be one for random sequences and a non-zero value less than one for chaotic sequences, while for periodic and quasi-periodic responses the normalized complexity approaches zero as the data strings grow, with a faster decreasing rate observed for periodic responses. Algorithmic complexity analysis is also performed on a class of rate-1/n convolutional encoders, measuring the degree of diffusion in random-like patterns. Simulation evidence indicates that the algorithmic complexity associated with a particular class of rate-1/n codes increases with the encoder constraint length, in parallel with the increase in the error-correcting capacity of the decoder. Comparing groups of rate-1/n convolutional encoders, it is observed that as the encoder rate decreases from 1/2 to 1/7, the encoded data sequence manifests smaller algorithmic complexity with a larger free-distance value.
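For a one-dimensional map, the largest Lyapunov exponent reduces to the orbit average of log|f'(x)|, which makes a compact sanity check for the reorthonormalization machinery the dissertation formalizes for higher-dimensional systems. A sketch on the logistic map (parameter values chosen for illustration only):

    import numpy as np

    def logistic_lyapunov(r, x0=0.4, n=100_000, burn=1_000):
        # largest Lyapunov exponent of x -> r*x*(1-x): for a 1-D map it is
        # the time average of log|f'(x)| along the orbit
        x = x0
        for _ in range(burn):                 # discard the transient
            x = r * x * (1 - x)
        acc = 0.0
        for _ in range(n):
            acc += np.log(abs(r * (1 - 2 * x)))   # f'(x) = r(1 - 2x)
            x = r * x * (1 - x)
        return acc / n

    print(logistic_lyapunov(4.0))   # ~ ln 2 = 0.693: chaotic regime
    print(logistic_lyapunov(3.2))   # negative: stable period-2 orbit

A positive exponent flags chaos and a negative one flags periodicity, mirroring how the dissertation's metrics separate order from disorder.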
25

Nelinearna dinamička analiza fizičkih procesa u životnoj sredini / Nonlinear dynamical analysis of the physical processes in the environment

Mimić Gordan 29 September 2016 (has links)
A coupled system of prognostic equations for the ground surface temperature and the deeper soil layer temperature was examined. The Lyapunov exponents, bifurcation diagram and attractor were computed, and the domain of solutions was analyzed. Novel information measures based on Kolmogorov complexity were introduced for quantifying the degree of randomness in time series. The new measures were applied to various series obtained by measuring physical factors of the environment and to climate model outputs.
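Kolmogorov complexity itself is uncomputable, so measures "based on Kolmogorov complexity" are in practice built from compressibility. The following is a hedged sketch of one common proxy, a compression ratio against a random reference; the thesis's actual measures are its own constructions and may differ.

    import numpy as np, zlib

    def randomness_index(series, seed=0):
        # compression-based proxy for Kolmogorov complexity: binarize the
        # series at its mean, pack to bytes, deflate, and normalize by the
        # compressed size of an equally long random bit string
        bits = (np.asarray(series) > np.mean(series)).astype(np.uint8)
        rand = np.random.default_rng(seed).integers(0, 2, bits.size, dtype=np.uint8)
        size = lambda b: len(zlib.compress(np.packbits(b).tobytes(), 9))
        return size(bits) / size(rand)   # ~1 = random, ~0 = regular

    t = np.arange(20_000)
    print(randomness_index(np.sin(0.01 * t)))                         # near 0
    print(randomness_index(np.random.default_rng(1).random(20_000)))  # near 1

Applied to a temperature series, a low index indicates strong regularity while values near one indicate noise-like behavior.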
26

Codage de sources distribuées : Outils et Applications à la compression vidéo / Distributed source coding: tools and applications to video compression

Toto-Zarasoa, Velotiaray 29 November 2010 (has links) (PDF)
Distributed source coding is a technique for compressing several correlated sources without any cooperation between the encoders, and without rate loss provided the sources are decoded jointly. Building on this principle, distributed video coding exploits the correlation between the successive frames of a video, making the encoder as simple as possible and leaving it to the decoder to exploit the correlation. Among the contributions of this thesis, the first part addresses the asymmetric coding of binary sources with non-uniform distributions, and then the coding of sources with hidden Markov states. We first show that, for both types of sources, exploiting the distribution at the decoder increases the compression rate. For the binary symmetric channel modeling the correlation between the sources, we propose a tool, based on the EM algorithm, to estimate its parameter. We show that this tool provides a fast estimate of the parameter while achieving a precision close to the Cramer-Rao bound. In the second part, we develop tools for successfully decoding the sources studied above, using syndrome-based Turbo and LDPC codes together with the EM algorithm. This part was also the occasion to develop new tools for reaching the bounds of both asymmetric and non-asymmetric coding. We also show that, for non-uniform sources, the roles of the correlated sources are not symmetric. Finally, we show that the proposed source models fit the bit-plane distributions of videos well, and we present results demonstrating the effectiveness of the developed tools, which noticeably improve the rate-distortion performance of a distributed video codec, although under certain additivity conditions on the correlation channel.
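The EM-based estimator for the binary symmetric correlation channel can be illustrated in its simplest identifiable setting: a non-uniform Bernoulli source observed only through the channel. This standalone sketch shows the E and M updates; the thesis's estimator runs jointly with the syndrome decoder rather than on raw output, so the setup here is an assumption made for brevity.

    import numpy as np

    def em_bsc(y, q, p0=0.2, iters=50):
        # EM estimate of the crossover probability p of a BSC, observing only
        # the output y of a Bernoulli(q) source (q != 0.5) sent through it.
        # E-step: posterior probability that each bit was flipped.
        # M-step: p <- mean posterior flip probability.
        p = p0
        for _ in range(iters):
            f1 = (1 - q) * p / (q * (1 - p) + (1 - q) * p)   # P(flip | y=1)
            f0 = q * p / ((1 - q) * (1 - p) + q * p)         # P(flip | y=0)
            p = np.mean(np.where(y == 1, f1, f0))            # expected flips
        return p

    rng = np.random.default_rng(1)
    x = (rng.random(100_000) < 0.8).astype(int)   # non-uniform source, q = 0.8
    y = x ^ (rng.random(x.size) < 0.1)            # BSC with true p = 0.1
    print(em_bsc(y, q=0.8))                       # converges near 0.1

Note that the estimate relies on the source being non-uniform, echoing the thesis's observation that exploiting the source distribution at the decoder is what makes such estimation possible.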
27

Football on mobile phones : algorithms, architectures and quality of experience in streaming video

Sun, Jiong January 2006 (has links)
In this thesis we study algorithms and architectures that can provide a better Quality of Experience (QoE) for streaming video systems and services. With cases and examples taken from the application scenario of football on mobile phones, we address the fundamental problems behind streaming video services, so our results can be applied and extended to other networks, other sports and other cultural activities. In algorithm development, we propose five schemes: blind motion estimation and trellis-based motion estimation with dynamic programming for Wyner-Ziv coding; a trans-media technology, vibrotactile coding of visual signals for mobile phones; a new bandwidth prediction scheme for real-time video conferencing; and an effective dynamic programming method to select optimal services and maximize QoE. In architecture design, we offer three architectures for real-time interactive video and two for streaming live football information. The former three are: a structure for motion estimation in Wyner-Ziv coding for real-time video; a variable-bit-rate Wyner-Ziv video coding structure based on a multi-view camera array; and a dynamic resource allocation structure based on 3-D object motion. The latter two are: a vibrotactile signal rendering system for live information; and a Universal Multimedia Access architecture for streaming live football video. In QoE exploration, we give a detailed discussion of QoE and its enabling techniques, develop a conceptual model for QoE, and place streaming video services in a framework of QoE that allows for interaction between the user, content and technology. We demonstrate that it is possible to develop algorithms and architectures that take the user's perspective into account: Quality of Experience in mobile video services is within our reach.
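The abstract does not spell out the dynamic programming formulation for service selection, but one natural way to cast "select optimal services and maximize QoE" is a 0/1 knapsack over a bandwidth budget. The sketch below is purely hypothetical; all service names, costs and scores are invented for illustration.

    def select_services(services, bandwidth):
        # 0/1 knapsack DP: choose services maximizing total QoE score within
        # a bandwidth budget; services = list of (name, kbps_cost, qoe_score)
        best = [0.0] * (bandwidth + 1)
        pick = [[] for _ in range(bandwidth + 1)]
        for name, cost, score in services:
            for b in range(bandwidth, cost - 1, -1):   # descending: 0/1 items
                if best[b - cost] + score > best[b]:
                    best[b] = best[b - cost] + score
                    pick[b] = pick[b - cost] + [name]
        return best[bandwidth], pick[bandwidth]

    score, chosen = select_services(
        [("video-360p", 700, 3.5), ("video-240p", 400, 2.8),
         ("live-stats", 100, 1.2), ("vibration-cues", 50, 0.9)],
        bandwidth=900)
    # picks the subset of streams that fits 900 kbps with the best total score

A fuller model would add mutual-exclusion constraints (only one video quality at a time), but the DP skeleton stays the same.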
28

Análise não linear de padrões encefalográficos de ratos normais e em status epilepticus submetidos a dieta normal e hiperlipídica / Nonlinear analysis of encephalographic patterns of normal rats and rats in status epilepticus fed normal and hyperlipidic diets

PESSOA, Daniella Tavares 28 February 2012 (has links)
The increased consumption of hyperlipidic diets has been linked to rising obesity rates and higher serum cholesterol and triglyceride levels in a large part of the population, as well as to the development of neurodegenerative diseases such as Alzheimer's disease. On the other hand, several studies have demonstrated the importance of lipids in brain structure and activity. Epilepsy is a pathology related to disordered brain activity, with a high rate of refractoriness to conventional therapeutics; in such cases a hyperlipidic diet has been used as an alternative treatment. Therefore, investigating the possible influence of hyperlipidic diets on temporal lobe epilepsy (TLE) can add new perspectives to understanding the behavior and treatment of this pathology. In the present study we used computational mathematical methods to analyze the electrographic patterns of rats in pilocarpine-induced status epilepticus fed a hyperlipidic diet. The rats were analyzed through electrographic parameters extracted from ECoG records: the power-spectrum energies of the delta, theta, alpha and beta bands; Lempel-Ziv complexity; and the fractal dimension of the phase space. Status epilepticus changed the encephalographic pattern as measured by the distribution of the main brain waves in the power spectrum, by Lempel-Ziv complexity and by the fractal dimension of the phase space. In normal rats, the hyperlipidic diet also changed the band energies of the power spectrum and the Lempel-Ziv complexity; the fractal dimension of the phase space, however, showed no significant differences due to the diet. Although the hyperlipidic diet reduced brain activity before pilocarpine administration, nutritional status did not change the encephalographic pattern during status epilepticus. In conclusion, the hyperlipidic diet induced slower brain waves and decreased the complexity of brain activity, effects opposite to those of status epilepticus. The mathematical methods were therefore effective in detecting both the brain hyperactivity caused by status epilepticus and the reduced brain activity induced by the hyperlipidic diet.
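Of the three parameters used here, the band energies are the simplest to sketch: relative power of each classical EEG band from a Welch periodogram. The band boundaries below follow common human EEG conventions and are an assumption (rodent band definitions vary between studies).

    import numpy as np
    from scipy.signal import welch

    BANDS = {"delta": (0.5, 4), "theta": (4, 8),
             "alpha": (8, 12), "beta": (12, 30)}   # Hz; boundaries vary

    def band_energies(signal, fs):
        # relative energy of each band in the Welch power spectrum of an
        # ECoG/EEG trace sampled at fs Hz
        f, pxx = welch(signal, fs=fs, nperseg=int(4 * fs))
        return {name: pxx[(f >= lo) & (f < hi)].sum() / pxx.sum()
                for name, (lo, hi) in BANDS.items()}

A shift of relative energy toward delta, together with a drop in Lempel-Ziv complexity (see the LZ76 sketch under entry 21), is the kind of "slower, less complex" signature the abstract attributes to the hyperlipidic diet.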
29

Correlation attacks on stream ciphers using convolutional codes

Bruwer, Christian S 24 January 2006 (has links)
This dissertation investigates four methods for attacking stream ciphers that are based on nonlinear combining generators: two exhaustive-search correlation attacks, based on the binary derivative and the Lempel-Ziv complexity measure; a fast-correlation attack utilizing the Viterbi algorithm; and a decimation attack that can be combined with any of the other three. These are ciphertext-only attacks that exploit the correlation between the ciphertext and an internal linear feedback shift register (LFSR) of the stream cipher. This leads to a so-called divide-and-conquer attack that is able to reconstruct the secret initial states of all the internal LFSRs within the stream cipher. The binary derivative attack and the Lempel-Ziv attack apply an exhaustive search to find the secret key that is used to initialize the LFSRs, using the binary derivative and the Lempel-Ziv complexity measures to discriminate between correct and incorrect solutions and thereby identify the secret key. Both attacks are well suited to implementation on parallel processors. Experimental results show that the Lempel-Ziv correlation attack succeeds for correlation levels of p = 0.482, requiring approximately 62,000 ciphertext bits, while the binary derivative attack succeeds for correlation levels of p = 0.47, using approximately 24,500 ciphertext bits. The fast-correlation attack, utilizing the Viterbi algorithm, applies principles from convolutional coding theory to identify an embedded low-rate convolutional code in the PN-sequence generated by an internal LFSR; the embedded code can then be decoded with a low-complexity Viterbi algorithm. The algorithm operates in two phases: in the first phase, a set of suitable parity-check equations is found, based on the feedback taps of the LFSR, which has to be done only once for a targeted system; in the second phase, these parity-check equations are used in a Viterbi decoding algorithm to recover the transmitted PN-sequence, thereby obtaining the secret initial state of the LFSR. Simulation results for a 19-bit LFSR show that this attack can recover the secret key for correlation levels of p = 0.485, requiring an average of only 153,448 ciphertext bits. All three attacks investigated in this dissertation are capable of attacking LFSRs with a length of approximately 40 bits. However, they can be extended to much longer LFSRs by means of the decimation attack, which reduces (decimates) the size of a targeted LFSR and can be combined with any of the three correlation attacks above to attack LFSRs far longer than 40 bits. / Dissertation (MEng (Electronic Engineering))--University of Pretoria, 2007.
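The exhaustive-search attacks can be sketched in a few lines: regenerate the LFSR output for every candidate initial state and keep the state whose agreement with the ciphertext deviates most from 1/2; the binary-derivative and Lempel-Ziv measures refine this discriminating statistic. The tap-indexing convention and the raw agreement statistic below are simplifying assumptions for illustration.

    import numpy as np

    def lfsr(state, taps, n):
        # Fibonacci LFSR sketch: emit n bits from an initial state (tuple of
        # 0/1 bits), feeding back the XOR of the tapped positions; the tap
        # indexing is a toy convention, not a standardized polynomial
        state, out = list(state), []
        for _ in range(n):
            out.append(state[-1])
            fb = 0
            for t in taps:
                fb ^= state[t]
            state = [fb] + state[:-1]
        return np.array(out, dtype=np.uint8)

    def binary_derivative(bits):
        # first binary derivative b[i] ^ b[i+1]; iterating it and tracking
        # the fraction of ones gives one of the discriminating statistics
        return bits[:-1] ^ bits[1:]

    def correlation_attack(keystream, taps, nbits):
        # divide-and-conquer search: the candidate initial state whose LFSR
        # output agreement with the keystream deviates most from 1/2 wins
        n = len(keystream)
        def bias(seed):
            agree = np.count_nonzero(lfsr(seed, taps, n) == keystream)
            return abs(agree / n - 0.5)
        states = (tuple((s >> i) & 1 for i in range(nbits))
                  for s in range(1, 2 ** nbits))
        return max(states, key=bias)

At correlation levels as weak as p = 0.47-0.485, raw agreement counting alone is not enough; the dissertation's point is precisely that better discriminators (binary derivative, Lempel-Ziv complexity, Viterbi decoding) extract the correlation from far fewer ciphertext bits.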
