31 |
Segmentation de documents administratifs en couches couleur / Segmentation of administrative document images into color layers. Carel, Elodie, 08 October 2015 (has links)
Les entreprises doivent traiter quotidiennement de gros volumes de documents papiers de toutes sortes. Automatisation, traçabilité, alimentation de systèmes d’informations, réduction des coûts et des délais de traitement, la dématérialisation a un impact économique évident. Pour respecter les contraintes industrielles, les processus historiques d’analyse simplifient les images grâce à une séparation fond/premier-plan. Cependant, cette binarisation peut être source d’erreurs lors des étapes de segmentation et de reconnaissance. Avec l’amélioration des techniques, la communauté d’analyse de documents a montré un intérêt croissant pour l’intégration d’informations colorimétriques dans les traitements, ceci afin d’améliorer leurs performances. Pour respecter le cadre imposé par notre partenaire privé, l’objectif était de mettre en place des processus non supervisés. Notre but est d’être capable d’analyser des documents même rencontrés pour la première fois quels que soient leurs contenus, leurs structures, et leurs caractéristiques en termes de couleurs. Les problématiques de ces travaux ont été d’une part l’identification d’un nombre raisonnable de couleurs principales sur une image ; et d’autre part, le regroupement en couches couleur cohérentes des pixels ayant à la fois une apparence colorimétrique très proche, et présentant une unité logique ou sémantique. Fournies sous forme d’un ensemble d’images binaires, ces couches peuvent être réinjectées dans la chaîne de dématérialisation en fournissant une alternative à l’étape de binarisation classique. Elles apportent en plus des informations complémentaires qui peuvent être exploitées dans un but de segmentation, de localisation, ou de description. Pour cela, nous avons proposé une segmentation spatio-colorimétrique qui permet d’obtenir un ensemble de régions locales perceptuellement cohérentes appelées superpixels, et dont la taille s’adapte au contenu spécifique des images de documents. Ces régions sont ensuite regroupées en couches couleur globales grâce à une analyse multi-résolution. / Industrial companies receive huge volumes of documents every day. Automation, traceability, feeding information systems, reducing costs and processing times: dematerialization has a clear economic impact. In order to respect the industrial constraints, the traditional digitization process simplifies the images by performing a background/foreground separation. However, this binarization can lead to segmentation and recognition errors. With the improvements of technology, the document analysis community has shown a growing interest in integrating color information into the process to enhance its performance. In order to work within the scope defined by our industrial partner's digitization flow, an unsupervised segmentation approach was chosen. Our goal is to be able to cope with document images, even when they are encountered for the first time, regardless of their content, their structure, and their color properties. To this end, the first issue of this project was to identify a reasonable number of main colors observable on an image. Then, we group pixels having both close color properties and a logical or semantic unity into consistent color layers. Provided as a set of binary images, these layers may be reinjected into the digitization chain as an alternative to the conventional binarization. Moreover, they also provide extra information about colors which can be exploited for segmentation, element spotting, or description. We have therefore proposed a spatio-colorimetric approach which yields a set of perceptually meaningful local regions, known as superpixels, whose size adapts to the specific content of document images. These regions are then merged into global color layers by means of a multiresolution analysis.
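The pipeline described above can be sketched in a few lines. In the sketch below, off-the-shelf SLIC superpixels and k-means stand in for the thesis's content-adaptive superpixels and multi-resolution grouping; the image path, segment count, and layer count are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch of a superpixel-then-color-layer pipeline: SLIC and k-means
# stand in for the thesis's own methods; parameters are illustrative.
import numpy as np
from skimage import io
from skimage.segmentation import slic
from sklearn.cluster import KMeans

image = io.imread("document.png")[:, :, :3]   # hypothetical RGB document image
labels = slic(image, n_segments=800, compactness=10, start_label=0)

# Mean color of each superpixel.
n_sp = labels.max() + 1
means = np.array([image[labels == i].mean(axis=0) for i in range(n_sp)])

# Group superpixels into a small number of global color layers.
n_layers = 4
layer_of_sp = KMeans(n_clusters=n_layers, n_init=10).fit_predict(means)

# One binary mask per color layer, usable in place of a single binarization.
layer_masks = [(layer_of_sp[labels] == k) for k in range(n_layers)]
```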
|
32 |
Non-stationary signal classification for radar transmitter identification. Du Plessis, Marthinus Christoffel, 09 September 2010 (has links)
The radar transmitter identification problem involves identifying a specific radar transmitter based on a received pulse. The radar transmitters are of identical make and model, which makes the problem challenging since the differences between them are solely due to component tolerances and variation. Radar pulses also vary in time and frequency, which means that the problem is non-stationary. For this reason, time-frequency representations such as shift-invariant quadratic time-frequency representations (Cohen’s class) and wavelets were used. A model for a radar transmitter was developed, consisting of an analytical solution to a pulse-forming network and a linear model of an oscillator. Three signal classification algorithms were developed. The first classifier used a radially Gaussian Cohen’s class transform; this time-frequency representation was refined to increase the classification accuracy, and the classification was performed with a support vector machine. The second classifier used a wavelet packet transform to calculate the feature values, again classified with a support vector machine. The third classifier also used the wavelet packet transform to calculate the feature values but used a Universum-type classifier, which exploits signals from the same domain to increase the classification accuracy. The classifiers were compared against each other on a cubic and exponential chirp test problem and on the radar transmitter model. The classifier based on the Cohen’s class transform achieved the best classification accuracy, while the classifier based on the wavelet packet transform achieved excellent results on an electroencephalography (EEG) test dataset at significantly lower complexity. / Dissertation (MEng)--University of Pretoria, 2010. / Electrical, Electronic and Computer Engineering
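As a rough illustration of the second classifier described above, the sketch below computes wavelet packet energies as features and trains a support vector machine. The wavelet ('db4'), decomposition depth, and SVM settings are assumptions for illustration, not the thesis's tuned configuration.

```python
# Minimal sketch: wavelet packet energies as features, SVM as classifier.
import numpy as np
import pywt
from sklearn.svm import SVC

def wp_energy_features(signal, wavelet="db4", level=3):
    """Energy of each wavelet packet node at the given depth."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.sum(node.data ** 2) for node in nodes])

# pulses: (n_pulses, n_samples) array; transmitter_ids: (n_pulses,) labels.
def train_classifier(pulses, transmitter_ids):
    X = np.array([wp_energy_features(p) for p in pulses])
    X /= X.sum(axis=1, keepdims=True)   # normalize out overall pulse power
    clf = SVC(kernel="rbf", C=1.0)
    return clf.fit(X, transmitter_ids)
```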
|
33 |
Image Approximation using Triangulation. Trisiripisal, Phichet, 11 July 2003 (has links)
An image is a set of quantized intensity values sampled at a finite set of points on a two-dimensional plane. Images are crucial to many application areas, such as computer graphics and pattern recognition, because they discretely represent the information that the human eye interprets. This thesis considers the use of triangular meshes for approximating intensity images. With the help of wavelet-based analysis, triangular meshes can be efficiently constructed to approximate the image data. The study focuses on local image enhancement and mesh simplification operations, which aim to minimize both the total error of the reconstructed image and the number of triangles used to represent it. It also presents an optimal procedure for selecting the triangle types used to represent the intensity image. Besides its applications to image and video compression, this triangular representation is potentially very useful for data storage and retrieval, and for processing tasks such as image segmentation and object recognition. / Master of Science
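A minimal sketch of the idea, assuming random point sampling in place of the wavelet-guided selection and mesh operations developed in the thesis: triangulate a subset of pixels and reconstruct the image by piecewise-linear interpolation over the mesh.

```python
# Rough sketch of image approximation by a triangular mesh; random sampling
# stands in for the thesis's wavelet-guided point selection.
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def triangulated_approximation(img, n_points=2000, seed=0):
    rng = np.random.default_rng(seed)
    h, w = img.shape
    # Always keep the four corners so the mesh covers the whole image.
    corners = np.array([[0, 0], [0, w - 1], [h - 1, 0], [h - 1, w - 1]])
    interior = rng.integers([0, 0], [h, w], size=(n_points, 2))
    pts = np.unique(np.vstack([corners, interior]), axis=0)
    vals = img[pts[:, 0], pts[:, 1]].astype(float)
    tri = Delaunay(pts)
    interp = LinearNDInterpolator(tri, vals)
    yy, xx = np.mgrid[0:h, 0:w]
    approx = interp(np.column_stack([yy.ravel(), xx.ravel()])).reshape(h, w)
    rmse = np.sqrt(np.nanmean((approx - img) ** 2))  # reconstruction error
    return approx, rmse
```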
|
34 |
Analyse harmonique sur graphes dirigés et applications : de l'analyse de Fourier aux ondelettes / Harmonic Analysis on directed graphs and applications : From Fourier analysis to wavelets. Sevi, Harry, 22 November 2018 (has links)
La recherche menée dans cette thèse a pour but de développer une analyse harmonique pour des fonctions définies sur les sommets d'un graphe orienté. À l'ère du déluge de données, de nombreuses données sont sous forme de graphes et de données sur ces graphes. Afin d'analyser et d'exploiter ces données de graphes, nous avons besoin de développer des méthodes mathématiques numériquement efficientes. Ce développement a conduit à l'émergence d'un nouveau cadre théorique appelé le traitement de signal sur graphe, dont le but est d'étendre les concepts fondamentaux du traitement de signal classique aux graphes. Inspirées par l'aspect multi-échelle des graphes et des données sur graphes, de nombreuses constructions multi-échelles ont été proposées. Néanmoins, elles s'appliquent uniquement dans le cadre non orienté. L'extension d'une analyse harmonique au graphe orienté, bien que naturelle, s'avère complexe. Nous proposons donc une analyse harmonique utilisant l'opérateur de marche aléatoire comme point de départ de notre cadre. Premièrement, nous proposons des bases de type Fourier formées des vecteurs propres de l'opérateur de marche aléatoire. De ces bases de Fourier, nous déterminons une notion fréquentielle en analysant la variation de leurs vecteurs propres. La détermination d'une analyse fréquentielle à partir des vecteurs propres de l'opérateur de marche aléatoire nous amène aux constructions multi-échelles sur graphes orientés. Plus particulièrement, nous proposons une construction en trames d'ondelettes ainsi qu'une construction d'ondelettes décimées sur graphes orientés. Nous illustrons notre analyse harmonique par divers exemples afin d'en montrer l'efficience et la pertinence. / The research conducted in this thesis aims to develop a harmonic analysis for functions defined on the vertices of a directed graph. In the era of the data deluge, much data comes in the form of graphs and of data living on these graphs. In order to analyze and exploit such graph data, we need to develop mathematically sound and numerically efficient methods. This development has led to the emergence of a new theoretical framework called signal processing on graphs, which aims to extend the fundamental concepts of conventional signal processing to graphs. Inspired by the multi-scale aspect of graphs and graph data, many multi-scale constructions have been proposed. However, they apply only to the undirected setting. The extension of harmonic analysis to directed graphs, although natural, is complex. We therefore propose a harmonic analysis that uses the random walk operator as the starting point of our framework. First, we propose Fourier-type bases formed by the eigenvectors of the random walk operator. From these Fourier bases, we determine a notion of frequency by analyzing the variation of the eigenvectors. This frequency analysis, derived from the eigenvectors of the random walk operator, leads us to multi-scale constructions on directed graphs. More specifically, we propose a wavelet frame construction as well as a decimated wavelet construction on directed graphs. We illustrate our harmonic analysis with various examples to show its efficiency and relevance.
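The starting point described above can be made concrete. The sketch below builds the random walk operator P = D^{-1}A of a small directed graph, takes its eigenvectors as Fourier-type modes, and orders them by |1 - eigenvalue| as a simple stand-in for the variation-based frequency notion; the adjacency matrix is purely illustrative.

```python
# Sketch: Fourier-type modes on a digraph from the random walk operator.
import numpy as np

A = np.array([[0, 1, 0, 0],      # directed edges i -> j, toy example
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

P = A / A.sum(axis=1, keepdims=True)      # random walk operator D^{-1} A
eigvals, eigvecs = np.linalg.eig(P)       # generally complex for digraphs

# "Low frequency" ~ eigenvalue near 1; a crude proxy for the thesis's
# variation measure, used here only to order the modes.
order = np.argsort(np.abs(1.0 - eigvals))
fourier_modes = eigvecs[:, order]

# Graph Fourier coefficients of a signal f on the vertices.
f = np.array([1.0, 2.0, 0.5, 3.0])
coeffs = np.linalg.solve(fourier_modes, f)  # expand f in the eigenbasis
```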
|
35 |
Un système intégré d'acquisition 3D multispectral : acquisition, codage et compression des données / A 3D multispectral integrated acquisition system : acquisition, data coding and compression. Delcourt, Jonathan, 29 October 2010 (has links)
Nous avons développé un système intégré permettant l'acquisition simultanée de la forme 3D ainsi que de la réflectance des surfaces des objets scannés. Nous appelons ce système un scanner 3D multispectral du fait qu’il combine, dans un couple stéréoscopique, une caméra multispectrale et un système projecteur de lumière structurée. Nous voyons plusieurs possibilités d’application pour un tel système mais nous mettons en avant des applications dans le domaine de l’archivage et la diffusion numériques des objets du patrimoine. Dans le manuscrit, nous présentons d’abord ce système ainsi que tous les calibrages et traitements nécessaires à sa mise en oeuvre. Ensuite, une fois que le système est fonctionnel, les données qui en sont générées sont riches d’informations, hétérogènes (maillage + réflectances, etc.) et surtout occupent beaucoup de place. Ce fait rend problématiques le stockage et la transmission, notamment pour des applications en ligne de type musée virtuel. Pour cette raison, nous étudions les différentes possibilités de représentation et de codage des données acquises par ce système pour en adopter la plus pertinente. Puis nous examinons les stratégies les plus appropriées à la compression de telles données, sans toutefois perdre la généralité sur d’autres données (type satellitaire). Nous réalisons un benchmark des stratégies de compression en proposant un cadre d’évaluation et des améliorations sur les stratégies classiques existantes. Cette première étude nous permettra de proposer une approche adaptative qui se révélera plus efficace pour la compression et notamment dans le cadre de la stratégie que nous appelons Full-3D. / We have developed an integrated system permitting the simultaneous acquisition of the 3D shape and the spectral reflectance of scanned object surfaces. We call this system a 3D multispectral scanner because it combines, within a stereoscopic pair, a multispectral camera and a structured-light projector. We see several possible applications for such an acquisition system, but we highlight applications in the field of digital archiving and dissemination of heritage objects. In the manuscript we first introduce the acquisition system together with the calibrations and processing steps required for its use. Once the acquisition system is functional, the data it generates are rich in information, heterogeneous (mesh + reflectance, etc.), and above all occupy a large amount of memory. This makes storage and transmission problematic, especially for online applications such as virtual museums. For this reason we study the different possibilities for representing and coding the data acquired by this system in order to adopt the most appropriate one. We then examine the strategies best suited to compressing such data, without losing generality with respect to other data (e.g., of satellite type). We benchmark compression strategies by providing an evaluation framework and improvements on existing conventional strategies. This first study allows us to propose an adaptive approach that proves more effective for compression, particularly within the strategy that we call Full-3D.
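To make the band-by-band versus Full-3D distinction concrete, here is a hedged sketch contrasting 2D wavelet coding of each band with a joint 3D decomposition of the whole multispectral cube, followed by crude coefficient thresholding. The wavelet, level, and keep ratio are illustrative assumptions; a real codec would add quantization and entropy coding.

```python
# Sketch: per-band 2D wavelet coding vs. joint "Full-3D" decomposition.
import numpy as np
import pywt

def keep_largest(coeff_arr, keep_ratio):
    """Zero all but the largest coefficients (a crude rate control)."""
    k = max(1, int(keep_ratio * coeff_arr.size))
    thresh = np.partition(np.abs(coeff_arr).ravel(), -k)[-k]
    return np.where(np.abs(coeff_arr) >= thresh, coeff_arr, 0.0)

def full3d_compress(cube, keep_ratio=0.05, wavelet="db2", level=2):
    """cube: (bands, height, width); assumes enough bands for the level."""
    coeffs = pywt.wavedecn(cube, wavelet, level=level)       # joint 3D DWT
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr = keep_largest(arr, keep_ratio)
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
    return pywt.waverecn(coeffs, wavelet)

def per_band_compress(cube, keep_ratio=0.05, wavelet="db2", level=2):
    out = []
    for band in cube:                                        # independent 2D DWTs
        coeffs = pywt.wavedec2(band, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr = keep_largest(arr, keep_ratio)
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        out.append(pywt.waverec2(coeffs, wavelet))
    return np.stack(out)
```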
|
36 |
A Novel Framework to Determine Physiological Signals From Blood Flow Dynamics. Chetlur Adithya, Prashanth, 03 April 2018 (has links)
The Centers for Disease Control and Prevention (CDC) estimates that more than 11.2 million people require critical and emergency care in the United States each year. Improving patient morbidity and mortality outcomes is the primary objective of monitoring in critical and emergency care. Patients in need of critical or emergency care are generally at risk of single or multiple organ failures occurring due to a traumatic injury, a surgical event, or an underlying pathology that results in severe hemodynamic instability. Hence, continuous monitoring of fundamental cardiovascular hemodynamic parameters, such as heart rate, respiratory rate, blood pressure, blood oxygenation, and core temperature, is essential for diagnostics in critical and emergency care. Today’s standard of care measures these critical parameters using multiple monitoring technologies.
Though it is possible to measure all the fundamental cardiovascular hemodynamic parameters from blood flow dynamics, their use is currently limited to continuous blood pressure measurement. No comparable studies in the literature have succeeded in quantifying other critical parameters from blood flow dynamics, for a few reasons. First, blood flow exhibits a complicated and sensitive dynamic pressure field, and existing blood-flow-based data acquisition systems are unable to detect these sensitive variations. Further, the pressure field is influenced by background acoustic interference, resulting in a noisy pressure profile. Thus, in order to extract critical parameters from this dynamic pressure field with fidelity, there is a need for an integrated framework composed of a highly sensitive data acquisition system and advanced signal processing. In addition, existing state-of-the-art technologies require expensive instrumentation and complex infrastructure. The information sensed by these multiple monitoring technologies is integrated and visualized using a clinical information system, a process that creates the need for functional interoperability among the monitoring technologies. Limited functional interoperability not only results in diagnostic errors; the complexity of these technologies also makes them impractical for monitoring in low-resource settings. These monitoring technologies are neither portable nor scalable, and they induce extreme patient discomfort. For these reasons, existing monitoring technologies do not efficiently meet the monitoring and diagnostic requirements of critical and emergency care.
To address the challenges presented by existing blood-flow-based data acquisition systems and other monitoring systems, a point-of-care monitoring device was developed that provides multiple critical parameters by means of uniquely measuring a physiological process. To demonstrate the usability of this novel catheter multiscope, a feasibility study was performed using an animal model, and the corresponding results are presented in this dissertation. The measurement system first acquires the blood flow dynamics through a minimally invasive catheter. A signal processing framework then characterizes the blood flow dynamics and provides critical parameters such as heart rate, respiratory rate, and blood pressure. The framework used to extract the physiological data corresponding to the acoustic field of the blood flow consists of a noise cancellation technique and a wavelet-based source separation. Preliminary results on the acoustic field of the blood flow revealed the presence of acoustic heart and respiratory pulses. A unique framework was also developed to extract continuous blood pressure from the pressure field of the blood flow. Finally, the computed heart and respiratory rates and the systolic and diastolic pressures were benchmarked against values measured using conventional devices to validate the measurements of the catheter multiscope.
In summary, the results of the feasibility study showed that the novel catheter multiscope can provide critical parameters such as heart rate, respiratory rate, and blood pressure with clinical accuracy. This dissertation also highlights the diagnostic potential of the catheter multiscope by presenting preliminary results of proof-of-concept studies for application cases such as sinus rhythm pattern recognition and fetal monitoring through phonocardiography.
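A simplified sketch of the rate-extraction step is given below: band-pass the acquired pressure signal around generic cardiac and respiratory bands and count peaks. The band edges and peak-spacing constraint are common physiological assumptions, not the dissertation's validated pipeline, which also includes noise cancellation and wavelet-based source separation.

```python
# Sketch: heart and respiratory rates from a pressure signal by band-pass
# filtering and peak counting; band edges are generic assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def rate_from_band(signal, fs, lo_hz, hi_hz):
    """Estimate events per minute inside a frequency band."""
    b, a = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, signal)
    peaks, _ = find_peaks(filtered, distance=int(0.8 * fs / hi_hz))
    duration_min = len(signal) / fs / 60.0
    return len(peaks) / duration_min

# pressure: samples from the catheter sensor, fs: sampling rate in Hz.
def vital_rates(pressure, fs):
    heart_rate = rate_from_band(pressure, fs, 0.8, 3.0)  # ~48-180 bpm
    resp_rate = rate_from_band(pressure, fs, 0.1, 0.5)   # ~6-30 breaths/min
    return heart_rate, resp_rate
```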
|
37 |
關於週期性波包近似值的理論與應用 / On the Theory and Applications of Periodic Wavelet Approximation. 鄧起文 (Deng, Qi Wen), Unknown Date (has links)
在本篇論文裏，我們將使用所謂的週期化(periodization)的裝置作用於Daubechies' compactly supported wavelets上而得到一族構成L^2([0,1])和H^s-periodic (the space of periodic functions locally in H^s)基底的正交的週期性波包(orthonormal periodic wavelets)。然後我們給出了對於一函數的波包近似值的誤差估計(參閱定理6)以及對於週期性邊界值的常微分方程問題的解的波包近似值的誤差估計(參閱定理7)。對於Burgers equation的數值解也當作一個應用來討論。 / In this thesis, we construct a family of orthonormal periodic wavelets which form a basis of L^2([0,1]) and of H^s-periodic (the space of periodic functions locally in H^s) by using a device called periodization ([10,7]) on Daubechies' compactly supported wavelets. We then give error estimates for the wavelet approximation to a given function (see Theorem 6) and to the solution of a periodic boundary value problem for an ordinary differential equation (see Theorem 7). The numerical solution of the Burgers equation is also discussed as an application.
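The construction can be tried numerically. In the sketch below, PyWavelets' built-in 'periodization' mode plays the role of the periodization device applied to Daubechies wavelets, and truncating fine-scale coefficients yields the periodic wavelet approximation whose error the cited theorems bound; the test function and retained levels are arbitrary choices.

```python
# Sketch: periodic wavelet approximation of a function on [0, 1).
import numpy as np
import pywt

n = 1024                                  # dyadic number of samples on [0, 1)
x = np.arange(n) / n
f = np.sin(2 * np.pi * x) + 0.3 * np.cos(6 * np.pi * x)  # periodic test function

coeffs = pywt.wavedec(f, "db4", mode="periodization", level=5)

# Keep only the coarse approximation and the two coarsest detail levels.
truncated = [c if i <= 2 else np.zeros_like(c) for i, c in enumerate(coeffs)]
f_approx = pywt.waverec(truncated, "db4", mode="periodization")

err = np.max(np.abs(f - f_approx))        # sup-norm approximation error
```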
|
38 |
Fluxes and Mixing Processes in the Marine Atmospheric Boundary Layer. Nilsson, Erik Olof, January 2013 (has links)
Atmospheric models are strongly dependent on the turbulent exchange of momentum, sensible heat and moisture (latent heat) at the surface. Oceans cover about 70% of the Earth’s surface, and understanding the processes that control air-sea exchange is of great importance for predicting weather and climate. In the atmosphere, for instance, hurricane development and cyclone intensity and track depend on these processes. Ocean waves constitute an obvious example of air-sea interaction and can cause the air flow over sea to depend on surface conditions in uniquely different ways compared to boundary layers over land. When waves are generated by the local wind they are called wind sea or growing sea; when they leave their generation area or propagate faster than the generating wind they are called swell. The air-sea exchange is mediated by turbulent eddies occurring on many different scales. Field measurements and high-resolution turbulence-resolving numerical simulations have here been used to study these processes. The standard method to measure turbulent fluxes is the eddy covariance method. A spatial separation is often used between instruments when measuring scalar flux; this causes an error which was investigated for the first time over sea. The error is typically smaller over ocean than over land, possibly indicating changes in turbulence structure over sea. Established and extended analysis methods to determine the dominant scales of momentum transfer were used to interpret how reduced drag, and sometimes net upward momentum flux, can persist in boundary layers indirectly affected by swell. A changed turbulence structure, with increased turbulence length scales and more effective mixing, was found for swell. A study using a coupled wave-atmosphere regional climate model gave a first indication of the impact wave mixing has on atmospheric and wave parameters. Near-surface wind speed and wind gradients were affected, especially for shallow boundary layers, whose height typically increased with the introduced wave mixing. A large impact may be expected in regions of the world with predominant swell. The impact of swell waves on air-sea exchange and mixing should be taken into account to develop more reliable coupled Earth system models.
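For reference, the eddy covariance method mentioned above amounts to a covariance between fluctuating quantities after removing the mean. The sketch below illustrates it on synthetic series; real processing adds coordinate rotation, detrending, and quality control, and the synthetic data here are uncorrelated, so the estimated flux is near zero.

```python
# Minimal illustration of the eddy covariance method.
import numpy as np

def eddy_covariance_flux(w, c):
    """Kinematic flux <w'c'> from vertical wind w and another quantity c."""
    w_prime = w - w.mean()   # fluctuation about the (here, block) mean
    c_prime = c - c.mean()
    return np.mean(w_prime * c_prime)

# Example: momentum flux <u'w'> from synthetic 20 Hz, 30-minute series.
rng = np.random.default_rng(1)
u = 8.0 + rng.normal(0, 0.6, 20 * 1800)  # along-wind speed
w = rng.normal(0, 0.3, 20 * 1800)        # vertical velocity
momentum_flux = eddy_covariance_flux(w, u)  # ~ -u_*^2 for correlated data
```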
|
39 |
Multiresolution analysis of ultrasound images of the prostate. Zhao, Fangwei, January 2004 (has links)
[Truncated abstract] Transrectal ultrasound (TRUS) has become the urologist’s primary tool for diagnosing and staging prostate cancer due to its real-time and non-invasive nature, low cost, and minimal discomfort. However, the interpretation of a prostate ultrasound image depends critically on the experience and expertise of the urologist and is still difficult and subjective. To overcome subjective interpretation and facilitate objective diagnosis, computer-aided analysis of ultrasound images of the prostate would be very helpful, as it may improve diagnostic accuracy by providing a more reproducible interpretation of the images. This thesis addresses several key elements of computer-aided analysis of ultrasound images of the prostate. Specifically, it addresses the following tasks: 1. modelling B-mode ultrasound image formation and statistical properties; 2. reducing ultrasound speckle; and 3. extracting the prostate contour. Speckle refers to the granular appearance that compromises image quality and resolution in optics, synthetic aperture radar (SAR), and ultrasound. Due to the existence of speckle, the appearance of a B-mode ultrasound image does not necessarily relate to the internal structure of the object being scanned. A computer simulation of B-mode ultrasound imaging is presented, which provides not only an insight into the nature of speckle but also a viable test-bed for any ultrasound speckle reduction method. Motivated by analysis of the statistical properties of the simulated images, the generalised Fisher-Tippett distribution is empirically proposed for analysing the statistical properties of ultrasound images of the prostate. A speckle reduction scheme is then presented, based on Mallat and Zhong’s dyadic wavelet transform (MZDWT) and on modelling the statistical properties of the wavelet coefficients and exploiting their inter-scale correlation. Specifically, the squared modulus of the component wavelet coefficients is modelled as a two-state Gamma mixture. Inter-scale correlation is exploited by taking the harmonic mean of the posterior probability functions derived from the Gamma mixture. This noise reduction scheme is applied to both simulated and real ultrasound images, and its performance is quite satisfactory in that the important features of the original noise-corrupted image are preserved while most of the speckle noise is removed. It is also evaluated both qualitatively and quantitatively against median, Wiener, and Lee filters, and the results reveal that it surpasses all of them. A novel contour extraction scheme (CES), which fuses MZDWT and snakes, is proposed on the basis of multiresolution analysis (MRA). Extraction of the prostate contour is placed in a multi-scale framework provided by MZDWT. Specifically, the external potential functions of the snake are designated as the modulus of the wavelet coefficients at different scales, and are thus “switchable”. Such a multi-scale snake, which deforms and migrates from coarse to fine scales, eventually extracts the contour of the prostate
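The structure of the speckle-reduction scheme can be sketched as follows, with an undecimated wavelet transform in place of MZDWT and a Wiener-style weight in place of the two-state Gamma-mixture posterior; only the harmonic-mean combination across scales is kept as described, so this is a structural illustration, not the thesis's method.

```python
# Heavily simplified despeckling sketch: undecimated wavelet transform,
# a stand-in "signal" probability per coefficient, and a harmonic mean of
# probabilities across adjacent scales.
import numpy as np
import pywt

def despeckle(img, wavelet="haar", levels=3, eps=1e-12):
    # Image sides must be divisible by 2**levels for swt2.
    coeffs = pywt.swt2(img.astype(float), wavelet, level=levels)
    # Stand-in posterior P(signal | W) per scale from the squared modulus.
    posts = []
    for _, (cH, cV, cD) in coeffs:
        m2 = cH**2 + cV**2 + cD**2
        noise = np.median(m2)             # crude noise-level estimate
        posts.append(m2 / (m2 + noise))
    new_coeffs = []
    for s, (cA, details) in enumerate(coeffs):
        # Harmonic mean of posteriors at this scale and a neighboring one.
        nb = posts[min(s + 1, levels - 1)]
        p = 2.0 / (1.0 / (posts[s] + eps) + 1.0 / (nb + eps))
        new_coeffs.append((cA, tuple(d * p for d in details)))
    return pywt.iswt2(new_coeffs, wavelet)
```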
|
40 |
Uma ferramenta para Análise Multiresolução de dados não regularmente amostrados. Medeiros, Luiz Paulo de Souza, 24 February 2012 (has links)
Digital signal processing (DSP) aims to extract specific information from digital signals. Digital signals are, by definition, physical quantities represented by a sequence of discrete values, and from these sequences it is possible to extract and analyze the desired information. Unevenly sampled data cannot be properly analyzed using standard digital signal processing techniques. This work adapts a DSP technique, multiresolution analysis, to unevenly sampled data, in order to aid the studies of the CoRoT laboratory at UFRN. The approach is based on re-indexing the wavelet transform so that it handles unevenly sampled data properly. The method proved effective, presenting satisfactory results. / O processamento digital de sinais (PDS) tem como objetivo a extração de informações específicas a partir de sinais armazenados digitalmente. Os sinais digitais são, por definição, grandezas físicas representadas por uma sequência de valores discretos e é a partir dessas sequências de valores que é possível extrair e analisar as informações desejadas. Os sinais digitais não regularmente espaçados não são corretamente analisados utilizando as técnicas padrões do processamento digital de sinais. Neste trabalho teve-se o objetivo de adequar uma técnica de PDS, a análise multiresolução, para analisar sinais não regularmente espaçados, visando auxiliar as pesquisas realizadas no laboratório CoRoT na UFRN. O trabalho desenvolvido consiste em uma reindexação da transformada Wavelet para tratar os dados não regularmente espaçados de maneira adequada. O método mostrou-se efetivo, apresentando resultados satisfatórios.
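The re-indexing idea can be illustrated with a Haar-like pyramid on dyadic time bins, where each unevenly sampled point contributes according to its position in time rather than its array index. This is a conceptual sketch, not the dissertation's exact re-indexed wavelet transform; the light-curve-style test data are synthetic.

```python
# Sketch: Haar-like multiresolution of unevenly sampled data (t_i, x_i)
# via dyadic time bins.
import numpy as np

def dyadic_multires(t, x, levels):
    """Mean value per dyadic time bin at each resolution level."""
    t = (t - t.min()) / (t.max() - t.min())      # normalize time to [0, 1]
    pyramid = []
    for j in range(levels):
        n_bins = 2 ** j
        idx = np.minimum((t * n_bins).astype(int), n_bins - 1)
        sums = np.bincount(idx, weights=x, minlength=n_bins)
        counts = np.bincount(idx, minlength=n_bins)
        with np.errstate(invalid="ignore"):
            pyramid.append(sums / counts)        # NaN where a bin is empty
    return pyramid

# Example with irregular sampling times, e.g., gapped light-curve data.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 500))
x = np.sin(2 * np.pi * 0.3 * t) + 0.1 * rng.normal(size=t.size)
approximations = dyadic_multires(t, x, levels=6)
```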
|