421

Image transition techniques using projective geometry

Wong, Tzu Yen January 2009
[Truncated abstract] Image transition effects are commonly used on television and human computer interfaces. The transition between images creates a perception of continuity which has aesthetic value in special effects and practical value in visualisation. The work in this thesis demonstrates that better image transition effects are obtained by incorporating properties of projective geometry into image transition algorithms. Current state-of-the-art techniques can be classified into two main categories, namely shape interpolation and warp generation. Many shape interpolation algorithms aim to preserve rigidity, but none preserve it with perspective effects. Most warp generation techniques focus on smoothness and lack the rigidity of perspective mapping. The affine transformation, a commonly used mapping between triangular patches, is rigid but not able to model perspective effects. Image transition techniques from the view interpolation community are effective in creating transitions with the correct perspective effect; however, those techniques usually require more feature points and algorithms of higher complexity. The motivation of this thesis is to enable different views of a planar surface to be interpolated with an appropriate perspective effect. The projective geometric relationship which produces the perspective effect can be specified by two quadrilaterals. This problem is equivalent to finding a perspectively appropriate interpolation for projective transformation matrices. I present two algorithms that enable smooth perspective transition between planar surfaces. The algorithms only require four point correspondences on two input images. ...The second algorithm generates transitions between shapes that lie on the same plane which exhibits a strong perspective effect. It recovers the perspective transformation which produces the perspective effect and constrains the transition so that the in-between shapes also lie on the same plane. For general image pairs with multiple quadrilateral patches, I present a novel algorithm that is transitionally symmetrical and exhibits good rigidity. The use of quadrilaterals, rather than triangles, allows an image to be represented by a small number of primitives. This algorithm uses a closed-form force equilibrium scheme to correct the misalignment of the multiple transitional quadrilaterals. I also present an application for my quadrilateral interpolation algorithm in Seitz and Dyer's view morphing technique. This application automates and improves the calculation of the reprojection homography in the postwarping stage of their technique. Finally, I unify different image transition research areas into a common framework, which enables analysis and comparison of the techniques and the quality of their results. I highlight that quantitative measures can greatly facilitate the comparisons among different techniques and present a quantitative measure based on epipolar geometry. This novel quantitative measure enables the quality of transitions between images of a scene from different viewpoints to be quantified by its estimated camera path.
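
The projective relationship specified by two quadrilaterals can be recovered from four point correspondences with the standard direct linear transform. The sketch below, which is illustrative only and not code from the thesis, estimates that homography and then forms a naive linear blend between the identity and the homography; this is the kind of perspective-unaware in-between mapping the proposed algorithms improve on. Function names and coordinates are placeholders.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 projective transformation (homography) mapping four
    source points onto four destination points via the standard DLT system."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    # The homography is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def naive_transition(H, t):
    """Baseline in-between mapping at time t in [0, 1]: a linear blend of the
    identity and the homography.  This is the perspective-unaware interpolation
    that more careful schemes aim to improve on."""
    return (1.0 - t) * np.eye(3) + t * H

# Example: two quadrilaterals (placeholder coordinates) specifying the relationship.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = np.array([[0.1, 0.05], [0.9, 0.0], [1.0, 1.1], [0.0, 0.95]], dtype=float)
H = homography_from_points(src, dst)
H_half = naive_transition(H, 0.5)   # mapping for the half-way frame
```
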
422

Pixel and patch based texture synthesis using image segmentation

Tran, Minh Tue January 2010
[Truncated abstract] Texture exists all around us and serves as an important visual cue for the human visual system. Captured within an image, we identify texture by its recognisable visual pattern. It carries extensive information and plays an important role in our interpretation of a visual scene. The subject of this thesis is texture synthesis, which is defined as the creation of a new texture that shares the fundamental visual characteristics of an existing texture such that the new image and the original are perceptually similar. Textures are used in computer graphics, computer-aided design, image processing and visualisation to produce realistic recreations of what we see in the world. For example, the texture on an object communicates its shape and surface properties in a 3D scene. Humans can discriminate between two textures and decide on their similarity in an instant, yet achieving this algorithmically is not a simple process. Textures range in complexity, and developing an approach that consistently synthesises this immense range is a difficult problem to solve and motivates this research. Typically, texture synthesis methods aim to replicate texture by transferring the recognisable repeated patterns from the sample texture to the synthesised output. Feature transferal can be achieved by matching pixels or patches from the sample to the output. As a result, two main approaches, pixel-based and patch-based, have established themselves in the active field of texture synthesis. This thesis contributes to the present knowledge by introducing two novel texture synthesis methods. Both methods use image segmentation to improve synthesis results. ... The sample is segmented and the boundaries of the middle patch are confined to follow segment boundaries. This prevents texture features from being cut off prematurely, a common artifact of patch-based results, and eliminates the need for the patch boundary comparisons that most other patch-based synthesis methods employ. Since no user input is required, this method is simple and straightforward to run. The tiling of pre-computed tile pairs allows outputs that are large relative to the sample size to be generated quickly. Output results show great success for textures with stochastic and semi-stochastic clustered features, but future work is needed to suit more highly structured textures. Lastly, these two texture synthesis methods are applied to the areas of image restoration and image replacement. These two areas of image processing involve replacing parts of an image with synthesised texture and are often referred to as constrained texture synthesis. Images can contain a large amount of complex information; therefore, replacing parts of an image while maintaining image fidelity is a difficult problem to solve. The texture synthesis approaches and constrained synthesis implementations proposed in this thesis achieve successful results comparable with present methods.
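
For readers who want a concrete anchor for the patch-based family mentioned above, the sketch below fills an output canvas with square patches drawn at random from the sample. It deliberately omits the segmentation guidance and boundary handling contributed by the thesis; names and patch sizes are assumptions.

```python
import numpy as np

def synthesize_by_tiling(sample, out_shape, patch=32, seed=0):
    """Fill an output canvas by copying square patches chosen at random from
    the sample texture.  No segmentation guidance and no patch-boundary
    matching are performed, so visible seams are expected."""
    rng = np.random.default_rng(seed)
    h, w = sample.shape[:2]
    out = np.zeros(out_shape + sample.shape[2:], dtype=sample.dtype)
    for y in range(0, out_shape[0], patch):
        for x in range(0, out_shape[1], patch):
            sy = rng.integers(0, h - patch + 1)
            sx = rng.integers(0, w - patch + 1)
            ph = min(patch, out_shape[0] - y)
            pw = min(patch, out_shape[1] - x)
            out[y:y + ph, x:x + pw] = sample[sy:sy + ph, sx:sx + pw]
    return out

# Example with random noise standing in for a real sample texture.
sample = np.random.default_rng(42).random((128, 128, 3))
result = synthesize_by_tiling(sample, (256, 256), patch=32)
```
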
423

Syntactic models with applications in image analysis

Evans, Fiona H January 2007
[Truncated abstract] The field of pattern recognition aims to develop algorithms and computer programs that can learn patterns from data, where learning encompasses the problems of recognition, representation, classification and prediction. Syntactic pattern recognition recognises that patterns may be hierarchically structured. Formal language theory is an example of a syntactic approach, and is used extensively in computer languages and speech processing. However, the underlying structure of language and speech is strictly one-dimensional. The application of syntactic pattern recognition to the analysis of images requires an extension of formal language theory. Thus, this thesis extends and generalises formal language theory to apply to data that have possibly multi-dimensional underlying structure and also hierarchic structure... As in the case for curves, shapes are modelled as a sequence of local relationships between the curves, and these are estimated using a training sample. Syntactic square detection was extremely successful – detecting 100% of squares in images containing only a single square, and over 50% of the squares in images containing ten squares that were highly likely to be partially or severely occluded. The detection and classification of polygons was successful, despite a tendency for occluded squares and rectangles to be confused. The algorithm also performed well on real images containing fish. The success of the syntactic approaches for detecting edges, detecting curves, and detecting, classifying and counting occluded shapes is evidence of the potential of syntactic models.
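
One minimal way to picture "a sequence of local relationships estimated from a training sample" is to describe a closed polygon by its turning angles and fit a simple Gaussian model to them. The toy sketch below is only a stand-in for the syntactic models in the thesis; the shapes, the scoring rule and all names are invented for illustration.

```python
import numpy as np

def turning_angles(vertices):
    """Describe a closed polygon by the turning angle at each vertex, i.e. one
    simple 'local relationship' between successive boundary segments."""
    v = np.asarray(vertices, dtype=float)
    edges = np.roll(v, -1, axis=0) - v
    headings = np.arctan2(edges[:, 1], edges[:, 0])
    turns = np.diff(np.concatenate([headings, headings[:1]]))
    return (turns + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]

def fit_model(training_shapes):
    """Estimate the mean and variance of each local relationship from a
    training sample of shapes with the same number of vertices."""
    data = np.array([turning_angles(s) for s in training_shapes])
    return data.mean(axis=0), data.var(axis=0) + 1e-6

def log_likelihood(shape, model):
    """Score a candidate shape under the fitted Gaussian model."""
    mean, var = model
    t = turning_angles(shape)
    return float(-0.5 * np.sum((t - mean) ** 2 / var + np.log(2 * np.pi * var)))

# Toy usage: train on slightly perturbed unit squares, then score a candidate.
rng = np.random.default_rng(0)
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
training = [square + rng.normal(0, 0.01, (4, 2)) for _ in range(20)]
model = fit_model(training)
score = log_likelihood(square * 2.0, model)   # same turning angles, so a high score
```
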
424

Diagnostic imaging pathways

Dhillon, Ravinder January 2007
[Truncated abstract] Hypothesis: There is a deficiency in the evidence base and scientific underpinning of existing diagnostic imaging pathways (DIP) for diagnostic endpoints. Objectives: a) To carry out a systematic review of the literature in relation to the use of diagnostic imaging tests for the diagnosis and investigation of 78 common clinical problems; b) To identify deficiencies and controversies in existing diagnostic imaging pathways, and to develop a new set of consensus-based pathways for diagnostic imaging (DIP), supported by evidence, as an education and decision support tool for hospital-based doctors and general practitioners; c) To carry out a trial dissemination, implementation and evaluation of DIP. Methods: 78 common clinical presentations were chosen for the development of DIP. For general practitioners, clinical topics were selected based on the following criteria: a common clinical problem, complexity with regard to the options available for imaging, susceptibility to inappropriate imaging resulting in unnecessary expenditure and/or radiation exposure, and new options for imaging of which general practitioners may not be aware. For hospital-based junior doctors and medical students, additional criteria included: an acute presentation in which immediate access to expert radiological opinion may be lacking, and a clinical problem for which there is a need for education. A systematic review of the literature in relation to each of the 78 topics was carried out using Ovid, PubMed and the Cochrane Database of Systematic Reviews. ... The electronic environment and the method of delivery provided a satisfactory medium for dissemination. Getting DIP implemented required vigorous effort. Knowledge of diagnostic imaging and requesting behaviour tended to become more aligned with DIP following a period of intensive marketing. Conclusions: Systematic review of the literature, together with input and feedback from various clinicians and radiologists, led to the development of 78 consensus-based Diagnostic Imaging Pathways supported by evidence. These pathways are a valuable decision support tool and a definite step towards incorporating evidence-based medicine in patient management. The clinical and academic content of DIP is of practical use to a wide range of clinicians in hospital and general practice settings. It is a source of high-level knowledge and a reference tool for the latest available and most effective imaging test for a particular clinical problem. In addition, it is an educational tool for medical students, junior doctors, medical imaging technologists, and allied health care personnel.
425

Efficient registration of limited field of view ocular fundus imagery

Van der Westhuizen, Christo Carel 12 1900
Thesis (MScEng)--Stellenbosch University, 2013. / ENGLISH ABSTRACT: Diabetic and hypertensive retinopathy are two common causes of blindness that can be prevented by managing the underlying conditions. Patients suffering from these conditions are encouraged to undergo regular examinations to monitor the retina for signs of deterioration. For these routine examinations an ophthalmoscope is used. An ophthalmoscope is a relatively inexpensive device that allows an examiner to directly observe the ocular fundus (the interior back wall of the eye that contains the retina). These devices are analog and do not allow the capture of digital imagery. Fundus cameras, on the other hand, are larger devices that offer high quality digital images. They do, however, come at an increased cost and are not practical for use in the field. In this thesis the design and implementation of a system that digitises imagery from an ophthalmoscope is discussed. The main focus is the development of software algorithms to increase the quality of the images to yield results of a quality closer to that of a fundus camera. The aim is not to match the capabilities of a fundus camera, but rather to offer a cost-effective alternative that delivers sufficient quality for use in conducting routine monitoring of the aforementioned conditions. For the digitisation the camera of a mobile phone is proposed. The camera is attached to an ophthalmoscope to record a video of an examination. Software algorithms are then developed to parse the video frames and combine those that are of better quality. For the parsing, a method of rapidly selecting valid frames based on colour thresholding and spatial filtering techniques is developed. Registration is the process of determining how the selected frames fit together. Spatial cross-correlation is used to register the frames. Only translational transformations are assumed between frames, and the designed algorithms focus on estimating this relative translation in a large set of frames. Methods of optimising these operations are also developed. For the combination of the frames, averaging is used to form a composite image. The results obtained are in the form of enhanced grayscale images of the fundus. These images do not match those captured with fundus cameras in terms of quality, but do show a significant improvement when compared to the individual frames that they consist of. Collectively, a set of video frames can cover a larger region of the fundus than the frames do individually. By combining these frames an effective increase in the field of view is obtained. Due to low light exposure, the individual frames also contain significant noise. In the results the noise is reduced through the averaging of several frames that overlap at the same location.
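
A minimal software sketch of the registration-and-averaging pipeline described in the abstract is shown below, assuming purely translational motion between grayscale frames. FFT-based cross-correlation and wrap-around shifting are simplifications chosen for brevity, not the exact algorithms developed in the thesis.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the purely translational offset between two grayscale frames
    using FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame)))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Indices past the half-way point correspond to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def average_registered(frames):
    """Shift every frame onto the first one and average them, which suppresses
    the noise of the individual low-exposure frames.  np.roll wraps around at
    the borders, a simplification a real implementation would avoid."""
    ref = frames[0].astype(float)
    acc = ref.copy()
    for frame in frames[1:]:
        dy, dx = estimate_shift(ref, frame)
        acc += np.roll(frame.astype(float), (dy, dx), axis=(0, 1))
    return acc / len(frames)

# Example with synthetic frames: the same pattern shifted and noise-corrupted.
rng = np.random.default_rng(0)
base = rng.random((128, 128))
frames = [np.roll(base, (s, s), axis=(0, 1)) + rng.normal(0, 0.05, base.shape) for s in (0, 3, -2)]
composite = average_registered(frames)
```
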
426

Combinação de dispositivos de baixo custo para rastreamento de gestos / Combination of low-cost devices for gesture tracking

Agostinho, Isabele Andreoli [UNESP] 18 February 2014
Some studies show that combining more than one sensor technology can improve motion tracking, making it more precise or enabling applications that use complex movements, such as sign languages. Combining commercially available tracking devices makes it possible to develop low-cost, easy-to-use systems. The Kinect, the Wii Remote and the 5DT Data Glove Ultra are devices whose technologies provide complementary information for tracking arms and hands; they are easy to use, have low cost and offer free development libraries, among other advantages. To evaluate the combination of these devices for gesture tracking, a tracking system was developed with two main modules: one that handles the devices, including initialization and merging of the movement data, and another that visualizes the tracked movements on a Virtual Human. The system uses the glove to capture the hand configuration, the Wii Remote to provide forearm rotation, and the Kinect to track the arms and the inclination of the forearms. Tests were run for several movements; the data from each device were processed and the tracked movement was successfully reproduced on the Virtual Human in real time.
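
The division of labour between the three devices (glove for hand configuration, Wii Remote for forearm rotation, Kinect for arm position and forearm inclination) can be pictured with a small fusion step like the one below. The field names and sample values are hypothetical placeholders, not the SDK calls used in the dissertation.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class FusedPose:
    """One fused tracking sample driving the Virtual Human."""
    arm_joints: Dict[str, Tuple[float, float, float]]   # 3D joint positions (depth sensor)
    forearm_roll: float                                  # forearm rotation (hand-held controller)
    finger_flexion: Tuple[float, ...]                    # per-finger bend values (data glove)

def fuse(kinect_sample, wiimote_sample, glove_sample):
    """Merge one synchronized sample from each device into a single pose.
    Field names are illustrative placeholders, not actual SDK calls."""
    return FusedPose(
        arm_joints={j: kinect_sample[j] for j in ("shoulder", "elbow", "wrist")},
        forearm_roll=wiimote_sample["roll"],
        finger_flexion=tuple(glove_sample["flexion"]),
    )

# Hypothetical synchronized samples for one arm.
pose = fuse(
    {"shoulder": (0.1, 1.4, 0.3), "elbow": (0.3, 1.1, 0.3), "wrist": (0.5, 1.0, 0.3)},
    {"roll": 0.35},
    {"flexion": [0.1, 0.8, 0.9, 0.9, 0.7]},
)
```
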
427

Système d'identification à partir de l'image d'iris et détermination de la localisation des informations / Iris identification system and determination of characteristics location

Hilal, Alaa 21 October 2013
Iris identification is considered one of the best biometric technologies. However, problems related to the segmentation of the iris and to the normalization of iris templates are generally reported and induce a loss of recognition performance. In this work, three main contributions are made to the improvement of the iris identification system. A new segmentation method is developed. It approximates the outer iris boundary with a circle and accurately segments the inner boundary of the iris by means of an active contour model. Next, a new normalization method is proposed. It leads to a more robust characterization and a better sampling of the iris texture compared with traditional normalization methods. Finally, using the proposed iris identification system, the location of discriminant characteristics along iris templates is identified. It appears that the most discriminant iris characteristics are located in the inner regions of the iris (close to the pupil boundary) and that the discriminant capability of these characteristics decreases as outer regions of the iris are considered. The developed segmentation and normalization methods are tested and compared with a reference iris identification system on a database of 2639 iris images. The improvement in recognition performance validates the effectiveness of the proposed system.
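
As background, the sketch below shows a common "rubber sheet" style normalization: the annulus between the pupil circle and the outer iris circle is unwrapped into a fixed-size rectangular template by sampling along radial lines. It is a generic stand-in, not the improved normalization proposed in the thesis; circle parameters and resolutions are placeholders.

```python
import numpy as np

def normalize_iris(image, pupil, iris, radial_res=64, angular_res=256):
    """Unwrap the annular region between a pupil circle and an outer iris
    circle into a fixed-size rectangular template by sampling along radial
    lines.  Circles are given as (centre_x, centre_y, radius) in pixels."""
    cx_p, cy_p, r_p = pupil
    cx_i, cy_i, r_i = iris
    thetas = np.linspace(0.0, 2.0 * np.pi, angular_res, endpoint=False)
    radii = np.linspace(0.0, 1.0, radial_res)
    template = np.zeros((radial_res, angular_res), dtype=float)
    for j, th in enumerate(thetas):
        # End points of the radial line: pupil boundary -> outer iris boundary.
        x0, y0 = cx_p + r_p * np.cos(th), cy_p + r_p * np.sin(th)
        x1, y1 = cx_i + r_i * np.cos(th), cy_i + r_i * np.sin(th)
        xs = np.clip(np.round(x0 + radii * (x1 - x0)).astype(int), 0, image.shape[1] - 1)
        ys = np.clip(np.round(y0 + radii * (y1 - y0)).astype(int), 0, image.shape[0] - 1)
        template[:, j] = image[ys, xs]
    return template

# Hypothetical circle parameters on a synthetic grayscale eye image.
eye = np.random.default_rng(0).random((280, 320))
template = normalize_iris(eye, pupil=(160, 140, 30), iris=(160, 140, 90))
```
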
428

Análise de multirresolução baseada em polinômio potência de Sigmóide - Wavelet / Multiresolution analysis based on Polynomial Powers of Sigmoid (PPS) wavelets

Pilastri, André Luiz [UNESP] 08 August 2012
In the signal processing and, particularly, image processing fields, recent research gives priority to the development of new techniques and methods that can be used in a wide range of applications. Image pyramids are an important technique for building multiresolution decompositions in computer vision and image processing. Wavelet transforms can be viewed as tools for decomposing signals into their constituent parts, allowing data to be analysed in different frequency bands, with the resolution of each component related to its scale. Furthermore, wavelet analysis can use functions that are contained in finite regions, making them convenient for approximating data with discontinuities. In this context, this work presents a pyramidal technique based on the Polynomial Powers of Sigmoid (PPS) transforms and their PPS-Wavelet families, applied to digital images. Experiments were carried out using the new pyramidal techniques and image-quality metrics, showing promising results in terms of accuracy.
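
As background for the pyramidal decomposition discussed above, a generic multiresolution pyramid can be built by repeatedly smoothing and subsampling, as sketched below. The sketch uses a plain binomial filter rather than the PPS-Wavelet kernels developed in the dissertation, so it only illustrates the pyramid mechanics.

```python
import numpy as np

def blur(img):
    """Separable 5-tap binomial smoothing (a stand-in low-pass filter)."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def pyramid(img, levels=4):
    """Multiresolution pyramid: smooth, then subsample by two at each level."""
    out = [img.astype(float)]
    for _ in range(levels - 1):
        out.append(blur(out[-1])[::2, ::2])
    return out

# Each level halves the resolution of the previous one.
levels = pyramid(np.random.default_rng(0).random((256, 256)))
shapes = [lvl.shape for lvl in levels]   # (256, 256), (128, 128), (64, 64), (32, 32)
```
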
429

Implementação em FPGA de um sistema para processamento de imagens digitais para aplicações diversificadas / FPGA implementation of a digital image processing system for diverse applications

Mertes, Jacqueline Gomes [UNESP] 13 December 2012
This work describes a system for processing color digital images. The system comprises a set of filters which, together with a controller, can be configured by the user through a setup file in order to adapt the system to the images to be processed. The filter set performs tasks such as smoothing, edge detection, histogram equalization, color normalization and luminance normalization. The system was described in the SystemVerilog hardware description language and implemented on an FPGA. Owing to its reconfigurable nature, the system proved capable of processing several types of color images, adapting easily to a broad range of applications.
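
To make the configurable filter chain concrete, the sketch below mimics it in software: a configuration lists filter names and a controller applies them in order. This is a Python analogue written for illustration; the actual system is a SystemVerilog design running on an FPGA, and the filter implementations here are simplified placeholders.

```python
import numpy as np

def smooth(img):
    """3x3 box smoothing of a grayscale image (borders left untouched)."""
    out = img.astype(float).copy()
    acc = sum(img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx].astype(float)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    out[1:-1, 1:-1] = acc / 9.0
    return out

def edge(img):
    """Gradient-magnitude edge map from finite differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def equalize(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    u8 = np.clip(img, 0, 255).astype(np.uint8)
    cdf = np.cumsum(np.bincount(u8.ravel(), minlength=256)) / u8.size
    return (cdf[u8] * 255).astype(np.uint8)

# The "setup file" is modelled as an ordered list of filter names.
FILTERS = {"smooth": smooth, "edge": edge, "equalize": equalize}

def run_pipeline(img, config):
    """Controller: apply each configured filter, in order, to the image."""
    for name in config:
        img = FILTERS[name](img)
    return img

frame = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
result = run_pipeline(frame, ["smooth", "edge"])
```
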
430

Implementação em FPGA de um sistema para processamento de imagens digitais para aplicações diversificadas / FPGA implementation of a digital image processing system for diverse applications

Mertes, Jacqueline Gomes. January 2012
Advisor: Norian Marranghello / Committee: Furio Damiani / Committee: Alexandre C. Rodrigues da Silva / Abstract: This work describes a system for processing color digital images. The system comprises a set of filters which, together with a controller, can be configured by the user through a setup file in order to adapt the system to the images to be processed. The filter set performs tasks such as smoothing, edge detection, histogram equalization, color normalization and luminance normalization. The system was described in the SystemVerilog hardware description language and implemented on an FPGA. Owing to its reconfigurable nature, the system proved capable of processing several types of color images, adapting easily to a broad range of applications. / Master's degree
