101

Monitoramento do estado de sonolência de motoristas de automóveis através de análise de imagens de olhos / Monitoring the drowsiness state of car drivers through analysis of eye images

Silva, Leonardo Dorneles Figueiredo 28 February 2012 (has links)
Tiredness and fatigue contribute to the involvement of drivers in a large number of accidents. This number could be reduced if it were possible to detect the moment of inattention and warn drivers of their condition. A methodology able to detect this automatically must process information about the driver's current state and issue a warning in real time according to their behavior, without interfering with their natural way of driving. This work develops a methodology that uses image processing, computer vision, machine learning and physical characteristics to detect the eye region and analyze its behavior, with the objective of assessing the level of inattention of car drivers. (Funded by the Conselho Nacional de Desenvolvimento Científico e Tecnológico.)
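The abstract names image processing, computer vision and machine learning for locating the eye region, but publishes no code. Below is a minimal sketch of one standard building block, Haar-cascade eye detection with OpenCV; the stock cascade files and all parameters are illustrative assumptions, not the author's trained detectors.

```python
import cv2

# OpenCV's bundled Haar cascades (an assumption; the thesis may use other detectors)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eye_regions(frame_bgr):
    """Return bounding boxes of eyes found inside the largest detected face."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return []
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest face
    roi = gray[y:y + h // 2, x:x + w]                    # eyes sit in the upper half
    eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
    return [(x + ex, y + ey, ew, eh) for (ex, ey, ew, eh) in eyes]
```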
102

New PDE models for imaging problems and applications

Calatroni, Luca January 2016 (has links)
Variational methods and Partial Differential Equations (PDEs) have been extensively employed for the mathematical formulation of a myriad of problems describing physical phenomena such as heat propagation, thermodynamic transformations and many more. In imaging, PDEs following variational principles are often considered. In their general form these models combine a regularisation term and a data-fitting term, balancing one against the other appropriately. Total variation (TV) regularisation is often used due to its edge-preserving and smoothing properties. In this thesis, we focus on the design of TV-based models for several different applications. We start by considering PDE models encoding higher-order derivatives to overcome well-known drawbacks of TV reconstruction. Due to their high differential order and nonlinear nature, computing the numerical solution of these equations is often challenging. In this thesis, we propose directional splitting techniques and use Newton-type methods that, despite these numerical hurdles, yield reliable and efficient computational schemes. Next, we discuss the problem of choosing the appropriate data-fitting term when multiple noise statistics are present in the data due, for instance, to different acquisition and transmission problems. We propose a novel variational model which appropriately and consistently encodes the different noise distributions in this case. Balancing the effect of the regularisation against the data fitting is also crucial. To this end, we consider a learning approach which estimates the optimal ratio between the two from training sets of examples via bilevel optimisation. Numerically, we use a combination of semismooth Newton (SSN) and quasi-Newton methods to solve the problem efficiently. Finally, we consider TV-based models in the framework of graphs for image segmentation problems. Here, spectral properties combined with matrix completion techniques are needed to overcome the computational limitations posed by the large amount of image data. Further, a semi-supervised technique for measuring the segmented region by means of the Hough transform is proposed.
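For readers unfamiliar with the baseline model the thesis extends, the sketch below denoises an image with classical ROF total-variation regularisation via scikit-image's Chambolle solver; the noise level and weight are arbitrary illustrative values, and the thesis's own higher-order and mixed-noise models go well beyond this.

```python
import numpy as np
from skimage import data
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = data.camera() / 255.0
noisy = np.clip(clean + rng.normal(scale=0.1, size=clean.shape), 0.0, 1.0)

# `weight` balances the TV regulariser against the data-fitting term,
# exactly the ratio the bilevel learning approach estimates from training data
denoised = denoise_tv_chambolle(noisy, weight=0.1)
```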
103

Segmentação de imagens SPECT/Gated-SPECT do miocárdio e geração de um mapa polar. / Segmentation of myocardial SPECT/Gated-SPECT images and polar map generation.

Luis Roberto Pereira de Paula 23 May 2011 (has links)
Single photon emission computed tomography (SPECT) is a nuclear medicine tomographic imaging technique based on measuring the spatial distribution of a radionuclide. This technique is widely used in cardiology to assess myocardial perfusion problems related to blood flow in the coronary arteries. SPECT images provide better separation of the regions of the myocardium and facilitate the location and definition of perfusion defects. One of the major challenges in SPECT studies is the efficient presentation of information, since a single study can generate images of hundreds of slices to be analyzed. To address this issue, polar maps (also known as the Bull's Eye display) are used. Polar maps are built from tomographic slices of the left ventricle and summarize the exam information in a two-dimensional image. This dissertation presents a method for segmenting the left ventricle in myocardial SPECT studies and constructing polar maps. The segmentation of the left ventricle is performed to facilitate the automatic generation of polar maps. The method uses the watershed transform in the context of the Beucher-Meyer paradigm. To display the results, an application called Medical Image Visualizer (MIV) was developed. MIV will be made available as an open-source project, so that communities of users, developers and researchers can freely use and/or modify it.
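A minimal sketch of marker-controlled (Beucher-Meyer) watershed segmentation with scikit-image follows; the toy image and hand-placed markers are assumptions for illustration, whereas the dissertation derives its markers from the SPECT slices themselves.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

image = np.zeros((80, 80))
image[20:60, 20:60] = 1.0              # toy bright region standing in for the ventricle

gradient = sobel(image)                # Beucher-Meyer paradigm: flood the gradient image
markers = np.zeros_like(image, dtype=np.int32)
markers[40, 40] = 1                    # inner marker: object seed
markers[5, 5] = 2                      # outer marker: background seed
labels = watershed(gradient, markers)  # pixels labelled 1 form the segmented region
```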
104

Kontrola zobrazení textu ve formulářích / Quality Check of Text in Forms

Moravec, Zbyněk January 2017 (has links)
The purpose of this thesis is to check the quality of button text rendering on photographed monitors. The photographs contain a variety of image distortions, which complicates the subsequent recognition of graphic elements in the image. This paper outlines several possibilities for detecting buttons on forms and elaborates on the implemented detection, which is based on contour shape description. After the buttons are found, their defects are detected. Additionally, the thesis describes the automatic identification of the highest-quality picture for documentation purposes.
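As a hedged illustration of contour-shape-based button detection, the sketch below looks for roughly rectangular contours with OpenCV; the Canny thresholds, area limit and four-corner test are illustrative guesses, not values taken from the thesis.

```python
import cv2

def find_button_candidates(image_bgr, min_area=500):
    """Return bounding rectangles of roughly rectangular contours."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    buttons = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue
        # polygonal approximation; four corners suggest a rectangle-like button
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        if len(approx) == 4:
            buttons.append(cv2.boundingRect(approx))
    return buttons
```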
105

Obslužný program pro colony-picking robot / Control Program for Colony-picking Robot

Matějka, Lukáš January 2012 (has links)
From an overview of the most commonly used kinematic concepts for robotic manipulators, the Cartesian robot was identified as the most suitable for the given task of colony picking. A control system consisting of two modular parts has been designed for the colony-picking robot. The ColonyCounter module is a set of image processing libraries for identifying microbial colonies in image data and precisely localizing individual colonies. This is achieved by combining multiple methods, most importantly connected-component labelling and the circular Hough transform. The second module, ColonyPicker, uses the output of ColonyCounter to plan the picking and placing of colonies, and then controls the transfer process itself using an innovative task planning and execution system.
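The circular Hough transform step that the abstract names can be sketched with OpenCV as below; the blur kernel, radii and voting thresholds are illustrative assumptions rather than ColonyCounter's actual settings.

```python
import cv2
import numpy as np

def locate_colonies(plate_bgr):
    """Return (x, y, r) circles that are plausible colony locations."""
    gray = cv2.cvtColor(plate_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)   # suppress speckle before Hough voting
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
                               param1=100, param2=20, minRadius=3, maxRadius=30)
    if circles is None:
        return []
    return [(int(x), int(y), int(r)) for x, y, r in np.round(circles[0])]
```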
106

Biometrie s využitím snímků duhovky / Biometry based on iris images

Tobiášová, Nela January 2014 (has links)
Biometric techniques are well known and widespread nowadays. In this context, biometry means automated person recognition using anatomical features; this work uses the iris as the anatomical feature. Iris recognition is considered the most promising technique of all because of its non-invasiveness and low error rate. The inventor of iris recognition is John G. Daugman, whose work underlies almost all current published work on this technology. This thesis is concerned with biometry based on iris images. The principles of biometric methods based on iris images are described in the first part. The first practical part proposes and realizes two methods that localize the inner boundary of the iris. The third part presents the proposal and realization of iris image processing for classifying persons. The last chapter evaluates the experimental results and compares them with several well-known methods.
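One plausible realisation of inner-boundary (pupil) localisation, not necessarily either of the thesis's two methods, is to threshold the dark pupil and fit a circle to the largest blob; the threshold value below is an illustrative assumption.

```python
import cv2

def locate_pupil(eye_gray, thresh=40):
    """Approximate the inner iris boundary as the circle around the darkest blob."""
    _, mask = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)      # pupil: largest dark region
    (cx, cy), radius = cv2.minEnclosingCircle(blob)
    return int(cx), int(cy), int(radius)
```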
107

Webové rozhraní pro zpracování obrazu / Web Interface for Image Processing

Beran, Milan January 2010 (has links)
This paper concerns the design and implementation of a system that provides easier control of console applications for digital image processing. The work draws on three information technology domains: distributed systems, image processing and web technologies. The system consists of a number of separate components that communicate with each other in order to process the desired tasks. The control interface and the task daemon are implemented in PHP. The image processing programs are implemented in C using the OpenCV graphics library. The system is controlled through a graphical web interface with dynamic control components implemented in JavaScript, using the jQuery library and the jQueryUI interface. The work also describes the practical deployment of the system in two environments, experiments concerning system performance, and user acceptance testing of the web interface.
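The task-daemon pattern the abstract describes can be sketched as follows, in Python rather than the system's actual PHP; the queue directory and the console tool name are invented for illustration.

```python
import subprocess
import time
from pathlib import Path

QUEUE = Path("tasks/pending")  # hypothetical queue directory filled by the web interface
DONE = Path("tasks/done")

def run_daemon(poll_seconds=2):
    """Poll the queue and hand each image to a console processing tool."""
    DONE.mkdir(parents=True, exist_ok=True)
    while True:
        for task in sorted(QUEUE.glob("*.png")):
            # each task shells out to a console application, as in the thesis;
            # "./process_image" is a placeholder name, not the actual binary
            subprocess.run(["./process_image", str(task), str(DONE / task.name)],
                           check=True)
            task.unlink()
        time.sleep(poll_seconds)
```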
108

Rozpoznávání topologických informací z plánu křižovatky / Topology Recognition from Crossroad Plan

Huták, Petr January 2016 (has links)
This master's thesis describes the research, design and development of a system for recognizing topology from a crossroad plan. It explains the methods used for image processing, image segmentation and object recognition, and describes approaches to processing maps represented as raster images, as well as the target software into which the final product of the practical part of the project will be integrated. The thesis focuses mainly on comparing different approaches to extracting features from raster maps and determining their semantic meaning. The practical part of the project is implemented in C# with the OpenCV library.
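As a hedged stand-in for the feature-extraction approaches the thesis compares, the sketch below pulls straight line segments out of a raster plan with the probabilistic Hough transform; all parameters are illustrative.

```python
import cv2
import numpy as np

def extract_line_segments(plan_bgr):
    """Return (x1, y1, x2, y2) line segments detected in a raster plan."""
    gray = cv2.cvtColor(plan_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=5)
    return [] if lines is None else [tuple(line[0]) for line in lines]
```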
109

Detekce a identifikace typu obratle v CT datech onkologických pacientů / Vertebra detection and identification in CT oncological data

Věžníková, Romana January 2017 (has links)
Automated spine or vertebra detection and segmentation from CT images is a difficult task for several reasons. One reason is unclear vertebra boundaries and indistinct boundaries between vertebrae; others are artifacts in the images and the high degree of anatomical complexity. This paper describes the design and implementation of vertebra detection and classification in CT images of cancer patients, which adds to the complexity because some of the vertebrae are deformed. For vertebra segmentation, Otsu's method is used. Vertebra detection is based on searching for borders between individual vertebrae in sagittal planes. Decision trees or the generalized Hough transform are applied for identification, whereas the vertebra search is based on the similarity between each vertebra's model shape and the planes of the CT scans.
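A minimal sketch of the Otsu thresholding step cited for segmentation is given below; real CT slices would need the preprocessing the thesis describes, so this function is a deliberately simplified assumption.

```python
import numpy as np
from skimage.filters import threshold_otsu

def segment_bright_structures(ct_slice):
    """Binary mask of bright (bone-like) pixels in one CT slice."""
    t = threshold_otsu(np.asarray(ct_slice))  # threshold maximising
    return ct_slice > t                       # between-class variance
```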
110

Intelligent pattern recognition techniques for photo-realistic 3D modeling of urban planning objects / Techniques intelligentes de reconnaissance de formes pour la modélisation 3D photo-réaliste d'objets de planification urbaine

Tsenoglou, Theocharis 28 November 2014 (has links)
Realistic 3D modeling of buildings and other urban planning objects is an active research area in the fields of 3D city modeling, heritage documentation, virtual touring, urban planning, architectural design and computer gaming. The creation of such models very often requires merging data from diverse sources such as optical images and laser-scan point clouds. To imitate the layouts, activities and functionalities of a real-world environment as realistically as possible, these models need to attain high photo-realistic quality and accuracy in terms of the surface texture (e.g. stone or brick walls) and morphology (e.g. windows and doors) of the actual objects. Image-based rendering is one alternative for meeting these requirements: it uses photos, taken either from ground level or from the air, to add texture to the 3D model, thus adding photo-realism. For full texture covering of the large facades of 3D block models, images picturing the same façade need to be properly combined and correctly aligned with the side of the block.
The pictures need to be merged appropriately so that the result does not present discontinuities, abrupt variations in lighting, or gaps. Because these images were taken, in general, under various viewing conditions (viewing angles, zoom factors, etc.), they exhibit different perspective distortions, scalings, brightnesses, contrasts and color shadings, and need to be corrected or adjusted. This process requires the extraction of key features from the visual content of the images. The aim of the proposed work is to develop methods based on computer vision and pattern recognition techniques to assist this process. In particular, we propose a method for extracting implicit lines from poor-quality images of buildings, including night views where only a few lit windows are visible, in order to specify bundles of 3D parallel lines and their corresponding vanishing points. Then, based on this information, one can achieve better merging of the images and better alignment of the images to the block façades. Another important application dealt with in this thesis is that of 3D modeling. We propose an edge-preserving interpolation, based on the mean shift algorithm, that operates jointly on the optical and the elevation data. It succeeds in increasing the resolution of the elevation data (LiDAR) while improving the quality (i.e. straightness) of their edges. At the same time, the color homogeneity of the corresponding imagery is also improved. The reduction of color artifacts in the optical data and the improvement in the spatial resolution of the elevation data result in more accurate 3D building models. Finally, in the problem of building detection, applying the proposed mean-shift-based edge-preserving smoothing to increase the quality of aerial/color images improves the performance of binary building vs. non-building pixel classification.
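The vanishing-point step can be sketched as a homogeneous least-squares intersection: given image lines assumed to be projections of 3D-parallel lines, each in ax + by + c = 0 form, their common point is the null vector of the stacked coefficient matrix. The input lines would come from the implicit-line extraction the thesis proposes; this sketch shows only the intersection step.

```python
import numpy as np

def vanishing_point(lines):
    """lines: (N, 3) array of homogeneous line coefficients (a, b, c)."""
    A = np.asarray(lines, dtype=float)
    _, _, vt = np.linalg.svd(A)   # least-squares common intersection is the
    v = vt[-1]                    # right singular vector of the smallest singular value
    if abs(v[2]) < 1e-12:         # point at infinity: lines parallel in the image
        return None
    return v[0] / v[2], v[1] / v[2]
```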
