111

Montagem e utilização de ambientes virtuais agrícolas em um sistema de multiprojeção imersivo a partir de cenas naturais / Generation and use of virtual agricultural environments in an immersive multiprojection system from natural scenes

Oliveira, Claiton de 05 October 2012 (has links)
A geração de ambientes virtuais de áreas urbanas ou cenas naturais coloca um grande número de problemas no âmbito da Computação Gráfica, dado que a quantidade de informação necessária para a criação de modelos realistas é dependente da dimensão e complexidade da área a modelar. A construção de um grande número de modelos de objetos naturais de forma detalhada é extremamente trabalhosa. Enquanto os modelos de estruturas artificiais, tais como máquinas ou edifícios podem ser obtidos a partir de fontes CAD, o mesmo não ocorre as plantas e outros fenômenos naturais. Embora muitos ambientes virtuais sejam criados por modelagem manual e individual de cada um dos seus componentes, os processos automáticos e semi-automáticos de reconstrução de ambientes naturais 3D provaram que podem ser muito mais eficientes, reduzindo a duração, o custo e a alocação de recursos humanos. A integração entre diferentes tecnologias e ferramentas que possibilitem a identificação de elementos em um cenário agrícola, modelagem de objetos 3D e a posterior apresentação e utilização do ambiente virtual em um sistema do tipo CAVE não é uma tarefa trivial. Dessa forma, o objetivo desta pesquisa é desenvolver uma metodologia de montagem automática de ambientes virtuais agrícolas baseada na extração de objetos de cenas naturais reais a partir de imagens de vídeo para utilização no sistema de multiprojeção imersivo do Laboratório Multiusuário de Visualização 3D Imersiva de São Carlos (LMVI). A partir de um modelo de dados 3D projetado em um sistema que oferece um alto grau de imersão e interação como o LMVI, pode-se fazer comparações com outros modelos de dados ou com o mesmo modelo em épocas diferentes. Através da comparação entre modelos é possível identificar alterações que ocorreram no ambiente ao longo do tempo (tanto naturais como causadas pelo homem) auxiliando na tomada de decisão em processos agrícolas. / The generation of virtual environments for urban or natural scenes poses a number of problems within Computer Graphics, since the amount of information needed to create realistic models depends on the size and complexity of the area to be modeled. The construction of a large number of detailed natural object models is extremely laborious. While models of artificial structures, such as machines or buildings, can be obtained from CAD sources, the same is not true for plants and other natural phenomena. Although many virtual environments are created by individual, manual modeling of each of their components, automatic and semi-automatic processes for 3D reconstruction of natural environments have proved to be much more efficient, reducing duration, cost and the allocation of human resources. The integration of different technologies and tools that enable the identification of elements in an agricultural setting, the modeling of 3D objects, and the subsequent presentation and use of the virtual environment in a CAVE-like system is not a trivial task. Thus, the objective of this research is to develop a methodology for the automatic assembly of agricultural virtual environments based on the extraction of objects from real natural scenes in video images, for use in the immersive multiprojection system of the Multiuser Laboratory of 3D Immersive Visualization of Sao Carlos (MLIV). From a 3D data model displayed in a system that offers a high degree of immersion and interaction, such as the MLIV, one can make comparisons with other data models or with the same model at different periods. By comparing models, it is possible to identify changes that occurred in the environment over time (both natural and man-made), assisting decision making in agricultural processes.
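As a rough illustration of one step such a pipeline might include, the sketch below isolates vegetation in a single video frame using the excess-green (ExG) index before any 3D modelling. The use of OpenCV, the video file name, and Otsu thresholding of the ExG index are illustrative assumptions, not the procedure described in the thesis.

```python
# Hypothetical sketch: extract a vegetation mask from one frame of a field video.
import cv2
import numpy as np

cap = cv2.VideoCapture("field_scene.mp4")      # assumed input video
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("could not read a frame")

b, g, r = cv2.split(frame.astype(np.float32) / 255.0)
exg = 2.0 * g - r - b                          # excess-green index per pixel

# Otsu's threshold on the rescaled index gives a rough vegetation mask that
# later stages could turn into 3D plant models.
exg_u8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(exg_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("vegetation_mask.png", mask)
```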
112

Mathematical Analysis of Intensity Based Segmentation Algorithms with Implementations on Finger Images in an Uncontrolled Environment

Svens, Lisa January 2019 (has links)
The main task of this thesis is to perform image segmentation on images of fingers, partitioning each image into two parts: one containing the fingers and one containing everything that is not fingers. First, we present the theory behind several widely used image segmentation methods, such as SNIC superpixels, the k-means algorithm, and the normalised cut algorithm. These have then been implemented and tested on images of fingers, and the results are shown. The implementations are unfortunately not stable and produce segmentations of varying quality.
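As a minimal sketch of the intensity-based approach discussed above, the following applies k-means with two clusters to pixel intensities to split a grayscale finger image into foreground and background. The input file name, the cluster count, and the choice of scikit-image/scikit-learn are assumptions for illustration, not details from the thesis.

```python
# Hypothetical sketch: two-cluster k-means on pixel intensities.
import numpy as np
from skimage import io, color
from sklearn.cluster import KMeans

image = io.imread("finger.png")                               # assumed input image
gray = color.rgb2gray(image[..., :3]) if image.ndim == 3 else image / 255.0

# Each pixel becomes a 1-D sample; k-means then finds two intensity clusters.
samples = gray.reshape(-1, 1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(samples)
mask = labels.reshape(gray.shape)

# Call the brighter cluster "finger"; in an uncontrolled environment this
# assumption may need to be flipped depending on lighting and background.
if gray[mask == 1].mean() < gray[mask == 0].mean():
    mask = 1 - mask

io.imsave("finger_mask.png", (mask * 255).astype(np.uint8))
```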
113

Biomedical Image Segmentation and Object Detection Using Deep Convolutional Neural Networks

Liming Wu (6622538) 11 June 2019 (has links)
Quick and accurate segmentation and object detection of biomedical images is the starting point of most disease analysis and understanding of biological processes in medical research. It can enhance drug development and advance medical treatment, especially for cancer-related diseases. However, identifying objects in CT or MRI images and labeling them usually takes time, even for an experienced person. Currently, there is no automatic detection technique for nucleus identification, pneumonia detection, and fetal brain segmentation. Fortunately, with the successful application of artificial intelligence (AI) in image processing, many challenging tasks can be solved with deep convolutional neural networks. In light of this, in this thesis, deep learning based object detection and segmentation methods were implemented to perform nucleus segmentation, lung segmentation, pneumonia detection, and fetal brain segmentation. Semantic segmentation is achieved by a customized U-Net model, and instance localization is achieved by Faster R-CNN. The reason we chose U-Net is that such a network can be trained end-to-end, and its architecture is simple, straightforward and fast to train. Besides, for this project the availability of the dataset is limited, which makes U-Net a more suitable choice. We also implemented Faster R-CNN to achieve object localization. Finally, we evaluated the performance of the two models and compared their pros and cons. The preliminary results show that the deep learning based technique outperforms all existing traditional segmentation algorithms.
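To make the U-Net idea concrete, here is a small U-Net-style encoder-decoder with skip connections for binary semantic segmentation (e.g., nuclei vs. background). The depth, channel widths, and use of PyTorch are illustrative assumptions and not the customized architecture from the thesis.

```python
# Hypothetical sketch of a tiny U-Net-like network for binary segmentation.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=1):
        super().__init__()
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.bottleneck = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)      # 64 skip + 64 upsampled channels
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)       # 32 skip + 32 upsampled channels
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                  # raw logits; apply sigmoid for masks

# Quick shape check on a dummy batch of 1-channel 128x128 images.
model = TinyUNet()
logits = model(torch.randn(2, 1, 128, 128))
print(logits.shape)                           # torch.Size([2, 1, 128, 128])
```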
114

Image motion analysis using inertial sensors

Saunders, Thomas January 2015 (has links)
Understanding the motion of a camera from only the image(s) it captures is a difficult problem. At best we might hope to estimate the relative motion between camera and scene if we assume a static subject, but once we start considering scenes with dynamic content it becomes difficult to differentiate between motion due to the observer and motion due to scene movement. In this thesis we show how the invaluable cues provided by inertial sensor data can be used to simplify motion analysis and relax requirements for several computer vision problems. This work was funded by the University of Bath.
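As a small illustration of how inertial cues can constrain image motion, the sketch below integrates gyroscope readings over a short interval and maps the resulting camera rotation to a pixel displacement via a rotation-only homography. The model H = K R K^-1, the camera intrinsics, and the sample rate are illustrative assumptions, not the methods developed in the thesis.

```python
# Hypothetical sketch: gyro integration and a rotation-only pixel warp.
import numpy as np

def skew(w):
    # Cross-product (skew-symmetric) matrix of a 3-vector.
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def integrate_gyro(omegas, dt):
    # Accumulate small rotations R <- R * exp(skew(omega) * dt) using the
    # Rodrigues formula for each gyroscope sample.
    R = np.eye(3)
    for w in omegas:
        theta = np.linalg.norm(w) * dt
        if theta < 1e-12:
            continue
        axis = w / np.linalg.norm(w)
        K_ = skew(axis)
        dR = np.eye(3) + np.sin(theta) * K_ + (1 - np.cos(theta)) * (K_ @ K_)
        R = R @ dR
    return R

# Assumed pinhole intrinsics and a short burst of gyro samples at 200 Hz.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
omegas = np.tile([0.0, 0.5, 0.0], (20, 1))     # rad/s, 20 samples -> 0.1 s
R = integrate_gyro(omegas, dt=1 / 200)

# For a purely rotating camera, pixels move according to H = K R K^-1.
H = K @ R @ np.linalg.inv(K)
center = np.array([320.0, 240.0, 1.0])
moved = H @ center
print(moved[:2] / moved[2])                    # where the image centre maps to
```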
115

Graph based approaches for image segmentation and object tracking / Méthodes de graphe pour la segmentation d'images et le suivi d'objets dynamiques

Wang, Xiaofang 27 March 2015 (has links)
Cette thèse est proposée en deux parties. Une première partie se concentre sur la segmentation d’image. C’est en effet un problème fondamental pour la vision par ordinateur. En particulier, la segmentation non supervisée d’images est un élément important dans de nombreux algorithmes de haut niveau et de systèmes d’application. Dans cette thèse, nous proposons trois méthodes qui utilisent la segmentation d’images se basant sur différentes méthodes de graphes qui se révèlent être des outils puissants permettant de résoudre ces problèmes. Nous proposons dans un premier temps de développer une nouvelle méthode originale de construction de graphe. Nous analysons également différentes méthodes similaires ainsi que l’influence de l’utilisation de divers descripteurs. Le type de graphe proposé, appelé graphe local/global, encode de manière adaptative les informations sur la structure locale et globale de l’image. De plus, nous réalisons un groupement global en utilisant une représentation parcimonieuse des caractéristiques des superpixels sur le dictionnaire de toutes les caractéristiques en résolvant un problème de minimisation l0. De nombreuses expériences sont menées par la suite sur la base de données <Berkeley Segmentation>, et la méthode proposée est comparée avec des algorithmes classiques de segmentation. Les résultats démontrent que notre méthode peut générer des partitions visuellement significatives, mais aussi que des résultats quantitatifs très compétitifs sont obtenus en comparaison des algorithmes usuels. Dans un deuxième temps, nous proposons de travailler sur une méthode reposant sur un graphe d’affinité discriminant, qui joue un rôle essentiel dans la segmentation d’image. Un nouveau descripteur, appelé patch pondéré par couleur, est développé pour calculer le poids des arcs du graphe d’affinité. Cette nouvelle fonctionnalité est en mesure d’intégrer simultanément l’information sur la couleur et le voisinage en représentant les pixels avec des patchs de couleur. De plus, nous affectons à chaque pixel une pondération à la fois local et globale de manière adaptative afin d’atténuer l’effet trop lisse lié à l’utilisation de patchs. Des expériences approfondies montrent que notre méthode est compétitive par rapport aux autres méthodes standards à partir de plusieurs paramètres d’évaluation. Finalement, nous proposons une méthode qui combine superpixels, représentation parcimonieuse, et une nouvelle caractéristisation de mi-niveau pour décrire les superpixels. Le nouvelle caractérisation de mi-niveau contient non seulement les mêmes informations que les caractéristiques initiales de bas niveau, mais contient également des informations contextuelles supplémentaires. Nous validons la caractéristisation de mi-niveau proposée sur l’ensemble de données MSRC et les résultats de segmentation montrent des améliorations à la fois qualitatives et quantitatives par rapport aux autres méthodes standards. Une deuxième partie se concentre sur le suivi d’objets multiples. C’est un domaine de recherche très actif, qui est d’une importance majeure pour un grand nombre d’applications, par exemple la vidéo-surveillance de piétons ou de véhicules pour des raisons de sécurité ou l’identification de motifs de mouvements animaliers. / Image segmentation is a fundamental problem in computer vision. In particular, unsupervised image segmentation is an important component in many high-level algorithms and practical vision systems. 
In this dissertation, we propose three methods that approach image segmentation from different angles using graph-based methods, which prove to be powerful tools for addressing these problems. Our first contribution is an original graph construction method. We also analyze different types of graph construction as well as the influence of various feature descriptors. The proposed graph, called a local/global graph, adaptively encodes the local and global image structure information. In addition, we realize global grouping using a sparse representation of superpixels' features over the dictionary of all features, obtained by solving an l0-minimization problem. Extensive experiments are conducted on the Berkeley Segmentation Database, and the proposed method is compared with classical benchmark algorithms. The results demonstrate that our method not only generates visually meaningful partitions, but also achieves very competitive quantitative results compared with state-of-the-art algorithms. Our second method derives a discriminative affinity graph, which plays an essential role in graph-based image segmentation. A new feature descriptor, called the weighted color patch, is developed to compute the weights of edges in the affinity graph. This new feature is able to incorporate both color and neighborhood information by representing pixels with color patches. Furthermore, we adaptively assign both local and global weights to each pixel in a patch in order to alleviate the over-smoothing effect of using patches. Extensive experiments show that our method is competitive with other standard methods across multiple evaluation metrics. The third approach combines superpixels, sparse representation, and a new mid-level feature to describe superpixels. The new mid-level feature not only carries the same information as the initial low-level features, but also carries additional contextual cues. We validate the proposed mid-level feature framework on the MSRC dataset, and the segmentation results show improvements from both qualitative and quantitative viewpoints compared with other state-of-the-art methods. Multi-target tracking is an intensively studied area of research and is valuable for a large number of applications, e.g. video surveillance of pedestrian or vehicle motion for the sake of security, or identification of the motion patterns of animals or biological/synthetic particles to infer information about the underlying mechanisms. We propose a detect-then-track framework to track massive colloids' motion paths in an active suspension system. First, a region-based level set method is adopted to segment all colloids from long-term videos subject to intensity inhomogeneity; the circular Hough transform then refines the segmentation to obtain each colloid individually. Second, we propose to recover all colloids' trajectories simultaneously, which is a global optimization problem that can be solved efficiently with optimal algorithms based on min-cost/max-flow. We evaluate the proposed framework on a real benchmark with annotations on 9 different videos. Extensive experiments show that the proposed framework outperforms standard methods by a large margin.
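To illustrate the affinity-graph idea in the second method, the sketch below describes each pixel by a small colour patch, weights patch distances with a Gaussian kernel, and clusters the resulting graph spectrally. The patch size, kernel bandwidth, subsampling, and spectral-clustering backend are assumptions; this is not the thesis' exact weighted-color-patch scheme.

```python
# Hypothetical sketch: colour-patch affinity graph + spectral clustering.
import numpy as np
from skimage import io, img_as_float
from skimage.util import view_as_windows
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import euclidean_distances

image = img_as_float(io.imread("image.png"))[..., :3]    # assumed RGB input
small = image[::8, ::8]                                   # subsample so the dense graph stays small
pad = np.pad(small, ((1, 1), (1, 1), (0, 0)), mode="reflect")
patches = view_as_windows(pad, (3, 3, 3))                 # one 3x3 colour patch per pixel
h, w = small.shape[:2]
feats = patches.reshape(h * w, -1)                        # flatten patches into descriptors

# Gaussian affinity between patch descriptors (dense here for simplicity; a
# sparse k-nearest-neighbour graph would be used at realistic image sizes).
d2 = euclidean_distances(feats, squared=True)
sigma = np.median(d2) + 1e-12
W = np.exp(-d2 / (2.0 * sigma))

labels = SpectralClustering(n_clusters=4, affinity="precomputed",
                            random_state=0).fit_predict(W)
segmentation = labels.reshape(h, w)
print(segmentation.shape)
```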
116

Mapping individual trees from airborne multi-sensor imagery

Lee, Juheon January 2016 (has links)
Airborne multi-sensor imaging is increasingly used to examine vegetation properties. The advantage of using multiple types of sensor is that each detects a different feature of the vegetation, so that collectively they provide a detailed understanding of the ecological pattern. Specifically, Light Detection And Ranging (LiDAR) devices produce detailed point clouds of where laser pulses have been backscattered from surfaces, giving information on vegetation structure; hyperspectral sensors measure reflectances within narrow wavebands, providing spectrally detailed information about the optical properties of targets; while aerial photographs provide high spatial-resolution imagery, capturing finer detail that cannot be identified from hyperspectral or LiDAR intensity images. Using a combination of these sensors, effective techniques can be developed for mapping species and inferring leaf physiological processes at the individual tree crown (ITC) level. Although multi-sensor approaches have revolutionised ecological research, their application in mapping individual tree crowns is limited by two major technical issues: (a) multi-sensor imaging requires all images taken from different sensors to be co-aligned, but different sensor characteristics result in scale, rotation or translation mismatches between the images, making correction a pre-requisite of individual tree crown mapping; (b) reconstructing individual tree crowns from unstructured raw data requires an accurate tree delineation algorithm. This thesis develops a systematic way to resolve these technical issues using state-of-the-art computer vision algorithms. A variational method, called NGF-Curv, was developed to co-align hyperspectral imagery, LiDAR and aerial photographs. The NGF-Curv algorithm can deal with very complex topographic and lens distortions efficiently, thus improving the accuracy of co-alignment compared to established image registration methods for airborne data. A graph cut method, named MCNCP-RNC, was developed to reconstruct individual tree crowns from the fully integrated multi-sensor imagery. MCNCP-RNC is not influenced by interpolation artefacts because it detects trees in 3D, and it detects individual tree crowns using both hyperspectral imagery and LiDAR. Based on these algorithms, we developed a new workflow to detect species at pixel and ITC levels in a temperate deciduous forest in the UK. In addition, we modified the workflow to monitor the physiological responses of two oak species with respect to environmental gradients in a Mediterranean woodland in Spain. The results show that our scheme can detect individual tree crowns, identify species and monitor the physiological responses of canopy leaves.
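For readers unfamiliar with crown delineation, the sketch below shows a common raster-based baseline: smooth a canopy height model (CHM), detect local maxima as tree tops, and grow one crown per top with a marker-controlled watershed. This is a generic baseline under assumed parameters, not the NGF-Curv or MCNCP-RNC methods developed in the thesis.

```python
# Hypothetical sketch: local-maxima + watershed crown delineation on a CHM raster.
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.filters import gaussian

chm = np.load("chm.npy")                     # assumed CHM raster, metres above ground
smooth = gaussian(chm, sigma=2)              # suppress within-crown noise

# Tree tops: local maxima at least 3 pixels apart and above a 2 m height threshold.
tops = peak_local_max(smooth, min_distance=3, threshold_abs=2.0)
markers = np.zeros_like(chm, dtype=int)
markers[tuple(tops.T)] = np.arange(1, len(tops) + 1)

# Watershed on the inverted CHM grows one crown per detected top; pixels below
# the height threshold are left unlabelled (background).
crowns = watershed(-smooth, markers, mask=smooth > 2.0)
n_trees = len(np.unique(crowns)) - 1
print(f"{n_trees} crowns delineated")
```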
117

Contributions au clustering collaboratif et à ses potentielles applications en imagerie à très haute résolution / Contributions to collaborative clustering and its potential applications on very high resolution satellite images

Sublime, Jérémie 09 November 2016 (has links)
Cette thèse présente plusieurs algorithmes développés dans le cadre du projet ANR COCLICO et contient deux axes principaux : le premier axe concerne l'introduction d'un algorithme applicable aux images satellite à très haute résolution, qui est basé sur les champs aléatoires de Markov et qui apporte des notions sémantiques sur les clusters découverts. Cet algorithme est inspiré de l'algorithme Iterated conditional modes (ICM) et permet de faire un clustering sur des segments d'images pré-traitées. La méthode que nous proposons permet de gérer des voisinages irréguliers entre segments et d'obtenir des informations sémantiques de bas niveau sur les clusters de l'image traitée. Le second axe porte sur le développement de méthodes de clustering collaboratif applicables à autant d'algorithmes que possible, ce qui inclut les algorithmes du premier axe. La caractéristique principale des méthodes proposées dans cette thèse est leur applicabilité aux deux cas suivants : 1) plusieurs algorithmes travaillant sur les mêmes objets dans des espaces de représentation différents, 2) plusieurs algorithmes travaillant sur des données différentes ayant des distributions similaires. Les méthodes que nous proposons peuvent s'appliquer à de nombreux algorithmes comme l'ICM, les K-Moyennes, l'algorithme EM, ou les cartes topographiques (SOM et GTM). Contrairement aux méthodes précédemment proposées, notre modèle permet à des algorithmes très différents de collaborer ensemble, n'impose pas de contrainte sur le nombre de clusters recherchés et a une base mathématique solide. / This thesis presents several algorithms developed in the context of the ANR COCLICO project and contains two main axes. The first axis is concerned with introducing Markov Random Field (MRF) based models that provide a semantically rich algorithm suited to images that are already segmented. This method is based on the Iterated Conditional Modes (ICM) algorithm and can be applied to the segments of very high resolution (VHR) satellite pictures. Our proposed method can cope with highly irregular neighborhood dependencies and provides some low-level semantic information on the clusters and their relationships within the image. The second axis deals with collaborative clustering methods developed with the goal of being applicable to as many clustering algorithms as possible, including the algorithms used in the first axis of this work. A key feature of the methods proposed in this thesis is that they can deal with either of the following two cases: 1) several clustering algorithms working together on the same data represented in different feature spaces, 2) several clustering algorithms looking for similar clusters in different data sets having similar distributions. Clustering algorithms to which these methods are applicable include the ICM algorithm, the K-Means algorithm, density-based algorithms such as DBSCAN, and all Expectation-Maximization (EM) based algorithms such as the Self-Organizing Maps (SOM) and the Generative Topographic Mapping (GTM) algorithms. Unlike previously introduced methods, our models have no restrictions in terms of the types of algorithms that can collaborate together, do not require that all methods look for the same number of clusters, and are provided with solid mathematical foundations.
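To make the ICM idea concrete, the sketch below runs Iterated Conditional Modes on a pixel grid with a Potts smoothness prior: each site takes the label minimising a data term (squared distance to a cluster mean) plus beta times the number of disagreeing 4-neighbours. The value of beta, the number of clusters, and the k-means initialisation are assumptions; the thesis works on pre-segmented regions with irregular neighbourhoods rather than raw pixels.

```python
# Hypothetical sketch: ICM with a Potts prior on a pixel grid.
import numpy as np
from sklearn.cluster import KMeans

def icm_segmentation(gray, k=3, beta=1.0, n_iter=5):
    h, w = gray.shape
    x = gray.reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(x)
    labels = km.labels_.reshape(h, w)
    means = km.cluster_centers_.ravel()

    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                # Data term: fit of the intensity to each cluster mean.
                data = (gray[i, j] - means) ** 2
                # Smoothness term: count 4-neighbours with a different label.
                smooth = np.zeros(k)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        smooth += (np.arange(k) != labels[ni, nj])
                labels[i, j] = np.argmin(data + beta * smooth)
    return labels

# Toy example: a noisy two-level image that ICM should clean up.
rng = np.random.default_rng(0)
truth = np.zeros((40, 40))
truth[:, 20:] = 1.0
noisy = truth + rng.normal(0, 0.3, truth.shape)
seg = icm_segmentation(noisy, k=2, beta=2.0)
print(np.unique(seg))
```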
118

Marketingový význam body image mužov / Marketing significance of male body image

Benová, Natália January 2011 (has links)
The aim of this thesis is to analyse the key cultural factors determining the impact of male body perception on consumers' behavior, evaluate their significance, and reflect them in appropriate marketing recommendations. The thesis is divided into four major parts. The theoretical part provides the key information concerning the subject and constitutes the basis for the analytical part. The chapter Body Image Segmentation Potential demonstrates the potential of using body image to segment the male market in the Czech Republic. The questionnaire survey shows male attitudes to the subject of body image. The content analysis examines the depiction of men in Czech and Slovak TV commercials. The chapters are supplemented with secondary MML-TGI data. The conclusion of the thesis, as well as the partial conclusions, presents an evaluation of the analyses and marketing recommendations.
120

Abordagens para a segmentação de coronárias em ecocardiografia. / Approaches for coronary segmentation in echocardiography.

André Fernando Lourenço de Souza 03 August 2010 (has links)
A Ecocardiografia continua sendo a técnica de captura de imagens mais promissora, não-invasiva, sem radiação ionizante e de baixo custo para avaliação de condições cardíacas. Porém, é afetada consideravelmente por ruídos do tipo speckle, que são difíceis de serem filtrados. Por isso fez-se necessário fazer a escolha certa entre filtragem e segmentador para a obtenção de resultados melhores na segmentação de estruturas. O objetivo dessa pesquisa foi estudar essa combinação entre filtro e segmentador. Para isso, foi desenvolvido um sistema segmentador, a fim de sistematizar essa avaliação. Foram implementados dois filtros para atenuar o efeito do ruído speckle - Linear Scaling Mean Variance (LSMV) e o filtro de Chitwong - testados em imagens simuladas. Foram simuladas 60 imagens com 300 por 300 pixels, 3 modelos, 4 espessuras e 5 níveis de contrastes diferentes, todas com ruído speckle. Além disso, foram feitos testes com a combinação de filtros. Logo após, foi implementado um algoritmo de conectividade Fuzzy para fazer a segmentação e um sistema avaliador, seguindo os critérios descritos por Loizou, que faz a contagem de verdadeiro-positivos (VP) e falso-positivos (FP). Foi verificado que o filtro LSMV é a melhor opção para segmentação por conectividade Fuzzy. Foram obtidas taxas de VP e FP na ordem de 95% e 5%, respectivamente, e acurácia em torno de 95%. Para imagens ruidosas com alto contraste, aplicando a segmentação sem filtragem, a acurácia obtida foi na ordem de 60%. / Echocardiography remains the most promising imaging technique for assessing heart conditions: it is noninvasive, involves no ionizing radiation, and is inexpensive. On the other hand, it is considerably affected by noise, such as speckle, that is very difficult to filter. That is why it is necessary to make the right choice of filter and segmentation method to obtain the best results in image segmentation. The goal of this research was to evaluate this combination of filter and segmentation method. To that end, a segmentation system was developed to support the assessment. Two filters were implemented to mitigate the effect of speckle noise - Linear Scaling Mean Variance (LSMV) and the filter presented by Chitwong - and tested on simulated images. We simulated 60 images of 300 by 300 pixels, with 3 models, 4 thicknesses and 5 different levels of contrast, all with speckle noise. In addition, tests were made with combinations of filters. A Fuzzy Connectedness segmentation algorithm and an evaluation system were then implemented, following the criteria described by Loizou, counting true positives (TP) and false positives (FP). It was found that the LSMV filter is the best option for Fuzzy Connectedness segmentation. We obtained TP and FP rates of 95% and 5%, respectively, using LSMV, and an accuracy of about 95%. Using high-contrast noisy images without filtering, the accuracy obtained was on the order of 60%.
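For readers unfamiliar with local-statistics despeckling, the sketch below shows one common form of a local mean/variance (Lee-type) filter of the same family as LSMV: each pixel is pulled toward its local mean, with a gain that depends on how much the local variance exceeds an estimated noise variance. The window size, the crude noise estimate, and this particular weighting are assumptions; the exact LSMV formulation used in the thesis may differ.

```python
# Hypothetical sketch: local mean/variance (Lee-type) speckle filter.
import numpy as np
from scipy.ndimage import uniform_filter

def lee_type_despeckle(img, window=5):
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, window)
    local_sq_mean = uniform_filter(img ** 2, window)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0)

    # Crude noise estimate: the mean local variance over the whole image.
    noise_var = local_var.mean()

    # Gain near 1 in structured regions (high variance) and near 0 in
    # homogeneous regions, so speckle is smoothed while edges are preserved.
    gain = local_var / (local_var + noise_var + 1e-12)
    return local_mean + gain * (img - local_mean)

# Toy usage on a synthetic speckled image (multiplicative gamma speckle).
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[:, 32:] = 100.0
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
filtered = lee_type_despeckle(speckled)
print(np.abs(filtered - clean).mean() < np.abs(speckled - clean).mean())
```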
