About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

"Segmentação de imagens e validação de classes por abordagem estocástica" / Image segmentation and class validation in a stochastic approach

Gerhardinger, Leandro Cavaleri 13 April 2006
An important stage of automatic image analysis is segmentation, which aims to split an image into regions whose pixels exhibit a certain degree of similarity. Texture, usually formed by a random combination of pixel intensities, is an efficient feature that provides enough discriminative power to differentiate pixels from distinct regions. A considerable amount of research has been done on unsupervised techniques for image segmentation based on stochastic models, in which textures are defined as Markov random fields. A prominent method in this category is EM/MPM, an iterative algorithm that combines the EM technique for maximum-likelihood parameter estimation with the MPM segmentation algorithm, whose aim is to minimize the number of misclassified pixels in the image. This work studies stochastic models for segmentation and presents an implementation of the EM/MPM algorithm, together with a multiresolution approach. A new threshold-based scheme for estimating the initial parameters of the EM/MPM model is proposed, and the work shows how to incorporate the concept of annealing into the EM/MPM algorithm to improve segmentation. Additionally, the class validity problem (the search for the correct number of classes in the image) is studied, reviewing the most important techniques in the literature and proposing a new approach based on the gray-level distribution of the classes. Finally, the work extends the traditional EM/MPM technique to segmenting meshes in two and three dimensions.
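The EM half of EM/MPM described above can be illustrated in isolation. The sketch below runs plain EM for a two-class Gaussian mixture on a toy 1-D set of pixel intensities; it deliberately omits the MPM/Markov spatial term, and all variable names and constants are illustrative assumptions rather than the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "image": intensities drawn from two textures (classes)
pixels = np.concatenate([rng.normal(50, 5, 500), rng.normal(120, 8, 500)])

# Rough initial parameters, e.g. as obtained by thresholding the histogram
mu = np.array([40.0, 100.0])
sigma = np.array([10.0, 10.0])
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: posterior responsibility of each class for each pixel
    dens = pi * np.exp(-0.5 * ((pixels[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: maximum-likelihood re-estimation from the responsibilities
    n_k = resp.sum(axis=0)
    mu = (resp * pixels[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((resp * (pixels[:, None] - mu) ** 2).sum(axis=0) / n_k)
    pi = n_k / len(pixels)

labels = resp.argmax(axis=1)  # hard class assignment by maximum posterior
```

In the full EM/MPM algorithm the hard assignment would instead come from MPM inference over a Markov random field, which additionally penalizes label disagreement between neighboring pixels.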
262

Development of the VHP-Female Full-Body Computational Model and Its Applications for Biomedical Electromagnetic Modeling

Yanamadala, Janakinadh 28 January 2015
Computational modeling offers better insight into a wide range of bioelectrical and biomechanical problems and improved tools for the design of medical devices and the diagnosis of pathologies; electromagnetic modeling at low and high frequencies is particularly necessary. Modeling the electromagnetic, structural, thermal, and acoustic response of the human body to different internal and external stimuli is limited by the availability of numerically efficient computational human models. This study describes the development to date of a computational full-body human model, the Visible Human Project (VHP)-Female model. Its unique feature is full compatibility both with MATLAB and with specialized FEM computational software packages such as ANSYS HFSS/Maxwell 3D. This study also describes progress made to date in using the newly developed tools for segmentation. A visualization tool implemented within MATLAB is based on a customized version of the constrained 2D Delaunay triangulation method for intersecting objects. This thesis applies the VHP-Female model to a specific application, transcranial direct current stimulation (tDCS). tDCS has been beneficial in stimulating cortical activity and treating neurological disorders in humans. The placement of electrodes (cephalic versus extracephalic montages) is studied for optimal targeting of currents to a given functional area. Given the difficulty of obtaining in-vivo measurements of current density, modeling of conventional and alternative electrode montages via the FEM has been used to provide insight into tDCS montage performance. Future work and potential research areas, such as the study of bone quality, are also presented.
263

Detecção e classificação de sinalização vertical de trânsito em cenários complexos / Detection and classification of vertical traffic signs in complex scenarios

Hoelscher, Igor Gustavo January 2017
Mobility is an imprint of our civilization. Both freight and passenger transport share a huge infrastructure of connecting links operated with the support of a sophisticated logistic system. As an optimized symbiosis of mechanical and electrical modules, vehicles evolve continuously with the integration of technological advances and are engineered to offer the best in comfort, safety, speed and economy. Regulations organize the flow of road transport and its interactions, stipulating rules to avoid conflicts. But driving can become stressful under different conditions, leaving human drivers prone to misjudgments and creating accident conditions. Efforts to reduce traffic accidents, which may cause injuries and even deaths, range from re-education campaigns to new technologies. These topics have increasingly attracted the attention of researchers and industry to image-based Intelligent Transportation Systems. This work presents a study of techniques for detecting and classifying vertical traffic signs in images of complex traffic scenarios. The automatic visual sign-recognition system is intended to be used as an aid to a human driver or as input to an autonomous vehicle. Based on the regulations for road signs, two approaches to image segmentation and selection of regions of interest were tested. The first, color thresholding combined with Fourier descriptors, did not perform satisfactorily. Building on its principles, however, a new color-filtering method based on fuzzy logic was developed which, together with an algorithm that selects stable regions across different gray levels (MSER), gained robustness to partial occlusion and to different lighting conditions. For classification, two short convolutional neural networks are presented that recognize Brazilian and German traffic signs. The proposal is to skip complex calculations and hand-crafted features for filtering false positives prior to recognition, performing confirmation (the detection step) and classification simultaneously. State-of-the-art training and optimization methods improved the efficiency of the machine-learning stage. In addition, this work provides a new dataset of traffic scenarios from different regions of Brazil, containing 2,112 images at WSXGA+ resolution. Qualitative analyses are shown on the Brazilian dataset, and a quantitative analysis on the German dataset yielded results competitive with other methods: 94% accuracy in extraction and 99% accuracy in classification.
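The fuzzy color-filtering idea described above can be sketched with a trapezoidal membership function on a red-chromaticity channel; the membership breakpoints and the channel definition below are invented for illustration, not the thesis's tuned values.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, ramps up to 1 on [b, c], 0 above d."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / max(b - a, 1e-9), 0.0, 1.0)
    fall = np.clip((d - x) / max(d - c, 1e-9), 0.0, 1.0)
    return np.minimum(rise, fall)

def red_membership(r, g, b):
    """Hypothetical 'sign red' membership on an 8-bit red-chromaticity channel."""
    chroma = r.astype(float) - np.maximum(g, b).astype(float)
    return trapezoid(chroma, 20, 60, 255, 300)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 30, 30)   # strongly red pixel
img[1, 1] = (90, 80, 85)    # grayish pixel
mask = red_membership(img[..., 0], img[..., 1], img[..., 2])
```

A crisp region proposal could then threshold the membership map (e.g., `mask > 0.5`) before handing candidate regions to an MSER-style stability analysis.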
264

Segmentação semiautomática de conjuntos completos de imagens do ventrículo esquerdo / Semiautomatic segmentation of left ventricle in full sets of cardiac images

Rafael Siqueira Torres 05 April 2017
The medical field has benefited from tools built by computing and, at the same time, has driven the development of new techniques in several computing specialties. Among these techniques, segmentation aims to separate objects of interest in an image, drawing the health professional's attention to areas relevant to diagnosis. In addition, segmentation results can be used to reconstruct three-dimensional models, from which features can be extracted to assist the physician in decision making. However, the segmentation of medical images remains a challenge, as it is extremely dependent on the application and on the structures of interest present in the image. This dissertation presents a semiautomatic segmentation technique for the left-ventricle endocardium in sets of cardiac nuclear magnetic resonance images. The main contribution is segmentation that considers all the images from an examination, propagating the results obtained on previously processed images. Segmentation results are evaluated using objective metrics such as overlap, among others, against images provided by specialists in cardiology.
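The overlap evaluation mentioned above can be made concrete. The Dice coefficient below is the standard formulation of an overlap metric, shown on invented toy masks standing in for an automatic and a specialist contour:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1 = perfect overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

auto = np.zeros((8, 8), int); auto[2:6, 2:6] = 1      # 16-pixel square
manual = np.zeros((8, 8), int); manual[3:7, 2:6] = 1  # same square, shifted one row
score = dice(auto, manual)  # overlap = 12 pixels -> 2*12 / (16+16) = 0.75
```

Propagating a contour from slice to slice, as the dissertation does, would be validated by computing such a score per slice against the specialist reference.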
265

Mapeamento semântico com aprendizado estatístico relacional para representação de conhecimento em robótica móvel. / Semantic mapping with statistical relational learning for knowledge representation in mobile robotics.

Corrêa, Fabiano Rogério 30 March 2009
Most maps used in navigation tasks by mobile robots represent only spatial information about the environment. Other kinds of information, which could be obtained from the robot's sensors and incorporated into the representation, are neglected. Nowadays it is common for a mobile robot to carry distance sensors and a vision system, which would in principle allow it to accomplish complex and general tasks autonomously, given an adequate representation and a way to extract the necessary knowledge directly from the sensors. A possible representation in this context adds semantic information to metric maps, for example by segmenting the environment and labeling each of its parts. This work proposes a way to structure spatial information by creating a semantic map of the environment that represents, beyond obstacles, a link between them and the corresponding segmented images obtained by an omnidirectional vision system. The representation is implemented by a relational description of the domain that, when instantiated, produces a conditional random field in which inference is performed. Models that combine probability and first-order logic are more expressive and better suited to structuring spatial information into semantic information.
266

Efficient optimization for labeling problems with prior information: applications to natural and medical images

Bai, Junjie 01 May 2016
The labeling problem, thanks to its versatile modeling ability, is widely used in a variety of image analysis tasks. In practice, prior information is often available to be embedded in the model to increase accuracy and robustness. However, it is not always straightforward to formulate the problem so that the prior information is incorporated correctly, and it is even more challenging to ensure that the resulting model admits efficient algorithms for finding a globally optimal solution. In this dissertation, a series of natural and medical image segmentation tasks are modeled as labeling problems, each incorporating different prior information: ordering constraints between certain labels, soft enforcement of user input, multi-scale context between over-segmented regions and original voxels, multi-modality context priors, location context between multiple modalities, a star-shape prior, and a gradient-vector-flow shape prior. With judicious exploitation of each problem's structure, efficient and exact algorithms are designed for all proposed models. The efficient computation allows the proposed models to be applied to large natural and medical image datasets with a small memory footprint and reasonable runtime, and the global-optimality guarantee makes the methods robust to local noise and easy to debug. The proposed models and algorithms are validated in multiple experiments, using both natural and medical images, with promising results that are competitive with the state of the art.
267

Automated and interactive approaches for optimal surface finding based segmentation of medical image data

Sun, Shanhui 01 December 2012
Optimal surface finding (OSF), a graph-based optimization approach to image segmentation, represents a powerful framework for medical image segmentation and analysis. In many applications, a pre-segmentation is required to enable OSF graph construction, and the design of the cost function is critical to the success of OSF. This thesis addresses two issues in the context of OSF segmentation. First, a robust model-based segmentation method suitable for OSF initialization is introduced. Second, an OSF-based segmentation refinement approach is presented. For segmenting complex anatomical structures (e.g., lungs), a rough initial segmentation is required to apply an OSF-based approach. For this purpose, a novel robust active shape model (RASM) is presented. RASM matching in combination with OSF is investigated in the context of segmenting lungs with large lung cancer masses in 3D CT scans. The robustness and effectiveness of this approach are demonstrated on 30 lung scans containing 20 normal and 40 diseased lungs, where conventional segmentation methods frequently fail to deliver usable results. The developed RASM approach is generally applicable and suitable for large organs/structures. While providing high levels of performance in most cases, OSF-based approaches may fail in a local region in the presence of pathology or other local challenges. A new (generic) interactive refinement approach for correcting local segmentation errors within the OSF framework is proposed. Following the automated segmentation, the user can inspect the result and correct local or regional segmentation inaccuracies by (iteratively) providing clues regarding the location of the correct surface. This expert information is used to modify the previously calculated cost function, locally re-optimizing the underlying modified graph without the need to start a new optimization from scratch.
For refinement, a hybrid desktop/virtual reality user interface based on stereoscopic visualization technology and advanced interaction techniques is utilized for efficient interaction with the segmentations (surfaces). The proposed generic interactive refinement method is adapted to three applications. First, two refinement tools for 3D lung segmentation are proposed, and the performance is assessed on 30 test cases from 18 CT lung scans. Second, in a feasibility study, the approach is expanded to 4D OSF-based lung segmentation refinement and an assessment of performance is provided. Finally, a dual-surface OSF-based intravascular ultrasound (IVUS) image segmentation framework is introduced, application specific segmentation refinement methods are developed, and an evaluation on 41 test cases is presented. As demonstrated by experiments, OSF-based segmentation refinement is a promising approach to address challenges in medical image segmentation.
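The graph construction and min-cut solver used in OSF proper are beyond a short sketch, but the core idea, finding a minimum-cost surface under a hard smoothness constraint, can be conveyed with a 2-D, single-surface dynamic-programming analogue. This is a simplification for illustration, not the thesis's algorithm, and the cost values are invented.

```python
import numpy as np

def find_surface(cost, smooth=1):
    """Pick one row per column of a 2-D cost image, minimizing total cost
    subject to the hard smoothness constraint |r[x] - r[x-1]| <= smooth."""
    rows, cols = cost.shape
    total = cost[:, 0].copy()                 # best cost of a surface ending at (r, 0)
    back = np.zeros((rows, cols), dtype=int)  # best predecessor row per (r, x)
    for x in range(1, cols):
        new = np.full(rows, np.inf)
        for r in range(rows):
            lo, hi = max(0, r - smooth), min(rows, r + smooth + 1)
            prev = lo + int(np.argmin(total[lo:hi]))
            new[r] = total[prev] + cost[r, x]
            back[r, x] = prev
        total = new
    r = int(np.argmin(total))
    surface = [r]
    for x in range(cols - 1, 0, -1):  # backtrack the optimal surface
        r = int(back[r, x])
        surface.append(r)
    return surface[::-1]

# Low-cost boundary along row 1, with one locally degraded column (a "pathology")
cost = np.full((4, 5), 5.0)
cost[1, :] = 1.0
cost[1, 2] = 4.0  # boundary pixel with locally bad cost
cost[3, 2] = 0.0  # spurious strong edge far from the true boundary
surf = find_surface(cost, smooth=1)
```

With `smooth=1` the spurious low-cost pixel cannot be reached without paying for two large detours, so the smooth row-1 surface wins, mirroring how OSF resists locally misleading costs.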
268

Consensus Segmentation for Positron Emission Tomography: Development and Applications in Radiation Therapy

McGurk, Ross January 2013
The use of positron emission tomography (PET) in radiation therapy has continued to grow, especially since the development of combined computed tomography (CT) and PET imaging systems. Today, the biggest use of PET-CT is in oncology, where a glucose-analog radiotracer is rapidly incorporated into the metabolic pathways of a variety of cancers. Images representing the in-vivo distribution of this radiotracer are used for staging, delineation, and assessment of treatment response in patients undergoing chemotherapy or radiation therapy. While PET provides functional information, its image quality is adversely affected by its lower spatial resolution, and its noise characteristics are unfavorable owing to radiation dose concerns and patient compliance. These factors give PET images less detail and a lower signal-to-noise ratio (SNR) than images produced by CT, which complicates the use of PET within many areas of radiation oncology, particularly the delineation of targets for radiation therapy and the assessment of patient response to therapy. The development of segmentation methods that can provide accurate object identification in PET images under a variety of imaging conditions has been a goal of the imaging community for years.
The goals of this thesis are to: (1) investigate the effect of filtering on segmentation methods; (2) investigate whether combining individual segmentation methods can improve segmentation accuracy; (3) investigate whether consensus volumes can aid physicians of different experience levels in defining gross tumor volumes (GTVs) for head-and-neck cancer patients; and (4) investigate whether consensus volumes can be useful in assessing early treatment response in head-and-neck cancer patients. For this dissertation work, standard spherical objects with volumes ranging from 1.15 cc to 37 cc, and two irregularly shaped objects of 16 cc and 32 cc formed by deforming high-density plastic bottles, were placed in a standardized image-quality phantom and imaged at two contrasts (4:1 and 8:1 for the spheres; 4.5:1 and 9:1 for the irregular objects) and three scan durations (1, 2, and 5 minutes). For the comparison of image filters, Gaussian and bilateral filters matched to produce similar SNR in background regions were applied to raw unfiltered images. Objects were segmented using thresholding at 40% of the maximum intensity within a region of interest (ROI), an adaptive thresholding method that accounts for the signal of the object as well as the background, k-means clustering, and a seeded region-growing method adapted from the literature. Segmentation quality was assessed using the Dice similarity coefficient (DSC) and the symmetric mean absolute surface distance (SMASD). Further, models describing how DSC varies with object size, contrast, scan duration, filter choice, and segmentation method were fitted using generalized estimating equations (GEEs) and, for comparison, standard regression; GEEs account for the bounded, correlated, and heteroscedastic nature of the DSC metric. The analysis revealed that object size had the largest effect on DSC for spheres, followed by contrast and scan duration.
In addition, compared to filtering with a 5 mm full-width-at-half-maximum (FWHM) Gaussian filter, a 7 mm bilateral filter with moderate pre-smoothing (a 3 mm Gaussian; G3B7) produced significant improvements in 3 of the 4 segmentation methods for spheres. For the irregular objects, scan duration had the biggest effect on DSC values, followed by contrast. For the study applying consensus methods to PET segmentation, an additional gradient-based method was added to the collection of individual segmentation methods used in the filtering study. Objects in images acquired with 5-minute scan durations were filtered with a 5 mm FWHM Gaussian before being segmented by all individual methods. Two approaches to creating a volume reflecting the agreement between the individual methods were investigated: first, a simple majority-voting scheme (MJV), in which voxels segmented by three or more of the individual methods are included in the consensus volume; and second, the Simultaneous Truth and Performance Level Estimation (STAPLE) method, a maximum-likelihood methodology previously presented in the literature but never applied to PET segmentation. Improvements in accuracy matching or exceeding the best-performing individual method were observed, and importantly, both consensus methods provided robustness against poorly performing individual methods. In fact, the distributions of DSC and SMASD values for MJV and STAPLE closely match the distribution that would result if the best individual method were selected for every object (the best individual method varies by object). Given that the best individual method depends on object type, size, contrast, and image noise, and cannot be known before segmentation, consensus methods offer a marked improvement over the current standard of using just one of the individual segmentation methods considered in this dissertation.
To explore the potential application of consensus volumes to radiation therapy, the MJV consensus method was used to produce GTVs in a population of head-and-neck cancer patients. This GTV, and one created using simple 40% thresholding, were made available as guidance volumes to an attending head-and-neck radiation oncologist and a resident who had completed their head-and-neck rotation. The task for each physician was to manually delineate GTVs using the CT and PET images. Each patient was contoured three times by each physician: without guidance, and with guidance from either the MJV consensus volume or the 40% threshold volume. Differences in GTV volumes between physicians were not significant, nor were differences between GTV volumes regardless of which guidance volume was available; however, on average, 15-20% of the provided guidance volume lay outside the final physician-defined contour. In the final study, the MJV and STAPLE consensus volumes were used to extract maximum, peak, and mean SUV measurements in two baseline PET scans and one PET scan taken during the patients' prescribed radiation therapy. Mean SUV values derived from consensus volumes showed smaller variability than maximum SUV values. Baseline and intratreatment variability was assessed using a Bland-Altman analysis, which showed that baseline variability in SUV was lower than intratreatment changes in SUV. The techniques developed and reported in this thesis demonstrate how filter choice affects segmentation accuracy, how the use of GEEs more appropriately accounts for the properties of a common segmentation-quality metric, and how consensus volumes not only provide accuracy on par with the single best-performing individual method for a given activity distribution but also exhibit robustness against the variable performance of the individual segmentation methods that make up the consensus.
These properties make the use of consensus volumes appealing for a variety of tasks in radiation oncology.
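The MJV rule described above is simple to sketch. The array contents below are invented, and the default threshold corresponds to a strict majority of the five methods used in the dissertation:

```python
import numpy as np

def majority_vote(masks, threshold=None):
    """Consensus volume: a voxel is included when at least `threshold` of the
    individual binary segmentations include it (default: strict majority)."""
    masks = np.asarray(masks, dtype=bool)
    if threshold is None:
        threshold = masks.shape[0] // 2 + 1
    return masks.sum(axis=0) >= threshold

# Five hypothetical method outputs along a 1-D profile through a lesion
m = np.array([
    [0, 1, 1, 1, 0],
    [0, 1, 1, 0, 0],
    [1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0],
])
consensus = majority_vote(m)  # with 5 methods: include voxels with >= 3 votes
```

A single poorly performing method (e.g., the last row) cannot pull voxels into or out of the consensus on its own, which is the robustness property the dissertation reports.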
269

Satellite Image Processing with Biologically-inspired Computational Methods and Visual Attention

Sina, Md Ibne 27 July 2012
The human vision system is generally recognized as being superior to all known artificial vision systems. Visual attention, among the many processes related to human vision, is responsible for identifying relevant regions in a scene for further processing. In most cases, analyzing an entire scene is unnecessary and inevitably time-consuming, so considering visual attention can be advantageous. A subfield of computer vision in which this functionality is computationally emulated has shown high potential for solving real-world vision problems effectively. In this monograph, elements of visual attention are explored and algorithms are proposed that exploit such elements to enhance image-understanding capabilities. Satellite images are given special attention due to their practical relevance, their inherent complexity in terms of image content, and their resolution. Processing such large images using visual attention can be very helpful, since one can first identify relevant regions and deploy further detailed analysis in those regions only. Bottom-up features, which are derived directly from the scene contents, are at the core of visual attention and help identify salient image regions. In the literature, the use of intensity, orientation, and color as the dominant features for computing bottom-up attention is ubiquitous. The effects of incorporating an entropy feature on top of these are also studied; this investigation demonstrates that such integration makes visual attention more sensitive to fine details and hence retains the potential to be exploited in a suitable context. One interesting application of bottom-up attention, also examined in this work, is image segmentation: since low-saliency regions generally correspond to homogeneously textured regions in the input image, a model can be learned from a homogeneous region and used to group similar textures in other image regions.
Experimentation demonstrates that the proposed method produces realistic segmentations on satellite images. Top-down attention, on the other hand, is influenced by the observer's current state, such as knowledge, goals, and expectations. It can be exploited to locate target objects based on various features, and it increases search or recognition efficiency by concentrating on the relevant image regions only, which is very helpful when processing large images such as satellite images. A novel algorithm for computing top-down attention is proposed that learns and quantifies important bottom-up features from a set of training images and enhances such features in a test image in order to localize objects with similar features. An object recognition technique is then deployed that extracts potential target objects from the computed top-down attention map and attempts to recognize them. An object descriptor is formed based on physical appearance, using both texture and shape information; this combination proves especially useful in the object recognition phase. The proposed texture descriptor is based on Legendre moments computed on local binary patterns, while shape is described using Hu moment invariants. Several tools and techniques, such as different types of moments of functions and combinations of different measures, have been applied during experimentation. The developed algorithms are generalized, efficient, and effective, and have the potential to be deployed for real-world problems. A dedicated software testing platform has been designed to facilitate the manipulation of satellite images and to support a modular and flexible implementation of computational methods, including various components of visual attention models.
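The local-binary-pattern stage of the texture descriptor mentioned above (before the Legendre-moment step, which is omitted here) can be sketched as follows. The 8-neighbor ordering is one common convention and not necessarily the one used in the monograph:

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour local binary patterns for the interior pixels of a 2-D
    grayscale array: each neighbour >= centre contributes one bit."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]  # centre pixels (interior only)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

flat = np.full((3, 3), 7)  # perfectly homogeneous patch
codes = lbp_codes(flat)    # all neighbours equal the centre -> all 8 bits set
```

A histogram of such codes over a region is what a moment-based descriptor would then summarize.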
270

Statistical methods for coupling expert knowledge and automatic image segmentation and registration

Kolesov, Ivan A. 20 December 2012
The objective of the proposed research is to develop methods that couple an expert user's guidance with automatic image segmentation and registration algorithms. Often, complex processes such as fire, anatomical changes and variations in human bodies, or unpredictable human behavior produce the target images; in these cases, creating a model that precisely describes the process is not feasible. A common solution is to make simplifying assumptions when performing detection, segmentation, or registration tasks automatically, but when these assumptions are not satisfied, the results are unsatisfactory. Hence, removing these often stringent assumptions at the cost of minimal user input is considered an acceptable trade-off. Three milestones toward this goal have been achieved. First, an interactive image segmentation approach was created in which the user is coupled in a closed-loop control system with a level set segmentation algorithm, combining the user's expert knowledge with the speed of automatic segmentation. Second, a stochastic point-set registration algorithm is presented; the point sets can be derived from simple user input (e.g., a thresholding operation), time-consuming correspondence labeling is not required, and common smoothness assumptions on the non-rigid deformation field are removed. Third, a stochastic image registration algorithm is designed to capture large misalignments. For future research, several improvements to the registration are proposed, and an iterative, landmark-based segmentation approach that couples segmentation and registration is envisioned.
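The stochastic, correspondence-free registration of the second milestone is too involved for a short sketch, but the classical least-squares rigid alignment it relaxes, the SVD-based Kabsch/Procrustes solution for matched point sets, is compact. Names and test data below are illustrative:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid alignment (rotation R, translation t) of matched
    2-D point sets via the SVD (Kabsch) solution, so that dst ~= src @ R.T + t."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t

rng = np.random.default_rng(1)
pts = rng.normal(size=(20, 2))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
moved = pts @ R_true.T + np.array([2.0, -1.0])
R, t = rigid_align(pts, moved)  # recovers R_true and the translation
```

Stochastic approaches such as the one proposed in the thesis dispense with the known correspondences this closed-form solution requires.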
