1.
Interactive shadow removal. Gong, Han. January 2015 (has links)
Shadows are ubiquitous in images and video, and their removal is of interest in both computer vision and graphics. This thesis presents four methods for interactive shadow removal from single images, which improve on prior work in user interaction and in the quality and robustness of removal. We also present a state-of-the-art ground-truth data set spanning varied scene categories, together with applications to shadow editing and an extension to video data processing.
2.
Nonrigid surface tracking, analysis and evaluation. Li, Wenbin. January 2014 (has links)
Estimating dense image motion, or optical flow, on a real-world nonrigid surface is a fundamental research issue in computer vision and is applicable to a wide range of fields, including medical imaging, computer animation and robotics. However, nonrigid surface tracking is a difficult challenge because complex nonrigid deformation, accompanied by image blur and natural noise, can cause severe pixel-intensity changes across an image sequence. This violates the basic intensity-constancy assumption of most visual tracking methods. In this thesis, we show that local geometric constraints and long-term feature matching techniques can improve local motion preservation and reduce error accumulation in optical flow estimation. We also demonstrate that combining RGB data with additional information from other sensing channels can improve tracking performance in blurry scenes, and allows us to create nonrigid ground truth from real-world scenes. First, we introduce a local motion constraint based on a Laplacian mesh representation of nonrigid surfaces. This additional constraint term encourages local smoothness whilst simultaneously preserving nonrigid deformation. The results show that our method outperforms most global-constraint-based models on several popular benchmarks. Second, we observe that the inter-frame blur in general video sequences is near linear and can be roughly represented by 3D camera motion. To recover dense correspondences from a blurred scene, we therefore design a mechanical device to track camera motion and formulate this as a directional constraint within the optical flow framework. This improves optical flow in blurred scenes. Third, inspired by recent developments in long-term feature matching, we introduce an optimisation framework for dense long-term tracking, applicable to any existing optical flow method, using anchor patches.
Finally, we observe that traditional nonrigid surface analysis suffers from a lack of suitable ground truth datasets covering real-world noise and long image sequences. To address this, we construct a new ground truth by simultaneously capturing both normal RGB and near-infrared images. The latter spectrum contains dense markers, visible only in the infrared, that represent ground truth positions. Our benchmark contains many real-world scenes and properties absent from existing ground truth datasets.
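The intensity-constancy assumption plus a smoothness term is the core of variational optical flow. As a rough illustration of that idea (a generic Horn-Schunck-style estimator, not the thesis's Laplacian-mesh or directional-constraint methods; all names are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def horn_schunck(im1, im2, alpha=1.0, n_iter=200):
    """Dense flow (u, v) minimising brightness-constancy error plus an
    alpha-weighted smoothness term (classic Horn-Schunck iteration)."""
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    Ix = np.gradient(im1, axis=1)   # horizontal spatial derivative
    Iy = np.gradient(im1, axis=0)   # vertical spatial derivative
    It = im2 - im1                  # temporal derivative
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        # Local flow averages implement the smoothness coupling.
        u_avg = uniform_filter(u, size=3)
        v_avg = uniform_filter(v, size=3)
        t = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * t
        v = v_avg - Iy * t
    return u, v
```

On a smooth intensity ramp translated one pixel to the right, this recovers a near-uniform flow of one pixel per frame; on blurred or deforming surfaces it degrades, which is exactly the failure mode the thesis targets.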
3.
Medical imaging segmentation assessment via Bayesian approaches to fusion, accuracy and variability estimation with application to head and neck cancer. Ghattas, Andrew Emile. 01 August 2017 (has links)
With the advancement of technology, medical imaging has become a fast-growing area of research. Some imaging questions require little physician analysis, such as diagnosing a broken bone from a 2-D X-ray image. More complicated questions, using 3-D scans such as computerized tomography (CT), can be much more difficult to answer: for example, estimating tumor growth to evaluate malignancy, which informs whether intervention is necessary. This requires careful delineation of different structures in the image, for example, distinguishing tumor from normal tissue; this is referred to as segmentation. Currently, the gold standard of segmentation is for a radiologist to manually trace structure edges in the 3-D image; however, this can be extremely time consuming, and manual segmentation results can differ drastically between and even within radiologists. A more reproducible, less variable, and more time-efficient segmentation approach would drastically improve medical treatment. This potential, together with the continued increase in computing power, has led to computationally intensive semiautomated segmentation algorithms. Their widespread use is limited by the difficulty of validating their performance. Fusion models, such as STAPLE, have been proposed as a way to combine multiple segmentations into a consensus ground truth, which allows both manual and semiautomated segmentations to be evaluated against that consensus. Once a consensus ground truth is obtained, a multitude of approaches have been proposed for evaluating different aspects of segmentation performance: segmentation accuracy, and between- and within-reader variability.
The focus of this dissertation is threefold. First, a simulation-based tool is introduced to allow validation of fusion models. The simulation properties closely follow a real dataset, to ensure that they mimic reality. Second, a hierarchical Bayesian fusion model is proposed to estimate a consensus ground truth within a robust statistical framework. The model is validated using the simulation tool and compared to other fusion models, including STAPLE. Additionally, the model is applied to real datasets and the consensus ground truth estimates are compared across different fusion models. Third, a hierarchical Bayesian performance model is proposed to estimate segmentation-method-specific accuracy and between- and within-reader variability. An extensive simulation study is performed to validate the model's parameter estimation and coverage properties. Additionally, the model is fit to a real data source and performance estimates are summarized.
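STAPLE estimates a consensus segmentation and per-rater performance jointly by expectation-maximization. A simplified binary sketch of that idea (illustrative only, not the dissertation's hierarchical Bayesian model; voxel independence and a fixed foreground prior are assumed):

```python
import numpy as np

def staple(D, prior=0.5, n_iter=30):
    """Simplified binary STAPLE. D is a (raters, voxels) 0/1 array.
    Returns the consensus foreground probability W per voxel and
    per-rater sensitivity p and specificity q."""
    R, N = D.shape
    p = np.full(R, 0.9)   # initial sensitivities
    q = np.full(R, 0.9)   # initial specificities
    for _ in range(n_iter):
        # E-step: posterior probability each voxel is truly foreground.
        log_a = np.log(prior) + (D * np.log(p[:, None])
                 + (1 - D) * np.log(1 - p[:, None])).sum(0)
        log_b = np.log(1 - prior) + (D * np.log(1 - q[:, None])
                 + (1 - D) * np.log(q[:, None])).sum(0)
        W = 1.0 / (1.0 + np.exp(log_b - log_a))
        # M-step: re-estimate each rater's performance against W.
        p = (D * W).sum(1) / W.sum()
        q = ((1 - D) * (1 - W)).sum(1) / (1 - W).sum()
        p = np.clip(p, 1e-6, 1 - 1e-6)
        q = np.clip(q, 1e-6, 1 - 1e-6)
    return W, p, q
```

Thresholding W at 0.5 yields the consensus mask; the estimated p and q are exactly the kind of rater-specific performance quantities the dissertation's richer model generalizes.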
4.
"Perceived neighborhood walkability" and physical activity in four urban settings in South Africa. Isiagi, Moses. 24 February 2020 (has links)
Introduction.
In Africa, studies on the associations between perceived neighbourhood walkability and physical activity, particularly by socio-economic status (SES), remain scarce. This study explores these associations by validating the Neighbourhood Environmental Walkability Scale (NEWS-Africa) in an urban setting in South Africa, to gain a better understanding of the construct of neighbourhood "walkability".
Methods.
A convenience sample of residents (n=52, 18-65 yr, 81% women) from four suburbs of an urban metropole in the Western Cape Province, South Africa (viz. Langa, Khayelitsha, Pinelands and Table View) was recruited through invitations following community gatherings and church services. Measures were obtained on perceived neighbourhood walkability, self-reported and measured physical activity, and socio-economic status. Langa and Khayelitsha represented two primarily low-SES townships, whereas Pinelands and Table View represented higher-SES suburbs. Participants completed the 76-item (13-subscale) NEWS-Africa survey by structured interview and reported weekly minutes of walking for transport and recreation using items from the International Physical Activity Questionnaire. Objective data on physical activity were collected using accelerometers, and ground-truthing with geographic information systems (GIS) was used to assess the neighbourhood environment in a 1000 m buffer around each geocoded household. The research was carried out in three parts: 1) evaluating the reliability and construct validity of the NEWS-Africa instrument between the two SES groups; 2) examining some of the walkability constructs and subscales of the NEWS-Africa instrument using GIS and ground-truthing, and the extent to which the SES of communities influenced these associations; 3) examining differences in self-reported physical activity (by domain) and measured moderate-to-vigorous physical activity (MVPA) when groups are divided according to SES and GIS walkability (1000 m buffers), and whether the data support the notion of utilitarian walking in low-SES groups, irrespective of built environment attributes.
Results.
For the combined SES groups, test-retest reliability was good, with 10 of the 13 NEWS-Africa scales significantly and positively correlated; Spearman's correlations ranged from rs = -0.43 to rs = 0.79 (both p < 0.005). For construct validity of the NEWS-Africa instrument against self-reported physical activity, only three scales were related to walking for transport: the Neighbourhood surroundings scale (rs = -0.34, p = 0.01) and the Safety from traffic scale (rs = 0.34, p < 0.05); in addition, people in the low-SES and combined SES groups perceived public bus/train stops to be nearer than they actually were (rs = -0.50, p < 0.05). Of the 13 NEWS-Africa scales, 6 were significantly correlated with GIS-measured walkability index parameters. The Roads and walking paths scale was positively associated with GIS-measured walkability (rs = 0.3), and the Stranger danger scale was negatively associated with it (rs = -0.4). For GIS-measured land use mix, 3 of the NEWS-Africa scales were correlated: for the entire sample, the Places for walking, cycling and playing overall scale (rs = 0.3) and the Neighbourhood surroundings scale (rs = 0.3) were positively associated, whereas the Stranger danger scale was inversely correlated (rs = -0.6). Intersection density measured with GIS was significantly and positively associated with the Roads and walking paths scale for all groups combined (rs = 0.3). For GIS-measured walkability versus physical activity, there were no associations with any self-reported physical activity domain within the 1000 m buffer for any group.
However, for objectively measured physical activity in the 1000 m buffer, vigorous physical activity was inversely associated with intersection density in the low-SES group (rs = -0.39), while moderate (rs = -0.29) and total MVPA (rs = -0.31) were inversely associated with intersection density in the high-SES group.
Conclusions.
The overall results of the current study generally show a mismatch between the perceived and objectively assessed built environment, particularly in low-income communities. Furthermore, in low-SES communities, we failed to show the expected relationships between attributes of the built environment and physical activity, suggesting that physical activity in these communities is more utilitarian in nature and, as such, may be less influenced by aspects of the built environment. In summary, the data suggest that the environment (including crime, poor access to physical activity facilities, and public transportation consisting predominantly of buses) is less associated with physical activity in LMICs and in more disadvantaged communities, where physical activity serves utilitarian rather than recreational purposes. This study stemmed from the need to broaden research on the relationship between the built environment and physical activity in terms of walkability constructs. The findings also suggest that the definition of the construct of walkability should be re-examined in relation to low-SES settings.
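The association analyses above rest on Spearman rank correlations. As a minimal illustration of how such an rs is computed (the numbers here are made-up, not study data):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical scores: a NEWS-style perception scale vs. a GIS walkability
# index for eight respondents. These toy values are perfectly monotone,
# so the rank correlation comes out at 1.0.
perceived = np.array([2.1, 3.4, 2.8, 4.0, 3.1, 1.9, 3.7, 2.5])
gis_index = np.array([0.8, 1.6, 1.1, 2.2, 1.4, 0.7, 1.9, 1.0])
rs, p = spearmanr(perceived, gis_index)
```

Because Spearman's method only uses ranks, it captures monotone associations like the perception-versus-GIS mismatches reported above without assuming linearity.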
5.
Capabilities of LANDSAT-5 Thematic Mapper (TM) data in studying soybean and corn crop variables. Thenkabail, Prasad Srinavasa. January 1992 (has links)
No description available.
6.
Object Trackers Performance Evaluation and Improvement with Applications using High-order Tensor. Pang, Yu. January 2020 (has links)
Visual tracking is one of the fundamental problems in computer vision and has attracted a great amount of research effort over the decades: hundreds of visual tracking algorithms, or trackers for short, have been developed, and a wealth of public datasets is available alongside them. As the number of trackers grows, evaluating which tracker is better becomes a common problem, and many metrics have been proposed together with numerous evaluation datasets. In my research, we first apply tracking in a practical setting: tracking multiple objects in a restricted scene at very low frame rate. This poses a unique challenge in that image quality is low and we cannot assume consecutive frames are close in time. We design a framework that utilizes background subtraction and object detection, then applies template-matching algorithms to achieve tracking by detection. While exploring applications of tracking algorithms, we noticed an unavoidable subjective bias when authors compare their proposed tracker with others: it is non-trivial for authors to optimize other trackers, while they can reasonably tune their own tracker to its best. Our assumption is that authors give other trackers their default settings, so the performances of those trackers are less biased. We therefore apply a leave-their-own-tracker-out strategy to weigh the performances of the other trackers, and derive four metrics to justify the results. Besides biases in evaluation, the datasets used as ground truth may not be perfect either: because they are labeled by human annotators, they are prone to label errors, especially under partial visibility and deformation. We demonstrate some human errors in existing datasets and propose smoothing techniques to detect and correct them. We use a two-step adaptive image-alignment algorithm to find the canonical view of the video sequence, then use different techniques to smooth the trajectories to certain degrees. The results show this can slightly improve the trained model, but it would overfit if overcorrected. With a clear understanding of and reasonable approaches to the visual tracking scenario, we apply these principles to multi-target tracking. We formulate it as a multi-dimensional assignment problem and encode the motion information in a high-order tensor framework. We propose to solve it by rank-1 tensor approximation, using a tensor power iteration algorithm to obtain the solution efficiently. It applies to pedestrian tracking, aerial video tracking, and curvilinear-structure tracking in medical video. Furthermore, the proposed framework also fits the affinity measurement of multiple objects simultaneously: we propose the Multiway Histogram Intersection to obtain similarities between histograms of more than two targets. Using the tensor power iteration algorithm, we show it can be applied in several multi-target tracking applications. / Computer and Information Science
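The rank-1 tensor approximation at the heart of such a framework can be computed with a higher-order power method. A small generic sketch (standard HOPM for a 3-way tensor, not the thesis's exact formulation):

```python
import numpy as np

def rank1_power_iteration(T, n_iter=100, seed=0):
    """Best rank-1 approximation lambda * (u ⊗ v ⊗ w) of a 3-way tensor T
    via alternating higher-order power iterations."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T.shape[0]); u /= np.linalg.norm(u)
    v = rng.standard_normal(T.shape[1]); v /= np.linalg.norm(v)
    w = rng.standard_normal(T.shape[2]); w /= np.linalg.norm(w)
    for _ in range(n_iter):
        # Contract T against two factors to update the third, then renormalize.
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)
    return lam, u, v, w
```

In a multi-dimensional assignment setting, the tensor entries would hold affinities over candidate associations across frames, and the dominant rank-1 component picks out a consistent assignment.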
7.
Study on the pattern recognition enhancement for matrix factorizations with automatic relevance determination. Tao, Hau. 01 December 2018 (has links)
Learning the parts of objects has drawn increasing attention in computer science recently, playing an important role in applications such as object recognition, self-driving cars, and image processing. However, existing approaches such as traditional non-negative matrix factorization (NMF), principal component analysis (PCA), and vector quantization (VQ) do not discover the ground-truth bases, the basic components that represent objects. In this thesis, I study pattern recognition enhancement by combining non-negative matrix factorization (NMF) with automatic relevance determination (ARD). The main contribution is a new technique combining the Expectation Maximization (EM) algorithm with ARD to discover the ground-truth bases of datasets, which I compare with alternatives such as traditional NMF, sparseness constraints, and graph embedding on pattern recognition problems, to verify whether my method outperforms the others in accuracy. The technique is tested on a variety of datasets, from simple to complex and from synthetic to real. To compare performance, I split each dataset into 10 random partitions for training and testing (10-fold cross-validation), classify with a Euclidean-distance rule, and measure accuracy. The results show that the proposed method achieves higher accuracy than the others and works well on pattern recognition problems with missing data.
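Standard NMF with multiplicative updates is the baseline such work builds on; ARD would additionally place a relevance prior on each component to prune irrelevant ones. A minimal Lee-Seung sketch (Frobenius objective; the ARD term is omitted and all names are illustrative):

```python
import numpy as np

def nmf(V, rank, n_iter=500, eps=1e-9, seed=0):
    """Basic NMF V ≈ W @ H via Lee-Seung multiplicative updates.
    ARD, as in the thesis, would add a per-column relevance weight on W;
    it is left out here for brevity."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates keep W and H non-negative by construction.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

On data that truly is a product of non-negative factors, the reconstruction error drives toward zero; the point of ARD is to recover the correct number of such parts automatically rather than fixing `rank` by hand.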
8.
Caractériser et détecter les communautés dans les réseaux sociaux / Characterising and detecting communities in social networks. Creusefond, Jean. 21 February 2017 (links)
In this thesis, I first present a new way of characterising communities from a network of timestamped messages. I show that the structure of this network is linked with communities: communication structures are over-represented inside communities, while diffusion structures appear mainly on the boundaries. Then, I propose to evaluate communities with a new quality function, compacity, which measures the propagation speed of communications within communities. I also present Lex-Clustering, a new community detection algorithm based on the LexDFS graph traversal that reproduces some characteristics of information diffusion models. Finally, I present a methodology for linking quality functions and ground truths. I introduce the concept of contexts, sets of ground truths that are similar in some way. I implemented this methodology in a software package called CoDACom (Community Detection Algorithm Comparator, codacom.greyc.fr), which also provides many community detection tools.
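Comparing a detected partition against a ground truth, as a tool like CoDACom must, commonly uses a measure such as normalized mutual information. A compact sketch (a standard NMI with arithmetic-mean normalisation; not necessarily the measure used in the thesis):

```python
import numpy as np

def nmi(labels_a, labels_b):
    """Normalized mutual information between two partitions of the
    same node set, given as parallel label lists."""
    a = np.asarray(labels_a)
    b = np.asarray(labels_b)
    n = len(a)
    ca, cb = np.unique(a), np.unique(b)
    # Contingency table: how often cluster i of A co-occurs with j of B.
    C = np.array([[np.sum((a == i) & (b == j)) for j in cb] for i in ca],
                 dtype=float)
    Pij = C / n
    Pi = Pij.sum(1, keepdims=True)
    Pj = Pij.sum(0, keepdims=True)
    nz = Pij > 0
    mi = (Pij[nz] * np.log(Pij[nz] / (Pi @ Pj)[nz])).sum()
    ha = -(Pi[Pi > 0] * np.log(Pi[Pi > 0])).sum()
    hb = -(Pj[Pj > 0] * np.log(Pj[Pj > 0])).sum()
    return mi / ((ha + hb) / 2)
```

NMI is 1 for identical partitions (up to relabelling) and drops toward 0 as they diverge, which makes it convenient for ranking detection algorithms against a ground-truth context.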
9.
Projeto e implementação do equipamento para tomografia com nêutrons do IPEN-CNEN/SP / Design and development of a neutron tomography facility at the IPEN-CNEN/SP. SCHOUERI, ROBERTO M. 22 June 2016 (has links)
No description available. / Dissertation (Master's in Nuclear Technology) / IPEN/D / Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
10.
Projeto e implementação do equipamento para tomografia com nêutrons do IPEN-CNEN/SP / Design and development of a neutron tomography facility at the IPEN-CNEN/SP. SCHOUERI, ROBERTO M. 22 June 2016 (has links)
In this dissertation, a neutron tomography facility was developed; it is operational and installed at irradiation channel 14 of the IEA-R1 Nuclear Research Reactor at IPEN-CNEN/SP. The images presented in this work are of objects selected to highlight one of the main applications of the technique: the study of hydrogenous materials, even when enclosed in a thick layer of certain metals. With this equipment, a complete tomography can be obtained in 400 s, with a maximum spatial resolution of 205 μm. These characteristics are comparable to those of the most advanced facilities operating in other countries, and yield images of sufficient quality for both qualitative and quantitative analyses of the inspected objects. The implementation of the neutron tomography technique opens the possibility of new lines of research, since it provides a new tool for inspecting objects that gives a view of their internal structure, which is not always possible with two-dimensional imaging methods. / Dissertation (Master's in Nuclear Technology) / IPEN/D / Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP