About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A comparative study of image segmentation by application of normalized cut on graphs

Ferreira, Anselmo Castelo Branco 17 August 2018
Advisor: Marco Antonio Garcia de Carvalho / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Tecnologia / Previous issue date: 2011 / Abstract: Graph partitioning has been widely used as a means of image segmentation. One way to partition graphs is a technique known as Normalized Cut, which analyzes the eigenvectors of a graph's Laplacian matrix and uses some of them to perform the cut. This dissertation proposes applying the Normalized Cut to graphs derived from Quadtree and Component Tree models of an image in order to perform segmentation. Segmentation experiments with the Normalized Cut on these models are carried out, and a dedicated benchmark compares and ranks the results against other techniques proposed in the literature. The results are promising and allow us to conclude that using other graph-based image models with the Normalized Cut can produce better segmentations. One of the models brings an additional benefit: it generates a graph representation of the image with fewer nodes than more traditional representations. / Degree: Master in Technology (Technology and Innovation)
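
A minimal sketch of the spectral Normalized Cut bipartition this abstract builds on (Shi and Malik's formulation), in Python, assuming a precomputed symmetric affinity matrix W; the median threshold and toy graph are illustrative assumptions, not the thesis's implementation:

import numpy as np

def ncut_bipartition(W):
    """Bipartition a graph by the second-smallest generalized eigenvector
    of (D - W) y = lambda * D y, i.e. the spectral Normalized Cut."""
    d = W.sum(axis=1)
    d_isqrt = np.diag(1.0 / np.sqrt(d))
    # Symmetric normalized Laplacian: D^(-1/2) (D - W) D^(-1/2)
    L_sym = d_isqrt @ (np.diag(d) - W) @ d_isqrt
    vals, vecs = np.linalg.eigh(L_sym)      # eigenpairs in ascending order
    fiedler = d_isqrt @ vecs[:, 1]          # map back to the generalized problem
    return fiedler > np.median(fiedler)     # boolean label per node

# Toy usage: two cliques joined by weak edges split cleanly in two.
W = np.ones((6, 6)) - np.eye(6)
W[:3, 3:] = W[3:, :3] = 0.01
print(ncut_bipartition(W))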
2

Digital image segmentation combining watershed and normalized cut

Pinto, Tiago Willian, 1985- 25 August 2018
Advisors: Marco Antonio Garcia de Carvalho, Paulo Sérgio Martins Pedro / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Tecnologia / Previous issue date: 2014 / Abstract: In computer vision, the importance of image segmentation is matched only by its complexity. Accurately interpreting the semantics of an image involves countless variables and conditions, which leaves a vast field open to researchers. The purpose of this work is to implement an image segmentation method by combining four techniques: the Watershed Transform, the Hierarchical Watershed, the Contextual Spaces Algorithm, and the Normalized Cut. The Watershed Transform is an image segmentation technique from the field of Mathematical Morphology based on region growing; an efficient way to implement it is through the Image Foresting Transform. It produces an over-segmented image, which makes visual interpretation of the result difficult. One way to simplify and reduce the number of regions is to build a scale space called the Hierarchical Watershed, which groups regions through a threshold representing a characteristic of the relief. The Contextual Spaces Algorithm is a re-ranking technique used in the field of Context-Based Image Retrieval; it exploits the similarity between the different objects of a collection by analyzing the context among them. The Normalized Cut is a technique that analyzes the degree of dissimilarity between regions and has its foundations in spectral graph theory. The Hierarchical Watershed is a multiscale approach to analyzing watershed regions, which enables the extraction of metrics that can serve as input for applying the Normalized Cut. The proposal of this project is to combine these techniques, implementing a segmentation method that exploits the benefits of each one, varying among different Hierarchical Watershed metrics with the Normalized Cut and comparing the results obtained. / Degree: Master in Technology (Technology and Innovation)
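
The watershed-then-merge pipeline described above can be outlined with scikit-image, which ships both a watershed and a normalized-cut merger for region adjacency graphs. A hedged sketch follows: the marker count and parameters are illustrative, in scikit-image versions before 0.20 the graph module lives under skimage.future.graph, and the thesis's full method (Hierarchical Watershed metrics, Contextual Spaces Algorithm) is not reproduced here:

import numpy as np
from skimage import data, filters, segmentation, graph
from skimage.color import rgb2gray

img = data.coffee()                          # any RGB image
gradient = filters.sobel(rgb2gray(img))      # relief for the watershed

# Over-segment: an integer markers argument seeds that many basins.
labels_ws = segmentation.watershed(gradient, markers=500, compactness=0.001)

# Region adjacency graph weighted by colour similarity, then merge the
# over-segmented basins with recursive two-way normalized cuts.
rag = graph.rag_mean_color(img, labels_ws, mode='similarity')
labels_ncut = graph.cut_normalized(labels_ws, rag)
print(labels_ws.max(), '->', labels_ncut.max(), 'regions')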
3

Counting of Mostly Static People in Indoor Conditions

Khemlani, Amit A 09 November 2004
Counting people in video is a challenging problem. The scientific challenge arises from the fact that although the task is well defined, the imaging scenario is not well constrained: the background scene is uncontrolled, lighting is complex and varying, and image resolution, both spatial and temporal, is usually poor, especially in pre-stored surveillance videos. Passive counting of people from video has many practical applications, such as monitoring the number of people sitting in front of a TV set, counting people in an elevator, counting people passing through a security door, and counting people in a mall. This has led to some research in automated people counting. Most of this work addresses counting pedestrians in outdoor settings or moving subjects in indoor settings; little work has been done on counting people who are not moving around, and very little on counting that can handle harsh variations in illumination. In this thesis, we explore a design that handles such issues at the pixel level using photometry-based normalization, and at the feature level by exploiting the spatio-temporal coherence present in the change seen in the video. We worked on a home dataset, with subjects watching television, and a laboratory dataset, with subjects working. The people counter is designed for video that is temporally sparsely sampled, with 15 seconds between consecutive frames. The specific computer vision methods used involve image intensity normalization, frame-to-frame differencing, motion accumulation using an autoregressive model, and grouping in the spatio-temporal volume. The experimental results show that the algorithm is not very susceptible to lighting changes: given an empty scene with just lighting change it usually produces a count of zero, and it can count under varying illumination. It can count people even if they are partially visible. Counts are generated for any moving objects in the scene; it does not yet try to distinguish between humans and non-humans. Counting errors are concentrated around frames with large motion events, such as a person moving out of the scene.
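
A minimal numpy sketch of the pixel-level chain the abstract outlines (photometric normalization, frame-to-frame differencing, autoregressive motion accumulation); the decay factor and threshold are illustrative assumptions, not values from the thesis:

import numpy as np

def normalize(frame, eps=1e-6):
    """Crude photometric normalization: remove global gain/offset so that
    lighting swings between sparsely sampled frames do not register as change."""
    f = frame.astype(np.float64)
    return (f - f.mean()) / (f.std() + eps)

def motion_accumulation(frames, rho=0.7, thresh=1.0):
    """Frame differencing followed by first-order autoregressive accumulation:
    acc_t = rho * acc_(t-1) + |diff_t|. Persistent change (a seated person
    shifting) accumulates, while one-off sensor noise decays away."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    masks = []
    prev = normalize(frames[0])
    for frame in frames[1:]:
        cur = normalize(frame)
        acc = rho * acc + np.abs(cur - prev)
        masks.append(acc > thresh)  # candidate "person" pixels per step
        prev = cur
    return masks  # connected groups of these masks in the spatio-temporal
                  # volume would then be counted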
4

Object Extraction from Images/Videos Using a Genetic Algorithm Based Approach

Yilmaz, Turgay 01 January 2008
The increasing use of digital video and images has created a need for modeling and querying their semantic content. Manual annotation techniques for defining semantic content are both time-consuming and limited in their querying capabilities, so content-based information retrieval in the multimedia domain requires extracting the semantic content automatically. The semantic content is usually defined by the objects in images and videos. In this thesis, a Genetic Algorithm (GA) based object extraction and classification mechanism is proposed for extracting the content of videos and images. Object extraction is cast as a classification problem, and a GA-based classifier is proposed. Candidate objects are extracted from videos and images using Normalized Cut segmentation and sent to the classifier. Objects are defined with the Best Representative and Discriminative Feature (BRDF) model, where the features are MPEG-7 descriptors; the classifier's decisions are computed from these features and the BRDF model. The classifier improves itself over time through the genetic operations of the GA. In addition, the system supports fuzziness by producing multiple categorizations and fuzzy decisions about objects. Beyond the base model, a statistical feature-importance determination method is proposed to generate the BRDF model of the categories automatically. A platform-independent application of the proposed system is also implemented.
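
A hedged sketch of the kind of GA-based classifier described here, with the BRDF model reduced to a per-feature weight vector and the MPEG-7 descriptors abstracted into precomputed feature arrays; population size, selection scheme, and mutation settings are illustrative assumptions, not the thesis's design:

import numpy as np
rng = np.random.default_rng(0)

def fitness(weights, feats, labels, prototypes):
    """Accuracy of weighted nearest-prototype classification: each chromosome
    is a feature-weight vector (a stand-in for BRDF importance values)."""
    d = np.linalg.norm((feats[:, None, :] - prototypes[None]) * weights, axis=2)
    return np.mean(d.argmin(axis=1) == labels)

def evolve(feats, labels, prototypes, pop=30, gens=50, mut=0.1):
    n = feats.shape[1]
    population = rng.random((pop, n))
    for _ in range(gens):
        scores = np.array([fitness(w, feats, labels, prototypes) for w in population])
        parents = population[np.argsort(scores)[-(pop // 2):]]  # truncation selection
        cut = rng.integers(1, n, size=pop // 2)                 # one-point crossover
        kids = np.array([np.concatenate([a[:c], b[c:]])
                         for a, b, c in zip(parents, np.roll(parents, 1, 0), cut)])
        kids += rng.normal(0, mut, kids.shape)                  # Gaussian mutation
        population = np.vstack([parents, kids.clip(0, None)])
    best = [fitness(w, feats, labels, prototypes) for w in population]
    return population[np.argmax(best)]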
5

Learning, Detection, Representation, Indexing and Retrieval of Multi-agent Events in Videos

Hakeem, Asaad 01 January 2007
The world that we live in is a complex network of agents and their interactions, which are termed events. An instance of an event is composed of directly measurable low-level actions (which I term sub-events) having a temporal order. Agents can act independently (e.g. voting) as well as collectively (e.g. scoring a touchdown in a football game) to perform an event. With the dawn of the new millennium, low-level vision tasks such as segmentation, object classification, and tracking have become fairly robust, but a representational gap still exists between low-level measurements and high-level understanding of video sequences. This dissertation is an effort to bridge that gap: I propose novel learning, detection, representation, indexing and retrieval approaches for multi-agent events in videos. In order to achieve the goal of high-level understanding of videos, firstly, I apply statistical learning techniques to model multi-agent events. For that purpose, I use the training videos to model the events by estimating the conditional dependencies between sub-events. Given a video sequence, I track the people (heads and hand regions) and objects using a Meanshift tracker. An underlying rule-based system detects the sub-events from the tracked trajectories of the people and objects, based on their relative motion. Next, an event model is constructed by estimating the sub-event dependencies, that is, how frequently sub-event B occurs given that sub-event A has occurred. The advantages of such an event model are two-fold: I do not require prior knowledge of the number of agents involved in an event, and no assumptions are made about the length of an event. Secondly, after learning the event models, I detect events in a novel video by using graph clustering techniques. To that end, I construct a graph of temporally ordered sub-events occurring in the novel video. Next, using the learnt event model, I estimate a weight matrix of conditional dependencies between sub-events in the novel video. Applying the Normalized Cut (a graph clustering technique) to the estimated weight matrix then detects the events in the novel video. The principal assumption made in this work is that events are composed of highly correlated chains of sub-events that have high conditional dependency (association) within a cluster and relatively low conditional dependency (disassociation) between clusters. Thirdly, in order to represent the detected events, I propose an extension of the CASE representation of natural languages. I extend CASE to allow the representation of temporal structure between sub-events, and, in order to capture both multi-agent and multi-threaded events, I introduce a hierarchical CASE representation of events in terms of sub-events and case-lists. The essence of the proposition is that, based on the temporal relationships of the agent motions and a description of their states, it is possible to build a formal description of an event. Furthermore, I recognize the importance of representing the variations in the temporal order of sub-events that may occur in an event, and encode the temporal probabilities directly into the event representation. The proposed extended representation with probabilistic temporal encoding is termed P-CASE; it allows a plausible means of interface between users and the computer. Using the P-CASE representation, I automatically encode the event ontology from training videos. This offers a significant advantage, since domain experts do not have to go through the tedious task of determining the structure of events by browsing all the videos. Finally, I utilize the event representation for indexing and retrieval of events. Given the different instances of a particular event, I index the events using the P-CASE representation. Then, given a query in the P-CASE representation, event retrieval is performed using a two-level search. At the first level, a maximum likelihood estimate of the query event against the different indexed event models is computed, which provides the best-matching event model. At the second level, a matching score is obtained for all the event instances belonging to the best-matched event model, using a weighted Jaccard similarity measure. Extensive experimentation was conducted on the detection, representation, indexing and retrieval of multi-agent events in videos from the meeting, surveillance, and railroad monitoring domains. To that end, the Semoran system was developed, which accepts user input in any of three forms for event retrieval: predefined queries in the P-CASE representation, custom queries in the P-CASE representation, or query by example video. The system then searches the entire database and returns the matched videos to the user. I used seven standard video datasets from the computer vision community, as well as my own videos, to test the robustness of the proposed methods.
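
The event-model construction lends itself to a short sketch: given temporally ordered sub-event sequences (as integer codes; the tracker and rule-based detector that produce them are assumed), estimate the conditional-dependency weight matrix described above. The adjacency-pair counting below is an illustrative simplification, not the dissertation's exact estimator:

import numpy as np

def event_model(sequences, n_types):
    """Estimate P(B follows A) from temporally ordered sub-event sequences:
    row a of the returned matrix approximates how frequently sub-event b
    occurs given that sub-event a has occurred."""
    counts = np.zeros((n_types, n_types))
    occurs = np.zeros(n_types)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
        for a in seq:
            occurs[a] += 1
    return counts / np.maximum(occurs[:, None], 1)

# Toy usage: 0 = "approach", 1 = "extend hand", 2 = "hand over object".
W = event_model([[0, 1, 2], [0, 1, 2], [0, 2]], 3)
print(W)  # high W[0, 1] and W[1, 2] mark a tightly coupled sub-event chain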
6

Normalized cut on graphs: an agglomerative algorithm for bacterial colony image segmentation

Costa, André Luis da, 1982- 22 August 2018
Advisor: Marco Antonio Garcia de Carvalho / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Tecnologia / Previous issue date: 2013 / Abstract: The problem of segmenting bacterial colonies in Petri dishes has characteristics quite different from those found, for example, in segmenting natural images. The main feature is the high number of colonies that can be found on a plate, so it is essential that the segmentation algorithm be able to partition the image into a very large number of regions. This extreme scenario is ideal for probing the limitations of segmentation algorithms. Indeed, this study shows that the original normalized cut algorithm, which is based on spectral graph theory, is unsuitable for applications that require segmentation into a large number of regions. Segmenting bacterial colony images with the normalized cut criterion is nevertheless still possible thanks to a new algorithm introduced in this work, based on hierarchical agglomerative clustering of the graph nodes rather than on spectral theory. Experiments also show that bipartitioning a graph with the new algorithm yields an average normalized cut value about 40 times lower than bipartitioning with the spectral algorithm. / Degree: Master in Technology (Technology and Innovation)
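
A hedged sketch of an agglomerative use of the normalized cut criterion in the spirit of the algorithm described; the abstract does not give the exact merge rule, so the greedy criterion below (merge the pair of clusters whose separation would be the costliest cut) is an assumption for illustration only:

import numpy as np

def agglomerative_ncut(W, n_regions):
    """Greedy agglomerative grouping of graph nodes under the normalized cut
    criterion: repeatedly merge the pair of clusters that are most strongly
    associated, i.e. whose bipartition would have the highest ncut cost."""
    clusters = [{i} for i in range(len(W))]
    total = W.sum(axis=1)  # assoc(node, V)

    def ncut_value(a, b):
        ia, ib = list(a), list(b)
        cut = W[np.ix_(ia, ib)].sum()
        return cut / total[ia].sum() + cut / total[ib].sum()

    while len(clusters) > n_regions:
        i, j = max(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda p: ncut_value(clusters[p[0]], clusters[p[1]]))
        clusters[i] |= clusters.pop(j)  # j > i, so index i stays valid
    return clusters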
7

Detection of signs of image post-processing : Master's thesis / Photo tampering detection

Antselevich, A. A. January 2015
An algorithm that determines whether a given digital photo has been tampered with, and generates a tampering map depicting the possibly post-processed parts of the image, was analyzed in detail and implemented. The software was optimized and thoroughly tested, and the operating modes giving the best accuracy were identified. The program can be run on an ordinary personal computer.
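
The abstract does not name the detection method, so purely as an illustration of how a tampering map can be produced, here is a common error-level-analysis approach (re-save the JPEG and flag blocks whose re-compression residue is anomalous); this is an assumption, not the thesis's algorithm:

import io
import numpy as np
from PIL import Image, ImageChops

def ela_map(path, quality=90, block=16):
    """Error-level analysis: edited regions often re-compress differently
    from the rest of a JPEG, so their residue stands out block-wise."""
    original = Image.open(path).convert('RGB')
    buf = io.BytesIO()
    original.save(buf, 'JPEG', quality=quality)
    resaved = Image.open(buf)
    residue = np.asarray(ImageChops.difference(original, resaved), float).mean(axis=2)
    h, w = (s // block * block for s in residue.shape)
    blocks = residue[:h, :w].reshape(h // block, block, w // block, block).mean((1, 3))
    # Flag blocks whose residue deviates strongly from the image-wide level.
    return np.abs(blocks - blocks.mean()) > 2 * blocks.std()

# print(ela_map('photo.jpg').astype(int))  # 1 = possibly post-processed block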
