About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

A Study on Object Search and Relationship Search from Text Archive Data / テキストアーカイブデータからのオブジェクト検索と関係検索に関する研究

Zhang, Yating 23 September 2016
Kyoto University / 0048 / New-system doctoral course / Doctor of Informatics / Kō No. 20026 / Jōhaku No. 621 / Shinsei||Jō||108 (University Library) / 33122 / Kyoto University, Graduate School of Informatics, Department of Social Informatics / (Chief examiner) Professor Katsumi Tanaka; Professor Masatoshi Yoshikawa; Professor Sadao Kurohashi / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
12

Contribuciones al alineamiento de nubes de puntos 3D para su uso en aplicaciones de captura robotizada de objetos / Contributions to the alignment of 3D point clouds for use in robotic object-grasping applications

Torre Ferrero, Carlos 08 November 2010
In robotic grasping applications, the use of three-dimensional information about the objects being manipulated has become necessary. This information can be obtained with 3D acquisition devices, such as laser scanners or time-of-flight cameras, which provide range images of the objects. This thesis presents a new approach for finding, without any prior estimate, the rigid transformation that properly aligns the point clouds obtained with these devices. The algorithm performs an iterative search for correspondences by comparing 2D descriptors at several resolution levels, using a similarity measure specifically designed for the descriptor proposed in this thesis. The alignment algorithm can be used both for 3D modelling and for object-manipulation applications in which objects are partially occluded or exhibit symmetries.
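
The descriptor design and the multi-resolution correspondence search are the thesis's own contribution and are not reproduced here; what can be sketched safely is the closed-form step they feed, recovering a rigid transformation from a set of hypothesized point correspondences. A minimal NumPy sketch (the function name and interface are illustrative):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding 3D points. This is the
    standard SVD (Kabsch) solution; the descriptor-based correspondence
    search described in the thesis is assumed to have produced the pairing.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

In an iterative matching loop, this solve would be re-run each time the correspondence set is revised at a new resolution level.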
13

On the effect of INQUERY term-weighting scheme on query-sensitive similarity measures

Kini, Ananth Ullal 12 April 2006
Cluster-based information retrieval systems often use a similarity measure to compute the association among text documents. In this thesis, we focus on a class of similarity measures named Query-Sensitive Similarity (QSS) measures. Recent studies have shown QSS measures to positively influence the outcome of a clustering procedure. These studies used QSS measures in conjunction with the ltc term-weighting scheme. Several term-weighting schemes have since superseded ltc and demonstrated better retrieval performance. We test whether introducing one of these schemes, INQUERY, offers any benefit over the ltc scheme when used in the context of QSS measures. The testing procedure uses the Nearest Neighbor (NN) test to quantify the clustering effectiveness of QSS measures under each term-weighting scheme. The NN tests are applied to standard test document collections and the results are checked for statistical significance. On analyzing the NN test results relative to those obtained for the ltc scheme, we find several instances where the INQUERY scheme improves the clustering effectiveness of QSS measures. To be able to apply the NN test, we designed a software test framework, Ferret, by complementing the features provided by dtSearch, a search engine. The framework automates the generation of NN coefficients by processing standard test collection data. We provide an insight into the construction and working of the Ferret test framework.
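
For context, the two ingredients being combined can be sketched: the SMART "ltc" weighting the earlier studies used, and a query-sensitive similarity. The QSS form below is an illustrative variant only, not necessarily the exact measures evaluated in the thesis:

```python
import math
from collections import Counter

def ltc_vector(doc_terms, df, n_docs):
    """SMART 'ltc' weights: (1 + log tf) * log(N / df), cosine-normalized.

    doc_terms: list of tokens in the document; df: term -> document
    frequency over the collection; n_docs: collection size N.
    """
    tf = Counter(doc_terms)
    w = {t: (1.0 + math.log(c)) * math.log(n_docs / df[t]) for t, c in tf.items()}
    norm = math.sqrt(sum(v * v for v in w.values())) or 1.0
    return {t: v / norm for t, v in w.items()}

def cosine(u, v):
    # dot product of sparse weight vectors (already length-normalized)
    return sum(wt * v.get(t, 0.0) for t, wt in u.items())

def qss(x, y, q):
    """A query-sensitive similarity: document-document similarity modulated
    by both documents' similarity to the query vector q (illustrative)."""
    return cosine(x, y) * math.sqrt(cosine(x, q) * cosine(y, q))
```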
14

Fuzzy Tolerance Neighborhood Approach to Image Similarity in Content-based Image Retrieval

Meghdadi, Amir Hossein 22 June 2012
The main contribution of this thesis is to define similarity measures between two images, with the main focus on content-based image retrieval (CBIR). Each image is considered as a set of visual elements that can be described by a set of visual descriptions (features). The similarity between images is then defined as the nearness between the sets of elements, based on a tolerance relation and a fuzzy tolerance relation. A tolerance relation is used to describe the approximate nature of visual perception; a fuzzy tolerance relation is adopted to eliminate the need for a sharp threshold and hence model gradual changes in the perception of similarity. Three real-valued similarity measures as well as a fuzzy-valued similarity measure are proposed. All of the methods are used in two CBIR experiments, and the results are compared with classical distance measures (namely Kantorovich, Hausdorff, and Mahalanobis) and with results from other published research. An important advantage of the proposed methods is shown to be their effectiveness in an unsupervised setting with no prior information. Eighteen different features (based on color, texture, and edges) are used in all the experiments, and a feature selection algorithm is used to train the system in choosing a suboptimal set of visual features.
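
The tolerance idea can be made concrete with a small sketch; the linear membership decay and the best-match averaging below are assumptions for illustration, not the thesis's exact membership functions or nearness measures:

```python
import numpy as np

def fuzzy_tolerance(f1, f2, eps=0.3):
    """Fuzzy tolerance membership between two feature vectors.

    A crisp tolerance relation would test ||f1 - f2|| <= eps; the fuzzy
    version replaces that hard threshold with a grade that decays with
    distance (linear decay here is an illustrative choice).
    """
    d = float(np.linalg.norm(np.asarray(f1) - np.asarray(f2)))
    return max(0.0, 1.0 - d / eps) if eps > 0 else 0.0

def image_nearness(elems_a, elems_b, eps=0.3):
    """Nearness of two images viewed as sets of visual-element feature
    vectors: average best-match membership (a simple set-nearness proxy)."""
    grades = [max(fuzzy_tolerance(a, b, eps) for b in elems_b) for a in elems_a]
    return sum(grades) / len(grades)
```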
16

Mesures de similarité pour cartes généralisées / Similarity measures between generalized maps

Combier, Camille 28 November 2012
A generalized map is a topological model that implicitly represents a set of cells (vertices, edges, faces, volumes, ...) together with their incidence and adjacency relations, by means of darts and involutions. Generalized maps are notably used to model 3D images and meshes, yet few tools exist for analyzing and comparing them; our goal is to define such a set of comparison tools. We first define a similarity measure based on the size of the common part of two generalized maps, called the maximum common submap. We define two kinds of submaps, partial and induced: an induced submap must preserve all involutions, whereas a partial submap allows some involutions to be dropped, by analogy with partial subgraphs, in which not all edges need be present. We then define a set of operations that modify darts and sewings, together with the associated edit distance, equal to the minimal cost over all sequences of operations transforming one generalized map into another. Through its substitution operation, this distance takes labels into account; labels are attached to darts and add information to generalized maps. We show that, for certain cost functions, our edit distance can be computed directly from the maximum common submap. Computing the edit distance is NP-hard, so we propose a greedy algorithm that computes a polynomial-time approximation of it, along with a set of heuristics, based on descriptors of the neighborhoods of darts, that guide the greedy algorithm; we evaluate these heuristics on randomly generated test sets for which a bound on the distance is known. Finally, we outline applications of our similarity measures to image and mesh analysis: we compare our edit distance on generalized maps with the graph edit distance widely used in structural pattern recognition, define heuristics that exploit the labels of generalized maps modelling images and meshes, and highlight the qualitative nature of the resulting matchings, which put image regions and mesh points in correspondence.
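
The greedy step can be sketched independently of the map machinery. Assuming substitution costs between the darts of the two maps have already been derived from neighborhood descriptors, a generic minimum-cost greedy assignment looks like this (illustrative, not the thesis's exact algorithm):

```python
import numpy as np

def greedy_assignment(cost):
    """Greedy approximation of a minimum-cost one-to-one assignment.

    cost: (n, m) matrix of substitution costs between darts. Exact edit
    distance is NP-hard; this repeatedly commits to the cheapest remaining
    pair, which is polynomial but only an upper bound on the true cost.
    """
    cost = np.asarray(cost, dtype=float).copy()
    pairs, total = [], 0.0
    for _ in range(min(cost.shape)):
        i, j = np.unravel_index(np.argmin(cost), cost.shape)
        pairs.append((i, j))
        total += cost[i, j]
        cost[i, :] = np.inf     # each dart may be matched at most once
        cost[:, j] = np.inf
    return pairs, total
```

The neighborhood-descriptor heuristics mentioned above would shape the cost matrix so that the greedy choices stay close to the optimal matching.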
17

Contribution à la veille stratégique : DOWSER, un système de découverte de sources Web d’intérêt opérationnel / Business Intelligence contribution: DOWSER, Discovering of Web Sources Evaluating Relevance

Noël, Romain 17 October 2014
The constant growth of the volume of information available on the Web has made it harder to discover new sources of interest on a given topic. Experts in intelligence analysis face this problem when searching for pages on specific and sensitive subjects: because of their sensitive content, such unpopular pages are often poorly indexed, or not indexed at all, by traditional search engines, which makes them hard to find. This work, set in the context of open-source intelligence (OSINT), aims to assist the intelligence expert in the task of discovering new sources. The resulting system, DOWSER, is built around a model of the expert's operational information need and a focused exploration of the Web: the information-need model, captured as a user profile, guides the crawl so as to collect and index only related documents and deliver relevant sources to the expert. Unlike classic information retrieval tools, DOWSER does not rely on page popularity; the expected result is a balance between relevance and originality, in the sense that the pages sought are not necessarily popular.
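
A best-first exploration skeleton of the kind such a system implements; DOWSER's actual profile model and relevance scoring are the thesis's contribution and appear here only as abstract callables (all names are illustrative):

```python
import heapq

def focused_crawl(seeds, score, fetch, extract_links, budget=100):
    """Best-first Web exploration guided by a profile-based relevance score.

    score(url) -> relevance in [0, 1] against the user profile;
    fetch(url) -> page content; extract_links(page) -> iterable of URLs.
    Pages are visited in order of estimated relevance, not popularity.
    """
    frontier = [(-score(u), u) for u in seeds]
    heapq.heapify(frontier)
    seen = {u for _, u in frontier}
    collected = []
    while frontier and len(collected) < budget:
        neg_rel, url = heapq.heappop(frontier)
        page = fetch(url)
        collected.append((url, -neg_rel))
        for link in extract_links(page):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-score(link), link))
    return collected
```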
18

Biometrie sítnice pro účely rozpoznávání osob / Retinal biometry for human recognition

Sikorová, Eva January 2015
This master's thesis deals with recognizing a person by comparing feature sets extracted from images of the retinal vessel pattern. The first part provides an overview of biometrics, a detailed analysis of human identification from retinal images, and a literature survey of extraction and comparison methods. In the practical part, algorithms for human identification were implemented in MATLAB using nearest-neighbour search (NS), translation, template matching (TM), and extended variants of NS and TM that incorporate additional features. The thesis includes testing of the proposed programs on a biometric database of feature vectors, followed by an evaluation of the results.
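
Of the comparison methods listed, template matching is commonly scored with normalized cross-correlation; a minimal sketch under that assumption (the thesis's exact pipeline may differ):

```python
import numpy as np

def ncc(template, patch):
    """Normalized cross-correlation between a vessel-pattern template and
    an equally sized image patch: zero-mean, unit-norm dot product, so the
    score lies in [-1, 1] and peaks where the patterns align."""
    t = np.asarray(template, dtype=float)
    p = np.asarray(patch, dtype=float)
    t = t - t.mean()
    p = p - p.mean()
    denom = np.linalg.norm(t) * np.linalg.norm(p)
    return float((t * p).sum() / denom) if denom else 0.0
```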
19

Evaluation formative du savoir-faire des apprenants à l'aide d'algorithmes de classification : application à l'électronique numérique / Formative evaluation of the learners' know-how using classification algorithms: application to digital electronics

Tanana, Mariam 19 November 2009
When a teacher wants to assess learners' know-how with the help of software, he often turns to Intelligent Tutoring Systems (ITS). However, ITS are difficult to develop and are tied to a narrowly targeted educational domain. For several years, supervised classification algorithms have been proposed for assessing learners' knowledge; our hypothesis is that the same algorithms can also assess their know-how. Our application domain being digital electronics, we propose a similarity measure between electronic circuit schematics, together with an automatically generated training set composed of schematics pedagogically labelled "good" or "bad", along with information about the degree of simplification and the errors made. Finally, a simple classification algorithm (the k-nearest-neighbours classifier) allowed us to evaluate circuit schematics correctly in the majority of cases.
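
The classification step itself is small enough to sketch; the circuit similarity measure, which is the thesis's actual contribution, is passed in as a callable (names are illustrative):

```python
def knn_label(query, training, similarity, k=3):
    """Grade a student's schematic by majority vote over its k most
    similar labelled examples.

    training: list of (schematic, label) pairs, label in {"good", "bad"};
    similarity(a, b) -> float, higher meaning more similar (stands in for
    the circuit-similarity measure proposed in the thesis).
    """
    ranked = sorted(training, key=lambda ex: similarity(query, ex[0]), reverse=True)
    top = [label for _, label in ranked[:k]]
    return max(set(top), key=top.count)   # majority label among neighbours
```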
20

Feature-based Comparison and Generation of Time Series

Kegel, Lars, Hahmann, Martin, Lehner, Wolfgang 17 August 2022
For more than three decades, researchers have been developing generation methods for the weather, energy, and economic domains. These methods provide generated datasets for purposes such as system evaluation and data availability. However, despite the variety of approaches, there is no comparative, cross-domain assessment of generation methods and their expressiveness. We present a similarity measure that analyzes generation methods with respect to general time series features. By this means, users can compare generation methods and validate whether a generated dataset is considered similar to a given dataset. Moreover, we propose a feature-based generation method that evolves cross-domain time series datasets. This method outperforms other generation methods in terms of feature-based similarity.
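
The feature-based comparison can be sketched with a deliberately small feature set; the paper's actual feature catalogue and similarity definition are richer than this illustration:

```python
import numpy as np

def ts_features(ts):
    """A toy feature vector: mean, standard deviation, linear trend slope,
    and lag-1 autocorrelation (stand-ins for a fuller feature set)."""
    x = np.asarray(ts, dtype=float)
    slope = np.polyfit(np.arange(len(x)), x, 1)[0]
    xc = x - x.mean()
    denom = (xc * xc).sum()
    acf1 = (xc[:-1] * xc[1:]).sum() / denom if denom else 0.0
    return np.array([x.mean(), x.std(), slope, acf1])

def feature_similarity(ts_a, ts_b):
    """Similarity of two series as inverse distance in feature space;
    averaging over a dataset's series would compare whole datasets."""
    d = np.linalg.norm(ts_features(ts_a) - ts_features(ts_b))
    return 1.0 / (1.0 + d)
```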
