About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Content based video retrieval via spatial-temporal information discovery

Wang, Lei January 2013 (has links)
Content based video retrieval (CBVR) has been strongly motivated by a variety of real-world applications. Most state-of-the-art CBVR systems are built on the Bag-of-visual-Words (BovW) framework for visual resource representation and access. The framework, however, ignores the spatial and temporal information contained in videos, which plays a fundamental role in unveiling semantic meanings. This information includes not only the spatial layout of visual content on a still frame (image), but also temporal changes across sequential frames. Specifically, spatially and temporally co-occurring visual words, extracted under the BovW framework, often tend to collaboratively represent objects, scenes, or events in the videos. Discovering this spatial and temporal information would help advance CBVR technology. In this thesis, we propose to explore and analyse the spatial and temporal information from a new perspective: i) co-occurrence of the visual words is formulated as a correlation matrix, ii) spatial proximity and temporal coherence are analytically and empirically studied to refine this correlation. Following this, a quantitative spatial and temporal correlation (STC) model is defined. The STC discovered from either the query example (denoted by QC) or the data collection (denoted by DC) is assumed to determine the specificity of the visual words in the retrieval model, i.e. selected Words-Of-Interest are found to be more important for certain topics. Based on this hypothesis, we utilized the STC matrix to establish a novel visual content similarity measurement method and a query reformulation scheme for the retrieval model. Additionally, the STC also characterizes the context of the visual words, and accordingly an STC-based context similarity measurement is proposed to detect synonymous visual words. The method partially solves an inherent error of the visual vocabulary under the BovW framework. Systematic experimental evaluations on the public TRECVID and CC WEB VIDEO video collections demonstrate that the proposed methods based on the STC can substantially improve the retrieval effectiveness of the BovW framework. The retrieval model based on STC outperforms state-of-the-art CBVR methods on these data collections without additional storage and computational expense. Furthermore, the rebuilt visual vocabulary in this thesis is more compact and effective. The above methods can be incorporated together for an effective and efficient CBVR system implementation. Based on the experimental results, it is concluded that the spatial-temporal correlation effectively approximates the semantic correlation. This discovered correlation approximation can be utilized for both visual content representation and similarity measurement, which are key issues for CBVR technology development.
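A minimal sketch of the co-occurrence statistic described in this abstract, assuming frames have already been quantised into BovW visual-word IDs; the temporal window, the normalisation and the correlation-weighted similarity below are illustrative choices, not the thesis's exact STC model.

```python
import numpy as np

def cooccurrence_matrix(frames, vocab_size, temporal_window=2):
    """Count visual-word pairs that co-occur on the same frame (spatial) or
    within `temporal_window` neighbouring frames (temporal). `frames` is a
    list of lists of visual-word IDs, one list per frame."""
    C = np.zeros((vocab_size, vocab_size))
    for t, words in enumerate(frames):
        # spatial co-occurrence: words on the same frame
        for i in words:
            for j in words:
                if i != j:
                    C[i, j] += 1.0
        # temporal co-occurrence: words on nearby frames
        for dt in range(1, temporal_window + 1):
            if t + dt < len(frames):
                for i in words:
                    for j in frames[t + dt]:
                        C[i, j] += 1.0
                        C[j, i] += 1.0
    # row-normalise so each row becomes a co-occurrence profile
    row_sums = C.sum(axis=1, keepdims=True)
    return np.divide(C, row_sums, out=np.zeros_like(C), where=row_sums > 0)

def stc_similarity(query_hist, video_hist, C):
    """Correlation-weighted similarity between two bag-of-visual-word
    histograms: a query word also matches the words it co-occurs with."""
    return float(query_hist @ C @ video_hist)
```

A query reformulation step could, for instance, expand the query histogram with `C @ query_hist` before matching, so that strongly correlated words contribute to retrieval even when absent from the query example.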
2

Data-driven temporal information extraction with applications in general and clinical domains

Filannino, Michele January 2016 (has links)
The automatic extraction of temporal information from written texts is pivotal for many Natural Language Processing applications such as question answering, text summarisation and information retrieval. However, Temporal Information Extraction (TIE) is a challenging task because of the variety of expression types (durations, frequencies, times, dates) and their high morphological variability and ambiguity. Among existing approaches, rule-based methods are the most common, while data-driven ones remain under-explored. This thesis introduces a novel domain-independent data-driven TIE strategy. The identification strategy is based on machine-learning sequence labelling classifiers over features selected through an extensive exploration. Results are further optimised using an a posteriori label-adjustment pipeline. The normalisation strategy is rule-based and builds on a pre-existing system. The methodology has been applied to both a specific (clinical) and a general domain, and has been officially benchmarked in the i2b2/2012 and TempEval-3 challenges, ranking 3rd and 1st respectively. The results show the TIE task to be more challenging in the clinical domain (overall accuracy 63%) than in the general domain (overall accuracy 69%). Finally, this thesis also presents two applications of TIE. One of them introduces the concept of the temporal footprint of a Wikipedia article, and uses it to mine the life span of persons. In the other, TIE techniques are used to improve pre-existing information retrieval systems by filtering out temporally irrelevant results.
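A hedged sketch of the identification step described above: BIO sequence labelling of temporal expressions over hand-crafted features, followed by an a posteriori label adjustment. The feature set, the TIMEX label scheme and the use of sklearn-crfsuite are illustrative assumptions, not the thesis's exact pipeline.

```python
import sklearn_crfsuite  # any sequence labeller would do; used here for brevity

def token_features(tokens, i):
    """Per-token features for BIO labelling of temporal expressions
    (dates, times, durations, frequencies)."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "is_title": tok.istitle(),
        "shape": "".join("d" if c.isdigit() else "x" for c in tok),
        "prev": tokens[i - 1].lower() if i > 0 else "<bos>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<eos>",
    }

def adjust_labels(labels):
    """A posteriori label adjustment: an 'I-TIMEX' that does not continue a
    TIMEX span is promoted to 'B-TIMEX' (one illustrative repair rule)."""
    fixed = list(labels)
    for i, lab in enumerate(fixed):
        if lab == "I-TIMEX" and (i == 0 or fixed[i - 1] == "O"):
            fixed[i] = "B-TIMEX"
    return fixed

# toy training data: one sentence with gold BIO labels
sents = [["Admitted", "on", "12", "March", "2012", "for", "two", "weeks", "."]]
tags = [["O", "O", "B-TIMEX", "I-TIMEX", "I-TIMEX", "O", "B-TIMEX", "I-TIMEX", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, tags)
pred = [adjust_labels(seq) for seq in crf.predict(X)]
```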
3

Recognition of off-line handwritten cursive text

Abuhaiba, Ibrahim S. I. January 1996 (has links)
The author presents novel algorithms for designing unconstrained handwriting recognition systems, organized in three parts. In Part One, novel algorithms are presented for processing Arabic text prior to recognition. Algorithms are described to convert a thinned image of a stroke to a straight-line approximation. Novel heuristic algorithms and novel theorems are presented to determine the start and end vertices of an off-line image of a stroke. A straight-line approximation of an off-line stroke is converted to a one-dimensional representation by a novel algorithm which aims to recover the original sequence of writing. The resulting ordering of the stroke segments is a suitable preprocessed representation for subsequent handwriting recognition algorithms as it helps to segment the stroke. The algorithm was tested against one data set of isolated handwritten characters and another data set of cursive handwriting, each provided by 20 subjects, and was 91.9% and 91.8% successful for these two data sets, respectively. In Part Two, an entirely novel fuzzy set-sequential machine character recognition system is presented. Fuzzy sequential machines are defined to work as recognizers of handwritten strokes. An algorithm is presented to obtain, from a stroke representation, a deterministic fuzzy sequential machine capable of recognizing that stroke and its variants. An algorithm is developed to merge two fuzzy machines into one machine. The learning algorithm is a combination of many described algorithms. The system was tested against isolated handwritten characters provided by 20 subjects, resulting in a 95.8% recognition rate, which is encouraging and shows that the system is highly flexible in dealing with shape and size variations. In Part Three, another entirely novel text recognition system is presented, capable of recognizing off-line handwritten Arabic cursive text with high variability. This system is an extension of the above recognition system. Tokens are extracted from a one-dimensional representation of a stroke. Fuzzy sequential machines are defined to work as recognizers of tokens. It is shown how to obtain a deterministic fuzzy sequential machine from a token representation that is capable of recognizing that token and its variants. An algorithm for token learning is presented. The tokens of a stroke are re-combined into meaningful strings of tokens. Algorithms to recognize and learn token strings are described. The recognition stage uses algorithms of the learning stage. The process of extracting the best set of basic shapes which represent the best set of token strings that constitute an unknown stroke is described. A method is developed to extract lines from pages of handwritten text, arrange the main strokes of extracted lines in the same order as they were written, and present secondary strokes to main strokes. Presented secondary strokes are combined with basic shapes to obtain the final characters by formulating and solving assignment problems for this purpose. Some secondary strokes which remain unassigned are individually manipulated. The system was tested against the handwriting of 20 subjects, yielding overall subword and character recognition rates of 55.4% and 51.1%, respectively.
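A toy sketch of the fuzzy sequential machine idea from Part Two, assuming a stroke has already been reduced to a sequence of discrete direction primitives; the max-min scoring rule and the example membership degrees are illustrative, not the thesis's exact construction.

```python
class FuzzySequentialMachine:
    """Minimal fuzzy sequential machine: transitions carry membership degrees
    in [0, 1]; a stroke's acceptance degree is the best (max) path score,
    where a path score is the min of its transition memberships."""

    def __init__(self, start_state):
        self.start = start_state
        self.trans = {}   # (state, symbol) -> {next_state: membership}
        self.finals = {}  # state -> membership of being an accepting state

    def add_transition(self, state, symbol, next_state, degree):
        self.trans.setdefault((state, symbol), {})[next_state] = degree

    def recognise(self, symbols):
        # fuzzy state vector: state -> degree of currently being in it
        current = {self.start: 1.0}
        for sym in symbols:
            nxt = {}
            for state, deg in current.items():
                for n_state, t_deg in self.trans.get((state, sym), {}).items():
                    score = min(deg, t_deg)          # max-min composition
                    nxt[n_state] = max(nxt.get(n_state, 0.0), score)
            current = nxt
        return max((min(d, self.finals.get(s, 0.0)) for s, d in current.items()),
                   default=0.0)

# toy machine for a stroke written as direction codes "R" (right), "D" (down)
m = FuzzySequentialMachine("q0")
m.add_transition("q0", "R", "q1", 0.9)
m.add_transition("q1", "R", "q1", 0.8)   # tolerate an extra rightward segment
m.add_transition("q1", "D", "q2", 0.9)
m.finals["q2"] = 1.0
print(m.recognise(["R", "R", "D"]))      # 0.8 for this shape variant
```

Merging two such machines, as described in the abstract, would amount to unioning their transition sets and keeping the maximum membership where transitions coincide.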
4

Making sense of spatial, sensor and temporal information for context modeling

Monteagudo, Jose Antonio, Jiménez, Ramón David January 2008 (has links)
Context represents any information regarding the situation of entities, whether a person, place, or object, that is considered relevant to the interaction between a user and an application. The results obtained permit a user to save context information attached to a picture in a database, as well as retrieve pictures from that database and display them in a web interface together with their associated context information. The web interface also allows the user to perform searches using different criteria, so that only the pictures matching those criteria are shown. / Final Degree Project - Thesis
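A minimal sketch of the storage and retrieval behaviour described above, assuming a relational store; the schema, column names and search criterion are hypothetical, not taken from the thesis.

```python
import sqlite3

# hypothetical schema: one row of context per picture
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE picture_context (
                    picture   TEXT,
                    latitude  REAL,
                    longitude REAL,
                    taken_at  TEXT,     -- ISO 8601 timestamp
                    sensor    TEXT      -- e.g. a temperature reading
                )""")
conn.execute("INSERT INTO picture_context VALUES (?, ?, ?, ?, ?)",
             ("img_001.jpg", 59.33, 18.06, "2008-05-12T14:30:00", "21.5 C"))

# retrieve pictures matching a spatial/temporal criterion
rows = conn.execute("""SELECT picture, taken_at FROM picture_context
                       WHERE latitude BETWEEN ? AND ?
                         AND taken_at >= ?""",
                    (59.0, 60.0, "2008-01-01T00:00:00")).fetchall()
print(rows)
```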
5

Towards a coherency of the data model associated with the result of an operator in a spatial, temporal and alphanumerical context

Moussa, Ahmad 18 December 2018 (has links)
Geographic information can be perceived along three dimensions: a spatial dimension (e.g., city), a temporal dimension (e.g., date) and an alphanumeric dimension (e.g., population). The integration of these various types of data is one of the current challenges in building applications that represent and process geographic information. We contribute to this challenge by proposing a method to help manage the coherence of data involving two of the three dimensions. This coherence issue can arise after an operator is applied to geographic information. Our method is based on semantic links between dimensions. A semantic link represents the logical relationship between two dimensions. This link, combined with behaviour rules for the data manipulation operators, allows us to guarantee the coherence of the data model associated with the result of an operator.
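An illustrative sketch, under stated assumptions, of how such operator behaviour rules could be checked: each operator declares which dimensions its result must keep, and a checker reports missing ones. The operator names and the rule table are invented for illustration, not taken from the thesis.

```python
# dimensions of a geographic data model
DIMENSIONS = {"spatial", "temporal", "alphanumeric"}

# hypothetical behaviour rules: dimensions an operator's result must preserve
# for the associated data model to remain coherent
OPERATOR_RULES = {
    "aggregate_population_by_city": {"spatial", "alphanumeric"},
    "history_of_city":              {"spatial", "temporal"},
}

def check_coherence(operator, result_dimensions):
    """Return the dimensions required by the operator but missing from the
    result's data model (an empty set means the result is coherent)."""
    required = OPERATOR_RULES[operator]
    return required - set(result_dimensions)

# a result that dropped the temporal dimension is flagged as incoherent
print(check_coherence("history_of_city", ["spatial", "alphanumeric"]))
# -> {'temporal'}
```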
6

Extracting Clinical Event Timelines: Temporal Information Extraction and Coreference Resolution in Electronic Health Records

Tourille, Julien 18 December 2018 (has links)
Important information for public health is contained within Electronic Health Records (EHRs). The vast majority of clinical data available in these records takes the form of narratives written in natural language. Although free text is convenient for describing complex medical concepts, it is difficult to use for medical decision support, clinical research or statistical analysis. Among all the clinical aspects of interest in these records, the patient timeline is one of the most important. Being able to retrieve clinical timelines would allow a better understanding of clinical phenomena such as disease progression and longitudinal effects of medications. It would also improve medical question answering and clinical outcome prediction systems. Accessing the clinical timeline is needed to evaluate the quality of the healthcare pathway by comparing it to clinical guidelines, and to highlight the steps of the pathway where specific care should be provided. In this thesis, we focus on building such timelines by addressing two related natural language processing topics: temporal information extraction and clinical event coreference resolution. Our main contributions include a generic feature-based approach for temporal relation extraction that can be applied to documents written in English and in French. We also devise a neural approach for temporal information extraction which includes categorical features. Finally, we present a neural entity-based approach for coreference resolution in clinical narratives, and perform an empirical study to evaluate how categorical features and neural network components, such as attention mechanisms and token character-level representations, influence the performance of our coreference resolution approach.
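A hedged sketch of the feature-based temporal relation extraction described above: each (event, temporal expression) pair is turned into categorical features and fed to an off-the-shelf classifier. The feature names, relation labels and toy data are illustrative, not the thesis's exact feature set.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def pair_features(event, timex):
    """Categorical features for one (event, temporal expression) pair."""
    return {
        "event_tense":   event["tense"],          # e.g. PAST, PRESENT
        "event_class":   event["class"],          # e.g. OCCURRENCE, STATE
        "timex_type":    timex["type"],           # e.g. DATE, DURATION
        "same_sentence": event["sent"] == timex["sent"],
        "event_first":   event["start"] < timex["start"],
    }

# toy training pairs with gold temporal relations
pairs = [
    ({"tense": "PAST", "class": "OCCURRENCE", "sent": 0, "start": 2},
     {"type": "DATE", "sent": 0, "start": 5}, "BEFORE"),
    ({"tense": "PRESENT", "class": "STATE", "sent": 1, "start": 9},
     {"type": "DATE", "sent": 1, "start": 12}, "OVERLAP"),
]

X = [pair_features(e, t) for e, t, _ in pairs]
y = [rel for _, _, rel in pairs]

# DictVectorizer one-hot encodes the categorical features before classification
model = make_pipeline(DictVectorizer(sparse=False), LogisticRegression(max_iter=200))
model.fit(X, y)
print(model.predict(X))
```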
7

A Study on Effective Approaches for Exploiting Temporal Information in News Archives

Wang, Jiexin 26 September 2022 (has links)
Kyoto University / New doctoral programme / Doctor of Informatics / Dissertation No. 24259 / Informatics No. 803 / Library call number 新制||情||135 / Department of Social Informatics, Graduate School of Informatics, Kyoto University / Examining committee: Prof. Masatoshi Yoshikawa, Prof. Keishi Tajima, Prof. Sadao Kurohashi, Program-Specific Assoc. Prof. Donghui Lin / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
8

Spatio-temporal information system for the geosciences

Le, Hai Ha 03 November 2014 (has links) (PDF)
The development of spatio-temporal geoscience information systems (TGSIS) as the next generation of geographic information systems (GIS) and geoscience information systems (GSIS) was investigated with respect to four aspects: concepts, data models, software, and applications. These systems are capable of capturing, storing, managing, and querying data about geo-objects subject to dynamic processes that cause the evolution of their geometry, topology and geoscience properties. In this study, five data models were proposed. The first data model represents static geo-objects whose geometries lie in 3-dimensional space. The second and third data models represent geological surfaces evolving in a discrete and a continuous manner, respectively. The fourth data model is a general model that represents geo-objects whose geometries are n-dimensional embeddings in the m-dimensional space R^m, m >= 3. The topology and the properties of these geo-objects are also represented in the data model. In this model, time is represented as one dimension (valid time). Moreover, the valid time is an independent variable, whereas geometry, topology, and the properties are (time-)dependent variables. The fifth data model represents multi-indexed geoscience data in which time and other non-spatial dimensions are interpreted as additional spatial dimensions. To capture data in space and time, morphological interpolation methods were reviewed, and a new morphological interpolation method was proposed to model geological surfaces evolving continuously over a time interval. This algorithm is based on parameterisation techniques to locate cross-references and then compute trajectories complying with geometrical constraints. In addition, the long-transaction feature was studied, and data schemas, functions, triggers, and views were proposed to implement long transactions and database versioning in PostgreSQL. To implement database versioning tailored to geoscience applications, an algorithm comparing two triangulated meshes was also proposed. TGSIS therefore enable geologists to manage different versions of geoscience data for different geological paradigms, data, and authors. Finally, a prototype software system was built. This system uses a client/server architecture in which the server side uses the PostgreSQL database management system and the client side uses the gOcad geomodeling system. The system was also applied to several sample applications.
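A minimal sketch of the continuous surface evolution described above, assuming the vertex correspondence between the two bounding surfaces is already known (the cross-referencing the thesis obtains through parameterisation); linear trajectories are an illustrative simplification of the constrained trajectories in the text.

```python
import numpy as np

def interpolate_surface(vertices_t0, vertices_t1, t0, t1, t):
    """Linearly interpolate vertex positions of a triangulated surface between
    two time steps, assuming a one-to-one vertex correspondence. The triangle
    connectivity is shared, so only vertex coordinates change with time."""
    alpha = (t - t0) / (t1 - t0)
    return (1.0 - alpha) * vertices_t0 + alpha * vertices_t1

# two snapshots of the same 3-vertex surface patch (x, y, z per vertex)
v0 = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 12.0], [0.0, 1.0, 11.0]])
v1 = np.array([[0.0, 0.0, 14.0], [1.0, 0.0, 13.0], [0.0, 1.0, 15.0]])

print(interpolate_surface(v0, v1, t0=2000.0, t1=2010.0, t=2005.0))
```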
9

Methods of utterances representation in text-independent speaker verification

Hernandez Sierra, Gabriel 15 December 2014 (has links)
Text-independent automatic speaker recognition is a recent method in the biometrics area. The growing interest in it is reflected both in the increasing participation in international evaluation campaigns and in the performance progress reported there. However, the accuracy of the methods is still limited by the quantity of speaker-discriminant information contained in the representations of speech utterances. This thesis presents a study of speech representations for speaker recognition systems and first identifies two main weaknesses. First, the usual representations fail to take into account the temporal behaviour of the voice, which is known to contain speaker-discriminant information. Secondly, speech events that are rare in a large population of speakers but very frequent for a given speaker are hardly taken into account by these approaches, which is contradictory when the goal is to discriminate among speakers. To overcome these limitations, we propose in this thesis a new speech representation for speaker recognition. This method represents each acoustic vector in a binary space which is intrinsically speaker-discriminant. A similarity measure associated with a global representation (cumulative vectors) is also proposed. This new speech utterance representation is able to capture infrequent but discriminant events and to work with temporal information. It also allows taking advantage of existing "session" variability compensation approaches ("session" variability covers all the undesirable variability factors). In this area, we also proposed several improvements to the usual session compensation algorithms. An original solution for exploiting the temporal information inside the binary speech representation was also proposed. Through a linear fusion of the two sources of information, time-dependent and time-independent, we demonstrated their complementary nature.
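An illustrative sketch of the binary representation and cumulative vector described above, assuming a set of reference acoustic vectors is available: each frame is mapped to a binary code by switching on its nearest references, codes are summed into a cumulative vector, and utterances are compared with cosine similarity. The binarisation rule and the similarity measure are assumptions, not the thesis's exact formulation.

```python
import numpy as np

def binarise(frames, references, keep=4):
    """Map each acoustic frame to a binary vector that switches on the `keep`
    nearest reference vectors (a simple stand-in for the learned generator
    model described in the abstract)."""
    codes = np.zeros((len(frames), len(references)), dtype=np.int32)
    for i, x in enumerate(frames):
        dists = np.linalg.norm(references - x, axis=1)
        codes[i, np.argsort(dists)[:keep]] = 1
    return codes

def cumulative_vector(codes):
    """Sum the per-frame binary codes: rare but recurring events of a given
    speaker stay visible instead of being averaged away."""
    return codes.sum(axis=0).astype(float)

def similarity(cv_a, cv_b):
    """Cosine similarity between two cumulative vectors."""
    return float(cv_a @ cv_b / (np.linalg.norm(cv_a) * np.linalg.norm(cv_b) + 1e-9))

rng = np.random.default_rng(0)
refs = rng.normal(size=(64, 20))    # 64 reference vectors, 20-dim features
utt_a = rng.normal(size=(300, 20))  # two toy utterances, 300 frames each
utt_b = rng.normal(size=(300, 20))

print(similarity(cumulative_vector(binarise(utt_a, refs)),
                 cumulative_vector(binarise(utt_b, refs))))
```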
