21
Mesurer et améliorer la qualité des corpus comparables / Measuring and Improving Comparable Corpus Quality. Li, Bo, 26 June 2012.
Bilingual corpora are an essential resource for crossing the language barrier in multilingual Natural Language Processing (NLP) tasks. Most current work relies on parallel corpora, which are mainly available for major languages and restricted domains. Comparable corpora, text collections composed of documents covering overlapping information, are however less expensive to obtain in large volume. Previous work has shown that using comparable corpora is beneficial for several NLP tasks.

Building on those studies, this thesis aims to improve the quality of comparable corpora in order to improve the performance of the applications that exploit them. The idea is attractive because it can be combined with any existing method that makes use of comparable corpora. We first discuss the notion of comparability, inspired by experience in using bilingual corpora. This notion motivates several implementations of a comparability measure within a probabilistic framework, as well as a methodology to evaluate the ability of comparability measures to capture gold-standard comparability levels. The comparability measures are also examined in terms of robustness to dictionary changes. The experiments show that a symmetric measure relying on vocabulary overlap correlates well with gold-standard comparability levels and is robust to dictionary changes.

Based on this comparability measure, two methods, namely the greedy approach and the clustering approach, are developed to improve the quality of any given comparable corpus. The general idea of both methods is to select the high-quality subpart of the original corpus and to enrich the low-quality subpart with external resources. The experiments show that both methods improve the quality of the given comparable corpus in terms of comparability scores, with the clustering approach being more efficient than the greedy approach. The enhanced comparable corpus further yields better bilingual lexicons when the standard extraction algorithm is applied.

Lastly, we investigate the task of Cross-Language Information Retrieval (CLIR) and the application of comparable corpora to CLIR. We develop novel CLIR models that extend the recently proposed information-based models in monolingual IR. The information-based CLIR model is shown to give the best overall performance. Bilingual lexicons extracted from comparable corpora are then combined with an existing bilingual dictionary and used in the CLIR experiments, which results in significant improvements of the CLIR system.
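As a rough illustration of the kind of symmetric, vocabulary-overlap comparability measure described in this abstract, the sketch below computes, for two vocabularies and a bilingual dictionary, the proportion of source words with at least one translation in the target vocabulary and vice versa, and averages the two directions. The function and variable names are hypothetical and the exact formulation in the thesis may differ.

```python
def comparability(source_vocab, target_vocab, dictionary):
    """Symmetric vocabulary-overlap comparability score (a sketch, not the
    thesis's exact measure). `dictionary` maps source words to sets of
    target-language translations."""
    if not source_vocab or not target_vocab:
        return 0.0

    # Fraction of source words with at least one translation in the target vocabulary.
    src_covered = sum(
        1 for w in source_vocab
        if dictionary.get(w, set()) & target_vocab
    ) / len(source_vocab)

    # Fraction of target words reachable from some source word via the dictionary.
    reverse = {}
    for src, translations in dictionary.items():
        for t in translations:
            reverse.setdefault(t, set()).add(src)
    tgt_covered = sum(
        1 for w in target_vocab
        if reverse.get(w, set()) & source_vocab
    ) / len(target_vocab)

    # Symmetric measure: average of the two directions.
    return 0.5 * (src_covered + tgt_covered)


# Toy usage with a hypothetical two-entry dictionary.
score = comparability(
    {"economy", "growth"}, {"économie", "croissance"},
    {"economy": {"économie"}, "growth": {"croissance"}},
)
print(score)  # 1.0 for this toy example
```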
22
Cross-view Embeddings for Information Retrieval. Gupta, Parth Alokkumar, 3 March 2017.
In this dissertation, we deal with cross-view tasks related to information retrieval using embedding methods. We study existing methodologies and propose new methods to overcome their limitations. We formally introduce the concept of mixed-script IR, which deals with the challenges faced by an IR system when a language is written in different scripts because of various technological and sociological factors. Mixed-script terms are represented in a small and finite feature space comprised of character n-grams. We propose the cross-view autoencoder (CAE) to model such terms in an abstract space, and the CAE achieves state-of-the-art performance.

We study a wide variety of models for cross-language information retrieval (CLIR) and propose a model based on compositional neural networks (XCNN) which overcomes the limitations of existing methods and achieves the best results for many CLIR tasks such as ad-hoc retrieval, parallel sentence retrieval and cross-language plagiarism detection. We empirically test the proposed models for these tasks on publicly available datasets and present the results with analyses.

We also explore an effective method to incorporate contextual similarity for lexical selection in machine translation. Concretely, we investigate a feature based on the context available in the source sentence, calculated using deep autoencoders. The proposed feature exhibits statistically significant improvements over strong baselines for English-to-Spanish and English-to-Hindi translation tasks.

Finally, we explore methods to evaluate the quality of autoencoder-generated representations of text data and analyse their architectural properties. For this, we propose two metrics based on the reconstruction capabilities of autoencoders: the structure preservation index (SPI) and the similarity accumulation index (SAI). We also introduce the concept of critical bottleneck dimensionality (CBD), below which structural information is lost, and present analyses linking CBD and language perplexity.

Gupta, PA. (2017). Cross-view Embeddings for Information Retrieval [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/78457
23
A generic architecture for semantic enhanced tagging systems. Magableh, Murad, January 2011.
The Social Web, or Web 2.0, has recently gained popularity because of its low cost and ease of use. Social tagging sites (e.g. Flickr and YouTube) offer new principles for end-users to publish and classify their content (data). Tagging systems contain free keywords (tags) generated by end-users to annotate and categorise data. Lack of semantics is the main drawback of social tagging, due to the use of unstructured vocabulary. Therefore, tagging systems suffer from shortcomings such as low precision, lack of collocation, synonymy, multilinguality, and use of shorthands. Consequently, relevant content is not visible, and thus not retrievable, when searching in tag-based systems. On the other hand, the Semantic Web, so-called Web 3.0, provides a rich semantic infrastructure. Ontologies are the key enabling technology for the Semantic Web, and they can be integrated with the Social Web to overcome the lack of semantics in tagging systems.

In the work presented in this thesis, we build an architecture to address a number of tagging-system drawbacks. In particular, we make use of the controlled vocabularies provided by ontologies to improve information retrieval in tag-based systems. Based on the tags provided by end-users, we introduce the idea of adding "system tags" from semantic, as well as social, resources. The "system tags" are comprehensive and wide-ranging in comparison with the limited "user tags", and they are used to fill the gap between the user tags and the search terms used for searching in tag-based systems. We restricted the scope of our work to tackle the following tagging-system shortcomings:
- the lack of semantic relations between user tags and search terms (e.g. synonymy, hypernymy),
- the lack of translation mediums between user tags and search terms (multilinguality),
- the lack of context to define emergent shorthand-writing user tags.

To address the first shortcoming, we use the WordNet ontology as a semantic lingual resource from which system tags are extracted. For the second shortcoming, we use the MultiWordNet ontology to recognise cross-language linkages between different languages. Finally, to address the third shortcoming, we use tag clusters obtained from the Social Web to create a context for defining the meaning of shorthand-writing tags.

A prototype of our architecture was implemented. In the prototype system, we built our own database to host videos imported from a real tag-based system (YouTube). The user tags associated with these videos were also imported and stored in the database. For each user tag, our algorithm adds a number of system tags that come either from semantic ontologies (WordNet or MultiWordNet) or from tag clusters imported from the Flickr website. Therefore, each system tag added to annotate the imported videos has a relationship with one of the user tags on that video: synonymy, hypernymy, similar term, related term, translation, or a clustering relation. To evaluate the suitability of the proposed system tags, we developed an online environment where participants submit search terms and retrieve two groups of videos to be evaluated. Each group is produced from one distinct type of tags, user tags or system tags. The videos in the two groups are produced from the same database and are evaluated by the same participants in order to have a consistent and reliable evaluation.
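As a rough illustration of how semantic system tags could be derived from a single user tag, the sketch below uses NLTK's WordNet interface to collect synonyms and hypernyms. The thesis's actual architecture is broader (it also draws on MultiWordNet translations and Flickr tag clusters), and the function shown here is a hypothetical simplification, not its implementation.

```python
# pip install nltk; then: python -c "import nltk; nltk.download('wordnet')"
from nltk.corpus import wordnet as wn

def system_tags(user_tag, max_senses=3):
    """Collect candidate 'system tags' for a user tag: synonyms and hypernyms
    from WordNet. A simplified sketch, not the thesis's full pipeline."""
    tags = set()
    for synset in wn.synsets(user_tag)[:max_senses]:
        # Synonymy: other lemmas in the same synset.
        tags.update(lemma.name().replace("_", " ") for lemma in synset.lemmas())
        # Hypernymy: lemmas of the parent synsets.
        for hyper in synset.hypernyms():
            tags.update(lemma.name().replace("_", " ") for lemma in hyper.lemmas())
    tags.discard(user_tag)
    return sorted(tags)

print(system_tags("car"))  # e.g. ['auto', 'automobile', 'motor vehicle', ...]
```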
Since user tags are what is used nowadays for searching real tag-based systems, we consider their efficiency as the reference against which we compare the efficiency of the new system tags. To compare the relevance between the search terms and each group of retrieved videos, we carried out a statistical analysis. According to the Wilcoxon signed-rank test, there was no significant difference between using system tags and using user tags. The findings revealed that using the system tags in search is as efficient as using the user tags: the two types of tags produce different results, but at the same level of relevance to the submitted search terms.
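For reference, this kind of paired comparison can be run with SciPy's Wilcoxon signed-rank test. The relevance scores below are invented solely to show the call; they are not data from the study.

```python
from scipy.stats import wilcoxon

# Hypothetical per-query relevance scores for the two retrieval conditions.
user_tag_scores   = [0.62, 0.71, 0.55, 0.80, 0.64, 0.58, 0.77, 0.69]
system_tag_scores = [0.60, 0.74, 0.57, 0.78, 0.66, 0.55, 0.79, 0.70]

stat, p_value = wilcoxon(user_tag_scores, system_tag_scores)
print(f"W = {stat:.2f}, p = {p_value:.3f}")
# A p-value above the chosen significance level indicates no significant
# difference between the two tag types, as reported in the thesis.
```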
24
Topic and link detection from multilingual news. January 2003.
Huang Ruizhang.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 110-114). Abstracts in English and Chinese.
Contents: Abstract; Acknowledgement;
Chapter 1 Introduction: 1.1 The Definition of Topic and Event; 1.2 Event and Topic Discovery (Problem Definition; Characteristics of the Discovery Problems; Our Contributions); 1.3 Story Link Detection (Problem Definition; Our Contributions); 1.4 Thesis Organization.
Chapter 2 Literature Review: 2.1 University of Massachusetts (UMass) (Topic Detection Approach; Story Link Detection Approach); 2.2 BBN Technologies; 2.3 IBM Research Center; 2.4 Carnegie Mellon University (CMU) (Topic Detection Approach; Story Link Detection Approach); 2.5 National Taiwan University (NTU) (Topic Detection Approach; Story Link Detection Approach).
Chapter 3 System Overview: 3.1 News Sources; 3.2 Story Preprocessing; 3.3 Information Extraction; 3.4 Gloss Translation; 3.5 Term Weight Calculation; 3.6 Event and Topic Discovery; 3.7 Story Link Detection.
Chapter 4 Event and Topic Discovery: 4.1 Overview of Event and Topic Discovery; 4.2 Event Discovery Component (Overview of Event Discovery Algorithm; Similarity Calculation; Story and Event Combination; Event Discovery Output); 4.3 Topic Discovery Component (Overview of Topic Discovery Algorithm; Relevance Model; Event and Topic Combination; Topic Discovery Output).
Chapter 5 Event and Topic Discovery Experimental Results: 5.1 Testing Corpus; 5.2 Evaluation Methodology; 5.3 Experimental Results on Event Discovery (Parameter Tuning; Event Discovery Result); 5.4 Experimental Results on Topic Discovery (Parameter Tuning; Topic Discovery Results).
Chapter 6 Story Link Detection: 6.1 Topic Types; 6.2 Overview of Link Detection Component; 6.3 Automatic Topic Type Categorization (Training Data Preparation; Feature Selection; Training and Tuning Categorization Model); 6.4 Link Detection Algorithm (Story Component Weight; Story Link Similarity Calculation); 6.5 Story Link Detection Output.
Chapter 7 Link Detection Experimental Results: 7.1 Testing Corpus; 7.2 Topic Type Categorization Result; 7.3 Link Detection Evaluation Methodology; 7.4 Experimental Results on Link Detection (Language Normalization Factor Tuning; Link Detection Performance; Link Detection Performance Breakdown).
Chapter 8 Conclusions and Future Work: 8.1 Conclusions; 8.2 Future Work.
Appendix A List of Topic Titles Annotated for TDT3 Corpus by LDC; Appendix B List of Manually Annotated Events for TDT3 Corpus; Bibliography.
25
Portable language technology: a resource-light approach to morpho-syntactic tagging / Feldman, Anna. January 2006.
Thesis (Ph. D.)--Ohio State University, 2006. / Title from first page of PDF file. Includes bibliographical references (p. 258-273).
26
Entwurf und Implementierung eines Frameworks zur Analyse und Evaluation von Verfahren im Information Retrieval / Design and Implementation of a Framework for the Analysis and Evaluation of Information Retrieval Methods. Wilhelm, Thomas, 13 August 2008.
This Diplom thesis gives a brief introduction to information retrieval, with a focus on evaluation and evaluation campaigns. Based on the shortcomings of an existing retrieval system, a new retrieval framework for the experimental evaluation of information retrieval approaches is then designed and implemented.
The components of the framework are designed at a level of abstraction that allows different existing retrieval systems, such as Apache Lucene or Terrier, to be integrated. The functionality of the framework is verified and confirmed by means of a reference implementation for the ImageCLEF Photographic Retrieval Task of the ImageCLEF track of the Cross-Language Evaluation Forum.
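A framework whose components abstract over concrete retrieval systems such as Apache Lucene or Terrier might expose an interface along the following lines. The class and method names are hypothetical and only illustrate the kind of abstraction the thesis describes, not its actual API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Hit:
    doc_id: str
    score: float

class RetrievalSystem(ABC):
    """Hypothetical backend interface an evaluation framework could target."""

    @abstractmethod
    def index(self, documents: dict[str, str]) -> None:
        """Index a mapping of document id to document text."""

    @abstractmethod
    def search(self, query: str, k: int = 10) -> list[Hit]:
        """Return the top-k hits for a query."""

class Experiment:
    """Runs the same topics against any backend, so Lucene- or Terrier-based
    adapters could be swapped in without changing the evaluation code."""

    def __init__(self, backend: RetrievalSystem):
        self.backend = backend

    def run(self, topics: dict[str, str], k: int = 10) -> dict[str, list[Hit]]:
        return {tid: self.backend.search(q, k) for tid, q in topics.items()}
```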
27
Peer to peer English/Chinese cross-language information retrieval. Lu, Chengye, January 2008.
Peer-to-peer systems have been widely used on the internet. However, most peer-to-peer information systems still lack some important features, for example cross-language IR (Information Retrieval) and collection selection/fusion features. Cross-language IR is a state-of-the-art research area in the IR research community, and it has not yet been used in any real-world IR system. Cross-language IR makes it possible to issue a query in one language and receive documents in other languages. In a typical peer-to-peer environment, users come from multiple countries and their collections are in multiple languages, so cross-language IR can help users find documents more easily. For example, many Chinese researchers search for research papers in both Chinese and English; with cross-language IR, they can issue one query in Chinese and get documents in two languages.

The Out Of Vocabulary (OOV) problem is one of the key research areas in cross-language information retrieval. In recent years, web mining has been shown to be one of the effective approaches to solving this problem. However, how to extract Multiword Lexical Units (MLUs) from web content and how to select the correct translations from the extracted candidate MLUs are still two difficult problems in web-mining-based automated translation approaches.

Discovering resource descriptions and merging results obtained from remote search engines are two key issues in distributed information retrieval studies. In uncooperative environments, query-based sampling and normalized-score-based merging strategies are well-known approaches to these problems. However, such approaches only consider the content of the remote database and do not consider the retrieval performance of the remote search engine.

This thesis presents research on building a peer-to-peer IR system with cross-language IR and an advanced collection-profiling technique for fusion. In particular, the thesis first presents a new Chinese term measurement and a new Chinese MLU extraction process that work well on small corpora, together with an approach for selecting MLUs more accurately. The thesis then proposes a collection-profiling strategy that can discover not only collection content but also the retrieval performance of the remote search engine. Based on collection profiling, a web-based query classification method and two collection fusion approaches are developed and presented. Our experiments show that the proposed strategies are effective in merging results in uncooperative peer-to-peer environments. Here, an uncooperative environment is one in which each peer is autonomous: peers are willing to share documents but do not share collection statistics, which is a typical peer-to-peer IR environment. Finally, all these approaches are combined to build a secure peer-to-peer multilingual IR system that cooperates through X.509 and the email system.
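As context for the result-merging problem mentioned above, the sketch below shows a simple normalized-score merging baseline: each peer's scores are min-max normalized so that results from search engines with incomparable scoring scales can be interleaved into one ranked list. This is a generic illustration of the baseline technique, not the fusion methods proposed in the thesis, and the optional per-collection quality weight is an assumption.

```python
def merge_results(per_peer_results, peer_weight=None):
    """Merge ranked lists from several peers by min-max normalized score.

    per_peer_results: {peer_id: [(doc_id, raw_score), ...]}
    peer_weight: optional {peer_id: weight}, e.g. from collection profiling.
    Returns a single list of (peer_id, doc_id, merged_score), best first.
    """
    merged = []
    for peer, results in per_peer_results.items():
        if not results:
            continue
        scores = [s for _, s in results]
        lo, hi = min(scores), max(scores)
        span = (hi - lo) or 1.0                 # avoid division by zero
        w = (peer_weight or {}).get(peer, 1.0)  # hypothetical quality weight
        for doc_id, s in results:
            merged.append((peer, doc_id, w * (s - lo) / span))
    return sorted(merged, key=lambda t: t[2], reverse=True)

# Toy usage: two peers whose raw score ranges are not comparable.
print(merge_results({
    "peer_a": [("a1", 12.0), ("a2", 7.5)],
    "peer_b": [("b1", 0.92), ("b2", 0.15)],
}))
```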
28
Word embeddings for monolingual and cross-language domain-specific information retrieval / Ordinbäddningar för enspråkig och tvärspråklig domänspecifik informationssökning. Wigder, Chaya, January 2018.
Various studies have shown the usefulness of word embedding models for a wide variety of natural language processing tasks. This thesis examines how word embeddings can be incorporated into domain-specific search engines for both monolingual and cross-language search. This is done by testing various embedding model hyperparameters, as well as methods for weighting the relative importance of words to a document or query. In addition, methods for generating domain-specific bilingual embeddings are examined and tested. The system was compared to a baseline that used cosine similarity without word embeddings, and for both the monolingual and bilingual search engines the use of monolingual embedding models improved performance over the baseline. However, bilingual embeddings, especially for domain-specific terms, tended to be of too poor quality to be used directly in the search engines.
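One common way to weight the relative importance of words when building embedding-based document and query representations, as the thesis investigates, is to take an idf-weighted average of the word vectors and rank by cosine similarity. The sketch below illustrates that idea with word vectors held in a plain dict; it is a generic illustration rather than the exact weighting scheme used in the thesis.

```python
import math
import numpy as np

def idf_weights(corpus_tokens):
    """Compute idf per term from a tokenized corpus (list of token lists)."""
    n_docs = len(corpus_tokens)
    df = {}
    for doc in corpus_tokens:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    return {t: math.log(n_docs / c) + 1.0 for t, c in df.items()}

def embed(tokens, vectors, idf, dim):
    """idf-weighted average of word vectors; `vectors` maps word -> np.ndarray."""
    acc, total = np.zeros(dim), 0.0
    for t in tokens:
        if t in vectors:
            w = idf.get(t, 1.0)
            acc += w * vectors[t]
            total += w
    return acc / total if total else acc

def rank(query_vec, doc_vecs):
    """Rank documents by cosine similarity to the query vector."""
    def cos(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0
    return sorted(((cos(query_vec, d), i) for i, d in enumerate(doc_vecs)), reverse=True)
```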
29
Automatic construction of English/Chinese parallel corpus. January 2001.
Li Kar Wing.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 88-96). Abstracts in English and Chinese.
Contents: Abstract; Acknowledgements; List of Tables; List of Figures;
Chapter 1 Introduction: 1.1 Application of corpus-based techniques (1.1.1 Machine Translation (MT): Linguistic; Statistical; Lexicon construction; 1.1.2 Cross-lingual Information Retrieval (CLIR): Controlled vocabulary; Free text; Application of the corpus-based approach in CLIR); 1.2 Overview of linguistic resources; 1.3 Written language corpora (Types of corpora; Limitation of comparable corpora); 1.4 Outline of the dissertation.
Chapter 2 Literature Review: 2.1 Research in automatic corpus construction; 2.2 Research in translation alignment (Sentence alignment; Word alignment); 2.3 Research in alignment of sequences.
Chapter 3 Alignment at Word Level and Character Level: 3.1 Title alignment (Lexical features; Grammatical features; The English/Chinese alignment model); 3.2 Alignment at word level and character level (Alignment at word level; Alignment at character level: longest matching; Longest common subsequence (LCS); Applying LCS in the English/Chinese alignment model); 3.3 Reducing overlapping ambiguity (Edit distance; Overlapping in the algorithm model).
Chapter 4 Alignment at Title Level: 4.1 Review of score functions; 4.2 The score function ((C matches E) and (E matches C); Length similarity).
Chapter 5 Experimental Results: 5.1 Hong Kong government press release articles; 5.2 Hang Seng Bank economic monthly reports; 5.3 Hang Seng Bank press release articles; 5.4 Hang Seng Bank speech articles; 5.5 Quality of the collections and future work.
Chapter 6 Conclusion.
Bibliography.
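The table of contents above shows that the character-level alignment relies on the longest common subsequence (LCS); for reference, a standard dynamic-programming LCS length computation looks like the following. This is the textbook algorithm with a hypothetical normalized score, not the thesis's specific English/Chinese alignment model.

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of two strings (textbook DP)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

# A normalized LCS score is often used to compare strings of different lengths.
def lcs_similarity(a: str, b: str) -> float:
    return lcs_length(a, b) / max(len(a), len(b)) if a and b else 0.0

print(lcs_similarity("Hong Kong", "HongKong SAR"))
```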
30
Auxílio na prevenção de doenças crônicas por meio de mapeamento e relacionamento conceitual de informações em biomedicina / Support in the Prevention of Chronic Diseases by means of Mapping and Conceptual Relationship of Biomedical Information. Pollettini, Juliana Tarossi, 28 November 2011.
Genomic medicine has suggested that exposure to risk factors since conception may influence gene expression and consequently induce the development of chronic diseases in adulthood. Scientific papers reporting these discoveries indicate that epigenetics must be exploited to prevent diseases of high prevalence, such as cardiovascular diseases, diabetes and obesity. The large amount of scientific information burdens health care professionals interested in staying up to date, since searches for accurate information become complex and expensive. Some computational techniques can support the management of large biomedical information repositories and the discovery of knowledge. This study presents a framework to support surveillance systems that alert health professionals about human development problems by retrieving scientific papers that relate chronic diseases to risk factors detected in a patient's clinical record. As a contribution, healthcare professionals will be able to establish a routine with the family, setting up the best conditions for the child's growth. According to Butte (2008), the effective transformation of results from biomedical research into knowledge that actually improves public health has been considered an important domain of informatics and has been called Translational Bioinformatics. Since chronic diseases are a serious health problem worldwide and lead the causes of mortality with 60% of all deaths, this investigation may enable results from bioinformatics research to directly benefit public health.
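The kind of literature retrieval this framework depends on can be prototyped against PubMed's public E-utilities. The query below, pairing a chronic disease with a risk factor taken from a clinical record, is a hypothetical example and not the thesis's actual retrieval component.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def pubmed_search(disease, risk_factor, retmax=5):
    """Search PubMed for papers linking a chronic disease and a risk factor
    (hypothetical illustration using the public NCBI E-utilities esearch API)."""
    params = urlencode({
        "db": "pubmed",
        "term": f'"{disease}"[Title/Abstract] AND "{risk_factor}"[Title/Abstract]',
        "retmode": "json",
        "retmax": retmax,
    })
    url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"
    with urlopen(url) as response:
        data = json.load(response)
    return data["esearchresult"]["idlist"]  # PubMed IDs of matching papers

print(pubmed_search("obesity", "maternal smoking"))
```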