1. Improving Table Scans for Trie Indexed Databases

Toney, Ethan, 01 January 2018
We consider a class of problems characterized by the need for a string-based identifier that reflects the ontology of the application domain. We present rules for string-based identifier schemas that facilitate fast filtering in databases used for this class of problems. We provide a runtime analysis of our schema and experimentally compare it with another solution. We also discuss the performance of our solution when applied to a game engine. The string-based identifier schema can be used in additional scenarios such as cloud computing. An identifier schema adds metadata about an element, so the solution requires additional memory; but as long as queries operate only on the included metadata, there is no need to load the element from disk, which leads to large performance gains.
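
As a rough illustration of the idea in this abstract (not the author's actual schema), the sketch below encodes ontology segments in a string identifier and answers filter queries with a prefix scan over the identifier index alone, never touching the stored elements. All names and example segments are hypothetical.

```python
# Illustrative sketch only: a hierarchical string identifier whose segments
# encode ontology metadata (domain, category, type), so that a prefix scan
# over the index answers queries without loading element payloads from disk.
# make_id, prefix_scan, and the example segments are hypothetical names.

from bisect import bisect_left

def make_id(*segments: str) -> str:
    """Join ontology segments into a string identifier, e.g. 'world.enemy.orc.0042'."""
    return ".".join(segments)

def prefix_scan(sorted_ids: list[str], prefix: str) -> list[str]:
    """Return all identifiers starting with `prefix`, using binary search on a sorted index."""
    lo = bisect_left(sorted_ids, prefix)
    out = []
    for i in range(lo, len(sorted_ids)):
        if not sorted_ids[i].startswith(prefix):
            break
        out.append(sorted_ids[i])
    return out

# Index of identifiers only; element payloads stay on disk.
index = sorted([
    make_id("world", "enemy", "orc", "0042"),
    make_id("world", "enemy", "goblin", "0007"),
    make_id("world", "item", "potion", "0001"),
])

# Filter every enemy without deserializing a single element.
print(prefix_scan(index, "world.enemy."))
```
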
2. Knihovna pro detekci významových vlastností stromových struktur / A Library for Detection of Semantic Properties of Tree Structures

Panov, Sergey, January 2020
Testing multi-component IT and IoT systems that process streams of various messages is a complex task. Why is it complex? Because of the number of components, their asynchronous interactions, the many combinations of events to be tested, a testing environment that differs from the real one, and a number of other reasons. This thesis proposes a way to generate complex data for testing purposes with minimal developer involvement. The data generation is based on analysing the communication traces of a real system and then synthesizing similar traces for testing. The thesis also proposes a framework for the initial analysis of the messages carried in the captured communication. This can be done using different abstract models: a message model and a communication model. The result of this thesis is an implemented library for building a message model, together with a set of operations for working with this model.
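
A hypothetical sketch of the general approach described above: a trivial "message model" is inferred from captured messages and then used to synthesize similar messages for testing. The field names, value domains, and sampling strategy are illustrative assumptions, not the thesis' library API.

```python
# Hypothetical sketch: build a simple message model from captured messages
# (field -> set of observed values) and synthesize similar messages for testing.

import random
from collections import defaultdict

def build_message_model(captured: list[dict]) -> dict[str, list]:
    """Collect the observed value domain of every field across captured messages."""
    model = defaultdict(set)
    for msg in captured:
        for field, value in msg.items():
            model[field].add(value)
    return {field: sorted(values) for field, values in model.items()}

def synthesize(model: dict[str, list], n: int) -> list[dict]:
    """Generate n synthetic messages drawn from the observed value domains."""
    return [{field: random.choice(values) for field, values in model.items()}
            for _ in range(n)]

captured = [
    {"type": "telemetry", "device": "dev-1", "value": 21},
    {"type": "telemetry", "device": "dev-2", "value": 23},
    {"type": "command",   "device": "dev-1", "value": 0},
]
model = build_message_model(captured)
print(synthesize(model, 2))
```
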
3. Parallel Viterbi Search For Continuous Speech Recognition On A Multi-Core Architecture

Parihar, Naveen, 11 December 2009
State-of-the-art speech-recognition systems can successfully perform simple tasks in real time on most computers, when the tasks are performed in controlled and noise-free environments. However, current algorithms and processors are not yet powerful enough for real-time large-vocabulary conversational speech recognition in noisy, real-world environments. Parallel processing can improve the real-time performance of speech recognition systems and increase their applicability, and developing an effective approach to parallelization is especially important given the recent trend toward multi-core processor design. In this dissertation, we introduce methods for parallelizing a single-pass, across-word, n-gram, lexical-tree-based Viterbi recognizer, which is the most popular architecture for Viterbi-based large-vocabulary continuous speech recognition. We parallelize two different open-source implementations of such a recognizer, one developed at Mississippi State University and the other at Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen in Germany. We describe three methods for parallelization. The first, called parallel fast likelihood computation, parallelizes likelihood computations by decomposing mixtures among CPU cores, so that each core computes the likelihoods of the set of mixtures allocated to it. A second method, lexical-tree division, parallelizes the search-management component of a speech recognizer by dividing the lexical tree among the cores. A third, alternative method for parallelizing the search-management component, called lexical-tree copies decomposition, dynamically distributes the active lexical-tree copies among the cores. All parallelization methods were tested on two and four cores of an Intel Core 2 Quad processor and significantly improved real-time performance. Several challenges in parallelizing a lexical-tree-based Viterbi speech recognizer are also identified and discussed.
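
A minimal sketch of the first method described above, parallel fast likelihood computation: the mixture components are partitioned into chunks and each worker scores its own chunk of diagonal-covariance Gaussians for a feature frame. The shapes, chunking helper, and use of Python worker processes are assumptions for illustration, not the recognizers' actual code.

```python
# Sketch of splitting Gaussian mixture components among workers so that
# per-frame likelihoods are computed concurrently, one chunk per worker.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def score_chunk(args):
    """Log-likelihoods of one feature frame under a chunk of diagonal Gaussians."""
    frame, means, variances = args
    diff = frame - means                      # (n_components, dim)
    exponent = -0.5 * np.sum(diff * diff / variances, axis=1)
    log_norm = -0.5 * np.sum(np.log(2.0 * np.pi * variances), axis=1)
    return exponent + log_norm                # one score per component

def parallel_likelihoods(frame, means, variances, n_workers=4):
    """Split the mixture components among workers and concatenate their scores."""
    mean_chunks = np.array_split(means, n_workers)
    var_chunks = np.array_split(variances, n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(score_chunk,
                         [(frame, m, v) for m, v in zip(mean_chunks, var_chunks)])
    return np.concatenate(list(parts))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, n_components = 39, 128               # toy MFCC dimension and mixture count
    frame = rng.standard_normal(dim)
    means = rng.standard_normal((n_components, dim))
    variances = rng.uniform(0.5, 2.0, (n_components, dim))
    print(parallel_likelihoods(frame, means, variances).shape)   # (128,)
```
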
4. FreeCore : un système d'indexation de résumés de document sur une Table de Hachage Distribuée (DHT) / FreeCore: an index system for document summaries on a Distributed Hash Table (DHT)

Ngom, Bassirou, 13 July 2018
This thesis examines the problem of indexing and searching in Distributed Hash Tables (DHTs). It proposes a distributed system for storing document summaries based on their content. Concretely, the thesis uses Bloom filters (BFs) to represent document summaries and proposes an efficient method for inserting and retrieving documents represented by BFs in an index distributed over a DHT. Content-based storage has a dual advantage: it groups similar documents together so they can be retrieved more quickly, and it allows documents to be found through keyword searches expressed as a Bloom filter. However, processing a keyword query represented by a Bloom filter is a complex operation that requires a mechanism for locating the descendant Bloom filters representing documents stored in the DHT. The thesis therefore proposes, in a second step, two Bloom filter indexes distributed over DHTs. The first index system combines the principles of content-based indexing and inverted lists, and addresses the large amount of data stored by content-based indexes. Indeed, by using long Bloom filters, this solution stores documents on a larger number of servers and indexes them using less space. The thesis then proposes a second index system that efficiently supports superset query processing (keyword queries) using a prefix tree. This solution exploits the distribution of the data and proposes a configurable partitioning function that indexes documents with a balanced binary tree, so that documents are distributed efficiently across the indexing servers. In addition, as a third solution, the thesis proposes an efficient method for locating documents that contain a given set of keywords. Compared to solutions in the same category, this last solution performs superset searches at a lower cost and provides a solid foundation for superset searching on index systems built on top of DHTs. Finally, the thesis presents the prototype of a peer-to-peer system for content indexing and keyword search. This prototype, ready to be deployed in a real environment, was evaluated in the PeerSim simulation environment, which made it possible to measure the theoretical performance of the algorithms developed throughout the thesis.
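
An illustrative sketch (not the FreeCore implementation) of the Bloom filter superset test that underlies the keyword queries described above: a document summary and a query are both hashed into Bloom filters, and the document matches when every bit set in the query filter is also set in the document filter. Filter length, hash construction, and DHT routing are simplified assumptions.

```python
# Sketch: documents summarized as Bloom filters of their keywords; a keyword
# query (also a Bloom filter) matches a document when the document filter
# covers every bit of the query filter -- the "superset" test.

import hashlib

M_BITS = 256          # filter length
K_HASHES = 3          # hash functions per key

def _positions(key: str):
    """K bit positions for a key, derived from salted SHA-1 digests."""
    for i in range(K_HASHES):
        digest = hashlib.sha1(f"{i}:{key}".encode()).digest()
        yield int.from_bytes(digest[:4], "big") % M_BITS

def bloom_of(keywords) -> int:
    """Bloom filter of a keyword set, packed into a Python int bit mask."""
    bits = 0
    for kw in keywords:
        for pos in _positions(kw):
            bits |= 1 << pos
    return bits

def is_superset(document_bf: int, query_bf: int) -> bool:
    """True (possibly with false positives) if the document covers all query keywords."""
    return document_bf & query_bf == query_bf

doc = bloom_of({"distributed", "hash", "table", "index"})
print(is_superset(doc, bloom_of({"hash", "index"})))   # True
print(is_superset(doc, bloom_of({"bloom", "tree"})))   # almost certainly False
```
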
5. Query Segmentation For E-Commerce Sites

Gong, Xiaojing, 12 July 2013
Indiana University-Purdue University Indianapolis (IUPUI) / The query segmentation module is an integral part of Natural Language Processing that analyzes users' queries and divides them into separate phrases. Published work on query segmentation focuses on web search using the Google n-gram frequency corpus or on text retrieval from relational databases. However, this module is also useful in the E-Commerce domain for product search. In this thesis, we discuss query segmentation in the context of E-Commerce. We propose a hybrid unsupervised segmentation methodology, based on a prefix tree, mutual information, and relative frequency counts, to compute the scores of query pairs, and we involve Wikipedia for new-word recognition. Furthermore, we use two E-Commerce-specific evaluation methods to quantify the accuracy of our query segmentation method.
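
A hedged sketch of the kind of pairwise scoring the abstract mentions: adjacent query tokens are scored with pointwise mutual information estimated from corpus counts, and the query is split wherever the score drops below a threshold. The toy counts, threshold, and greedy splitting rule are assumptions for illustration, not the thesis' actual method.

```python
# Sketch: score adjacent query tokens by pointwise mutual information (PMI)
# from corpus counts and split the query at low-scoring boundaries.

import math

unigram_counts = {"nike": 500, "running": 800, "shoes": 900, "red": 400}
bigram_counts = {("running", "shoes"): 300, ("nike", "running"): 40, ("red", "nike"): 5}
TOTAL = 10_000          # pretend corpus size
THRESHOLD = 1.0

def pmi(left: str, right: str) -> float:
    """Pointwise mutual information of an adjacent token pair."""
    p_pair = bigram_counts.get((left, right), 0.5) / TOTAL   # 0.5 = crude smoothing
    p_left = unigram_counts.get(left, 1) / TOTAL
    p_right = unigram_counts.get(right, 1) / TOTAL
    return math.log(p_pair / (p_left * p_right))

def segment(query: str) -> list[list[str]]:
    """Greedily keep high-PMI pairs in one phrase, split the rest."""
    tokens = query.split()
    phrases, current = [], [tokens[0]]
    for left, right in zip(tokens, tokens[1:]):
        if pmi(left, right) >= THRESHOLD:
            current.append(right)
        else:
            phrases.append(current)
            current = [right]
    phrases.append(current)
    return phrases

print(segment("red nike running shoes"))
# [['red'], ['nike'], ['running', 'shoes']]
```
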
6. Generická analýza toků v počítačových sítích / Generic Flow Analysis in Computer Networks

Jančová, Markéta, January 2020
This thesis deals with the problem of describing network traffic by means of an automatically created communication model. It focuses mainly on communication in industrial control systems, which use special protocols such as IEC 60870-5-104. We present a method for characterizing network traffic in terms of both the content of the communication and its behaviour over time. The method builds its description on deterministic finite automata, prefix trees, and repetition analysis. The second part of this master's thesis focuses on the implementation of a program that, based on such a communication model, is able to verify network traffic in real time.
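
A minimal sketch of one ingredient named above, the prefix tree: captured sequences of message types are inserted into a tree, and a live sequence conforms if it follows a path observed in the captured traffic. The IEC 60870-5-104 frame names in the example are assumptions for illustration; the thesis combines this structure with deterministic finite automata and repetition analysis.

```python
# Sketch: a prefix tree over captured message-type sequences, used to check
# whether a live sequence is a prefix of some previously observed behaviour.

class PrefixTreeNode:
    def __init__(self):
        self.children: dict[str, "PrefixTreeNode"] = {}

def build_prefix_tree(sequences: list[list[str]]) -> PrefixTreeNode:
    """Insert every captured sequence of message types into the tree."""
    root = PrefixTreeNode()
    for seq in sequences:
        node = root
        for msg_type in seq:
            node = node.children.setdefault(msg_type, PrefixTreeNode())
    return root

def conforms(root: PrefixTreeNode, live_sequence: list[str]) -> bool:
    """True if the live sequence follows some path seen in the captured traffic."""
    node = root
    for msg_type in live_sequence:
        if msg_type not in node.children:
            return False
        node = node.children[msg_type]
    return True

captured = [["STARTDT_ACT", "STARTDT_CON", "I_FRAME", "I_FRAME"],
            ["TESTFR_ACT", "TESTFR_CON"]]
tree = build_prefix_tree(captured)
print(conforms(tree, ["STARTDT_ACT", "STARTDT_CON"]))   # True
print(conforms(tree, ["STARTDT_ACT", "TESTFR_CON"]))    # False
```
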
