1

Minerador WEB: um estudo sobre mecanismos de descoberta de informações na WEB. / Minerador WEB: a study on mechanisms of discovery of information in the WEB.

Toscano, Wagner, 10 July 2003
The World Wide Web (Web) offers a huge amount and a large diversity of information, which makes it very attractive to people searching for some desired piece of information. On the other hand, this sheer volume raises the fundamental problems of how to discover, in an effective way, whether the desired information is present on the Web and how to reach it. A body of information that cannot be accessed easily, or whose access lacks effective search tools, is of little practical use; the lack of structure in Web information adds to the difficulty, hindering the application of systematic search processes. This work presents a study of alternative information search techniques, applying several concepts from information retrieval and knowledge representation. More specifically, the goals are (i) to analyze the efficiency gained from complementary search techniques, in particular mechanisms that extract information from explicit fragments of HTML documents and the use of the Naive Bayes method to classify sites among a set of predefined classes, and (ii) to analyze the effectiveness of storing information extracted from the Web in a knowledge base (described in first-order logic) that, combined with background knowledge, allows a user to answer queries more complex than those expressible through keywords and logical connectives.
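The abstract names Naive Bayes classification of sites over features extracted from HTML. A minimal sketch of what such a classifier can look like, assuming bag-of-words features and scikit-learn; the documents, labels, and class names below are illustrative assumptions, not the thesis's actual data or implementation:

```python
# Illustrative sketch: classifying sites with multinomial Naive Bayes
# over bag-of-words features extracted from HTML text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training data: text extracted from HTML documents.
train_texts = [
    "flight hotel booking travel deals",
    "course lecture syllabus exam enrollment",
    "cheap tickets vacation resort packages",
    "faculty research department seminar",
]
train_labels = ["travel", "university", "travel", "university"]

vectorizer = CountVectorizer()          # bag-of-words term counts
X_train = vectorizer.fit_transform(train_texts)

classifier = MultinomialNB()            # P(class | words) via Bayes' rule
classifier.fit(X_train, train_labels)

# Classify the extracted text of a new page.
X_new = vectorizer.transform(["seminar on information retrieval research"])
print(classifier.predict(X_new))        # -> ['university']
```

Multinomial Naive Bayes fits the word-count representation naturally, since it models each class as a distribution over term frequencies.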
2

Anomaly detection technique for sequential data / Technique de détection d'anomalies utilisant des données séquentielles

Pellissier, Muriel, 15 October 2013
Nowadays, huge quantities of data are easily accessible, but these data are not useful unless we can process them efficiently and easily extract the relevant information from them. Anomaly detection techniques are used in many domains to help process data in an automated way; the appropriate technique depends on the application domain, the type of data, and the type of anomaly. This study is concerned only with sequential data. A sequence is an ordered list of items, also called events. Identifying irregularities in sequential data is essential in many application domains, such as DNA sequences, system calls, user commands, and banking transactions. This thesis presents a new approach for identifying and analyzing irregularities in sequential data. The technique detects anomalies in sequential data where the order of the items matters, and it considers not only the order of the events but also their position within the sequences. A sequence is flagged as anomalous if it is quasi-identical to a usual behavior, that is, slightly different from a frequent (common) sequence; the differences between two sequences are based on the order of the events and their position in the sequence. In this thesis the technique is applied to maritime surveillance, but it can be used in any other domain with sequential data. Maritime surveillance needs automated tools to help customs target suspicious containers: around 90% of world trade is transported by container, yet only 1-2% of containers can be physically checked, owing to the high financial cost and the human resources required to control a container. Since the number of containers travelling around the world every day keeps growing, automated controls are needed to counter illegal activities such as fraud, quota evasion, illegal products, hidden activities, and drug or arms smuggling. In the maritime domain, the technique identifies suspicious containers by comparing container trips from the data set with itineraries that are known to be normal (common). A container trip, also called an itinerary, is an ordered list of actions performed on a container at specific geographical positions; the actions are loading, transshipment, and discharging, and for each action we know the container ID and its geographical position (port ID). The technique is divided into two parts. The first part detects the common (most frequent) sequences in the data set. The second part identifies sequences that are slightly different from the common ones, using a distance-based method to classify a given sequence as normal or suspicious. The distance is calculated with a method that combines quantitative and qualitative differences between two sequences.
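A minimal sketch of the two-stage idea described above: frequent-sequence detection followed by distance-based flagging. Plain edit distance stands in here for the thesis's combined qualitative/quantitative measure, and the itineraries, support threshold, and distance bound are made-up assumptions:

```python
# Sketch: flag sequences that are slightly different from frequent ones.
from collections import Counter

def edit_distance(a, b):
    """Levenshtein distance between two sequences of items."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (x != y))
    return dp[len(b)]

def find_anomalies(trips, min_support=3, max_dist=1):
    counts = Counter(tuple(t) for t in trips)
    # Stage 1: sequences seen at least min_support times are "normal".
    frequent = {s for s, c in counts.items() if c >= min_support}
    anomalies = []
    for trip in trips:
        t = tuple(trip)
        if t in frequent:
            continue
        # Stage 2: anomalous = close to, but not identical to, a
        # frequent sequence.
        if any(edit_distance(t, f) <= max_dist for f in frequent):
            anomalies.append(trip)
    return anomalies

# Hypothetical container itineraries (ordered port IDs).
trips = [["SHA", "SIN", "RTM"]] * 4 + [["SHA", "HKG", "SIN", "RTM"]]
print(find_anomalies(trips))  # -> [['SHA', 'HKG', 'SIN', 'RTM']]
```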
3

Vyhledávání v multimodálních databázích / Multimodal Database Search

Krejčíř, Tomáš, January 2009
The field that deals with storing and effectively searching multimedia documents is called information retrieval. This work describes a solution for effective searching in collections of shots. Multimedia documents are represented as vectors in a high-dimensional space, because in such a representation it is easier to define semantics as well as search mechanisms. The work addresses similarity search in metric spaces, using distance functions such as Euclidean, Chebyshev, or Mahalanobis for comparing global features, and cosine similarity or binary scoring for comparing local features. Experiments on the TRECVid dataset compare the implemented distance functions: Mahalanobis appears to be the best distance function for global features, and cosine similarity for local features.
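A short sketch of the distance functions named in the abstract, using SciPy; the random vectors are stand-ins for real global shot descriptors from a TRECVid-like collection:

```python
# Sketch: comparing feature vectors with the distances named above.
import numpy as np
from scipy.spatial import distance

rng = np.random.default_rng(0)
dataset = rng.normal(size=(100, 8))   # 100 hypothetical feature vectors
u, v = dataset[0], dataset[1]

print("Euclidean:", distance.euclidean(u, v))
print("Chebyshev:", distance.chebyshev(u, v))

# Mahalanobis needs the inverse covariance of the collection, so it
# adapts to the spread and correlation of the features.
VI = np.linalg.inv(np.cov(dataset, rowvar=False))
print("Mahalanobis:", distance.mahalanobis(u, v, VI))

# Cosine similarity (1 - cosine distance) for comparing local features.
print("Cosine similarity:", 1.0 - distance.cosine(u, v))
```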
4

A New Approach for Automated Feature Selection

Gocht, Andreas, 5 April 2019
Feature selection, or variable selection, is an important step in many machine learning tasks. In the traditional approach, users specify the number of features to be selected, and an algorithm then selects features using a score such as the Joint Mutual Information (JMI). If users do not know the exact number of features to select, they need to evaluate the full learning chain for different feature counts in order to determine which count leads to the lowest training error. To overcome this drawback, we extend the JMI score and mitigate the flaw by introducing a stopping criterion into the selection algorithm that can be specified depending on the learning task. This enables developers to carry out feature selection before the actual learning is done. We call our new score Historical Joint Mutual Information (HJMI). We also compare our new algorithm, using the novel HJMI score, against traditional algorithms that use the JMI score, and demonstrate that the HJMI-based algorithm automatically selects a reasonable number of features: our approach delivers results as good as traditional approaches and sometimes even outperforms them, as it is not limited to a certain step size for feature evaluation.
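A sketch of greedy forward selection with a JMI-style score and a gain-based stopping criterion, in the spirit of the approach described. This is not the HJMI score from the thesis; the gain definition, pairing encoding, threshold, and toy data are illustrative assumptions:

```python
# Sketch: greedy mutual-information feature selection that stops itself.
import numpy as np

def mutual_info(x, y):
    """I(X; Y) for discrete integer arrays, in nats."""
    mi = 0.0
    for a in np.unique(x):
        px = np.mean(x == a)
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * np.mean(y == b)))
    return mi

def pair(u, v):
    """Encode two small-integer features as one discrete variable."""
    return u * (int(v.max()) + 1) + v

def jmi_select(X, y, min_gain=0.01):
    """Greedy JMI-style forward selection with a stopping criterion."""
    remaining = list(range(X.shape[1]))
    # Seed with the single most informative feature.
    first = max(remaining, key=lambda f: mutual_info(X[:, f], y))
    selected = [first]
    remaining.remove(first)
    while remaining:
        # Gain of f: extra information I((f, s); y) - I(s; y) that f
        # adds on top of each already-selected feature s, averaged.
        def gain(f):
            return float(np.mean([mutual_info(pair(X[:, f], X[:, s]), y)
                                  - mutual_info(X[:, s], y)
                                  for s in selected]))
        best = max(remaining, key=gain)
        if gain(best) < min_gain:   # stopping criterion: gain too small
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: y is the OR of features 0 and 1; features 2 and 3 are noise.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 4))
y = X[:, 0] | X[:, 1]
print(jmi_select(X, y))   # expected: [0, 1] or [1, 0]
```

The stopping rule plays the role the abstract assigns to HJMI: selection halts on its own once no remaining feature contributes enough new information, so no feature count has to be fixed in advance.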
