  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Improving Dynamic Navigation Algorithms

Yue, Weiya, Ph.D. 30 September 2013 (has links)
No description available.
2

Effective and Efficient Methodologies for Social Network Analysis

Pan, Long 16 January 2008 (has links)
Performing social network analysis (SNA) requires powerful techniques for analyzing the structural information contained in interactions between social entities. Many SNA technologies and methodologies have been developed and have successfully provided significant insights for small-scale interactions. However, these techniques are not suitable for analyzing large social networks, which are popular and important in various fields and have structural properties that cannot be obtained from small networks or their analyses. The key open issues in the design of current SNA techniques can be embodied in three fundamental challenges: long processing time, large computational resource requirements, and network dynamism. To address these challenges, we discuss an anytime-anywhere methodology based on a parallel/distributed computational framework to effectively and efficiently analyze large and dynamic social networks. In our methodology, large social networks are decomposed into interrelated smaller parts. A coarse level of network analysis is built by comprehensively analyzing each part, and the partial analysis results are incrementally refined over time. During the analysis process, dynamic changes to the network are adapted to efficiently, based on the results already obtained. To evaluate and validate our methodology, we implement it for a set of SNA metrics that are significant for SNA applications and cover a wide range of difficulties. Through rigorous theoretical and experimental analyses, we demonstrate that our anytime-anywhere methodology is / Ph. D.
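The decompose / coarse-analyze / refine loop the abstract describes can be sketched as a generic anytime skeleton. The `analyze` and `refine` callables below are hypothetical stand-ins for the per-part SNA computations, a sketch of the pattern rather than the thesis's actual framework:

```python
def anytime_anywhere(parts, analyze, refine, steps):
    """Skeleton of a coarse-then-refine anytime loop: run a cheap coarse
    analysis over every part of the decomposed network, then repeatedly
    refine the partial results.  The loop can be stopped after any step
    and the current approximation is still usable."""
    results = [analyze(p) for p in parts]            # coarse pass
    for _ in range(steps):                           # incremental refinement
        results = [refine(r, p) for r, p in zip(results, parts)]
        yield list(results)                          # usable at any time
```

Consuming only the first few yields models an early interruption: the caller trades accuracy for response time, which is the anytime property.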
3

Algoritmos anytime baseados em instâncias para classificação em fluxo de dados / Instance-based anytime algorithm to data stream classification

Lemes, Cristiano Inácio 09 March 2016 (has links)
Data stream learning is an important research field that has received much attention from the scientific community. In many real-world applications, data is generated as a potentially infinite temporal sequence. The main characteristic of stream processing is the need to provide answers under stringent time and memory restrictions. For example, a data stream classifier must provide an answer for each event before the next one arrives; if it does not, some events from the stream may be left unclassified. Many streams generate events at a highly variable rate, i.e., the time interval between two consecutive events may vary greatly. For a learning system to be successful, two properties must be satisfied: (i) it must be able to provide a classification for a new example in a short time, and (ii) it must be able to adapt the classification model to handle concept change, since the data may not follow a stationary distribution. Batch machine learning algorithms do not satisfy these properties: they assume the distribution is stationary and are not prepared to operate under severe memory and processing constraints. To satisfy these requirements, such algorithms must be adapted to the data stream context. One possible adaptation is to turn the classifier into an anytime algorithm; anytime algorithms may be interrupted and still provide an approximate answer (classification) at any time. Another is to make the algorithm incremental, so that its model may be updated with new examples from the stream. In this work, we evaluate two approaches for data stream learning. The first is based on a state-of-the-art anytime k-nearest neighbor classifier, for which a new tiebreak method is proposed; experiments show consistent performance improvements on many benchmark data sets. The second approach adapts the anytime algorithm to handle concept change. This approach, called the Incremental Anytime Algorithm, was designed in two versions, one based on the Space Saving algorithm and the other on a Sliding Window. Experiments show that each version has its advantages and disadvantages depending on the stream, but that overall both versions outperform the baseline approaches.
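As a hedged illustration of what "anytime, instance-based classification" means here, the sketch below scans training instances one at a time and can stop at any point (interruption is modeled by a simple step budget) while still returning a label from the neighbors seen so far. It is not the thesis's algorithm, which builds on a state-of-the-art anytime k-NN and adds a new tiebreak rule:

```python
from collections import Counter

def anytime_knn_classify(query, train, k, budget):
    """Interruptible nearest-neighbor scan: examine training examples one
    at a time, keeping the k closest seen so far; stopping early (here,
    when the step budget runs out) still yields a label from the partial
    scan -- the answer just gets better the longer the scan runs."""
    best = []  # (squared distance, label) pairs, sorted, length <= k
    for steps, (x, label) in enumerate(train, start=1):
        d = sum((a - b) ** 2 for a, b in zip(query, x))
        best.append((d, label))
        best.sort(key=lambda t: t[0])
        del best[k:]
        if steps >= budget:        # interruption point: answer with what we have
            break
    votes = Counter(label for _, label in best)
    return votes.most_common(1)[0][0]
```

With a budget of one the classifier answers from a single instance; with a generous budget it converges to the exact k-NN decision.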
5

Vers des systèmes de recommandation robustes pour la navigation Web : inspiration de la modélisation statistique du langage / Towards robust recommender systems for Web navigation: drawing on statistical language modeling

Bonnin, Geoffray 23 November 2010 (has links) (PDF)
The goal of this thesis is to improve the quality of recommender systems for Web navigation by exploiting the sequentiality of users' navigation actions. Sequentiality has already been studied in this context; such studies usually try to find a good trade-off between accuracy, time and memory complexity, and coverage. Moreover, Web navigation is peculiar in that navigations may contain noise (navigation errors, pop-ups, etc.) and users may perform parallel navigations. Most models proposed in the literature exploit either contiguous sequences of resources, which are not robust to noise, or non-contiguous sequences, which incur high time and memory complexity. This complexity can be reduced by selecting among the sequences, but that in turn causes coverage problems. Finally, to our knowledge, parallel navigation has never been studied from the recommendation point of view. The problem addressed in this thesis is therefore to propose a new sequential model with five characteristics: (1) good recommendation accuracy, (2) good robustness to noise, (3) support for parallel navigations, (4) good coverage, and (5) low time and memory complexity. To address this problem, we draw inspiration from Statistical Language Modeling (SLM), whose characteristics are very close to those of Web navigation. SLM has been studied for much longer than recommender systems and has amply proven its accuracy and efficiency, and most of the statistical language models proposed take sequences into account.
We therefore studied the possibility of exploiting the models used in SLM and adapting them to the specific constraints of Web navigation.
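The connection to statistical language modeling can be illustrated with the simplest sequential model SLM offers: a bigram counter over navigation sessions. This is a toy baseline, not the thesis's model, which draws on richer SLM techniques (e.g. skipping and interpolation) to gain noise robustness and handle parallel navigations:

```python
from collections import defaultdict, Counter

def train_bigrams(sessions):
    """Count page-to-next-page transitions across navigation sessions,
    i.e. a bigram model: P(next | current) estimated by raw counts."""
    counts = defaultdict(Counter)
    for session in sessions:
        for prev, nxt in zip(session, session[1:]):
            counts[prev][nxt] += 1
    return counts

def recommend(counts, current_page, k=3):
    """Recommend the k most frequently observed successors of current_page."""
    return [page for page, _ in counts[current_page].most_common(k)]
```

A contiguous bigram model like this is exactly the kind that breaks under noise (one stray pop-up severs the sequence), which motivates the non-contiguous SLM variants the thesis investigates.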
6

Speeding Up the Convergence of Online Heuristic Search and Scaling Up Offline Heuristic Search

Furcy, David Andre 25 November 2004 (has links)
The most popular methods for solving the shortest-path problem in Artificial Intelligence are heuristic search algorithms. The main contributions of this research are new heuristic search algorithms that are either faster or scale up to larger problems than existing algorithms. Our contributions apply to both online and offline tasks. For online tasks, existing real-time heuristic search algorithms learn better informed heuristic values and in some cases eventually converge to a shortest path by repeatedly executing the action leading to a successor state with a minimum cost-to-goal estimate. In contrast, we claim that real-time heuristic search converges faster to a shortest path when it always selects an action leading to a state with a minimum f-value, where the f-value of a state is an estimate of the cost of a shortest path from start to goal via the state, just like in the offline A* search algorithm. We support this claim by implementing this new non-trivial action-selection rule in FALCONS and by showing empirically that FALCONS significantly reduces the number of actions to convergence of a state-of-the-art real-time search algorithm. For offline tasks, we improve on two existing ways of scaling up best-first search to larger problems. First, it is known that the WA* algorithm (a greedy variant of A*) solves larger problems when it is either diversified (i.e., when it performs expansions in parallel) or committed (i.e., when it chooses the state to expand next among a fixed-size subset of the set of generated but unexpanded states). We claim that WA* solves even larger problems when it is enhanced with both diversity and commitment. We support this claim with our MSC-KWA* algorithm. Second, it is known that breadth-first search solves larger problems when it prunes unpromising states, resulting in the beam search algorithm. 
We claim that beam search quickly solves even larger problems when it is enhanced with backtracking based on limited discrepancy search. We support this claim with our BULB algorithm. We show that both MSC-KWA* and BULB scale up to larger problems than several state-of-the-art offline search algorithms in three standard benchmark domains. Finally, we present an anytime variant of BULB and apply it to the multiple sequence alignment problem in biology.
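The beam-search idea named in the abstract (breadth-first search that prunes unpromising states) is easy to sketch. This is the plain algorithm only, without the discrepancy-based backtracking that BULB layers on top, and the grid heuristic in the usage below is purely illustrative:

```python
def beam_search(start, goal, successors, h, width):
    """Breadth-first search that, at each depth, keeps only the `width`
    most promising states according to heuristic h.  Pruning bounds
    memory, at the cost of completeness: a too-narrow beam can discard
    every path to the goal (the failure mode BULB's backtracking fixes)."""
    layer = [start]
    parent = {start: None}
    while layer:
        for s in layer:
            if s == goal:                     # reconstruct path via parents
                path = []
                while s is not None:
                    path.append(s)
                    s = parent[s]
                return path[::-1]
        candidates = []
        for s in layer:                       # expand the whole layer
            for t in successors(s):
                if t not in parent:
                    parent[t] = s
                    candidates.append(t)
        layer = sorted(candidates, key=h)[:width]   # prune to beam width
    return None
```

On an open grid with a Manhattan-distance heuristic even a narrow beam recovers a shortest path; on harder instances the pruned states are what backtracking must revisit.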
7

Gränssnittsanalys av Video On Demand tjänster : En gränssnittsanalys av fyra stycken VOD-tjänster på fyra stycken plattformar / Interface analysis of Video On Demand services : An interface analysis of four VOD services on four platforms

Ask, Hanna January 2013 (has links)
This paper is an assignment from the Video On Demand company Headweb, who wanted an interface analysis of their own and their competitors' VOD services. The purpose is to help Headweb with their own interface by studying theirs and others' interfaces and analyzing how they function, how they look, and how good the usability of each service is, in the hope of finding something that could make Headweb's VOD service better. I have also studied how the interfaces of the VOD services look on different platforms, to see how they are designed as a whole. This has been an expert analysis in which I studied the different VOD services myself, without any external users. To find advantages and disadvantages of the services and their platforms, I have used different theories, models and design patterns to frame areas such as film search, categorization, registration and navigation. I have compared the VOD services with each other and drawn conclusions for each area to see how the interfaces differ between the services, but also between platforms. My analysis has shown that every VOD service in this project has a unique interface, even though they all have the same goal: to allow the user to control their film viewing. They use various functions and structures and their own page names and categorizations. In this project, I have also found a number of changes that I recommend Headweb make to their service to enhance its usability.
8

Issues of Real Time Information Retrieval in Large, Dynamic and Heterogeneous Search Spaces

Korah, John 10 March 2010 (has links)
The increasing size and prevalence of real-time information have become important characteristics of databases found on the internet. Because the information changes, the relevancy ranking of the search results changes as well. Current methods in information retrieval, which are based on offline indexing, are not efficient in such dynamic search spaces and cannot quickly provide the most current results. Given the explosive growth of the internet, stove-piped approaches that deal with dynamism by simply employing large computational resources are ultimately not scalable. A new processing methodology that incorporates intelligent resource allocation strategies is required, and modeling the dynamism of the search space in real time is essential for effective resource allocation. In order to support multi-grained dynamic resource allocation, we propose a partial processing approach that uses anytime algorithms to process the documents in multiple steps. At each successive step, a more accurate approximation of the final similarity values of the documents is produced. Resource allocation algorithms use these partial results to select documents for processing and to decide on the number of processing steps and the computation time allocated to each step. We validate the processing paradigm by demonstrating its viability with image documents. We design an anytime image algorithm that uses a combination of wavelet transforms and machine learning techniques to map low-level visual features to higher-level concepts. Experimental validation is done by implementing the image algorithm within an established multiagent information retrieval framework called I-FGM. We also formulate a multiagent resource allocation framework for the design and performance analysis of resource allocation with partial processing. A key aspect of the framework is modeling changes in the search space as external and internal dynamism using a grid-based search space model.
The search space model divides the documents (candidates) into groups based on their partial values and the portion processed; changes in the search space can thus be represented as flows of agents and candidates between grid cells. Using comparative experimental studies and detailed statistical analysis, we validate the search space model and demonstrate the effectiveness of the resource allocation framework. / Ph. D.
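The stepwise refinement of similarity values lends itself to a small sketch. The chunked dot product below is an illustrative stand-in for the thesis's wavelet-based image similarity; each yielded value is the kind of partial result a resource allocator could inspect before granting a document further processing steps:

```python
def partial_similarity_steps(doc_vec, query_vec, n_steps):
    """Yield successively refined approximations of a dot-product
    similarity, processing the feature vectors one chunk per step.
    Stopping after any yield leaves a usable partial similarity --
    the anytime property the partial-processing approach relies on."""
    total = 0.0
    chunk = max(1, len(doc_vec) // n_steps)
    for start in range(0, len(doc_vec), chunk):
        for a, b in zip(doc_vec[start:start + chunk],
                        query_vec[start:start + chunk]):
            total += a * b
        yield total
```

An allocator could compare these partial values across documents after each step and spend its remaining budget only on the most promising candidates.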
9

Des systèmes d'aide à la décision temps réel et distribués : modélisation par agents / Real-time, distributed decision support systems: agent-based modeling

Duvallet, Claude 05 October 2001 (has links) (PDF)
Decision support systems (DSS) must allow users (decision makers) to make the best decisions in the shortest time. In this thesis, we are interested in systems built on a multi-agent architecture. Multi-agent systems (MAS) make it possible to build computer systems that rely on the multi-criteria querying often used in DSS and, more generally, to design systems that are complex in nature. However, they do not integrate the notion of temporal constraints, which are often very strong in DSS. Moreover, in such systems, partial or incomplete results obtained on time are often preferred, being more useful for decision making than complete and precise results obtained late. For this, anytime techniques (progressive reasoning) appear to be an excellent solution. In this thesis, we present a method for designing a real-time multi-agent system based on anytime techniques. Our model also takes into account the often distributed nature of DSS.
10

Approches anytime et distribuées pour l'appariement de graphes / Anytime and distributed approaches for graph matching

Abu-Aisheh, Zeina 25 May 2016 (has links)
Due to the inherent genericity of graph-based representations, and thanks to the improvement of computing capacities, structural representations have become more and more popular in the field of Pattern Recognition (PR). In a graph-based representation, vertices and their attributes describe objects (or parts of them) while edges represent the interrelationships between the objects. Representing objects by graphs turns the problem of object comparison into graph matching (GM), where correspondences between the vertices and edges of two graphs have to be found. Over the last decade, researchers working on graph matching have paid particular attention to the graph edit distance (GED), notably for its ability to handle different types of graphs; GED has been applied to problems ranging from molecule recognition to image classification.
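To make concrete what graph edit distance measures, here is an exhaustive unit-cost GED over tiny graphs (vertex labels as dicts, undirected edges as sets of frozensets). It enumerates every vertex mapping and is exponential, so it only illustrates the objective; the anytime and distributed methods this thesis develops exist precisely to avoid such brute force:

```python
from itertools import permutations

def simple_ged(labels1, edges1, labels2, edges2):
    """Exhaustive unit-cost graph edit distance.  Each vertex of graph 1
    maps to a distinct vertex of graph 2 or to None (a deletion);
    graph-2 vertices left unmapped are insertions; an edge present in
    only one graph under the mapping costs one edit."""
    v1, v2 = list(labels1), list(labels2)
    slots = v2 + [None] * len(v1)            # None = delete this g1 vertex
    best = float('inf')
    for image in set(permutations(slots, len(v1))):
        m = dict(zip(v1, image))
        # vertex deletions and label substitutions
        cost = sum(1 for u in v1
                   if m[u] is None or labels1[u] != labels2[m[u]])
        # vertex insertions (graph-2 vertices not covered by the mapping)
        cost += sum(1 for b in v2 if b not in m.values())
        # edge deletions: graph-1 edges with no counterpart in graph 2
        for e in edges1:
            u, v = tuple(e)
            if m[u] is None or m[v] is None or frozenset((m[u], m[v])) not in edges2:
                cost += 1
        # edge insertions: graph-2 edges with no counterpart in graph 1
        inv = {b: a for a, b in m.items() if b is not None}
        for e in edges2:
            a, b = tuple(e)
            if a not in inv or b not in inv or frozenset((inv[a], inv[b])) not in edges1:
                cost += 1
        best = min(best, cost)
    return best
```

Identical graphs score 0; adding one labeled vertex and one edge to the second graph costs exactly two edits, matching the intuitive edit count.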
