11

Computation of reputation in social networks from the collaboration between participants

Sonco Mamani, Edith Zaida, 03 December 2012
Web 2.0 hosts systems with a high volume of social interaction. Some of these systems compute reputation or otherwise rank users or shared content. In many cases, however, the resulting reputation value is derived solely from quantitative or qualitative data. The objective of this work is to develop a model for computing reputation in online communities based on both qualitative and quantitative data drawn from the interactions of the network's own participants, in order to foster collaboration among members and to provide a computation that resists some of the vulnerabilities common in reputation systems, such as noise and Sybil attacks. To achieve this goal, we define an adaptation of the PageRank algorithm, called CR (Collaborative Reputation), that ranks users based on their interactions in the network. For evaluation, we used a dataset from the site Epinions.com and performed a comparative analysis of the proposed model against three related algorithms. The analysis covered diversity of values, ranking comparison, a comparative study of scenarios, noise tolerance, and robustness against Sybil attacks. The algorithms used in the comparison were the original PageRank and the ReCop algorithm, used to identify relevant users, and the LeaderRank algorithm, used to identify the most prestigious users in the network. The results indicate that the proposed model is more sensitive to user interactions than the other models evaluated, and is more resistant to Sybil attacks.
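Since the abstract only names the CR adaptation, a minimal runnable sketch of the general idea may help: a PageRank-style iteration over a user-interaction graph whose edge weights stand in for qualitative feedback. The function name, the weighting scheme, and the parameter values below are illustrative assumptions, not the CR algorithm as defined in the thesis.

```python
# Hypothetical sketch of a PageRank-style reputation score over a
# user-interaction graph. Edge weights stand in for qualitative feedback
# (e.g. ratings); the damping factor and normalization follow the classic
# PageRank formulation. This is NOT the thesis's CR algorithm, only an
# illustration of the general approach it adapts.
from collections import defaultdict

def collaborative_reputation(edges, damping=0.85, iters=50):
    """edges: list of (rater, ratee, weight) interaction triples."""
    users = {u for e in edges for u in e[:2]}
    out_weight = defaultdict(float)
    for rater, _, w in edges:
        out_weight[rater] += w
    rep = {u: 1.0 / len(users) for u in users}
    for _ in range(iters):
        nxt = {u: (1.0 - damping) / len(users) for u in users}
        for rater, ratee, w in edges:
            if out_weight[rater] > 0:
                # each rater splits its reputation among those it rated,
                # proportionally to the weight it gave them
                nxt[ratee] += damping * rep[rater] * w / out_weight[rater]
        rep = nxt
    return rep

ratings = [("alice", "bob", 5.0), ("bob", "carol", 3.0), ("carol", "bob", 4.0)]
print(sorted(collaborative_reputation(ratings).items(), key=lambda x: -x[1]))
```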
12

Efficient Node Proximity and Node Significance Computations in Graphs

January 2017
abstract: Node proximity measures quantify how close, or otherwise related, two or more nodes in a graph are. Node significance measures quantify how important a node is within a graph. Both kinds of measures have proven highly effective in many prediction tasks and applications. Despite their effectiveness, they have several shortcomings: they scale poorly because of their high computation cost on large graphs; their accuracy drops when a node's significance is not reflected by its degree; and they are less effective when the graph itself is uncertain, since ranking over an uncertain graph requires exponential cost to consider all possible worlds. In this thesis, I first introduce Locality-sensitive, Re-use promoting, approximate Personalized PageRank (LR-PPR), which computes personalized PageRank rankings from the locality information of the seed nodes, without processing the entire graph, and reuses precomputed locality information across different locality combinations. To identify that locality information, I present Impact Neighborhood Indexing (INI), which finds impact neighborhoods by propagating node fingerprints through the network. To address the accuracy challenge, I introduce the Degree Decoupled PageRank (D2PR) technique, which improves the effectiveness of PageRank-based knowledge discovery by accounting separately for the significance of a node's neighbors and for its degree. To tackle uncertainty, I introduce Uncertain Personalized PageRank (UPPR), which approximates personalized PageRank values under uncertain edge existence, and Interval Personalized PageRank with Integration (IPPR-I) and Interval Personalized PageRank with Mean (IPPR-M), which compute ranking scores when edge weights are uncertain interval values. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2017
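All of the techniques named above build on personalized PageRank. As a point of reference, here is a sketch of the exact baseline they approximate: power iteration with restarts to a seed set. The variable names and the small example graph are assumptions for illustration only.

```python
# Baseline personalized PageRank by power iteration, restarting to a seed
# set with probability alpha. The approximation schemes in the abstract
# (LR-PPR, D2PR, UPPR, IPPR-*) refine or generalize this primitive; this
# sketch shows only the exact baseline.
def personalized_pagerank(adj, seeds, alpha=0.15, iters=100):
    """adj: {node: [out-neighbors]}; seeds: set of restart nodes."""
    nodes = set(adj) | {v for outs in adj.values() for v in outs}
    restart = {u: (1.0 / len(seeds) if u in seeds else 0.0) for u in nodes}
    ppr = dict(restart)
    for _ in range(iters):
        nxt = {u: alpha * restart[u] for u in nodes}
        for u, outs in adj.items():
            if outs:
                # remaining probability mass follows out-links uniformly
                share = (1.0 - alpha) * ppr[u] / len(outs)
                for v in outs:
                    nxt[v] += share
        ppr = nxt
    return ppr

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(personalized_pagerank(graph, seeds={"a"}))
```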
14

VersionsRank: web page reputation scores based on version detection

Silva, Glauber Rodrigues da, January 2009
Search engines use the WebGraph formed by pages and their links to assign reputation to Web pages, and this reputation is used to build the ranking of results returned to the user. However, new versions of a page with a good reputation end up splitting the reputation votes among all versions, harming both the original page and its versions. The objective of this work is to specify new scores that consider all versions of a Web page when assigning reputation. To this end, four scores were proposed that use version detection to assign a more homogeneous reputation to pages that are versions of the same document. The four scores fall into two categories: those that make structural changes to the WebGraph (VersionRank and VersionPageRank) and those that perform arithmetic operations on the scores produced by the PageRank algorithm (VersionSumRank and VersionAverageRank). The experiments show that VersionRank outperforms PageRank by 26.55% in terms of MRR for navigational queries on WBR03, and by 9.84% in terms of P@10 for informational queries on WBR99. The VersionAverageRank score showed the best results on the P@10 metric for informational queries on WBR99 and WBR03: on WBR99 it gained 6.74% over PageRank, and on WBR03, for random informational queries, it gained 35.29% over PageRank.
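To make the "arithmetic operations" family concrete, the sketch below averages precomputed PageRank scores across groups of pages detected as versions of the same document — the spirit of VersionAverageRank, though the exact definition and the grouping input format are assumptions here, not the thesis's specification.

```python
# Illustrative sketch of the arithmetic family of scores: given per-page
# PageRank values and groups of pages detected as versions of the same
# document, assign each page the mean score of its group, so reputation
# is no longer diluted across versions. The actual VersionAverageRank
# definition is in the thesis; the grouping format is an assumption.
def version_average_rank(pagerank, version_groups):
    """pagerank: {url: score}; version_groups: list of lists of urls."""
    score = dict(pagerank)  # pages with no detected versions keep their score
    for group in version_groups:
        mean = sum(pagerank[u] for u in group) / len(group)
        for u in group:
            score[u] = mean
    return score

pr = {"a.html": 0.50, "a_v2.html": 0.10, "b.html": 0.40}
print(version_average_rank(pr, [["a.html", "a_v2.html"]]))
```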
15

Evaluation of Random Indexing and PageRank as tools for automatic text summarization

Gustavsson, Pär, January 2009
The amount of information on the internet is enormous and keeps growing, for better and for worse. It can be especially hard for groups such as the visually impaired and people with language difficulties to navigate through and make use of all this information. There is therefore a need for well-functioning summarization tools for these groups, but also for anyone who quickly needs to be shown the most important points of a set of texts. This study examines how well the summarization system CogSum, which is based on Random Indexing, performs with and without the ranking algorithm PageRank enabled, on news texts and on texts from Försäkringskassan (the Swedish Social Insurance Agency). In addition, the summarization system SweSum is used as a baseline. The report includes a theoretical background covering automatic text summarization at large, including different evaluation methods, techniques, and summarization systems. The evaluation was carried out with the automatic evaluation tool KTHxc on the news texts and with another such tool, AutoSummENG, on the texts from Försäkringskassan. The results show that CogSum without PageRank performs better than CogSum with PageRank on 10 news texts, while the reverse holds for 5 texts from Försäkringskassan. SweSum, in turn, obtained the best result on the news texts and the worst on the texts from Försäkringskassan.
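As context for how PageRank can drive extractive summarization, the following TextRank-style sketch ranks sentences over a word-overlap similarity graph and keeps the top-ranked ones. CogSum pairs PageRank with Random Indexing vectors; the crude overlap measure below is a simplifying substitution to keep the example self-contained, not CogSum's method.

```python
# TextRank-style sketch: sentences become graph nodes, word overlap
# becomes edge weight, a PageRank iteration scores the nodes, and the
# top-ranked sentences (in original order) form the extractive summary.
def summarize(sentences, top_n=2, damping=0.85, iters=50):
    words = [set(s.lower().split()) for s in sentences]
    n = len(sentences)
    sim = [[len(words[i] & words[j]) if i != j else 0 for j in range(n)]
           for i in range(n)]
    rank = [1.0 / n] * n
    for _ in range(iters):
        nxt = []
        for i in range(n):
            score = (1.0 - damping) / n
            for j in range(n):
                total = sum(sim[j])  # total outgoing similarity of node j
                if sim[j][i] and total:
                    score += damping * rank[j] * sim[j][i] / total
            nxt.append(score)
        rank = nxt
    best = sorted(range(n), key=lambda i: -rank[i])[:top_n]
    return [sentences[i] for i in sorted(best)]

docs = ["The cat sat on the mat.", "A dog chased the cat.",
        "The mat was red.", "Dogs and cats can be friends."]
print(summarize(docs))
```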
16

Decentralized Web Search

Haque, Md Rakibul, 08 June 2012
Centrally controlled search engines will not remain sufficient and reliable for indexing and searching the rapidly growing World Wide Web in the near future. A better solution is to enable the Web to index itself in a decentralized manner. Existing distributed approaches to ranking search results do not provide flexible searching, complete results, or highly accurate ranking. This thesis presents a decentralized Web search mechanism, named DEWS, which enables existing webservers to collaborate with each other to form a distributed index of the Web. DEWS can rank search results by query keyword relevance and the relative importance of websites in a distributed manner, preserving a hyperlink overlay on top of a structured P2P overlay. It also supports approximate matching of query keywords using phonetic codes and n-grams, along with list decoding of a linear covering code. DEWS supports incremental retrieval of search results in a decentralized manner, which reduces the network bandwidth required for query resolution. It uses an efficient routing mechanism that extends the Plexus routing protocol with a message aggregation technique. DEWS maintains replicas of its indexes, which reduces routing hops and makes it robust to webserver failures. The standard LETOR 3.0 dataset was used to validate the DEWS protocol. Simulation results show that the ranking accuracy of DEWS is close to the centralized case, while the network overhead for collaborative search and indexing is logarithmic in the network size. The results also show that DEWS is resilient to changes in the available pool of indexing webservers and works efficiently even under heavy query load.
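The approximate keyword matching mentioned in the abstract combines phonetic codes, n-grams, and list decoding; the sketch below illustrates only the n-gram half, using character trigrams with Jaccard similarity. The padding scheme and the threshold are illustrative choices, not DEWS parameters.

```python
# Approximate keyword matching via character trigrams: two words match
# if the Jaccard similarity of their trigram sets clears a threshold.
# DEWS combines this idea with phonetic codes and list decoding of a
# linear covering code; this sketch shows only the n-gram component.
def ngrams(word, n=3):
    padded = f"  {word.lower()} "  # pad so short words still yield grams
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def approx_match(query, keyword, threshold=0.4):
    q, k = ngrams(query), ngrams(keyword)
    return len(q & k) / len(q | k) >= threshold

print(approx_match("decentralised", "decentralized"))  # True: near spelling
print(approx_match("search", "ranking"))               # False: unrelated
```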
17

Tamper-Resilient Methods for Web-Based Open Systems

Caverlee, James, 05 July 2007
The Web and Web-based open systems are characterized by their massive amount of data and by the services for leveraging this data. These systems are noted for their open and unregulated nature, self-supervision, and high degree of dynamism, which are key features in supporting a rich set of opportunities for information sharing, discovery, and commerce. But these open and self-managing features also carry risks and raise growing concerns over the security and privacy of these systems, including issues like spam, denial-of-service, and impersonated digital identities. Our focus in this thesis is on the design, implementation, and analysis of large-scale Web-based open systems, with an eye toward enabling new avenues of information discovery and ensuring robustness in the presence of malicious participants. We identify three classes of vulnerabilities that threaten these systems: vulnerabilities in link-based search services, vulnerabilities in reputation-based trust services over online communities, and vulnerabilities in Web categorization and integration services. This thesis introduces a suite of methods for increasing the tamper-resilience of Web-based open systems in the face of a large and growing number of threats. We make three unique contributions. First, we present a source-centric architecture and a set of techniques for providing tamper-resilient link analysis of the World Wide Web. We propose the concept of link credibility, present a credibility-based link analysis model, and show that these approaches significantly reduce the impact of malicious spammers on Web rankings. Second, we develop a social network trust aggregation framework for supporting tamper-resilient trust establishment in online social networks, which are already extremely important and growing rapidly. We show that our trust framework supports high-quality information discovery and is robust to the presence of malicious participants in the social network. Finally, we introduce a set of techniques for reducing the opportunities of attackers to corrupt Web-based categorization and integration services, which are especially important for organizing and making accessible the large body of Web-enabled databases on the Deep Web that are beyond the reach of traditional Web search engines. We show that these techniques reduce the impact of poor-quality or intentionally misleading resources and support personalized Web resource discovery.
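To illustrate the flavor of credibility-based link analysis, the sketch below scales each page's outgoing rank votes by an externally supplied credibility score, so spam-like sources pass on less influence. The thesis defines its own credibility model and source-centric architecture; the uniform per-source scaling here is an assumption made purely for illustration.

```python
# Generic illustration of credibility-weighted link analysis: a PageRank
# iteration in which each page's votes are scaled by a credibility score
# in [0, 1], so low-credibility (spam-like) sources contribute less rank.
# This is not the thesis's model, only the general idea.
def credibility_rank(links, credibility, damping=0.85, iters=50):
    """links: {page: [outlinks]}; credibility: {page: score in [0, 1]}."""
    pages = set(links) | {v for outs in links.values() for v in outs}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        nxt = {p: (1.0 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            if outs:
                vote = damping * credibility.get(p, 1.0) * rank[p] / len(outs)
                for v in outs:
                    nxt[v] += vote
        rank = nxt
    return rank

web = {"spam": ["target"], "honest": ["target", "other"]}
cred = {"spam": 0.1, "honest": 0.9}
print(credibility_rank(web, cred))
```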
18

Research on the optimization of websites for search engines

Smirnov, Aleksandr, 15 June 2005
The World Wide Web reaches into ever more areas of business as it becomes a larger part of our lives. The term "site" has become well known and important to employers, and Internet advertising has grown more popular, since the visitors of commercial sites are potential clients of enterprises and institutions. Yet Internet projects often fail: a site may be attractive, functional, and full of useful information, and still produce no result, or one that does not match its purpose. The reason is simple: nobody knows about the site. Among the thousands of sites a search engine can return, the user reviews and evaluates only a few of the results. Sooner or later site owners discover this fact and ask themselves: "Why isn't my site on the first page of search engine results?" — even though the site's theme matches the query and the quality of its content is as good as or better than that of its competitors. The answer is that the site is not optimized for search engines. The main purpose of this paper is to analyze, select, and apply in practice effective search engine optimization strategies and methods. Its main contributions are a detailed description of information dynamics and structure, a study of the working principles of Google, the world's most popular search engine, along with the functional peculiarities of other search engines and directories, and an analysis of page ranking algorithms. The... [to full text]
