About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Uma abordagem para combate da fraude de clique baseada em CAPTCHAs clicáveis

COSTA, Rodrigo Alves 04 March 2016 (has links)
In the current challenging climate of the global economy, advertisers and their advertising agencies are in continuous search for business opportunities that significantly increase the exposure of their brands and products at ever lower costs. Marketing resources have been gradually redirected to Pay Per Click (PPC) campaigns, in which payment is tied to a potentially lucrative action for the advertiser, such as a user clicking on an ad.
Within this market structure, a fraudster may appear as someone who seeks to increase the profits of an ad publisher by clicking on the ads displayed on the publisher's site with no real intention of buying the product, or as a competitor clicking on the ads of a particular organization in order to generate undue costs. This process is exponentially more costly for the victims when it is automated through scripts and bots, so combating, or even preventing, click fraud is of fundamental importance for the continuity of online advertising, as more conservative advertisers end up avoiding this business model because of their risk aversion. To approach the ideal world, in which click fraud either does not exist or has minimal impact on companies' revenues, computational techniques and security solutions that support the growth of this market must be developed. In this sense, this research aims to produce contributions and develop solutions for online ads, focusing on the PPC business. It provides academic material in Portuguese on the workings, history, and evolution of online advertising, as well as its frauds, legal cases, and forms of detection, and it develops an approach to combating click fraud that combines classic forms of detection with an innovative approach to prevention based on clickable CAPTCHAs, something found neither in the literature on past research nor in the solutions currently available in the market. The developed prototype introduces Google NoCAPTCHA reCAPTCHA as a way to prevent the automation of click fraud: on clicking an ad, the user must respond to a challenge and, once identified as human, may proceed. An authentication mechanism ensures continued use of the system without the need to respond to additional challenges, through temporary coupons represented as cookies.
This characterizes a paradigm shift in the fight against click fraud, from reactive detection to proactive prevention, since traditional approaches identified fraud through heuristics that analyzed click histories after the fraud had already occurred. Indeed, this work contributes to the acquisition of knowledge of the business through an extensive study of online ads, supporting the identification of business and system requirements as well as their current market provision. As a benefit, this study made it possible to propose a security approach for online advertising networks, inspired by legal cases and by the set of methods used in click fraud.
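The prevention flow described above — a challenge before a click is counted, then a temporary coupon so a verified human is not re-challenged — can be sketched in a few lines. This is a toy in-memory model with invented names (`ClickGate`, `solve_challenge`), not Google's reCAPTCHA API:

```python
import time
import secrets

class ClickGate:
    """Toy sketch of the clickable-CAPTCHA idea: a click is only counted
    after the user solves a challenge; a temporary coupon (the cookie in
    the prototype) then exempts further clicks for a short window."""

    def __init__(self, coupon_ttl=300):
        self.coupon_ttl = coupon_ttl
        self.coupons = {}          # coupon value -> expiry timestamp
        self.counted_clicks = 0

    def solve_challenge(self):
        # Stand-in for a successful CAPTCHA verification round-trip.
        coupon = secrets.token_hex(8)
        self.coupons[coupon] = time.time() + self.coupon_ttl
        return coupon

    def register_click(self, coupon=None):
        # Count the click only if a valid, unexpired coupon is presented.
        if coupon in self.coupons and self.coupons[coupon] > time.time():
            self.counted_clicks += 1
            return True
        return False   # missing or expired coupon: challenge required

gate = ClickGate()
assert not gate.register_click()       # automated click: rejected
c = gate.solve_challenge()             # human passes the challenge
assert gate.register_click(c)          # subsequent clicks count
```

The point of the coupon is exactly the paradigm shift the abstract names: fraudulent automated clicks are stopped before they are billed, rather than detected afterwards.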
12

A Survey of the Hadamard Conjecture

Tressler, Eric Paul 10 May 2004 (has links)
Hadamard matrices are defined, and their basic properties outlined. A survey of historical and recent literature follows, in which a number of existence theorems are examined and given context. Finally, a new result for Hadamard matrices over $\Z_2$ is presented and given a graph-theoretic interpretation. / Master of Science
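For context, Hadamard matrices are square matrices with entries ±1 whose rows are pairwise orthogonal; Sylvester's doubling construction settles existence for all orders 2^k. A minimal sketch (function names are ours, not the thesis's):

```python
def sylvester(k):
    """Sylvester's doubling construction: from H build [[H, H], [H, -H]],
    yielding a 2^k x 2^k Hadamard matrix."""
    H = [[1]]
    for _ in range(k):
        H = [row + row for row in H] + \
            [row + [-x for x in row] for row in H]
    return H

def is_hadamard(H):
    # Hadamard iff H * H^T = n * I, i.e. rows are pairwise orthogonal
    # and each row has squared norm n.
    n = len(H)
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return all(dot(H[i], H[j]) == (n if i == j else 0)
               for i in range(n) for j in range(n))

assert is_hadamard(sylvester(3))   # an 8 x 8 Hadamard matrix
```

The Hadamard conjecture the title refers to asks whether such matrices exist for every order divisible by 4, not just powers of two.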
13

Coloration d'hypergraphes et clique-coloration

Défossez, David 12 October 2006 (has links) (PDF)
This thesis addresses several hypergraph coloring problems, some of which relate to perfect graphs.
First, hypergraph coloring is treated in general, and we prove a conjecture of Sterboul (generalizing an earlier result of Fournier and Las Vergnas) stating that if a hypergraph contains no cycle of a particular odd type, then it is 2-colorable.
We then study the clique-coloring problem more closely: a maximal clique of a graph is a complete subgraph that is maximal under inclusion, and the problem is to color the vertices of the graph so that every maximal clique contains at least two vertices of distinct colors. The starting point of this thesis was the question of whether there exists a constant k such that all perfect graphs are k-clique-colorable. This question remains open, although no odd-hole-free graph is known that is not 3-clique-colorable. Such a constant does, however, exist in many special cases, some of which (such as diamond-free or bull-free graphs) are studied here.
The complexity of clique-coloring is also addressed, with the aim of determining the exact complexity class in each special case. Several results are established, notably on the difficulty of deciding whether a perfect graph is 2-clique-colorable: this problem is Sigma_2 P-complete, and it is NP-complete for K_4-free perfect graphs.
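The clique-coloring condition defined above is easy to check directly on small graphs: enumerate the maximal cliques and verify none is monochromatic. A minimal sketch using Bron-Kerbosch enumeration (an illustration of the definition, not a method from the thesis):

```python
def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of maximal cliques.
    adj maps each vertex to its set of neighbors."""
    cliques = []
    def bk(R, P, X):
        if not P and not X:
            cliques.append(R)
            return
        for v in list(P):
            bk(R | {v}, P & adj[v], X & adj[v])
            P = P - {v}
            X = X | {v}
    bk(set(), set(adj), set())
    return cliques

def is_clique_coloring(adj, color):
    # Valid iff no maximal clique with more than one vertex is monochromatic.
    return all(len({color[v] for v in C}) > 1
               for C in maximal_cliques(adj) if len(C) > 1)

# K4 is 2-clique-colorable even though it needs 4 colors for a proper
# coloring: its only maximal clique is the whole vertex set.
k4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
assert is_clique_coloring(k4, {0: 0, 1: 0, 2: 1, 3: 1})
assert not is_clique_coloring(k4, {v: 0 for v in k4})
```

The K4 example shows why clique-coloring can need far fewer colors than proper coloring; on triangle-free graphs the two notions coincide, since every edge is then a maximal clique.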
14

Network Based Approaches for Clustering and Location Decisions

Verma, Anurag 2012 August 1900 (has links)
The objective of this dissertation is to study commonly occurring location and clustering problems on graphs. The dissertation is presented as a collection of results on topics including finding maximum cliques in large graphs, graph clustering in large-scale graphs, determining locations of facilities for pre-positioning emergency relief supplies, and selecting nodes to form a virtual backbone in a wireless sensor network. To begin with, a new clique relaxation called a k-community is defined as a connected subgraph such that the endpoints of every edge have at least k common neighbors within the subgraph. It is used to develop scale-reduction techniques to obtain the maximum clique on very large scale real-life networks. Analytically, the technique has been shown to be very effective on power-law random graphs. Experimental results on real-life graph instances (collaboration networks, P2P networks, social networks, etc.) show our procedure to be much more effective than a regular k-core peeling approach. Next, a general-purpose network clustering algorithm based on the clique relaxation concept of k-community is presented. A salient feature of this approach is that it does not use any prior information about the structure of the network. By defining a cluster as a k-community, the proposed algorithm aims to provide a clustering of a network into k-communities with varying values of k. Even though the algorithm is not designed to optimize any particular performance measure, the computational results suggest that it performs well on a number of criteria used in the literature to evaluate the quality of a clustering. The third topic deals with choosing the locations of disaster response facilities for the storage of emergency supplies, which is critical to the quality of service provided in a large-scale emergency like an earthquake.
In the existing literature, large-scale emergency facility location models have either assumed that disaster response facilities will always be functioning and available when required, or that the functioning of a facility is independent of the particular disaster scenario. New location models are presented here that explicitly take into consideration the stochastic nature of the impact a disaster can have on the disaster response facilities and on the population centers in surrounding areas. A case-study comparison of the results obtained using our models with those from models available in the literature suggests that the locations our models select significantly reduce the expected cost of transporting supplies once the damage a disaster causes to the disaster response facilities and nearby areas is taken into account. Lastly, a distributed approximation algorithm for forming the communication backbone in wireless sensor networks is presented. Some of the most popular routing protocols for wireless sensor networks require a virtual backbone for efficient communication between the sensors. Connected Dominating Sets (CDS) have been studied as a method of choosing the backbone nodes. The traditional approach is to assume that the transmission range of each node is given and then minimize the number of nodes in the CDS representing the backbone. A recently introduced alternative strategy is based on the concept of a k-bottleneck connected dominating set (k-BCDS), which, given a positive integer k, minimizes the transmission range of the nodes while ensuring that a CDS of size k exists in the network. A 6-approximate distributed algorithm for the k-BCDS problem is provided, and the results of an empirical evaluation of the proposed algorithm are also included.
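The k-community peeling idea described above can be sketched as repeated deletion of edges whose endpoints share fewer than k common neighbors in the current subgraph; what survives is the union of k-communities. A simplified version of the scale-reduction step, with our own function names:

```python
def k_community(adj, k):
    """Iteratively delete edges whose endpoints have fewer than k common
    neighbors within the surviving subgraph (the k-community clique
    relaxation). adj maps each vertex to its set of neighbors."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # defensive copy
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            for v in list(adj[u]):
                if len(adj[u] & adj[v]) < k:
                    adj[u].discard(v)
                    adj[v].discard(u)
                    changed = True
    # Keep only vertices that still have at least one incident edge.
    return {v: n for v, n in adj.items() if n}

# A 4-clique survives 2-community peeling (every edge has 2 common
# neighbors inside it); a pendant edge hanging off it does not.
g = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
assert set(k_community(g, 2)) == {0, 1, 2, 3}
```

Since every vertex of a (k+2)-clique shares k common neighbors with each neighbor, a maximum clique of size at least k+2 always survives the peeling, which is what makes this usable as a scale-reduction step before exact clique search.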
15

Soziale Arbeit mit rechten Jugendcliquen

Borrmann, Stefan January 2005 (has links)
Also published as: Berlin, Technische Universität, dissertation, 2005, under the title: Borrmann, Stefan: Wissenschaftlich begründete Leitlinien für die Praxis sozialer Arbeit
16

El problema de la degenerancia de grafos en Congested Clique

Pérez Salazar, Sebastián Walter January 2016 (has links)
Ingeniero Civil Matemático / Distributed computing, a branch of computer science, focuses on the study of distributed systems such as the Internet, social networks, and messaging protocols. Information is distributed among the entities that make up the system, and the ultimate goal is to solve some global problem; to do so, the entities communicate over the network's channels until they find a solution. This thesis begins by studying the graph degeneracy problem in the distributed computing models UCAST and BCAST, where the degeneracy of a graph G is defined as the maximum, over subgraphs F of G, of the minimum degree of F. The UCAST and BCAST models are complete networks of n nodes that communicate synchronously over the network's channels in rounds. In UCAST, each node may send a different message over each of its channels per round; in BCAST, each node sends the same message over all of its channels per round. BCAST is therefore commonly described as a restriction of UCAST. First, a randomized protocol is built in the UCAST model that computes a (1 + ε)-approximation of the degeneracy in O(log n) rounds with messages of length O(log n). In the BCAST model, computing the degeneracy is shown to be hard in one round: more specifically, every one-round randomized protocol that computes the degeneracy exactly must send a message of length Ω(n). In the same model, a two-round randomized protocol with messages of length O(log² n) is built that computes a (1 + ε)-approximation of the degeneracy for α-dense graphs. Finally, a deterministic protocol is built that computes a (2 + ε)-approximation of the degeneracy in O(log n) rounds with messages of length O(log n).
In the second part of this work, motivated by the BCAST protocol that computes a (2 + ε)-approximation of the degeneracy, the following dynamics on graphs is studied: in each iteration, delete every vertex whose degree is at most the average degree of the graph plus one. It is conjectured that for every graph G on n vertices the dynamics empties the graph in O(log n) iterations. The problem is approached by studying graph classes such as forests, planar graphs, graphs of bounded degeneracy, and disjoint unions of cliques. Finally, several problems are studied in the BCAST model. The first is computing a maximal independent set containing a fixed vertex: the problem is shown to be hard in one round, and a protocol is then built that computes the independent set in O(log n) rounds using messages of length O(log n). The problem of computing the chromatic number of a graph is also studied and shown to be hard in one round. Closing the chapter, the problems of finding dominating sets of size k and -dominating sets of size k are studied; in both cases the problems are shown to be hard in one round. / This work was partially funded by Proyecto Fondecyt 1130061
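The degeneracy definition used above (the maximum over subgraphs of the minimum degree) is computed exactly in the centralized setting by min-degree peeling: repeatedly remove a minimum-degree vertex and record the largest degree seen at removal time. A short sketch for intuition — this is not one of the distributed protocols from the thesis:

```python
def degeneracy(adj):
    """Min-degree peeling: repeatedly delete a vertex of minimum degree;
    the degeneracy is the largest degree observed at deletion time.
    adj maps each vertex to its set of neighbors."""
    adj = {v: set(n) for v, n in adj.items()}   # defensive copy
    d = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        d = max(d, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return d

# A tree has degeneracy 1; a 4-clique has degeneracy 3.
tree = {0: {1, 2}, 1: {0}, 2: {0, 3}, 3: {2}}
k4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
assert degeneracy(tree) == 1
assert degeneracy(k4) == 3
```

The distributed difficulty studied in the thesis comes precisely from the fact that this peeling order is inherently sequential, which one-round BCAST protocols cannot reproduce.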
17

Risk-averse optimization in networks

Rysz, Maciej Wladyslaw 01 May 2014 (has links)
The primary goal of this research is to model and develop efficient solution techniques for graph-theoretical problems with topologically stochastic information that manifests in various forms. We employ a stochastic programming framework based on the formalism of coherent risk measures in order to find minimum-risk graph structures with stochastic vertex weights. Namely, we propose a class of risk-averse maximum weighted subgraph problems that represents a stochastic extension of the so-called maximum weight subgraph problems considered in the graph-theoretical literature. The structural nature of these models poses a twofold challenge in developing efficient solution algorithms. First, accurate quantification of uncertainties in mathematical programming via risk measures in the form of convex constraints typically requires very large representative scenario sets, thus incurring lengthy solution times. In this regard, we first introduce efficient algorithms for solving large-scale stochastic optimization problems involving measures of risk that are based on certainty equivalents. The second major challenge relates to the fact that problems of finding a maximum subset with a defined property within a network are generally not solvable in polynomial time. Nevertheless, much emphasis has been placed on developing faster exact combinatorial solution methodologies that exploit the structural nature of the sought subgraphs. In pursuit of analogous frameworks, we propose a graph-based branch-and-bound algorithm for solving models in the risk-averse maximum weighted subgraph problem class that is generally applicable to problems where a subgraph's weight is given by a super-additive function whose evaluation requires solving an optimization problem. As an illustrative example of the developed concepts, we consider risk-averse maximum weighted clique and k-club problems.
The network studies mentioned above address problems with topologically exogenous information in the form of uncertainties induced by stochastic factors associated with vertices. This assumption relies on the premise that the network is structurally unvarying. For many applications, however, it may also be of interest to examine decision making under conditions that admit topological changes to the network. To this end, we consider a two-stage stochastic recourse maximum clique problem that seeks to maximize the expected size of a clique over the decision time horizon. Namely, a subset of vertices composing a clique is selected in the first stage, after which realizations of uncertainty in the form of edge failures and creations arise. The second-stage recourse is then taken to "repair" the subset selected in the first stage by adding or removing nodes in order to restore its completeness. Numerical experiments for the above studies, demonstrating the underlying problem properties and the improvements in computational time achieved by the developed algorithms, are conducted.
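The coherent-risk machinery mentioned above can be illustrated with CVaR (conditional value-at-risk), the canonical coherent risk measure, computed over an equiprobable scenario set. The function name and the integer-tail simplification are ours, not the dissertation's:

```python
def cvar(losses, alpha):
    """Sample CVaR_alpha over equiprobable loss scenarios: the mean of
    the worst (1 - alpha) fraction of losses. Simplifying assumption:
    (1 - alpha) * len(losses) is (close to) an integer, so the tail is
    a whole number of scenarios."""
    m = round((1 - alpha) * len(losses))
    return sum(sorted(losses)[-m:]) / m

# 10 equally likely loss scenarios; CVaR_0.8 averages the worst 2.
losses = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
assert cvar(losses, 0.8) == 8.5                           # mean of {8, 9}
assert cvar(losses, 0.8) >= sum(losses) / len(losses)     # >= expectation
```

The last assertion reflects why such measures are called risk-averse: CVaR never understates the expected loss, and in a stochastic program it enters as a convex constraint over the scenario set, which is exactly the source of the large-scale difficulty the abstract describes.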
18

Estimation of Switching Activity in Sequential Circuits using Dynamic Bayesian Networks

Lingasubramanian, Karthikeyan 02 June 2004 (has links)
This thesis presents a novel, non-simulative, probabilistic model for switching activity in sequential circuits, capturing both spatio-temporal correlations at internal nodes and higher-order temporal correlations due to feedback. Switching activity, one of the key components in dynamic power dissipation, is dependent on input streams and exhibits spatio-temporal correlation amongst the signals. Dependency modeling of switching activity in a combinational circuit can be handled by Bayesian networks [2], which encapsulate the underlying joint probability distribution function exactly. We present the underlying switching model of a sequential circuit as the time-coupled logic-induced directed acyclic graph (TC-LiDAG), which can be constructed from the logic structure, and prove it to be a dynamic Bayesian network. Dynamic Bayesian networks over n time slices are also minimal representations of the dependency model, where nodes denote the random variables and edges denote either direct dependencies between variables at one time instant or dependencies between the random variables at different time instants. Dynamic Bayesian networks are extremely powerful in modeling higher-order temporal as well as spatial correlations; they are an exact model for the underlying conditional independencies. The attractive feature of this graphical representation of the joint probability function is that it not only makes the dependency relationships amongst the nodes explicit but also serves as a computational mechanism for probabilistic inference. We use stochastic inference engines for dynamic Bayesian networks, which provide any-time estimates and scale well with respect to size. We observe that fewer than a thousand samples usually converge to the correct estimates and that three time slices are sufficient for the ISCAS benchmark circuits.
Average errors in switching probability of 0.006, with errors tightly distributed around the mean error values, are reported on ISCAS'89 benchmark circuits involving up to 10000 signals.
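For contrast with the non-simulative DBN estimates above, the switching probability at a single node can also be estimated by plain Monte Carlo simulation: apply random input vectors and count how often consecutive outputs differ. A toy sketch for one gate with independent inputs (names and parameters are ours):

```python
import random

def switching_activity(gate, n_inputs, p_one=0.5, samples=20000, seed=1):
    """Monte Carlo estimate of switching probability at a gate output:
    the fraction of consecutive random input vectors whose outputs differ.
    (The thesis computes such probabilities non-simulatively via a DBN.)"""
    rng = random.Random(seed)
    vec = lambda: tuple(rng.random() < p_one for _ in range(n_inputs))
    prev = gate(*vec())
    flips = 0
    for _ in range(samples):
        out = gate(*vec())
        flips += (out != prev)
        prev = out
    return flips / samples

# AND gate with independent p=0.5 inputs: P(out=1) = 1/4, so the exact
# switching probability is 2 * (1/4) * (3/4) = 0.375.
est = switching_activity(lambda a, b: a and b, 2)
assert abs(est - 0.375) < 0.02
```

The weakness of this approach, which motivates the probabilistic model in the thesis, is that simulation accuracy depends on the input stream and sample count, whereas the DBN encodes the joint distribution exactly and yields any-time estimates.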
19

Graph Decomposition Using Node Labels

Johansson, Öjvind January 2001 (has links)
No description available.
20

Target oriented branch & bound method for global optimization

Stix, Volker January 2002 (has links) (PDF)
We introduce a very simple but efficient idea for branch & bound (B&B) algorithms in global optimization (GO). As input for our generic algorithm, we need an upper-bound algorithm for the GO maximization problem and a branching rule. The latter reduces the problem into several smaller subproblems of the same type. The new B&B approach delivers one global optimizer or, if stopped before finishing, improved upper and lower bounds for the problem. Its main difference from commonly used B&B techniques is its ability to approximate the problem from above and from below while traversing the problem tree. It needs no supplementary information about the system being optimized and does not consume more time than classical B&B techniques. Experimental results with the maximum clique problem illustrate the benefit of this new method. (author's abstract) / Series: Working Papers on Information Systems, Information Business and Operations
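The generic scheme described above — the caller supplies an upper-bound algorithm and a branching rule — can be sketched as a best-first loop that maintains an incumbent (lower bound) and prunes subproblems whose upper bound cannot beat it. A simplified maximization skeleton with our own names, not the paper's algorithm, followed by a tiny capacity-constrained selection example:

```python
import heapq

def branch_and_bound(root, upper_bound, branch, evaluate):
    """Generic best-first B&B for maximization. Caller supplies:
    upper_bound(node) -> bound, branch(node) -> child subproblems,
    evaluate(node) -> (feasible value, solution) or (None, None)."""
    best_val, best_sol = float("-inf"), None
    heap = [(-upper_bound(root), 0, root)]   # max-heap via negation
    tie = 1                                  # tiebreaker for heapq
    while heap:
        neg_ub, _, node = heapq.heappop(heap)
        if -neg_ub <= best_val:
            continue                         # cannot beat incumbent: prune
        val, sol = evaluate(node)            # feasible value at this node
        if val is not None and val > best_val:
            best_val, best_sol = val, sol
        for child in branch(node):
            ub = upper_bound(child)
            if ub > best_val:
                heapq.heappush(heap, (-ub, tie, child))
                tie += 1
    return best_val, best_sol

# Toy instance: pick items summing to as much as possible within capacity.
# A node is (next item index, current sum); the bound adds all remaining items.
items, cap = [4, 3, 2], 6
ub = lambda node: node[1] + sum(items[node[0]:])
def branch(node):
    i, s = node
    if i == len(items):
        return []
    kids = [(i + 1, s)]                      # skip item i
    if s + items[i] <= cap:
        kids.append((i + 1, s + items[i]))   # take item i
    return kids
evaluate = lambda node: (node[1], node[1])
assert branch_and_bound((0, 0), ub, branch, evaluate)[0] == 6   # 4 + 2
```

Because the heap is ordered by upper bound, the top of the heap together with the incumbent gives the improving upper and lower bounds the abstract mentions whenever the search is stopped early.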
