1

Critical Cliques and Their Application to Influence Maximization in Online Social Networks

Pandey, Nikhil (May 2012)
Graph decompositions have useful applications in optimization problems that are categorized as NP-hard. Modular decomposition of a graph is a technique for decomposing the graph into non-overlapping modules. A module M of an undirected graph G = (V, E) is commonly defined as a set of vertices such that any vertex outside of M is either adjacent to all vertices in M or to none of them. By the theory of modular decomposition, modules can be categorized as parallel, series, or prime modules. Series modules that are maximal and are also cliques are termed simple series modules, or critical cliques. There are modular decomposition algorithms that can be used to decompose the graph into modules and obtain critical cliques. In this research, we present a new algorithm to decompose the graph into critical cliques without applying the process of modular decomposition. Given a simple, undirected graph G = (V, E), the runtime complexity of our proposed algorithm is O(|V| + |E|) under certain input constraints. Thus, one of our main contributions is a novel algorithm for decomposing a simple, undirected graph directly into critical cliques. We apply the idea of critical cliques to propose a new way of solving the influence maximization problem in online social networks. Influence maximization in online social networks is the problem of identifying a small initial set of influential individuals who can influence the maximum number of individuals in the network. In this research, we propose a new model of online social networks based on the notion of critical cliques. We utilize the properties of critical cliques to assign parameters for our proposed model and to select an initial set of activation nodes. We then simulate the influence propagation process in the online social network using our proposed model and experimentally compare our approach to the greedy algorithm proposed by Kempe, Kleinberg, and Tardos. Our main contribution to influence maximization research is a new model of online social networks that takes into account the structural properties of the social network graph, together with a new, faster algorithm for determining the initial set of influential individuals in the online social network.
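To make the definition above concrete, critical cliques can be characterized as maximal groups of vertices that share the same closed neighborhood; grouping vertices by closed neighborhood therefore yields exactly the critical cliques, and with hashing this runs in roughly O(|V| + |E|) time. The sketch below illustrates only this standard characterization, not the algorithm proposed in the thesis; the adjacency-dict representation and function name are assumptions.

```python
from collections import defaultdict

def critical_cliques(adj):
    """adj: dict mapping each vertex to the set of its neighbours in a
    simple, undirected graph. Returns the critical cliques, i.e. the
    maximal groups of vertices sharing the same closed neighbourhood
    N[v] = N(v) ∪ {v}; every such group is automatically a clique."""
    groups = defaultdict(list)
    for v, nbrs in adj.items():
        groups[frozenset(nbrs) | {v}].append(v)   # key: closed neighbourhood
    return [set(g) for g in groups.values()]

# Vertices 1 and 2 are adjacent and see exactly the same vertices,
# so they form the critical clique {1, 2}.
graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(critical_cliques(graph))   # [{1, 2}, {3}, {4}]
```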
2

Engineering scalable influence maximization

Khot, Akshay (18 December 2017)
In recent years, social networks have become an important part of our daily lives. Billions of people use Facebook and other prominent social networks every day, which makes them an effective medium for advertising and marketing. Finding the most influential users in a social network is an interesting problem in this domain, as promoters can reach large audiences by targeting these few influential users. This is the influence maximization problem, where we want to maximize the influence spread using as few seed users as possible. Because these social networks are huge, the scalability and runtime of the algorithm that finds the most influential users are of high importance. We propose innovative improvements in the implementation of the state-of-the-art sketching algorithm for influence analysis on social networks. The primary goal of this thesis is to make the algorithm fast, efficient, and scalable. We devise new data structures to improve the speed of the sketching algorithm, and we introduce a compressed version of the algorithm that reduces the space taken in memory by the data structures without compromising the runtime. Through extensive experiments on real-world graphs, we show that our algorithms are able to compute the most influential users within a reasonable amount of time and space on a consumer-grade machine. These modifications can be further enhanced to reflect constantly updating social media graphs and provide accurate estimates in real time.
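For context on the quantity that sketching algorithms are designed to estimate quickly, the expected influence spread of a seed set under the independent cascade model can be approximated by naive Monte-Carlo simulation, as in the sketch below. This is only an illustrative baseline, not the thesis's sketch-based method; the function name and the uniform edge probability p are assumptions.

```python
import random

def estimate_spread(adj, seeds, p=0.1, trials=1000):
    """Naive Monte-Carlo estimate of the expected influence spread of
    `seeds` under the independent cascade model: every newly activated
    node gets a single chance to activate each neighbour, independently,
    with probability p."""
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            newly_active = []
            for u in frontier:
                for v in adj.get(u, ()):
                    if v not in active and random.random() < p:
                        active.add(v)
                        newly_active.append(v)
            frontier = newly_active
        total += len(active)
    return total / trials
```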
3

IDENTIFYING MAVENS IN SOCIAL NETWORKS

Albinali, Hussah (14 December 2016)
This thesis studies social influence from the perspective of users' characteristics. The importance of users' characteristics in word-of-mouth applications has been emphasized in the economics and marketing fields. We model a category of users called mavens, whose unique characteristics make them preferable seeds in viral marketing applications. In addition, we develop methods to learn their characteristics from a real dataset, and we illustrate ways to maximize information flow through mavens in social networks. Our experiments show that our model can successfully detect mavens, who play a significant role in maximizing information flow in a social network and considerably outperform general influential users for influence maximization. The results verify the compatibility of our model with real marketing applications.
4

Choice Overload and Maximization: Implications for Disordered Gambling

Whiting, Seth William (1 August 2014)
As legalized gambling venues continue to emerge throughout the United States, the already present problem of pathological gambling is likely to evolve into a greater issue of social concern. The vast body of literature on the effects of choice and of choice overload, the experience of negative side effects due to large choice arrays, may further contribute to an understanding of gambling behavior and treatment. The current set of experiments sought to extend the previous literature on choice to a gambling context and thereby expand the behavioral model of gambling. The purpose of Experiment I was to determine whether maximizers, those who tend to carefully examine options, and satisficers, those who choose with little deliberation, differ in how frequently they switch slot machines, a possible behavioral marker of maximization. The results demonstrated that maximizers switched among the available slot machines significantly more frequently than satisficers. Experiment II investigated further links between gambling behavior and maximization: a significant correlation between maximization and outcomes on the Problem Gambling Severity Index was observed, suggesting that these phenomena are related. Experiment III tested the effects of an intervention requiring participants to make repeated choices, intended as an abolishing operation, on subsequent gambling behavior. Participants who repeatedly made choices gambled on significantly fewer slot-machine trials when allowed to play freely than those who simply watched a gambling video. Overall, the literature on choice and the phenomena of maximization and choice overload add to the behavioral model of gambling by suggesting new variables relevant to the determination of gambling behavior.
5

Expectation-Maximization and Successive Interference Cancellation Algorithms For Separable Signals

Iltis, Ronald A.; Kim, Sunwoo (October 2001)
International Telemetering Conference Proceedings, October 22-25, 2001, Riviera Hotel and Convention Center, Las Vegas, Nevada. The expectation-maximization (EM) algorithm is well established as a computationally efficient method for separable signal parameter estimation. Here, a new geometric derivation and interpretation of the EM algorithm is given that facilitates the understanding of its convergence properties. Geometric considerations then lead to an alternative separable signal parameter estimator based on successive cancellation. The new Generalized Successive Interference Cancellation (GSIC) algorithm may offer better performance than EM in the presence of large signal power disparities. Finally, application of the GSIC algorithm to CDMA-based radiolocation is discussed, and simulation results are presented.
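For reference, the generic EM iteration for estimating a parameter vector θ from observed data y with complete (latent) data x is the textbook form below; the paper's contribution is a geometric derivation and interpretation of this iteration, which is not reproduced here.

```latex
\text{E-step:}\quad Q\bigl(\theta \mid \theta^{(k)}\bigr)
   = \mathbb{E}\bigl[\,\log p(x \mid \theta) \,\bigm|\, y,\ \theta^{(k)}\bigr],
\qquad
\text{M-step:}\quad \theta^{(k+1)} = \arg\max_{\theta}\, Q\bigl(\theta \mid \theta^{(k)}\bigr).
```

Each iteration is guaranteed not to decrease the observed-data likelihood, which is the convergence property that the geometric view helps explain.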
6

Fitting factor models for ranking data using efficient EM-type algorithms

Lee, Chun-fan 李俊帆 (January 2002)
Master of Philosophy thesis, Statistics and Actuarial Science.
7

Welfare Losses from First-Come-First-Serve Course Enrollment: Outcome Estimation and Non-Market Maximization

Fontenot, Rory (1 January 2019)
College course enrollment operates as a market under a supply cap. Because of the limited number of seats available for any given course, some students who have higher demand for a course are unable to enroll. The current registration system at the Claremont Colleges functions as a random draw with added time costs, and the lack of price signalling in these markets leads to a loss in the overall welfare of the student body. By running data through simulated demand curves, I am able to determine, on average, how much welfare is lost under a random draw system. The percentage of the maximum possible welfare that is actually achieved ranges from forty-nine to eighty percent, and depends largely on the proportion of enrolled students to the sum of enrolled students and enrollment requests, as well as on the type of demand function. With price signalling, the student body would be able to reach the maximum achievable welfare.
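As a toy illustration of the comparison being made (a sketch under an assumed valuation list, not the thesis's simulated demand curves), the welfare achieved by a random draw for a capped number of seats can be compared with the welfare-maximizing allocation that gives the seats to the highest-valuation students:

```python
import random

def welfare_random_vs_optimal(valuations, seats, trials=10_000):
    """valuations: each student's willingness to pay for one seat in the
    course. Returns the average total welfare of a uniform random draw
    for the capped seats and the maximum achievable welfare (seats go
    to the highest-valuation students)."""
    optimal = sum(sorted(valuations, reverse=True)[:seats])
    random_total = sum(sum(random.sample(valuations, seats))
                       for _ in range(trials))
    return random_total / trials, optimal

avg_random, best = welfare_random_vs_optimal([10, 8, 6, 4, 2, 1], seats=3)
print(avg_random / best)   # fraction of maximum welfare a random draw achieves
```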
8

Image Thresholding Technique Based On Fuzzy Partition And Entropy Maximization

Zhao, Mansuo (January 2005)
Thresholding is a commonly used technique in image segmentation because it is fast and easy to apply. For this reason, threshold selection is an important issue. There are two general approaches to threshold selection: one is based on the histogram of the image, while the other is based on the grayscale information in small local areas. The histogram of an image contains statistical information about its grayscale or color components. In this thesis, an adaptive logical thresholding method is first proposed for the binarization of blueprint images. The new method exploits the geometric features of blueprint images. It is implemented with a robust windows operation, which is based on the assumption that objects have a "C" shape within a small area. We make use of multiple window sizes in the windows operation; this not only reduces computation time but also effectively separates thin lines from wide lines. Our method can automatically determine the threshold of an image, and experiments show that it is effective for blueprint images and achieves good results over a wide range of images. Second, fuzzy set theory, along with probability partition and maximum entropy theory, is explored to compute the threshold based on the histogram of the image. Fuzzy set theory has been widely used in many fields where ambiguous phenomena exist since it was proposed by Zadeh in 1965, and many thresholding methods have been developed using this theory. The concept used here is called fuzzy partition: because our method is histogram-based, a fuzzy partition means that the histogram is divided into several groups by fuzzy sets that represent the fuzzy membership of each group. Probability partition is associated with fuzzy partition, and the probability distribution of each group is derived from the fuzzy partition. Entropy, which originates in thermodynamics, was introduced into communication theory as a commonly used criterion for measuring the information transmitted through a channel, and it has been adopted in image processing as a measure of the information contained in processed images. It is therefore applied in our method as a criterion for selecting the optimal fuzzy sets that partition the histogram. To find the threshold, the histogram of the image is partitioned by fuzzy sets that satisfy a certain entropy restriction, and the search for the best possible fuzzy sets becomes an important issue. There has been no efficient method for this search procedure; consequently, extending fuzzy-partition thresholding to multiple levels becomes extremely time-consuming or even impossible. In this thesis, the relationship between a probability partition (PP) and a fuzzy C-partition (FP) is studied. This relationship and the entropy approach are used to derive a thresholding technique that selects the optimal fuzzy C-partition; the measure of selection quality is the entropy function defined by the PP and FP. A necessary condition for the entropy function to reach a maximum is derived, and based on this condition, an efficient search procedure for two-level thresholding is obtained, which makes the search so efficient that extension to multilevel thresholding becomes possible. A novel fuzzy membership function is proposed for three-level thresholding, which produces a better result because a new relationship among the fuzzy membership functions is introduced.
This new relationship gives more flexibility in the search for the optimal fuzzy sets, although it also complicates the search in multilevel thresholding. This complication is resolved by a new method called the "Onion-Peeling" method. Because the relationship between the fuzzy membership functions is so complicated, it is impossible to obtain all the membership functions at once. The search procedure is therefore decomposed into several layers of three-level partitions, except for the last layer, which may be a two-level one. The larger problem is thus reduced to a sequence of three-level partitions, so that the two outermost membership functions can be obtained without worrying about the complicated intersections among the remaining membership functions. The method is further revised for images with a dominant background area or object that affects the appearance of the histogram. The histogram is the basis of our method, as of many others, and a "bad" histogram shape will result in a poorly thresholded image. A quadtree scheme is therefore adopted to decompose the image into homogeneous and heterogeneous areas, and a multi-resolution thresholding method based on the quadtree and fuzzy partition is devised to deal with these images. The extension of fuzzy partition methods to color images is also examined: an adaptive thresholding method for color images based on fuzzy partition is proposed that can determine the number of thresholding levels automatically. This thesis concludes that the "C"-shape assumption and the varying window sizes used in the windows operation contribute to better segmentation of blueprint images. The efficient search procedure for the optimal fuzzy sets in the fuzzy 2-partition of the image histogram accelerates the process enough to enable its extension to multilevel thresholding. In the three-level fuzzy partition, the new relationship among the three fuzzy membership functions makes more sense than the conventional assumption and, as a result, performs better. A novel method, the "Onion-Peeling" method, is devised to deal with the complexity of the intersections among multiple membership functions in a multilevel fuzzy partition; it decomposes the multilevel partition into fuzzy 3-partitions and fuzzy 2-partitions by transposing the partition space in the histogram, and is thus efficient for multilevel thresholding. A multi-resolution method that applies the quadtree scheme to distinguish heterogeneous areas from homogeneous areas is designed for images with large homogeneous areas, which usually distort the histogram; the new histogram, based only on the heterogeneous areas, is adopted for partitioning and outperforms the old one, while validity checks filter out fragmented points, which make up only a small portion of the whole image. The method thus gives good thresholded results for human face images.
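For the two-level case, one common way to write the fuzzy-partition entropy criterion over a normalized histogram h(g) is shown below; the membership functions μ_d and μ_b ("dark" and "bright") and their parameterization are illustrative assumptions, not necessarily the exact formulation used in the thesis.

```latex
P_d = \sum_{g=0}^{L-1} h(g)\,\mu_d(g), \qquad
P_b = \sum_{g=0}^{L-1} h(g)\,\mu_b(g), \qquad
\mu_d(g) + \mu_b(g) = 1,
\qquad
H(\mu_d, \mu_b) = -\,P_d \ln P_d \;-\; P_b \ln P_b .
```

The optimal fuzzy 2-partition maximizes H over the parameters of the membership functions, and the gray level at which μ_d(g) = μ_b(g) = 0.5 then serves as the threshold.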
9

Towards a Framework For Resource Allocation in Networks

Ranasingha, Maththondage Chamara Sisirawansha (26 May 2009)
Network resources (such as bandwidth on a link) are not unlimited and must be shared by all networked applications in some manner of fairness. This calls for the development and implementation of effective strategies that enable optimal utilization of these scarce network resources among the various applications that share the network. Although several rate controllers have been proposed in the literature to address the issue of optimal rate allocation, they do not appear to capture other factors that are of critical concern. For example, consider a battlefield data fusion application where a fusion center desires to allocate more bandwidth to incoming flows that are perceived to be more accurate and important. For such applications, network users should consider the transmission rates of other users in the process of rate allocation. Hence, a rate controller should consider application-specific rate coordination directives given by the underlying application. The work reported herein addresses the issue of how a rate controller may establish and maintain the desired application-specific rate coordination directives. We identify three major challenges in meeting this objective. First, the application-specific performance measures must be formulated as rate coordination directives. Second, it is necessary to incorporate these rate coordination directives into a rate controller; of course, the resulting rate controller must co-exist with ordinary rate controllers, such as TCP Reno, in a shared network. Finally, a mechanism for identifying those flows that require the rate allocation directives must be put in place. The first challenge is addressed by means of a utility function that allows the performance of the underlying application to be maximized. The second challenge is addressed by utilizing the Network Utility Maximization (NUM) framework: the standard utility function (i.e., the utility function of the standard rate controller) is augmented by inserting the application-specific utility function as an additive term. The rate allocation problem is then formulated as a constrained optimization problem whose objective is to maximize the aggregate utility of the network. The gradient projection algorithm is used to solve the optimization problem, and the resulting solution is formulated and implemented as a window update function. To address the final challenge, we resort to a machine learning approach: we demonstrate how data features estimated using only a fraction of a flow can be used as evidential input to a series of Bayesian Networks (BNs), and we account for the uncertainty introduced by partial flow data through the Dempster-Shafer (DS) evidential reasoning framework.
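For reference, the standard NUM problem and the dual gradient-projection update that underlie window-update-style rate controllers can be sketched as follows; this is the textbook formulation (source rates x_s, link capacities c_l, link prices λ_l, step size γ), not the thesis's specific augmented utility or window update function.

```latex
\max_{x_s \ge 0}\ \sum_{s} U_s(x_s)
\quad \text{s.t.} \quad
\sum_{s:\, l \in L(s)} x_s \le c_l \quad \forall\, l,
\qquad\text{with dual updates}\qquad
\lambda_l(t+1) = \Bigl[\lambda_l(t) + \gamma\Bigl(\textstyle\sum_{s:\, l \in L(s)} x_s(t) - c_l\Bigr)\Bigr]^{+},
\quad
x_s(t+1) = \arg\max_{x_s \ge 0}\Bigl(U_s(x_s) - x_s \textstyle\sum_{l \in L(s)} \lambda_l(t+1)\Bigr).
```

Augmenting each U_s with an application-specific additive term, as described above, changes the per-source maximization but leaves this overall decomposition structure intact.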
10

Statistical models for catch-at-length data with birth cohort information

Chung, Sai-ho (January 2005)
Thesis (Ph. D.)--University of Hong Kong, 2006.
