31

Machine Learning Algorithms for Influence Maximization on Social Networks

Abhishek Kumar Umrawal (16787802) 08 August 2023 (has links)
With an increasing number of users spending time on social media platforms and engaging with family, friends, and influencers within communities of interest (such as fashion, cooking, and gaming), there are significant opportunities for marketing firms to leverage word-of-mouth advertising on these platforms. In particular, marketing firms can select sets of influencers within relevant communities to sponsor, namely by providing free product samples to those influencers so that they will discuss and promote the product on their social media accounts.

The question of which set of influencers to sponsor is known as influence maximization (IM), formally defined as follows: "if we can try to convince a subset of individuals in a social network to adopt a new product or innovation, and the goal is to trigger a large cascade of further adoptions, which set of individuals should we target?" Under standard diffusion models, this optimization problem is known to be NP-hard. The problem has been widely studied in the literature and several approaches for solving it have been proposed. Some provide near-optimal solutions but are costly in terms of runtime; others are faster but are heuristics without approximation guarantees.

In this dissertation, we study the influence maximization problem extensively. We provide efficient algorithms for solving the original problem and its important generalizations, along with theoretical guarantees and experimental evaluations to support the claims made in this dissertation.

We first study the original IM problem, referred to as the discrete influence maximization (DIM) problem, where the marketer either provides a free sample to an influencer or does not; fractional discounts (such as 10% off) are not allowed. As noted above, existing methods such as the simulation-based greedy algorithm provide near-optimal solutions but are costly in terms of runtime, while faster approaches lack approximation guarantees. Motivated by this trade-off between accuracy and runtime, we propose a community-aware divide-and-conquer framework that provides a time-efficient solution to the DIM problem. The proposed framework outperforms the standard methods in terms of runtime and the heuristic methods in terms of influence.

We next study a natural extension of the DIM problem, referred to as the fractional influence maximization (FIM) problem, where the marketer may offer fractional discounts to influencers rather than only a free sample or nothing. The FIM problem clearly gives the marketer more flexibility in allocating the available budget among influencers. Existing solution methods use a continuous extension of the simulation-based greedy approximation algorithm for the DIM problem, greedily building the solution for the given fractional budget by taking small steps through the interior of the feasible region. In contrast, we first characterize the solution to the FIM problem in terms of the solution to the DIM problem, and then use this characterization to propose an efficient greedy approximation algorithm that iterates only over the corners of the feasible region. This leads to large savings in runtime compared to existing methods that iterate through the interior of the feasible region. Furthermore, we provide an approximation guarantee for the proposed greedy algorithm for the FIM problem.

Finally, we study another extension of the DIM problem, referred to as the online discrete influence maximization (ODIM) problem, where the marketer provides free samples not just once but repeatedly over a given time horizon, and the goal is to maximize the cumulative influence over time while receiving instantaneous feedback. Existing solution methods rely on semi-bandit instantaneous feedback, in which some intermediate aspects of how influence propagates through the social network are assumed to be observed, for instance, which specific individuals became influenced at intermediate steps of the propagation. For social networks with user privacy, this information is not available. Hence, we consider the ODIM problem with full-bandit feedback, where no knowledge of the underlying social network or diffusion process is assumed. We note that the ODIM problem is an instance of the stochastic combinatorial multi-armed bandit (CMAB) problem with submodular rewards. To solve it, we provide an efficient algorithm that outperforms existing methods in terms of influence as well as time and space complexity.

Furthermore, we point out the connections between influence maximization, the related problem of disease outbreak prevention, and the more general problem of submodular maximization. The methods proposed in this dissertation can also be used to solve those problems.
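As a point of reference for the runtime/accuracy trade-off discussed above, the following is a minimal sketch of the simulation-based greedy baseline under the Independent Cascade diffusion model, not the dissertation's community-aware or full-bandit algorithms; the graph representation, the propagation probability p, and the function names are illustrative assumptions.

```python
import random

def simulate_ic(graph, seeds, p=0.1):
    """One Monte Carlo run of the Independent Cascade model.
    graph: dict mapping node -> list of neighbours; seeds: iterable of seed nodes."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def estimate_influence(graph, seeds, p=0.1, runs=200):
    """Average spread over independent cascade simulations."""
    return sum(simulate_ic(graph, seeds, p) for _ in range(runs)) / runs

def greedy_im(graph, k, p=0.1, runs=200):
    """Simulation-based greedy: repeatedly add the node with the largest
    estimated marginal gain in spread (the classic (1 - 1/e) approximation
    for monotone submodular spread functions)."""
    seeds = set()
    for _ in range(k):
        base = estimate_influence(graph, seeds, p, runs) if seeds else 0.0
        best_node, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            gain = estimate_influence(graph, seeds | {v}, p, runs) - base
            if gain > best_gain:
                best_node, best_gain = v, gain
        if best_node is None:
            break
        seeds.add(best_node)
    return seeds
```

The nested Monte Carlo estimation is exactly what makes this baseline expensive in runtime, which is the trade-off the abstract refers to.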
32

Statistical models for catch-at-length data with birth cohort information

Chung, Sai-ho., 鍾世豪. January 2005 (has links)
Social Sciences / Doctoral / Doctor of Philosophy
33

Computational intelligence techniques for missing data imputation

Nelwamondo, Fulufhelo Vincent 14 August 2008 (has links)
Despite considerable advances in missing data imputation techniques over the last three decades, the problem of missing data remains largely unsolved. Many techniques have emerged in the literature as candidate solutions, including Expectation Maximisation (EM) and the combination of autoassociative neural networks and genetic algorithms (NN-GA). The merits of both techniques have been discussed at length in the literature, but they have never been compared to each other. This thesis contributes to knowledge by, firstly, conducting a comparative study of these two techniques. The significance of the difference in performance of the methods is presented. Secondly, predictive analysis methods suitable for the missing data problem are presented. The predictive analysis here aims to determine whether the data in question are predictable and hence to help choose the estimation technique accordingly. Thirdly, a novel treatment of missing data for online condition monitoring problems is presented. An ensemble of three autoencoders together with hybrid genetic algorithms (GA) and fast simulated annealing was used to approximate missing data. Several significant insights were deduced from the simulation results. It was deduced that for the problem of missing data using computational intelligence approaches, the choice of optimisation method plays a significant role in prediction. Although hybrid GA and fast simulated annealing (FSA) were observed to converge to the same search space and to almost the same values, they differ significantly in duration. This unique contribution demonstrates that particular attention has to be paid to the choice of optimisation techniques and their decision boundaries. Another unique contribution of this work was not only to demonstrate that dynamic programming is applicable to the problem of missing data, but also to show that it is efficient in addressing it. An NN-GA model was built to impute missing data using the principle of dynamic programming. This approach makes it possible to modularise the problem of missing data for maximum efficiency. With the advancements in parallel computing, the various modules of the problem could be solved by different processors working together in parallel. Furthermore, a method for imputing missing data in non-stationary time series that learns incrementally even when there is concept drift is proposed. This method works by measuring heteroskedasticity to detect concept drift and explores an online learning technique. The introduction of this novel method opens new directions for research in which missing data can be estimated for non-stationary applications. Thus, this thesis has uniquely opened the doors of research to this area. Many other methods need to be developed so that they can be compared to the approach proposed in this thesis. Another novel technique for dealing with missing data in the online condition monitoring problem was also presented and studied. The problem of classifying in the presence of missing data was addressed, where no attempt is made to recover the missing values. The problem domain was then extended to regression. The proposed technique performs better than the NN-GA approach in both accuracy and time efficiency during testing. The advantage of the proposed technique is that it eliminates the need to find the best estimate of the data and hence saves time.
Lastly, instead of using complicated techniques to estimate missing values, an imputation approach based on rough sets is explored. Empirical results obtained using both real and synthetic data are given, and they provide valuable and promising insight into the problem of missing data. The work has significantly confirmed that rough sets can be reliable for missing data estimation in large, real databases.
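To make the EM-based imputation idea concrete, here is a minimal sketch of EM imputation under a multivariate Gaussian assumption. It is a textbook variant rather than the thesis's EM, NN-GA, or autoencoder-ensemble methods; the function name and the Gaussian model are assumptions.

```python
import numpy as np

def em_impute(X, n_iter=50):
    """EM imputation for a data matrix X (np.nan marks missing values),
    assuming rows are i.i.d. multivariate Gaussian."""
    X = X.astype(float).copy()
    n, d = X.shape
    miss = np.isnan(X)
    # initialise with column means and the covariance of the mean-filled data
    mu = np.nanmean(X, axis=0)
    X_filled = np.where(miss, mu, X)
    Sigma = np.cov(X_filled, rowvar=False) + 1e-6 * np.eye(d)

    for _ in range(n_iter):
        C = np.zeros((d, d))  # accumulated conditional covariance correction
        for i in range(n):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            Soo = Sigma[np.ix_(o, o)]
            Smo = Sigma[np.ix_(m, o)]
            Smm = Sigma[np.ix_(m, m)]
            w = np.linalg.solve(Soo, X[i, o] - mu[o])
            # E-step: conditional mean of the missing entries given the observed ones
            X_filled[i, m] = mu[m] + Smo @ w
            # conditional covariance of the missing block
            C[np.ix_(m, m)] += Smm - Smo @ np.linalg.solve(Soo, Smo.T)
        # M-step: refit the mean and covariance from the completed data
        mu = X_filled.mean(axis=0)
        diff = X_filled - mu
        Sigma = (diff.T @ diff + C) / n + 1e-6 * np.eye(d)
    return X_filled
```

The E-step fills each missing block with its conditional expectation and the M-step refits the parameters, which is the alternation the comparative study in the thesis benchmarks against NN-GA.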
34

The Relationship Between Stock Liquidity and Dividend Policy: A Quantitative Analysis of Swedish Limited Companies, 2014-2017

Zethzon, Anna, Liljeberg, Sara January 2019 (has links)
This paper investigates whether there is a relationship between stock liquidity and companies' dividend policy and, if so, how this relationship appears. Stock liquidity describes how liquid a stock is, that is, how often it is traded on the stock market (Banerjee et al. 2007). A stock with high liquidity is easier to sell than a stock with low liquidity (ibid). Previous research on the American and Chinese markets has found a connection between stock liquidity and dividend policy (Banerjee et al. 2007; Jiang et al. 2017), though the connections found point in opposite directions, which makes it interesting to investigate how the relationship appears on other markets such as the Swedish one.

Stock liquidity affects shareholders' liquidity needs: owners of more liquid shares can sell their shares to create "homemade" dividends (Banerjee et al. 2007). Shareholders of less liquid stocks, by contrast, must rely more heavily on dividends to satisfy their liquidity needs and therefore have a larger interest in a generous dividend policy (ibid). A generous dividend policy is consistent with the theory of Shareholder Value Maximization (Bento et al. 2016), which is central to this paper; the theory holds that a company's main goal should be to maximize value for its shareholders (ibid).

To answer the research question "How does stock liquidity affect the dividends, if any, of companies listed on the Swedish stock market?", the paper uses a quantitative approach with cross-sectional data in a multiple regression analysis. Based on earlier research, it was assumed that Swedish listed companies with low stock liquidity are more inclined to pay dividends than companies with high stock liquidity. The empirical results confirm this assumption, although the relationship found is weak. The relationship could be connected to relevant theories such as the signalling hypothesis and Shareholder Value Maximization, showing that stock liquidity matters for shareholders and for companies' dividend policies.
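To illustrate the kind of cross-sectional multiple regression described above, here is a small sketch on simulated data; the variable names, controls, and simulated relationship (payout falling with liquidity, per the paper's assumption) are hypothetical and not the authors' dataset or model specification.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical cross-sectional data: one row per listed firm.
rng = np.random.default_rng(0)
n = 300
liquidity = rng.lognormal(mean=0.0, sigma=1.0, size=n)    # e.g. share turnover
size = rng.normal(10.0, 2.0, size=n)                      # log market cap (control)
profitability = rng.normal(0.05, 0.03, size=n)            # return on assets (control)
# Simulated payout ratio that decreases with liquidity
payout = 0.4 - 0.05 * np.log(liquidity) + 0.5 * profitability + rng.normal(0, 0.1, n)

# OLS of payout on log liquidity plus controls
X = sm.add_constant(np.column_stack([np.log(liquidity), size, profitability]))
model = sm.OLS(payout, X).fit()
print(model.summary())   # inspect the sign and significance of the liquidity coefficient
```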
35

Improved iterative schemes for REML estimation of variance parameters in linear mixed models.

Knight, Emma January 2008 (has links)
Residual maximum likelihood (REML) estimation is a popular method of estimation for variance parameters in linear mixed models, which typically requires an iterative scheme. The aim of this thesis is to review several popular iterative schemes and to develop an improved iterative strategy that will work for a wide class of models. The average information (AI) algorithm is a computationally convenient and efficient algorithm to use when starting values are in the neighbourhood of the REML solution. However when reasonable starting values are not available, the algorithm can fail to converge. The expectation-maximisation (EM) algorithm and the parameter expanded EM (PXEM) algorithm are good alternatives in these situations but they can be very slow to converge. The formulation of these algorithms for a general linear mixed model is presented, along with their convergence properties. A series of hybrid algorithms are presented. EM or PXEM iterations are used initially to obtain variance parameter estimates that are in the neighbourhood of the REML solution, and then AI iterations are used to ensure rapid convergence. Composite local EM/AI and local PXEM/AI schemes are also developed; the local EM and local PXEM algorithms update only the random effect variance parameters, with the estimates of the residual error variance parameters held fixed. Techniques for determining when to use EM-type iterations and when to switch to AI iterations are investigated. Methods for obtaining starting values for the iterative schemes are also presented. The performance of these various schemes is investigated for several different linear mixed models. A number of data sets are used, including published data sets and simulated data. The performance of the basic algorithms is compared to that of the various hybrid algorithms, using both uninformed and informed starting values. The theoretical and empirical convergence rates are calculated and compared for the basic algorithms. The direct comparison of the AI and PXEM algorithms shows that the PXEM algorithm, although an improvement over the EM algorithm, still falls well short of the AI algorithm in terms of speed of convergence. However, when the starting values are too far from the REML solution, the AI algorithm can be unstable. Instability is most likely to arise in models with a more complex variance structure. The hybrid schemes use EM-type iterations to move close enough to the REML solution to enable the AI algorithm to successfully converge. They are shown to be robust to choice of starting values like the EM and PXEM algorithms, while demonstrating fast convergence like the AI algorithm. / Thesis (Ph.D.) - University of Adelaide, School of Agriculture, Food and Wine, 2008
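To give a concrete feel for the EM-type iterations that the hybrid schemes start from, the sketch below runs EM/ECM updates for the variance parameters of a simple one-way random-effects model. It uses the maximum-likelihood (not REML) form and omits the average information step, so it only illustrates why such iterations are stable but can be slow; it is not the thesis's algorithm.

```python
import numpy as np

def em_one_way(y, groups, n_iter=500, tol=1e-8):
    """EM-type (ECM) iterations for y_ij = mu + a_i + e_ij,
    with a_i ~ N(0, s2a) and e_ij ~ N(0, s2e)."""
    y = np.asarray(y, float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    n_i = np.array([np.sum(groups == g) for g in labels])
    ybar_i = np.array([y[groups == g].mean() for g in labels])
    N = len(y)

    mu, s2a, s2e = y.mean(), y.var() / 2, y.var() / 2
    for _ in range(n_iter):
        # E-step: conditional (BLUP-style) means and variances of the random effects
        denom = n_i * s2a + s2e
        a_hat = n_i * s2a * (ybar_i - mu) / denom
        v = s2a * s2e / denom
        # M-step: update variance components from expected complete-data sums of squares
        s2a_new = np.mean(a_hat ** 2 + v)
        sse = sum(np.sum((y[groups == g] - mu - a_hat[k]) ** 2)
                  for k, g in enumerate(labels))
        s2e_new = (sse + np.sum(n_i * v)) / N
        # CM-step for the fixed effect: GLS mean given the current variances
        w = n_i / (n_i * s2a_new + s2e_new)
        mu_new = np.sum(w * ybar_i) / np.sum(w)
        converged = abs(s2a_new - s2a) + abs(s2e_new - s2e) < tol
        mu, s2a, s2e = mu_new, s2a_new, s2e_new
        if converged:
            break
    return mu, s2a, s2e
```

In the hybrid strategy described in the abstract, a few iterations of this kind would be used only to reach the neighbourhood of the solution before switching to AI updates for rapid final convergence.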
36

Semiparametric maximum likelihood for regression with measurement error

Suh, Eun-Young 03 May 2001 (has links)
Semiparametric maximum likelihood analysis allows inference in errors-in-variables models with small loss of efficiency relative to full likelihood analysis but with significantly weakened assumptions. In addition, since no distributional assumptions are made for the nuisance parameters, the analysis more nearly parallels that for usual regression. These highly desirable features and the high degree of modelling flexibility permitted warrant the development of the approach for routine use. This thesis does so for the special cases of linear and nonlinear regression with measurement errors in one explanatory variable. A transparent and flexible computational approach is developed, the analysis is exhibited on some examples, and finite sample properties of estimates, approximate standard errors, and likelihood ratio inference are clarified with simulation. / Graduation date: 2001
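A small simulation of the errors-in-variables setting studied above: the true covariate is observed only with additive noise, and naive OLS attenuates the slope. The moment correction shown assumes the measurement error variance is known; it is a classical illustration of the problem, not the thesis's semiparametric maximum likelihood method, and all names and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta0, beta1, sigma_u = 2000, 1.0, 2.0, 0.8
x = rng.normal(0.0, 1.0, n)                 # unobserved true covariate
w = x + rng.normal(0.0, sigma_u, n)         # observed, error-prone covariate
y = beta0 + beta1 * x + rng.normal(0.0, 0.5, n)

# naive OLS slope of y on w (attenuated toward zero)
b_naive = np.cov(w, y, bias=True)[0, 1] / np.var(w)
# method-of-moments correction: divide by the reliability ratio
reliability = (np.var(w) - sigma_u ** 2) / np.var(w)
b_corrected = b_naive / reliability
print(f"naive slope {b_naive:.3f}, corrected slope {b_corrected:.3f}, true {beta1}")
```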
37

Oblivious and Non-oblivious Local Search for Combinatorial Optimization

Ward, Justin 07 January 2013 (has links)
Standard local search algorithms for combinatorial optimization problems repeatedly apply small changes to a current solution to improve the problem's given objective function. In contrast, non-oblivious local search algorithms are guided by an auxiliary potential function, which is distinct from the problem's objective. In this thesis, we compare the standard and non-oblivious approaches for a variety of problems, and derive new, improved non-oblivious local search algorithms for several problems in the area of constrained linear and monotone submodular maximization. First, we give a new, randomized approximation algorithm for maximizing a monotone submodular function subject to a matroid constraint. Our algorithm's approximation ratio matches both the known hardness of approximation bounds for the problem and the performance of the recent "continuous greedy" algorithm. Unlike the continuous greedy algorithm, our algorithm is straightforward and combinatorial. In the case that the monotone submodular function is a coverage function, we can obtain a further simplified, deterministic algorithm with improved running time. Moving beyond the case of single matroid constraints, we then consider general classes of set systems that capture problems that can be approximated well. While previous such classes have focused primarily on greedy algorithms, we give a new class that captures problems amenable to optimization by local search algorithms. We show that several combinatorial optimization problems can be placed in this class, and give a non-oblivious local search algorithm that delivers improved approximations for a variety of specific problems. In contrast, we show that standard local search algorithms give no improvement over known approximation results for these problems, even when allowed to search larger neighborhoods than their non-oblivious counterparts. Finally, we expand on these results by considering standard local search algorithms for constraint satisfaction problems. We develop conditions under which the approximation ratio of standard local search remains limited even for super-polynomial or exponential local neighborhoods. In the special case of MaxCut, we further show that a variety of techniques including random or greedy initialization, large neighborhoods, and best-improvement pivot rules cannot improve the approximation performance of standard local search.
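For reference, here is a minimal sketch of standard (oblivious) local search for MaxCut, the baseline whose limitations the thesis analyses; a non-oblivious variant would decide which vertex to flip using an auxiliary potential rather than the cut value itself. The edge-list input format is an assumption.

```python
import random

def local_search_maxcut(n, edges, seed=None):
    """Oblivious local search for MaxCut: repeatedly flip any vertex whose move
    to the other side strictly increases the cut weight, until no single flip helps.
    edges: list of (u, v, weight) tuples over vertices 0..n-1."""
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n)]   # random initial partition
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    improved = True
    while improved:
        improved = False
        for u in range(n):
            # gain of flipping u = (weight to same side) - (weight to the other side)
            same = sum(w for v, w in adj[u] if side[v] == side[u])
            cross = sum(w for v, w in adj[u] if side[v] != side[u])
            if same > cross:
                side[u] = 1 - side[u]
                improved = True
    cut = sum(w for u, v, w in edges if side[u] != side[v])
    return side, cut
```

At a local optimum every vertex has at least half of its incident weight crossing the cut, which is the standard 1/2-approximation argument for this baseline.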
38

The Bioeconomic Analysis of Longline Yellowfin Tuna in the Western and Central Pacific

Tsai, Ching-yu 11 July 2011 (has links)
In this study, the Gordon-Schaefer bioeconomic model is used to derive the equilibrium levels for yellowfin tuna in the Western and Central Pacific under open access (OA) and under present value maximization (MPV). Comparing the catches and the stocks at the two equilibria shows that the management of yellowfin tuna in the Western and Central Pacific tends toward the MPV outcome, indicating that the monitoring, control and surveillance (MCS) measures implemented by the regional fisheries management organization (RFMO) have a significant effect. A sensitivity analysis is then used to understand how the stock and effort levels change as different parameters vary. Under OA, effectively maintaining the sustainability of the stock calls for reducing the price and the catchability coefficient and increasing the cost per unit of effort; under MPV, the catchability coefficient and the intrinsic growth rate have the largest influence on the effort level. Finally, simulations of the catches and the stocks show that if the fishery can continue to be managed effectively through MCS, the catch and the stock of yellowfin tuna will tend toward the MPV equilibrium, making the most profitable use of the resource, maintaining the business interests of our distant-water fleet, and keeping the resource sustainable.
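A hedged sketch of the closed-form Gordon-Schaefer equilibria referred to above, using the usual parameter symbols (r intrinsic growth rate, K carrying capacity, q catchability, p price, c unit effort cost). The static maximum-economic-yield (MEY) solution shown coincides with present value maximization only in the zero-discount-rate limit, so this illustrates the model rather than the thesis's full MPV computation.

```python
def gordon_schaefer_equilibria(r, K, q, p, c):
    """Stock X, effort E, and sustainable yield at the open-access and static
    economic-optimum equilibria of the Gordon-Schaefer model:
    growth r*X*(1 - X/K), harvest H = q*E*X, revenue p*H, cost c*E."""
    x_oa = c / (p * q)                              # open access: rents dissipated, p*q*X = c
    e_oa = (r / q) * (1 - c / (p * q * K))          # effort that sustains X_oa
    x_mey = (K / 2) * (1 + c / (p * q * K))         # static economic-optimum stock
    e_mey = (r / (2 * q)) * (1 - c / (p * q * K))   # exactly half the open-access effort
    yield_at = lambda x: r * x * (1 - x / K)        # sustainable yield at stock x
    return {"open_access": (x_oa, e_oa, yield_at(x_oa)),
            "mey": (x_mey, e_mey, yield_at(x_mey))}

# Example with illustrative (not estimated) parameter values:
print(gordon_schaefer_equilibria(r=0.6, K=1.0e6, q=1e-4, p=1500.0, c=30.0))
```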
39

The impact on banks' portfolio under BIS amendment to the capital accord of 1996 and reserve requirement

Chiu, Yu-Fen 23 June 2000 (has links)
40

Joint Distributed Detection and Estimation for Cooperative Communication in Cluster-Based Networks

Pu, Jyun-Wei 11 August 2008 (has links)
In this thesis, a new scheme based on the compress-and-forward (CF) technique is proposed, and the expectation-maximization (EM) algorithm is used to converge to a locally optimal solution. Exploiting the characteristics of the EM algorithm, the destination node feeds back an improved decision to the relay node to serve as the next initial value. After the iterations, the relay node obtains a better detection result that converges to locally optimal performance. Finally, the destination node receives the optimal detection result from each relay and makes a final decision. In the new structure, channel estimation can also be performed at the relay node by the EM algorithm, which is why the scheme is called joint distributed detection and estimation. Simulations show that the proposed scheme achieves an iteration gain at both the relay and the destination node.
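As a toy illustration of using EM for joint detection and channel estimation, the sketch below alternates soft BPSK symbol estimates (E-step) with a channel-gain update (M-step) on a single real-valued link. It is not the thesis's relay-based compress-and-forward scheme; the model, noise assumptions, and variable names are assumptions.

```python
import numpy as np

def em_channel_and_symbols(y, sigma2, n_iter=20):
    """EM for y_k = h*s_k + n_k with unknown real gain h, BPSK symbols
    s_k in {+1, -1}, and Gaussian noise of known variance sigma2.
    Note the usual sign ambiguity between h and the symbol sequence."""
    y = np.asarray(y, float)
    h = np.mean(np.abs(y))               # crude initial channel estimate
    for _ in range(n_iter):
        # E-step: posterior mean (soft decision) of each symbol given the current channel
        s_bar = np.tanh(h * y / sigma2)
        # M-step: re-estimate the channel gain from the soft symbols
        h = np.mean(y * s_bar)
    s_hat = np.sign(s_bar)               # hard decisions after convergence
    return h, s_hat
```

In the thesis's distributed setting, iterations of this kind run at the relay, with the destination feeding back improved decisions to initialise the next round.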
