1

Inferring diffusion models with structural and behavioral dependency in social networks

Bao, Qing 23 August 2016 (has links)
Online social and information networks, like Facebook and Twitter, exploit the influence of neighbors to achieve effective information sharing and spreading. The process by which information spreads via the connected nodes of social and information networks is referred to as diffusion. In the literature, a number of diffusion models have been proposed for applications such as influential user identification and personalized recommendation. However, comprehensive data-driven studies of the hidden mechanisms governing information diffusion are still lacking. This thesis research aims to design novel diffusion models that capture the structural and behavioral dependency of neighboring nodes in social networks, and to develop computational algorithms that infer the diffusion models, as well as the underlying diffusion mechanisms, from information cascades observed in real social networks.

By incorporating structural dependency and diversity of node neighborhoods into the widely used Independent Cascade (IC) model, we first propose a component-based diffusion model in which the influence of parent nodes is exerted via connected components. Instead of estimating node-based diffusion probabilities as in the IC model, component-based diffusion probabilities are estimated using an expectation-maximization (EM) algorithm derived under a Bayesian framework. In addition, a new structural diversity measure, dynamic effective size, is proposed for quantifying the dynamic information redundancy within each parent component. The component-based diffusion model suggests that node connectivity is a good proxy for quantifying how a node's activation behavior is affected by its neighborhood.

To model the behavioral dependency of node neighborhoods directly, we then propose a co-activation-pattern-based diffusion model that integrates the latent class model into the IC model, with the co-activation patterns of parent nodes forming the latent classes for each node. Both the co-activation patterns and the corresponding pattern-based diffusion probabilities are inferred using a two-level EM algorithm. Compared to the component-based diffusion model, the inferred co-activation patterns can be interpreted as soft parent components, providing insights into how each node is influenced by its neighbors as reflected in the observed cascade data.

Motivated by the goal of discovering a common set of over-represented temporal activation patterns (motifs) characterizing the overall diffusion in a social network, we further propose a motif-based diffusion model. By considering the temporal ordering of parent activations and the social roles estimated for each node, each temporal activation motif is represented as a Markov chain whose states are the social roles. Again, a two-level EM algorithm is proposed to infer the temporal activation motifs and the corresponding diffusion network simultaneously. The inferred activation motifs can be interpreted as the underlying mechanisms characterizing the diffusion happening in the social network.

Extensive experiments have been carried out to evaluate the performance of all the proposed diffusion models on both synthetic and real data. The results presented in the thesis demonstrate the effectiveness of the proposed models. In addition, we discuss in detail how to interpret the inferred co-activation patterns and interaction motifs as diffusion mechanisms in the context of different real social network data sets.
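As context for the base model the thesis extends, below is a minimal sketch of the standard Independent Cascade (IC) process, not the component-based variant proposed above. The toy graph, edge probabilities, and seed set are illustrative assumptions.

```python
# A minimal sketch of the standard Independent Cascade (IC) model; the
# graph and edge probabilities below are illustrative assumptions.
import random

def independent_cascade(graph, seeds, prob, rng=random.Random(0)):
    """Simulate one IC diffusion: each newly activated node gets a single
    chance to activate each inactive neighbor with probability prob[(u, v)]."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < prob[(u, v)]:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active

# Toy network with edge-level diffusion probabilities (assumed values).
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
prob = {("a", "b"): 0.6, ("a", "c"): 0.4, ("b", "d"): 0.5, ("c", "d"): 0.5}
print(independent_cascade(graph, {"a"}, prob))
```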
2

Multi-objective design of complex aircraft structures using evolutionary algorithms

Seeger, J., Wolf, K. 03 June 2019 (has links)
In this article, a design methodology for complex composite aircraft structures is presented. The developed approach combines a multi-objective optimization method with a parameterized simulation model using a design concept database. Because the structures are described by a combination of discrete and continuous design variables, evolutionary algorithms are used within the optimization approach. The approach requires evaluating the design alternatives, which is performed by parameterized simulation models. The variability of these models is achieved using a design concept database that contains different layouts for each implemented structural part. Owing to the complexity of the generated aircraft structures, the finite element method is applied to calculate the structural behaviour. The applicability of the developed design approach is demonstrated by optimizing two composite aircraft fuselage examples. The obtained results show that the developed methodology is useful and reliable for designing complex aircraft structures.
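To make the approach concrete, here is a minimal sketch of multi-objective evolutionary search over mixed discrete and continuous design variables. The toy objectives (mass and cost), the layout choices, and the variable encoding are illustrative assumptions standing in for the parameterized finite element models; this is not the authors' implementation.

```python
# A minimal sketch of multi-objective evolutionary search over mixed
# discrete/continuous design variables. Objectives and encoding are
# illustrative assumptions.
import random

rng = random.Random(1)
LAYOUTS = ["frame", "sandwich", "stiffened"]  # discrete concept choices

def random_design():
    return {"layout": rng.choice(LAYOUTS), "thickness": rng.uniform(1.0, 5.0)}

def objectives(d):
    # Stand-in for the parameterized FE evaluation: mass vs. cost.
    mass = d["thickness"] * {"frame": 1.0, "sandwich": 0.7, "stiffened": 0.85}[d["layout"]]
    cost = {"frame": 1.0, "sandwich": 2.0, "stiffened": 1.5}[d["layout"]] + 10.0 / d["thickness"]
    return (mass, cost)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mutate(d):
    child = dict(d)
    if rng.random() < 0.3:  # discrete mutation: switch the structural layout
        child["layout"] = rng.choice(LAYOUTS)
    child["thickness"] = min(5.0, max(1.0, child["thickness"] + rng.gauss(0, 0.3)))
    return child

pop = [random_design() for _ in range(30)]
for _ in range(50):
    pop += [mutate(rng.choice(pop)) for _ in range(30)]
    scored = [(objectives(d), d) for d in pop]
    # Keep the non-dominated set (truncated Pareto selection).
    pop = [d for f, d in scored if not any(dominates(g, f) for g, _ in scored)][:30]

print([(d["layout"], round(d["thickness"], 2)) for d in pop[:5]])
```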
3

Sequential Decision Making with Combinatorial Actions and High-Dimensional Contexts

Oh, Min-hwan January 2020 (has links)
In interactive sequential decision-making systems, the learning agent needs to react to new information both in the short term and in the long term, and must learn to generalize through repeated interactions with the environment. Unlike in offline learning environments, the new data that arrives is typically a function of previous actions taken by the agent. One of the key challenges is to efficiently use and generalize from data that may never reappear. Furthermore, in many real-world applications, the agent receives only partial feedback on the decisions it makes. This necessitates a balanced exploration-exploitation approach, in which the agent must both efficiently collect relevant information in preparation for future feedback and produce the desired outcome in the current period by exploiting the information already collected. In this thesis, we focus on two classes of fundamental sequential learning problems.

Contextual bandits with combinatorial actions and user choice (Chapters 2 and 3): We investigate the dynamic assortment selection problem by combining statistical estimation of choice models with generalization using contextual information. For this problem, we design and analyze both UCB and Thompson sampling algorithms with rigorous performance guarantees and tractability.

High-dimensional contextual bandits (Chapter 4): We investigate policies that can efficiently exploit structure in high-dimensional data, e.g., sparsity. We design and analyze an efficient sparse contextual bandit algorithm that does not require knowledge of the sparsity of the underlying parameter, information that essentially all existing sparse bandit algorithms require.
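As an illustration of one algorithm family analyzed in the thesis, here is a minimal sketch of Thompson sampling for a linear contextual bandit with a Gaussian posterior. The synthetic rewards and dimensions are illustrative; this is the textbook linear variant, not the thesis's assortment-selection algorithm.

```python
# A minimal sketch of Thompson sampling for a linear contextual bandit.
# Gaussian-posterior updates and synthetic rewards are standard textbook
# choices, used here for illustration only.
import numpy as np

rng = np.random.default_rng(0)
d, n_arms, T = 5, 10, 2000
theta_true = rng.normal(size=d)              # unknown parameter (simulation only)

B = np.eye(d)                                # posterior precision
f = np.zeros(d)                              # sum of context * reward
for t in range(T):
    contexts = rng.normal(size=(n_arms, d))  # one feature vector per arm
    mu = np.linalg.solve(B, f)               # posterior mean
    theta_s = rng.multivariate_normal(mu, np.linalg.inv(B))  # posterior sample
    a = int(np.argmax(contexts @ theta_s))   # act greedily w.r.t. the sample
    r = contexts[a] @ theta_true + rng.normal(scale=0.1)
    B += np.outer(contexts[a], contexts[a])  # Bayesian linear-regression update
    f += r * contexts[a]

print("estimate error:", np.linalg.norm(np.linalg.solve(B, f) - theta_true))
```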
4

Multiple Causal Inference with Bayesian Factor Models

Wang, Yixin January 2020 (has links)
Causal inference from observational data is a vital problem, but it comes with strong assumptions. Most methods assume that we observe all confounders, variables that affect both the cause variables and the outcome variables. But whether we have observed all confounders is a famously untestable assumption. In this dissertation, we develop algorithms for causal inference from observational data that allow for unobserved confounding. These algorithms focus on problems of multiple causal inference: scientific studies that involve many causes or many outcomes of simultaneous interest.

We begin with multiple causal inference with many causes. We develop the deconfounder, an algorithm that accommodates unobserved confounding by leveraging the multiplicity of the causes. How does the deconfounder work? It uses the correlation among the multiple causes as evidence for unobserved confounders, combining Bayesian factor models and predictive model checking to perform causal inference. We study the theoretical requirements for the deconfounder to provide unbiased causal estimates, along with its limitations and trade-offs. We also show how the deconfounder connects to the proxy-variable strategy for causal identification (Miao et al., 2018) by treating subsets of causes as proxies of the unobserved confounder. We demonstrate the deconfounder in simulation studies and on real-world data. As an application, we develop the deconfounded recommender, a variant of the deconfounder tailored to causal inference on recommender systems.

Finally, we consider multiple causal inference with many outcomes. We develop the control-outcome deconfounder, an algorithm that corrects for unobserved confounders using multiple negative control outcomes. Negative control outcomes are outcome variables for which the cause is a priori known to have no effect. The control-outcome deconfounder uses the correlation among these outcomes as evidence for unobserved confounders. We discuss the theoretical and empirical properties of the control-outcome deconfounder, and we show how it generalizes the method of synthetic controls (Abadie et al., 2010, 2015; Abadie and Gardeazabal, 2003), expanding its scope to nonlinear settings and non-panel data.
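A minimal sketch of the deconfounder's two stages follows: fit a factor model to the multiple causes, then adjust for the inferred substitute confounder in the outcome model. The simulated data and dimensions are illustrative assumptions, and the predictive model check described in the dissertation is omitted here.

```python
# A minimal sketch of the deconfounder: (1) fit a factor model over the
# causes, (2) adjust for the substitute confounder in the outcome model.
# Data generation is illustrative; the model check is omitted.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, n_causes = 2000, 10
z = rng.normal(size=(n, 1))                       # unobserved confounder
A = z @ rng.normal(size=(1, n_causes)) + rng.normal(scale=0.5, size=(n, n_causes))
beta = rng.normal(size=n_causes)
y = A @ beta + 3.0 * z[:, 0] + rng.normal(scale=0.1, size=n)  # confounded outcome

# Stage 1: factor model over the causes; the latent factor acts as the
# substitute confounder.
z_hat = FactorAnalysis(n_components=1, random_state=0).fit_transform(A)

# Stage 2: outcome regression on causes plus substitute confounder.
naive = LinearRegression().fit(A, y)
adjusted = LinearRegression().fit(np.hstack([A, z_hat]), y)

print("naive bias:   ", np.abs(naive.coef_ - beta).mean())
print("adjusted bias:", np.abs(adjusted.coef_[:n_causes] - beta).mean())
```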
5

Data-Driven Combinatorial Optimization and Efficient Machine Learning Frameworks

Sakr, Nourhan January 2019 (has links)
Contemporary research in building optimization models and designing algorithms has become more data-centric and application-specific. While addressing three problems in the fields of combinatorial optimization and machine learning (ML), this work highlights the value of making data an important driver for building frameworks and designing solutions.

In Chapter 2, we look into building frameworks for data-intensive applications, such as ML algorithms. We introduce a family of matrices, Structured Spinners, and use them to perform input data projections. This operation is needed by a plethora of ML algorithms, such as dimensionality reduction, cross-polytope LSH techniques, and kernel approximation, yet constitutes a major bottleneck in terms of time and space complexity. We design a generic framework that speeds up ML algorithms performing such projections, with no or minor loss in accuracy. Our method applies to projections in both randomized and learned settings. We confirm our results via theoretical guarantees and numerical experiments, and we are the first to provide theoretical results of this type under adaptive settings.

In Chapter 3, we rely on empirical analysis to design algorithms for an online packet scheduling problem with weighted packets and agreeable deadlines. We adopt the model presented in Jez et al. and use their Modified Greedy (MG) algorithm, the best-performing one in the literature, as our benchmark. The competitive ratio of this benchmark was proven via worst-case analysis. Through extensive simulations, we shed light on practical bottlenecks that are not captured by the competitive analysis. We design three new algorithms, two of which make deterministic online decisions while the third learns an adaptive policy. Contrasted against the MG benchmark, our algorithms have significantly worse competitive ratios yet noticeably better empirical performance. Our methodology is particularly useful for online algorithms and underlines the significance of leveraging data or simulations to guide algorithm design and testing.

In Chapter 4, we study two different applications in cybersecurity: an adaptive ML problem and a game-theoretic model called PLADD. The common objective in both problems is to protect cybersystems against attacks by intelligent, adaptable, and well-resourced adversaries while maintaining a cost budget. We introduce a novel combinatorial scheduling formulation to design useful defense strategies that meet this goal. Our work separates the formulation from the data-driven analysis and solution; as such, the scheduling formulation, which does not resemble previously studied formulations from the scheduling literature, may be used as a new model by other researchers with different motivations. We keep the model generic enough for others to use, but design the algorithms that work best for our context. The formulation is inspired by stochastic programming and cast as a mixed integer program (MIP). We provide theoretical analysis (e.g., exploring integrality gaps, exploiting the combinatorial structure, proving NP-hardness, and developing dynamic-programming solutions for two-machine cases), then work towards data-driven heuristics and approximation algorithms using distributional assumptions and real data.
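For intuition about the projections in Chapter 2, here is a minimal sketch of a structured random projection of the Hadamard-diagonal kind used by Structured Spinners: three Hadamard-times-random-sign blocks applied via the fast Walsh-Hadamard transform in O(n log n) time, instead of the O(n^2) of a dense Gaussian matrix. Scaling and sub-sampling details are simplified assumptions.

```python
# A minimal sketch of an HD-product structured projection: three
# Hadamard-times-Rademacher-diagonal blocks via the fast Walsh-Hadamard
# transform. Normalization details are simplified assumptions.
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of two."""
    h = 1
    x = x.copy()
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x / np.sqrt(len(x))  # orthonormal scaling

def structured_spinner_project(x, signs):
    # signs: three independent Rademacher (+/-1) diagonal blocks.
    for d in signs:
        x = fwht(d * x)
    return x

rng = np.random.default_rng(0)
n = 1024
x = rng.normal(size=n)
signs = [rng.choice([-1.0, 1.0], size=n) for _ in range(3)]
y = structured_spinner_project(x, signs)
print("norm preserved:", np.linalg.norm(x), "->", np.linalg.norm(y))
```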
6

Controlling Randomness: Using Procedural Generation to Influence Player Uncertainty in Video Games

Fort, Travis 01 May 2015 (has links)
As video games increase in complexity and length, the use of automatic, or procedural, content generation has become a popular way to reduce the workload of game designers. However, procedural generation has certain consequences; in many instances, what the computer generates is uncertain to the designer. The intent of this thesis is to demonstrate how procedural generation can be used to intentionally shape the embedded randomness of a game system, enabling game designers to influence the level of uncertainty a player experiences in a nuanced way. This affords designers direct control over complex problems such as dynamic difficulty adjustment, pacing, and accessibility. Game design is examined from the perspective of uncertainty and how procedural generation can be used to intentionally add or reduce it. Various procedural generation techniques are discussed alongside the types of uncertainty, with case studies demonstrating the principles in action.
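As a toy illustration of the idea, the sketch below exposes a single "uncertainty" parameter that interpolates a generated level between a fixed pattern and a fully random one. All specifics (the gap-based level encoding, the jitter range) are illustrative assumptions, not taken from the thesis.

```python
# A minimal sketch of a procedural generator with a designer-facing
# uncertainty knob: 0 yields a deterministic layout, 1 a fully random one.
# The level encoding (obstacle gaps) is an illustrative assumption.
import random

def generate_level(length, uncertainty, rng):
    """uncertainty in [0, 1] scales how much randomness enters the layout."""
    base_gap = 4
    level = []
    for _ in range(length):
        jitter = rng.uniform(-3, 3) * uncertainty  # the randomness the knob controls
        level.append(max(1, round(base_gap + jitter)))
    return level

rng = random.Random(42)
print("low  uncertainty:", generate_level(8, 0.1, rng))
print("high uncertainty:", generate_level(8, 0.9, rng))
```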
7

Performance Study and Dynamic Optimization Design for Thread Pool Systems

Xu, Dongping January 2004 (has links)
19 December 2004. Published through the Information Bridge: DOE Scientific and Technical Information, report IS-T 2359. The report is also available in paper and microfiche from NTIS.
8

Conceptual interplanetary space mission design using multi-objective evolutionary optimization and design grammars

Weber, A., Fasoulas, S., Wolf, K. 04 June 2019 (has links)
Conceptual design optimization (CDO) is a technique proposed for the structured evaluation of different design concepts. Design grammars provide a flexible, modular modelling architecture: the model is generated by a compiler from predefined components and rules. The rules describe the composition of the model; thus, different models can be optimized by the CDO in one run. This makes it possible to consider a complete mission design, including both the mission analysis and the system design. The combination of a CDO approach with a model based on design grammars is demonstrated in a concept study for a near-Earth asteroid mission whose objective is to investigate two asteroids of different kinds. The CDO reveals that a mission concept using two identical spacecraft, each flying to one target, is better than a concept with one spacecraft flying to two asteroids consecutively.
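A minimal sketch of the design-grammar idea follows: production rules expand an abstract mission concept into a concrete component list, so one optimization run can evaluate structurally different designs. The rules and components are illustrative assumptions, not the authors' grammar.

```python
# A minimal sketch of a design grammar: rules rewrite abstract symbols
# into components until only terminals remain. Rules and component names
# are illustrative assumptions.
RULES = {
    "mission": lambda g: [("spacecraft",)] * g["n_spacecraft"],
    "spacecraft": lambda g: [("bus",), ("propulsion", g["propulsion"]), ("payload",)],
}

def compile_model(genome):
    """Expand the grammar: rewrite symbols until only leaf components remain."""
    model, stack = [], [("mission",)]
    while stack:
        sym = stack.pop()
        rule = RULES.get(sym[0])
        if rule:
            stack.extend(rule(genome))
        else:
            model.append(sym)  # terminal component
    return model

# Two concepts from the study: one probe visiting both asteroids, or two
# identical probes visiting one asteroid each.
print(compile_model({"n_spacecraft": 1, "propulsion": "electric"}))
print(compile_model({"n_spacecraft": 2, "propulsion": "chemical"}))
```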
9

Efficient recovery algorithms with restricted access to strings

Sinha, Sandip January 2022 (has links)
We design efficient algorithms for computational problems over strings in several models where the algorithms have limited access to the input. These models, and algorithms developed respecting their constraints, are becoming increasingly relevant due to the rapidly increasing size of datasets in myriad applications.

Our first problem of interest is trace reconstruction. This is an important problem in learning theory and coding theory, with applications in computational biology. The goal is to recover an unknown string given independent samples (traces) of it generated via a probabilistic noise process called the deletion channel. We give state-of-the-art algorithms for this problem in several settings.

We then consider the problem of estimating the longest increasing subsequence (LIS) of a given string in sublinear time, given query access to the string. While the LIS of a string can be computed exactly in near-linear time, the optimal complexity of approximating the LIS length, especially when the LIS is much shorter than the string, is still open. We significantly improve upon prior work in terms of both approximation and time complexity in this regime. The runtime of our algorithm essentially matches the trivial query-complexity lower bound as a function of the length of the LIS.

Finally, we consider the problem of local decoding, or random access, on compressed strings. The Burrows-Wheeler Transform (BWT) is an important preprocessing step in lossless text compression that rearranges a string into runs of identical characters (by exploiting context regularities), resulting in highly compressible strings. However, the decoding process of the BWT is inherently sequential and prevents fast random access to the original string. We design a succinct data structure for efficiently decoding short substrings of a given string (and answering several other queries) locally, under its compressed BWT.
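To make the first problem concrete, here is a minimal sketch of the deletion channel together with a naive mean-based reconstruction baseline; it shows the problem setup, not the state-of-the-art algorithms the thesis contributes. The deletion probability and string length are illustrative.

```python
# A minimal sketch of the deletion channel and a crude reconstruction
# baseline; real trace-reconstruction algorithms align traces statistically.
import random

rng = random.Random(0)

def deletion_channel(s, q):
    """Each bit of s is deleted independently with probability q."""
    return [b for b in s if rng.random() > q]

def naive_reconstruct(traces, n):
    """Estimate bit i by averaging position i across sufficiently long
    traces; this ignores alignment and is only a baseline."""
    est = []
    for i in range(n):
        vals = [t[i] for t in traces if len(t) > i]
        est.append(1 if vals and sum(vals) / len(vals) >= 0.5 else 0)
    return est

n, q = 20, 0.1
truth = [rng.randint(0, 1) for _ in range(n)]
traces = [deletion_channel(truth, q) for _ in range(500)]
guess = naive_reconstruct(traces, n)
print("errors:", sum(a != b for a, b in zip(truth, guess)), "of", n)
```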
10

Algorithm Design and Localization Analysis in Sequential and Statistical Learning

Xu, Yunbei January 2023 (has links)
Learning theory is a dynamic and rapidly evolving field that aims to provide mathematical foundations for designing and understanding the behavior of algorithms and procedures that can learn from data automatically. At the heart of this field lies the interplay between algorithm design and statistical complexity analysis, with sharp statistical complexity characterizations often requiring localization analysis. This dissertation aims to advance the fields of machine learning and decision making by contributing to two key directions: principled algorithm design and localized statistical complexity. Our research develops novel algorithmic techniques and analytical frameworks to build more effective and robust learning systems. Specifically, we focus on studying uniform convergence and localization in statistical learning theory, developing efficient algorithms using the optimism principle for contextual bandits, and creating Bayesian design principles for bandit and reinforcement learning problems.
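As a concrete instance of the optimism principle mentioned above, here is a minimal sketch of UCB1 for a finite-armed bandit, where each arm's index is its empirical mean plus a confidence bonus. The Bernoulli reward means are illustrative; the contextual and Bayesian variants studied in the dissertation build on the same idea.

```python
# A minimal sketch of UCB1, an optimism-in-the-face-of-uncertainty
# algorithm for a finite-armed bandit. Reward means are illustrative.
import math, random

rng = random.Random(0)
means = [0.3, 0.5, 0.7]            # unknown Bernoulli means (simulation only)
counts, sums = [0] * 3, [0.0] * 3

for t in range(1, 5001):
    if t <= len(means):            # play each arm once to initialize
        a = t - 1
    else:                          # empirical mean plus confidence bonus
        a = max(range(3), key=lambda i: sums[i] / counts[i]
                + math.sqrt(2 * math.log(t) / counts[i]))
    r = 1.0 if rng.random() < means[a] else 0.0
    counts[a] += 1
    sums[a] += r

print("pulls per arm:", counts)    # most pulls should go to arm 2
```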
