1091 |
Extending Event Sequence Processing: New Models and Optimization Techniques / Liu, Mo 25 April 2012 (has links)
Many modern applications, including online financial feeds, tag-based mass transit systems and RFID-based supply chain management systems, transmit real-time data streams. There is a need for event stream processing technology to analyze this vast amount of sequential data to enable online operational decision making. This dissertation focuses on innovating several techniques at the core of a scalable E-Analytic system to achieve efficient, scalable and robust methods for in-memory multi-dimensional nested pattern analysis over high-speed event streams. First, I address the problem of processing flat pattern queries on event streams with out-of-order data arrival. I design two alternative solutions: an aggressive and a conservative strategy. The aggressive strategy produces maximal output under the optimistic assumption that out-of-order event arrival is rare. The conservative method works under the assumption that out-of-order data may be common, and thus produces output only when its correctness can be guaranteed. Second, I design an integration of CEP and OLAP techniques (the E-Cube model) for efficient multi-dimensional event pattern analysis at different abstraction levels. Strategies of drill-down (refinement from abstract to specific patterns) and of roll-up (generalization from specific to abstract patterns) are developed for efficient workload evaluation. I design a cost-driven adaptive optimizer called Chase that exploits reuse strategies for optimal E-Cube hierarchy execution. Then, I explore novel optimization techniques to support the high-performance processing of powerful nested CEP patterns. A CEP query language called NEEL is designed to express nested CEP pattern queries composed of sequence, negation, AND and OR operators. To allow flexible execution ordering, I devise a normalization procedure that employs rewriting rules for flattening a nested complex event expression. To conserve CPU and memory consumption, I propose several strategies for efficient shared processing of groups of normalized NEEL subexpressions. Our comprehensive experimental studies, using both synthetic and real data streams, demonstrate the superiority of our proposed strategies over alternative methods in the literature in both effectiveness and efficiency.
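The conservative buffering idea can be sketched as follows. This is an illustrative reconstruction, not the dissertation's implementation: the function name, the simple SEQ(A, B) pattern, and the fixed slack K used as the progress guarantee are all assumptions.

```python
# Hedged sketch of a conservative out-of-order strategy: buffer events and
# release them for SEQ(A, B) matching only once a slack-based progress
# guarantee says no earlier event can still arrive.
import heapq

def conservative_seq_matches(stream, slack):
    """stream: iterable of (timestamp, event_type) pairs, possibly out of
    order; returns (a_ts, b_ts) matches for SEQ(A, B), emitted only when
    their correctness is guaranteed under the slack assumption."""
    buffer = []                 # min-heap ordered by timestamp
    max_seen = float("-inf")
    pending_a = []              # A-event timestamps released from the buffer
    out = []

    def release(rts, rtype):
        if rtype == "A":
            pending_a.append(rts)
        elif rtype == "B":
            out.extend((a, rts) for a in pending_a)

    for ts, etype in stream:
        heapq.heappush(buffer, (ts, etype))
        max_seen = max(max_seen, ts)
        # Release events that can no longer be preceded by a late arrival.
        while buffer and buffer[0][0] <= max_seen - slack:
            release(*heapq.heappop(buffer))
    while buffer:               # end of stream: drain in timestamp order
        release(*heapq.heappop(buffer))
    return out
```

With slack 2, a late-arriving A at timestamp 3 is still paired correctly, because the B at timestamp 5 is held back until it is safe to emit.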
|
1092 |
Computational modelling and optimization of dry powder inhalers / Kopsch, Thomas January 2018 (has links)
Dry powder inhalers (DPIs) are a common therapeutic modality for lung diseases such as asthma, but they are also used to treat systemic diseases such as diabetes. Advantages of DPIs include their portable design and low manufacturing costs. Another advantage is their breath activation, which makes them popular among patients: in a passive DPI, drug is released only when the patient inhales, as the resulting airflow through the device entrains the dry powder formulation and carries it to the lung. Currently, no DPI can deliver drug to the desired target site in the lung independently of the patient, because drug release depends on the patient's inhalation manoeuvre. To maximize the effect of the treatment it is necessary to optimize DPIs to achieve drug delivery that (A) is independent of the inhalation manoeuvre and (B) is targeted to the correct site in the lung. Therefore, this thesis aims to apply numerical and experimental methods to optimize DPIs systematically. First, two clinically justifiable cost functions have been developed corresponding to the DPI design objectives (A) and (B). An Eulerian-Eulerian (EE) computational fluid dynamics (CFD) approach has then been used to optimize a DPI entrainment geometry. Three different optimized entrainment geometries have been found, corresponding to three different therapeutic applications. Second, the CFD approach has been validated experimentally; this is the first experimental study to validate an EE CFD approach for DPI modelling. Third, a personalized medicine approach to DPI design has been proposed. The development of this approach makes it possible to achieve the design objectives for patients with highly different lung functions. Finally, an adaptive DPI with a variable bypass element has been developed. This DPI achieves design objectives (A) and (B) for patients with highly different lung functions with a single device.
In contrast to the personalized medicine approach, there is no need to select the optimal amount of bypass, since the device adapts automatically.
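The two design objectives can be read as a single combined cost function. The sketch below is a hedged illustration only: the dose model, weights, and function names are assumptions, not the thesis's CFD-based quantities.

```python
# Illustrative cost function combining objective (A), manoeuvre independence,
# with objective (B), targeting. The weights and the scalar "delivered dose"
# summary are assumptions for the sketch.
def dpi_cost(dose_by_manoeuvre, target_fraction, w_independence=1.0, w_target=1.0):
    """dose_by_manoeuvre: delivered dose fraction for each simulated
    inhalation manoeuvre. Objective (A) penalises variation across
    manoeuvres; objective (B) penalises deviation from the target."""
    n = len(dose_by_manoeuvre)
    mean = sum(dose_by_manoeuvre) / n
    variance = sum((d - mean) ** 2 for d in dose_by_manoeuvre) / n
    targeting_error = (mean - target_fraction) ** 2
    return w_independence * variance + w_target * targeting_error
```

A design that delivers the target dose identically across all manoeuvres has zero cost; any spread across manoeuvres or bias away from the target increases it.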
|
1093 |
A multidisciplinary design optimisation framework for structural problems with disparate variable dependence / Ollar, Jonathan January 2017 (has links)
Multidisciplinary design optimisation incorporates several disciplines in one integrated optimisation problem. The benefit of considering all requirements at once rather than in individual optimisations is that synergies between disciplines can be exploited to find designs superior to what would otherwise be possible. The main obstacle to the use of multidisciplinary design optimisation in an industrial setting is the related computational cost, which may become prohibitively large. This work is focused on the development of a multidisciplinary design optimisation framework that extends the existing trust-region based optimisation method known as the mid-range approximation method. The main novel contribution is an approach to solving multidisciplinary design optimisation problems using metamodels built in sub-spaces of the design variable space. Each metamodel is built in the sub-space relevant to the corresponding discipline, while the optimisation problem is solved in the full design variable space. Since the metamodels are built in a space of reduced dimensionality, the computational budget for building them can be reduced without compromising their quality. Furthermore, a method for efficiently building kriging metamodels is proposed. This is done by means of a two-step hyperparameter tuning strategy. The first step is a line search where the set of tuning parameters is treated as a single variable. The solution of the first step is used in the second step, a gradient-based hyperparameter optimisation where partial derivatives are obtained using the adjoint method. The framework is demonstrated on two examples: a multidisciplinary design optimisation of a thin-walled beam section subject to static and impact requirements, and a multidisciplinary design optimisation of an aircraft wing subject to static and bird strike requirements. In both cases the developed technique demonstrates a reduced computational effort compared to what would typically be achieved by existing methods.
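The two-step tuning strategy can be sketched as follows. This is a hedged illustration: the zero-mean, Gaussian-correlation kriging model is a simplification, finite differences stand in for the adjoint derivatives used in the thesis, and all names and constants are assumptions.

```python
# Step 1: line search over a single shared length-scale for all dimensions.
# Step 2: per-dimension refinement, accepting only steps that improve the
# concentrated log-likelihood (finite differences replace the adjoint here).
import numpy as np

def neg_log_likelihood(theta, X, y):
    """Concentrated negative log-likelihood of a zero-mean kriging model
    with Gaussian correlation and per-dimension length-scales theta."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2 * theta).sum(-1)
    R = np.exp(-d2) + 1e-10 * np.eye(n)          # jitter for stability
    L = np.linalg.cholesky(R)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    sigma2 = y @ alpha / n                        # concentrated variance
    return n * np.log(sigma2) + 2 * np.log(np.diag(L)).sum()

def tune_kriging(X, y, grid=None, steps=50, lr=0.05):
    k = X.shape[1]
    grid = np.logspace(-2, 2, 25) if grid is None else grid
    # Step 1: all length-scales treated as one variable.
    iso = min(grid, key=lambda t: neg_log_likelihood(np.full(k, t), X, y))
    theta = np.full(k, iso)
    # Step 2: gradient refinement with backtracking acceptance.
    for _ in range(steps):
        f0 = neg_log_likelihood(theta, X, y)
        g = np.empty(k)
        for i in range(k):
            tp = theta.copy()
            tp[i] *= 1.01
            g[i] = (neg_log_likelihood(tp, X, y) - f0) / (0.01 * theta[i])
        cand = np.maximum(theta - lr * g, 1e-6)
        if neg_log_likelihood(cand, X, y) < f0:
            theta = cand
        else:
            lr *= 0.5                             # shrink step on failure
    return theta
```

Because the anisotropic search starts from the best isotropic point and only accepts improving steps, the tuned likelihood is never worse than the line-search result.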
|
1094 |
Lagrangian methods for ballistic impact simulations / Tupek, Michael Ronne January 2010 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and, (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / Includes bibliographical references (p. 85-92). / This thesis explores various Lagrangian methods for simulating ballistic impact, with the ultimate goal of finding a universal, robust and scalable computational framework to assist in the design of armor systems. An overview is provided of existing Lagrangian strategies, including particle methods, meshless methods, and the peridynamic approach. We review the continuum formulation of mechanics and its discretization using finite elements. A rigid body contact algorithm for explicit dynamic finite elements is presented and used to model a rigid sphere impacting a confined alumina tile, with the constitutive model for the alumina provided by the Deshpande-Evans ceramic damage model. These simulations were shown to capture experimentally observed radial crack patterns. An adaptive remeshing strategy using finite elements is then explored and applied, with limited success, to the problem of predicting the transition from dwell to penetration for long-rod penetrators impacting confined ceramic targets at high velocities. Motivated by the difficulties of mesh-based Lagrangian approaches for modeling impact, an alternative Lagrangian approach is investigated which uses established constitutive relations within a particle-based computational framework. The resulting algorithm is based on a discretization of the peridynamic formulation of continuum mechanics. A validating benchmark example using a Taylor impact test is shown and compared to previous results from the literature. Further numerical examples involving ballistic impact and the crushing of an aluminum sandwich structure further demonstrate the method's potential for armor applications. / by Michael Ronne Tupek. / S.M.
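The particle-based peridynamic idea can be illustrated with a minimal 1D bond-based sketch; the constants, names, and uniform particle volume are illustrative assumptions, not the thesis's discretization.

```python
# Minimal 1D bond-based peridynamics sketch: each particle interacts with
# neighbours within a horizon, with pairwise force proportional to the
# stretch of the deformed bond relative to its reference length.
def peridynamic_forces(x_ref, u, horizon, c, volume):
    """x_ref: reference positions; u: displacements; c: bond stiffness;
    returns the net bond force on each particle."""
    n = len(x_ref)
    forces = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            xi = x_ref[j] - x_ref[i]                      # reference bond
            if abs(xi) > horizon:
                continue                                  # outside horizon
            eta = (x_ref[j] + u[j]) - (x_ref[i] + u[i])   # deformed bond
            stretch = (abs(eta) - abs(xi)) / abs(xi)
            direction = 1.0 if eta > 0 else -1.0
            forces[i] += c * stretch * direction * volume
    return forces
```

Under a uniform 10% stretch the end particles feel equal and opposite restoring forces while the interior particle is in equilibrium, as expected.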
|
1095 |
Approximation Algorithms for Demand-Response Contract Execution and Coflow Scheduling / Qiu, Zhen January 2016 (has links)
Solving operations research problems with approximation algorithms has been an important topic, since approximation algorithms can provide near-optimal solutions to NP-hard problems while remaining computationally efficient. In this thesis, we consider two problems, from the fields of optimal control and scheduling theory respectively, and develop efficient approximation algorithms for them with performance guarantees.
Chapter 2 presents approximation algorithms for solving the optimal execution problem for demand-response contracts in electricity markets. Demand-side participation is essential for achieving real-time energy balance in today's electricity grid. Demand-response contracts, where an electric utility company buys options from consumers to reduce their load in the future, are an important tool to increase demand-side participation. In this chapter, we consider the operational problem of optimally exercising the available contracts over the planning horizon such that the total cost to satisfy the demand is minimized. In particular, we consider the objective of minimizing the sum of the expected ℓ_β-norm of the load deviations from given thresholds and the contract execution costs over the planning horizon. For β=∞, this reduces to minimizing the expected peak load. The peak load provides a good proxy for the total cost of the utility, as spikes in electricity prices are observed only in peak-load periods. We present a data-driven near-optimal algorithm for the contract execution problem. Our algorithm is a sample average approximation (SAA) based dynamic program over a multi-period planning horizon. We provide a sample complexity bound on the number of demand samples required to compute a (1+ε)-approximate policy for any ε>0. Our SAA algorithm is quite general, and we show that it can be adapted to a wide range of demand models, including Markovian demands, and objective functions. For the special case where the demand in each period is i.i.d., we show that a static solution is optimal for the dynamic problem. We also conduct a numerical study to evaluate the performance of our SAA-based DP algorithm. Our numerical experiments show that we can achieve a (1+ε)-approximation with significantly fewer samples than the theoretical bounds imply. Moreover, the structure of the approximate policy shows that it can be well approximated by a simple affine function of the state.
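A toy version of the SAA idea can be sketched as follows; it is heavily simplified (brute force over a short horizon, a single contract type, and the β=∞ peak-load objective), and all names and parameters are assumptions rather than the chapter's actual formulation.

```python
# SAA sketch: average the peak load over demand samples for each candidate
# set of contract-exercise periods and pick the cheapest, including the
# execution cost. Exhaustive search keeps the sketch simple.
from itertools import combinations

def saa_contract_execution(demand_samples, reduction, n_contracts, exec_cost):
    """demand_samples: list of per-period demand vectors (the SAA samples).
    Returns (best_periods, best_cost), minimizing sample-average peak load
    plus total execution cost."""
    T = len(demand_samples[0])
    best = (None, float("inf"))
    for periods in combinations(range(T), n_contracts):
        total = 0.0
        for sample in demand_samples:
            load = [d - (reduction if t in periods else 0.0)
                    for t, d in enumerate(sample)]
            total += max(load)               # peak load for this sample
        avg = total / len(demand_samples) + exec_cost * len(periods)
        if avg < best[1]:
            best = (periods, avg)
    return best
```

With two demand samples over three periods, exercising the single contract in period 0 gives the lowest sample-average peak.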
In Chapter 3, we study the NP-hard coflow scheduling problem and develop a polynomial-time approximation algorithm for the problem with a constant approximation ratio. Communications in datacenter jobs (such as the shuffle operations in MapReduce applications) often involve many parallel flows, which may be processed simultaneously. This highly parallel structure presents new scheduling challenges in optimizing job-level performance objectives in data centers. Chowdhury and Stoica [13] introduced the coflow abstraction to capture these communication patterns, and recently Chowdhury et al. [15] developed effective heuristics to schedule coflows. In this chapter, we consider the problem of efficiently scheduling coflows so as to minimize the total weighted completion time, which has been shown to be strongly NP-hard [15]. Our main result is the first polynomial-time deterministic approximation algorithm for this problem, with an approximation ratio of 64/3, and a randomized version of the algorithm, with a ratio of 8 + 16√2/3. Our results use techniques from both combinatorial scheduling and matching theory, and rely on a clever grouping of coflows.
In Chapter 4, we carry out a comprehensive experimental analysis on a Facebook trace and extensive simulated instances to evaluate the practical performance of several algorithms for coflow scheduling, including our approximation algorithms developed in Chapter 3. Our experiments suggest that simple algorithms provide effective approximations of the optimal, and that the performance of the approximation algorithm of Chapter 3 is relatively robust, near optimal, and always among the best compared with the other algorithms, in both the offline and online settings.
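For intuition, a simple baseline ordering rule (not the chapter's LP-based approximation algorithm) can be sketched: Smith's rule applied to each coflow's bottleneck load, with coflows served one at a time at the bottleneck. The data layout and names are assumptions for the sketch.

```python
# Baseline coflow ordering sketch: sort coflows by weight-to-bottleneck
# ratio (Smith's rule), then accumulate weighted completion times under
# sequential service, where each coflow's duration is its bottleneck load.
def schedule_coflows(coflows):
    """coflows: list of (weight, per_port_loads). Returns (order of
    original indices, total weighted completion time)."""
    indexed = list(enumerate(coflows))
    # Largest weight-to-bottleneck ratio first.
    indexed.sort(key=lambda item: -(item[1][0] / max(item[1][1])))
    t, total, order = 0.0, 0.0, []
    for idx, (weight, loads) in indexed:
        t += max(loads)            # bottleneck port sets the duration
        total += weight * t
        order.append(idx)
    return order, total
```

With coflows (weight 1, loads [4, 1]) and (weight 2, loads [2, 2]), the second coflow's higher ratio schedules it first, giving total weighted completion time 2·2 + 1·6 = 10.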
|
1096 |
Optimization in Strategic Environments / Feigenbaum, Itai Izhak January 2016 (has links)
This work considers the problem faced by a decision maker (planner) trying to optimize over incomplete data. The missing data is privately held by agents whose objectives are different from the planner's, and who can falsely report it in order to advance their objectives. The goal is to design optimization mechanisms (algorithms) that achieve "good" results when agents' reports follow a game-theoretic equilibrium. In the first part of this work, the goal is to design mechanisms that provide a small worst-case approximation ratio (guarantee a large fraction of the optimal value in all instances) at equilibrium. The emphasis is on strategyproof mechanisms, where truthfulness is a dominant strategy equilibrium, and on the approximation ratio at that equilibrium. Two problems are considered: variants of the knapsack and facility location problems. In the knapsack problem, items are privately owned by agents, who can hide items or report fake ones; each agent's utility equals the total value of their own items included in the knapsack, while the planner wishes to choose the items that maximize the sum of utilities. In the facility location problem, agents have private linear single-sinked/peaked preferences regarding the location of a facility on an interval, while the planner wishes to locate the facility in a way that maximizes one of several objectives. A variety of mechanisms and lower bounds are provided for these problems. The second part of this work explores the problem of reassigning students to schools. Students have privately known preferences over the schools. After an initial assignment is made, the students' preferences change, are reported again, and a reassignment must be obtained. The goal is to design a reassignment mechanism that incentivizes truthfulness, provides high student welfare, transfers relatively few students from their initial assignment, and respects student priorities at schools.
The class of mechanisms considered is permuted lottery deferred acceptance (PLDA) mechanisms, which is a natural class of mechanisms based on permuting the lottery numbers students initially draw to decide the initial assignment. Both theoretical and experimental evidence is provided to support the use of a PLDA mechanism called reversed lottery deferred acceptance (RLDA). The evidence suggests that under some conditions, all PLDA mechanisms generate roughly equal welfare, and that RLDA minimizes transfers among PLDA mechanisms.
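The PLDA idea can be sketched with a minimal deferred-acceptance routine in which school priorities are given entirely by lottery numbers; RLDA then reuses the initial draw in reverse. The function names and the lottery-only priority simplification are assumptions, not the thesis's full model.

```python
# Student-proposing deferred acceptance with lottery tie-breaking, plus a
# reversed-lottery wrapper (RLDA): the initial draw is inverted rather than
# redrawn, as in the permuted-lottery family of mechanisms.
def deferred_acceptance(prefs, capacities, lottery):
    """prefs[s]: student s's ordered school list; capacities[c]: seats;
    lottery[s]: lower number means higher priority at every school."""
    next_choice = {s: 0 for s in prefs}
    held = {c: [] for c in capacities}
    free = [s for s in prefs]
    while free:
        s = free.pop()
        if next_choice[s] >= len(prefs[s]):
            continue                          # student exhausted their list
        c = prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda x: lottery[x])
        if len(held[c]) > capacities[c]:
            free.append(held[c].pop())        # displace lowest priority
    return {s: c for c, group in held.items() for s in group}

def rlda(prefs, capacities, initial_lottery):
    # Reversed lottery deferred acceptance: invert the initial draw.
    top = max(initial_lottery.values())
    reversed_lottery = {s: top - v for s, v in initial_lottery.items()}
    return deferred_acceptance(prefs, capacities, reversed_lottery)
```

In the tiny example below, reversing the lottery flips which student wins the contested seat at school X.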
|
1097 |
Nonconvex Recovery of Low-complexity Models / Qu, Qing January 2018 (has links)
Today we are living in the era of big data, and there is a pressing need for efficient, scalable and robust optimization methods to analyze the data we create and collect. Although convex methods offer tractable solutions with global optimality, heuristic nonconvex methods are often more attractive in practice due to their superior efficiency and scalability. Moreover, for better representations of the data, the mathematical models we build today are much more complicated, which often results in highly nonlinear and nonconvex optimization problems. Both of these challenges require us to go beyond convex optimization. While nonconvex optimization is extraordinarily successful in practice, unlike convex optimization, guaranteeing the correctness of nonconvex methods is notoriously difficult. In theory, even finding a local minimum of a general nonconvex function is NP-hard, never mind the global minimum.
This thesis aims to bridge the gap between the practice and theory of nonconvex optimization by developing global optimality guarantees for nonconvex problems arising in real-world engineering applications, along with provable, efficient nonconvex optimization algorithms. First, this thesis reveals that for certain nonconvex problems we can construct a model-specialized initialization that is close to the optimal solution, so that simple and efficient methods provably converge to the global solution at a linear rate. These problems include sparse basis learning and convolutional phase retrieval. In addition, the work has led to the discovery of a broader class of nonconvex problems, the so-called ridable saddle functions. These problems possess characteristic structures in which (i) all local minima are global, and (ii) the energy landscape does not have any "flat" saddle points. More interestingly, when data are large and random, this thesis reveals that many problems in the real world are indeed ridable saddle functions; these include complete dictionary learning and generalized phase retrieval. For each of the aforementioned problems, the benign geometric structure allows us to obtain global recovery guarantees by using efficient optimization methods with arbitrary initialization.
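A toy example (not one of the thesis's problems) of such a benign landscape: f(x) = (||x||^2 - 1)^2 has a strict, non-flat saddle at the origin and every local minimum (the unit circle) is global, so plain gradient descent from random initialization finds a global minimizer.

```python
# Gradient descent on f(x) = (||x||^2 - 1)^2 in R^2. The only stationary
# points are the saddle at the origin (with negative curvature, hence
# escapable) and the global minimizers on the unit circle.
import math
import random

def grad_descent(x, lr=0.05, steps=500):
    for _ in range(steps):
        r2 = sum(v * v for v in x)
        # gradient of (||x||^2 - 1)^2 is 4 * (||x||^2 - 1) * x
        x = [v - lr * 4.0 * (r2 - 1.0) * v for v in x]
    return x

random.seed(1)
x0 = [random.uniform(-2, 2), random.uniform(-2, 2)]   # arbitrary init
x_star = grad_descent(x0)
norm = math.sqrt(sum(v * v for v in x_star))          # should approach 1
```

Any random starting point (which avoids the measure-zero saddle) is driven to the circle of global minimizers, illustrating why arbitrary initialization suffices on ridable-saddle landscapes.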
|
1098 |
A differential geometry framework for multidisciplinary design optimization / Bakker, Craig Kent Reddick January 2015 (has links)
No description available.
|
1099 |
Dynamic optimization of classification systems for adaptive incremental learning / Kapp, Marcelo Nepomoceno 25 May 2016 (has links) (PDF)
Doctoral thesis, defended at the Université du Québec, Canada, 2010.
An incremental learning system updates itself in response to incoming data without reexamining all the old data. Classification systems capable of incrementally storing, filtering, and classifying data are economical in terms of both space and time, which makes them immensely useful for industrial, military, and commercial purposes; interest in designing them is therefore growing. The challenge with incremental learning, however, is that classification tasks can no longer be seen as unvarying, since they can actually change with the evolution of the data. These changes in turn cause dynamic changes in the classification system's parameters. If such variations are neglected, the overall performance of these systems will be compromised in the future. In this thesis, working towards a system capable of incrementally accommodating new data and dynamically tracking new optimum system parameters for self-adaptation, we first address the optimum selection of classifiers over time. We propose a framework which combines the power of Swarm Intelligence Theory and the conventional grid-search method to progressively identify potential solutions for gradually updated training datasets. The key is to treat the adjustment of classifier parameters as a dynamic optimization problem that depends on the data available. Specifically, it has been shown that, if the intention is to build efficient Support Vector Machine (SVM) classifiers from sources that provide data gradually and serially, then the best way to do this is to consider model selection as a dynamic process which can evolve and change over time. This means that a number of solutions are required, depending on the knowledge available about the problem and uncertainties in the data. We also investigate measures for evaluating and selecting classifier ensembles composed of SVM classifiers. The measures employed are based on two different theories (diversity and margin) commonly used to understand the success of ensembles. This study has given us valuable insights and helped us to establish confidence-based measures as a tool for the selection of classifier ensembles. The main contribution of this thesis is a dynamic optimization approach that performs incremental learning in an adaptive fashion by tracking, evolving, and combining optimum hypotheses over time. The approach incorporates several techniques, such as dynamic Particle Swarm Optimization, incremental Support Vector Machine classifiers, change detection, and dynamic ensemble selection based on classifier confidence levels. Experiments carried out on synthetic and real-world databases demonstrate that the proposed approach outperforms the classification methods often used in incremental learning scenarios.
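The swarm-based model selection idea can be sketched as follows; the quadratic objective is an illustrative stand-in for SVM validation error over two hyperparameters, and all names and parameter values are assumptions rather than the thesis's actual settings.

```python
# Particle swarm sketch for hyperparameter search: each particle tracks its
# personal best and is pulled toward the swarm's global best. The thesis
# re-runs such a search as a *dynamic* optimization as new data arrive.
import random

def objective(p):
    # Stand-in for validation error, with a single optimum at (1.0, 2.0).
    return (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2

def pso(objective, n=12, iters=100, w=0.6, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    parts = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(n)]
    vels = [[0.0, 0.0] for _ in range(n)]
    bests = [p[:] for p in parts]                  # personal bests
    for _ in range(iters):
        g = min(bests, key=objective)              # global best
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            vels[i] = [w * v + c1 * r1 * (b - x) + c2 * r2 * (gb - x)
                       for v, x, b, gb in zip(vels[i], parts[i], bests[i], g)]
            parts[i] = [x + v for x, v in zip(parts[i], vels[i])]
            if objective(parts[i]) < objective(bests[i]):
                bests[i] = parts[i][:]
    return min(bests, key=objective)

best = pso(objective)
```

Personal bests are only ever replaced by improvements, so the returned solution is at least as good as the best random starting point; in the dynamic setting the thesis describes, this loop would be restarted or perturbed whenever change detection fires.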
|
1100 |
Robust portfolio selection based on a multi-stage scenario tree / Shen, Ruijun January 2005 (has links)
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 72-74). Abstracts in English and Chinese. / Contents: Chapter 1, Introduction (the portfolio selection problem via the mean-variance and utility function approaches; conic programming and duality theory, including second-order cones; uncertainties and robust optimization; formulation of single-stage and two-stage utility models and their robust counterparts). Chapter 2, Single-stage robust selection (a specific model and the general model: assumptions, formulation, and solution). Chapter 3, Results on two-stage models (a specific and a general two-stage robust model, including the general model with ellipsoidal uncertainty sets). Chapter 4, Numerical results (scenario tree generation; numerical results for the problem (SRP2)). Chapter 5, Conclusion. Appendix A, Equation derivation.
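A small sketch of why ellipsoidal uncertainty leads to a second-order cone program: for an uncertainty set {mu_hat + S_half u : ||u|| <= kappa} on the mean return (with S_half symmetric), the worst-case return of a portfolio w has the closed form w'mu_hat - kappa ||S_half w||. The names and numbers below are illustrative, not the thesis's multi-stage model.

```python
# Worst-case portfolio return under an ellipsoidal uncertainty set for the
# mean: the inner minimization has the closed form below, whose norm term
# is exactly what yields a second-order cone constraint in the robust model.
import numpy as np

def worst_case_return(w, mu_hat, S_half, kappa):
    """w: portfolio weights; mu_hat: nominal mean returns; S_half: symmetric
    shape matrix of the ellipsoid; kappa: uncertainty radius."""
    w = np.asarray(w, dtype=float)
    return float(w @ mu_hat - kappa * np.linalg.norm(S_half @ w))
```

With a spherical uncertainty set, the diversified portfolio has a smaller norm term and hence a better worst-case return than the concentrated one, which is the basic robustness effect the thesis's models exploit.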
|