11

The finite-element contrast source inversion method for microwave imaging applications

Zakaria, Amer 27 March 2012
This dissertation describes research conducted on the development and improvement of microwave tomography algorithms for imaging the bulk electrical parameters of unknown objects. The full derivation is presented of a new inversion algorithm based on the state-of-the-art contrast source inversion (CSI) algorithm coupled to a finite-element method (FEM) discretization of the Helmholtz differential operator formulation for the scattered electromagnetic field. The algorithm is applied to two-dimensional (2D) scalar and vectorial configurations, as well as to three-dimensional (3D) full-vectorial problems. The unknown electrical properties of the object are distributed on the elements of arbitrary meshes with varying densities. Using FEM to represent the Helmholtz operator provides the flexibility of an inhomogeneous background medium, as well as the ability to accurately model boundaries of any shape or type, both conducting and absorbing. The CSI algorithm is used in conjunction with multiplicative regularization (MR), as is typical in most implementations of CSI. Because the present implementation uses arbitrary meshes, new techniques are introduced to evaluate the spatial gradient and divergence operators required by MR. The approach differs from other MR-CSI implementations, where the unknown variables are located on a uniform grid of rectangular cells and represented using pulse basis functions; with rectangular cells, finite-difference operators can be used, but this becomes unwieldy in FEM-CSI. Furthermore, an improvement to MR is proposed that accounts for the imbalance between the real and imaginary parts of the electrical properties of the unknown objects. The proposed method is not restricted to any particular formulation of contrast source inversion.
The functionality of the new inversion algorithm with the different enhancements is tested using a wide range of synthetic datasets, as well as experimental data collected by the University of Manitoba electromagnetic imaging group and research centers in Spain and France.
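The alternating structure of CSI — a descent step on the contrast sources against a normalized data error, followed by an update of the contrast against a normalized state (domain) error — can be illustrated with a heavily simplified sketch. Everything below is an invented stand-in: the operators are random matrices rather than the thesis's FEM discretization, the fields are real-valued, and the state equation is a toy one. Only the shape of the iteration is meaningful.

```python
import numpy as np

# Toy stand-ins: in the thesis these come from a FEM discretization of the
# Helmholtz operator; here they are random matrices for illustration only.
rng = np.random.default_rng(0)
n, m = 20, 8                        # mesh unknowns, receiver measurements
G_d = rng.standard_normal((m, n))   # maps contrast sources to receiver data
u_inc = rng.standard_normal(n)      # incident field on the mesh
d_meas = rng.standard_normal(m)     # "measured" scattered-field data

w = np.zeros(n)                     # contrast sources
chi = np.zeros(n)                   # contrast (electrical properties)

def csi_cost(w, chi):
    # Normalized data error plus normalized state (domain) error,
    # mirroring the two-term CSI functional.
    data_err = d_meas - G_d @ w
    state_err = chi * u_inc - w     # toy state equation: total field = u_inc
    return (data_err @ data_err) / (d_meas @ d_meas) \
         + (state_err @ state_err) / (u_inc @ u_inc)

for _ in range(200):
    data_err = d_meas - G_d @ w
    state_err = chi * u_inc - w
    # Descent step on the contrast sources w
    # (constant factors absorbed into the step size).
    grad_w = -G_d.T @ data_err / (d_meas @ d_meas) \
             - state_err / (u_inc @ u_inc)
    w -= 0.05 * grad_w
    # Closed-form least-squares update of the contrast given w.
    chi = w / u_inc
```

In the real algorithm the updates are conjugate-gradient steps and the MR term is folded in multiplicatively; this sketch only shows the two-error alternation that the cost drives down.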
12

A design algorithm for continuous melt-phase polyester manufacturing processes: optimal design, product sensitivity, and process flexibility

Calmeyn, Timothy Joseph. January 1998
Thesis (Ph. D.)--Ohio University, March, 1998. / Title from PDF t.p.
13

Optimizing algorithms for shortest path analysis

Hojnacki, Susan M. January 1991
Thesis (M.S.)--Rochester Institute of Technology, 1991. / Typescript. Includes bibliographical references.
14

A linear constraint optimization for the displacement operator in map generalization

Chen, Ji, January 1900
Thesis (M. Sc.)--Carleton University, 2003. / Includes bibliographical references (p. 91-96). Also available in electronic format on the Internet.
15

Optimization Frameworks for Graph Clustering

Luke N Veldt 15 May 2019
In graph theory and network analysis, communities or clusters are sets of nodes in a graph that share many internal connections with each other but are only sparsely connected to nodes outside the set. Graph clustering, the computational task of detecting these communities, has been studied extensively due to its widespread applications and its theoretical richness as a mathematical problem. This thesis presents novel optimization tools for addressing two major challenges associated with graph clustering.

The first major challenge is that there already exists a plethora of algorithms and objective functions for graph clustering. The relationship between different methods is often unclear, and it can be very difficult to determine in practice which approach is best for a specific application. To address this challenge, we introduce a generalized discrete optimization framework for graph clustering called LambdaCC, which relies on a single tunable parameter. The value of this parameter controls the balance between the internal density and external sparsity of the clusters formed by optimizing an underlying objective function. LambdaCC unifies the landscape of graph clustering techniques: a large number of previously developed approaches can be recovered as special cases for a fixed value of the LambdaCC input parameter.

The second major challenge of graph clustering is the computational intractability of detecting the best way to cluster a graph with respect to a given NP-hard objective function. To address this intractability, we present new optimization tools and results that apply to LambdaCC as well as a broader class of graph clustering problems. In particular, we develop polynomial-time approximation algorithms for LambdaCC and other more generalized clustering objectives. For example, we show how to obtain a polynomial-time 2-approximation for cluster deletion, which improves upon the previous best approximation factor of 3. We also present a new optimization framework for solving convex relaxations of NP-hard graph clustering problems, which are frequently used in the design of approximation algorithms. Finally, we develop a new framework for efficiently setting tunable parameters for graph clustering objective functions, so that practitioners can work with graph clustering techniques that are especially well suited to their application.
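As a concrete illustration of the single-parameter trade-off described above, one common way to write a LambdaCC-style objective for an unweighted graph is: each within-cluster non-edge pays λ (penalizing sparse interiors) and each between-cluster edge pays 1 − λ (penalizing cut edges), so λ tunes internal density against external sparsity. The exact weighting in the thesis may differ; this is a hedged sketch, and the example graph is invented.

```python
from itertools import combinations

def lambda_cc_cost(nodes, edges, clusters, lam):
    """Correlation-clustering-style cost: within-cluster non-edges pay
    lam, between-cluster edges pay (1 - lam)."""
    edge_set = {frozenset(e) for e in edges}
    label = {v: i for i, c in enumerate(clusters) for v in c}
    cost = 0.0
    for u, v in combinations(nodes, 2):
        same = label[u] == label[v]
        is_edge = frozenset((u, v)) in edge_set
        if same and not is_edge:
            cost += lam          # missing internal edge
        elif not same and is_edge:
            cost += 1 - lam      # cut edge
    return cost

# Two triangles joined by one bridge edge (2-3).
nodes = [0, 1, 2, 3, 4, 5]
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]

# Splitting into the two triangles cuts one edge: cost (1 - lam).
cost_split = lambda_cc_cost(nodes, edges, [[0, 1, 2], [3, 4, 5]], 0.4)
# Merging everything pays lam for each of the 8 missing internal edges.
cost_merged = lambda_cc_cost(nodes, edges, [nodes], 0.4)
```

Sweeping λ toward 0 favors the all-in-one clustering; sweeping it toward 1 favors splitting, which is the tunable behavior the framework exposes.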
16

Study on efficient sparse and low-rank optimization and its applications

Lou, Jian 29 August 2018
Sparse and low-rank models have become fundamental machine learning tools, with wide applications in areas including computer vision, data mining, and bioinformatics. It is of vital importance, yet of great difficulty, to develop efficient optimization algorithms for solving these models, especially under practical computational, communication, and privacy restrictions for ever-larger-scale problems. This thesis proposes a set of new algorithms to improve the efficiency of sparse and low-rank model optimization. First, when training an empirical risk minimization (ERM) model with structured sparse regularization on a large number of data samples, the gradient computation can be expensive and becomes the bottleneck. I therefore propose two gradient-efficient optimization algorithms that reduce the total or per-iteration cost of the gradient evaluation step; they are new variants of the widely used generalized conditional gradient (GCG) method and the incremental proximal gradient (PG) method, respectively. In detail, I propose a novel algorithm under the GCG framework that matches the optimal gradient-evaluation count of proximal gradient methods. I also propose a refined variant for a class of gauge-regularized problems, where approximation techniques are allowed to further accelerate the linear subproblem computation. Moreover, under the incremental proximal gradient framework, I propose to approximate the composite penalty by its proximal average, trading precision for efficiency. Theoretical analysis and empirical studies show the efficiency of the proposed methods.

Furthermore, large data dimension (e.g., the large frame size of high-resolution image and video data) can lead to high per-iteration computational complexity and thus poor scalability in practice. In particular, in spectral k-support norm regularized robust low-rank matrix and tensor optimization, the traditional proximal-map-based alternating direction method of multipliers (ADMM) must solve a subproblem of super-linear complexity in each iteration. I propose a set of per-iteration-efficient alternatives that reduce this cost to linear and nearly linear in the input data dimension for the matrix and tensor cases, respectively. The proposed algorithms work with the dual of the original problem, which can exploit the more efficient linear oracle of the spectral k-support norm. Further, by studying the subgradient of the loss in the dual objective, a line-search strategy is adopted that lets the algorithm adapt to Hölder smoothness. The overall convergence rate is also provided. Experiments on various computer vision and image processing applications demonstrate the superior prediction performance and computational efficiency of the proposed algorithms.

In addition, since machine learning datasets often contain sensitive individual information, privacy preservation is increasingly important in sparse optimization. I provide two differentially private optimization algorithms for two common large-scale machine learning settings, namely distributed and streaming optimization. For the distributed setting, I develop a new algorithm with (1) a guaranteed strict differential privacy requirement, (2) nearly optimal utility, and (3) reduced uplink communication complexity, for a nearly unexplored context in which features are partitioned among different parties under privacy restrictions. For the streaming setting, I propose to improve the utility of the private algorithm by trading off the privacy of distant input instances, under the differential privacy restriction. I show that the proposed method can solve the private approximation function either by a projected gradient update for projection-friendly constraints, or by a conditional gradient step for linear-oracle-friendly constraints, both of which improve the regret bound to match the non-private optimal counterpart.
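As a minimal, self-contained illustration of the proximal gradient machinery several of these methods build on, consider the simplest structured-sparse ERM instance, the lasso: min_x 0.5·||Ax − b||² + μ·||x||₁. The proximal map of the ℓ1 penalty is soft-thresholding; the composite penalties in the thesis are handled via proximal averages and other devices, but the basic PG step has this shape. The data and sizes below are invented for the sketch.

```python
import numpy as np

# Synthetic sparse recovery problem (invented for illustration).
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[1, 4]] = [2.0, -3.0]          # two-sparse ground truth
b = A @ x_true                        # noiseless measurements
mu = 0.1                              # l1 regularization weight
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the smooth part

def soft_threshold(z, t):
    # Proximal map of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros(10)
for _ in range(500):
    grad = A.T @ (A @ x - b)          # gradient of the smooth loss
    x = soft_threshold(x - step * grad, step * mu)

# x now recovers the sparse support of x_true, up to small shrinkage bias.
```

The gradient-efficient variants discussed above change how `grad` is obtained (conditional-gradient oracles, incremental sums) and how composite penalties replace the single soft-threshold, but the iterate structure stays the same.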
17

Integrating surrogate modeling to improve DIRECT, DE and BA global optimization algorithms for computationally intensive problems

Saad, Abdulbaset Elha 02 May 2018
Rapid advances in computer modeling and simulation tools and in computing hardware have made Model Based Design (MBD) a more viable technology. However, using a computationally intensive, black-box MBD software tool to carry out design optimization poses a number of key challenges. The non-unimodal objective functions and/or non-convex feasible search regions of the implicit numerical simulations in these optimization problems are beyond the capability of conventional optimization algorithms. In addition, the computationally intensive simulations used to evaluate the objective and/or constraint functions during the MBD process make conventional stochastic global optimization algorithms impractical, because they require a huge number of objective and constraint function evaluations. Surrogate-model, or metamodeling-based, global optimization techniques have been introduced to address these issues. Various surrogate models, including kriging, radial basis functions (RBF), multivariate adaptive regression splines (MARS), and polynomial regression (PR), are built using limited samplings of the original objective/constraint functions to reduce the computation needed in the search for the global optimum. When advanced global optimization algorithms such as DIRECT search, Differential Evolution (DE), and the Bat Algorithm (BA) are applied to problems whose objectives and/or constraints are computationally expensive numerical simulations, enormous numbers of fitness function evaluations are required during the evolution-based search process. In this work, improvements are made to three widely used global optimization algorithms, Divided Rectangles (DIRECT), Differential Evolution (DE), and the Bat Algorithm (BA), by integrating appropriate surrogate modeling methods to increase their computational efficiency in support of MBD.

The superior performance of these new algorithms in comparison with their original counterparts is demonstrated on commonly used optimization benchmark problems. Integrating the surrogate modeling methods has considerably improved the search efficiency of the DIRECT, DE, and BA algorithms, with significant reductions in the number of function evaluations (NFEs). The newly introduced algorithms are then applied to a complex engineering design optimization problem, the design optimization of a floating wind turbine platform, to test their effectiveness in real-world applications. These improved algorithms were able to identify better design solutions using considerably fewer NFEs on the computationally expensive performance simulation model of the design. The methods of integrating surrogate modeling into DIRECT, DE, and BA global optimization searches, and the resulting algorithms, proved effective for solving complex and computationally intensive global optimization problems, and form a foundation for future research in this area.
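The core surrogate-modeling idea can be shown in a standalone sketch: fit an RBF interpolant to a handful of expensive function evaluations, then search the cheap surrogate instead of the true function. The thesis integrates such surrogates inside DIRECT, DE, and BA; this toy version uses a plain Gaussian RBF, an invented one-dimensional test function, an assumed shape parameter, and a dense grid search standing in for the optimizer.

```python
import numpy as np

def expensive_f(x):
    # Stand-in for a costly black-box simulation (invented test function).
    return (x - 0.3) ** 2 + 0.1 * np.sin(15 * x)

X = np.linspace(0.0, 1.0, 8)       # only 8 "expensive" evaluations
y = expensive_f(X)

eps = 5.0                           # Gaussian RBF shape parameter (assumed)
# Interpolation matrix Phi[i, j] = exp(-(eps * |x_i - x_j|)^2).
Phi = np.exp(-(eps * (X[:, None] - X[None, :])) ** 2)
coef = np.linalg.solve(Phi, y)      # interpolation weights

def surrogate(x):
    # Cheap-to-evaluate RBF model of expensive_f.
    return np.exp(-(eps * (x[:, None] - X[None, :])) ** 2) @ coef

# A dense search on the surrogate costs almost nothing; in the thesis the
# surrogate instead guides where DIRECT/DE/BA spend true evaluations.
grid = np.linspace(0.0, 1.0, 2001)
x_best = grid[np.argmin(surrogate(grid))]
```

Only the 8 samples touch the expensive function; the 2001-point search runs entirely on the surrogate, which is where the NFE savings come from.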
18

Using Social Networks for Modeling and Optimization in a Healthcare Setting

Curtis, Donald Ephraim 01 July 2011
Social networks encode important information about the relationships between individuals. The structure of social networks has important implications for how ideas, information, and even diseases spread within a population. Data on online social networks are becoming increasingly available, but fine-grained data from which physical proximity networks can be inferred remain largely elusive. We address this problem by using nearly 20 million anonymized login records from the University of Iowa Hospitals and Clinics to construct healthcare worker (HCW) contact networks. These networks serve as proxies for potentially disease-spreading contact patterns among HCWs. We show that these networks exhibit properties similar to social networks arising in other contexts (e.g., scientific collaboration and friendship), such as the "Six Degrees of Kevin Bacon" (i.e., small-world) phenomenon. To develop a theoretical framework for analyzing these HCW contact networks, we consider a number of random graph models and show that models which capture only local structure may not adequately model disease spread. We then consider the best known approximation algorithms for a number of optimization problems that model the task of choosing an optimal set of HCWs to vaccinate in order to minimize the spread of disease. Our results show that, in general, the quality of solutions produced by these approximations is highly dependent on the dynamics of disease spread. However, experiments show that simple policies, such as vaccinating the most well-connected or most mobile individuals, perform much better than a random vaccination policy. Finally, we consider the problem of finding a set of individuals to act as indicators, for infectious disease experts, of important healthcare-related events on a social network. We model this problem as a generalization of the budgeted maximum coverage problem studied previously and show that our problem is in fact much harder to solve in general. By exploiting a property of this network, however, we show that a simple greedy approach for picking indicators provides a near-optimal (constant-factor) approximation.
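The greedy indicator-selection idea can be sketched on the classic unit-cost special case of maximum coverage: repeatedly pick the set that covers the most still-uncovered elements, which for unit costs is the textbook (1 − 1/e)-approximation. This is a generic sketch on an invented instance, not the thesis's budgeted algorithm.

```python
def greedy_max_coverage(sets, k):
    """Pick up to k sets, each time taking the one covering the most
    uncovered elements (classic greedy for unit-cost max coverage)."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not (sets[best] - covered):
            break                      # nothing new can be covered
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Toy instance: elements might be "events", sets the events each
# candidate indicator would surface.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {6, 7}]
chosen, covered = greedy_max_coverage(sets, 2)
print(chosen, sorted(covered))   # [2, 0] [1, 2, 3, 4, 5, 6, 7]
```

The budgeted variant in the thesis additionally weighs each set's gain against its cost; the special network property mentioned above is what restores a constant-factor guarantee for the harder generalization.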
19

Minimum cost polygon overlay with rectangular shape stock panels

Siringoringo, Wilson S Unknown Date
Minimum Cost Polygon Overlay (MCPO) is a unique two-dimensional optimization problem that involves covering a polygon-shaped area with a series of rectangular panels. The challenges in solving MCPO problems stem from the interdependencies among the parameters and the constraints that may be applied to the solution.

This thesis examines the MCPO problem in order to construct a model that captures the essential parameters to be solved using optimization algorithms. The purpose of the model is to make it possible to generate a solution to an MCPO problem automatically. A software application has been developed to provide a framework for validating the model.

The development of the software has uncovered a host of geometric operations that are required before optimization can take place. Many of these operations are non-trivial, demanding novel, well-constructed algorithms based on a careful appreciation of the nature of the problem.

For the actual optimization task, three algorithms have been implemented: a greedy search, a Monte Carlo method, and a genetic algorithm. The behavior of the completed software is observed through its application to a series of test data, and the results are presented to show the effectiveness of the software under various settings. This is followed by a critical analysis of the findings of the research.

Conclusions are drawn to summarize lessons learned from the research, and important issues for which no satisfactory explanation yet exists are identified as material for future research.
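The greedy strand of the covering task can be conveyed with a raster toy: discretize the polygon into unit cells and repeatedly place a fixed-size panel where it covers the most uncovered cells. This deliberately ignores the cost model, panel stock choices, and the geometric machinery the thesis actually needs; the region and panel size below are invented.

```python
def greedy_panel_cover(cells, panel_w, panel_h):
    """Greedily place panel_w x panel_h panels (anchored at region cells,
    overhang allowed) until every cell of the region is covered."""
    uncovered = set(cells)
    placements = []
    while uncovered:
        best, best_gain = None, 0
        for (x, y) in cells:  # candidate panel anchored at each cell
            gain = sum((x + dx, y + dy) in uncovered
                       for dx in range(panel_w) for dy in range(panel_h))
            if gain > best_gain:
                best, best_gain = (x, y), gain
        placements.append(best)
        uncovered -= {(best[0] + dx, best[1] + dy)
                      for dx in range(panel_w) for dy in range(panel_h)}
    return placements

# An L-shaped region covered with 2x2 panels.
region = [(x, y) for x in range(4) for y in range(2)] + \
         [(x, y) for x in range(2) for y in range(2, 4)]
panels = greedy_panel_cover(region, 2, 2)
```

The Monte Carlo and genetic-algorithm variants described above explore panel placements stochastically instead of taking the locally best one, trading this greedy's determinism for a chance to escape poor early placements.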
20

Solving the Traveling Salesman Problem by Ant Colony Optimization Algorithms with DNA Computing

Huang, Hung-Wei 29 July 2004
Previous research on DNA computing has shown that DNA algorithms are useful for solving some combinatorial problems, such as the Hamiltonian path problem and the traveling salesman problem. The basic concept implicit in previous DNA algorithms is the brute-force method: all possible solutions are created initially, inappropriate solutions are then eliminated, and the remaining solutions are the correct or best ones. However, correct solutions may be destroyed while this procedure is executed. To avoid such errors, we recommend combining the conventional concepts of DNA computing with a heuristic optimization method and applying the new approach to the design of strategies. In this thesis, we present a DNA algorithm based on ant colony optimization (ACO) for solving the traveling salesman problem (TSP). Our method initially manipulates DNA strands encoding candidate solutions. Even if correct solutions are destroyed during the filtering process, the remaining solutions can be recombined and correct solutions re-formed. After filtering out inappropriate solutions, we use control of the melting temperature to amplify the surviving DNA strands proportionally. The product is used as the input to the next iteration, and the process is repeated, so the concentration of correct solutions increases. Our results agree with those obtained by conventional ant colony optimization algorithms and are better than those obtained by genetic algorithms. The same idea can be applied to design methods for solving other combinatorial problems with DNA computing.
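For reference, the conventional in-silico ACO loop that the DNA procedure parallels can be sketched on a tiny TSP instance: ants build tours using pheromone and inverse-distance heuristics, then pheromone evaporates and is reinforced along short tours (much as amplification raises the concentration of good strands). The instance and parameter values below are illustrative assumptions, not taken from the thesis.

```python
import random

random.seed(42)
pts = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0.5)]   # toy 5-city instance
n = len(pts)
dist = [[((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 or 1e-9
         for b in pts] for a in pts]
tau = [[1.0] * n for _ in range(n)]                # pheromone levels
alpha, beta, rho = 1.0, 2.0, 0.5                   # assumed ACO parameters

def tour_length(t):
    return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

def build_tour():
    # One ant: choose each next city with probability proportional to
    # pheromone^alpha * (1/distance)^beta.
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        i = tour[-1]
        cand = list(unvisited)
        weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                   for j in cand]
        tour.append(random.choices(cand, weights)[0])
        unvisited.remove(tour[-1])
    return tour

best, best_len = None, float("inf")
for _ in range(50):                                # colony iterations
    tours = [build_tour() for _ in range(10)]      # 10 ants per iteration
    for t in tours:
        if tour_length(t) < best_len:
            best, best_len = t, tour_length(t)
    for row in tau:                                # evaporation
        for j in range(n):
            row[j] *= (1 - rho)
    for t in tours:                                # reinforcement
        for i in range(n):
            a, b = t[i], t[(i + 1) % n]
            tau[a][b] += 1.0 / tour_length(t)
            tau[b][a] += 1.0 / tour_length(t)
```

In the DNA version, tour construction, filtering, and melting-temperature amplification play the roles of the sampling, selection, and reinforcement steps here.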
