  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
451

Minimum Crossing Problems on Graphs

Roh, Patrick January 2007 (has links)
This thesis addresses several problems in discrete optimization. These problems are hard to solve exactly, but good approximation algorithms for them can be useful for related problems in computational biology and computer science. Given an undirected graph G=(V,E) and a family S of subsets of vertices, a minimum crossing spanning tree is a spanning tree in which the maximum number of edges crossing any single set in S is minimized, where an edge crosses a set if it has exactly one endpoint in the set. This thesis presents two algorithms for special cases of the minimum crossing spanning tree problem. The first algorithm handles the case where the sets of S are pairwise disjoint. It produces a spanning tree in which the maximum crossing of any set is at most 2OPT+2, where OPT is the maximum crossing of a minimum crossing spanning tree. The second algorithm handles the case where the sets of S form a laminar family. Let b_i be a bound for each S_i in S. If there exists a spanning tree in which each set S_i is crossed at most b_i times, the algorithm finds a spanning tree in which each set S_i is crossed O(b_i log n) times. From this algorithm, one obtains a spanning tree with maximum crossing O(OPT log n). Given an undirected graph G=(V,E) and a family S of subsets of vertices, a minimum crossing perfect matching is a perfect matching in which the maximum number of edges crossing any set in S is minimized. A proof is presented showing that finding a minimum crossing perfect matching is NP-hard, even when the graph is bipartite and the sets of S are pairwise disjoint.
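The crossing objective defined in this abstract is easy to state concretely. A minimal sketch of the quantity being minimized (graph representation and names are illustrative, not from the thesis):

```python
# Hedged sketch: the crossing number of a spanning tree as defined above.
# An edge (u, v) "crosses" a set S if exactly one endpoint lies in S; the
# objective is the maximum crossing over all sets in the family.

def max_crossing(tree_edges, family):
    """tree_edges: list of (u, v) pairs; family: list of vertex sets."""
    def crossings(S):
        return sum(1 for u, v in tree_edges if (u in S) != (v in S))
    return max(crossings(S) for S in family)

# Path tree 0-1-2-3 with disjoint sets {0, 1} and {2, 3}:
tree = [(0, 1), (1, 2), (2, 3)]
sets = [{0, 1}, {2, 3}]
print(max_crossing(tree, sets))  # 1: only edge (1, 2) crosses either set
```

A minimum crossing spanning tree would be a spanning tree minimizing this value over all spanning trees of the graph.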
452

Linear Programming Tools and Approximation Algorithms for Combinatorial Optimization

Pritchard, David January 2009 (has links)
We study techniques, approximation algorithms, structural properties and lower bounds related to applications of linear programs in combinatorial optimization. The following "Steiner tree problem" is central: given a graph with a distinguished subset of required vertices, and costs for each edge, find a minimum-cost subgraph that connects the required vertices. We also investigate the areas of network design, multicommodity flows, and packing/covering integer programs. All of these problems are NP-complete so it is natural to seek approximation algorithms with the best provable approximation ratio. Overall, we show some new techniques that enhance the already-substantial corpus of LP-based approximation methods, and we also look for limitations of these techniques. The first half of the thesis deals with linear programming relaxations for the Steiner tree problem. The crux of our work deals with hypergraphic relaxations obtained via the well-known full component decomposition of Steiner trees; explicitly, in this view the fundamental building blocks are not edges, but hyperedges containing two or more required vertices. We introduce a new hypergraphic LP based on partitions. We show the new LP has the same value as several previously-studied hypergraphic ones; when no Steiner nodes are adjacent, we show that the value of the well-known bidirected cut relaxation is also the same. A new partition uncrossing technique is used to demonstrate these equivalences, and to show that extreme points of the new LP are well-structured. We improve the best known integrality gap on these LPs in some special cases. We show that several approximation algorithms from the literature on Steiner trees can be re-interpreted through linear programs, in particular our hypergraphic relaxation yields a new view of the Robins-Zelikovsky 1.55-approximation algorithm for the Steiner tree problem. 
The second half of the thesis deals with a variety of fundamental problems in combinatorial optimization. We show how to apply the iterated LP relaxation framework to the problem of multicommodity integral flow in a tree, to get an approximation ratio that is asymptotically optimal in terms of the minimum capacity. Iterated relaxation gives an infeasible solution, so we need to finesse it back to feasibility without losing too much value. Iterated LP relaxation similarly gives an O(k^2)-approximation algorithm for packing integer programs with at most k occurrences of each variable; new LP rounding techniques give a k-approximation algorithm for covering integer programs with at most k variables per constraint. We study extreme points of the standard LP relaxation for the traveling salesperson problem and show that they can be much more complex than was previously known. The k-edge-connected spanning multi-subgraph problem has the same LP and we prove a lower bound and conjecture an upper bound on the approximability of variants of this problem. Finally, we show that for packing/covering integer programs with a bounded number of constraints, for any epsilon > 0, there is an LP with integrality gap at most 1 + epsilon.
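The generic idea behind k-approximations for covering programs via LP rounding can be illustrated with the classic threshold-rounding argument (this sketches the textbook technique, not the thesis's new rounding methods; the example is invented):

```python
# Hedged sketch of threshold rounding for 0/1 covering programs: if every
# constraint involves at most k variables, rounding up every coordinate with
# x_i >= 1/k keeps a fractional solution feasible, losing at most a factor
# k in cost, since some x_i in each constraint must be at least 1/k.

def threshold_round(x, k):
    return [1 if xi >= 1.0 / k else 0 for xi in x]

def feasible(constraints, x):
    # each constraint: a list of variable indices whose sum must be >= 1
    return all(sum(x[i] for i in idxs) >= 1 for idxs in constraints)

# Vertex cover on a triangle (k = 2 variables per constraint):
constraints = [[0, 1], [1, 2], [0, 2]]
x_frac = [0.5, 0.5, 0.5]            # optimal LP solution, cost 1.5
x_int = threshold_round(x_frac, 2)  # rounds to [1, 1, 1], cost 3 <= 2 * 1.5
print(feasible(constraints, x_int))  # True
```

The triangle also witnesses an integrality gap: the LP optimum is 1.5 while any integral cover costs at least 2.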
453

Convex relaxation for the planted clique, biclique, and clustering problems

Ames, Brendan January 2011 (has links)
A clique of a graph G is a set of pairwise adjacent nodes of G. Similarly, a biclique (U, V) of a bipartite graph G is a pair of disjoint, independent vertex sets such that each node in U is adjacent to every node in V in G. We consider the problems of identifying the maximum clique of a graph, known as the maximum clique problem, and identifying the biclique (U, V) of a bipartite graph that maximizes the product |U|·|V|, known as the maximum edge biclique problem. We show that finding a clique or biclique of a given size in a graph is equivalent to finding a rank one matrix satisfying a particular set of linear constraints. These problems can be formulated as rank minimization problems and relaxed to convex programming by replacing rank with its convex envelope, the nuclear norm. Both problems are NP-hard yet we show that our relaxation is exact in the case that the input graph contains a large clique or biclique plus additional nodes and edges. For each problem, we provide two analyses of when our relaxation is exact. In the first, the diversionary edges are added deterministically by an adversary. In the second, each potential edge is added to the graph independently at random with fixed probability p. In the random case, our bounds match the earlier bounds of Alon, Krivelevich, and Sudakov, as well as Feige and Krauthgamer for the maximum clique problem. We extend these results and techniques to the k-disjoint-clique problem. The maximum node k-disjoint-clique problem is to find a set of k disjoint cliques of a given input graph containing the maximum number of nodes. Given input graph G and nonnegative edge weights w, the maximum mean weight k-disjoint-clique problem seeks to identify the set of k disjoint cliques of G that maximizes the sum of the average weights of the edges, with respect to w, of the complete subgraphs of G induced by the cliques. These problems may be considered as a way to pose the clustering problem.
In clustering, one wants to partition a given data set so that the data items in each partition or cluster are similar and the items in different clusters are dissimilar. For the graph G such that the set of nodes represents a given data set and any two nodes are adjacent if and only if the corresponding items are similar, clustering the data into k disjoint clusters is equivalent to partitioning G into k-disjoint cliques. Similarly, given a complete graph with nodes corresponding to a given data set and edge weights indicating similarity between each pair of items, the data may be clustered by solving the maximum mean weight k-disjoint-clique problem. We show that both instances of the k-disjoint-clique problem can be formulated as rank constrained optimization problems and relaxed to semidefinite programs using the nuclear norm relaxation of rank. We also show that when the input instance corresponds to a collection of k disjoint planted cliques plus additional edges and nodes, this semidefinite relaxation is exact for both problems. We provide theoretical bounds that guarantee exactness of our relaxation and provide empirical examples of successful applications of our algorithm to synthetic data sets, as well as data sets from clustering applications.
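The rank-one observation underlying the relaxation above can be seen on a toy instance: the indicator vector x of a k-clique yields a rank-one matrix X = x xᵀ whose nonzero entries lie only on edges of the graph (plus the diagonal), and whose nuclear norm equals k. The toy graph and names here are illustrative, not from the thesis:

```python
# Hedged illustration of the rank-one / nuclear-norm connection.
import numpy as np

# 4-node graph whose nodes {0, 1, 2} form a triangle (a 3-clique).
edges = {(0, 1), (0, 2), (1, 2)}
x = np.array([1.0, 1.0, 1.0, 0.0])   # indicator vector of the clique
X = np.outer(x, x)                   # rank one; X[i, j] = 1 iff i and j are in the clique

assert np.linalg.matrix_rank(X) == 1
# The nuclear norm (sum of singular values) of x x^T is ||x||^2 = clique size.
nuclear_norm = np.linalg.svd(X, compute_uv=False).sum()
print(round(nuclear_norm, 6))  # 3.0
```

Minimizing the nuclear norm subject to the linear constraints then serves as the convex surrogate for the (NP-hard) rank minimization.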
454

Advances in Inverse Transport Methods and Applications to Neutron Tomography

Wu, Zeyun 2010 December 1900 (has links)
The purpose of the inverse-transport problems that we address is to reconstruct the material distribution inside an unknown object undergoing a nondestructive evaluation. We assume that the object is subjected to incident beams of photons or particles and that the exiting radiation is measured with detectors around the periphery of the object. In the present work we focus on problems in which radiation can undergo significant scattering within the optically thick object. We develop a set of reconstruction strategies to infer the material distribution inside such objects. When we apply these strategies to a set of neutron-tomography test problems we find that the results are substantially superior to those obtained by previous methods. We first demonstrate that traditional analytic methods such as filtered back projection (FBP) methods do not work for very thick, highly scattering problems. Then we explore deterministic optimization processes, using the nonlinear conjugate gradient iterative updating scheme to minimize an objective functional that characterizes the misfits between forward predicted measurements and actual detector readings. We find that while these methods provide more information than the analytic methods such as FBP, they do not provide sufficiently accurate solutions of problems in which the radiation undergoes significant scattering. We proceed to present some advances in inverse transport methods. Our strategies offer several advantages over previous reconstruction methods. First, our optimization procedure involves the systematic use of both deterministic and stochastic methods, using the strengths of each to mitigate the weaknesses of the other. Another key feature is that we treat the material (a discrete quantity) as the unknown, as opposed to individual cross sections (continuous variables). This changes the mathematical nature of the problem and greatly reduces the dimension of the search space. 
In our hierarchical approach we begin by learning some characteristics of the object from relatively inexpensive calculations, and then use knowledge from such calculations to guide more sophisticated calculations. A key feature of our strategy is dimension-reduction schemes that we have designed to take advantage of known and postulated constraints. We illustrate our approach using some neutron-tomography model problems that are several mean-free paths thick and contain highly scattering materials. In these problems we impose reasonable constraints, similar to those that in practice would come from prior information or engineering judgment. Our results, which identify exactly the correct materials and provide very accurate estimates of their locations and masses, are substantially better than those of deterministic minimization methods and dramatically more efficient than those of typical stochastic methods.
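The nonlinear conjugate gradient updating scheme mentioned above can be sketched in its simplest form; on a quadratic misfit with exact line search the Fletcher-Reeves recursion reduces to classical CG. The toy system and names are illustrative, not the thesis's transport objective:

```python
# Hedged sketch: conjugate-gradient minimization of a quadratic misfit
# f(x) = 0.5 x^T A x - b^T x with gradient g = A x - b (A symmetric
# positive definite). Uses exact line search and the Fletcher-Reeves beta.

def cg_quadratic(A, b, x, iters):
    def mv(M, v):  # matrix-vector product
        return [sum(a * vi for a, vi in zip(row, v)) for row in M]
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    g = [gi - bi for gi, bi in zip(mv(A, x), b)]
    d = [-gi for gi in g]
    for _ in range(iters):
        Ad = mv(A, d)
        alpha = -dot(g, d) / dot(d, Ad)           # exact line search
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [gi + alpha * adi for gi, adi in zip(g, Ad)]
        beta = dot(g_new, g_new) / dot(g, g)      # Fletcher-Reeves
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
print(cg_quadratic(A, b, [0.0, 0.0], 2))  # approximately [1/11, 7/11], solving A x = b
```

For an n-dimensional quadratic the method terminates in at most n iterations, which is why it is attractive as the inner deterministic step of a hybrid scheme.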
455

A Study of the Bioeconomic Analysis of Grey Mullet in Taiwan

Cheng, Man-chun 29 January 2007 (has links)
This study draws on biological and economic theory to construct three fishery mathematical models: an open-access model, a static optimization model, and a dynamic optimization model, and uses them to examine problems of fishery management. To obtain the equilibrium levels of resource stock and fishing effort, the data are analyzed mainly by comparative statics. Estimates of the exogenous variables for the grey mullet fishery are collected and analyzed, and the Mathematica program is used to calculate the equilibrium resource stock and effort and to perform sensitivity analyses with respect to changes in those estimates. The analysis shows that the equilibrium stock and effort of the three models are consistent with one another, while in terms of CPUE the economic effect of the open-access model is not apparent. Management of the grey mullet fishery should therefore be strengthened so that fishermen can obtain the highest economic benefits. Keywords: open access model, static optimization model, dynamic optimization model.
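The open-access and static-optimum equilibria this abstract contrasts have closed forms in the standard Gordon-Schaefer setting (logistic growth, catch h = qEx, price p, unit effort cost c). A minimal numerical sketch with invented parameter values, not the thesis's estimates:

```python
# Hedged sketch: open-access (zero-profit) vs. static-optimum (MEY)
# equilibria in a Gordon-Schaefer bioeconomic model. At equilibrium the
# stock satisfies x = K (1 - qE/r); open access dissipates rent (p q x = c),
# while MEY maximizes static profit p q E x - c E.

def equilibria(r, K, q, p, c):
    x_open = c / (p * q)                 # open access: price equals unit harvest cost
    E_open = (r / q) * (1 - x_open / K)
    x_mey = (K + c / (p * q)) / 2        # static optimum stock
    E_mey = (r / q) * (1 - x_mey / K)
    return (x_open, E_open), (x_mey, E_mey)

open_eq, mey_eq = equilibria(r=0.5, K=1000.0, q=0.001, p=10.0, c=2.0)
print(open_eq)  # approximately (200, 400): low stock, high effort
print(mey_eq)   # approximately (600, 200): larger stock, half the effort
```

The comparison makes the abstract's policy point concrete: unmanaged access drives the stock down and the effort up relative to the economically optimal fishery.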
456

A Study of the Parallel Hybrid Multilevel Genetic Algorithms for Geometrically Nonlinear Structural Optimization

Liang, Jun-Wei 21 June 2000 (has links)
The purpose of this study is to assess the suitability of the PHMGA (Parallel Hybrid Multilevel Genetic Algorithm), a fast and efficient method, for geometrically nonlinear structural optimization. Parallel genetic algorithms address the problems of traditional sequential genetic algorithms, such as premature convergence, the large number of function evaluations, and the difficulty of setting parameters. By using several concurrent sub-populations, parallel genetic algorithms avoid the premature convergence that results from the single genetic search environment of sequential genetic algorithms. Combining multilevel optimization with parallel genetic algorithms further speeds up the search: multilevel optimization decomposes one problem into several smaller subproblems that are independent and do not interfere with one another, the subsystems at each level are connected through sensitivity analysis, and the entire problem can then be solved. Because each subproblem contains fewer variables and constraints, the overall optimization converges faster. PHMGA integrates the advantages of both the parallel genetic algorithm and multilevel optimization. In this study, PHMGA is applied to several design optimization problems for geometrically nonlinear trusses on the parallel computer IBM SP2. PHMGA reduces execution time by integrating multilevel optimization with a parallel technique, and it speeds up the search in structural optimization problems for nonlinear trusses. It is hoped that this study demonstrates that PHMGA is an efficient and powerful tool for solving large geometrically nonlinear structural optimization problems.
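The multi-population idea that counters premature convergence can be sketched as a serial simulation of an island-model GA: several sub-populations evolve independently and periodically exchange their best members. Everything here (parameters, the toy objective) is invented for illustration; PHMGA's actual operators, multilevel decomposition, and SP2 parallelism are not reproduced:

```python
# Hedged sketch: island-model genetic algorithm with migration.
import random

def evolve(pop, fitness, n_gen, mut=0.1):
    for _ in range(n_gen):
        pop.sort(key=fitness)                      # minimize fitness
        parents = pop[: len(pop) // 2]             # truncation selection
        pop = parents + [
            [g + random.gauss(0, mut) for g in random.choice(parents)]
            for _ in range(len(pop) - len(parents))
        ]
    return pop

def island_ga(fitness, n_islands=4, size=20, epochs=5):
    random.seed(0)
    islands = [[[random.uniform(-5, 5)] for _ in range(size)]
               for _ in range(n_islands)]
    for _ in range(epochs):
        # each island's evolve() is independent, hence parallelizable
        islands = [evolve(p, fitness, n_gen=10) for p in islands]
        best = min((min(p, key=fitness) for p in islands), key=fitness)
        for p in islands:                          # migration step
            p[-1] = list(best)
    return min((min(p, key=fitness) for p in islands), key=fitness)

# Minimize f(x) = (x - 2)^2:
print(island_ga(lambda ind: (ind[0] - 2.0) ** 2))  # near [2.0]
```

Because the islands evolve independently between migrations, the per-epoch work distributes naturally across processors, which is the source of the speedup the abstract reports.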
457

The biological and economic analysis of the resource of the shrimp Acetes intermedius in TungKang, PingTung

Yang, Chung-hao 27 June 2008 (has links)
The fishery of the shrimp Acetes intermedius on the southwestern coast of Taiwan has a long history, and the shrimp is food for many species of fish and larger shrimps. Because of its low harvest and output value in the past, Acetes was largely neglected by academia, and its ecological importance on the southwestern coast was ignored. After the TungKang producer organization for Acetes intermedius was established in 1994, catch management came into operation, prices rose, output value increased rapidly, and the fishery became an important seasonal one. Accordingly, this study draws on biological and economic theory to set out three fishery mathematical models: an open-access model, a static optimization model, and a dynamic optimization model, and uses them to discuss problems of fishery management. To obtain the equilibrium of resource stock and effort, real data are substituted into the models and analyzed mainly by comparative statics on the exogenous variables. By understanding how sensitive the endogenous variables are to changes in the exogenous variables, we can advise the members of the TungKang producer organization on controlling harvest and preserving the stock. The derivation of the theoretical model and the simulation analysis of historical data show that the organization's management embodies the notion of sustainable administration, and it is hoped that this management model can be popularized.
458

Information, complexity and structure in convex optimization

Guzman Paredes, Cristobal 08 June 2015 (has links)
This thesis is focused on the limits of performance of large-scale convex optimization algorithms. Classical theory of oracle complexity, first proposed by Nemirovski and Yudin in 1983, successfully established the worst-case behavior of methods based on local oracles (a generalization of first-order oracle for smooth functions) for nonsmooth convex minimization, both in the large-scale and low-scale regimes; and the complexity of approximately solving linear systems of equations (equivalent to convex quadratic minimization) over Euclidean balls, under a matrix-vector multiplication oracle. Our work extends the applicability of lower bounds in two directions: Worst-Case Complexity of Large-Scale Smooth Convex Optimization: We generalize lower bounds on the complexity of first-order methods for convex optimization, considering classes of convex functions with Hölder continuous gradients. Our technique relies on the existence of a smoothing kernel, which defines a smooth approximation for any convex function via infimal convolution. As a consequence, we derive lower bounds for \ell_p/\ell_q-setups, where 1\leq p,q\leq \infty, and extend to its matrix analogue: Smooth convex minimization (with respect to the Schatten q-norm) over matrices with bounded Schatten p-norm. The major consequences of this result are the near-optimality of the Conditional Gradient method over box-type domains (p=q=\infty), and the near-optimality of Nesterov's accelerated method over the cross-polytope (p=q=1). Distributional Complexity of Nonsmooth Convex Optimization: In this work, we prove average-case lower bounds for the complexity of nonsmooth convex optimization. We introduce an information-theoretic method to analyze the complexity of oracle-based algorithms solving a random instance, based on the reconstruction principle.
Our technique shows that all known lower bounds for nonsmooth convex optimization can be derived by an emulation procedure from a common String-Guessing Problem, which is combinatorial in nature. The derived average-case lower bounds extend to hold with high probability, and for algorithms with bounded probability error, via Fano's inequality. Finally, from the proposed technique we establish the equivalence (up to constant factors) of distributional, randomized, and worst-case complexity for black-box convex optimization. In particular, there is no gain from randomization in this setup.
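The smoothing-via-infimal-convolution idea that the abstract's lower-bound technique relies on can be illustrated numerically with the Moreau envelope, f_mu(x) = min_y { f(y) + (x - y)^2 / (2 mu) }, a smooth approximation of a nonsmooth convex f. For f(x) = |x| this envelope is the Huber function; the grid-search evaluation below is purely illustrative:

```python
# Hedged sketch: Moreau envelope of a nonsmooth convex function, computed
# by brute-force minimization over a grid. Near zero the envelope of |x|
# is quadratic, x^2 / (2 mu); far from zero it equals |x| - mu/2, so the
# approximation error is at most mu/2 everywhere.

def moreau_envelope(f, x, mu, grid):
    return min(f(y) + (x - y) ** 2 / (2 * mu) for y in grid)

mu = 0.1
grid = [i / 1000.0 - 2.0 for i in range(4001)]   # y in [-2, 2], step 0.001
print(round(moreau_envelope(abs, 0.0, mu, grid), 4))  # 0.0
print(round(moreau_envelope(abs, 1.0, mu, grid), 4))  # 0.95, i.e. 1 - mu/2
```

Lower bounds for smooth classes then transfer to nonsmooth ones (and vice versa) because the two functions differ by a controlled additive error.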
459

Optimization models and methods under nonstationary uncertainty

Belyi, Dmitriy 07 December 2010 (has links)
This research focuses on finding the optimal maintenance policy for an item with varying failure behavior. We analyze several types of item failure rates and develop methods to solve for optimal maintenance schedules. We also illustrate nonparametric modeling techniques for failure rates, and utilize these models in the optimization methods. The general problem falls under the umbrella of stochastic optimization under uncertainty.
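A standard instance of the maintenance-policy problem described above is age replacement under an increasing failure rate: choose the replacement age T minimizing the long-run cost per unit time g(T) = (c_p R(T) + c_f (1 - R(T))) / E[min(X, T)], where R is the survival function. A minimal sketch with a Weibull failure model and invented parameter values (the thesis's nonparametric models and nonstationary setting are not reproduced):

```python
# Hedged sketch: age-replacement cost rate for a Weibull-distributed lifetime,
# minimized by grid search. c_p is the preventive replacement cost, c_f the
# (larger) failure replacement cost.
import math

def cost_rate(T, c_p, c_f, shape, scale, n=2000):
    R = lambda t: math.exp(-((t / scale) ** shape))  # Weibull survival function
    # E[min(X, T)] = integral of R over [0, T], by the trapezoidal rule
    h = T / n
    mean_life = h * (0.5 * (R(0) + R(T)) + sum(R(i * h) for i in range(1, n)))
    return (c_p * R(T) + c_f * (1 - R(T))) / mean_life

# Grid search for the optimal replacement age:
ages = [0.1 * i for i in range(1, 51)]
T_best = min(ages, key=lambda T: cost_rate(T, c_p=1.0, c_f=10.0, shape=2.0, scale=1.0))
print(T_best)  # an interior optimum: replacing early beats running to failure
```

With shape > 1 (wear-out) and c_f much larger than c_p, the optimum is interior; with a constant failure rate (shape = 1) preventive replacement would never pay.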
460

PipeSynth: automated topological and parametric design of fluid networks

Patterson, William Rey 16 February 2011 (has links)
PipeSynth is a design automation approach that combines various optimization and artificial intelligence methods for synthesizing fluid networks. Starting with only the port locations, PipeSynth generates and optimizes the most effective network for a given application. This ideal network is found by not only optimizing the sizes of each pipe and the orientation of fittings in the network (parameters), but also optimizing the layout of how they are all connected (topology). Using Uniform-Cost Search for topology optimization and a combination of non-gradient-based parametric optimization methods, PipeSynth demonstrates how advances in automated design can enable engineers to manage much more complex fluid network problems. PipeSynth uses a unique representation of fluid networks that synthesizes and optimizes networks one pipe at a time, in three-dimensional space. PipeSynth has successfully solved several problems containing multiple interlaced networks concurrently with multiple inputs and outputs. PipeSynth shows the power of automated design and optimization in producing solutions more effectively and efficiently than traditional design approaches.
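Uniform-Cost Search, the topology-search method named above, can be sketched in a few lines. Nodes here are 2-D grid cells standing in for the 3-D positions a pipe route would visit, and the uniform edge costs stand in for costs derived from pipe sizing; all of this is illustrative, not PipeSynth's actual representation:

```python
# Hedged sketch: Uniform-Cost Search (Dijkstra-style) for routing between
# two ports on a grid with a blocked cell.
import heapq

def uniform_cost_search(start, goal, neighbors):
    """neighbors(node) -> iterable of (next_node, edge_cost) pairs."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in neighbors(node):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + c, nxt, path + [nxt]))
    return None

blocked = {(1, 1)}  # an obstacle the pipe must route around
def nbrs(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        q = (x + dx, y + dy)
        if 0 <= q[0] <= 2 and 0 <= q[1] <= 2 and q not in blocked:
            yield q, 1.0

cost, path = uniform_cost_search((0, 0), (2, 2), nbrs)
print(cost)  # 4.0: the cheapest route skirts the blocked center cell
```

Because UCS expands nodes in order of accumulated cost, the first time the goal port is popped the returned route is cost-optimal, which is what makes it a natural fit for one-pipe-at-a-time topology synthesis.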
