51

Supply chain optimization in the forest industry

Gunnarsson Lidestam, Helene January 2007
The scope of this thesis is modelling and solving large-scale planning problems in the supply chain within the forest industry. Five research papers are included, the first three of which focus on the modelling, and the last two on the solution methods. All problems included are tactical multi-commodity problems expressed as mixed integer programming (MIP) models. The work has been done in collaboration with two Swedish companies within the forest industry. In Paper I, a problem concerning the supply chain of forest fuel for Sydved Energileveranser AB is modelled and solved. We study the problem of deciding when and where forest residues are to be converted into wood chips, and how the residues and chips are to be transported and stored in order to satisfy energy demand at heating plants. The company has long-term contracts with forest owners and saw mills. Decisions in the model include whether or not additional harvest areas and saw mills are to be contracted and which terminals to use. The planning horizon is one year and monthly time periods are used. Papers II–V are based on planning problems at Södra Cell AB. The planning horizon is normally one year. Papers II–III consider only one time period. In Paper II the supply chain from pulp mills to customers is modelled and the combined problem of deciding terminal locations and which ship routes to use is studied. Shipping vessels chartered on short or long term are used to transport products to terminals in Europe. From each terminal, the products are transported to customers by truck, train, or a combination of both. In addition, trains and trucks can be used for direct transport from mills to customers. In Paper III the entire supply chain, from harvest areas to customers, is considered. Decisions included are transportation of raw materials, production mix, the distribution of pulp products, and the selection of potential orders and their quantities at customers. The ship routes are considered as flow links. In Papers IV–V the problems in Papers II–III are combined into one model and several time periods are used. Lagrangian heuristics based on Lagrangian decomposition are used as solution methods in both papers. In Paper IV, the approach leads to subproblems for each time period, whereas in Paper V, another approach that results in subproblems for different parts of the supply chain is developed. All models are based on real data from the companies. The models are detailed and describe the problems accurately. The solution methods are developed such that the solution time is kept within practical limits. Results from Papers II–III have been used by Södra Cell AB to support the change of the terminal structure as well as in budget planning. / This thesis presents mathematical models and solution methods for optimizing various logistics problems in the forest industry. We study supply chains for forest fuel and pulp products, and consider the annual planning with the aim of optimizing the flow. The first problem concerns forest fuel and is a collaboration with Sydved Energileveranser AB. Raw material in the form of branches and tree tops from harvest areas is to be chipped and transported to heating plants, possibly via terminals. Chipping can take place both in the forest and at terminals. By-products from saw mills can also be used as raw material. If needed, the supply of raw material can be extended by contracting additional harvest areas and saw mills. The heating plants have a demand, specified in kWh per month, that must be met. Examples of decisions to be made are where chipping should take place, whether new harvest areas should be contracted, where storage should take place, and how and when the forest fuel should be transported. The subsequent problems concern pulp products and are a collaboration with Södra Cell AB. Various kinds of pulpwood from the forest and by-products from saw mills constitute the raw material for the production of pulp products. The raw material is transported to pulp mills for production according to specified recipes. The finished products are then shipped by vessel to terminals in Europe. From the terminals, the products are transported onward to paper mills, which are the company's end customers. In some cases, the pulp products are transported by truck or train directly from the pulp mills to the customers. Demand is specified within certain limits in various orders. Some orders are fixed, meaning that their demand must be met, while other orders are free. Examples of decisions to be made are at which mills the various products should be produced, how many and which terminals should be used, and how the transports should be carried out to give the best result. Based on the above descriptions, mathematical models have been formulated. By solving these, we can answer the logistics and transportation questions, and an optimal flow can be found. Commercial software has been used to solve the models. Heuristics and more advanced optimization methods have also been developed in order to produce good solutions faster. / Article 4 was a manuscript entitled "Solving a multi-period supply chain problem for a pulp industry using Lagrangian heuristics based on time periods" at the time of the thesis defence.
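As a rough illustration of the kind of tactical MIP formulated in Paper I, the sketch below builds a toy monthly flow model with PuLP: chips move from harvest areas to a heating plant either directly or via a terminal whose use is a binary decision. All names, costs, and quantities are invented; the thesis models are far richer (chipping locations, storage, contracting, saw mills).

```python
# Toy forest-fuel flow MIP in the spirit of Paper I; all data is illustrative.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

months = [1, 2, 3]
areas = ["A1", "A2"]                       # harvest areas supplying chips
supply = {"A1": 120.0, "A2": 90.0}         # MWh available per area and month
demand = {1: 100.0, 2: 150.0, 3: 130.0}    # MWh required at the plant per month
direct_cost = {"A1": 4.0, "A2": 6.0}       # cost per MWh, area -> plant
via_cost = {"A1": 3.0, "A2": 3.5}          # cost per MWh, area -> terminal -> plant
open_cost = 500.0                          # fixed cost for using the terminal

prob = LpProblem("forest_fuel_flow", LpMinimize)
x = LpVariable.dicts("direct", [(a, t) for a in areas for t in months], lowBound=0)
y = LpVariable.dicts("via_term", [(a, t) for a in areas for t in months], lowBound=0)
use_term = LpVariable("use_terminal", cat=LpBinary)

prob += (lpSum(direct_cost[a] * x[a, t] + via_cost[a] * y[a, t]
               for a in areas for t in months) + open_cost * use_term)

for t in months:                            # monthly energy demand at the plant
    prob += lpSum(x[a, t] + y[a, t] for a in areas) >= demand[t]
for a in areas:
    for t in months:                        # monthly availability per area
        prob += x[a, t] + y[a, t] <= supply[a]
        prob += y[a, t] <= supply[a] * use_term   # terminal flow only if contracted

prob.solve()
print("use terminal:", use_term.value())
```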
52

Spectral Estimation by Geometric, Topological and Optimization Methods

Enqvist, Per January 2001
53

Utilizing Problem Structure in Optimization of Radiation Therapy

Carlsson, Fredrik January 2008
In this thesis, optimization approaches for intensity-modulated radiation therapy are developed and evaluated with a focus on numerical efficiency and treatment delivery aspects. The first two papers deal with strategies for solving fluence map optimization problems efficiently while avoiding solutions with jagged fluence profiles. The last two papers concern optimization of step-and-shoot parameters with emphasis on generating treatment plans that can be delivered efficiently and accurately. In the first paper, the problem dimension of a fluence map optimization problem is reduced through a spectral decomposition of the Hessian of the objective function. The weights of the eigenvectors corresponding to the p largest eigenvalues are introduced as optimization variables, and the impact on the solution of varying p is studied. Including only a few eigenvector weights results in a faster initial decrease of the objective value, but with an inferior solution, compared to optimization of the bixel weights. An approach combining eigenvector weights and bixel weights produces improved solutions, but at the expense of the pre-computational time for the spectral decomposition. So-called iterative regularization is performed on fluence map optimization problems in the second paper. The idea is to find regular solutions by utilizing an optimization method that is able to find near-optimal solutions with non-jagged fluence profiles in few iterations. The suitability of a quasi-Newton sequential quadratic programming method is demonstrated by comparing the treatment quality of deliverable step-and-shoot plans, generated through leaf sequencing with a fixed number of segments, for different numbers of bixel-weight iterations. A conclusion is that over-optimization of the fluence map optimization problem prior to leaf sequencing should be avoided. An approach for dynamically generating multileaf collimator segments, combining column generation with optimization of segment shapes and weights, is presented in the third paper. Numerical results demonstrate that the adjustment of leaf positions improves the plan quality and that satisfactory treatment plans are found with few segments. The method provides a tool for exploring the trade-off between plan quality and treatment complexity by generating a sequence of deliverable plans of increasing quality. The final paper is devoted to understanding the ability of the column generation approach in the third paper to find near-optimal solutions with very few columns compared to the problem dimension. The impact of different restrictions on the generated columns is studied, both in terms of numerical behaviour and convergence properties. A bound on the two-norm of the columns results in the conjugate-gradient method. Numerical results indicate that the appealing properties of the conjugate-gradient method on ill-conditioned problems are inherited in the column generation approach of the third paper.
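The dimension-reduction idea of the first paper can be sketched on a synthetic quadratic objective: restrict the solution to the span of the p leading eigenvectors of the Hessian and solve the reduced problem. The data below is synthetic, and the reduced problem is solved exactly rather than iteratively; the sketch only illustrates how the attainable objective value depends on p.

```python
# Sketch of the dimension-reduction idea: optimize f(x) = 0.5 x'Hx - b'x over
# weights of the p leading eigenvectors of H instead of over all n variables.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                  # number of bixels (synthetic)
A = rng.standard_normal((n, n))
H = A @ A.T + np.eye(n)                  # SPD Hessian of the objective
b = rng.standard_normal(n)

eigvals, eigvecs = np.linalg.eigh(H)     # ascending eigenvalues
for p in (5, 20, n):
    V = eigvecs[:, -p:]                  # p leading eigenvectors
    # minimize 0.5 w'(V'HV)w - (V'b)'w in the reduced space
    w = np.linalg.solve(V.T @ H @ V, V.T @ b)
    x = V @ w
    f = 0.5 * x @ H @ x - b @ x
    print(f"p={p:4d}  objective={f:.4f}")
```

With p = n the reduced problem recovers the exact minimizer; smaller p trades objective value for fewer variables, mirroring the trade-off studied in the paper.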
54

Modeling and Model Reduction by Analytic Interpolation and Optimization

Fanizza, Giovanna January 2008
This thesis consists of six papers. The main topic of all these papers is modeling a class of linear time-invariant systems. The system class is parameterized in the context of interpolation theory with a degree constraint. In the papers included in the thesis, this parameterization is the key tool for the design of dynamical system models in fields such as spectral estimation and model reduction. A problem in spectral estimation amounts to estimating a spectral density function that captures characteristics of the stochastic process, such as covariance, cepstrum, Markov parameters and the frequency response of the process. A model reduction problem consists in finding a low-order system which replaces the original one so that the behavior of both systems is similar in an appropriately defined sense. In Paper A a new spectral estimation technique based on the rational covariance extension theory is proposed. The novelty of this approach is in the design of a spectral density that optimally matches covariances and approximates the frequency response of a given process simultaneously. In Paper B a model reduction problem is considered. In the literature there are several methods to perform model reduction. Our attention is focused on methods which preserve, in the model reduction phase, the stability and the positive real properties of the original system. A reduced-order model is computed employing the analytic interpolation theory with a degree constraint. We observe that in this theory there is a freedom in the placement of the spectral zeros and interpolation points. This freedom can be utilized for the computation of a rational positive real function of low degree which best approximates a given system. A problem left open in Paper B is how to select spectral zeros and interpolation points in a systematic way in order to obtain the best approximation of a given system. This problem is the main topic of Paper C. Here, the problem is investigated in the analytic interpolation context, and spectral zeros and interpolation points are obtained as the solution of an optimization problem. In Paper D, the problem of modeling a floating body by a positive real function is investigated. The main focus is on modeling the radiation forces and moment. The radiation forces are described as the forces that make a floating body oscillate in calm water. These forces are passive and usually they are modeled with systems of high degree. Thus, for efficient computer simulation it is necessary to obtain a low-order system which approximates the original one. In this paper, the procedure developed in Paper C is employed. Thus, this paper demonstrates the usefulness of the methodology described in Paper C for a real-world application. In Paper E, an algorithm to compute the steady-state solution of a discrete-type Riccati equation, the Covariance Extension Equation, is considered. The algorithm is based on a homotopy continuation method with predictor-corrector steps. Although this approach does not seem to offer any particular advantage over previous solvers, it provides insights into issues such as positive degree and model reduction, since the rank of the solution of the covariance extension problem coincides with the degree of the shaping filter. In Paper F a new algorithm for the computation of an analytic interpolant of bounded degree is proposed. It applies to the class of non-strictly positive real interpolants and is capable of treating the case with boundary spectral zeros. Thus, in Paper F, we deal with a class of interpolation problems which could not be treated by the optimization-based algorithm proposed by Byrnes, Georgiou and Lindquist. The new procedure computes interpolants by solving a system of nonlinear equations. The solution of the system of nonlinear equations is obtained by a homotopy continuation method.
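For contrast with the interpolation-based reduction studied in Papers B–C, the following is a minimal sketch of standard balanced truncation on a synthetic stable system. This is not the thesis's method, and it does not by itself preserve positive realness; it only shows the generic model-reduction workflow the thesis improves upon.

```python
# Standard square-root balanced truncation of a random stable SISO system.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

rng = np.random.default_rng(1)
n, r = 6, 2                                     # full and reduced orders
A = rng.standard_normal((n, n))
A = A - (np.max(np.real(np.linalg.eigvals(A))) + 1.0) * np.eye(n)  # make A stable
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

P = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian
Q = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian
Lp = cholesky(P, lower=True)
Lq = cholesky(Q, lower=True)
U, s, Vt = svd(Lq.T @ Lp)                       # s holds Hankel singular values
T = Lp @ Vt.T[:, :r] / np.sqrt(s[:r])           # truncated balancing transform
Ti = (U[:, :r] / np.sqrt(s[:r])).T @ Lq.T
Ar, Br, Cr = Ti @ A @ T, Ti @ B, C @ T          # reduced-order model (Ar, Br, Cr)
print("Hankel singular values:", np.round(s, 4))
```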
55

Inverse Problems in Analytic Interpolation for Robust Control and Spectral Estimation

Karlsson, Johan January 2008
This thesis is divided into two parts. The first part deals with the Nevanlinna-Pick interpolation problem, a problem which occurs naturally in several applications such as robust control, signal processing and circuit theory. We consider the problem of shaping and approximating solutions to the Nevanlinna-Pick problem in a systematic way. In the second part, we study distance measures between power spectra for spectral estimation. We postulate a situation where we want to quantify robustness based on a finite set of covariances, and this leads naturally to considering the weak*-topology. Several weak*-continuous metrics are proposed and studied in this context. In the first paper we consider the correspondence between weighted entropy functionals and minimizing interpolants in order to find appropriate interpolants for, e.g., control synthesis. There are two basic issues that we address: we first characterize admissible shapes of minimizers by studying the corresponding inverse problem, and then we develop effective ways of shaping minimizers via suitable choices of weights. These results are used in order to systematize feedback control synthesis to obtain frequency dependent robustness bounds with a constraint on the controller degree. The second paper studies contractive interpolants obtained as minimizers of a weighted entropy functional and analyzes the role of weights and interpolation conditions as design parameters for shaping the interpolants. We first show that, if, for a sequence of interpolants, the values of the corresponding entropy gains converge to the optimum, then the interpolants converge in H_2, but not necessarily in H-infinity. This result is then used to describe the asymptotic behaviour of the interpolant as an interpolation point approaches the boundary of the domain of analyticity. A quite comprehensive theory of analytic interpolation with degree constraint, dealing with rational analytic interpolants with an a priori bound, has been developed in recent years. In the third paper, we consider the limit case when this bound is removed, and only stable interpolants with a prescribed maximum degree are sought. This leads to weighted H_2 minimization, where the interpolants are parameterized by the weights. The inverse problem of determining the weight given a desired interpolant profile is considered, and a rational approximation procedure based on the theory is proposed. This provides a tool for tuning the solution for attaining design specifications. The purpose of the fourth paper is to study the topology and develop metrics that allow for localization of power spectra, based on second-order statistics. We show that the appropriate topology is the weak*-topology and give several examples of how to construct such metrics. This allows us to quantify uncertainty of spectra in a natural way and to calculate a priori bounds on spectral uncertainty, based on second-order statistics. Finally, we study identification of spectral densities and relate this to the trade-off between resolution and variance of spectral estimates. In the fifth paper, we present an axiomatic framework for seeking distances between power spectra. The axioms require that the sought metric respects the effects of additive and multiplicative noise in reducing our ability to discriminate spectra. They also require continuity of statistical quantities with respect to perturbations measured in the metric. We then present a particular metric which abides by these requirements. The metric is based on the Monge-Kantorovich transportation problem and is contrasted to an earlier Riemannian metric based on the minimum-variance prediction geometry of the underlying time series. It is also compared with the more traditional Itakura-Saito distance measure, as well as the aforementioned prediction metric, on two representative examples.
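As a toy illustration of why transportation distances localize spectra, the sketch below computes the first Wasserstein (Monge-Kantorovich) distance between normalized spectra on a uniform frequency grid; it is not the particular metric constructed in the fifth paper, only the underlying idea.

```python
# W1 transport distance between two nonnegative spectra on a uniform grid:
# normalize to unit mass, then integrate the gap between the cumulatives.
import numpy as np

def w1_spectra(freqs, s1, s2):
    dx = freqs[1] - freqs[0]
    p1 = s1 / (s1.sum() * dx)            # normalize to unit mass
    p2 = s2 / (s2.sum() * dx)
    cdf1 = np.cumsum(p1) * dx
    cdf2 = np.cumsum(p2) * dx
    return np.sum(np.abs(cdf1 - cdf2)) * dx

w = np.linspace(0, np.pi, 512)
peak = lambda w0: 1.0 / (0.05 + (w - w0) ** 2)    # spectral line near w0
print(w1_spectra(w, peak(1.0), peak(1.1)))         # small: nearby peaks
print(w1_spectra(w, peak(1.0), peak(2.5)))         # larger: distant peaks
```

Unlike pointwise divergences, the distance grows smoothly with the separation of the spectral peaks, which is the localization property the paper exploits.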
56

A convex optimization approach to complexity constrained analytic interpolation with applications to ARMA estimation and robust control

Blomqvist, Anders January 2005
Analytic interpolation theory has several applications in systems and control. In particular, solutions of low degree, or more generally of low complexity, are of special interest since they allow for synthesis of simpler systems. The study of degree-constrained analytic interpolation was initiated in the early 1980s, and during the past decade it has seen significant progress. This thesis contributes in three different aspects to complexity-constrained analytic interpolation: theory, numerical algorithms, and design paradigms. The contributions are closely related; shortcomings of previous design paradigms motivate development of the theory, which in turn calls for new robust and efficient numerical algorithms. Mainly two theoretical developments are studied in the thesis. Firstly, the spectral Kullback-Leibler approximation formulation is merged with simultaneous cepstral and covariance interpolation. For this formulation, both uniqueness of the solution and smoothness with respect to data are proven. Secondly, the theory is generalized to matrix-valued interpolation, but then only allowing for covariance-type interpolation conditions. Again, uniqueness and smoothness with respect to data are proven. Three algorithms are presented. Firstly, a refinement of a previous algorithm, allowing for multiple as well as matrix-valued interpolation in an optimization framework, is presented. Secondly, an algorithm capable of solving the boundary case, that is, with spectral zeros on the unit circle, is given. This also yields an inherent numerical robustness. Thirdly, a new algorithm treating the problem with both cepstral and covariance conditions is presented. Two design paradigms have sprung out of the complexity-constrained analytic interpolation theory. Firstly, in robust control it enables low-degree H-infinity controller design. This is illustrated by a low-degree controller design for a benchmark problem in MIMO sensitivity shaping. Also, user support for the tuning of controllers within the design paradigm is presented for the SISO case. Secondly, in ARMA estimation it provides unique model estimates that depend smoothly on the data, and it enables frequency weighting. For AR estimation, a covariance extension approach to frequency weighting is discussed and illustrated with an example. For ARMA estimation, simultaneous cepstral and covariance matching is generalized to include prefiltering. An example indicates that this might yield asymptotically efficient estimates.
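A baseline against which the thesis's simultaneous cepstral-and-covariance matching can be understood is plain covariance matching for an AR model via the Yule-Walker equations. The sketch below shows that baseline only, on synthetic data; cepstral matching, frequency weighting, and the ARMA case are omitted.

```python
# Covariance matching for an AR(p) model: estimate sample covariances and
# solve the Yule-Walker (Toeplitz) system. Synthetic AR(2) data.
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(2)
e = rng.standard_normal(5000)
y = np.zeros_like(e)
for t in range(2, len(y)):                   # true model: y[t] = 1.5 y[t-1] - 0.7 y[t-2] + e[t]
    y[t] = 1.5 * y[t - 1] - 0.7 * y[t - 2] + e[t]

p = 2
r = np.array([np.dot(y[: len(y) - k], y[k:]) / len(y) for k in range(p + 1)])
ar = solve_toeplitz(r[:p], r[1 : p + 1])     # Yule-Walker: Toeplitz(r0..r_{p-1}) a = [r1..rp]
print("estimated AR coefficients:", np.round(ar, 3))   # close to [1.5, -0.7]
```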
57

Computation of Mileage Limits for Traveling Salesmen by Means of Optimization Techniques

Torstensson, Johan January 2008
Many companies have traveling salesmen that market and sell their products. This results in much traveling by car due to the daily customer visits. This causes costs for the company, in the form of travel expenses compensation, and environmental effects, in the form of carbon dioxide pollution. As many companies are certified according to environmental management systems, such as ISO 14001, the environmental work becomes more and more important as the environmental consciousness increases every day for companies, authorities and the public. The main task of this thesis is to compute reasonable limits on the mileage of the salesmen; these limits are based on specific conditions for each salesman's district. The objective is to implement a heuristic algorithm that optimizes the customer tours for an arbitrarily chosen month, which will represent a "standard" month. The output of the algorithm, the computed distances, will constitute a mileage limit for the salesman. The algorithm consists of a constructive heuristic that builds an initial solution, which is modified if infeasible. This solution is then improved by a local search algorithm preceding a genetic algorithm, whose task is to improve the tours separately. This method for computing mileage limits for traveling salesmen generates good solutions in the form of realistic tours. The mileage limits could be improved if the input data were more accurate and adjusted to each district, but the suggested method does what it is supposed to do.
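A minimal sketch of the construction-plus-local-search pattern described above: a nearest-neighbour tour improved by 2-opt on random coordinates. The genetic-algorithm stage of the thesis is omitted, and all data is made up.

```python
# Nearest-neighbour construction followed by 2-opt improvement on 15 random points.
import numpy as np

def tour_length(tour, D):
    return sum(D[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def nearest_neighbour(D):
    unvisited = set(range(1, len(D)))
    tour = [0]
    while unvisited:                          # always extend to the closest customer
        nxt = min(unvisited, key=lambda j: D[tour[-1], j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, D):
    improved = True
    while improved:                           # repeat until no reversal helps
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, D) < tour_length(tour, D) - 1e-9:
                    tour, improved = cand, True
    return tour

rng = np.random.default_rng(3)
pts = rng.uniform(0, 100, (15, 2))            # 15 customer locations
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
t0 = nearest_neighbour(D)
t1 = two_opt(t0, D)
print(f"construction: {tour_length(t0, D):.1f}, after 2-opt: {tour_length(t1, D):.1f}")
```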
58

Feasible Direction Methods for Constrained Nonlinear Optimization : Suggestions for Improvements

Mitradjieva-Daneva, Maria January 2007
This thesis concerns the development of novel feasible direction type algorithms for constrained nonlinear optimization. The new algorithms are based upon enhancements of the search direction determination and the line search steps. The Frank-Wolfe method is popular for solving certain structured linearly constrained nonlinear problems, although its rate of convergence is often poor. We develop improved Frank-Wolfe type algorithms based on conjugate directions. In the conjugate direction Frank-Wolfe method, a line search is performed along a direction which is conjugate to the previous one with respect to the Hessian matrix of the objective. A further refinement of this method is derived by applying conjugation with respect to the last two directions, instead of only the last one. The new methods are applied to the single-class user traffic equilibrium problem, the multi-class user traffic equilibrium problem under social marginal cost pricing, and the stochastic transportation problem. In a limited set of computational tests the algorithms turn out to be quite efficient. Additionally, a feasible direction method with multi-dimensional search for the stochastic transportation problem is developed. We also derive a novel sequential linear programming algorithm for general constrained nonlinear optimization problems, with the intention of being able to attack problems with large numbers of variables and constraints. The algorithm is based on inner approximations of both the primal and the dual spaces, which yields a method combining column and constraint generation in the primal space. / The articles are not published due to copyright restrictions.
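A hedged sketch of the conjugate-direction idea on a toy problem: Frank-Wolfe for a quadratic over the unit simplex, where the search direction is made Hessian-conjugate to the previous one whenever that steepens the descent. This is a simplified reading of the idea, not the papers' exact algorithm.

```python
# Frank-Wolfe with an optional conjugate-direction modification for
# f(x) = 0.5 x'Hx + c'x over the simplex {x >= 0, sum x = 1}. Toy data.
import numpy as np

rng = np.random.default_rng(4)
n = 50
A = rng.standard_normal((n, n))
H = A @ A.T + np.eye(n)                              # SPD Hessian
c = rng.standard_normal(n)
f = lambda x: 0.5 * x @ H @ x + c @ x

x = np.ones(n) / n
d_prev = None
for k in range(100):
    g = H @ x + c
    s = np.zeros(n); s[np.argmin(g)] = 1.0           # LP over the simplex: a vertex
    if g @ (x - s) < 1e-9:                           # Frank-Wolfe gap ~ 0: stop
        break
    d = s - x                                        # classical FW direction
    if d_prev is not None and d_prev @ H @ d_prev > 1e-12:
        beta = -(d @ H @ d_prev) / (d_prev @ H @ d_prev)
        d_conj = d + beta * d_prev                   # conjugate to previous direction
        if g @ d_conj < g @ d:                       # keep it only if it descends faster
            d = d_conj
    # exact line search for the quadratic, clipped to stay inside the simplex
    t_max = min((-x[i] / d[i] for i in range(n) if d[i] < -1e-12), default=1.0)
    t = np.clip(-(g @ d) / (d @ H @ d), 0.0, min(1.0, t_max))
    x = x + t * d
    d_prev = d
print("objective after", k, "iterations:", f(x))
```

The candidate direction sums to zero (both constituent directions do), so the simplex equality constraint is preserved and only the nonnegativity bound limits the step.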
59

The Origin-Destination Matrix Estimation Problem : Analysis and Computations

Peterson, Anders January 2007
For most kinds of analyses in the field of traffic planning, there is a need for origin-destination (OD) matrices, which specify the travel demands between the origin and destination nodes in the network. This thesis concerns the OD-matrix estimation problem, that is, the calculation of OD-matrices using observed link flows. Both time-independent and time-dependent models are considered, and we also study the placement of link flow detectors. Many methods have been suggested for OD-matrix estimation in time-independent models, which describe an average traffic situation. We assume a user equilibrium to hold for the link flows in the network and recognize a bilevel structure of the estimation problem. A descent heuristic is proposed, in which special attention is given to the issue of calculating the change of a link flow with respect to a change of the travel demand in a certain pair of origin and destination nodes. When a time dimension is considered, the estimation problem becomes more complex. Besides the problem of distributing the travel demand onto routes, the flow propagation in time and space must also be handled. The time-dependent OD-matrix estimation problem is the subject of two studies. The first is a case study, where the conventional estimation technique is improved through introducing pre-adjustment schemes, which exploit the structure of the information contained in the OD-matrix and the link flow observations. In the second study, an algorithm for time-independent estimation is extended to the time-dependent case and tested for a network from Stockholm, Sweden. Finally, we study the underlying problem of finding those links where traffic flow observations are to be performed, in order to ensure the best possible quality of the estimated OD-matrix. There are different ways of quantifying a common goal to cover as much traffic as possible, and we create an experimental framework in which they can be evaluated. Presupposing that consistent flow observations from all the links in the network yield the best estimate of the OD-matrix, the lack of observations from some links results in a relaxation of the estimation problem, and a poorer estimate. We formulate the problem of placing link flow detectors so as to achieve the least relaxation with a limited number of detectors.
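The core estimation step can be sketched as a regularized least-squares fit once assignment proportions are fixed; in the real bilevel problem the proportion matrix itself depends on the demands through user equilibrium, which is what the descent heuristic handles. All numbers below are invented.

```python
# With fixed assignment proportions P (share of each OD flow on each observed
# link), estimate OD demands g from link counts v while staying near a prior g0.
import numpy as np
from scipy.optimize import lsq_linear

P = np.array([[1.0, 0.0, 0.5],       # rows: observed links, cols: OD pairs
              [0.0, 1.0, 0.5],
              [1.0, 1.0, 0.0]])
v = np.array([420.0, 510.0, 650.0])  # observed link counts (vehicles/hour)
g0 = np.array([300.0, 350.0, 200.0]) # prior (outdated) OD demands
gamma = 0.1                          # weight on staying close to the prior

# minimize ||P g - v||^2 + gamma ||g - g0||^2 subject to g >= 0
A = np.vstack([P, np.sqrt(gamma) * np.eye(3)])
b = np.concatenate([v, np.sqrt(gamma) * g0])
res = lsq_linear(A, b, bounds=(0.0, np.inf))
print("estimated OD demands:", np.round(res.x, 1))
```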
60

Liquidity and optimal consumption with random income

Zhelezov, Dmitry, Yamshchikov, Ivan January 2011
In the first part of our work we focus on the model of optimal consumption with random income. We provide the three-dimensional equation for this model, demonstrate the reduction to the two-dimensional case, and provide a full point-symmetry analysis of the equations for two different utility functions. We also demonstrate that for the logarithmic utility there exists a unique and smooth viscosity solution, the existence of which, as far as we know, has never been demonstrated before. In the second part of our work we develop the concept of an empirical liquidity measure. We give a retrospective view of the work on this issue, discuss the proposed definitions, and develop our own empirical measure based on an intuitive mathematical model and comprising several features of the earlier definitions. We then verify the proposed measure on real market data and demonstrate the advantages of the proposed value for measuring illiquidity.
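For orientation, the sketch below computes a classical empirical illiquidity measure, the Amihud ratio (average absolute return per unit of traded dollar volume), on synthetic data; it is a reference point of the kind the second part builds on, not the measure proposed in the thesis.

```python
# Amihud illiquidity ratio: mean(|return| / dollar volume). Synthetic data.
import numpy as np

def amihud_illiquidity(prices, volumes):
    prices = np.asarray(prices, float)
    volumes = np.asarray(volumes, float)
    returns = np.abs(np.diff(prices) / prices[:-1])
    return np.mean(returns / (volumes[1:] * prices[1:]))

rng = np.random.default_rng(5)
p = 100 * np.cumprod(1 + 0.01 * rng.standard_normal(250))  # one year of prices
liquid = rng.uniform(9e5, 1.1e6, 250)                      # heavy trading volume
thin = rng.uniform(9e3, 1.1e4, 250)                        # thin trading volume
print("liquid market:  ", amihud_illiquidity(p, liquid))   # small ratio
print("illiquid market:", amihud_illiquidity(p, thin))     # ~100x larger ratio
```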
