The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD), with metadata collected from universities around the world.
61

Modeling and Model Reduction by Analytic Interpolation and Optimization

Fanizza, Giovanna January 2008 (has links)
This thesis consists of six papers. The main topic of all of them is the modeling of a class of linear time-invariant systems. The system class is parameterized in the context of interpolation theory with a degree constraint. In the papers included in the thesis, this parameterization is the key tool for the design of dynamical system models in fields such as spectral estimation and model reduction. A spectral estimation problem amounts to estimating a spectral density function that captures characteristics of a stochastic process, such as covariances, the cepstrum, Markov parameters and the frequency response. A model reduction problem consists in finding a low-order system that replaces the original one so that the behavior of both systems is similar in an appropriately defined sense.

In Paper A a new spectral estimation technique based on rational covariance extension theory is proposed. The novelty of this approach lies in the design of a spectral density that simultaneously matches covariances optimally and approximates the frequency response of a given process.

In Paper B a model reduction problem is considered. Several methods for model reduction exist in the literature; our attention is focused on those which preserve the stability and positive-real properties of the original system during the reduction. A reduced-order model is computed employing analytic interpolation theory with a degree constraint. We observe that this theory leaves freedom in the placement of the spectral zeros and interpolation points, which can be exploited to compute a rational positive-real function of low degree that best approximates a given system. A problem left open in Paper B is how to select spectral zeros and interpolation points in a systematic way in order to obtain the best approximation of a given system. This problem is the main topic of Paper C, where it is investigated in the analytic interpolation context and the spectral zeros and interpolation points are obtained as the solution of an optimization problem.

In Paper D, the problem of modeling a floating body by a positive-real function is investigated. The main focus is on modeling the radiation forces and moment, the radiation forces being those that make a floating body oscillate in calm water. These forces are passive and are usually modeled with systems of high degree; for efficient computer simulation it is therefore necessary to obtain a low-order system which approximates the original one. The procedure developed in Paper C is employed here, demonstrating its usefulness in a real-world application.

In Paper E, an algorithm for computing the steady-state solution of a discrete-type Riccati equation, the Covariance Extension Equation, is considered. The algorithm is based on a homotopy continuation method with predictor-corrector steps. Although this approach does not seem to offer a particular advantage over previous solvers, it provides insight into issues such as positive degree and model reduction, since the rank of the solution of the covariance extension problem coincides with the degree of the shaping filter.

In Paper F a new algorithm for computing an analytic interpolant of bounded degree is proposed. It applies to the class of non-strictly positive-real interpolants and is capable of treating the case with boundary spectral zeros. Thus, in Paper F, we deal with a class of interpolation problems which could not be treated by the optimization-based algorithm proposed by Byrnes, Georgiou and Lindquist. The new procedure computes interpolants by solving a system of nonlinear equations, whose solution is obtained by a homotopy continuation method.
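Papers E and F both rely on homotopy continuation to solve nonlinear equations. As a generic illustration only (a scalar toy equation, not the thesis's Covariance Extension Equation), a predictor-corrector continuation can be sketched as:

```python
import numpy as np

def homotopy_solve(f, fprime, x0, steps=50, newton_iters=5):
    """Predictor-corrector homotopy for f(x) = 0.

    Deforms the trivially solvable equation x - x0 = 0 into f(x) = 0
    along H(x, t) = t*f(x) + (1 - t)*(x - x0). As t marches from 0 to 1,
    the root from the previous step serves as the predictor, and a few
    Newton iterations on H(., t) serve as the corrector."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):
            H = t * f(x) + (1 - t) * (x - x0)
            dH = t * fprime(x) + (1 - t)
            x -= H / dH          # Newton corrector step
    return x

# toy equation: x^3 - 2x - 5 = 0, real root near 2.0946
f = lambda x: x**3 - 2 * x - 5
fp = lambda x: 3 * x**2 - 2
root = homotopy_solve(f, fp, x0=1.0)
```

The same template extends to vector-valued systems by replacing the scalar Newton step with a linear solve against the Jacobian of H.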
62

Inverse Problems in Analytic Interpolation for Robust Control and Spectral Estimation

Karlsson, Johan January 2008 (has links)
This thesis is divided into two parts. The first part deals with the Nevanlinna-Pick interpolation problem, a problem which occurs naturally in several applications such as robust control, signal processing and circuit theory. We consider the problem of shaping and approximating solutions to the Nevanlinna-Pick problem in a systematic way. In the second part, we study distance measures between power spectra for spectral estimation. We postulate a situation where we want to quantify robustness based on a finite set of covariances; this leads naturally to the weak*-topology. Several weak*-continuous metrics are proposed and studied in this context.

In the first paper we consider the correspondence between weighted entropy functionals and minimizing interpolants in order to find appropriate interpolants for, e.g., control synthesis. There are two basic issues that we address: we first characterize admissible shapes of minimizers by studying the corresponding inverse problem, and then we develop effective ways of shaping minimizers via suitable choices of weights. These results are used to systematize feedback control synthesis so as to obtain frequency-dependent robustness bounds with a constraint on the controller degree.

The second paper studies contractive interpolants obtained as minimizers of a weighted entropy functional and analyzes the role of weights and interpolation conditions as design parameters for shaping the interpolants. We first show that if, for a sequence of interpolants, the values of the corresponding entropy gains converge to the optimum, then the interpolants converge in H_2, but not necessarily in H-infinity. This result is then used to describe the asymptotic behaviour of the interpolant as an interpolation point approaches the boundary of the domain of analyticity.

A quite comprehensive theory of analytic interpolation with degree constraint, dealing with rational analytic interpolants with an a priori bound, has been developed in recent years. In the third paper, we consider the limit case when this bound is removed and only stable interpolants with a prescribed maximum degree are sought. This leads to weighted H_2 minimization, where the interpolants are parameterized by the weights. The inverse problem of determining the weight given a desired interpolant profile is considered, and a rational approximation procedure based on the theory is proposed. This provides a tool for tuning the solution to attain design specifications.

The purpose of the fourth paper is to study the topology, and develop metrics, that allow for localization of power spectra based on second-order statistics. We show that the appropriate topology is the weak*-topology and give several examples of how to construct such metrics. This allows us to quantify uncertainty of spectra in a natural way and to calculate a priori bounds on spectral uncertainty, based on second-order statistics. Finally, we study identification of spectral densities and relate this to the trade-off between resolution and variance of spectral estimates.

In the fifth paper, we present an axiomatic framework for seeking distances between power spectra. The axioms require that the sought metric respects the effects of additive and multiplicative noise in reducing our ability to discriminate spectra. They also require continuity of statistical quantities with respect to perturbations measured in the metric. We then present a particular metric which abides by these requirements. The metric is based on the Monge-Kantorovich transportation problem and is contrasted, on two representative examples, with an earlier Riemannian metric based on the minimum-variance prediction geometry of the underlying time series, as well as with the more traditional Itakura-Saito distance measure.
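For one-dimensional distributions, the Monge-Kantorovich (Wasserstein-1) distance mentioned in the fifth paper reduces to integrating the absolute difference of cumulative distributions. A minimal numerical sketch for discretized spectra on a shared frequency grid (the spectra below are illustrative, not taken from the thesis):

```python
import numpy as np

def w1_distance(p, q, grid):
    """Wasserstein-1 distance between two power spectra sampled on a
    common grid, after normalizing each to unit mass. In 1-D,
    W1(p, q) = integral of |F_p - F_q| over the grid, where F is the
    cumulative distribution."""
    dx = np.diff(grid)
    def cdf(s):
        mass = 0.5 * (s[1:] + s[:-1]) * dx   # trapezoidal cell masses
        return np.cumsum(mass) / mass.sum()  # normalized CDF
    return np.sum(np.abs(cdf(p) - cdf(q)) * dx)

w = np.linspace(0, np.pi, 512)
s1 = 1.0 / (1.05 - np.cos(w))         # spectrum peaked near w = 0
s2 = 1.0 / (1.05 - np.cos(w - 0.3))   # the same peak shifted by 0.3
d = w1_distance(s1, s2, w)
```

Unlike pointwise distances, this metric stays small when spectral mass is merely shifted slightly in frequency, which is exactly the weak*-style continuity the thesis argues for.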
63

Computation of Mileage Limits for Traveling Salesmen by Means of Optimization Techniques

Torstensson, Johan January 2008 (has links)
Many companies have traveling salesmen who market and sell their products. This results in much traveling by car due to the daily customer visits, which causes costs for the company, in the form of travel expense compensation, and environmental effects, in the form of carbon dioxide pollution. As many companies are certified according to environmental management systems, such as ISO 14001, environmental work becomes more and more important as environmental consciousness increases among companies, authorities and the public.

The main task of this thesis is to compute reasonable limits on the mileage of the salesmen; these limits are based on specific conditions for each salesman's district. The objective is to implement a heuristic algorithm that optimizes the customer tours for an arbitrarily chosen month, which represents a "standard" month. The output of the algorithm, the computed distances, constitutes a mileage limit for each salesman.

The algorithm consists of a constructive heuristic that builds an initial solution, which is modified if infeasible. This solution is then improved by a local search algorithm preceding a genetic algorithm, whose task is to improve the tours separately.

This method for computing mileage limits for traveling salesmen generates good solutions in the form of realistic tours. The mileage limits could be improved if the input data were more accurate and adjusted to each district, but the suggested method does what it is supposed to do.
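As an illustration of the local-search step, a standard 2-opt improvement heuristic (a common choice for tour improvement; the abstract does not specify the exact neighborhood used in the thesis) can be sketched as:

```python
import math
import random

def tour_length(tour, pts):
    """Total length of a closed tour over points pts."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """2-opt local search: keep reversing tour segments as long as some
    reversal shortens the tour. Each accepted move replaces edges
    (a,b) and (c,d) by (a,c) and (b,d)."""
    improved = True
    n = len(tour)
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # reversing the whole tour changes nothing
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                         - math.dist(pts[a], pts[b]) - math.dist(pts[c], pts[d]))
                if delta < -1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(30)]
before = tour_length(list(range(30)), pts)
after = tour_length(two_opt(list(range(30)), pts), pts)
```

The improved tours would then be passed to a genetic algorithm, as in the thesis's pipeline.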
64

Feasible Direction Methods for Constrained Nonlinear Optimization : Suggestions for Improvements

Mitradjieva-Daneva, Maria January 2007 (has links)
This thesis concerns the development of novel feasible-direction-type algorithms for constrained nonlinear optimization. The new algorithms are based upon enhancements of the search direction determination and the line search steps. The Frank-Wolfe method is popular for solving certain structured linearly constrained nonlinear problems, although its rate of convergence is often poor. We develop improved Frank-Wolfe-type algorithms based on conjugate directions. In the conjugate direction Frank-Wolfe method a line search is performed along a direction which is conjugate to the previous one with respect to the Hessian matrix of the objective. A further refinement of this method is derived by applying conjugation with respect to the last two directions, instead of only the last one. The new methods are applied to the single-class user traffic equilibrium problem, the multi-class user traffic equilibrium problem under social marginal cost pricing, and the stochastic transportation problem. In a limited set of computational tests the algorithms turn out to be quite efficient. Additionally, a feasible direction method with multi-dimensional search for the stochastic transportation problem is developed. We also derive a novel sequential linear programming algorithm for general constrained nonlinear optimization problems, with the intention of being able to attack problems with large numbers of variables and constraints. The algorithm is based on inner approximations of both the primal and the dual spaces, which yields a method combining column and constraint generation in the primal space. / The articles are not published due to copyright restrictions.
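The basic Frank-Wolfe iteration that the thesis enhances can be sketched as follows (plain Frank-Wolfe on a toy simplex-constrained quadratic, without the conjugate-direction refinement; all problem data here are illustrative):

```python
import numpy as np

def frank_wolfe(grad, vertex_oracle, x0, iters=2000):
    """Basic Frank-Wolfe: linearize the objective at the iterate, solve
    the linearized problem over the feasible set (the oracle returns a
    minimizing vertex), and line-search toward that vertex."""
    x = x0.astype(float)
    for _ in range(iters):
        g = grad(x)
        s = vertex_oracle(g)       # argmin of g.s over the feasible set
        d = s - x                  # feasible descent direction
        denom = d @ d
        if denom < 1e-16:
            break
        # exact line search for the quadratic f(x) = ||x - c||^2
        t = np.clip(-(g @ d) / (2.0 * denom), 0.0, 1.0)
        x = x + t * d
    return x

c = np.array([0.1, 0.2, 0.9])      # we project c onto the unit simplex
grad = lambda x: 2.0 * (x - c)

def simplex_vertex(g):
    """Linear minimization over the unit simplex: put all mass on the
    coordinate with the smallest gradient component."""
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

x = frank_wolfe(grad, simplex_vertex, np.ones(3) / 3)
```

The conjugate-direction variants studied in the thesis replace the raw direction d with one made Hessian-conjugate to the previous direction(s), which combats the zigzagging visible in the plain method.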
65

The Origin-Destination Matrix Estimation Problem : Analysis and Computations

Peterson, Anders January 2007 (has links)
For most kinds of analyses in the field of traffic planning, there is a need for origin-destination (OD) matrices, which specify the travel demands between the origin and destination nodes in the network. This thesis concerns the OD-matrix estimation problem, that is, the calculation of OD-matrices using observed link flows. Both time-independent and time-dependent models are considered, and we also study the placement of link flow detectors.

Many methods have been suggested for OD-matrix estimation in time-independent models, which describe an average traffic situation. We assume a user equilibrium to hold for the link flows in the network and recognize a bilevel structure of the estimation problem. A descent heuristic is proposed, in which special attention is given to the issue of calculating the change of a link flow with respect to a change of the travel demand in a certain pair of origin and destination nodes.

When a time dimension is considered, the estimation problem becomes more complex. Besides the problem of distributing the travel demand onto routes, the flow propagation in time and space must also be handled. The time-dependent OD-matrix estimation problem is the subject of two studies. The first is a case study, in which the conventional estimation technique is improved through the introduction of pre-adjustment schemes that exploit the structure of the information contained in the OD-matrix and the link flow observations. In the second study, an algorithm for time-independent estimation is extended to the time-dependent case and tested on a network from Stockholm, Sweden.

Finally, we study the underlying problem of finding those links where traffic flow observations should be performed in order to ensure the best possible quality of the estimated OD-matrix. There are different ways of quantifying the common goal of covering as much traffic as possible, and we create an experimental framework in which they can be evaluated. Presupposing that consistent flow observations from all the links in the network yield the best estimate of the OD-matrix, the lack of observations from some links results in a relaxation of the estimation problem, and a poorer estimate. We formulate the problem of placing link flow detectors so as to achieve the least relaxation with a limited number of detectors.
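A drastically simplified version of the estimation step can be sketched as follows (fixed route proportions in place of the bilevel equilibrium loop, a prior matrix as regularizer, and toy data; none of this reproduces the thesis's actual models):

```python
import numpy as np

def estimate_od(A, v_obs, g_prior, gamma=1e-3, iters=500):
    """Least-squares OD estimation with a prior matrix, assuming the
    link-use proportions A (links x OD pairs) are known and fixed:
        min_{g >= 0}  ||A g - v_obs||^2 + gamma ||g - g_prior||^2
    solved by projected gradient descent."""
    g = g_prior.copy()
    L = 2.0 * (np.linalg.norm(A, 2) ** 2 + gamma)  # gradient Lipschitz constant
    step = 1.0 / L
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ g - v_obs) + 2.0 * gamma * (g - g_prior)
        g = np.maximum(g - step * grad, 0.0)        # project onto g >= 0
    return g

# toy network: 3 links, 2 OD pairs; column j gives the fraction of
# OD pair j's demand that uses each link
A = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [0.5, 0.0]])
g_true = np.array([40.0, 25.0])
v_obs = A @ g_true                    # noise-free observed link flows
g_hat = estimate_od(A, v_obs, g_prior=np.array([30.0, 30.0]))
```

In the thesis the proportions themselves depend on the demand through the user equilibrium, which is what produces the bilevel structure and motivates the descent heuristic.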
66

Liquidity and optimal consumption with random income

Zhelezov, Dmitry, Yamshchikov, Ivan January 2011 (has links)
In the first part of our work we focus on a model of optimal consumption with random income. We provide the three-dimensional equation for this model, demonstrate the reduction to the two-dimensional case, and provide, for two different utility functions, a full point-symmetry analysis of the equations. We also demonstrate that for logarithmic utility there exists a unique and smooth viscosity solution, whose existence, as far as we know, was never demonstrated before. In the second part of our work we develop the concept of an empirical liquidity measure. We review previous work on this issue, discuss the proposed definitions, and develop our own empirical measure, based on an intuitive mathematical model and comprising several features of the existing definitions. We then validate the proposed measure on real market data and demonstrate its advantages for measuring illiquidity.
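For context, a classical empirical illiquidity measure of the kind the second part reviews is the Amihud (2002) ratio; the sketch below shows that well-known measure on invented data, not the measure constructed in the thesis:

```python
import numpy as np

def amihud_illiquidity(prices, volumes):
    """Amihud illiquidity ratio: the average of |daily return| divided
    by traded volume. Larger values mean a given trading volume moves
    the price more, i.e. a less liquid asset."""
    prices = np.asarray(prices, dtype=float)
    volumes = np.asarray(volumes, dtype=float)
    abs_returns = np.abs(np.diff(prices) / prices[:-1])
    return np.mean(abs_returns / volumes[1:])

# a liquid series (small moves, high volume) versus an illiquid one
liquid = amihud_illiquidity([100, 100.2, 100.1, 100.3],
                            [1e6, 1.1e6, 0.9e6, 1e6])
illiquid = amihud_illiquidity([100, 103.0, 99.5, 104.0],
                              [1e4, 0.8e4, 1.2e4, 1e4])
```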
67

Provisions estimation for portfolio of CDO in Gaussian financial environment

Maximchuk, Oleg, Volkov, Yury January 2011 (has links)
The problem of managing portfolio provisions is of very high importance for any financial institution. In this paper we provide both static and dynamic models of provision estimation, for the case when the decision about provisions is made at the initial time in the absence of information, as well as for the cases of complete and incomplete information. A hedging strategy for the case of a defaultable market is also presented as another tool for reducing the risk of default. The default time is modelled as the first-passage time of a standard Brownian motion through a deterministic barrier. Some methods of numerical provision estimation are also presented.
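For a constant barrier, the first-passage default time has a closed-form crossing probability via the reflection principle, which can be checked by simulation (a sketch for the constant-barrier special case; the thesis treats general deterministic barriers):

```python
import math
import random

def default_prob(b, T):
    """Probability that a standard Brownian motion started at 0 hits a
    constant barrier b > 0 before time T. By the reflection principle,
        P(tau <= T) = 2 * (1 - Phi(b / sqrt(T))),
    where Phi is the standard normal CDF."""
    phi = 0.5 * (1.0 + math.erf(b / math.sqrt(2.0 * T)))
    return 2.0 * (1.0 - phi)

def default_prob_mc(b, T, paths=5000, steps=400):
    """Crude Monte Carlo check on discretized paths (discrete monitoring
    biases the estimate slightly low, since crossings between grid
    points are missed)."""
    dt = T / steps
    rng = random.Random(7)
    hits = 0
    for _ in range(paths):
        w = 0.0
        for _ in range(steps):
            w += rng.gauss(0.0, 1.0) * math.sqrt(dt)
            if w >= b:
                hits += 1
                break
    return hits / paths

exact = default_prob(b=1.0, T=1.0)
approx = default_prob_mc(b=1.0, T=1.0)
```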
68

Taktisk bemanningsplanering av läkare : modellutveckling och en pilotstudie / Tactical Workforce Planning of Physicians : model development and a pilot study

Lundén, Anna January 2010 (has links)
Scheduling of staff in the health care industry is typically done by hand, consuming a lot of time and effort. Planning the work for a group of physicians is a complex task, which must take into account factors like individual preferences and competences among the physicians. This thesis studies whether automated tactical workforce planning, with a time horizon of half a year to a year, could facilitate this task. The thesis presents a goal programming model for generating suggested workforce plans for physicians. The model has been implemented in AMPL and is solved using CPLEX. During the development of the mathematical model, alternative formulations were tested and evaluated, and some of these are presented in the report. The most promising of them, based solely on goal programming, has been tested in a pilot study on data from the Oncology Clinic at the University Hospital in Linköping. The flexibility of the model made it easy to use on the data provided by the clinic. The result of the pilot study indicates that the developed model has the capacity to give reasonable suggestions for workforce plans.
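A goal programming model of this kind can be sketched in miniature with deviation variables and a weighted linear objective (hypothetical hour targets and weights, solved here with SciPy rather than AMPL/CPLEX):

```python
from scipy.optimize import linprog

# One physician, two duties with weekly hour targets of 20 (clinic) and
# 15 (ward), but only 32 working hours available. Deviation variables
# dp/dm measure over- and under-attainment of each target; the objective
# minimizes weighted deviations, clinic hours weighted twice as heavily.
# All numbers are invented for illustration, not taken from the thesis.
#
# variables: [x_clinic, x_ward, dp1, dm1, dp2, dm2], all >= 0 (default)
c = [0, 0, 2, 2, 1, 1]            # penalize deviations, not the hours
A_eq = [[1, 0, -1, 1, 0, 0],      # x_clinic - dp1 + dm1 = 20
        [0, 1, 0, 0, -1, 1]]      # x_ward   - dp2 + dm2 = 15
b_eq = [20, 15]
A_ub = [[1, 1, 0, 0, 0, 0]]       # x_clinic + x_ward <= 32 hours
b_ub = [32]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
x_clinic, x_ward = res.x[0], res.x[1]
```

Since the 3-hour shortfall is cheaper to absorb on the lower-weighted ward goal, the optimum keeps the clinic target fully met and under-staffs the ward.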
70

Optimering av beställningsrutiner och lagernivåer av färska råvaror hos en liten restaurang / Optimization of ordering routines and inventory levels of perishable products in a small restaurant

Hedengren, Sofia, Zargari Marandi, Ronya January 2021 (has links)
This thesis aimed to find a suitable inventory model for Moraberg AB's ordering routines, in order to optimize inventory levels and reduce food waste for two perishable products. As the demand at Moraberg AB was unknown, a regression model was developed to predict it and to investigate whether there was a linear relationship between a few parameters and the demand. The parameters examined were weekday, temperature, precipitation, and the number of Covid-19 infections. The model was based on historical sales data for the years 2018-2020. Two demand models were developed: the first contained all the mentioned parameters except the number of Covid-19 infections, and the second contained all parameters. The results showed that temperature, precipitation, and the number of Covid-19 infections have a weak dependence on the demand, whereas the parameter weekday showed a strong dependence. Analysis of the two models showed no signs of multicollinearity, and neither model violated the five regression assumptions. Furthermore, the results showed that model 2 performed better than model 1. The inventory model most suitable for Moraberg AB, given the resources and limitations within the framework of this thesis, was the deterministic periodic-review model, which can be solved by dynamic programming. A numerical example was solved using this inventory model together with the second demand model, based on forecasts from week 17 of 2021.
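A regression demand model with weekday dummies plus weather covariates, of the general kind described, can be sketched on synthetic data (the actual sales data are not reproduced here, and the coefficients below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
weekday = rng.integers(0, 7, n)          # 0 = Monday, ..., 6 = Sunday
temp = rng.normal(15, 8, n)              # daily temperature
precip = rng.exponential(2, n)           # daily precipitation

# synthetic "true" demand: strong weekday effect, weak weather effect
base = np.array([80, 75, 78, 82, 120, 140, 60], dtype=float)
demand = base[weekday] + 0.3 * temp - 0.5 * precip + rng.normal(0, 5, n)

# design matrix: one dummy per weekday (so no separate intercept),
# then the two weather covariates
X = np.column_stack([(weekday == d).astype(float) for d in range(7)]
                    + [temp, precip])
beta, *_ = np.linalg.lstsq(X, demand, rcond=None)
predicted = X @ beta
```

The fitted weekday coefficients recover the per-day demand levels, mirroring the thesis's finding that weekday dominates the weather covariates; forecasts from such a model would then feed the periodic-review inventory computation.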
