361

Utilizing Problem Structure in Optimization of Radiation Therapy

Carlsson, Fredrik January 2008 (has links)
In this thesis, optimization approaches for intensity-modulated radiation therapy are developed and evaluated, with a focus on numerical efficiency and treatment delivery aspects. The first two papers deal with strategies for solving fluence map optimization problems efficiently while avoiding solutions with jagged fluence profiles. The last two papers concern optimization of step-and-shoot parameters, with emphasis on generating treatment plans that can be delivered efficiently and accurately. In the first paper, the problem dimension of a fluence map optimization problem is reduced through a spectral decomposition of the Hessian of the objective function. The weights of the eigenvectors corresponding to the p largest eigenvalues are introduced as optimization variables, and the impact on the solution of varying p is studied. Including only a few eigenvector weights results in a faster initial decrease of the objective value, but an inferior solution, compared to optimization of the bixel weights. An approach combining eigenvector weights and bixel weights produces improved solutions, but at the expense of the pre-computational time for the spectral decomposition. So-called iterative regularization is performed on fluence map optimization problems in the second paper. The idea is to find regular solutions by utilizing an optimization method that is able to find near-optimal solutions with non-jagged fluence profiles in a few iterations. The suitability of a quasi-Newton sequential quadratic programming method is demonstrated by comparing the treatment quality of deliverable step-and-shoot plans, generated through leaf sequencing with a fixed number of segments, for different numbers of bixel-weight iterations. A conclusion is that over-optimization of the fluence map optimization problem prior to leaf sequencing should be avoided. An approach for dynamically generating multileaf collimator segments using column generation combined with optimization of segment shapes and weights is presented in the third paper. Numerical results demonstrate that the adjustment of leaf positions improves the plan quality and that satisfactory treatment plans are found with few segments. The method provides a tool for exploring the trade-off between plan quality and treatment complexity by generating a sequence of deliverable plans of increasing quality. The final paper is devoted to understanding the ability of the column generation approach of the third paper to find near-optimal solutions with very few columns compared to the problem dimension. The impact of different restrictions on the generated columns is studied, both in terms of numerical behaviour and convergence properties. A bound on the two-norm of the columns results in the conjugate-gradient method. Numerical results indicate that the appealing properties of the conjugate-gradient method on ill-conditioned problems are inherited by the column generation approach of the third paper.
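As a rough illustration of the dimension-reduction idea in the first paper, the following minimal NumPy sketch optimizes only the weights of the p leading Hessian eigenvectors of an unconstrained quadratic objective. The actual fluence map problem has bounds and non-quadratic penalty terms; all names and sizes below are illustrative, not taken from the thesis.

```python
import numpy as np

def reduced_quadratic_min(H, b, p):
    """Minimize 0.5 x'Hx - b'x over the span of the p leading
    eigenvectors of H, as a stand-in for full bixel-weight optimization."""
    eigvals, eigvecs = np.linalg.eigh(H)   # eigenvalues in ascending order
    V = eigvecs[:, -p:]                    # p leading eigenvectors
    lam = eigvals[-p:]
    # In the reduced variables w (with x = V w) the Hessian is diagonal,
    # so the optimality condition diag(lam) w = V'b is solved directly.
    w = (V.T @ b) / lam
    return V @ w                           # map back to bixel space

# Toy example: a random symmetric positive definite "Hessian".
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = A @ A.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x_reduced = reduced_quadratic_min(H, b, p=10)
x_full = np.linalg.solve(H, b)
print(np.linalg.norm(x_reduced - x_full))  # gap to the full-dimension optimum
```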
362

Modeling and Model Reduction by Analytic Interpolation and Optimization

Fanizza, Giovanna January 2008 (has links)
This thesis consists of six papers. The main topic of all these papers is the modeling of a class of linear time-invariant systems. The system class is parameterized in the context of interpolation theory with a degree constraint. In the papers included in the thesis, this parameterization is the key tool for the design of dynamical system models in fields such as spectral estimation and model reduction. A problem in spectral estimation amounts to estimating a spectral density function that captures characteristics of the stochastic process, such as covariances, cepstrum, Markov parameters and the frequency response of the process. A model reduction problem consists in finding a low-order system which replaces the original one so that the behavior of both systems is similar in an appropriately defined sense. In Paper A a new spectral estimation technique based on the rational covariance extension theory is proposed. The novelty of this approach is in the design of a spectral density that simultaneously matches covariances optimally and approximates the frequency response of a given process. In Paper B a model reduction problem is considered. In the literature there are several methods to perform model reduction. Our attention is focused on methods which preserve, in the model reduction phase, the stability and positive real properties of the original system. A reduced-order model is computed employing the analytic interpolation theory with a degree constraint. We observe that in this theory there is freedom in the placement of the spectral zeros and interpolation points. This freedom can be utilized for the computation of a rational positive real function of low degree which best approximates a given system. A problem left open in Paper B is how to select spectral zeros and interpolation points in a systematic way in order to obtain the best approximation of a given system. This problem is the main topic of Paper C. Here, the problem is investigated in the analytic interpolation context, and spectral zeros and interpolation points are obtained as the solution of an optimization problem. In Paper D, the problem of modeling a floating body by a positive real function is investigated. The main focus is on modeling the radiation forces and moment. The radiation forces are described as the forces that make a floating body oscillate in calm water. These forces are passive and usually they are modeled with systems of high degree. Thus, for efficient computer simulation it is necessary to obtain a low-order system which approximates the original one. In this paper, the procedure developed in Paper C is employed; the paper thus demonstrates the usefulness of the methodology described in Paper C for a real-world application. In Paper E, an algorithm to compute the steady-state solution of a discrete-type Riccati equation, the Covariance Extension Equation, is considered. The algorithm is based on a homotopy continuation method with predictor-corrector steps. Although this approach does not seem to offer a particular advantage over previous solvers, it provides insights into issues such as positive degree and model reduction, since the rank of the solution of the covariance extension problem coincides with the degree of the shaping filter. In Paper F a new algorithm for the computation of an analytic interpolant of bounded degree is proposed. It applies to the class of non-strictly positive real interpolants and is capable of treating the case with boundary spectral zeros. Thus, in Paper F, we deal with a class of interpolation problems which could not be treated by the optimization-based algorithm proposed by Byrnes, Georgiou and Lindquist. The new procedure computes interpolants by solving a system of nonlinear equations, whose solution is obtained by a homotopy continuation method.
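Papers E and F both rely on homotopy continuation to solve a system of nonlinear equations. Purely as a generic illustration of that technique (not the thesis's actual equations), here is a minimal predictor-corrector continuation sketch for F(x) = 0 in Python; the toy system and all names are illustrative.

```python
import numpy as np

def homotopy_solve(F, J, x0, steps=50, newton_iters=5):
    """Solve F(x) = 0 by tracking H(x, t) = F(x) - (1 - t) * F(x0)
    from t = 0 (trivially solved by x0) to t = 1 (the target system).
    J(x) is the Jacobian of F; Euler predictor plus Newton corrector."""
    x = x0.copy()
    F0 = F(x0)
    for k in range(1, steps + 1):
        dt = 1.0 / steps
        t = k * dt
        # Predictor: along the path H = 0 we have J(x) dx/dt + F0 = 0,
        # so one Euler step is dx = -J(x)^{-1} F0 * dt.
        x = x - np.linalg.solve(J(x), F0) * dt
        # Corrector: Newton iterations on H(x, t) = F(x) - (1 - t) * F0 = 0.
        for _ in range(newton_iters):
            x = x - np.linalg.solve(J(x), F(x) - (1.0 - t) * F0)
    return x

# Toy system with solution (1, 2): x0^2 + x1 = 3 and x0 + x1^2 = 5.
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
print(homotopy_solve(F, J, np.array([1.0, 1.0])))
```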
363

Inverse Problems in Analytic Interpolation for Robust Control and Spectral Estimation

Karlsson, Johan January 2008 (has links)
This thesis is divided into two parts. The first part deals with the Nevanlinna-Pick interpolation problem, a problem which occurs naturally in several applications such as robust control, signal processing and circuit theory. We consider the problem of shaping and approximating solutions to the Nevanlinna-Pick problem in a systematic way. In the second part, we study distance measures between power spectra for spectral estimation. We postulate a situation where we want to quantify robustness based on a finite set of covariances, and this leads naturally to considering the weak*-topology. Several weak*-continuous metrics are proposed and studied in this context. In the first paper we consider the correspondence between weighted entropy functionals and minimizing interpolants in order to find appropriate interpolants for, e.g., control synthesis. There are two basic issues that we address: we first characterize admissible shapes of minimizers by studying the corresponding inverse problem, and then we develop effective ways of shaping minimizers via suitable choices of weights. These results are used in order to systematize feedback control synthesis to obtain frequency-dependent robustness bounds with a constraint on the controller degree. The second paper studies contractive interpolants obtained as minimizers of a weighted entropy functional and analyzes the role of weights and interpolation conditions as design parameters for shaping the interpolants. We first show that, if, for a sequence of interpolants, the values of the corresponding entropy gains converge to the optimum, then the interpolants converge in H_2, but not necessarily in H-infinity. This result is then used to describe the asymptotic behaviour of the interpolant as an interpolation point approaches the boundary of the domain of analyticity. A quite comprehensive theory of analytic interpolation with degree constraint, dealing with rational analytic interpolants with an a priori bound, has been developed in recent years. In the third paper, we consider the limit case when this bound is removed, and only stable interpolants with a prescribed maximum degree are sought. This leads to weighted H_2 minimization, where the interpolants are parameterized by the weights. The inverse problem of determining the weight given a desired interpolant profile is considered, and a rational approximation procedure based on the theory is proposed. This provides a tool for tuning the solution to attain design specifications. The purpose of the fourth paper is to study the topology and develop metrics that allow for localization of power spectra, based on second-order statistics. We show that the appropriate topology is the weak*-topology and give several examples of how to construct such metrics. This allows us to quantify uncertainty of spectra in a natural way and to calculate a priori bounds on spectral uncertainty, based on second-order statistics. Finally, we study identification of spectral densities and relate this to the trade-off between resolution and variance of spectral estimates. In the fifth paper, we present an axiomatic framework for seeking distances between power spectra. The axioms require that the sought metric respects the effects of additive and multiplicative noise in reducing our ability to discriminate spectra. They also require continuity of statistical quantities with respect to perturbations measured in the metric. We then present a particular metric which abides by these requirements. The metric is based on the Monge-Kantorovich transportation problem and is contrasted with an earlier Riemannian metric based on the minimum-variance prediction geometry of the underlying time series. It is also compared with the more traditional Itakura-Saito distance measure, as well as the aforementioned prediction metric, on two representative examples.
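The fifth paper's metric builds on the Monge-Kantorovich transportation problem. As a loose illustration only (not the metric constructed in the thesis), a Wasserstein-1 transport distance between two normalized power spectra can be computed with SciPy; the AR(1)-style spectra below are illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two AR(1)-style power spectra on a frequency grid (illustrative only).
w = np.linspace(0.0, np.pi, 512)
spec = lambda a: 1.0 / np.abs(1.0 - a * np.exp(-1j * w))**2
s1, s2 = spec(0.5), spec(0.6)

# Treat the normalized spectra as distributions over frequency and
# compute the Monge-Kantorovich (Wasserstein-1) transport distance.
d = wasserstein_distance(w, w,
                         u_weights=s1 / s1.sum(),
                         v_weights=s2 / s2.sum())
print(f"transport distance: {d:.4f}")
```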
364

Computation of Mileage Limits for Traveling Salesmen by Means of Optimization Techniques

Torstensson, Johan January 2008 (has links)
Many companies have traveling salesmen who market and sell their products. This results in much traveling by car due to the daily customer visits, which causes costs for the company, in the form of travel expense compensation, and environmental effects, in the form of carbon dioxide pollution. As many companies are certified according to environmental management systems, such as ISO 14001, environmental work becomes more and more important as the environmental consciousness of companies, authorities and the public increases. The main task of this thesis is to compute reasonable limits on the mileage of the salesmen; these limits are based on specific conditions for each salesman's district. The objective is to implement a heuristic algorithm that optimizes the customer tours for an arbitrarily chosen month, which will represent a "standard" month. The output of the algorithm, the computed distances, will constitute a mileage limit for the salesman. The algorithm consists of a constructive heuristic that builds an initial solution, which is modified if infeasible. This solution is then improved by a local search algorithm preceding a genetic algorithm, whose task is to improve the tours separately. This method for computing mileage limits for traveling salesmen generates good solutions in the form of realistic tours. The mileage limits could be improved if the input data were more accurate and adjusted to each district, but the suggested method does what it is supposed to do.
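As a small sketch of the local-search ingredient such a heuristic typically contains, here is a classic 2-opt improvement step for a single tour in Python. The thesis's constructive heuristic and genetic operators are not reproduced, and the instance below is illustrative.

```python
import numpy as np

def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    """Classic 2-opt: reverse tour segments as long as that shortens the tour."""
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

# Toy instance: random customer coordinates for one district.
rng = np.random.default_rng(1)
pts = rng.random((12, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
tour = two_opt(list(range(12)), dist)
print(tour_length(tour, dist))
```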
365

Packning i tid och rum : Korologisk förändring och strategier att hantera trängsel i handelsträdgården, bostadsområdet och på begravningsplatsen (Crowding in Time and Space: Chorological change and strategies for handling crowding in the market garden, the residential area and the cemetery)

Windarp, Helén January 2006 (has links)
The study Packning i tid och rum (Crowding in Time and Space) is a Master Thesis in Human Geography within Geography, presented at Södertörn University College. The aim is to investigate the connections between time and space, more particularly, geographical changes over time. This is done by studying land use as a phenomenon and ongoing processes in demarcated areas. Special interest is given to how distinct areas are used. The study deals with three different kinds of sites at three levels of scale: a market garden, cemeteries and a residential area. The main focus of the study is on the cemeteries. Sources of geographical data and other information are geographical information systems, statistics, interviews, the author's own observations, and photographs. This material has been processed with simple statistical methods, map studies, and qualitative methods. Time geography and the new regional geography are used as a theoretical framework. There is an ambition to search for general understanding. The work is strongly inspired by the geographer Torsten Hägerstrand's work and approach. It is also influenced by systems theory. The results confirm that there is a closer crowding of geographic objects in time and space within the cemeteries. Chorological changes could indicate similar processes at the market garden and the residential area. Space is a limited resource, and the packing problems it gives rise to need to be solved; several strategies for handling the crowding are observed. Finally, it is discussed whether closer crowding demands more registration, measurement and restrictions, and why some things are allowed to take up large amounts of space because they are temporary.
366

Feasible Direction Methods for Constrained Nonlinear Optimization : Suggestions for Improvements

Mitradjieva-Daneva, Maria January 2007 (has links)
This thesis concerns the development of novel feasible direction type algorithms for constrained nonlinear optimization. The new algorithms are based upon enhancements of the search direction determination and the line search steps. The Frank-Wolfe method is popular for solving certain structured linearly constrained nonlinear problems, although its rate of convergence is often poor. We develop improved Frank-Wolfe type algorithms based on conjugate directions. In the conjugate direction Frank-Wolfe method a line search is performed along a direction which is conjugate to the previous one with respect to the Hessian matrix of the objective. A further refinement of this method is derived by applying conjugation with respect to the last two directions, instead of only the last one. The new methods are applied to the single-class user traffic equilibrium problem, the multi-class user traffic equilibrium problem under social marginal cost pricing, and the stochastic transportation problem. In a limited set of computational tests the algorithms turn out to be quite efficient. Additionally, a feasible direction method with multi-dimensional search for the stochastic transportation problem is developed. We also derive a novel sequential linear programming algorithm for general constrained nonlinear optimization problems, with the intention of being able to attack problems with large numbers of variables and constraints. The algorithm is based on inner approximations of both the primal and the dual spaces, which yields a method combining column and constraint generation in the primal space. / The articles are not published due to copyright restrictions.
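For orientation, here is a minimal sketch of the classical Frank-Wolfe iteration over the unit simplex, the baseline that the thesis's conjugate-direction variants improve upon. The conjugate-direction line searches themselves are not reproduced, and the toy objective is illustrative.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=100):
    """Classical Frank-Wolfe over the unit simplex: the linearized
    subproblem min_s <grad f(x), s> is solved by a one-hot vertex."""
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # vertex minimizing the linearization
        gamma = 2.0 / (k + 2.0)          # standard open-loop step size
        x = (1 - gamma) * x + gamma * s  # move toward the vertex
    return x

# Toy quadratic: f(x) = 0.5 * ||x - c||^2 restricted to the simplex.
c = np.array([0.1, 0.5, 0.4])
grad = lambda x: x - c
x = frank_wolfe_simplex(grad, np.ones(3) / 3)
print(x)  # approaches c, which already lies on the simplex
```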
367

The Origin-Destination Matrix Estimation Problem : Analysis and Computations

Peterson, Anders January 2007 (has links)
For most kinds of analyses in the field of traffic planning, there is a need for origin-destination (OD) matrices, which specify the travel demands between the origin and destination nodes in the network. This thesis concerns the OD-matrix estimation problem, that is, the calculation of OD-matrices using observed link flows. Both time-independent and time-dependent models are considered, and we also study the placement of link flow detectors. Many methods have been suggested for OD-matrix estimation in time-independent models, which describe an average traffic situation. We assume a user equilibrium to hold for the link flows in the network and recognize a bilevel structure of the estimation problem. A descent heuristic is proposed, in which special attention is given to the issue of calculating the change of a link flow with respect to a change of the travel demand in a certain pair of origin and destination nodes. When a time dimension is considered, the estimation problem becomes more complex. Besides the problem of distributing the travel demand onto routes, the flow propagation in time and space must also be handled. The time-dependent OD-matrix estimation problem is the subject of two studies. The first is a case study, where the conventional estimation technique is improved through introducing pre-adjustment schemes, which exploit the structure of the information contained in the OD-matrix and the link flow observations. In the second study, an algorithm for time-independent estimation is extended to the time-dependent case and tested for a network from Stockholm, Sweden. Finally, we study the underlying problem of finding those links where traffic flow observations are to be performed, in order to ensure the best possible quality of the estimated OD-matrix. There are different ways of quantifying a common goal to cover as much traffic as possible, and we create an experimental framework in which they can be evaluated. Presupposing that consistent flow observations from all the links in the network yield the best estimate of the OD-matrix, the lack of observations from some links results in a relaxation of the estimation problem, and a poorer estimate. We formulate the problem of placing link flow detectors so as to achieve the least relaxation with a limited number of detectors.
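The thesis works with a bilevel, equilibrium-based formulation. As a drastically simplified illustration, if the route-choice proportions are held fixed in an assignment matrix, the estimation collapses to a nonnegative least-squares fit of OD demands to observed link flows, as in this sketch (all numbers illustrative).

```python
import numpy as np
from scipy.optimize import nnls

# Simplified setting: observed link flows v relate to OD demands d
# through a fixed assignment matrix P (the fraction of each OD pair's
# demand using each link), so v ~ P d with d >= 0.
P = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [1.0, 1.0, 0.0]])            # 3 links x 3 OD pairs
d_true = np.array([10.0, 20.0, 8.0])
v_obs = P @ d_true + np.array([0.5, -0.3, 0.2])  # noisy link counts

d_est, residual = nnls(P, v_obs)           # nonnegative least squares
print(d_est, residual)
```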
368

Liquidity and optimal consumption with random income

Zhelezov, Dmitry, Yamshchikov, Ivan January 2011 (has links)
In the first part of our work we focus on the model of optimal consumption with random income. We provide the three-dimensional equation for this model, demonstrate the reduction to the two-dimensional case and, for two different utility functions, provide a full point-symmetry analysis of the equations. We also demonstrate that for the logarithmic utility there exists a unique and smooth viscosity solution, the existence of which, as far as we know, has not been demonstrated before. In the second part of our work we develop the concept of an empirical liquidity measure. We provide a retrospective view of the works on this issue, discuss the proposed definitions and develop our own empirical measure, based on an intuitive mathematical model and comprising several features of the earlier definitions. We then verify the proposed measure on real market data and demonstrate its advantages for measuring illiquidity.
369

Provisions estimation for portfolio of CDO in Gaussian financial environment

Maximchuk, Oleg, Volkov, Yury January 2011 (has links)
The problem of managing portfolio provisions is of very high importance for any financial institution. In this paper we provide both static and dynamic models of provision estimation, for the case when the decision about provisions is made at the initial moment of time in the absence of information, and for the cases of complete and incomplete information. A hedging strategy for the case of a defaultable market is also presented in this work, as another tool for reducing the risk of default. The default time is modelled as the first-passage time of a standard Brownian motion through a deterministic barrier. Some methods of numerical provision estimation are also presented.
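The default-time model named above (first passage of a standard Brownian motion through a deterministic barrier) is straightforward to explore by Monte Carlo. The following sketch is illustrative only, with an arbitrarily chosen barrier and horizon.

```python
import numpy as np

def first_passage_times(barrier, T=5.0, n_steps=5000, n_paths=10000, seed=0):
    """Simulate first-passage times of a standard Brownian motion
    through a deterministic barrier b(t); np.inf means no hit by T."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    t = np.linspace(dt, T, n_steps)
    W = np.cumsum(rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt), axis=1)
    hit = W <= barrier(t)               # crossing the barrier from above
    first = np.argmax(hit, axis=1)      # index of first True per path
    tau = np.where(hit.any(axis=1), t[first], np.inf)
    return tau

# Illustrative linearly decreasing barrier b(t) = -1 - 0.1 t.
tau = first_passage_times(lambda t: -1.0 - 0.1 * t)
print("P(default by T) ~", np.isfinite(tau).mean())
```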
370

Avhopp inom 12-stegsbehandling : En studie om vilka faktorer som finns till klienters avhopp inom 12-stegsbehandling och eventuella skillnader mellan könen. (Dropout from 12-step treatment: a study of the factors behind clients' dropping out of 12-step treatment and possible gender differences.)

Ferm, Anita, Josefsson, Sanna January 2011 (has links)
No description available.
