About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1. Algoritmes vir die maksimering van konvekse en verwante knapsakprobleme [Algorithms for the maximisation of convex and related knapsack problems]

Visagie, S. E. January 2007.
Thesis (PhD)--University of Stellenbosch, 2007. Includes bibliography. Also available via the Internet.
2. Mathematical optimization techniques for cognitive radar networks

Rossetti, Gaia. January 2018.
This thesis discusses mathematical optimization techniques for waveform design in cognitive radars. These techniques are developed with an increasing level of sophistication, starting from a bistatic model (i.e. two transmitters and a single receiver) and ending with a cognitive network (i.e. multiple transmitting and multiple receiving radars). The environment under investigation always features strong signal-dependent clutter and noise. All algorithms are based on an iterative waveform-filter optimization. The waveform optimization is based on convex optimization techniques and on the exploitation of initial radar waveforms characterized by desired auto- and cross-correlation properties. Finally, robust optimization techniques are introduced to account for the assumptions made by cognitive radars on certain second-order statistics, such as the covariance matrix of the clutter.

More specifically, initial optimization techniques were proposed for the case of bistatic radars. By maximizing the signal-to-interference-and-noise ratio (SINR) under certain constraints on the transmitted signals, it was possible to iteratively optimize both the orthogonal transmission waveforms and the receiver filter. Subsequently, this work was extended to a convex optimization framework for waveform design in bistatic radars where both radars transmit and receive to detect targets. The method exploits prior knowledge of the environment to maximize the accumulated target return signal power while keeping the disturbance power at unity at both radar receivers.

The thesis further proposes convex-optimization-based waveform designs for multiple-input multiple-output (MIMO) cognitive radars, in which all radars within the system both transmit and receive signals for detecting targets. The proposed model investigates two complementary optimization techniques. The first aims at optimizing the SINR of a specific radar while keeping the SINR of the remaining radars at desired levels. The second optimizes the SINR of all radars using a max-min optimization criterion.

To account for possible mismatches between actual and estimated parameters, the thesis includes robust optimization techniques. Initially, the multistatic, signal-dependent model was tested against existing worst-case and probabilistic methods. These methods proved overly conservative and too generic for the considered signal-dependent clutter scenario. Therefore, a new approach was derived in which uncertainty is assumed directly on the radar cross-section and Doppler parameters of the clutter. Approximations based on Taylor series were invoked to make the optimization problem convex and subsequently determine robust waveforms with specific SINR outage constraints.

Finally, the thesis introduces robust optimization techniques for through-the-wall radars. These are also cognitive but rely on different optimization techniques than the ones previously discussed. By noticing the similarities between the minimum variance distortionless response (MVDR) problem and the matched-illumination one, robust optimization techniques are introduced that account for uncertainty in environment-related parameters. Various performance analyses demonstrate the effectiveness of all the above algorithms in providing a significant increase in SINR in environments affected by very strong clutter and noise.
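For a fixed transmit waveform, the filter half of such an iterative waveform-filter loop admits a closed form: the receive filter maximizing SINR = |w^H s|^2 / (w^H R w), for target return s and clutter-plus-noise covariance R, is w = R^{-1} s up to scaling (the whitened matched filter). The following minimal Python sketch illustrates that step with placeholder matrices; it is not code from the thesis, and the waveform half (a convex program under the constraints described above) would alternate with it.

```python
# Sketch of the SINR-optimal receive filter for a fixed waveform.
# Illustrative placeholder data; not the thesis' model or code.
import numpy as np

def sinr_optimal_filter(s: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Whitened matched filter: argmax_w |w^H s|^2 / (w^H R w) = R^{-1} s."""
    return np.linalg.solve(R, s)

def sinr(w: np.ndarray, s: np.ndarray, R: np.ndarray) -> float:
    """SINR of filter w against target return s and disturbance covariance R."""
    return float(np.abs(w.conj() @ s) ** 2 / np.real(w.conj() @ R @ w))

rng = np.random.default_rng(0)
n = 8
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)    # target return
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
R = B @ B.conj().T + np.eye(n)                               # Hermitian PD covariance

w = sinr_optimal_filter(s, R)
# The achieved optimum equals s^H R^{-1} s, the well-known closed form.
assert np.isclose(sinr(w, s, R), np.real(s.conj() @ np.linalg.solve(R, s)))
```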
3. Optimization Techniques for Image Processing

Chapagain, Prerak. 01 April 2019.
This research thesis begins with a basic introduction to optimization and image processing. Because there are several different tools for applying optimization to image processing, we first investigated one category of mathematical optimization techniques, namely Convex Optimization. The thesis provides the necessary mathematical background, as well as some challenges of employing Convex Optimization in practice; one major issue is identifying the convexity of the problem in a potential application (Boyd). After a couple of months researching and learning Convex Optimization, my advisor and I decided to take a different route: Heuristic Optimization techniques, and in particular Genetic Algorithms (GA). We conjectured that applying a GA in image processing for the purpose of object matching could yield good results. As a first step, we used MATLAB as the programming language and wrote the GA code from scratch. Next, we applied the GA to object matching: we constructed specific images to demonstrate the effectiveness of the algorithm in identifying objects of interest. The results presented in this thesis indicate that the technique is capable of identifying objects under noisy conditions.
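As an illustration of the approach, here is a minimal GA sketch for template-based object matching. The thesis implementation was written in MATLAB; this Python version, its operators (truncation selection, uniform crossover, Gaussian mutation), and its SSD-based fitness function are illustrative assumptions, not the author's code.

```python
# Minimal GA for locating a template in a noisy image (illustrative sketch).
# A chromosome encodes a candidate (row, col) position; fitness is the
# negative sum of squared differences (SSD) against the image patch there.
import numpy as np

rng = np.random.default_rng(42)

def fitness(image, template, pos):
    r, c = pos
    h, w = template.shape
    return -float(np.sum((image[r:r + h, c:c + w] - template) ** 2))

def evolve(image, template, pop_size=50, generations=100, mut_std=3.0):
    h, w = template.shape
    max_r, max_c = image.shape[0] - h, image.shape[1] - w
    pop = np.column_stack([rng.integers(0, max_r + 1, pop_size),
                           rng.integers(0, max_c + 1, pop_size)])
    for _ in range(generations):
        scores = np.array([fitness(image, template, p) for p in pop])
        parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]  # truncation selection
        # uniform crossover: row gene from one random parent, col gene from another
        idx = rng.integers(0, len(parents), (pop_size, 2))
        children = np.column_stack([parents[idx[:, 0], 0],
                                    parents[idx[:, 1], 1]]).astype(float)
        # Gaussian mutation, clipped to valid positions
        children += rng.normal(0.0, mut_std, children.shape)
        children[:, 0] = np.clip(children[:, 0], 0, max_r)
        children[:, 1] = np.clip(children[:, 1], 0, max_c)
        pop = children.astype(int)
    scores = np.array([fitness(image, template, p) for p in pop])
    return tuple(pop[int(np.argmax(scores))])

# Toy usage: hide the template in a noisy image, then recover its position.
template = rng.random((8, 8))
image = rng.random((64, 64)) * 0.2
image[20:28, 33:41] = template + rng.normal(0, 0.05, (8, 8))
print(evolve(image, template))   # expected near (20, 33)
```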
4. Derivative Free Algorithms for Large Scale Non-smooth Optimization and Their Applications

Tor, Ali Hakan. 01 February 2013.
In this thesis, various numerical methods are developed to solve nonsmooth and, in particular, nonconvex optimization problems. More specifically, …
5. Operator splitting methods for convex optimization: analysis and implementation

Banjac, Goran. January 2018.
Convex optimization problems are a class of mathematical problems which arise in numerous applications. Although interior-point methods can in principle solve these problems efficiently, they may become intractable for large-scale problems or be unsuitable for real-time embedded applications. Iterations of operator splitting methods are relatively simple and computationally inexpensive, which makes them suitable for these applications. However, some of their known limitations are slow asymptotic convergence, sensitivity to ill-conditioning, and inability to detect infeasible problems. The aim of this thesis is to better understand operator splitting methods and to develop reliable software tools for convex optimization.

The main analytical tool in our investigation of these methods is their characterization as the fixed-point iteration of a nonexpansive operator. The fixed-point theory of nonexpansive operators has been studied for several decades. By exploiting the properties of such an operator, it is possible to show that the alternating direction method of multipliers (ADMM) can detect infeasible problems. Although ADMM iterates diverge when the problem at hand is unsolvable, the differences between subsequent iterates converge to a constant vector which is also a certificate of primal and/or dual infeasibility. Reliable termination criteria for detecting infeasibility are proposed based on this result. Similar ideas are used to derive necessary and sufficient conditions for linear (geometric) convergence of an operator splitting method, and a bound on the achievable convergence rate. The new bound turns out to be tight for the class of averaged operators.

Next, the OSQP solver is presented. OSQP is a novel general-purpose solver for quadratic programs (QPs) based on ADMM. The solver is very robust, is able to detect infeasible problems, and has been extensively tested on many problem instances from a wide variety of application areas.

Finally, operator splitting methods can also be effective in nonconvex optimization. The developed algorithm significantly outperforms a common approach based on convex relaxation of the original nonconvex problem.
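Since OSQP is publicly available, a minimal usage sketch of its Python interface may be helpful; the toy problem data below is illustrative and not taken from the thesis.

```python
# OSQP solves quadratic programs of the form
#   minimize (1/2) x^T P x + q^T x   subject to   l <= A x <= u
# using ADMM, and reports infeasibility via the certificates discussed above.
import numpy as np
import osqp
from scipy import sparse

P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

prob = osqp.OSQP()
prob.setup(P, q, A, l, u)
res = prob.solve()
print(res.info.status, res.x)     # 'solved' and the optimal x

# Jointly contradictory constraints (x1 >= 1 and x1 <= 0) make the problem
# primal infeasible; the differences of ADMM iterates then converge to an
# infeasibility certificate and OSQP reports it rather than diverging silently.
A2 = sparse.csc_matrix([[1.0, 0.0], [1.0, 0.0]])
prob2 = osqp.OSQP()
prob2.setup(P, q, A2, np.array([1.0, -np.inf]), np.array([np.inf, 0.0]))
print(prob2.solve().info.status)  # 'primal infeasible'
```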
6. Duality, Derivative-Based Training Methods and Hyperparameter Optimization for Support Vector Machines

Strasdat, Nico. 18 October 2023.
In this thesis we consider the application of Fenchel's duality theory and gradient-based methods to the training and hyperparameter optimization of Support Vector Machines. We show that the dualization of convex training problems is theoretically possible in a rather general formulation. For training problems with a special structure (for instance, standard training problems) we find that the resulting optimality conditions admit a concrete interpretation. This approach leads immediately to the well-known notion of support vectors and to a formulation of the Representer Theorem. The proposed theory is applied to several examples, so that dual formulations of training problems and the associated optimality conditions can be derived straightforwardly. Furthermore, we consider different formulations of the primal training problem which are equivalent under certain conditions, and we argue that the relation of the corresponding solutions to the solution of the dual training problem is not always intuitive.

Based on these findings, we consider the application of customized optimization methods to the primal and dual training problems. A particular realization of Newton's method is derived which can be used to solve the primal training problem accurately. Moreover, we introduce a general convergence framework covering different types of decomposition methods for the solution of the dual training problem; in doing so, we are able to generalize well-known convergence results for the SMO method. Additionally, we discuss the complexity of the SMO method and motivate a shrinking strategy that reduces the computational effort.

In a final theoretical part, we consider the problem of hyperparameter optimization and argue that it can be handled efficiently by means of gradient-based methods if the training problems are formulated appropriately. Finally, we evaluate the theoretical results concerning the training and hyperparameter optimization approaches on several example training problems.
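For context on the decomposition methods mentioned above, here is a sketch of the classic two-variable SMO update in Platt's textbook form; the convergence framework in the thesis generalizes schemes of this kind. The Python code and its simplifications (zero bias, no pair-selection heuristic, no bias update) are illustrative assumptions, not the author's implementation.

```python
# One analytic SMO step on the dual soft-margin SVM problem:
#   max  sum_k a_k - (1/2) sum_{k,m} a_k a_m y_k y_m K[k, m]
#   s.t. 0 <= a_k <= C,  sum_k a_k y_k = 0
# K is the kernel Gram matrix, y in {-1, +1}, C the box bound.
import numpy as np

def smo_pair_update(i, j, alpha, K, y, C):
    """Maximize the dual over (alpha_i, alpha_j) with all other multipliers
    fixed, preserving the equality constraint sum_k alpha_k y_k = 0."""
    f = (alpha * y) @ K                      # decision values (zero bias)
    E = f - y                                # prediction errors
    eta = K[i, i] + K[j, j] - 2.0 * K[i, j]  # curvature along the pair direction
    if eta <= 0.0:                           # skip degenerate directions
        return alpha
    # feasible interval [L, H] keeping both multipliers in [0, C]
    # while staying on the constraint line
    if y[i] != y[j]:
        L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
    else:
        L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    a_j = np.clip(alpha[j] + y[j] * (E[i] - E[j]) / eta, L, H)
    a_i = alpha[i] + y[i] * y[j] * (alpha[j] - a_j)   # restore the equality
    out = alpha.copy()
    out[i], out[j] = a_i, a_j
    return out
```

Sweeping such pair updates over KKT-violating index pairs is exactly the kind of decomposition scheme whose convergence, complexity, and shrinking behaviour the thesis analyzes.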
