131 |
Optimal Reinsurance Designs: from an Insurer’s Perspective
Weng, Chengguo, 09 1900
The research on optimal reinsurance design dates back to the 1960s. For nearly half a century, the quest for optimal reinsurance designs has remained a fascinating subject, drawing significant interest from both academics and practitioners. Its fascination lies in its potential as an effective risk management tool for insurers. There are many ways of formulating the optimal design of reinsurance, depending on the chosen objective and constraints. In this thesis, we address the problem of optimal reinsurance design from an insurer’s perspective. For an insurer, an appropriate use of reinsurance helps to reduce adverse risk exposure and improve the overall viability of the underlying business. On the other hand, reinsurance imposes an additional cost on the insurer in the form of the reinsurance premium. This implies a classical risk-and-reward tradeoff faced by the insurer.
The primary objective of the thesis is to develop theoretically sound yet practical solutions in the quest for optimal reinsurance designs. To achieve this objective, the thesis is divided into two parts. In the first part, a number of reinsurance models are developed and their optimal reinsurance treaties are derived explicitly. This part focuses on risk-measure-minimization reinsurance models and derives the optimal reinsurance treaties under two of the most common risk measures, Value-at-Risk (VaR) and Conditional Tail Expectation (CTE). Additional important economic factors, such as the reinsurance premium budget and the insurer’s profitability, are also considered. The second part proposes an innovative method of formulating reinsurance models, which we refer to as the empirical approach since it exploits the insurer’s empirical loss data explicitly. The empirical approach has the advantage of being practical and intuitively appealing. It is motivated by the difficulty that reinsurance models are often infinite-dimensional optimization problems, so explicit solutions are attainable only in special cases. The empirical approach effectively reformulates the optimal reinsurance problem as a finite-dimensional optimization problem. Furthermore, we demonstrate that second-order cone programming can be used to obtain the optimal solutions for a wide range of reinsurance models formulated by the empirical approach.
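As an illustration of the flavour of the empirical approach (a minimal sketch, not one of the thesis's models), the following Python/cvxpy snippet minimizes the empirical CTE (CVaR) of the insurer's retained cost over a ceded-loss vector defined on observed losses, under an expected-value premium principle with a premium budget; the data, confidence level, loading, and budget are assumed for the example.

```python
import numpy as np
import cvxpy as cp

# Hypothetical empirical loss data for the insurer (illustrative only).
rng = np.random.default_rng(0)
x = rng.lognormal(mean=1.0, sigma=1.0, size=500)   # observed losses x_1, ..., x_n
n = x.size

alpha = 0.95    # CTE/CVaR confidence level (assumed)
theta = 0.20    # safety loading of an expected-value premium principle (assumed)
budget = 1.0    # reinsurance premium budget (assumed)

f = cp.Variable(n)   # ceded loss f(x_i) evaluated at each observed loss
t = cp.Variable()    # auxiliary variable in the Rockafellar-Uryasev CVaR representation

premium = (1 + theta) * cp.sum(f) / n      # empirical expected-value premium
retained = x - f + premium                 # insurer's total retained cost per scenario

# Empirical CTE (CVaR) of the retained cost, minimized over the ceded-loss vector.
cte = t + cp.sum(cp.pos(retained - t)) / ((1 - alpha) * n)

constraints = [f >= 0, f <= x, premium <= budget]
prob = cp.Problem(cp.Minimize(cte), constraints)
prob.solve()
print("minimal empirical CTE of retained cost:", prob.value)
```

With the expected-value premium principle this empirical program is a linear program; swapping in a standard-deviation-type premium principle would add a norm term and turn it into a second-order cone program of the kind mentioned above.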
|
132 |
Interference Management in Non-cooperative Networks
Motahari, Seyed Abolfazl, 02 October 2009
Spectrum sharing is known as a key solution for accommodating the increasing number of users and the growing demand for throughput in wireless networks. While spectrum sharing improves the data rate in sparse networks, it suffers from interference between concurrent links in dense networks. In fact, interference is the primary barrier to enhancing the overall throughput of the network, especially at medium and high signal-to-noise ratios (SNRs). Managing interference to overcome this barrier has emerged as a crucial step in developing efficient wireless networks. This thesis deals with optimum and sub-optimum interference management and cancellation in non-cooperative networks.
Several techniques for interference management, including novel strategies such as interference alignment and structural coding, are investigated. These methods are applied to obtain optimum and sub-optimum coding strategies in such networks. It is shown that no single strategy achieves the maximum throughput in all possible scenarios; in fact, a careful design is required to fully exploit all available resources in each realization of the system.
This thesis begins with a complete investigation of the capacity region of the two-user Gaussian interference channel. This channel models the basic interaction between two users sharing the same spectrum for data communication. New outer bounds that improve on known bounds are derived using genie-aided techniques. It is proved that these outer bounds meet the known inner bounds in some special cases, revealing the sum capacity of this channel over a range of parameters for which it was previously unknown.
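For reference, the two-user Gaussian interference channel is commonly written (in one standard parameterization, not necessarily the exact normalization used in the thesis) as $y_1 = h_{11}x_1 + h_{12}x_2 + z_1$ and $y_2 = h_{21}x_1 + h_{22}x_2 + z_2$, where $z_1, z_2$ are independent unit-variance Gaussian noises, the inputs satisfy average power constraints $E[x_i^2] \le P_i$, and receiver $i$ is interested only in the message of transmitter $i$, so the cross terms $h_{12}x_2$ and $h_{21}x_1$ act as interference.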
A novel coding scheme applicable to networks with single-antenna nodes is proposed next. This scheme converts a single-antenna system into an equivalent Multiple-Input Multiple-Output (MIMO) system with fractional dimensions. Interference can be aligned along these dimensions and higher multiplexing gains can be achieved. Tools from Diophantine approximation in number theory are used to show that the proposed coding scheme in fact mimics the traditional schemes used in MIMO systems, where each data stream is sent along a direction and alignment happens when several streams are received along the same direction. Two types of constellations are proposed for the encoding part, namely single-layer and multi-layer constellations. Using single-layer constellations, the coding scheme is applied to the two-user $X$ channel. It is proved that the total Degrees-of-Freedom (DOF) of the channel, i.e. $\frac{4}{3}$, is achievable almost surely. This is the first example showing that a time-invariant single-antenna system does not fall short of achieving this known upper bound on the DOF.
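Schematically (in notation chosen here for illustration, not the thesis's), a single-layer scheme of this kind transmits $x = \sum_i a_i u_i$, where the $u_i$ are integer data symbols drawn from a finite set $\{0, 1, \dots, Q\}$ and the $a_i$ are real "directions" that are rationally independent; at each receiver, desired and interfering streams arrive as integer combinations along distinct such directions, and Khintchine-Groshev-type results from Diophantine approximation guarantee that, for almost all channel gains, these combinations stay sufficiently separated at high SNR, which is what plays the role of fractional dimensions.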
Using multi-layer constellations, the coding scheme is applied to the symmetric three-user GIC. Achievable DOFs are derived for all channel gains. It is observed that the DOF is everywhere discontinuous as a function of the channel gain. In particular, it is proved that for irrational channel gains the achievable DOF meets the upper bound of $\frac{3}{2}$. For rational gains, the achievable DOF has a gap to the known upper bounds. By allowing carry-over across multiple layers, however, it is shown that higher DOFs can be achieved in the latter case.
The $K$-user single-antenna Gaussian Interference Channel (GIC) is then considered, where the channel coefficients are not necessarily time-variant or frequency selective. It is proved that the total DOF of this channel is $\frac{K}{2}$ almost surely, i.e. each user enjoys half of its maximum DOF. Indeed, we prove that static, time-invariant interference channels are rich enough to allow simultaneous interference alignment at all receivers. To derive this result, we show that single-antenna interference channels can be treated as \emph{pseudo multiple-antenna systems} with infinitely many antennas. This machinery enables us to prove that the real or complex $M \times M$ MIMO GIC achieves its total DOF, i.e. $\frac{MK}{2}$, for $M \geq 1$. The pseudo multiple-antenna systems are developed based on a recent result in Diophantine approximation which states that the convergence part of the Khintchine-Groshev theorem holds for points on non-degenerate manifolds. As a byproduct of the scheme, the total DOFs of the $K \times M$ $X$ channel and the uplink of cellular systems are derived.
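For concreteness, the total DOF referred to here is the standard high-SNR pre-log (stated in one common normalization, which may differ from the thesis's): for a real channel with sum capacity $C_{\Sigma}(\mathrm{SNR})$, $\mathrm{DOF} = \lim_{\mathrm{SNR} \to \infty} C_{\Sigma}(\mathrm{SNR}) / (\tfrac{1}{2}\log \mathrm{SNR})$, with $\log \mathrm{SNR}$ in the denominator for complex signalling; the claim $\mathrm{DOF} = \frac{K}{2}$ thus says that the sum capacity grows like half of what $K$ interference-free point-to-point links would achieve.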
Interference alignment requires perfect knowledge of channel state information at all nodes. This requirement is sometimes infeasible, and users instead invoke random coding to communicate with their corresponding receivers. An alternative form of interference management then needs to be implemented, and this problem is addressed in the last part of the thesis. A coding scheme for a single user communicating in a shared medium is proposed. Moreover, polynomial-time algorithms are proposed to obtain the best achievable rates in the system. Successive rate allocation for a $K$-user interference channel is also performed using polynomial-time algorithms.
|
135 |
Interference Management For Vector Gaussian Multiple Access Channels
Padakandla, Arun, 03 1900
In this thesis, we consider a vector Gaussian multiple access channel (MAC) with users demanding reliable communication at specific (Shannon-theoretic) rates. The objective is to assign vectors and powers to these users such that their rate requirements are met and the sum of received powers is minimized.
We identify this power minimization problem as an instance of a separable convex optimization problem with linear ascending constraints. Under an ordering condition on the slopes of the functions at the origin, an algorithm that determines the optimum point in a finite number of steps is described. This provides a complete characterization of the minimum sum power for the vector Gaussian multiple access channel. Furthermore, we prove a strong duality between the above sum power minimization problem and the problem of sum rate maximization under power constraints.
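In generic form (notation chosen here for illustration), a separable convex optimization problem with linear ascending constraints minimizes $\sum_{j=1}^{n} f_j(x_j)$, with each $f_j$ convex, subject to $x_j \ge 0$, the partial-sum constraints $\sum_{j=1}^{l} x_j \ge \alpha_l$ for $l = 1, \dots, n-1$, and $\sum_{j=1}^{n} x_j = \alpha_n$, where $0 \le \alpha_1 \le \dots \le \alpha_n$; the ordering condition on the slopes $f_j'(0)$ mentioned above is what allows the optimum to be pinned down in finitely many steps.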
We then propose finite step algorithms to explicitly identify an assignment of vectors and powers that solve the above power minimization and sum rate maximization problems. The distinguishing feature of the proposed algorithms is the size of the output vector sets. In particular, we prove an upper bound on the size of the vector sets that is independent of the number of users.
Finally, we restrict the vectors to an orthonormal set. The goal is to identify an assignment of vectors (from an orthonormal set) to users such that the user rate requirements are met with minimum sum power. This is a combinatorial optimization problem. We study the complexity of its decision version. Our results indicate that when the dimensionality of the vector set is part of the input, the decision version is NP-complete.
|
136 |
Robust Control with Complexity Constraint: A Nevanlinna-Pick Interpolation Approach
Nagamune, Ryozo, January 2002
No description available.
|
137 |
Learning algorithms and statistical software, with applications to bioinformatics
Hocking, Toby Dylan, 20 November 2012
Statistical machine learning is a branch of mathematics concerned with developing algorithms for data analysis. This thesis presents new mathematical models and statistical software, and is organized into two parts. In the first part, I present several new algorithms for clustering and segmentation, techniques that attempt to find structure in data, with a focus on applications to cancer data from bioinformatics. In the second part, I focus on statistical software contributions that are practical for everyday data analysis.
|
138 |
Contributions to Signal Processing for MRI
Björk, Marcus, January 2015
Magnetic Resonance Imaging (MRI) is an important diagnostic tool for imaging soft tissue without the use of ionizing radiation. Moreover, through advanced signal processing, MRI can provide more than just anatomical information, such as estimates of tissue-specific physical properties. Signal processing lies at the very core of the MRI process, which involves input design, information encoding, image reconstruction, and advanced filtering. Based on signal modeling and estimation, it is possible to further improve the images, reduce artifacts, mitigate noise, and obtain quantitative tissue information. In quantitative MRI, different physical quantities are estimated from a set of collected images. The optimization problems solved are typically nonlinear, and require intelligent and application-specific algorithms to avoid suboptimal local minima. This thesis presents several methods for efficiently solving different parameter estimation problems in MRI, such as multi-component T2 relaxometry, temporal phase correction of complex-valued data, and minimizing banding artifacts due to field inhomogeneity. The performance of the proposed algorithms is evaluated using both simulation and in-vivo data. The results show improvements over previous approaches, while maintaining a relatively low computational complexity. Using new and improved estimation methods enables better tissue characterization and diagnosis. Furthermore, a sequence design problem is treated, where the radio-frequency excitation is optimized to minimize image artifacts when using amplifiers of limited quality. In turn, obtaining higher fidelity images enables improved diagnosis, and can increase the estimation accuracy in quantitative MRI.
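As a toy illustration of the kind of parameter estimation involved (a sketch under assumed data and model choices, not the thesis's algorithms), the snippet below fits a two-component T2 relaxation model $S(\mathrm{TE}) = c_1 e^{-\mathrm{TE}/T_{2,1}} + c_2 e^{-\mathrm{TE}/T_{2,2}}$ to synthetic echo-train data by nonlinear least squares; the nonconvexity of such fits is precisely why careful initialization and application-specific algorithms matter.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-component T2 relaxation model: S(TE) = c1*exp(-TE/T2a) + c2*exp(-TE/T2b).
def two_comp_t2(te, c1, t2a, c2, t2b):
    return c1 * np.exp(-te / t2a) + c2 * np.exp(-te / t2b)

# Echo times, component values, and noise level are assumed for this example.
te = np.linspace(10.0, 320.0, 32)          # echo times in ms
truth = (0.3, 20.0, 0.7, 80.0)             # (c1, T2a, c2, T2b): fast and slow relaxing pools
rng = np.random.default_rng(1)
signal = two_comp_t2(te, *truth) + 0.01 * rng.standard_normal(te.size)

# Nonlinear least squares; the problem is nonconvex, so bounds and a sensible
# initial guess are needed to avoid poor local minima.
p0 = (0.5, 30.0, 0.5, 100.0)
bounds = ([0.0, 1.0, 0.0, 1.0], [2.0, 200.0, 2.0, 500.0])
params, _ = curve_fit(two_comp_t2, te, signal, p0=p0, bounds=bounds)
print("estimated (c1, T2a, c2, T2b):", params)
```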
|
139 |
Graph-based variational optimization and applications in computer vision
Couprie, Camille, 10 October 2011
Many computer vision applications, such as image filtering, segmentation, and stereovision, can be formulated as optimization problems. Recently, discrete, convex, globally optimal methods have received a lot of attention. Many graph-based methods suffer from metrication artefacts: segmented contours are blocky in areas where contour information is lacking. In the first part of this work, we develop a discrete yet isotropic energy minimization formulation for the continuous maximum flow problem that prevents metrication errors. This new convex formulation leads to a provably globally optimal solution. The interior point method employed can optimize the problem faster than existing continuous methods. The energy formulation is then adapted and extended to multi-label problems, and shows improvements over existing methods. Fast parallel proximal optimization tools have also been tested and adapted for the optimization of this problem. In the second part of this work, we introduce a framework that generalizes several state-of-the-art graph-based segmentation algorithms, namely graph cuts, random walker, shortest paths, and watershed. This generalization allowed us to exhibit a new case, for which we developed a globally optimal optimization method, named "power watershed". Our proposed power watershed algorithm computes a unique global solution to multi-labeling problems and is very fast. We further generalize and extend the framework to applications beyond image segmentation, for example image filtering optimizing an L0-norm energy, stereovision, and fast, smooth surface reconstruction from a noisy cloud of 3D points.
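One common way to write the unified energy behind such a framework (recalled here in illustrative notation; the thesis gives the precise formulation and limit arguments) is $\min_x \sum_{e_{ij} \in E} w_{ij}^{p} |x_i - x_j|^{q}$ over a weighted graph, with $x$ fixed to 1 at foreground seeds and 0 at background seeds and the segmentation obtained by thresholding $x$ at $1/2$; particular exponent choices then recover graph cuts ($q = 1$), the random walker ($q = 2$), and, in the limit $p \to \infty$ with $q$ finite, the power watershed.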
|
140 |
Supervised metric learning with generalization guarantees
Bellet, Aurélien, 11 December 2012
In recent years, the crucial importance of metrics in machine learning algorithms has led to an increasing interest in optimizing distance and similarity functions using knowledge from training data to make them suitable for the problem at hand. This area of research is known as metric learning. Existing methods typically aim at optimizing the parameters of a given metric with respect to some local constraints over the training sample. The learned metrics are generally used in nearest-neighbor and clustering algorithms.
When data consist of feature vectors, a large body of work has focused on learning a Mahalanobis distance, which is parameterized by a positive semi-definite matrix. Recent methods offer good scalability to large datasets. Less work has been devoted to metric learning from structured objects (such as strings or trees), because it often involves complex procedures. Most of this work has focused on optimizing a notion of edit distance, which measures (in terms of the number of operations) the cost of turning one object into another.
We identify two important limitations of current supervised metric learning approaches. First, they allow one to improve the performance of local algorithms such as k-nearest neighbors, but metric learning for global algorithms (such as linear classifiers) has not really been studied so far. Second, and perhaps more importantly, the question of the generalization ability of metric learning methods has been largely ignored.
In this thesis, we propose theoretical and algorithmic contributions that address these limitations. Our first contribution is the derivation of a new kernel function built from learned edit probabilities. Unlike other string kernels, it is guaranteed to be valid and parameter-free. Our second contribution is a novel framework for learning string and tree edit similarities, inspired by the recent theory of (epsilon,gamma,tau)-good similarity functions and formulated as a convex optimization problem. Using uniform stability arguments, we establish theoretical guarantees for the learned similarity that give a bound on the generalization error of a linear classifier built from that similarity. In our third contribution, we extend the same ideas to metric learning from feature vectors by proposing a bilinear similarity learning method that efficiently optimizes the (epsilon,gamma,tau)-goodness. The similarity is learned based on global constraints that are more appropriate to linear classification. Generalization guarantees are derived for our approach, highlighting that our method minimizes a tighter bound on the generalization error of the classifier. Our last contribution is a framework for establishing generalization bounds for a large class of existing metric learning algorithms. It is based on a simple adaptation of the notion of algorithmic robustness and allows the derivation of bounds for various loss functions and regularizers.
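For reference (standard definitions recalled here, not specific to this thesis), the Mahalanobis distance mentioned above is $d_M(x, x') = \sqrt{(x - x')^{\top} M (x - x')}$ with $M$ positive semi-definite, while a bilinear similarity of the kind considered in the third contribution takes the form $K_M(x, x') = x^{\top} M x'$ for a square matrix $M$ that need not be constrained to be positive semi-definite; metric learning then amounts to choosing $M$ so as to satisfy (local or global) constraints derived from labeled training pairs.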
|