11

Compressive sensing using lp optimization

Pant, Jeevan Kumar 26 April 2012 (has links)
Three problems in compressive sensing are investigated: recovery of sparse signals from noise-free measurements, recovery of sparse signals from noisy measurements, and recovery of so-called block-sparse signals from noisy measurements.

In Chapter 2, the reconstruction of sparse signals from noise-free measurements is investigated and three algorithms are developed. The first and second algorithms minimize the approximate L0 and Lp pseudonorms, respectively, in the null space of the measurement matrix using a sequential quasi-Newton algorithm. An efficient line search based on Banach's fixed-point theorem is developed and applied in the second algorithm. The third algorithm minimizes the approximate Lp pseudonorm in the null space by using a sequential conjugate-gradient (CG) algorithm. Simulation results are presented which demonstrate that the proposed algorithms yield improved signal reconstruction performance relative to the iterative reweighted (IR), smoothed L0 (SL0), and L1-minimization based algorithms. They also require less computation than the IR and L1-minimization based algorithms, and the Lp-minimization based algorithms require less computation than the SL0 algorithm.

In Chapter 3, the reconstruction of sparse signals and images from noisy measurements is investigated. First, two algorithms for the reconstruction of signals are developed by minimizing an Lp-pseudonorm regularized squared error using the sequential optimization procedure developed in Chapter 2. The first algorithm takes steps along descent directions computed in the null space of the measurement matrix and its complement space; the second minimizes the objective function in the time domain using a CG algorithm. Second, the well-known total variation (TV) norm is extended to a nonconvex version called the TVp pseudonorm, and an algorithm for the reconstruction of images is developed that minimizes a TVp-pseudonorm regularized squared error using a sequential Fletcher-Reeves CG algorithm. Simulation results demonstrate that the first two algorithms yield improved signal reconstruction performance relative to the IR, SL0, and L1-minimization based algorithms while requiring less computation than the IR and L1-minimization based algorithms, and that the TVp-minimization based algorithm yields improved image reconstruction performance with less computation relative to Romberg's algorithm.

In Chapter 4, the reconstruction of block-sparse signals is investigated. The L2/1 norm is extended to a nonconvex version, called the L2/p pseudonorm, and an algorithm based on the minimization of an L2/p-pseudonorm regularized squared error is developed. The minimization is carried out using a sequential Fletcher-Reeves CG algorithm and the line search described in Chapter 2. A reweighting technique for reducing the amount of computation and a method for using prior information about the locations of nonzero blocks to improve signal reconstruction performance are also proposed. Simulation results demonstrate that the proposed algorithm yields improved reconstruction performance and requires less computation relative to the L2/1-minimization based, block orthogonal matching pursuit, IR, and L1-minimization based algorithms. / Graduate
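
To make the Chapter 3 objective concrete, here is a minimal numpy sketch that minimizes an Lp-pseudonorm regularized squared error with plain gradient descent, using the common smoothing (x_i^2 + eps^2)^(p/2) of the pseudonorm. The thesis uses quasi-Newton/CG steps with a specialized line search; the fixed step size, smoothing constant, and toy data below are illustrative assumptions.

```python
import numpy as np

def lp_regularized_recovery(A, b, p=0.5, lam=1e-3, eps=1e-2,
                            step=1e-2, iters=5000):
    """Minimize 0.5*||Ax - b||^2 + lam * sum((x_i^2 + eps^2)^(p/2))
    with plain gradient descent on the smoothed Lp pseudonorm."""
    x = A.T @ b  # least-squares-flavored initial guess
    for _ in range(iters):
        residual = A @ x - b
        # gradient of the smoothed Lp term: p * x * (x^2 + eps^2)^(p/2 - 1)
        grad_lp = p * x * (x**2 + eps**2) ** (p / 2 - 1)
        x -= step * (A.T @ residual + lam * grad_lp)
    return x

# toy demo: recover a 5-sparse signal from 40 random measurements
rng = np.random.default_rng(0)
n, m, k = 100, 40, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = lp_regularized_recovery(A, A @ x_true)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In practice, methods of this kind gradually decrease eps so the smoothed pseudonorm approaches the true Lp pseudonorm; the fixed eps here keeps the sketch short.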
12

Mixed integer programming approaches for nonlinear and stochastic programming

Vielma Centeno, Juan Pablo 06 July 2009 (has links)
In this thesis we study how to solve some nonconvex optimization problems by using methods that capitalize on the success of Linear Programming (LP) based solvers for Mixed Integer Linear Programming (MILP). A common aspect of our solution approaches is the use, development, and analysis of small but strong extended LP/MILP formulations and approximations.

In the first part of this work we develop an LP-based branch-and-bound algorithm for mixed integer conic quadratic programs. The algorithm is based on a lifted polyhedral relaxation of conic quadratic constraints by Ben-Tal and Nemirovski. We test the algorithm on a series of portfolio optimization problems and show that it provides a significant computational advantage.

In the second part we study the modeling of a class of disjunctive constraints with a logarithmic number of variables. For specially structured disjunctive constraints we give sufficient conditions for constructing MILP formulations with a number of binary variables and extra constraints that is logarithmic in the number of terms of the disjunction. Using these conditions we introduce formulations with these characteristics for SOS1 and SOS2 constraints and for piecewise linear functions, and we present computational results showing that they can significantly outperform other MILP formulations.

In the third part we study the modeling of nonconvex piecewise linear functions as MILPs. We review several new and existing MILP formulations for continuous piecewise linear functions, with special attention paid to multivariate non-separable functions, and compare these formulations with respect to their theoretical properties and their relative computational performance. In addition, we study the extension of these formulations to lower semicontinuous piecewise linear functions.

Finally, in the fourth part we study the strength of MILP formulations for LPs with probabilistic constraints. We first study the strength of existing MILP formulations that consider only one row of the probabilistic constraint at a time. We then introduce an extended formulation that considers more than one row of the constraint at a time and use it to computationally compare the relative strength of formulations that consider one and two rows at a time.
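
As an illustration of the logarithmic modeling idea, the sketch below minimizes a univariate nonconvex piecewise linear function with only ceil(log2 n) binary variables, using a Gray-code construction for the SOS2 condition on the interpolation weights. The breakpoint data, the use of scipy.optimize.milp as the solver, and the restriction to a power-of-two number of segments (which avoids handling unused binary codes) are all illustrative assumptions rather than the thesis's exact formulations.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Nonconvex piecewise linear f given by breakpoints xs and values fs,
# with n = len(xs) - 1 segments (kept a power of two here for simplicity).
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
fs = np.array([1.0, 3.0, 0.5, 2.5, 0.0])    # nonconvex profile
n = len(xs) - 1                              # 4 segments
K = int(np.ceil(np.log2(n)))                 # 2 binaries instead of 4

gray = [j ^ (j >> 1) for j in range(n)]      # Gray code per segment
segs_of_bp = [set(s for s in (i - 1, i) if 0 <= s < n) for i in range(n + 1)]

rows, lbs, ubs = [], [], []
row = np.zeros(n + 1 + K); row[:n + 1] = 1.0  # sum(lambda) == 1
rows.append(row); lbs.append(1.0); ubs.append(1.0)

for k in range(K):
    # lambdas whose incident segments all have bit k = 1:  sum <= z_k
    row = np.zeros(n + 1 + K)
    for i in range(n + 1):
        if all((gray[s] >> k) & 1 for s in segs_of_bp[i]):
            row[i] = 1.0
    row[n + 1 + k] = -1.0
    rows.append(row); lbs.append(-np.inf); ubs.append(0.0)
    # lambdas whose incident segments all have bit k = 0:  sum <= 1 - z_k
    row = np.zeros(n + 1 + K)
    for i in range(n + 1):
        if all(not (gray[s] >> k) & 1 for s in segs_of_bp[i]):
            row[i] = 1.0
    row[n + 1 + k] = 1.0
    rows.append(row); lbs.append(-np.inf); ubs.append(1.0)

c = np.concatenate([fs, np.zeros(K)])        # minimize sum(lambda_i * f_i)
res = milp(c,
           constraints=LinearConstraint(np.array(rows), lbs, ubs),
           integrality=np.r_[np.zeros(n + 1), np.ones(K)],
           bounds=Bounds(np.zeros(n + 1 + K), np.ones(n + 1 + K)))
assert res.success
lam = res.x[:n + 1]
print("x* =", lam @ xs, " f(x*) =", res.fun)  # expect x* = 4.0, f = 0.0
```

Because consecutive segments differ in exactly one Gray-code bit, each binary assignment of z leaves free only the weights of one segment's two endpoints, which is precisely the SOS2 condition.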
13

Topics in image recovery and image quality assessment

Cui, Lei 16 November 2016 (has links)
Image recovery, especially image denoising and deblurring, has been widely studied during the last decades. Variational models can preserve image edges well while restoring images from noise and blur, but some variational models are nonconvex, and methods for nonconvex optimization remain limited. This thesis develops new nonconvex optimization methods based on the difference of convex functions algorithm (DCA) for solving different variational models for various kinds of noise removal problems. Noise in images can follow different distributions depending on the imaging environment and technique; here we show how to apply DCA to Rician noise removal and to Cauchy noise removal. Our experiments demonstrate that the proposed nonconvex algorithms outperform existing ones, with better PSNR and less computation time. This progress can improve the precision of diagnostic techniques by reducing Rician noise more efficiently, and can improve synthetic aperture radar imaging precision by reducing Cauchy noise.

When applying variational models to image denoising and deblurring, an important issue is the choice of the regularization parameters. Few methods have been proposed for regularization parameter selection, and the numerical algorithms of existing methods are either complicated or implicit. To estimate regularization parameters more efficiently and easily, we create a new image sharpness metric called SQ-Index, based on the theory of Global Phase Coherence. The new metric can be used not only to estimate parameters for a variety of variational models but also to estimate the noise intensity for specific models. In our experiments, we show the noise estimation performance of this new metric. Moreover, extensive experiments are conducted on image denoising and deblurring under different kinds of noise and blur; the numerical results show the robustness of image restoration when our metric is applied to parameter selection for different variational models.
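
The DCA iteration itself is short: split the objective as a difference of convex functions F = g - h and repeatedly minimize g minus a linearization of h. The thesis applies this to Rician and Cauchy denoising models; the sketch below instead uses a standard, simpler DC instance — sparse recovery with the nonconvex L1 - L2 penalty — purely to show the mechanics, with an ISTA inner solver and made-up data.

```python
import numpy as np

def dca_l1_minus_l2(A, b, lam=0.1, outer=30, inner=200):
    """DCA for F(x) = 0.5||Ax-b||^2 + lam*(||x||_1 - ||x||_2),
    with DC split g = 0.5||Ax-b||^2 + lam||x||_1 and h = lam||x||_2."""
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the LS gradient
    for _ in range(outer):
        # linearize h at the current point: grad h = lam * x / ||x||_2
        nx = np.linalg.norm(x)
        w = lam * x / nx if nx > 0 else np.zeros(n)
        # inner ISTA loop for the convex subproblem min_x g(x) - <w, x>
        z = x.copy()
        for _ in range(inner):
            grad = A.T @ (A @ z - b) - w
            z = z - grad / L
            z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        x = z
    return x

rng = np.random.default_rng(1)
m, n, k = 30, 80, 4
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[:k] = [2.0, -1.5, 1.0, 3.0]
x_hat = dca_l1_minus_l2(A, A @ x_true)
print("support recovered:", np.nonzero(np.round(x_hat, 2))[0])
```

Each outer step solves a convex problem, so the objective decreases monotonically — the property that makes DCA attractive for the nonconvex variational models studied in the thesis.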
14

When Can Nonconvex Optimization Problems be Solved with Gradient Descent? A Few Case Studies

Gilboa, Dar January 2020 (has links)
Gradient descent and related algorithms are ubiquitously used to solve optimization problems arising in machine learning and signal processing. In many cases these problems are nonconvex, yet such simple algorithms are still effective. In an attempt to better understand this phenomenon, we study a number of nonconvex problems, proving that they can be solved efficiently with gradient descent. We consider complete, orthogonal dictionary learning and present a geometric analysis allowing us to obtain efficient convergence rates for gradient descent that hold with high probability. We also show that similar geometric structure is present in other nonconvex problems such as generalized phase retrieval.

Turning next to neural networks, we calculate conditions on certain classes of networks under which signals and gradients propagate through the network in a stable manner during the initial stages of training. Initialization schemes derived from these calculations allow training recurrent networks on long sequence tasks, and in the case of networks with low-precision activation functions they make explicit a tradeoff between the reduction in precision and the maximal depth of a model that can be trained with gradient descent. We finally consider manifold classification with a deep feed-forward neural network, for a particularly simple configuration of the manifolds. We provide an end-to-end analysis of the training process, proving that under certain conditions on the architectural hyperparameters of the network, it can successfully classify any point on the manifolds with high probability, in a timely manner, given a sufficient number of independent samples from the manifold. Our analysis relates the depth and width of the network to its fitting capacity and statistical regularity, respectively, in the early stages of training.
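
For generalized phase retrieval, the phenomenon the abstract describes can be reproduced on toy data: plain gradient descent on the quartic loss f(x) = (1/4m) * sum(((a_i^T x)^2 - y_i)^2) recovers the signal up to a global sign. The Gaussian measurement model, small random initialization, and step size below are illustrative assumptions; analyses like those in the dissertation are what justify when such runs succeed with high probability.

```python
import numpy as np

def phase_retrieval_gd(A, y, step=0.05, iters=3000, seed=0):
    """Plain gradient descent on the quartic phase retrieval loss
    f(x) = (1/4m) * sum(((a_i.x)^2 - y_i)^2) for real signals."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    x = 0.01 * rng.standard_normal(n)       # small random initialization
    for _ in range(iters):
        z = A @ x
        grad = A.T @ ((z**2 - y) * z) / m   # gradient of the quartic loss
        x -= step * grad
    return x

rng = np.random.default_rng(3)
n, m = 20, 200                              # oversampled Gaussian measurements
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)            # unit-norm planted signal
y = (A @ x_true) ** 2                       # magnitude-squared (phaseless) data
x_hat = phase_retrieval_gd(A, y)
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
print("relative error up to sign:", err)
```

The sign ambiguity is intrinsic: x and -x produce identical measurements, so error is measured up to the global sign.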
15

Optimization Models and Analysis of Routing, Location, Distribution, and Design Problems on Networks

Subramanian, Shivaram 29 April 1999 (has links)
A variety of practical network optimization problems arising in the context of public supply and commercial transportation, emergency response and risk management, engineering design, and industrial planning are addressed in this study. The decisions to be made in these problems include the location of supply centers; the routing, allocation, and scheduling of flow between supply and demand locations; and the design of links in the network. This study is concerned with the development of optimization models and the analysis of five such problems, and the subsequent design and testing of exact and heuristic algorithms for solving these various network optimization problems.

The first problem addressed is the time-dependent shortest pair of disjoint paths problem. We examine computational complexity issues, models, and algorithms for the problem of finding a shortest pair of disjoint paths between two nodes of a network such that the total travel delay is minimized, given that the individual arc delays are time-dependent. It is shown that this problem, and many variations of it, are NP-hard, and a 0-1 linear programming model that can be used to solve this problem is developed. This model can accommodate various degrees of disjointedness of the pair of paths, from complete to partial with respect to specific arcs.

Next, we examine a minimum-risk routing problem and pursue the development, analysis, and testing of a mathematical model for determining a route that attempts to reduce the risk of low-probability, high-consequence accidents associated with the transportation of hazardous materials (hazmat). More specifically, the problem addressed involves finding a path that minimizes the conditional expectation of a consequence, given that an accident occurs, subject to the expected value of the consequence being less than or equal to a specified level n, and the probability of an accident on the path being constrained to be no more than some value h. Various insights into related modeling issues are also provided. The values n and h are user-prescribed and could be prompted by the solution of shortest path problems that minimize the respective corresponding linear risk functions. The proposed model is a discrete, fractional programming problem that is solved using a specialized branch-and-bound approach. The model is also tested using realistic data associated with a case concerned with routing hazmat through the roadways of Bethlehem, Pennsylvania.

The third problem deals with the development of a resource allocation strategy for emergency and risk management. An important and novel issue addressed in modeling this problem is the effect of loss in coverage due to the non-availability of emergency response vehicles that are currently serving certain primary incidents. This is accommodated within the model by including in the objective function a term that reflects the opportunity cost of serving an additional incident that might occur probabilistically on the network. A mixed-integer programming model is formulated for the multiple incident - multiple response (MIMR) problem, and we show how its solution capability can be significantly enhanced by injecting a particular structure into the constraints that results in an equivalent alternative model representation. Furthermore, for certain special cases of the MIMR problem, efficient polynomial-time solution approaches are prescribed. An algorithmic module composed of these procedures, used in concert with a computationally efficient LP-based heuristic scheme that is developed, has been incorporated into an area-wide incident management decision support system (WAIMSS) at the Center for Transportation Research, Virginia Tech.

The fourth problem addressed in this study deals with the development of global optimization algorithms for designing a water distribution network, or expanding an already existing one, that satisfies specified flow demands at stated pressure head requirements. The nonlinear, nonconvex network problem is transformed into the space of certain design variables. By relaxing the nonlinear constraints in the transformed space via suitable polyhedral outer approximations and applying the Reformulation-Linearization Technique (RLT), a tight linear lower bounding problem is derived. This problem provides an enhancement and a more precise representation of previous lower bounding relaxations that use similar approximations. Computational experience on three standard test problems from the literature is provided. For all these problems, a proven global optimal solution within a tolerance of 10^-4% and/or within $1 of optimality is obtained. For the two larger instances dealing with the Hanoi and New York test networks, which have been open for nearly three decades, the solutions derived represent significant improvements, and global optimality has been verified at the stated level of accuracy for these problems for the very first time in the literature. A new real network design test problem based on the Town of Blacksburg Water Distribution System is also offered for inclusion in the available library of test cases, and related computational results on deriving global optimal solutions are presented.

The final problem addressed in this study is concerned with a global optimization approach for solving capacitated Euclidean distance multifacility location-allocation problems, as well as the development of a new algorithm for solving the generalized lp distance location-allocation problem. No global optimization algorithm has previously been developed and tested for this class of problems, aside from a total enumeration approach. Beginning with the Euclidean distance problem, we design depth-first and best-first branch-and-bound algorithms based on a partitioning of the allocation space that finitely converge to a global optimum for this nonconvex problem. For deriving lower bounds at node subproblems in these partial enumeration schemes, we employ two types of procedures. The first approach computes a lower bound via a simple projected location space lower bounding (PLSB) subproblem. The second approach derives a significantly enhanced lower bound by using a Reformulation-Linearization Technique (RLT) to transform an equivalent representation of the original nonconvex problem into a higher dimensional linear programming relaxation. In addition, certain cut-set inequalities generated in the allocation space, objective function based cuts derived in the location space, and tangential linear supporting hyperplanes for the distance function are added to further tighten the lower bounding relaxation. The RLT procedure is then extended to the general lp distance problem for 1 < p < 2. Various issues related to the selection of branching variables, the design of heuristics via special selective backtracking mechanisms, and the sensitivity of the proposed algorithm to the value of p in the lp-norm are computationally investigated. Computational experience is also provided on a set of test problems to investigate both the PLSB and the RLT lower bounding schemes. The results indicate that the proposed global optimization approach using the RLT-based scheme offers a promising, viable solution procedure. In fact, among the problems solved, for the only two test instances previously available in the literature for the Euclidean distance case, which were posed in 1979, we report proven global optimal solutions within a tolerance of 0.1% for the first time.

It is hoped that the modeling, analysis, insights, and concepts provided for these various network-based problems that arise in diverse routing, location, distribution, and design contexts will provide guidelines for studying many other problems that arise in related situations. / Ph. D.
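
The time-dependent shortest path subproblem underlying the first model has a simple label-setting solution when arc delays satisfy the FIFO (non-overtaking) property; the disjoint-pair version shown above to be NP-hard is much harder. Below is a minimal sketch under that FIFO assumption, with a made-up two-route network whose best route changes with the departure time.

```python
import heapq

def td_dijkstra(graph, source, target, t0=0.0):
    """Earliest-arrival path under time-dependent arc delays.
    graph[u] is a list of (v, delay_fn) pairs, where delay_fn(t) is the
    travel time of arc (u, v) when departing u at time t. Correct for
    FIFO networks (departing later never means arriving earlier)."""
    arrival = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t
        if t > arrival.get(u, float("inf")):
            continue                       # stale queue entry
        for v, delay_fn in graph.get(u, []):
            t_v = t + delay_fn(t)          # arrival time at v via u
            if t_v < arrival.get(v, float("inf")):
                arrival[v] = t_v
                heapq.heappush(heap, (t_v, v))
    return float("inf")

# toy network: congestion on arc (a, c) makes the two-hop route better later
graph = {
    "a": [("b", lambda t: 2.0), ("c", lambda t: 1.0 + 1.5 * t)],
    "b": [("c", lambda t: 2.0)],
}
print(td_dijkstra(graph, "a", "c", t0=0.0))  # 1.0: direct arc wins at t = 0
print(td_dijkstra(graph, "a", "c", t0=4.0))  # 8.0: two-hop route wins at t = 4
```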
16

Dual sequential approximation methods in structural optimisation

Wood, Derren Wesley March 2012 (has links)
Thesis (PhD)--Stellenbosch University, 2012 / ENGLISH ABSTRACT: This dissertation addresses a number of topics that arise from the use of a dual method of sequential approximate optimisation (SAO) to solve structural optimisation problems. Said approach is widely used because it allows relatively large problems to be solved efficiently by minimising the number of expensive structural analyses required. Some extensions to traditional implementations are suggested that can serve to increase the efficacy of such algorithms. The work presented herein is concerned primarily with three topics: the use of nonconvex functions in the definition of SAO subproblems, the global convergence of the method, and the application of the dual SAO approach to large-scale problems. Additionally, a chapter is presented that focuses on the interpretation of Sigmund's mesh independence sensitivity filter in topology optimisation.

It is standard practice to formulate the approximate subproblems as strictly convex, since strict convexity is a sufficient condition to ensure that the solution of the dual problem corresponds with the unique stationary point of the primal. The incorporation of nonconvex functions in the definition of the subproblems is rarely attempted. However, many problems exhibit nonconvex behaviour that is easily represented by simple nonconvex functions. It is demonstrated herein that, under certain conditions, such functions can be fruitfully incorporated into the definition of the approximate subproblems without destroying the correspondence or uniqueness of the primal and dual solutions.

Global convergence of dual SAO algorithms is examined within the context of the CCSA method, which relies on the use and manipulation of conservative convex and separable approximations. This method currently requires that a given problem and each of its subproblems be relaxed to ensure that the sequence of iterates that is produced remains feasible. A novel method, called the bounded dual, is presented as an alternative to relaxation. Infeasibility is catered for in the solution of the dual, and no relaxation-like modification is required. It is shown that when infeasibility is encountered, maximising the dual subproblem is equivalent to minimising a penalised linear combination of its constraint infeasibilities. Upon iteration, a restorative series of iterates is produced that gains feasibility, after which convergence to a feasible local minimum is assured.

Two instances of the dual SAO solution of large-scale problems are addressed herein. The first is a discrete problem regarding the selection of the point-wise optimal fibre orientation in the two-dimensional minimum compliance design for fibre-reinforced composite plates. It is solved by means of the discrete dual approach, and the formulation employed gives rise to a partially separable dual problem. The second instance involves the solution of planar material distribution problems subject to local stress constraints. These are solved in a continuous sense using a sparse solver. The complexity and dimensionality of the dual is controlled by employing a constraint selection strategy in tandem with a mechanism by which inconsequential elements of the Jacobian of the active constraints are omitted. In this way, both the size of the dual and the amount of information that needs to be stored in order to define the dual are reduced.
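
A single dual SAO step can be sketched compactly: build a separable convex approximation at the current iterate and maximise its Falk dual over the constraint multipliers, recovering the primal minimiser in closed form. The quadratic approximation, single linearised constraint, and bisection dual solver below are illustrative assumptions, not the CCSA approximations or the bounded-dual treatment developed in the dissertation.

```python
import numpy as np

def dual_sao_subproblem(grad_f, grad_g, g0, x0, lower, upper,
                        curvature=1.0, lam_max=1e4, tol=1e-10):
    """Solve  min  grad_f.(x-x0) + (c/2)||x-x0||^2
             s.t. g0 + grad_g.(x-x0) <= 0,  lower <= x <= upper
    via its (Falk) dual: maximise over lam >= 0, with the separable
    primal minimiser available in closed form for each lam."""
    def x_of(lam):
        # stationarity: grad_f + lam*grad_g + c*(x - x0) = 0, then box-clip
        x = x0 - (grad_f + lam * grad_g) / curvature
        return np.clip(x, lower, upper)

    def constraint(lam):                  # d(dual)/d(lam) = g(x(lam))
        return g0 + grad_g @ (x_of(lam) - x0)

    if constraint(0.0) <= 0:              # constraint inactive: lam* = 0
        return x_of(0.0)
    lo, hi = 0.0, 1.0
    while constraint(hi) > 0 and hi < lam_max:
        hi *= 2.0                         # bracket the root of g(x(lam))
    while hi - lo > tol * (1 + hi):       # bisection: g(x(lam)) is decreasing
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if constraint(mid) > 0 else (lo, mid)
    return x_of(hi)

# toy step: descend on f while keeping a linearised constraint satisfied
x0 = np.array([1.0, 1.0])
x1 = dual_sao_subproblem(grad_f=np.array([1.0, 2.0]),
                         grad_g=np.array([-1.0, -1.0]), g0=-0.5,
                         x0=x0, lower=x0 - 0.5, upper=x0 + 0.5)
print("subproblem solution:", x1)
```

Only the multipliers are iterated on, which is why the dual approach scales well when the number of active constraints is small relative to the number of design variables.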
17

Asynchronous Parallel Algorithms for Big-Data Nonconvex Optimization

Cannelli, Loris 13 August 2019 (has links)
The focus of this dissertation is to provide a unified and efficient solution method for an important class of nonconvex, nonsmooth, constrained optimization problems. Specifically, we are interested in problems where the objective function can be written as the sum of a smooth, nonconvex term plus a convex, but possibly nonsmooth, regularizer; the presence of nonconvex constraints is also considered. This kind of structure arises in many large-scale applications as diverse as information processing, genomics, machine learning, and imaging reconstruction.

We design the first parallel, asynchronous algorithmic framework with convergence guarantees to stationary points for the class of problems under exam. The method we propose is based on Successive Convex Approximation techniques; it can be implemented with both fixed and diminishing stepsizes, and it enjoys a sublinear convergence rate in the general nonconvex case and a linear convergence rate under strong convexity or under less stringent standard error bound conditions. The algorithmic framework we propose is very abstract and general and can be applied to different computing architectures (e.g., message-passing systems, clusters of computers, shared-memory environments), always converging under the same set of assumptions.

In the last chapter we consider the case of distributed multi-agent systems. Indeed, in many practical applications the objective function has a favorable separable structure. In this case, we generalize our framework to take into consideration the presence of different agents, each of whom knows only a portion of the overall function, which they want to minimize cooperatively. The result is the first fully decentralized asynchronous method for the setting described above. The proposed method achieves a sublinear convergence rate in the general case, and a linear convergence rate under standard error bound conditions.

Extensive simulation results on problems of practical interest (MRI reconstruction, LASSO, matrix completion) show that the proposed methods compare favorably to state-of-the-art schemes.
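
The asynchronous model underlying frameworks of this kind lets every update be computed from a stale copy of the shared iterate. The toy below simulates that serially for a LASSO instance, with proximal coordinate updates and bounded random staleness; the staleness mechanism, step size, and data are made-up illustrations rather than the dissertation's actual scheme.

```python
import numpy as np

def async_prox_block_descent(A, b, lam=0.1, iters=3000, max_delay=10, seed=0):
    """Toy serial simulation of asynchronous block updates for
    min 0.5||Ax-b||^2 + lam||x||_1: each update picks a random coordinate
    and computes its step from a stale iterate (bounded delay)."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2          # global Lipschitz constant
    x = np.zeros(n)
    history = [x.copy()]                   # past iterates to sample staleness
    for _ in range(iters):
        i = rng.integers(n)                # block (coordinate) chosen at random
        x_stale = history[-1 - rng.integers(min(max_delay, len(history)))]
        g = A[:, i] @ (A @ x_stale - b)    # partial gradient at a stale point
        z = x[i] - g / L
        x[i] = np.sign(z) * max(abs(z) - lam / L, 0.0)  # prox of lam*|.|
        history.append(x.copy())
        if len(history) > max_delay + 1:
            history.pop(0)
    return x

rng = np.random.default_rng(5)
m, n = 40, 60
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[:3] = [1.0, -2.0, 1.5]
x_hat = async_prox_block_descent(A, A @ x_true, lam=0.5)
print("largest entries:", np.argsort(-np.abs(x_hat))[:3])
```

The bounded-delay assumption mirrors the analyses of such methods: convergence guarantees typically require the staleness of the read iterate to stay below a fixed horizon.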
18

Penalized methods and algorithms for high-dimensional regression in the presence of heterogeneity

Yi, Congrui 01 December 2016 (has links)
In fields such as statistics, economics, and biology, heterogeneity is an important topic concerning the validity of data inference and the discovery of hidden patterns. This thesis focuses on penalized methods for regression analysis in the presence of heterogeneity in a potentially high-dimensional setting. Two possible strategies to deal with heterogeneity are: robust regression methods that provide heterogeneity-resistant coefficient estimation, and direct detection of heterogeneity while estimating coefficients accurately at the same time. We consider the first strategy for two robust regression methods, Huber loss regression and quantile regression with Lasso or Elastic-Net penalties, which have been studied theoretically but lack efficient algorithms. We propose a new algorithm, Semismooth Newton Coordinate Descent, to solve them. The algorithm is a novel combination of the Semismooth Newton Algorithm and Coordinate Descent that applies to penalized optimization problems with both a nonsmooth loss and a nonsmooth penalty. We prove its convergence properties and show its computational efficiency through numerical studies. We also propose a nonconvex penalized regression method, Heterogeneity Discovery Regression (HDR), as a realization of the second idea. We establish theoretical results that guarantee statistical precision for any local optimum of the objective function with high probability. We also compare the numerical performance of HDR with competitors including Huber loss regression, quantile regression, and least squares through simulation studies and a real data example. In these experiments, HDR methods are able to detect heterogeneity accurately, and they largely outperform the competitors in terms of coefficient estimation and variable selection.
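
The Huber-Lasso objective is easy to state, and a plain proximal-gradient baseline suffices to illustrate it; this is not the Semismooth Newton Coordinate Descent algorithm proposed in the thesis, and the threshold, penalty level, and outlier-contaminated data below are made up.

```python
import numpy as np

def huber_lasso(X, y, lam=0.1, delta=1.345, iters=2000):
    """Proximal gradient for min (1/n)*sum huber_delta(y - X@beta) + lam*||beta||_1,
    where huber_delta(r) = r^2/2 if |r| <= delta, else delta*(|r| - delta/2)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n      # Lipschitz constant (huber'' <= 1)
    beta = np.zeros(p)
    for _ in range(iters):
        r = y - X @ beta
        psi = np.clip(r, -delta, delta)    # huber' of the residuals
        grad = -X.T @ psi / n
        z = beta - grad / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return beta

rng = np.random.default_rng(7)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:4] = [2.0, -1.0, 1.5, 0.5]
y = X @ beta_true + rng.standard_normal(n)
y[:5] += 15.0                              # heavy outliers: Huber caps their pull
print("nonzeros found:", np.nonzero(huber_lasso(X, y))[0][:8])
```

Because the Huber loss is smooth with bounded second derivative, only the L1 penalty is handled by the proximal step; handling a nonsmooth loss such as the quantile check function is what requires the semismooth Newton machinery of the thesis.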
19

Novel techniques for estimation and tracking of radioactive sources

Baidoo-Williams, Henry Ernest 01 December 2014 (has links)
Radioactive source signal measurements are Poisson distributed due to the underlying radiation process. This fact, coupled with ubiquitous naturally occurring radioactive materials (NORM), makes it challenging to localize or track a radioactive source or target accurately. It is therefore necessary either to use highly accurate sensors to minimize measurement noise, or to use many less accurate sensors whose measurements are averaged to minimize the noise. The cost associated with highly accurate sensors places a bound on the number that can realistically be deployed. Similarly, the degree of inaccuracy in cheap sensors places a lower bound on the number of sensors needed to achieve realistic estimates of the location or trajectory of a radioactive source within reasonable error margins.

We first consider the use of the smallest number of highly accurate sensors to localize radioactive sources. The novel ideas and algorithms we develop use no more than the minimum number of sensors required by triangulation-based algorithms but avoid the pitfalls manifest in such algorithms, such as multiple local minima and the slow convergence caused by algorithm reinitialization. Under the general assumption that we have a priori knowledge of the statistics of the intensity of the source, we show that if the source or target is known to be in one open half plane, then N sensors are enough to guarantee a unique solution, N being the dimension of the search space. If the assumptions are tightened such that the source or target lies in the open convex hull of the sensors, then N+1 sensors are required. If we do not have knowledge of the statistics of the intensity of the source, we show that N+1 sensors is still the minimum number required to guarantee a unique solution when the source is in the open convex hull of the sensors.

Second, we present tracking of a radioactive source using cheap, low-sensitivity binary proximity sensors under some general assumptions. If a source or target moves in a straight line and we have a priori knowledge of the radiation intensity of the source, we show that three binary sensors and their binary measurements, indicating the presence or absence of a source within their nominal sensing range, suffice to localize the linear trajectory. If we do not have knowledge of the intensity of the source or target, then a minimum of four sensors suffices to localize the trajectory.

Finally, we present some fundamental limits on the estimation accuracy of a stationary radioactive source using ideal mobile measurement sensors, and we provide a robust algorithm that achieves the estimation accuracy bounds asymptotically as the expected radiation count increases.
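
Under the Poisson model, localization amounts to maximizing the Poisson log-likelihood of the observed counts over candidate source positions. The grid-search sketch below assumes a known source strength, an inverse-square attenuation model, and a known constant background rate; these modeling choices and the sensor layout are illustrative assumptions, not the thesis's algorithms.

```python
import numpy as np

def poisson_loglik(counts, sensors, pos, strength, background, dwell=1.0):
    """Log-likelihood of Poisson counts for a source at `pos`, with
    inverse-square intensity plus a constant background rate."""
    d2 = np.sum((sensors - pos) ** 2, axis=1)
    rate = dwell * (strength / np.maximum(d2, 1e-9) + background)
    return np.sum(counts * np.log(rate) - rate)   # dropping log(counts!)

def locate_source(counts, sensors, strength, background, grid_pts=100):
    """Grid-search maximum-likelihood localization over the unit square."""
    g = np.linspace(0.0, 1.0, grid_pts)
    best, best_ll = None, -np.inf
    for gx in g:
        for gy in g:
            ll = poisson_loglik(counts, sensors, np.array([gx, gy]),
                                strength, background)
            if ll > best_ll:
                best, best_ll = (gx, gy), ll
    return best

rng = np.random.default_rng(11)
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
src, strength, background = np.array([0.3, 0.7]), 50.0, 2.0
d2 = np.sum((sensors - src) ** 2, axis=1)
counts = rng.poisson(strength / d2 + background)   # one dwell of measurements
print("ML estimate:", locate_source(counts, sensors, strength, background))
```

The likelihood surface here is generally multimodal, which is exactly the pitfall the thesis's minimum-sensor uniqueness results are designed to rule out.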
20

Derivative Free Algorithms For Large Scale Non-smooth Optimization And Their Applications

Tor, Ali Hakan 01 February 2013 (has links)
In this thesis, various numerical methods are developed to solve nonsmooth and, in particular, nonconvex optimization problems.
