161

Safe optimization algorithms for variable selection and hyperparameter tuning / Algorithmes d’optimisation sûrs pour la sélection de variables et le réglage d’hyperparamètre

Ndiaye, Eugene 04 October 2018 (has links)
Le traitement massif et automatique des données requiert le développement de techniques de filtration des informations les plus importantes. Parmi ces méthodes, celles présentant des structures parcimonieuses se sont révélées idoines pour améliorer l'efficacité statistique et computationnelle des estimateurs, dans un contexte de grandes dimensions. Elles s'expriment souvent comme solution de la minimisation du risque empirique régularisé, s'écrivant comme une somme d'un terme lisse qui mesure la qualité de l'ajustement aux données et d'un terme non lisse qui pénalise les solutions complexes. Cependant, une telle manière d'inclure des informations a priori introduit de nombreuses difficultés numériques, tant pour résoudre le problème d'optimisation sous-jacent que pour calibrer le niveau de régularisation. Ces problématiques ont été au cœur des questions que nous avons abordées dans cette thèse. Une technique récente, appelée «Screening Rules», propose d'ignorer certaines variables pendant le processus d'optimisation en tirant bénéfice de la parcimonie attendue des solutions. Ces règles d'élimination sont dites sûres lorsqu'elles garantissent de ne pas rejeter les variables à tort. Nous proposons un cadre unifié pour identifier les structures importantes dans ces problèmes d'optimisation convexes et nous introduisons les règles «Gap Safe Screening Rules». Elles permettent d'obtenir des gains considérables en temps de calcul grâce à la réduction de la dimension induite par cette méthode. De plus, elles s'incorporent facilement aux algorithmes itératifs et s'appliquent à un plus grand nombre de problèmes que les méthodes précédentes. Pour trouver un bon compromis entre minimisation du risque et introduction d'un biais d'apprentissage, les algorithmes d'homotopie offrent la possibilité de tracer la courbe des solutions en fonction du paramètre de régularisation. 
Toutefois, ils présentent des instabilités numériques dues à plusieurs inversions de matrice et sont souvent coûteux en grande dimension. Aussi, ils ont des complexités exponentielles en la dimension du modèle dans des cas défavorables. En autorisant des solutions approchées, une approximation de la courbe des solutions permet de contourner les inconvénients susmentionnés. Nous revisitons les techniques d'approximation des chemins de régularisation pour une tolérance prédéfinie, et nous analysons leur complexité en fonction de la régularité des fonctions de perte en jeu. Il s'ensuit une proposition d'algorithmes optimaux ainsi que diverses stratégies d'exploration de l'espace des paramètres. Ceci permet de proposer une méthode de calibration de la régularisation avec une garantie de convergence globale pour la minimisation du risque empirique sur les données de validation. Le Lasso, un des estimateurs parcimonieux les plus célèbres et les plus étudiés, repose sur une théorie statistique qui suggère de choisir la régularisation en fonction de la variance des observations. Ceci est difficilement utilisable en pratique car la variance du modèle est une quantité souvent inconnue. Dans de tels cas, il est possible d'optimiser conjointement les coefficients de régression et le niveau de bruit. Ces estimations concomitantes, apparues dans la littérature sous les noms de Scaled Lasso et Square-Root Lasso, fournissent des résultats théoriques aussi satisfaisants que ceux du Lasso tout en étant indépendants de la variance réelle. Bien que présentant des avancées théoriques et pratiques importantes, ces méthodes sont aussi numériquement instables et les algorithmes actuellement disponibles sont coûteux en temps de calcul. Nous illustrons ces difficultés et nous proposons à la fois des modifications basées sur des techniques de lissage pour accroître la stabilité numérique de ces estimateurs, ainsi qu'un algorithme plus efficace pour les obtenir. 
/ Massive and automatic data processing requires the development of techniques able to filter out the most important information. Among these methods, those with sparse structures have been shown to improve the statistical and computational efficiency of estimators in a high-dimensional context. They can often be expressed as solutions of regularized empirical risk minimization, which generally leads to non-differentiable optimization problems in the form of a sum of a smooth term, measuring the quality of the fit, and a non-smooth term, penalizing complex solutions. Although it has considerable advantages, such a way of including prior information unfortunately introduces many numerical difficulties, both for solving the underlying optimization problem and for calibrating the level of regularization. Addressing these issues has been at the heart of this thesis. A recently introduced technique, called "Screening Rules", proposes to ignore some variables during the optimization process by exploiting the expected sparsity of the solutions. These elimination rules are said to be safe when the procedure is guaranteed not to wrongly reject any variable. In this work, we propose a unified framework for identifying important structures in these convex optimization problems and we introduce the "Gap Safe Screening Rules". They yield significant gains in computational time thanks to the dimensionality reduction induced by this method. In addition, they can easily be inserted into iterative algorithms and apply to a larger class of problems than previous methods. To find a good compromise between minimizing risk and introducing a learning bias, (exact) homotopy continuation algorithms offer the possibility of tracking the curve of solutions as a function of the regularization parameter. However, they exhibit numerical instabilities due to several matrix inversions and are often expensive in high dimension. 
Another weakness is that a worst-case analysis shows that their exact complexities are exponential in the dimension of the model parameter. Allowing approximate solutions makes it possible to circumvent the aforementioned drawbacks by approximating the curve of solutions. In this thesis, we revisit approximation techniques for regularization paths given a predefined tolerance, and we propose an in-depth analysis of their complexity with respect to the regularity of the loss functions involved. From this, we derive optimal algorithms as well as various strategies for exploring the parameter space. We also provide a calibration method (for the regularization parameter) that enjoys global convergence guarantees for the minimization of the empirical risk on the validation data. Among sparse regularization methods, the Lasso is one of the most celebrated and studied. Its statistical theory suggests choosing the level of regularization according to the amount of variance in the observations, which is difficult to use in practice because the variance of the model is often an unknown quantity. In such cases, it is possible to jointly optimize the regression parameters and the level of noise. These concomitant estimates, which appeared in the literature under the names Scaled Lasso and Square-Root Lasso, provide theoretical results as sharp as those of the Lasso while being independent of the actual noise level of the observations. Although they represent important advances, these methods are numerically unstable and the currently available algorithms are expensive in computation time. We illustrate these difficulties and propose modifications based on smoothing techniques to increase the stability of these estimators, as well as a faster algorithm to compute them.
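The Gap Safe rule discussed in this abstract has a well-known closed form for the Lasso: a feature can be safely discarded once its correlation with a dual-feasible point, inflated by a radius derived from the duality gap, stays below one. A minimal numpy sketch of that test (our illustration; the function name and setup are ours, not the thesis code):

```python
import numpy as np

def gap_safe_screen(X, y, beta, lam):
    """Gap Safe screening test for the Lasso
       min_b 0.5 * ||y - X b||^2 + lam * ||b||_1.
    Returns a boolean mask: True means the feature is *safely*
    discarded (its optimal coefficient is provably zero)."""
    resid = y - X @ beta
    # Dual-feasible point: rescale the residual so ||X^T theta||_inf <= 1.
    theta = resid / max(lam, np.max(np.abs(X.T @ resid)))
    primal = 0.5 * resid @ resid + lam * np.abs(beta).sum()
    dual = 0.5 * y @ y - 0.5 * lam**2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)
    radius = np.sqrt(2.0 * gap) / lam            # radius of the safe ball
    # Screen feature j if |x_j^T theta| + radius * ||x_j|| < 1.
    scores = np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0)
    return scores < 1.0
```

For `lam` above the critical value `||X^T y||_inf`, the zero vector is optimal and every feature passes the test; inside an iterative solver the mask shrinks the working set as the gap decreases.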
162

Some approximation schemes in polynomial optimization / Quelques schémas d'approximation en optimisation polynomiale

Hess, Roxana 28 September 2017 (has links)
Cette thèse est dédiée à l'étude de la hiérarchie moments-sommes-de-carrés, une famille de problèmes de programmation semi-définie en optimisation polynomiale, couramment appelée hiérarchie de Lasserre. Nous examinons différents aspects de ses propriétés et applications. Comme application de la hiérarchie, nous approchons certains objets potentiellement compliqués, comme l'abscisse polynomiale et les plans d'expérience optimaux sur des domaines semi-algébriques. L'application de la hiérarchie de Lasserre produit des approximations par des polynômes de degré fixé et donc de complexité bornée. En ce qui concerne la complexité de la hiérarchie elle-même, nous en construisons une modification pour laquelle un taux de convergence amélioré peut être prouvé. Un concept essentiel de la hiérarchie est l'utilisation des modules quadratiques et de leurs duaux pour appréhender de manière flexible le cône des polynômes positifs et le cône des moments. Nous poursuivons cette idée pour construire des approximations étroites d'ensembles semi-algébriques à l'aide de séparateurs polynomiaux. / This thesis is dedicated to investigations of the moment-sums-of-squares hierarchy, a family of semidefinite programming problems in polynomial optimization, commonly called the Lasserre hierarchy. We examine different aspects of its properties and purposes. As applications of the hierarchy, we approximate some potentially complicated objects, namely the polynomial abscissa and optimal designs on semialgebraic domains. Applying the Lasserre hierarchy results in approximations by polynomials of fixed degree and hence bounded complexity. With regard to the complexity of the hierarchy itself, we construct a modification of it for which an improved convergence rate can be proved. An essential concept of the hierarchy is to use quadratic modules and their duals as a tractable characterization of the cone of positive polynomials and the moment cone, respectively. 
We further exploit this idea to construct tight approximations of semialgebraic sets with polynomial separators.
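A concrete building block on the moment side of the hierarchy: a univariate sequence m_0, ..., m_{2d} can only be the moments of a nonnegative measure if its Hankel moment matrix M[i, j] = m_{i+j} is positive semidefinite. A small sketch of this necessary condition (our illustration, not code from the thesis):

```python
import numpy as np

def moment_matrix(moments):
    """Univariate moment (Hankel) matrix M[i, j] = m_{i+j} built from
    m_0, ..., m_{2d}. PSD-ness of M is necessary for the sequence to
    be the moments of a nonnegative measure."""
    n = (len(moments) + 1) // 2
    return np.array([[moments[i + j] for j in range(n)]
                     for i in range(n)], dtype=float)

def is_psd(M, tol=1e-10):
    """Check positive semidefiniteness via the symmetric eigensolver."""
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))
```

For the uniform measure on [0, 1] the moments are 1/(k+1) and the moment matrix is the (PSD) Hilbert matrix; the sequence (1, 2, 1) fails the test because it would imply a negative variance.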
163

Optimal Signaling Strategies and Fundamental Limits of Next-Generation Energy-Efficient Wireless Networks

Ranjbar, Mohammad 29 August 2019 (has links)
No description available.
164

Algorithm Design for Low Latency Communication in Wireless Networks

ElAzzouni, Sherif 11 September 2020 (has links)
No description available.
165

Energy and Delay-aware Communication and Computation in Wireless Networks

Masoudi, Meysam January 2020 (has links)
Power conservation has become a critical issue in devices, since advances in battery capacity are not keeping pace with the swift development of other technologies, such as processing. This issue becomes acute when both the number of resource-intensive applications and the number of connected devices are rapidly growing: the former increases the power consumption per device, and the latter increases the total power consumption of devices. Mobile edge computing (MEC) and low power wide area networks (LPWANs) have emerged as two important research areas in wireless networks that can help devices save power. On the one hand, devices are being considered as a platform to run resource-intensive applications while they have limited resources such as battery and processing capabilities. On the other hand, LPWANs have emerged as an important enabler for massive IoT (Internet of Things), providing long-range and reliable connectivity for low power devices. The scope of this thesis spans these two main research areas: (1) MEC, where devices can use radio resources to offload their processing tasks to the cloud to save energy; (2) LPWAN, with grant-free radio access, where devices from different technologies transmit their packets without any handshaking process. In particular, we consider a MEC network where the processing resources are distributed in the proximity of the users. Hence, devices can save energy by transmitting the data to be processed to the edge cloud, provided that the delay requirement is met and the transmission power consumption is less than the local processing power consumption. This thesis addresses the question of whether or not to offload in order to minimize the uplink power consumption in a multi-cell multi-user MEC network. We consider the maximum acceptable delay as the QoS metric to be satisfied in our system. We formulate the problem as a mixed-integer nonlinear problem, which is converted into a convex form using D.C. approximation. To solve the converted optimization problem, we propose centralized and distributed algorithms for joint power allocation and channel assignment, together with decision-making on job offloading. Our results show that there exists a region in which offloading can save power at mobile devices and increase the battery lifetime. Another focus of this thesis is on LPWANs, which are becoming more and more popular due to the limited battery capacity and the ever-increasing need for a durable battery lifetime in IoT networks. Most studies evaluate the system performance assuming single radio access technology deployment. In this thesis, we study the impact of coexisting competing radio access technologies on the system performance. We consider K technologies, defined by time and frequency activity factors, bandwidth, and power, which share a set of radio resources. Leveraging tools from stochastic geometry, we derive closed-form expressions for the successful transmission probability, expected battery lifetime, experienced delay, and expected number of retransmissions. Our analytical model, which is validated by simulation results, provides a tool to evaluate coexistence scenarios and analyze how the introduction of a new coexisting technology may degrade the system performance in terms of success probability, delay, and battery lifetime. We further investigate the interplay between traffic load, the density of access points, and the reliability/delay of communications, and examine the bounds beyond which the mean delay becomes infinite. / Antalet anslutna enheter till nätverk ökar. Det finns olika trender som mobil edge computing (MEC) och low power wide area-nätverk (LPWAN) som har blivit intressanta i trådlösa nätverk. Därför står trådlösa nätverk inför nya utmaningar som ökad energiförbrukning. I den här avhandlingen beaktar vi dessa två mobila nätverk. I MEC avlastar mobila enheter sina bearbetningsuppgifter till centraliserade beräkningsresurser (”molnet”). I avhandlingen svarar vi på följande fråga: När är det energieffektivt att avlasta dessa beräkningsuppgifter till molnet? Vi föreslår två algoritmer för att bestämma den rätta tiden för överflyttning av beräkningsuppgifter till molnet. I LPWANs antar vi att det finns ett mycket stort antal enheter av olika art som kommunicerar med nätverket. De använder s.k. ”grant-free”-åtkomst för att ansluta till nätverket, där basstationerna inte ger explicita sändningstillstånd till enheterna. Den analytiska modell som föreslås i avhandlingen utgör ett verktyg för att utvärdera sådana samexistensscenarier. Med verktygen kan vi analysera olika systems prestanda när det gäller framgångssannolikhet, fördröjning och batteriers hållbarhetstid. / QC 20200228 / SOOGreen
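The offload-or-not decision described above can be illustrated with a back-of-the-envelope energy model; the κ·C·f² CPU-energy term and the parameter names below are standard textbook assumptions, not the thesis's exact formulation:

```python
def should_offload(bits, cycles, f_local, p_tx, rate, t_max, kappa=1e-26):
    """Decide whether a device should offload a task to the edge cloud.
    bits     : task input size to upload (bits)
    cycles   : CPU cycles needed to process the task locally
    f_local  : local CPU frequency (Hz)
    p_tx     : uplink transmission power (W)
    rate     : uplink data rate (bit/s)
    t_max    : delay budget (s)
    kappa    : effective switched-capacitance constant (assumed model)
    Cloud computation time is neglected for simplicity."""
    e_local = kappa * cycles * f_local ** 2   # dynamic CPU energy (J)
    t_local = cycles / f_local                # local execution time (s)
    e_tx = p_tx * bits / rate                 # uplink radio energy (J)
    t_tx = bits / rate                        # uplink time (s)
    local_ok = t_local <= t_max
    offload_ok = t_tx <= t_max
    # Offload when it is feasible AND either cheaper than local
    # processing or the only way to meet the delay budget.
    if offload_ok and (e_tx < e_local or not local_ok):
        return True
    return False
```

The "region in which offloading saves power" from the abstract corresponds here to parameter ranges where `e_tx < e_local` while `t_tx <= t_max` holds.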
166

Extremal Mechanisms for Pointwise Maximal Leakage / Extremala Mekanismer för Pointwise Maximal Leakage

Grosse, Leonhard January 2023 (has links)
In order to implement privacy preservation for individuals, systems need to employ privacy mechanisms that privatize sensitive data through randomization. The goal of privacy mechanism design is to find the optimal tradeoff between maximizing the utility of the privatized data and providing strict privacy guarantees as defined by a chosen privacy measure. In this thesis, we explore this tradeoff for the pointwise maximal leakage measure. Pointwise maximal leakage (PML) was recently proposed as an operationally meaningful privacy measure that quantifies the guessing advantage of an adversary interested in a random function of the private data. In contrast to many other information-theoretic measures, PML considers the privacy loss for every outcome of the privatized view separately, thereby enabling more flexible privacy guarantees that move away from averaging over all outcomes. We start by using PML to analyze the prior-distribution-dependent behavior of the established randomized response mechanism designed for local differential privacy. Then, we formulate a general optimization problem for the privacy-utility tradeoff with PML as the privacy measure and utility functions based on sub-linear functions. Using methods from convex optimization, we analyze the valid region of mechanisms satisfying a PML privacy guarantee and show that the optimization can be solved by a linear program. We arrive at extremal formulations that yield closed-form solutions for some important special cases: the binary mechanism, general high-privacy regions, i.e., regions in which the required level of privacy is high, and low-privacy mechanisms for equal priors. We further present an approximate solution for general priors in this setting. Finally, we analyze the loss of optimality of this construction for different prior distributions. / För att kunna implementera integritetsskydd för individer, så behöver system utnyttja integritetsmekanismer som privatiserar känslig data genom randomisering. 
Målet vid design av integritetsmekanismer är att hitta den optimala balansen mellan att användbarheten av privatiserad data maximeras, samtidigt som det tillhandahålls integritet i strikt mening. Detta definierat av något valt typ av integritetsmått. I den här avhandlingen, så undersöks detta utbyte specifikt med “pointwise maximal leakage”-måttet. Pointwise maximal leakage (PML) har nyligen föreslagits som ett operativt meningsfullt integritetsmått som kvantifierar en gissande motparts informationstillgång om denna är intresserad av en slumpmässig funktion av den privata datan. Till skillnad mot många andra informations-teoretiska mått, så tar PML i åtanke integritetsinskränkningen separat för varje utfall av den privata slumpmässiga variabeln. Därmed möjliggörs mer flexibla försäkringar av integriteten, som strävar bort från genomsnittet av alla utfall. Först, används PML för att analysera det ursprungsberoende beteendet av den etablerade “randomized response”-mekanismen designad för local differential privacy. Därefter formuleras ett generellt optimeringsproblem för integritets-användbarhets-kompromissen med PML som ett integritetsmått och användbarhetsfunktioner baserade på sublinjära funktioner. Genom att utnyttja metoder från konvex optimering, analyseras den giltiga regionen av mekanismer som tillfredsställer en PML-integritetsgaranti och det visas att optimeringen kan lösas av ett linjärt program. Det leder till extremala formuleringar som ger slutna lösningar för några viktiga specialfall: Binär mekanism, allmänna högintegritets-regioner (d.v.s. regioner där kravet på nivån av integritet är hög) och lågintegritets-mekanismer för ekvivalenta ursprungliga distributioner. Vidare presenteras en approximativ lösning för allmänna ursprungliga distributioner i denna miljö. Slutligen, analyseras förlusten av optimalitet hos denna konstruktion för olika ursprungliga distributioner.
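For finite alphabets, PML admits the closed form ℓ(X → y) = log( max_x P(y|x) / P_Y(y) ), which makes the per-outcome leakage of the randomized response mechanism easy to compute. A short sketch (our illustration, not code from the thesis):

```python
import numpy as np

def pml_per_outcome(P, prior):
    """P[x, y] = mechanism P(Y = y | X = x); prior = distribution of X.
    Returns log( max_x P(y|x) / P_Y(y) ) for every outcome y
    (natural log), following the PML closed form for finite alphabets."""
    p_y = prior @ P                      # marginal distribution of Y
    return np.log(P.max(axis=0) / p_y)

# Binary randomized response with flip probability delta, uniform prior.
delta = 0.25
P = np.array([[1 - delta, delta],
              [delta, 1 - delta]])
leak = pml_per_outcome(P, np.array([0.5, 0.5]))
```

Under the uniform prior both outcomes leak log(2(1 − δ)), illustrating the symmetric case; with a skewed prior the two outcomes leak different amounts, which is exactly the prior-dependent behavior the thesis analyzes.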
167

[pt] RESOLVENDO ONLINE PACKING IPS SOB A PRESENÇA DE ENTRADAS ADVERSÁRIAS / [en] SOLVING THE ONLINE PACKING IP UNDER SOME ADVERSARIAL INPUTS

DAVID BEYDA 23 January 2023 (has links)
[pt] Nesse trabalho, estudamos online packing integer programs, cujas colunas são reveladas uma a uma. Já que algoritmos ótimos foram encontrados para o modelo RANDOMORDER – onde a ordem na qual as colunas são reveladas para o algoritmo é aleatória – o foco da área se voltou para modelos menos otimistas. Um desses modelos é o modelo MIXED, no qual algumas colunas são ordenadas de forma adversária, enquanto outras chegam em ordem aleatória. Pouquíssimos resultados são conhecidos para online packing IPs no modelo MIXED, que é o objeto do nosso estudo. Consideramos problemas de online packing com d dimensões de ocupação (d restrições de empacotamento), cada uma com capacidade B. Assumimos que todas as recompensas e ocupações dos itens estão no intervalo [0, 1]. O objetivo do estudo é projetar um algoritmo no qual a presença de alguns itens adversários tenha um efeito limitado na competitividade do algoritmo relativa às colunas de ordem aleatória. Portanto, usamos como benchmark OPTStoch, que é o valor da solução ótima offline que considera apenas a parte aleatória da instância. Apresentamos um algoritmo que obtém recompensas de pelo menos (1 − 5λ − O(ε)) · OPTStoch com alta probabilidade, onde λ é a fração de colunas em ordem adversária. Para conseguir tal garantia, projetamos um algoritmo primal-dual onde as decisões são tomadas pelo algoritmo pela avaliação da recompensa e ocupação de cada item, de acordo com as variáveis duais do programa inteiro. Entretanto, diferentemente dos algoritmos primais-duais para o modelo RANDOMORDER, não podemos estimar as variáveis duais pela resolução de um problema reduzido. A causa disso é que, no modelo MIXED, um adversário pode facilmente manipular algumas colunas para atrapalhar nossa estimação. Para contornar isso, propomos o uso de técnicas conhecidas de online learning para aprender as variáveis duais do problema de forma online, conforme o problema progride. 
/ [en] We study online packing integer programs, where the columns arrive one by one. Since optimal algorithms were found for the RANDOMORDER model – where columns arrive in random order – much of the focus of the area has been on less optimistic models. One of those models is the MIXED model, where some columns are adversarially ordered, while others come in random order. Very few results are known for packing IPs in the MIXED model, which is the object of our study. We consider online IPs with d occupation dimensions (d packing constraints), each one with capacity (or right-hand side) B. We also assume all item rewards and occupations to be at most 1. Our goal is to design an algorithm where the presence of adversarial columns has a limited effect on the algorithm's competitiveness relative to the random-order columns. Thus, we use OPTStoch – the offline optimal solution considering only the random-order part of the input – as a benchmark. We present an algorithm that, relative to OPTStoch, is (1 − 5λ − O(ε))-competitive with high probability, where λ is the fraction of adversarial columns. In order to achieve such a guarantee, we make use of a primal-dual algorithm where the decision variables are set by evaluating each item's reward and occupation according to the dual variables of the IP, like other algorithms for the RANDOMORDER model do. However, we cannot hope to estimate those dual variables by solving a scaled version of the problem, because they could easily be manipulated by an adversary in the MIXED model. Our solution is to use online learning techniques to learn the dual variables in an online fashion, as the problem progresses.
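The primal-dual idea can be sketched as follows: keep dual prices on the d capacity constraints, accept an item only when its reward beats its price, and update the prices with a multiplicative-weights rule rather than by solving a scaled offline problem. The step size and the exact update below are our simplification, not the algorithm from the thesis:

```python
import numpy as np

def online_packing(rewards, occupations, B, eta=0.1):
    """rewards: (n,); occupations: (n, d) with entries in [0, 1];
    B: capacity per dimension. Returns (accepted indices, usage).
    Dual prices are learned online with a multiplicative-weights rule."""
    n, d = occupations.shape
    lam = np.ones(d)                 # dual prices, one per constraint
    used = np.zeros(d)
    accepted = []
    target = B / n                   # per-step capacity budget
    for t in range(n):
        a = occupations[t]
        # Accept if the reward beats the current price AND it fits.
        take = rewards[t] > lam @ a and np.all(used + a <= B)
        if take:
            accepted.append(t)
            used += a
        # Multiplicative-weights update: raise the price of dimensions
        # consumed faster than the per-step budget, lower it otherwise.
        cons = a if take else np.zeros(d)
        lam *= np.exp(eta * (cons - target))
    return accepted, used
```

The hard feasibility check keeps the packing constraints satisfied regardless of how the duals evolve; the learned prices are what controls competitiveness against the random-order part of the input.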
168

MAJORIZED MULTI-AGENT CONSENSUS EQUILIBRIUM FOR 3D COHERENT LIDAR IMAGING

Tony Allen (18502518) 06 May 2024 (has links)
Coherent lidar uses a chirped laser pulse for 3D imaging of distant targets. However, existing coherent lidar image reconstruction methods do not account for the system's aperture, resulting in sub-optimal resolution. Moreover, these methods use majorization-minimization for computational efficiency, but do so without a theoretical treatment of convergence.

In this work, we present Coherent Lidar Aperture Modeled Plug-and-Play (CLAMP) for multi-look coherent lidar image reconstruction. CLAMP uses multi-agent consensus equilibrium (a form of PnP) to combine a neural network denoiser with an accurate physics-based forward model. CLAMP introduces an FFT-based method to account for the effects of the aperture and uses majorization of the forward model for computational efficiency. We also formalize the use of majorization-minimization in consensus optimization problems and prove convergence to the exact consensus equilibrium solution. Finally, we apply CLAMP to synthetic and measured data to demonstrate its effectiveness in producing high-resolution, speckle-free, 3D imagery.
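Multi-agent consensus equilibrium seeks a point where all agents (here, a data-fit term and a denoiser) agree on the reconstruction, typically computed with a Mann iteration on the reflected agent maps. A scalar toy sketch with two quadratic proximal agents (our illustration; the paper's agents are a lidar forward model and a neural denoiser):

```python
import numpy as np

def mace(agents, w0, rho=0.5, iters=200):
    """Mann iteration w <- (1 - rho) w + rho (2G - I)(2F - I) w,
    where F applies each agent to its own copy of the state and
    G replaces every copy by the mean (the consensus operator).
    Returns the consensus reconstruction F_i(w_i) at the fixed point."""
    w = np.array(w0, dtype=float)
    for _ in range(iters):
        Fw = np.array([f(wi) for f, wi in zip(agents, w)])
        r = 2 * Fw - w               # reflected agents (2F - I) w
        m = r.mean()
        w = (1 - rho) * w + rho * (2 * m - r)   # (2G - I) r, averaged
    return float(np.mean([f(wi) for f, wi in zip(agents, w)]))

# Two proximal agents of 0.5 * (x - y_i)^2 with unit step:
# prox(v) = (v + y_i) / 2. Their consensus equilibrium minimizes the
# sum of the two quadratics, i.e. the mean of y_1 and y_2.
y1, y2 = 1.0, 2.0
agents = [lambda v, y=y1: (v + y) / 2, lambda v, y=y2: (v + y) / 2]
x_hat = mace(agents, [0.0, 0.0])
```

With quadratic agents the equilibrium is the exact least-squares consensus; in CLAMP one agent is replaced by a learned denoiser, which is what makes the equilibrium (rather than a single cost function) the right solution concept.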
169

Cutting plane methods and dual problems

Gladin, Egor 28 August 2024 (has links)
Die vorliegende Arbeit befasst sich mit Schnittebenenverfahren, einer Gruppe von iterativen Algorithmen zur Minimierung einer (möglicherweise nicht glatten) konvexen Funktion über einer kompakten konvexen Menge. Wir betrachten zwei prominente Beispiele, nämlich die Ellipsoidmethode und die Methode von Vaidya, und zeigen, dass ihre Konvergenzrate auch bei Verwendung eines ungenauen Orakels erhalten bleibt. Darüber hinaus zeigen wir, dass es möglich ist, diese Methoden im Rahmen der stochastischen Optimierung effizient zu nutzen. Eine andere Richtung, in der Schnittebenenverfahren nützlich sein können, sind duale Probleme. In der Regel können die Zielfunktion und ihre Ableitungen bei solchen Problemen nur näherungsweise berechnet werden. Daher ist die Unempfindlichkeit der Methoden gegenüber Fehlern in den Subgradienten von großem Nutzen. Als Anwendungsbeispiel schlagen wir eine linear konvergierende duale Methode für einen Markow-Entscheidungsprozess mit Nebenbedingungen vor, die auf der Methode von Vaidya basiert. Wir demonstrieren die Leistungsfähigkeit der vorgeschlagenen Methode in einem einfachen RL-Problem. Die Arbeit untersucht auch das Konzept der Genauigkeitszertifikate für konvexe Minimierungsprobleme. Zertifikate ermöglichen die Online-Überprüfung der Genauigkeit von Näherungslösungen. In dieser Arbeit verallgemeinern wir den Begriff der Genauigkeitszertifikate für die Situation eines ungenauen Orakels erster Ordnung. Darüber hinaus schlagen wir einen expliziten Weg zur Konstruktion von Genauigkeitszertifikaten für eine große Klasse von Schnittebenenverfahren vor. Als Nebenprodukt zeigen wir, dass die betrachteten Methoden effizient mit einem verrauschten Orakel verwendet werden können, obwohl sie ursprünglich für ein exaktes Orakel entwickelt wurden. Schließlich untersuchen wir die vorgeschlagenen Zertifikate in numerischen Experimenten und zeigen, dass sie eine enge obere Schranke für das Residuum der Zielfunktion liefern. 
/ The present thesis studies cutting plane methods, which are a group of iterative algorithms for minimizing a (possibly nonsmooth) convex function over a compact convex set. We consider two prominent examples, namely the ellipsoid method and Vaidya's method, and show that their convergence rate is preserved even when an inexact oracle is used. Furthermore, we demonstrate that these methods can be used efficiently in the context of stochastic optimization. Another direction where cutting plane methods can be useful is Lagrange dual problems. Commonly, the objective and its derivatives can only be computed approximately in such problems; thus, the methods' insensitivity to errors in the subgradients comes in handy. As an application example, we propose a linearly converging dual method for a constrained Markov decision process (CMDP) based on Vaidya's algorithm, and we demonstrate the performance of the proposed method in a simple RL environment. The work also investigates the concept of accuracy certificates for convex minimization problems. Certificates allow for online verification of the accuracy of approximate solutions. In this thesis, we generalize the notion of accuracy certificates to the setting of an inexact first-order oracle. Furthermore, we propose an explicit way to construct accuracy certificates for a large class of cutting plane methods. As a by-product, we show that the considered methods can be efficiently used with a noisy oracle even though they were originally designed to be equipped with an exact oracle. Finally, we examine the proposed certificates in numerical experiments, highlighting that they provide a tight upper bound on the objective residual.
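For reference, the exact-oracle ellipsoid method that the thesis builds on can be sketched in a few lines: at each step, cut the current ellipsoid with the subgradient half-space and replace it by the minimum-volume ellipsoid containing the remaining half (a textbook sketch with central cuts, not the inexact-oracle variants studied here):

```python
import numpy as np

def ellipsoid_min(f, grad, x, P, iters=200):
    """Minimize a convex f given a subgradient oracle `grad`, starting
    from the ellipsoid {z : (z - x)^T P^{-1} (z - x) <= 1}.
    Returns the best iterate seen (central-cut update)."""
    n = len(x)
    x = np.array(x, dtype=float)
    best, best_val = x.copy(), f(x)
    for _ in range(iters):
        g = grad(x)
        gt = g / np.sqrt(g @ P @ g)              # P-normalized subgradient
        x = x - (P @ gt) / (n + 1)               # move toward the cut
        # Minimum-volume ellipsoid containing the half-ellipsoid:
        P = n**2 / (n**2 - 1.0) * (P - 2.0 / (n + 1) * np.outer(P @ gt, gt @ P))
        if f(x) < best_val:
            best, best_val = x.copy(), f(x)
    return best
```

The volume shrinks by a fixed factor per step, giving the linear (geometric) convergence that the dual CMDP method above inherits from Vaidya-type cutting planes.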
170

Localization algorithms for passive sensor networks

Ismailova, Darya 23 January 2017 (has links)
Locating a radiating source based on range or range-difference measurements obtained from a network of passive sensors has been a subject of research over the past two decades, due to the problem's importance in applications in wireless communications, surveillance, navigation, geosciences, and several other fields. In this thesis, we develop new solution methods for the problem of localizing a single radiating source based on range and range-difference measurements. Iterative re-weighting algorithms are developed for both range-based and range-difference-based least squares localization. We then propose a penalty convex-concave procedure for finding an approximate solution to the nonlinear least squares problems that arise from the range measurements. Finally, sequential convex relaxation procedures are proposed to obtain the nonlinear least squares estimate of the source coordinates. Localization in wireless sensor networks, where RF signals are used to derive the ranging measurements, is the primary application area of this work. However, the proposed solution methods are general and can be applied to range and range-difference measurements derived from other types of signals. / Graduate / 0544 / ismailds@uvic.ca
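A plain Gauss-Newton baseline for the range-based nonlinear least-squares problem illustrates the formulation the proposed methods address (our illustration; this is not one of the thesis's algorithms):

```python
import numpy as np

def locate_source(anchors, ranges, x0, iters=50):
    """Gauss-Newton for min_x sum_i (||x - a_i|| - r_i)^2.
    anchors: (m, 2) sensor positions; ranges: (m,) measured ranges;
    x0: initial source guess. Assumes x never coincides with an anchor."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors                     # (m, 2)
        d = np.linalg.norm(diff, axis=1)       # current distances
        J = diff / d[:, None]                  # Jacobian of the residuals
        r = d - ranges                         # residuals
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x - step
    return x

# Noise-free sanity check: four corner anchors, source at (1, 2).
anchors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
source = np.array([1.0, 2.0])
ranges = np.linalg.norm(anchors - source, axis=1)
est = locate_source(anchors, ranges, x0=[2.5, 2.5])
```

With noisy ranges this plain least-squares fit degrades, which is precisely the setting where the iterative re-weighting and convex-concave procedures of the thesis are designed to help.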
