11

Efficient Resource Allocation In Energy Harvesting Wireless Networks

Tekbiyik Ersoy, Neyre 01 December 2012 (has links) (PDF)
This thesis presents various studies on energy-efficient design of wireless networks. It starts with a survey of recent shortest-path-based energy-efficient routing algorithms developed for ad hoc and sensor networks, providing a comprehensive classification of these algorithms. In addition to energy-efficient design, sustainable and environmentally friendly deployment of wireless networks demands increased use of renewable energy. However, this calls for novel design principles to efficiently utilize the variation in the availability of the energy. The thesis continues with an investigation of state-of-the-art resource management and scheduling algorithms developed for energy harvesting wireless sensor networks. Building on the state of the art, the main contribution of this thesis is to formulate and solve a utility-maximizing scheduling problem in a multiuser broadcast channel with an energy harvesting transmitter. The goal is to determine the optimal power and time allocations to users between energy arrivals. The structural properties of the problem are analyzed, and its biconvexity is proved. A Block Coordinate Descent (BCD) based algorithm is developed to obtain the optimal solution. Two simple and computationally scalable heuristics, PTF and ProNTO, which mimic the characteristics of the optimal policy, are proposed. Finally, an online algorithm, PTF-On, that bypasses the need for offline knowledge of the energy harvesting statistics, is developed. PTF-On uses a Kalman-filter-based energy harvesting prediction algorithm, developed in this thesis, to predict the energy that will arrive in the future.
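To illustrate the kind of predictor the abstract mentions, below is a minimal sketch of one-step-ahead energy-arrival prediction with a scalar Kalman filter. The random-walk state model and the noise variances q and r are illustrative assumptions, not the model actually developed in the thesis.

```python
import numpy as np

def predict_energy(observations, q=0.01, r=0.1):
    """One-step-ahead harvested-energy prediction with a scalar Kalman filter.

    Assumed model (illustrative, not the thesis's):
        x_t = x_{t-1} + w_t,  w_t ~ N(0, q)   # energy level drifts as a random walk
        y_t = x_t + v_t,      v_t ~ N(0, r)   # noisy per-slot measurement
    """
    x_hat, p = float(observations[0]), 1.0    # initial estimate and variance
    predictions = []
    for y in observations:
        # Predict: under the random-walk model the estimate carries over.
        x_pred, p_pred = x_hat, p + q
        predictions.append(x_pred)
        # Update with the newly observed arrival.
        k = p_pred / (p_pred + r)             # Kalman gain
        x_hat = x_pred + k * (y - x_pred)
        p = (1.0 - k) * p_pred
    return np.array(predictions)
```

With observations given as a sequence of per-slot harvested-energy measurements, predictions[t] is the filter's estimate of arrival t made before observing it.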
12

Block Coordinate Descent for Regularized Multi-convex Optimization

Xu, Yangyang 16 September 2013 (has links)
This thesis considers regularized block multi-convex optimization, where the feasible set and objective function are generally non-convex but convex in each block of variables. I review some of its interesting examples and propose a generalized block coordinate descent (BCD) method. The generalized BCD uses three different block-update schemes. Based on the property of one block subproblem, one can freely choose one of the three schemes to update the corresponding block of variables. Appropriate choices of block-update schemes can often speed up the algorithm and greatly save computing time. Under certain conditions, I show that any limit point satisfies the Nash equilibrium conditions. Furthermore, I establish its global convergence and estimate its asymptotic convergence rate by assuming a property based on the Kurdyka-Łojasiewicz inequality. As a consequence, this thesis gives a global linear convergence result of cyclic block coordinate descent for strongly convex optimization. The proposed algorithms are adapted for factorizing nonnegative matrices and tensors, as well as completing them from their incomplete observations. The algorithms were tested on synthetic data, hyperspectral data, as well as image sets from the CBCL, ORL and Swimmer databases. Compared to the existing state-of-the-art algorithms, the proposed algorithms demonstrate superior performance in both speed and solution quality.
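To make the block updates concrete, here is a sketch of a two-block coordinate descent for nonnegative matrix factorization, one of the applications named in the abstract. Each block takes a projected gradient (prox-linear style) step with step size 1/L, where L is the spectral norm bounding the block gradient's Lipschitz constant; the function name, rank, and iteration count are illustrative assumptions.

```python
import numpy as np

def nmf_bcd(X, rank, iters=200, seed=0):
    """Two-block coordinate descent sketch for X ~ W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        # Update W with H fixed: projected gradient step with step 1/L_w,
        # where L_w = ||H H^T||_2 bounds the block gradient's Lipschitz constant.
        L_w = np.linalg.norm(H @ H.T, 2)
        W = np.maximum(0.0, W - ((W @ H - X) @ H.T) / L_w)
        # Update H with W fixed (the subproblem is convex in H alone).
        L_h = np.linalg.norm(W.T @ W, 2)
        H = np.maximum(0.0, H - (W.T @ (W @ H - X)) / L_h)
    return W, H
```

Each subproblem is convex even though the joint objective 0.5*||X - WH||_F^2 is not, which is exactly the multi-convex structure the thesis exploits.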
13

Studies on Optimization Methods for Nonlinear Semidefinite Programming Problems / 非線形半正定値計画問題に対する最適化手法の研究

Yamakawa, Yuya 23 March 2015 (has links)
Kyoto University / 0048 / New-system doctoral program / Doctor of Informatics / Degree Kōhaku No. 19122 / Jōhaku No. 568 / Call number: 新制||情||100 (University Library) / 32073 / Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University / Examination committee: Prof. Nobuo Yamashita (chief examiner), Prof. Yoshito Ohta, Prof. Hiroshi Nagamochi / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
14

Review of Large-Scale Coordinate Descent Algorithms for Multi-class Classification with Memory Constraints

Jovanovich, Aleksandar 03 June 2013 (has links)
No description available.
15

Métodos de busca em coordenada / Coordinate descent methods

Santos, Luiz Gustavo de Moura dos 22 November 2017 (has links)
Real-world problems in areas such as machine learning are notable for their huge numbers of decision variables (> 10^6) and large data volumes. At this scale, obtaining and working with second-order information is prohibitively expensive. Such problems have properties that favor coordinate descent methods, a class of methods characterized by updating only one or a few variables at each iteration. The variant most commonly described in the literature minimizes over the variables cyclically, but recent results suggest that randomized variants enjoy better convergence guarantees. In the randomized variant, the variable to be updated at each iteration is drawn according to a fixed, but not necessarily uniform, probability distribution. This work studies several variations of the coordinate descent method. Theoretical aspects of these methods are presented, but the focus is on practical implementation aspects and on an experimental comparison of coordinate descent variants applied to different problems with real-world applications.
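A minimal sketch of the two variants the abstract contrasts, applied to a strongly convex quadratic where each one-dimensional subproblem has a closed-form solution; the quadratic test problem and the uniform sampling distribution are illustrative assumptions (the thesis also considers non-uniform distributions).

```python
import numpy as np

def coordinate_descent(A, b, iters=1000, rule="random", seed=0):
    """Minimize 0.5*x'Ax - b'x by exact single-coordinate minimization.

    rule="cyclic" sweeps the coordinates in order; rule="random" samples
    one coordinate per iteration. A is assumed symmetric positive definite.
    """
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    for t in range(iters):
        i = rng.integers(n) if rule == "random" else t % n
        # Exact minimization over coordinate i:
        # x_i = (b_i - sum_{j != i} A_ij x_j) / A_ii.
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x
```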
16

Filtragem adaptativa de baixa complexidade computacional. / Low-complexity adaptive filtering.

Almeida Neto, Fernando Gonçalves de 20 February 2015 (has links)
This work proposes low-complexity adaptive filtering algorithms for widely linear processing and for beamforming. New reduced-complexity widely linear adaptive filtering techniques are developed for signals represented by complex numbers or by quaternions. The proposed algorithms avoid redundant second-order statistics in the autocorrelation matrix, which is achieved by replacing the original data vector with a real-valued data vector carrying the same information. In this way, many complex-complex (or quaternion-quaternion) operations are replaced by cheaper real-complex (or real-quaternion) operations. Analyses in the mean and in the variance are developed that apply to any quaternion least-mean-squares (LMS) algorithm, and the fastest-converging widely linear quaternion LMS algorithm with real-valued input vector is obtained. A new stable, low-cost version of the widely linear recursive least squares (RLS) algorithm is also developed: the technique is modified to use the dichotomous coordinate descent (DCD) method, resulting in an approach whose computational cost is linear in the data vector length N, whereas the original algorithm's cost is quadratic in N. For beamforming applications, new techniques based on the adaptive re-weighting homotopy algorithm are developed. They are applied to sensor arrays in which the number of interfering sources is smaller than the number of sensors, so that the autocorrelation matrix is ill-conditioned. DCD iterations are used to further reduce the computational cost.
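As an illustration of the DCD building block the abstract relies on, here is a sketch of the cyclic dichotomous coordinate descent iteration for the normal equations R h = beta with a symmetric positive definite R. Step sizes are powers of two, so in fixed-point hardware the coordinate updates reduce to bit shifts and additions; the parameter names h_max, bits and max_updates are illustrative assumptions rather than the thesis's notation.

```python
import numpy as np

def dcd_solve(R, beta, h_max=1.0, bits=8, max_updates=1000):
    """Cyclic DCD sketch for R h = beta (R symmetric positive definite)."""
    n = len(beta)
    h = np.zeros(n)
    r = beta.copy()              # residual beta - R @ h
    delta = h_max                # step size, halved at each bit level
    updates = 0
    for _ in range(bits):
        delta /= 2.0
        improved = True
        while improved and updates < max_updates:
            improved = False
            for i in range(n):
                # Update coordinate i only if it reduces the residual enough.
                if abs(r[i]) > (delta / 2.0) * R[i, i]:
                    s = np.sign(r[i])
                    h[i] += s * delta            # power-of-two step
                    r -= s * delta * R[:, i]     # keep the residual in sync
                    updates += 1
                    improved = True
    return h
```

Capping the number of accepted updates (max_updates) keeps the per-sample complexity bounded, which is what makes the RLS-DCD combination linear in the filter length.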
18

Generalized unit commitment by the radar multiplier method

Beltran Royo, César 09 July 2001 (has links)
This operations research thesis is situated in the field of the power generation industry. The general objective of this work is to efficiently solve the Generalized Unit Commitment (GUC) problem by means of specialized software. The GUC problem generalizes the Unit Commitment (UC) problem by simultaneously solving the associated Optimal Power Flow (OPF) problem. There are many approaches to solve the UC and OPF problems separately, but approaches to solve them jointly, i.e. to solve the GUC problem, are quite scarce. One of these GUC solving approaches is due to professors Batut and Renaud, whose methodology has been taken as a starting point for the methodology presented herein. This thesis report is structured as follows. Chapter 1 describes the state of the art of the UC and GUC problems. The formulations of the classical short-term power planning problems related to the GUC problem, namely the economic dispatching problem, the OPF problem, and the UC problem, are reviewed. Special attention is paid to the UC literature and to the traditional methods for solving the UC problem. In chapter 2 we extend the OPF model developed by professors Heredia and Nabona to obtain our GUC model. The variables used and the modelling of the thermal, hydraulic and transmission systems are introduced, as is the objective function. Chapter 3 deals with the Variable Duplication (VD) method, which is used to decompose the GUC problem as an alternative to the Classical Lagrangian Relaxation (CLR) method. Furthermore, in chapter 3 the dual bounds provided by the VD method and by the CLR method are theoretically compared. Throughout chapters 4, 5, and 6 our solution methodology, the Radar Multiplier (RM) method, is designed and tested. Three independent matters are studied: first, the auxiliary problem principle method, used by Batut and Renaud to treat the inseparable augmented Lagrangian, is compared with the block coordinate descent method from both theoretical and practical points of view. Second, the Radar Subgradient (RS) method, a new Lagrange multiplier updating method, is proposed and computationally compared with the classical subgradient method. And third, we study the local character of the optimizers computed by the Augmented Lagrangian Relaxation (ALR) method when solving the GUC problem. A heuristic to improve the local ALR optimizers is designed and tested. Chapter 7 is devoted to our computational implementation of the RM method, the MACH code. First, the design of MACH is reviewed briefly and then its performance is tested by solving real-life large-scale UC and GUC instances. Solutions computed using our VD formulation of the GUC problem are only partially primal feasible, since they do not necessarily fulfill the spinning reserve constraints. In chapter 8 we study how to modify this GUC formulation with the aim of obtaining fully primal feasible solutions. A successful test based on a simple UC problem is reported. The conclusions, contributions of the thesis, and proposed further research can be found in chapter 9.
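For context on the multiplier updates being compared, the sketch below shows the classical subgradient update for maximizing a Lagrangian dual with inequality multipliers, the baseline against which the Radar Subgradient method is evaluated. Here solve_subproblem is a hypothetical callback standing in for the decomposed GUC subproblems, and the diminishing step a0/k is one standard choice, not necessarily the rule used in the thesis.

```python
import numpy as np

def subgradient_dual_ascent(solve_subproblem, lam0, steps=50, a0=1.0):
    """Classical subgradient method sketch for Lagrangian dual maximization.

    solve_subproblem(lam) is assumed to minimize the Lagrangian for fixed
    multipliers lam and return (g, q): the constraint violation g, which
    is a subgradient of the dual function, and the dual value q(lam).
    """
    lam = np.asarray(lam0, dtype=float)
    best_q, best_lam = -np.inf, lam.copy()
    for k in range(1, steps + 1):
        g, q = solve_subproblem(lam)
        if q > best_q:                      # keep the best dual point seen
            best_q, best_lam = q, lam.copy()
        # Diminishing step; projection keeps inequality multipliers >= 0.
        lam = np.maximum(0.0, lam + (a0 / k) * g)
    return best_lam, best_q
```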
19

Some Topics in ROC Curves Analysis

Huang, Xin 07 May 2011 (has links)
The receiver operating characteristic (ROC) curve is a popular tool for evaluating continuous diagnostic tests. The traditional definition of ROC curves implicitly incorporates the idea of "hard" thresholding, which also results in the empirical curves being step functions. The first topic introduces a novel definition of soft ROC curves, which incorporates the idea of "soft" thresholding. The softness of a soft ROC curve is controlled by a regularization parameter that can be selected suitably by a cross-validation procedure. A byproduct of the soft ROC curves is that the corresponding empirical curves are smooth. The second topic is the combination of several diagnostic tests to achieve better diagnostic accuracy. We consider the optimal linear combination that maximizes the area under the ROC curve (AUC); the estimates of the combination's coefficients can be obtained via a non-parametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimate obtained by re-substitution is too optimistic. To adjust for the upward bias, several methods are proposed. Among them, the cross-validation approach is especially advocated, and an approximated cross-validation is developed to reduce the computational cost. Furthermore, these proposed methods can be applied to select important diagnostic tests. However, this best-subset variable selection method is not practical when the number of diagnostic tests is large. The third topic develops a LASSO-type procedure for variable selection. To solve the non-convex maximization problem in the proposed procedure, an efficient algorithm is developed based on soft ROC curves, difference-of-convex programming, and a coordinate descent algorithm.
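As a reference point for the coordinate descent component of the third topic, here is a sketch of cyclic coordinate descent with soft-thresholding for a standard squared-error LASSO. The thesis's actual objective is AUC-based and non-convex, so this convex textbook version is only an illustration; the function name and parameters are assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, iters=100):
    """Cyclic coordinate descent sketch for min 0.5*||y - X b||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)   # per-coordinate curvature ||X_j||^2
    r = y - X @ b                   # running residual
    for _ in range(iters):
        for j in range(p):
            r += X[:, j] * b[j]     # remove coordinate j's contribution
            rho = X[:, j] @ r       # correlation with the partial residual
            # Closed-form one-dimensional solution: soft-thresholding.
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]     # add the updated contribution back
    return b
```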
