  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Convex optimization for cosegmentation / Optimisation convexe pour la cosegmentation

Joulin, Armand 17 December 2012 (has links)
People and most animals have a natural ability to see the world and understand it effortlessly. The apparent simplicity of this task suggests that the ability is, to some extent, mechanical, i.e., that it does not require high-level thinking or profound reasoning. This observation suggests that our visual perception of the world should be reproducible on a mechanical device such as a computer. Computer vision is the field of research dedicated to creating a form of visual perception on computers. The first work on computer vision dates from the 1950s, but the computing power needed to process and analyze visual data was not available at the time. It is only recently that improvements in computing power and storage capacity have allowed this field to really emerge. On the one hand, constant progress in computer vision has led to dedicated solutions to practical or industrial problems: detecting human faces, tracking people in crowded areas or spotting defects in production chains are industrial applications where computer vision is used. On the other hand, when it comes to creating a general visual perception for computers, it is probably fair to say that less progress has been made, and the community is still struggling with fundamental problems. One of these problems is to reproduce our ability to group the visual input recorded by an optical device into meaningful regions. This procedure, called segmentation, separates a scene into meaningful entities (e.g., objects or actions). Segmentation seems not only natural but essential for people to fully understand a given scene, yet it remains very challenging for a computer. One reason is the difficulty of clearly identifying what "meaningful" should be: depending on the scene or the situation, a region may have different interpretations. In a street scene, distinguishing a pedestrian matters, while the pedestrian's clothes may not; in a fashion-show scene, a garment becomes an important element and hence a meaningful region.
In this thesis, we focus on the segmentation task and avoid this fundamental difficulty by considering segmentation as a weakly supervised learning problem. Instead of segmenting images according to some predefined definition of "meaningful" regions, we develop methods to segment multiple images jointly into entities that repeatedly appear across the set of images. In other words, we define "meaningful" regions from a statistical point of view: they are regions that appear frequently in a dataset, and we design procedures to discover them. This leads us to design models whose scope goes beyond this application to vision. Our approach takes its roots in the field of machine learning, whose goal is to design efficient methods to retrieve and/or learn common patterns in data. Machine learning has gained popularity in recent decades due to improvements in computing power and the ever-growing size of available databases. In this thesis, we focus on methods tailored to retrieving hidden information from poorly annotated data, i.e., data with incomplete or partial annotations. In particular, given a specific segmentation task defined by a set of images, we aim to segment the images and to learn a related model that can segment unannotated images. Finally, our research leads us to the field of numerical optimization, where we design algorithms especially tailored to our problems. In particular, many of the numerical problems considered in this thesis cannot be solved by off-the-shelf software because of the complexity of their formulation. We use and adapt recently developed tools to relax complex combinatorial problems into convex problems for which the optimal solution is guaranteed to be found. We illustrate the promise of our formulations and algorithms on applications beyond computer vision; in particular, we show that our work may also be used in text classification and in cell biology.
42

Optimizing Optimization: Scalable Convex Programming with Proximal Operators

Wytock, Matt 01 March 2016 (has links)
Convex optimization has developed a wide variety of useful tools critical to many applications in machine learning. However, unlike linear and quadratic programming, general convex solvers have not yet reached sufficient maturity to fully decouple the convex programming model from the numerical algorithms required for implementation. Especially as datasets grow in size, there is a significant gap in speed and scalability between general solvers and specialized algorithms. This thesis addresses this gap with a new model for convex programming based on an intermediate representation of convex problems as a sum of functions with efficient proximal operators. This representation serves two purposes: 1) many problems can be expressed in terms of functions with simple proximal operators, and 2) the proximal operator form serves as a general interface to any specialized algorithm that can incorporate additional ℓ2-regularization. On a single CPU core, numerical results demonstrate that the prox-affine form results in significantly faster algorithms than existing general solvers based on conic forms. In addition, splitting problems into separable sums is attractive from the perspective of distributing solver work amongst multiple cores and machines. We apply large-scale convex programming to several problems arising from building the next-generation, information-enabled electrical grid. In these problems (as is common in many domains) large, high-dimensional datasets present opportunities for novel data-driven solutions. We present approaches based on convex models for several problems: probabilistic forecasting of electricity generation and demand, preventing failures in microgrids and source separation for whole-home energy disaggregation.
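As a small, assumed illustration of the proximal-operator building block this abstract describes (not the prox-affine solver from the thesis itself): the proximal operator of the ℓ1 norm is elementwise soft thresholding, and composing it with gradient steps on a smooth term gives the classic proximal gradient (ISTA) iteration for the lasso.

```python
def prox_l1(v, t):
    """Proximal operator of t * ||.||_1: elementwise soft thresholding."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def lasso_prox_grad(A, b, lam, step, iters=200):
    """Proximal gradient (ISTA) for 0.5 * ||Ax - b||^2 + lam * ||x||_1.
    Illustrative sketch only, with A, b as plain nested lists."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - b, gradient of the smooth term g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step on the smooth part, prox step on the nonsmooth part
        x = prox_l1([x[j] - step * g[j] for j in range(n)], step * lam)
    return x

# With A = I the lasso solution is soft thresholding of b directly:
# lasso_prox_grad([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.2], 0.5, 1.0) -> [0.5, 0.0]
```

Any specialized routine exposing such a prox evaluation can, in principle, be composed with others through operator-splitting schemes, which is the interface idea the abstract points to.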
43

Factor analysis of dynamic PET images

Cruz Cavalcanti, Yanna 31 October 2018 (has links) (PDF)
Thanks to its ability to evaluate metabolic functions in tissues from the temporal evolution of a previously injected radiotracer, dynamic positron emission tomography (PET) has become a ubiquitous analysis tool for quantifying biological processes. Several quantification techniques from the PET imaging literature require a prior estimation of global time-activity curves (TACs), herein called factors, representing the concentration of tracer in a reference tissue or blood over time. To this end, factor analysis has often been employed as an unsupervised learning solution for the extraction of factors and their respective fractions in each voxel. Inspired by the hyperspectral unmixing literature, this manuscript addresses two main drawbacks of general factor analysis techniques applied to dynamic PET. The first is the assumption that the elementary response of each tissue to tracer distribution is spatially homogeneous. Even though this homogeneity assumption has proven its effectiveness in several factor analysis studies, it may not always provide a sufficient description of the underlying data, in particular when abnormalities are present. To tackle this limitation, the models proposed herein introduce an additional degree of freedom in the factors related to specific binding: a spatially-variant perturbation affects a nominal and common TAC representative of the high-uptake tissue. This variation is spatially indexed and constrained with a dictionary that is either previously learned or explicitly modelled with convolutional nonlinearities affecting non-specific binding tissues. The second drawback is related to the noise distribution in PET images. Even though the positron decay process can be described by a Poisson distribution, the actual noise in reconstructed PET images is not expected to be simply described by Poisson or Gaussian distributions. Therefore, we propose to consider a popular and quite general loss function, the β-divergence, which generalizes conventional loss functions such as the least-squares distance and the Kullback-Leibler and Itakura-Saito divergences, respectively corresponding to Gaussian, Poisson and Gamma distributions. This loss function is applied to three factor analysis models in order to evaluate its impact on dynamic PET images with different reconstruction characteristics.
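The β-divergence mentioned above has a standard closed form, d_β(x|y) = (x^β + (β−1)y^β − βxy^(β−1)) / (β(β−1)) for β ∉ {0, 1}, with the limits β→1 and β→0 giving the Kullback-Leibler and Itakura-Saito divergences. A minimal scalar sketch (a generic illustration, not the code from the thesis):

```python
import math

def beta_divergence(x, y, beta):
    """Scalar beta-divergence d_beta(x | y) for x, y > 0.
    beta = 2 -> squared Euclidean (Gaussian noise model),
    beta = 1 -> Kullback-Leibler (Poisson),
    beta = 0 -> Itakura-Saito (Gamma)."""
    if beta == 1:  # limit beta -> 1
        return x * math.log(x / y) - x + y
    if beta == 0:  # limit beta -> 0
        return x / y - math.log(x / y) - 1.0
    return (x ** beta + (beta - 1) * y ** beta
            - beta * x * y ** (beta - 1)) / (beta * (beta - 1))

# beta_divergence(3.0, 1.0, 2) -> 2.0, i.e. 0.5 * (3 - 1)**2
```

In a factor-analysis fit this scalar loss would be summed over all voxels and time frames, so varying β interpolates between the classical noise models without changing the rest of the model.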
44

Novos métodos incrementais para otimização convexa não-diferenciável em dois níveis com aplicações em reconstrução de imagens em tomografia por emissão / New incremental methods for bilevel nondifferentiable convex optimization with applications to image reconstruction in emission tomography

Simões, Lucas Eduardo Azevedo 28 March 2013 (has links)
We present two new methods for solving bilevel convex optimization problems in which both functions are not necessarily differentiable, i.e., we show that the sequences generated by both methods converge to the optimal set of a nonsmooth function subject to a constraint set that itself involves the minimization of a nondifferentiable function. Neither algorithm requires solving subproblems or performing line searches during its iterations. Finally, to show that the methods are viable, we solve a tomographic image reconstruction problem.
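For context, the textbook baseline for nonsmooth convex minimization is the subgradient method with diminishing steps; the thesis methods differ precisely in avoiding subproblem solves and line searches, but a tiny generic sketch (assumed example) shows the kind of iteration involved:

```python
def subgradient_min_abs(target, iters=2000):
    """Plain subgradient method on the nonsmooth f(x) = |x - target|
    with diminishing steps 1/k. Generic illustration only; not one of
    the incremental bilevel methods from the thesis."""
    x = 0.0
    for k in range(1, iters + 1):
        # a subgradient of |x - target| (any value in [-1, 1] works at x == target)
        g = 1.0 if x > target else (-1.0 if x < target else 0.0)
        x -= g / k  # diminishing, nonsummable steps guarantee convergence
    return x
```

Because f is nondifferentiable at the minimizer, the iterates oscillate around it with amplitude bounded by the current step size, which is why diminishing steps are needed.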
45

Tópicos em métodos ótimos para otimização convexa / Topics in optimal methods for convex optimization

Rossetto, Diane Rizzotto 29 March 2012 (has links)
In this work we introduce a new optimal method for constrained differentiable convex optimization, based on previous ideas by Nesterov and by Auslender and Teboulle. The method proposed by the latter authors uses a coercive Bregman distance to ensure that the iterates remain in the interior of the feasible set. Our results extend this method to allow the use of the squared Euclidean distance. We also show how to estimate the Lipschitz constant of the gradient of the objective function, improving the numerical behavior of the method. Finally, we present numerical experiments to validate our approach and compare it to Nesterov's algorithm.
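One common way to estimate the gradient's Lipschitz constant on the fly is backtracking: double a trial L until the quadratic upper bound f(y) ≤ f(x) + ∇f(x)·(y−x) + (L/2)‖y−x‖² holds at the candidate step y = x − ∇f(x)/L. The thesis' exact scheme may differ; this is a minimal one-dimensional sketch of the idea:

```python
def estimate_lipschitz_step(f, grad, x, L0=1.0):
    """Backtracking estimate of a local Lipschitz constant for grad f (1-D).
    Doubles L until the quadratic upper bound at x is satisfied by the
    gradient step y = x - grad(x)/L; returns (L, y). Illustrative sketch."""
    L = L0
    g = grad(x)
    while True:
        y = x - g / L
        if f(y) <= f(x) + g * (y - x) + 0.5 * L * (y - x) ** 2:
            return L, y
        L *= 2.0

# For f(x) = 2x^2 (curvature 4), starting from L0 = 1 the estimate
# doubles to exactly 4, the true Lipschitz constant of f'.
```

Using such a local estimate instead of a conservative global constant typically allows larger steps and accounts for the improved numerical behavior the abstract reports.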
46

Arquitetura de controle de movimento para um robô móvel sobre rodas visando otimização energética. / Motion control architecture for a wheeled mobile robot aiming at energy optimization.

Serralheiro, Werther Alexandre de Oliveira 05 March 2018 (has links)
This work presents a motion control architecture for moving a differential-drive wheeled mobile robot between two distinct postures in a structured, obstacle-free environment. The classic concept of efficiency is used to define the control strategies: a robot moves efficiently when it accomplishes the given task in the shortest time and using the least amount of energy. The proposed architecture is an instance of the Nested Hierarchical Controller (NHC) model, composed of three levels of abstraction: (i) Path Planning, (ii) Trajectory Planning and (iii) Trajectory Tracking. The proposed Path Planning smooths a Dubins geodesic, the most efficient path, with a clamped spline so that the path is defined by a twice-differentiable curve. A transformation of the robot's configuration space is then performed. The Trajectory Planning step is a convex optimization problem in the form of a Second-Order Cone Program, whose objective is a weighted function of time and energy. Since the travel time and the total energy consumed by the robot have a hyperbolic relation, a tuning algorithm for the weighting coefficient is proposed. Finally, a dual-loop Trajectory Tracker based on input-output feedback linearization and PID control is proposed, which obtained satisfactory results in tracking the path.
47

Distributed Statistical Learning under Communication Constraints

El Gamal, Mostafa 21 June 2017 (has links)
In this thesis, we study distributed statistical learning, in which multiple terminals, connected by links with limited capacity, cooperate to perform a learning task. Since the links connecting the terminals have limited capacity, the messages exchanged between the terminals have to be compressed. The goal of this thesis is to investigate how to compress the data observations at multiple terminals and how to use the compressed data for inference. We first focus on the distributed parameter estimation problem, in which terminals send messages related to their local observations using limited rates to a fusion center that obtains an estimate of a parameter related to the observations of all terminals. It is well known that if the transmission rates are in the Slepian-Wolf region, the fusion center can fully recover all observations and hence can construct an estimator with the same performance as in the centralized case. One natural question is whether Slepian-Wolf rates are necessary to achieve the same estimation performance as in the centralized case. In this thesis, we show that the answer to this question is negative. We then examine the optimality of data dimensionality reduction via sufficient statistics compression in distributed parameter estimation problems. The dimensionality reduction step is often needed, especially when the data has a very high dimension and the communication rate is not as high as the one characterized above. We show that reducing the dimensionality by extracting sufficient statistics of the parameter to be estimated does not degrade the overall estimation performance in the presence of communication constraints. We further analyze the optimal estimation performance under communication constraints and verify the derived bound using simulations. Finally, we study distributed optimization problems, for which we examine the randomized distributed coordinate descent algorithm with quantized updates. In the literature, the iteration complexity of the randomized distributed coordinate descent algorithm has been characterized under the assumption that machines can exchange updates with infinite precision. We consider a practical scenario in which the message exchange occurs over channels with finite capacity, and hence the updates have to be quantized. We derive sufficient conditions on the quantization error under which the algorithm with quantized updates still converges.
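The quantized-update setting in the last paragraph can be sketched in a few lines (an assumed illustration of the setting, not the algorithm analyzed in the thesis): run randomized coordinate descent on a quadratic, but round each coordinate update to a finite grid before applying it, as a channel with finite capacity would force.

```python
import random

def quantize(v, step):
    """Uniform quantizer: round v to the nearest multiple of `step`."""
    return round(v / step) * step

def quantized_coordinate_descent(Q, b, lr, qstep, iters=3000, seed=0):
    """Randomized coordinate descent on f(x) = 0.5 x'Qx - b'x with each
    coordinate update quantized before being applied. With a fine enough
    grid the iterates settle within O(qstep) of the true minimizer."""
    rng = random.Random(seed)
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        i = rng.randrange(n)  # pick a coordinate uniformly at random
        grad_i = sum(Q[i][j] * x[j] for j in range(n)) - b[i]
        x[i] -= quantize(lr * grad_i, qstep)  # quantized update
    return x
```

Once the true update magnitude drops below half the grid step, the quantizer outputs zero and progress stalls, which is exactly why sufficient conditions on the quantization error are needed for convergence guarantees.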
48

Hosting Capacity for Renewable Generations in Distribution Grids

January 2018 (has links)
Nowadays, the widespread introduction of distributed generators (DGs) brings great challenges to the design, planning, and reliable operation of the power system. Assessing the capability of a distribution network to accommodate renewable power generation is therefore urgent and necessary. In this respect, the concept of hosting capacity (HC) is generally accepted by engineers to evaluate the reliability and sustainability of a system with high penetration of DGs. For HC calculation, existing research provides simulation-based methods that cannot find the global optimum, as well as OPF (optimal power flow) based methods whose many constraints prevent the solution from being obtained exactly and which likewise cannot guarantee global optimality. To overcome these shortcomings, I propose a new methodology. First, I start with an optimization problem formulation and provide a flexible objective function to satisfy different requirements. The power flow equations are the basic rule, and I transform them from the commonly used polar coordinates to rectangular coordinates. Several constraints arising from the operating criteria are added incrementally, and I aim to preserve convexity as much as possible so that the optimal solution can be obtained. Second, I provide a geometric view of the convex problem model, so the process of finding the global optimum can be visualized clearly. I then implement a segment-based optimization tool to speed up the computation: a large network can be divided into segments and solved in parallel without changing the results. Finally, the robustness of the methodology is demonstrated through extensive simulations on IEEE distribution networks (e.g., 8-bus, 16-bus, 32-bus, 64-bus, 128-bus). The proposed method is thus verified to calculate the hosting capacity accurately and to reach the globally optimal solution. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2018
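The polar-to-rectangular change of variables mentioned above is simple but consequential: writing a bus voltage as V = e + jf makes the magnitude constraint |V|² = e² + f² a quadratic in (e, f), which is the form convex relaxations work with. A small assumed sketch (function names are illustrative, not from the thesis):

```python
import math

def polar_to_rect(vm, theta):
    """Convert a bus voltage from polar form (magnitude vm, angle theta in
    radians) to rectangular coordinates (e, f), i.e. V = e + jf."""
    return vm * math.cos(theta), vm * math.sin(theta)

def voltage_limits_ok(e, f, vmin=0.95, vmax=1.05):
    """Bus voltage magnitude constraint vmin <= |V| <= vmax, expressed as a
    quadratic in the rectangular variables: vmin^2 <= e^2 + f^2 <= vmax^2."""
    mag2 = e * e + f * f
    return vmin ** 2 <= mag2 <= vmax ** 2
```

In the rectangular variables the power flow injections also become quadratic forms in (e, f), which is what allows the incremental construction of a (mostly) convex feasible set described in the abstract.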
49

Optimization Methods for a Reconfigurable OTA Chamber

Arnold, Matthew David 01 April 2018 (has links)
Multiple-input multiple-output (MIMO) technology has enabled increased performance of wireless communication devices. The increased complexity associated with MIMO devices requires more realistic testing environments to ensure device performance. This testing can be accomplished by either very accurate but expensive anechoic chambers, less accurate but inexpensive mode-stirred chambers, or the newly introduced reconfigurable over-the-air chamber (ROTAC) that combines the benefits of both anechoic chambers and reverberation chambers. This work focuses on efficient optimization methods to quantify the performance of the ROTAC. First, an efficient optimization technique that combines convex optimization and a simple gradient descent algorithm is developed that can be applied to different ROTAC performance metrics. Plane wave synthesis is used to benchmark performance versus chamber complexity, where the complexity is defined in terms of chamber size and the number of ports in the chamber. Next, the optimization technique is used to study the spatial channel characteristics (power angular spectrum) of the chamber and the generation of arbitrary fading statistics inside the chamber. Lastly, simulation results are compared with practical hardware measurements to highlight the accuracy of the simulation model for the chamber. Overall, this work provides a comprehensive analysis for optimization of different ROTAC performance metrics.
50

Optimization Techniques for Image Processing

Chapagain, Prerak 01 April 2019 (has links)
This research thesis starts off with a basic introduction to optimization and image processing. Because there are several different tools for applying optimization to image processing applications, we began by researching one category of mathematical optimization techniques, namely Convex Optimization. This thesis provides the necessary mathematical background, as well as some challenges of employing Convex Optimization in solving problems; one major issue is identifying the convexity of the problem in a potential application (Boyd). After spending a couple of months researching and learning Convex Optimization, my advisor and I decided to take a different route: we chose Heuristic Optimization techniques instead, and in particular, Genetic Algorithms (GA). We also conjectured that applying GA to image processing for the purpose of object matching could yield good results. As a first step, we used MATLAB as the programming language and wrote the GA code from scratch. Next, we applied the GA to object matching; more specifically, we constructed specific images to demonstrate the effectiveness of the algorithm in identifying objects of interest. The results presented in this thesis indicate that the technique is capable of identifying objects under noise conditions.
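The GA loop described above (selection, crossover, mutation) can be sketched compactly; this is an assumed toy illustration, not the thesis' MATLAB code, evolving bit strings toward a target pattern, with the bit-match count standing in for an image/template similarity score.

```python
import random

def genetic_match(target, pop_size=60, generations=200, mut_rate=0.02, seed=1):
    """Tiny genetic algorithm: evolve random bit strings toward `target`.
    Fitness = number of matching bits. Truncation selection keeps the top
    half unchanged (elitism), so the best fitness never decreases."""
    rng = random.Random(seed)
    n = len(target)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:  # perfect match found
            break
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, n)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            # bit-flip mutation (bool XOR works because bits are 0/1 ints)
            child = [b ^ (rng.random() < mut_rate) for b in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

In the object-matching setting the chromosome would instead encode candidate positions/transformations of a template, and the fitness would be a similarity measure between the template and the image patch, but the evolutionary loop is the same.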
