31

Decentralized probabilistic density control of swarm of autonomous agents with conflict avoidance constraints

Demir, Nazlı 01 October 2014 (has links)
This report describes a method to control the density distribution of a large number of autonomous agents. The approach exploits the fact that, with a large number of agents in the system, the time evolution of the probabilistic density distribution of agents can be described as a Markov chain. The main contribution of this work is the synthesis of a Markov matrix that guides the multi-agent system density to a desired steady-state density distribution, in a probabilistic sense, while satisfying motion and safety constraints. An adaptive density control method based on real-time density feedback is also introduced to synthesize a time-varying Markov matrix, which leads to better convergence to the desired density distribution. Finally, a decentralized density computation method is described; it guarantees that all agents reach a common best density estimate in a finite number of communication updates, with an explicit bound.
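As a rough illustration of the Markov-chain density-control idea above, the sketch below builds a row-stochastic transition matrix for a small ring of bins using a Metropolis-Hastings construction and propagates an initial density toward the desired steady state. The construction, the bin graph and the target density are illustrative assumptions on my part; the thesis synthesizes the matrix under richer motion and safety constraints.

```python
import numpy as np

def metropolis_hastings_matrix(adj, pi):
    """Row-stochastic Markov matrix M with stationary distribution pi, moving
    probability only along edges of the motion-constraint graph `adj`.
    (Metropolis-Hastings construction, a simple stand-in for the synthesis
    described in the abstract.)"""
    n = len(pi)
    deg = adj.sum(axis=1)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                prop = 1.0 / deg[i]                                  # uniform proposal over neighbours
                acc = min(1.0, (pi[j] * deg[i]) / (pi[i] * deg[j]))  # acceptance ratio
                M[i, j] = prop * acc
        M[i, i] = 1.0 - M[i].sum()                                   # remaining mass stays put
    return M

# 1-D ring of 5 bins and a desired steady-state density (illustrative values)
adj = np.eye(5, k=1) + np.eye(5, k=-1)
adj[0, -1] = adj[-1, 0] = 1
pi = np.array([0.4, 0.3, 0.15, 0.1, 0.05])

M = metropolis_hastings_matrix(adj.astype(bool), pi)
x = np.full(5, 0.2)                  # initial uniform density
for _ in range(200):
    x = x @ M                        # density propagation x_{k+1} = x_k M
print(np.round(x, 3))                # approaches pi
```

Running the loop drives the density vector toward pi, which is the behaviour the synthesized Markov matrix in the abstract is designed to guarantee under its additional constraints.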
32

Parallel magnetic resonance imaging reconstruction problems using wavelet representations / Problèmes de reconstruction en imagerie par résonance magnétique parallèle à l'aide de représentations en ondelettes

Chaari, Lotfi 05 November 2010 (has links)
To reduce scanning time or improve spatio-temporal resolution in some MRI applications, parallel MRI acquisition techniques using multiple receiver coils have emerged since the early 1990s as powerful methods. In these techniques, MRI images have to be reconstructed from acquired undersampled k-space data. To this end, several reconstruction techniques have been proposed, such as the widely used SENSitivity Encoding (SENSE) method. However, the reconstructed images generally present artifacts due to the noise corrupting the observed data and to errors in the estimation of coil sensitivity profiles. In this work, we present novel SENSE-based reconstruction methods that introduce regularization in the complex wavelet domain so as to promote the sparsity of the solution. These methods achieve accurate image reconstruction under degraded experimental conditions in which neither the SENSE method nor standard regularized methods (e.g. Tikhonov) give convincing results. The proposed approaches rely on fast parallel optimization algorithms that handle convex but non-differentiable criteria involving suitable sparsity-promoting priors. Moreover, in contrast with most available reconstruction methods, which proceed slice by slice, one of the proposed methods allows 4D (3D + time) reconstruction exploiting spatial and temporal correlations. The hyperparameter estimation problem inherent to the regularization process has also been addressed from a Bayesian viewpoint using MCMC techniques. Experiments on real anatomical and functional data show that the proposed methods reduce reconstruction artifacts and improve statistical sensitivity/specificity in functional MRI.
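To make the sparsity-regularized reconstruction idea concrete, here is a minimal sketch of iterative soft-thresholding (ISTA) for an l1-regularized least-squares recovery. The random measurement matrix, noise level and identity sparsifying transform are illustrative assumptions; the thesis works with multi-coil k-space operators, real wavelet frames and parallel proximal algorithms.

```python
import numpy as np

def ista(A, y, lam, n_iter=300):
    """Iterative soft-thresholding for  min_x 0.5*||A x - y||^2 + lam*||x||_1.
    A toy stand-in for the SENSE + wavelet regularization in the abstract
    (single 'coil', identity sparsifying transform)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L                       # gradient step on the data-fidelity term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold (prox of l1)
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 64, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)   # undersampled measurement operator
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = ista(A, y, lam=0.02)
print("support mismatch:", np.sum((np.abs(x_hat) > 1e-3) != (np.abs(x_true) > 1e-6)))
```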
33

Dynamical system decomposition and analysis using convex optimization

Anderson, James David January 2012 (has links)
This thesis is concerned with investigating new methods for the analysis of large-scale dynamical systems using convex optimization. The proposed methodology is based on composite Lyapunov theory and is computationally implemented using polynomial programming techniques. The main result of this work is the development of a system decomposition framework that makes it possible to analyze systems at a scale that traditional methods cannot cope with. We begin by addressing the problem of model invalidation. A barrier certificate method for invalidating models in the presence of uncertain data is presented for both continuous- and discrete-time models. It is shown how a re-parameterization of the time-dependent variables can improve the numerical conditioning of the underlying optimization problem. The main contribution of this thesis is the development of an automated dynamical system decomposition framework that permits us to verify the stability of systems whose state dimension is typically large enough to render traditional computational methods intractable. The underlying idea is to decompose a system into a set of lower-order subsystems connected in feedback, in such a manner that composite methods for stability verification may be employed. What is unique about the algorithm presented is that it takes into account both the dynamics and the topology of the interconnection graph. In the first instance we illustrate the methodology with an ecological network and a primal Internet congestion control scheme. The versatility of the decomposition framework is also highlighted by showing that, when applied to a model of the EGF-MAPK signaling pathway, it identifies biologically relevant subsystems in addition to verifying stability. Finally, we introduce stability metrics for interconnected dynamical systems based on the theory of dissipativity. We conclude by outlining a clustering-based decomposition algorithm that explicitly takes into account the input and output dynamics when determining the system decomposition.
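A toy version of the composite-Lyapunov idea can be sketched numerically: certify each subsystem with its own quadratic Lyapunov function and check that the block-diagonal composite still certifies the interconnection. The linear subsystems and the weak coupling strength below are invented for illustration; the thesis handles polynomial systems via sum-of-squares programming and an automated decomposition.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Two stable subsystems coupled in feedback (illustrative data)
A1 = np.array([[-2.0, 1.0], [0.0, -3.0]])
A2 = np.array([[-1.0, 0.5], [-0.5, -1.0]])
B12 = 0.1 * np.ones((2, 2))      # weak interconnection terms
B21 = 0.1 * np.ones((2, 2))
A = np.block([[A1, B12], [B21, A2]])

# Subsystem Lyapunov certificates: Ai^T Pi + Pi Ai = -I
P1 = solve_continuous_lyapunov(A1.T, -np.eye(2))
P2 = solve_continuous_lyapunov(A2.T, -np.eye(2))
P = np.block([[P1, np.zeros((2, 2))], [np.zeros((2, 2)), P2]])

# Composite (block-diagonal) candidate V(x) = x^T P x for the interconnection
Q = A.T @ P + P @ A
print("P positive definite:", np.all(np.linalg.eigvalsh(P) > 0))
print("dV/dt negative definite:", np.all(np.linalg.eigvalsh(Q) < 0))
```

If the coupling were stronger, the block-diagonal certificate could fail even though the interconnection is stable, which is the kind of conservatism the thesis's decomposition algorithm is designed to manage.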
34

Computational and Statistical Advances in Testing and Learning

Ramdas, Aaditya Kumar 01 July 2015 (has links)
This thesis makes fundamental computational and statistical advances in testing and estimation, making critical progress in the theory and application of classical statistical methods like classification, regression and hypothesis testing, and in understanding the relationships between them. Our work connects multiple fields in often counter-intuitive and surprising ways, leading to new theory, new algorithms, and new insights, and ultimately to a cross-fertilization of varied fields like optimization, statistics and machine learning. The first of three thrusts has to do with active learning, a form of sequential learning from feedback-driven queries that often has a provable statistical advantage over passive learning. We unify concepts from two seemingly different areas: active learning and stochastic first-order optimization. We use this unified view to develop new lower bounds for stochastic optimization using tools from active learning, and new algorithms for active learning using ideas from optimization. We also study the effect of feature noise, or errors-in-variables, on the ability to actively learn. The second thrust deals with the development and analysis of new convex optimization algorithms for classification and regression problems. We provide geometrical and convex-analytical insights into the role of the margin in margin-based classification, and develop new greedy primal-dual algorithms for non-linear classification. We also develop a unified proof for convergence rates of randomized algorithms for the ordinary least squares and ridge regression problems in a variety of settings, with the purpose of investigating which algorithm should be used in which setting. Lastly, we develop fast, state-of-the-art, numerically stable algorithms for an important univariate regression problem called trend filtering, with a wide variety of practical extensions. The last thrust involves a series of practical and theoretical advances in nonparametric hypothesis testing. We show that a smoothed Wasserstein distance allows us to connect many vast families of univariate and multivariate two-sample tests. We clearly demonstrate the decreasing power of kernel-based and distance-based two-sample tests and independence tests with increasing dimensionality, challenging existing folklore that they work well in high dimensions. Surprisingly, we show that these tests are automatically adaptive to simple alternatives and achieve the same power as other direct tests for detecting mean differences. We discover a computation-statistics tradeoff, where computationally more expensive two-sample tests have a provable statistical advantage over cheaper tests. We also demonstrate the practical advantage of using Stein shrinkage for kernel independence testing at small sample sizes. Lastly, we develop a novel algorithmic scheme for performing sequential multivariate nonparametric hypothesis testing using the martingale law of the iterated logarithm to near-optimally control both type-1 and type-2 errors. One perspective connecting everything in this thesis involves the closely related and fundamental problems of linear regression and classification. Every contribution in this thesis, from active learning to optimization algorithms, to the role of the margin, to nonparametric testing, fits in this picture. An underlying theme that repeats itself in this thesis is the computational and/or statistical advantage of sequential schemes with feedback. This arises in our work through comparing active with passive learning, through iterative algorithms for solving linear systems instead of direct matrix inversions, and through comparing the power of sequential and batch hypothesis tests.
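As a small, concrete example of the kernel two-sample tests discussed in the last thrust, the sketch below computes a Gaussian-kernel MMD statistic with a permutation null. The bandwidth heuristic, sample sizes and mean-shift alternative are illustrative choices, not the thesis's experimental setup.

```python
import numpy as np
from scipy.spatial.distance import pdist

def mmd2(X, Y, bandwidth):
    """Biased squared MMD with a Gaussian kernel, the kind of kernel two-sample
    statistic whose high-dimensional power is analysed in the abstract."""
    Z = np.vstack([X, Y])
    sq = np.sum(Z**2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * Z @ Z.T) / (2 * bandwidth**2))
    n = len(X)
    Kxx, Kyy, Kxy = K[:n, :n], K[n:, n:], K[:n, n:]
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

def mmd_permutation_test(X, Y, n_perm=500, seed=0):
    """Permutation p-value for the MMD statistic (illustrative settings only)."""
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    bw = np.median(pdist(Z))                 # median-distance bandwidth heuristic
    stat = mmd2(X, Y, bw)
    n = len(X)
    null = []
    for _ in range(n_perm):
        perm = rng.permutation(len(Z))
        null.append(mmd2(Z[perm[:n]], Z[perm[n:]], bw))
    return float(np.mean(np.array(null) >= stat))

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
Y = rng.standard_normal((100, 5)) + 0.5      # mean-shifted alternative
print("permutation p-value:", mmd_permutation_test(X, Y))
```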
35

Convex optimization for cosegmentation / Optimisation convexe pour la cosegmentation

Joulin, Armand 17 December 2012 (has links)
People and most animals have a natural ability to see the world and understand it effortlessly. The apparent simplicity of this task suggests that the ability is, to some extent, mechanical, i.e., it does not require high-level thinking or profound reasoning. This observation suggests that this visual perception of the world should be reproducible on a mechanical device such as a computer. Computer vision is the field of research dedicated to creating a form of visual perception on computers. The first work on computer vision dates from the 1950s, but the computing power needed to process and analyze visual data was not available at that time. It is only recently that improvements in computing power and storage capacity have permitted this field to really emerge. On the one hand, constant progress in computer vision has allowed the development of dedicated solutions to practical or industrial problems: detecting human faces, tracking people in crowded areas, or spotting defects in production chains are industrial applications where computer vision is used. On the other hand, when it comes to creating a general visual perception for computers, it is probably fair to say that less progress has been made, and the community is still struggling with fundamental problems. One of these problems is to reproduce our ability to group the visual input recorded by an optical device into meaningful regions. This procedure, called segmentation, separates a scene into meaningful entities (e.g., objects or actions). Segmentation seems not only natural but essential for people to fully understand a given scene, yet it is still very challenging for a computer. One reason is the difficulty of clearly identifying what "meaningful" should be: depending on the scene or the situation, a region may have different interpretations. In this thesis, we focus on the segmentation task and try to avoid this fundamental difficulty by considering segmentation as a weakly supervised learning problem. Instead of segmenting images according to some predefined definition of "meaningful" regions, we develop methods to segment multiple images jointly into entities that repeatedly appear across the set of images. In other words, we define "meaningful" regions from a statistical point of view: they are regions that appear frequently in a dataset, and we design procedures to discover them. This leads us to design models whose scope goes beyond this application to vision. Our approach takes its roots in the field of machine learning, whose goal is to design efficient methods to retrieve and/or learn common patterns in data. The field of machine learning has gained in popularity in the last decades due to the improvement in computing power and the ever-growing size of available databases. In this thesis, we focus on methods tailored to retrieving hidden information from poorly annotated data, i.e., with incomplete or partial annotations. In particular, given a specific segmentation task defined by a set of images, we aim at segmenting the images and at learning a related model so as to segment unannotated images. Finally, our research leads us to explore the field of numerical optimization so as to design algorithms especially tailored to our problems. In particular, many numerical problems considered in this thesis cannot be solved by off-the-shelf software because of the complexity of their formulation. We use and adapt recently developed tools to relax complex combinatorial problems into convex problems for which the optimal solution can be found with guarantees. We illustrate the promise of our formulations and algorithms on general applications in different fields besides computer vision. In particular, we show that our work may also be used in text classification and in the discovery of cell configurations.
36

Factor analysis of dynamic PET images

Cruz Cavalcanti, Yanna 31 October 2018 (has links) (PDF)
Thanks to its ability to evaluate metabolic functions in tissues from the temporal evolution of a previously injected radiotracer, dynamic positron emission tomography (PET) has become a ubiquitous analysis tool to quantify biological processes. Several quantification techniques from the PET imaging literature require a prior estimation of global time-activity curves (TACs), herein called factors, representing the concentration of tracer in a reference tissue or blood over time. To this end, factor analysis has often appeared as an unsupervised learning solution for the extraction of factors and their respective fractions in each voxel. Inspired by the hyperspectral unmixing literature, this manuscript addresses two main drawbacks of general factor analysis techniques applied to dynamic PET. The first is the assumption that the elementary response of each tissue to tracer distribution is spatially homogeneous. Even though this homogeneity assumption has proven its effectiveness in several factor analysis studies, it may not always provide a sufficient description of the underlying data, in particular when abnormalities are present. To tackle this limitation, the models proposed here introduce an additional degree of freedom in the factors related to specific binding. To this end, a spatially variant perturbation affects a nominal and common TAC representative of the high-uptake tissue. This variation is spatially indexed and constrained with a dictionary that is either previously learned or explicitly modelled with convolutional nonlinearities affecting non-specific binding tissues. The second drawback is related to the noise distribution in PET images. Even though the positron decay process can be described by a Poisson distribution, the actual noise in reconstructed PET images is not expected to be simply described by Poisson or Gaussian distributions. Therefore, we propose to consider a popular and quite general loss function, the β-divergence, that is able to generalize conventional loss functions such as the least-squares distance and the Kullback-Leibler and Itakura-Saito divergences, respectively corresponding to Gaussian, Poisson and Gamma distributions. This loss function is applied to three factor analysis models in order to evaluate its impact on dynamic PET images with different reconstruction characteristics.
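The β-divergence factor-analysis idea can be illustrated with standard multiplicative-update nonnegative matrix factorization on a synthetic voxels-by-time matrix, as below. The synthetic TACs, mixing proportions and noise are assumptions; the thesis's models additionally include the spatially variant specific-binding perturbation.

```python
import numpy as np

def beta_nmf(V, k, beta=1.0, n_iter=200, seed=0):
    """Factor analysis of a (voxels x time) matrix V ~ A @ F by multiplicative
    updates minimising the beta-divergence (beta=2: least squares, beta=1: KL,
    beta=0: Itakura-Saito). A rough sketch of the beta-divergence idea in the
    abstract, not the thesis's spatially-variant model."""
    rng = np.random.default_rng(seed)
    A = rng.random((V.shape[0], k)) + 1e-3     # factor proportions per voxel
    F = rng.random((k, V.shape[1])) + 1e-3     # factors = time-activity curves
    for _ in range(n_iter):
        V_hat = A @ F
        F *= (A.T @ (V * V_hat**(beta - 2))) / (A.T @ V_hat**(beta - 1))
        V_hat = A @ F
        A *= ((V * V_hat**(beta - 2)) @ F.T) / (V_hat**(beta - 1) @ F.T)
    return A, F

# Synthetic dynamic "PET": 3 ground-truth TACs mixed over 500 voxels and 20 frames
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 20)
F_true = np.vstack([np.exp(-t), 5 * t * np.exp(-2 * t), 1 - np.exp(-3 * t)])
A_true = rng.dirichlet(np.ones(3), size=500)
V = A_true @ F_true + 0.01 * rng.random((500, 20))
A_est, F_est = beta_nmf(V, k=3, beta=1.0)
print(V.shape, A_est.shape, F_est.shape)
```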
37

Novos métodos incrementais para otimização convexa não-diferenciável em dois níveis com aplicações em reconstrução de imagens em tomografia por emissão / New incremental methods for bilevel nondifferentiable convex optimization with applications to image reconstruction in emission tomography

Simões, Lucas Eduardo Azevedo 28 March 2013 (has links)
Apresentamos dois novos métodos para a solução de problemas de otimização convexa em dois níveis não necessariamente diferenciáveis, i.e., mostramos que as sequências geradas por ambos os métodos convergem para o conjunto ótimo de uma função não suave sujeito a um conjunto que também envolve a minimização de uma função não diferenciável. Ambos os algoritmos dispensam qualquer tipo de resolução de subproblemas ou busca linear durante suas iterações. Ao final, para demonstrar que os métodos são viáveis, resolvemos um problema de reconstrução de imagens tomográficas / We present two new methods for solving bilevel convex optimization problems, where both functions are not necessarily differentiable, i.e., we show that the sequences generated by those methods converge to the optimal set of a nonsmooth function subject to a set that also involves a function minimization. Both algorithms do not require any kind of subproblems resolution or linear search during the iterations. At the end, to prove that our methods are viable, we solve a problem of tomographic image reconstruction
38

Arquitetura de controle de movimento para um robô móvel sobre rodas visando otimização energética. / Motion control architecture for a wheeled mobile robot aiming at energy optimization.

Serralheiro, Werther Alexandre de Oliveira 05 March 2018 (has links)
Este trabalho apresenta uma arquitetura de controle de movimento entre duas posturas distintas para um robô móvel sob rodas com acionamento diferencial em um ambiente estruturado e livre de obstáculos. O conceito clássico de eficiência foi utilizado para a definição das estratégias de controle: um robô se movimenta de forma eficiente quando realiza a tarefa determinada no menor tempo e utilizando menor quantidade energética. A arquitetura proposta é um recorte do modelo de Controle Hierárquico Aninhado (NHC), composto por três níveis de abstração: (i) Planejamento de Caminho, (ii) Planejamento de Trajetória e (iii) Rastreamento de Trajetória. O Planejamento de Caminho proposto suaviza uma geodésica Dubins - o caminho mais eficiente - por uma Spline Grampeada para que este caminho seja definido por uma curva duplamente diferenciável. Uma transformação do espaço de configuração do robô é realizada. O Planejamento de Trajetória é um problema de otimização convexa na forma de Programação Cônica de Segunda Ordem, cujo objetivo é uma função ponderada entre tempo e energia. Como o tempo de percurso e a energia total consumida pelo robô possui uma relação hiperbólica, um algoritmo de sintonia do coeficiente de ponderação entre estas grandezas é proposta. Por fim, um Rastreador de Trajetória de dupla malha baseado em linearização entrada-saída e controle PID é proposto, e obteve resultados satisfatórios no rastreamento do caminho pelo robô. / This work presents a motion control architecture between two different positions for a differential driven wheeled mobile robot in a obstacles free structured environment. The classic concept of efficiency was used to define the control strategies: a robot moves efficiently when it accomplishes the determined task in the shortest time and using less amount of energy. The proposed architecture is a clipping of the Nested Hierarchical Controller (NHC) model, composed of three levels of abstraction: (i) Path Planning, (ii) Trajectory Planning and (iii) Trajectory Tracking. The proposed Path Planning smoothes a geodesic Dubins - the most efficient path - by a Clamped Spline as this path is defined by a twice differentiable curve. A transformation of the robot configuration space is performed. The Trajectory Planning is a convex optimization problem in the form of Second Order Cone Programming, whose objective is a weighted function between time and energy. As the travel time and the total energy consumed by the robot has a hyperbolic relation, a tuning algorithm to the weighting is proposed. Finnaly, a dual-loop Trajectory Tracker based on input-output feedback linearization and PID control is proposed, which obtained satisfactory results in tracking the path by the robot.
39

Hosting Capacity for Renewable Generations in Distribution Grids

January 2018 (has links)
Nowadays, the widespread introduction of distributed generators (DGs) brings great challenges to the design, planning, and reliable operation of the power system. Assessing the capability of a distribution network to accommodate renewable power generation is therefore urgent and necessary. In this respect, the concept of hosting capacity (HC) is generally accepted by engineers to evaluate the reliability and sustainability of a system with high penetration of DGs. For HC calculation, existing research provides simulation-based methods, which are not able to find the global optimum, and OPF (optimal power flow) based methods, in which the large number of constraints prevents an exact solution and which likewise cannot guarantee a globally optimal solution. To overcome these shortcomings, I propose a new methodology. First, I start with an optimization problem formulation and provide a flexible objective function to satisfy different requirements. The power flow equations are the basic constraints, and I transform them from the commonly used polar coordinates to rectangular coordinates. Several constraints arising from operational criteria are then added incrementally. I aim to preserve convexity as much as possible so that the optimal solution can be obtained. Second, I provide a geometric view of the convex problem model, so that the process of finding the global optimum can be visualized clearly. I then implement a segmented optimization tool to speed up the computation: a large network can be divided into segments and computed in parallel, with identical results. Finally, the robustness of the methodology is demonstrated through extensive simulations on IEEE distribution networks (e.g. 8-bus, 16-bus, 32-bus, 64-bus, 128-bus). The proposed method is thus verified to calculate the hosting capacity accurately and to obtain the globally optimal solution. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2018
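A drastically simplified, linear version of a hosting-capacity calculation is sketched below: maximize total DG injection subject to voltage limits under a LinDistFlow-style sensitivity model on a 4-bus feeder. The feeder data, voltage limits and per-bus caps are assumptions; the thesis formulates the full power-flow equations in rectangular coordinates and handles convexity differently.

```python
import numpy as np
from scipy.optimize import linprog

# Toy hosting-capacity estimate on a 4-bus radial feeder. Squared bus voltages are
# modelled as v = v0 + S @ p, where S is a linear sensitivity of voltage to DG
# injections p (LinDistFlow-style approximation), standing in for the full OPF model.
n = 4
r = np.array([0.02, 0.02, 0.03, 0.03])           # per-segment resistances (p.u.)
S = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        # sensitivity = twice the resistance shared on the upstream path
        S[i, j] = 2.0 * r[: min(i, j) + 1].sum()
v0 = np.full(n, 1.0)                             # squared voltages with no DG
v_max = 1.05**2                                  # upper voltage limit (squared)

# maximise sum(p)  <=>  minimise -sum(p)  s.t.  v0 + S p <= v_max,  0 <= p <= 0.5
res = linprog(c=-np.ones(n), A_ub=S, b_ub=v_max - v0, bounds=[(0.0, 0.5)] * n)
print("hosting capacity (total p.u. injection):", -res.fun)
print("per-bus DG sizes:", np.round(res.x, 3))
```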
40

Optimization Methods for a Reconfigurable OTA Chamber

Arnold, Matthew David 01 April 2018 (has links)
Multiple-input multiple-output (MIMO) technology has enabled increased performance of wireless communication devices. The increased complexity associated with MIMO devices requires more realistic testing environments to ensure device performance. This testing can be accomplished with very accurate but expensive anechoic chambers, with less accurate but inexpensive mode-stirred chambers, or with the newly introduced reconfigurable over-the-air chamber (ROTAC), which combines the benefits of anechoic and reverberation chambers. This work focuses on efficient optimization methods to quantify the performance of the ROTAC. First, an efficient optimization technique that combines convex optimization and a simple gradient descent algorithm is developed that can be applied to different ROTAC performance metrics. Plane wave synthesis is used to benchmark performance versus chamber complexity, where complexity is defined in terms of chamber size and the number of ports in the chamber. Next, the optimization technique is used to study the spatial channel characteristics (power angular spectrum) of the chamber and the generation of arbitrary fading statistics inside the chamber. Lastly, simulation results are compared with practical hardware measurements to highlight the accuracy of the simulation model for the chamber. Overall, this work provides a comprehensive analysis for optimization of different ROTAC performance metrics.
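A minimal sketch of the plane-wave-synthesis benchmark is a complex least-squares fit of port weights to a desired incident field, as below. The random transfer matrix, geometry and wavenumber are placeholders for a real chamber model; the thesis couples convex solves of this kind with gradient-descent refinement over other chamber parameters.

```python
import numpy as np

# Plane-wave synthesis sketch: choose complex excitation weights w for the chamber
# ports so that the field they produce at sample points approximates a desired
# incident plane wave. H below is a random stand-in for a simulated port-to-field
# transfer matrix of the chamber.
rng = np.random.default_rng(0)
n_points, n_ports = 64, 12
k = 2 * np.pi                                        # wavenumber (unit wavelength)
pts = rng.uniform(-0.5, 0.5, size=(n_points, 2))     # field sample points (in wavelengths)

H = (rng.standard_normal((n_points, n_ports))
     + 1j * rng.standard_normal((n_points, n_ports)))  # hypothetical port responses

theta = np.deg2rad(30.0)                             # desired plane-wave direction
d = np.array([np.cos(theta), np.sin(theta)])
e_des = np.exp(-1j * k * pts @ d)                    # desired field samples

w, *_ = np.linalg.lstsq(H, e_des, rcond=None)        # convex least-squares synthesis
err = np.linalg.norm(H @ w - e_des) / np.linalg.norm(e_des)
print("relative synthesis error:", round(float(err), 3))
```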
