391
Arqueologia do Noroeste Mineiro: análise de indústria lítica da bacia do Rio Preto - Unaí, Minas Gerais, Brasil / Archaeology of Minas Gerais Northwest: lithic industry analysis from the Rio Preto basin - Unaí, Minas Gerais, Brazil. Leandro Augusto Franco Xavier, 12 February 2008
O objetivo desta dissertação é apresentar a análise da indústria lítica de superfície do Sítio Corredor de Chumbo, da bacia do Rio Preto, situada na região de Unaí, Noroeste de Minas Gerais. Partindo das informações disponibilizadas pelas pesquisas do IAB (Instituto de Arqueologia Brasileira) nas décadas de 1970 e 1980, por meio do PRONAPA (Programa Nacional de Pesquisa Arqueológica) e do PROPEVALE (Programa de Pesquisas Arqueológicas do Vale do São Paulo), a pesquisa procurou responder a questões relativas aos sítios líticos de superfície, que ainda não eram bem conhecidos na região. O trabalho incluiu ainda as relações entre o meio físico, a paisagem e os aspectos arqueológicos relativos ao sítio estudado. A metodologia utilizada procurou dialogar entre tipologia e tecnologia dos instrumentos, além de formalizar uma Cadeia Operatória para a indústria lítica analisada. Os resultados indicam que o sítio se constitui em uma mina a céu aberto (Pellegrin, 1995), sendo identificada parte de seu tratamento in situ. Contudo, as partes mais avançadas da Cadeia Operatória estão presentes dentre os vestígios analisados, demonstrando que um sítio dado como de extração e tratamento também foi utilizado para a finalização de uma gama de instrumentos. Os tipos mais observados, que se destacam pela quantidade e pela excelência, são os artefatos plano-convexos, os raspadores sobre lascas (façonnage e debitagem) e os artefatos de ocasião - este último indicando um alto nível de reaproveitamento de matérias-primas marginais, enquanto as mesmas abundavam no sítio e suas imediações. / The objective of this dissertation is to present an analysis of the surface lithic industry of the Corredor de Chumbo site, in the Rio Preto basin, located in the Unaí region, Northwest of Minas Gerais state. Drawing on the information made available by the IAB (Brazilian Institute of Archaeology) research carried out in the 1970s and 1980s through PRONAPA and PROPEVALE, this study aimed to answer questions related to surface lithic sites, which had not yet been addressed in a systematic way in the region (Dias Jr & Carvalho, 1982). The work also covered the relations between the physical environment, the landscape and the archaeological aspects of the studied site. The methodology sought a dialogue between the typology and the technology of the instruments, besides formalizing an Operational Chain for the analyzed lithic industry. The results indicate that the site is an open-air mine (Pellegrin, 1995), part of whose reduction work was identified in situ. However, the most advanced stages of the Operational Chain are also present among the analyzed remains, demonstrating that a site classified as one of extraction and initial treatment was also used to finish a range of instruments. The most frequent types, outstanding in both quantity and quality, are the plano-convex artifacts, the scrapers on flakes (façonnage and débitage) and the expedient tools - the latter indicating a considerable level of reuse of marginal raw materials, even though raw material abounded in the site and its immediate surroundings.
392
Arquitetura de controle de movimento para um robô móvel sobre rodas visando otimização energética. / Motion control architecture for a wheeled mobile robot aiming at energy optimization. Werther Alexandre de Oliveira Serralheiro, 05 March 2018
Este trabalho apresenta uma arquitetura de controle de movimento entre duas posturas distintas para um robô móvel sobre rodas com acionamento diferencial em um ambiente estruturado e livre de obstáculos. O conceito clássico de eficiência foi utilizado para a definição das estratégias de controle: um robô se movimenta de forma eficiente quando realiza a tarefa determinada no menor tempo e utilizando a menor quantidade energética. A arquitetura proposta é um recorte do modelo de Controle Hierárquico Aninhado (NHC), composto por três níveis de abstração: (i) Planejamento de Caminho, (ii) Planejamento de Trajetória e (iii) Rastreamento de Trajetória. O Planejamento de Caminho proposto suaviza uma geodésica de Dubins - o caminho mais eficiente - por uma Spline Grampeada, para que este caminho seja definido por uma curva duplamente diferenciável. Uma transformação do espaço de configuração do robô é realizada. O Planejamento de Trajetória é um problema de otimização convexa na forma de Programação Cônica de Segunda Ordem, cujo objetivo é uma função ponderada entre tempo e energia. Como o tempo de percurso e a energia total consumida pelo robô possuem uma relação hiperbólica, um algoritmo de sintonia do coeficiente de ponderação entre estas grandezas é proposto. Por fim, um Rastreador de Trajetória de dupla malha baseado em linearização entrada-saída e controle PID é proposto, com resultados satisfatórios no rastreamento do caminho pelo robô. / This work presents a motion control architecture between two distinct postures for a differential-drive wheeled mobile robot in an obstacle-free structured environment. The classic concept of efficiency was used to define the control strategies: a robot moves efficiently when it accomplishes the assigned task in the shortest time and using the least amount of energy.
The proposed architecture is a reduced version of the Nested Hierarchical Controller (NHC) model, composed of three levels of abstraction: (i) Path Planning, (ii) Trajectory Planning and (iii) Trajectory Tracking. The proposed Path Planning smoothes a Dubins geodesic - the most efficient path - with a Clamped Spline, so that the path is defined by a twice-differentiable curve. A transformation of the robot configuration space is performed. The Trajectory Planning is a convex optimization problem in the form of Second-Order Cone Programming, whose objective is a weighted function of time and energy. As the travel time and the total energy consumed by the robot have a hyperbolic relation, a tuning algorithm for the weighting coefficient between these quantities is proposed. Finally, a dual-loop Trajectory Tracker based on input-output feedback linearization and PID control is proposed, which obtained satisfactory results in tracking the path by the robot.
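The path-smoothing step described above can be sketched with standard tools: a cubic spline with clamped end conditions yields the twice-differentiable curve the trajectory planner requires. A minimal sketch, assuming SciPy is available; the waypoints and zero end-headings below are hypothetical placeholders, not data from the thesis.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Waypoints sampled along a Dubins-like path (hypothetical values).
s = np.linspace(0.0, 1.0, 6)                  # arc-length parameter
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])  # x(s)
y = np.array([0.0, 0.2, 1.0, 1.0, 0.2, 0.0])  # y(s)

# "Clamped" boundary conditions fix the first derivative at both ends,
# so the smoothed path leaves and enters the terminal postures with a
# prescribed heading; the resulting curve is twice differentiable.
cs_x = CubicSpline(s, x, bc_type='clamped')   # x'(0) = x'(1) = 0
cs_y = CubicSpline(s, y, bc_type='clamped')

# Curvature kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2) is now well
# defined everywhere, which is what the trajectory planner needs.
ss = np.linspace(0.0, 1.0, 200)
dx, dy = cs_x(ss, 1), cs_y(ss, 1)
ddx, ddy = cs_x(ss, 2), cs_y(ss, 2)
kappa = (dx * ddy - dy * ddx) / np.maximum((dx**2 + dy**2) ** 1.5, 1e-12)
```

The clamped boundary condition is what distinguishes this from a natural spline: it lets the planner impose the start and goal headings directly.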
393
Controle H2/H∞ via desigualdades matriciais lineares para atenuação de vibrações em estruturas flexíveis / Mixed H2/H∞ control through linear matrix inequalities for flexible structures vibration reduction. King, Diego Melo, 02 January 2005
Orientador: Alberto Luiz Serpa / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Resumo: Este trabalho tem como objetivo verificar a ação de uma formulação obtida para o controle misto H2/H∞ via realimentação de saída de uma estrutura modelada por elementos finitos. Para isso, também se verificaram as respectivas formulações via realimentação de saída dos controles H2 e H∞ isoladamente, para que os resultados obtidos em cada um deles pudessem validar o equacionamento descrito neste trabalho para o controle misto e servissem de parâmetro para balizar os seus resultados esperados. Os equacionamentos dos controladores seguiram a formulação da programação semidefinida, isto é, minimizam um objetivo linear sujeito a restrições em forma de LMI. Estas LMI representam as condições de estabilidade a que o sistema está sujeito e as características da interação entre o modelo e a ação do controlador. Uma vez definidas todas estas condições, o problema de minimização, que é convexo, foi implementado para ser resolvido através do Matlab. Os resultados obtidos pela formulação para o controle misto também foram comparados com resultados produzidos pelas formulações clássicas do próprio Matlab, para que este último também servisse como um aferidor dos resultados da formulação apresentada neste trabalho. O modelo da estrutura, definida para uma viga engastada em balanço, também foi calculado através do Matlab, discretizada nos elementos finitos apropriados. As determinações das matrizes de massa, de rigidez elástica e de amortecimento, esta determinada pelo modelo de amortecimento proporcional, fazem a ligação com as matrizes de estado que representam o sistema na formulação utilizada para os controladores. Os resultados do controle misto mostram que a formulação convexa do controle via realimentação de saída é viável. Os valores apresentados pelas normas H2 e H∞ minimizadas do sistema, e as respostas deste a uma entrada aleatória exógena, exibem a capacidade do controlador misto.
Mesmo para um controle não-colocado, as vibrações da estrutura são atenuadas junto com a minimização do sinal de controle, e os picos da resposta em frequência, vistos nas funções de resposta em frequência, são reduzidos. / Abstract: The goal of this work is to verify the action of a mixed H2/H∞ control formulation through output feedback for a structure modelled with finite elements. Therefore, the pure H2 and H∞ control formulations through output feedback are also verified, in order to validate the equations described in this work for the mixed control and to provide a reference for its expected results. These formulations are based on semi-definite programming, and consist of minimizing a linear objective subject to LMI constraints. These LMI represent the system stability conditions and the interaction characteristics between the model and the controller. Once all these conditions are settled, the minimization problem, which is convex, was solved through the Matlab software. The results obtained with the mixed control formulation were compared to the classical results obtained with Matlab, in order to validate the results of the formulation derived in this work. The structure model, defined for a cantilever beam, was also obtained through Matlab using the finite element technique. The mass, stiffness and damping matrices, the last one obtained with the proportional damping model, make the connection to the state-space matrices representing the system in the controller formulation. The results obtained for the mixed control show that the convex formulation for output feedback control is viable. The minimized H2 and H∞ norms of the system, and the responses of the controlled system to an exogenous random input, show the capability of the mixed controller.
Even for a noncollocated control problem, the structure vibrations are attenuated together with the minimization of the control signal, and the frequency response peaks, visible in the frequency response functions, are also reduced. / Mestrado / Mecânica dos Sólidos e Projeto Mecânico / Mestre em Engenharia Mecânica
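The LMI stability constraints mentioned in the abstract reduce, in their simplest form, to the Lyapunov inequality AᵀP + PA < 0 with P > 0. A minimal feasibility sketch (in Python with SciPy rather than Matlab, and with a hypothetical two-state system standing in for the finite element model): for a fixed A, a certificate P can be obtained directly from the Lyapunov equation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, eigvalsh

# Stability LMI in its simplest form: find P > 0 with A^T P + P A < 0.
# For a fixed A, feasibility can be checked through the Lyapunov
# equation A^T P + P A = -Q with any Q > 0.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])   # hypothetical damped 1-DOF structure in state space
Q = np.eye(2)

# solve_continuous_lyapunov(A.T, -Q) solves A^T P + P A = -Q for P.
P = solve_continuous_lyapunov(A.T, -Q)

# P > 0 certifies the stability LMI is feasible, i.e. A is Hurwitz.
assert np.all(eigvalsh(P) > 0)
assert np.all(eigvalsh(A.T @ P + P @ A) < 0)
```

The mixed H2/H∞ synthesis in the thesis adds further LMI blocks (norm bounds, controller variables) on top of this basic condition and minimizes a linear objective over them.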
394
Creating triangle strips from clustered point sets. Jenke, Peter, January 2010
To create a digital model of the surface of some object from a set of points, representing positions on the surface of this object, requires information about the relationship between the points. This information is not immediately accessible. Thus, for creating such a model it is necessary to establish relationships between the points of the set. In addition, it should be possible to render the resulting model as efficiently as possible. Modern graphics cards accept vertex information as triangle strips; by using triangle strips the information about the triangles can be compressed. This work is about a method for retrieving information about the relations between points in an unstructured spatial point set and transforming this information into triangle strips. It is based on the convex layers of a planar point set and an algorithm for triangulating the annuli of the convex layers, which uses the Rotating Calipers.
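The convex layers the method builds on can be computed by repeated hull peeling ("onion peeling"). A minimal sketch, assuming SciPy is available and ignoring degenerate (collinear) intermediate layers, which would need special handling:

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_layers(points):
    """Peel a planar point set into its convex layers: repeatedly take
    the convex hull and remove its vertices until too few points remain."""
    layers = []
    pts = np.asarray(points, dtype=float)
    while len(pts) >= 3:
        hull = ConvexHull(pts)
        layers.append(pts[hull.vertices])   # hull vertices in CCW order
        mask = np.ones(len(pts), dtype=bool)
        mask[hull.vertices] = False
        pts = pts[mask]                     # peel the layer off
    if len(pts) > 0:
        layers.append(pts)                  # one or two leftover points
    return layers

# Four corners plus one interior point -> two layers:
# the square, then the centre point.
pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2)]
layers = convex_layers(pts)
print(len(layers))   # -> 2
```

The annulus between two consecutive layers is what the triangulation step then turns into a triangle strip.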
395
Search path generation with UAV applications using approximate convex decomposition. Öst, Gustav, January 2012
This work focuses on the problem of area searching with UAVs, specifically on developing algorithms that generate flight paths that are short without sacrificing flyability. For instance, very sharp turns compromise flyability, since fixed-wing aircraft cannot make very sharp turns. This thesis provides an analysis of different types of search methods, area decompositions, and combinations thereof. The search methods used are side-to-side searching and spiral searching. In side-to-side searching the aircraft goes back and forth, making only 90-degree turns. Spiral searching covers the shape in a spiral pattern, starting on the outer perimeter and working its way in; the idea is that this should generate flight paths that are easy to fly, since all turns should have a large turn radius. Area decomposition is done to divide complex shapes into smaller, more manageable shapes. The report concludes that, with the implemented methods, the side-to-side scanning method without area decomposition yields good and above all very reliable results. The reliability stems from the fact that all turns are 90 degrees and that the algorithm never gets stuck or makes bad mistakes. Having only 90-degree turns results in only four different types of turns, which makes the airplane's behavior along the route predictable after flying the first four turns, assuming that the strength of the wind influences the aircraft's flight characteristics more than turbulence does. This is a very valuable feature for an operator in charge of a flight. The other tested methods and area decompositions often yield a shorter flight path; however, despite extensive adjustments to the algorithms, they never came to handle all cases in a satisfactory manner. These methods may also generate any kind of turn at any time, including turns of nearly 180 degrees.
Such turns can lead to an airplane missing the intended flight path and thus failing to scan the intended area properly. Area decomposition proves to be really effective only when the area has many protrusions that stick out in different directions - think of a starfish shape. In these cases the side-to-side algorithm generates a path that has long legs over parts that are not in the search area. When the area is decomposed, the algorithm starts with, for example, one arm of the starfish and then searches the rest of the arms and the body in turn.
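The side-to-side pattern described above is straightforward to express as a waypoint generator. A minimal sketch, assuming a rectangular search area aligned with the axes (the thesis handles general, possibly decomposed shapes):

```python
def side_to_side_path(width, height, spacing):
    """Generate a boustrophedon ('lawnmower') search path over a
    width x height rectangle: parallel legs joined by 90-degree turns,
    with consecutive legs flown in opposite directions."""
    waypoints = []
    y, going_right = 0.0, True
    while y <= height:
        xs = (0.0, width) if going_right else (width, 0.0)
        waypoints.append((xs[0], y))   # start of the leg
        waypoints.append((xs[1], y))   # end of the leg
        y += spacing                   # shift by the sensor footprint
        going_right = not going_right  # reverse direction for next leg
    return waypoints

path = side_to_side_path(100.0, 30.0, 10.0)
# 4 legs at y = 0, 10, 20, 30 -> 8 waypoints
print(len(path))   # -> 8
```

The spacing parameter would be chosen from the sensor footprint; turns between legs are exactly the four repeating 90-degree turn types the report relies on for predictability.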
396
Estimation robuste pour les systèmes incertains / Robust estimation for uncertain systems. Bayon, Benoît, 06 December 2012
Un système est dit robuste s'il est possible de garantir son bon comportement dynamique malgré les dispersions de ses caractéristiques lors de sa fabrication, les variations de l'environnement ou encore son vieillissement. Au-delà du fait que la dispersion des caractéristiques est inéluctable, une plus grande dispersion permet notamment de diminuer fortement les coûts de production. La prise en compte explicite de la robustesse par les ingénieurs est donc un enjeu crucial lors de la conception d'un système. Des propriétés robustes peuvent être garanties lors de la synthèse d'un correcteur en boucle fermée. Il est en revanche beaucoup plus difficile de garantir ces propriétés en boucle ouverte, ce qui concerne par exemple des cas comme la synthèse d'estimateur. Prendre en compte la robustesse lors de la synthèse est une problématique importante de la communauté du contrôle robuste. Un certain nombre d'outils ont été développés pour analyser la robustesse d'un système vis-à-vis d'un ensemble d'incertitudes (μ-analyse par exemple). Bien que le problème soit intrinsèquement complexe au sens algorithmique, des relaxations ont permis de formuler des conditions suffisantes pour tester la stabilité d'un système vis-à-vis d'un ensemble d'incertitudes. L'émergence de l'optimisation sous contrainte Inégalité Matricielle Linéaire (LMI) a permis de tester ces conditions suffisantes au moyen d'un algorithme efficace, c'est-à-dire convergeant vers une solution en un temps raisonnable grâce au développement des méthodes des points intérieurs. En se basant sur ces résultats d'analyse, le problème de synthèse de correcteurs en boucle fermée ne peut pas être formulé sous la forme d'un problème d'optimisation pour lequel un algorithme efficace existe. En revanche, pour certains cas comme la synthèse de filtres robustes, le problème de synthèse peut être formulé sous la forme d'un problème d'optimisation sous contrainte LMI pour lequel un algorithme efficace existe.
Ceci laisse entrevoir un certain potentiel de l'approche robuste pour la synthèse d'estimateurs. Exploitant ce fait, cette thèse propose une approche complète du problème de synthèse d'estimateurs robustes par l'intermédiaire des outils d'analyse de la commande robuste, en conservant le caractère efficace de la synthèse lié aux outils classiques. Cette approche passe par une ré-interprétation de l'estimation nominale (sans incertitude) par l'optimisation sous contrainte LMI, puis par une extension systématique des outils de synthèse et d'analyse développés pour l'estimation nominale à l'estimation robuste. Cette thèse présente des outils de synthèse d'estimateurs, mais également des outils d'analyse qui permettront de tester les performances robustes atteintes par les estimateurs. Les résultats présentés dans ce document sont exprimés sous la forme de théorèmes présentant des contraintes LMI. Ces théorèmes peuvent se mettre de façon systématique sous la forme d'un problème d'optimisation pour lequel un algorithme efficace existe. Pour finir, les problèmes de synthèse d'estimateurs robustes appartiennent à une classe plus générale de problèmes de synthèse robuste : les problèmes de synthèse robuste en boucle ouverte. Ces problèmes de synthèse ont un potentiel très intéressant. Des résultats de base sont formulés pour la synthèse en boucle ouverte, permettant de proposer des méthodes de synthèse robustes dans des cas pour lesquels la mise en place d'une boucle de rétroaction est impossible. Une extension aux systèmes LPV avec une application à la commande de position sans capteur de position est également proposée. / A system is said to be robust if it is possible to guarantee its dynamic behaviour despite the dispersion of its characteristics during production, environmental changes or aging. Beyond the fact that some dispersion is inevitable, tolerating a greater dispersion notably reduces production costs.
Thus, explicitly taking robustness into account is a crucial issue when designing a system. Robust properties can be guaranteed when synthesizing a closed-loop controller, but this is much more difficult in open loop, which is the case of estimator synthesis, for instance. Robustness is a major concern of the robust control community. Many tools have been developed to analyse the robustness of a system with respect to a set of uncertainties (μ-analysis, for instance), and even if the problem is known to be computationally difficult, sufficient conditions provide results to test the robust stability of a system. Thanks to the development of interior-point methods, the emergence of optimization under Linear Matrix Inequality (LMI) constraints allows these results to be tested using an efficient algorithm. Based on these analysis results, the robust controller synthesis problem cannot be recast as a convex optimization problem involving LMIs. For some cases such as filter synthesis, however, the synthesis problem can be recast as a convex optimization problem, which suggests that robust control tools have real potential for estimator synthesis. Exploiting this fact, this thesis offers a complete approach to robust estimator synthesis, using robust control tools while keeping what made the nominal approaches successful: efficient computation tools. This approach goes through a reinterpretation of nominal estimation using LMI optimization, and then proposes a systematic extension of these tools to robust estimation. This thesis presents not only synthesis tools but also analysis tools, allowing the robust performance reached by the estimators to be tested. All the results are formulated as convex optimization problems involving LMIs. As a conclusion, robust estimator synthesis problems belong to a wider class of problems with great potential in many applications: robust open-loop synthesis problems. Basic results are formulated for open-loop synthesis, providing results for cases where feedback cannot be used.
An extension to LPV systems with an application to sensorless control is given.
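The nominal (uncertainty-free) estimation problem that the thesis reinterprets can be illustrated with a classical Luenberger observer, where the estimator gain is found by pole placement; robust synthesis replaces this step with LMI feasibility conditions that hold over the whole uncertainty set. The system matrices below are hypothetical:

```python
import numpy as np
from scipy.signal import place_poles

# Luenberger observer: xhat' = A xhat + B u + L (y - C xhat).
# The estimation error e = x - xhat obeys e' = (A - L C) e, so choosing
# L to place the eigenvalues of A - L C in the left half-plane makes
# the estimate converge for the nominal model.
A = np.array([[0.0, 1.0],
              [-1.0, -0.2]])
C = np.array([[1.0, 0.0]])

# By duality, design L^T as a state-feedback gain for the pair (A^T, C^T).
res = place_poles(A.T, C.T, [-3.0, -4.0])
L = res.gain_matrix.T

eigs = np.linalg.eigvals(A - L @ C)
print(sorted(e.real for e in eigs))   # both real parts are negative
```

When A itself varies over a polytope of uncertain models, no single pole placement is valid; that is precisely where the LMI-based synthesis of the thesis takes over.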
397
Multidimensional adaptive radio links for broadband communications. Codreanu, M. (Marian), 06 November 2007
Abstract
Advanced multiple-input multiple-output (MIMO) transceiver structures which utilize the knowledge of channel state information (CSI) at the transmitter side to optimize certain link parameters (e.g., throughput, fairness, spectral efficiency) under different constraints (e.g., maximum transmitted power, minimum quality of service (QoS)) are considered in this thesis.
Adaptive transmission schemes for point-to-point MIMO systems are considered first. A robust link adaptation method for time-division duplex systems employing MIMO-OFDM channel eigenmode based transmission is developed. A low complexity bit and power loading algorithm which requires low signaling overhead is proposed.
Two algorithms for computing the sum-capacity of MIMO downlink channels with full CSI knowledge are derived. The first one is based on the iterative waterfilling method. The convergence of the algorithm is proved analytically and the computer simulations show that the algorithm converges faster than the earlier variants of sum power constrained iterative waterfilling algorithms. The second algorithm is based on the dual decomposition method. By tracking the instantaneous error in the inner loop, a faster version is developed.
The problem of linear transceiver design in MIMO downlink channels is considered for a case when the full CSI of only the scheduled users is available at the transmitter. General methods for joint power control and linear transmit and receive beamformer design are provided. The proposed algorithms can handle multiple antennas at the base station and at the mobile terminals with an arbitrary number of data streams per scheduled user. The optimization criteria are fairly general and include sum power minimization under a minimum signal-to-interference-plus-noise ratio (SINR) constraint per data stream, the balancing of SINR values among data streams, minimum SINR maximization, weighted sum-rate maximization, and weighted sum mean square error minimization. Besides the traditional sum power constraint on the transmit beamformers, multiple sum power constraints can be imposed on arbitrary subsets of the transmit antennas. This extends the applicability of the results to novel system architectures, such as cooperative base station transmission using distributed MIMO antennas. By imposing per-antenna power constraints, issues related to the linearity of the power amplifiers can be handled as well.
The original linear transceiver design problems are decomposed as a series of remarkably simpler optimization problems which can be efficiently solved by using standard convex optimization techniques. The advantage of this approach is that it can be easily extended to accommodate various supplementary constraints such as upper and/or lower bounds for the SINR values and guaranteed QoS for different subsets of users. The ability to handle transceiver optimization problems where a network-centric objective (e.g., aggregate throughput or transmitted power) is optimized subject to user-centric constraints (e.g., minimum QoS requirements) is an important feature which must be supported by future broadband communication systems.
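The iterative waterfilling mentioned above builds on the classical single-user water-filling solution, which allocates power p_i = max(0, μ − 1/g_i) across channel gains g_i, with the water level μ set by the power budget. A minimal sketch with the water level found by bisection (the gains are illustrative, not values from the thesis):

```python
import numpy as np

def waterfill(gains, p_total, tol=1e-10):
    """Water-filling power allocation: maximize sum log(1 + g_i p_i)
    subject to sum p_i = p_total, p_i >= 0. The optimum is
    p_i = max(0, mu - 1/g_i), with the water level mu found by bisection
    on the monotone function mu -> sum_i max(0, mu - 1/g_i)."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + 1.0 / g.min()   # hi is always above the optimum
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / g).sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

p = waterfill([2.0, 1.0, 0.5], p_total=3.0)
# Stronger channels get more power; allocations sum to the budget.
print(round(p.sum(), 6))   # -> 3.0
```

The sum-capacity algorithms in the thesis iterate allocations of this kind across users (and add the dual-decomposition refinements described above).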
398
Measurement of three-dimensional coherent fluid structure in high Reynolds number turbulent boundary layers. Clark, Thomas Henry, January 2012
The turbulent boundary layer is an aspect of fluid flow which dominates the performance of many engineering systems - yet the analytic solution of such flows is intractable for most applications. Our understanding of boundary layers is therefore limited by our ability to simulate and measure them. Tomographic Particle Image Velocimetry (TPIV) is a recently developed technique for direct measurement of fluid velocity within a 3D region, which allows new insight into the topological structure of turbulent boundary layers. Increasing Reynolds number increases the range of scales at which turbulence exists; a measurement technique must have a larger 'dynamic range' to fully resolve the flow. Tomographic PIV is currently limited in spatial dynamic range (which is also linked to the spatial and temporal resolution) due to a high degree of noise, and results also contain significant bias error. This work proposes a modification of the technique to use more than two exposures in the PIV process, which (for four exposures) is shown to reduce random error by a factor of 2 to 7 depending on experimental setup parameters. The dynamic range increases correspondingly and can be doubled again in highly turbulent flows. Bias error is reduced by up to 40%. An alternative reconstruction approach is also presented, based on the application of a reduction strategy (elimination of coefficients based on a first guess) to the tomographic weighting matrix Wij. This facilitates a potentially significant increase in computational efficiency. Despite the achieved reduction in error, measurements contain non-zero divergence due to noise and sampling errors. The same problem affects visualisation of topology and coherent fluid structures. Using Projection Onto Convex Sets, a framework for post-processing operators is implemented which includes a divergence minimisation procedure and a scale-limited denoising strategy which is resilient to 'false' vectors contained in the data.
Finally, developed techniques are showcased by visualisation of topological information in the inner region of a high Reynolds Number boundary layer (δ+ = 1890, Reθ = 3650). Comments are made on the visible flow structures and tentative conclusions are drawn.
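The Projection Onto Convex Sets (POCS) framework used for post-processing rests on a simple principle: alternately projecting onto each constraint set converges to a point in their intersection. A toy 2D sketch with two convex sets (the thesis applies the same scheme, with a divergence-free constraint set, to measured velocity fields):

```python
import numpy as np

def project_ball(x, c, r):
    """Euclidean projection onto the ball ||x - c|| <= r."""
    d = x - c
    n = np.linalg.norm(d)
    return x if n <= r else c + r * d / n

def project_halfspace(x, a, b):
    """Euclidean projection onto the halfspace a.x <= b."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

# POCS: alternate projections onto the unit ball around the origin and
# the halfspace x + y <= 0; iterates converge into the intersection.
x = np.array([3.0, 4.0])
a, b = np.array([1.0, 1.0]), 0.0
for _ in range(100):
    x = project_halfspace(project_ball(x, np.zeros(2), 1.0), a, b)

assert np.linalg.norm(x) <= 1.0 + 1e-9 and a @ x <= b + 1e-9
```

In the measurement setting, one projector enforces (approximate) zero divergence and another bounds the distance to the raw data, so noise is removed without discarding the measurement.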
399
Numerical Modelling of van der Waals Fluids. Odeyemi, Tinuade A., January 2012
Many problems in fluid mechanics and material sciences deal with liquid-vapour flows. In these flows, the ideal gas assumption is not accurate and the van der Waals equation of state is usually used. This equation of state is non-convex and causes the solution domain to have two hyperbolic regions separated by an elliptic region. Therefore, the governing equations of these flows have a mixed elliptic-hyperbolic nature.
Numerical oscillations usually appear with standard finite-difference space discretization schemes, and they persist when the order of accuracy of the semi-discrete scheme is increased. In this study, we propose to use a Chebyshev pseudospectral method for solving the governing equations. A comparison of the results of this method with very high-order (up to tenth-order accurate) finite difference schemes is presented, which shows that the proposed method leads to a lower level of numerical oscillations than other high-order finite difference schemes, and also does not exhibit the fast-traveling packets of short waves which are usually observed in high-order finite difference methods. The proposed method can thus successfully capture various complex regimes of waves and phase transitions in both the elliptic and hyperbolic regimes.
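The mixed elliptic-hyperbolic character described above can be made concrete with the reduced van der Waals equation of state, p = 8t/(3v − 1) − 3/v²: wherever an isotherm satisfies dp/dv > 0, hyperbolicity is lost. A small numerical check (illustrative, not taken from the thesis):

```python
import numpy as np

# Reduced van der Waals equation of state: p = 8t/(3v - 1) - 3/v^2.
# Below the critical temperature (t < 1) an isotherm is non-monotonic
# in v: dp/dv changes sign, and the region with dp/dv > 0 is exactly
# the elliptic (unstable) zone between the two hyperbolic ones.
def p_vdw(v, t):
    return 8.0 * t / (3.0 * v - 1.0) - 3.0 / v**2

v = np.linspace(0.5, 4.0, 2000)
for t, label in [(0.9, "subcritical"), (1.1, "supercritical")]:
    dpdv = np.gradient(p_vdw(v, t), v)
    has_unstable = np.any(dpdv > 0)   # dp/dv > 0 <=> loss of hyperbolicity
    print(label, has_unstable)
# -> subcritical True, supercritical False
```

Above the critical temperature the isotherm is monotone and the equations stay hyperbolic everywhere; below it, any scheme must cross the elliptic zone, which is where the oscillations discussed above originate.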
400
Proximal methods for convex minimization of Phi-divergences: application to computer vision / Méthodes proximales convexes pour la minimisation des Phi-divergences : applications à la stéréo vision. El Gheche, Mireille, 27 May 2014
Cette thèse s'inscrit dans le contexte de l'optimisation convexe. Elle apporte à ce domaine deux contributions principales. La première porte sur les méthodes d'optimisation convexe non lisse appliquées à la vision par ordinateur. Quant à la seconde, elle fournit de nouveaux résultats théoriques concernant la manipulation de mesures de divergences, telles que celles utilisées en théorie de l'information et dans divers problèmes d'optimisation. Le principe de la stéréovision consiste à exploiter deux images d'une même scène prises sous deux points de vue, afin de retrouver les pixels homologues et de se ramener ainsi à un problème d'estimation d'un champ de disparité. Dans ce travail, le problème de l'estimation de la disparité est considéré en présence de variations d'illumination. Ceci se traduit par l'ajout, dans la fonction objective globale à minimiser, d'un facteur multiplicatif variant spatialement, estimé conjointement avec la disparité. Nous avons mis l'accent sur l'avantage de considérer plusieurs critères convexes et non-nécessairement différentiables, et d'exploiter des images multicomposantes (par exemple, des images couleurs) pour améliorer les performances de notre méthode. Le problème d'estimation posé est résolu en utilisant un algorithme parallèle proximal basé sur des développements récents en analyse convexe. Dans une seconde partie, nous avons étendu notre approche au cas multi-vues qui est un sujet de recherche relativement nouveau. Cette extension s'avère particulièrement utile dans le cadre d'applications où les zones d'occultation sont très larges et posent de nombreuses difficultés. Pour résoudre le problème d'optimisation associé, nous avons utilisé des algorithmes proximaux en suivant des approches multi-étiquettes relaxés de manière convexe. Les algorithmes employés présentent l'avantage de pouvoir gérer simultanément un grand nombre d'images et de contraintes, ainsi que des critères convexes et non convexes. 
Des résultats sur des images synthétiques ont permis de valider l'efficacité de ces méthodes, pour différentes mesures d'erreur. La dernière partie de cette thèse porte sur les problèmes d'optimisation convexe impliquant des mesures d'information (Phi-divergences), qui sont largement utilisés dans le codage source et le codage canal. Ces mesures peuvent être également employées avec succès dans des problèmes inverses rencontrés dans le traitement du signal et de l'image. Les problèmes d'optimisation associés sont souvent difficiles à résoudre en raison de leur grande taille. Dans ce travail, nous avons établi les expressions des opérateurs proximaux de ces divergences. En s'appuyant sur ces résultats, nous avons développé une approche proximale reposant sur l'usage de méthodes primales-duales. Ceci nous a permis de répondre à une large gamme de problèmes d'optimisation convexe dont la fonction objective comprend un terme qui s'exprime sous la forme de l'une de ces divergences / Convex optimization aims at searching for the minimum of a convex function over a convex set. While the theory of convex optimization has been largely explored for about a century, several related developments have stimulated a new interest in the topic. The first one is the emergence of efficient optimization algorithms, such as proximal methods, which allow one to easily solve large-size nonsmooth convex problems in a parallel manner. The second development is the discovery of the fact that convex optimization problems are more ubiquitous in practice than was thought previously. In this thesis, we address two different problems within the framework of convex optimization. The first one is an application to computer stereo vision, where the goal is to recover the depth information of a scene from a pair of images taken from the left and right positions. 
The second one is the proposition of new mathematical tools to deal with convex optimization problems involving information measures, where the objective is to minimize the divergence between two statistical objects such as random variables or probability distributions. We propose a convex approach to address the problem of dense disparity estimation under varying illumination conditions. A convex energy function is derived for jointly estimating the disparity and the illumination variation. The resulting problem is tackled in a set theoretic framework and solved using proximal tools. It is worth emphasizing the ability of this method to process multicomponent images under illumination variation. The conducted experiments indicate that this approach can effectively deal with local illumination changes and yields better results compared with existing methods. We then extend the previous approach to the problem of multi-view disparity estimation. Rather than estimating a single depth map, we estimate a sequence of disparity maps, one for each input image. We address this problem by adopting a discrete reformulation that can be efficiently solved through a convex relaxation. This approach offers the advantage of handling both convex and nonconvex similarity measures within the same framework. We have shown that the additional complexity required by the application of our method to the multi-view case is small with respect to the stereo case. Finally, we have proposed a novel approach to handle a broad class of statistical distances, called φ-divergences, within the framework of proximal algorithms. In particular, we have developed the expressions of the proximity operators of several φ-divergences, such as the Kullback-Leibler, Jeffreys-Kullback, Hellinger, Chi-Square, I_α, and Rényi divergences.
This allows proximal algorithms to deal with problems involving such divergences, thus overcoming the limitations of current state-of-the-art approaches to similar problems. The proposed approach is validated in two different contexts. The first is an application to image restoration that illustrates how to employ divergences as a regularization term, while the second is an application to image registration that employs divergences as a data fidelity term.
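The closed-form proximity operators derived in the thesis can be illustrated on the negative-entropy building block of the Kullback-Leibler divergence, whose prox involves the Lambert W function. A sketch of the scalar case only (the thesis covers the full family of φ-divergences and their bivariate forms):

```python
import numpy as np
from scipy.special import lambertw

def prox_neg_entropy(s, gamma):
    """Proximity operator of gamma * phi with phi(x) = x log x (x > 0),
    a building block of the Kullback-Leibler divergence:
        prox(s) = argmin_x  x log x + (1 / (2 gamma)) (x - s)^2.
    Setting the derivative to zero, log x + 1 + (x - s)/gamma = 0,
    gives the closed form x = gamma * W(exp(s/gamma - 1) / gamma),
    where W is the Lambert function."""
    s = np.asarray(s, dtype=float)
    return gamma * np.real(lambertw(np.exp(s / gamma - 1.0) / gamma))

x = prox_neg_entropy(np.array([0.5, 1.0, 2.0]), gamma=1.0)
# Optimality check: log x + 1 + (x - s) / gamma = 0 at the prox point.
print(np.allclose(np.log(x) + 1.0 + (x - np.array([0.5, 1.0, 2.0])), 0.0))
# -> True
```

Once such an operator is available in closed form, the divergence term can be plugged directly into primal-dual proximal splitting schemes alongside the other terms of the objective.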