
Méthodes variationnelles d'ensemble itératives pour l'assimilation de données non-linéaire : Application au transport et la chimie atmosphérique / Iterative ensemble variational methods for nonlinear data assimilation : Application to transport and atmospheric chemistry

Haussaire, Jean-Matthieu, 23 June 2017
Data assimilation methods are constantly evolving to adapt to the problems arising in their many application domains. In the atmospheric sciences, each new algorithm has first been implemented on numerical weather prediction models before being ported to atmospheric chemistry models; this was the case for 4D variational methods and ensemble Kalman filters, for instance. The new generation of four-dimensional ensemble variational (4D EnVar) algorithms is no exception. These methods were developed to take advantage of both the variational and ensemble approaches and are starting to be used in operational numerical weather prediction centers, but they have yet to be tested on operational atmospheric chemistry models. Validating new data assimilation methods on such models is difficult precisely because of their complexity. It is therefore necessary to have low-order models at hand that capture the key physical phenomena at work in operational models while avoiding some of their practical difficulties. Such a model, called L95-GRS, has been developed: it couples the simplistic meteorology of the Lorenz-95 model with a tropospheric ozone chemistry module of 7 chemical species. Despite its low dimension, it reproduces physical and chemical phenomena observable in real situations. A data assimilation method, the iterative ensemble Kalman smoother (IEnKS), has been applied to this model. It is an iterative 4D EnVar method that solves the full nonlinear variational problem. This application validates 4D EnVar methods in a nonlinear atmospheric chemistry context, but also reveals the first limitations of such methods.

Building on this experience, the results were extended to a realistic air quality prediction model. 4D EnVar methods, through the IEnKS, again showed their potential to account for the nonlinearity of the chemistry model in a controlled setting with synthetic observations. However, assimilating real tropospheric ozone observations tempers these results and illustrates how difficult data assimilation is in atmospheric chemistry: these models carry a very large error, stemming from varied sources of uncertainty. Two steps must be taken to address this issue. First, the data assimilation method must be able to account for model error efficiently, whereas most methods are developed under the assumption of a perfect model. To drop this assumption, a new method, called IEnKF-Q, was developed; it extends the IEnKS to the model-error framework. It was validated on a low-order model, demonstrating its superiority over data assimilation methods naively adapted to account for model error. Nevertheless, such a method requires knowing the exact nature and amplitude of the model error it must account for. The second step is therefore to use statistical tools to quantify this model error.

The expectation-maximization algorithm, the naive and unbiased randomize-then-optimize algorithms, an importance sampling scheme based on a Laplace proposal, and Markov chain Monte Carlo sampling, including a transdimensional variant, were assessed, extended, and compared to estimate the uncertainty in the reconstruction of the source terms of the Chernobyl and Fukushima-Daiichi nuclear power plant accidents. This thesis thus enriches the field of 4D EnVar data assimilation through its methodological contributions and by paving the way for applying these methods to atmospheric chemistry models.
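As background (not reproduced from the thesis itself), the Lorenz-95 model that provides the meteorological part of the L95-GRS toy system is the cyclic set of ordinary differential equations

    \frac{\mathrm{d}x_k}{\mathrm{d}t} = \left(x_{k+1} - x_{k-2}\right) x_{k-1} - x_k + F, \qquad k = 1,\dots,N, \qquad x_{k+N} = x_k,

where the x_k mimic a meteorological variable along a circle of latitude and the forcing F = 8 is the standard choice producing chaotic dynamics; the coupling to the 7-species ozone chemistry module is specific to the thesis and not shown here.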

[en] CONVENTIONAL, HYBRID AND SIMPLIFIED BOUNDARY ELEMENT METHODS / [pt] MÉTODOS DE ELEMENTOS DE CONTORNO CONVENCIONAL, HÍBRIDOS E SIMPLIFICADOS

MARIA FERNANDA FIGUEIREDO DE OLIVEIRA, 08 October 2004
[en] A consolidated, unified formulation of the conventional (CCBEM), hybrid stress (HSBEM), hybrid displacement (HDBEM), and simplified hybrid stress (SHSBEM) boundary element methods is presented, settling their nomenclature and main concepts. As a counterpart of the SHSBEM, the simplified hybrid displacement boundary element method (SHDBEM) is proposed, on the basis of the same stress and displacement approximation hypotheses of the HDBEM and on the assumption that the stress fundamental solution is also valid on the boundary. A combination of the SHSBEM and the SHDBEM gives rise to what is provisionally called the mesh-reduced hybrid boundary element method (MRHBEM), which seems computationally advantageous when applied to frequency-domain problems or to non-homogeneous materials. From the investigation of the matrix equations of these methods, four new matrix relations are identified, one of which proves useful for obtaining the coefficients of the flexibility and displacement matrices that cannot be determined by integration or direct evaluation. It is also proposed, as a point still not well explained in the technical literature, that traction forces should be interpolated in terms of surface attributes rather than nodal attributes. Numerical applications to potential problems are presented for each of the methods mentioned, in which the validity of the new matrix relations is verified.
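As a reminder of the common starting point of these formulations (standard boundary element theory, not a result of the thesis), the conventional BEM for a potential problem governed by the Laplace equation rests on the boundary integral identity

    c(\xi)\,u(\xi) + \int_{\Gamma} q^{*}(\xi,x)\,u(x)\,\mathrm{d}\Gamma(x) = \int_{\Gamma} u^{*}(\xi,x)\,q(x)\,\mathrm{d}\Gamma(x),

where u^{*} is the fundamental solution, q^{*} its normal derivative, q = \partial u / \partial n, and c(\xi) a jump coefficient equal to 1/2 on a smooth boundary. Collocation and interpolation turn this identity into the matrix system H u = G q, whose double-layer matrix H and single-layer matrix G are the objects the hybrid and simplified variants above manipulate.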

Segmentation par contours actifs basés alpha-divergences : application à la segmentation d’images médicales et biomédicales / Active contours segmentation based on alpha-divergences : Segmentation of medical and biomedical images

Meziou, Leïla Ikram, 28 November 2013
Segmenting regions of interest in medical and biomedical images remains a challenge to this day, notably because of the variety of acquisition modalities and of their associated characteristics (noise, for instance). In this context, this work introduces an active contour segmentation method whose evolution equation derives from an energy based on a similarity measure between the grey-level probability density functions of the regions inside and outside the contour during the iterative segmentation process. We focus in particular on the family of alpha-divergences. The main interest of the method is twofold: (i) the flexibility of alpha-divergences, whose intrinsic metric can be tuned through the value of the parameter alpha and thus adapted to the statistical distributions of the regions of the image to be segmented; and (ii) the unifying character of this statistical measure with respect to the distances classically used in this context (Kullback-Leibler, Hellinger, and others). We first study this measure from a supervised point of view, in which the iterative segmentation process follows from the minimization (in the variational sense) of the alpha-divergence between the current probability density function and a reference defined a priori. We then consider the unsupervised point of view, which removes the need to define references by instead maximizing the distance between the probability density functions inside and outside the contour. In addition, we propose an optimization scheme for the alpha parameter, carried out jointly with the minimization or maximization of the divergence, which iteratively adapts the divergence to the statistics of the data at hand. Experimentally, we present a comparative study of the different segmentation approaches: first on noisy and textured synthetic images, then on natural images. Finally, we focus on applications from the biomedical (cellular confocal microscopy) and medical (X-ray radiography, MRI) domains in a computer-aided diagnosis context. In each case, the contribution of alpha-divergences is discussed.
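For reference, one common parametrization of the alpha-divergence between densities p and q (conventions differ in the literature, and the thesis may use another) is

    D_{\alpha}(p\,\|\,q) = \frac{1}{\alpha(1-\alpha)} \int \left[\alpha\,p(x) + (1-\alpha)\,q(x) - p(x)^{\alpha} q(x)^{1-\alpha}\right] \mathrm{d}x,

which tends to the Kullback-Leibler divergence as \alpha \to 1 and reduces, up to a constant factor, to the squared Hellinger distance at \alpha = 1/2, consistent with the unifying role of the family described above.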

[en] AN EXPEDITE IMPLEMENTATION OF THE HYBRID BOUNDARY ELEMENT METHOD FOR POTENTIAL AND ELASTICITY PROBLEMS / [pt] UMA IMPLEMENTAÇÃO EXPEDITA DO MÉTODO HÍBRIDO DOS ELEMENTOS DE CONTORNO PARA PROBLEMAS DE POTENCIAL E ELASTICIDADE

CARLOS ANDRES AGUILAR MARON, 14 January 2015
[en] The consistent development of the conventional boundary element method (CBEM), enriched with concepts from the simplified version of the hybrid boundary element method (HBEM) derived from the Hellinger-Reissner variational potential, leads to a computationally less intensive, yet not necessarily less accurate, procedure for large-scale two-dimensional or three-dimensional problems of potential and elasticity. It is shown that the double-layer and single-layer potential matrices of the CBEM, H and G respectively, whose numerical evaluation normally requires the handling of singular and improper integrals, can be obtained in an expeditious way that eliminates almost all numerical integration, except for a few regular integrals. An important feature of the proposed formulation, which stems from the variational basis of the HBEM, is that results at internal points are obtained directly, without any boundary integral, since the fundamental solution is itself the solution of the problem. This work belongs to a project whose final goal is a computer code for large-scale problems (millions of degrees of freedom). At the present stage, several numerical examples were analyzed to assess the applicability of the expedite method, its computational effort, and the convergence of the results for the variables involved. Algorithms were implemented for two-dimensional potential and elasticity problems, using linear, quadratic, and cubic elements, and for three-dimensional problems, using triangular and quadrilateral elements, both linear and quadratic. The computational codes were implemented with the solution of large-scale problems in mind. It is expected that, in a final stage of the project, the incorporation of fast multipole procedures will make the approach considerably more efficient.
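Schematically, and only as an interpretation of the abstract rather than the thesis's own notation, hybrid boundary element formulations of this family represent the field in the domain as a superposition of fundamental solutions weighted by unknown force parameters, something like

    u(x) \approx \sum_{j} u^{*}(x, x_j)\, p^{*}_{j},

so that once the parameters p^{*}_{j} have been determined from the boundary equations, values at internal points follow by direct evaluation, with no further boundary integration; this is the feature the expedite implementation exploits.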

Positive solutions for Schrödinger-Poisson type systems / Soluções positivas para sistemas do tipo Schrödinger-Poisson

Rodriguez, Edwin Gonzalo Murcia, 09 June 2017
In this thesis we study Schrödinger-Poisson systems and look for positive solutions. The work consists of three chapters. Chapter 1 collects some basic facts of critical point theory. In Chapter 2 we consider a fractional Schrödinger-Poisson system in the whole space R^N, in the presence of a positive potential and depending on a small positive parameter. We show that, for a suitably small parameter (i.e. in the "semiclassical limit"), the number of positive solutions is bounded from below by the Ljusternik-Schnirelmann category of the set of minima of the potential. Finally, in Chapter 3, we analyze a Schrödinger-Poisson system in R^3 with an asymptotically cubic nonlinearity, and we prove the existence of positive radial solutions inside a ball and in an exterior domain.
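As a schematic point of reference (the exact exponents, coefficients, and scaling are those of the thesis and are not reproduced here), a Schrödinger-Poisson system in \mathbb{R}^{3} typically reads

    -\Delta u + V(x)\,u + \phi\,u = f(u), \qquad -\Delta \phi = u^{2},

with u and \phi sought positive; the fractional problem of Chapter 2 replaces the Laplacians by fractional powers (-\Delta)^{s}, and the semiclassical regime scales the leading operator by a small parameter.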

Image matching for 3D reconstruction using complementary optical and geometric information / Appariement d'images pour la reconstruction 3D en utilisant l'information optique et géométrique

Galindo, Patricio A., 20 January 2015
Image matching is a central research topic in computer vision. Research on this problem has long concentrated on its optical aspects, while its geometric aspects have received much less attention. This thesis promotes the use of geometry to complement optical information in image matching tasks. We first consider global methods based on the calculus of variations, for which occlusions and sharp features raise difficult challenges: in these scenarios the result depends strongly on the contribution of the regularization term. Using a geometric characterization of this behaviour, we formulate a matching method that steers grid lines away from problematic regions. While variational methods provide results that are generally well behaved, local methods based on match propagation adapt more closely to varied 3D structures, at the expense of the global coherence of the reconstructed surface. We therefore present a novel propagation method guided by local surface reconstructions, which corrects 3D positions together with the corresponding 2D matches.
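Schematically, and only as background on the class of methods discussed (not the thesis's exact functional), global variational matching minimizes an energy of the form

    E(u) = \int_{\Omega} \rho\!\left(I_{1}(x) - I_{2}(x + u(x))\right) \mathrm{d}x + \lambda \int_{\Omega} \lVert \nabla u(x) \rVert^{2}\, \mathrm{d}x,

where u is the displacement (matching) field, \rho penalizes the photometric residual, and \lambda weights the regularizer; near occlusions and sharp features the data term provides little guidance, so the regularization term dominates the result, which is precisely the behaviour the geometric characterization above is designed to control.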

On Hamiltonian elliptic systems with exponential growth in dimension two / Sistemas elípticos hamiltonianos com crescimento exponencial em dimensão dois

Leuyacc, Yony Raúl Santaria, 23 June 2017
In this work we study the existence of nontrivial weak solutions for some Hamiltonian elliptic systems in dimension two, involving a potential function and nonlinearities with maximal growth with respect to a critical curve (hyperbola). We consider four different cases. First, we study Hamiltonian systems in bounded domains with potential identically zero. The second case deals with systems of equations on the whole space, where the potential is bounded from below by a positive constant and satisfies some integrability conditions, while the nonlinearities involve weight functions with a singularity at the origin. In the third case, we consider systems with coercive potentials and nonlinearities with weight functions that may have a singularity at the origin or decay at infinity. In the last case, we study Hamiltonian systems where the potential can be unbounded or can vanish at infinity. To establish the existence of solutions, we use variational methods combined with Trudinger-Moser type inequalities in Lorentz-Sobolev spaces and a finite-dimensional approximation.
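Schematically (the precise growth conditions are those of the thesis), the Hamiltonian elliptic systems considered here take the form

    -\Delta u + V(x)\,u = g(x, v), \qquad -\Delta v + V(x)\,v = f(x, u),

posed in a planar domain or in \mathbb{R}^{2}, where "maximal growth" means that f and g are allowed to grow like \exp(\alpha s^{2}) in the sense of Trudinger-Moser, with the admissible growth rates of f and g linked by the critical curve (hyperbola) mentioned above.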

Resultados de multiplicidade para equações de Schrödinger com campo magnético via teoria de Morse e topologia do domínio / Multiplicity results for nonlinear Schrödinger equations with magnetic field via Morse theory and domain topology

Nemer, Rodrigo Cohen Mota, 02 December 2013
We study the existence of nontrivial solutions for a class of nonlinear Schrödinger equations involving a magnetic field, with Dirichlet or mixed Dirichlet-Neumann boundary conditions. In the first two chapters we give an estimate for the number of nontrivial solutions of the Dirichlet boundary value problem in terms of the topology of the domain. In the last two chapters we consider mixed Dirichlet-Neumann boundary value problems, and the number of nontrivial solutions is estimated in terms of the topology of the part of the boundary where the Neumann condition is prescribed. In both cases we use Ljusternik-Schnirelmann category and Morse theory.
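A schematic model problem for this class (with exponents and lower-order terms left unspecified) is

    \left(-\,i\nabla + A(x)\right)^{2} u = f(|u|)\,u \quad \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega,

where A is the magnetic vector potential; Ljusternik-Schnirelmann theory then typically yields a number of solutions bounded from below by the category of the domain (or, in the mixed problem, of the Neumann portion of the boundary), with Morse theory refining the count, in line with the estimates described above.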

Optimal, Multi-Modal Control with Applications in Robotics

Mehta, Tejas R., 04 April 2007
The objective of this dissertation is to incorporate the concept of optimality into multi-modal control and to apply the theoretical results to obtain successful navigation strategies for autonomous mobile robots. The main idea in multi-modal control is to break up a complex control task into simpler tasks. In particular, a number of control modes are constructed, each with respect to a particular task, and these modes are combined according to some supervisory control logic in order to complete the overall control task. This way of modularizing the control task lends itself particularly well to the control of autonomous mobile robots, as evidenced by the success of behavior-based robotics. Many challenging and interesting research issues arise when employing multi-modal control, and this thesis addresses them within an optimal control framework. In particular, the contributions of this dissertation are as follows. We first addressed the problem of inferring global behaviors from a collection of local rules (i.e., feedback control laws). Next, we addressed the issue of adaptively varying the multi-modal control system to further improve performance. Inspired by adaptive multi-modal control, we presented a constructivist framework for the learning-from-example problem, which was applied to the DARPA-sponsored Learning Applied to Ground Robots (LAGR) project. Next, we addressed the optimal control of multi-modal systems with infinite-dimensional constraints. These constraints are formulated as multi-modal, multi-dimensional (M3D) systems, where the dimensions of the state and control spaces change between modes to account for the constraints and to ease the computational burden associated with traditional methods. Finally, we used multi-modal control strategies to develop effective navigation strategies for autonomous mobile robots. The theoretical results presented in this thesis are verified through simulated experiments in Matlab and actual experiments using the Magellan Pro robot platform and the LAGR robot. In closing, the main strength of multi-modal control lies in breaking up a complex control task into simpler tasks. This divide-and-conquer approach helps modularize the control system, which has the same effect on complex control systems that object-oriented programming has on large-scale computer programs: namely, it allows greater simplicity, flexibility, and adaptability.
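To make the supervisory mode-switching idea concrete, here is a minimal illustrative sketch in Python (not code from the dissertation; the two modes, gains, and switching rule are hypothetical) of a supervisor choosing between a go-to-goal mode and an avoid-obstacle mode for a planar robot:

    import numpy as np

    def go_to_goal(x, goal, k=1.0):
        # Mode 1: proportional controller driving the state toward the goal.
        return k * (goal - x)

    def avoid_obstacle(x, obstacle, k=1.0):
        # Mode 2: push the state radially away from the obstacle.
        return k * (x - obstacle)

    def supervisor(x, goal, obstacle, safe_dist=1.0):
        # Supervisory logic: a distance guard selects which mode is active.
        if np.linalg.norm(x - obstacle) < safe_dist:
            return avoid_obstacle(x, obstacle)
        return go_to_goal(x, goal)

    # Simulate the switched closed loop with forward-Euler integration.
    x = np.array([0.0, 0.0])
    goal, obstacle = np.array([5.0, 5.0]), np.array([2.5, 2.4])
    dt = 0.05
    for _ in range(400):
        x = x + dt * supervisor(x, goal, obstacle)
    print("final state:", x)

Each mode solves a simple task on its own and the supervisor composes them; questions of optimality, adaptation of the modes, and learning of the switching logic are the issues the dissertation addresses.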

Rapid simultaneous hypersonic aerodynamic and trajectory optimization for conceptual design

Grant, Michael James, 30 March 2012
Traditionally, the design of complex aerospace systems requires iteration among segregated disciplines such as aerodynamic modeling and trajectory optimization. Multidisciplinary design optimization algorithms have been developed to efficiently orchestrate the interaction among these disciplines during the design process. For example, vehicle capability is generally obtained through sequential iteration among vehicle shape, aerodynamic performance, and trajectory optimization routines in which aerodynamic performance is obtained from large pre-computed tables that are a function of angle of attack, sideslip, and flight conditions. This numerical approach segregates advancements in vehicle shape design from advancements in trajectory optimization. This investigation advances the state-of-the-art in conceptual hypersonic aerodynamic analysis and trajectory optimization by removing the source of iteration between aerodynamic and trajectory analyses and capitalizing on fundamental linkages across hypersonic solutions. Analytic aerodynamic relations, like those derived in this investigation, are possible in any flow regime in which the flowfield can be accurately described analytically. These relations eliminate the large aerodynamic tables that contribute to the segregation of disciplinary advancements. Within the limits of Newtonian flow theory, many of the analytic expressions derived in this investigation provide exact solutions that eliminate the computational error of approximate methods widely used today while simultaneously improving computational performance. To address the mathematical limit of analytic solutions, additional relations are developed that fundamentally alter the manner in which Newtonian aerodynamics are calculated. The resulting aerodynamic expressions provide an analytic mapping of vehicle shape to trajectory performance. This analytic mapping collapses the traditional, segregated design environment into a single, unified, mathematical framework which enables fast, specialized trajectory optimization methods to be extended to also include vehicle shape. A rapid trajectory optimization methodology suitable for this new, mathematically integrated design environment is also developed by relying on the continuation of solutions found via indirect methods. Examples demonstrate that families of optimal hypersonic trajectories can be quickly constructed for varying trajectory parameters, vehicle shapes, atmospheric properties, and gravity models to support design space exploration, trade studies, and vehicle requirements definition. These results validate the hypothesis that many hypersonic trajectory solutions are connected through fast indirect optimization methods. The extension of this trajectory optimization methodology to include vehicle shape through the development of analytic hypersonic aerodynamic relations enables the construction of a unified mathematical framework to perform rapid, simultaneous hypersonic aerodynamic and trajectory optimization. Performance comparisons relative to state-of-the-art methodologies illustrate the computational advantages of this new, unified design environment.
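For background on the aerodynamic model behind these analytic relations, Newtonian flow theory gives the local pressure coefficient on a surface panel inclined at angle \theta to the freestream as

    C_p = 2\sin^{2}\theta \quad \text{(classical Newtonian)}, \qquad C_p = C_{p,\max}\sin^{2}\theta \quad \text{(modified Newtonian)},

with C_p = 0 on shadowed surfaces. Integrating these panel-level expressions over a parameterized vehicle shape is what permits closed-form force coefficients and, in turn, the analytic mapping from vehicle shape to trajectory performance described above.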
