
Transport optimal pour l'assimilation de données images / Optimal transportation for images data assimilation

Feyeux, Nelson 08 December 2016
Forecasting of a physical system relies on a mathematical model, which must be initialized with the state of the system at the initial time. This state is not directly measurable, so data assimilation techniques are generally used to estimate it. They combine all available sources of information, such as observations (which may be sparse in time and space and contaminated by errors), previous forecasts, the model equations, and error statistics. The main idea of data assimilation is to find an initial state that accounts for these different sources of information. Such techniques are widely used in meteorology, where data, and images in particular, are more and more numerous thanks to the increasing number of satellites and other measurement sources. This, coupled with developments in meteorological models, has led to an ever-increasing forecast quality.

Spatial consistency is one specificity of images: the human eye is able to notice structures in an image. However, classical data assimilation methods do not handle such structures, because they only take into account the value of each pixel separately; in some cases this leads to a poor initial condition, with amplitude errors in the estimated state. To tackle this problem, we propose to change the representation space of the data: images are considered here as elements of the Wasserstein space, endowed with the Wasserstein distance coming from optimal transport theory, which has found many applications in imaging. In this space, what matters is the position of the different structures.

This thesis presents a variational data assimilation technique based on this Wasserstein distance. The technique and its numerical algorithms are first described, then experiments are carried out and results shown, highlighting its specific behavior. In particular, the results show how it corrects so-called position errors.
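As a minimal illustration of why the Wasserstein distance "sees" position errors while a pixel-wise comparison does not, here is a sketch in Python (the 1D grid and Gaussian "structures" are made up for the demo; `scipy.stats.wasserstein_distance` computes the 1D W1 distance between weighted samples):

```python
import numpy as np
from scipy.stats import wasserstein_distance

x = np.linspace(0.0, 10.0, 201)

def bump(center, width=0.5):
    """Normalized Gaussian 'structure' on the grid."""
    g = np.exp(-0.5 * ((x - center) / width) ** 2)
    return g / g.sum()

obs = bump(4.0)      # observed structure
state = bump(6.0)    # model state: right shape, wrong position

# Pixel-wise (L2) comparison: the bumps barely overlap, so this
# distance saturates and says nothing about how far off the shift is.
l2 = np.linalg.norm(obs - state)

# Wasserstein distance: for two identical shapes it equals their
# displacement, so it directly measures the position error.
w = wasserstein_distance(x, x, u_weights=obs, v_weights=state)
print(f"L2 = {l2:.4f}, W1 = {w:.4f}")   # W1 is close to the 2.0 shift
```

A gradient-based assimilation scheme built on W1 therefore receives a useful signal ("move the structure by 2") where the pixel-wise cost is flat.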

Efficient high-dimension Gaussian sampling based on matrix splitting : application to Bayesian inversion / Échantillonnage gaussien en grande dimension basé sur le principe du matrix splitting : application à l'inversion bayésienne

Bărbos, Andrei-Cristian 10 January 2018
The thesis deals with the problem of high-dimensional Gaussian sampling. Such a problem arises, for example, in Bayesian inverse problems in imaging, where the number of variables easily reaches an order of 10^6 to 10^9. The complexity of the sampling problem is inherently linked to the structure of the covariance matrix. Different solutions have already been proposed to tackle this problem, among which we emphasize the Hogwild algorithm, which runs local Gibbs sampling updates in parallel with periodic global synchronisation.

Our algorithm makes use of the connection between a class of iterative samplers and iterative solvers for systems of linear equations. It does not target the required Gaussian distribution; instead, it targets an approximate distribution. However, we are able to control how far the approximate distribution is from the required one by means of a single tuning parameter.

We first compare the proposed sampling algorithm with the Gibbs and Hogwild algorithms on moderately sized problems for different target distributions. Our algorithm manages to outperform the Gibbs and Hogwild algorithms in most cases, although its performance depends on the tuning parameter. We then compare the proposed algorithm with the Hogwild algorithm on a large-scale real application, namely image deconvolution-interpolation. The proposed algorithm obtains good results, whereas the Hogwild algorithm fails to converge. For small values of the tuning parameter our algorithm fails to converge as well; notwithstanding, a suitably chosen value enables the proposed sampler to converge and to deliver good results.
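The sampler/solver connection can be illustrated with the most basic member of that class: the component-wise Gibbs sampler, whose full sweep is algebraically a Gauss-Seidel sweep on the precision matrix. The sketch below uses a toy 2x2 precision matrix; it does not reproduce the thesis's approximate sampler or its tuning parameter, only the underlying connection:

```python
import numpy as np

def gibbs_gaussian_sampler(A, n_samples=20000, burn=1000, seed=0):
    """Component-wise Gibbs sampler targeting N(0, inv(A)).

    The conditional of x_i given the others is Gaussian with mean
    -(1/A_ii) * sum_{j != i} A_ij x_j and variance 1/A_ii; one full
    sweep is algebraically a Gauss-Seidel sweep on A, which is the
    sampler/solver connection exploited above.
    """
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    x = np.zeros(d)
    out = np.empty((n_samples, d))
    for it in range(n_samples + burn):
        for i in range(d):
            var_i = 1.0 / A[i, i]
            mean_i = -var_i * (A[i] @ x - A[i, i] * x[i])
            x[i] = mean_i + np.sqrt(var_i) * rng.standard_normal()
        if it >= burn:
            out[it - burn] = x
    return out

# Toy 2-D precision matrix; the target covariance is its inverse.
A = np.array([[2.0, 0.8],
              [0.8, 1.5]])
samples = gibbs_gaussian_sampler(A)
emp_cov = np.cov(samples.T)
print(np.round(emp_cov, 2))   # close to inv(A)
```

In high dimension this strictly sequential sweep is the bottleneck, which is what motivates parallel schemes such as Hogwild and the approximate sampler studied in the thesis.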

Thermographie infrarouge et méthodes d'inférence statistique pour la détermination locale et transitoire de termes-sources et diffusivité thermique / Thermographic measurements and inverse problems for the source-term estimation

Massard da Fonseca, Henrique 11 January 2012
This work deals with the development of new theoretical and experimental techniques for the efficient estimation of thermophysical properties and of the source term, at micro and macro scale. Two kinds of source term were studied: a constant one and a time-varying one, the latter with a sinusoidal or a pulse shape. Two devices were used to heat the sample: an electrical resistance and a laser diode. For the data acquisition, an infrared camera was used, providing a full cartography of the properties of the medium as well as non-contact temperature measurements. The nodal strategy is presented as a way to handle the large amount of data generated by the camera.

The direct problem was solved by the finite-difference method, and two approaches were used for the solution of the inverse problem, depending on the time behavior of the source term. Both estimate the parameters within the Bayesian framework: the Markov chain Monte Carlo (MCMC) method, via the Metropolis-Hastings algorithm, for the constant source term, and the Kalman filter for the time-varying one. Controlled experiments were carried out on a sample whose thermophysical properties had been determined by classical methods from the literature.
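For the constant source term, the Bayesian estimation step can be sketched with a random-walk Metropolis-Hastings chain on a toy linear forward model (the forward model, noise level, and proposal width below are illustrative, not those of the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: temperature rise proportional to a constant
# source term g (a stand-in for the finite-difference thermal model).
def forward(g, t):
    return g * t

t = np.linspace(0.0, 10.0, 50)
g_true, sigma = 2.5, 0.3
data = forward(g_true, t) + sigma * rng.normal(size=t.size)

def log_post(g):
    # Gaussian likelihood, flat prior on g > 0.
    if g <= 0.0:
        return -np.inf
    r = data - forward(g, t)
    return -0.5 * np.sum(r * r) / sigma**2

# Random-walk Metropolis-Hastings chain.
g, lp, chain = 2.0, log_post(2.0), []
for _ in range(20000):
    prop = g + 0.05 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        g, lp = prop, lp_prop
    chain.append(g)

post = np.array(chain[5000:])
print(f"posterior: {post.mean():.3f} +/- {post.std():.3f}")
```

The chain's post-burn-in histogram approximates the posterior of g, so the estimate comes with an uncertainty, which is the point of the Bayesian treatment.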

Valeurs propres de transmission et leur utilisation dans l'identification d'inclusions à partir de mesures électromagnétiques / Transmission eigenvalues and their use in the identification of inclusions from electromagnetic measurements

Cossonnière, Anne 08 December 2011
The theory of inverse scattering for acoustic and electromagnetic waves is an active area of research that has seen significant developments in the past few years. The Linear Sampling Method (LSM) allows the reconstruction of the shape of an object from its acoustic or electromagnetic response with little a priori knowledge of the physical properties of the scatterer. However, in the case of penetrable objects, this method fails at resonance frequencies called transmission eigenvalues. These transmission eigenvalues are the eigenvalues of a new type of problem called the interior transmission problem. Their main feature is that not only can they give qualitative information on the physical properties of the scatterer, but they can also be computed from far-field measurements, which makes them useful in the identification problem. In this thesis, we prove the existence and the discreteness of the set of transmission eigenvalues for two new configurations, corresponding to a penetrable scatterer containing a cavity or a perfect conductor. A new approach using surface integral equations is also developed to compute transmission eigenvalues numerically for general geometries.
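For reference, in its simplest scalar form (a penetrable medium with refractive index n(x) occupying a bounded domain D, with no cavity or conductor inside), the interior transmission problem mentioned above reads:

```latex
\begin{aligned}
\Delta w + k^2\, n(x)\, w &= 0 && \text{in } D,\\
\Delta v + k^2\, v &= 0 && \text{in } D,\\
w &= v && \text{on } \partial D,\\
\frac{\partial w}{\partial \nu} &= \frac{\partial v}{\partial \nu} && \text{on } \partial D,
\end{aligned}
```

where \nu is the outward normal; k is a transmission eigenvalue when this problem admits a nontrivial pair (w, v). The configurations studied in the thesis modify this basic formulation by adding an interior boundary.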

Reconstruction tridimensionnelle par stéréophotométrie / 3D-reconstruction by photometric stereo

Quéau, Yvain 26 November 2015
This thesis tackles the photometric stereo problem, a 3D-reconstruction technique in which several pictures of a scene are taken from the same viewpoint but under different lightings. We first focus on robust techniques for estimating the normals to the surface and for integrating these normals into a depth map. Then, we study two situations where the problem is ill-posed: when the lightings are unknown, and when only two images are used. Part 3 is devoted to more realistic models of both the lightings and the surface reflectance. These first three parts bring us to the limits of the usual formulation of photometric stereo: in Part 4 we finally introduce a variational and differential reformulation of the problem which allows us to overcome these limits.
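The classical normal-estimation step can be sketched for a single Lambertian pixel: with m known lighting directions stacked in a matrix L, the intensities satisfy I = L(ρn), so the albedo-scaled normal is recovered by least squares. The lighting directions, normal, and albedo below are made up for the demo:

```python
import numpy as np

# Three known, non-coplanar lighting directions (rows), made up for
# the demo; in practice they come from calibration.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])

n_true = np.array([0.2, -0.3, 0.933])
n_true = n_true / np.linalg.norm(n_true)
rho_true = 0.8                         # Lambertian albedo

I = L @ (rho_true * n_true)            # noiseless pixel intensities

m, *_ = np.linalg.lstsq(L, I, rcond=None)
rho = np.linalg.norm(m)                # recovered albedo
n = m / rho                            # recovered unit normal
print(np.round(n, 3), round(float(rho), 3))
```

With only two images (or unknown lightings) this linear system becomes under-determined, which is exactly the ill-posed regime studied in the second part of the thesis.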

Desenvolvimento e avaliação de novas abordagens de modelagem de processos de separação em leito móvel simulado / Development and evaluation of new approaches to modeling of the separations process in simulated moving bed

Anderson Luis Jeske Bihain 10 February 2014
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

The Simulated Moving Bed (SMB) is a very efficient adsorption-based separation process, because it operates in a continuous regime with countercurrent flow of the solid phase. Among its many applications, the SMB stands out in the resolution of petrochemicals and, nowadays, mainly in the separation of racemic mixtures, which are separations of a high degree of difficulty. In this work, two new approaches to the modeling of the SMB process are proposed: the Stepwise approach and the Front Velocity approach.

In the Stepwise approach, each chromatographic column of the SMB is modeled in a discrete way, with its domain divided into N mixing cells connected in series, and the concentrations of the compounds in the liquid and solid phases are simulated using two distinct mass-transfer kinetics. This approach assumes that the mass-transfer interactions between the molecules of the compound in the liquid and solid phases occur only at the surface, so that the volume occupied by each molecule in the two phases can be taken as the same, which implies that the residence factor can be considered equal to the equilibrium constant.

To describe the mass transfer occurring in the chromatographic process, the Front Velocity approach establishes that convection is the dominant mechanism in the transport of solute along the chromatographic column. Front Velocity is a discrete (stepwise) model in which the flow rate determines the advance of the liquid phase along the column. The steps are the advance of the liquid phase and the subsequent mass transfer between the liquid and solid phases, the latter within the same time interval. The experimental volumetric flow is thus used to discretize the control volumes, which move along the porous column with the same velocity as the liquid phase. The mass transfer was represented by two distinct kinetic mechanisms, without (linear type) and with a maximum adsorption capacity (Langmuir type).

Both proposed approaches were studied and evaluated by comparison with experimental SMB separation data for the anesthetic ketamine and, later, for the drug verapamil, as well as with simulations of the dispersive equilibrium model used by Santos (2004) for ketamine and by Perna (2013) for verapamil. In the column-characterization step, the new approaches were coupled to the R2W inverse tool to determine the global mass-transfer parameters using only the experimental residence times of each enantiomer in the high-performance liquid chromatography (HPLC) column. In the second step, the kinetic models developed in both approaches were applied to the SMB columns, with the values determined in the characterization step, to simulate the continuous separation process.

The simulation results show good agreement between the two proposed approaches and the pulse experiments used to characterize the column in the enantiomeric separation of ketamine over time. The SMB simulations, for both verapamil and ketamine, show a discrepancy with the experimental data during the first cycles, but after these initial cycles the correlation between experiments and simulations is good. For the separation of ketamine (Santos, 2004), in which the feed concentration was relatively low, the models were able to predict the separation process with both the linear and the Langmuir kinetics. For the separation of verapamil (Perna, 2013), where the feed concentration is relatively high, only the Langmuir kinetics represented the process, since the linear kinetics cannot represent the saturation of the chromatographic columns.

According to this study, both proposed approaches are potentially useful tools for predicting the chromatographic behavior of a sample in a pulse experiment, as well as for simulating the separation of a compound in the SMB, despite the small discrepancies observed in the first SMB work cycles. Moreover, they can be easily implemented and applied to the analysis of the process, since they require a small number of parameters and consist of ordinary differential equations.

Problemas inversos aplicados à identificação de parâmetros hidrodinâmicos de um modelo do estuário do rio Macaé / Inverse problems applied to the identification of hydrodynamic parameters of a model of the Macae river estuary

Edgar Barbosa Lima 27 February 2012
Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro

This thesis proposes a strategy for automatically obtaining hydrodynamic and transport parameters by means of the solution of inverse problems. Obtaining the parameters of a physical model is a major issue in its calibration, largely because of the difficulty of measuring those parameters in the field. In the modeling of rivers and estuaries in particular, the roughness height and the turbulent diffusion coefficient are two of the most difficult parameters to measure or estimate. Here, an automated technique for estimating these parameters through an inverse problem is applied to a model of the Macaé river estuary, located in the north of the state of Rio de Janeiro. For this study, hydrodynamic and transport models were built on the MOHID platform, developed at the Technical University of Lisbon, which has been widely applied to the simulation of water bodies. A sensitivity analysis of the model responses with respect to the parameters of interest was performed; it showed that salinity is a sensitive variable for both parameters. The inverse problem was then solved using several optimization methods, by coupling the MOHID platform to optimization codes implemented in Fortran. The coupling was carried out without changing the MOHID source code, so the computational tool developed here can be used with any version of the platform and adapted to other simulators. The tests confirm the efficiency of the technique and point to the best approaches for a fast and accurate estimation of the parameters.
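The coupling pattern described above, an optimizer driving a simulator as a black box, can be sketched as follows. A toy analytic "simulator" stands in for MOHID here; `run_simulator`, both parameter names, and the synthetic salinity data are purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Black-box calibration loop: the simulator is driven only through its
# inputs and outputs, so its source code is never modified.
observed_salinity = np.array([31.0, 29.3, 27.0, 24.1])  # synthetic data

def run_simulator(roughness, diffusivity):
    # Placeholder: in practice, write the parameters into the model's
    # input files, launch the executable, and parse its output.
    x = np.arange(4.0)
    return 31.0 - (1.2 + 4.0 * roughness) * x - diffusivity * x**2

def misfit(params):
    return np.sum((run_simulator(*params) - observed_salinity) ** 2)

res = minimize(misfit, x0=[0.01, 0.1], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12})
print(np.round(res.x, 3))      # recovered (roughness, diffusivity)
```

A derivative-free method such as Nelder-Mead fits this setting well, since the simulator exposes no gradients; swapping in another optimizer only changes the `minimize` call, which is the portability the thesis aims for.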

Um problema inverso em dois passos para estimação de perfis de temperatura na atmosfera com nuvens a partir de medidas de radiância feitas por satélite / A two step inverse problem to retrieve vertical temperature profile in the atmosphere with clouds from radiance measurements made by satellite

Patricia Oliva Soares 04 January 2013
This thesis proposes a methodology for retrieving vertical temperature profiles in a cloudy atmosphere from radiance measurements made by satellite, using artificial neural networks. Vertical temperature profiles are important initial conditions for numerical weather prediction models, and are usually obtained from satellite radiance measurements in infrared channels. However, when these measurements are made in the presence of clouds, current techniques cannot retrieve the temperature profile. This is a significant loss of information, since on average 20% of the image pixels contain clouds.

Here, this problem is solved as a two-step inverse problem: the first step is an inverse boundary-condition estimation problem, in which the radiance reaching the cloud base is determined from the radiance measured by the satellite; the second step determines the vertical temperature profile from the radiance estimated in the first step. Reconstructions of the temperature profile are presented for four test cases. The results show that the proposed methodology produces satisfactory results and has great potential for use, making it possible to incorporate information from a wider area of the globe and thus to improve numerical weather prediction models.
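The second step, learning the map from radiances back to a temperature profile, can be sketched on synthetic data. The thesis uses an artificial neural network trained on radiative-transfer pairs; below, a linear least-squares retrieval operator stands in for the network to keep the sketch dependency-free, and the fake forward operator and all sizes are illustrative:

```python
import numpy as np

# Toy sketch of step 2 (radiances -> temperature profile).
rng = np.random.default_rng(0)
n_levels, n_channels, n_train = 8, 12, 4000

W_fwd = rng.normal(size=(n_channels, n_levels))    # fake forward operator
T_train = rng.normal(size=(n_train, n_levels))     # temperature anomalies
R_train = T_train @ W_fwd.T + 0.01 * rng.normal(size=(n_train, n_channels))

# "Training": fit a retrieval matrix mapping radiances to profiles.
W_ret, *_ = np.linalg.lstsq(R_train, T_train, rcond=None)

T_true = rng.normal(size=n_levels)
T_hat = (T_true @ W_fwd.T) @ W_ret                 # retrieval on new data
print(f"mean abs error: {np.abs(T_hat - T_true).mean():.4f}")
```

A neural network replaces `W_ret` with a nonlinear map, which matters because the real radiative transfer (and the cloud-base radiance from step 1) is nonlinear in temperature.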

Nouvelle méthode de traitement d'images multispectrales fondée sur un modèle d'instrument pour la haut contraste : application à la détection d'exoplanètes / New method of multispectral image post-processing based on an instrument model for high contrast imaging systems : Application to exoplanet detection

Ygouf, Marie 06 December 2012
This research focuses on high-contrast multispectral imaging for the direct detection and characterization of exoplanets. In this framework, the development of innovative image post-processing methods is essential in order to eliminate the quasi-static speckles in the final image, which remain to this day the main limitation for high contrast. Even though residual instrumental aberrations are responsible for these speckles, no post-processing method currently uses a model of coronagraphic image formation that takes these aberrations as parameters. The approach adopted in this thesis comprises the development, in a Bayesian framework, of an inversion method based on an analytical model of coronagraphic imaging. This method jointly estimates the instrumental aberrations and the object of interest, namely the exoplanets, in order to separate these two contributions properly. Estimating the aberrations directly from focal-plane images, also known as phase retrieval, is the most difficult step, because the model of the on-axis instrumental response on which these aberrations depend is highly non-linear. The development and study of a simpler, approximate model of coronagraphic imaging therefore proved very useful for understanding the problem and suggested several minimization strategies. I was finally able to test my method and to estimate its performance in terms of robustness and exoplanet detection. To do so, I applied it to simulated images and studied, in particular, the effect of the different parameters of the imaging model used. I thus demonstrated that this new method, combined with an optimization scheme based on a good knowledge of the problem, can operate in a relatively robust way despite the difficulties of the phase-retrieval step.
In particular, it detects exoplanets in simulated images at a level compliant with the goal of the SPHERE instrument. This work opens up numerous perspectives, including demonstrating the usefulness of the method on images simulated with more realistic coronagraphs and on real images from the SPHERE instrument. Moreover, extending the method to the characterization of exoplanets is relatively straightforward, as is extending it to the study of more extended objects such as circumstellar disks. Finally, the results of these studies will provide important insights for the development of future instruments. In particular, the Extremely Large Telescopes already raise technical challenges for the next generation of planet finders, which can very probably be addressed in part by image-processing methods based on a direct imaging model.
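The non-linearity of focal-plane phase retrieval mentioned in the abstract can be illustrated with a minimal sketch: a toy Fourier-optics forward model (plain pupil, no coronagraph, two low-order aberration modes) whose free parameters are recovered from a single focal-plane image by non-linear least squares. All choices here (grid size, modes, initialization) are illustrative assumptions, not the thesis's actual coronagraphic model.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy forward model: a circular pupil with a phase built from two
# low-order modes (tilt-like and defocus-like), imaged to the focal
# plane by an FFT. This is NOT the thesis's coronagraphic model.
N = 32
yy, xx = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
pupil = (xx**2 + yy**2 <= 1.0).astype(float)
modes = np.stack([xx * pupil,                          # tilt-like mode
                  (2.0 * (xx**2 + yy**2) - 1.0) * pupil])  # defocus-like mode

def image(coeffs):
    """Normalized focal-plane intensity for the given mode coefficients."""
    phase = np.tensordot(coeffs, modes, axes=1)
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()

true = np.array([0.3, -0.2])   # "unknown" aberration coefficients
data = image(true)             # noiseless simulated focal-plane image

# Phase retrieval as non-linear least squares on the image residual.
# The starting point is taken near the truth to sidestep the well-known
# twin-image sign ambiguity of single-image phase retrieval.
fit = least_squares(lambda c: (image(c) - data).ravel(), x0=[0.2, -0.1])
```

Even this degenerate little problem shows why the real estimation step is hard: the intensity depends non-linearly on the phase, and a single image leaves sign ambiguities that the initialization (or, in the thesis, prior knowledge and phase diversity across spectral channels) must resolve.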

Algoritmo híbrido para avaliação da integridade estrutural: uma abordagem heurística / Hybrid algorithm for damage detection: a heuristic approach

Oscar Javier Begambre Carrillo 25 June 2007 (has links)
This study presents PSOS (Particle Swarm Optimization - Simplex), a new self-configuring hybrid algorithm for assessing structural integrity from dynamic responses. The objective function of the resulting minimization problem is built from frequency response functions (FRFs) and/or modal data of the system. A novel strategy for controlling the parameters of the Particle Swarm Optimization (PSO) algorithm, based on the Nelder-Mead simplex method, is developed; as a consequence, the convergence of PSOS becomes independent of the heuristic constants, and its stability and accuracy are improved. The proposed hybrid method outperformed simulated annealing, genetic algorithms, and the basic PSO on the various benchmark functions analyzed. Several damage-detection problems are presented, taking into account the effects of noise and missing experimental data.
In all cases, the location and extent of the damage were determined successfully. Finally, using PSOS, the parameters of a non-linear oscillator (a Duffing oscillator) were identified.
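The hybrid idea described in the abstract (a PSO population whose best point is periodically refined by the Nelder-Mead simplex) can be sketched as follows on a standard benchmark function. This is a generic illustration of a PSO/simplex hybrid, not the PSOS algorithm of the thesis; the inertia and acceleration constants, refinement period, and bounds are all assumed values.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    """Classic benchmark; global minimum f=0 at x=(1, ..., 1)."""
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (1.0 - x[:-1]) ** 2))

def pso_simplex(f, dim=2, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = -5.0, 5.0
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # personal bests
    pval = np.array([f(p) for p in pbest])
    g = pbest[np.argmin(pval)].copy()             # global best
    w, c1, c2 = 0.7, 1.5, 1.5                     # assumed heuristic constants
    for t in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[np.argmin(pval)].copy()
        if (t + 1) % 50 == 0:                     # periodic simplex refinement
            res = minimize(f, g, method="Nelder-Mead")
            i = int(np.argmin(pval))
            if res.fun < pval[i]:
                pbest[i], pval[i] = res.x, res.fun
                g = res.x.copy()
    return g, f(g)

best, val = pso_simplex(rosenbrock)
```

The simplex step gives the swarm a local, derivative-free polish of its best candidate, which is one simple way to make the outcome less sensitive to the heuristic PSO constants; the actual PSOS strategy of the thesis drives the PSO parameters themselves with Nelder-Mead.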
