1

Implementation of Different Flux Evaluation Schemes into a Two-Dimensional Euler Solver

Eraslan, Elvan, 01 September 2006
This study investigates the accuracy and efficiency of several flux splitting methods for the compressible, two-dimensional Euler equations. The Steger-Warming and Van Leer flux vector splitting methods, the Advection Upstream Splitting Method (AUSM), the Artificially Upstream Flux Vector Splitting scheme (AUFS), and Roe's flux difference splitting scheme were implemented with first- and second-order reconstruction methods. Limiter functions were embedded in the second-order reconstructions. The flux splitting methods were applied to subsonic, transonic, and supersonic flows over a NACA0012 airfoil, as well as to subsonic, transonic, and supersonic flows in a channel. The results are compared with each other and with those in the literature, and the advantages and disadvantages of each scheme relative to the others are identified.
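As context for the schemes compared above, the sketch below shows the classical Steger-Warming flux vector splitting for the one-dimensional Euler equations (the thesis treats the two-dimensional case). The Jacobian eigenvalues are split by sign and reassembled into forward- and backward-propagating fluxes; this is the standard textbook form, not code from the thesis.

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats (assumed perfect diatomic gas)

def steger_warming_split(rho, u, p):
    """Return (F_plus, F_minus), the Steger-Warming split fluxes for
    the 1D Euler equations at a single state (rho, u, p)."""
    a = np.sqrt(GAMMA * p / rho)           # speed of sound
    lam = np.array([u, u + a, u - a])      # eigenvalues of the flux Jacobian
    lam_p = 0.5 * (lam + np.abs(lam))      # positive (right-running) parts
    lam_m = 0.5 * (lam - np.abs(lam))      # negative (left-running) parts

    def assemble(l):
        c = rho / (2.0 * GAMMA)
        f1 = c * (2 * (GAMMA - 1) * l[0] + l[1] + l[2])
        f2 = c * (2 * (GAMMA - 1) * l[0] * u + l[1] * (u + a) + l[2] * (u - a))
        f3 = c * ((GAMMA - 1) * l[0] * u**2
                  + 0.5 * l[1] * (u + a)**2 + 0.5 * l[2] * (u - a)**2
                  + (3 - GAMMA) * (l[1] + l[2]) * a**2 / (2 * (GAMMA - 1)))
        return np.array([f1, f2, f3])

    return assemble(lam_p), assemble(lam_m)

# First-order interface flux between a left and a right state:
# F_half = steger_warming_split(*left)[0] + steger_warming_split(*right)[1]
```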
2

Crab flare observations with H.E.S.S. phase II

Balzer, Arnim, January 2014
The H.E.S.S. array is a third-generation Imaging Atmospheric Cherenkov Telescope (IACT) array. It is located in the Khomas Highland in Namibia and measures very-high-energy (VHE) gamma rays. In Phase I, the array started data taking in 2004 with its four identical 13 m telescopes. Since then, H.E.S.S. has emerged as the most successful IACT experiment to date. Among the almost 150 sources of VHE gamma-ray radiation found so far, even the oldest detection, the Crab Nebula, keeps surprising the scientific community with unexplained phenomena such as the recently discovered very energetic flares of high-energy gamma-ray radiation. During its most recent flare, detected by the Fermi satellite in March 2013, the Crab Nebula was simultaneously observed with the H.E.S.S. array for six nights. The results of these observations are discussed in detail in this work. During the nights of the flare, the new 24 m × 32 m H.E.S.S. II telescope was still being commissioned, but it participated in the data taking for one night. To reconstruct and analyze the data of the H.E.S.S. Phase II array, the algorithms and software used by the H.E.S.S. Phase I array had to be adapted. The most prominent advanced shower reconstruction technique, the template-based model analysis developed by de Naurois and Rolland, compares real shower images taken by the Cherenkov telescope cameras with shower templates obtained from a semi-analytical model. To find the best-fitting image, and therefore the parameters that best describe the air shower, a pixel-wise log-likelihood fit is performed. The adaptation of this technique to the heterogeneous H.E.S.S. Phase II array for stereo events (i.e. air showers seen by at least two telescopes of any kind), its performance on Monte Carlo simulations, and its application to real data are described.
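The template-based model analysis described above reduces, at its core, to a pixel-wise likelihood comparison between the recorded camera image and a predicted image. The sketch below is a heavily simplified Gaussian approximation; the actual de Naurois-Rolland likelihood convolves Poisson photoelectron statistics with the single-photoelectron resolution, and `template_fn` here is a hypothetical stand-in for the semi-analytical shower model.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, image, template_fn, pedestal_sigma=1.0):
    """Pixel-wise -2 ln L in a Gaussian approximation: each pixel's
    variance is the pedestal noise plus the Poisson term of the
    predicted photoelectron count."""
    expected = template_fn(params)            # predicted p.e. per pixel
    var = pedestal_sigma**2 + np.maximum(expected, 0.0)
    return np.sum((image - expected)**2 / var + np.log(2 * np.pi * var))

# Hypothetical usage, with params = (impact point, direction, energy,
# depth of shower maximum):
# fit = minimize(neg_log_likelihood, x0, args=(camera_image, shower_model))
```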
3

Contribution to the development of a new image reconstruction method for direct imaging probes

Rolnik, Vanessa Portioli, 05 November 2003
The main objective of this work is to contribute to the development of a new two-phase-flow tomographic reconstruction method suited for electrical impedance tomography. The adopted approach consists in minimizing an error functional, defined so that its global minimum is related to the sensed flow image. In this formulation, the ill-conditioning appears through topological features of the error functionals (pathologies) that compromise the performance of the optimization algorithms employed to determine the minimum. This approach has several important advantages over classical ones, which are generally based on restrictive and unrealistic hypotheses, such as the sensing field being two-dimensional, parallel, and independent of the flow. Numerical simulations permitted preliminary studies of the topological features of the error functional, needed to select optimization methods that could be specialized to solve the problem treated in this work. These tests identified the characteristic pathology of the problem: the presence of a flat region (virtually null inclination) around the sought global minimum. Among the different methods considered, genetic algorithms were adopted because their characteristics are best suited to this pathology.
The performance of the developed optimization method was tested through extensive numerical experiments on two basic problems: a) correctly placing an inclusion of known shape and contrast, and b) determining the contrast values inside a sub-region of the sensed domain known to contain an inclusion. In the first case, the results showed that the genetic algorithm overcame the pathology of the problem and converged to the correct solution. In the second case, of higher dimensionality, convergence in an acceptable time was achieved only after introducing a priori information, either as restrictions on the search space or as penalties applied to the error functional.
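Since the abstract attributes the method's success to a genetic algorithm coping with the flat region around the global minimum, a minimal real-coded GA of that general kind (tournament selection, blend crossover, Gaussian mutation) is sketched below. It is an illustration of the technique, not the author's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_minimize(error_fn, bounds, pop=60, gens=200, mut=0.1, elite=2):
    """Minimize error_fn over a box given by bounds = [(lo, hi), ...].
    Population-based search keeps making progress on flat plateaus
    where gradient-based methods stall."""
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        f = np.array([error_fn(x) for x in P])
        order = np.argsort(f)
        nxt = [P[i].copy() for i in order[:elite]]      # elitism
        while len(nxt) < pop:
            i, j = rng.integers(pop, size=2)            # tournament 1
            a = P[i] if f[i] < f[j] else P[j]
            i, j = rng.integers(pop, size=2)            # tournament 2
            b = P[i] if f[i] < f[j] else P[j]
            w = rng.uniform(size=len(lo))
            child = w * a + (1 - w) * b                 # blend crossover
            child += rng.normal(0.0, mut * (hi - lo))   # Gaussian mutation
            nxt.append(np.clip(child, lo, hi))
        P = np.array(nxt)
    f = np.array([error_fn(x) for x in P])
    return P[np.argmin(f)]
```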
4

Z → ττ Cross Section Measurement and ττ Mass Reconstruction with the ATLAS Detector at the LHC

Evangelakou, Despoina, 18 July 2012
No description available.
5

Photon reconstruction for the H.E.S.S. 28 m telescope and analysis of Crab Nebula and galactic centre observations

Holler, Markus, January 2014
In this thesis, the most advanced photon reconstruction technique of ground-based γ-ray astronomy is adapted to the H.E.S.S. 28 m telescope. The method is based on a semi-analytical model of electromagnetic particle showers in the atmosphere. The properties of cosmic γ-rays are reconstructed by comparing the camera image of the telescope with the Cherenkov emission expected from the shower model. To suppress the dominant background from charged cosmic rays, events are selected based on several criteria. The performance of the analysis is evaluated with simulated events. The method is then applied to two sources that are known to emit γ-rays. The first is the Crab Nebula, the standard candle of ground-based γ-ray astronomy. The results for this source confirm the expected performance of the reconstruction method, where the much lower energy threshold compared to H.E.S.S. I is of particular importance. A second analysis is performed on the region around the Galactic Centre. Its results emphasise the capabilities of the new telescope to measure γ-rays in an energy range that is interesting for both theoretical and experimental astrophysics. The presented analysis features the lowest energy threshold ever reached in ground-based γ-ray astronomy, opening a new window on the precise measurement of the physical properties of time-variable sources at energies of several tens of GeV.
6

Development of experimental and analysis methods to calibrate and validate super-resolution microscopy technologies

Salas, Desireé, 27 November 2015
Super-resolution microscopy (SRM) methods such as photoactivated localization microscopy (PALM), stochastic optical reconstruction microscopy (STORM), binding-activated localization microscopy (BALM), and DNA-PAINT represent a new collection of light microscopy techniques that surpass the diffraction-limit barrier (> 200 nm in the visible spectrum).
These methods are based on the localization of bursts of fluorescence from single fluorophores and can reach nanometre resolutions (~20 nm laterally and 50 nm axially). SRM techniques have a broad spectrum of applications in biology and biophysics, giving access to structural and dynamical information on known and unknown biological structures in vivo and in vitro. Many efforts have been made over the last decade to increase the potential of these methods: developing more precise and faster localization techniques, improving fluorophore photophysics, developing algorithms to obtain quantitative information and increase localization precision, and so on. However, very few methods have been developed to dissect image heterogeneity and to extract statistically relevant information from thousands of individual super-resolved images. In my thesis, I specifically tackled these limitations by: (1) constructing objects with nanometre dimensions and well-defined structures, with the possibility of being tailored to any need; these objects are based on DNA origami; (2) developing labeling approaches to image these objects homogeneously, based on adaptations of BALM and DNA-PAINT microscopies; (3) implementing statistical tools to improve image analysis and validation, based on single-particle reconstruction methods commonly applied to image reconstruction in electron microscopy. I applied these developments to reconstruct the 3D shape of two model DNA origami (in one and three dimensions). I show how this method permits the dissection of sample heterogeneity and the combination of similar images to improve the signal-to-noise ratio. Combining different average classes permitted the reconstruction of the three-dimensional shape of the DNA origami. Notably, because the method uses 2D projections of different views of the same structure, it recovers isotropic resolutions in three dimensions. Specific functions were adapted from previous methodologies to quantify the reliability of the reconstructions and their resolution. In the future, these developments will be helpful for the 3D reconstruction of any biological object that can be imaged at super-resolution by PALM-, STORM-, or PAINT-derived methodologies.
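The single-particle-style step, aligning many noisy super-resolved images of the same origami and averaging them, can be sketched as below under the simplifying assumption of translation-only misalignment; the rotation search, classification into classes, and 3D assembly performed in the thesis are omitted.

```python
import numpy as np

def align_to(img, ref):
    """Phase correlation: find the integer shift mapping img onto ref
    and apply it as a circular shift (images assumed zero-padded)."""
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy -= corr.shape[0] * (dy > corr.shape[0] // 2)   # wrap to signed shift
    dx -= corr.shape[1] * (dx > corr.shape[1] // 2)
    return np.roll(img, (dy, dx), axis=(0, 1))

def class_average(images, n_iter=5):
    """Iteratively align every rendered image to the running average and
    re-average: noise cancels while common structure reinforces,
    raising the signal-to-noise ratio of the composite."""
    avg = np.asarray(images[0], dtype=float)
    for _ in range(n_iter):
        avg = np.mean([align_to(im, avg) for im in images], axis=0)
    return avg
```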
7

Observation and modelling of wave breaking

Leckler, Fabien, 18 December 2013
Recent parameterizations used in spectral wave models today provide interesting results for forecasting and hindcasting sea states. Nevertheless, many physical phenomena in these models are still poorly understood and therefore poorly modeled, in particular the dissipation source term due to wave breaking. The work presented in this thesis first analyzes and criticizes the existing parameterizations of dissipation through explicit modeling of the underlying breaking properties. Given the failure of these parameterizations to reproduce in situ and satellite observations of breaking, a new method for observing and analyzing breaking waves is proposed, based on stereo video systems. This method allows the observation of breaking waves on high-resolution sea surfaces reconstructed by stereo triangulation. A complete method for reconstructing sea surfaces in the presence of breaking waves is therefore proposed and validated. The detection of breaking waves in the images and their reprojection onto the reconstructed surfaces is also discussed. Although too few acquisitions are available to draw firm conclusions, an overview is given of the various parameters observable with stereo video. This work shows the value of stereo video systems for better observation and understanding of breaking waves, which is needed to improve the dissipation source term in spectral wave models.
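Stereo triangulation of the sea surface amounts to intersecting, for each matched pixel pair, the viewing rays of two calibrated cameras. A standard linear (DLT) triangulation step is sketched below, not the thesis pipeline itself; `P1` and `P2` are assumed 3×4 projection matrices obtained from a prior calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover one 3D surface point from its pixel coordinates x1 = (u1, v1)
    and x2 = (u2, v2) in two views with projection matrices P1, P2.
    Solves the homogeneous system A X = 0 in the least-squares sense."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # right singular vector of smallest singular value
    return X[:3] / X[3]        # de-homogenize to a 3D point
```

Repeating this over a dense set of matched features yields the high-resolution surface on which breaking events are then located.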
8

Analysis and Computation for the Inverse Scattering Problem with Conductive Boundary Conditions

Rafael Ceja Ayala, 11 April 2024
In this thesis, we consider the inverse problem of reconstructing the shape, position, and size of an unknown scattering object. We discuss different methods used for nondestructive testing in scattering theory. We consider qualitative reconstruction methods to determine important information about the support of unknown scattering objects. We also discuss the material properties of the system, connect them to crucial aspects of the region of interest, and develop useful techniques to determine physical information using inverse scattering theory.

In the first part of the analysis, we consider the transmission eigenvalue (TE) problem associated with the scattering of a plane wave by an isotropic scatterer. In particular, we examine the transmission eigenvalue problem with two conductive boundary parameters. In previous studies, this eigenvalue problem was analyzed with one conductive boundary parameter, whereas we consider the case of two parameters. We prove the existence and discreteness of the transmission eigenvalues. In addition, we study the dependence of the TEs on the physical parameters and connect the first transmission eigenvalue to the physical parameters of the problem by a monotonicity-type argument. Lastly, we consider the limiting procedure as the second boundary parameter vanishes at the boundary of the scattering region and provide numerical examples to validate the theory presented in Chapter 2.

The connection between transmission eigenvalues and the system's physical parameters provides a way to test nondestructively. However, to understand the region of interest in terms of its shape, size, and position, one needs different techniques. We therefore consider reconstructing extended scatterers using a method analogous to the Direct Sampling Method (DSM): a new sampling method based on the Landweber iteration. We derive a factorization of the far-field operator to analyze the imaging function of the new Landweber direct sampling method. We then use the factorization and the Funk–Hecke integral identity to prove that the new imaging function accurately recovers the scatterer. The method studied here falls under the category of qualitative reconstruction methods, where an imaging function is used to retrieve the scatterer. We prove the stability of the new imaging function and derive a discrepancy principle for recovering the regularization parameter. The theoretical results are verified with numerical examples showing how the new Landweber direct sampling method performs.

Motivated by the work on the transmission eigenvalue problem with two conductivity parameters, we also study the direct and inverse problem for isotropic scatterers with two conductive boundary conditions. In this problem, one analyzes the behavior of the scattered field as one of the conductivity parameters vanishes at the boundary. We prove the convergence of the scattered field for two conductivity parameters to the scattered field for one conductivity parameter. We consider the uniqueness of recovering the coefficients from far-field data known at a fixed incident direction for multiple frequencies.
Then we consider the inverse shape problem of recovering the scatterer from far-field data measured at a fixed frequency. To this end, we study the direct sampling method for recovering the scatterer via the factorization of the far-field operator. The direct sampling method is stable with respect to noisy data and valid in two dimensions for partial-aperture data. The theoretical results are verified with numerical examples analyzing the performance of the direct sampling method.
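For orientation, the classical direct sampling indicator on which the thesis's Landweber variant builds can be written in a few lines: sample a grid of points z and correlate the measured far-field pattern with the test function e^{-ik x·z}. The sketch below shows only this baseline indicator, with assumed array shapes; the Landweber version replaces the single inner product with iterated applications of the far-field operator.

```python
import numpy as np

def dsm_indicator(far_field, directions, grid, k):
    """Classical DSM imaging function I(z) = |<u_inf, e^{-ik x.z}>|.
    far_field: (n,) complex u_inf samples for one incident wave;
    directions: (n, 2) unit observation directions; grid: (m, 2) points z."""
    phase = np.exp(-1j * k * grid @ directions.T)     # (m, n) test functions
    I = np.abs(phase @ far_field) / len(directions)   # discretized inner product
    return I / I.max()   # normalized: peaks indicate the scatterer's support
```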
