1. Analysis of an Interferometric Stokes Imaging Polarimeter. Murali, Sukumar. January 2010.
Estimation of Stokes vector components from an interferometric fringe-encoded image is a novel way of measuring the State of Polarization (SOP) distribution across a scene. Imaging polarimeters employing interferometric techniques encode SOP information in a single image in the form of fringes. The lack of moving parts and the use of a single image eliminate the problems of conventional polarimetry: vibration, spurious signal generation due to artifacts, beam wander, and the need for registration routines. However, interferometric polarimeters are limited to narrow band-pass operation and short exposure times, which decrease the Signal-to-Noise Ratio (SNR) in the detected image.

A simulation environment for designing an Interferometric Stokes Imaging Polarimeter (ISIP), together with a detector model including noise effects, is created and presented. A user can image an object with a defined SOP through an ISIP onto a detector, producing a digitized image output. The simulation also includes band-pass imaging capabilities and control of detector noise and object brightness levels.

The Stokes images are estimated from a fringe-encoded image of a scene by means of a reconstructor algorithm. A spatial-domain methodology based on a unit cell and a slide approach is applied to the reconstructor model, which is developed using Mueller calculus. This methodology is validated, and its effectiveness relative to a discrete approach is demonstrated with suitable examples. The pixel size required to sample the fringes and the minimum unit cell size required for reconstruction are investigated using condition numbers. The importance of the PSF of the fore-optics (telescope) used to image the object is investigated using a point-source imaging example, and a Nyquist criterion is presented.

Reconstruction of fringe-modulated images in the presence of noise requires choosing an optimally sized unit cell. The choice of unit cell, based on the size of the polarization domain and the illumination level, is analyzed using a bias-variance tradeoff to obtain the minimum root-mean-square error. A similar tradeoff study is used to analyze the choice of band-pass filters under various illumination levels. Finally, a sensitivity analysis of the ISIP is presented to explore the applicability of this device to detecting low degrees of polarization in areas like remote sensing.
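The unit-cell reconstructor lends itself to a compact illustration. The Python sketch below treats one unit cell as a small linear system: each pixel intensity in the cell is a known linear combination of the scene's Stokes components, with the weights (rows of the analyzer matrix) supplied by the Mueller-calculus model of the interferometer. The 4x4 matrix used here is a hypothetical placeholder rather than the actual ISIP forward model; the condition-number check mirrors the cell-size analysis described above.

```python
import numpy as np

def reconstruct_unit_cell(intensities, analyzer_matrix):
    """Least-squares estimate of the Stokes vector for one unit cell.

    intensities     : (n_pixels,) measured intensities inside the cell
    analyzer_matrix : (n_pixels, 4) rows a_k such that I_k = a_k . [S0, S1, S2, S3],
                      derived in practice from the Mueller model of the instrument
    """
    s_hat, *_ = np.linalg.lstsq(analyzer_matrix, intensities, rcond=None)
    # Condition number of the cell's forward matrix: large values mean the chosen
    # cell size / fringe sampling constrains the Stokes estimate poorly.
    return s_hat, np.linalg.cond(analyzer_matrix)

# Toy 2x2 unit cell (4 pixels, 4 unknowns) with placeholder analyzer vectors.
A = np.array([[1.0,  1.0,  0.0,  0.0],
              [1.0, -1.0,  0.0,  0.0],
              [1.0,  0.0,  1.0,  0.0],
              [1.0,  0.0,  0.0,  1.0]])
true_s = np.array([1.0, 0.3, 0.1, 0.05])
noisy_I = A @ true_s + np.random.normal(0.0, 0.01, size=4)
s_est, kappa = reconstruct_unit_cell(noisy_I, A)
print(s_est, kappa)
```

Sliding the cell across the fringe-encoded image repeats this solve at every position, which is the essence of the unit-cell-and-slide approach; comparing condition numbers for candidate cell sizes is one way to judge whether a cell samples the fringes well enough to constrain all four Stokes components.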
2. Process modeling for proximity effect correction in electron beam lithography (Modélisation des procédés pour la correction des effets de proximité en lithographie électronique). Figueiro, Thiago Rosa. 19 January 2015.
Since the development of the first integrated circuit, the number of components fabricated on a chip has continued to grow while the dimensions of each component have continued to shrink. For each new technology node, the fabrication process has had to cope with the increasing complexity of this scaling. The lithography step is one of the most critical for miniaturization because of the tightened requirements on both the precision and the accuracy of the pattern dimensions printed on the wafer. The current mass-production lithography technique is optical lithography. This technology is facing its resolution limits, and the industry is looking for new approaches such as multi-patterning (MP), EUV lithography, direct write (DW), nano-imprint, or directed self-assembly (DSA). Although these alternatives differ significantly from one another, they have one thing in common: they all rely on e-beam writers at some point in their flow. E-beam lithography is subject to phenomena that impact resolution, such as electron scattering, fogging, acid diffusion, and CMP loading. The solution the industry has adopted is to predict these effects and compensate for them. This correction requires predicting the effects through modeling, hence the importance of developing accurate models of the e-beam process.

In this thesis, the basic concepts of modeling are presented. Topics such as data quality, model selection, and model validation are introduced as tools for modeling e-beam lithography, together with the concepts of local and global sensitivity analysis. Different strategies for global sensitivity analysis are presented and discussed, as well as one of the main aspects of its evaluation, the space-sampling approach. State-of-the-art strategies for today's and future lithography processes are reviewed and their main steps described. First-principle models that explain the physics and chemistry of the steps most influential on process resolution are discussed, along with general compact models for predicting the results of e-beam lithography, and the limitations of the current approach are described. New compact models, expressed as point-spread functions (PSF), are proposed based on new distributions such as the Gamma and Voigt functions; a technique using splines to describe a PSF is also proposed. In addition, a flexible resist model able to capture most of the observed behavior is proposed, based on evaluating the dimensions of any pattern in the layout using appropriate metrics. Results obtained with this method improve on each of the individual PSF distributions for the critical features that limit future technology nodes. Other specific models and strategies for describing and compensating for extreme-long-range effects and for matching two different fabrication processes are also proposed and described in this work.
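To make the compact-model idea concrete, the sketch below composes a radial PSF from the classical short-range and long-range Gaussians plus a Gamma-shaped mid-range term of the kind proposed above, and convolves it with a rasterized layout to predict the deposited energy. All parameter values (alpha, beta, eta, k, theta, nu) and the grid settings are illustrative placeholders, not calibrated process values; a Voigt or spline-based term could be substituted in the same slot.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import gamma as gamma_fn

def psf_radial(r, alpha=0.03, beta=10.0, eta=0.5, k=2.0, theta=3.0, nu=0.2):
    """Radial PSF: double Gaussian (forward + backscatter) plus a Gamma-shaped
    mid-range term. Distances in micrometers; parameter values are illustrative."""
    fwd = np.exp(-r**2 / alpha**2) / (np.pi * alpha**2)
    back = np.exp(-r**2 / beta**2) / (np.pi * beta**2)
    mid = r**(k - 1) * np.exp(-r / theta) / (gamma_fn(k) * theta**k)
    return (fwd + eta * back + nu * mid) / (1.0 + eta + nu)

def exposure_map(layout, pixel_um=0.05, radius_um=10.0):
    """Predict deposited energy by convolving a rasterized dose map (0/1 per pixel)
    with the PSF sampled on the same grid."""
    n = int(radius_um / pixel_um)
    y, x = np.mgrid[-n:n + 1, -n:n + 1] * pixel_um
    kern = psf_radial(np.hypot(x, y))
    kern /= kern.sum()                      # keep total deposited energy fixed
    return fftconvolve(layout, kern, mode="same")

# 1 um isolated line on a 0.05 um grid
layout = np.zeros((512, 512))
layout[:, 246:266] = 1.0
energy = exposure_map(layout)
```

Fitting such a model means adjusting the PSF parameters (and any resist-model terms) until the predicted contours match measured critical dimensions, which is where the calibration layout discussed next comes in.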
The calibration layout is a key factor in providing the calibration algorithm with the experimental data needed to determine the values of the model parameters. Several strategies from the literature are briefly described before introducing one of the main propositions of this thesis: employing variance-based global sensitivity analysis to determine which patterns are most suitable for calibration. A complete flow for selecting the patterns of a calibration layout is presented. A study of the impact of process and metrology variability on the calibration result is also presented, indicating the limits one may expect from the generated model given the quality of the data used. Finally, techniques for assessing the quality of a model, such as cross-validation, are presented and demonstrated in several real cases.
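A minimal version of the variance-based selection idea can be sketched with a pick-and-freeze (Saltelli-type) estimator of first-order Sobol indices. The response function `simulated_cd` below is a purely hypothetical stand-in for the simulated critical dimension of one candidate calibration pattern as a function of three PSF parameters; in practice the response would come from the process simulator itself.

```python
import numpy as np

def sobol_first_order(model, bounds, n=4096, seed=0):
    """First-order Sobol indices via the pick-and-freeze (Saltelli) estimator.

    model  : callable mapping an (m, d) array of parameter sets to m responses
    bounds : (d, 2) array of [low, high] for each model parameter
    """
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    A = lo + (hi - lo) * rng.random((n, d))
    B = lo + (hi - lo) * rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                # vary parameter i, freeze the others
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Hypothetical response: CD of one candidate pattern vs. PSF parameters (alpha, beta, eta).
def simulated_cd(p):
    alpha, beta, eta = p[:, 0], p[:, 1], p[:, 2]
    return 50 + 80 * alpha + 5 * np.sqrt(beta) + 30 * eta * alpha

bounds = np.array([[0.02, 0.08], [5.0, 15.0], [0.3, 0.8]])
print(sobol_first_order(simulated_cd, bounds))
```

Patterns whose simulated response spreads its variance across many parameters (several large indices) constrain the calibration best; a pattern that is insensitive to a given parameter contributes nothing to estimating it, which is the rationale for using these indices to build the calibration layout.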
3. Non-invasive estimation of skin chromophores using Hyperspectral Imaging. Karambor Chakravarty, Sriya. 21 August 2023.
Melanomas account for more than 1.7% of global cancer diagnoses and about 1% of all skin cancer diagnoses in the United States. This type of cancer arises in the melanin-producing cells of the epidermis and manifests as skin lesions with distinctive variations in melanin and blood concentration. The current approach to evaluating skin cancer lesions involves visual inspection with a dermatoscope, typically followed by biopsy and histopathological analysis. However, to decrease the risk of misdiagnosis, this process results in many unnecessary biopsies, adding to patients' emotional and financial distress. A non-invasive imaging technique to aid the analysis of skin lesions at an early stage could mitigate these consequences.
Hyperspectral imaging (HSI) has shown promise as a non-invasive technique for analyzing skin lesions. Images of human skin taken with a hyperspectral camera reflect contributions from numerous constituents of the skin. Being a turbid, inhomogeneous material, the skin contains chromophores and scattering agents that interact with light and produce a characteristic back-scattered signal that can be captured and examined with an HSI camera. In this study, a mathematical model of the skin is used to extract meaningful information from the hyperspectral data in the form of melanin concentration, blood volume fraction, and blood oxygen saturation. The skin is modelled as a bi-layer planar system whose surface reflectance is calculated using Kubelka-Munk theory and the Beer-Lambert absorption law. Hyperspectral images of the dorsal side of three volunteer subjects' hands, acquired over the 400-1000 nm range, were used to estimate the contributing parameters. The mean and standard deviation of these estimates are reported and compared with values from the literature. The model is also evaluated for its sensitivity with respect to these parameters, and then fitted to measured hyperspectral data of three volunteer subjects under different conditions. The wavelengths and wavelength bands identified as producing the largest change in the modelled percentage reflectance were 450 and 660 nm for melanin, 500-520 nm and 590-625 nm for blood volume fraction, and 606, 646, and 750 nm for blood oxygen saturation.

/ Master of Science /

Melanoma, the most serious type of skin cancer, develops in the melanin-producing cells of the epidermis. A characteristic marker of skin lesions is abrupt variation in melanin and blood concentration across areas of the lesion. The present technique for inspecting skin cancer lesions is dermatoscopy, a qualitative visual analysis of the lesion's features using a few standardized schemes such as the 7-point checklist and the ABCDE rule. Typically, dermatoscopy is followed by a biopsy and then a histopathological analysis of the biopsy. To reduce the possibility of misdiagnosing actual melanomas, a considerable number of dermoscopically unclear lesions are biopsied, with emotional, financial, and medical consequences. A non-invasive imaging technique to analyze skin lesions at the dermoscopic stage could help alleviate some of these consequences.
Hyperspectral imaging (HSI) is a promising methodology for non-invasively analyzing skin lesions. Images of human skin taken with a hyperspectral camera reflect contributions from numerous constituents of the skin. Being a turbid, inhomogeneous material, the skin contains chromophores and scattering agents that interact with light and produce a characteristic back-scattered signal that can be captured and analyzed with an HSI camera. In this study, a mathematical model of the skin is used to extract meaningful information from the hyperspectral data in the form of melanin concentration, blood volume fraction, and blood oxygen saturation. The mean and standard deviation of these estimates are reported and compared with values from the literature. The model is also evaluated for its sensitivity with respect to these parameters, and then fitted to measured hyperspectral data of six volunteer subjects under different conditions. The wavelengths that capture the most influential changes in the model response are identified as 450 and 660 nm for melanin, 500-520 nm and 590-625 nm for blood volume fraction, and 606, 646, and 750 nm for blood oxygen saturation.
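As a rough illustration of the forward model described in this abstract, the sketch below stacks a Beer-Lambert epidermis (melanin) on top of a semi-infinite Kubelka-Munk dermis (blood) and fits the three chromophore parameters to a measured reflectance spectrum. The layer structure, the default scattering spectrum, and the power-law melanin absorption are simplifying assumptions drawn from common literature approximations rather than the exact model of the thesis, and the hemoglobin absorption spectra must be supplied from tabulated data.

```python
import numpy as np
from scipy.optimize import least_squares

def skin_reflectance(wl_nm, f_mel, f_blood, s_o2, mua_hbo2, mua_hb,
                     d_epi_cm=0.006, s_derm=None):
    """Two-layer skin reflectance: Beer-Lambert epidermis over a Kubelka-Munk dermis.

    wl_nm            : wavelengths (nm)
    f_mel            : melanin volume fraction in the epidermis
    f_blood, s_o2    : dermal blood volume fraction and oxygen saturation
    mua_hbo2, mua_hb : absorption spectra of oxy-/deoxy-hemoglobin (1/cm),
                       supplied from tabulated data (not included here)
    """
    # Melanin absorption, common power-law approximation: ~6.6e11 * lambda^-3.33 (1/cm)
    mua_mel = 6.6e11 * wl_nm ** (-3.33)
    # Dermal absorption from blood; scattering defaults to a simple placeholder spectrum
    mua_derm = f_blood * (s_o2 * mua_hbo2 + (1 - s_o2) * mua_hb)
    s = s_derm if s_derm is not None else 50.0 * (wl_nm / 500.0) ** (-1.0)
    # Kubelka-Munk reflectance of a semi-infinite dermis: R = 1 + K/S - sqrt((K/S)^2 + 2K/S)
    k_over_s = mua_derm / s
    r_derm = 1 + k_over_s - np.sqrt(k_over_s ** 2 + 2 * k_over_s)
    # Double pass through the melanin-bearing epidermis (Beer-Lambert attenuation)
    t_epi = np.exp(-f_mel * mua_mel * d_epi_cm)
    return t_epi ** 2 * r_derm

def fit_parameters(wl_nm, measured_R, mua_hbo2, mua_hb):
    """Estimate (f_mel, f_blood, s_o2) from one measured HSI reflectance spectrum."""
    def residual(p):
        return skin_reflectance(wl_nm, *p, mua_hbo2, mua_hb) - measured_R
    return least_squares(residual, x0=[0.05, 0.02, 0.7],
                         bounds=([0, 0, 0], [1, 1, 1])).x
```

A finite-difference scan of skin_reflectance with respect to each parameter, wavelength by wavelength, reproduces the kind of sensitivity analysis used above to single out the most informative bands for melanin, blood volume fraction, and oxygen saturation.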