1. Efficient Numerical Methods for High-Dimensional Approximation Problems. Wolfers, Sören, 06 February 2019.
In the field of uncertainty quantification, the effects of parameter uncertainties on scientific simulations may be studied by integrating or approximating a quantity of interest as a function over the parameter space. If this is done numerically, using regular grids with a fixed resolution, the required computational work increases exponentially with respect to the number of uncertain parameters – a phenomenon known as the curse of dimensionality. We study two methods that can help break this curse: discrete least squares polynomial approximation and kernel-based approximation. For the former, we adaptively determine sparse polynomial bases and use evaluations in random, quasi-optimally distributed evaluation nodes; for the latter, we use evaluations in sparse grids, as introduced by Smolyak. To mitigate the additional cost of solving differential equations at each evaluation node, we extend multilevel methods to the approximation of response surfaces. For this purpose, we provide a general analysis that exhibits multilevel algorithms as special cases of an abstract version of Smolyak’s algorithm.
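As a rough illustration of the combination technique behind Smolyak's algorithm, the sketch below assembles a sparse quadrature rule from a nested one-dimensional hierarchy. The trapezoidal hierarchy, the level convention, and the choice of quadrature (rather than interpolation) are illustrative assumptions, not the constructions analysed in the thesis.

```python
import numpy as np
from itertools import product
from math import comb

def trap_rule(level):
    """Nested trapezoidal rule on [0, 1]: one midpoint node at level 0,
    2**level + 1 equispaced nodes otherwise."""
    if level == 0:
        return np.array([0.5]), np.array([1.0])
    n = 2 ** level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def smolyak_quadrature(f, d, L):
    """Sparse quadrature of f over [0, 1]^d at level L via the combination
    technique: a signed sum of small tensor-product rules."""
    total = 0.0
    for levels in product(range(L + 1), repeat=d):
        s = sum(levels)
        if s < max(0, L - d + 1) or s > L:
            continue
        coeff = (-1) ** (L - s) * comb(d - 1, L - s)
        rules = [trap_rule(l) for l in levels]
        # tensor-product rule for this multi-index, weighted by coeff
        for pt, wt in zip(product(*(r[0] for r in rules)),
                          product(*(r[1] for r in rules))):
            total += coeff * np.prod(wt) * f(np.array(pt))
    return total

# Example: a 5-dimensional integrand, using far fewer nodes than a full
# tensor grid of comparable resolution would require.
f = lambda x: np.exp(-np.sum(x ** 2))
print(smolyak_quadrature(f, d=5, L=4))
```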
In financial mathematics, high-dimensional approximation problems occur in the pricing of derivatives with multiple underlying assets. The value function of American options can theoretically be determined backwards in time using the dynamic programming principle. Numerical implementations, however, face the curse of dimensionality because each asset corresponds to a dimension in the domain of the value function. Lack of regularity of the value function at the optimal exercise boundary further increases the computational complexity. As an alternative, we propose a novel method that determines an optimal exercise strategy as the solution of a stochastic optimization problem and subsequently computes the option value by simple Monte Carlo simulation. For this purpose, we represent the American option price as the supremum of the expected payoff over a set of randomized exercise strategies. Unlike the corresponding classical representation over subsets of Euclidean space, this relaxation gives rise to a well-behaved objective function that can be globally optimized using standard optimization routines.
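The two-stage approach, first optimizing a randomized exercise rule and then re-pricing by plain Monte Carlo, can be sketched for a one-dimensional Bermudan put. The logistic parametrization of the exercise probability and all market parameters below are illustrative assumptions rather than the thesis's construction; any such randomized strategy yields a lower bound on the option value.

```python
import numpy as np
from scipy.optimize import minimize

S0, K, r, sigma, T, n_steps = 100.0, 100.0, 0.05, 0.2, 1.0, 10
dt = T / n_steps

def simulate_paths(n_paths, seed):
    """Geometric Brownian motion sampled at the exercise dates."""
    z = np.random.default_rng(seed).standard_normal((n_paths, n_steps))
    increments = (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    return S0 * np.exp(np.cumsum(increments, axis=1))

def expected_payoff(theta, paths):
    """Expected discounted payoff of a randomized exercise rule:
    at each date, exercise with probability sigmoid(a * (K - S_t) + b)."""
    a, b = theta
    alive = np.ones(paths.shape[0])   # prob. not yet exercised
    value = np.zeros(paths.shape[0])
    for t in range(n_steps):
        s = paths[:, t]
        p_ex = 1.0 / (1.0 + np.exp(-(a * (K - s) + b)))
        if t == n_steps - 1:
            p_ex = np.ones_like(s)    # forced decision at maturity
        payoff = np.exp(-r * dt * (t + 1)) * np.maximum(K - s, 0.0)
        value += alive * p_ex * payoff
        alive *= 1.0 - p_ex
    return value.mean()

# Stage 1: the objective is smooth in theta and can be optimized globally
# with a standard routine.
train = simulate_paths(20_000, seed=1)
res = minimize(lambda th: -expected_payoff(th, train), x0=[0.5, -2.0],
               method="Nelder-Mead")
# Stage 2: unbiased re-pricing on fresh paths (a lower bound on the value).
test = simulate_paths(200_000, seed=2)
print("Bermudan put lower bound:", expected_payoff(res.x, test))
```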
2. On Methodology for Verification, Validation and Uncertainty Quantification in Power Electronic Converters Modeling. Rashidi Mehrabadi, Niloofar, 18 September 2014.
This thesis provides insight into quantitative accuracy assessment of the modeling and simulation of power electronic converters. Verification, Validation, and Uncertainty Quantification (VV&UQ) provides a means to quantify the disagreement between computational and experimental results, replacing qualitative comparisons with quantitative ones. Because modeling and simulation are used so broadly in power electronics, VV&UQ is needed to evaluate the credibility of simulation results, and the topic deserves dedicated study for power electronic converters. To this end, a formal procedure for VV&UQ of power electronic converters is presented, together with definitions of the fundamental terms in the proposed framework.
The accuracy of the switching model of a three-phase Voltage Source Inverter (VSI) is quantitatively assessed following the proposed procedure. Accordingly, this thesis describes the hardware design and development of the switching model of the three-phase VSI.
3. Uncertainty quantification for risk assessment of loss-of-coolant accident frequencies in nuclear power plants. Pan, Ying-An, 02 December 2013.
This research presents the methodologies used to resolve the Nuclear Regulatory Commission Generic Safety Issue 191. The presented results are specific to South Texas Project Nuclear Operating Company (STPNOC). However, the proposed methodologies may be applicable to other nuclear power plants given the appropriate plant-specific frequencies.
This research provides important inputs to CASA Grande, a computer program used to model physical phenomena and quantify uncertainties to obtain estimates of failure probabilities for post-loss-of-coolant-accident events at the STPNOC containment. We provide modeling and sampling methods for loss-of-coolant accident (LOCA) frequencies and break sizes. We focus on a study known as NUREG-1829 (Tregoning et al., 2008), which includes an expert elicitation of quantiles governing the annual frequency of a LOCA in boiling water reactors and pressurized water reactors. We propose to model LOCA frequencies with bounded Johnson distributions and to sample break sizes using uniform distributions. We then develop a new method to distribute LOCA frequencies to different locations within a plant, accounting for location-dependent differences while preserving the NUREG-1829 frequencies. We also propose to linearly interpolate the NUREG-1829 LOCA frequencies to obtain frequencies for break sizes other than those tabulated in NUREG-1829. In addition, we present a method to obtain the distribution of LOCA frequency within a break-size interval, providing important inputs to the probabilistic risk assessment quantification for STPNOC.
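A minimal sketch of these two sampling ingredients, assuming hypothetical Johnson parameters and tabulated frequencies in place of the fitted NUREG-1829 values:

```python
import numpy as np
from scipy.stats import johnsonsb

# Bounded Johnson (SB) model for the annual frequency of one break-size
# category; the shape/location/scale values are placeholders for parameters
# fitted to the elicited quantiles.
freq_dist = johnsonsb(a=2.0, b=0.8, loc=1e-6, scale=1e-3)
freq_samples = freq_dist.rvs(size=10_000, random_state=1)

# Exceedance frequencies tabulated at a few break sizes (illustrative values);
# intermediate break sizes are handled by linear interpolation.
break_sizes = np.array([0.5, 1.5, 3.0, 7.0, 14.0, 31.0])      # inches
exceed_freq = np.array([8e-3, 1e-3, 2e-4, 1e-5, 4e-7, 2e-8])  # per year

# Break sizes within an interval are sampled uniformly, then mapped to a
# frequency on the interpolated curve.
sizes = np.random.default_rng(2).uniform(1.5, 3.0, size=10_000)
freqs = np.interp(sizes, break_sizes, exceed_freq)
```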
We review methods of combining the probability distributions of multiple experts to obtain a single probability distribution. More specifically, we describe the relative merits of the arithmetic mean (AM) and geometric mean (GM) as ways of performing this aggregation in the context of probabilities associated with rare events. Examining a set of pressurized water reactor results from NUREG-1829, we conclude that the GM represents a consistently sensible notion of the middle of the opinions expressed by nine experts. We further conclude that the AM is inappropriate for representing the center of the group's opinion for large effective break sizes. Instead, as the break size grows large a single expert's opinion dominates the combination produced by the AM.
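The dominance effect is easy to reproduce with hypothetical numbers: when estimates spread over several orders of magnitude, the AM tracks the single largest opinion while the GM stays near the median.

```python
import numpy as np

# Nine hypothetical expert estimates (events per year) for a large break,
# spread over several orders of magnitude as in the NUREG-1829 elicitation.
experts = np.array([2e-8, 5e-8, 1e-7, 3e-7, 8e-7, 2e-6, 7e-6, 3e-5, 4e-4])

am = experts.mean()                  # arithmetic mean
gm = np.exp(np.log(experts).mean())  # geometric mean

print(f"AM     = {am:.1e}  (dominated by the largest estimate {experts.max():.0e})")
print(f"GM     = {gm:.1e}")
print(f"median = {np.median(experts):.1e}")
```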
4. Predictive simulations of ammonia spray dynamics and multi-regime combustion: fundamental physics and modeling aspects. Angelilli, Lorenzo, 06 1900.
Because of its thermochemical qualities, ammonia is an attractive alternative to carbon-based fuels: the lack of carbon atoms in its molecular structure and its ease of storage make widespread use desirable. However, a number of technological challenges must be overcome due to its slow burning rate and large latent heat. The objective of the dissertation is to model ammonia spray flames, because direct liquid-fuel injection into a combustion chamber is an essential aspect of the design of practical devices. The topic is divided into several sub-problems, examined chapter by chapter, reflecting both the lack of fundamental physical understanding of the individual processes involved and modeling considerations that can no longer be ignored.

To better understand how the large latent heat affects the spray dynamics, a campaign of direct numerical simulations is first performed at various ambient temperatures. Large eddy simulations are then preferred to lower the computational cost. An assessment of the available dispersion models showed that none reproduces the averaged droplet distribution across the entire domain, and an improved model is proposed. Droplet evaporation causes local inhomogeneities in the mixture, which induce multiple simultaneous combustion modes. The Darmstadt Multi-Regime Burner (MRB) was the ideal candidate for investigating these physical aspects first. The physically derived multi-modal manifold proved the best option for capturing its flame structure, and a regime classification index is formulated and tested on the MRB. Finally, a machine learning strategy based on neural networks is proposed to accelerate the look-up procedure; preliminary validation of the methodology showed a 30% reduction in time without affecting the accuracy of the results.
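The look-up acceleration can be sketched generically: replace interpolation in a tabulated manifold by a small regression network. The analytic "table" and the scikit-learn MLP below are stand-ins, not the dissertation's manifold or network architecture.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from sklearn.neural_network import MLPRegressor

# Hypothetical 2D manifold table: (mixture fraction Z, progress variable C)
# -> a tabulated quantity such as a reaction source term; the analytic
# shape below merely stands in for a real flamelet/manifold table.
Z = np.linspace(0.0, 1.0, 101)
C = np.linspace(0.0, 1.0, 101)
ZZ, CC = np.meshgrid(Z, C, indexing="ij")
table = np.exp(-((ZZ - 0.35) ** 2) / 0.02) * CC * (1.0 - CC)

lookup = RegularGridInterpolator((Z, C), table)  # conventional look-up

# Train a small MLP on the tabulated points to emulate the look-up.
X = np.column_stack([ZZ.ravel(), CC.ravel()])
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, table.ravel())

query = np.array([[0.4, 0.5]])
print("table:", lookup(query)[0], " MLP:", net.predict(query)[0])
```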
5. Fabrication and characterization of graphene nanoribbons epitaxially grown on SiC(0001). Aranha Galves, Lauren, 29 November 2018.
Monolayer graphene nanoribbons (GNRs) were synthesized on SiC(0001) substrates with two different miscut angles at temperatures ranging from 1410 to 1460 °C. GNR growth at lower step heights is best described by an exponential growth rate, which is correlated with the energy barrier for Si out-diffusion. For substrates with higher steps, on the other hand, a non-exponential rate is observed, which is associated with the formation of few-layer graphene at the step edges.

Oxygen intercalation of epitaxial GNRs is investigated next by air-annealing ribbons grown on different SiC(0001) substrates. Besides converting monolayer into bilayer graphene near the step edges of SiC, the oxygen intercalation also leads to the formation of an oxide layer on the terraces of the substrate, leaving the bilayer GNRs electronically isolated from each other. Electrical characterization of the bilayer GNRs reveals that the ribbons are electrically decoupled from the substrate by the oxygen treatment. A robust hole concentration of around 1×10¹³ cm⁻² and mobilities up to 700 cm²/(Vs) at room temperature are measured for GNRs with a typical width of 100 nm.

Well-defined mesa structures patterned by electron-beam lithography on the surface of SiC substrates are investigated last. Transport characterization of GNRs grown on the sidewalls of the patterned terraces shows mobilities in the range of 1000 to 2000 cm²/(Vs), homogeneous across various structures throughout the sample, indicating the reproducibility of this fabrication method and its potential for implementation in future technologies based on epitaxially grown GNRs.
6. Contrast varied small-angle scattering on disordered materials using X-ray, neutron, and anomalous scattering. Gericke, Eike, 28 January 2022.
The investigation of material structures and their evolution under in situ conditions is the main focus of this work. Nanoscopic structural motifs in amorphous, disordered, and porous solids are identified and quantified using small-angle scattering techniques.

Three different scientific questions concerning three different material systems are discussed. First, the nanostructure of density fluctuations in hydrogenated amorphous silicon (a-Si:H) is characterized. In the a-Si:H materials studied, two distinct phases embedded in the a-Si:H matrix were identified and quantified via their scattering cross-sections; these results answer a question about a-Si:H that had remained open for 20 years. Second, the adsorption, condensation, and desorption of xenon (Xe) confined in the pores of a mesoporous silicon (Si) membrane is studied in situ using Xe-specific characterization methods. The results lead to a detailed understanding of the physisorption of Xe in porous silicon and reveal clear differences between pore-filling and pore-emptying mechanisms. Finally, the natural aging (NA) of an aluminum-magnesium-silicon model alloy (Al-0.6Mg-0.8Si) is discussed. The scattering experiments indicate the presence of segregation zones and support their interpretation as MgSi nanophases in the Al matrix.
7. Seismic experimental analyses and surrogate models of multi-component systems in special-risk industrial facilities. Nardin, Chiara, 22 December 2022.
Earthquakes are among the most catastrophic natural events, with significant human, socio-economic, and environmental impacts. Observations of damage following recent major and moderate seismic events, together with numerical and experimental studies, clearly show that critical non-structural components (NSCs), which are ubiquitous in industrial facilities, are particularly, even disproportionately, vulnerable to such events.

Despite their importance, however, seismic provisions for industrial facilities and their process equipment are still based on the classical load-and-resistance-factor design (LRFD) approach, where a performance-based earthquake engineering (PBEE) approach should instead be preferred. Along these lines, much recent research has been devoted to computational fragility frameworks for special-risk industrial components and structures.
Within a PBEE perspective, however, studies have clearly identified: i) a lack of defined performance objectives for NSCs; and ii) the need for comprehensive experimental data on coupling effects between main structures and NSCs. In this respect, this doctoral thesis introduces a computational framework for efficient and accurate seismic state-dependent fragility analysis, combining data from an extensive experimental shake-table campaign on a full-scale prototype industrial steel frame structure with recent advances in surrogate-based forward uncertainty quantification (UQ). Specifically, the framework is applied to a real-world case: seismic shake-table tests of a representative industrial multi-storey frame structure equipped with complex process components, carried out at the EUCENTRE facility in Italy within the European SPIF project (Seismic Performance of Multi-Component Systems in Special Risk Industrial Facilities). The experimental campaign also aims to improve the understanding of these complex systems and the knowledge of FE modelling techniques. The main goals are to reduce the large computational burden and to assess when coupling effects between NSCs and the main structure become important. Insights from innovative monitoring systems were used to develop and validate numerical and analytical models. At the same time, Der Kiureghian's stochastic site-based ground motion model (GMM) was adopted to excite the process equipment severely and to compensate for the scarcity of real records with the specific frequency content needed to enhance coupling effects.

Finally, to assess the seismic risk of NSCs in these special facilities, the thesis introduces state-dependent fragility curves that account for the accumulation of damage over sequential seismic events. The computational burden is alleviated by polynomial chaos expansion (PCE) surrogate models: the dimensionality of the seismic input random vector is reduced by principal component analysis (PCA) of the experimental realizations, and separate PCE coefficients are then determined by bootstrapping on the experimental design, yielding a full response sample at each point, from which empirical state-dependent fragility curves are derived.
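A minimal sketch of the surrogate step, fitting a total-degree polynomial chaos expansion by least squares on whitened (PCA-reduced) inputs; the Hermite basis, degree, and toy data are generic assumptions, not the thesis's actual construction or bootstrap procedure.

```python
import numpy as np
from math import factorial
from itertools import product
from numpy.polynomial.hermite_e import hermeval

def pce_fit(xi, y, max_deg):
    """Least-squares PCE in orthonormal probabilists' Hermite polynomials.
    xi: (n, d) whitened inputs (e.g. leading PCA scores of the seismic
    input realizations), y: (n,) structural response samples."""
    n, d = xi.shape
    alphas = [a for a in product(range(max_deg + 1), repeat=d)
              if sum(a) <= max_deg]        # total-degree multi-index set
    Psi = np.ones((n, len(alphas)))
    for j, alpha in enumerate(alphas):
        for k, deg in enumerate(alpha):
            c = np.zeros(deg + 1)
            c[deg] = 1.0
            Psi[:, j] *= hermeval(xi[:, k], c) / np.sqrt(factorial(deg))
    coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    return alphas, coeffs

# Toy data: 3 whitened input components and a smooth response.
rng = np.random.default_rng(0)
xi = rng.standard_normal((500, 3))
y = np.sin(xi[:, 0]) + 0.3 * xi[:, 1] * xi[:, 2]
alphas, coeffs = pce_fit(xi, y, max_deg=4)

# With an orthonormal basis, mean and variance fall out of the coefficients.
mean = coeffs[alphas.index((0, 0, 0))]
var = np.sum(coeffs ** 2) - mean ** 2
print(f"surrogate mean = {mean:.3f}, variance = {var:.3f}")
```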
8. Optimal iterative solvers for linear systems with stochastic PDE origins: balanced black-box stopping tests. Pranjal, Pranjal, January 2017.
The central theme of this thesis is the design of optimal balanced black-box stopping criteria for iterative solvers of symmetric positive-definite, symmetric indefinite, and nonsymmetric linear systems arising from finite element approximation of stochastic (parametric) partial differential equations. For a given stochastic and spatial approximation, iteratively solving the corresponding linear(ized) system(s) to an unnecessarily tight algebraic error tolerance wastes computational resources without decreasing the (usually unknown) approximation error. To stop optimally, avoiding both unnecessary computations and premature stopping, the algebraic error and an a posteriori approximation error estimate must be balanced at the optimal stopping iteration. Efficient and reliable a posteriori error estimators exist for closely estimating the approximation error in a finite element setting, but the algebraic error is generally unknown since the exact algebraic solution is usually unavailable. The distinctive feature of the proposed optimal balanced stopping strategy is that it obtains tractable upper and lower bounds on the algebraic error in terms of a readily computable and monotonically decreasing quantity of the chosen iterative solver. Moreover, this work states the exact constants; that is, there are no user-defined parameters in the optimal balanced stopping tests, so an iterative solver incorporating this methodology is a black-box solver. Employing such a stopping methodology typically leads to large computational savings and in any case rules out premature stopping.

The constants in the devised optimal balanced black-box stopping tests for the MINRES solver, covering symmetric positive-definite and symmetric indefinite linear systems, can be estimated cheaply on the fly. For the nonsymmetric case, the thesis goes one step further: it provides optimal balanced black-box stopping tests not only for a memory-expensive Krylov solver like GMRES but also for memory-inexpensive Krylov solvers such as BICGSTAB(L) and TFQMR. Little convergence theory currently exists for memory-inexpensive Krylov solvers, so devising stopping criteria for them is an active field of research. An optimal balanced black-box stopping criterion is also proposed for the nonlinear (Picard or Newton) iterations used to solve the finite-dimensional Navier-Stokes equations.

The optimal balanced black-box stopping methodology presented here can be generalized to any iterative solver for a linear(ized) system arising from numerical approximation of a partial differential equation. The only prerequisites are a cheap and tight a posteriori estimator for the approximation error, together with cheap and tractable bounds on the algebraic error.
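The balancing idea can be sketched with plain conjugate gradients on a toy symmetric positive-definite system (the thesis treats MINRES and other Krylov solvers, not CG): iterate only until a computable algebraic-error proxy drops below the a posteriori discretization estimate. The residual norm and the unit safety constant below stand in for the tractable bounds and exact constants derived in the thesis.

```python
import numpy as np
from scipy.sparse import diags

def cg_balanced(A, b, eta_disc, safety=1.0, maxiter=10_000):
    """Conjugate gradients with a balanced stopping test: stop once the
    algebraic-error proxy (here simply the residual norm) falls below the
    a posteriori discretization-error estimate eta_disc."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= safety * eta_disc:  # balanced: stop here,
            return x, k                           # further work is wasted
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxiter

# Toy SPD system standing in for a stochastic Galerkin matrix.
n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
eta_disc = 1e-4  # assumed supplied by an a posteriori error estimator
x, iters = cg_balanced(A, b, eta_disc)
print(f"balanced stop after {iters} iterations")
```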
9. Uncertainty Estimation in Volumetric Image Segmentation. Park, Donggyun, January 2023.
The performance of deep neural networks and the estimation of their robustness have developed rapidly. In contrast, despite the broad use of deep convolutional neural networks (CNNs) [1] for medical image segmentation, far less research has been conducted on their uncertainty estimation. By their nature, deep learning tools do not capture model uncertainty, so the output of deep neural networks needs to be analysed critically with quantitative measurements, especially for applications in the medical domain. In this work, epistemic uncertainty, one of the two main types of uncertainty (epistemic and aleatoric), is analyzed and measured for volumetric medical image segmentation tasks at the pixel and structure levels. The baseline deep neural network is the 3D U-Net architecture [2], which shares its essential structural concept with the U-Net architecture [3]. Various techniques are applied to quantify the uncertainty and obtain statistically meaningful results, including test-time data augmentation and deep ensembles. The distribution of the pixel-wise predictions is estimated by Monte Carlo simulation, and the entropy is computed to quantify and visualize how uncertain (or certain) the prediction for each pixel is.

Given the long network training times in volumetric image segmentation, training an ensemble of networks is extremely time-consuming, so the focus is on data augmentation and test-time dropout. The desired outcome is to reduce the computational cost of measuring the uncertainty of the model predictions while maintaining the same level of estimation performance, and to increase the reliability of the uncertainty estimation map compared to conventional methods. The proposed techniques are evaluated on a publicly available volumetric image dataset, Combined Healthy Abdominal Organ Segmentation (CHAOS, a set of 3D in-vivo images) from Grand Challenge (https://chaos.grand-challenge.org/). Experiments on the liver segmentation task in 3D Computed Tomography (CT) show the relationship between the prediction accuracy and the uncertainty map obtained by the proposed techniques.
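The pixel-wise entropy computation at the core of the approach is compact; the sketch below assumes binary (organ versus background) probabilities collected from a number of stochastic forward passes, with shapes chosen purely for illustration.

```python
import numpy as np

def entropy_map(prob_samples, eps=1e-12):
    """Pixel-wise predictive entropy from T stochastic forward passes.
    prob_samples: (T, D, H, W) foreground probabilities, e.g. from
    test-time augmentation or test-time dropout of a 3D U-Net."""
    p = np.clip(prob_samples.mean(axis=0), eps, 1.0 - eps)  # MC estimate
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))   # binary entropy

# Illustrative use: 8 passes over a small volume; large values flag voxels
# where the passes disagree (typically near organ boundaries).
samples = np.random.default_rng(0).uniform(size=(8, 16, 64, 64))
uncertainty = entropy_map(samples)
```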
10. Molecular Beam Epitaxy of II-VI Quantum Dots (Molekularstrahlepitaxie von II-VI Quantenpunkten). Kratzert, Philipp, 03 July 2002.
In this work, CdSe and (Cd,Mn)Se quantum dots (QDs) are grown on ZnSe by molecular beam epitaxy. The QDs are obtained via a thermally activated 2D-3D transition of an initially two-dimensional CdSe/(Cd,Mn)Se thin film. The physics behind the 2D-3D transition is investigated in situ by reflection high-energy electron diffraction and atomic force microscopy (UHV-AFM). Additionally, ex-situ data obtained on buried QD structures by transmission electron microscopy (TEM) and photoluminescence (PL)/magneto-PL measurements are presented. The study proves for the first time that the thermal activation establishes a Stranski-Krastanov morphology of CdSe QDs, with cores of pure CdSe, on a closed wetting layer. A statistical evaluation of the QD morphology by UHV-AFM yields an average QD height of about 1.6 nm, a QD density of about 1100/µm², and diameters below about 10 nm; from these parameters, an upper limit of less than 0.15 ML is obtained for the total QD volume. TEM on overgrown structures shows that interdiffusion is of minor importance both during QD formation and during overgrowth. The stability of the QD morphology is investigated with the UHV-AFM at room temperature: over a period of 5 days, in UHV as well as in ambient air, no ripening of the CdSe QD morphology is observed, demonstrating that CdSe QDs do not ripen on a laboratory time scale. The investigation of the formation kinetics reveals that the 2D-3D transition depends on a temporally unstable precursor state and is therefore kinetically determined. The experiments indicate that immediately after growth the surface begins to smoothen on an atomic scale; in a simple model, QD formation is described as a superposition of this smoothening and the ratio of upward to downward hopping probabilities between two ML terraces.

First investigations of the growth of (Cd,Mn)Se QDs show that semimagnetic QDs with Mn concentrations of up to about 10% can be obtained by thermal activation. The incorporation of Mn reduces the average QD density and QD height. Experiments on pseudomorphic and relaxed ZnSe buffers show that the decisive influence of Mn is not a change in strain but probably a change in the surface diffusivity of the atomic species. Magneto-optical investigations of the (Cd,Mn)Se QD structures clearly demonstrate the giant Zeeman effect; effective g-factors of up to 220 are measured (B = 6 T). Comparison with calculations confirms that the Mn is incorporated into the QDs. This work extends the understanding of II-VI QD formation and achieves improved control over the QD ensemble. The semimagnetic structures produced are a starting point for future optical experiments in which single spins can be deliberately manipulated and studied.