571 |
Formation of massive seed black holes by direct collapse in the early universe
Agarwal, Bhaskar, 06 December 2013
No description available.
|
572 |
Search for extra-solar planets with high precision radial velocity curves
Brucalassi, Anna, 28 July 2014
This Ph.D. thesis is devoted to the study of extrasolar planets with the radial velocity technique, from both the instrumental and the observational point of view. The work is composed of two main parts: the upgrade of FOCES, a high-resolution spectrograph that will be installed next year at the Wendelstein Observatory, and the search for giant planets around stars in the open cluster Messier 67 (M67). / The present dissertation deals with the search for extra-solar planets using the radial velocity method, both with regard to the instrumentation required for it and to the observations themselves. The work is divided into two parts. The first part describes the improvements made to the high-resolution spectrograph FOCES, which will be installed at the Wendelstein Observatory in the coming year. The second part deals with the search for gas planets in the open cluster M67.
|
573 |
The sub-mJy radio population in the Extended Chandra Deep Field South
Bonzini, Margherita, 17 July 2014
Deep radio observations provide a dust unbiased view of both black hole (BH) and star
formation (SF) activity and therefore represent a powerful tool to investigate their evolution
and their possible mutual influence across cosmic time. Radio astronomy is therefore
becoming increasingly important for galaxy evolution studies thanks also to the many new
radio facilities under construction or being planned. To maximise the potential of these
new instruments it is crucial to make predictions of what they will observe and to determine how
best to complement the radio data with multi-wavelength information.
This is the motivation of my thesis, in which I studied a sample of 900 sources
detected in one of the deepest radio surveys ever made. The observations have been
performed at 1.4 GHz with the Very Large Array on the Extended Chandra Deep Field
South. I developed a multi-wavelength method to identify the optical-infrared counterparts
of the radio sources and to classify them as radio-loud active galactic nuclei (RL AGNs),
radio-quiet (RQ) AGNs, and star forming galaxies (SFGs). I was able for the first time to
quantify the relative contribution of these different classes of sources down to a radio flux
density limit of ∼30 μJy.
I characterized the host galaxy properties (stellar masses, optical colors, and morphology)
of the radio sources; RQ AGN hosts and SFGs have similar properties with disk
morphology and blue colors while radio-loud AGN hosts are more massive, redder and
mostly ellipticals. This suggests that the RQ and RL activity occurs at two different evolutionary
stages of the BH-host galaxy co-evolution. The RQ phase occurs at earlier times
when the galaxy is still gas rich and actively forming stars while the radio activity of the
BH appears when the galaxy has already formed the bulk of its stellar population, the gas
supply is lower, and the SF is considerably reduced.
I quantified the star formation rate (SFR) of the radio sources using two independent
tracers, the radio and far-infrared luminosities. I found evidence that the main contribution
to the radio emission of RQ AGNs is the SF activity in their host galaxy. This result
demonstrates the remarkable possibility of using the radio band to estimate the SFR even in
the hosts of bright RQ AGNs where the optical-to-mid-infrared emission can be dominated
by the AGN. I have shown that deep radio surveys can be used to study the cosmic star
formation history; I estimated the contribution of the so-called "starburst" mode to the
total SFR density and quantified the AGN occurrence in galaxies with different levels of
SF. / Deep observations in the radio band make it possible to study active black holes (BH) as well as regions of active star formation (SF) without the observations being affected by the dust in galaxies. Radio observations are therefore ideal for investigating the evolution of BH and SF activity, and their possible mutual influence, across cosmic time. Radio astronomy is thus becoming increasingly important for the study of galaxy evolution, not least because of the many new radio facilities under construction or in planning. To maximise the potential of these new instruments, it is essential to make predictions about what we will observe and to learn how best to complement the radio data with multi-wavelength data.
This is the motivation of my doctoral thesis, in which I studied a sample of 900 radio sources detected in one of the deepest radio surveys ever carried out. The observations were performed at 1.4 GHz with the Very Large Array on the Extended Chandra Deep Field South. I developed a multi-wavelength method to identify the optical and infrared counterparts of these radio sources and to classify them as radio-loud (RL) active galactic nuclei (AGNs), radio-quiet (RQ) AGNs, and star-forming galaxies (SFGs). For the first time, I was able to determine the relative contributions of these different classes of sources down to a flux density of only ∼30 μJy.
I characterised the host galaxy properties (stellar mass, optical colours, morphology) of the radio sources; RQ AGN hosts and SFGs are similar in their disk morphology and blue colours, whereas RL AGN hosts are more massive, redder, and mostly elliptical. This suggests that RQ and RL activity take place at two different evolutionary stages of the BH-galaxy co-evolution. The RQ phase occurs earlier, when the galaxy is still gas-rich and actively forming stars, while the BH only becomes active as a radio source once the galaxy has already formed the bulk of its stellar population, the gas supply has declined, and the SF has decreased considerably.
I quantified the star formation rate (SFR) of the radio sources with two independent indicators, the radio and far-infrared luminosities. I found evidence that the main contribution to the radio emission of RQ AGNs arises from the SF activity of the host galaxy. This result opens up the remarkable possibility of using the radio part of the spectrum to estimate the SFR even in galaxies with bright RQ AGNs, in which the optical-to-mid-infrared emission can be dominated by the AGN. I have shown that deep radio surveys can be used to study the cosmic star formation history; I estimated the contribution of the so-called "starburst" mode to the total SFR density and quantified the occurrence of AGNs in galaxies with different levels of SF.
|
574 |
Aerosol distribution above Munich using remote sensing techniques
Schnell, Franziska Ingrid Josefine, 21 July 2014
Aerosols are an important part of our atmosphere. Because they are very inhomogeneously distributed in both time and space, estimating their influence on the climate is complex, and it is therefore important to improve our knowledge of the aerosol distribution. In this study, the distribution of aerosols above the region around Munich, Germany, in the period 2007 to 2010 is examined with measurements from remote sensing instruments. The main focus is on the lidar data from the ground-based lidar system MULIS of the Meteorological Institute Munich and from the spaceborne lidar CALIOP onboard the satellite CALIPSO, both of which deliver height-resolved aerosol information. In addition, and for a better comparison with previous studies, aerosol information from the AERONET sun photometer in Munich and the satellite spectroradiometer MODIS is used.
With the help of these four datasets, several aerosol parameters could be studied: on average, the aerosol optical depth (AOD) above the Munich region is about 0.05 to 0.06 at 1064 nm, about 0.12 to 0.17 at 532 nm, and about 0.22 to 0.28 at 355 nm. The height of the boundary layer top decreases from 1.68 km in spring to 0.73 km in winter, while the thickness of the elevated layers (ELs) is more stable (spring: 1.43 km, winter: 1.02 km). The occurrence of ELs is highest in spring (in 75 % of all measurements) and lowest in winter (36 %). Measurements of the particle linear depolarization ratio and the Ångström exponent show that the aerosols in elevated layers clearly differ from the aerosols in the planetary boundary layer (PBL). Especially in spring, the average EL depolarization is large (25 %), indicating transport of strongly depolarizing aerosols such as Saharan dust in the free troposphere. The dominant aerosol type in the Munich region is smoke (also called biomass burning aerosol); polluted continental aerosol can occur in high concentrations, especially during summer. Dust occurs only on rare occasions, mainly mixed with other aerosol types (polluted dust).
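As a rough illustration of how the Ångström exponent mentioned above relates the AOD values quoted for two wavelengths, the following sketch (not code from the thesis; the function name and the mid-range input values are illustrative assumptions) computes it from the 532 nm and 1064 nm averages:

    import math

    def angstrom_exponent(aod_1, wavelength_1, aod_2, wavelength_2):
        # alpha such that AOD(lambda) is proportional to lambda**(-alpha)
        return -math.log(aod_1 / aod_2) / math.log(wavelength_1 / wavelength_2)

    # Mid-range AOD values from the abstract: ~0.15 at 532 nm, ~0.055 at 1064 nm
    alpha = angstrom_exponent(0.15, 532.0, 0.055, 1064.0)
    print(round(alpha, 2))  # roughly 1.4 to 1.5 for these illustrative values

Such a value is only indicative; the thesis derives the Ångström exponent from the full wavelength-resolved measurements.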
One important finding from the comparison of the four datasets is that CALIPSO strongly underestimates the AOD. To study the significance of the different causes of this, the CALIPSO extinction coefficient profiles are compared with coincident measurements of MULIS. The underestimation of the AOD above Munich by CALIPSO is found to be mainly due to the failure of the layer detection: its effect on the AOD underestimation is about 36 %. The wrong assumption of the lidar ratio also contributes to the underestimation, though to a smaller extent of about 5 %. The influence of clouds in the surroundings on the AOD is not quantifiable, but the analysis shows that clouds lead to an overestimation of the AOD. It could be shown that, to compensate for these effects and to obtain detailed profiles from CALIPSO, it is very effective (for case studies) to calculate the extinction coefficients manually from CALIPSO raw data (Level 1B).
|
575 |
Nonlinear filtering with particle filters
Haslehner, Mylène, 21 July 2014
Convective phenomena in the atmosphere, such as convective storms, are characterized by very fast, intermittent and seemingly stochastic processes. They are thus difficult to predict with Numerical Weather Prediction (NWP) models, and difficult to estimate with data assimilation methods that combine prediction and observations. In this thesis, nonlinear data assimilation methods are tested on two idealized convective scale cloud models, developed in [58] and [59]. The aim of this work was to apply the particle filter, a method specifically designed for nonlinear models, to the two toy models that resemble some properties of convection. Potential problems and characteristic features of particle filter methodology were analyzed, having in mind possible tests on far more elaborate NWP models.
The first model, the stochastic cloud model, is a one-dimensional birth-death model initialized by a Poisson distribution, where clouds randomly appear or die independently from each other on a set of grid-points. This purely stochastic model is physically unrealistic, but since it is highly nonlinear and non-Gaussian, it contains minimal requirements for representing main features of convection. The derivation of the transition probability density function (PDF) of the stochastic cloud model made it possible to understand better the weighting mechanism involved in the particle filter. This mechanism, which associates a weight to particles (state vectors) according to their likelihood with respect to observations and to their evolution in time, is followed by resampling, where particles with high probability are replicated, and others eliminated. The ratio between magnitudes of the observation probability distribution and the transition probability is shown to determine the selection process of particles at each time step, where data and prediction are combined. Further, a strong sensitivity of the filter to the observation density and to the speed of the cloud variability (given by the cloud life time) is demonstrated. Thus, the particle filter can outperform some simpler methods for certain observation densities, whereas it does not bring any improvement when using others. Similarly, it leads to good results for stationary cloud fields while having difficulties to follow fast varying cloud fields, because any change in the model state variable is potentially penalized. The main difficulty for the filter is the fact that this model is discrete, while the filter was designed for data assimilation of continuous fields.
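The weight-and-resample cycle described above can be made concrete with a generic bootstrap particle filter step. This is only a minimal sketch under simplifying assumptions (Gaussian observation errors, resampling at every step); the function and variable names are illustrative and not taken from the thesis:

    import numpy as np

    def particle_filter_step(particles, forward_model, observation, obs_std, rng):
        # Prediction: propagate each particle (state vector) with the stochastic model.
        particles = np.array([forward_model(p, rng) for p in particles])
        # Weighting: likelihood of the observation given each particle,
        # assuming independent Gaussian observation errors.
        log_w = -0.5 * np.sum(((observation - particles) / obs_std) ** 2, axis=1)
        weights = np.exp(log_w - log_w.max())
        weights /= weights.sum()
        # Resampling: replicate particles with high weights, eliminate the others.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx]

The ratio between the observation-error width (obs_std) and the spread produced by the model transition drives the selection, in line with the role of the transition PDF discussed above.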
However, as an extreme testbed for the particle filter, the stochastic cloud model shows the importance of the observation and model error densities for the selection of particles, and it stresses the influence of the chosen model parameters on the filter's performance.
The second model considered was the modified shallow water model. It is based on the shallow water equations, to which a small stochastic noise is added in order to trigger convection, together with an equation for the evolution of rain. It contains spatial correlations and is represented by three dynamical variables - wind speed, water height and rain concentration - which are linked together. A reduction of the observation coverage and of the number of updated variables leads to a relative error reduction of the particle filter compared to an ensemble of particles that are only continuously pulled (nudged) to observations, for a certain range of nudging parameters. But, not surprisingly, reducing data coverage increases the absolute error of the filter. We found that the standard deviation of the error density exponents is the quantity responsible for the relative success of the filter with respect to nudging-only. For the case where only one variable is assimilated, we formulated a criterion that determines whether the particle filter outperforms the nudged ensemble, and derived a theoretical estimate for it. The theoretical values of this estimate, which depends on the parameters involved in the assimilation setup (nudging intensity, model and observation error covariances, grid size, ensemble size, ...), are roughly in accordance with the numerical results. In addition, comparing two different nudging matrices that regulate the magnitude of relaxation of the state vectors towards the observations showed that, in the case of assimilating three variables, a diagonally based nudging matrix leads to smaller errors than a nudging matrix based on stochastic errors added in each integration time step.
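For contrast with the particle filter step sketched earlier, the nudged ensemble referred to above relaxes each member towards the observations at every integration step. The following is a generic Newtonian-relaxation sketch, not the thesis implementation; the diagonal nudging matrix and the names are illustrative assumptions:

    import numpy as np

    def nudging_step(state, model_tendency, observation, obs_operator_H, nudging_K, dt):
        # dx/dt = f(x) + K (y_obs - H x): pull the state towards the observations
        innovation = observation - obs_operator_H @ state
        return state + dt * (model_tendency(state) + nudging_K @ innovation)

    # Example of a diagonally based nudging matrix for a 3-variable state
    nudging_K = np.diag([0.1, 0.1, 0.1])

The choice of nudging_K corresponds to the nudging intensity discussed above; in the thesis two different constructions of this matrix are compared.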
We conclude that the efficient particle filter could bring an improvement with respect to conventional data assimilation methods, when it comes to the convective scale. Its success, however, appears to depend strongly on the parameters of the test setting.
|
576 |
High-Intensity, Picosecond-Pumped, Few-Cycle OPCPA
Skrobol, Christoph, 27 August 2014
No description available.
|
577 |
Faces of gravity
Wintergerst, Nico, 10 September 2014
In this dissertation we investigate a variety of topics from cosmology and gravity. In particular, we address questions from inflationary theory, structure formation in the late-time Universe and massive gravity, as well as quantum aspects of black holes and properties of certain scalar theories at very high energies.
In the so-called "New Higgs Inflation" model, the Higgs boson plays the role of the inflaton field. The model is compatible with measurements of the Higgs mass because the Higgs boson is non-minimally coupled to the Einstein tensor. We examine the model in light of the recently published results of the BICEP2 and Planck experiments and find excellent agreement with the measured data. Furthermore, we show that the apparent tension between the Planck and BICEP2 data disappears thanks to a negatively running spectral index. We also investigate the unitarity properties of the theory and argue that no unitarity violation occurs during the entire evolution of the Universe. Throughout the inflationary phase, couplings in the Higgs-Higgs and Higgs-graviton sectors are suppressed by a large field-dependent scale, while the W and Z bosons decouple because of their very large mass. We point out a way to keep the gauge bosons as part of the low-energy theory, achieved through a gravity-dependent non-minimal coupling of the Higgs field to the gauge bosons.
In the next part we focus on the late-time Universe and study the so-called spherical collapse in coupled dark energy models. In particular, we derive a formulation of the spherical collapse that is based on the nonlinear Navier-Stokes equations. In contrast to known examples from the literature, all relevant fifth-force effects enter the evolution. We show that our method gives straightforward insight into many subtleties that arise when the dark energy is taken to be inhomogeneous.
This is followed by an introduction to theories of massive spin-2 particles, where we explain the difficulties of formulating a nonlinear, interacting theory. We consider the well-known problem of the Boulware-Deser ghost and point out two ways to circumvent this no-go theorem. In particular, we construct the unique theory of an interacting massive spin-2 particle that can be truncated at cubic order without leading to ghost instabilities.
The second part of this work is devoted to well-known problems in black hole physics. Our focus here is on the idea that black holes can be understood as Bose condensates of gravitons. Deviations from semiclassical behaviour result from strong quantum effects that arise from a collective strong coupling, which in known systems leads to a quantum phase transition or a bifurcation. These quantum effects could be the key to resolving long-standing problems in black hole physics, such as the information paradox and the no-hair theorem. They could also provide valuable insight into the conjecture that black holes are the fastest information scramblers in nature.
As a model for a black hole we study a system of ultracold bosons on a ring, which is known to possess a quantum critical point. We demonstrate that at the critical point quantum effects can be important even for very large occupation numbers. To this end we define the fluctuation entanglement, which quantifies how strongly different momentum modes are entangled with each other. The fluctuation entanglement is maximal at the critical point and is dominated by very long-wavelength fluctuations; our results are therefore independent of the physics in the ultraviolet.
We then discuss information processing in black holes. In particular, the interplay of quantum criticality and instability can lead to very fast growth of one-particle entanglement. Accordingly, we show that the so-called quantum break time, which measures how quickly the exact time evolution departs from the semiclassical one, grows like log(N), where N is the number of constituents. In the case of a graviton condensate, N provides a measure of the black hole entropy, and we interpret our findings as a strong hint that information scrambling in black holes could have the same origin.
In our picture, the evaporation of black holes rests on two effects: coherent excitation of the tachyonic radial mode leads to the collapse of the condensate, while incoherent scattering of gravitons is responsible for Hawking radiation. To explore this we construct a prototype describing a single bosonic degree of freedom with momentum-dependent interactions. Within the Schwinger-Keldysh formalism we study the real-time evolution of the condensate and show that the collapse and the accompanying evaporation proceed in a self-similar manner, with the condensate remaining at a critical point throughout the collapse. Furthermore, we present solutions that live at an unstable point and could therefore generate entanglement rapidly.
The final part of the thesis deals with renormalization group flows in scalar theories with momentum-dependent interactions. We derive the flow equation for a theory that contains only a function of the kinetic term and demonstrate the existence of fixed points in a Taylor expansion of that function. We discuss to what extent our analysis can provide insight into more general theories with derivative interactions, including gravity. / This thesis covers various aspects of cosmology and gravity. In particular, we focus on issues in inflation, structure formation, massive gravity, black hole physics, and ultraviolet completion in certain scalar theories.
We commence by considering the model of New Higgs Inflation, where the Higgs boson is kinetically non-minimally coupled to the Einstein tensor. We address the recent results of BICEP2 and Planck and demonstrate that the model is in perfect agreement with the data. We further show how the apparent tension between the Planck and BICEP2 data can be relieved by considering a negative running of the spectral index.
We visit the issue of unitarity violation in the model and argue that it is unitary throughout the evolution of the Universe. During inflation, couplings in the Higgs-Higgs and Higgs-graviton sector are suppressed by a large field dependent cutoff, while the W and Z gauge bosons acquire a very large mass and decouple. We point out how one can avoid this decoupling through a gravity dependent nonminimal coupling of the gauge bosons to the Higgs.
We then focus on more recent cosmology and consider the spherical collapse model in coupled dark energy models.
We derive a formulation of the spherical collapse that is based on the nonlinear hydrodynamical Navier-Stokes equations. Contrary to previous results in the literature, it takes all fifth forces into account properly. Our method can also be used to gain insight on subtleties that arise when inhomogeneities of the scalar field are considered.
We apply our approach to various models of dark energy. This includes models with couplings to cold dark matter and neutrinos, as well as uncoupled models. In particular, we check past results for early dark energy parametrizations.
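For orientation, in the uncoupled, pressureless limit the kind of evolution equation that follows from the nonlinear Navier-Stokes system is the standard spherical-collapse equation for the density contrast $\delta$ (this is the familiar textbook form; the coupled cases treated in the thesis carry additional fifth-force and scalar-field terms):

    \ddot{\delta} + 2H\dot{\delta} - \frac{4}{3}\frac{\dot{\delta}^2}{1+\delta} = \frac{3}{2}\,\Omega_m H^2\,\delta\,(1+\delta)

where $H$ is the Hubble rate and $\Omega_m$ the matter density parameter.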
Next, we give an introduction to massive spin-two theories and the problem of their non-linear completion. We review the Boulware-Deser ghost problem and point out the two ways to circumvent classic no-go theorems. In particular, we construct the unique theory of a massive spin-two particle that does not suffer from ghost instabilities when truncated at the cubic order.
The second part of this dissertation is dedicated to problems in black hole physics.
In particular, we focus on the proposal that black holes can be understood as quantum bound states of soft gravitons. Deviations from semiclassicality are due to strong quantum effects that arise because of a collective strong coupling, equivalent to a quantum phase transition or bifurcation. These deviations may hold the key to the resolution of long standing problems in black hole physics, such as the information paradox and the no hair theorem. They could also provide insights into the conjecture that black holes are the fastest information scramblers in nature.
As a toy model for black holes, we study a model of ultracold bosons in one spatial dimension which is known to undergo a quantum phase transition.
We demonstrate that at the critical point, quantum effects are important even for a macroscopic number of particles. To this end, we propose the notion of fluctuation entanglement, which measures the entanglement between different momentum modes. We observe the entanglement to be maximal at the critical point, and show that it is dominated by long wavelength modes. It is thus independent of ultraviolet physics.
Further, we address the question of information processing in black holes. We point out that the combination of quantum criticality and instability can provide for fast growth of one-particle entanglement. In particular, we show that the quantum break time in a simple Bose-Einstein prototype scales like log(N), where N is the number of constituents. By noting that in the case of graviton condensates N provides a measure for the black hole entropy, we take our result as a strong hint that scrambling in black holes may originate in the same physics.
In our picture, the evaporation of the black hole is due to two intertwined effects. Coherent excitation of the tachyonic breathing mode collapses the condensate, while incoherent scattering of gravitons leads to Hawking radiation. To explore this, we construct a toy model of a single bosonic degree of freedom with derivative self-interactions. In the Schwinger-Keldysh formalism, we consider the real-time evolution and show that evaporation and collapse occur in a self-similar manner. The condensate is at a critical point throughout the collapse. Moreover, we discover solutions that are stuck at an unstable point and may thus exhibit fast generation of entanglement.
The final chapter of this thesis is dedicated to renormalization group (RG) flows in scalar theories with derivative couplings. We derive the exact flow equation for a theory that depends on a function of only the kinetic term. We demonstrate the existence of fixed points in a Taylor series expansion of the Lagrangian and discuss how our studies can provide insight into RG flows in more general theories with derivative couplings, for example gravity.
|
578 |
4D offline PET-based treatment verification in ion beam therapy
Kurz, Christopher, 29 August 2014
Due to the accessible sharp dose gradients, external beam radiotherapy with protons and heavier ions enables a highly conformal adaptation of the delivered dose to arbitrarily shaped tumour volumes. However, this high conformity is accompanied by an increased sensitivity to potential uncertainties, e.g., due to changes in the patient anatomy. Additional challenges are imposed by respiratory motion, which not only leads to rapid changes of the patient anatomy but, in the case of actively scanned ion beams, also to the formation of dose inhomogeneities. Therefore, it is highly desirable to verify the actual application of the treatment and to detect possible deviations with respect to the planned irradiation. At present, the only clinically implemented approach for a close-in-time verification of single treatment fractions is based on detecting the distribution of β+-emitters formed in nuclear fragmentation reactions during the irradiation by means of positron emission tomography (PET). For this purpose, a commercial PET/CT (computed tomography) scanner has been installed directly next to the treatment rooms at the Heidelberg Ion-Beam Therapy Center (HIT). Up to now, however, the application of this treatment verification technique has been limited to static target volumes.
This thesis aimed at investigating the feasibility and performance of PET-based treatment verification under consideration of organ motion. In experimental irradiation studies with moving phantoms, not only could the practicability of PET-based treatment monitoring for moving targets with a commercial PET/CT device be shown for the first time, but also the potential of this technique to detect motion-related deviations from the planned treatment with sub-millimetre accuracy. The first application to four exemplary hepato-cellular carcinoma patient cases under substantially more challenging clinical conditions indicated potential for improvement by taking organ motion into consideration, particularly for patients exhibiting motion amplitudes above 1 cm and a sufficiently large number of detected true coincidences during their post-irradiation PET scan. Despite the application of an optimised PET image reconstruction scheme, derived from a dedicated phantom imaging study within the scope of this work, the small number of counts and the resulting high level of image noise were identified as a major limiting factor for the detection of motion-induced dose inhomogeneities within the patient. Moreover, the biological washout modelling of the irradiation-induced isotopes proved not to be sufficiently accurate, thereby impeding a quantitative analysis of measured and simulated data under consideration of target motion. In future, improvements are foreseen in particular through dedicated noise-robust time-resolved (4D) image reconstruction algorithms, improved tracking of the organ motion, e.g., by ultrasound (US) imaging, as implemented for the first time in 4D PET imaging within the scope of this work, as well as through patient-specific washout models.
|
579 |
Optical and transport properties of quantum impurity models - an NRG study of generic models and real physical systems
Hanl, Markus Johannes, 23 July 2014
This thesis contributes to the understanding of impurity models. It is divided into two main parts: a general introduction is given in Part I, and the related research is presented in Part II, which is subdivided into two main projects.
In the first project, the influence of two many-body effects, the Kondo effect and the Fermi edge singularity, on the absorption and emission spectra of self-assembled quantum dots (QDs) is examined. Whereas the Kondo effect had so far only been examined in transport experiments, we show that it has now been observed with optical methods for the first time, by comparing experimental data for the absorption line shapes of QDs to calculations with the numerical renormalization group. We continue by examining a QD with strong optical coupling of the energy levels. The resulting interplay of Rabi oscillations and the Kondo effect leads to a new many-body state, a secondary, outer Kondo effect, with Kondo-like correlations between the spin-Kondo and the trion state. The last work regarding optics at QDs addresses the Fermi edge singularity; we show that for QDs this phenomenon can be described numerically on a quantitative level.
The second project concerns transport properties of impurity models. First, we present a comprehensive study of the Kondo effect for an InAs-nanowire QD, a system for which the Kondo effect was observed only a few years ago. The second study regarding transport concerns the Kondo effect in bulk metals with magnetic impurities. Although nowadays the Kondo effect is often studied with QDs, it was discovered for iron impurities in noble metals like gold and silver. However, it was unknown for a long time which exact realization of the Kondo model describes these systems. We identify the model by comparing numerical calculations of the magnetoresistivity and the dephasing rate for different models to experimental results. The third work on transport concerns the phenomenon that, for a fixed type of Kondo model, quantities like the magnetoresistivity or the conductivity can be scaled onto a universal curve for different parameters when energies are rescaled with the Kondo temperature $T_K$, since it is the only relevant low-energy scale of the problem. For finite bandwidth, however, different definitions of $T_K$ (which coincide in the limit of infinite bandwidth) lead to different $T_K$-values. We show that with a very common definition of $T_K$, finite bandwidth, which is always present in numerical calculations, can deteriorate the universality of rescaled curves, and we offer an alternative definition of $T_K$ which ensures proper scaling. In the last study presented in this thesis we calculate the Fermi-liquid coefficients for fully screened multi-channel Kondo models. For temperatures below $T_K$, these models show Fermi-liquid behavior, and the impurity density of states and certain quantities which depend on it, like the resistivity, show quadratic dependencies on parameters like temperature or magnetic field, described by the Fermi-liquid coefficients. We calculate these coefficients both analytically and numerically.
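To make the scaling and Fermi-liquid statements above explicit, the generic low-energy parametrisation (the standard textbook form; the precise values of the coefficients are what the thesis computes) can be written as

    \frac{\rho(T, B)}{\rho(0, 0)} \simeq 1 - c_T \left(\frac{T}{T_K}\right)^2 - c_B \left(\frac{B}{T_K}\right)^2 , \qquad T, B \ll T_K ,

so that resistivity curves measured for different microscopic parameters collapse onto a single universal function of $T/T_K$, provided $T_K$ is defined consistently at finite bandwidth.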
|
580 |
The combined impact of ionizing radiation and momentum winds from massive stars on molecular clouds
Ngoumou Y Ewondo, Judith, 15 October 2014
No description available.
|