461 |
Nanoscale pattern formation on ion-sputtered surfaces / Musterbildung auf der Nanometerskala an ion-gesputterten Oberflächen
Yasseri, Taha, 21 January 2010 (has links)
No description available.
|
462 |
Simulation studies for the in-vivo dose verification of particle therapy
Rohling, Heide, 21 July 2015 (has links) (PDF)
An increasing number of cancer patients are treated with proton beams or other light ion beams, which allow the dose to be delivered precisely to the tumor. However, the depth dose distribution of these particles, which enables this precision, is sensitive to deviations from the treatment plan, such as anatomical changes. Thus, to assure the quality of the treatment, a non-invasive in-vivo dose verification is highly desired. This monitoring of particle therapy relies on the detection of secondary radiation produced by interactions between the beam particles and the nuclei of the patient's tissue.
Up to now, the only clinically applied method for in-vivo dosimetry is Positron Emission Tomography, which makes use of the beta+-activity produced during the irradiation (PT-PET). Since the applied dose cannot be deduced directly from a PT-PET measurement, the simulated distribution of beta+-emitting nuclei is used as a basis for the analysis of the measured PT-PET data. Therefore, reliable modeling of the production rates and the spatial distribution of the beta+-emitters is required. PT-PET applied during, instead of after, the treatment is referred to as in-beam PET. A challenge for in-beam PET is the design of the PET camera, because a standard full-ring scanner is not feasible. A double-head PET camera is applicable, for instance, but low count rates and the limited solid-angle coverage can compromise the image quality. For this reason, a detector system whose time resolution allows the incorporation of time-of-flight information (TOF) into the iterative reconstruction algorithm is desired to improve the quality of the reconstructed images.
As a second approach, Prompt Gamma Imaging (PGI), a technique based on the detection of prompt gamma-rays, is currently being pursued. Experimental data on the emission of prompt gamma-rays during particle irradiation is not sufficiently available, making simulations necessary. Compton cameras, which are based on the detection of incoherently scattered photons, are investigated with respect to PGI. Monte Carlo simulations serve for the optimization of the camera design and for the evaluation of criteria for the selection of measured events.
Thus, for in-beam PET and PGI, dedicated detection systems and, moreover, profound knowledge about the corresponding radiation fields are required. Using various simulation codes, this thesis contributes to the modeling of the beta+-emitters and photons produced during particle irradiation, as well as to the evaluation and optimization of hardware for both techniques.
Concerning the modeling of the production of the relevant beta+-emitters, the abilities of the Monte Carlo simulation code PHITS and of the deterministic, one-dimensional code HIBRAC were assessed. The Monte Carlo tool GEANT4 was applied for an additional comparison. For irradiations with protons, helium, lithium, and carbon, the depth-dependent yields of the simulated beta+-emitters were compared to experimental data. In general, PHITS underestimated the yields of the considered beta+-emitters, in contrast to GEANT4, which provided acceptable values. HIBRAC was substantially extended to enable the modeling of the depth-dependent yields of specific nuclides. For proton beams and carbon ion beams, HIBRAC can compete with GEANT4 for this application. Since HIBRAC is fast, compact, and easy to modify, it could form the basis for simulations of the beta+-emitters in clinical application. PHITS was also applied to the modeling of prompt gamma-rays during proton irradiation following an experimental setup. From this study, it can be concluded that PHITS could be an alternative to GEANT4 in this context.
Another aim was the optimization of Compton camera prototypes. GEANT4 simulations were carried out with a focus on detection probabilities and the rate of valid events. Based on the results, the feasibility of a Compton camera setup consisting of a CZT detector and an LSO or BGO detector was confirmed. Several recommendations concerning the design and arrangement of the Compton camera prototype were derived. Furthermore, several promising event selection strategies were evaluated. The GEANT4 simulations were validated by comparing simulated to measured energy depositions in the detector layers. This comparison also led to a reconsideration of the efficiency of the prototype. A further study evaluated whether electron-positron pairs resulting from pair production could be detected with the existing prototype in addition to Compton events. Regarding the efficiency and the achievable angular resolution, the successful application of the considered prototype as a pair production camera for the monitoring of particle therapy is questionable.
Finally, the application to in-beam PET of a PET camera consisting of Resistive Plate Chambers (RPCs), which provide a good time resolution, was discussed. A scintillator-based PET camera derived from a commercially available scanner was used as reference. This evaluation included simulations of the detector response, image reconstructions using various procedures, and an analysis of image quality. Realistic activity distributions based on real treatment plans for carbon ion therapy were used. The low efficiency of the RPC-based PET camera led to images of poor quality. Neither visually nor with the semi-automatic tool YaPET was a reliable detection of range deviations possible. The incorporation of TOF into the iterative reconstruction algorithm was especially advantageous for the considered RPC-based PET camera in terms of convergence and artifacts.
The application of the real-time capable back projection method Direct TOF to the RPC-based PET camera resulted in an image quality comparable to the one achieved with the iterative algorithms. In total, this study does not support a further investigation of RPC-based PET cameras with similar efficiency for in-beam PET application.
To sum up, simulation studies were performed aimed at the progress of in-vivo dosimetry. Regarding the modeling of the beta+-emitter production and the prompt gamma-ray emissions, different simulation codes were evaluated. HIBRAC could form the basis for clinical PT-PET simulations; however, a detailed validation of the underlying cross section models is required. Several recommendations for the optimization of a Compton camera prototype resulted from systematic variations of the setup. Nevertheless, the definite evaluation of the feasibility of a Compton camera for PGI can only be performed by further experiments. For PT-PET, the efficiency of the detector system is the crucial factor. Given the results obtained for the considered RPC-based PET camera, the focus should be kept on scintillator-based PET cameras for this purpose.
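The Direct TOF back projection mentioned above admits a compact illustration. The following is a minimal one-dimensional sketch, not the thesis' actual implementation: each coincidence event deposits a Gaussian kernel at the most likely emission point derived from the measured time difference, with a width set by the timing resolution. The detector geometry, timing resolution, and toy event data are assumptions for illustration only.

```python
import numpy as np

C_LIGHT = 299.792458  # mm/ns

def backproject_tof(events, grid, sigma_t=0.25):
    """events: rows of (x1, x2, dt) with detector positions on one axis and
    time difference dt = t1 - t2 in ns; grid: 1-D image coordinates in mm."""
    image = np.zeros_like(grid)
    sigma_x = C_LIGHT * sigma_t / 2.0          # timing resolution -> spatial kernel width
    for x1, x2, dt in events:
        midpoint = 0.5 * (x1 + x2)
        x_emit = midpoint + C_LIGHT * dt / 2.0  # most likely emission point on the LOR
        # deposit a Gaussian kernel instead of smearing activity over the full LOR
        image += np.exp(-0.5 * ((grid - x_emit) / sigma_x) ** 2)
    return image / max(len(events), 1)

# toy usage: two detector planes at +/-400 mm, activity centered near +20 mm
rng = np.random.default_rng(0)
true_pos = rng.normal(20.0, 5.0, size=2000)
dt = 2.0 * true_pos / C_LIGHT + rng.normal(0.0, 0.25, size=true_pos.size)
events = np.column_stack([np.full_like(dt, -400.0), np.full_like(dt, 400.0), dt])
grid = np.linspace(-100.0, 100.0, 201)
profile = backproject_tof(events, grid)
```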
|
463 |
[pt] APLICAÇÕES DO MÉTODO DA ENTROPIA CRUZADA EM ESTIMAÇÃO DE RISCO E OTIMIZAÇÃO DE CONTRATO DE MONTANTE DE USO DO SISTEMA DE TRANSMISSÃO / [en] CROSS-ENTROPY METHOD APPLICATIONS TO RISK ESTIMATE AND OPTIMIZATION OF AMOUNT OF TRANSMISSION SYSTEM USAGE
23 November 2021 (has links)
[pt] As companhias regionais de distribuição não são autossuficientes em energia elétrica para atender seus clientes, e requerem importar a potência necessária do sistema interligado. No Brasil, elas realizam anualmente o processo de contratação do montante de uso do sistema de transmissão (MUST) para o horizonte dos próximos quatro anos. Essa operação é um exemplo real de tarefa que envolve decisões sob incerteza com elevado impacto na produtividade das empresas distribuidoras e do setor elétrico em geral. O trabalho se torna ainda mais complexo diante da crescente variabilidade associada à geração de energia renovável e à mudança do perfil do consumidor. O MUST é uma variável aleatória, e ser capaz de compreender sua variabilidade é crucial para melhor tomada de decisão. O fluxo de potência probabilístico é uma técnica que mapeia as incertezas das injeções nodais e configuração de rede nos equipamentos de transmissão e, consequentemente, nas potências importadas em cada ponto de conexão com o sistema interligado. Nesta tese, o objetivo principal é desenvolver metodologias baseadas no fluxo de potência probabilístico via simulação Monte Carlo, em conjunto com a técnica da entropia cruzada, para estimar os riscos envolvidos na contratação ótima do MUST. As metodologias permitem a implementação de software comercial para lidar com o algoritmo de fluxo de potência, o que é relevante para sistemas reais de grande porte. Apresenta-se, portanto, uma ferramenta computacional prática que serve aos engenheiros das distribuidoras de energia elétrica. Resultados com sistemas acadêmicos e reais mostram que as propostas cumprem os objetivos traçados, com benefícios na redução dos custos totais no processo de otimização de contratos e dos tempos computacionais envolvidos nas estimativas de risco.
/ [en] Local power distribution companies are not self-sufficient in electricity to serve their customers and need to import additional energy from the interconnected bulk power system. In Brazil, they annually carry out the contracting process for the amount of transmission system usage (ATSU) for the next four years. This process is a real example of a task that involves decisions under uncertainty with a high impact on the productivity of the distribution companies and on the electricity sector in general. The task becomes even more complex in the face of the increasing variability associated with renewable generation and the changing profile of the consumer. The ATSU is a random variable, and being able to understand its variability is crucial for better decision making. Probabilistic power flow is a technique that maps the uncertainties of nodal injections and network configuration onto the transmission equipment and, consequently, onto the power imported at each connection point with the bulk power system. In this thesis, the main objective is to develop methodologies based on probabilistic power flow via Monte Carlo simulation, together with cross-entropy techniques, to assess the risks involved in the optimal contracting of the ATSU. The proposed approaches allow commercial software to be used for the power flow solution, which is relevant for large practical systems. Thus, a practical computational tool that serves the engineers of electric distribution companies is presented. Results with academic and real systems show that the proposals fulfill the objectives set, with the benefits of reducing both the total costs in the contract optimization process and the computational times involved in the risk assessments.
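As a rough illustration of the probabilistic power flow idea described above, the sketch below samples uncertain nodal injections, solves a DC power flow, and estimates a quantile of the power imported at the connection point. The three-bus network, reactances, and load statistics are illustrative assumptions only and are unrelated to the systems studied in the thesis, which relies on commercial power flow software.

```python
import numpy as np

rng = np.random.default_rng(1)

# branch list: (from, to, reactance in p.u.); bus 0 is the bulk-system slack
branches = [(0, 1, 0.10), (1, 2, 0.20), (0, 2, 0.25)]
n_bus = 3

def dc_power_flow(p_inj):
    """p_inj: net injections at buses 1..n-1 (loads negative); the slack balances."""
    B = np.zeros((n_bus, n_bus))
    for f, t, x in branches:
        b = 1.0 / x
        B[f, f] += b; B[t, t] += b
        B[f, t] -= b; B[t, f] -= b
    theta = np.zeros(n_bus)
    theta[1:] = np.linalg.solve(B[1:, 1:], p_inj)     # slack angle fixed at 0
    return np.array([(theta[f] - theta[t]) / x for f, t, x in branches])

n_samples = 20_000
# uncertain net injections at buses 1 and 2 (load minus local renewables), p.u.
p = rng.multivariate_normal(mean=[-0.8, -0.5],
                            cov=[[0.02, 0.01], [0.01, 0.03]], size=n_samples)
import_at_poi = np.empty(n_samples)
for k in range(n_samples):
    flows = dc_power_flow(p[k])
    import_at_poi[k] = flows[0] + flows[2]            # power leaving the slack bus

# e.g. a contract amount covering 95 % of the sampled scenarios
must_95 = np.quantile(import_at_poi, 0.95)
```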
|
464 |
Neutron flux in underground laboratories / Neutronenfluss in Untertagelaboren
Grieger, Marcel, 28 January 2022 (has links)
The Felsenkeller laboratory is a new underground laboratory in the field of nuclear astrophysics. It is located beneath 47 m of hornblende monzonite rock in the tunnel system of the former Dresden Felsenkeller brewery.
In this work, the neutron background in tunnels IV and VIII is investigated. Findings from tunnel IV had a direct influence on the planned shielding conditions for tunnel VIII. The measurement was carried out with the HENSA neutron spectrometer, which consists of polyethylene-moderated ³He counters.
Using the Monte Carlo particle transport code FLUKA, the neutron response functions of the spectrometer are determined. In addition, a prediction of the neutron flux is made for each measurement site, and the laboratories are mapped with respect to the two main components: muon-induced neutrons and rock neutrons from (α,n) reactions and fission processes.
The measurement and analysis methods used here are also applied in a new measurement at the deep underground laboratory LSC Canfranc, for which preliminary results are presented for the first time in this work.
Furthermore, radiation protection simulations for the Felsenkeller laboratory are presented, which define the radiation protection framework for its scientific use. In the process, the values used for the Felsenkeller safety report are updated to the 2018 German Radiation Protection Ordinance.
Finally, experiments at the radio-frequency ion source at the Felsenkeller, which was technically supervised as part of this work, are presented, including long-term measurements at the above-ground test stand at Helmholtz-Zentrum Dresden-Rossendorf.

1 Introduction and motivation
2 Fundamentals
3 The Dresden Felsenkeller
4 Neutron flux measurements at the Felsenkeller
5 Evaluation of the neutron rates
6 Measurement at LSC Canfranc
7 Radiation protection at the Felsenkeller
8 The radio-frequency ion source at the Felsenkeller
9 Summary
A Technical specifications of the counters used
B Design sketches of the detectors
C WinBUGS pulse-height spectra
D Savitzky-Golay filter fits
E Unfolding with Gravel
F Omega variation with Gravel
G Activation simulations
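For orientation, the following is a minimal GRAVEL-style unfolding sketch in the spirit of appendix E: a guess spectrum is iteratively adjusted so that, folded with a response matrix (such as one obtained from FLUKA), it reproduces the measured counter readings. The response matrix, counts, and uncertainties below are random placeholders, not HENSA data, and the exact weighting used in the thesis may differ.

```python
import numpy as np

def gravel_unfold(R, counts, sigma, phi0, n_iter=200):
    """Multiplicative, positivity-preserving spectrum update driven by the
    ratio of measured to predicted counts, weighted by response contribution
    and measurement statistics."""
    phi = phi0.astype(float).copy()
    for _ in range(n_iter):
        folded = R @ phi                            # predicted counts per counter
        W = (R * phi) / folded[:, None] * (counts**2 / sigma**2)[:, None]
        log_corr = (W * np.log(counts / folded)[:, None]).sum(axis=0) / W.sum(axis=0)
        phi *= np.exp(log_corr)
    return phi

# toy usage with a random but fixed response matrix
rng = np.random.default_rng(2)
n_counters, n_groups = 10, 30
R = rng.uniform(0.1, 1.0, size=(n_counters, n_groups))
true_phi = np.exp(-0.5 * ((np.arange(n_groups) - 12) / 4.0) ** 2)
counts = rng.poisson(R @ true_phi * 1e4) / 1e4      # scaled Poisson "measurements"
sigma = np.sqrt(np.maximum(counts, 1e-6) / 1e4)
phi_est = gravel_unfold(R, counts, sigma, phi0=np.ones(n_groups))
```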
|
465 |
Multi-factor approximation: An analysis and comparison of Michael Pykhtin's paper “Multifactor adjustment”
Zanetti, Michael, Güzel, Philip, January 2023 (has links)
The need to account for potential losses in rare events is of utmost importance for corporations operating in the financial sector. Common measures of potential losses are Value at Risk and Expected Shortfall, whose computation typically requires extensive Monte Carlo simulations. Another measure is the Advanced Internal Ratings-Based model, which estimates the capital requirement but accounts for only a single risk factor. As an alternative to these commonly used, time-consuming credit risk methods and measures, Michael Pykhtin presents methods to approximate Value at Risk and Expected Shortfall in his 2004 paper Multi-factor adjustment. The thesis' main focus is an elucidation and investigation of the approximation methods that Pykhtin presents. Pykhtin's approximations are then implemented along with the Monte Carlo methods that are used as a benchmark. A recreation of the results Pykhtin presents was completed with strongly matching results, verifying that the methods have been implemented in correspondence with the article. The methods are also applied to a small and a large synthetic Nordea data set to test them on alternative data. Due to the size of the large data set, it cannot be computed in its original form; thus, a clustering algorithm is used to remove this limitation while still keeping the characteristics of the original data set. When the methods are executed on the synthetic Nordea data sets, the Value at Risk and Expected Shortfall results show a larger discrepancy between the approximated and Monte Carlo simulated results. The noted differences are probably due to increased borrower exposures and to portfolio structures not being compatible with Pykhtin's approximation. The purpose of clustering the small data set is to test the effect on accuracy and to understand the clustering algorithm's impact before applying it to the large data set. Clustering the small data set caused results that deviate from those of the original small data set, which is expected. The clustered large data set's approximation results had a lower discrepancy from the benchmark Monte Carlo simulated results than the small data set. The increased portfolio size creates a granularity that decreases the variance of the outcome for both the Monte Carlo methods and the approximation methods, hence the low discrepancy. Overall, the accuracy and execution time of Pykhtin's approximations are relatively good in these experiments. It is, however, very challenging for the approximate methods to handle large portfolios, considering the issues that arise at just a couple of thousand borrowers. Lastly, a comparison between the Advanced Internal Ratings-Based model and modified Value at Risk and Expected Shortfall figures is made. When the capital requirement is calculated with the Advanced Internal Ratings-Based model, the absence of a detailed treatment of concentration risk is clearly illustrated by the significantly lower results compared to either of the other methods. In addition, an increasing difference can be identified between the capital requirements obtained from Pykhtin's approximation and from the Monte Carlo method. This emphasizes the importance of utilizing complex methods to fully grasp the inherent portfolio risks. / Behovet av att ta hänsyn till potentiella förluster av sällsynta händelser är av yttersta vikt för företag verksamma inom den finansiella sektorn.
Vanliga mått på potentiella förluster är Value at Risk och Expected Shortfall. Dessa är mått där beräkningen vanligtvis kräver enorma Monte Carlo-simuleringar. Ett annat mått är Advanced Internal Ratings-Based-modellen som uppskattar ett kapitalkrav, men som enbart tar hänsyn till en riskfaktor. Som ett alternativ till dessa ofta förekommande och tidskrävande kreditriskmetoderna och mätningarna, presenterar Michael Pykhtin metoder för att approximera Value at Risk och Expected Shortfall i sin uppsats Multi-factor adjustment från 2004. Avhandlingens huvudfokus är en undersökning av de approximativa metoder som Pykhtin presenterar. Pykhtins approximationer implementeras och jämförs mot Monte Carlo-metoder, vars resultat används som referensvärden. Ett återskapande av resultaten Pykhtin presenterar i sin artikel har gjorts med tillfredsställande starkt matchande resultat, vilket är en säker verifiering av att metoderna har implementerats i samstämmighet med artikeln. Metoderna tillämpas även på ett litet och ett stor syntetiskt dataset erhållet av Nordea för att testa metoderna på alternativa data. På grund av komplexiteten hos det stora datasetet kan det inte beräknas i sin ursprungliga form. Således används en klustringsalgoritm för att eliminera denna begränsning samtidigt som egenskaperna hos den ursprungliga datamängden fortfarande bibehålls. Vid appliceringen av metoderna på de syntetiska Nordea-dataseten, identifierades en större diskrepans hos Value at Risk och Expected Shortfall-resultaten mellan de approximerade och Monte Carlo-simulerade resultaten. De noterade skillnaderna beror sannolikt på ökade exponeringar hos låntagarna och att portföljstrukturerna inte är förenliga med Pykhtins approximation. Syftet med klustringen av den lilla datasetet är att testa effekten av noggrannheten och förstå klustringsalgoritmens inverkan innan den implementeras på det stora datasetet. Att gruppera det lilla datasetet orsakade avvikande resultat jämfört med det ursprungliga lilla datasetet, vilket är förväntat. De modifierade stora datasetets approximativa resultat hade en lägre avvikelse mot de Monte Carlo simulerade benchmark resultaten i jämförelse med det lilla datasetet. Den ökade portföljstorleken skapar en finkornighet som minskar resultatets varians för både MC-metoderna och approximationerna, därav den låga diskrepansen. Sammantaget är Pykhtins approximationers noggrannhet och utförandetid relativt bra för experimenten. Det är dock väldigt utmanande för de approximativa metoderna att hantera stora portföljer, baserat på de problem som portföljen möter redan vid ett par tusen låntagare. Slutligen görs en jämförelse mellan Advanced Internal Ratings-Based-modellen, och modifierade Value at Risks och Expected shortfalls. När man beräknar kapitalkravet för Advanced Internal Ratings-Based-modellen, illustreras saknaden av komplexa koncentrationsrisköverväganden tydligt av de betydligt lägre resultaten jämfört med någon av de andra metoderna. Dessutom kan en ökad skillnad identifieras mellan kapitalkraven som erhålls från Pykhtins approximation och Monte Carlo-metoden. Detta understryker vikten av att använda komplexa metoder för att fullt ut förstå de inneboende portföljriskerna.
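For context, the sketch below shows the kind of single-factor Monte Carlo benchmark that multi-factor approximations such as Pykhtin's are usually compared against: portfolio losses are simulated under a one-factor Gaussian model, Value at Risk and Expected Shortfall are read off the empirical loss distribution, and an ASRF-style analytic quantile is computed for comparison. All parameters are illustrative assumptions, not properties of the Nordea data sets, and the multi-factor adjustment itself is not implemented here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

n_obligors, n_scenarios = 1000, 20_000
pd_ = np.full(n_obligors, 0.01)            # probability of default
lgd = np.full(n_obligors, 0.45)            # loss given default
ead = rng.uniform(0.5, 1.5, n_obligors)    # exposure at default
rho, alpha = 0.15, 0.995                   # asset correlation, confidence level
threshold = norm.ppf(pd_)                  # default thresholds for asset returns
loss_weight = lgd * ead

losses = np.empty(n_scenarios)
z = rng.standard_normal(n_scenarios)       # single systematic factor
for k in range(n_scenarios):
    eps = rng.standard_normal(n_obligors)  # idiosyncratic shocks
    asset = np.sqrt(rho) * z[k] + np.sqrt(1.0 - rho) * eps
    losses[k] = loss_weight[asset < threshold].sum()

var_mc = np.quantile(losses, alpha)                  # Monte Carlo Value at Risk
es_mc = losses[losses >= var_mc].mean()              # Monte Carlo Expected Shortfall

# single-factor analytic comparison: conditional expected loss at the
# alpha-quantile of the systematic factor (ASRF-style capital)
cond_pd = norm.cdf((norm.ppf(pd_) + np.sqrt(rho) * norm.ppf(alpha)) / np.sqrt(1.0 - rho))
var_asrf = (cond_pd * loss_weight).sum()
```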
|
466 |
[pt] ESTIMATIVA DE RISCOS EM REDES ELÉTRICAS CONSIDERANDO FONTES RENOVÁVEIS E CONTINGÊNCIAS DE GERAÇÃO E TRANSMISSÃO VIA FLUXO DE POTÊNCIA PROBABILÍSTICO / [en] RISK ASSESSMENT IN ELECTRIC NETWORKS CONSIDERING RENEWABLE SOURCES AND GENERATION AND TRANSMISSION CONTINGENCIES VIA PROBABILISTIC POWER FLOW
24 November 2023 (has links)
[pt] A demanda global por soluções sustentáveis para geração de energia elétrica cresceu rapidamente nas últimas décadas, sendo impulsionada por incentivos fiscais dos governos e investimentos em pesquisa e desenvolvimento de tecnologias. Isso provocou uma crescente inserção de fontes renováveis nas redes elétricas ao redor do mundo, criando novos desafios críticos para as avaliações de desempenho dos sistemas que são potencializados pela intermitência desses recursos energéticos combinada às falhas dos equipamentos de rede. Motivado por esse cenário, esta dissertação aborda a estimativa de risco de inadequação de grandezas elétricas, como ocorrências de sobrecarga em ramos elétricos ou subtensão em barramentos, através do uso do fluxo de potência probabilístico, baseado na simulação Monte Carlo e no método de entropia cruzada. O objetivo é determinar o risco do sistema não atender a critérios operativos, de forma precisa e com eficiência computacional, considerando as incertezas de carga, geração e transmissão. O método é aplicado aos sistemas testes IEEE RTS 79 e IEEE 118 barras, considerando também versões modificadas com a inclusão de uma usina eólica, e os resultados são amplamente discutidos. / [en] The global demand for sustainable electricity generation has grown rapidly in recent decades, driven by government tax incentives and investments in technology research and development. This has caused a growing penetration of renewable sources in power networks around the world, creating new critical challenges for system performance assessments, which are intensified by the intermittency of these energy resources combined with failures of network equipment. Motivated by this scenario, this dissertation addresses the estimation of the risk of inadequacy of electrical quantities, such as overload occurrences in branches or undervoltage at buses, through the use of probabilistic power flow based on Monte Carlo simulation and the cross-entropy method. The objective is to determine the risk of the system not meeting operational criteria, accurately and with computational efficiency, considering load, generation, and transmission uncertainties. The method is applied to the IEEE RTS 79 and IEEE 118-bus test systems, also considering modified versions that include a wind power plant, and the results are extensively discussed.
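The combination of Monte Carlo simulation with the cross-entropy method mentioned above can be sketched for a single scalar overload indicator as follows: the sampling distribution is iteratively tilted toward the failure region, and the rare-event probability is then recovered with likelihood-ratio weights. The scalar performance function standing in for a full probabilistic power flow, and all numerical values, are illustrative assumptions only.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def loading(x):
    """Stand-in for the power-flow response: branch loading (in % of rating)
    as a function of a standard-normal uncertainty x (load/wind deviation)."""
    return 70.0 + 9.0 * x

limit = 110.0                        # overload above this loading
n_per_iter, elite_frac = 2000, 0.1

# cross-entropy iterations: shift the Gaussian sampling mean toward failure
mu = 0.0
for _ in range(10):
    x = rng.normal(mu, 1.0, n_per_iter)
    y = loading(x)
    threshold = min(np.quantile(y, 1 - elite_frac), limit)
    elite = x[y >= threshold]
    mu = elite.mean()                # CE update of the tilted sampling density

# final importance-sampling estimate with likelihood ratios w = f(x)/g(x)
x = rng.normal(mu, 1.0, 50_000)
w = norm.pdf(x) / norm.pdf(x, loc=mu)
risk = np.mean(w * (loading(x) >= limit))
crude = np.mean(loading(rng.standard_normal(50_000)) >= limit)  # for comparison
```

In this toy setting the crude estimator typically returns zero hits, while the tilted estimator resolves the small overload probability with far fewer samples, which is the motivation for combining the two techniques.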
|
467 |
Applying Peaks-Over-Threshold for Increasing the Speed of Convergence of a Monte Carlo Simulation / Peaks-Over-Threshold tillämpat på en Monte Carlo simulering för ökad konvergenshastighet
Jakobsson, Eric, Åhlgren, Thor, January 2022 (has links)
This thesis investigates applying the semiparametric Peaks-Over-Threshold method to data generated from a Monte Carlo simulation when estimating the financial risk measures Value-at-Risk and Expected Shortfall. The goal is to achieve faster convergence than a plain Monte Carlo simulation when assessing extreme events that represent the worst outcomes of a financial portfolio. Achieving faster convergence enables a reduction of the number of iterations in the Monte Carlo simulation, and thus a more efficient way of estimating risk measures for the portfolio manager. The financial portfolio consists of US life insurance policies offered on the secondary market, gathered by our partner RessCapital. The method is evaluated on three portfolios with different defining characteristics. In Part I, an analysis for selecting an optimal threshold is made. The accuracy and precision of Peaks-Over-Threshold are compared to a Monte Carlo simulation with 10,000 iterations, using a simulation with 100,000 iterations as the reference value. Depending on the risk measure and the percentile of interest, different optimal thresholds are selected. Part II presents the results with the optimal thresholds from Part I. One can conclude that Peaks-Over-Threshold performed significantly better than the Monte Carlo simulation with 10,000 iterations for Value-at-Risk. The results for Expected Shortfall did not show a clear improvement in terms of precision, but they did show improvement in terms of accuracy. Value-at-Risk and Expected Shortfall at the 99.5th percentile achieved a greater error reduction than at the 99th. The results therefore aligned well with theory: the rarer the event considered, the better the Peaks-Over-Threshold method performed. In conclusion, applying Peaks-Over-Threshold can prove useful when looking to reduce the number of iterations, since it does increase the convergence of a Monte Carlo simulation. The result is, however, dependent on the rarity of the event of interest and on the level of precision and accuracy required. / Det här examensarbetet tillämpar metoden Peaks-Over-Threshold på data genererat från en Monte Carlo simulering för att estimera de finansiella riskmåtten Value-at-Risk och Expected Shortfall. Målet med arbetet är att uppnå en snabbare konvergens jämfört med en Monte Carlo simulering när intresset är s.k. extrema händelser som symboliserar de värsta utfallen för en finansiell portfölj. Uppnås en snabbare konvergens kan antalet iterationer i simuleringen minskas, vilket möjliggör ett mer effektivt sätt att estimera riskmåtten för portföljförvaltaren. Den finansiella portföljen består av amerikanska livförsäkringskontrakt som har erbjudits på andrahandsmarknaden, insamlat av vår partner RessCapital. Metoden utvärderas på tre olika portföljer med olika karaktär. I Del I så utförs en analys för att välja en optimal tröskel för Peaks-Over-Threshold. Noggrannheten och precisionen för Peaks-Over-Threshold jämförs med en Monte Carlo simulering med 10,000 iterationer, där en Monte Carlo simulering med 100,000 iterationer används som referensvärde. Beroende på riskmått samt vilken percentil som är av intresse så väljs olika trösklar. I Del II presenteras resultaten med de "optimalt" valda trösklarna från Del I. Peaks-over-Threshold påvisade signifikant bättre resultat för Value-at-Risk jämfört med Monte Carlo simuleringen med 10,000 iterationer.
Resultaten för Expected Shortfall påvisade inte en signifikant förbättring sett till precision, men visade förbättring sett till noggrannhet. För både Value-at-Risk och Expected Shortfall uppnådde Peaks-Over-Threshold en större felminskning vid 99.5:e percentilen jämfört med den 99:e. Resultaten var därför i linje med de teoretiska förväntningarna då en högre percentil motsvarar ett extremare event. Sammanfattningsvis så kan metoden Peaks-Over-Threshold vara användbar när det kommer till att minska antalet iterationer i en Monte Carlo simulering då resultatet visade att Peaks-Over-Threshold appliceringen accelererar Monte Carlon simuleringens konvergens. Resultatet är dock starkt beroende av det undersökta eventets sannolikhet, samt precision- och noggrannhetskravet.
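A minimal version of the Peaks-Over-Threshold step described above, applied to a synthetic Monte Carlo loss sample rather than the RessCapital portfolios, might look as follows; the Student-t data and the 95th-percentile threshold are assumptions for illustration.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)
losses = rng.standard_t(df=4, size=10_000)       # stand-in for simulated portfolio losses

u = np.quantile(losses, 0.95)                    # threshold choice (here: 95th percentile)
excess = losses[losses > u] - u
n, n_u = losses.size, excess.size

# fit a generalized Pareto distribution to the exceedances (location fixed at 0)
xi, _, beta = genpareto.fit(excess, floc=0)

def var_es_pot(q):
    """Tail VaR and Expected Shortfall at confidence level q from the GPD tail."""
    var_q = u + beta / xi * ((n / n_u * (1 - q)) ** (-xi) - 1)
    es_q = var_q / (1 - xi) + (beta - xi * u) / (1 - xi)
    return var_q, es_q

var99, es99 = var_es_pot(0.99)
var995, es995 = var_es_pot(0.995)
emp_var99 = np.quantile(losses, 0.99)            # raw empirical estimate for comparison
```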
|
468 |
Experimental investigation of buoyancy-affected flow and heat transfer in a rotating cavity with axial throughflow / Experimentelle Untersuchung von auftriebsbehafteter Strömung und Wärmeübertragung einer rotierenden Kavität mit axialer Durchströmung
Diemel, Eric, 23 April 2024 (has links)
The flow and heat transfer within compressor rotor cavities of aero-engines constitute a conjugate problem. Depending on the operating conditions, buoyancy forces, caused by the radial temperature difference between the cold throughflow and the hotter shroud, can significantly influence the amount of entrained air. The heat transfer therefore depends on the radial temperature gradient of the cavity walls and, conversely, the disk temperatures depend on the heat transfer. In this thesis, disk Nusselt numbers are calculated with reference to the air inlet temperature and, for comparison, to a modeled local air temperature inside the cavity. The local disk heat flux is determined from measured steady-state surface temperatures by solving the inverse heat transfer problem in an iterative procedure. The conduction equation is solved on a 2D mesh using a validated finite element approach, and the heat flux confidence intervals are calculated with a stratified Monte Carlo approach. An estimate of the amount of air entering the cavity is obtained from a simplified heat balance. In addition to the thermal characterization of the cavity, the mass exchange between the air in the cavity and the axial flow in the annular gap, as well as the swirl distribution of the air in the cavity, are also investigated.

1 Einleitung
2 Grundlagen und Literaturübersicht
2.1 Modellsystem der rotierenden Kavitäten mit axialer Durchströmung
2.2 Ergebnisgrößen
2.3 Strömung in rotierenden Kavitäten
2.4 Wärmeübertragung in rotierenden Kavitäten
2.5 Fluidtemperatur in rotierenden Kavitäten
3 Experimenteller Aufbau
4 Messtechnik
4.1 Oberflächen- und Materialtemperaturen
4.2 Lufttemperaturen
4.3 Statischer Druck
4.4 Dreiloch-Drucksonden
5 Datenauswertung
5.1 Kernrotationsverhältnis
5.2 Wärmestromdichte und Nusseltzahl
5.2.1 Finite-Elemente Modell
5.2.2 inverses Wärmeleitungsproblem
5.2.3 Anpassungsmethode
5.2.4 Testfälle zur Validierung
5.2.5 Validierung Testfall 1 und 3 - ideale Kavitätenscheibe
5.2.6 Validierung Testfall 2 - Reproduzierbarkeit
5.2.7 Validierung Testfall 4 - lokales Ereignis
5.2.8 Bestimmung der Wärmestromdichte-Unsicherheit
5.2.9 Anwendung der Anpassungsmethode auf experimentelle Daten
5.2.10 Wahl der Randbedingungsfunktion
5.2.11 Wärmeübergangskoeffizient und Nusselt-Zahl
5.2.12 Zusammenfassung
5.3 Austauschmassenstrom
6 Experimentelle Ergebnisse
6.1 Dichteverteilung in der Kavität
6.2 Massenaustausch Kavität
6.3 Wärmeübertragung in der Kavität
6.3.1 Fallbeispiel
6.3.2 Einfluss der Drehfrequenz
6.3.3 Einfluss des Massenstromes
6.3.4 Einfluss des Auftriebsparameters
6.4 Wärmeübertragung im Ringspalt
6.5 Drall im Ringspalt und der Kavität
7 Zusammenfassung und Ausblick / Die Strömung und Wärmeübertragung in den Verdichterkavitäten von Flugtriebwerken ist ein konjugiertes Problem. Durch die radialen Temperaturunterschiede in der Kavität wird die Menge der in die Kavität strömenden Luft stark beeinflusst. Somit ist die Wärmeübertragung abhängig von den radialen Temperaturgradienten der Scheibenwände und umgekehrt ist die Scheibentemperatur abhängig von der Wärmeübertragung. Die Nusselt-Zahl in diesem System wurde aufgrund der schwierigen Zugänglichkeit in der Historie auf die eine Referenztemperatur vor der Kavität bezogen. Dies ist insofern problematisch, da hierdurch die thermischen Verhältnisse unterschätzt werden können. In dieser Arbeit wird ein neuer Ansatz zu Berechnung der Nusselt-Zahl mithilfe einer modellierten lokalen Lufttemperatur innerhalb der Kavität verwendet. Die lokale Wärmestromdichte auf der Scheibenoberfläche wird mithilfe eines validierten zweidimensionalen rotationssymmetrischen Finite-Element Modells auf der Grundlage von gemessenen Oberflächentemperaturen berechnet. Dies stellt ein inverses Wärmeleitungsproblem dar, welches mithilfe einer Anpassungsmethode gelöst wurde. Die Auswirkung der Messunsicherheit der Temperaturmessung auf die berechnete Wärmestromdichte wird durch eine geschichtete Monte-Carlo-Simulation, nach dem Ansatz der LHC-Methode, untersucht. Neben der thermischen Charakterisierung der Kavität wird zudem der Massenaustausch der Luft in der Kavität mit der axialen Durchströmung im Ringspalt sowie die Drallverteilung der Luft in der Kavität untersucht.:1 Einleitung
2 Grundlagen und Literaturübersicht
2.1 Modellsystem der rotierenden Kavitäten mit axialer Durchströmung
2.2 Ergebnisgrößen
2.3 Strömung in rotierenden Kavitäten
2.4 Wärmeübertragung in rotierenden Kavitäten
2.5 Fluidtemperatur in rotierenden Kavitäten
3 Experimenteller Aufbau
4 Messtechnik
4.1 Oberflächen- und Materialtemperaturen
4.2 Lufttemperaturen
4.3 Statischer Druck
4.4 Dreiloch-Drucksonden
5 Datenauswertung
5.1 Kernrotationsverhältnis
5.2 Wärmestromdichte und Nusseltzahl
5.2.1 Finite-Elemente Modell
5.2.2 inverses Wärmeleitungsproblem
5.2.3 Anpassungsmethode
5.2.4 Testfälle zur Validierung
5.2.5 Validierung Testfall 1 und 3 - ideale Kavitätenscheibe
5.2.6 Validierung Testfall 2 - Reproduzierbarkeit
5.2.7 Validierung Testfall 4 - lokales Ereignis
5.2.8 Bestimmung der Wärmestromdichte-Unsicherheit
5.2.9 Anwendung der Anpassungsmethode auf experimentelle Daten
5.2.10 Wahl der Randbedingungsfunktion
5.2.11 Wärmeübergangskoeffizient und Nusselt-Zahl
5.2.12 Zusammenfassung
5.3 Austauschmassenstrom
6 Experimentelle Ergebnisse
6.1 Dichteverteilung in der Kavität
6.2 Massenaustausch Kavität
6.3 Wärmeübertragung in der Kavität
6.3.1 Fallbeispiel
6.3.2 Einfluss der Drehfrequenz
6.3.3 Einfluss des Massenstromes
6.3.4 Einfluss des Auftriebsparameters
6.4 Wärmeübertragung im Ringspalt
6.5 Drall im Ringspalt und der Kavität
7 Zusammenfassung und Ausblick
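The stratified Monte Carlo evaluation of the heat-flux uncertainty mentioned in the English abstract above can be illustrated with a strongly simplified sketch: measured temperatures are sampled with one draw per equal-probability stratum and propagated through a 1-D conduction relation instead of the full inverse finite element evaluation. Conductivity, wall thickness, temperature levels, and uncertainties are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

def stratified_normal(mean, std, n):
    """One normal sample per equal-probability stratum (stratified sampling)."""
    u = (np.arange(n) + rng.uniform(0.0, 1.0, n)) / n   # one uniform draw per stratum
    rng.shuffle(u)                                      # decouple strata across variables
    return mean + std * norm.ppf(u)

n = 5000
k, d = 15.0, 0.008                            # conductivity (W/m/K), wall thickness (m)
t_hot = stratified_normal(430.0, 0.5, n)      # measured temperatures with uncertainty (K)
t_cold = stratified_normal(415.0, 0.5, n)

q = k * (t_hot - t_cold) / d                  # heat flux samples (W/m^2)
q_mean = q.mean()
q_lo, q_hi = np.quantile(q, [0.025, 0.975])   # 95 % confidence interval on the heat flux
```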
|
469 |
An evaluation of changing profit risks in Kansas cattle feeding operations
Herrington, Matthew Abbott, January 1900 (has links)
Master of Science / Department of Agricultural Economics / Ted C. Schroeder / Glynn T. Tonsor / Cattle feeders face significant profit risk when placing cattle on feed. Risks arise from both financial and biological sources. To date, few standardized measures exist to compare current risks against historic levels or to obtain forward-looking risk estimates, and those that do exist could benefit from updates and the inclusion of additional risk elements.
This study measures the risk around expected profits when cattle are placed on feed. It creates a forward-looking estimate of expected feedlot profits using futures and options market data as price forecasts. Joint probability distributions are created for the prices and cattle performance variables affecting feedlot profit margins. Monte Carlo simulation techniques are then employed to generate probability distributions of expected feedlot profits.
Results show cattle feeding is a risky business and that cattle feeders have been placing cattle on feed facing significantly negative expected returns since June 2010. This assessment of negative expected profits is consistent with other findings. Over the study's 2002 to 2013 time frame, the share of relative risk to cattle feeding profits accounted for by feed costs has been increasing, while the relative risk levels from feeder cattle and fed cattle prices have remained steady. Additionally, the probability of realized per-head profits greater than $100 has been decreasing since 2009, and the probability of realized per-head profits less than -$100 has been increasing rapidly.
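A bare-bones version of the simulation framework described above might draw jointly distributed prices and push them through a simple per-head closeout budget; the price levels, correlations, and performance assumptions below are placeholders, not the distributions estimated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# joint draw of fed-cattle price, feeder price and corn price ($/cwt, $/cwt, $/bu)
mean = np.array([119.0, 160.0, 6.50])
sd = np.array([8.0, 10.0, 0.80])
corr = np.array([[1.0, 0.6, 0.2],
                 [0.6, 1.0, 0.3],
                 [0.2, 0.3, 1.0]])
cov = np.outer(sd, sd) * corr
fed, feeder, corn = rng.multivariate_normal(mean, cov, size=n).T

# simple closeout assumptions: 750 lb placement, 1350 lb sale weight,
# 50 bu of corn equivalent per head plus fixed non-feed costs
revenue = fed * 13.50             # $/cwt times 13.5 cwt sold
feeder_cost = feeder * 7.50       # $/cwt times 7.5 cwt placed
feed_cost = corn * 50.0 + 100.0   # corn plus other ration/yardage costs
profit = revenue - feeder_cost - feed_cost

p_gain_100 = np.mean(profit > 100.0)   # probability of per-head profit above $100
p_loss_100 = np.mean(profit < -100.0)  # probability of per-head loss beyond $100
```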
|
470 |
Pricing American options with jump-diffusion by Monte Carlo simulation
Fouse, Bradley Warren, January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Chih-Hang Wu / In recent years the stock markets have shown tremendous volatility, with significant spikes and drops in stock prices. Within the past decade there have been numerous jumps in the market; one key example was on September 17, 2001, when the Dow Jones Industrial Average dropped 684 points following the 9-11 attacks on the United States. These evident jumps in the markets show the inaccuracy of the Black-Scholes model for pricing options. Merton provided the first research to address this problem in 1976, when he extended the Black-Scholes model to include jumps in the market. In recent years, Kou has shown that the distribution of the jump sizes used in Merton's model does not efficiently model the actual movements of the markets. Consequently, Kou modified Merton's model, changing the jump size distribution from a normal distribution to the double exponential distribution.
Kou's research utilizes mathematical equations to estimate the value of an American put option where the underlying stock follows a jump-diffusion process. The research contained within this thesis extends Kou's work by using Monte Carlo simulation (MCS) coupled with least-squares regression to price this type of American option. Utilizing MCS provides a continuous exercise and pricing region, which is a distinct difference, and advantage, of MCS over other analytical techniques. The aim of this research is to investigate whether or not MCS is an efficient means of pricing American put options where the underlying stock undergoes a jump-diffusion process. This thesis also extends the simulation to utilize copulas in the pricing of baskets containing several of the aforementioned type of American options. The use of copulas creates a joint distribution from independent marginal distributions and provides an efficient means of modeling multiple options and the correlation between them.
The research contained within this thesis shows that MCS provides a means of accurately pricing American put options where the underlying stock follows a jump-diffusion. It also shows that the approach can be extended to use copulas to price baskets of options with jump-diffusion. Numerical examples are presented for both parts to exemplify the results obtained by using MCS for pricing options in both single-dimension and multidimensional problems.
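A compact sketch of the two building blocks discussed above, risk-neutral simulation of Kou's double-exponential jump-diffusion and Longstaff-Schwartz least-squares regression for the American put, is given below. Parameters and the quadratic regression basis are illustrative choices, not necessarily those of the thesis, and the copula extension for baskets is not shown.

```python
import numpy as np

rng = np.random.default_rng(8)

def kou_paths(s0, r, sigma, lam, p_up, eta1, eta2, T, n_steps, n_paths):
    """Simulate price paths; log jump sizes Y with P(up)=p_up, up ~ Exp(eta1),
    down ~ -Exp(eta2). zeta = E[exp(Y)] - 1 keeps the discounted price a martingale."""
    dt = T / n_steps
    zeta = p_up * eta1 / (eta1 - 1.0) + (1 - p_up) * eta2 / (eta2 + 1.0) - 1.0
    drift = (r - 0.5 * sigma**2 - lam * zeta) * dt
    log_s = np.full(n_paths, np.log(s0))
    paths = np.empty((n_steps + 1, n_paths))
    paths[0] = s0
    for t in range(1, n_steps + 1):
        z = rng.standard_normal(n_paths)
        n_jumps = rng.poisson(lam * dt, n_paths)
        jumps = np.zeros(n_paths)
        for k in np.nonzero(n_jumps)[0]:
            up = rng.random(n_jumps[k]) < p_up
            y = np.where(up, rng.exponential(1 / eta1, n_jumps[k]),
                             -rng.exponential(1 / eta2, n_jumps[k]))
            jumps[k] = y.sum()
        log_s = log_s + drift + sigma * np.sqrt(dt) * z + jumps
        paths[t] = np.exp(log_s)
    return paths

def american_put_lsm(paths, strike, r, T):
    """Longstaff-Schwartz: regress discounted continuation values on a quadratic
    polynomial of the stock price, using in-the-money paths only."""
    n_steps = paths.shape[0] - 1
    dt = T / n_steps
    cashflow = np.maximum(strike - paths[-1], 0.0)       # payoff at maturity
    for t in range(n_steps - 1, 0, -1):
        cashflow *= np.exp(-r * dt)                      # discount one step back
        itm = strike - paths[t] > 0
        if itm.sum() < 3:
            continue
        x = paths[t, itm]
        basis = np.column_stack([np.ones_like(x), x, x**2])
        coef, *_ = np.linalg.lstsq(basis, cashflow[itm], rcond=None)
        continuation = basis @ coef
        exercise = strike - x
        cashflow[itm] = np.where(exercise > continuation, exercise, cashflow[itm])
    return np.exp(-r * dt) * cashflow.mean()             # discount final step to today

paths = kou_paths(s0=100.0, r=0.05, sigma=0.2, lam=1.0, p_up=0.4,
                  eta1=10.0, eta2=5.0, T=1.0, n_steps=50, n_paths=20_000)
price = american_put_lsm(paths, strike=100.0, r=0.05, T=1.0)
```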
|