391

Generalized Principal Component Analysis

Solat, Karo 05 June 2018 (has links)
The primary objective of this dissertation is to extend classical Principal Component Analysis (PCA), which reduces the dimensionality of a large number of normally distributed, interrelated variables, in two directions. The first goes beyond the static (contemporaneous or synchronous) covariance matrix among these variables to include certain forms of temporal (over-time) dependence. The second extends the PCA model beyond the Normal multivariate distribution to the elliptically symmetric family of distributions, which includes the Normal, Student's t, Laplace and Pearson type II distributions as special cases. The result of these extensions is called Generalized Principal Component Analysis (GPCA). GPCA is illustrated using both Monte Carlo simulations and an empirical study, to demonstrate the enhanced reliability of these more general factor models in the context of out-of-sample forecasting. The empirical study examines the predictive capacity of the GPCA method in the context of exchange rate forecasting, showing how it dominates forecasts based on existing standard methods, including random walk models, with or without macroeconomic fundamentals. / Ph. D. / Factor models are employed to capture the hidden factors behind the co-movement of a set of variables. They use the variation and co-variation among these variables to construct a smaller number of latent variables that can explain the variation in the data at hand. Principal component analysis (PCA) is the most popular of these factor models. I have developed new factor models that reduce the dimensionality of a large data set by extracting a small number of independent latent factors which represent a large proportion of the variability in that data set. These factor models, called Generalized Principal Component Analysis (GPCA), extend classical PCA to account for both contemporaneous and temporal dependence based on non-Gaussian multivariate distributions. Using Monte Carlo simulations along with an empirical study, I demonstrate the enhanced reliability of my methodology in the context of out-of-sample forecasting. In the empirical study, I examine the predictive power of the GPCA method in the context of exchange rate forecasting and find that it dominates forecasts based on existing standard methods as well as random walk models, with or without macroeconomic fundamentals.
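As a point of reference for the extensions described above, classical PCA on the static covariance matrix can be written in a few lines; the GPCA of the dissertation generalizes exactly this construction to temporal dependence and elliptically symmetric distributions. The data, dimensions and variable names below are illustrative only, not taken from the thesis.

```python
# Minimal sketch of classical PCA via eigendecomposition of the sample
# covariance matrix -- the baseline that GPCA generalizes.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))           # 500 observations, 10 variables
Xc = X - X.mean(axis=0)                      # center each variable

cov = np.cov(Xc, rowvar=False)               # 10 x 10 contemporaneous covariance
eigvals, eigvecs = np.linalg.eigh(cov)       # symmetric eigendecomposition
order = np.argsort(eigvals)[::-1]            # sort components by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 3                                        # keep the first k principal components
scores = Xc @ eigvecs[:, :k]                 # factor scores (n x k)
explained = eigvals[:k].sum() / eigvals.sum()
print(f"first {k} components explain {explained:.1%} of the variance")
```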
392

Measurement Invariance and Sensitivity of Delta Fit Indexes in Non-Normal Data: A Monte Carlo Simulation Study

Yu, Meixi 01 January 2024 (has links) (PDF)
The concept of measurement invariance is essential in ensuring psychological and educational tests are interpreted consistently across diverse groups. This dissertation investigated the practical challenges associated with measurement invariance, specifically how measurement invariance delta fit indexes are affected by non-normal data. Non-normal data distributions are common in real-world scenarios, yet many statistical methods and measurement invariance delta fit indexes are based on the assumption of normally distributed data. This raises concerns about the accuracy and reliability of conclusions drawn from such analyses. The primary objective of this research is to examine how commonly used delta fit indexes of measurement invariance respond under conditions of non-normality. The present research built upon the study of Cao and Liang (2022a) to test the sensitivities of a series of delta fit indexes and further scrutinized the role of non-normal data distributions. A series of simulation studies was conducted, in which data sets with varying degrees of skewness and kurtosis were generated. These data sets were then examined by multi-group confirmatory factor analysis (MGCFA) using the Satorra-Bentler scaled chi-square difference test, a method specifically designed to adjust for non-normality. The performance of delta fit indexes such as the Delta Comparative Fit Index (∆CFI), Delta Standardized Root Mean Square Residual (∆SRMR) and Delta Root Mean Square Error of Approximation (∆RMSEA) was assessed. These findings have significant implications for professionals and scholars in psychology and education. They provide constructive information on key measurement-related aspects of research and practice in these fields, contributing to the broader discussion on measurement invariance by highlighting challenges and offering solutions for assessing model fit in non-normal data scenarios.
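For readers unfamiliar with the indexes being compared, the arithmetic behind a delta fit index is simply the change in a fit statistic between two nested invariance models. The sketch below uses hypothetical fit values and a common rule-of-thumb cutoff; neither the numbers nor the cutoff are results from this dissertation.

```python
# Sketch of delta fit indexes: the change in a fit index between a less
# constrained model (e.g., configural invariance) and a more constrained one
# (e.g., metric or scalar invariance). Fit values are hypothetical placeholders.
fit_configural = {"CFI": 0.972, "RMSEA": 0.041, "SRMR": 0.036}
fit_metric     = {"CFI": 0.965, "RMSEA": 0.047, "SRMR": 0.049}

delta = {name: fit_metric[name] - fit_configural[name] for name in fit_configural}
print(delta)  # e.g., ΔCFI ≈ -0.007, ΔRMSEA ≈ 0.006, ΔSRMR ≈ 0.013

# A common rule-of-thumb screen: flag possible noninvariance if |ΔCFI| exceeds ~0.01.
flagged = abs(delta["CFI"]) > 0.01
print("measurement invariance questionable:", flagged)
```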
393

Algorithmic studies of compact lattice QED with Wilson fermions

Zverev, Nikolai 18 December 2001 (has links)
We investigate, numerically and in part analytically, compact lattice quantum electrodynamics with Wilson fermions, concentrating on two particular problems of the theory: the influence of zero-momentum gauge field modes in the Coulomb phase and the performance of different Monte Carlo algorithms in the presence of dynamical fermions. We show that the influence of the zero-momentum modes on gauge-dependent lattice observables, such as photon and fermion correlators, near the critical chiral limit line within the Coulomb phase leads to behaviour that deviates from the naively expected lattice perturbation theory. These modes are also responsible for screening the critical behaviour of gauge-invariant fermion observables near the chiral limit line. Within the Coulomb phase, eliminating these zero-momentum modes from the gauge configurations restores the perturbatively expected behaviour of the gauge-dependent observables, and the critical properties of the gauge-invariant fermion observables become visible. The critical hopping parameter obtained from the invariant fermion observables agrees well with that extracted from the gauge-dependent ones.
We implement the two-step multiboson algorithm for numerical investigations in the U(1) lattice model with an even number of dynamical Wilson fermion flavours. We discuss the appropriate choice of technical parameters for both the two-step multiboson and the hybrid Monte Carlo algorithm and give theoretical estimates of the performance of these simulation methods. We show, both numerically and theoretically, that the two-step multiboson algorithm is a good alternative and at least competitive with the hybrid Monte Carlo method. We argue that a further improvement of its efficiency can be achieved by increasing the number of local update sweeps and by reducing the orders of the first and second polynomials at the expense of the reweighting step.
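To make the zero-momentum-mode removal concrete, a minimal sketch of the operation on the link angles of a compact U(1) configuration follows: for each direction, the lattice-averaged link angle is subtracted from every link in that direction. The lattice size, random configuration and variable names are illustrative and are not taken from the simulation code used in the thesis.

```python
# Sketch of removing zero-momentum gauge modes from a compact U(1) configuration.
import numpy as np

rng = np.random.default_rng(1)
L, D = 8, 4                                            # lattice extent and dimensions
theta = rng.uniform(-np.pi, np.pi, (D, L, L, L, L))    # link angles theta_mu(x)

zero_mode = theta.mean(axis=(1, 2, 3, 4), keepdims=True)  # phi_mu = <theta_mu(x)>_x
theta_subtracted = theta - zero_mode                   # configuration without the constant mode

# Gauge-dependent observables would then be measured on exp(i * theta_subtracted).
links = np.exp(1j * theta_subtracted)
print(np.abs(links.mean()))                            # crude check on the cleaned configuration
```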
394

[en] PROBABILISTIC LOAD FLOW VIA MONTE CARLO SIMULATION AND CROSS-ENTROPY METHOD / [pt] FLUXO DE POTÊNCIA PROBABILÍSTICO VIA SIMULAÇÃO MONTE CARLO E MÉTODO DA ENTROPIA CRUZADA

ANDRE MILHORANCE DE CASTRO 12 February 2019 (has links)
[en] In the planning and operation of electric power systems, many evaluations are performed with the power flow algorithm to obtain and monitor the operating point of the network under study. In its deterministic use, generation values and load levels per bus must be specified, along with a specific configuration of the network. There is, however, an obvious restriction in running a deterministic power flow tool: it gives no perception of the impact produced by uncertainties in the input variables. The probabilistic load flow (PLF) algorithm aims to overcome the limitations imposed by the deterministic tool by allowing input uncertainties to be considered. Greater sensitivity is obtained in the evaluation of results, since possible operating regions are examined more clearly; consequently, the risk of the system operating outside its nominal conditions can be estimated. This dissertation proposes a methodology based on Monte Carlo simulation (MCS) with importance sampling via the cross-entropy method. Risk indices for selected events (e.g., overloads on transmission equipment) are evaluated while keeping the accuracy and flexibility of conventional MCS, but in much less computational time. Unlike analytical PLF techniques, which primarily aim at building probability density curves for the output variables (flows, etc.) and always need their accuracy checked against MCS, the proposed method evaluates only the tail areas of these densities, obtaining more accurate results in the regions of interest from the operational-risk point of view. The proposed method is applied to the IEEE 14-bus, IEEE RTS and IEEE 118-bus systems, and the results are discussed in detail. In all cases there are clear gains in computational performance, with accuracy maintained, compared to conventional MCS. Possible applications of the method and future developments are also part of the dissertation.
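The cross-entropy idea named above can be illustrated on a toy rare-event problem: the sampling density is tilted toward the rare region in a few adaptive steps, and the final estimate is corrected with likelihood ratios. The one-dimensional Gaussian example below is a sketch of that mechanism only, not of the dissertation's power-flow implementation; the threshold and sample sizes are arbitrary.

```python
# Estimating a small exceedance probability P(X > b) with Monte Carlo plus
# cross-entropy importance sampling (Gaussian family, mean tilted toward b).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
b = 4.0                        # "overload" threshold for a standard normal X
n, rho = 10_000, 0.1           # samples per iteration, elite fraction

mu = 0.0                       # CE iterations: shift the sampling mean toward b
for _ in range(5):
    x = rng.normal(mu, 1.0, n)
    elite = np.quantile(x, 1 - rho)
    level = min(elite, b)      # raise the level gradually, capped at b
    mu = x[x >= level].mean()  # CE update of the tilted mean
    if level >= b:
        break

x = rng.normal(mu, 1.0, n)     # final importance-sampling run
w = norm.pdf(x, 0.0, 1.0) / norm.pdf(x, mu, 1.0)   # likelihood ratios
p_hat = np.mean(w * (x > b))
print(p_hat, 1 - norm.cdf(b))  # estimate vs. exact tail probability
```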
395

Nanoscale pattern formation on ion-sputtered surfaces / Musterbildung auf der Nanometerskala an ion-gesputterten Oberflächen

Yasseri, Taha 21 January 2010 (has links)
No description available.
396

[pt] APLICAÇÕES DO MÉTODO DA ENTROPIA CRUZADA EM ESTIMAÇÃO DE RISCO E OTIMIZAÇÃO DE CONTRATO DE MONTANTE DE USO DO SISTEMA DE TRANSMISSÃO / [en] CROSS-ENTROPY METHOD APPLICATIONS TO RISK ESTIMATE AND OPTIMIZATION OF AMOUNT OF TRANSMISSION SYSTEM USAGE

23 November 2021 (has links)
[en] Local power distribution companies are not self-sufficient in electricity to serve their customers and must import additional supply from the interconnected bulk power system. In Brazil, they annually carry out the contracting process for the amount of transmission system usage (ATSU) for the next four years. This process is a real example of a task involving decisions under uncertainty with a high impact on the productivity of distribution companies and of the electricity sector in general. The task becomes even more complex in the face of the increasing variability associated with renewable generation and changing consumer profiles. The ATSU is a random variable, and being able to understand its variability is crucial for better decision making. Probabilistic power flow is a technique that maps the uncertainties of nodal injections and network configuration onto the transmission equipment and, consequently, onto the power imported at each connection point with the bulk power system. The main objective of this thesis is to develop methodologies based on probabilistic power flow via Monte Carlo simulation, together with the cross-entropy technique, to estimate the risks involved in the optimal contracting of the ATSU. The methodologies allow commercial software to be used for the power flow algorithm, which is relevant for large practical systems; a practical computational tool for the engineers of electric distribution companies is thus presented. Results with academic and real systems show that the proposals fulfill the stated objectives, with benefits in reducing both the total costs in the contract optimization process and the computational times involved in the risk estimates.
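As a rough illustration of the kind of risk figure that drives ATSU contracting, the sketch below estimates by plain Monte Carlo the probability that the imported power exceeds a contracted amount. The lognormal demand model and all numbers are assumptions made for the example, not data or methods from the thesis.

```python
# Plain Monte Carlo estimate of the risk of exceeding a contracted import amount.
import numpy as np

rng = np.random.default_rng(3)
contracted_mw = 120.0
peak_import_mw = rng.lognormal(mean=np.log(100.0), sigma=0.15, size=200_000)

exceed = peak_import_mw > contracted_mw
risk = exceed.mean()                                   # P(import > contract)
mean_excess = peak_import_mw[exceed].mean() - contracted_mw if exceed.any() else 0.0
print(f"risk of exceeding contract: {risk:.3%}, mean excess when it happens: {mean_excess:.1f} MW")
```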
397

Neutronenfluss in Untertagelaboren / Neutron flux in underground laboratories

Grieger, Marcel 28 January 2022 (has links)
The Felsenkeller laboratory is a new underground laboratory for nuclear astrophysics. It is located beneath 47 m of hornblende monzonite rock in the tunnel system of the former Dresden Felsenkeller brewery. In this work the neutron background in tunnels IV and VIII is investigated; findings from tunnel IV directly influenced the planned shielding for tunnel VIII. The measurement was performed with the HENSA neutron spectrometer, which consists of polyethylene-moderated ³He counters. The neutron response functions of the spectrometer are determined with the Monte Carlo particle transport code FLUKA. For each measurement site a prediction of the neutron flux is also produced, and the laboratories are mapped with respect to the two main components: muon-induced neutrons and rock neutrons from (α,n) reactions and fission processes. The measurement and analysis methods are applied in a new measurement at the deep underground laboratory LSC Canfranc, for which preliminary results are presented here for the first time. Furthermore, radiation protection simulations for the Felsenkeller laboratory are presented, which define the radiation protection framework for its scientific use; the values used for the Felsenkeller safety report are updated to the 2018 Radiation Protection Ordinance. Finally, experiments at the radio-frequency ion source at Felsenkeller, which was technically supervised as part of this work, are presented, including long-term measurements at the above-ground test stand at Helmholtz-Zentrum Dresden-Rossendorf. Contents: 1 Introduction and motivation; 2 Fundamentals; 3 The Dresden Felsenkeller; 4 Neutron flux measurements at the Felsenkeller; 5 Analysis of the neutron rates; 6 Measurement at LSC Canfranc; 7 Radiation protection at the Felsenkeller; 8 The radio-frequency ion source at the Felsenkeller; 9 Summary; Appendices: A Technical data on the counters used; B Construction drawings of the detectors; C WinBUGS pulse-height spectra; D Savitzky-Golay filter fits; E Unfolding with Gravel; F Omega variation with Gravel; G Activation simulations
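As background to the unfolding mentioned in the contents (e.g., unfolding with Gravel), the sketch below shows a simplified multiplicative unfolding iteration that recovers a spectrum from detector count rates and a response matrix. The response matrix, toy spectrum and update rule are illustrative, in the spirit of SAND-II/GRAVEL-type unfolding, and stand in for rather than reproduce the thesis analysis.

```python
# Toy spectrum unfolding: count rates m_i are linked to the flux phi_j by a
# response matrix R (in practice from FLUKA-type simulations); phi is
# recovered by a multiplicative iterative correction.
import numpy as np

rng = np.random.default_rng(4)
n_det, n_bins = 10, 40
R = rng.uniform(0.0, 1.0, (n_det, n_bins))                        # illustrative response matrix
phi_true = np.exp(-0.5 * ((np.arange(n_bins) - 15) / 6.0) ** 2)   # toy spectrum
m = R @ phi_true                                                  # simulated count rates

phi = np.ones(n_bins)                                             # flat start spectrum
for _ in range(500):
    pred = R @ phi
    phi *= (R.T @ (m / pred)) / R.sum(axis=0)                     # multiplicative correction

print(np.linalg.norm(R @ phi - m) / np.linalg.norm(m))            # residual in measurement space
```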
398

Multi-factor approximation: An analysis and comparison of Michael Pykhtin's paper “Multifactor adjustment”

Zanetti, Michael, Güzel, Philip January 2023 (has links)
The need to account for potential losses in rare events is of utmost importance for corporations operating in the financial sector. Common measures of potential losses are Value at Risk and Expected Shortfall, whose computation typically requires extensive Monte Carlo simulation. Another measure is the Advanced Internal Ratings-Based model, which estimates the capital requirement but accounts for only a single risk factor. As an alternative to these commonly used, time-consuming credit risk methods and measures, Michael Pykhtin presents methods to approximate Value at Risk and Expected Shortfall in his 2004 paper Multi-factor adjustment. The thesis' main focus is an elucidation and investigation of the approximation methods that Pykhtin presents. Pykhtin's approximations are implemented alongside the Monte Carlo methods that are used as a benchmark. The results Pykhtin presents are reproduced with closely matching values, verifying that the methods were implemented in correspondence with the article. The methods are also applied to a small and a large synthetic Nordea data set to test them on alternative data. Because of its size, the large data set cannot be processed in its original form, so a clustering algorithm is used to remove this limitation while preserving the characteristics of the original data set. When the methods are run on the synthetic Nordea data sets, the Value at Risk and Expected Shortfall results show a larger discrepancy between the approximated and the Monte Carlo simulated values. The noted differences are probably due to increased borrower exposures and portfolio structures that are not compatible with Pykhtin's approximation. The purpose of clustering the small data set is to test the effect on accuracy and to understand the clustering algorithm's impact before applying it to the large data set. Clustering the small data set produced results that deviate from those of the original small data set, as expected. The clustered large data set's approximation results showed a lower discrepancy to the benchmark Monte Carlo results than the small data set did. The larger portfolio is more granular, which decreases the variance of the outcome for both the Monte Carlo and the approximation methods, hence the lower discrepancy. Overall, the accuracy and execution time of Pykhtin's approximations are relatively good in these experiments. It is, however, very challenging for the approximate methods to handle large portfolios, considering the issues that arise at just a couple of thousand borrowers. Lastly, a comparison is made between the Advanced Internal Ratings-Based model and modified Value at Risk and Expected Shortfall measures. When the capital requirement is calculated with the Advanced Internal Ratings-Based model, its lack of concentration-risk treatment is clearly illustrated by results significantly lower than those of either of the other methods. In addition, an increasing difference can be identified between the capital requirements obtained from Pykhtin's approximation and from the Monte Carlo method. This emphasizes the importance of utilizing complex methods to fully grasp the inherent portfolio risks.
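The Monte Carlo benchmark referred to above can be sketched for the simplest case: a homogeneous portfolio under a one-factor Gaussian (Vasicek-type) model, with Value at Risk and Expected Shortfall read off the simulated loss distribution. All portfolio parameters below are invented for illustration, and Pykhtin's analytical multi-factor adjustment itself is not reproduced here.

```python
# Monte Carlo VaR and Expected Shortfall for a homogeneous one-factor credit portfolio.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n_borrowers, n_sims, alpha = 200, 20_000, 0.99
pd_, lgd, ead, rho = 0.01, 0.45, 1.0, 0.2           # homogeneous portfolio assumption
threshold = norm.ppf(pd_)                           # default threshold on asset value

z = rng.standard_normal((n_sims, 1))                # systematic factor
eps = rng.standard_normal((n_sims, n_borrowers))    # idiosyncratic shocks
assets = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps
losses = (assets < threshold).sum(axis=1) * lgd * ead / n_borrowers

var = np.quantile(losses, alpha)                    # Value at Risk at the chosen level
es = losses[losses >= var].mean()                   # Expected Shortfall beyond VaR
print(f"VaR {var:.4f}, ES {es:.4f} (loss as fraction of total exposure)")
```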
399

[pt] ESTIMATIVA DE RISCOS EM REDES ELÉTRICAS CONSIDERANDO FONTES RENOVÁVEIS E CONTINGÊNCIAS DE GERAÇÃO E TRANSMISSÃO VIA FLUXO DE POTÊNCIA PROBABILÍSTICO / [en] RISK ASSESSMENT IN ELECTRIC NETWORKS CONSIDERING RENEWABLE SOURCES AND GENERATION AND TRANSMISSION CONTINGENCIES VIA PROBABILISTIC POWER FLOW

24 November 2023 (has links)
[en] The global demand for sustainable solutions for electricity generation has grown rapidly in recent decades, driven by government tax incentives and investments in research and technology development. This has led to a growing penetration of renewable sources in power networks around the world, creating critical new challenges for system performance assessment, which are amplified by the intermittency of these energy resources combined with failures of network equipment. Motivated by this scenario, this dissertation addresses the estimation of the risk of inadequacy of electrical quantities, such as overloads in branches or undervoltages at buses, using a probabilistic power flow based on Monte Carlo simulation and the cross-entropy method. The objective is to determine, accurately and with computational efficiency, the risk of the system not meeting operating criteria, considering load, generation and transmission uncertainties. The method is applied to the IEEE RTS 79 and IEEE 118-bus test systems, including modified versions with the addition of a wind power plant, and the results are discussed in detail.
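A probabilistic power flow of this kind samples system states from component outage models and renewable output distributions before solving each state. The sketch below shows only that sampling layer: the outage rates, the Weibull wind model and the final adequacy test are placeholder assumptions standing in for the actual network solution, which is omitted.

```python
# State-sampling layer of a probabilistic power flow study with contingencies and wind.
import numpy as np

rng = np.random.default_rng(6)
n_states = 100_000
for_gen, for_line = 0.04, 0.01                         # assumed forced outage rates
n_gens, n_lines = 32, 38

gens_up = rng.random((n_states, n_gens)) > for_gen     # generator availability per state
lines_up = rng.random((n_states, n_lines)) > for_line  # line availability per state
wind_speed = rng.weibull(2.0, n_states) * 8.0          # assumed wind speed model (scale ~8 m/s)
wind_power = np.clip((wind_speed - 3.0) / (12.0 - 3.0), 0.0, 1.0)  # simple normalized power curve

# Placeholder "inadequacy" test standing in for the actual power flow check per state:
states_at_risk = (gens_up.sum(axis=1) < 29) & (wind_power < 0.1)
print("crude risk estimate:", states_at_risk.mean())
```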
400

Experimentelle Untersuchung von auftriebsbehafteter Strömung und Wärmeübertragung einer rotierenden Kavität mit axialer Durchströmung / Experimental investigation of buoyancy-affected flow and heat transfer in a rotating cavity with axial throughflow

Diemel, Eric 23 April 2024 (has links)
The flow and heat transfer within compressor rotor cavities of aero-engines is a conjugate problem. Depending on the operating conditions, buoyancy forces caused by the radial temperature difference between the cold throughflow and the hotter shroud can significantly influence the amount of entrained air. The heat transfer therefore depends on the radial temperature gradient of the cavity walls, and conversely the disk temperatures depend on the heat transfer. Because of the difficult accessibility of the cavity, disk Nusselt numbers have historically been referenced to an air temperature upstream of the cavity, which can underestimate the thermal conditions. In this thesis, disk Nusselt numbers are calculated both with reference to the air inlet temperature and with reference to a modeled local air temperature inside the cavity. The local disk heat flux is determined from measured steady-state surface temperatures by solving the inverse heat conduction problem in an iterative procedure. The conduction equation is solved on a 2D axisymmetric mesh using a validated finite element approach, and confidence intervals for the heat flux are calculated with a stratified Monte Carlo approach based on Latin hypercube sampling. An estimate of the amount of air entering the cavity is obtained from a simplified heat balance. In addition to the thermal characterization of the cavity, the mass exchange between the air in the cavity and the axial flow in the annular gap, as well as the swirl distribution of the air in the cavity, are investigated. Contents: 1 Introduction; 2 Fundamentals and literature review; 3 Experimental setup; 4 Measurement techniques; 5 Data analysis; 6 Experimental results; 7 Summary and outlook
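The stratified Monte Carlo step mentioned above can be illustrated with Latin hypercube sampling: measurement uncertainty in two wall temperatures is propagated to a derived heat flux and summarized as a confidence interval. The 1D conduction formula and all numbers are assumptions for the example; the thesis instead solves the inverse problem on a 2D axisymmetric finite element mesh.

```python
# Latin hypercube propagation of temperature-measurement uncertainty to a heat flux.
import numpy as np
from scipy.stats import qmc, norm

sampler = qmc.LatinHypercube(d=2, seed=7)
u = sampler.random(n=10_000)                       # stratified samples in (0, 1)^2

# Two wall temperatures with measurement uncertainty (mean, std in kelvin):
t_hot  = norm.ppf(u[:, 0], loc=450.0, scale=0.5)
t_cold = norm.ppf(u[:, 1], loc=430.0, scale=0.5)

k, dx = 15.0, 0.004                                # conductivity W/(m K), wall thickness m
q = k * (t_hot - t_cold) / dx                      # simple 1D conduction heat flux, W/m^2

lo, hi = np.percentile(q, [2.5, 97.5])
print(f"heat flux {q.mean():.0f} W/m^2, 95% interval [{lo:.0f}, {hi:.0f}]")
```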
