451

Probabilistic Post-Liquefaction Residual Shear Strength Analyses of Cohesionless Soil Deposits: Application to the Kocaeli (1999) and Duzce (1999) Earthquakes

Lumbantoruan, Partahi Mamora Halomoan 31 October 2005 (has links)
Liquefaction of granular soil deposits can have extremely detrimental effects on the stability of embankment dams, natural soil slopes, and mine tailings. The residual or liquefied shear strength of the liquefiable soils is a very important parameter when evaluating stability and deformation of level and sloping ground. Current procedures for estimating the liquefied shear strength are based on extensive laboratory testing programs or from the back-analysis of failures where liquefaction was involved and in-situ testing data was available. All available procedures utilize deterministic methods for estimation and selection of the liquefied shear strength. Over the past decade, there has been an increasing trend towards analyzing geotechnical problems using probability and reliability. This study presents procedures for assessing the liquefied shear strength of cohesionless soil deposits within a risk-based framework. Probabilistic slope stability procedures using reliability methods and Monte Carlo Simulations are developed to incorporate uncertainties associated with geometrical and material parameters. The probabilistic methods are applied to flow liquefaction case histories from the 1999 Kocaeli/Duzce, Turkey Earthquake, where extensive liquefaction was observed. The methods presented in this paper should aid in making better decisions about the design and rehabilitation of structures constructed of or atop liquefiable soil deposits. / Master of Science
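As a hedged illustration of the risk-based framework described in this abstract, the sketch below estimates a probability of flow failure by Monte Carlo sampling of an infinite-slope factor of safety. The distributions, parameter values, and the infinite-slope model itself are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of Monte Carlo realizations

# Illustrative (hypothetical) input uncertainties:
s_r   = rng.lognormal(mean=np.log(12.0), sigma=0.35, size=n)  # residual shear strength, kPa
gamma = rng.normal(18.5, 0.5, size=n)                         # unit weight, kN/m^3
depth = rng.normal(6.0, 0.6, size=n)                          # depth of sliding surface, m
beta  = np.radians(rng.normal(12.0, 1.0, size=n))             # slope inclination, degrees -> rad

# Infinite-slope driving shear stress and factor of safety
tau_d = gamma * depth * np.sin(beta) * np.cos(beta)
fs = s_r / tau_d

p_failure = np.mean(fs < 1.0)                 # probability that the slope is unstable
beta_rel = (np.mean(fs) - 1.0) / np.std(fs)   # a crude reliability index

print(f"mean FS = {fs.mean():.2f}, P(FS < 1) = {p_failure:.3f}, reliability index ~ {beta_rel:.2f}")
```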
452

Scaling of Steady States in a Simple Driven Three-State Lattice Gas

Thies, Michael 15 September 1998 (has links)
Phase segregated states in a simple three-state stochastic lattice gas are investigated. A two-dimensional finite lattice with periodic boundary conditions is filled with one hole and two oppositely "charged" species of particles, subject to an excluded volume constraint. Starting from a completely disordered initial configuration, a sufficiently large external "electric" field E induces phase segregation by separating the charges into two strips and "trapping" the hole at an interface between them. Focusing on the steady state, the scaling properties of an appropriate order parameter, depending on drive and system size, are investigated by mean-field theory and Monte Carlo methods. Density profiles of the two interfaces in the ordered system are studied with the help of Monte Carlo simulations and are found to scale in the field-dependent variable ε = 2 tanh(E/2), for E ≲ 0.8. For larger values of E, independent approximations of the interfacial profiles, obtained within the framework of mean-field theory, exhibit significant deviations from the Monte Carlo data. Interestingly, the deviations can be reduced significantly by a slight modification of the mean-field theory. / Master of Science
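A minimal Monte Carlo sketch of a driven three-state lattice gas of this kind is given below: a single hole performs biased exchanges with the two charged species under a Metropolis rule. The lattice size, field strength, and hop rates are assumptions for illustration only; the script also evaluates the field-dependent scaling variable ε = 2 tanh(E/2) quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not the thesis's actual settings)
Lx, Ly = 20, 20          # lattice dimensions (field E points along +y)
E = 0.5                  # driving "electric" field strength
steps = 200_000          # attempted hole moves

# Fill the lattice with equal numbers of +1 and -1 charges and a single hole (0)
sites = np.array([+1, -1] * (Lx * Ly // 2), dtype=int)
sites[0] = 0             # replace one particle by the hole
rng.shuffle(sites)
lattice = sites.reshape(Lx, Ly)
hx, hy = map(int, np.argwhere(lattice == 0)[0])

moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # nearest-neighbour hole moves

for _ in range(steps):
    dx, dy = moves[rng.integers(4)]
    nx, ny = (hx + dx) % Lx, (hy + dy) % Ly
    q = lattice[nx, ny]              # charge that would hop into the hole
    # The particle moves by (-dx, -dy); only the component along the field matters.
    # Metropolis rate: min(1, exp(q * E * displacement_along_field))
    if rng.random() < min(1.0, np.exp(q * E * (-dy))):
        lattice[hx, hy], lattice[nx, ny] = q, 0
        hx, hy = nx, ny

# Field-dependent scaling variable used for the interface profiles
eps = 2.0 * np.tanh(E / 2.0)
print("scaling variable eps =", eps)
# Column-averaged charge profile along the field direction
print(np.round(lattice.mean(axis=0), 2))
```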
453

A probabilistic method for the operation of three-phase unbalanced active distribution networks

Mokryani, Geev, Majumdar, A., Pal, B.C. 25 January 2016 (has links)
Yes / This paper proposes a probabilistic multi-objective optimization method for the operation of three-phase distribution networks incorporating active network management (ANM) schemes, including coordinated voltage control and adaptive power factor control. The proposed probabilistic method incorporates detailed modelling of three-phase distribution network components and considers different operational objectives. The method simultaneously minimizes the total energy losses of the lines from the point of view of distribution network operators (DNOs) and maximizes the energy generated by photovoltaic (PV) cells, considering ANM schemes and network constraints. Uncertainties related to the intermittent generation of PVs and to load demands are modelled by probability density functions (PDFs), and the Monte Carlo simulation method is employed to sample from the generated PDFs. The problem is solved using the ε-constraint approach, and the fuzzy satisfying method is used to select the best solution from the Pareto-optimal set. The effectiveness of the proposed probabilistic method is demonstrated on the IEEE 13- and 34-bus test feeders.
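To illustrate the uncertainty-modelling step described above, the sketch below samples PV generation and load from assumed PDFs and propagates them through a toy two-bus loss calculation by Monte Carlo. The Beta/normal parameters and the balanced single-line model are assumptions; they stand in for, but do not reproduce, the paper's three-phase network model and ANM schemes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000  # Monte Carlo samples

# Assumed PDFs for the uncertain inputs (illustrative only)
pv_capacity_kw = 400.0
pv_kw = pv_capacity_kw * rng.beta(2.0, 2.5, size=n)   # PV output, Beta-distributed irradiance proxy
load_kw = rng.normal(500.0, 60.0, size=n)             # aggregate feeder load, normally distributed

# Toy two-bus feeder: net power imported over one line with resistance R at voltage V
V_kv, R_ohm, pf = 11.0, 0.8, 0.95
p_net_kw = np.clip(load_kw - pv_kw, 0.0, None)        # net import (no reverse flow in this toy model)
i_amp = p_net_kw / (np.sqrt(3) * V_kv * pf)           # balanced three-phase current approximation
loss_kw = 3 * i_amp**2 * R_ohm / 1000.0               # I^2 R losses on the line

print(f"expected losses  : {loss_kw.mean():.2f} kW")
print(f"95th percentile  : {np.percentile(loss_kw, 95):.2f} kW")
print(f"P(losses > 2 kW) : {np.mean(loss_kw > 2.0):.3f}")
```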
454

Multiply Robust Weighted Generalized Estimating Equations for Incomplete Longitudinal Binary Data Using Empirical Likelihood / 欠測を含む二値の経時データにおける経験尤度法を用いた多重頑健重み付き一般化推定方程式

Komazaki, Hiroshi 25 March 2024 (has links)
Kyoto University / New degree system, doctorate by dissertation / Doctor of Public Health (Social Health Medicine) / Degree No. 乙第13612号 / Thesis No. 論社医博第18号 / Kyoto University Graduate School of Medicine, Department of Social Health Medicine / (Chief Examiner) Professor Satoshi Morita, Professor Toshiaki Furukawa, Professor Yuichi Imanaka / Qualified under Article 4, Paragraph 2 of the Degree Regulations / Doctor of Public Health / Kyoto University / DFAM
455

Prepayment Modeling in Mortgage Backed Securities : Independent and Strategic Approaches to Prepayment Timing

Andersson, Johanna January 2024 (has links)
Mortgage-Backed Securities (MBS) are a type of security backed by mortgages as the underlying asset. This is achieved through a process called securitization, where specific mortgages are grouped together, separated from the bank’s other assets, and then sold to investors. One of the risks for investors in MBS is mortgage prepayments made by the borrowers of the underlying mortgages. This risk arises from the uncertainty in the cash flows to be distributed among the investors. There is a correlation between falling market interest rates and an increase in prepayments: when market interest rates fall, borrowers have an incentive to refinance their mortgages at lower rates, leading to higher prepayment rates. The Public Securities Association (PSA) model is recognized as a standard benchmark for estimating prepayment rates in MBS. In this paper, we introduce models that generate time points for prepayments and compare how well these models match the PSA model. Some of these models determine the timing of each prepayment event from a Poisson process with exponentially distributed waiting times, while one model employs the Gamma distribution. Additionally, we introduce a strategy in which prepayment is triggered when the market rate falls below the contract rate, and we investigate when it is most beneficial to make a prepayment. The results show that, among the models employing random generation of prepayment events, the Gamma distribution aligns best with the PSA rule. For the strategic approach, our findings suggest that it is most advantageous to make prepayments early in the mortgage term, which also aligns with the most rational behavior.
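The comparison described in this abstract can be sketched as follows: prepayment times are drawn from an exponential (Poisson-process) model and from a Gamma model, and the implied monthly prepayment rates are compared against the standard 100% PSA ramp (annual CPR rising linearly to 6% at month 30, then constant). Pool size, rate parameters, and the crude error measure are illustrative assumptions, not the thesis's calibration.

```python
import numpy as np

rng = np.random.default_rng(7)

term_months = 360
n_loans = 10_000

# 100% PSA benchmark: annual CPR ramps linearly to 6% over 30 months, then stays flat.
months = np.arange(1, term_months + 1)
cpr = 0.06 * np.minimum(months / 30.0, 1.0)
smm = 1.0 - (1.0 - cpr) ** (1.0 / 12.0)     # single monthly mortality implied by the CPR

# Model A: each loan prepays at an exponentially distributed time (Poisson-process arrival).
t_exp = rng.exponential(scale=180.0, size=n_loans)        # assumed mean time of 180 months

# Model B: each loan prepays at a Gamma-distributed time (delayed, ramp-like behaviour).
t_gamma = rng.gamma(shape=2.0, scale=90.0, size=n_loans)  # same mean, different shape

def monthly_rate(times):
    """Fraction of the surviving pool that prepays in each month."""
    counts, _ = np.histogram(np.clip(times, 0, term_months), bins=term_months, range=(0, term_months))
    surviving = n_loans - np.concatenate(([0], np.cumsum(counts)[:-1]))
    return np.divide(counts, surviving, out=np.zeros(term_months), where=surviving > 0)

for name, t in [("exponential", t_exp), ("gamma", t_gamma)]:
    rate = monthly_rate(t)
    err = np.abs(rate[:120] - smm[:120]).mean()   # crude fit to the PSA ramp over 10 years
    print(f"{name:>12}: mean |monthly rate - PSA SMM| = {err:.4f}")
```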
456

Generalized Principal Component Analysis

Solat, Karo 05 June 2018 (has links)
The primary objective of this dissertation is to extend the classical Principal Components Analysis (PCA), aiming to reduce the dimensionality of a large number of Normal interrelated variables, in two directions. The first is to go beyond the static (contemporaneous or synchronous) covariance matrix among these interrelated variables to include certain forms of temporal (over time) dependence. The second direction takes the form of extending the PCA model beyond the Normal multivariate distribution to the Elliptically Symmetric family of distributions, which includes the Normal, the Student's t, the Laplace and the Pearson type II distributions as special cases. The result of these extensions is called Generalized Principal Component Analysis (GPCA). The GPCA is illustrated using both Monte Carlo simulations and an empirical study, in an attempt to demonstrate the enhanced reliability of these more general factor models in the context of out-of-sample forecasting. The empirical study examines the predictive capacity of the GPCA method in the context of Exchange Rate Forecasting, showing how the GPCA method dominates forecasts based on existing standard methods, including the random walk models, with or without including macroeconomic fundamentals. / Ph. D. / Factor models are employed to capture the hidden factors behind the movement among a set of variables. They use the variation and co-variation between these variables to construct a smaller number of latent variables that can explain the variation in the data at hand. Principal component analysis (PCA) is the most popular among these factor models. I have developed new factor models that reduce the dimensionality of a large set of data by extracting a small number of independent/latent factors which represent a large proportion of the variability in the particular data set. These factor models, called Generalized Principal Component Analysis (GPCA), are extensions of the classical principal component analysis (PCA) and can account for both contemporaneous and temporal dependence based on non-Gaussian multivariate distributions. Using Monte Carlo simulations along with an empirical study, I demonstrate the enhanced reliability of my methodology in the context of out-of-sample forecasting. In the empirical study, I examine the predictive power of the GPCA method in the context of “Exchange Rate Forecasting”. I find that the GPCA method dominates forecasts based on existing standard methods as well as random walk models, with or without including macroeconomic fundamentals.
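For reference, the sketch below shows the classical PCA baseline that the GPCA extends: an eigendecomposition of the sample covariance of data drawn from a multivariate Student's t distribution, one member of the elliptically symmetric family mentioned above. Dimensions, degrees of freedom, and the number of retained components are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate n observations of p correlated variables from a multivariate Student's t
n, p, dof = 1_000, 10, 5
A = rng.normal(size=(p, p))
sigma = A @ A.T / p + np.eye(p)                 # an arbitrary positive-definite scale matrix
z = rng.multivariate_normal(np.zeros(p), sigma, size=n)
chi2 = rng.chisquare(dof, size=n)
x = z / np.sqrt(chi2 / dof)[:, None]            # multivariate t_dof samples

# Classical PCA: eigendecomposition of the sample covariance matrix
xc = x - x.mean(axis=0)
cov = xc.T @ xc / (n - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 3                                           # number of retained principal components
scores = xc @ eigvecs[:, :k]                    # factor scores (n x k)
explained = eigvals[:k].sum() / eigvals.sum()
print(f"variance explained by the first {k} components: {explained:.1%}")
```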
457

Measurement Invariance and Sensitivity of Delta Fit Indexes in Non-Normal Data: A Monte Carlo Simulation Study

Yu, Meixi 01 January 2024 (has links) (PDF)
The concept of measurement invariance is essential in ensuring that psychological and educational tests are interpreted consistently across diverse groups. This dissertation investigated the practical challenges associated with measurement invariance, specifically how measurement invariance delta fit indexes are affected by non-normal data. Non-normal data distributions are common in real-world scenarios, yet many statistical methods and measurement invariance delta fit indexes are based on the assumption of normally distributed data. This raises concerns about the accuracy and reliability of conclusions drawn from such analyses. The primary objective of this research is to examine how commonly used delta fit indexes of measurement invariance respond under conditions of non-normality. The present research builds on Cao and Liang’s (2022a) study to test the sensitivities of a series of delta fit indexes and further scrutinizes the role of non-normal data distributions. A series of simulation studies was conducted in which data sets with varying degrees of skewness and kurtosis were generated. These data sets were then examined by multi-group confirmatory factor analysis (MGCFA) using the Satorra-Bentler scaled chi-square difference test, a method specifically designed to adjust for non-normality. The performance of delta fit indexes such as the Delta Comparative Fit Index (∆CFI), the Delta Standardized Root Mean Square Residual (∆SRMR), and the Delta Root Mean Square Error of Approximation (∆RMSEA) was assessed. These findings have significant implications for professionals and scholars in psychology and education. They provide constructive information on key measurement-related aspects of research and practice in these fields, contributing to the broader discussion on measurement invariance by highlighting challenges and offering solutions for assessing model fit in non-normal data scenarios.
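The delta fit indexes discussed above are differences in fit between two nested invariance models. As a hedged illustration, the sketch below computes CFI and RMSEA from hypothetical chi-square results for a configural and a metric model and forms ∆CFI and ∆RMSEA; the numbers are invented, and the ∆CFI ≤ .01 cutoff is cited only as a conventional rule of thumb, not as a result of this dissertation.

```python
import math

def cfi(chi2, df, chi2_base, df_base):
    """Comparative Fit Index from model and baseline (independence) chi-square values."""
    d_model = max(chi2 - df, 0.0)
    d_base = max(chi2_base - df_base, 0.0)
    return 1.0 - d_model / max(d_base, d_model) if max(d_base, d_model) > 0 else 1.0

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation for sample size n."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

n = 500                            # hypothetical total sample size
chi2_base, df_base = 2400.0, 66    # hypothetical baseline (independence) model fit

# Hypothetical fits of two nested invariance models (configural vs. metric)
configural = dict(chi2=180.0, df=96)
metric     = dict(chi2=205.0, df=104)

delta_cfi = cfi(**metric, chi2_base=chi2_base, df_base=df_base) - \
            cfi(**configural, chi2_base=chi2_base, df_base=df_base)
delta_rmsea = rmsea(**metric, n=n) - rmsea(**configural, n=n)

print(f"dCFI = {delta_cfi:+.4f}, dRMSEA = {delta_rmsea:+.4f}")
print("metric invariance questionable" if delta_cfi < -0.01 else "dCFI within the conventional .01 cutoff")
```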
458

Algorithmic studies of compact lattice QED with Wilson fermions

Zverev, Nikolai 18 December 2001 (has links)
Wir untersuchen numerisch und teilweise analytisch die kompakte Quantenelektrodynamik auf dem Gitter mit Wilson-Fermionen. Dabei konzentrieren wir uns auf zwei wesentliche Teilprobleme der Theorie: der Einfluss von Eichfeld-Moden mit verschwindendem Impuls in der Coulomb-Phase und die Effizienz von verschiedenen Monte-Carlo-Algorithmen unter Berücksichtigung dynamischer Fermionen. Wir zeigen, dass der Einfluss der Null-Impuls-Moden auf die eichabhängigen Gitter-Observablen wie Photon- und Fermion-Korrelatoren nahe der kritischen chiralen Grenzlinie innerhalb der Coulomb Phase zu einem Verhalten führt, das vom naiv erwarteten gitterstörungstheoretischen Verhalten abweicht. Diese Moden sind auch für die Abschirmung des kritischen Verhaltens der eichinvarianten Fermion-Observablen nahe der chiralen Grenzlinie verantwortlich. Eine Entfernung dieser Null-Impuls-Moden aus den Eichfeld-Konfigurationen führt innerhalb der Coulomb-Phase zum störungstheoretisch erwarteten Verhalten der eichabhängigen Observablen. Die kritischen Eigenschaften der eichinvarianten Fermion-Observablen in der Coulomb-Phase werden nach dem Beseitigen der Null-Impuls-Moden sichtbar. Der kritische Hopping-Parameter, den man aus den invarianten Fermion-Observablen erhält, stimmt gut mit demjenigen überein, der aus den eichabhängigen Observablen extrahiert werden kann. Wir führen den zweistufigen Multiboson-Algorithmus für numerische Untersuchungen im U(1)-Gittermodell mit einer geraden Anzahl von dynamischen Fermion-Flavour-Freiheitsgraden ein. Wir diskutieren die geeignete Wahl der technischen Parameter sowohl für den zweistufigen Multiboson-Algorithmus als auch für den hybriden Monte-Carlo-Algorithmus. Wir geben theoretische Abschätzungen für die Effizienz dieser Simulationsmethoden. Wir zeigen numerisch und theoretisch, daß der zweistufige Multiboson-Algorithmus eine gute Alternative darstellt und zumindestens mit der hybriden Monte-Carlo-Methode konkurrieren kann. Wir argumentieren, daß eine weitere Verbesserung der Effizienz des zweistufigen Multiboson-Algorithmus durch eine Vergrößerung der Zahl lokaler Update-Schleifen und auch durch die Reduktion der Ordnungen der ersten und zweiten Polynome zu Lasten des sogenannten 'Reweighting' erzielt werden kann. / We investigate numerically and in part analytically the compact lattice quantum electrodynamics with Wilson fermions. We studied the following particular tasks of the theory: the problem of the zero-momentum gauge field modes in the Coulomb phase and the performance of different Monte Carlo algorithms in the presence of dynamical fermions. We show that the influence of the zero-momentum modes on the gauge dependent lattice observables like photon and fermion correlators within the Coulomb phase leads to a behaviour of these observables different from standard perturbation theory. These modes are responsible also for the screening of the critical behaviour of the gauge invariant fermion values near the chiral limit line. Within the Coulomb phase the elimination of these zero-momentum modes from gauge configurations leads to the perturbatively expected behaviour of gauge dependent observables. The critical properties of gauge invariant fermion observables upon removing the zero-momentum modes are restored. The critical hopping-parameter obtained from the invariant fermion observables coincides with that extracted from gauge dependent values. 
We implement the two-step multiboson algorithm for numerical investigations in the U(1) lattice model with an even number of dynamical Wilson fermion flavours. We discuss an appropriate choice of technical parameters for both the two-step multiboson and the hybrid Monte Carlo algorithms, and we give theoretical estimates of the performance of these simulation methods. We show both numerically and theoretically that the two-step multiboson algorithm is a good alternative and at least competitive with the hybrid Monte Carlo method. We argue that the efficiency of the two-step multiboson algorithm can be improved further by increasing the number of local update sweeps and by decreasing the orders of the first and second polynomials, at the cost of the reweighting step.
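As a much-reduced illustration of the underlying lattice setup only, the sketch below runs plain Metropolis updates of a two-dimensional compact U(1) pure-gauge (quenched) system with the Wilson plaquette action. It omits the dynamical Wilson fermion determinant entirely, so it shows neither the two-step multiboson nor the hybrid Monte Carlo algorithm; lattice size, coupling, and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

# Illustrative parameters (quenched, 2D, pure gauge -- no Wilson fermions)
L, beta, sweeps, step = 12, 1.0, 200, 0.5
theta = rng.uniform(-np.pi, np.pi, size=(2, L, L))   # link angles theta[mu, x, y]

def plaquette(x, y):
    """Plaquette angle at site (x, y) with periodic boundaries."""
    return (theta[0, x, y] + theta[1, (x + 1) % L, y]
            - theta[0, x, (y + 1) % L] - theta[1, x, y])

def local_action(mu, x, y):
    """Sum of beta*(1 - cos) over the two plaquettes containing link (mu, x, y)."""
    if mu == 0:   # x-direction link sits in plaquettes at (x, y) and (x, y-1)
        plaqs = [plaquette(x, y), plaquette(x, (y - 1) % L)]
    else:         # y-direction link sits in plaquettes at (x, y) and (x-1, y)
        plaqs = [plaquette(x, y), plaquette((x - 1) % L, y)]
    return beta * sum(1.0 - np.cos(p) for p in plaqs)

for _ in range(sweeps):
    for mu in range(2):
        for x in range(L):
            for y in range(L):
                old = theta[mu, x, y]
                s_old = local_action(mu, x, y)
                theta[mu, x, y] = old + rng.uniform(-step, step)
                if rng.random() >= np.exp(-(local_action(mu, x, y) - s_old)):
                    theta[mu, x, y] = old        # reject: restore the old link

avg_plaq = np.mean([np.cos(plaquette(x, y)) for x in range(L) for y in range(L)])
print(f"average plaquette <cos(theta_P)> at beta={beta}: {avg_plaq:.3f}")
```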
459

[en] PROBABILISTIC LOAD FLOW VIA MONTE CARLO SIMULATION AND CROSS-ENTROPY METHOD / [pt] FLUXO DE POTÊNCIA PROBABILÍSTICO VIA SIMULAÇÃO MONTE CARLO E MÉTODO DA ENTROPIA CRUZADA

ANDRE MILHORANCE DE CASTRO 12 February 2019 (has links)
[pt] Em planejamento e operação de sistemas de energia elétrica, é necessário realizar diversas avaliações utilizando o algoritmo de fluxo de potência, para obter e monitorar o ponto de operação da rede em estudo. Em sua utilização determinística, devem ser especificados valores de geração e níveis de carga por barra, bem como considerar uma configuração especifica da rede elétrica. Existe, porém, uma restrição evidente em se trabalhar com algoritmo de fluxo de potência determinístico: não há qualquer percepção do impacto gerado por incertezas nas variáveis de entrada que o algoritmo utiliza. O algoritmo de fluxo de potência probabilístico (FPP) visa extrapolar as limitações impostas pelo uso da ferramenta convencional determinística, permitindo a consideração das incertezas de entrada. Obtém-se maior sensibilidade na avaliação dos resultados, visto que possíveis regiões de operação são mais claramente examinadas. Consequentemente, estima-se o risco do sistema funcionar fora de suas condições operativas nominais. Essa dissertação propõe uma metodologia baseada na simulação Monte Carlo (SMC) utilizando técnicas de amostragem por importância via o método de entropia cruzada. Índices de risco para eventos selecionados (e.g., sobrecargas em equipamentos de transmissão) são avaliados, mantendo-se a precisão e flexibilidade permitidas pela SMC convencional, porém em tempo computacional muito reduzido. Ao contrário das técnicas analíticas concebidas para solução do FPP, que visam primordialmente à elaboração de curvas de densidade de probabilidade para as variáveis de saída (fluxos, etc.) e sempre necessitam ter a precisão obtida comparada à SMC, o método proposto avalia somente as áreas das caudas dessas densidades, obtendo resultados com maior exatidão nas regiões de interesse do ponto de vista do risco operativo. O método proposto é aplicado nos sistemas IEEE 14 barras, IEEE RTS e IEEE 118 barras, sendo os resultados obtidos amplamente discutidos. Em todos os casos, há claros ganhos de desempenho computacional, mantendo-se a precisão, quando comparados à SMC convencional. As possíveis aplicações do método e suas derivações futuras também fazem parte da dissertação. / [en] In planning and operation of electric energy systems, it is necessary to perform several evaluations using the power flow algorithm to obtain and monitor the operating point of the network under study. In its deterministic use, generation values and load levels per bus must be specified, as well as a specific configuration of the power network. There is, however, an obvious constraint in running a deterministic power flow tool: there is no perception of the impact produced by uncertainties in the input variables used by the conventional algorithm. The probabilistic load flow (PLF) algorithm aims to overcome the limitations imposed by the use of the deterministic conventional tool, allowing the consideration of input uncertainties. Superior sensitivity is obtained in the evaluation of results, as possible regions of operation are more clearly examined. Consequently, the risk of the system operating outside its nominal conditions is duly estimated. This dissertation proposes a methodology based on Monte Carlo simulation (MCS) using importance sampling techniques via the cross-entropy method. Risk indices for selected events (e.g., overloads on transmission equipment) are evaluated, maintaining the accuracy and flexibility afforded by conventional MCS, but in much less computational time.
Unlike the analytical techniques devised to solve the PLF, which primarily aim at building probability density curves for the output variables (flows, etc.) and always need to have their accuracy checked against MCS, the proposed method evaluates only the tail areas of these densities, obtaining more accurate results in the regions of interest from the operational risk point of view. The proposed method is applied to the IEEE 14-bus, IEEE RTS, and IEEE 118-bus systems, and the results are widely discussed. In all cases, there are clear gains in computational performance while maintaining accuracy when compared to conventional MCS. The possible applications of the method and future developments are also part of the dissertation.
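The core idea above, cross-entropy-optimised importance sampling that concentrates Monte Carlo effort on the tail (risk) region, can be illustrated on a toy problem: estimating a small exceedance probability for a sum of normal variables, a stand-in for a line-overload event. The Gaussian model, mean-only tilting, and all thresholds are illustrative assumptions rather than the dissertation's power-system formulation.

```python
import math
import numpy as np

rng = np.random.default_rng(5)

d, gamma = 10, 12.0                 # dimension and "overload" threshold for S = sum of d N(0,1) inputs
n_ce, n_final, rho = 2_000, 20_000, 0.1

def lik_ratio(x, v):
    """Likelihood ratio f_nominal(x) / f_tilted(x) for componentwise N(0,1) vs N(v,1)."""
    return np.exp(-(x @ v) + 0.5 * (v @ v))

# Cross-entropy iterations: push the sampling mean v toward the rare-event region
v = np.zeros(d)
for _ in range(50):
    x = rng.normal(size=(n_ce, d)) + v
    s = x.sum(axis=1)
    gamma_t = min(gamma, np.quantile(s, 1.0 - rho))   # intermediate (elite) level
    elite = s >= gamma_t
    w = lik_ratio(x, v)
    v = (w[elite, None] * x[elite]).sum(axis=0) / w[elite].sum()
    if gamma_t >= gamma:
        break

# Final importance-sampling estimate of the tail probability
x = rng.normal(size=(n_final, d)) + v
s = x.sum(axis=1)
w = lik_ratio(x, v)
p_is = np.mean((s >= gamma) * w)

p_exact = 0.5 * math.erfc((gamma / math.sqrt(d)) / math.sqrt(2))   # closed form for this toy problem
p_crude = np.mean(rng.normal(size=(n_final, d)).sum(axis=1) >= gamma)
print(f"CE importance sampling: {p_is:.2e}   exact: {p_exact:.2e}   crude MC ({n_final} samples): {p_crude:.2e}")
```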
460

Nanoscale pattern formation on ion-sputtered surfaces / Musterbildung auf der Nanometerskala an ion-gesputterten Oberflächen

Yasseri, Taha 21 January 2010 (has links)
No description available.
