  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Parameter study for WinDAM using DAKOTA

Bhat, Ashwin Ramachandra January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Mitchell L. Neilsen / Windows™ Dam Analysis Modules (WinDAM) is a set of modular software components that can be used to analyze overtopped earthen embankments and internal erosion of embankment dams. Sandia National Laboratories' DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides built-in algorithms for iterative analysis, including uncertainty quantification with sampling and parameter study methods. This software integrates the DAKOTA suite with WinDAM: it provides a user interface for entering and manipulating parameters, performs centered and multi-dimensional parameter studies over a wide range of WinDAM properties, and returns detailed output on the changes caused by these variations.
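The centered parameter study that DAKOTA automates can be sketched as follows: each parameter is stepped about a baseline value while the others are held fixed, and the model is evaluated at each point. The model function and parameter names below are invented stand-ins, not the actual WinDAM interface.

```python
def centered_parameter_study(model, baseline, step, n_steps):
    """Step each parameter about the baseline, holding the others fixed."""
    results = {}
    for name in baseline:
        runs = []
        for k in range(-n_steps, n_steps + 1):
            point = dict(baseline)
            point[name] = baseline[name] + k * step[name]
            runs.append((point[name], model(point)))
        results[name] = runs
    return results

# Toy stand-in for an erosion model: the response depends on two inputs.
def toy_model(p):
    return p["erodibility"] * p["head"] ** 2

study = centered_parameter_study(
    toy_model,
    baseline={"erodibility": 1.0, "head": 2.0},
    step={"erodibility": 0.1, "head": 0.5},
    n_steps=2,
)
```

Each entry of `study` is then a one-dimensional slice through parameter space, from which sensitivities about the baseline can be read off.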

Bayesian extreme quantile regression for hidden Markov models

Koutsourelis, Antonios January 2012 (has links)
The main contribution of this thesis is the introduction of Bayesian quantile regression for hidden Markov models, especially for extreme quantile regression analysis, as there has been little research on inferring conditional quantiles for hidden Markov models under a Bayesian approach. The first objective is to compare Bayesian extreme quantile regression with classical extreme quantile regression, with the help of simulated data generated by three specific models, which differ only in the error term's distribution. We also investigate whether and how the error term's distribution affects Bayesian extreme quantile regression, in terms of parameter estimates and confidence intervals. Bayesian extreme quantile regression is performed by implementing a Metropolis-Hastings algorithm to update the parameters, while classical extreme quantile regression is performed using linear programming. The same analysis and comparison are also performed on a real data set. The results provide strong evidence that our method can be improved by combining MCMC algorithms and linear programming, in order to obtain better parameter and confidence interval estimates. After improving our method for Bayesian extreme quantile regression, we extend it to hidden Markov models. First, we assume a discrete-time finite state-space hidden Markov model, where the distribution associated with each hidden state is (a) a Normal distribution or (b) an asymmetric Laplace distribution. Our aim is to explore the number of hidden states that describe the extreme quantiles of our data sets and to check whether the distribution associated with each hidden state affects our estimates. Additionally, we explore whether there are structural changes (break-points), using break-point hidden Markov models. To perform this analysis we implement two new MCMC algorithms. The first updates the parameters and the hidden states using a Forward-Backward algorithm and Gibbs sampling (when a Normal distribution is assumed); the second uses a Forward-Backward algorithm and a mixture of Gibbs and Metropolis-Hastings sampling (when an asymmetric Laplace distribution is assumed). Finally, we consider hidden Markov models where the hidden states (latent variables) are continuous. For this discrete-time continuous state-space hidden Markov model we implement a method that uses linear programming together with the Kalman filter (and Kalman smoother). Our methods are used to analyze real interest rates, with hidden states representing different financial regimes. We show that our methods perform very well in parameter estimation as well as in hidden-state and break-point estimation, which is very useful for real-life applications.
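The Metropolis-Hastings idea underlying the abstract above can be sketched in miniature, using the standard asymmetric-Laplace working likelihood that makes quantile regression amenable to MCMC. The toy data, the flat prior, and the tuning constants below are illustrative assumptions, not the thesis's actual models or algorithms.

```python
import math
import random

random.seed(1)

def check_loss(u, tau):
    # Quantile check (pinball) function: rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (1.0 if u < 0 else 0.0))

def log_likelihood(beta, data, tau):
    # Working log-likelihood from an asymmetric Laplace error model
    return -sum(check_loss(y - beta[0] - beta[1] * x, tau) for x, y in data)

def metropolis_hastings(data, tau, n_iter=5000, scale=0.2):
    beta = [0.0, 0.0]
    ll = log_likelihood(beta, data, tau)
    samples = []
    for _ in range(n_iter):
        proposal = [b + random.gauss(0.0, scale) for b in beta]
        ll_prop = log_likelihood(proposal, data, tau)
        # Accept/reject under a flat prior on the coefficients
        if math.log(random.random()) < ll_prop - ll:
            beta, ll = proposal, ll_prop
        samples.append(list(beta))
    return samples

# Toy data around y = 1 + 2x with small noise (purely illustrative)
data = [(i / 10.0, 1.0 + 2.0 * (i / 10.0) + random.gauss(0.0, 0.1))
        for i in range(30)]
samples = metropolis_hastings(data, tau=0.9)
burned = samples[len(samples) // 2:]
slope = sum(s[1] for s in burned) / len(burned)
```

The posterior mean of the slope, after burn-in, should sit close to the true slope of the 0.9 conditional quantile line.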

Development, validation, and application of new relativistic methods for all-electron unrestricted two-component calculations of EPR parameters

Malkin Ondik, Irina January 2006 (has links) (PDF)
In this work we have developed a back-transformation method within the Douglas-Kroll-Hess (DKH) framework, which simplifies the picture-change-consistent transformation of first-order property operators in the DKH approach and makes the implementation feasible. This has enabled us to implement the first all-electron scalar relativistic calculations of hyperfine coupling tensors at the DKH2 level. Furthermore, we have presented a general relativistic two-component DFT approach for unrestricted calculations of electronic g-tensors, based on the DKH Hamiltonian. Additionally, we have derived the expressions for the evaluation of hyperfine structures and a two-component unrestricted treatment of the g-tensor within the Resolution-of-Identity Dirac-Kohn-Sham method developed by Stanislav Komorovsky and Michal Repisky in collaboration with other members of the group of V. G. Malkin. All these approaches have been extensively validated.

Estimation of polychoric correlation with non-normal latent variables.

January 1987 (has links)
by Ming-long Lam. / Thesis (M.Ph.)--Chinese University of Hong Kong, 1987. / Bibliography: leaves 41-43.

Multilevel analysis of structural equation models.

January 1991 (has links)
by Linda Hoi-ying Yau. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1991. / Includes bibliographical references. / Contents: Chapter 1, Preliminary (1.1 Introduction, p.1; 1.2 Notations, p.3); Chapter 2, Multilevel Analysis of Structural Equation Models with Multivariate Normal Distribution (2.1 The Multilevel Structural Equation Model, p.4; 2.2 First Stage Estimation, p.5; 2.3 Second Stage Estimation of Structural Parameters, p.10); Chapter 3, Generalization to Arbitrary and Elliptical Distributions (3.1 Asymptotically Distribution-Free Estimation, p.25; 3.2 Elliptical Distribution Estimation, p.30); Chapter 4, Artificial Examples (4.1 Examples on Multivariate Normal Distribution Estimation, p.34; 4.2 Examples on Elliptical Distribution Estimation, p.40; 4.3 Findings and Summary, p.42); Chapter 5, Conclusion and Discussion, p.44; References, p.47; Figure 1, p.49; Appendices, p.50; Tables, p.59.

Extraction of superelastic parameter values from instrumented indentation data

Roberto-Pereira, Francisco Fernando January 2019 (has links)
Interest in superelastic (and shape memory) materials continues to rise, and there is a strong incentive to develop techniques for monitoring their superelastic characteristics. This is conventionally done via uniaxial testing, but there are many advantages to obtaining these characteristics (in the form of parameter values in a constitutive law) via indentation testing. Specimens can then be small, require minimal preparation and be obtainable from components in service. Interrogation of small volumes also allows mapping of properties over a surface. On the other hand, the tested volume must be large enough for its response to be representative of the material's behaviour. Precisely the same arguments apply to more "mainstream" mechanical properties, such as yielding and work hardening characteristics. Indeed, there has been considerable progress in that area recently, using FEM simulation to predict indentation outcomes, evaluating the "goodness of fit" for particular sets of parameter values and converging on a best-fit combination. A similar approach can be used to obtain superelastic parameters, but little work has been done hitherto on sensitivities, uniqueness characteristics or optimal methodologies, and the procedures are complicated by limitations of the constitutive laws in current use. The current work presents a comprehensive examination of the issues involved, using experimental (uniaxial and indentation) data for a NiTi shape memory alloy. It was found that it is possible to obtain the superelastic parameter values using a single indenter shape (spherical). Information is also presented on sensitivities and the probable reliability of parameters obtained in this way for an unknown material.
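The converge-on-best-fit approach described above can be sketched as follows. A forward model predicts an indentation response for trial parameter values, a misfit function scores agreement with measured data, and a search converges on the best-fit combination. The closed-form load-displacement law here is an invented stand-in for an FEM simulation, and the grid search stands in for more sophisticated convergence algorithms.

```python
def forward_model(params, depths):
    # Toy load-displacement law P = C * h**m, a stand-in for an FEM run
    C, m = params
    return [C * h ** m for h in depths]

def misfit(params, depths, measured):
    # "Goodness of fit": sum of squared differences from the measured curve
    predicted = forward_model(params, depths)
    return sum((p - q) ** 2 for p, q in zip(predicted, measured))

def grid_search(depths, measured, C_vals, m_vals):
    # Converge on the best-fit combination over a candidate grid
    return min(((C, m) for C in C_vals for m in m_vals),
               key=lambda params: misfit(params, depths, measured))

depths = [0.1 * i for i in range(1, 11)]
measured = forward_model((3.0, 1.5), depths)    # synthetic "experiment"
C_vals = [2.5 + 0.1 * i for i in range(11)]     # candidate stiffness constants
m_vals = [1.3 + 0.05 * i for i in range(9)]     # candidate exponents
best = grid_search(depths, measured, C_vals, m_vals)
```

With synthetic data the search recovers the generating parameters exactly; with real data, the sharpness of the misfit minimum is what determines the sensitivity and uniqueness issues the abstract raises.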

Parameter identification in civil structures.

Costa, Adriane 18 May 2006 (has links)
The parameter identification problem consists of determining parameter values that minimize the difference between measured and calculated values of some variables. Naturally, this identification is performed for parameters whose values carry some degree of uncertainty. In this work the main mathematical concepts and fundamentals related to back analysis are presented. A procedure for parameter identification with a consistent mathematical basis is developed and applied to practical engineering problems. The Hudvudsta tunnel and the Machadinho dam are studied to identify parameters related to loads or material physical properties, using hierarchical models to represent the structures. The objective functions are defined with the main identification criteria and minimized with Newton's methods.
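The minimization described above can be sketched with a one-parameter toy problem: a parameter is sought that minimizes the squared difference between measured and calculated values, using Newton's method on the objective. The "calculated" model and the finite-difference Newton iteration below are illustrative assumptions, not the thesis's formulation.

```python
def calculated(E, loads):
    # Toy model: displacement = load / E (E plays the role of a stiffness)
    return [q / E for q in loads]

def objective(E, loads, measured):
    # Squared difference between calculated and measured displacements
    return sum((c - m) ** 2 for c, m in zip(calculated(E, loads), measured))

def newton_minimize(f, x0, h=1e-5, tol=1e-8, max_iter=100):
    # Newton's method on f'(x) = 0, with finite-difference derivatives
    x = x0
    for _ in range(max_iter):
        d1 = (f(x + h) - f(x - h)) / (2 * h)
        d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2
        step = d1 / d2
        x -= step
        if abs(step) < tol:
            break
    return x

loads = [1.0, 2.0, 3.0, 4.0]
measured = [q / 2.0 for q in loads]   # synthetic measurements with E = 2.0
E_hat = newton_minimize(lambda E: objective(E, loads, measured), x0=1.0)
```

Starting from an uncertain initial guess, the iteration recovers the stiffness that generated the synthetic measurements.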

On the stability of sequential Monte Carlo methods for parameter estimation

Kuhlenschmidt, Bernd January 2015 (has links)
No description available.

Informing the use of Hyper-Parameter Optimization Through Meta-Learning

Sanders, Samantha Corinne 01 June 2017 (has links)
One of the challenges of data mining is finding hyper-parameters for a learning algorithm that will produce the best model for a given dataset. Hyper-parameter optimization automates this process, but it can still take significant time. It has been found that hyper-parameter optimization does not always yield models significantly better than those induced with default hyper-parameters, yet no systematic analysis of the role of hyper-parameter optimization in machine learning has been conducted. We propose the use of meta-learning to inform the decision to optimize hyper-parameters, based on whether default hyper-parameter performance can be surpassed in a given amount of time. We will build a base of meta-knowledge, through a series of experiments, from which to build predictive models that assist in this decision.
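The proposed decision process could look roughly like this: dataset meta-features are matched against past experiments (the meta-knowledge base), and a nearest-neighbour vote predicts whether hyper-parameter optimization is likely to beat default settings within the time budget. The meta-features, records and nearest-neighbour rule below are assumptions for illustration; the abstract does not fix a particular learner.

```python
def nearest_neighbour_predict(meta_knowledge, query, k=3):
    """meta_knowledge: list of (features, beat_defaults: bool) records."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    ranked = sorted(meta_knowledge, key=lambda rec: dist(rec[0], query))
    votes = [beat for _, beat in ranked[:k]]
    return votes.count(True) > k // 2  # majority vote

# Each record: (n_instances, n_features, class_entropy) -> did HPO win?
# All values are invented for illustration.
meta_knowledge = [
    ((1000, 10, 0.90), True),
    ((1200, 12, 0.80), True),
    ((150, 4, 0.30), False),
    ((200, 5, 0.20), False),
    ((5000, 40, 0.95), True),
]

# Small, low-entropy dataset: past experience suggests defaults suffice.
should_optimize = nearest_neighbour_predict(meta_knowledge, (180, 5, 0.25))
```

A practical system would use many more meta-features (and distance scaling), but the shape of the decision is the same: consult the meta-knowledge before spending the optimization budget.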

Model based estimation of parameters of spatial populations from probability samples

Weaver, George W. 02 October 1996 (has links)
Many ecological populations can be interpreted as response surfaces; the spatial patterns of the population vary in response to changes in the spatial patterns of environmental explanatory variables. Collection of a probability sample from the population provides unbiased estimates of the population parameters using design based estimation. When information is available for the environmental explanatory variables, model based procedures are available that provide more precise estimates of population parameters in some cases. In practice, not all of these environmental explanatory variables will be known. When the spatial coordinates of the population units are available, a spatial model can be used as a surrogate for the unknown, spatially patterned explanatory variables. Design based and model based procedures are compared for estimating parameters of the population of Acid Neutralizing Capacity (ANC) of lakes in the Adirondack Mountains in New York. Results from the analysis of this population are used to elucidate some general principles for model based estimation of parameters of spatial populations. Results indicate that model based estimates of population parameters are more precise than design based estimates in some cases. In addition, including spatial information as a surrogate for spatially patterned missing covariates improves the precision of the estimates in some cases, the degree of improvement depending upon the model chosen to represent the spatial pattern. When the probability sample selected from the spatial population is a stratified sample, differences in stratum variances need to be accounted for if residual spatial covariance estimation is desired for the entire population. This can be accomplished by scaling the residuals by their estimated stratum standard deviation functions and calculating the residual covariance from these scaled residuals. Results here demonstrate that the form of scaling influences both the estimated strength of the residual correlation and the estimated correlation range. / Graduation date: 1997
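The scaling step described above can be sketched as follows: residuals from a stratified sample are divided by an estimated per-stratum standard deviation before the pooled residual covariance or correlation is computed, so that unequal stratum variances do not distort the spatial correlation estimate. The stratum labels and residual values are invented, and a plain per-stratum standard deviation stands in for the thesis's estimated standard deviation functions.

```python
import statistics

def scale_residuals_by_stratum(residuals, strata):
    """residuals: list of floats; strata: parallel list of stratum labels."""
    # Estimate a standard deviation per stratum from that stratum's residuals
    sd = {s: statistics.pstdev([r for r, t in zip(residuals, strata) if t == s])
          for s in set(strata)}
    # Divide each residual by its stratum's standard deviation
    return [r / sd[s] for r, s in zip(residuals, strata)]

# Stratum A residuals are four times as variable as stratum B's.
residuals = [2.0, -2.0, 2.0, -2.0, 0.5, -0.5, 0.5, -0.5]
strata = ["A", "A", "A", "A", "B", "B", "B", "B"]
scaled = scale_residuals_by_stratum(residuals, strata)
```

After scaling, both strata contribute residuals on a common unit-variance scale, so a covariance computed across strata reflects spatial structure rather than the variance difference.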
