About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Implementation of one surface fitting algorithm for randomly scattered scanning data

Guo, Xi January 2000 (has links)
No description available.
2

On testing for the Cox model using resampling methods

Fang, Jing, 方婧 January 2007 (has links)
Published or final version / Abstract / Statistics and Actuarial Science / Master of Philosophy
3

On testing for the Cox model using resampling methods

Fang, Jing, January 2007 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2008. / Also available in print.
4

Modeling Random Events

Quintos Lima, Alejandra January 2022 (has links)
In this thesis, we address two types of modeling of random events. The first, contained in Chapters 2 and 3, is related to the modeling of dependent stopping times. In Chapter 2, we use a modified Cox construction, along with a modification of the bivariate exponential introduced by Marshall & Olkin (1967), to create a family of stopping times which are not necessarily conditionally independent, allowing for a positive probability that they are equal. We also present a series of results exploring the special properties of this construction, along with some generalizations and possible applications.

In Chapter 3, we present a detailed application of our model to credit risk theory. We propose a new measure of systemic risk that is consistent with the economic theories relating to the causes of financial market failures and can be estimated using existing hazard rate methodologies; hence, it is simple to estimate and interpret. We do this by characterizing the probability of a market failure, defined as the default of two or more globally systemically important banks (G-SIBs) in a small interval of time. We derive various theorems related to market failure probabilities, such as the probability of a catastrophic market failure, the impact of increasing the number of G-SIBs in an economy, and the impact of changing the initial conditions of the economy's state variables.

The second type of random event we focus on is the failure of a group in the context of microlending, a loan made by a bank to a small group of people without credit histories. Since the creation of this mechanism by Muhammad Yunus, it has received a fair amount of academic attention; however, one issue not yet addressed in full detail is the size of the group. In Chapter 4, we propose a model with interacting forces to find the optimal group size, where "optimal" means the group size that minimizes the probability of default of the group. Ultimately, we show that Muhammad Yunus's original choice of a group size of five people is, under the right (and, we believe, reasonable) hypotheses, either close to optimal or at times exactly optimal, i.e., the optimal group size is indeed five people.
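For illustration, the positive probability of simultaneous default mentioned in the abstract is the hallmark of the Marshall & Olkin (1967) construction it cites: a common exponential shock can trigger both stopping times at once. The sketch below simulates that classical construction only; the rate parameters are arbitrary placeholders and this is not the thesis's modified Cox model.

    import numpy as np

    rng = np.random.default_rng(0)

    # Marshall-Olkin bivariate exponential: two idiosyncratic shocks and one
    # common shock; each stopping time is the first shock that hits it.
    lam1, lam2, lam12 = 0.5, 0.7, 0.3   # illustrative rates (assumed, not from the thesis)
    n = 100_000

    e1  = rng.exponential(1 / lam1, n)    # shock hitting only the first time
    e2  = rng.exponential(1 / lam2, n)    # shock hitting only the second time
    e12 = rng.exponential(1 / lam12, n)   # common shock hitting both

    tau1 = np.minimum(e1, e12)            # first stopping time
    tau2 = np.minimum(e2, e12)            # second stopping time

    # The common shock gives P(tau1 == tau2) = lam12 / (lam1 + lam2 + lam12) > 0.
    print("empirical  P(tau1 == tau2):", (tau1 == tau2).mean())
    print("theoretical P(tau1 == tau2):", lam12 / (lam1 + lam2 + lam12))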
5

Attractors of autoencoders: Memorization in neural networks

Strandqvist, Jonas January 2020 (has links)
It is an important question in machine learning to understand how neural networks learn. This thesis sheds further light on this by studying autoencoder neural networks, which can memorize data by storing it as attractors. This means that an autoencoder can learn a training set and later reproduce parts or all of this training set even when given inputs that do not belong to it. We seek to illuminate how ReLU networks handle memorization when trained with different setups: with and without bias, for different widths and depths, and using two different types of training images -- from the CIFAR10 dataset and randomly generated. For this, we created controlled experiments in which we train autoencoders and compute the eigenvalues of their Jacobian matrices to discern the number of data points stored as attractors. We also manually verify and analyze these results for patterns and behavior. With this thesis we broaden the understanding of ReLU autoencoders: we find that the structure of the data has an impact on the number of attractors. For instance, we produced autoencoders where every training image became an attractor when we trained with random pictures but not with CIFAR10. Changes to depth and width on these two types of data also show different behaviour. Moreover, we observe that loss has less of an impact than expected on the attractors of trained autoencoders.
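As a rough sketch of the attractor criterion described above: a stored point x is a fixed point of the trained autoencoder f whose Jacobian at x has spectral radius below one, so iterates of f started near x converge back to x. The toy architecture, input size, and tolerance below are assumptions for illustration, not the experimental setup of the thesis.

    import torch

    # Toy ReLU autoencoder; architecture and sizes are placeholders.
    model = torch.nn.Sequential(
        torch.nn.Linear(32, 8), torch.nn.ReLU(),
        torch.nn.Linear(8, 32),
    )

    def is_attractor(x, f=model, tol=1e-3):
        # x is stored as an attractor if f(x) is (numerically) x itself and the
        # Jacobian of f at x has spectral radius < 1.
        x = x.detach()
        fixed_point = torch.norm(f(x) - x) < tol
        jac = torch.autograd.functional.jacobian(f, x)         # (32, 32) matrix
        spectral_radius = torch.linalg.eigvals(jac).abs().max()
        return bool(fixed_point) and bool(spectral_radius < 1.0)

    x = torch.randn(32)
    print(is_attractor(x))   # an untrained model will almost surely return False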
6

Équations de Schrödinger à données aléatoires : construction de solutions globales pour des équations sur-critiques / Random data for Schrödinger equations : construction of global solutions for supercritical equations

Poiret, Aurélien 19 December 2012 (has links)
In this thesis, we construct a large number of global solutions for many supercritical Schrödinger equations. The idea is to randomize the initial data, following the methods of Nicolas Burq, Nikolay Tzvetkov and Laurent Thomann, in order to gain differentiability. We first consider the cubic Schrödinger equation in dimension 3. Starting from Gaussian random variables and the basis of L^2(R^3) formed by tensorial Hermite functions, we construct sets of global solutions for initial data that are morally in L^2(R^3). The key points of the proof are a Bourgain-type bilinear estimate for the harmonic oscillator and the lens transform, which reduces the problem to proving local existence of solutions for the Schrödinger equation with harmonic potential. We then study the smoothing effect to prove an analogous theorem in which the gain of differentiability equals 1/2 - 2/(p-1), where p is the nonlinearity of the equation. The gain is therefore weaker than before, but the basis of eigenfunctions is arbitrary. Moreover, since the method relies only on linear estimates, we establish the result for random variables whose tail distribution decays exponentially. Finally, we prove multilinear estimates in dimension 2 for an arbitrary basis of eigenfunctions, as well as Wiener chaos type inequalities for a general class of random variables. This allows us to establish the theorem for the quintic Schrödinger equation, with a gain of differentiability equal to 1/3, in the same setting as the previous part.
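Schematically, the randomization of the initial data described in this abstract follows the Burq–Tzvetkov approach; the display below is a generic reconstruction in Hermite-expansion notation, not an equation quoted from the thesis.

    \[
      u_0 \;=\; \sum_{n} c_n\, h_n
      \qquad\longmapsto\qquad
      u_0^{\omega} \;=\; \sum_{n} g_n(\omega)\, c_n\, h_n,
    \]
    % where the h_n are the (tensorial) Hermite eigenfunctions of the harmonic
    % oscillator -\Delta + |x|^2 on L^2(R^3) and the g_n(\omega) are independent
    % Gaussian (or, more generally, exponentially decaying) random variables;
    % u_0^\omega is almost surely no smoother than u_0, yet the flow can be
    % solved globally for almost every \omega.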
7

Multiple Imputation for Two-Level Hierarchical Models with Categorical Variables and Missing at Random Data

January 2016 (has links)
abstract: Accurate data analysis and interpretation of results may be influenced by many potential factors. The factors of interest in the current work are the chosen analysis model(s), the presence of missing data, and the type(s) of data collected. If the analysis models used (a) do not accurately capture the structure of relationships in the data, such as clustered/hierarchical data, (b) do not allow or control for missing values present in the data, or (c) do not accurately accommodate different data types, such as categorical data, then the assumptions associated with the model have not been met and the results of the analysis may be inaccurate.

In the presence of clustered/nested data, hierarchical linear modeling or multilevel modeling (MLM; Raudenbush & Bryk, 2002) can predict outcomes for each level of analysis and across multiple levels (accounting for relationships between levels), providing a significant advantage over single-level analyses. When multilevel data contain missingness, multilevel multiple imputation (MLMI) techniques may be used to model both the missingness and the clustered nature of the data; with categorical multilevel data with missingness, categorical MLMI must be used. Two such routines for MLMI with continuous and categorical data were explored with missing at random (MAR) data: a formal Bayesian imputation and analysis routine in JAGS (R/JAGS), and a common MLM procedure of imputation via Bayesian estimation in BLImP with frequentist analysis of the multilevel model in Mplus (BLImP/Mplus). Manipulated variables included intraclass correlations, the number of clusters, and the rate of missingness.

Results showed that with continuous data, R/JAGS returned more accurate parameter estimates than BLImP/Mplus for almost all parameters of interest across levels of the manipulated variables. Both R/JAGS and BLImP/Mplus encountered convergence issues and returned inaccurate parameter estimates when imputing and analyzing dichotomous data. Follow-up studies showed that JAGS and BLImP returned similar imputed datasets, but the choice of analysis software for MLM impacted the recovery of accurate parameter estimates. Implications of these findings and recommendations for further research will be discussed. / Dissertation/Thesis / Doctoral Dissertation Educational Psychology 2016
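For reference, the two-level random-intercept model and the intraclass correlation that such simulations typically manipulate can be written as below; this is the textbook Raudenbush & Bryk formulation, not a model taken from the dissertation itself.

    \[
      y_{ij} = \gamma_{00} + u_{0j} + e_{ij},
      \qquad u_{0j} \sim N(0, \tau_{00}), \quad e_{ij} \sim N(0, \sigma^{2}),
      \qquad \mathrm{ICC} = \frac{\tau_{00}}{\tau_{00} + \sigma^{2}}.
    \]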
8

On two Random Models in Data Analysis

James, David 12 January 2017 (has links)
No description available.
