51

LDPC DropConnect

Chen, Xi January 2023
Machine learning is a popular topic that has become a scientific research tool in many fields. Overfitting is a common challenge in machine learning, where the model fits the training data too well and performs poorly on new data. Stochastic regularization is one method used to prevent overfitting by artificially constraining the model to be simpler. In this thesis, we investigate the use of tools from information and coding theory as regularization methods in machine learning. The motivation for this project comes from recent results that successfully related the generalization capability of learning algorithms to the information stored in the model parameters. This has led us to explore stochastic regularization techniques such as Dropout and DropConnect, which add sparsity to networks and can help control and limit the information that the parameters store about the training data. Specifically, we explore the use of parity-check matrices from coding theory as masks in the DropConnect method. Parity-check matrices describe linear relations that codewords must satisfy, and they have been shown to perform well as measurement matrices in compressed sensing. We build a new family of neural networks that apply Low-Density Parity-Check (LDPC) matrices as DropConnect masks, so-called LDPC DropConnect networks. We evaluate the performance of these networks on popular classification datasets and track their generalization capability with statistics of the LDPC matrices. Our experiments show that adopting LDPC matrices does not significantly improve the generalization performance, but it helps provide a more robust evidence lower bound in the Bayesian approach. Our work may provide insights for further research on applying machine learning in compressed sensing, distributed computation, and other related areas.
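
As an illustration of the idea, here is a minimal sketch (not the thesis's implementation; the layer sizes, row weight, and random mask construction are all assumptions) that builds a sparse binary matrix with fixed row weight as a stand-in for an LDPC parity-check matrix and applies it as a fixed DropConnect mask on a fully connected layer, so that individual weights, rather than activations, are dropped:

```python
import numpy as np

rng = np.random.default_rng(0)

def ldpc_like_mask(rows, cols, row_weight):
    """Sparse binary matrix with a fixed number of ones per row -- a
    stand-in for an LDPC parity-check matrix (real LDPC constructions
    also control column weights and avoid short cycles)."""
    mask = np.zeros((rows, cols), dtype=np.float32)
    for r in range(rows):
        mask[r, rng.choice(cols, size=row_weight, replace=False)] = 1.0
    return mask

# DropConnect zeroes individual weights, not activations.
in_dim, out_dim = 784, 128                      # hypothetical layer sizes
W = rng.normal(0.0, 0.05, size=(out_dim, in_dim)).astype(np.float32)
b = np.zeros(out_dim, dtype=np.float32)
M = ldpc_like_mask(out_dim, in_dim, row_weight=16)

def forward(x):
    """ReLU((M * W) @ x + b): the mask multiplies the weight matrix."""
    return np.maximum((M * W) @ x + b, 0.0)

h = forward(rng.normal(size=in_dim).astype(np.float32))
```

In training, the same mask would also gate the weight gradients; standard DropConnect resamples the mask per mini-batch, whereas a parity-check mask can be held fixed.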
52

Some Inferential Results for One-Shot Device Testing Data Analysis

So, Hon Yiu January 2016
In this thesis, we develop some inferential results for one-shot device testing data analysis, extending and generalizing existing methods in the literature. First, a competing-risk model is introduced for one-shot device testing data under accelerated life-tests. One-shot devices are products that are destroyed immediately after use; we can therefore observe only a binary status, success or failure, of such products instead of their lifetimes. Many one-shot devices contain multiple components, and the failure of any one of them will lead to the failure of the device. Failed devices are inspected to identify the specific cause of failure. Since exact lifetimes are not observed, the EM algorithm becomes a natural tool for obtaining the maximum likelihood estimates of the model parameters. Here, we develop the EM algorithm for competing exponential and Weibull cases. Second, a semi-parametric approach is developed for simple one-shot device testing data. A semi-parametric model consists of parametric and non-parametric components. For this purpose, we only assume that the hazards at different stress levels are proportional to each other, but no distributional assumption is made on the lifetimes. This provides greater flexibility in model fitting and enables us to examine the relationship between the reliability of devices and the stress factors. Third, Bayesian inference is developed for one-shot device testing data under exponential distributions, and under Weibull distributions with non-constant shape parameters, for competing risks. The Bayesian framework provides statistical inference from another perspective: it assumes the model parameters to be random and improves the inference by incorporating expert experience as prior information. This method is shown to be very useful when we have limited failure observations, in which case the maximum likelihood estimator may not exist. The thesis proceeds as follows. In Chapter 2, we assume the one-shot devices to have two components with exponentially distributed lifetimes under multiple stress factors. We then develop an EM algorithm for likelihood inference on the model parameters as well as some useful reliability characteristics. In Chapter 3, we generalize to the situation in which lifetimes follow a Weibull distribution with non-constant shape parameters. In Chapter 4, we propose a semi-parametric model for simple one-shot device test data based on the proportional hazards model and develop associated inferential results. In Chapter 5, we consider the competing-risk model with exponential lifetimes and develop inference by adopting the Bayesian approach. In Chapter 6, we generalize these Bayesian inferential results to the situation in which the lifetimes have a Weibull distribution. Finally, we provide some concluding remarks and indicate some future research directions in Chapter 7. / Thesis / Doctor of Philosophy (PhD)
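
To make the competing-risk setup concrete, here is a minimal sketch of the likelihood for one-shot data under two competing exponential risks, where a device's status at inspection time tau is survival, failure from cause 1, or failure from cause 2. The stress levels, inspection times, and log-linear rate links are invented for illustration, and the sketch maximizes the marginal likelihood directly rather than via the EM algorithm the thesis develops:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulated one-shot test data: we observe only survival (y = 0) or the
# cause of failure (y = 1 or 2) at inspection time tau, never the lifetime.
n = 400
s = rng.choice([30.0, 40.0, 50.0], size=n)      # stress levels
tau = rng.choice([10.0, 20.0], size=n)          # inspection times
l1 = np.exp(-6.0 + 0.06 * s)                    # true cause-1 rate
l2 = np.exp(-7.0 + 0.09 * s)                    # true cause-2 rate
t1, t2 = rng.exponential(1 / l1), rng.exponential(1 / l2)
y = np.where(np.minimum(t1, t2) > tau, 0, np.where(t1 < t2, 1, 2))

def negloglik(theta):
    a1, b1, a2, b2 = theta
    lam1, lam2 = np.exp(a1 + b1 * s), np.exp(a2 + b2 * s)
    lam = lam1 + lam2
    p_fail = 1.0 - np.exp(-lam * tau)           # P(failed by tau)
    # Cause-j failure probability is (lam_j / lam) * p_fail.
    ll = np.where(y == 0, -lam * tau,
         np.where(y == 1, np.log(lam1 / lam * p_fail),
                          np.log(lam2 / lam * p_fail)))
    return -ll.sum()

fit = minimize(negloglik, x0=np.array([-5.0, 0.0, -5.0, 0.0]),
               method="Nelder-Mead", options={"maxiter": 5000})
print(fit.x)                                    # estimates of (a1, b1, a2, b2)
```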
53

Statistical models for the long-term monitoring of songbird populations : a Bayesian analysis of constant effort sites and ring-recovery data

Cave, Vanessa M. January 2010
To underpin and improve advice given to government and other interested parties on the state of Britain’s common songbird populations, new models for analysing ecological data are developed in this thesis. These models use data from the British Trust for Ornithology’s Constant Effort Sites (CES) scheme, an annual bird-ringing programme in which catch effort is standardised. Data from the CES scheme are routinely used to index abundance and productivity, and, to a lesser extent, to estimate adult survival rates. However, two features of the CES data that complicate analysis were previously inadequately addressed, namely the presence in the catch of “transient” birds not associated with the local population, and the sporadic failure of the constancy-of-effort assumption arising from the absence of within-year catch data. The current methodology is extended, with efficient Bayesian models developed for each of these demographic parameters that account for both of these data nuances, and from which reliable and usefully precise estimates are obtained. Of increasing interest is the relationship between abundance and the underlying vital rates, an understanding of which facilitates effective conservation. CES data are particularly amenable to an integrated approach to population modelling, providing a combination of demographic information from a single source. Such an integrated approach is developed here, employing Bayesian methodology and a simple population model to unite abundance, productivity and survival within a consistent framework. Independent data from ring-recoveries provide additional information on adult and juvenile survival rates. Specific advantages of this new integrated approach are identified, among which are the ability to determine juvenile survival accurately, to disentangle the probabilities of survival and permanent emigration, and to obtain estimates of total seasonal productivity. The methodologies developed in this thesis are applied to CES data from Sedge Warbler, Acrocephalus schoenobaenus, and Reed Warbler, A. scirpaceus.
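
The integrated approach can be pictured with a deliberately simplified, deterministic skeleton. The parameter names and Poisson observation model below are assumptions for illustration only; the thesis's actual model is Bayesian and additionally handles transient birds and catch effort. A female-based population model links abundance N, productivity rho, and juvenile/adult survival, and an integrated likelihood sums terms from counts, productivity data and ring-recoveries that all share these parameters:

```python
import numpy as np
from scipy.stats import poisson

def project(N0, rho, phi_juv, phi_ad, years):
    """Female-based model: N_{t+1} = N_t * phi_ad + N_t * rho * phi_juv,
    i.e. surviving adults plus locally recruited young."""
    N = [float(N0)]
    for _ in range(years - 1):
        N.append(N[-1] * phi_ad + N[-1] * rho * phi_juv)
    return np.array(N)

def count_loglik(counts, N0, rho, phi_juv, phi_ad):
    """One component of an integrated likelihood: annual CES-style counts
    treated as Poisson around the projected abundances. Productivity and
    ring-recovery likelihood terms sharing rho, phi_juv and phi_ad would
    be added to this in a full integrated (or Bayesian) analysis."""
    mu = project(N0, rho, phi_juv, phi_ad, len(counts))
    return poisson.logpmf(np.asarray(counts), mu).sum()

print(count_loglik([50, 52, 49, 47, 51], N0=50, rho=1.8,
                   phi_juv=0.25, phi_ad=0.5))
```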
54

Approche stochastique de l'analyse du « residual moveout » pour la quantification de l'incertitude dans l'imagerie sismique / A stochastic approach to uncertainty quantification in residual moveout analysis

Tamatoro, Johng-Ay 09 April 2014
The main goal of seismic imaging for oil exploration and production, as it is practised today, is to provide an image of the first few kilometres of the subsurface that allows the localization and accurate estimation of hydrocarbon resources. The reservoirs in which these hydrocarbons are trapped are structures of more or less complex geology. To characterize these reservoirs and allow the production of hydrocarbons, the geophysicist uses depth migration, a seismic imaging tool that converts the time data recorded during seismic surveys into depth images, which are then exploited by the reservoir engineer with the help of the seismic interpreter and the geologist. During depth migration, seismic events (reflectors, diffractions, faults, ...) are moved to their correct locations in space. Relevant depth migration requires an accurate knowledge of the vertical and horizontal seismic velocity variations (the velocity model). Usually the so-called Common Image Gathers (CIGs) serve as a tool to verify the correctness of the velocity model. The CIGs are often computed in the surface-offset (distance between shot point and receiver) domain, and the flatness of their events serves as the criterion of velocity-model correctness. Residual moveout (RMO) of the events on CIGs, caused by the ratio of the migration velocity model to the effective velocity of the medium, indicates incorrectness of the velocity model and is used to update it. The post-stack images forming the CIGs, which serve as data for the RMO analysis, are solutions of ill-posed inverse problems and are corrupted by noise. An uncertainty analysis is therefore necessary to improve the evaluation of the results; the lack of uncertainty-analysis tools is the weakness of current RMO analysis. Analysing and quantifying this uncertainty can support decisions that have important social and economic implications. The goal of this thesis is to contribute to the analysis and quantification of uncertainty in the parameters computed during seismic processing, and particularly in RMO analysis. Several stages were necessary to reach this goal. We began by reviewing the geophysical concepts needed to understand the problem: the organization of seismic reflection data, the processing steps, and the mathematical and methodological tools used (Chapters 2 and 3). In Chapter 4, we present the methods and tools used for conventional RMO analysis. In Chapter 5, we give a statistical interpretation of the conventional analysis and propose a stochastic approach. This approach consists of a hierarchical statistical model whose parameters are: the variance, which expresses the noise level in the data and is estimated by a wavelet-based method; a functional parameter, which expresses the coherency of the amplitudes along events and is estimated by data-smoothing methods; and the ratio, which is treated as a random variable rather than an unknown fixed parameter, as is done in the conventional approach to RMO analysis. Fitting the data to this model, combining the smoothing methods with the wavelet-based noise estimate, allows the posterior distribution of the ratio given the data to be computed by empirical Bayes methods, and the ratio is estimated by Markov chain Monte Carlo simulations of its posterior distribution. The quantiles of these simulations provide as many maps of the parameter's values as desired. 
The proposed methodology is validated in Chapter 6 by application to synthetic data and real data, and a sensitivity analysis of the parameter estimate is carried out. The use of the uncertainty in this parameter to quantify the uncertainty in the spatial positions of reflectors is also presented in this thesis.
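
As a toy illustration of the stochastic approach, the sketch below runs a random-walk Metropolis sampler on the ratio parameter for a single event on a CIG. The moveout function, the fixed noise level and event depth, and the flat prior are simplifications invented for this example; the thesis instead estimates the noise variance with wavelets and the amplitude coherency with smoothing methods inside a hierarchical model:

```python
import numpy as np

rng = np.random.default_rng(2)

def rmo_curve(h, z0, gamma):
    """Schematic depth of one event on a CIG versus offset h;
    gamma = 1 gives a flat (correctly migrated) event."""
    return np.sqrt(z0**2 + (1.0 - 1.0 / gamma**2) * h**2)

# Synthetic noisy depth picks along a single event.
h = np.linspace(0.0, 2000.0, 40)
z_obs = rmo_curve(h, z0=1500.0, gamma=1.05) + rng.normal(0.0, 8.0, h.size)

def log_post(gamma, z0=1500.0, sigma=8.0):
    if not 0.8 < gamma < 1.25:          # flat prior on a plausible range
        return -np.inf
    r = z_obs - rmo_curve(h, z0, gamma)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis on the ratio.
g, lp, draws = 1.0, log_post(1.0), []
for _ in range(20000):
    prop = g + rng.normal(0.0, 0.005)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        g, lp = prop, lp_prop
    draws.append(g)

post = np.array(draws[5000:])
print(np.quantile(post, [0.05, 0.5, 0.95]))  # quantile summary of the ratio
```

Repeating this per CIG location yields the quantile maps of the ratio mentioned above.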
55

Limited sampling strategies for estimation of cyclosporine exposure in pediatric hematopoietic stem cell transplant recipients : methodological improvement and introduction of sampling time deviation analysis

Sarem, Sarem 12 1900
No description available.
56

Inference for Gamma Frailty Models based on One-shot Device Data

Yu, Chenxi January 2024
A device that undergoes an irreversible chemical reaction or physical destruction in use, and can no longer function properly after performing its intended function, is referred to as a one-shot device. One-shot device test data differ from typical data obtained by measuring lifetimes in standard life-tests. Due to the very nature of one-shot devices, the actual lifetimes of devices under test cannot be observed; they are either left- or right-censored. In addition, a one-shot device often has multiple components that could cause the failure of the device. The components are coupled together in the manufacturing process or assembly, resulting in failure modes possessing latent heterogeneity and dependence. Frailty models enable us to describe the influence of common but unobservable covariates on the hazard function as a random effect in a model, and they also provide an easily understandable interpretation. In this thesis, we develop some inferential results for one-shot device testing data under a gamma frailty model. We first develop an efficient expectation-maximization (EM) algorithm for determining the maximum likelihood estimates of the parameters of a gamma frailty model with exponential lifetime distributions for the components, based on one-shot device test data with multiple failure modes, wherein the data are obtained from a constant-stress accelerated life-test. The maximum likelihood estimate of the mean lifetime of $k$-out-of-$M$ structured one-shot devices under normal operating conditions is also presented. In addition, the asymptotic variance-covariance matrix of the maximum likelihood estimates is derived, which is then used to construct asymptotic confidence intervals for the model parameters. The performance of the proposed inferential methods is evaluated through Monte Carlo simulations and illustrated with a numerical example. A gamma frailty model with Weibull baseline hazards is considered next for fitting one-shot device testing data. The Weibull baseline hazards enable us to analyze time-varying failure rates more accurately, allowing for a deeper understanding of the dynamic nature of a system's reliability. We develop an EM algorithm for estimating the model parameters utilizing the complete likelihood function. A detailed simulation study compares the performance of the Weibull baseline hazard model with that of the exponential baseline hazard model. The introduction of shape parameters in the components' lifetime distributions within the Weibull baseline hazard model offers enhanced flexibility in model fitting. Finally, Bayesian inference is developed for the gamma frailty model with exponential baseline hazards for one-shot device testing data. We introduce a Bayesian estimation procedure using Markov chain Monte Carlo (MCMC) techniques for estimating the model parameters as well as for developing credible intervals for those parameters. The performance of the proposed method is evaluated in a simulation study. Model comparison between the independence model and the frailty model is made using a Bayesian model selection criterion. / Thesis / Candidate in Philosophy
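
A minimal numerical sketch of the frailty idea follows (single failure mode, invented stress link and parameter values, and direct marginal-likelihood maximization instead of the EM algorithm developed in the thesis). For a gamma frailty Z ~ Gamma(k, k), with mean 1 and variance 1/k, multiplying an exponential hazard, integrating Z out gives the marginal survival S(t) = (1 + Lambda(t)/k)^(-k), which is all that is needed to write the likelihood of binary one-shot observations:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Simulated one-shot data: exponential lifetime whose hazard is scaled by
# a device-level frailty Z ~ Gamma(k, k) (mean 1, variance 1/k).
n = 2000
s = rng.choice([30.0, 50.0], size=n)            # stress covariate
tau = rng.choice([5.0, 10.0], size=n)           # inspection times
lam = np.exp(-6.0 + 0.08 * s)                   # true baseline rate
Z = rng.gamma(2.0, 1.0 / 2.0, size=n)           # true k = 2
y = (rng.exponential(1.0 / (Z * lam)) <= tau).astype(int)  # 1 = failed

def negloglik(theta):
    a, b, log_k = theta
    k = np.exp(log_k)
    Lam = np.exp(a + b * s) * tau               # cumulative hazard
    surv = (1.0 + Lam / k) ** (-k)              # frailty integrated out
    p = np.clip(np.where(y == 1, 1.0 - surv, surv), 1e-12, 1.0)
    return -np.sum(np.log(p))

fit = minimize(negloglik, x0=np.array([-4.0, 0.0, 0.0]),
               method="Nelder-Mead", options={"maxiter": 5000})
print(fit.x[0], fit.x[1], np.exp(fit.x[2]))     # estimates of a, b, k
```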
57

排列檢定法應用於空間資料之比較 / Permutation test on spatial comparison

王信忠, Wang, Hsin-Chung Unknown Date
This thesis investigates whether two population distributions over a two-dimensional lattice are identical. We propose the relabel (Fisher's) permutation test, inspired by Fisher's exact test, to compare the distributions of two (fishery) data sets located on a two-dimensional lattice. Using the exchangeability property, we show that the permutation test suggested by Syrjala (1996) is not exact, whereas our relabel permutation test is exact and, additionally, more powerful. This thesis also proposes two spatial models: the spatial multinomial-relative-log-normal model and the spatial Poisson-relative-log-normal model. Both models exhibit the characteristics common in fishery data of skewness with a long right tail and a high proportion of zero catches, and they can also describe the aggregative behaviour that species may exhibit, whether innate or driven by environmental factors such as food and temperature, so that appropriate inference can be drawn from clustered spatial data.
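
The relabel construction can be sketched directly. In the sketch below, the squared-difference statistic and the Poisson toy data are stand-ins chosen for brevity (Syrjala's statistic is based on cumulative distributions); the relabelling itself follows the idea described above: pool all individuals while keeping each one's lattice cell, randomly re-deal the survey labels with the two survey totals fixed, and recompute the statistic to build the null distribution:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two toy catch surfaces (counts per cell) on a 10 x 10 lattice.
grid = (10, 10)
n1 = rng.poisson(3.0, size=grid)
n2 = rng.poisson(3.0, size=grid)

def statistic(a, b):
    """Squared difference between the two normalized spatial
    distributions (a stand-in for Syrjala's cumulative statistic)."""
    return np.sum((a / a.sum() - b / b.sum()) ** 2)

t_obs = statistic(n1, n2)

# Relabel test: pool all individuals, keep each one's cell, and randomly
# re-deal the survey labels while preserving the two survey totals.
cells = np.repeat(np.arange(n1.size), (n1 + n2).ravel())
N1 = n1.sum()
t_null = np.empty(2000)
for i in range(t_null.size):
    perm = rng.permutation(cells.size)
    lab1 = np.bincount(cells[perm[:N1]], minlength=n1.size).reshape(grid)
    t_null[i] = statistic(lab1, (n1 + n2) - lab1)

p_value = (np.sum(t_null >= t_obs) + 1) / (t_null.size + 1)
print(p_value)
```

Exchangeability of the individual labels under the null hypothesis that the two surveys sample the same spatial distribution is what makes the test exact.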
58

Risk-averse periodic preventive maintenance optimization

Singh, Inderjeet, 1978- 21 December 2011
We consider a class of periodic preventive maintenance (PM) optimization problems for a single piece of equipment that deteriorates with time or use and can be repaired upon failure through corrective maintenance (CM). We develop analytical and simulation-based optimization models that seek an optimal periodic PM policy, which minimizes the sum of the expected total cost of PMs and the risk-averse cost of CMs over a finite planning horizon. In the simulation-based models, we assume that both types of maintenance actions are imperfect, whereas our analytical models consider imperfect PMs with minimal CMs. The effectiveness of maintenance actions is modeled using age reduction factors. For a repairable unit of equipment, its virtual age, and not its calendar age, determines the associated failure rate. Therefore, two sets of parameters, one describing the effectiveness of maintenance actions and the other defining the underlying failure rate of a piece of equipment, are critical to our models. Under a given maintenance policy, the two sets of parameters and a virtual-age-based age-reduction model completely define the failure process of a piece of equipment. In practice, the true failure rate and the exact quality of the maintenance actions cannot be determined, and they are often estimated from the equipment failure history. We use a Bayesian approach to parameter estimation, under which a random-walk-based Gibbs sampler provides posterior estimates for the parameters of interest. Our posterior estimates for a few datasets from the literature are consistent with published results. Furthermore, our computational results demonstrate that our Gibbs sampler is, for this class of problems, the better choice over a general rejection-sampling-based parameter estimation method. We present a general simulation-based periodic PM optimization model, which uses the posterior estimates to simulate the number of operational equipment failures under a given periodic PM policy. Optimal periodic PM policies under the classical maximum likelihood (ML) and Bayesian estimates are obtained for a few datasets. Limitations of the ML approach are revealed for a dataset from the literature, in which the use of ML estimates of the parameters in the maintenance optimization model fails to capture a trivial optimal PM policy. Finally, we introduce a single-stage and a two-stage formulation of the risk-averse periodic PM optimization model, with imperfect PMs and minimal CMs. Such models apply to a class of complex equipment with many parts, operational failures of which are addressed by replacing or repairing a few parts, thereby not affecting the failure rate of the equipment under consideration. For general values of PM age reduction factors, we provide sufficient conditions to establish the convexity of the first and second moments of the number of failures, and of the risk-averse expected total maintenance cost, over a finite planning horizon. For increasing Weibull rates and a general class of increasing and convex failure rates, we show that these convexity results are independent of the PM age reduction factors. In general, the optimal periodic PM policy under the single-stage model is no better than the optimal two-stage policy. But if PMs are assumed perfect, then we establish that the single-stage and the two-stage optimization models are equivalent. / text
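
A compact sketch of the simulation-based side follows; the cost coefficients, Weibull parameters, age-reduction factor, and the Kijima-style virtual-age recursion chosen here are assumptions for illustration. Under minimal CMs, failures within each PM interval form a nonhomogeneous Poisson process, so the number of CMs per interval is Poisson with mean Lambda(v + T) - Lambda(v), where v is the virtual age after the last imperfect PM:

```python
import numpy as np

rng = np.random.default_rng(5)

def Lam(t, beta=2.5, eta=100.0):
    """Weibull cumulative intensity of the minimal-repair (NHPP) process."""
    return (t / eta) ** beta

def simulate_cost(T, horizon=360.0, delta=0.6, c_pm=1.0, c_cm=10.0,
                  n_rep=4000):
    """Total maintenance cost under a PM every T time units.
    delta is the age-reduction factor (0 = as good as new, 1 = no effect);
    any partial final interval is ignored for simplicity."""
    n_int = max(int(horizon // T), 1)
    costs = np.zeros(n_rep)
    v = 0.0                                 # virtual age after the last PM
    for _ in range(n_int):
        mu = Lam(v + T) - Lam(v)            # expected CMs in this interval
        costs += c_cm * rng.poisson(mu, size=n_rep)
        v = delta * (v + T)                 # imperfect PM reduces the age
    return costs + c_pm * (n_int - 1)       # PMs between intervals

rho = 1.0                                   # risk-aversion weight
for T in (20.0, 30.0, 45.0, 90.0):
    c = simulate_cost(T)
    print(T, c.mean() + rho * c.std())      # mean + rho * std objective
```

Sweeping T and minimizing the mean-plus-standard-deviation objective mimics, in miniature, the risk-averse policy search described above.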
