151

Precision improvement for Mendelian Randomization

Zhu, Yineng 23 January 2023 (has links)
Mendelian Randomization (MR) methods use genetic variants as instrumental variables (IVs) to infer causal relationships between an exposure and an outcome, overcoming the inability of observational studies to infer such relationships in the presence of unobserved confounders. Several MR methods exist, including the inverse variance weighted (IVW) method, which has been extended to handle correlated IVs, and the median method, which provides consistent causal estimates in the presence of pleiotropy when fewer than half of the genetic variants are invalid IVs but assumes independent IVs. In this dissertation, we propose two new methods to improve precision in MR analysis. In the first chapter, we extend the median method to correlated IVs with the quasi-boots median method, which accounts for IV correlation in the standard error estimation using a quasi-bootstrap method. Simulation studies show that this method outperforms existing median methods under the correlated-IVs setting, both with and without pleiotropic effects. In the second chapter, to overcome the lack of an effective way to account for sample overlap in current IVW methods, we propose a new overall causal effect estimator, which we name the IVW-GH method, derived by exploring the distribution of the estimator for individual IVs under the independent-IVs setting. In the final chapter, we extend the IVW-GH method to correlated IVs. In simulation studies, the IVW-GH method outperforms existing IVW methods in the one-sample setting with independent IVs and shows reasonable results in other settings. We apply the proposed methods to genome-wide association results from the Framingham Heart Study Offspring Study and the Million Veteran Program to identify potential causal relationships between a number of proteins and lipids. All the proposed methods identify some proteins known to be related to lipids.
In addition, the quasi-boots median method is robust to pleiotropic effects in the real data application. Consequently, the newly proposed quasi-boots median method and IVW-GH method may provide additional insights for identifying causal relationships. / 2025-01-23T00:00:00Z
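For context, the standard fixed-effect IVW estimator for independent IVs that the dissertation builds on can be sketched as follows. This is an illustrative sketch of the textbook estimator, not the proposed quasi-boots median or IVW-GH methods; the function and argument names are our own:

```python
import numpy as np

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Fixed-effect inverse variance weighted (IVW) causal estimate.

    Each independent IV j gives a Wald ratio beta_outcome[j] / beta_exposure[j];
    IVW combines the ratios with weights beta_exposure[j]**2 / se_outcome[j]**2,
    which is equivalent to a weighted regression through the origin.
    """
    beta_exposure = np.asarray(beta_exposure, dtype=float)
    beta_outcome = np.asarray(beta_outcome, dtype=float)
    se_outcome = np.asarray(se_outcome, dtype=float)

    weights = beta_exposure**2 / se_outcome**2
    ratios = beta_outcome / beta_exposure
    estimate = np.sum(weights * ratios) / np.sum(weights)
    se = np.sqrt(1.0 / np.sum(weights))
    return estimate, se
```

The quasi-boots median and IVW-GH methods proposed here modify this baseline, respectively, by bootstrapping the standard error under IV correlation and by reworking the per-IV estimator distribution to handle sample overlap.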
152

A Study of Bayesian Inference in Medical Diagnosis

Herzig, Michael 05 1900 (has links)
<p> Bayes' formula may be written as follows: </p> <p> P(yᵢ|X) = P(X|yᵢ)·P(yᵢ) / Σⱼ₌₁ᴷ P(X|yⱼ)·P(yⱼ), (1) </p> <p> where Y = {y₁, y₂, ..., y_K} and X = {x₁, x₂, ..., xₖ}. </p> <p> Assuming independence of the attributes x₁, x₂, ..., xₖ, Bayes' formula may be rewritten as follows: </p> <p> P(yᵢ|X) = P(x₁|yᵢ)·P(x₂|yᵢ)·...·P(xₖ|yᵢ)·P(yᵢ) / Σⱼ₌₁ᴷ P(x₁|yⱼ)·P(x₂|yⱼ)·...·P(xₖ|yⱼ)·P(yⱼ). (2) </p> <p> In medical diagnosis the y's denote disease states and the x's denote the presence or absence of symptoms. Bayesian inference is applied to medical diagnosis as follows: for an individual with data set X, the predicted diagnosis is the disease yⱼ such that P(yⱼ|X) = maxᵢ P(yᵢ|X), i = 1, 2, ..., K, (3) as calculated from (2). </p> <p> Inferences based on (2) and (3) correctly allocate a high proportion of patients (>70%) in studies to date, despite violations of the independence assumption. The aim of this thesis is modest: (i) to demonstrate the applicability of Bayesian inference to the problem of medical diagnosis; (ii) to review pertinent literature; (iii) to present a Monte Carlo method which simulates the application of Bayes' formula to distinguish among diseases; and (iv) to present and discuss the results of Monte Carlo experiments which allow statistical statements to be made concerning the accuracy of Bayesian inference when the assumption of independence is violated. </p> <p> The Monte Carlo study considers paired dependence among attributes when Bayes' formula is used to predict diagnoses from among 6 disease categories. A parameter which measures deviations from attribute independence is defined by DH = (Σⱼ₌₁⁶ |P(x_B|x_A, yⱼ) − P(x_B|yⱼ)|)/6, where x_A and x_B denote a dependent attribute pair. It was found that the number of correct Bayesian predictions, M, decreases markedly as attributes increasingly diverge from independence, i.e., as DH increases. However, a simple first-order linear model of the form M = B₀ + B₁·DH does not consistently explain the variation in M.
</p> / Thesis / Master of Science (MSc)
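Equations (1)-(3) above amount to a naive Bayes classifier. A minimal sketch, with hypothetical argument names (`cond_probs`, `priors`, `symptoms`) chosen for illustration:

```python
import numpy as np

def bayes_diagnosis(cond_probs, priors, symptoms):
    """Predict a diagnosis via equations (2)-(3) under attribute independence.

    cond_probs[i][k] = P(x_k present | y_i); priors[i] = P(y_i);
    symptoms[k] in {0, 1} records the observed absence/presence of symptom k.
    """
    cond_probs = np.asarray(cond_probs, dtype=float)
    priors = np.asarray(priors, dtype=float)
    symptoms = np.asarray(symptoms)

    # Likelihood of each disease: product over symptoms of P(x_k | y_i),
    # using P(absent | y_i) = 1 - P(present | y_i).
    likelihoods = np.where(symptoms == 1, cond_probs, 1.0 - cond_probs).prod(axis=1)
    joint = likelihoods * priors
    posteriors = joint / joint.sum()                  # equation (2)
    return int(np.argmax(posteriors)), posteriors     # equation (3)
```

The dependence parameter DH in the thesis measures how far real data depart from the independence that this product of conditionals assumes.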
153

BALLWORLD: A FRAMEWORK FOR LEARNING STATISTICAL INFERENCE AND STREAM PROCESSING

Ravali, Yeluri January 2017 (has links)
No description available.
154

SELF ORGANIZED INFERENCE OF SPATIAL STRUCTURE IN RANDOMLY DEPLOYED SENSOR NETWORKS

GEORGE, NEENA A. January 2006 (has links)
No description available.
155

A Hybrid-Genetic Algorithm for Training a Sugeno-Type Fuzzy Inference System with a Mutable Rule Base

Coy, Christopher G. January 2010 (has links)
No description available.
156

The effect of data error in inducing confirmatory inference strategies in scientific hypothesis testing /

Kern, Leslie Helen January 1982 (has links)
No description available.
157

Grammatical inference of regular and context-free languages /

Marik, Delores Ann January 1977 (has links)
No description available.
158

MEMBERSHIP INFERENCE ATTACKS AND DEFENSES IN CLASSIFICATION MODELS

Jiacheng Li (17775408) 12 January 2024 (has links)
<p dir="ltr">Neural network-based machine learning models are now prevalent in our daily lives, from voice assistants~\cite{lopez2018alexa}, to image generation~\cite{ramesh2021zero} and chatbots (e.g., ChatGPT-4~\cite{openai2023gpt4}). These large neural networks are powerful but also raise serious security and privacy concerns, such as whether personal data used to train these models are leaked by these models. One way to understand and address this privacy concern is to study membership inference (MI) attacks and defenses~\cite{shokri2017membership,nasr2019comprehensive}. In MI attacks, an adversary seeks to infer whether a given instance was part of the training data. We study MI attacks against classifiers, where the attacker's goal is to determine whether a data instance was used to train the classifier. Through systematic cataloging of existing MI attacks and extensive experimental evaluations of them, we find that a model's vulnerability to MI attacks is tightly related to the generalization gap---the difference between training accuracy and test accuracy. We then propose a defense against MI attacks that aims to close the gap by intentionally reducing the training accuracy. More specifically, the training process attempts to match the training and validation accuracies, by means of a new {\em set regularizer} using the Maximum Mean Discrepancy between the softmax output empirical distributions of the training and validation sets. Our experimental results show that combining this approach with another simple defense (mix-up training) significantly improves on the state-of-the-art defense against MI attacks, with minimal impact on testing accuracy.
</p><p dir="ltr"><br></p><p dir="ltr">Furthermore, we consider the challenge of performing membership inference attacks in a federated learning setting ---for image classification--- where an adversary can only observe the communication between the central node and a single client (a passive white-box attack). Passive attacks are among the hardest to detect, since they can be performed without modifying the behavior of the central server or its clients, and they assume {\em no access to private data instances}. The key insight of our method is the empirical observation that, near parameters that generalize well at test time, the gradients of large overparameterized neural network models statistically behave like high-dimensional independent isotropic random vectors. Using this insight, we devise two attacks that are often little affected by existing and proposed defenses. Finally, we validate the hypothesis that our attack depends on overparametrization by showing that increasing the level of overparametrization (without changing the neural network architecture) is positively correlated with attack effectiveness.</p><p dir="ltr">Finally, we observe that training instances have different degrees of vulnerability to MI attacks. Most instances will have low loss even when not included in training. For these instances, the model can fit them well without concerns of MI attacks. An effective defense only needs to (possibly implicitly) identify instances that are vulnerable to MI attacks and avoid overfitting them. A major challenge is how to achieve such an effect in an efficient training process. Leveraging two distinct recent advancements in representation learning, counterfactually-invariant representations and subspace learning methods, we introduce a novel Membership-Invariant Subspace Training (MIST) method to defend against MI attacks. MIST avoids overfitting the vulnerable instances without significant impact on other instances.
We have conducted extensive experimental studies, comparing MIST with various other state-of-the-art (SOTA) MI defenses against several SOTA MI attacks. We find that MIST outperforms other defenses while resulting in minimal reduction in testing accuracy. </p><p dir="ltr"><br></p>
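The main ingredient of the set regularizer described above is a Maximum Mean Discrepancy estimate between two sets of softmax outputs. A rough illustrative sketch with an RBF kernel follows; it is not the dissertation's implementation, and the function name and `sigma` default are our own:

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy with an RBF kernel.

    X, Y: arrays of shape (n, d) and (m, d) holding, e.g., softmax output
    vectors from the training and validation sets. The estimate is near zero
    when the two empirical distributions match.
    """
    def gram(A, B):
        # Pairwise squared Euclidean distances, then the RBF kernel.
        sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
        return np.exp(-sq / (2.0 * sigma**2))

    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()
```

Penalizing this quantity during training pushes the model's softmax outputs on training data toward the distribution seen on held-out data, shrinking the generalization gap that MI attacks exploit.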
159

A Comparative Analysis of Bayesian Nonparametric Variational Inference Algorithms for Speech Recognition

Steinberg, John January 2013 (has links)
Nonparametric Bayesian models have become increasingly popular in speech recognition tasks such as language and acoustic modeling due to their ability to discover underlying structure in an iterative manner. These methods do not require a priori assumptions about the structure of the data, such as the number of mixture components, and can learn this structure directly. Dirichlet process mixtures (DPMs) are a widely used nonparametric Bayesian method which can be used as priors to determine an optimal number of mixture components and their respective weights in a Gaussian mixture model (GMM). Because DPMs potentially require an infinite number of parameters, inference algorithms are needed to make posterior calculations tractable. The focus of this work is an evaluation of three of these Bayesian variational inference algorithms which have only recently become computationally viable: Accelerated Variational Dirichlet Process Mixtures (AVDPM), Collapsed Variational Stick Breaking (CVSB), and Collapsed Dirichlet Priors (CDP). To eliminate other effects on performance such as language models, a phoneme classification task is chosen to more clearly assess the viability of these algorithms for acoustic modeling. Evaluations were conducted on the CALLHOME English and Mandarin corpora, consisting of two languages that, from a human perspective, are phonologically very different. It is shown in this work that these inference algorithms yield error rates comparable to a baseline Gaussian mixture model (GMM) but with a factor of up to 20 fewer mixture components. AVDPM is shown to be the most attractive choice because it delivers the most compact models and is computationally efficient, enabling its application to big data problems. / Electrical and Computer Engineering
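For background on the truncation device that makes the variational algorithms above tractable, the stick-breaking construction of Dirichlet process mixture weights can be sketched as follows. This is an illustrative sketch, not the AVDPM, CVSB, or CDP implementations; names are our own:

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, rng=None):
    """Sample mixture weights from a truncated stick-breaking construction.

    A Dirichlet process with concentration alpha yields weights
    w_k = v_k * prod_{j<k} (1 - v_j) with v_k ~ Beta(1, alpha); truncating
    at a finite level is what makes variational inference tractable.
    """
    rng = np.random.default_rng(rng)
    v = rng.beta(1.0, alpha, size=truncation)
    v[-1] = 1.0  # absorb the remaining stick mass so the weights sum to 1
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return v * remaining
```

A small concentration `alpha` puts most mass on a few components, which is how a DPM prior lets the data determine an effective number of Gaussian mixture components.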
160

Being Sherlock Holmes: Can we sense empathy from a brief sample of behaviour?

Wu, W., Sheppard, E., Mitchell, Peter 04 June 2020 (has links)
Yes / Mentalizing (otherwise known as ‘theory of mind’) involves a special process that is adapted for predicting and explaining the behaviour of others (targets) based on inferences about targets’ beliefs and character. This research investigated how well participants made inferences about an especially apposite aspect of character, empathy. Participants were invited to make inferences of self‐rated empathy after watching or listening to an unfamiliar target for a few seconds telling a scripted joke (or answering questions about him/herself or reading aloud a paragraph of promotional material). Across three studies, participants were good at identifying targets with low and high self‐rated empathy but not good at identifying those who are average. Such inferences, especially of high self‐rated empathy, seemed to be based mainly on clues in the target's behaviour, presented either in a video, a still photograph or in an audio track. However, participants were not as effective in guessing which targets had low or average self‐rated empathy from a still photograph showing a neutral pose or from an audio track. We conclude with discussion of the scope and the adaptive value of this inferential ability.
