About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Use of binary and truncated regression models in the analysis of recreational fish catches

O'Neill, M. Unknown Date (has links)
No description available.
2

Extended Poisson process modelling

Toscas, P. Unknown Date (has links)
No description available.
3

An assessment of statistical methodologies used in the analysis of marine community data

Ellis, R. N. Unknown Date (has links)
No description available.
4

Scalable Bayesian Methods for Probabilistic Graphical Models

Chuan Zuo (18429759) 25 April 2024 (has links)
<p dir="ltr">In recent years, probabilistic graphical models have emerged as a powerful framework for understanding complex dependencies in multivariate data, offering a structured approach to tackle uncertainty and model complexity. These models have revolutionized the way we interpret the interplay between variables in various domains, from genetics to social network analysis. Inspired by the potential of probabilistic graphical models to provide insightful data analysis while addressing the challenges of high-dimensionality and computational efficiency, this dissertation introduces two novel methodologies that leverage the strengths of graphical models in high-dimensional settings. By integrating advanced inference techniques and exploiting the structural advantages of graphical models, we demonstrate how these approaches can efficiently decode complex data patterns, offering significant improvements over traditional methods. This work not only contributes to the theoretical advancements in the field of statistical data analysis but also provides practical solutions to real-world problems characterized by large-scale, complex datasets.</p><p dir="ltr">Firstly, we introduce a novel Bayesian hybrid method for learning the structure of Gaus- sian Bayesian Networks (GBNs), addressing the critical challenge of order determination in constraint-based and score-based methodologies. By integrating a permutation matrix within the likelihood function, we propose a technique that remains invariant to data shuffling, thereby overcoming the limitations of traditional approaches. Utilizing Cholesky decompo- sition, we reparameterize the log-likelihood function to facilitate the identification of the parent-child relationship among nodes without relying on the faithfulness assumption. This method efficiently manages the permutation matrix to optimize for the sparsest Cholesky factor, leveraging the Bayesian Information Criterion (BIC) for model selection. Theoretical analysis and extensive simulations demonstrate the superiority of our method in terms of precision, recall, and F1-score across various network complexities and sample sizes. Specifically, our approach shows significant advantages in small-n-large-p scenarios, outperforming existing methods in detecting complex network structures with limited data. Real-world applications on datasets such as ECOLI70, ARTH150, MAGIC-IRRI, and MAGIC-NIAB further validate the effectiveness and robustness of our proposed method. Our findings contribute to the field of Bayesian network structure learning by providing a scalable, efficient, and reliable tool for modeling high-dimensional data structures.</p><p dir="ltr">Secondly, we introduce a Bayesian methodology tailored for Gaussian Graphical Models (GGMs) that bridges the gap between GBNs and GGMs. Utilizing the Cholesky decomposition, we establish a novel connection that leverages estimated GBN structures to accurately recover and estimate GGMs. This innovative approach benefits from a theoretical foundation provided by a theorem that connects sparse priors on Cholesky factors with the sparsity of the precision matrix, facilitating effective structure recovery in GGMs. To assess the efficacy of our proposed method, we conduct comprehensive simulations on AR2 and circle graph models, comparing its performance with renowned algorithms such as GLASSO, CLIME, and SPACE across various dimensions. 
Our evaluation, based on metrics like estimation ac- curacy and selection correctness, unequivocally demonstrates the superiority of our approach in accurately identifying the intrinsic graph structure. The empirical results underscore the robustness and scalability of our method, underscoring its potential as an indispensable tool for statistical data analysis, especially in the context of complex datasets.</p>
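A minimal sketch of the kind of ordering-plus-Cholesky scoring step this abstract describes is given below: each node is regressed sparsely on its predecessors under a candidate ordering, and the ordering is scored by BIC. This illustrates the general idea only, not the dissertation's algorithm; NumPy, scikit-learn's Lasso, the penalty level alpha, and the toy data are assumptions introduced here.

```python
import numpy as np
from sklearn.linear_model import Lasso

def bic_for_ordering(X, order, alpha=0.1):
    """Score one candidate node ordering of a Gaussian Bayesian network.

    Under a fixed ordering, each node is regressed sparsely on its
    predecessors; the fitted coefficients play the role of one Cholesky-style
    factor, and the ordering is scored by a BIC-type criterion (up to
    constants). A schematic stand-in for the permutation-based search
    described in the abstract.
    """
    n, p = X.shape
    Xo = X[:, order]                      # reorder columns by the candidate permutation
    bic, edges = 0.0, []
    for j in range(p):
        y = Xo[:, j]
        if j == 0:
            resid, k = y - y.mean(), 0
        else:
            fit = Lasso(alpha=alpha).fit(Xo[:, :j], y)   # sparse regression on predecessors
            resid = y - fit.predict(Xo[:, :j])
            nz = np.flatnonzero(fit.coef_)
            k = nz.size
            edges += [(order[i], order[j]) for i in nz]  # parent -> child edges implied by sparsity
        sigma2 = max(resid.var(), 1e-12)
        bic += n * np.log(sigma2) + k * np.log(n)        # node j's BIC contribution
    return bic, edges

# Compare a few random orderings and keep the lowest-BIC one.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
X[:, 2] += 0.8 * X[:, 0]                                 # inject one true edge 0 -> 2
orderings = [rng.permutation(5).tolist() for _ in range(20)]
best_bic, best_edges = min((bic_for_ordering(X, o) for o in orderings), key=lambda t: t[0])
print("best BIC:", round(best_bic, 1), "edges:", best_edges)
```

The sketch enumerates random orderings only to keep the example short; the abstract's method manages the permutation matrix far more efficiently than exhaustive or random enumeration.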
5

A Systematic Study of Sparse Deep Learning with Different Penalties

Xinlin Tao (13143465) 25 April 2023 (has links)
Deep learning has been the driving force behind many successful data science achievements. However, the deep neural network (DNN) that forms the basis of deep learning is often over-parameterized, leading to training, prediction, and interpretation challenges. To address this issue, it is common practice to apply an appropriate penalty to each connection weight, limiting its magnitude. This approach is equivalent to imposing a prior distribution on each connection weight from a Bayesian perspective. This project offers a systematic investigation into the selection of the penalty function or prior distribution. Specifically, under the general theoretical framework of posterior consistency, we prove that consistent sparse deep learning can be achieved with a variety of penalty functions or prior distributions. Examples include amenable regularization penalties (such as MCP and SCAD), spike-and-slab priors (such as mixture Gaussian distribution and mixture Laplace distribution), and polynomial decayed priors (such as the student-t distribution). Our theory is supported by numerical results.
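As a pointer to how one of the amenable penalties named above enters a training objective, the sketch below evaluates the minimax concave penalty (MCP) elementwise on a weight matrix and adds it to a placeholder data-fit term. It is only an illustration under assumed settings; the lam and gamma defaults, the random weights, and the NumPy-only setup are not the dissertation's, though the MCP formula itself is the standard one.

```python
import numpy as np

def mcp_penalty(w, lam=0.1, gamma=3.0):
    """Minimax concave penalty (MCP) evaluated elementwise and summed.

    For |w| <= gamma*lam the penalty is lam*|w| - w^2/(2*gamma); beyond that
    threshold it stays constant at 0.5*gamma*lam^2, so large weights are not
    shrunk further. The defaults for lam and gamma are illustrative only.
    """
    a = np.abs(w)
    pen = np.where(a <= gamma * lam,
                   lam * a - a ** 2 / (2.0 * gamma),
                   0.5 * gamma * lam ** 2)
    return pen.sum()

# Toy use: a penalized objective for one layer's weight matrix.
rng = np.random.default_rng(1)
W = rng.normal(scale=0.5, size=(16, 8))   # stand-in for a DNN layer's connection weights
data_loss = 0.42                           # placeholder for the usual data-fit term
print("penalized objective:", round(data_loss + mcp_penalty(W), 4))
```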
6

Deep Learning for Ordinary Differential Equations and Predictive Uncertainty

Yijia Liu (17984911) 19 April 2024 (has links)
<p dir="ltr">Deep neural networks (DNNs) have demonstrated outstanding performance in numerous tasks such as image recognition and natural language processing. However, in dynamic systems modeling, the tasks of estimating and uncovering the potentially nonlinear structure of systems represented by ordinary differential equations (ODEs) pose a significant challenge. In this dissertation, we employ DNNs to enable precise and efficient parameter estimation of dynamic systems. In addition, we introduce a highly flexible neural ODE model to capture both nonlinear and sparse dependent relations among multiple functional processes. Nonetheless, DNNs are susceptible to overfitting and often struggle to accurately assess predictive uncertainty despite their widespread success across various AI domains. The challenge of defining meaningful priors for DNN weights and characterizing predictive uncertainty persists. In this dissertation, we present a novel neural adaptive empirical Bayes framework with a new class of prior distributions to address weight uncertainty.</p><p dir="ltr">In the first part, we propose a precise and efficient approach utilizing DNNs for estimation and inference of ODEs given noisy data. The DNNs are employed directly as a nonparametric proxy for the true solution of the ODEs, eliminating the need for numerical integration and resulting in significant computational time savings. We develop a gradient descent algorithm to estimate both the DNNs solution and the parameters of the ODEs by optimizing a fidelity-penalized likelihood loss function. This ensures that the derivatives of the DNNs estimator conform to the system of ODEs. Our method is particularly effective in scenarios where only a set of variables transformed from the system components by a given function are observed. We establish the convergence rate of the DNNs estimator and demonstrate that the derivatives of the DNNs solution asymptotically satisfy the ODEs determined by the inferred parameters. Simulations and real data analysis of COVID-19 daily cases are conducted to show the superior performance of our method in terms of accuracy of parameter estimates and system recovery, and computational speed.</p><p dir="ltr">In the second part, we present a novel sparse neural ODE model to characterize flexible relations among multiple functional processes. This model represents the latent states of the functions using a set of ODEs and models the dynamic changes of these states utilizing a DNN with a specially designed architecture and sparsity-inducing regularization. Our new model is able to capture both nonlinear and sparse dependent relations among multivariate functions. We develop an efficient optimization algorithm to estimate the unknown weights for the DNN under the sparsity constraint. Furthermore, we establish both algorithmic convergence and selection consistency, providing theoretical guarantees for the proposed method. We illustrate the efficacy of the method through simulation studies and a gene regulatory network example.</p><p dir="ltr">In the third part, we introduce a class of implicit generative priors to facilitate Bayesian modeling and inference. These priors are derived through a nonlinear transformation of a known low-dimensional distribution, allowing us to handle complex data distributions and capture the underlying manifold structure effectively. 
Our framework combines variational inference with a gradient ascent algorithm, which serves to select the hyperparameters and approximate the posterior distribution. Theoretical justification is established through both the posterior and classification consistency. We demonstrate the practical applications of our framework through extensive simulation examples and real-world datasets. Our experimental results highlight the superiority of our proposed framework over existing methods, such as sparse variational Bayesian and generative models, in terms of prediction accuracy and uncertainty quantification.</p>
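The first part above hinges on a fidelity-penalized loss: a data-fit term plus a penalty forcing the network's time derivative to satisfy the ODE at collocation points. A toy sketch of that idea for the one-dimensional ODE dx/dt = -theta*x, using PyTorch automatic differentiation, is shown below; the architecture, the specific ODE, lam, and the simulated data are illustrative assumptions, not the dissertation's setup.

```python
import torch

# Small MLP standing in for the nonparametric solution x(t) of the ODE.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
theta = torch.tensor(0.5, requires_grad=True)   # unknown ODE parameter, estimated jointly

def penalized_loss(t_obs, y_obs, t_col, lam=1.0):
    """Data-fit term plus an ODE-fidelity term for the toy model dx/dt = -theta * x."""
    fit = ((net(t_obs) - y_obs) ** 2).mean()
    t = t_col.clone().requires_grad_(True)
    x = net(t)
    # d x_hat / dt at the collocation points, via automatic differentiation.
    dxdt = torch.autograd.grad(x, t, grad_outputs=torch.ones_like(x), create_graph=True)[0]
    fidelity = ((dxdt + theta * x) ** 2).mean()  # residual of dx/dt = -theta * x
    return fit + lam * fidelity

# Noisy observations generated from theta = 0.8; joint gradient descent on net and theta.
t_obs = torch.linspace(0.0, 3.0, 30).unsqueeze(1)
y_obs = torch.exp(-0.8 * t_obs) + 0.05 * torch.randn_like(t_obs)
t_col = torch.linspace(0.0, 3.0, 100).unsqueeze(1)
opt = torch.optim.Adam(list(net.parameters()) + [theta], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    penalized_loss(t_obs, y_obs, t_col).backward()
    opt.step()
print("estimated theta (target 0.8):", theta.item())
```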
7

Investigating Offender Typologies and Victim Vulnerabilities in Online Child Grooming

Siva sahitya Simhadri (17522730) 02 December 2023 (has links)
<p dir="ltr">One of the issues on social media that is expanding the fastest is children being exposed to predators online [ 1 ]. Due to the ease with which a larger segment of the younger population may now access the Internet, online grooming activity on social media has grown to be a significant social concern. Child grooming, in which adults and minors exchange sexually explicit text and media via social media platforms, is a typical component of online child exploitation. An estimated 500,000 predators operate online every day. According to estimates, Internet chat rooms and instant messaging are where 89% of sexual approaches against children take place. The child may face a variety of unpleasant consequences following a grooming event, including shame, anger, anxiety, tension, despair, and substance abuse which make it more difficult for them to report the exploitation. A substantial amount of research in this domain has focused on identifying certain vulnerabilities of the victims of grooming. These vulnerabilities include specific age groups, gender, psychological factors, no family support, and lack of good social relations which make young people more vulnerable to grooming. So far no technical work has been done to apply statistical analysis on these vulnerability profiles and observe how these patterns change between different victim types and offender types. This work presents a detailed analysis of the effect of Offender type (contact and fantasy) and victim type (Law Enforcement Officers, Real Victims and Decoys (Perverted Justice)) on representation of different vulnerabilities in grooming conversations. Comparison of different victim groups would provide insights into creating the right training material for LEOs and decoys and help in the training process for online sting operations. Moreover, comparison of different offender types would help create targeted prevention strategies to tackle online child grooming and help the victims.</p>
8

Limit theorems for rare events in stochastic topology

Zifu Wei (15420086) 02 December 2023 (has links)
This dissertation establishes a variety of limit theorems pertaining to rare events in stochastic topology, exploiting probabilistic methods to study simplicial complex models. We focus on the filtration of Čech complexes and examine the asymptotic behavior of two topological functionals: the Betti numbers and critical faces. The filtration involves a parameter r_n > 0 that determines the growth rate of the underlying Čech complexes. If r_n also depends on the time parameter t, the obtained limit theorems will be established in a functional sense.

The first part of this dissertation is devoted to investigating the layered structure of topological complexity in the tail of a probability distribution. We establish the functional strong law of large numbers for Betti numbers, a basic quantifier of algebraic topology, of a geometric complex outside an open ball of radius R_n, such that R_n tends to infinity as the sample size n increases. The nature of the obtained law of large numbers is determined by the decay rate of a probability density. It especially depends on whether the tail of a density decays at a regularly varying rate or an exponentially decaying rate. The nature of the limit theorem depends also on how rapidly R_n diverges. In particular, if R_n diverges sufficiently slowly, the limiting function in the law of large numbers is crucially affected by the emergence of arbitrarily large connected components supporting topological cycles in the limit.

The second part of this dissertation investigates convergence of point processes associated with critical faces for a Čech filtration built over a homogeneous Poisson point process in the d-dimensional flat torus. The convergence of our point process is established in terms of the M_0-topology, when the connecting radius of a Čech complex decays to 0 so slowly that critical faces are even less likely to occur than those in the regime of the threshold for homological connectivity. We also obtain a series of limit theorems for positive and negative critical faces, all of which are considerably analogous to those for critical faces.
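As a small computational companion to the first part of this abstract, the sketch below draws a heavy-tailed planar sample, keeps only the points outside a ball of radius R, and reports the Betti numbers of a Vietoris-Rips complex built on those tail points (Rips being a common computable surrogate for the Čech complex). The gudhi library, the Pareto-type sampling, and the radii R and r are assumptions chosen for illustration and do not reflect the dissertation's exact model or asymptotic regime.

```python
import numpy as np
import gudhi  # assumed available; any library exposing Rips/Cech Betti numbers would do

rng = np.random.default_rng(2)

# Heavy-tailed planar sample: Pareto-type radius (regularly varying tail), uniform angle.
n = 3000
radius = rng.pareto(2.5, size=n) + 1.0
angle = rng.uniform(0.0, 2.0 * np.pi, size=n)
pts = np.column_stack([radius * np.cos(angle), radius * np.sin(angle)])

# Keep only the points outside an open ball of radius R, then build a Vietoris-Rips
# complex with connection radius r as a computable surrogate for the Cech complex.
R, r = 3.0, 0.7
tail_pts = pts[np.linalg.norm(pts, axis=1) > R]
rips = gudhi.RipsComplex(points=tail_pts, max_edge_length=r)
st = rips.create_simplex_tree(max_dimension=2)
st.compute_persistence()
print("points outside radius", R, ":", len(tail_pts))
print("Betti numbers by dimension:", st.betti_numbers())
```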
