411

Abstractions for Probabilistic Programming to Support Model Development

Bernstein, Ryan January 2023 (has links)
Probabilistic programming is a recent advancement in probabilistic modeling whereby we can express a model as a program with little concern for the details of probabilistic inference. Probabilistic programming thereby provides a clean and powerful abstraction to its users, letting even non-experts develop clear and concise models that can leverage state-of-the-art computational inference algorithms. This model-as-program representation also presents a unique opportunity: we can apply methods from the study of programming languages directly to probabilistic models. By developing techniques to analyze, transform, or extend the capabilities of probabilistic programs, we can immediately improve the workflow of probabilistic modeling and benefit all of its applications throughout science and industry. The aim of this dissertation is to support an ideal probabilistic modeling workflow by addressing two limitations of probabilistic programming: that a program can only represent one model, and that the structure of the model it represents is often opaque to users and to the compiler. In particular, I make the following primary contributions:

(1) I introduce Multi-Model Probabilistic Programming: an extension of probabilistic programming whereby a program can represent a network of interrelated models. This new representation allows users to construct and leverage spaces of models in the same way that probabilistic programs do for individual models. Multi-Model Probabilistic Programming lets us visualize and navigate solution spaces, track and document model development paths, and audit modeler degrees of freedom to mitigate issues like p-hacking. It also provides an efficient computational foundation for the automation of model-space applications like model search, sensitivity analysis, and ensemble methods. I give a formal language specification and semantics for Multi-Model Probabilistic Programming built on the Stan language, I provide algorithms for the fundamental model-space operations along with proofs of correctness and efficiency, and I present a prototype implementation, with which I demonstrate a variety of practical applications.

(2) I present a method for automatically transforming probabilistic programs into semantically related forms by using static analysis and constraint solving to recover the structure of their underlying models. In particular, I automate two general model transformations that are required for diagnostic checks, which are important steps of a model-building workflow. Automating these transformations frees the user from manually rewriting their models, thereby avoiding potential correctness and efficiency issues.

(3) I present a probabilistic program analysis tool, “Pedantic Mode”, that automatically warns users about potential statistical issues with the model described by their program. “Pedantic Mode” uses specialized static analysis methods to decompose the structure of the underlying model.

Lastly, I discuss future work in these areas, such as advanced model-space algorithms and other general-purpose model transformations. I also discuss how these ideas may fit into future modeling workflows as these technologies mature.
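To make the model-as-program idea concrete, here is a minimal sketch in plain Python (the dissertation itself builds on the Stan language; this toy beta-binomial model and its grid-based inference routine are illustrative assumptions, not code from the work):

```python
# A toy probabilistic program: the model is written as ordinary code, and a
# generic inference routine (here, brute-force grid normalization) recovers
# the posterior without the modeler handling inference details.
import numpy as np

def log_posterior(theta, successes, trials, a=1.0, b=1.0):
    """Unnormalized log posterior: Beta(a, b) prior times Binomial likelihood."""
    return ((a - 1) * np.log(theta) + (b - 1) * np.log(1 - theta)
            + successes * np.log(theta)
            + (trials - successes) * np.log(1 - theta))

successes, trials = 7, 10
grid = np.linspace(1e-6, 1 - 1e-6, 1001)
dx = grid[1] - grid[0]

log_p = log_posterior(grid, successes, trials)
posterior = np.exp(log_p - log_p.max())
posterior /= posterior.sum() * dx          # normalize on the grid

print("posterior mean:", (grid * posterior).sum() * dx)  # ~0.667 for Beta(8, 4)
```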
412

Statistical modeling of extreme rainfall processes in consideration of climate change

Cung, Annie. January 2007 (has links)
No description available.
413

Nonverbal interaction in small groups: a methodological strategy for studying process

Fitzpatrick, Donna Lee January 1974 (has links)
No description available.
414

A probabilistic analysis of a class of random trees

Mahmoud, Hosam M. January 1983 (has links)
No description available.
415

Silicon I GF-values

Damm, Frank Louis January 1969 (has links)
No description available.
416

Methodological and analytical considerations on ranking probabilities in network meta-analysis: Evaluating comparative effectiveness and safety of interventions

Daly, Caitlin Helen January 2020 (has links)
Network meta-analysis (NMA) synthesizes all available direct (head-to-head) and indirect evidence on the comparative effectiveness of at least three treatments and provides coherent estimates of their relative effects. Ranking probabilities are commonly used to summarize these estimates and provide comparative rankings of treatments. However, the reliability of ranking probabilities as summary measures has not been formally established, and treatments are often ranked for each outcome separately. This thesis aims to address methodological gaps and limitations in the current literature by providing alternative methods for evaluating the robustness of treatment ranks, establishing comparative rankings, and integrating ranking probabilities across multiple outcomes. These novel tools, addressing three specific objectives, are developed in three papers.

The first paper presents a conceptual framework for quantifying the robustness of treatment ranks and for elucidating potential sources of lack of robustness. Cohen’s kappa is proposed for quantifying the agreement between two sets of ranks based on NMAs of the full data and a subset of the data. A leave-one-study-out strategy was used to illustrate the framework with empirical data from published NMAs, where ranks based on the surface under the cumulative ranking curve (SUCRA) were considered. Recommendations for using this strategy to evaluate sensitivity or robustness to concerning evidence are given.

When two or more cumulative ranking curves cross, treatments with large probabilities of ranking the best, second best, third best, etc. may rank worse under SUCRA than treatments with smaller corresponding probabilities. This limitation of SUCRA is addressed in the second paper through the proposal of partial SUCRA (pSUCRA) as an alternative measure for ranking treatments. pSUCRA is adapted from the partial area under the receiver operating characteristic curve in diagnostic medicine and is derived to summarize relevant regions of the cumulative ranking curve.

Knowledge users are often faced with the challenge of making sense of large volumes of NMA results presented across multiple outcomes. This may be further complicated if the comparative rankings on each outcome contradict each other, leading to subjective final decisions. The third paper addresses this limitation through a comprehensive methodological framework for integrating treatments’ ranking probabilities across multiple outcomes. The framework relies on the area inside spie charts representing treatments’ performances on all outcomes, while also incorporating the outcomes’ relative importance. This approach not only provides an objective measure of the comparative ranking of treatments across multiple outcomes, but also allows graphical presentation of the results, thereby facilitating straightforward interpretation.

All contributions in this thesis provide objective means to improve the use of comparative treatment rankings in NMA. Further extensive evaluations of these tools are required to assess their validity in empirical and simulated networks of different sizes and sparseness. / Thesis / Doctor of Philosophy (PhD) / Decisions on how best to treat a patient should be informed by all relevant evidence comparing the benefits and harms of the available options. Network meta-analysis (NMA) is a statistical method for combining evidence on at least three treatments, producing a coherent set of results. Nevertheless, NMA results are typically presented separately for each health outcome (e.g., length of hospital stay, mortality), and the volume of results can be overwhelming to a knowledge user. Moreover, the results can be contradictory across multiple outcomes. Statistics that facilitate the ranking of treatments may aid in easing this interpretative burden while limiting subjectivity. This thesis aims to address methodological gaps and limitations in current ranking approaches by providing alternative methods for evaluating the robustness of treatment ranks, establishing comparative rankings, and integrating ranking probabilities across multiple outcomes. These contributions provide objective means to improve the use of comparative treatment rankings in NMA.
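As a concrete illustration of the ranking machinery discussed above, here is a minimal sketch of how SUCRA, and a pSUCRA-style variant restricted to the top ranks, can be computed from a matrix of ranking probabilities. The numbers and the choice of top-rank region are illustrative assumptions, not results or definitions taken verbatim from the thesis:

```python
import numpy as np

# Rows: treatments; columns: P(treatment has rank 1, 2, ..., T), where
# rank 1 = best. The probabilities below are made up for illustration.
p_rank = np.array([
    [0.50, 0.30, 0.15, 0.05],
    [0.30, 0.40, 0.20, 0.10],
    [0.15, 0.20, 0.40, 0.25],
    [0.05, 0.10, 0.25, 0.60],
])

# Cumulative ranking probabilities P(rank <= k) for k = 1..T.
cum = np.cumsum(p_rank, axis=1)

# SUCRA: the mean of the cumulative probabilities over k = 1..T-1,
# i.e., the (rescaled) area under the cumulative ranking curve.
sucra = cum[:, :-1].mean(axis=1)

# A partial SUCRA restricted to the top ranks (here the top two),
# analogous to a partial AUC; the region choice is illustrative.
psucra_top2 = cum[:, :2].mean(axis=1)

print("SUCRA:       ", np.round(sucra, 3))
print("pSUCRA(top2):", np.round(psucra_top2, 3))
```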
417

Non-additive probabilities and quantum logic in finite quantum systems

Vourdas, Apostolos January 2016 (has links)
A quantum system Σ(d) with variables in Z(d) and with Hilbert space H(d) is considered. It is shown that the additivity relation of Kolmogorov probabilities is not valid in the Birkhoff-von Neumann orthocomplemented modular lattice of subspaces L(d). A second lattice Λ(d), which is distributive and contains the subsystems of Σ(d), is also considered. It is shown that in this case also the additivity relation of Kolmogorov probabilities is not valid. This suggests that a more general (than Kolmogorov) probability theory is needed, and here we adopt the Dempster-Shafer probability theory. In both of these lattices there are sublattices which are Boolean algebras, and within these 'islands' quantum probabilities are additive.
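A toy numerical check (not from the paper) of the kind of additivity failure described above: with quantum probabilities q(A) = Tr(ρΠ_A) on the lattice of subspaces of H(2), the Kolmogorov relation q(A ∨ B) + q(A ∧ B) = q(A) + q(B) fails for generic one-dimensional subspaces. The state and subspaces below are arbitrary choices for illustration:

```python
import numpy as np

def projector(v):
    """Projector onto the one-dimensional subspace spanned by v."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# A pure state rho in a 2-dimensional Hilbert space H(2).
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Two distinct 1-dimensional subspaces A and B. In the subspace lattice,
# their meet A AND B is the zero subspace and their join A OR B is H(2).
P_A = projector(np.array([1.0, 0.0]))
P_B = projector(np.array([np.cos(0.3), np.sin(0.3)]))

q = lambda P: np.trace(rho @ P).real
q_A, q_B = q(P_A), q(P_B)
q_meet = 0.0                 # q(A AND B): projector onto the zero subspace
q_join = 1.0                 # q(A OR B):  the identity projector on H(2)

# Kolmogorov additivity would require the two sums below to be equal.
print("q(A) + q(B)            =", round(q_A + q_B, 4))
print("q(A OR B) + q(A AND B) =", round(q_join + q_meet, 4))
```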
418

On New and Improved Measures for Item Analysis with Signal Detection Theory

Lee, Rachel January 2024 (has links)
Classical item analysis (CIA) entails summarizing items based on two key attributes, item difficulty and item discrimination, defined, respectively, as the proportion of examinees answering correctly and the difference in correctness between high and low scorers. Recent insights reveal a direct link between these measures and aspects of signal detection theory (SDT) in item analysis, offering modifications to traditional metrics and introducing new ones to identify problematic items (DeCarlo, 2023). The SDT approach involves extending Luce's choice model (1959) using a mixture framework, with mixing occurring within examinees rather than across them, reflecting varying latent knowledge states (know or don't know) across items. This implies a 'true' split (know/don't know), enabling straightforward discrimination and difficulty measures and lending theoretical support to the conventional item-splitting approach. DeCarlo (2023) demonstrated improved measures and item screening using simple median splits, motivating this study to explore enhanced measures via refined splits. This study builds on these findings, refining CIA and SDT measures by integrating additional information, such as response time and item scores, using latent class and cluster models.
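The classical measures that the abstract starts from are easy to compute directly; the following sketch does so on simulated data. The response model, sample sizes, and the simple median split are illustrative assumptions, not the study's data or its refined splits:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 0/1 response matrix: rows = examinees, columns = items,
# generated from a simple logistic (Rasch-like) model for illustration.
n_examinees, n_items = 200, 5
ability = rng.normal(size=(n_examinees, 1))
item_loc = np.linspace(-1.5, 1.5, n_items)
p_correct_model = 1 / (1 + np.exp(-(ability - item_loc)))
responses = (rng.random((n_examinees, n_items)) < p_correct_model).astype(int)

# Median split of total scores into high and low scorers.
total = responses.sum(axis=1)
high = total >= np.median(total)

# Classical item difficulty: proportion of examinees answering correctly.
difficulty = responses.mean(axis=0)

# Classical item discrimination: difference in proportion correct
# between the high-scoring and low-scoring groups.
discrimination = responses[high].mean(axis=0) - responses[~high].mean(axis=0)

print("difficulty (p):", np.round(difficulty, 2))
print("discrimination:", np.round(discrimination, 2))
```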
419

An investigation into the mechanics and pricing of credit derivatives

Eraman, Direen 11 1900 (has links)
With the exception of holders of default-free instruments, a key risk run by investors is credit risk. To meet investors' need to hedge this risk, the market uses credit derivatives. The South African credit derivatives market is still in its infancy, and only the simplest instruments are traded; one of the reasons is the technical sophistication required to price these instruments. This dissertation introduces the key concepts of risk-neutral probabilities, arbitrage-free pricing, martingales, default probabilities, survival probabilities, hazard rates and forward spreads. These mathematical concepts are then used as building blocks to develop pricing formulae for the most popular credit derivatives in the South African financial markets. / Operations Research / M.Sc. (Operations Research)
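As a rough illustration of the survival-probability and hazard-rate building blocks listed above, here is a toy par-spread calculation for a credit default swap under a constant hazard rate and a flat risk-free curve. The numbers, the payment grid, and the simplified legs are assumptions for illustration, not the dissertation's pricing formulae:

```python
import math

# Constant hazard rate lam gives survival S(t) = exp(-lam * t); a flat
# risk-free rate r gives discount factors D(t) = exp(-r * t).
lam, r, recovery = 0.02, 0.05, 0.4
times = [0.5 * k for k in range(1, 11)]   # semi-annual payments for 5 years

survival = lambda t: math.exp(-lam * t)
discount = lambda t: math.exp(-r * t)

# Premium leg: expected discounted spread payments per unit of spread.
premium_pv01 = sum(0.5 * discount(t) * survival(t) for t in times)

# Protection leg: expected discounted loss, approximated on the same grid
# via the default probabilities S(t - 0.5) - S(t) over each period.
protection = sum(discount(t) * (1 - recovery)
                 * (survival(t - 0.5) - survival(t)) for t in times)

# Par spread equates the two legs; for small rates it is close to the
# textbook approximation lam * (1 - recovery) = 120 bp.
par_spread = protection / premium_pv01
print(f"par CDS spread ~ {par_spread * 1e4:.1f} bp")
```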
