11

Modulation of Splicing Factor Function and Alternative Splicing Outcomes

Chen, Steven Xiwei 06 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Alternative RNA splicing is an important means of genetic control and transcriptome diversity. Alternative splicing events are frequently studied independently, and coordinated splicing controlled by common factors is often overlooked. The molecular mechanisms by which splicing regulators promote or repress specific pre-mRNA processing are still not well understood. It is well known that splicing factors can regulate splicing in a context-dependent manner, and the search for modulation of splicing factor activity via direct or indirect mechanisms is a worthwhile pursuit towards explaining context-dependent activity. We hypothesized that the combined analysis of hundreds of consortium RNA-seq datasets could identify trans-acting “modulators” whose expression is correlated with differential effects of a splicing factor on its target splice events in mRNAs. We first tested a genome-wide approach to identify relationships between RNA-binding proteins and their inferred modulators in kidney cancer. We then applied a more targeted approach to identify novel modulators of splicing factor SRSF1 function over dozens of its intron retention splicing targets in a neurological context using hundreds of dorsolateral prefrontal cortex samples. Our hypothesized model was further strengthened by incorporating genetic variants to impute gene expression in a Mendelian randomization-based approach. The modulators of intron retention splicing we identified may be associated with risk variants linked to Alzheimer’s disease and other neurological disorders, helping to explain disease-causing splicing mechanisms. Our strategy can be widely used to identify modulators of RNA-binding proteins involved in tissue-specific alternative splicing.
12

Precision improvement for Mendelian Randomization

Zhu, Yineng 23 January 2023 (has links)
Mendelian Randomization (MR) methods use genetic variants as instrumental variables (IV) to infer causal relationships between an exposure and an outcome, which overcomes the inability to infer such a relationship in observational studies due to unobserved confounders. There are several MR methods, including the inverse variance weighted (IVW) method, which has been extended to deal with correlated IVs, and the median method, which provides consistent causal estimates in the presence of pleiotropy when fewer than half of the genetic variants are invalid IVs but assumes independent IVs. In this dissertation, we propose two new methods to improve precision for MR analysis. In the first chapter, we extend the median method to correlated IVs: the quasi-boots median method, which accounts for IV correlation in the standard error estimation using a quasi-bootstrap method. Simulation studies show that this method outperforms existing median methods under the correlated IVs setting with and without the presence of pleiotropic effects. In the second chapter, to overcome the lack of an effective solution to account for sample overlap in current IVW methods, we propose a new overall causal effect estimator by exploring the distribution of the estimator for individual IVs under the independent IVs setting, which we name the IVW-GH method. In the final chapter, we extend the IVW-GH method to correlated IVs. In simulation studies, the IVW-GH method outperforms the existing IVW methods under the one-sample setting for independent IVs and shows reasonable results for other settings. We apply these proposed methods to genome-wide association results from the Framingham Heart Study Offspring Study and the Million Veteran Program to identify potential causal relationships between a number of proteins and lipids. All the proposed methods are able to identify some proteins known to be related to lipids.
In addition, the quasi-boots median method is robust to pleiotropic effects in the real data application. Consequently, the newly proposed quasi-boots median method and IVW-GH method may provide additional insights for identifying causal relationships. / 2025-01-23T00:00:00Z
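The standard fixed-effect IVW estimator that these proposed extensions build on can be sketched as follows. This is a minimal illustration assuming independent IVs and GWAS summary statistics (per-variant effect estimates and standard errors); the `ivw_estimate` helper and its inputs are illustrative, not the IVW-GH or quasi-boots methods of the dissertation:

```python
import numpy as np

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Fixed-effect inverse-variance-weighted causal estimate.

    Forms the per-IV Wald ratios beta_outcome / beta_exposure and
    combines them, weighting each ratio by its (first-order) inverse
    variance, beta_exposure^2 / se_outcome^2.
    """
    beta_exposure = np.asarray(beta_exposure, dtype=float)
    beta_outcome = np.asarray(beta_outcome, dtype=float)
    se_outcome = np.asarray(se_outcome, dtype=float)

    ratio = beta_outcome / beta_exposure          # per-IV causal estimates
    weights = (beta_exposure / se_outcome) ** 2   # inverse variance of each ratio
    estimate = np.sum(weights * ratio) / np.sum(weights)
    se = np.sqrt(1.0 / np.sum(weights))           # standard error of the pooled estimate
    return estimate, se
```

Extensions for correlated IVs replace the diagonal weighting with the full inverse of the ratio covariance matrix, which is where the linkage-disequilibrium structure enters.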
13

Advancements in the Field of Cardiovascular Disease Pharmacogenetics

Ross, Stephanie 06 1900 (has links)
Background and Objectives: Pharmacogenetics has the potential to maximize drug efficacy and minimize adverse effects of cardiovascular disease (CVD) treatment, but its translation into clinical practice has been slow. However, recent advancements in genotyping and statistical methodologies have now provided robust evidence in support of personalized medicine. This thesis addresses how advancements in pharmacogenetics may help to gain novel insights into existing drug targets, inform and guide clinical decision-making, and validate potential disease target pathways. Methods: This was achieved by exploring whether the COX-2 genetic variant (rs20417) is associated with a decreased risk of CVD outcomes, assessing whether bile acid sequestrants (BAS) are associated with a reduced risk of coronary artery disease (CAD) using the principles of Mendelian Randomization, and investigating whether genetic variants associated with dysglycaemia are associated with an increased risk of CAD. Results: We demonstrated that COX-2 carrier status was associated with a decreased risk of major cardiovascular outcomes. Furthermore, we showed that BAS appear to be associated with a reduced risk of CAD, and that genetic variants associated with HbA1c and diabetes were associated with an increased risk of CAD. Conclusions: The convergence of technological and statistical advancements in pharmacogenetics has led to a higher-quality and more cost-effective means of assessing the effect of CVD therapeutic agents. / Thesis / Doctor of Philosophy (PhD)
14

Response Adaptive Randomization using Surrogate and Primary Endpoints

Wang, Hui 01 January 2016 (has links)
In recent years, adaptive designs in clinical trials have been attractive due to their efficiency and flexibility. Response adaptive randomization procedures in phase II or III clinical trials are proposed to address ethical concerns by skewing the probability of patient assignments based on the responses obtained thus far, so that more patients will be assigned to a superior treatment group. General response-adaptive randomizations usually assume that the primary endpoint can be obtained quickly after the treatment. However, in real clinical trials, the primary outcome is often delayed, making it unusable for adaptation. Therefore, we utilize surrogate and primary endpoints simultaneously to adaptively assign subjects between treatment groups for clinical trials with continuous responses. We explore two types of primary endpoints commonly used in clinical trials: normally distributed outcomes and time-to-event outcomes. We establish a connection between the surrogate and primary endpoints through a Bayesian model, and then update the allocation ratio based on the accumulated data. Through simulation studies, we find that our proposed response adaptive randomization is more effective in assigning patients to better treatments as compared with equal allocation randomization and standard response adaptive randomization, which is based solely on the primary endpoint.
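The skewing idea behind response-adaptive randomization can be sketched in a few lines. This is a simple play-the-winner-style softmax rule over observed arm means, not the Bayesian surrogate-endpoint model of the thesis; `allocation_probability`, `assign_next`, and the `temperature` knob are illustrative names:

```python
import math
import random

def allocation_probability(mean_a, mean_b, temperature=1.0):
    """Probability of assigning the next patient to arm A.

    A softmax of the observed mean responses: the arm performing
    better so far is favored, and `temperature` controls how
    aggressively the allocation is skewed away from 1:1.
    """
    ea = math.exp(mean_a / temperature)
    eb = math.exp(mean_b / temperature)
    return ea / (ea + eb)

def assign_next(mean_a, mean_b, rng=random):
    # Draw the next assignment with the skewed probability.
    return "A" if rng.random() < allocation_probability(mean_a, mean_b) else "B"
```

With equal observed means the rule reduces to equal allocation; as evidence accumulates for one arm, its assignment probability rises smoothly rather than jumping to a deterministic choice.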
15

Randomization test and correlation effects in high dimensional data

Wang, Xiaofei January 1900 (has links)
Master of Science / Department of Statistics / Gary Gadbury / High-dimensional data (HDD) have been encountered in many fields and are characterized by a “large p, small n” paradigm that arises in genomic, lipidomic, and proteomic studies. This report used a simulation study that employed basic block diagonal covariance matrices to generate correlated HDD. Quantities of interest in such data are, among others, the number of ‘significant’ discoveries. This number can be highly variable when data are correlated. This project compared randomization tests versus usual t-tests for testing of significant effects across two treatment conditions. Of interest was whether the variance of the number of discoveries is better controlled in a randomization setting versus a t-test. The results showed that the randomization tests produced results similar to those of t-tests.
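A two-sample randomization test of the kind compared here can be sketched as follows: the observed difference in means is referred to the distribution obtained by repeatedly re-shuffling the pooled data across the two groups. The `randomization_test` helper is a generic Monte Carlo version, not the report's exact simulation code:

```python
import random

def randomization_test(x, y, n_perm=10000, seed=0):
    """Two-sided Monte Carlo randomization test on the difference in means.

    Re-randomizes the pooled observations into two groups of the
    original sizes and counts how often the shuffled difference is at
    least as extreme as the observed one (with add-one smoothing).
    """
    rng = random.Random(seed)
    observed = sum(x) / len(x) - sum(y) / len(y)
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        xs, ys = pooled[:len(x)], pooled[len(x):]
        diff = sum(xs) / len(xs) - sum(ys) / len(ys)
        if abs(diff) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)
```

Unlike the t-test, this p-value requires no normality assumption, only exchangeability of the observations under the null.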
16

A Random Bored : How randomization in cooperative board games create replayability and tension

Thålin, Felix January 2015 (has links)
This paper examines five cooperative board games from the perspective of how randomization is used and how it affects replayability and player strategy, with the intent to identify and categorize the elements that use randomization to create replayability and tension. Each element that is directly affected by or causes randomization is identified, explained (both what it does and what it affects), and categorized by where in the game the randomization originates, in an effort to give game designers a better understanding of randomization, whether and how they can use it, and which methods of using it may suit their own designs. The thesis discusses the impact of using certain randomization elements and draws some conclusions based on how they relate to the replayability and tension of the games that use them.
17

IDENTIFYING CIRCULATING MEDIATORS OF CEREBROVASCULAR DISEASE

Chong, Michael January 2021 (has links)
Many current drugs for stroke act by targeting circulating molecules, yet these have not been exhaustively evaluated for therapeutic potential. A central challenge is that while many molecules correlate with stroke risk, only a subset cause stroke. To disentangle causality from association, a statistical genetics framework called “Mendelian Randomization” can be used by integrating genetic, biomarker, and phenotypic information. In Study 1, we screened 653 circulating proteins using this technique and found evidence supporting causal roles for seven proteins, two of which (SCARA5 and TNFSF12) were not previously implicated in stroke pathogenesis. We also characterized potential side-effects of targeting these molecules for stroke prevention and did not identify any adverse effects for SCARA5. The remaining two studies focused on investigating the role of an emerging marker of mitochondrial activity, leukocyte mitochondrial DNA copy number (mtDNA-CN). Mitochondria have long been known to play a protective role in stroke recovery; however, a mitochondrial basis for stroke protection has not been extensively studied in humans. In Study 2, we first sought to better understand the genetic basis of mtDNA-CN in a series of genetic association studies involving 395,781 UK residents. We identified 71 loci, representing a 40% increase in the number of known loci. In Study 3, epidemiological analyses of 3,498 acute stroke patients demonstrated that low mtDNA-CN was associated with higher risk of subsequent mortality and worse functional outcome 1 month after stroke. Furthermore, Mendelian Randomization analyses corroborated a causative relationship for the first time, implying that interventions that increase mtDNA-CN levels in stroke patients may represent a novel strategy for mitigating post-stroke complications. Ultimately, this work uncovered several novel therapeutic leads for preventing stroke onset and ameliorating its progression.
Future investigations are necessary to better understand the underlying biological mechanisms connecting these molecules to stroke and to further interrogate their validity as potential drug targets. / Thesis / Doctor of Philosophy (PhD) / Current stroke medications work by targeting circulating molecules. Our aim was to discover new drug candidates by combining genetic and circulating biomarker data using a technique called “Mendelian Randomization”. In Study 1, we screened 653 circulating proteins and found evidence supporting causal roles for two novel candidates, SCARA5 and TNFSF12. Prior experimental studies suggest an important role for mitochondria in stroke recovery. Accordingly, in Study 2, we characterized the genetic basis of an emerging biomarker, mitochondrial DNA copy number (mtDNA-CN). Analyses of 395,781 participants revealed 71 associated genetic regions, representing a 40% increase in our knowledge. In Study 3, we measured mtDNA-CN in 3,498 acute patients and observed that lower levels predicted elevated risk of worse post-stroke functional outcomes. Furthermore, Mendelian Randomization analysis suggested a likely causal relationship. Overall, this work uncovered several novel therapeutic leads for preventing stroke onset and progression that warrant further investigation to verify therapeutic utility.
18

rave: A Framework for Code and Memory Randomization of Linux Containers

Blackburn, Christopher Nogueira 23 July 2021 (has links)
Memory corruption continues to plague modern software systems, as it has for decades. With the emergence of code-reuse attacks such as Return-Oriented Programming (ROP), which take advantage of these vulnerabilities, and non-control-data attacks such as Data-Oriented Programming (DOP), defenses are wearing thin. These attacks, and more advanced variations of them, are becoming more difficult to detect and to mitigate. In this arms race, it is critical not only to develop mitigation techniques, but also to find ways to effectively deploy them. In this work, we present rave, a framework which takes common design features of defenses against memory corruption and code reuse and puts them in a real-world setting. Rave consists of two components: librave, the library responsible for static binary analysis and instrumentation, and CRIU-rave, an extended version of the battle-tested process migration tool available for Linux. In our prototype of this framework, we have shown that these tools can be used to rewrite live applications, like NGINX, with enough randomization to disrupt memory corruption attacks. This work is supported in part by ONR under grant N00014-18-1-2022 and NAVSEA/NEEC/NSWC Dahlgren under grant N00174-20-1-0009. / Master of Science / Memory corruption attacks continue to be a concrete threat against modern computer systems. Malicious actors can take advantage of related vulnerabilities to carry out more advanced, hard-to-detect attacks which give them control of the target or leak critical information. Many works have been developed to defend against these sophisticated attacks and their triggers (memory corruption), but many struggle to be adopted into the real world for reasons such as instability or difficulty in deployment. In this work, we introduce rave, a framework which seeks to address issues of stability and deployment by designing a way for defenders to coordinate and apply mitigation techniques in a real-world setting.
19

Considerations for Identifying and Conducting Cluster Randomized Trials

Al-Jaishi, Ahmed January 2021 (has links)
Background: The cluster randomized trial (CRT) design randomly assigns groups of people to different treatment arms. This dissertation aimed to (1) develop machine learning algorithms to identify cluster trials in bibliographic databases, (2) assess reporting of methodological and ethical elements in hemodialysis-related cluster trials, and (3) assess how well two covariate-constrained randomization methods balanced baseline characteristics compared with simple randomization. Methods: In study 1, we developed three machine learning algorithms that classify whether or not a bibliographic citation is a CRT report. We used only the information available in an article citation, including the title, abstract, keywords, and subject headings. In study 2, we conducted a systematic review of CRTs in the hemodialysis setting to review the reporting of key methodological and ethical issues. We reviewed CRTs published in English between 2000 and 2019 and indexed in MEDLINE or EMBASE. In study 3, we assessed how well two covariate-constrained randomization methods balanced baseline characteristics compared with simple randomization. Results: In study 1, we successfully developed high-performance algorithms that identified whether a citation was a CRT. Our algorithms had greater than 97% sensitivity and 77% specificity in identifying CRTs. For study 2, we found suboptimal conduct and reporting of methodological issues of CRTs in the hemodialysis setting and incomplete reporting of key ethical issues. For study 3, where we randomized 72 clusters, constraining the randomization using historical information achieved a better balance on baseline characteristics than simple randomization; however, the magnitude of benefit was modest. Conclusions: This dissertation's results will help researchers quickly identify cluster trials in bibliographic databases (study 1) and inform the design and analyses of future Canadian trials conducted within the hemodialysis setting (studies 2 & 3).
/ Thesis / Doctor of Philosophy (PhD) / The cluster trial design randomly assigns groups of people to different treatment arms rather than individuals. Cluster trials are commonly used in research areas such as education, public health, and health service research. Examples of clusters can include villages/communities, worksites, schools, hospitals, hospital wards, and physicians. This dissertation aimed to (1) develop machine learning algorithms to identify cluster trials in bibliographic databases, (2) assess reporting of methodological and ethical elements in hemodialysis-related cluster trials, and (3) identified best practices for randomly assigning hemodialysis centers in cluster trials. We conducted three studies to address these aims. The results of this dissertation will help researchers quickly identify cluster trials in bibliographic databases (study 1) and inform the design and analyses of future Canadian trials conducted within the hemodialysis setting (study 2 & 3).
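The covariate-constrained randomization compared in study 3 can be sketched as a rejection-style procedure: enumerate many candidate allocations, score each for baseline balance, keep only the best-balanced fraction, and draw the final allocation from that constrained set. The `constrained_randomization` helper below is a minimal single-covariate illustration, not the dissertation's actual procedure:

```python
import random

def constrained_randomization(cluster_covariates, n_candidates=10000,
                              keep_fraction=0.1, seed=0):
    """Covariate-constrained 1:1 cluster randomization sketch.

    Scores each candidate allocation by the absolute difference in
    arm means of one cluster-level covariate, keeps the best-balanced
    `keep_fraction`, and picks the final allocation at random from
    that constrained set.
    """
    rng = random.Random(seed)
    n = len(cluster_covariates)
    n_arm1 = n // 2
    candidates = []
    for _ in range(n_candidates):
        arm = [1] * n_arm1 + [0] * (n - n_arm1)
        rng.shuffle(arm)
        mean1 = sum(c for c, a in zip(cluster_covariates, arm) if a) / n_arm1
        mean0 = sum(c for c, a in zip(cluster_covariates, arm) if not a) / (n - n_arm1)
        candidates.append((abs(mean1 - mean0), arm))
    candidates.sort(key=lambda t: t[0])
    kept = candidates[:max(1, int(keep_fraction * n_candidates))]
    return rng.choice(kept)[1]
```

Choosing randomly among the well-balanced allocations, rather than taking the single best one, preserves enough randomness for valid randomization-based inference.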
20

Paving the Randomized Gauss-Seidel

Wu, Wei 01 January 2017 (has links)
The Randomized Gauss-Seidel Method (RGS) is an iterative algorithm that solves overdetermined systems of linear equations Ax = b. This paper studies an extension of the RGS method, the Randomized Block Gauss-Seidel Method. At each step, the algorithm greedily minimizes the objective function L(x) = ‖Ax − b‖² with respect to a subset of coordinates. This paper describes a Randomized Block Gauss-Seidel Method (RBGS) which uses a randomized control method to choose a subset at each step. This algorithm is the first block RGS method with an expected linear convergence rate that can be described by the properties of the matrix A and its column submatrices. The analysis demonstrates that RBGS improves most over RGS when given an appropriate column paving of the matrix, a partition of the columns into well-conditioned blocks. The main result yields a RBGS method that is more efficient than the simple RGS method.
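The single-coordinate RGS baseline that the block method extends can be sketched as follows: sample a column with probability proportional to its squared norm, then minimize L(x) = ‖Ax − b‖² exactly over that one coordinate. This is a generic sketch of the standard scheme (block versions update a whole column submatrix per step instead), not the paper's code:

```python
import numpy as np

def randomized_gauss_seidel(A, b, iters=2000, seed=0):
    """Randomized Gauss-Seidel (coordinate descent) for min ||Ax - b||^2.

    Each step samples column j with probability ||A_j||^2 / ||A||_F^2
    and takes the exact minimizing update of the j-th coordinate,
    maintaining the residual r = b - Ax incrementally.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = b.astype(float).copy()            # residual b - Ax (x starts at 0)
    col_norms = np.sum(A**2, axis=0)
    probs = col_norms / col_norms.sum()
    for _ in range(iters):
        j = rng.choice(n, p=probs)
        delta = A[:, j] @ r / col_norms[j]  # exact minimization over x_j
        x[j] += delta
        r -= delta * A[:, j]
    return x
```

For a consistent overdetermined system the iterates converge linearly in expectation to the least-squares solution, at a rate governed by the smallest singular value of A relative to its Frobenius norm; the block variant improves this rate when the column blocks are well conditioned.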
