
ULTRASTRUCTURAL NEURONAL MODELING OF CALCIUM DYNAMICS UNDER TRANSCRANIAL MAGNETIC STIMULATION

Rosado, James (ORCID: 0000-0003-1542-3711), January 2022
A paramount question in the study of calcium (Ca2+) signaling is how this ion regulates a wide spectrum of cellular processes, including fertilization, proliferation, learning, memory, and cell death. All of these processes are the result of synaptic strengthening and weakening. Part of the answer lies in the spatiotemporal interactions of Ca2+ at the extracellular and intracellular levels of a neuron. Within these levels there is a complex concert of Ca2+ ion exchange and transport mechanisms that are activated (or inactivated) by external stimuli, and the role of these interactions at the ultrastructural scale remains to be studied. One mode of external stimulation is Transcranial Magnetic Stimulation (TMS) and repetitive TMS (rTMS). TMS is a noninvasive brain stimulation method that modulates human brain activity by generating a strong magnetic field near the cranium. The magnetic field induces an electric field which depolarizes neurons; therefore, TMS is used in clinical applications to treat neuropsychiatric and neurological disorders. However, the effect of TMS on intracellular Ca2+ interactions is not well known; therefore, I endeavor to determine the types of calcium interactions that occur when a neuron experiences TMS. I also determine how intracellular calcium mechanisms are affected by TMS stimuli. In particular, the cellular regulators of calcium include the internal Ca2+ store ("calcium bank") of a neuron, called the endoplasmic reticulum (ER) with spine apparatus (SA), the voltage-dependent calcium channels (VDCCs), and calcium influx at synaptic spines. Ultimately, the ER is responsible for synaptic plasticity, and from here I determined under what conditions TMS causes intracellular calcium to induce synaptic plasticity. In the first part of this dissertation I describe the neurobiology, model equations, and methods that are employed in understanding the role of intracellular calcium. Simulating calcium dynamics at the ultrastructural level is computationally expensive when including the effects of TMS in concert with intracellular calcium transport mechanisms. Therefore, I also identify the numerical methodologies that best capture the physiology of the intracellular dynamics, as well as the parameters, such as error tolerance and time-step size, that yield sufficiently accurate results. I also describe the framework used in this study (i.e., UG4) and the pipeline for performing my studies, which includes the process from microscopy to computational domains, generating and preserving mesh features, the choice of numerical methods, and the parallelization of the simulations. In the second part, I dive into the electrodynamic mechanisms that cause voltage propagation through a neuron. This is of particular importance because many membrane ion transport mechanisms depend on plasma membrane voltage. The simulations, coded and executed in MATLAB, are used to drive the calcium dynamics discussed in the third part of the dissertation. I also take the opportunity to describe a case study involving virtual reality with the Hodgkin-Huxley electrical model for voltage propagation. Additionally, I incorporate synaptic communication, driven by TMS protocols or simulated by voltage clamps, both of which provide a mechanism by which intracellular calcium transients occur.
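Because the electrical simulations mentioned above follow Hodgkin-Huxley-type membrane dynamics, a minimal single-compartment sketch may help fix ideas. This is a generic textbook forward-Euler integration in Python with the classic squid-axon constants and a placeholder stimulus current; it is not the dissertation's MATLAB code, and the stimulus is not a TMS-derived waveform.

```python
import numpy as np

# Minimal single-compartment Hodgkin-Huxley integration (forward Euler).
# Classic squid-axon constants; I_stim is a placeholder square pulse.
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3      # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4              # mV

def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

dt, T = 0.01, 50.0                               # ms
V, n, m, h = -65.0, 0.317, 0.053, 0.596          # resting state
trace = []
for step in range(int(T / dt)):
    I_stim = 10.0 if 5.0 <= step * dt <= 30.0 else 0.0   # uA/cm^2
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    V += dt * (I_stim - I_Na - I_K - I_L) / C_m
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    trace.append(V)
```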
In the third chapter I discuss the calcium dynamics mechanisms inside neurons and the methodology I take to set up and perform simulations. This includes the steps taken to process microscopy images into computational domains, implementing the model equations, and utilizing appropriate numerical schemes. I also discuss several preliminary examples as proof of concept for my simulation pipeline, and I give results involving the regulation of calcium with respect to intracellular mechanisms. The fourth part of this dissertation describes the steps for running TMS simulations using voltage data from electrical simulations to drive calcium signaling events. In particular, I discuss the tool NeMo-TMS, which uses voltage and calcium simulations together to draw conclusions with respect to intracellular calcium propagation. I describe the multi-scale paradigm, model equations, and computational domains that are used, and I provide several examples of results from this modeling pipeline. Of particular importance, I discuss the coupling of data from electrical and biochemical simulations; i.e., I use TMS-induced voltage data to drive voltage-dependent calcium release and examine the effects of TMS-induced back-propagating action potentials.
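To picture the voltage-to-calcium coupling paradigm in miniature, the toy sketch below drives a calcium transient from a precomputed voltage trace through a sigmoidal, VDCC-like activation. All constants (v_half, gain, decay time) and the synthetic voltage trace are invented for illustration; they are not taken from the dissertation or from NeMo-TMS.

```python
import numpy as np

# Toy coupling of a precomputed voltage trace to a calcium transient via a
# sigmoidal, VDCC-like activation. All values are illustrative placeholders.
def calcium_from_voltage(v_trace, dt=0.01, v_half=-20.0, k=5.0,
                         influx_gain=0.05, tau_decay=50.0, ca_rest=5e-5):
    ca, out = ca_rest, np.empty(len(v_trace))                  # mM
    for i, v in enumerate(v_trace):
        open_frac = 1.0 / (1.0 + np.exp(-(v - v_half) / k))    # channel gating
        ca += dt * (influx_gain * open_frac - (ca - ca_rest) / tau_decay)
        out[i] = ca
    return out

t = np.arange(0.0, 100.0, 0.01)                                # ms
v_demo = -65.0 + 100.0 * np.exp(-((t - 20.0) % 40.0) ** 2 / 2.0)  # spike-like bumps
ca_trace = calcium_from_voltage(v_demo)
```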

Finding Genotype-Phenotype Correlations in Norway Spruce - A Genome-Wide Association Study using Machine Learning

Sandberg, Matilda January 2023
The Norway spruce is of great importance from both an ecological and an economic standpoint. Information about which genes cause certain phenotypic traits in the species is therefore highly valuable. The purpose of this project was to apply machine learning to find such genotype-phenotype correlations. The purpose was also to compare the results from different machine learning algorithms to a more traditional linear mixed model GWAS (where correlation to the phenotype is estimated for each SNP one by one) to determine which is the better method for GWAS. The machine learning algorithms tested were decision tree, support vector machine, and support vector regression. The phenotypes analyzed were wood density and initiation frequency of zygotic embryogenesis (ZE). The latter is related to a new method for cloning. The genetic data consisted of single-nucleotide polymorphisms (SNPs). Due to the large genome size of Norway spruce and limitations in the packages used in R, two different approaches were taken to reduce the problem size. The first approach used Kendall's rank correlation coefficient to remove redundant SNPs, and the second used an iterative approach to the machine learning model. The iterative approach proved to be the best, and support vector machine/regression was found to be better than decision tree for both phenotypes. Support vector regression from the iterative approach resulted in a squared correlation coefficient of 0.83 for density and 0.94 for ZE initiation frequency. Note that these very high values should be interpreted with caution, as it is possible that some of the significant correlations are due only to random chance. Even a small chance of random correlations will result in findings when the number of SNPs is this large (1,908,552 SNPs). The significant SNPs identified by the machine learning models were compared to SNPs identified by the linear mixed model GWAS. This indeed showed some overlap of significant SNPs, which increases the credibility of my results. However, further investigation of the identified significant SNPs is needed to determine their functional mode of action. My conclusion is that using machine learning to predict phenotypic traits from SNP data can be a good choice. However, the model might not use all correlated SNPs, just enough to get a good prediction. Therefore, for the purpose of finding significant SNPs, the linear mixed model approach might be better. In other words, the method used should be determined by the purpose of the study.
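As a rough illustration of the support-vector-regression GWAS idea described above, the sketch below fits a linear-kernel SVR to a toy genotype matrix and ranks SNPs by the magnitude of their weights. The random data, dimensions, and hyperparameters are placeholders; the actual study worked with roughly 1.9 million SNPs and an iterative scheme rather than a single fit.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# Illustrative SVR scan over a SNP matrix (samples x SNPs, coded 0/1/2).
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 5000)).astype(float)   # toy genotypes
y = X[:, 10] * 0.8 + rng.normal(0, 1, 200)               # toy phenotype

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = SVR(kernel="linear", C=1.0).fit(X_tr, y_tr)

r2 = np.corrcoef(model.predict(X_te), y_te)[0, 1] ** 2    # squared correlation
weights = np.abs(model.coef_).ravel()                     # per-SNP importance
top_snps = np.argsort(weights)[::-1][:20]
print(f"squared correlation: {r2:.2f}; top SNP indices: {top_snps[:5]}")
```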

Combining Cell Painting, Gene Expression and Structure-Activity Data for Mechanism of Action Prediction

Everett Palm, Erik January 2023
The rapid progress in high-throughput omics methods and high-resolution morphological profiling, coupled with significant advances in machine learning (ML) and deep learning (DL), has opened new avenues for tackling the notoriously difficult problem of predicting the Mechanism of Action (MoA) of a drug of clinical interest. Understanding a drug's MoA can enrich our knowledge of its biological activity, shed light on potential side effects, and serve as a predictor of clinical success. This project aimed to examine whether incorporating gene expression data from the LINCS L1000 public repository into a joint model previously developed by Tian et al. (2022), which combined chemical structure and morphological profiles derived from Cell Painting, would have a synergistic effect on the model's ability to classify chemical compounds into ten well-represented MoA classes. To do this, I explored the gene expression dataset to assess its quality, volume, and limitations. I applied a variety of ML and DL methods to identify the optimal single model for MoA classification using gene expression data, with a particular emphasis on transforming tabular data into image data to harness the power of convolutional neural networks. To capitalize on the complementary information stored in different modalities, I tested end-to-end integration and soft voting on sets of joint models across five stratified data splits. The gene expression dataset was relatively low in quality, with many uncontrollable factors that complicated MoA prediction. The highest-performing gene expression model was a one-dimensional convolutional neural network, with an average macro F1 score of 0.40877 and a standard deviation of 0.034. Approaches converting tabular data into image data did not significantly outperform other methods. Combining optimized single models resulted in a performance decline compared to the best single model in the combination. To take full advantage of algorithmic developments in drug development and high-throughput multi-omics data, my project underscores the need to standardize data generation and optimize data fusion methods.
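The soft-voting combination mentioned above can be sketched as a weighted average of per-model class probabilities. The two probability matrices below are random stand-ins for the outputs of a gene-expression model and a structure-plus-Cell-Painting model; the function is a generic illustration, not the project's implementation.

```python
import numpy as np

# Minimal soft-voting combination of per-modality class probabilities.
def soft_vote(prob_list, weights=None):
    """Average class-probability matrices of shape (n_samples, n_classes)."""
    probs = np.stack(prob_list)                   # (n_models, n_samples, n_classes)
    w = np.ones(len(prob_list)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    combined = np.tensordot(w, probs, axes=1)     # weighted average over models
    return combined.argmax(axis=1), combined

rng = np.random.default_rng(1)
p_expr = rng.dirichlet(np.ones(10), size=32)      # toy gene-expression model output
p_joint = rng.dirichlet(np.ones(10), size=32)     # toy structure+morphology output
labels, probs = soft_vote([p_expr, p_joint])
```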

Modelling Large Protein Complexes

Chim, Ho Yeung January 2023
AlphaFold [Jumper et al., 2021, Evans et al., 2022] is a deep learning-based method that can accurately predict the structure of single- and multiple-chain proteins. However, its accuracy decreases with an increasing number of chains, and GPU memory limits the size of protein complexes that can be predicted. Recently, Elofsson's group introduced a Monte Carlo tree search method, MoLPC, that can predict the structure of large complexes from predictions of sub-components [Bryant et al., 2022b]. However, MoLPC cannot adjust for errors in the sub-component predictions and requires knowledge of the correct protein stoichiometry. Large protein complexes are responsible for many essential cellular processes, such as mRNA splicing [Will and Lührmann, 2011], protein degradation [Tanaka, 2009], and protein folding [Ditzel et al., 1998]. However, the lack of structural knowledge of many large protein complexes remains a challenge. Only a fraction of the eukaryotic core complexes in CORUM [Giurgiu et al., 2019] have homologous structures covering all chains in the PDB, indicating a significant gap in our structural understanding of protein complexes. AlphaFold-Multimer [Evans et al., 2022] is the only deep learning method that can predict the structure of more than two protein chains; it is trained on proteins of up to 20 chains and can predict complexes of up to a few thousand residues, beyond which memory limitations come into play. Another approach, taken by MoLPC, is to predict the structures of sub-components of large complexes and assemble them. It has been shown that it is possible to assemble large complexes from dimers manually [Burke et al., 2021] or with Monte Carlo tree search [Bryant et al., 2022b]. One limitation of the previous MoLPC approach is its inability to account for errors in sub-component prediction. Small errors in each sub-component can propagate to a significant error when building the entire complex, leading to MoLPC's failure. To overcome this challenge, the Monte Carlo tree search algorithm in MoLPC2 is enhanced to assemble protein complexes while simultaneously predicting their stoichiometry. Using MoLPC2, we accurately predicted the structures of 50 out of 175 non-redundant protein complexes (TM-score > 0.8), while MoLPC only predicted 30. It should be noted that improvements introduced in AlphaFold version 2.3 enable the prediction of larger complexes, and if the stoichiometry is known, it can accurately predict the structures of 74 complexes. Our findings suggest that assembling symmetrical complexes from sub-components results in higher accuracy, while assembling asymmetrical complexes remains challenging.
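For readers unfamiliar with Monte Carlo tree search, the skeleton below shows the generic select-expand-evaluate-backpropagate loop over assembly steps, with a toy stoichiometry-building example. The state representation, successor function, and reward are invented placeholders and do not reflect MoLPC2's actual scoring or assembly code.

```python
import math, random

# Generic Monte Carlo tree search skeleton over assembly steps.
class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def ucb(node, c=1.4):
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def mcts(root_state, successors, reward, n_iter=300):
    root = Node(root_state)
    for _ in range(n_iter):
        node = root
        while node.children:                      # selection
            node = max(node.children, key=ucb)
        for s in successors(node.state):          # expansion
            node.children.append(Node(s, parent=node))
        leaf = random.choice(node.children) if node.children else node
        value = reward(leaf.state)                # evaluation (no full rollout)
        while leaf:                               # backpropagation
            leaf.visits += 1
            leaf.value += value
            leaf = leaf.parent
    # most-visited child of the root = preferred first assembly step
    return max(root.children, key=lambda n: n.visits).state

# Toy usage: grow a 4-chain stoichiometry string, rewarding matches to "AABB".
chains = "AB"
successors = lambda s: [s + c for c in chains] if len(s) < 4 else []
reward = lambda s: sum(a == b for a, b in zip(s, "AABB")) / 4 if len(s) == 4 else 0.0
print(mcts("", successors, reward))
```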

Developing a highly accurate, locally interpretable neural network for medical image analysis

Ventura Caballero, Rony David January 2023
Background: Machine learning techniques, such as convolutional networks, have shown promise in medical image analysis, including the detection of pediatric pneumonia. However, the interpretability of these models is often lacking, compromising their trustworthiness and acceptance in medical applications. The interpretability of machine learning models in medical applications is crucial for trust and bias identification. Aim: The aim is to create a locally interpretable neural network that performs comparably to black-box models while being inherently interpretable, enhancing trust in medical machine learning models. Method: An MLP ReLU network is trained on the Guangzhou Women and Children's Medical Center pediatric chest x-ray image dataset, and the Aletheia unwrapper is utilized for interpretability. A 5-fold cross-validation assesses the network's performance, measuring accuracy and F1 score. The average accuracy and F1 score are 0.90 and 0.91, respectively. To assess interpretability, the results are compared against a CNN aided by LIME and SHAP to generate explanations. Results: Despite lacking convolutional layers, the MLP network satisfactorily categorizes pneumonia images, and its explanations align with relevant areas of interest from previous studies. Moreover, in comparison with a state-of-the-art network aided by LIME and SHAP explanations, the local explanations prove to be consistent within areas of the lungs, while the post-hoc alternatives often highlighted areas not relevant to the specific task. Conclusion: The developed locally interpretable neural network demonstrates promising performance and interpretability. However, additional research and implementation are required for it to outperform the so-called black-box models. In a medical setting, a more accurate model could be crucial regardless of its interpretability, as it could potentially save more lives, which is the ultimate goal of healthcare.
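A minimal version of the evaluation protocol described above (an MLP with ReLU activations assessed by 5-fold cross-validation on accuracy and F1) might look like the following. The random arrays stand in for the flattened chest x-ray images, and the layer sizes and other hyperparameters are assumptions, not the thesis settings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, f1_score

# Illustrative 5-fold evaluation of a ReLU MLP on flattened image vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32 * 32))          # toy flattened 32x32 images
y = rng.integers(0, 2, size=500)             # toy labels: 0 normal, 1 pneumonia

accs, f1s = [], []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                           random_state=0).split(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(128, 32), activation="relu",
                        max_iter=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    accs.append(accuracy_score(y[test_idx], pred))
    f1s.append(f1_score(y[test_idx], pred))
print(f"mean accuracy {np.mean(accs):.2f}, mean F1 {np.mean(f1s):.2f}")
```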

Error Correcting Codes and the Human Genome.

Lyle, Suzanne McLean 08 May 2010
In this work, we study error-correcting codes and generalize the concepts with a view toward a novel application in the study of DNA sequences. Through application and research, the author investigates the possibility that an error-correcting linear code could be included in the human genome. The author finds that while it is accepted as a reasonable hypothesis that some kind of error-correcting code is used in DNA, no one has actually been able to identify one. The author uses the application to illustrate how the subject of coding theory can provide a teaching enrichment activity for undergraduate mathematics.
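To make the idea of an error-correcting linear code concrete, the sketch below encodes four data bits with the standard Hamming(7,4) code and corrects a single flipped bit, the kind of redundancy the thesis asks whether the genome might contain. The matrices are the textbook generator and parity-check matrices; nothing here is derived from actual DNA sequence.

```python
import numpy as np

# Hamming(7,4): systematic generator G = [I | P] and parity check H = [P^T | I].
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(bits4):
    return np.mod(np.array(bits4) @ G, 2)

def correct(word7):
    syndrome = np.mod(H @ word7, 2)
    word = word7.copy()
    if syndrome.any():
        # the syndrome equals the column of H at the flipped position
        pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
        word[pos] ^= 1
    return word

codeword = encode([1, 0, 1, 1])
received = codeword.copy()
received[2] ^= 1                      # simulate a single substitution
assert np.array_equal(correct(received), codeword)
```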

Quantitative Studies of Amyloidogenic Protein Residue Interaction Networks and Abnormal Ammonia Metabolism in Neurotoxicity and Disease

Griffin, Jeddidiah 01 August 2018
Investigating similarities among neurological diseases can provide insight into disease processes. Two prominent commonalities of neurological diseases are the formation of amyloid deposits and altered ammonia and glutamate metabolism. Computational techniques were used to explore these processes in several neurological diseases. Residue interaction networks (RINs) abstract protein structure into a series of nodes (representing residues) and edges (representing connections between residues likely to interact). Analyzing the RINs of monomeric forms of amyloidogenic proteins for common network features revealed similarities not previously known. First, amyloidogenic variants of lysozyme were used to demonstrate the usefulness of RINs to the study of amyloidogenic proteins. Next, I compared RINs of amyloidogenic proteins with randomized control networks and a group of real protein controls and found similarities in network structures unique to amyloidogenic proteins. The use of 3D structure data and network structure data of amyloid-beta (1-42) (Abeta42) in a hydrophobic, membrane-mimicking solvent led to the identification of an interaction between Val24 and Ile31 as potentially involved in preventing Abeta aggregation. Since Abeta causes oxidative damage, since the ammonia metabolism enzyme glutamine synthetase is particularly susceptible to oxidative damage, and since glutamate plays a central role in neuronal function, I expanded my research to include the study of ammonia and glutamate metabolism in neurological diseases. A computational model of the effects of the interactions between the amount of dietary protein and the activities of ammonia metabolism enzymes on blood and brain ammonia levels supports potentially important roles for these enzymes in the protection of neural function. Next, I reviewed the role of amino acid catabolism in Alzheimer’s disease (AD). Common tissue pathology and the ability of memantine, an NMDA receptor antagonist, to relieve symptoms in patients and animal models of AD, major depressive disorder (MDD), and type 2 diabetes (T2D) further support a role for ammonia and glutamate metabolism in disease. Lastly, I found that single nucleotide polymorphisms (SNPs) in select ammonia metabolism genes are associated with these three diseases. The results presented in this dissertation demonstrate that investigating neurological diseases using computational approaches can provide great insight into the common underlying pathologies.
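A residue interaction network of the kind analyzed above can be sketched by connecting residues whose C-alpha atoms fall within a distance cutoff. The coordinates below are random placeholders, and the 8 Angstrom cutoff is a common but arbitrary choice; real RINs would be built from PDB structures and a specific contact definition.

```python
import numpy as np
import networkx as nx

# Toy residue interaction network: nodes are residues, edges are contacts
# defined by a simple C-alpha distance cutoff.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 40, size=(42, 3))       # toy: 42 residues, coords in Angstrom
CUTOFF = 8.0                                     # Angstrom

rin = nx.Graph()
rin.add_nodes_from(range(len(coords)))
for i in range(len(coords)):
    for j in range(i + 1, len(coords)):
        if np.linalg.norm(coords[i] - coords[j]) <= CUTOFF:
            rin.add_edge(i, j)

# Network features of the kind compared across amyloidogenic proteins
print("mean degree:", 2 * rin.number_of_edges() / rin.number_of_nodes())
print("clustering coefficient:", nx.average_clustering(rin))
```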

Phylogenomics of Ascetosporea

Bhawe, Harshal Kunal January 2022
Ascetosporea is a class of poorly studied unicellular eukaryotes that function as parasites of marine invertebrates. These parasites cause mass mortality events in aquaculture species such as oysters and mussels. The economic importance of these aquaculture species should lead to more attention on the genomics of Ascetosporea and their place on the evolutionary tree of life. With global warming and rising sea levels and temperatures, many emerging pathogens have been observed, and until these are sequenced and analysed, it is difficult to draw conclusions about their relationships and evolution. As few genomes and transcriptomes are available for Ascetosporea, their position in the larger eukaryotic tree of life remains hypothetical. To attempt to remedy this lack of information, the Burki lab has recently collected samples and generated sequencing data (genomes and transcriptomes) for these organisms. A curated dataset of various eukaryotic species was previously created, and newly sampled and sequenced Ascetosporean genomes (Paramarteilia sp., Marteilia pararefringens, Paramikrocytos canceri, etc.) from multiple sampling locations, including Ireland, Norway, Sweden, and the UK, were included. These could increase the genomic and transcriptomic data available for Ascetosporea and help resolve the relationships within the group. One reason this group has not yet been placed on the tree of life is that the samples come from host tissue, which makes these parasites difficult to sequence; Ascetosporeans have also been observed to be very fast-evolving. After building phylogenetic relationships with single gene trees to allow for the identification of possible contaminants and paralogs, substantial contamination was found in the Ascetosporean data, owing to the sampling being from host tissue material (hosts are open to the environment). After cleaning and filtering the possibly contaminated genes, the trees were remade, and a possible link between a fungal group called Microsporidia and Ascetosporea was observed in a few genes. This was hypothesized to be lateral gene transfer between the two groups resulting from their similar lifestyles and infection of invertebrates. Complications such as contamination and short BLAST hits arose during analysis; these could be caused by fragmentation in the genome assemblies. This fragmentation could have negative effects on genome annotation predictions and, consequently, on phylogenetic and phylogenomic analysis. Due to this and the challenging nature of collecting samples, the read coverage for the genomes is low, but the data can still be used to perform phylogenetic and phylogenomic studies with currently available methods. Another expected result was that the sequenced data contained contaminants, and a thorough and comprehensive search would have to be conducted at a dataset-wide level to remove them.
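The contamination screen described above (flagging target sequences that nest among host-lineage leaves in single gene trees) can be sketched with a small, invented Newick tree. The taxon names, prefixes, and the ete3-based check below are illustrative assumptions, not the study's actual filtering pipeline.

```python
from ete3 import Tree

# Toy check: flag target (Ascetosporea) leaves whose sister leaves all belong
# to a "suspect" group such as the metazoan host lineage. Names are invented.
newick = ("((Asceto_sp1:0.1,(Host_oyster:0.05,Host_mussel:0.06):0.04):0.2,"
          "(Asceto_sp2:0.1,Fungi_micro:0.12):0.15);")
tree = Tree(newick)

def flag_contaminants(tree, target_prefix="Asceto", suspect_prefix="Host"):
    flagged = []
    for leaf in tree.get_leaves():
        if not leaf.name.startswith(target_prefix):
            continue
        sisters = [l.name for l in leaf.up.get_leaves() if l is not leaf]
        if sisters and all(s.startswith(suspect_prefix) for s in sisters):
            flagged.append(leaf.name)
    return flagged

print(flag_contaminants(tree))   # expected: ['Asceto_sp1']
```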

LOTUS: A Web-Based Computational Tool for the Preliminary Investigation of a Novel MST Method Utilizing a Library of 16S rRNA Bacteroides OTUs

Dewitte, Ginger 01 May 2021
Microbial Source Tracking (MST) is a field of study that attempts to identify the source of fecal contamination in waterways in order to assist with the development of remediation strategies. Biologists at the Cal Poly Center for Applications in Biotechnology (CAB) are developing a new MST method using microbes from the genus Bacteroides. Bacteroides species are host-specific microorganisms that can theoretically be used to trace back to a single host species. After fecal samples are collected, biologists use Next-Generation Sequencing (NGS) techniques to obtain only the genetic sequences of microorganisms belonging to the phylum Bacteroidetes. Investigators hypothesize that similar sequences belong to the same phylogenetic group (i.e., the same genus) and can therefore be computationally clustered. Each cluster of related sequences, typically 97% similar, is called an Operational Taxonomic Unit (OTU). Theoretically, an OTU acts as a molecular signature that can be traced back to a specific host genus. This thesis presents LOTUS, the Library of OTUs, a web-based computational tool for the preliminary investigation of the use of the Bacteroides OTU library as an MST method. This work discusses the four contributions of LOTUS: a database design which accurately models OTUs and the underlying relationships necessary for source tracking, a pipeline to create OTUs from raw sequencing reads, a method of assigning taxonomy to OTUs, and a web-based user interface. In preliminary testing on a reference library of twelve samples, LOTUS produced 1,431 OTUs, of which 891 were single-source (OTUs derived from sequences from a single host species). Using these OTUs, LOTUS was able to accurately taxonomically match four of five unknown test samples, showing promise for using OTUs as an MST method.
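A toy version of OTU picking at a 97% identity threshold is sketched below as greedy, centroid-based clustering of equal-length reads. The identity function and the example reads are simplifications; the LOTUS pipeline itself would rely on established tools and handle alignment, chimeras, and variable read lengths.

```python
# Toy greedy clustering of equal-length reads into OTUs at 97% identity.
def identity(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cluster_otus(reads, threshold=0.97):
    centroids, otus = [], []
    for read in reads:
        for idx, centroid in enumerate(centroids):
            if identity(read, centroid) >= threshold:
                otus[idx].append(read)            # join an existing OTU
                break
        else:
            centroids.append(read)                # start a new OTU
            otus.append([read])
    return otus

reads = ["A" * 50 + "C" * 50,
         "A" * 50 + "C" * 48 + "T" * 2,   # 98% identical to the first read
         "G" * 100]                        # distinct
print(len(cluster_otus(reads)), "OTUs")    # expected: 2
```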

Multiscale Modeling of Human Addiction: a Computational Hypothesis for Allostasis and Healing

Levy, Yariv Z. 01 February 2013
This dissertation presents a computational multiscale framework for predicting behavioral tendencies related to human addiction. The research encompasses three main contributions. The first contribution presents a formal, heuristic, and exploratory framework to conduct interdisciplinary investigations about the neuropsychological, cognitive, behavioral, and recovery constituents of addiction. The second contribution proposes a computational framework to account for real-life recoveries that are not dependent on pharmaceutical, clinical, and counseling support. This exploration relies upon a combination of current biological beliefs together with unorthodox rehabilitation practices, such as meditation, and proposes a conjecture regarding possible cognitive mechanisms involved in the recovery process. Further elaboration of this investigation leads on to the third contribution, which introduces a computational hypothesis for exploring the allostatic theory of addiction. A person engaging in drug consumption is likely to encounter mood deterioration and eventually to suffer the loss of a reasonable functional state (e.g., experience depression). The allostatic theory describes how the consumption of abusive substances modifies the brain's reward system by means of two mechanisms which aim to viably maintain the functional state of an addict. The first mechanism is initiated in the reward system itself, whereas the second might originate in the endocrine system or elsewhere. The proposed computational hypothesis indicates that the first mechanism can explain the functional stabilization of the addict, whereas the second mechanism is a candidate for a source of possible recovery. The formal arguments presented in this dissertation are illustrated by simulations which delineate archetypal patterns of human behavior toward drug consumption: escalation of use and influence of conventional and alternative rehabilitation treatments. Results obtained from this computational framework encourage an integrative approach to drug rehabilitation therapies which combine conventional therapies with alternative practices to achieve higher rates of consumption cessation and lower rates of relapse.
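One way to picture the two-mechanism allostatic idea described above is a toy pair of set-point processes: a fast, drug-driven shift originating in the reward system and a slower compensatory process that can restore function. Every equation, parameter, and the drug schedule below are invented for illustration and are not the dissertation's model.

```python
import numpy as np

# Toy allostasis sketch: b1 is a fast, drug-driven set-point shift; b2 is a
# slower compensatory process; F relaxes toward the shifted set point.
dt, T = 0.1, 400.0
steps = int(T / dt)
drug = np.zeros(steps)
drug[int(50 / dt):int(250 / dt)] = 1.0            # period of regular consumption

b1, b2, F = 0.0, 0.0, 1.0                         # set-point shifts, functional state
trace = np.empty(steps)
for t in range(steps):
    b1 += dt * (0.02 * drug[t] - 0.005 * b1)      # fast, drug-driven shift
    b2 += dt * 0.002 * (b1 - b2)                  # slow restorative process
    F += dt * ((1.0 - b1 + b2) - F) / 5.0         # relax toward shifted set point
    trace[t] = F
```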
