11. Risk Measures Extracted from Option Market Data Using Massively Parallel Computing. Zhao, Min (27 April 2011)
The famous Black-Scholes formula provided the first mathematically sound mechanism to price financial options. It is based on the assumption that daily random stock returns are identically normally distributed, and hence that stock prices follow a stochastic process with a constant volatility. Observed prices, at which options trade on the markets, do not fully support this hypothesis. Options corresponding to different strike prices trade as if they were driven by different volatilities. To capture this so-called volatility smile, we need a more sophisticated option-pricing model in which the volatility itself is a random process. The price we pay for such a stochastic volatility model is that it is extremely computationally intensive to simulate, and hence difficult to fit to observed market prices. This difficulty has severely limited the use of stochastic volatility models in practice. In this project we propose to overcome the obstacle of computational complexity by executing the simulations in a massively parallel fashion on the graphics processing unit (GPU) of the computer, utilizing its hundreds of parallel processors. We succeed in generating the trillions of random numbers needed to fit a monthly options contract in 3 hours on a desktop computer with a Tesla GPU. This enables us to accurately price any derivative security based on the same underlying stock. In addition, our method allows extracting quantitative measures of the riskiness of the underlying stock that are implied by the views of the forward-looking traders on the option markets.
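The simulation approach described above can be sketched in miniature. The following is an illustrative Monte Carlo pricer under a Heston-type stochastic volatility model, with NumPy vectorization standing in for the GPU parallelism used in the thesis; the function name, discretization scheme, and all parameter values are assumptions for illustration, not the thesis's actual implementation.

```python
import numpy as np

def heston_mc_call(s0, k, t, r, v0, kappa, theta, xi, rho,
                   n_paths=100_000, n_steps=252, seed=0):
    """Euler-scheme Monte Carlo price of a European call under a
    Heston-type stochastic volatility model. Every path is independent,
    which is what makes the simulation massively parallel."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    s = np.full(n_paths, float(s0))   # asset price per path
    v = np.full(n_paths, float(v0))   # instantaneous variance per path
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        # correlate the variance shocks with the price shocks
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)       # truncate to keep variance non-negative
        s *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
    # discounted average payoff
    return np.exp(-r * t) * np.mean(np.maximum(s - k, 0.0))
```

On a GPU, the inner vectorized updates map naturally onto one thread per path, which is why random-number generation dominates the cost.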
12. Deafness in the genomics era. Shearer, Aiden Eliot (01 May 2014)
Deafness is the most common sensory deficit in humans, affecting 278 million people worldwide. Non-syndromic hearing loss (NSHL), hearing loss not associated with other symptoms, is the most common type of hearing loss and most NSHL in developed countries is due to a genetic cause. The inner ear is a remarkably complex organ, and as such, there are estimated to be hundreds of genes with mutations that can cause hearing loss. To date, 62 of these genes have been identified. This extreme genetic heterogeneity has made comprehensive genetic testing for deafness all but impossible due to low-throughput genetic testing methods that sequence a single gene at a time.
The Human Genome Project was completed in 2003. Soon after, genomic technologies, including massively parallel sequencing (MPS), were developed. MPS makes it possible to sequence millions or billions of DNA base pairs of the genome simultaneously. The goal of my thesis work was to use these newly developed genomic technologies to create a comprehensive genetic testing platform for deafness and use this platform to answer key scientific questions about genetic deafness. This platform would need to be relatively inexpensive, highly sensitive, and accurate enough for clinical diagnostics.
In order to accomplish this goal, we first determined the best methods to use for this platform by comparing available methods for isolating all exons of all genes implicated in deafness, as well as available massively parallel sequencers. We performed this pilot study on a limited number of patient samples, but were able to determine that solution-phase targeted genomic enrichment (TGE) and Illumina sequencing presented the best combination of sensitivity and cost. We decided to call this platform and diagnostic pipeline OtoSCOPE®. During this study we also identified several weaknesses in the standard method for TGE that we sought to improve.
The next aim was to focus on these weaknesses to develop an improved protocol for TGE that was highly reproducible and efficient. We developed a new protocol and tested the limits of sequencer capacity. These findings allowed us to translate OtoSCOPE® to the clinical setting and use it to perform comprehensive genetic testing on a large number of individuals in research studies.
Finally, we used the OtoSCOPE® platform to answer crucial questions about genetic deafness that had remained unanswered due to the low-throughput genetic testing methods available previously. By screening 1,000 normal hearing individuals from 6 populations we determined the carrier frequency for non-DFNB1 recessive deafness-causing mutations to be 3.3%. Our findings will also help us to interpret variants uncovered during analysis of deafness genes in affected individuals. When we used OtoSCOPE® to screen 100 individuals with apparent genetic deafness, we were able to provide a genetic diagnosis in 45%, a large increase compared to previous gene-by-gene sequencing methods.
Because it provides a pinpointed etiological diagnosis, genetic testing with a comprehensive platform like OtoSCOPE® could provide an attractive alternative to the newborn hearing screen. In addition, this research lays the groundwork for molecular therapies to restore or reverse hearing loss that are tailored to specific genes or genetic mutations. Therefore, a molecular diagnosis with a comprehensive platform like OtoSCOPE® is integral for those affected by hearing loss.
13. Analysis of genetic variations in cancer. Hasmats, Johanna (January 2012)
The aim of this thesis is to apply recently developed technologies for genomic variation analyses, and to ensure the quality of the generated information for use in preclinical cancer research. Faster, cheaper access to a patient's full genomic sequence makes it possible for end users such as clinicians and physicians to gain a more complete understanding of the disease status of a patient and adjust treatment accordingly. Correct biological interpretation is important in this context, and can only be provided through fast and simple access to relevant high-quality data. Therefore, we here propose and validate new bioinformatic strategies for biomarker selection for prediction of response to cancer therapy. We initially explored the use of bioinformatic tools to select interesting targets for toxicity in carboplatin and paclitaxel on a smaller scale. From our findings we then extended the analysis to the entire exome to look for biomarkers as targets for adverse effects from carboplatin and gemcitabine. To investigate any bias introduced by the methods used for targeting the exome, we analyzed the mutation profiles in cancer patients by comparing whole-genome amplified DNA to unamplified DNA. In addition, we applied RNA-seq to the same patients to further validate the variations obtained by sequencing of DNA. The understanding of the human cancer genome is growing rapidly, thanks to methodological development of analysis tools. The next step is to implement these tools as part of a chain from diagnosis of patients to genomic research to personalized treatment.
14. Massively parallel analysis of cells and nucleic acids. Sandberg, Julia (January 2011)
Recent advances in biotechnology have enabled completely new avenues in life science research to be explored. By allowing increased parallelization, an ever-increasing complexity of cell samples or experiments can be investigated in a shorter time and at a lower cost. This facilitates, for example, large-scale efforts to study cell heterogeneity at the single-cell level, analyzing many cells in parallel, including through global genomic analyses. The work presented in this thesis focuses on massively parallel analysis of cells or nucleic acid samples, demonstrating technology developments in the field as well as uses of the technology in the life sciences. In stem cell research, issues such as cell morphology, cell differentiation and effects of reprogramming factors are frequently studied, and to obtain information on cell heterogeneity these experiments are preferably carried out on single cells. In paper I we used a high-density microwell device in silicon and glass for culturing and screening of stem cells. Maintained pluripotency in stem cells from human and mouse was demonstrated in a screening assay by antibody staining, and the chip was furthermore used for studying neural differentiation. The chip format allows for low sample volumes and rapid high-throughput analysis of single cells, and is compatible with Fluorescence Activated Cell Sorting (FACS) for precise cell selection. Massively parallel DNA sequencing is revolutionizing genomics research throughout the life sciences by producing ever-increasing amounts of data from one sequencing run. However, the reagent costs and labor requirements in current massively parallel sequencing protocols are still substantial. In papers II-IV we have focused on flow-sorting techniques for improved sample preparation in bead-based massive sequencing platforms, with the aim of increasing the amount of quality data output, as demonstrated on the Roche/454 platform.
In paper II we demonstrate a rapid alternative to the existing shotgun sample titration protocol and also use flow-sorting to enrich for beads that carry amplified template DNA after emulsion PCR, thus obtaining pure samples with no downstream sacrifice of DNA sequencing quality. This should be seen in comparison to the standard 454 enrichment protocol, which gives rise to varying degrees of sample purity, thus affecting the sequence data output of the sequencing run. Massively parallel sequencing is also useful for deep sequencing of specific PCR-amplified targets in parallel. However, unspecific product formation is a common problem in amplicon sequencing; these shorter products may be difficult to fully remove by standard procedures such as gel purification, and their presence inevitably reduces the number of target sequence reads that can be obtained in each sequencing run. In paper III a gene-specific fluorescent probe was used for target-specific FACS enrichment to specifically enrich for beads with an amplified target gene on the surface. Through this procedure a nearly three-fold increase in the fraction of informative sequences was obtained, with no sequence bias introduced. Barcode labeling of different DNA libraries prior to pooling and emulsion PCR is standard procedure to maximize the number of experiments that can be run in one sequencing lane, while also decreasing the impact of technical noise. However, variation between libraries in quality and GC content affects amplification efficiency, which may result in biased fractions of the different libraries in the sequencing data. In paper IV barcode-specific labeling and flow-sorting for normalization of beads with different barcodes on the surface were used to weight the proportion of data obtained from different samples, while also removing mixed beads and beads with no or poorly amplified product on the surface, which also increases sequence quality.
In paper V, cell heterogeneity within a human being is investigated by low-coverage whole-genome sequencing of single-cell material. By focusing on the most variable portion of the human genome, polyguanine nucleotide repeat regions, variability between different cells is investigated and highly variable polyguanine repeat loci are identified. By selectively amplifying and sequencing polyguanine nucleotide repeats from single cells for which the phylogenetic relationship is known, we demonstrate that massively parallel sequencing can be used to study cell-cell variation in the length of these repeats, based on which a phylogenetic tree can be drawn.
15. Enabling massive genomic and transcriptomic analysis. Stranneheim, Henrik (January 2011)
In recent years there have been tremendous advances in our ability to rapidly and cost-effectively sequence DNA. This has revolutionized the fields of genetics and biology, leading to a deeper understanding of the molecular events in life processes. The rapid advances have enormously expanded sequencing opportunities and applications, but also imposed heavy strains on steps prior to sequencing, as well as the subsequent handling and analysis of the massive amounts of sequence data that are generated, in order to exploit the full capacity of these novel platforms. The work presented in this thesis (based on six appended papers) has contributed to balancing the sequencing process by developing techniques to accelerate the rate-limiting steps prior to sequencing, facilitating sequence data analysis and applying the novel techniques to address biological questions. Papers I and II describe techniques to eliminate expensive and time-consuming preparatory steps by automating library preparation procedures prior to sequencing. The automated procedures were benchmarked against standard manual procedures and were found to substantially increase throughput while maintaining high reproducibility. In Paper III, a novel algorithm for fast classification of sequences in complex datasets is described. The algorithm was first optimized and validated using a synthetic metagenome dataset and then shown to enable faster analysis of an experimental metagenome dataset than conventional long-read aligners, with similar accuracy. Paper IV presents an investigation of the molecular effects on the p53 gene of exposing human skin to sunlight during the course of a summer holiday. There was evidence of previously accumulated persistent p53 mutations in 14% of all epidermal cells. Most of these mutations are likely to be passenger events, as the affected cell compartments showed no apparent growth advantage.
An annual rate of 35,000 novel sun-induced persistent p53 mutations was estimated to occur in the sun-exposed skin of a human individual. Paper V assesses the effect of using RNA obtained from whole cell extracts (total RNA) or cytoplasmic RNA on quantifying transcripts detected in subsequent analysis. Overall, more differentially detected genes were identified when using the cytoplasmic RNA. The major reason for this is related to the reduced complexity of cytoplasmic RNA, but also apparently due (at least partly) to the nuclear retention of transcripts with long, structured 5'- and 3'-untranslated regions or long protein coding sequences. The last paper, VI, describes whole-genome sequencing of a large, consanguineous family with a history of Leber hereditary optic neuropathy (LHON) on the maternal side. The analysis identified new candidate genes, which could be important in the aetiology of LHON. However, these candidates require further validation before any firm conclusions can be drawn regarding their contribution to the manifestation of LHON.
16. Train Re-scheduling: A Massively Parallel Approach Using CUDA. Petersson, Anton (January 2015)
Context. Train re-scheduling during disturbances is a time-consuming task. Modified schedules need to be found, so that trains can meet in suitable locations and delays are minimized. Domino effects are difficult to manage. Commercial optimization software has been shown to find optimal solutions, but modified schedules need to be found quickly. Therefore, greedy depth-first algorithms have been designed to find solutions within a limited time-frame. Modern GPUs have a high computational capacity, and have become easier to use for computations unrelated to computer graphics with the development of technologies such as CUDA and OpenCL. Objectives. We explore the feasibility of running a re-scheduling algorithm developed specifically for this problem on a GPU using the CUDA toolkit. The main objective is to find a way of exploiting the computational capacity of modern GPUs to find better re-scheduling solutions within a limited time-frame. Methods. We develop and adapt a sequential algorithm for use on a GPU and run multiple experiments using 16 disturbance scenarios on the single-tracked iron ore line in northern Sweden. Results. Our implementation succeeds in finding re-scheduling solutions without conflicts for all 16 scenarios. The algorithm visits on average 7 times more nodes per time unit than the sequential CPU algorithm when branching at depth 50, and 4 times more when branching at depth 200. Conclusions. The computational performance of our parallel algorithm is promising, but the approach is not complete. Our experiments only show that multiple solution branches can be explored quickly in parallel, but not how to construct a high-level algorithm that systematically searches for better schedules within a certain time limit. Further research is needed for that. We also find that multiple threads explore redundant solutions in our approach.
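The branch-parallel idea in this abstract can be illustrated with a toy sketch: when a delayed train conflicts with another on a single-track segment, each way of resolving the conflict spawns a branch, and branches are scored concurrently. The schedule representation, cost function, and hold times below are invented for illustration; the thesis's actual algorithm and data model are more elaborate and run on a GPU via CUDA rather than CPU threads.

```python
from concurrent.futures import ThreadPoolExecutor

def total_delay(schedule):
    """Toy cost: summed lateness (minutes) over all trains in a candidate schedule.
    Each entry is a (planned, actual) arrival time in minutes."""
    return sum(max(0, actual - planned) for planned, actual in schedule)

def branch(schedule, train_idx, hold_minutes):
    """Resolve one conflict by holding one train at a meeting point;
    returns a new candidate schedule."""
    new = list(schedule)
    planned, actual = new[train_idx]
    new[train_idx] = (planned, actual + hold_minutes)
    return new

# One disturbed timetable: train 1 is 8 minutes late and conflicts with train 2.
base = [(0, 0), (10, 18), (20, 20)]

# Branch on which train waits, then score all branches in parallel.
candidates = [branch(base, 1, 5), branch(base, 2, 10)]
with ThreadPoolExecutor() as pool:
    costs = list(pool.map(total_delay, candidates))
best = candidates[costs.index(min(costs))]
```

A GPU version would evaluate thousands of such branches at once, one branch per thread, which is the source of the node-visit speedups the abstract reports.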
17. Exploring the genetics of a complex disease - atypical hemolytic uremic syndrome. Bu, Fengxiao (01 May 2016)
Atypical hemolytic uremic syndrome (aHUS) is a rare renal disorder characterized by thrombotic microangiopathy, thrombocytopenia, and acute kidney injury. Its pathogenesis has been attributed to a 'triggering' event that leads to dysregulation of the complement cascade at the level of the endothelial cell surface. Consistent with this understanding of the disease, mutations in complement genes have been definitively implicated in aHUS. However, the existence of other genetic contributors is supported by two observations. First, in ~50% of cases, disease-causing variants are not identified in complement genes, and second, disease penetrance is typically incomplete and highly variable.
To test this hypothesis, we identified pathways established to have crosstalk with the complement cascade, focusing initially on the coagulation pathway. Using targeted genomic enrichment and massively parallel sequencing, we screened 36 European-American patients with sporadic aHUS for genetic variants in 85 complement and coagulation genes, identifying deleterious rare variants in several coagulation genes. The most frequently mutated coagulation gene in our study cohort was PLG, which encodes a zymogen of plasmin and plays a key role in fibrinolysis. These results implicate the coagulation pathway in the pathogenesis of aHUS.
Based on this outcome, we developed a clinical genetic testing panel to screen disease-related genes in a group of ultra-rare complement-mediated diseases that includes, in addition to aHUS, thrombotic thrombocytopenic purpura (TTP), C3 glomerulonephritis (C3GN) and dense deposit disease (DDD). Data from 193 patients validate the use of this panel in clinical practice and also provide confirmatory insight into the pathogeneses of these diseases. Specifically, we found that in aHUS and TTP patients, variants were frequently identified in complement regulator genes, while in C3GN and DDD patients, variants were additionally found in C3 convertase genes.
To understand variability in disease penetrance, we completed targeted genetic screening in two aHUS families grossly discordant for disease penetrance, identifying in one family a co-segregating Factor X-deficiency variant (F10 p.Glu142Lys) that abrogated the effect of the complement mutation. Functional studies of the F10 p.Glu142Lys variant show that it decreases Factor X activity, predicting a hypo-coagulable state and further illustrating the importance of complement-coagulation crosstalk in exacerbating, but also mitigating, the aHUS phenotype.
In our final studies, we have sought to complete a comprehensive analysis of other potentially related pathways by using bioinformatics to identify candidate pathways coupled with whole-exome sequencing. Preliminary data from 43 aHUS patients and 300 controls suggest that pathways for dermatan and heparan sulfate synthesis, which are relevant to the formation of the extracellular matrix and cell surface adhesion, may be implicated in aHUS.
18. Mosaicism in tumor suppressor gene syndromes: prevalence, diagnostic strategies, and transmission risk. Chen, Jillian Leigh (10 November 2021)
Mosaicism occurs due to postzygotic genetic alterations during early embryonic development. The phenomenon is common, present in all humans, animals, and plants, and is associated with phenotypic variability and heterogeneity. Mosaic pathogenic gene variants result in a mosaic disease state, in which the individual can present with mild, generalized disease, a localized disease phenotype in specific organs and tissue regions, or full-blown clinical features that are indistinguishable from the heterozygous disease state. Multiple studies have described the prevalence and clinical correlations associated with low-level mosaicism for various genetic disorders, including several tumor suppressor gene (TSG) syndromes, which are well known to display mosaicism. However, the extent of mosaicism research varies widely between TSG syndromes. Currently, there is no comprehensive, up-to-date review covering multiple TSGs and focusing on mosaicism prevalence, diagnostic strategies, and transmission risk.
Here, in this literature review, I focus on eight common tumor suppressor genes: NF1, NF2, TSC1, TSC2, RB1, PTEN, VHL, and TP53, reporting the following disease aspects:
• Role and function of each tumor suppressor gene, disease prevalence, inheritance pattern, penetrance/expressivity pattern, age of onset, clinical features, organs affected, and benign or malignant tumors seen
• Different types of mosaicism, including critical review of recent, representative publications for each tumor suppressor gene syndrome
• Established criteria for clinical diagnosis of inherited versus mosaic disease, molecular diagnosis, and current methods of genetic analysis
Then more extensively, this thesis discusses the most informative, representative original studies for each TSG and provides a summary which covers:
• The number of mosaic patients analyzed and the spectrum of clinical features of the cohort they were sampled from
• The spectrum of variant allele frequency (VAF), tissue types analyzed, and different analysis methods performed
• Whether or not the mosaic patients met clinical criteria for diagnosis of inherited disease
• The number of patients who were persistently classified as no mutation identified (NMI) after genetic analysis
• Spectrum and type of mosaic mutational event(s) identified
• Age of onset and age range of mosaic patients
• Patient ascertainment and family history (sporadic or familial cases) and
• Type of mosaicism seen
Furthermore, it compares and discusses disease severity, possibility of malignancy, and genotype-phenotype correlations for each TSG. Ultimately, by juxtaposing these TSGs, this review aims to centralize existing knowledge about mosaicism and provide insight into how molecular techniques can be broadly applied for better diagnosis of mosaic disease.
19. Spectral-element simulations of separated turbulent internal flows. Ohlsson, Johan (January 2009)
20. Incorporation of Organ-Specific MicroRNA Target Sequences to Improve Gene Therapy Specificity. Samenuk, Thomas (January 2021)
Thesis advisor: Vassilios Bezzerides. The aim of this study was to utilize a massively parallel reporter assay (MPRA) to identify organ-specific microRNA (miRNA) target sequences to refine the timing and tissue specificity of transgene expression for gene therapy. We had previously developed a cardiac gene therapy for Catecholaminergic Polymorphic Ventricular Tachycardia (CPVT) using a systemically delivered adeno-associated virus (AAV9) vector. We hypothesized that incorporation of organ-specific miRNA target sites into our vector construct could improve our therapy's tissue specificity due to the ability of miRNAs to silence transgene expression. Initially, we attempted to incorporate mir-124 target sequences into our vector to detarget the brain. Although these initial attempts were unsuccessful, the study allowed us to develop a protocol to test the effectiveness of miRNA target sequences. Thereafter, we developed a method to screen thousands of putative miRNA target sequences simultaneously. In this study, target sequences of miRNAs specific to the heart, brain and liver were incorporated into a plasmid library. This plasmid library was subsequently made into AAV and injected into mice from a CPVT transgenic line. Total DNA and RNA were later extracted from the target organs, converted into genomic DNA (gDNA) and complementary DNA (cDNA) libraries respectively, and sent for amplicon sequencing. We analyzed the results using Comparative Microbiome Analysis 2.0 software (CoMA) and a custom Python script to count the occurrence of each specified barcode per sample. In doing so, we showed that the miRNA suppression mechanism is not only effective but also organ-specific. Furthermore, we developed a second script to create a combinatorial library from a set list of miRNA target sequences, enabling us to efficiently test thousands of target sequence combinations at once.
In doing so, we will be able to identify effective miRNA target sequence combinations to further improve gene therapy specificity. Thesis (BS), Boston College, 2021. Submitted to: Boston College, College of Arts and Sciences. Discipline: Departmental Honors. Discipline: Biology.
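The barcode-counting step mentioned in this abstract can be sketched in a few lines. The barcodes, reads, and matching rule below (exact prefix match at the 5' end) are hypothetical simplifications; real demultiplexing typically tolerates mismatches and uses the study's actual barcode list.

```python
from collections import Counter

def count_barcodes(reads, barcodes):
    """Count how often each known barcode appears across amplicon reads.
    A read is assigned to the first barcode found at its 5' end;
    unmatched reads are simply skipped."""
    counts = Counter({bc: 0 for bc in barcodes})
    for read in reads:
        for bc in barcodes:
            if read.startswith(bc):
                counts[bc] += 1
                break
    return counts

# Illustrative reads and barcodes (invented for this sketch).
reads = ["ACGTTTGGA", "ACGTCCGTA", "TTAGGGCAT", "GGGGAAAA"]
counts = count_barcodes(reads, ["ACGT", "TTAG"])
```

Per-sample barcode counts like these are what allow relative transgene expression, and hence miRNA-mediated suppression, to be compared across organs.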
|