About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Spectral-element simulations of separated turbulent internal flows

Ohlsson, Johan January 2009 (has links)
No description available.
12

Optimising a fluid plasma turbulence simulation on modern high performance computers

Edwards, Thomas David January 2010 (has links)
Nuclear fusion offers the potential of almost limitless energy from sea water and lithium without the dangers of carbon emissions or long-term radioactive waste. At the forefront of fusion technology are the tokamaks, toroidal magnetic confinement devices that contain miniature stars on Earth. Nuclei can only fuse by overcoming the strong electrostatic forces between them, which requires high temperatures and pressures. The temperatures in a tokamak are so great that the Deuterium-Tritium fusion fuel forms a plasma, which must be kept hot and under pressure to maintain the fusion reaction. Turbulence in the plasma causes disruption by transporting mass and energy away from the hot core, reducing the efficiency of the reaction. Understanding and controlling the mechanisms of plasma turbulence is key to building a fusion reactor capable of producing sustained output. The extreme temperatures make detailed empirical observations difficult to acquire, so numerical simulations are used as an additional method of investigation. One numerical model used to study turbulence and diffusion is CENTORI, a direct two-fluid magneto-hydrodynamic simulation of a tokamak plasma developed by the Culham Centre for Fusion Energy (CCFE, formerly UKAEA Fusion). It simulates the entire tokamak plasma with realistic geometry, evolving bulk plasma quantities like pressure, density and temperature through millions of timesteps. This requires CENTORI to run in parallel on a Massively Parallel Processing (MPP) supercomputer to produce results in an acceptable time. Any improvement in CENTORI’s performance increases the rate and/or total number of results that can be obtained from access to supercomputer resources. This thesis presents the substantial effort to optimise CENTORI on the current generation of academic supercomputers. It investigates and reviews the properties of contemporary computer architectures, then proposes, implements and executes a benchmark suite of CENTORI’s fundamental kernels. The suite is used to compare the performance of three competing memory layouts of the primary vector data structure using a selection of compilers on a variety of computer architectures. The results show there is no single memory layout that is optimal on all platforms, so a flexible strategy was adopted to pursue “portable” optimisation, i.e. optimisations that can easily be added, adapted or removed from future platforms depending on their performance. This required designing an interface of functions and datatypes that separates CENTORI’s fundamental algorithms from repetitive, low-level implementation details. This approach offered multiple benefits: the clearer representation of CENTORI’s core equations as mathematical expressions in Fortran source code allows rapid prototyping and development of new features; the reduction in the total data volume by a factor of three reduces the amount of data transferred over the memory bus to almost a third; and the reduction in the number of intense floating-point kernels reduces the effort of optimising the application on new platforms. The project proceeds to rewrite CENTORI using the new Application Programming Interface (API) and evaluates two optimised implementations. The first is a traditional library implementation that uses hand-optimised subroutines to implement the library functions. The second uses a dynamic optimisation engine to perform automatic stripmining to improve the performance of the memory hierarchy.
The automatic stripmining implementation uses lazy evaluation to delay calculations until absolutely necessary, allowing it to identify temporary data structures and minimise them for optimal cache use. This novel technique is combined with highly optimised implementations of the kernel operations and optimised parallel communication routines to produce a significant improvement in CENTORI’s performance. The maximum measured speed-up of the optimised versions over the original code was 3.4 times on 128 processors on HPCx, 2.8 times on 1024 processors on HECToR, and 2.3 times on 256 processors on HPC-FF.
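The strip-mining idea described above can be illustrated outside CENTORI itself. The following is a minimal sketch in Python/NumPy (not the thesis's Fortran API, and with an invented block size) of evaluating a fused vector expression strip by strip so that temporaries stay small enough to remain cache-resident:

```python
# Minimal sketch (not CENTORI's actual API): evaluating the fused expression
# r = a*b + c over fixed-size strips so that each temporary fits in cache,
# instead of materialising full-length intermediate arrays.
import numpy as np

BLOCK = 4096  # strip length; a real implementation would tune this to cache size

def fused_axpy(a, b, c, out, block=BLOCK):
    """Compute out = a*b + c one strip at a time."""
    n = len(out)
    for start in range(0, n, block):
        stop = min(start + block, n)
        # the temporary a[start:stop]*b[start:stop] is only `block` elements long
        np.multiply(a[start:stop], b[start:stop], out=out[start:stop])
        out[start:stop] += c[start:stop]
    return out

n = 1_000_000
a, b, c = (np.random.rand(n) for _ in range(3))
r = fused_axpy(a, b, c, np.empty(n))
assert np.allclose(r, a * b + c)
```

A lazy-evaluation engine of the kind described would build such strip loops automatically from the expression rather than requiring them to be written by hand.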
13

Risk Measures Extracted from Option Market Data Using Massively Parallel Computing

Zhao, Min 27 April 2011 (has links)
The famous Black-Scholes formula provided the first mathematically sound mechanism to price financial options. It is based on the assumption that daily random stock returns are identically normally distributed and hence stock prices follow a stochastic process with a constant volatility. Observed prices, at which options trade on the markets, don't fully support this hypothesis. Options corresponding to different strike prices trade as if they were driven by different volatilities. To capture this so-called volatility smile, we need a more sophisticated option-pricing model that assumes the volatility itself is a random process. The price we have to pay for this stochastic volatility model is that such models are computationally extremely intensive to simulate and hence difficult to fit to observed market prices. This difficulty has severely limited the use of stochastic volatility models in practice. In this project we propose to overcome the obstacle of computational complexity by executing the simulations in a massively parallel fashion on the graphics processing unit (GPU) of the computer, utilizing its hundreds of parallel processors. We succeed in generating the trillions of random numbers needed to fit a monthly options contract in 3 hours on a desktop computer with a Tesla GPU. This enables us to accurately price any derivative security based on the same underlying stock. In addition, our method also allows extracting quantitative measures of the riskiness of the underlying stock that are implied by the views of the forward-looking traders on the option markets.
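The abstract does not specify the stochastic volatility model or the GPU kernels used; as a generic, CPU-only illustration of the kind of simulation involved, the sketch below prices a European call under a Heston-style model with a simple Euler discretisation (all parameter values invented):

```python
# Illustrative CPU sketch only: Monte Carlo pricing of a European call under a
# Heston-style stochastic volatility model with Euler stepping and full
# truncation of the variance. The thesis runs simulations of this kind massively
# in parallel on a GPU; this exact model and these parameters are assumptions.
import numpy as np

def heston_call_mc(s0=100.0, k=100.0, r=0.02, t=0.5,
                   v0=0.04, kappa=1.5, theta=0.04, xi=0.3, rho=-0.7,
                   n_paths=100_000, n_steps=125, seed=0):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    s = np.full(n_paths, s0)   # stock price paths
    v = np.full(n_paths, v0)   # variance paths
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)                      # keep variance non-negative
        s *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    payoff = np.maximum(s - k, 0.0)
    return np.exp(-r * t) * payoff.mean()

print(f"MC call price: {heston_call_mc():.3f}")
```

The contribution described in the abstract is to run a vastly larger number of such paths in parallel on the GPU, making calibration of the model to observed option prices tractable.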
14

Protegendo a economia virtual de MMOGs através da detecção de cheating. / Protecting the virtual economy in MMOGs by cheat detection

Severino, Felipe Lange January 2012 (has links)
In the past few years, Massively Multiplayer Online Games (MMOGs) have grown in both popularity and investment, driven especially by the evolution of residential connections (faster connections at lower prices). With this growing demand, the limitations imposed by the client-server architecture normally used in commercial games become more significant. Peer-to-peer architectures are among the alternatives for supporting MMOGs, distributing the game among several computers; however, they raise security problems whose known solutions often perform too poorly to be practical in real games. Among these security problems, cheating, the action one or more players take to break the rules in their own favour, is the most significant for MMOGs. The concern with cheating is aggravated when its effects can cause irreversible damage to the virtual economy and potentially affect all players. This work uses a cellular division of the virtual world to restrict the impact of a given cheat to a single cell, preventing it from propagating. To this end, a classification of player state is presented, and a cheat-detection technique is applied to each class. Experiments were carried out through simulation to test the applicability of the model and to analyse its performance and accuracy, as sketched below. The results indicate that the proposed model can effectively protect the virtual economy, preventing a cheating occurrence from affecting all players.
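The model's internals are not given in the abstract; the following is a hypothetical Python sketch of the cell-partitioning idea only, with invented names and a made-up cell size, showing how a detected cheat can be contained within the single cell where it occurred:

```python
# Hypothetical sketch of the cell-partitioning idea (the thesis's actual state
# classification and detection techniques are not reproduced here): the virtual
# world is split into square cells, and when a cheat is detected its effects are
# contained by acting only on the offending player's cell.
CELL_SIZE = 100.0  # world units per cell edge (invented value)

def cell_of(x, y):
    """Map a world position to the discrete cell that owns it."""
    return (int(x // CELL_SIZE), int(y // CELL_SIZE))

class World:
    def __init__(self):
        self.players_by_cell = {}   # cell -> set of player ids

    def place(self, player_id, x, y):
        self.players_by_cell.setdefault(cell_of(x, y), set()).add(player_id)

    def contain_cheat(self, cheater_id, x, y):
        """Confine a detected cheat: only the cheater's cell is affected."""
        cell = cell_of(x, y)
        affected = self.players_by_cell.get(cell, set())
        # e.g. roll back trades / freeze the economy inside this cell only
        return cell, affected

world = World()
world.place("alice", 10, 20)
world.place("bob", 250, 40)                  # lands in a different cell than alice
print(world.contain_cheat("alice", 10, 20))  # bob's cell is untouched
```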
15

Deafness in the genomics era

Shearer, Aiden Eliot 01 May 2014 (has links)
Deafness is the most common sensory deficit in humans, affecting 278 million people worldwide. Non-syndromic hearing loss (NSHL), hearing loss not associated with other symptoms, is the most common type of hearing loss, and most NSHL in developed countries is due to a genetic cause. The inner ear is a remarkably complex organ, and as such, there are estimated to be hundreds of genes with mutations that can cause hearing loss. To date, 62 of these genes have been identified. This extreme genetic heterogeneity has made comprehensive genetic testing for deafness all but impossible with low-throughput genetic testing methods that sequence a single gene at a time. The human genome project was completed in 2003. Soon after, genomic technologies, including massively parallel sequencing (MPS), were developed. MPS gives the ability to sequence millions or billions of DNA base pairs of the genome simultaneously. The goal of my thesis work was to use these newly developed genomic technologies to create a comprehensive genetic testing platform for deafness and use this platform to answer key scientific questions about genetic deafness. This platform would need to be relatively inexpensive, highly sensitive, and accurate enough for clinical diagnostics. In order to accomplish this goal we first determined the best methods to use for this platform by comparing available methods for isolating all exons of all genes implicated in deafness and by comparing massively parallel sequencers. We performed this pilot study on a limited number of patient samples, but were able to determine that solution-phase targeted genomic enrichment (TGE) and Illumina sequencing presented the best combination of sensitivity and cost. We decided to call this platform and diagnostic pipeline OtoSCOPE®. During this study we also identified several weaknesses in the standard method for TGE that we sought to improve. The next aim was to address these weaknesses by developing an improved protocol for TGE that was highly reproducible and efficient. We developed a new protocol and tested the limits of sequencer capacity. These findings allowed us to translate OtoSCOPE® to the clinical setting and use it to perform comprehensive genetic testing on a large number of individuals in research studies. Finally, we used the OtoSCOPE® platform to answer crucial questions about genetic deafness that had remained unanswered due to the low-throughput genetic testing methods previously available. By screening 1,000 normal-hearing individuals from 6 populations, we determined the carrier frequency of non-DFNB1 recessive deafness-causing mutations to be 3.3%. Our findings will also help us to interpret variants uncovered during analysis of deafness genes in affected individuals. When we used OtoSCOPE® to screen 100 individuals with apparent genetic deafness, we were able to provide a genetic diagnosis in 45%, a large increase compared to previous gene-by-gene sequencing methods. Because it provides a pinpointed etiological diagnosis, genetic testing with a comprehensive platform like OtoSCOPE® could provide an attractive alternative to the newborn hearing screen. In addition, this research lays the groundwork for molecular therapies, tailored to specific genes or genetic mutations, to restore or reverse hearing loss. Therefore, a molecular diagnosis with a comprehensive platform like OtoSCOPE® is integral for those affected by hearing loss.
16

Design and Optimization of Wireless Networks for Large Populations

Silva Allende, Alonso Ariel 07 June 2010 (has links) (PDF)
The growing number of wireless devices and wireless systems presents many challenges for the design and operation of these networks. We focus on massively dense ad hoc networks and cellular systems. We use the continuum modeling approach, useful for the initial phase of deployment and for broad-scale regional studies of the network. We study the routing problem in massively dense ad hoc networks and, similar to the work of Nash and Wardrop, we define two principles of network optimization: user- and system-optimization. We show that the optimality conditions of an appropriately constructed optimization problem coincide with the user-optimization principle. For different cost functions, we solve the routing problem for directional and omnidirectional antennas. We also find a characterization of the minimum-cost paths for directional antennas by extensive use of Green's theorem. In many cases, the solution is characterized by a partial differential equation. We propose its numerical analysis by the finite element method, which gives bounds on the variation of the solution with respect to the data. When we allow mobility of the origin and destination nodes, we find the optimal quantity of active relay nodes. In Network MIMO systems and MIMO broadcast channels, we show that, even when the channel offers an infinite number of degrees of freedom, the capacity is limited by the ratio between the size of the antenna array at the base station and the mobile terminals' position and the wavelength of the signal. We also find the optimal mobile association for the user- and system-optimization problems under different policies and distributions of the users.
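For reference, the two optimization principles invoked here can be stated in their standard discrete form (the thesis extends them to a continuum setting), with path flows f_p, path costs c_p(f) and demand d_w for each origin-destination pair w:

```latex
% Wardrop user optimum: every used path between an origin-destination pair w
% has minimal cost among the available paths P_w
\forall w,\ \forall p \in P_w:\qquad
f_p > 0 \;\Longrightarrow\; c_p(f) = \min_{q \in P_w} c_q(f)

% System optimum: path flows minimize the aggregate network cost
\min_{f \ge 0} \; \sum_{w}\sum_{p \in P_w} f_p\, c_p(f)
\qquad \text{s.t.} \qquad \sum_{p \in P_w} f_p = d_w \quad \forall w
```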
17

The Exploratory Research of Flow Experience on Internet

Chen, Wei-Jei 03 January 2002 (has links)
none
18

Analysis of genetic variations in cancer

Hasmats, Johanna January 2012 (has links)
The aim of this thesis is to apply recently developed technologies for genomic variation analysis, and to ensure the quality of the generated information for use in preclinical cancer research. Faster access to a patient's full genomic sequence at a lower cost makes it possible for end users such as clinicians and physicians to gain a more complete understanding of the disease status of a patient and adjust treatment accordingly. Correct biological interpretation is important in this context, and can only be provided through fast and simple access to relevant, high-quality data. Therefore, we here propose and validate new bioinformatic strategies for selecting biomarkers that predict response to cancer therapy. We initially explored the use of bioinformatic tools on a smaller scale to select interesting targets for carboplatin and paclitaxel toxicity. From these findings we then extended the analysis to the entire exome to look for biomarkers as targets for adverse effects of carboplatin and gemcitabine. To investigate any bias introduced by the methods used for targeting the exome, we analyzed the mutation profiles in cancer patients by comparing whole-genome-amplified DNA to unamplified DNA. In addition, we applied RNA-seq to the same patients to further validate the variations obtained by sequencing of DNA. The understanding of the human cancer genome is growing rapidly, thanks to methodological development of analysis tools. The next step is to implement these tools as part of a chain from diagnosis of patients to genomic research to personalized treatment.
19

Massively parallel analysis of cells and nucleic acids

Sandberg, Julia January 2011 (has links)
Recent advances in biotechnology have enabled completely new avenues in life science research to be explored. By allowing increased parallelization, increasingly complex cell samples and experiments can be investigated in shorter time and at a lower cost. This facilitates, for example, large-scale efforts to study cell heterogeneity at the single-cell level by analyzing cells in parallel, which can also include global genomic analyses. The work presented in this thesis focuses on massively parallel analysis of cells or nucleic acid samples, demonstrating technology developments in the field as well as use of the technology in the life sciences. In stem cell research, issues such as cell morphology, cell differentiation and the effects of reprogramming factors are frequently studied, and to obtain information on cell heterogeneity these experiments are preferably carried out on single cells. In paper I we used a high-density microwell device in silicon and glass for culturing and screening of stem cells. Maintained pluripotency in stem cells from human and mouse was demonstrated in a screening assay by antibody staining, and the chip was furthermore used for studying neural differentiation. The chip format allows for low sample volumes and rapid high-throughput analysis of single cells, and is compatible with Fluorescence Activated Cell Sorting (FACS) for precise cell selection. Massively parallel DNA sequencing is revolutionizing genomics research throughout the life sciences by producing ever-increasing amounts of data from a single sequencing run. However, the reagent costs and labor requirements in current massively parallel sequencing protocols are still substantial. In papers II-IV we focused on flow-sorting techniques for improved sample preparation in bead-based massive sequencing platforms, with the aim of increasing the amount of quality data output, as demonstrated on the Roche/454 platform. In paper II we demonstrate a rapid alternative to the existing shotgun sample titration protocol and also use flow-sorting to enrich for beads that carry amplified template DNA after emulsion PCR, thus obtaining pure samples with no downstream sacrifice of DNA sequencing quality. This should be seen in comparison to the standard 454 enrichment protocol, which gives rise to varying degrees of sample purity, thus affecting the sequence data output of the sequencing run. Massively parallel sequencing is also useful for deep sequencing of specific PCR-amplified targets in parallel. However, unspecific product formation is a common problem in amplicon sequencing; these shorter products may be difficult to fully remove by standard procedures such as gel purification, and their presence inevitably reduces the number of target sequence reads that can be obtained in each sequencing run. In paper III a gene-specific fluorescent probe was used for target-specific FACS enrichment of beads carrying an amplified target gene on the surface. Through this procedure a nearly three-fold increase in the fraction of informative sequences was obtained, with no sequence bias introduced. Barcode labeling of different DNA libraries prior to pooling and emulsion PCR is standard procedure to maximize the number of experiments that can be run in one sequencing lane, while also decreasing the impact of technical noise.
However, variation between libraries in quality and GC content affects amplification efficiency, which may result in biased fractions of the different libraries in the sequencing data. In paper IV, barcode-specific labeling and flow-sorting were used to normalize beads carrying different barcodes, balancing the proportion of data obtained from different samples while also removing mixed beads and beads with no or poorly amplified product on the surface, hence also increasing sequence quality. In paper V, cell heterogeneity within a human being is investigated by low-coverage whole genome sequencing of single-cell material. By focusing on the most variable portion of the human genome, polyguanine nucleotide repeat regions, variability between different cells is investigated and highly variable polyguanine repeat loci are identified. By selectively amplifying and sequencing polyguanine nucleotide repeats from single cells for which the phylogenetic relationship is known, we demonstrate that massively parallel sequencing can be used to study cell-to-cell variation in the length of these repeats, based on which a phylogenetic tree can be drawn.
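As a purely illustrative aside (not taken from paper V), repeat-length profiles of this kind can be turned into a tree with off-the-shelf tools; the sketch below uses invented per-cell polyguanine repeat lengths and average-linkage clustering in Python/SciPy:

```python
# Illustrative sketch only: reconstructing a simple cell lineage tree from
# per-cell polyguanine repeat lengths, assuming repeat-length differences
# accumulate with cell divisions. Cell names, loci and lengths are invented.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# rows = single cells, columns = measured repeat lengths at polyguanine loci
repeat_lengths = np.array([
    [12, 9, 15, 11],   # cell A
    [12, 10, 15, 11],  # cell B (close relative of A)
    [14, 9, 17, 12],   # cell C
    [14, 9, 18, 12],   # cell D (close relative of C)
])

# L1 distance between repeat-length profiles as a crude proxy for lineage distance
dist = pdist(repeat_lengths, metric="cityblock")

# average-linkage hierarchical clustering yields a tree (dendrogram) over the cells
tree = linkage(dist, method="average")
print(dendrogram(tree, no_plot=True, labels=["A", "B", "C", "D"])["ivl"])
```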
20

Enabling massive genomic and transcriptomic analysis

Stranneheim, Henrik January 2011 (has links)
In recent years there have been tremendous advances in our ability to rapidly and cost-effectively sequence DNA. This has revolutionized the fields of genetics and biology, leading to a deeper understanding of the molecular events in life processes. The rapid advances have enormously expanded sequencing opportunities and applications, but have also imposed heavy strains on the steps prior to sequencing, as well as on the subsequent handling and analysis of the massive amounts of sequence data generated, if the full capacity of these novel platforms is to be exploited. The work presented in this thesis (based on six appended papers) has contributed to balancing the sequencing process by developing techniques to accelerate the rate-limiting steps prior to sequencing, facilitating sequence data analysis and applying the novel techniques to address biological questions. Papers I and II describe techniques to eliminate expensive and time-consuming preparatory steps by automating library preparation procedures prior to sequencing. The automated procedures were benchmarked against standard manual procedures and were found to substantially increase throughput while maintaining high reproducibility. In Paper III, a novel algorithm for fast classification of sequences in complex datasets is described. The algorithm was first optimized and validated using a synthetic metagenome dataset and then shown to enable faster analysis of an experimental metagenome dataset than conventional long-read aligners, with similar accuracy. Paper IV presents an investigation of the molecular effects on the p53 gene of exposing human skin to sunlight during the course of a summer holiday. There was evidence of previously accumulated persistent p53 mutations in 14% of all epidermal cells. Most of these mutations are likely to be passenger events, as the affected cell compartments showed no apparent growth advantage. An annual rate of 35,000 novel sun-induced persistent p53 mutations was estimated to occur in the sun-exposed skin of a human individual. Paper V assesses the effect of using RNA obtained from whole-cell extracts (total RNA) or cytoplasmic RNA on quantifying transcripts detected in subsequent analysis. Overall, more differentially detected genes were identified when using the cytoplasmic RNA. The major reason for this is the reduced complexity of cytoplasmic RNA, but it also appears to be partly due to the nuclear retention of transcripts with long, structured 5’- and 3’-untranslated regions or long protein-coding sequences. The last paper, VI, describes whole-genome sequencing of a large, consanguineous family with a history of Leber hereditary optic neuropathy (LHON) on the maternal side. The analysis identified new candidate genes, which could be important in the aetiology of LHON. However, these candidates require further validation before any firm conclusions can be drawn regarding their contribution to the manifestation of LHON.
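The classification algorithm of Paper III is not spelled out in the abstract; as a generic illustration of the kind of k-mer-based read classification commonly used in metagenomics (an assumed approach, with invented reference sequences), a minimal Python sketch might look like this:

```python
# Generic illustration of k-mer-based read classification (an assumed approach;
# the abstract does not detail the algorithm used in Paper III). Each read is
# assigned to the reference whose k-mer set it shares the most k-mers with.
from collections import Counter

K = 8  # k-mer length; real tools typically use larger k (e.g. 21-31)

def kmers(seq, k=K):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# toy reference "genomes" keyed by organism name (invented sequences)
references = {
    "org_A": "ACGTACGTGGCCTTAGACGTTAGCCGTAACGT",
    "org_B": "TTGACCGTAGGCTTAACGGATCCGTTAGGCAT",
}
ref_index = {name: kmers(seq) for name, seq in references.items()}

def classify(read):
    # count shared k-mers with each reference; return the best hit (or None)
    scores = Counter({name: len(kmers(read) & idx) for name, idx in ref_index.items()})
    best, hits = scores.most_common(1)[0]
    return best if hits > 0 else None

print(classify("ACGTACGTGGCCTTAG"))  # -> org_A for this toy example
```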
