1

Methodologies for the synthesis of cost-effective modular-MPC configurations for image processing applications

Kumm, Holger Thomas January 1995
No description available.
2

Development of a massively parallel nanoscale laser shock peening process

Hense, Matthew Davis 18 May 2015
In this report, the feasibility of a massively parallel, nanoscale laser shock peening process is investigated. The report gives a fundamental background on laser shock peening processes in general, including a description of the mechanisms involved and the theory behind the process. The experiments performed to develop a nanoscale laser shock peening process are also described in detail, and the problems encountered in the different experiments, together with the results, are presented.
3

Structural Design Using Cellular Automata

Slotta, Douglas J. 22 June 2001
Traditional parallel methods for structural design do not scale well. This thesis discusses the application of massively scalable cellular automata (CA) techniques to structural design. There are two sets of CA rules, one used to propagate stresses and strains, and one to perform design analysis. These rules can be applied serially, periodically, or concurrently, and Jacobi or Gauss-Seidel style updating can be done. These options are compared with respect to convergence, speed, and stability. / Master of Science
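The Jacobi versus Gauss-Seidel distinction mentioned in this abstract can be illustrated with a minimal sketch. The Python example below uses a simple neighbour-averaging rule on a one-dimensional grid as a hypothetical stand-in for the thesis's stress/strain propagation and design-analysis rules, which are not reproduced here.

```python
import numpy as np

def local_rule(left, centre, right):
    """Hypothetical stand-in for a CA propagation rule: simple neighbour averaging."""
    return 0.25 * left + 0.5 * centre + 0.25 * right

def jacobi_step(state):
    """Jacobi-style update: every cell reads only the previous generation."""
    new_state = state.copy()
    for i in range(1, len(state) - 1):
        new_state[i] = local_rule(state[i - 1], state[i], state[i + 1])
    return new_state

def gauss_seidel_step(state):
    """Gauss-Seidel-style update: cells are overwritten in place, so later cells
    already see the updated values of earlier neighbours in the same sweep."""
    for i in range(1, len(state) - 1):
        state[i] = local_rule(state[i - 1], state[i], state[i + 1])
    return state

# Toy boundary-value problem: fixed values at both ends, zeros in between.
cells = np.zeros(10)
cells[0], cells[-1] = 1.0, 1.0
for _ in range(100):
    cells = jacobi_step(cells)  # swap in gauss_seidel_step to compare convergence
```

Jacobi-style updating reads only the previous generation, so every cell update is independent and easy to parallelise, while Gauss-Seidel-style updating reuses freshly computed neighbours and typically converges in fewer sweeps; the resulting trade-offs in convergence, speed and stability are what the thesis compares.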
4

Real Time Data Reduction and Analysis Using Artificial Neural Networks

Dionisi, Steven M. October 1993
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada
An artificial neural network (ANN) for use in real time data reduction and analysis will be presented. The use and advantage of hardware and software implementations of neural networks will be considered. The ability of neural networks to learn and store associations between different sets of data can be used to create custom algorithms for some of the data analysis done during missions. Once trained, the ANN can distill the signals from several sensors into a single output, such as safe/unsafe. Used on a neural chip, the trained ANN can eliminate the need for A/D conversions and multiplexing for processing of combined parameters, and the massively parallel nature of the network allows the processing time to remain independent of the number of parameters. As a software routine, the advantages of using an ANN over conventional algorithms include the ease of use for engineers and the ability to handle nonlinear, noisy and imperfect data. This paper will apply the ANN to performance data from a T-38 aircraft.
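As a rough, hypothetical illustration of the idea above — a trained network distilling several sensor channels into a single safe/unsafe output — the sketch below builds a tiny feed-forward network in Python with NumPy. The layer sizes, random weights and decision threshold are placeholders, not values from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SensorFusionANN:
    """Minimal feed-forward network: N sensor inputs -> hidden layer -> one output.
    The weights would normally come from training on mission data; random here."""
    def __init__(self, n_sensors, n_hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(size=(n_sensors, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(size=(n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, sensors):
        hidden = np.tanh(sensors @ self.w1 + self.b1)
        return sigmoid(hidden @ self.w2 + self.b2)[0]

    def classify(self, sensors, threshold=0.5):
        """Collapse several sensor readings into a single safe/unsafe decision."""
        return "unsafe" if self.forward(sensors) > threshold else "safe"

# Example: six telemetry parameters sampled at one instant (illustrative values only).
ann = SensorFusionANN(n_sensors=6)
print(ann.classify(np.array([0.2, -1.3, 0.7, 0.0, 2.1, -0.4])))
```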
5

Implementing CAL Actor Component on Massively Parallel Processor Array

Khanfar, Husni January 2010
No description available.
6

Genetic dissection of the transcriptional hypoxia response and genomic regional capture for massively parallel sequencing

Turnbull, Douglas William, 1979- 09 1900
When cells are faced with the stress of oxygen deprivation (hypoxia), they must alter their physiology in order to survive. One adaptation cells make during hypoxia entails the transcriptional activation of specific groups of genes as well as the concurrent repression of other groups. This modulation is achieved through the actions of transcription factors, proteins that are directly involved in this transcriptional activation and repression. I studied the transcriptional response to hypoxia in the model organism Drosophila melanogaster, utilizing DNA microarrays to examine the transcriptomes of five different mutant Drosophila strains deficient in the hypoxia-responsive transcription factors HIF-1, FOXO, NFkB, p53, and MTF-1. By comparing hypoxia-responsive gene expression in these mutants to that of wild-type flies and subsequently identifying binding sites for each transcription factor near putative target genes, I was able to identify the transcripts regulated by each transcription factor during hypoxia. I discovered that FOXO plays an unexpectedly large role in hypoxic gene regulation, regulating a greater number of genes than any other transcription factor. I also identified multiple interesting targets of other transcription factors and uncovered a potential regulatory link between HIF-1 and FOXO. This study is the most in-depth examination of the transcriptional hypoxia response to date.

I was also involved in additional research on transcriptional stress responses in Drosophila. Also included in this dissertation are two papers on which I was the second author. One paper identified a regulatory link between the transcriptional responses to hypoxia and heat shock. The other examined elevated CO2 stress (hypercapnia) in Drosophila, showing that this stress causes the down-regulation of NFkB-dependent antimicrobial peptide gene expression.

My studies of stress responses would not have been possible without well-described mutant fly strains. Another part of my dissertation research involved the creation of a method for characterizing new mutants for future studies. When researchers seek to identify the molecular nature of a mutation that causes an interesting phenotype, they must ultimately determine the specific responsible genomic sequence change. While classical genetic methods and other techniques can easily be used to roughly map the location of a mutation in a genome, the regions identified by these means are usually so large that sequencing them to precisely identify the polymorphism is laborious and slow. I have developed a technique that makes sequencing genomic regions of this size much easier. My technique involves capturing genomic regions by hybridization of fragmented genomic target DNA to biotinylated probes generated from fosmid DNA, which are subsequently immobilized and washed on streptavidin beads. Genomic DNA fragments are then eluted by denaturation and sequenced using the latest generation of massively parallel sequencing technology. I have demonstrated the effectiveness of this approach by sequencing a mutation-containing 336-kilobase genomic region from a Caenorhabditis elegans strain. My entire protocol can be completed in two days, is relatively inexpensive, and is broadly applicable to any situation in which one wants to sequence a specific genomic region using massively parallel sequencing.
This dissertation includes both my previously published and my coauthored materials. / Adviser: Eric Johnson
7

Deep generative design of RNA family sequences / RNAファミリー配列の深層生成設計

Sumi, Shunsuke 25 March 2024
Kyoto University / New-system doctoral course / Doctor of Medical Science / Degree No. Kō 25172 / Medical Doctorate No. 5058 / Kyoto University Graduate School of Medicine, Department of Medicine / (Chief examiner) Professor 村川 泰裕, Professor 竹内 理, Professor 伊藤 貴浩 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
8

Simulations of turbulent boundary layers with heat transfer

Li, Qiang January 2009
No description available.
9

Spectral-element simulations of separated turbulent internal flows

Ohlsson, Johan January 2009
No description available.
10

Optimising a fluid plasma turbulence simulation on modern high performance computers

Edwards, Thomas David January 2010
Nuclear fusion offers the potential of almost limitless energy from sea water and lithium without the dangers of carbon emissions or long-term radioactive waste. At the forefront of fusion technology are the tokamaks, toroidal magnetic confinement devices that contain miniature stars on Earth. Nuclei can only fuse by overcoming the strong electrostatic forces between them, which requires high temperatures and pressures. The temperatures in a tokamak are so great that the deuterium-tritium fusion fuel forms a plasma, which must be kept hot and under pressure to maintain the fusion reaction. Turbulence in the plasma causes disruption by transporting mass and energy away from the core, reducing the efficiency of the reaction. Understanding and controlling the mechanisms of plasma turbulence is key to building a fusion reactor capable of producing sustained output.

The extreme temperatures make detailed empirical observations difficult to acquire, so numerical simulations are used as an additional method of investigation. One numerical model used to study turbulence and diffusion is CENTORI, a direct two-fluid magneto-hydrodynamic simulation of a tokamak plasma developed by the Culham Centre for Fusion Energy (CCFE, formerly UKAEA Fusion). It simulates the entire tokamak plasma with realistic geometry, evolving bulk plasma quantities such as pressure, density and temperature through millions of timesteps. This requires CENTORI to run in parallel on a Massively Parallel Processing (MPP) supercomputer to produce results in an acceptable time. Any improvement in CENTORI’s performance increases the rate and/or total number of results that can be obtained from access to supercomputer resources.

This thesis presents the substantial effort to optimise CENTORI on the current generation of academic supercomputers. It investigates and reviews the properties of contemporary computer architectures, then proposes, implements and executes a benchmark suite of CENTORI’s fundamental kernels. The suite is used to compare the performance of three competing memory layouts of the primary vector data structure using a selection of compilers on a variety of computer architectures. The results show that no memory layout is optimal on all platforms, so a flexible strategy was adopted to pursue “portable” optimisation, i.e. optimisations that can easily be added, adapted or removed on future platforms depending on their performance. This required designing an interface to functions and datatypes that separates CENTORI’s fundamental algorithms from repetitive, low-level implementation details. This approach offered multiple benefits: the clearer representation of CENTORI’s core equations as mathematical expressions in Fortran source code allows rapid prototyping and development of new features; the reduction in the total data volume by a factor of three cuts the amount of data transferred over the memory bus to almost a third; and the reduction in the number of intense floating-point kernels reduces the effort of optimising the application on new platforms.

The project proceeds to rewrite CENTORI using the new Application Programming Interface (API) and evaluates two optimised implementations. The first is a traditional library implementation that uses hand-optimised subroutines to implement the library functions. The second uses a dynamic optimisation engine to perform automatic stripmining to improve the performance of the memory hierarchy.
The automatic stripmining implementation uses lazy evaluation to delay calculations until absolutely necessary, allowing it to identify temporary data structures and minimise them for optimal cache use. This novel technique is combined with highly optimised implementations of the kernel operations and optimised parallel communication routines to produce a significant improvement in CENTORI’s performance. The maximum measured speed-up of the optimised versions over the original code was 3.4 times on 128 processors on HPCx, 2.8 times on 1024 processors on HECToR, and 2.3 times on 256 processors on HPC-FF.
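The stripmining idea described above can be sketched outside CENTORI's Fortran code base. The Python/NumPy example below simply splits a long element-wise update into cache-sized strips so that intermediate temporaries stay small; the arithmetic is a made-up placeholder for a bulk-quantity update and is not taken from the thesis.

```python
import numpy as np

def fused_update_naive(p, d, t):
    """Whole-array update: every intermediate is a full-length temporary array."""
    flux = p * d                 # full-size temporary
    return t + 0.1 * flux - 0.05 * d

def fused_update_stripmined(p, d, t, strip=4096):
    """Strip-mined update: the same arithmetic applied one cache-sized strip
    at a time, so temporaries stay small enough to remain in cache."""
    out = np.empty_like(t)
    for start in range(0, len(t), strip):
        sl = slice(start, start + strip)
        flux = p[sl] * d[sl]     # strip-sized temporary
        out[sl] = t[sl] + 0.1 * flux - 0.05 * d[sl]
    return out

# Illustrative bulk quantities (pressure, density, temperature) on a 1-D grid.
n = 1_000_000
p, d, t = (np.random.rand(n) for _ in range(3))
assert np.allclose(fused_update_naive(p, d, t), fused_update_stripmined(p, d, t))
```

In the thesis this blocking is derived automatically by the dynamic optimisation engine using lazy evaluation, rather than by hard-coding a strip size as done in this sketch.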
