51

Hierarchical Temporal Memory Cortical Learning Algorithm for Pattern Recognition on Multi-core Architectures

Price, Ryan William 01 January 2011
Strongly inspired by an understanding of mammalian cortical structure and function, the Hierarchical Temporal Memory Cortical Learning Algorithm (HTM CLA) is a promising new approach to problems of recognition and inference in space and time. Only a subset of the theoretical framework of this algorithm has been studied, but it is already clear that more information is needed about the performance of HTM CLA on real data and about the associated computational costs. For the work presented here, a complete implementation of Numenta's current algorithm was done in C++. In validating the implementation, first- and higher-order sequence learning was briefly examined, as was algorithm behavior with noisy data in a simple pattern-recognition task. A pattern-recognition task was then created using sequences of handwritten digits, and a performance analysis of the sequential implementation was carried out. The analysis indicates that the resulting rapid increase in computing load may impact algorithm scalability, which may, in turn, be an obstacle to widespread adoption of the algorithm. Two critical hotspots in the sequential code were identified, and a parallelized version was developed using OpenMP multi-threading. Scalability analysis of the parallel implementation was performed on a state-of-the-art multi-core computing platform. Modest speedup was readily achieved with straightforward parallelization. Parallelization on multi-core systems is an attractive choice for moderate-sized applications, but significantly larger ones are likely to remain infeasible without more specialized hardware acceleration accompanied by optimizations to the algorithm.
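As an illustration of the kind of loop-level OpenMP parallelization described above, here is a minimal sketch that parallelizes one plausible HTM CLA hotspot: the per-column overlap computation of the spatial pooler. The data layout and all names are assumptions, not the thesis's actual code; per-column independence is what makes a single `parallel for` sufficient.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical layout: each column stores the input-bit indices of its
// connected synapses. Compile with -fopenmp (GCC/Clang) to enable the pragma.
struct Column {
    std::vector<std::size_t> connectedSynapses; // input-bit indices
};

// Compute per-column overlap with the current binary input in parallel.
// Columns are independent, so the outer loop parallelizes trivially.
std::vector<int> computeOverlaps(const std::vector<Column>& columns,
                                 const std::vector<char>& inputBits)
{
    std::vector<int> overlap(columns.size(), 0);
    #pragma omp parallel for schedule(static)
    for (long c = 0; c < static_cast<long>(columns.size()); ++c) {
        int count = 0;
        for (std::size_t idx : columns[c].connectedSynapses)
            count += inputBits[idx];   // adds 1 if the input bit is active
        overlap[c] = count;
    }
    return overlap;
}
```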
52

Protein Structure Prediction

Tuček, Jaroslav January 2009
This work describes the three-dimensional structure of protein molecules and the biological databases used to store information about this structure or its hierarchical classification. Current methods of computational structure prediction are surveyed, with an emphasis on comparative modeling. This particular method is also implemented in a proof-of-concept program and, finally, the implementation is analysed.
53

Low-power high-resolution image detection

Merchant, Caleb 09 August 2019
Many image-processing algorithms can accurately detect humans and other objects such as vehicles and animals, but many require large amounts of processing, often relying on hardware acceleration with powerful central processing units (CPUs), graphics processing units (GPUs), or field-programmable gate arrays (FPGAs). Implementing an algorithm that can detect objects such as humans at longer ranges makes these hardware requirements even more demanding, since the number of pixels necessary to detect objects at both close and long range increases greatly. Comparing different low-power implementations makes it possible to characterize the trade-off between performance and power. An image-differencing algorithm is proposed, along with selected low-power hardware, that is capable of detecting humans at ranges of 500 m. Multiple versions of the detection algorithm are implemented on the selected hardware and compared for run-time performance on a low-power system.
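The abstract does not spell out the algorithm, so the following is only a minimal sketch of the frame-differencing idea at the core of any image-differencing detector: compare each pixel against a reference frame and declare a detection when enough pixels change. Parameter names and values are assumptions; a deployed long-range detector would add background-model updates and region filtering.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Minimal frame-differencing detector on 8-bit grayscale frames.
// A pixel is "changed" if it differs from the reference frame by more
// than `threshold`; a detection is declared when the count of changed
// pixels exceeds `minPixels`. Both parameters are illustrative only.
bool detectChange(const std::vector<std::uint8_t>& reference,
                  const std::vector<std::uint8_t>& current,
                  int threshold, std::size_t minPixels)
{
    std::size_t changed = 0;
    for (std::size_t i = 0; i < reference.size(); ++i) {
        int diff = std::abs(static_cast<int>(current[i]) -
                            static_cast<int>(reference[i]));
        if (diff > threshold)
            ++changed;
    }
    return changed >= minPixels;
}
```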
54

Investigation and Characterization of AlGaN/GaN Device Structures and the Effects of Material Defects and Processing on Device Performance

Jessen, Gregg Huascar 20 December 2002
No description available.
55

III-V semiconductors on SiGe substrates for multi-junction photovoltaics

Andre, Carrie L. 19 November 2004
No description available.
56

Exploring feasibility of reinforcement learning flight route planning

Wickman, Axel January 2021
This thesis explores and compares traditional and reinforcement learning (RL) methods of performing 2D flight path planning in 3D space. A wide overview of natural, classic, and learning approaches to planning is given, in conjunction with a review of some general recurring problems and trade-offs that appear within planning. This general background then serves as a basis for motivating different possible solutions to this specific problem. These solutions are implemented, together with a testbed in the form of a parallelizable simulation environment. This environment makes use of random world generation and physics combined with an aerodynamic model. An A* planner, a local RL planner, and a global RL planner are developed and compared against each other in terms of performance, speed, and general behavior. An autopilot model is also trained and used both to measure flight feasibility and to constrain the planners to followable paths. All planners were partially successful, with the global planner exhibiting the highest overall performance. The RL planners were also found to be more reliable in terms of both speed and followability because of their ability to leave difficult decisions to the autopilot. From this it is concluded that machine learning in general, and reinforcement learning in particular, is a promising avenue for solving the problem of flight route planning in dangerous environments.
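As a point of reference for the A* baseline mentioned above, here is a minimal grid-based A* sketch. The 4-connected, uniform-cost occupancy grid and all names are assumptions for illustration; the thesis plans 2D routes through a 3D world with physics and an aerodynamic model, which this toy version omits.

```cpp
#include <cstdlib>
#include <queue>
#include <vector>

// Minimal A* on a 2D occupancy grid (0 = free, 1 = blocked).
// Returns the optimal path cost, or -1 if the goal is unreachable.
struct Node { int x, y; double f; };
struct Cmp  { bool operator()(const Node& a, const Node& b) const { return a.f > b.f; } };

double aStar(const std::vector<std::vector<int>>& grid,
             int sx, int sy, int gx, int gy)
{
    const int H = static_cast<int>(grid.size());
    const int W = static_cast<int>(grid[0].size());
    auto h = [&](int x, int y) {                   // admissible Manhattan heuristic
        return std::abs(x - gx) + std::abs(y - gy);
    };
    std::vector<std::vector<double>> g(H, std::vector<double>(W, 1e18));
    std::priority_queue<Node, std::vector<Node>, Cmp> open;
    g[sy][sx] = 0.0;
    open.push({sx, sy, static_cast<double>(h(sx, sy))});
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        Node n = open.top(); open.pop();
        if (n.x == gx && n.y == gy) return g[n.y][n.x];
        for (int k = 0; k < 4; ++k) {
            int nx = n.x + dx[k], ny = n.y + dy[k];
            if (nx < 0 || ny < 0 || nx >= W || ny >= H || grid[ny][nx]) continue;
            double cost = g[n.y][n.x] + 1.0;       // uniform step cost
            if (cost < g[ny][nx]) {                // relax and (re-)queue
                g[ny][nx] = cost;
                open.push({nx, ny, cost + h(nx, ny)});
            }
        }
    }
    return -1.0;
}
```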
57

Towards Low-Complexity Scalable Shared-Memory Architectures

Zeffer, Håkan January 2006
Plentiful research has addressed low-complexity software-based shared-memory systems since the idea was first introduced more than two decades ago. However, software-coherent systems have not been very successful in the commercial marketplace. We believe there are two main reasons for this: lack of performance and/or lack of binary compatibility.

This thesis studies multiple aspects of how to design future binary-compatible high-performance scalable shared-memory servers while keeping the hardware complexity at a minimum. It starts with a software-based distributed shared-memory system relying on no specific hardware support and gradually moves towards architectures with simple hardware support.

The evaluation is made in a modern chip-multiprocessor environment with both high-performance compute workloads and commercial applications. It shows that implementing the coherence-violation detection in hardware while solving the interchip coherence in software allows for high-performing binary-compatible systems with very low hardware complexity. Our second-generation hardware-software hybrid performs on par with, and often better than, traditional hardware-only designs.

Based on our results, we conclude that it is not only possible to design simple systems while maintaining performance and the binary-compatibility envelope, it is often possible to get better performance than in traditional and more complex designs.

We also explore two new techniques for evaluating a new shared-memory design throughout this work: adjustable simulation fidelity and statistical multiprocessor cache modeling.
58

Interrogation of Nucleic Acids by Parallel Threading

Pettersson, Erik January 2007
Advancements in the field of biotechnology are expanding the scientific horizon, and a promising era is envisioned with personalized medicine for improved health. The amount of genetic data is growing at an ever-escalating pace due to the availability of novel technologies that allow massively parallel sequencing and whole-genome genotyping, supported by advances in computer science and information technology. As the amount of information stored in databases throughout the world grows and our knowledge deepens, genetic signatures of significant importance are discovered. Such signatures, surfacing in the data-mining process, may include causative or marker single-nucleotide polymorphisms (SNPs), revealing predisposition to disease, or gene-expression signatures, profiling a pathological state. When targeting a reduced set of signatures in a large number of samples for diagnostic or fine-mapping purposes, efficient interrogation and scoring require appropriate preparation. These needs are met by miniaturized and parallelized platforms that allow low sample and template consumption. This doctoral thesis describes an attempt to tackle some of these challenges through the design and implementation of a novel assay denoted Trinucleotide Threading (TnT). The method permits multiplex amplification of a medium-sized set of specific loci and was adapted to genotyping, gene-expression profiling, and digital allelotyping. Utilizing a reduced number of nucleotides permits specific amplification of targeted loci while preventing the generation of spurious amplification products. The method was applied to genotype 96 individuals for 75 SNPs, and the accuracy of genotyping from minute amounts of genomic DNA was confirmed. The procedure was performed on a robotic workstation running custom-made scripts, and a software tool was implemented to facilitate the assay design. Furthermore, a statistical model was derived from the molecular principles of the genotyping assay, and an Expectation-Maximization algorithm was chosen to automatically call the generated genotypes. The TnT approach was also adapted to profiling signature gene sets for the Swedish Human Protein Atlas Program: 18 protein epitope signature tags (PrESTs) were targeted in eight cell lines employed in the program, and the results demonstrated high concordance with real-time PCR approaches. Finally, an assay for digital estimation of allele frequencies in large cohorts was set up by combining the TnT approach with a second-generation sequencing system. Allelotyping was performed by targeting 147 polymorphic loci in a genomic pool of 462 individuals, with subsequent interrogation carried out on a state-of-the-art massively parallel Pyrosequencing instrument. The experiment generated more than 200,000 reads, and with bioinformatic support, the clonally amplified fragments and corresponding sequence reads were converted into a precise set of allele frequencies.
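The thesis derives its statistical model from the molecular principles of the assay; as a generic illustration of EM-based genotype calling, the sketch below fits a three-component one-dimensional Gaussian mixture (clusters for the AA, AB, and BB genotypes) to per-sample allele-signal ratios and calls genotypes by maximum posterior. All names, the 1-D signal model, and the initialization left to the caller are assumptions.

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

// Three-component 1-D Gaussian mixture: means, variances and weights for
// the hypothetical AA, AB and BB genotype clusters.
struct Gmm { std::array<double, 3> mu, var, w; };

static double gauss(double x, double mu, double var)
{
    const double PI = 3.141592653589793;
    return std::exp(-(x - mu) * (x - mu) / (2.0 * var))
         / std::sqrt(2.0 * PI * var);
}

// Fit the mixture by EM. x holds one signal ratio per sample; the caller
// supplies a rough initialization of m (e.g. means near 0.0, 0.5, 1.0).
void emFit(const std::vector<double>& x, Gmm& m, int iters)
{
    const std::size_t n = x.size();
    std::vector<std::array<double, 3>> resp(n);
    for (int it = 0; it < iters; ++it) {
        for (std::size_t i = 0; i < n; ++i) {            // E-step
            double s = 0.0;
            for (int k = 0; k < 3; ++k) {
                resp[i][k] = m.w[k] * gauss(x[i], m.mu[k], m.var[k]);
                s += resp[i][k];
            }
            for (int k = 0; k < 3; ++k) resp[i][k] /= s;
        }
        for (int k = 0; k < 3; ++k) {                    // M-step
            double nk = 0.0, mean = 0.0, var = 0.0;
            for (std::size_t i = 0; i < n; ++i) {
                nk   += resp[i][k];
                mean += resp[i][k] * x[i];
            }
            mean /= nk;
            for (std::size_t i = 0; i < n; ++i)
                var += resp[i][k] * (x[i] - mean) * (x[i] - mean);
            m.mu[k]  = mean;
            m.var[k] = var / nk + 1e-6;   // variance floor avoids collapse
            m.w[k]   = nk / static_cast<double>(n);
        }
    }
}

// Call a genotype (0 = AA, 1 = AB, 2 = BB) by maximum posterior.
int callGenotype(double xi, const Gmm& m)
{
    int best = 0;
    double bestP = -1.0;
    for (int k = 0; k < 3; ++k) {
        double p = m.w[k] * gauss(xi, m.mu[k], m.var[k]);
        if (p > bestP) { bestP = p; best = k; }
    }
    return best;
}
```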
60

Study of the assembly, mechanics and dynamics of DNA-protein complexes, involving the development of a coarse-grained model

Éthève, Loic 01 December 2016
DNA-protein interactions are fundamental in many biological processes such as gene regulation and DNA repair. This thesis focuses on an analysis of the physical and dynamic properties of DNA-protein interfaces. In a study of four DNA-protein complexes, we have shown that DNA-protein interfaces are dynamic and that their salt bridges and hydrogen bonds break and re-form over a time scale of hundreds of picoseconds. In certain cases, this oscillation of protein side chains is able to modulate interaction specificity. We have also developed a coarse-grained model of proteins in order to deconvolute the nature of protein-DNA interactions, identifying the factors that modulate the stability and conformation of DNA and those responsible for protein-DNA recognition specificity. The design of our model can be varied, from a simple volume mimicking the protein to a more complex representation obtained by adding formal charges on polar residues, or atomic-scale side chains for key residues with particular behaviors, such as aromatic rings that intercalate between DNA base pairs.
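The abstract only outlines the coarse-grained model, so the following is a generic sketch of the kind of pair-energy term such models typically combine: a Lennard-Jones term for excluded volume plus Debye–Hückel screened electrostatics between the formal charges placed on polar residues. All parameter values are illustrative assumptions, not those of the thesis.

```cpp
#include <cmath>

// Generic coarse-grained pair energy between two beads (e.g. a protein
// residue bead and a DNA site): Lennard-Jones excluded volume plus a
// Debye–Hückel screened Coulomb term. Units: kcal/mol and Angstrom.
double pairEnergy(double r,               // bead-bead distance (Angstrom)
                  double qi, double qj,   // formal charges (units of e)
                  double epsilon = 0.5,   // LJ well depth (kcal/mol), assumed
                  double sigma   = 5.0,   // LJ diameter (Angstrom), assumed
                  double lambdaD = 8.0)   // Debye screening length, assumed
{
    const double kCoulomb = 332.06;       // kcal*Angstrom/(mol*e^2)
    double sr6 = std::pow(sigma / r, 6);
    double lj  = 4.0 * epsilon * (sr6 * sr6 - sr6);          // repulsion - attraction
    double dh  = kCoulomb * qi * qj * std::exp(-r / lambdaD) / r; // screened Coulomb
    return lj + dh;
}
```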
