31

Accelerating Hardware Simulation on Multi-cores

Nanjundappa, Mahesh 04 June 2010 (has links)
Electronic design automation (EDA) tools play a central role in bridging the productivity gap for designing complex hardware systems. However, with the increase in the size and complexity of today's design requirements, current methodologies and EDA tools are unable to effectively mitigate the further widening of this productivity gap. It is estimated that testing and verification take two-thirds of the total development time of complex hardware systems. Functional simulation forms the mainstay of the testing and verification process and is its most widely used technique. Most simulation algorithms and their implementations are designed for uniprocessor systems and cannot easily leverage the parallelism of multi-core and GPU platforms. For example, logic simulation often uses levelized sequential algorithms, whereas the discrete-event simulation frameworks for Verilog, VHDL, and SystemC employ concurrency in the form of multi-threading to give an illusion of the inherent parallelism present in circuits. However, the discrete-event model of computation requires a global notion of an event queue, which makes improving its simulation performance via parallelization even more challenging. This work investigates automatic parallelization of the simulation algorithms used to simulate hardware models. In particular, we focus on parallelizing the simulation of hardware designs described at the RTL in SystemC/HDL, with examples that clearly illustrate the parallelization. Even though multi-cores and GPUs offer parallelism, efficiently exploiting it through their programming models is not straightforward. To overcome this, we also focus our research on building intelligent translators that map simulation applications onto multi-cores and GPUs so that the complexity of the low-level programming models is hidden from the designers. / Master of Science
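To make the event-queue bottleneck concrete, here is a minimal C++ sketch of the sequential discrete-event loop the abstract describes; the names and structure are illustrative assumptions, not the thesis's implementation.

```cpp
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Minimal sketch of the global event queue at the heart of discrete-event
// simulation. Every event passes through this single, globally ordered
// structure, which is why naive multi-threading gains little: all workers
// would contend on the same queue.
struct Event {
    long long time;                  // simulation timestamp
    std::function<void()> action;    // process to resume / signal to update
    bool operator>(const Event& e) const { return time > e.time; }
};

class Scheduler {
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> queue_;
public:
    void schedule(long long t, std::function<void()> a) {
        queue_.push({t, std::move(a)});
    }
    void run() {                     // the inherently sequential loop
        while (!queue_.empty()) {
            Event e = queue_.top();
            queue_.pop();
            e.action();              // may schedule further events
        }
    }
};

int main() {
    Scheduler sim;
    sim.schedule(10, [] { /* update signal A */ });
    sim.schedule(5,  [] { /* resume process P */ });
    sim.run();                       // executes in timestamp order: t=5, then t=10
}
```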
32

Behavior recording with the scoring program MouseClick : A study in cross platform and precise timing developing

Karlsson, Erik January 2010 (has links)
This thesis deals with the problems and solutions of cross-platform development using the Mono framework as a replacement for the Microsoft .NET Framework on Linux and Mac OS X. It takes into account issues ranging from limitations in the file system to problems with deploying released programs. It also addresses the demands of precise timing and the need for efficient code on time-critical tasks in order to build a program for generating data from recordings of animals. The animals are set to perform a task, for example exploring a labyrinth or running on a rod, and are recorded on video. These videos are later reviewed by an observer who transcribes the recordings into data based on predefined behaviors and the time and frequency with which the animal expresses them.
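As an illustration of the precise-timing concern, a minimal C++ sketch of scoring behaviors against a monotonic clock follows; all names are hypothetical and not taken from MouseClick.

```cpp
#include <chrono>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical sketch: timestamp scored behaviors with steady_clock, which
// is monotonic and immune to wall-clock adjustments -- the property that
// matters for the precise-timing requirements described above.
struct ScoredEvent {
    std::string behavior;
    double seconds_from_start;
};

int main() {
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    std::vector<ScoredEvent> log;

    auto score = [&](const std::string& behavior) {
        std::chrono::duration<double> dt = clock::now() - start;
        log.push_back({behavior, dt.count()});
    };

    score("rearing");    // in the real tool these would come from key presses
    score("grooming");

    for (const auto& e : log)
        std::cout << e.seconds_from_start << "s\t" << e.behavior << "\n";
}
```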
34

Elektronická informační tabule LCD / Electronic notice board LCD

Bureš, Michal January 2019 (has links)
This diploma thesis surveys existing electronic information systems. The purpose of the survey is to learn what professional systems offer their users and what users expect from them. Building on this knowledge, the thesis then presents the design of its own system, which can serve as an alternative to the professional solutions available on the market. After the system design, the thesis proceeds chronologically through the tools selected for the actual implementation of the assigned task, an electronic information board, covering both the hardware and the software parts, which form the majority of this thesis. Practical results are presented at the end of the work.
35

Akcelerace fotoakustického snímkování / Acceleration of Photoacoustic Imaging

Nedeljković, Sava January 2020 (has links)
The main goal of this thesis is to design a new method for reconstructing images from photoacoustic imaging data. Photoacoustic imaging is a very popular non-invasive imaging method based on detecting ultrasonic waves induced by a laser beam. The imaging process generates a large amount of data, which makes image reconstruction very time-consuming. This thesis demonstrates image reconstruction using back projection, an algorithm simple enough to be adapted to modern processor architectures that allow various kinds of optimized computation. Two variants of the algorithm were designed: one from the perspective of a pixel and one from the perspective of the sensor that detects the ultrasonic waves. Both variants were implemented in three different ways: using vector parallelism, thread parallelism, and parallelism on a graphics card (GPU). All three implementations of both variants were tested, and the results were compared with the reconstruction produced by the time-reversal algorithm, which is more accurate but many times slower. The results showed that GPU parallelism offers the fastest computation, roughly 200 times faster than the time-reversal algorithm, and can therefore also be used in real-time applications.
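A minimal C++ sketch of the pixel-centric delay-and-sum back projection follows; the sensor-centric variant inverts the loop nest. Geometry, sampling parameters, and names are illustrative assumptions, not the thesis code.

```cpp
#include <cmath>
#include <vector>

struct Sensor { float x, y; };

// rf[s][t] is the recorded signal of sensor s at sample t. Each pixel
// accumulates independently, so the outer loops are what the vector,
// thread, and GPU implementations parallelize.
std::vector<float> backproject(const std::vector<std::vector<float>>& rf,
                               const std::vector<Sensor>& sensors,
                               int nx, int ny, float dx,
                               float sound_speed, float sampling_rate) {
    std::vector<float> image(static_cast<size_t>(nx) * ny, 0.0f);
    for (int iy = 0; iy < ny; ++iy) {
        for (int ix = 0; ix < nx; ++ix) {
            float px = ix * dx, py = iy * dx, sum = 0.0f;
            for (size_t s = 0; s < sensors.size(); ++s) {
                // time of flight from pixel to sensor -> sample index
                float dist = std::hypot(px - sensors[s].x, py - sensors[s].y);
                size_t t = static_cast<size_t>(dist / sound_speed * sampling_rate);
                if (t < rf[s].size()) sum += rf[s][t];   // delay-and-sum
            }
            image[static_cast<size_t>(iy) * nx + ix] = sum;
        }
    }
    return image;
}
```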
36

Utilizing Multi-Core for Optimized Data Exchange Via VoIP

Azami Ghadim, Sohrab January 2016 (has links)
In the contemporary IT industry, multi-tasking solutions are highly regarded as optimal because hardware is equipped with multi-core CPUs. With multi-core technology, CPUs run at lower frequencies while delivering the same or better performance for the system as a whole. This thesis work takes advantage of a multi-threaded architecture to run different tasks on different cores, such as SIP signaling and messaging to establish one or more SIP calls, and capturing voice and medical data and packetizing them to be streamed over the Internet to other SIP agents. VoIP is designed to stream voice over IP. Inter-protocol communication and cooperation, for instance among the SIP, SDP, RTP, and RTCP protocols, is required to establish a SIP connection and afterwards stream media over the Internet. We use Microsoft COM technology to improve the C++ component design: it allows us to design and develop code once and run it on different platforms. Using VC++ helps us reduce software design and development time, and we follow established software engineering design standards. VoIP technology uses protocols such as the SIP signaling protocol to locate the user agents that communicate with each other. PJSIP is a library that allows developers to extend their designs with SIP capability; we use it to register our own VoIP module with a SIP server over the Internet and to locate other user agents. We implement and use the already-designed iRTP protocol instead of RTP to stream media over the Internet, which reduces RTP packet delays and improves Quality of Service (QoS). Since medical data is critical and must not be lost, iRTP guarantees no loss of medical data. If we wanted to stream voice only, we would not need iRTP, because RTP is well suited to voice applications; but given increasing Internet traffic, we need a reliable protocol that can detect packet loss of medical data, and iRTP resolves this issue while improving QoS. This thesis focuses on streaming medical data and medical voice calls using VoIP, even over small bandwidths and during high-traffic periods. Its main contribution is the parallel design of iRTP and the implementation of that design for multi-core technology, using multi-threading to speed up the streaming of medical data and medical voice calls. According to our tests, measurements, and analyses, the parallel design of iRTP and its multithreaded VC++ implementation reduce the average delay by 71.1% when using iRTP for audio and medical data instead of the current practice of one RTP stream for audio and multiple TCP streams for medical data.
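A hedged C++ sketch of the described task split follows, with signaling and media streaming on separate threads (and thus cores); the function bodies are placeholders, not PJSIP or iRTP API calls.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Illustrative sketch only: the real work (SIP REGISTER/INVITE handling,
// voice/medical-data capture, iRTP packetization) is elided.
std::atomic<bool> running{true};

void handle_signaling() {
    while (running.load()) {
        // poll and dispatch SIP events here
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

void stream_media() {
    while (running.load()) {
        // capture, packetize, and send audio + medical data here
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main() {
    std::thread sip(handle_signaling);   // one core: signaling
    std::thread media(stream_media);     // another core: media streaming
    // ... run until the call ends ...
    running = false;
    sip.join();
    media.join();
}
```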
37

Quantum well state of cubic inclusions in hexagonal silicon carbide studied with ballistic electron emission microscopy

Ding, Yi 17 June 2004 (has links)
No description available.
38

Modeling and Simulation of the Vector-Borne Dengue Disease and the Effects of Regional Variation of Temperature in the Disease Prevalence in Homogenous and Heterogeneous Human Populations

Bravo-Salgado, Angel D 08 1900 (has links)
The history of mitigation programs to contain vector-borne diseases is a story of successes and failures. Due to the complex interplay among the multiple factors that determine disease dynamics, general principles for timely and specific intervention to reduce incidence or eradicate life-threatening diseases have yet to be determined. This research discusses computational methods developed to assist in understanding the complex relationships affecting vector-borne disease dynamics. A computational framework has been conceived, designed, and implemented to help public health practitioners explore the dynamics of vector-borne diseases such as malaria and dengue in homogeneous and heterogeneous populations. The framework integrates a stochastic computational model of interactions to simulate horizontal disease transmission; the intent of the modeling has been to incorporate stochasticity into the simulation of disease progression while reducing the number of interactions needed to simulate an outbreak. Although reducing the number of interactions improves computation time, realizing the remaining interactions can still be computationally expensive, so multi-threading was applied to improve performance over the original computational model, and the multi-threading experimental results are reported. In addition to the contact model, the modeling of biological processes specific to the pathogen-carrying vector has been integrated to increase the specificity of the vector-borne disease model. Last, automation for requesting, retrieving, parsing, and storing weather data and geospatial information from federal agencies has been implemented to study the differences between homogeneous and heterogeneous populations.
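A short C++ sketch of one stochastic transmission step in a contact model of this kind follows; the states, rates, and names are assumptions for illustration, not the dissertation's framework.

```cpp
#include <random>
#include <vector>

// Illustrative SIR-style host states for a vector-borne contact model.
enum class State { Susceptible, Infected, Recovered };

// One stochastic step: each infectious vector bite targets a random host,
// and only bites on susceptible hosts can realize a new infection. Drawing
// per-bite Bernoulli outcomes is where the stochasticity enters.
void transmission_step(std::vector<State>& humans, int infectious_bites,
                       double p_transmit, std::mt19937& rng) {
    if (humans.empty()) return;
    std::uniform_int_distribution<size_t> pick(0, humans.size() - 1);
    std::bernoulli_distribution infect(p_transmit);
    for (int b = 0; b < infectious_bites; ++b) {
        size_t h = pick(rng);
        if (humans[h] == State::Susceptible && infect(rng))
            humans[h] = State::Infected;
    }
}
```

In a temperature-aware variant, `p_transmit` and the bite count would themselves be functions of regional temperature, which is the kind of dependence the dissertation studies.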
39

Résolution de grands systèmes linéaires issus de la méthode des éléments finis sur des calculateurs massivement parallèles / Solving large linear systems arising from the finite element method on massively parallel computers

Gueye, Ibrahima 15 December 2009 (has links) (PDF)
This study addresses the solution of large sparse linear systems on massively parallel computers. These linear systems, frequently encountered in the numerical simulation of structural mechanics problems by finite element codes, are very costly to solve in both computation time and memory. In this thesis, we develop a two-level parallelism and integrate it into FETI-type domain decomposition methods. The work is organized around three main chapters. First, we implement a direct solver for sparse linear systems that may be symmetric or non-symmetric, real or complex, with single or multiple right-hand sides. The implementation, based on a nested-dissection reordering technique, is complemented by a feature useful in many domain decomposition methods (construction of a preconditioner or formulation of the FETI operator): the detection of zero-energy modes of singular systems. Second, we parallelize the direct solver through a shared-memory parallelism model (multi-threading) to take advantage of the new multi-core processors. Third, we integrate this multi-threaded version of the solver into the FETI methods to solve the local problems in parallel. The results of this study demonstrate the usefulness of the work carried out and the value of using a robust and efficient parallel direct solver as the local solver in FETI methods. All of this can open access to new classes of problems in structural analysis. It would be worthwhile to revisit the coarse-grained parallelism between subdomains in the FETI methods; this could consist in using the multiple right-hand side version of the direct solver to further improve the iterative method used in solving the interface problem.
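A brief C++ sketch of the two-level parallelism follows; threads stand in for the coarse-grained subdomain level (MPI processes in practice), and the multi-threaded local factorization is elided. All names are illustrative.

```cpp
#include <functional>
#include <thread>
#include <vector>

// Coarse level: one worker per subdomain, as in FETI's decomposition.
// Fine level: each local factorization would itself be multi-threaded
// inside factor_local() -- the shared-memory parallelism of the thesis.
struct Subdomain { /* local stiffness matrix, right-hand side(s), ... */ };

void factor_local(Subdomain& sd) {
    // nested-dissection ordering + multi-threaded sparse factorization,
    // plus zero-energy-mode detection for floating subdomains
}

void factor_all(std::vector<Subdomain>& subdomains) {
    std::vector<std::thread> workers;
    for (auto& sd : subdomains)
        workers.emplace_back(factor_local, std::ref(sd));
    for (auto& w : workers) w.join();
}
```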
40

Methods for Analyzing Genomes

Ståhl, Patrik L. January 2010 (has links)
The human genome reference sequence has given us a two-dimensional blueprint of our inherited code of life, but we need to employ modern-day technology to expand our knowledge into a third dimension. Inter-individual and intra-individual variation has been shown to be larger than anticipated, and the mode of genetic regulation more complex. Therefore, the methods that were once used to explain our fundamental constitution are now used to decipher our differences. Over the past four years, throughput from DNA-sequencing platforms has increased a thousand-fold, bearing evidence of rapid development in the methods used to study DNA and the genomes it constitutes. The work presented in this thesis has been carried out as an integrated part of this technological evolution, contributing to it and applying the resulting solutions to answer difficult biological questions. Papers I and II describe a novel approach for microarray readout based on immobilization of magnetic particles, applicable to diagnostics. As benchmarked on canine mitochondrial DNA and on human genomic DNA from individuals with cystic fibrosis, it allows for visual interpretation of genotyping results without the use of machines or expensive equipment. Paper III outlines an automated and cost-efficient method for enrichment and titration of clonally amplified DNA libraries on beads. The method uses fluorescent labeling and a flow cytometer to separate DNA-carrying beads from empty ones; at the same time, the fraction of either bead type is recorded, and a titration curve can be generated. In Paper IV we combined the highly discriminating multiplex genotyping of trinucleotide threading with the digital readout made possible by massively parallel sequencing. From this we were able to characterize the allelic distribution of 88 obesity-related SNPs in a population of 462 individuals enrolled at a childhood obesity center. Paper V employs the throughput of present-day DNA sequencing as it investigates deep into sun-exposed skin to find clues on the effects of sunlight over the course of a summer holiday. The tumor suppressor gene p53 was targeted, only to find that despite its well-documented involvement in cancer progression, an estimated 35,000 novel sun-induced persistent p53 mutations are added and phenotypically tolerated in the skin of every individual every year. The last paper, VI, describes a novel approach for finding breast cancer biomarkers. In this translational study we used differential protein expression profiles and sequence capture to select and enrich for 52 candidate genes in DNA extracted from ten tumors. Two of the genes turned out to harbor protein-altering mutations in multiple individuals.
