51

On testing concurrent systems through contexts of queues

Huo, Jiale. January 2006 (has links)
Concurrent systems, including asynchronous circuits, computer networks, and multi-threaded programs, have important applications, but they are also very complex and expensive to test. This thesis studies how to test concurrent systems through contexts consisting of queues. Queues, modeling buffers and communication delays, are an integral part of the test settings for concurrent systems. However, queues can also distort the behavior of the concurrent system as observed by the tester, so one should take into account the queues when defining conformance relations or deriving tests. On the other hand, queues can cause state explosion, so one should avoid testing them if they are reliable or have already been tested. To solve these problems, we propose two different solutions. The first solution is to derive tests using some test selection criteria such as test purposes, fault coverage, and transition coverage. The second solution is to compensate for the problems caused by the queues so that testers do not discern the presence of the queues in the first place. Unifying the presentation of the two solutions, we consider in a general testing framework partial specifications, various contexts, and a hierarchy of conformance relations. Case studies on test derivation for asynchronous circuits, communication protocols, and multi-threaded programs are presented to demonstrate the applications of the results.
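The distortion described in this abstract can be illustrated with a toy sketch (channel and message names are hypothetical, not from the thesis): each output channel of the implementation passes through its own FIFO queue, so even though each queue preserves per-channel order, the tester can observe a global ordering across channels that differs from the order in which outputs were actually emitted.

```python
from queue import Queue

# Toy model: an implementation emits on two output channels; the tester
# observes each channel through its own FIFO queue, so the *relative*
# order of outputs across channels is lost even though each queue is FIFO.
def run_implementation(out_a: Queue, out_b: Queue) -> list:
    actual = []                      # true global emission order
    for msg in ["a1", "b1", "a2"]:
        (out_a if msg.startswith("a") else out_b).put(msg)
        actual.append(msg)
    return actual

def observe(out_a: Queue, out_b: Queue) -> list:
    # The tester happens to drain channel B first.
    seen = []
    while not out_b.empty():
        seen.append(out_b.get())
    while not out_a.empty():
        seen.append(out_a.get())
    return seen

qa, qb = Queue(), Queue()
actual = run_implementation(qa, qb)
observed = observe(qa, qb)
```

Here `actual` and `observed` disagree on the position of `b1`, which is exactly why conformance relations must account for the queues rather than compare raw observation order.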
52

Efficient synchronization for a large-scale multi-core chip architecture

Zhu, Weirong. January 2007 (has links)
Thesis (D.Eng.)--University of Delaware, 2007. / Principal faculty advisor: Guang R. Gao, Dept. of Electrical and Computer Engineering. Includes bibliographical references.
53

Development of embedded systems in a multi-core or multiprocessor environment using Real Time Java

Δημητρακόπουλος, Γεώργιος 20 October 2010 (has links)
The purpose of this thesis is to study how an application can exploit the multiple processors available in a system. The system under study is the Festo MPS, a distributed embedded real-time system consisting of three subunits. The system has been implemented in Real Time Java, an extension of Java that meets real-time requirements. The application runs on a real-time Java Virtual Machine, which in turn runs on a Linux-type operating system. Each layer provides mechanisms for utilizing the available processors. The thesis explores ways to ease the programmer's task by writing more efficient, smaller, and cleaner parallel code, as well as the options that let the programmer specify exactly how the code executes when required. The ultimate goal is to run a simulation of the system in which each subunit executes on its own processor. This is achieved through operating-system calls made via the Java Native Interface.
54

Enabling Multi-threaded Applications on Hybrid Shared Memory Manycore Architectures

January 2014 (has links)
abstract: As the number of cores per chip increases, maintaining cache coherence becomes prohibitive for both power and performance. Non Coherent Cache (NCC) architectures do away with hardware-based cache coherence, but they become difficult to program. Some existing architectures provide a middle ground by providing some shared memory in the hardware. Specifically, the 48-core Intel Single-chip Cloud Computer (SCC) provides some off-chip (DRAM) shared memory and some on-chip (SRAM) shared memory. We call such architectures Hybrid Shared Memory, or HSM, manycore architectures. However, how to efficiently execute multi-threaded programs on HSM architectures is an open problem. To be able to execute a multi-threaded program correctly on HSM architectures, the compiler must: i) identify all the shared data and map it to the shared memory, and ii) map the frequently accessed shared data to the on-chip shared memory. This work presents a source-to-source translator written using CETUS that identifies a conservative superset of all the shared data in a multi-threaded application and maps it to the shared memory such that it enables execution on HSM architectures. / Dissertation/Thesis / Masters Thesis Computer Science 2014
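The two-step mapping the abstract describes can be sketched abstractly (variable names and the access-log representation are hypothetical; the thesis performs this analysis at the source level with CETUS): any datum touched by more than one thread is conservatively marked shared, and the most frequently accessed shared data is placed in the small on-chip shared memory.

```python
from collections import defaultdict

# Toy sketch of the mapping analysis: given per-thread access logs,
# conservatively mark any datum touched by more than one thread as
# shared, then place the hottest shared data in the limited on-chip
# shared memory and the rest in off-chip shared memory.
def map_to_hsm(accesses, onchip_slots):
    # accesses: list of (thread_id, variable) pairs
    threads = defaultdict(set)
    counts = defaultdict(int)
    for tid, var in accesses:
        threads[var].add(tid)
        counts[var] += 1
    shared = {v for v, t in threads.items() if len(t) > 1}
    hot_first = sorted(shared, key=lambda v: -counts[v])
    onchip = set(hot_first[:onchip_slots])
    offchip = shared - onchip
    return onchip, offchip

accesses = [(0, "x"), (1, "x"), (0, "x"), (1, "y"), (2, "y"), (0, "z")]
onchip, offchip = map_to_hsm(accesses, onchip_slots=1)
```

Note that `z`, accessed by a single thread only, stays private and never enters shared memory at all, which is what keeps the shared-data superset conservative but not wasteful.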
55

Language-independent environment to support the identification of duplicate tuples through phonetic and numeric similarity: multithreading-based algorithm optimization

Andrade, Tiago Luís de [UNESP] 05 August 2011 (has links) (PDF)
In order to ensure the reliability and consistency of data stored in databases, the data-cleaning stage is placed at the beginning of the process of Knowledge Discovery in Databases (KDD). This step is significant because it eliminates problems that strongly affect the reliability of the extracted knowledge, such as missing values, null values, duplicate tuples, and out-of-domain values. It is an important step aimed at correcting and adjusting the data for the subsequent stages. Within this perspective, techniques that address the problems mentioned are presented. This work characterizes the detection of duplicate tuples in databases, presents the main algorithms based on distance metrics and some tools designed for this task, and develops a language-independent algorithm for identifying duplicate records based on phonetic and numeric similarity, implemented with multithreading to improve the algorithm's execution time. Tests show that the proposed algorithm achieves better results in identifying duplicate records than existing phonetic algorithms, which ensures a cleaner database.
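The general idea of phonetic duplicate detection combined with multithreading can be sketched as follows (this uses a simplified Soundex, not the thesis's language-independent algorithm, and the record names are made up): records whose names share a phonetic key are flagged as candidate duplicates, and key computation is spread across a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

# Simplified Soundex-style letter-to-digit codes.
CODES = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
         **dict.fromkeys("dt", "3"), "l": "4", **dict.fromkeys("mn", "5"),
         "r": "6"}

def soundex(word: str) -> str:
    # Keep the first letter; encode the rest, collapsing adjacent repeats.
    word = word.lower()
    out = []
    prev = CODES.get(word[0], "")
    for c in word[1:]:
        d = CODES.get(c, "")
        if d and d != prev:
            out.append(d)
        prev = d
    return (word[0].upper() + "".join(out) + "000")[:4]

def duplicate_groups(names, workers=4):
    # Compute phonetic keys in parallel, then bucket names by key.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        keys = list(pool.map(soundex, names))
    buckets = {}
    for name, key in zip(names, keys):
        buckets.setdefault(key, []).append(name)
    return [g for g in buckets.values() if len(g) > 1]

groups = duplicate_groups(["Robert", "Rupert", "Ashcraft", "Tymczak"])
```

Because the per-record key computation is independent, it parallelizes trivially, which is the same property the thesis exploits to reduce execution time.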
56

Visualization of complex events in multithreading systems

Iakovenko, Volodymyr January 2013 (has links)
Today more and more applications are becoming multithreaded, because parallel processing across multiple threads improves program performance on computer systems that have multiple CPUs. As the number of multithreaded applications grows, so does the number of problems that can occur during program execution. There are many tools that help with multithreading debugging. They all differ from each other, and each is good at solving the specific kinds of problems it targets, but much work remains to be done in the area of multithreaded debugging. The aim of this project is to create a solution with which users can define events that are executed in their single- or multithreaded applications and should be visualized for later debugging. This helps the user see how the application works at a logical level.
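The kind of user-defined event recording described here could look roughly like the following sketch (the `EventLog` API is hypothetical, invented for illustration): threads append named, timestamped events to a lock-protected log, which a visualizer could later render as a per-thread timeline.

```python
import threading
import time
from dataclasses import dataclass, field

# Hypothetical sketch of user-defined event logging for visualization:
# each event records (timestamp, thread name, event name).
@dataclass
class EventLog:
    _events: list = field(default_factory=list)
    _lock: threading.Lock = field(default_factory=threading.Lock)

    def record(self, name: str) -> None:
        with self._lock:
            self._events.append(
                (time.monotonic(), threading.current_thread().name, name))

    def timeline(self):
        # Chronological order across all threads, for timeline rendering.
        return sorted(self._events)

log = EventLog()

def worker():
    log.record("start")
    log.record("done")

threads = [threading.Thread(target=worker, name=f"w{i}") for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
events = log.timeline()
```

The lock makes concurrent appends safe, and sorting by monotonic timestamp produces a single logical-level view of what each thread did and when.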
57

Forward plus rendering performance using the GPU vs. CPU multi-threading: A comparative study of the culling process in Forward plus

Rahm, Marcus January 2017 (has links)
Context. Rendering techniques in games aim to shade the scene at as high a quality as possible while being as efficient as possible. More advanced tools, such as compute shaders, have enabled further speed-ups of the shading process. One rendering technique that makes use of this is Forward plus rendering, which uses a compute shader to perform a culling pass over all the lights. However, not all computers can make use of compute shaders. Objectives. The aim of this thesis is to investigate the performance of using the CPU to perform the light culling required by the Forward plus rendering technique, comparing it to the performance of a GPU implementation. With that, the aim is also to explore whether the CPU can be an alternative solution for the light culling in Forward plus. Methods. The standard Forward plus is implemented using a compute shader. Forward plus is then implemented using CPU multithreading to perform the light culling. Both versions are evaluated by sampling the frames per second during tests with specific properties. Results. The results show a fairly significant difference in performance between the CPU and GPU implementations of Forward plus: with 256 lights rendered, the GPU implementation achieves 126% more frames per second than the CPU implementation. However, the results show that the CPU implementation is viable, as its performance stays above 30 frames per second with fewer than 2048 lights in the scene. It also outperforms basic Forward rendering: with 64 lights, the CPU implementation delivers 133% more frames per second than basic Forward rendering. Conclusions. A multi-threaded CPU can be used to cull lights for Forward plus rendering, and it is a viable choice over basic Forward rendering.
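The CPU-side light culling being compared can be sketched in miniature (a 2-D, axis-aligned version with a made-up scene; real Forward plus culls against per-tile view frusta): the screen is split into tiles, and worker threads compute which lights overlap each tile so the shading pass only loops over the per-tile light lists.

```python
from concurrent.futures import ThreadPoolExecutor

TILE = 16  # tile size in pixels

def cull_tile(args):
    (tx, ty), lights = args
    x0, y0 = tx * TILE, ty * TILE
    hits = []
    for i, (lx, ly, r) in enumerate(lights):
        # Light bounding box vs. tile rectangle overlap test.
        if (lx + r >= x0 and lx - r < x0 + TILE and
                ly + r >= y0 and ly - r < y0 + TILE):
            hits.append(i)
    return (tx, ty), hits

def cull(lights, tiles_x, tiles_y, workers=4):
    # Each tile is an independent job, so the culling pass parallelizes
    # cleanly across CPU threads.
    tiles = [((tx, ty), lights) for ty in range(tiles_y) for tx in range(tiles_x)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(cull_tile, tiles))

lights = [(8, 8, 4), (40, 8, 4)]        # (x, y, radius)
per_tile = cull(lights, tiles_x=4, tiles_y=1)
```

Since every tile's light list is computed independently, the work divides evenly across cores, which is what makes a multithreaded CPU implementation a plausible fallback when compute shaders are unavailable.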
58

Network Processor specific Multithreading tradeoffs

Boivie, Victor January 2005 (has links)
Multithreading is a processor technique that can effectively hide the long latencies caused by memory accesses, coprocessor operations, and the like. While this looks promising, there is an additional hardware cost that varies with, for example, the number of contexts to switch between and the technique used for switching, and this may limit the possible gain of multithreading. Network processors are, traditionally, multiprocessor systems that share many common resources, such as memories and coprocessors, so the potential gain of multithreading could be high for these applications. On the other hand, the increase in hardware will be relatively large, since the rest of the processor is fairly small; instead of a multithreaded processor, higher performance might be achieved by simply using more processors. As a solution, a simulator was built in which a system can be effectively modelled and whose simulation results can hint at the optimal solution early in the design phase of a network processor system. A theoretical background on multithreading, network processors, and more is also provided in the thesis.
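The latency-hiding tradeoff the simulator explores can be captured in a back-of-the-envelope model (the formula and numbers are illustrative, not from the thesis): a thread computes for C cycles, then stalls for L cycles on a memory or coprocessor access; with n hardware contexts and a switch cost of S cycles, utilization approaches C / (C + S) once enough contexts exist to cover the stall.

```python
# Illustrative utilization model for a multithreaded core:
#   C = compute cycles per burst, L = stall latency,
#   S = context-switch cost, n = number of hardware contexts.
def utilization(C, L, S, n):
    if (n - 1) * (C + S) >= L:          # stall fully hidden by other contexts
        return C / (C + S)
    return n * C / (C + L)              # stall only partially hidden

# One context: long stalls dominate and the core mostly idles.
single = utilization(C=10, L=90, S=2, n=1)
# Nine contexts with cheap switches cover the stall completely.
multi = utilization(C=10, L=90, S=2, n=9)
```

The model also shows why the abstract's caveat matters: the gain saturates at C / (C + S), so past the saturation point extra contexts buy nothing while still costing hardware, and spending that area on another small processor may win instead.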
59

Analyzing Symbiosis on SMT Processors Using a Simple Co-scheduling Scheme

Lundmark, Elias, Persson, Chris January 2017 (has links)
Simultaneous Multithreading (SMT) allows for more efficient processor utilization by co-executing multiple threads on a single processing core, increasing system efficiency and throughput. Co-executing threads share the functional units of a processing core, and if the threads use the same functional units, efficiency decreases: this is a scenario in which SMT cannot convert thread-level parallelism (TLP) into instruction-level parallelism (ILP). In previous work, de Blanche and Lundqvist propose a simple co-scheduling principle: co-scheduling multiple instances of the same job should be considered a bad co-schedule, as they are more likely to use the same resources. In this thesis, we apply their principle to SMT processors, with the rationale that identical threads use the same functional units within a processing core. We demonstrate that by disallowing jobs from co-executing with other instances of themselves, we enable SMT to convert TLP to ILP more often, and that this holds when jobs cannot exploit ILP by themselves. Intuitively, we also show that slowing down ILP by doing the opposite can alleviate the stress on the memory system.
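The co-scheduling principle can be sketched as a pairing rule (job names are hypothetical benchmark labels, not from the thesis): when filling a 2-way SMT core, never pair a job with another instance of the same program, since identical threads contend for the same functional units.

```python
# Simple co-scheduler sketch: pair jobs onto 2-way SMT cores, preferring
# partners that are *different* programs; a job runs alone rather than
# alongside a clone of itself.
def pair_jobs(jobs):
    pending = list(jobs)
    pairs = []
    while pending:
        a = pending.pop(0)
        partner = next((j for j in pending if j != a), None)
        if partner is not None:
            pending.remove(partner)
            pairs.append((a, partner))
        else:
            pairs.append((a, None))     # no dissimilar partner available
    return pairs

pairs = pair_jobs(["gcc", "gcc", "mcf", "mcf"])
```

With the input above, the rule yields two mixed pairs instead of two homogeneous ones, giving each core a better chance of converting TLP into ILP across different functional units.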
60

Evaluating the Scalability of SDF Single-chip Multiprocessor Architecture Using Automatically Parallelizing Code

Zhang, Yuhua 12 1900 (has links)
Advances in integrated circuit technology continue to provide more and more transistors on a chip. Computer architects are faced with the challenge of finding the best way to translate these resources into high performance. The challenge in the design of the next generation CPU (central processing unit) lies not in trying to use up the silicon area, but in finding smart ways to make use of the wealth of transistors now available. In addition, the next generation architecture should offer high throughput, scalability, modularity, and low energy consumption, instead of being suitable for only one class of applications or users, or only emphasizing a faster clock rate. A program exhibits different types of parallelism: instruction level parallelism (ILP), thread level parallelism (TLP), or data level parallelism (DLP). Likewise, architectures can be designed to exploit one or more of these types of parallelism. It is generally not possible to design architectures that take advantage of all three types of parallelism without using very complex hardware structures and complex compiler optimizations. We present the state-of-the-art SDF (scheduled dataflow) architecture, which exploits as much TLP as the application supplies. We implement an SDF single-chip multiprocessor constructed from simpler processors and execute automatically parallelized applications on it. SDF has many desirable features, such as high throughput, scalability, and low power consumption, which meet the requirements of the next generation of CPU design. Compared with superscalar, VLIW (very long instruction word), and SMT (simultaneous multithreading) architectures, the experimental results show that for applications with very little parallelism SDF is comparable to the other architectures, while for applications with large amounts of parallelism SDF outperforms them.
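The dataflow scheduling idea underlying SDF can be illustrated with a toy interpreter (this is an illustration of dataflow firing rules in general, not the SDF architecture itself; the instruction format is invented): an instruction becomes ready to execute only once all of its input operands have been produced, so independent instructions can be scheduled in any order without a global program counter.

```python
import operator

# Toy dataflow executor: each instruction is (dest, op, src1, src2) and
# fires only when both source operands have been produced.
def dataflow_execute(program, inputs):
    values = dict(inputs)
    pending = list(program)
    order = []                          # firing order, for inspection
    while pending:
        for instr in pending:
            dest, op, a, b = instr
            if a in values and b in values:   # operands ready: fire
                values[dest] = op(values[a], values[b])
                order.append(dest)
                pending.remove(instr)
                break
        else:
            raise RuntimeError("deadlock: no instruction ready")
    return values, order

program = [
    ("d", operator.add, "b", "c"),      # depends on b and c below
    ("b", operator.mul, "x", "x"),
    ("c", operator.add, "x", "y"),
]
values, order = dataflow_execute(program, {"x": 3, "y": 4})
```

Note that `d` is listed first but fires last, because its operands are produced by the other two instructions; data availability, not program order, drives the schedule, which is what lets a dataflow machine spread independent work across simple processors.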
