About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Features of a Multi-Threaded Memory Allocator

Wasik, Ayelet. January 2008.
Multi-processor computers are becoming increasingly popular and are important for improving application performance. Providing high-performance memory management is important for multi-threaded programs. This thesis looks at dynamic memory allocation in concurrent C and C++ programs. The challenges facing the design of any memory allocator include minimizing fragmentation and promoting good locality. A multi-threaded memory allocator is also concerned with minimizing contention, providing mutual exclusion, avoiding false sharing, and preventing heap blowup (a form of fragmentation). Several potential features are identified in existing multi-threaded memory allocators. These features include per-thread heaps with a global heap, object ownership, object containers, thread-local free-list buffers, remote free-lists, allocation buffers, and lock-free operations. When used in different combinations, these features can solve most of the challenges facing a multi-threaded memory allocator. Using a test suite composed of both single- and multi-threaded benchmark programs, several existing memory allocators and a set of new allocators are compared. It is determined that different features address different multi-threaded issues in the memory allocator with respect to performance, scaling, and fragmentation. Finally, recommendations are made for the design of a general-purpose memory allocator.
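As a point of reference for the first feature combination named above, the following minimal C++ sketch shows a per-thread heap backed by a global heap: allocations and frees normally touch only the thread-local free list, and the locked global heap is consulted only when that list runs dry. The names, the single fixed size class, and the unbounded local list are simplifying assumptions for illustration, not the allocator designs evaluated in the thesis.

    // Illustrative sketch: per-thread free lists backed by one global heap.
    // Assumes a single fixed size class; names are invented for this example.
    #include <cstdlib>
    #include <mutex>

    constexpr std::size_t kBlockSize = 64;   // assumed size class
    struct Block { Block* next; };

    class GlobalHeap {
        std::mutex lock_;
        Block* free_ = nullptr;
    public:
        Block* pop() {                       // hit only when a local list is empty
            std::lock_guard<std::mutex> g(lock_);
            if (!free_) return static_cast<Block*>(std::malloc(kBlockSize));
            Block* b = free_; free_ = b->next; return b;
        }
        void push(Block* b) {
            std::lock_guard<std::mutex> g(lock_);
            b->next = free_; free_ = b;
        }
    };

    GlobalHeap g_heap;

    class ThreadHeap {
        Block* free_ = nullptr;              // thread-local: no lock, no contention
    public:
        void* allocate() {
            if (free_) { Block* b = free_; free_ = b->next; return b; }
            return g_heap.pop();             // refill from the shared heap
        }
        void deallocate(void* p) {
            Block* b = static_cast<Block*>(p);
            b->next = free_; free_ = b;      // a real allocator would cap this list and
        }                                    // return surplus blocks to g_heap to avoid
    };                                       // heap blowup

    thread_local ThreadHeap t_heap;

    void* my_alloc()       { return t_heap.allocate(); }
    void  my_free(void* p) { if (p) t_heap.deallocate(p); }

Object ownership, remote free-lists, and the other features listed in the abstract would extend this skeleton, for example by returning a freed block to the heap of the thread that originally allocated it rather than to the freeing thread's list.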
2

Machine Assisted Reasoning for Multi-Threaded Java Bytecode / Datorstödda resonemang om multi-trådad Java-bytekod

Lagerkvist, Mikael Zayenz. January 2005.
In this thesis an operational semantics for a subset of the Java Virtual Machine (JVM) is developed and presented. The subset contains standard operations such as control flow, computation, and memory management. In addition, the subset contains a treatment of parallel threads of execution. The operational semantics are embedded into a µ-calculus-based proof assistant, the VeriCode Proof Tool (VCPT). VCPT has been developed at the Swedish Institute of Computer Science (SICS) and has powerful features for proving inductive assertions. Some examples of proving properties of programs using the embedding are presented.
3

TENA Software Decommutation System

Wigent, Mark A.; Mazzario, Andrea M.
The Test and Training Enabling Architecture (TENA) is implemented within the TENA Software Decommutation System (TSDS) in order to bring TENA as close as possible to the sensor interface. Key attributes of TSDS include:
• TSDS is a software-based approach to telemetry stream decommutation implemented in Java. This offers technical advantages such as platform independence and portability.
• TSDS uses auto code generation technologies to further reduce the effort associated with updating decommutation systems to support new telemetry stream definitions. Users of TSDS within the range are not required to have detailed knowledge of proprietary protocols, nor are they required to understand how to implement decommutation in software. The use of code generation in software decommutation offers potential cost savings throughout the entire T&E community.
• TSDS offers a native TENA interface so that telemetry data can be published directly into TENA object models.
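As a rough illustration of what software decommutation of a telemetry stream involves, the hypothetical C++ fragment below extracts one fixed-position measurement word from a raw frame. The field layout and names are invented for this example; in TSDS such extraction code would be produced by the auto code generator from the telemetry stream definition rather than written by hand.

    // Hypothetical example of field extraction from a raw telemetry frame.
    // The layout, endianness, and names are assumptions for illustration only.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct FieldDef {
        std::size_t byteOffset;   // where the field starts in the frame
        std::size_t bitLen;       // how many low-order bits are meaningful
    };

    uint32_t extractField(const std::vector<uint8_t>& frame, const FieldDef& f) {
        uint32_t value = 0;
        std::size_t nBytes = (f.bitLen + 7) / 8;
        for (std::size_t i = 0; i < nBytes; ++i)
            value = (value << 8) | frame.at(f.byteOffset + i);   // big-endian assumed
        return (f.bitLen >= 32) ? value : (value & ((1u << f.bitLen) - 1));
    }

A value decommutated this way would then be published into the corresponding attribute of a TENA object model.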
4

PERFORMANCE-AWARE RESOURCE MANAGEMENT OF MULTI-THREADED APPLICATIONS FOR MANY-CORE SYSTEMS

Olsen, Daniel. 01 August 2016.
Future integrated systems will contain billions of transistors, composing tens to hundreds of IP cores. Modern computing platforms take advantage of this manufacturing technology advancement and are moving from Multi-Processor Systems-on-Chip (MPSoC) towards Many-Core architectures employing high numbers of processing cores. These hardware changes are also driven by application changes: the main characteristics of modern applications are increased parallelism and the need for data storage and transfer. Resource management is a key technology for the successful use of such many-core platforms, and the thread-to-core mapping can deal with the run-time dynamics of applications and platforms. Efficient resource management thus enables efficient usage of the platform resources: maximizing platform utilization, minimizing interconnection-network communication load, and minimizing the energy budget. In this thesis, we present a performance-aware resource management scheme for many-core architectures. In particular, the developed framework takes parallel applications as input and performs an application profiling. Based on that profile information, a thread-to-core mapping algorithm finds (i) the appropriate number of threads for the application in order to maximize the utilization of the system and (ii) the best mapping for maximizing the performance of the application under the selected number of threads. In order to validate the proposed algorithm, we used and extended Sniper, a state-of-the-art many-core simulator. Finally, we developed a discrete event simulator on top of Sniper in order to test and validate multiple scenarios faster. The results show that the proposed methodology achieves on average a gain of 23% compared to a performance-oriented mapping, and each application completes its workload 18% faster on average.
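A hedged C++ sketch of the two decisions described in the abstract, under assumed inputs: (i) choosing a thread count from profiled scaling data and (ii) greedily mapping threads onto cores. The profile format, the marginal-gain threshold, and the flat core numbering (a stand-in for a real interconnect-distance model) are illustrative assumptions, not the algorithm developed in the thesis.

    // Illustrative only: profile format, threshold, and core numbering are assumed.
    #include <cstddef>
    #include <vector>

    // (i) Choose a thread count from profiled scaling data: keep adding threads
    // while each extra thread still buys at least `minMarginalGain` speedup.
    std::size_t chooseThreadCount(const std::vector<double>& speedup,   // speedup[t] = speedup with t+1 threads
                                  double minMarginalGain = 0.05) {
        std::size_t best = 1;
        for (std::size_t t = 1; t < speedup.size(); ++t) {
            if (speedup[t] - speedup[t - 1] < minMarginalGain) break;
            best = t + 1;
        }
        return best;
    }

    // (ii) Greedy placement: repeatedly take the unplaced thread with the largest
    // total communication volume and put it on the next free core. A real mapper
    // would rank free cores by interconnect distance to the thread's peers.
    std::vector<int> mapThreadsToCores(std::size_t nThreads, std::size_t nCores,
                                       const std::vector<std::vector<double>>& comm) {  // comm[i][j] = volume between threads i and j
        std::vector<int> core(nThreads, -1);
        std::size_t nextFree = 0;
        for (std::size_t placed = 0; placed < nThreads && nextFree < nCores; ++placed) {
            std::size_t heaviest = 0;
            double bestVol = -1.0;
            for (std::size_t i = 0; i < nThreads; ++i) {
                if (core[i] != -1) continue;
                double vol = 0.0;
                for (double v : comm[i]) vol += v;
                if (vol > bestVol) { bestVol = vol; heaviest = i; }
            }
            core[heaviest] = static_cast<int>(nextFree++);
        }
        return core;
    }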
5

Guided Testing for Automatic Error Discovery in Concurrent Software

Rungta, Neha Shyam. 14 September 2009.
The quality and reliability of software systems, in terms of their functional correctness, critically rely on the effectiveness of the testing tools and techniques to detect errors in the system before deployment. A lack of testing tools for concurrent programs that systematically control thread scheduling choices has not allowed concurrent software development to keep abreast of hardware trends toward multi-core and multi-processor technologies. This motivates a need for the development of systematic testing techniques that detect errors in concurrent programs. The work in this dissertation presents a potentially scalable technique that can be used to detect concurrency errors in production code. The technique is a viable solution for software engineers and testers to detect errors in multi-threaded programs before deployment. We present a guided testing technique that combines static analysis techniques, systematic verification techniques, and heuristics to efficiently detect errors in concurrent programs. An abstraction-refinement technique lies at the heart of the guided test technique. The abstraction-refinement technique uses as input potential errors in the program generated by imprecise, but scalable, static analysis tools. The abstraction further leverages static analyses to generate a set of program locations relevant to verifying the reachability of the potential error. Program execution is guided along these points by ranking both thread and data non-determinism. The set of relevant locations is refined when program execution is unable to make progress. The dissertation also discusses various heuristics for effectively guiding program execution. We implemented the guided test technique to detect errors in Java programs. Guided test successfully detects errors caused by thread schedules and data input values in Java benchmarks and the JDK concurrent libraries for which other state-of-the-art analysis and testing tools for concurrent programs are unable to find an error.
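The guidance step can be pictured, very loosely, as a best-first search over scheduling choices ranked by an estimated distance to the next relevant program location reported by static analysis. In the C++ sketch below, State, successors, and isError are placeholder types and callbacks, and the refinement of the relevant-location set is only indicated by a comment; none of it is the dissertation's actual implementation.

    // Loose sketch: best-first exploration of scheduling choices, ranked by a
    // heuristic distance to the next relevant program location.
    #include <functional>
    #include <queue>
    #include <vector>

    struct State {
        double distanceToNextRelevant = 0.0;  // heuristic estimate, smaller is better
    };

    struct Ranked {
        State s;
        bool operator>(const Ranked& o) const {
            return s.distanceToNextRelevant > o.s.distanceToNextRelevant;
        }
    };

    // Returns true if a state satisfying isError (the potential error reported by
    // static analysis) is reached. When the frontier empties without progress, a
    // real implementation would refine the set of relevant locations and retry.
    bool guidedSearch(const State& initial,
                      const std::function<std::vector<State>(const State&)>& successors,
                      const std::function<bool(const State&)>& isError) {
        std::priority_queue<Ranked, std::vector<Ranked>, std::greater<Ranked>> frontier;
        frontier.push({initial});
        while (!frontier.empty()) {
            State s = frontier.top().s;
            frontier.pop();
            if (isError(s)) return true;          // reached the potential error
            for (const State& n : successors(s))  // one successor per enabled thread
                frontier.push({n});               // or per ranked data-input choice
        }
        return false;                             // refinement step would go here
    }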
6

Multi-database support in the recursive multi-threaded software process management tool

Kuo, Yi-Chiun. 01 January 2002.
The Recursive Multi-Threaded (RMT) software process management tool gives software developers the following capabilities: break a large project into a sequence of prototypes (or threads), track these threads individually, and estimate the progress and completion date of the project from these individual threads. The goal of this project is to provide the RMT Tool with the ability to support multiple databases for collaborative software development. As a demonstration, actual data from several previous algorithma projects is used.
7

Use of Multi-Threading, Modern Programming Language, and Lossless Compression in a Dynamic Commutation/Decommutation System

Wigent, Mark A.; Mazzario, Andrea M.; Matsumura, Scott M. October 2011.
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada

The Spectrum Efficient Technology Science and Technology (SET S&T) Program is sponsoring the development of the Dynamic Commutation and Decommutation System (DCDS), which optimizes telemetry data transmission in real time. The goal of DCDS is to improve spectrum efficiency, not through improving RF techniques but rather through changing and optimizing the contents of the telemetry stream during system test. By allowing the addition of new parameters to the telemetered stream at any point during system test, DCDS removes the need to transmit measured data unless it is actually needed on the ground. When compared to serial streaming telemetry, real-time re-formatting of the telemetry stream does require additional processing onboard the test article. DCDS leverages advances in microprocessor technology to perform this processing while meeting size, weight, and power constraints of the test environment. Performance gains of the system have been achieved by significant multi-threading of the application, allowing it to run on modern multi-core processors. Two other enhancing technologies incorporated into DCDS are the Java programming language and lossless compression.
8

Relaxing Concurrency Control in Transactional Memory

Aydonat, Utku. 05 January 2012.
Transactional memory (TM) systems have gained considerable popularity in the last decade, driven by the increased demand for tools that ease parallel programming. TM eliminates the need for user locks that protect accesses to shared data. It offers performance close to that of fine-grain locking with the programming simplicity of coarse-grain locking. Today's TM systems implement the two-phase-locking (2PL) algorithm, which aborts transactions every time a conflict occurs. 2PL is a simple algorithm that provides fast transactional operations. However, it limits concurrency in applications with high contention because it increases the rate of aborts. We propose the use of a more relaxed concurrency control algorithm to provide better concurrency. This algorithm is based on the conflict-serializability (CS) model. Unlike 2PL, it allows some transactions to commit successfully even when they make conflicting accesses. We implement this algorithm both in a software TM system and in a simulator of a hardware TM system. Our evaluation using TM benchmarks shows that the algorithm improves the performance of applications with long transactions and high abort rates. Performance is improved by up to 299% in the software TM, and up to 66% in the hardware simulator. We argue that these improvements come with little additional complexity and require no changes to the transactional programming model. This makes our implementation feasible.
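To show the contrast with 2PL in code form, here is a small, hypothetical C++ sketch of a commit-time conflict-serializability check: conflicts are recorded as edges in a precedence graph, and a transaction is aborted only if its conflict would close a cycle. A real TM system would track read and write sets, handle versioning, and garbage-collect committed transactions; neither that machinery nor these names comes from the thesis.

    // Sketch of conflict-serializability checking via a precedence graph.
    // Transaction IDs and the graph representation are illustrative assumptions.
    #include <set>
    #include <vector>

    class ConflictGraph {
        std::vector<std::set<int>> edges_;    // edges_[a] contains b if a must precede b
    public:
        explicit ConflictGraph(int nTx) : edges_(nTx) {}

        // Would adding edge a -> b create a cycle (i.e. a path b -> ... -> a)?
        bool createsCycle(int a, int b) const {
            std::vector<int> stack{b};
            std::set<int> seen{b};
            while (!stack.empty()) {
                int cur = stack.back(); stack.pop_back();
                if (cur == a) return true;
                for (int nxt : edges_[cur])
                    if (seen.insert(nxt).second) stack.push_back(nxt);
            }
            return false;
        }

        // 2PL would abort on any conflict; the relaxed scheme records the conflict
        // and aborts only when the serialization order would become cyclic.
        bool tryAddConflict(int before, int after) {
            if (createsCycle(before, after)) return false;   // caller aborts 'after'
            edges_[before].insert(after);
            return true;                                      // both transactions may commit
        }
    };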
