1.
Rabbit: A novel approach to find data-races during state-space exploration
Oliveira, João Paulo dos Santos, 30 August 2012
Data races are an important class of error in concurrent shared-memory programs, and software model checking is a popular approach to finding them. This research proposes Rabbit, a novel approach that complements model checking by efficiently reporting precise race warnings during state-space exploration (SSE). Rabbit uses information gathered across the different paths explored during SSE to predict likely racy memory accesses. We evaluated Rabbit on 33 race scenarios drawn from 21 distinct application subjects of various sources and sizes. Results indicate that Rabbit reports race warnings well before the model checker detects the race (in 84.8% of the cases it reports a true race warning in under 5 s) and that the warnings it reports include very few false alarms. We also observed that the model checker finds the actual race quickly when it uses a guided search that builds on Rabbit's output (in 74.2% of the cases it reports the race in under 20 s).
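The abstract does not detail how Rabbit predicts racy accesses, but the general idea of flagging suspicious memory accesses can be illustrated with a classic lockset check (an Eraser-style heuristic shown here for illustration, not Rabbit's actual algorithm): a location touched by more than one thread with no lock consistently held across all accesses is reported as a likely race.

```python
from dataclasses import dataclass, field

@dataclass
class LocksetTracker:
    """Flags locations accessed by multiple threads with no common lock.

    Eraser-style lockset heuristic (illustrative, not Rabbit's method):
    for each location we intersect the sets of locks held at each access;
    an empty intersection with more than one accessing thread is a warning.
    """
    locksets: dict = field(default_factory=dict)   # location -> candidate locks
    threads: dict = field(default_factory=dict)    # location -> accessing threads

    def access(self, loc, thread, held_locks):
        held = set(held_locks)
        self.threads.setdefault(loc, set()).add(thread)
        if loc not in self.locksets:
            self.locksets[loc] = held
        else:
            self.locksets[loc] &= held             # shrink the candidate set

    def warnings(self):
        return [loc for loc, locks in self.locksets.items()
                if not locks and len(self.threads[loc]) > 1]

tracker = LocksetTracker()
tracker.access("x", thread=1, held_locks={"m"})
tracker.access("x", thread=2, held_locks=set())    # no common lock -> warning
tracker.access("y", thread=1, held_locks={"m"})
tracker.access("y", thread=2, held_locks={"m"})    # consistently locked -> ok
print(tracker.warnings())  # ['x']
```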
2.
True random number generation using genetic algorithms on high performance architectures
Mijares Chan, Jose Juan, 01 September 2016
Many real-world applications use random numbers generated by pseudo-random number generators and true random number generators (TRNGs). Unlike pseudo-random number generators, which rely on an input seed to generate random numbers, a TRNG relies on a non-deterministic source to generate aperiodic random numbers. In this research, we develop a novel, generic, software-based TRNG using a random source extracted from today's compute architectures. We show that non-deterministic events such as race conditions between compute threads follow a near-Gamma distribution, independent of the architecture, whether multi-core or co-processor. Our design improves the distribution towards a uniform distribution, ensuring the stationarity of the sequence of random variables.
We reduce the statistical deficiencies of the random numbers using a post-processing stage based on a heuristic evolutionary algorithm. Our post-processing algorithm is composed of two phases: (i) histogram specification and (ii) stationarity enforcement. We propose two techniques for histogram equalization, Exact Histogram Equalization (EHE) and Adaptive EHE (AEHE), which map the distribution of the random numbers to a user-specified distribution. EHE is an offline algorithm with O(N log N) complexity. AEHE is an online algorithm that improves performance using a sliding window and achieves O(N). Both algorithms ensure a normalized entropy in (0.95, 1.0].
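The thesis's exact EHE procedure is not reproduced in this abstract; one standard way to realize exact histogram specification, sketched below under that assumption, is rank ordering: sort the N samples (the O(N log N) step) and deal them out so that every target bin receives exactly its prescribed count.

```python
import random
from collections import Counter

def exact_histogram_specification(samples, n_bins=16):
    """Map samples onto an exactly uniform histogram by rank ordering.

    Illustrative sketch of exact histogram specification, not the
    thesis's EHE implementation. Sorting dominates, so the cost is
    O(N log N); ties are broken arbitrarily by sort order.
    """
    n = len(samples)
    order = sorted(range(n), key=lambda i: samples[i])
    bins = [0] * n
    for rank, i in enumerate(order):
        bins[i] = rank * n_bins // n      # equal-count bin labels
    return bins

random.seed(0)
# Gamma-distributed values stand in for the raw race-timing source.
skewed = [random.gammavariate(2.0, 1.0) for _ in range(1600)]
bins = exact_histogram_specification(skewed)
counts = Counter(bins)
print(sorted(counts.values()))  # 16 bins of exactly 100 samples each
```

Because the output histogram is exactly flat by construction, the normalized entropy of the binned output is 1.0, the upper end of the (0.95, 1.0] range quoted above.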
The stationarity-enforcement phase uses genetic algorithms to mitigate the statistical deficiencies remaining in the output of histogram equalization by permuting the random numbers until wide-sense stationarity is achieved. By measuring the standard deviation of the power spectral density, we ensure that the quality of the numbers generated by the genetic algorithms is within the level of error specified by the user. We develop two algorithms: a naive algorithm with an expected exponential complexity of E[O(e^N)], and an accelerated FFT-based algorithm with an expected quadratic complexity of E[O(N^2)]. The accelerated FFT-based algorithm exploits the parallelism found in genetic algorithms on a homogeneous multi-core cluster. We evaluate its scalability and the effect of data size on a standardized battery of tests, TestU01, finding the tuning parameters that ensure wide-sense stationarity on long runs.
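As a toy illustration of the stationarity-enforcement idea (a population-of-one hill climber standing in for the thesis's genetic algorithm, with a direct DFT in place of the FFT), one can permute the sequence by random pairwise swaps and keep a swap only when the standard deviation of the power spectral density decreases:

```python
import cmath
import random

def psd_std(x):
    """Std. dev. of the power spectral density via a direct O(N^2) DFT."""
    n = len(x)
    psd = []
    for k in range(1, n // 2):           # skip the DC term
        s = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        psd.append(abs(s) ** 2 / n)
    mean = sum(psd) / len(psd)
    return (sum((p - mean) ** 2 for p in psd) / len(psd)) ** 0.5

def flatten_spectrum(x, steps=80, seed=1):
    """Hill-climb over random pairwise swaps, keeping only improvements."""
    rng = random.Random(seed)
    x, best = list(x), psd_std(x)
    for _ in range(steps):
        i, j = rng.randrange(len(x)), rng.randrange(len(x))
        x[i], x[j] = x[j], x[i]
        cand = psd_std(x)
        if cand < best:
            best = cand                  # flatter spectrum: keep the swap
        else:
            x[i], x[j] = x[j], x[i]      # revert a non-improving swap
    return x, best

seq = [t % 4 for t in range(32)]         # strongly periodic input
shuffled, score = flatten_spectrum(seq)
print(f"psd std: {score:.3f} (initial {psd_std(seq):.3f})")
```

The improvement-only acceptance rule guarantees the final score never exceeds the initial one, and the permutation preserves the sample multiset, which is what lets the earlier histogram-specification phase and this phase compose.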
3.
Operating system transactions
Porter, Donald E., 26 January 2011
Applications must be able to synchronize accesses to operating system (OS) resources in order to ensure correctness in the face of concurrency and system failures. This thesis proposes system transactions, with which the programmer specifies atomic updates to heterogeneous system resources and the OS guarantees atomicity, consistency, isolation, and durability (ACID). This thesis provides a model for system transactions as a concurrency control mechanism.

System transactions efficiently and cleanly solve long-standing concurrency problems that are difficult to address with other techniques. For example, malicious users can exploit race conditions between distinct system calls in privileged applications, gaining administrative access to a system. Programmers can eliminate these vulnerabilities by closing the underlying race conditions with system transactions. Similarly, a failed software installation can leave a system unusable. System transactions can roll back an unsuccessful software installation without disturbing concurrent, independent updates to the file system.
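TxOS's kernel interface is not shown in this abstract. As a purely user-level analogy of the all-or-nothing behavior described above, one can stage file updates in a scratch directory and publish them only on success; this sketch cannot provide the kernel-enforced ACID guarantees of real system transactions, but it illustrates the rollback semantics:

```python
import os
import shutil
import tempfile
from contextlib import contextmanager

@contextmanager
def staged_update(target_dir):
    """Stage writes in a scratch directory; publish them only on success.

    Illustrative analogy only: real system transactions (as in TxOS)
    give kernel-enforced ACID across heterogeneous OS resources, which
    user-level staging cannot fully provide.
    """
    stage = tempfile.mkdtemp(prefix="txn-")
    try:
        yield stage                       # caller writes into `stage`
    except Exception:
        shutil.rmtree(stage)              # abort: discard staged state
        raise
    else:
        for name in os.listdir(stage):    # commit: publish staged files
            os.replace(os.path.join(stage, name),
                       os.path.join(target_dir, name))
        shutil.rmtree(stage, ignore_errors=True)

target = tempfile.mkdtemp(prefix="app-")
try:
    with staged_update(target) as stage:
        with open(os.path.join(stage, "conf"), "w") as f:
            f.write("v2")
        raise RuntimeError("installation failed mid-way")
except RuntimeError:
    pass
print(os.listdir(target))  # []  -- the failed update left no trace
```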
This thesis describes the design and implementation of TxOS, a variant of Linux 2.6.22 that implements system transactions. The thesis contributes new implementation techniques that yield fast, serializable transactions with strong isolation and fairness between system transactions and non-transactional activity.

Using system transactions, programmers can build applications with better performance or stronger correctness guarantees from simpler code. For instance, wrapping an installation of OpenSSH in a system transaction guarantees that a failed installation will be rolled back completely. These atomicity properties are provided by the OS, requiring no modification to the installer itself and adding only 10% performance overhead. The prototype implementation of system transactions also minimizes non-transactional overheads: a non-transactional compilation of Linux incurs negligible (less than 2%) overhead on TxOS.
Finally, this thesis describes a new lock-free linked list algorithm, called OLF, for optimistic, lock-free lists. OLF addresses key limitations of prior algorithms, which sacrifice functionality for performance. Prior lock-free list algorithms can safely insert or delete a single item, but cannot atomically compose multiple operations (e.g., atomically move an item between two lists). OLF provides both arbitrary composition of list operations and performance scalability close to previous lock-free list designs. OLF also removes previous requirements for dynamic memory allocation and garbage collection of list nodes, making it suitable for low-level system software such as the Linux kernel. We replace the lists in the Linux kernel's directory cache, which currently requires a coarse-grained lock to ensure invariants across multiple lists, with OLF lists. OLF lists in the Linux kernel improve the performance of a filesystem metadata microbenchmark by 3x over unmodified Linux at 8 CPUs.
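The composition problem can be made concrete with a small sketch. Python has no hardware compare-and-swap, so CAS is simulated here with a lock (a pedagogical stand-in, unrelated to OLF's actual technique): each list operation is individually atomic, but a move is two separate linearization points, and between them a concurrent reader can observe the item in neither list.

```python
import threading

class CASCell:
    """A single cell with compare-and-swap, simulated with a lock here
    (real lock-free lists use hardware CAS instructions)."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()
    def load(self):
        return self._value
    def cas(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def pop(head):
    while True:                       # lock-free retry loop
        top = head.load()             # top = (item, rest) or None
        if top is None:
            return None
        if head.cas(top, top[1]):
            return top[0]

def push(head, item):
    while True:
        top = head.load()
        if head.cas(top, (item, top)):
            return

a = CASCell(("x", None))              # list a = ["x"]
b = CASCell(None)                     # list b = []
item = pop(a)                         # linearization point 1: "x" leaves a
# <-- a concurrent reader here sees "x" in NEITHER list: the move is not atomic
push(b, item)                         # linearization point 2: "x" enters b
print(b.load()[0])  # x
```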
The TxOS prototype demonstrates that a mature OS running on commodity hardware can provide system transactions at a reasonable performance cost. As a practical OS abstraction for application developers, system transactions facilitate writing correct application code in the presence of concurrency and system failures. The OLF algorithm demonstrates that application developers can have both the functionality of locks and the performance scalability of a lock-free linked list.