Within the last decade, multi-core processors have become increasingly commonplace, and the power and performance demands of modern real-world programs have accelerated this trend. The rapid advances in the design and adoption of such architectures mean that there is a serious need for programming models that allow the development of correct parallel programs that execute efficiently on these processors. A principal problem in this regard is that of efficiently synchronizing concurrent accesses to shared memory. Traditional solutions to this problem are either easy to program but inefficient (coarse-grained locks) or efficient but non-composable and very hard to program and verify (fine-grained locks). Optimistic Transactional Memory systems provide many of the composability and programmability advantages of coarse-grained locks along with good theoretical scaling, but several studies have found that their performance in practice remains quite poor for many programs, primarily because of the high overheads of providing safe optimism. Moreover, current transactional memory models remain rigid: they are not suited to expressing some of the complex thread interactions that are prevalent in modern parallel programs. Finally, the synchronization achieved by these transactional memory systems is at the physical or memory level.
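To make this trade-off concrete, the following minimal C++ sketch contrasts a coarse-grained lock with an atomic transaction. The Account type and the deposit functions are hypothetical names chosen for illustration, and the transactional version relies on GCC's -fgnu-tm language extension rather than on any system described in this thesis.

    #include <mutex>

    struct Account { long balance = 0; };

    // Coarse-grained locking: one global lock serializes every update.
    // Easy to reason about and composable, at the cost of scalability.
    std::mutex accounts_lock;
    void deposit_locked(Account& a, long amount) {
        std::lock_guard<std::mutex> guard(accounts_lock);
        a.balance += amount;
    }

    // Optimistic transactional memory (GCC -fgnu-tm): the block runs
    // speculatively and is re-executed if a conflicting access is detected,
    // which is where the overheads of "safe optimism" come from.
    void deposit_tm(Account& a, long amount) {
        __transaction_atomic {
            a.balance += amount;
        }
    }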
This thesis advocates the position that the memory synchronization problem for threads should be modeled and solved in terms of the synchronization of the underlying program values, which have semantics associated with them. It presents optimistic synchronization techniques that instead address the semantic synchronization requirements of a parallel program.
These techniques include methods to 1) enable optimistic transactions to recover from expensive sharing conflicts without discarding all the work made possible by the optimism, 2) enable a hybrid pessimistic-optimistic form of concurrency control that lowers overheads, 3) make synchronization value-aware and semantics-aware, and 4) enable finer-grained consistency rules (than those allowed by traditional optimistic TM models), thereby avoiding conflicts that do not enforce any semantic property required by the program.
In addition to improving the expressibility of specific synchronization idioms, these techniques are also effective in improving parallel performance. This thesis formulates each technique in terms of its purpose and of the extensions to the language, the compiler, and the concurrency-control runtime necessary to implement it. It also briefly presents an experimental evaluation of each technique on a variety of modern parallel workloads. These experiments show that the techniques significantly improve parallel performance and scalability over programs that use state-of-the-art optimistic synchronization methods.
Identifier | oai:union.ndltd.org:GATECH/oai:smartech.gatech.edu:1853/42802
Date | 06 October 2011
Creators | Sreeram, Jaswanth |
Publisher | Georgia Institute of Technology |
Source Sets | Georgia Tech Electronic Thesis and Dissertation Archive |
Detected Language | English |
Type | Dissertation |