51 |
Fairness and Approximation in Multi-version Transactional Memory. Assiri, Basem Ibrahim. 11 July 2016 (has links)
Shared memory multi-core systems benefit from transactional memory implementations due to the inherent avoidance of deadlocks and progress guarantees. In this research, we examine how system performance is affected by transaction fairness in scheduling and by the precision of consistency. We first explore the fairness aspect using a Lazy Snapshot (multi-version) Algorithm. Fairness in transaction scheduling aims to balance the load between read-only and update transactions. We implement a fairness mechanism based on machine learning techniques that improves fairness decisions according to the transaction execution history. Experimental analysis shows that the throughput of the Lazy Snapshot Algorithm is improved with machine learning support.
We also explore the performance impact of consistency relaxation. In transactional memory, correctness is typically proven with opacity, a precise consistency property that requires a legal serialization of an execution such that transactions do not overlap (atomicity) and read instructions always return the most recent value (legality). In real systems there are situations where system delays do not allow precise consistency, such as in large-scale applications, due to network or other time delays. Thus, we introduce here the notion of approximate consistency in transactional memory. We define K-opacity as a relaxed consistency property under which a transaction's read operations may return one of the K most recently written values. In multi-version transactional memory, this allows a new object version to be saved only once every K object updates, which has two benefits: (i) it reduces space requirements by a factor of K, and (ii) it reduces the number of aborts, since there is a smaller chance of conflicts. In fact, we apply the concept of K-opacity to regular read and write, count, and queue objects, which are common objects used in typical concurrent programs. We provide formal correctness proofs and we also demonstrate the performance benefits of our approach with experimental analysis. We compare the performance of precisely consistent execution (1-opaque) with different consistency values of K using micro-benchmarks. The results show that increased relaxation of opacity gives higher throughput and decreases the abort rate significantly.
|
52 |
Heterogeneous construction of spatial data structures. Butts, Robert O. 22 May 2015 (has links)
Linear spatial trees are typically constructed in two discrete, consecutive stages: calculating location codes, and sorting the spatial data according to the codes. Additionally, a GPU R-tree construction algorithm exists which likewise consists of sorting the spatial data and calculating nodes' bounding boxes. Current GPUs are approximately three orders of magnitude faster than CPUs for perfectly vectorizable problems. However, the best known GPU sorting algorithms only achieve 10-20 times speedup over sequential CPU algorithms. Both calculating location codes and bounding boxes are perfectly vectorizable problems. We thus investigate the construction of linear quadtrees, R-trees, and linear k-d trees using the GPU for location code and bounding box calculation, and parallel CPU algorithms for sorting. In this endeavor, we show how existing GPU linear quadtree and R-tree construction algorithms may be modified to be heterogeneous, and we develop a novel linear k-d tree construction algorithm which uses an existing parallel CPU quicksort partition algorithm. We implement these heterogeneous construction algorithms, and we show that heterogeneous construction of spatial data structures can approach the speeds of homogeneous GPU algorithms, while freeing the GPU to be used for better vectorizable problems.
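The two stages can be sketched as follows, assuming 2-D Morton (Z-order) location codes; the per-point code calculation is the perfectly vectorizable part that the abstract assigns to the GPU, written here as a plain sequential loop for illustration (function names are ours):

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of x and y into a 2-D Morton (Z-order) code.
    On a GPU this per-point step is perfectly vectorizable; here it is
    an ordinary sequential loop for clarity."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # x bits in even positions
        code |= ((y >> i) & 1) << (2 * i + 1)   # y bits in odd positions
    return code

def linear_quadtree(points, bits=16):
    """Stage 1: compute location codes; stage 2: sort points by code.
    The sort is the stage the thesis keeps on parallel CPU algorithms."""
    coded = [(morton2d(x, y, bits), (x, y)) for x, y in points]
    coded.sort(key=lambda c: c[0])
    return coded
```

Sorting by the interleaved code places spatially nearby points close together in the array, which is what makes the sorted sequence a linear quadtree.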
|
53 |
Deciding static inclusion for Delta-strong and omega▿-strong intruder theories: Applications to cryptographic protocol analysis. Gero, Kimberly A. 28 July 2015 (has links)
In this dissertation we will be studying problems relating to indistinguishability. This topic is of great interest and importance to cryptography. Cryptographic protocol analysis is currently being studied a great deal due to numerous high profile security breaches. The form of indistinguishability that we will be focusing on is static inclusion and its subcase static equivalence. Our work in this dissertation is based on “Intruders with Caps.” Our main results are providing co-saturation procedures for deciding whether a frame A is statically included in a frame B over Δ-strong and ω▿-strong intruder theories, where a frame consists of hidden data and substitutions that represent knowledge that an intruder could have gained from eavesdropping on message exchanges by agents.
|
54 |
Java 8: Completable Futures and Asynchronous Pipelines. Smith, Nolan Michael. 29 July 2015 (has links)
As the meteoric rise of publicly available web service APIs enables access to increasing amounts of useful data via network requests, and such requests are best handled asynchronously, asynchronous and concurrent programming is becoming commonplace in applications. However, current solutions use abstractions that, when performing pipelines of asynchronous operations, lead to poorly architected interfaces in code. The goal of this project is to discuss new abstractions intended to alleviate the pitfalls involved in asynchronous programming and to present an example of sound interface design in an asynchronous Java 8 client-server application environment. The application uses Java 8 to construct a dependent network of computations that invoke publicly available web services to plan an eventful weekend at a destination city.
Given a budget and a destination city, the application retrieves flights, tickets to events, and fun public places to visit. Most importantly, the application emphasizes asynchronous interfaces that enable code to be modularized and easily read, and that minimize the latency felt by the user.
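The fan-out/join shape of such a dependent network can be sketched in Python (the project itself uses Java 8 CompletableFutures; the service functions below are invented stand-ins, not the project's actual API):

```python
# A minimal sketch of the dependent-pipeline idea: independent service
# calls run concurrently, and the final filtering step joins on all of
# them. The fetch_* functions are hypothetical stand-ins for web services.
from concurrent.futures import ThreadPoolExecutor

def fetch_flights(city):  return [("flight", city, 300)]
def fetch_events(city):   return [("concert", city, 80)]
def fetch_places(city):   return [("museum", city, 0)]

def plan_weekend(city, budget):
    # The three requests are independent, so they are submitted together;
    # the dependent step (budget filtering) waits on all of their results.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(f, city)
                   for f in (fetch_flights, fetch_events, fetch_places)]
        options = [item for fut in futures for item in fut.result()]
    return [o for o in options if o[2] <= budget]
```

In the Java 8 version this same shape would be expressed with CompletableFutures composed into a dependent network rather than explicit `result()` joins.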
|
55 |
A hierarchical modelling and simulation environment for AI multicomputer design. Lee, Chilgee. January 1990 (has links)
The mainstream usage of computer applications is expanding from pure data processing to intelligence processing through information processing and knowledge processing. There is increasing demand for high performance computer systems to solve bigger and more complex AI problems. Simulation can offer an efficient means of investigating the enormous number of alternatives for existing or proposed computer architectures, thereby saving effort, time and cost. However, a model developed using conventional simulation languages is non-modular, non-reusable, and inflexible, and provides no support for hierarchical decomposition of the system. To enable the hierarchical decomposition of systems and the development of modular, reusable models, object-oriented concepts are required. In this dissertation, an object-oriented modelling and simulation environment using the System Entity Structure (SES) and Discrete Event System Specification (DEVS) formalism is shown to be a powerful, knowledge-based environment for hierarchical modelling and simulation. This knowledge-based simulation environment provides a means for designing complex multiple processor systems. Modelling and simulating the Traveling Salesman Problem using DEVS-Scheme rule-based models with an inference engine and a set of rules is shown. Also centralized, distributed and multilevel control strategies for heuristic search by AI multiagent systems are modelled, simulated, and analyzed. The importance of high bandwidth, high connectivity communications, such as expected from optical devices, is demonstrated. Based on the experiments, a new multilevel computer architecture for artificial intelligence search applications is proposed.
|
56 |
Univers: The construction of an internet-wide descriptive naming system. Bowman, Clair Michael, II. January 1990 (has links)
This thesis describes the construction of a descriptive naming system for an internet environment. Descriptive naming systems allow clients to identify a set of objects by description. In a world where information is perfect, this amounts to a simple database query. However, descriptive naming systems operate in an imperfect world: clients may provide inaccurate descriptions, the database may contain out-of-date or incomplete information, the database may be highly distributed, and so on. Traditional strategies for handling imperfect information approach the problem from the database's perspective; i.e., queries are resolved according to a method determined by the database designer. This thesis presents a model, called a preference hierarchy, that allows the client to define the meaning of a preferred answer. A preferred answer is computed using knowledge about the quality of information in the database and in the query. Specifically, clients provide the naming system with a description of an object and some meta-information that describes the client's beliefs about the query and the naming system. This meta-information is an ordering on a set of perfect-world approximations and it describes the preferred methods for accommodating imperfect information. The description is then resolved in a way that respects the preferred approximations. The preference hierarchy may be used to solve problems associated with some forms of imperfect information that exist in descriptive naming systems. It also provides a foundation for designing and comparing various naming systems. For example, the preference hierarchy allows us to compare naming systems based on how discriminating they are, and to identify the class of names for which a given naming system is sound and complete. A study of several example naming systems demonstrates how the preference hierarchy can be used as a formal tool for designing naming systems.
Univers is a generic attribute-based name server that implements the preference hierarchy model. It provides a foundation upon which a variety of high-level naming services can be built. It is a platform for constructing an internet-wide descriptive naming system. This thesis describes several aspects of its implementation and demonstrates how various descriptive naming services--including a global white-pages service, a local yellow-pages service, and a conventional name-to-address mapper--can be built on top of Univers.
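The client-supplied ordering over approximations can be sketched as a simple resolution loop: each "approximation" weakens the description in some way, and the first one (in the client's preferred order) that yields a non-empty answer wins. All names here are illustrative; Univers's actual interface is far richer than this toy:

```python
# Toy sketch of preference-hierarchy resolution: the client supplies an
# ordered list of approximations (ways to weaken a query), and resolution
# tries them in that order, returning the first non-empty match set.
def resolve(description, database, preferences):
    for approximate in preferences:          # client-specified ordering
        weakened = approximate(description)
        matches = [obj for obj in database
                   if all(obj.get(k) == v for k, v in weakened.items())]
        if matches:
            return matches
    return []

# Two hypothetical approximations: take the description at face value,
# or tolerate an out-of-date location attribute by dropping it.
exact = lambda d: d
drop_location = lambda d: {k: v for k, v in d.items() if k != "location"}
```

The point of the model is that `preferences` comes from the client, not the database designer, so different clients can resolve the same description differently.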
|
57 |
Hierarchical optimistic distributed simulation: Combining DEVS and Time Warp. Christensen, Eric Richard. January 1990 (has links)
Conventional simulation environments and languages do not provide a unified approach to system decomposition and modelling. Also noticeably lacking is support for model reuse. In this time of constrained resources--people, time, money--it is imperative that the new methodologies present in parallel computing, software engineering, and artificial intelligence be applied to the modelling and simulation domain. Additionally, modelling and simulation must move from one-time modelling efforts in isolation to an integrated, multifaceted system modelling approach that maximizes model reuse and optimizes the constrained resources. This dissertation reviews the concepts of the Discrete Event System Specification (DEVS) formalism and its associated abstract simulator concepts, the Ada programming language, and the conservative and optimistic distributed simulation paradigms. Then requirements for a distributed modelling and simulation environment which incorporates the new methodologies present in parallel computing, software engineering and artificial intelligence are proposed. A hierarchical optimistic distributed modelling and simulation environment is implemented in Ada. The environment combines the DEVS formalism and its associated abstract simulators with the Time Warp optimistic distributed simulation paradigm. The implemented modelling and simulation environment (DEVS-Ada) is then examined with respect to how it meets the requirements for a distributed modelling and simulation environment. A simulation study is conducted measuring the performance of the nondistributed versus distributed implementations of DEVS-Ada using the replicative validation of a Single Server Without Queue model. Additional studies are conducted examining the effect of model-to-processor mappings, and the use of flat versus hierarchical models.
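The Single Server Without Queue model mentioned above can be sketched as a DEVS-style atomic model with internal and external transition functions (this is a loose Python illustration of the formalism's shape, not DEVS-Ada's actual Ada interface; names and the no-queue job-loss policy are our simplifying assumptions):

```python
# Minimal DEVS-flavoured atomic model: a server with no queue is either
# "idle" or "busy"; a job arriving while busy is lost. sigma is the time
# remaining until the next internal event (service completion).
class SingleServerNoQueue:
    def __init__(self, service_time=5.0):
        self.service_time = service_time
        self.phase = "idle"
        self.sigma = float("inf")    # no internal event scheduled

    def ext_transition(self, elapsed, job):
        if self.phase == "idle":     # accept the job and start service
            self.phase, self.sigma = "busy", self.service_time
        else:                        # no queue: job is lost, clock advances
            self.sigma -= elapsed

    def int_transition(self):        # service completes
        self.phase, self.sigma = "idle", float("inf")

    def output(self):                # emitted just before int_transition
        return "done"
```

In a full DEVS abstract simulator these functions would be driven by coordinator messages; under Time Warp, each such simulator would additionally checkpoint state so it can roll back on straggler events.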
|
58 |
Relational communication in computer-mediated interaction. Walther, Joseph Bart. January 1990 (has links)
This study involved an experiment on the effects of time and communication channel--computer conferencing versus face-to-face meetings--on impression development, message personalization, and relational communication in groups. Prior research on the relational aspects of computer-mediated communication has suggested strong depersonalizing effects of the medium due to the absence of nonverbal cues. Past research is criticized for failing to incorporate temporal and developmental perspectives on information processing and relational development. In this study, data were collected from, and observations made of, 96 subjects assigned to computer conferencing or traditional zero-history groups of three, who completed three tasks over several weeks' time. Results showed that computer-mediated groups increased in several relational dimensions to more positive levels, and that these subsequent levels approximated those of face-to-face groups. Boundaries on the predominant theories of computer-mediated communication are recommended, and future research is suggested.
|
59 |
Interactive graph layout: The exploration of large graphs. Henry, Tyson Rombauer. January 1992 (has links)
Directed and undirected graphs provide a natural notation for describing many fundamental structures of computer science. Unfortunately graphs are hard to draw in an easy to read fashion. Traditional graph layout algorithms have focused on creating good layouts for the entire graph. This approach works well with smaller graphs, but often cannot produce readable layouts for large graphs. This dissertation presents a novel methodology for viewing large graphs. The basic concept is to allow the user to interactively navigate through large graphs, learning about them in appropriately small and concise pieces. The motivation of this approach is that large graphs contain too much information to be conveyed by a single canonical layout. For a user to be able to understand the data encoded in the graph she must be able to carve up the graph into manageable pieces and then create custom layouts that match her current interests. An architecture is presented that supports graph exploration. It contains three new concepts for supporting interactive graph layout: interactive decomposition of large graphs, end-user specified layout algorithms, and parameterized layout algorithms. The mechanism for creating custom layout algorithms provides the non-programming end-user with the power to create custom layouts that are well suited for the graph at hand. New layout algorithms are created by combining existing algorithms in a hierarchical structure. This method allows the user to create layouts that accurately reflect the current data set and her current interests. In order to explore a large graph, the user must be able to break the graph into small, more manageable pieces. A methodology is presented that allows the user to apply graph traversal algorithms to large graphs to carve out reasonably sized pieces. Graph traversal algorithms can be combined using a visual programming language. 
This provides the user with the control to select subgraphs that are of particular interest to her. The ability to parameterize layout algorithms gives the user control over the layout process. The user can customize the generated layout by changing parameters to the layout algorithm. Layout algorithm parameterization is placed in an interactive framework that allows the user to iteratively fine-tune the generated layout. As a proof of concept, examples are drawn from a working prototype that incorporates this methodology.
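The two ideas of parameterized and hierarchically combined layouts can be sketched as ordinary functions: each layout maps nodes to coordinates under some parameters, and a combinator places sub-layouts relative to one another (a toy illustration under our own names, not the prototype's actual interface):

```python
# A parameterized layout: the user tunes spacing and column count rather
# than editing the algorithm itself.
def grid_layout(nodes, spacing=1.0, columns=3):
    return {n: ((i % columns) * spacing, (i // columns) * spacing)
            for i, n in enumerate(nodes)}

# A hierarchical combinator: lay out each subgraph independently, then
# place the sub-layouts side by side, offset along x by a tunable gap.
def compose(sublayouts, x_gap=5.0):
    placed, offset = {}, 0.0
    for layout in sublayouts:
        for node, (x, y) in layout.items():
            placed[node] = (x + offset, y)
        offset += x_gap
    return placed
```

Re-running `grid_layout` with different `spacing` or `columns` values is the interactive fine-tuning loop in miniature: the data stay fixed while the presentation parameters change.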
|
60 |
Approximate pattern matching and its applications. Wu, Sun. January 1992 (has links)
In this thesis, we study approximate pattern matching problems. Our study is based on the Levenshtein distance model, where errors considered are 'insertions', 'deletions', and 'substitutions'. In general, given a text string, a pattern, and an integer k, we want to find substrings in the text that match the pattern with no more than k errors. The pattern can be a fixed string, a limited expression, or a regular expression. The problem has different variations with different levels of difficulties depending on the types of the pattern as well as the constraint imposed on the matching. We present new results both of theoretical interest and practical value. We present a new algorithm for approximate regular expression matching, which is the first to achieve a subquadratic asymptotic time for this problem. For the practical side, we present new algorithms for approximate pattern matching that are very efficient and flexible. Based on these algorithms, we developed a new software tool called 'agrep', which is the first general purpose approximate pattern matching tool in the UNIX system. 'agrep' is not only usually faster than the UNIX 'grep/egrep/fgrep' family, it also provides many new features such as searching with errors allowed, record-oriented search, AND/OR combined patterns, and mixed exact/approximate matching. 'agrep' has been made publicly available through anonymous ftp from cs.arizona.edu since June 1991.
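The core search can be sketched with a bit-parallel (shift-and) recurrence in the spirit of agrep: one bit vector per error level, updated per text character, with transitions for matches, substitutions, insertions, and deletions (a simplified illustration; the actual agrep implementation differs in detail and handles far more pattern types):

```python
def approx_search(pattern, text, k):
    """Report end positions in text where pattern matches with at most
    k errors (insertions, deletions, substitutions), using a shift-and
    style bit-parallel recurrence."""
    m = len(pattern)
    B = {}                                  # per-character pattern bitmasks
    for i, c in enumerate(pattern):
        B[c] = B.get(c, 0) | (1 << i)
    R = [0] * (k + 1)                       # R[d]: states with <= d errors
    out = []
    for j, c in enumerate(text):
        cm = B.get(c, 0)
        old_prev = R[0]                     # previous-step R[d-1]
        R[0] = ((R[0] << 1) | 1) & cm       # exact-match level
        for d in range(1, k + 1):
            old_d = R[d]
            R[d] = ((((R[d] << 1) | 1) & cm)   # match
                    | old_prev                  # insertion in text
                    | (old_prev << 1)           # substitution
                    | (R[d - 1] << 1)           # deletion from text
                    | 1)                        # empty prefix, d >= 1 errors
            old_prev = old_d
        if R[k] & (1 << (m - 1)):           # full pattern matched
            out.append(j)
    return out
```

Each text character costs O(k) word operations for short patterns, which is what makes this family of algorithms fast enough for a practical tool.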
|