261.
The design and implementation of OCCAM/CSP support for a range of languages and platforms -- Moores, James, January 2002
No description available.
262.
Dynamics and pragmatics for high performance concurrency -- Barnes, Frederick R. M., January 2003
This thesis is concerned with support, at all levels, for building highly concurrent and dynamic parallel processing systems. The CSP model of concurrency, as (largely) embodied in the occam programming language, is used for its simplicity, expressiveness, architecture-independent nature, and potential for high performance. Additionally, occam provides guarantees of freedom from aliasing and race-hazard errors. This thesis addresses one of the grand challenges of present-day computer science: providing a software technology that offers the dynamic flexibility and performance of mainstream object-oriented environments together with the level of safety, formal analysis, modularity and lightweight concurrency offered by CSP/occam. Two approaches to this challenge are possible: make the mainstream languages (e.g. Java, C++) safe, or make occam dynamic -- without compromising its existing good properties. This thesis follows the latter route.

The first part of this thesis concentrates on enhancing the occam language and run-time system on a commodity platform (the IBM PC) running the freely available Linux operating system. After a brief introduction to the various components of the KRoC occam system, additions and extensions to the occam programming language and supporting run-time system are examined. These provide a greater degree of programming flexibility in occam (for example, by adding support for dynamic allocation, mobile semantics and dynamic network construction), without compromising the safety of programs which use them. Benchmarks are reported that demonstrate significant improvements in performance (for example, channel communication in tens of nanoseconds).

The second part concentrates on improving the level of interaction between occam programs and the OS environment, for example by providing easy access to sockets and networking. The thesis concludes with a discussion of the work presented herein, with consideration given to parallels with object-oriented languages, and with details of ongoing and potential future research. The modified language grammar, details of new compiler-generated code, and miscellany are provided in the appendices.
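occam's channel primitives have no direct mainstream equivalent, but Go's CSP-inspired channels give a rough flavour of the communication style benchmarked above. A minimal sketch, not drawn from the KRoC code base and offering none of occam's aliasing or race-hazard guarantees:

    package main

    import "fmt"

    // producer writes the integers 0..n-1 to a channel and then closes it,
    // loosely analogous to an occam process writing to a typed channel end.
    func producer(out chan<- int, n int) {
        for i := 0; i < n; i++ {
            out <- i // each send synchronises with the receiver: a CSP rendezvous
        }
        close(out)
    }

    func main() {
        // Dynamic network construction: the channel and the process are
        // created at run time, as the extended occam described above allows.
        ch := make(chan int) // unbuffered, so every communication is a synchronisation point
        go producer(ch, 5)
        for v := range ch {
            fmt.Println(v)
        }
    }

As in occam, the unbuffered channel makes communication itself the synchronisation mechanism between exactly one sender and one receiver.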
263.
A model for flexible telephone services based on negotiating agents -- Rizzo, Michael, January 1996
No description available.
264.
Chinese character generation : a stroke oriented method -- Tseng, Kuo-Jung, January 1996
No description available.
265.
A process oriented approach to solving problems of parallel decomposition and distribution -- Dimmich, Damian J., January 2009
This thesis argues that there is a modern, broad and growing need for programming languages and tools that support highly concurrent complex systems. It claims that traditional approaches, based on threads and locks, are non-compositional and do not scale. Instead, it focuses on occam-pi, a derivative of classical Transputer occam whose process-oriented concurrency model is based on a combination of the formal algebras of Hoare's Communicating Sequential Processes and Milner's pi-calculus.

The advent of hybrid processors such as STI's Cell Broadband Engine (which consists of a PowerPC core and eight vector co-processors on a single die), NVidia's graphics-processor-based CUDA architecture, and Intel's upcoming Larrabee requires new programming paradigms if such hardware is to be used effectively. occam-pi's compositional concurrency model simplifies the management of complexity in concurrent programs and is capable of filling the technological gap that the new processors are exposing: the lack of expressiveness for concurrency in current programming languages. occam-pi's formalised basis allows reasoning about programs using formal-methods techniques and avoids common concurrency errors through compile-time verification.

The Transterpreter, a new portable runtime for occam-pi, reduced the cost of porting occam-pi to new platforms to a minimum. Further extensions to the Transterpreter enable hardware-specific enhancements to the support library, making it possible to implement and evaluate occam-pi on new platforms in a relatively short time. The work reported in this thesis makes use of this ability and presents implementations of the Transterpreter on new and interesting processors, evaluating the use of process-oriented concurrency as a programming model on such processors. Additional infrastructure required to make occam-pi useful on such architectures is presented, such as interfacing with legacy languages, thereby providing support for existing libraries and device drivers. Furthermore, techniques for exploiting the vector processing capabilities offered by these new architectures are described.

This thesis claims that the work presented makes a useful contribution to simplifying the design and construction of complex systems through the use of concurrency. By enabling both the language and the runtime to support new architectures through libraries, device drivers and direct access to hardware, it makes that contribution available to learners and advanced engineers working with novel hardware.
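The process-oriented decomposition evaluated in the thesis can be gestured at in Go, whose channels descend from the same CSP lineage. A sketch of a simple process farm, with the worker count and squaring kernel purely illustrative (this is not occam-pi and lacks its compile-time guarantees):

    package main

    import (
        "fmt"
        "sync"
    )

    // worker is one "process" in the farm; the squaring stands in for a
    // real kernel that would run on a vector co-processor.
    func worker(jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
        defer wg.Done()
        for j := range jobs {
            results <- j * j
        }
    }

    func main() {
        const nWorkers = 8 // illustrative: e.g. one per co-processor on a Cell-like chip
        jobs := make(chan int, 16)
        results := make(chan int, 16)

        var wg sync.WaitGroup
        for w := 0; w < nWorkers; w++ {
            wg.Add(1)
            go worker(jobs, results, &wg)
        }
        for j := 1; j <= 16; j++ {
            jobs <- j
        }
        close(jobs)
        go func() { wg.Wait(); close(results) }() // close results once all workers finish
        for r := range results {
            fmt.Println(r)
        }
    }

The decomposition is entirely in terms of communicating processes: the farm scales by changing nWorkers, not by restructuring the code.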
266.
The visualization of evolving searches -- Suvanaphen, Edward, January 2006
No description available.
267.
Adding privacy protection to policy based authorisation systems -- Fatema, Kaniz, January 2013
An authorisation system determines who is authorised to do what, i.e. it assigns privileges to users and decides whether someone is allowed to perform a requested action on a resource. A traditional authorisation decision system, called simply the authorisation system (or the system) in the rest of this thesis, provides the decision based on a policy which is usually written by the system administrator. Such a traditional authorisation system is not sufficient to protect the privacy of personal data, since users (the data subjects) are usually given a take-it-or-leave-it choice to accept the controlling organisation’s policy. Privacy is the ability of the owners or subjects of personal data to control the flow of data about themselves, according to their own preferences.

This thesis describes the design of an authorisation system that protects the privacy of personal data by including sticky authorisation policies from the issuers and data subjects, to supplement the authorisation policy of the controlling organisation. As personal data moves from controlling system to controlling system, the sticky policies travel with the data. A number of data protection laws and regulations have been formulated to protect the privacy of individuals, and the rights and prohibitions provided by the law need to be enforced by the authorisation system. Hence, the designed authorisation system also includes the authorisation rules from the legislation. This thesis describes the conversion of rules from the EU Data Protection Directive into machine-executable rules. Owing to the nature of the legislative rules, not all of them could be converted into deterministic machine-executable rules, as in several cases human intervention or human judgement is required. This is catered for by making the machine rules configurable.

Since the system includes independent policies from various authorities (law, issuer, data subject and controller), conflicts may arise among the decisions they provide. Consequently, this thesis describes a dynamic, automated conflict resolution mechanism, in which different conflict resolution algorithms are chosen based on the request context. As the EU Data Protection Directive allows processing of personal data based on contracts, we designed and implemented a component, the Contract Validation Service (ConVS), that can validate an XML-based digital contract to allow processing of personal data on that basis.

The authorisation system has been implemented as a web service and its performance measured, first deployed on a single computer and then on a cloud server. Finally, the validity of the design and implementation is tested against a number of use cases, based on scenarios involving access to medical data in a health service provider’s system and access to personal data such as CVs and degree certificates in an employment service provider’s system. The machine-computed authorisation decisions are compared to the theoretical decisions to ensure that the system returns the correct decisions.
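To illustrate the kind of conflict resolution the thesis automates, here is a sketch in Go of one classic combining algorithm, deny-overrides (familiar from XACML). The thesis itself selects among several such algorithms per request context; the type and function names below are hypothetical, not taken from the implementation:

    package main

    import "fmt"

    // Decision is the outcome of one authority's policy evaluation.
    type Decision int

    const (
        NotApplicable Decision = iota
        Permit
        Deny
    )

    // denyOverrides combines independent decisions: any Deny wins,
    // otherwise any Permit wins, otherwise the result is NotApplicable.
    func denyOverrides(decisions []Decision) Decision {
        result := NotApplicable
        for _, d := range decisions {
            switch d {
            case Deny:
                return Deny
            case Permit:
                result = Permit
            }
        }
        return result
    }

    func main() {
        // Independent decisions from, say, law, issuer, data subject and controller.
        ds := []Decision{Permit, NotApplicable, Deny, Permit}
        fmt.Println(denyOverrides(ds) == Deny) // prints true: the single Deny prevails
    }

Permit-overrides and first-applicable are the obvious siblings of this rule; choosing among them dynamically is what makes the mechanism context-sensitive.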
268.
GPU optimizations for a production molecular docking code -- Landaverde, Raphael J., January 2014
Thesis (M.Sc.Eng.) -- Boston University / Scientists have always felt the desire to perform computationally intensive tasks that surpass the capabilities of conventional single-core computers. As a result of this trend, Graphics Processing Units (GPUs) are increasingly used for general computation in scientific research, and the field of GPU acceleration is now a vast and mature discipline.
Molecular docking, the modeling of the interactions between two molecules, is a particularly computationally intensive task that has been the subject of research for many years. It is a critical simulation tool used in the screening of protein compounds for drug design and in research into the nature of life itself. The PIPER molecular docking program was previously accelerated using GPUs, achieving a notable speedup over the conventional single-core implementation. Since its original release, development of the CPU-based PIPER has not ceased, and it is now a mature and fast parallel code; the GPU version, however, still contains many potential points for optimization. In the current work, we present a new version of GPU PIPER that attains a 3.3x speedup over a parallel MPI version of PIPER running on an 8-core machine and using the optimized Intel Math Kernel Library. We achieve this speedup by optimizing existing kernels for modern GPU architectures and by migrating critical code segments to the GPU. In particular, we improve the runtime of the filtering and scoring stages by more than an order of magnitude, and we move all molecular data permanently to the GPU to improve data locality. This new speedup is obtained while retaining computational accuracy virtually identical to that of the CPU-based version. We also demonstrate that, owing to the dependence of the PIPER algorithm on the 3D Fast Fourier Transform, our GPU PIPER will likely remain proportionally faster than equivalent CPU-based implementations, with little room left for further optimization.
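The 3D FFT dependence referred to above is the correlation theorem at the heart of FFT-based docking. Schematically, for a receptor grid R and a ligand grid L (ignoring PIPER's multiple energy terms and its sampling over rotations), the score of every translation t is

    E(\mathbf{t}) \;=\; \sum_{\mathbf{r}} R(\mathbf{r})\, L(\mathbf{r}+\mathbf{t})
    \;=\; \mathcal{F}^{-1}\!\left[\,\overline{\mathcal{F}[R]} \cdot \mathcal{F}[L]\,\right](\mathbf{t})

so evaluating all N^3 translations of an N^3 grid costs O(N^3 log N) per rotation rather than O(N^6), on CPU and GPU alike -- which is why the speedup is expected to persist proportionally.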
This new GPU-accelerated version of PIPER is integrated into the ClusPro molecular docking and analysis server at Boston University. ClusPro has over 4,000 registered users and has run more than 50,000 jobs over the past four years.
269.
Developing high-fidelity mental models of programming concepts using manipulatives and interactive metaphors -- Funcke, Matthew, January 2015
It is well established that both learning and teaching programming are difficult tasks, and difficulties often stem from weak mental models and common misconceptions. This study proposes ActionWorld, a method of teaching programming that encourages high-fidelity mental models and attempts to minimise misconceptions in novice programmers through the use of metaphors and manipulatives. The elements in ActionWorld with which students interact are realisations of metaphors: as a simple example, a variable is represented metaphorically as a labelled box that can hold a value.

The dissertation develops a set of metaphors with several core requirements: metaphors should avoid causing misconceptions; they need to be high-fidelity, so as to avoid failing when applied to a new concept; students must be able to relate to them; and they should be usable across multiple educational media. The learning style that ActionWorld supports requires active participation from the student: the system acts as a foundation upon which students are encouraged to build their mental models. This teaching style is achieved by placing the student in the role of code interpreter; the code they need to interpret will not advance until they have demonstrated its meaning through the use of the aforementioned metaphors.

ActionWorld was developed using an iterative development process that consistently improved various aspects of the project through a continual evaluation-enhancement cycle. The primary outputs of this project include a unified set of high-fidelity metaphors, a virtual-machine API for use in similar future projects, and two metaphor-testing games. All of these deliverables were tested against multiple quality-evaluation criteria, with consistently positive results. ActionWorld and its constituent components contribute to the wide assortment of methods one might use to teach novice programmers.
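The variable-as-box metaphor mentioned above can be made concrete in code. A minimal sketch in Go, with names that are hypothetical rather than drawn from the ActionWorld code base:

    package main

    import "fmt"

    // Box is a labelled box that can hold one value: the metaphorical
    // representation of a variable described in the abstract.
    type Box struct {
        Label string
        Value int
    }

    // Assign puts a new value in the box, discarding the old one,
    // mirroring how a student "acts out" an assignment statement.
    func (b *Box) Assign(v int) {
        b.Value = v
    }

    func main() {
        x := Box{Label: "x"}
        x.Assign(5) // the student interprets: x = 5
        fmt.Printf("box %q now holds %d\n", x.Label, x.Value)
    }

The fidelity requirement is visible even here: the metaphor must keep working when the concept grows (references, scope, re-assignment), or it will seed the very misconceptions it is meant to prevent.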
270.
Web resource re-discovery : personal resource storage and retrieval on the World Wide Web -- Cooper, Ian, January 1998
This thesis examines the realm of Web resource re-discovery: the location of previously visited material to enable its further use. As the Web continues to grow, new tools must be developed for managing references to useful information, aiding the ever-growing numbers of users. Retro, a personal information storage and retrieval system, is a prototype of such a tool.

Examination of current practice identified two primary tools in use. The first, global indexes, were shown to be inadequate: they do not have access to the full content of the Web and therefore cannot fully support re-discovery. The second, hotlists, required manual intervention, disrupting the primary task of reading and understanding the content. To avoid the problems associated with resource discovery systems, and to enable the creation of automatic hotlists, Retro moves document indexing to the user's desktop.

The problems involved in recording and comparing Web content were then addressed. Personal Web proxies were used to intercept the addresses and content of every visited page, and possible uses of proxy hierarchies, providing shared Web memories, were discussed. The content of HTML pages was extracted into summaries using a two-stage SGML parsing technique; a document validity rate of only 13% indicated that such tools must be used with care.

Analysis of Retro, in a limited real-world environment, indicated document re-use at a level suitable for supporting the creation of automatic hotlists. Such lists provide useful supplements to existing tools. Projected requirements for personal index storage over twelve months averaged 15 Mbytes for the Retro filter, which is within acceptable limits for modern desktop computers. Aliases, identified as a serious potential threat for re-discovery tools, were found in 1% of recorded material. The evidence demonstrates that the Retro tools provide a useful supplementary environment for re-discovery, and indicates that future research to improve and extend this system is desirable.
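The interception step that Retro builds on can be sketched as a tiny personal HTTP proxy. The following Go sketch is purely illustrative (it is not Cooper's implementation, which predates Go entirely); it handles plain HTTP only and merely logs the URL where Retro would summarise and index the page content:

    package main

    import (
        "io"
        "log"
        "net/http"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        log.Printf("visited: %s", r.URL) // Retro would also index the page body here

        r.RequestURI = "" // server-side field; must be cleared before re-sending
        resp, err := http.DefaultTransport.RoundTrip(r)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()

        for k, vs := range resp.Header { // copy the origin server's headers back
            for _, v := range vs {
                w.Header().Add(k, v)
            }
        }
        w.WriteHeader(resp.StatusCode)
        io.Copy(w, resp.Body) // stream the page to the browser unchanged
    }

    func main() {
        // Point the browser's HTTP proxy setting at localhost:8080.
        log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(handler)))
    }

Because every page the browser fetches passes through the proxy, indexing happens without interrupting the user's primary task of reading, which is exactly the disruption the hotlist approach suffered from.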