201. Performance monitoring and analysis environment for distributed memory MIMD programs
Imre, Kayhan (1993)
This thesis studies event monitoring techniques used for collecting, filtering and visualising event traces from parallel programs. Implementations of two experimental monitoring systems are presented. The first is a hybrid implementation which uses extra hardware to collect event traces. The second is a software implementation running on the Edinburgh Concurrent Supercomputer. Both systems can gather event traces from parallel programs at very low cost. An event abstraction mechanism is used for filtering these event traces. Generic and application-specific performance metrics are obtained by using event abstraction techniques to replace event patterns with new abstract events, which are then used for visualising performance-related behaviour. The strengths and weaknesses of the event abstraction approach are discussed in the context of performance analysis and visualisation of message-passing parallel programs.
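The abstract does not reproduce the filtering algorithm, but the core idea of event abstraction, replacing a recognised pattern of low-level trace events with a single higher-level event, can be sketched roughly as below. The event record format, the event names and the send/recv pattern are hypothetical illustrations, not details taken from the thesis.

```python
# Minimal sketch of event abstraction: each non-overlapping occurrence
# of a pattern of low-level events is replaced by one abstract event.
# Event names and record fields are invented for illustration.

def abstract_events(trace, pattern, abstract_name):
    out, i = [], 0
    while i < len(trace):
        window = tuple(e["type"] for e in trace[i:i + len(pattern)])
        if window == pattern:
            # Collapse the matched span into a single abstract event.
            out.append({"type": abstract_name,
                        "start": trace[i]["time"],
                        "end": trace[i + len(pattern) - 1]["time"]})
            i += len(pattern)
        else:
            out.append(trace[i])
            i += 1
    return out

trace = [{"type": "send", "time": 0.10},
         {"type": "recv", "time": 0.30},
         {"type": "compute", "time": 0.40}]

# A send immediately followed by a recv becomes one "message_exchange"
# event, the kind of abstract event a visualisation tool would display.
print(abstract_events(trace, ("send", "recv"), "message_exchange"))
```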
202. Mathematical model of concurrent computation
Milne, George Johnstone (1977)
A mathematical model is presented in which we can understand and discuss the behaviour of concurrent computing agents such as interconnecting hardware modules, operating system components and parallel programs. It is shown that it is natural to represent computing agents in a "value-passing" framework rather than by using a global store. Proof techniques involving computation induction, which allow us to reason concisely about processes and the agents they represent, are also given, together with a uniform method of modelling the scheduling of a number of computing agents. Two scheduling techniques involving this method are presented and shown to be equivalent. This result is used in a final example, where the process model is used to produce two equivalent denotational semantics for a concurrent programming language involving path expressions.
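As a rough illustration of the "value-passing" view, in which agents communicate values directly rather than through a global store, the sketch below pairs two agents by a synchronous hand-over. The action encoding and the tiny scheduler are inventions for this example, not the calculus defined in the thesis.

```python
# Illustrative sketch only: agents are generators that offer "send" and
# "recv" actions on a named channel, and a scheduler pairs matching
# offers so values pass directly between agents. No global store exists.

def producer():
    for v in [1, 2, 3]:
        yield ("send", "c", v)       # offer value v on channel c

def consumer():
    total = 0
    while True:
        v = yield ("recv", "c")      # wait for a value on channel c
        total += v
        print("received", v, "running total", total)

def run(sender, receiver):
    recv_offer = next(receiver)      # advance receiver to its first offer
    for kind, chan, value in sender:
        # Pair the send offer with the pending recv offer: a rendezvous.
        assert kind == "send" and recv_offer == ("recv", chan)
        recv_offer = receiver.send(value)

run(producer(), consumer())
```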
203. Descriptive simplicity in parallel computing
Marr, Marcus (1997)
The programming of parallel computers is recognised as a difficult task, and a wide selection of parallel programming languages and environments exists. This thesis presents and examines the Hierarchical Skeleton Model (HSM), a model of parallel programming that combines ease of use, portability and flexibility. The model is based on the exploitation of nested parallelism in parallel algorithms expressed using a hierarchy of algorithmic skeletons. The model acknowledges that not all forms of parallelism can be expressed clearly using skeletons, and allows the use of ad hoc parallelism within the controlled framework of the skeleton hierarchy. The thesis describes the HSM model and defines the syntax and semantics of the HSM language. The model and language are evaluated using three problems and compared against solutions written using the Fork95++ language in a shared memory environment and the C++ language with the Message Passing Interface (MPI) in a distributed memory environment. The thesis concludes that the combination of the HSM model and language with an ad hoc parallel base model proved successful in tackling the problems with clearer and more concise code than either of the alternative languages.
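The HSM syntax itself is not given in the abstract, but the underlying idea, algorithmic skeletons as nestable higher-order building blocks, can be sketched as follows. The combinator names and the sequential semantics are illustrative assumptions, not HSM language constructs.

```python
# Sketch of nested algorithmic skeletons as higher-order functions:
# `farm` applies a worker independently to each input (data parallelism)
# and `pipeline` composes stages (task parallelism). Sequential
# semantics are used here; a real system would map the hierarchy onto
# processes. Names and combinators are illustrative, not HSM syntax.

def farm(worker):
    return lambda inputs: [worker(x) for x in inputs]

def pipeline(*stages):
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

# Nesting: a farm whose worker is itself a two-stage pipeline, mirroring
# HSM's exploitation of nested parallelism through a skeleton hierarchy.
normalise = pipeline(lambda x: x - 1, lambda x: x * 2)
process_all = farm(normalise)
print(process_all([3, 4, 5]))    # [4, 6, 8]
```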
204. Microprogrammed control of an associative processor
Armstrong, C. V. W. (1976)
No description available.
205. High level synthesis of memory architectures
Fallside, Hamish (1995)
The development of high level tools for electronic design has been driven by the increasing demands of an ever more complex design process. The diversification in the use of electronic circuitry requires design tools tailored to application-specific domains. Intelligent synthesis requires domain-specific knowledge in addition to general synthesis techniques. The preponderance of synthesis systems in domains such as Digital Signal Processing is indicative of this need. Methods are presented here for the synthesis of memory architectures in one such domain: image processing. The research concentrates on performance synthesis. The techniques presented aim to optimise the design so as to minimise the memory access bottleneck of the eventual hardware implementation. The development of a synthesis system is described which serves to support the research. Algorithmic descriptions, coded in C, are processed by the tool in order to produce a structural description of a memory architecture able to implement the presented algorithms in hardware. Data flow and dependence analysis techniques are employed; these address the "high levelness" of the input algorithm, an important task if the designer is to be relieved of low level design detail. Methods for organising the algorithm's data in, and its access from, memory are presented, and experimental results are included. The organisation of data in memory is accomplished as part of the scheduling process for the user algorithm. The methods aim to optimise the hardware implementation by maximising the utilisation of the memory resources allocated during synthesis. In dealing with the access of data from memory, methods are presented for the automatic detection of memory-inefficient structures in the user description, and their transformation into a representation yielding synthesised designs with greater memory throughput. Such designs are better able to support the user's algorithms within desired performance limitations. Examples are included which provide an evaluation of the techniques' efficacy.
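As a hedged illustration of detecting a memory-inefficient access structure and transforming it, the sketch below spots a column-major traversal of row-major image data in a toy loop representation and interchanges the loops. The loop representation and the single transformation rule are invented for this example, not the analyses implemented in the thesis.

```python
# Toy sketch: a loop nest is represented as an outer-to-inner list of
# index names, and an array's row-major layout as an ordered list of the
# same names. When the innermost loop does not vary the fastest-moving
# dimension, accesses are strided; interchanging the loops restores
# unit-stride access and hence memory throughput. All illustrative.

def row_major_friendly(loop_nest, array_dims):
    # Friendly when the innermost loop varies the last (contiguous) dim.
    return loop_nest[-1] == array_dims[-1]

def transform(loop_nest, array_dims):
    if row_major_friendly(loop_nest, array_dims):
        return loop_nest
    # Interchange: reorder the loops to match the memory layout.
    return [i for i in array_dims if i in loop_nest]

nest = ["x", "y"]      # for x: for y: a[y][x]  -- strided, inefficient
dims = ["y", "x"]      # a is laid out row-major as a[y][x]
print(transform(nest, dims))   # ['y', 'x']: unit-stride inner loop
```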
206. Observational models of requirements evolution
Felici, Massimo (2004)
Requirements Evolution is one of the main issues that affect development activities as well as system features (e.g., system dependability). Although researchers and practitioners recognise the importance of requirements evolution, research results and experience are still patchy. This points to a lack of methodologies that address requirements evolution. This thesis investigates the current understanding of requirements evolution and explores new directions in requirements evolution research. The empirical analysis of industrial case studies highlights software requirements evolution as an important issue. Unfortunately, traditional requirements engineering methodologies provide limited support to capture requirements evolution. Heterogeneous engineering provides a comprehensive account of system requirements. It stresses a holistic viewpoint that allows us to understand the underlying mechanisms of evolution of socio-technical systems. Requirements, as mappings between socio-technical solutions and problems, represent an account of the history of socio-technical issues arising and being solved within industrial settings. The formal extension of a heterogeneous account of requirements provides a framework to model and capture requirements evolution. The application of the proposed framework provides further evidence that it is possible to capture and model evolutionary information about requirements. The discussion of scenarios of use stresses practical necessities for methodologies addressing requirements evolution. Finally, the identification of a broad spectrum of evolutions in socio-technical systems points out strong contingencies between system evolution and dependability. This thesis argues that the better our understanding of socio-technical evolution, the better we can support system dependability. In summary, this thesis is concerned with software requirements evolution in industrial settings. It develops methodologies to empirically investigate and model requirements evolution, hence "Observational Models of Requirements Evolution". The results provide new insights in requirements engineering and identify the foundations for Requirements Evolution Engineering.
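The abstract stays at the level of methodology, but one simple way to capture evolutionary information about requirements, offered here purely as an illustration and not as the thesis's formal heterogeneous-engineering framework, is to compare the requirement sets of successive releases:

```python
# Illustrative sketch only: observe requirements evolution across
# releases as added/removed/retained sets. Requirement identifiers are
# hypothetical; this simple churn view is an assumption for
# illustration, not the framework developed in the thesis.

def evolution_step(prev_release, next_release):
    prev, nxt = set(prev_release), set(next_release)
    return {"added": nxt - prev,
            "removed": prev - nxt,
            "retained": prev & nxt}

release_1 = {"REQ-1", "REQ-2", "REQ-3"}
release_2 = {"REQ-2", "REQ-3", "REQ-4"}
step = evolution_step(release_1, release_2)
print(sorted(step["added"]), sorted(step["removed"]))

# A sequence of such steps over a project's releases yields an
# observational record of how the requirements set evolves over time.
```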
207. Engineering the performance of parallel applications
MacDonald, Neil Blair (1996)
Parallel computing platforms are widely used to run scientific applications. The vast majority of these applications are programmed in an explicitly parallel style. Often the performance of a parallel application is considered only after implementation, in the guise of performance debugging and tuning. Performance engineering approaches incorporate this into the design phase, using performance data to inform design decisions. This thesis is concerned with performance engineering of parallel applications. Performance engineering requires accurate predictive models of application performance. The accuracy of micro-analysis techniques for predicting the execution time of sequential code is investigated on a number of representative uniprocessor platforms. This approach is extended to SPMD (Single Program Multiple Data) parallel programs written in a message-passing style using collective communication operations. The approach is used to predict the execution time of commonly occurring parallel application structures, and its accuracy is assessed on a number of representative parallel platforms. Reasoning about the performance of parallel applications in the absence of contention is straightforward; situations in which all communication serialises can be analysed with a little more sophistication. Reasoning about the effects of contention between these two extreme cases is difficult. Furthermore, allowing point-to-point message-passing operations destroys the assumption of synchrony used to analyse SPMD programs using collective communications. The complexities introduced by these issues inhibit informal reasoning about performance properties of parallel systems. A formal framework for reasoning about the performance of parallel systems is developed, based on a timed process algebra, Eager Timed CCS. Methods for automatically analysing the performance of Eager Timed CCS models are developed and extended to handle abstract Eager Timed CCS models in which time can be represented symbolically. The techniques allow the derivation of parametric expressions for the execution time of models.
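A rough sketch of the micro-analysis idea, predicting execution time by summing calibrated per-operation costs and then extending the prediction to one SPMD phase that ends in a collective, follows. The cost figures and the logarithmic broadcast model are invented placeholders, not measurements or models from the thesis.

```python
import math

# Sketch of micro-analysis: sequential execution time is predicted by
# counting primitive operations and summing per-operation costs; an
# SPMD phase ending in a collective then takes the slowest process's
# compute time plus the collective's cost. All numbers are placeholders.

COST = {"flop": 10e-9, "load": 20e-9, "store": 20e-9}   # seconds/op

def sequential_time(op_counts):
    return sum(COST[op] * n for op, n in op_counts.items())

def spmd_phase_time(per_process_counts, procs, bcast_latency=50e-6):
    # Processes synchronise at the collective, so the phase lasts as
    # long as the slowest process, plus a log2(p) broadcast tree.
    compute = max(sequential_time(c) for c in per_process_counts)
    return compute + bcast_latency * math.ceil(math.log2(procs))

counts = [{"flop": 1_000_000, "load": 500_000, "store": 250_000}] * 8
print(f"predicted phase time: {spmd_phase_time(counts, procs=8):.6f} s")
```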
208. Implementation of neural networks as CMOS integrated circuits
Smith, Anthony V. W. (1988)
This thesis describes research into the VLSI implementation of neural networks. A novel approach is detailed, which uses streams of pulses to signal neural states and chopping clocks to perform multiplication on these pulse streams. Practical results, using custom VLSI devices, are presented. A second approach uses reduced-precision arithmetic as the basis of a digital neural simulator and shows how this arithmetic technique can be used to solve neural problems. Simulation results confirm the viability of this method.
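The chopping-clock multiplication can be sketched in simulation: a pulse stream whose firing rate encodes the neural state is gated by a clock whose duty cycle encodes the synaptic weight, so the surviving pulse rate approximates their product. The stochastic encoding and all parameters below are an illustrative reconstruction, not the circuit in the thesis.

```python
import random

# Simulation sketch of pulse-stream multiplication: the neural state is
# the probability of a pulse in a given tick, the weight is the duty
# cycle of a chopping clock, and an AND gate passes a pulse only when
# both are high, so the output rate approximates state * weight.
# Encoding and parameters are illustrative only.

def multiply_pulse_streams(state, weight, ticks=100_000, seed=1):
    rng = random.Random(seed)
    passed = 0
    for _ in range(ticks):
        pulse = rng.random() < state     # pulse present this tick?
        chop = rng.random() < weight     # chopping clock high this tick?
        passed += pulse and chop         # AND gate passes the pulse
    return passed / ticks

print(multiply_pulse_streams(0.6, 0.5))  # close to 0.30 = 0.6 * 0.5
```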
209. Digital parametric testing
Ward, Derek (1991)
As minimum geometries of VLSI processes continue to shrink, there have been two main effects on the field of parametric test. Firstly, structures must be able to characterise these smaller geometries; secondly, the space for test structures has become more limited due to the requirement for them to be located in the scribe channel. The work of this thesis investigates methods of increasing the efficiency of test structure implementation to alleviate these problems. This work has demonstrated SPICE parameter extraction from test transistors accessed via a digitally addressed multiplexer: first using test circuits to analyse pass-transistor effects, then on a test chip using multiplexed access. The technique allows SPICE parameters to be extracted from transistor arrays with a large saving in the number of probe pads and hence overall silicon area. Digital misalignment structures have been implemented for the characterisation of small geometry processes. Use of such structures is demonstrated in this thesis using both a shift register output and a novel 'diode vernier' scheme. One of the main drawbacks of using shift register structures has been the requirement for a large amount of functional circuitry. The diode vernier introduced in this thesis is a simply designed structure that can be easily tested with standard parametric test equipment and requires only one diode per test structure element. Finally, a digital process control chip has been fabricated to integrate the ideas presented in this thesis. This uses multiplexers to access both test transistors and diode vernier structures. It demonstrates the feasibility of using a digital approach to parametric test chip design which has the potential to significantly reduce the area required for test structures.
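The probe-pad saving from multiplexed access is easy to quantify in outline: direct access needs a set of pads per transistor, whereas multiplexed access needs one shared set plus a binary address bus. The pad counts per device below are illustrative assumptions, not figures from the thesis.

```python
import math

# Back-of-envelope sketch of the probe-pad saving from digitally
# addressed multiplexing. Pad counts per device are assumptions made
# for illustration, not figures from the thesis.

def pads_direct(n_transistors, pads_per_device=4):
    # Direct access: each test transistor brings its own terminals
    # (e.g. gate, drain, source, bulk) out to probe pads.
    return n_transistors * pads_per_device

def pads_multiplexed(n_transistors, shared_pads=6):
    # Multiplexed access: one shared set of force/measure pads plus
    # ceil(log2 n) digital address lines to select a device.
    return shared_pads + math.ceil(math.log2(n_transistors))

for n in (16, 64, 256):
    print(n, "transistors:", pads_direct(n), "pads direct vs",
          pads_multiplexed(n), "multiplexed")
# e.g. 256 transistors: 1024 pads direct vs 14 multiplexed.
```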
210. Representation and learning schemes for sentiment analysis
Mukras, Rahman (2009)
This thesis identifies four novel techniques for improving the performance of text sentiment analysis systems. These include feature extraction and selection, enrichment of the document representation, and exploitation of the ordinal structure of rating classes. The techniques were evaluated on four sentiment-rich corpora, using two well-known classifiers: Support Vector Machines and Naïve Bayes. This thesis proposes the Part-of-Speech Pattern Selector (PPS), a novel technique for automatically selecting Part-of-Speech (PoS) patterns. The PPS selects its patterns from a background dataset by use of a number of measures including Document Frequency, Information Gain, and the Chi-Squared Score. Extensive empirical results show that these patterns perform just as well as manually selected ones. This has important implications in terms of both the cost and the time spent in manual pattern construction. The position of a phrase within a document is shown to have an influence on its sentiment orientation, and document classification performance can be improved by weighting phrases in this regard. It is, however, also shown to be necessary to sample the distribution of sentiment-rich phrases within documents of a given domain prior to adopting a phrase-weighting criterion. A key factor in choosing a classifier for an Ordinal Sentiment Classification (OSC) problem is its ability to address ordinal inter-class similarities. Two types of classifiers are investigated: those that can inherently solve multi-class problems, and those that decompose a multi-class problem into a sequence of binary problems. Empirical results showed the former to be more effective with regard to both mean squared error and classification time. Important features in an OSC problem are shown to distribute themselves across similar classes. Most feature selection techniques are ignorant of inter-class similarities and hence easily overlook such features. The Ordinal Smoothing Procedure (OSP), which augments inter-class similarities into the feature selection process, is introduced in this thesis. Empirical results show the OSP to have a positive effect on mean squared error performance.
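The OSP is described only in outline here; the idea, spreading a feature's per-class counts to neighbouring rating classes before scoring so that features shared by adjacent classes survive selection, can be sketched as follows. The spill kernel and its parameter are invented for illustration, not the OSP's exact formulation; a standard selector such as Information Gain would then be run on the smoothed counts.

```python
# Sketch of an ordinal smoothing step for feature selection: each class
# count spills a fraction onto its ordinal neighbours, so a feature
# concentrated in adjacent rating classes (e.g. 4 and 5 stars) is not
# scored as if those classes were unrelated. Kernel and spill fraction
# are illustrative inventions, not the OSP's exact formulation.

def smooth(counts, spill=0.25):
    k = len(counts)
    out = [0.0] * k
    for c, n in enumerate(counts):
        neighbours = [d for d in (c - 1, c + 1) if 0 <= d < k]
        out[c] += n * (1 - spill * len(neighbours))
        for d in neighbours:
            out[d] += n * spill
    return out

# Classes are rating levels 1..5, in order. A feature occurring in two
# adjacent high ratings keeps a coherent mass after smoothing, whereas
# a feature scattered across unrelated classes gains no such coherence.
adjacent = [0, 0, 10, 12, 0]
scattered = [8, 0, 10, 0, 4]
print(smooth(adjacent))    # [0.0, 2.5, 8.0, 8.5, 3.0]
print(smooth(scattered))
```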