231. A productive response to legacy system petrification
Lauder, Anthony Philip James (January 2001)
Requirements change. The requirements of a legacy information system change, often in unanticipated ways, and at a more rapid pace than the rate at which the information system itself can be evolved to support them. The capabilities of a legacy system progressively fall further and further behind its evolving requirements, in a degrading process termed petrification. As systems petrify, they deliver diminishing business value, hamper business effectiveness, and drain organisational resources. To address legacy systems, the first challenge is to understand how to shed their resistance to tracking requirements change. The second challenge is to ensure that a newly adaptable system never again petrifies into a change-resistant legacy system. This thesis addresses both challenges. The approach outlined herein is underpinned by an agile migration process, termed Productive Migration, that homes in on the specific causes of petrification within each particular legacy system and provides guidance on how to address them. That guidance comes in part from a personalised catalogue of petrifying patterns, which capture recurring themes underlying petrification. These steer us to the problems actually present in a given legacy system, and lead us to suitable antidote productive patterns with which we can deal with those problems one by one. To prevent newly adaptable systems from again degrading into legacy systems, we appeal to a follow-on process, termed Productive Evolution, which embraces and keeps pace with change rather than resisting and falling behind it. Productive Evolution teaches us to be vigilant for signs of system petrification and helps us to nip them in the bud. The aim is to nurture systems that remain supportive of the business, that are adaptable in step with ongoing requirements change, and that continue to retain their value as significant business assets.
232. The divide-and-conquer method for the solution of the symmetric tridiagonal eigenproblem and transputer implementations
Fachin, M. P. G. (January 1994)
No description available.
233. A new blueprint for network QoS
Reeve, David C. (January 2003)
No description available.
234. Removing garbage collector synchronisation
King, Andrew C. (January 2004)
No description available.
235. Exploiting immunological metaphors in the development of serial, parallel and distributed learning algorithms
Watkins, Andrew (January 2005)
This thesis examines the use of immunological metaphors in building serial, parallel, and distributed learning algorithms. It offers a basic study in the development of biologically inspired algorithms which merge inspiration from biology with known, standard computing technology to examine robust methods of computing. The thesis begins by detailing key interactions found within the immune system that provide inspiration for the development of a learning system. It then exploits the use of more processing power to develop faster algorithms, which leads to the exploration of distributed computing resources for the examination of more biologically plausible systems.

This thesis offers the following main contributions. The components of the immune system that exhibit the capacity for learning are detailed. A framework for discussing learning algorithms is proposed; three properties of every learning algorithm (memory, adaptation, and decision-making) are identified for this framework, and traditional learning algorithms are placed in the context of this framework. An investigation into the use of immunological components for learning is provided, leading to an understanding of these components in terms of the learning framework. A simplification of the Artificial Immune Recognition System (AIRS) immune-inspired learning algorithm is provided by employing affinity-dependent somatic hypermutation. A parallel version of the Clonal Selection Algorithm (CLONALG) immune learning algorithm is developed, and it is shown that basic parallel computing techniques can provide computational benefits for this algorithm. Exploring this technology further, a parallel version of AIRS is offered; it is shown that applying these same parallel computing techniques to AIRS, while less scalable than when applied to CLONALG, still provides computational gains. A distributed approach to AIRS is offered, and it is argued that this approach provides a more biologically appealing model. This simple distributed approach is proposed as an initial step toward a more complex, distributed system.

Biological immune systems exhibit complex cellular interactions. The mechanisms of these interactions, while often poorly understood, hint at an extremely powerful information processing and problem solving system. This thesis demonstrates how the use of immunological principles coupled with standard computing technology can lead to the development of robust, biologically inspired learning algorithms.
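To make the clonal selection principle behind CLONALG concrete, here is a minimal Python sketch of the general idea: select the highest-affinity candidates, clone them in proportion to affinity, and mutate the clones inversely to affinity. It is an illustration only, not the thesis's serial or parallel implementation; the function names, parameters and bit-string encoding are assumptions.

```python
import random

def affinity(candidate, target):
    # Toy affinity: fraction of positions where the bit string matches the target.
    return sum(c == t for c, t in zip(candidate, target)) / len(target)

def mutate(candidate, rate):
    # Flip each bit with probability `rate`.
    return [b if random.random() > rate else 1 - b for b in candidate]

def clonal_selection_step(population, target, n_select=5, clone_factor=3):
    # Rank candidates by affinity and keep the best n_select.
    ranked = sorted(population, key=lambda c: affinity(c, target), reverse=True)
    selected = ranked[:n_select]
    clones = []
    for rank, cell in enumerate(selected):
        n_clones = clone_factor * (n_select - rank)   # more clones for higher-affinity cells
        rate = 1.0 - affinity(cell, target)           # less mutation for higher-affinity cells
        clones.extend(mutate(cell, rate) for _ in range(n_clones))
    # Next population: best originals plus their mutated clones, truncated to size.
    combined = selected + clones
    combined.sort(key=lambda c: affinity(c, target), reverse=True)
    return combined[:len(population)]

random.seed(0)
target = [1, 0, 1, 1, 0, 0, 1, 0]
population = [[random.randint(0, 1) for _ in range(len(target))] for _ in range(20)]
for _ in range(10):
    population = clonal_selection_step(population, target)
print(affinity(population[0], target))
```

A parallel version along the lines described in the abstract would typically partition the population across processes, run steps like this independently, and periodically exchange the best cells; that distribution is not shown here.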
236. Software measurement for functional programming
Ryder, Chris (January 2004)
This thesis presents an investigation into the usefulness of software measurement techniques, also known as software metrics, for software written in functional programming languages such as Haskell. Statistical analysis is performed on a selection of metrics for Haskell programs, some taken from the world of imperative languages. An attempt is made to assess the utility of various metrics in predicting likely places that bugs may occur in practice by correlating bug fixes with metric values within the change histories of a number of case study programs. This work also examines mechanisms for visualising the results of the metrics and shows some proof of concept implementations for Haskell programs, and notes the usefulness of such tools in other software engineering processes such as refactoring.
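As a rough illustration of the kind of analysis described above (correlating metric values with bug fixes mined from change histories), the following Python sketch computes a Spearman rank correlation between a per-function metric and per-function bug-fix counts. The data and helper names are invented for illustration and are not taken from the thesis's case studies or tooling.

```python
def ranks(values):
    # Rank positions (1-based); tied values receive the average of their ranks.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    # Spearman rank correlation: Pearson correlation computed on the ranks.
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-function data: a size/complexity metric value and the number
# of bug fixes touching that function in the change history.
metric_values = [12, 45, 7, 30, 22, 60, 15]
bug_fixes     = [ 1,  4, 0,  2,  1,  5,  1]
print(spearman(metric_values, bug_fixes))
```

A strong positive correlation in such data would suggest the metric is a useful predictor of where bugs are likely to occur, which is the question the thesis investigates empirically.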
237. Constructing efficient self-organising application layer multicast overlays
Tan, Su-Wei (January 2005)
This thesis investigates efficient techniques to build application layer multicast (ALM) trees that are both low cost (i.e. low resource usage) and low delay. We focus on self-organising, distributed proposals that use limited information about the underlying physical network and limited coordination between members, and that construct overlays with a bounded branching degree, subject to the bandwidth constraint of each individual member.
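The following Python sketch illustrates the flavour of the problem: greedily attaching members to an overlay tree so as to keep root-to-member delay low while respecting a per-member branching-degree bound. It is a hypothetical, centralised heuristic shown only for illustration; the thesis's proposals are self-organising and distributed, and the function name, node names and delay values here are assumptions.

```python
def build_degree_bounded_tree(delays, root, max_degree):
    """Greedily attach each member to the already-joined node that minimises
    its delay from the root, subject to a branching-degree bound."""
    members = set(delays) - {root}
    parent = {root: None}
    dist = {root: 0.0}        # delay from the root along the overlay tree
    children = {root: 0}
    while members:
        best = None
        for m in members:
            for p in parent:                      # candidate parents already in the tree
                if children[p] >= max_degree:
                    continue
                d = dist[p] + delays[p][m]
                if best is None or d < best[0]:
                    best = (d, m, p)
        if best is None:
            raise RuntimeError("degree bound too tight to attach all members")
        d, m, p = best
        parent[m], dist[m], children[m] = p, d, 0
        children[p] += 1
        members.remove(m)
    return parent, dist

# Hypothetical pairwise unicast delays (ms) between the root R and members A..D.
delays = {
    "R": {"A": 10, "B": 25, "C": 40, "D": 30},
    "A": {"R": 10, "B": 12, "C": 35, "D": 28},
    "B": {"R": 25, "A": 12, "C": 15, "D": 20},
    "C": {"R": 40, "A": 35, "B": 15, "D": 22},
    "D": {"R": 30, "A": 28, "B": 20, "C": 22},
}
parent, dist = build_degree_bounded_tree(delays, root="R", max_degree=2)
print(parent, dist)
```

The tension visible even in this toy version, between attaching where delay is smallest and keeping every member's out-degree (and hence bandwidth usage) bounded, is the cost/delay trade-off the thesis addresses with limited, locally held information.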
238. Model driven language engineering
Patrascoiu, Octavian (January 2005)
Modeling is one of the most important activities in software engineering and development, and object-oriented (OO) modeling is one of the current practices. The Object Management Group (OMG) has defined a standard object-oriented modeling language, the Unified Modeling Language (UML). The OMG is not only interested in modeling languages; its primary aim is to enable easy integration of software systems and components using vendor-neutral technologies. This thesis investigates the possibilities for designing and implementing modeling frameworks and transformation languages that operate on models, and explores the validation of source and target models. Specifically, we focus on OO models used in OMG's Model Driven Architecture (MDA), which can be expressed in UML terms (e.g. classes and associations).

The thesis presents the Kent Modeling Framework (KMF), a modeling framework that we developed, and describes how this framework can be used to generate a modeling tool from a model. It then proceeds to describe the customization of the generated code, in particular the definition of methods that allow a rapid and repeatable instantiation of a model.

Model validation should include not only checking well-formedness using OCL constraints, but also evaluating model quality. Software metrics are a useful means of evaluating the quality of both software development processes and software products. As models are used to drive the entire software development process, it is unlikely that high quality software will be obtained from low quality models. The thesis presents a methodology, supported by KMF, that uses the UML specification to compute design metrics at an early stage of software development.

The thesis presents a transformation language called YATL (Yet Another Transformation Language), which was designed and implemented to support the features required by OMG's Request For Proposals and the future QVT standard. YATL is a hybrid language (a mix of declarative and imperative constructs) designed to answer the Query/Views/Transformations Request For Proposals issued by the OMG and to express model transformations as required by the Model Driven Architecture (MDA) approach. Several examples of model transformations, which have been implemented using YATL and the support provided by KMF, are presented; these experiments cover different knowledge areas such as programming languages, visual diagrams and distributed systems. YATL was used to implement the following transformations:

* UML to Java mapping (see the sketch below)
* Spider diagrams to OCL mapping
* EDOC to Web Services
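YATL syntax is not reproduced in this abstract, but the following Python sketch suggests the kind of rule a UML-to-Java mapping applies: each class in a (here drastically simplified) model becomes a Java class skeleton with private fields and getters. The model representation and function name are illustrative assumptions, not KMF or YATL code.

```python
# Hypothetical, minimal class model: a class name plus typed attributes.
model = {
    "name": "Customer",
    "attributes": [("name", "String"), ("age", "int")],
}

def uml_class_to_java(cls):
    # Map each attribute to a private field plus a getter, in the spirit of a
    # UML-to-Java transformation rule.
    lines = [f"public class {cls['name']} {{"]
    for attr, typ in cls["attributes"]:
        lines.append(f"    private {typ} {attr};")
    for attr, typ in cls["attributes"]:
        getter = "get" + attr[0].upper() + attr[1:]
        lines.append(f"    public {typ} {getter}() {{ return {attr}; }}")
    lines.append("}")
    return "\n".join(lines)

print(uml_class_to_java(model))
```

A declarative transformation language such as YATL expresses mappings of this kind as rules over source and target metamodels rather than as hand-written traversal code; the sketch only shows the shape of the input and output.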
239. Artificial immune systems for Web content mining: focusing on the discovery of interesting information
Secker, Andrew D. (January 2006)
This thesis explores the way in which biological metaphors can be applied to web content mining and, more specifically, the identification of interesting information in web documents. Web content mining is the use of content found on the web, most usually the text found on web pages, for data mining tasks such as classification. Because web content is noisy and undergoing constant change, an adaptive system is required. The discovery of interesting information is an advance on basic text mining in that it aims to identify text that is novel, unexpected or surprising to a user, whilst still being relevant. This thesis investigates the use of Artificial Immune Systems (AIS) applied to the discovery of interesting information, as AIS are thought to confer the adaptability and learning required for this task. Two novel Artificial Immune Systems are described and tested. AISEC (Artificial Immune System for Interesting E-mail Classification) is a novel, immune-inspired system for the classification of e-mail. It is shown that AISEC performs with a predictive accuracy comparable to a naïve Bayesian algorithm when continually classifying e-mail collected from a real user. This part of the thesis contributes to the understanding of how AIS react in a continuous learning scenario. Following from the knowledge gained by testing AISEC, AISIID (Artificial Immune System for Interesting Information Discovery) is then described. A study involving subjective evaluation of the results by users is undertaken, and AISIID is seen to discover pages rated more interesting by users than a comparative system. The results of this study also reveal that AISIID performs with subjective quality similar to the well-known search engine Google. This leads to a contribution regarding a better understanding of the user's perception of interestingness, and possible inadequacies in the current understanding of interestingness regarding text documents.
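As a rough illustration of the notion of interestingness used above (text that is relevant to the user yet novel with respect to what has already been seen), the following Python sketch scores a document by combining profile relevance with novelty. This is a generic formulation shown only for illustration, not AISIID's immune-inspired mechanism; all names and data are assumptions.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def interestingness(doc, profile, seen_docs):
    # Relevant to the user's profile, but novel w.r.t. documents already seen.
    doc_vec = Counter(doc.lower().split())
    relevance = cosine(doc_vec, profile)
    novelty = 1.0 - max(
        (cosine(doc_vec, Counter(s.lower().split())) for s in seen_docs),
        default=0.0,
    )
    return relevance * novelty

profile = Counter("immune learning classification adaptive".split())
seen = ["adaptive immune classification of e-mail"]
print(interestingness("immune inspired learning for novel web pages", profile, seen))
```

A document identical to something already seen scores zero however relevant it is, while an irrelevant document scores zero however novel it is; capturing that balance for real users is the harder problem the thesis evaluates subjectively.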
240. A unified model for inter- and intra-processor concurrency
Schweigler, Mario (January 2006)
Although concurrency is generally perceived to be a 'hard' subject, it can in fact be very simple, provided that the underlying model is simple. The occam-pi parallel processing language provides such a simple yet powerful concurrency model that is based on CSP and the pi-calculus. This thesis presents pony, the occam-pi Network Environment. occam-pi and pony provide a new, unified concurrency model that bridges inter- and intra-processor concurrency. This enables the development of distributed applications in a transparent, dynamic and highly scalable way. The author specified the layout of the pony system as presented in this thesis, and carried out about 90% of the implementation. This thesis is structured into three main parts, as well as an introduction and an appendix. In the introduction, the need for a unified concurrency model is examined in detail. Thereupon, the pony environment is presented as a solution that provides such a unified model. The first part of this thesis is concerned with the usage of the pony environment for the development of distributed applications. It presents the interface between pony and the user-level code, as well as pony's configuration and a sample application. The second part presents the design and implementation of the pony environment. It explains the internal structure of pony, the implementation of pony's components and public processes, and the integration of pony in the KRoC compiler. The third part evaluates pony's performance and contains the final conclusions. It presents a number of performance tests and concludes with a discussion of the work presented in this thesis, along with an outline of possible future research.
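occam-pi code is not shown in this abstract, but the following Python sketch conveys the channel-based (CSP-style) communication that occam-pi provides natively and that pony extends transparently across processor boundaries. Note that occam-pi channels are synchronised rendezvous, whereas the buffered queue below is only an approximation; all names here are illustrative.

```python
import threading
import queue

def producer(out_chan):
    # Send a sequence of values down the channel, then a terminating None.
    for i in range(5):
        out_chan.put(i)
    out_chan.put(None)

def consumer(in_chan, results):
    # Receive values until the terminating None arrives.
    while True:
        v = in_chan.get()
        if v is None:
            break
        results.append(v * v)

chan = queue.Queue()          # stands in for a channel between two processes
results = []
t1 = threading.Thread(target=producer, args=(chan,))
t2 = threading.Thread(target=consumer, args=(chan, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)                # [0, 1, 4, 9, 16]
```

In the unified model the abstract describes, the producer and consumer could sit in the same process, on different cores, or on different machines, and the channel-based code would remain unchanged; that transparency is what pony provides for occam-pi.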