  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Integration of security and reliability in a distributed collaborative environment

Ali, Edries Abdelhadi January 2001 (has links)
No description available.
22

A productive response to legacy system petrification

Lauder, Anthony Philip James January 2001 (has links)
Requirements change. The requirements of a legacy information system change, often in unanticipated ways, and at a more rapid pace than the rate at which the information system itself can be evolved to support them. The capabilities of a legacy system progressively fall further and further behind their evolving requirements, in a degrading process termed petrification. As systems petrify, they deliver diminishing business value, hamper business effectiveness, and drain organisational resources. To address legacy systems, the first challenge is to understand how to shed their resistance to tracking requirements change. The second challenge is to ensure that a newly adaptable system never again petrifies into a change resistant legacy system. This thesis addresses both challenges. The approach outlined herein is underpinned by an agile migration process - termed Productive Migration - that homes in upon the specific causes of petrification within each particular legacy system and provides guidance upon how to address them. That guidance comes in part from a personalised catalogue of petrifying patterns, which capture recurring themes underlying petrification. These steer us to the problems actually present in a given legacy system, and lead us to suitable antidote productive patterns via which we can deal with those problems one by one. To prevent newly adaptable systems from again degrading into legacy systems, we appeal to a follow-on process, termed Productive Evolution, which embraces and keeps pace with change rather than resisting and falling behind it. Productive Evolution teaches us to be vigilant against signs of system petrification and helps us to nip them in the bud. The aim is to nurture systems that remain supportive of the business, that are adaptable in step with ongoing requirements change, and that continue to retain their value as significant business assets.
23

The divide-and-conquer method for the solution of the symmetric tridiagonal eigenproblem and transputer implementations

Fachin, M. P. G. January 1994 (has links)
No description available.
24

A new blueprint for network QoS

Reeve, David C. January 2003 (has links)
No description available.
25

Removing garbage collector synchronisation

King, Andrew C. January 2004 (has links)
No description available.
26

Exploiting immunological metaphors in the development of serial, parallel and distributed learning algorithms

Watkins, Andrew January 2005 (has links)
This thesis examines the use of immunological metaphors in building serial, parallel, and distributed learning algorithms. It offers a basic study in the development of biologically-inspired algorithms which merge inspiration from biology with known, standard computing technology to examine robust methods of computing. This thesis begins by detailing key interactions found within the immune system that provide inspiration for the development of a learning system. It then exploits the use of more processing power for the development of faster algorithms. This leads to the exploration of distributed computing resources for the examination of more biologically plausible systems. This thesis offers the following main contributions. The components of the immune system that exhibit the capacity for learning are detailed. A framework for discussing learning algorithms is proposed. Three properties of every learning algorithm (memory, adaptation, and decision-making) are identified for this framework, and traditional learning algorithms are placed in the context of this framework. An investigation into the use of immunological components for learning is provided. This leads to an understanding of these components in terms of the learning framework. A simplification of the Artificial Immune Recognition System (AIRS) immune-inspired learning algorithm is provided by employing affinity-dependent somatic hypermutation. A parallel version of the Clonal Selection Algorithm (CLONALG) immune learning algorithm is developed. It is shown that basic parallel computing techniques can provide computational benefits for this algorithm. Exploring this technology further, a parallel version of AIRS is offered. It is shown that applying these same parallel computing techniques to AIRS, while less scalable than when applied to CLONALG, still provides computational gains.
A distributed approach to AIRS is offered, and it is argued that this approach provides a more biologically appealing model. The simple distributed approach is proposed in terms of an initial step toward a more complex, distributed system. Biological immune systems exhibit complex cellular interactions. The mechanisms of these interactions, while often poorly understood, hint at an extremely powerful information processing/problem solving system. This thesis demonstrates how the use of immunological principles coupled with standard computing technology can lead to the development of robust, biologically inspired learning algorithms.
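The clonal selection principle behind CLONALG and AIRS, including the affinity-dependent hypermutation the abstract mentions, can be sketched in a few lines: clone high-affinity candidates, mutate the clones (better candidates mutate less), and reselect the best. The sketch below is a generic Python illustration under invented names and parameters, not the thesis's implementation:

```python
import random

def clonal_selection(fitness, dim, pop_size=20, clones_per_cell=5,
                     generations=50, seed=0):
    """Minimal clonal-selection loop (illustrative, not CLONALG itself):
    keep the fitter half, clone each survivor, hypermutate the clones
    with affinity-dependent severity, then reselect."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        survivors = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        next_pop = list(survivors)  # elitism: keep the parents
        for rank, cell in enumerate(survivors):
            # Affinity-dependent somatic hypermutation:
            # higher-ranked (fitter) cells receive smaller perturbations.
            sigma = 0.05 + 0.5 * rank / len(survivors)
            for _ in range(clones_per_cell):
                next_pop.append([g + rng.gauss(0, sigma) for g in cell])
        next_pop.sort(key=fitness, reverse=True)
        pop = next_pop[:pop_size]
    return max(pop, key=fitness)

# Maximise a simple concave function; the optimum is at the origin.
best = clonal_selection(lambda x: -sum(g * g for g in x), dim=3)
print(best)
```

The parallel versions discussed in the thesis distribute exactly this clone-and-mutate step across processors, since each clone can be evaluated independently.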
27

Software measurement for functional programming

Ryder, Chris January 2004 (has links)
This thesis presents an investigation into the usefulness of software measurement techniques, also known as software metrics, for software written in functional programming languages such as Haskell. Statistical analysis is performed on a selection of metrics for Haskell programs, some taken from the world of imperative languages. An attempt is made to assess the utility of various metrics in predicting likely places that bugs may occur in practice by correlating bug fixes with metric values within the change histories of a number of case study programs. This work also examines mechanisms for visualising the results of the metrics and shows some proof of concept implementations for Haskell programs, and notes the usefulness of such tools in other software engineering processes such as refactoring.
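The core idea of correlating metric values with bug fixes can be illustrated with a toy example: compute a crude branching metric over (hypothetical) Haskell function bodies and correlate it with per-function bug-fix counts. The metric, data, and names below are invented for illustration and are not taken from the thesis:

```python
import re

def branch_metric(src: str) -> int:
    """Crude branching metric for Haskell-like source: count case/if
    expressions and guard bars. Purely illustrative."""
    return len(re.findall(r"\bcase\b|\bif\b|\|", src))

def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical function bodies and bug-fix counts from a change history.
bodies = [
    "f x = x + 1",
    "g x = if x > 0 then 1 else 0",
    "h x | x > 0 = 1 | otherwise = case x of 0 -> 0; _ -> -1",
]
bugs = [0, 1, 3]
metrics = [branch_metric(b) for b in bodies]
r = pearson(metrics, bugs)
print(metrics, round(r, 2))  # → [0, 1, 3] 1.0
```

A real study, as the abstract describes, would mine metric values and fix commits from version-control history rather than a hand-built list.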
28

Constructing efficient self-organising application layer multicast overlays

Tan, Su-Wei January 2005 (has links)
This thesis investigates efficient techniques to build both low cost (i.e. low resource usage) and low delay ALM trees. We focus on self-organising distributed proposals that use limited information about the underlying physical network, limited coordination between the members, and construct overlays with bounded branching degree subject to the bandwidth constraint of each individual member.
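A degree-bounded, delay-aware join of the kind described above can be sketched greedily: a new member attaches to the existing member that still has spare out-degree and offers the lowest measured unicast delay. The data structures and names here are hypothetical, chosen only to show the constraint:

```python
def join_overlay(tree, delays, new_node, max_degree=3):
    """Greedy join for a degree-bounded ALM tree (illustrative sketch).
    tree:   hypothetical dict mapping member -> list of children
    delays: hypothetical dict mapping member -> measured delay to new_node
    Attaches new_node under the lowest-delay member with spare degree."""
    candidates = [m for m in tree if len(tree[m]) < max_degree]
    parent = min(candidates, key=lambda m: delays[m])
    tree[parent].append(new_node)
    tree[new_node] = []
    return parent

tree = {"root": ["a", "b"], "a": [], "b": []}
delays = {"root": 40, "a": 10, "b": 25}
print(join_overlay(tree, delays, "c", max_degree=2))  # → a
```

With `max_degree=2` the root is saturated, so the join falls to the lowest-delay leaf; a self-organising protocol would run such decisions locally at each member with only partial network knowledge, as the abstract emphasises.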
29

Model driven language engineering

Patrascoiu, Octavian January 2005 (has links)
Modeling is one of the most important activities in software engineering and development, and one of the current practices is object-oriented (OO) modeling. The Object Management Group (OMG) has defined a standard object-oriented modeling language, the Unified Modeling Language (UML). The OMG is not only interested in modeling languages; its primary aim is to enable easy integration of software systems and components using vendor-neutral technologies. This thesis investigates the design and implementation of modeling frameworks and transformation languages that operate on models, and explores the validation of source and target models. Specifically, we focus on OO models used in OMG's Model Driven Architecture (MDA), which can be expressed in UML terms (e.g. classes and associations). The thesis presents the Kent Modeling Framework (KMF), a modeling framework that we developed, and describes how this framework can be used to generate a modeling tool from a model. It then proceeds to describe the customization of the generated code, in particular the definition of methods that allow a rapid and repeatable instantiation of a model. Model validation should include not only checking well-formedness using OCL constraints, but also evaluating model quality. Software metrics are a useful means of evaluating the quality of both software development processes and software products. As models are used to drive the entire software development process, it is unlikely that high quality software will be obtained from low quality models. The thesis presents a methodology, supported by KMF, that uses the UML specification to compute design metrics at an early stage of software development. The thesis also presents a transformation language called YATL (Yet Another Transformation Language), which was designed and implemented to support the features required by OMG's Request For Proposals and the future QVT standard.
YATL is a hybrid language (a mix of declarative and imperative constructs) designed to answer the Query/Views/Transformations Request For Proposals issued by the OMG and to express model transformations as required by the Model Driven Architecture (MDA) approach. Several examples of model transformations, implemented using YATL and the support provided by KMF, are presented. These experiments cover different knowledge areas such as programming languages, visual diagrams and distributed systems. YATL was used to implement the following transformations:
* UML to Java mapping
* Spider diagrams to OCL mapping
* EDOC to Web Services
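The flavour of a UML-to-Java mapping like the first one listed can be sketched as a tiny model-to-text transform: each class in a (hypothetical) model dictionary becomes a Java class with private fields and getters. This is an illustration of the mapping idea only, not YATL syntax or the thesis's actual transformation:

```python
def uml_to_java(model):
    """Toy UML-to-Java mapping: emit one Java class per model class,
    with a private field and a getter per attribute. The 'model' shape
    (classes / name / attributes) is invented for this sketch."""
    out = []
    for cls in model["classes"]:
        lines = [f"public class {cls['name']} {{"]
        for name, typ in cls["attributes"].items():
            lines.append(f"    private {typ} {name};")
            lines.append(
                f"    public {typ} get{name.capitalize()}() {{ return {name}; }}")
        lines.append("}")
        out.append("\n".join(lines))
    return "\n\n".join(out)

model = {"classes": [{"name": "Account", "attributes": {"balance": "double"}}]}
java = uml_to_java(model)
print(java)
```

A hybrid language like YATL expresses the same mapping declaratively (match a source pattern, produce a target pattern) while allowing imperative glue where pattern matching alone is awkward.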
30

Artificial immune systems for Web content mining : focusing on the discovery of interesting information

Secker, Andrew D. January 2006 (has links)
This thesis explores the way in which biological metaphors can be applied to web content mining and, more specifically, the identification of interesting information in web documents. Web content mining is the use of content found on the web, most usually the text found on web pages, for data mining tasks such as classification. Due to the nature of the search domain, i.e. the web content is noisy and undergoing constant change, an adaptive system is required. The discovery of interesting information is an advance on basic text mining in that it aims to identify text that is novel, unexpected or surprising to a user, whilst still being relevant. This thesis investigates the use of Artificial Immune Systems (AIS) applied to the discovery of interesting information, as AIS are thought to confer the adaptability and learning required for this task. Two novel Artificial Immune Systems are described and tested. AISEC (Artificial Immune System for Interesting E-mail Classification) is a novel, immune-inspired system for the classification of e-mail. It is shown that AISEC performs with a predictive accuracy comparable to a naïve Bayesian algorithm when continually classifying e-mail collected from a real user. This section contributes to the understanding of how AIS react in a continuous learning scenario. Following from the knowledge gained by testing AISEC, AISIID (Artificial Immune System for Interesting Information Discovery) is then described. A study involving the subjective evaluation of the results by users is undertaken, and AISIID is seen to discover pages rated more interesting by users than a comparative system. The results of this study also reveal that AISIID performs with subjective quality similar to the well-known search engine Google. This leads to a contribution regarding a better understanding of the user's perception of interestingness, and possible inadequacies in the current understanding of interestingness regarding text documents.
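The matching step in an immune-inspired classifier like AISEC can be sketched very simply: a population of detectors (here, word sets) labels a message when its affinity (overlap) with any detector passes a threshold. This is a loose illustration of the matching idea with invented detectors, not the thesis's actual algorithm, which also clones and mutates detectors in response to user feedback:

```python
def affinity(detector, words):
    """Fraction of a detector's words present in the message."""
    return len(detector & words) / len(detector)

def classify(detectors, words, threshold=0.5):
    """Flag a message if any detector's affinity meets the threshold.
    Detectors and threshold are illustrative, not from the thesis."""
    return any(affinity(d, words) >= threshold for d in detectors)

detectors = [{"buy", "now", "free"}, {"meeting", "minutes"}]
spam = {"buy", "cheap", "pills", "now"}
ham = {"lunch", "tomorrow"}
print(classify(detectors, spam), classify(detectors, ham))  # → True False
```

The adaptive element, which the static sketch omits, is what lets such a system track the "noisy and undergoing constant change" domain the abstract describes: detectors that match correctly are cloned and mutated, while stale ones die off.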
