81

Improving the acoustic modelling of speech using modular/ensemble combinations of heterogeneous neural networks

Antoniou, Christos Andrea January 2000 (has links)
No description available.
82

A software testing estimation and process control model

Archibald, Colin J. January 1998 (has links)
The control of the testing process and estimation of the resource required to perform testing is key to delivering a software product of target quality on budget. This thesis explores the use of testing to remove errors, the part that metrics and models play in this process, and considers an original method for improving the quality of a software product. The thesis investigates the possibility of using software metrics to estimate the testing resource required to deliver a product of target quality into deployment, and also to determine during the testing phases the correct point in time to proceed to the next testing phase in the life-cycle. Along with the metrics Clear ratio, Churn, Error rate halving, Severity shift, and faults per week, a new metric 'Earliest Visibility' (EV) is defined and used to control the testing process. EV is constructed upon the link between the point at which an error is made within development and the point at which it is subsequently found during testing. To increase the effectiveness of testing and reduce costs whilst maintaining quality, the model operates by targeting each test phase at the errors linked to that phase and by allowing each test phase to build upon the previous one. EV also provides a measure of testing effectiveness and fault introduction rate by development phase. The resource estimation model is based on a gradual refinement of an estimate, which is updated following each development phase as more reliable data become available. Used in conjunction with the process control model, which will ensure the correct testing phase is in operation, the estimation model will have accurate data for each testing phase as input. The proposed model and metrics have been developed and tested on a large-scale (4 million LOC) industrial telecommunications product written in C and C++ running within a Unix environment. It should be possible to extend this work to suit other environments and other development life-cycles.
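As a rough illustration of the gradual-refinement idea described above (not the thesis's actual model), the sketch below updates a testing-effort estimate after each completed phase using the ratio of faults found to faults predicted. The function name, the correction formula, and the data are illustrative assumptions.

```python
# Hypothetical sketch: refining a testing-resource estimate phase by phase.
# The correction rule and figures are assumptions for illustration only.

def refine_estimate(initial_estimate, phase_observations):
    """initial_estimate: effort estimate (person-weeks) made before testing.
    phase_observations: (predicted_faults, found_faults, effort_spent) per phase."""
    estimate, spent = initial_estimate, 0.0
    for predicted, found, effort in phase_observations:
        spent += effort
        # Scale the remaining effort by how well the fault prediction held up.
        correction = found / predicted if predicted else 1.0
        remaining = max(estimate - spent, 0.0)
        estimate = spent + remaining * correction
    return estimate

# Initial estimate of 100 person-weeks, two completed test phases.
print(refine_estimate(100.0, [(50, 60, 30.0), (40, 35, 25.0)]))
```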
83

End-to-end security in active networks

Brown, Ian January 2001 (has links)
Active network solutions have been proposed to many of the problems caused by the increasing heterogeneity of the Internet. These systems allow nodes within the network to process data passing through in several ways. Allowing code from various sources to run on routers introduces numerous security concerns that have been addressed by research into safe languages, restricted execution environments, and other related areas. But little attention has been paid to an even more critical question: the effect on end-to-end security of active flow manipulation. This thesis first examines the threat model implicit in active networks. It develops a framework of security protocols in use at various layers of the networking stack, and their utility to multimedia transport and flow processing, and asks if it is reasonable to give active routers access to the plaintext of these flows. After considering the various security problems introduced, such as vulnerability to attacks on intermediaries or coercion, it concludes not. We then ask if active network systems can be built that maintain end-to-end security without seriously degrading the functionality they provide. We describe the design and analysis of three such protocols: a distributed packet filtering system that can be used to adjust multimedia bandwidth requirements and defend against denial-of-service attacks; an efficient composition of link and transport-layer reliability mechanisms that increases the performance of TCP over lossy wireless links; and a distributed watermarking service that can efficiently deliver media flows marked with the identity of their recipients. In all three cases, functionality similar to that of designs which do not maintain end-to-end security is provided. Finally, we reconsider traditional end-to-end arguments in both networking and security, and show that they have continuing importance for Internet design. Our watermarking work adds the concept of splitting trust throughout a network to that model; we suggest further applications of this idea.
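As a purely illustrative example of what an active-router packet filter can do without access to flow plaintext, the sketch below rate-limits traffic per source address using only header-level information (source and packet size). The class, interface and thresholds are assumptions, not the filtering protocol designed in the thesis.

```python
# Illustrative sketch: a per-source token-bucket filter of the kind an active
# router might apply for denial-of-service defence, using header fields only.

import time
from collections import defaultdict

class RateLimiter:
    def __init__(self, max_bytes_per_sec):
        self.max_rate = max_bytes_per_sec
        self.tokens = defaultdict(lambda: max_bytes_per_sec)  # per-source bucket
        self.last_seen = defaultdict(time.monotonic)          # last packet time

    def admit(self, src_addr, packet_len):
        """Return True to forward the packet, False to drop it."""
        now = time.monotonic()
        elapsed = now - self.last_seen[src_addr]
        self.last_seen[src_addr] = now
        # Refill the bucket in proportion to elapsed time, capped at one second's worth.
        bucket = min(self.max_rate, self.tokens[src_addr] + elapsed * self.max_rate)
        if bucket >= packet_len:
            self.tokens[src_addr] = bucket - packet_len
            return True
        self.tokens[src_addr] = bucket
        return False

# Example: forward at most ~1 MB/s per source address.
limiter = RateLimiter(1_000_000)
print(limiter.admit("10.0.0.1", 1500))
```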
84

A prototype Prolog-based code generator environment

Fan, Wei Yi January 1993 (has links)
No description available.
85

Computational learning of finite-state models for natural language processing

Belz, Anja January 2000 (has links)
No description available.
86

Lexical acquisition at the syntax-semantics interface : diathesis alternations, subcategorization frames and selectional preferences

McCarthy, Diana January 2001 (has links)
No description available.
87

An analysis of score distributions and design pattern interaction for object-oriented metrics

Huston, Brian January 2001 (has links)
One method suggested for improving software quality has been that of collecting metric scores for a given design and refactoring in response to what are deemed to be unsatisfactory metric values. In the case of object orientation, a considerable number of metrics have been proposed in the literature, with the intention of highlighting the possible 'misuse' of concepts such as inheritance and polymorphism. The aim is to produce systems which are more easily maintainable than may otherwise be the case (in terms of characteristics such as reduced modification times or increased class reusability). Subsequent to this, a major requirement for promoting the wide-scale adoption of such design metrics has been to establish their validity beyond mere intuitive appeal. The theoretical approach to validation has been limited, relying on the use of measurement axioms as an initial filter to rule out inconsistent measures. Given the profusion of possible systems, empirical studies must be seen to represent limited sampling, and taken as a whole seem to produce an excess of (sometimes conflicting) correlations. The aim has therefore been to establish a complementary approach to the attempts at validation so far undertaken. One aspect of this activity is to examine the theoretical nature of inter-metric dependencies, and a technique for evaluating levels of metric interaction is presented. This addresses a significant issue, as highly correlated metrics may imply redundancy, while conflicting measures can cause problems if they are simultaneously applied as part of a suite. To this end, a matrix-based technique for metric score generation and comparison is introduced. In addition, the interaction between metrics and a sample of design patterns is considered, to provide an assessment of whether these two approaches to improving software quality are in fact compatible. Methods of analysis are presented which gauge the effects of applying various patterns on certain metric scores, highlighting cases where viewpoints for the metric, the pattern (or indeed both) could be anomalous. The results generated suggest that only minor levels of redundancy and conflict exist amongst commonly quoted metrics, although the application of certain design patterns can actually work in opposition to the viewpoints for particular metrics. On this basis, overall recommendations regarding the selection and application of measures for designs are then made.
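A minimal sketch of the kind of inter-metric analysis described above: given a matrix of metric scores for a set of classes, pairwise correlations flag strongly positive pairs as potentially redundant and strongly negative pairs as potentially conflicting. The metric names, threshold and data below are assumed for illustration only, not the thesis's matrix technique.

```python
# Sketch under assumed data: flag redundant and conflicting metric pairs
# from a class-by-metric score matrix via pairwise correlation.

import numpy as np

def metric_interactions(scores, names, threshold=0.8):
    """scores: (n_classes, n_metrics) array; names: one label per metric."""
    corr = np.corrcoef(scores, rowvar=False)          # metric-by-metric correlation
    redundant, conflicting = [], []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if corr[i, j] >= threshold:
                redundant.append((names[i], names[j], corr[i, j]))
            elif corr[i, j] <= -threshold:
                conflicting.append((names[i], names[j], corr[i, j]))
    return redundant, conflicting

# Made-up scores for three commonly quoted object-oriented metrics.
scores = np.array([[3, 10, 0.20], [5, 14, 0.10], [2, 7, 0.40], [6, 17, 0.05]])
print(metric_interactions(scores, ["DIT", "WMC", "LCOM"]))
```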
88

The role of domain decomposition in the parallelisation of genetic search for multi-modal numerical optimisation

Vincent, Jonathan January 2001 (has links)
This thesis is concerned with the optimisation of multi-modal numerical problems using genetic algorithms. Genetic algorithms are an established technique, inspired by principles of genetics and evolution, and have been successfully utilised in a wide range of applications. However, they are computationally intensive and consequently, addressing problems of increasing size and complexity has led to research into parallelisation. This thesis is concerned with coarse-grained parallelism because of the growing importance of cluster computing. Current parallel genetic algorithm technology offers one coarse-grained approach, usually referred to as the island model. Parallelisation is concerned with the division of a computational system into components which can be executed concurrently on multiple processing nodes. It can be based on a decomposition of either the process or the domain on which it operates. The island model is a process-based approach, which divides the genetic algorithm population into a number of co-operating sub-populations. This research examines an alternative approach based on domain decomposition: the search space is divided into a number of regions which are separately optimised. The aims of this research are to determine whether domain decomposition is useful in terms of search performance, and whether it is feasible when there is no a priori knowledge of the search space. It is established empirically that domain decomposition offers a more robust sampling of the search space. It is further shown that the approach is beneficial when there is an element of deception in the problem. However, domain decomposition is non-trivial when the domain is irregular. The irregularities of the search space result in a computational load imbalance which would reduce the efficiency of a parallel implementation. To address this, a dynamic load-balancing algorithm is developed which adjusts the decomposition of the search space at run time, according to the fitness distribution. Using this algorithm, it is shown that domain decomposition is feasible, and offers significant search advantages in the field of multi-modal numerical optimisation. The performance is compared with a serial implementation and an island model parallel implementation on a number of non-trivial problems. It is concluded that the domain decomposition approach offers superior performance on these problems in terms of rapid convergence and final solution quality. Approaches to the extension and generalisation of the approach are suggested for further work.
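The domain-decomposition idea can be sketched as follows, under heavy simplification: the search interval is split into regions, each region is optimised independently (here by a toy mutation-only search standing in for a genetic algorithm), and the best result over all regions is returned. In the thesis each region would be assigned to its own processor; the operators, region boundaries and test function below are illustrative assumptions.

```python
# Hypothetical sketch of domain decomposition for multi-modal optimisation.

import math
import random

def optimise_region(f, lo, hi, generations=200, pop_size=20):
    """A toy mutation-only search within one region (stand-in for a GA)."""
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        parent = max(pop, key=f)                                  # crude selection
        child = min(hi, max(lo, parent + random.gauss(0.0, (hi - lo) * 0.05)))
        pop[pop.index(min(pop, key=f))] = child                   # replace the worst
    return max(pop, key=f)

def domain_decomposition_search(f, lo, hi, regions=4):
    """Split [lo, hi] into regions, optimise each independently, keep the best.
    In a parallel setting each region would run on a separate processor."""
    width = (hi - lo) / regions
    best_per_region = [optimise_region(f, lo + i * width, lo + (i + 1) * width)
                       for i in range(regions)]
    return max(best_per_region, key=f)

# A multi-modal test function with many local optima; global optimum near x = 0.
f = lambda x: math.cos(5.0 * x) - 0.1 * x * x
print(domain_decomposition_search(f, -10.0, 10.0))
```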
89

Parallel simulations using recurrence relations and relaxation

McGough, Andrew Stephen January 2000 (has links)
This thesis develops and evaluates a number of efficient algorithms for performing parallel simulations. These algorithms achieve approximately linear speed-up, in the sense that their run times are in the order of O(n/p), where n is the size of the problem and p is the number of processors employed. The systems that are being simulated are related to ATM switches and sliding window communication protocols. The algorithms presented first are concerned with the parallel generation and merging of bursty arrival sources, marking and deleting of lost cells due to buffer overflows, and computation of departure instants. They work well on shared memory multiprocessors. However, different techniques need to be employed in order to achieve similar speed-ups on a distributed cluster of workstations. The main obstacle is the inter-process communication overhead. To overcome it, new algorithms are developed that reduce considerably the amount of information transferred between processors. They are applied both to the ATM switch and to the sliding window protocol with feedbacks. In all cases, the methodology relies on reducing the simulation task to a set of recurrence relations. The latter are solved using the techniques of parallel prefix computation, parallel merging and relaxation. The effectiveness of these algorithms is evaluated by comparing their run times with that of an optimised sequential algorithm. A number of experiments are carried out on a 12-processor shared memory system, and also on a distributed cluster of 12 processors connected by a fast Ethernet.
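To make the recurrence-relation approach concrete, a standard departure-instant recurrence D[n] = max(A[n], D[n-1]) + S[n] (arrival times A, service times S) can be expressed as a composition of max-plus affine maps; because that composition is associative, the whole sequence can be computed with a parallel prefix (scan). The sketch below shows the sequential form of that prefix and is an illustration of the general technique, not the thesis's implementation.

```python
# Sketch: departure instants via a prefix over composable max-plus maps.

def step(a, s):
    # One recurrence step as the map x -> max(x + p, q),
    # since max(x, a) + s == max(x + s, a + s): here p = s and q = a + s.
    return (s, a + s)

def compose(f, g):
    # Apply f then g; both are (p, q) pairs representing x -> max(x + p, q).
    p1, q1 = f
    p2, q2 = g
    return (p1 + p2, max(q1 + p2, q2))

def departures(arrivals, services, d0=0.0):
    """Sequential prefix over the composed maps. Because `compose` is
    associative, a parallel prefix (scan) over the same maps gives the
    identical result, which is what enables the near-linear speed-up."""
    out, prefix = [], (0.0, float("-inf"))        # identity map
    for a, s in zip(arrivals, services):
        prefix = compose(prefix, step(a, s))
        p, q = prefix
        out.append(max(d0 + p, q))
    return out

# Four cells with arrival instants A and service times S.
print(departures([0.0, 1.0, 1.5, 4.0], [2.0, 1.0, 3.0, 0.5]))
```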
90

Exploring run-time reconfiguration on programmable logic for DSP and telecommunications applications

Courtney, T. E. G. January 2003 (has links)
No description available.
