41

Evolutionary methods for the design of dispatching rules for complex and dynamic scheduling problems

Pickardt, Christoph W. January 2013
Three methods, based on Evolutionary Algorithms (EAs), to support and automate the design of dispatching rules for complex and dynamic scheduling problems are proposed in this thesis. The first method employs an EA to search for problem instances on which a given dispatching rule performs badly. These instances can then be analysed to reveal weaknesses of the tested rule, thereby providing guidelines for the design of a better rule. The other two methods are hyper-heuristics, which employ an EA directly to generate effective dispatching rules. In particular, one hyper-heuristic is based on a specific type of EA, called Genetic Programming (GP), and generates a single rule from basic job and machine attributes, while the other generates a set of work centre-specific rules by selecting a (potentially) different rule for each work centre from a number of existing rules. Each of the three methods is applied to some complex and dynamic scheduling problem(s), and the resulting dispatching rules are tested against benchmark rules from the literature. In each case, the benchmark rules are shown to be outperformed by a rule (set) that results from the application of the respective method, which demonstrates the effectiveness of the proposed methods.
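A dispatching rule of the kind such GP hyper-heuristics evolve can be pictured as a priority expression over basic job and machine attributes. The sketch below is illustrative only: the attribute names, operator set and tree encoding are assumptions for the example, not the thesis's implementation.

```python
import random

# Illustrative job attributes a dispatching rule might read (assumed, not from the thesis).
TERMINALS = ["processing_time", "due_date", "arrival_time", "work_remaining"]
OPERATORS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b if abs(b) > 1e-9 else 1.0,  # protected division, as is common in GP
}

def random_rule(depth=3):
    """Grow a random priority expression tree, as GP initialisation might."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPERATORS))
    return (op, random_rule(depth - 1), random_rule(depth - 1))

def priority(rule, job):
    """Evaluate the priority value a rule tree assigns to one job."""
    if isinstance(rule, str):
        return job[rule]
    op, left, right = rule
    return OPERATORS[op](priority(left, job), priority(right, job))

def dispatch(rule, queue):
    """Select the queued job with the lowest priority value (lower = more urgent)."""
    return min(queue, key=lambda job: priority(rule, job))

# A hand-written tree equivalent to the classic 'earliest due date' benchmark rule.
queue = [
    {"processing_time": 5, "due_date": 20, "arrival_time": 0, "work_remaining": 12},
    {"processing_time": 3, "due_date": 8, "arrival_time": 2, "work_remaining": 3},
]
print(dispatch("due_date", queue))  # picks the job due at time 8
```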
42

Towards the design of efficient error detection mechanisms

Leeke, Matthew January 2011
The pervasive nature of modern computer systems has led to an increase in our reliance on such systems to provide correct and timely services. Moreover, as the functionality of computer systems is being increasingly defined in software, it is imperative that software be dependable. It has previously been shown that a fault intolerant software system can be made fault tolerant through the design and deployment of software mechanisms implementing abstract artefacts known as error detection mechanisms (EDMs) and error recovery mechanisms (ERMs), hence the design of these components is central to the design of dependable software systems. The EDM design problem, which relates to the construction of a Boolean predicate over a set of program variables, is inherently difficult, with current approaches relying on system specifications and the experience of software engineers. As this process necessarily entails the identification and incorporation of program variables by an error detection predicate, this thesis seeks to address the EDM design problem from a novel variable-centric perspective, with the research presented supporting the thesis that, where it exists under the assumed system model, an efficient EDM consists of a set of critical variables. In particular, this research proposes (i) a metric suite that can be used to generate a relative ranking of the program variables in a software system with respect to their criticality, (ii) a systematic approach for the generation of highly efficient error detection predicates for EDMs, and (iii) an approach for dependability enhancement based on the protection of critical variables using software wrappers that implement error detection and correction predicates that are known to be efficient. This research substantiates the thesis that an efficient EDM contains a set of critical variables on the basis that (i) the proposed metric suite is able, through application of an appropriate threshold, to identify critical variables, (ii) efficient EDMs can be constructed based only on the critical variables identified by the metric suite, and (iii) the criticality of the identified variables can be shown to extend across a software module such that an efficient EDM designed for that software module should seek to determine the correctness of the identified variables.
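As a rough illustration of the wrapper idea, an EDM can be written as a Boolean predicate over a handful of critical variables and attached to an existing function. The variable names, bounds and recovery hook below are assumptions made for the sketch, not the mechanisms developed in the thesis.

```python
def edm_predicate(state):
    """Boolean error detection predicate over a small set of critical variables."""
    return not (0 <= state["buffer_index"] < state["buffer_length"]
                and state["checksum"] == state["expected_checksum"])

def with_edm(func, extract_state, on_error):
    """Wrap a function so the critical variables it exposes are checked after each call."""
    def wrapped(*args, **kwargs):
        result = func(*args, **kwargs)
        state = extract_state(result)
        if edm_predicate(state):
            on_error(state)   # hand over to an error recovery mechanism (ERM)
        return result
    return wrapped
```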
43

Biologically plausible attractor networks

Webb, Tristan J. January 2013
Attractor networks have shown much promise as a neural network architecture that can describe many aspects of brain function. Much of the field of study around these networks has coalesced around pioneering work done by John Hopfield, and therefore many approaches have been strongly linked to the field of statistical physics. In this thesis I use existing theoretical and statistical notions of attractor networks, and introduce several biologically inspired extensions to an attractor network for which a mean-field solution has been previously derived. This attractor network is a computational neuroscience model that accounts for decision-making in the situation of two competing stimuli. By basing our simulation studies on such a network, we are able to study situations where mean-field solutions have been derived, and use these as the starting case, which we then extend with large-scale integrate-and-fire attractor network simulations. The simulations are large enough to provide evidence that the results apply to networks of the size found in the brain. One factor that has been highlighted by previous research to be very important to brain function is that of noise. Spiking-related noise is seen to be a factor that influences processes such as decision-making, signal detection, short-term memory, and memory recall even with the quite large networks found in the cerebral cortex, and this thesis aims to measure the effects of noise on biologically plausible attractor networks. Our results are obtained using a spiking neural network made up of integrate-and-fire neurons, and we focus our results on the stochastic transition that this network undergoes. In this thesis we examine two such processes that are biologically relevant, but for which no mean-field solutions yet exist: graded firing rates, and diluted connectivity. Representations in the cortex are often graded, and we find that noise in these networks may be larger than with binary representations. In further investigations it was shown that diluted connectivity reduces the effects of noise in the situation where the number of synapses onto each neuron is held constant. In this thesis we also use the same attractor network framework to investigate the Communication through Coherence hypothesis. The Communication through Coherence hypothesis states that synchronous oscillations, especially in the gamma range, can facilitate communication between neural systems. It is shown that information transfer from one network to a second network occurs for a much lower strength of synaptic coupling between the networks than is required to produce coherence. Thus, information transmission can occur before any coherence is produced. This indicates that coherence is not needed for information transmission between coupled networks. This raises a major question about the Communication through Coherence hypothesis. Overall, the results provide substantial contributions towards understanding the operation of attractor neuronal networks in the brain.
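The building block of such simulations is the leaky integrate-and-fire neuron. A minimal, self-contained update loop is sketched below; the membrane parameters and noise level are illustrative, not those used in the thesis.

```python
import numpy as np

def simulate_lif(input_current, dt=0.1, tau=20.0, v_rest=-70.0,
                 v_thresh=-50.0, v_reset=-75.0):
    """Leaky integrate-and-fire neuron: integrate the drive, spike and reset at threshold.

    input_current: injected current per time step (arbitrary units).
    Returns the membrane potential trace and the spike-time indices.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_ext in enumerate(input_current):
        v += (-(v - v_rest) + i_ext) * (dt / tau)   # leak towards rest plus external drive
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset                             # reset after emitting a spike
        trace.append(v)
    return np.array(trace), spikes

# Noisy drive: spiking-related fluctuations like these are what push a finite
# attractor network stochastically between its stable states.
rng = np.random.default_rng(0)
current = 25.0 + 5.0 * rng.standard_normal(1000)
_, spikes = simulate_lif(current)
print(f"{len(spikes)} spikes in 1000 steps")
```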
44

Supporting delivery of adaptive hypermedia

Scotton, Joshua D. January 2013
Although Adaptive Hypermedia (AH) can improve upon the traditional one-size-fits-all learning approach through Adaptive Educational Hypermedia (AEH), it still has problems with the authoring and delivery processes that are holding back the widespread usage of AEH. In this thesis we present the development of the Adaptive Delivery Environment (ADE) delivery system and use the lessons learnt during its development along with feedback from adaptation specification authors, researchers and other evaluations to formalise a list of essential and recommended optional features for AEH delivery engines. In addition to this we also investigate how the powerful adaptation techniques recommended in the above list and described in Brusilovsky and Knutov’s taxonomies can be implemented in a way that minimises the technical knowledge adaptation authors need to use these techniques. As the adaptation functionality increases, we research how a modular framework for adaptation strategies can be created to increase the reusability of parts of an AH system’s overall adaptation specification. Following on from this, we investigate how reusing these modular strategies via a pedagogically based visual editor can enable adaptation authors without programming experience to use these powerful adaptation techniques.
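In many AEH engines the adaptation specification reduces to condition-action rules evaluated against a user model. The sketch below shows only that general shape; the attribute names and rule format are assumptions for illustration, not ADE's actual specification language.

```python
# Condition-action adaptation rules over a simple user model: each rule tests
# the learner's state and, if it fires, selects a content fragment to present.
user_model = {"topic": "recursion", "knowledge": 0.2, "prefers_examples": True}

rules = [
    (lambda u: u["knowledge"] < 0.4, "show_prerequisites"),
    (lambda u: u["prefers_examples"], "show_worked_example"),
    (lambda u: u["knowledge"] >= 0.8, "show_advanced_exercise"),
]

selected = [fragment for condition, fragment in rules if condition(user_model)]
print(selected)   # ['show_prerequisites', 'show_worked_example']
```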
45

An empirical modelling approach to software system development in finance : applications and prospects

Maad, Soha January 2002
The financial industry is witnessing major changes. The financial enterprise is undergoing major business process renewal accompanied with the introduction of new technologies including electronic commerce; the financial market is shifting from an old to a new trading model that introduces major structural changes to the market and new roles for market participants; investment offers access to ever larger repositories of financial information and a wider choice of financial instruments to fulfill rising needs and expectations. In all these developments, there is a central role for human intelligence that can potentially influence the pattern of change and direct appropriate decisions in adapting to change. There is also a vital need for computer-based technology to support this human activity. The relation between human and computer activities in classical models for computer-based support is characterised by rigidity and framed patterns of interaction. The emphasis in such models is on automation, not only in respect of routine trading operations, but even of the role of market participants. An alternative culture is emerging through the use of advanced technologies incorporating databases, spreadsheets, virtual reality, multi-media and AI. There is an urgent need for a framework in which to unify the classical culture, in which mathematical financial modelling has a central place, with the emerging culture, where there is greater emphasis upon human interaction and experiential aspects of computer use. This thesis addresses the problem of developing software that takes into account the human factor, the integration of the social and technical aspects, human insight, the experiential and situated aspects, different viewpoints of analysis, a holistic rather than an abstract view of the domain of study, cognitive rather than operational activities, and group social interaction. The ultimate aspiration for this work is to transform the computer as it is used in finance from an advanced calculator to an 'instrument of mind'. Meeting the challenges of software support for finance is not only a matter of deployment, but also of software system development (SSD): this motivates our focus on the potential applications and prospects for an Empirical Modelling (EM) approach to SSD in finance. EM technology is a suite of principles, techniques, notations, and tools. EM is a form of situated modelling that involves the construction of artefacts that stand in a special relationship to the modeller's understanding and the situation. The modelling activity is rooted in observation and experiment, and exploits the key concepts of observables, dependencies and agency. The thesis extends the major findings of Sun (1999), in respect of the essential character of SSD, and its contextual and social aspects, by considering its particular application to the finance domain. The principles and qualities of EM as an approach to SSD are first introduced and illustrated with reference to a review of relevant existing models. The suitability of EM as a framework for SSD in finance is then discussed with reference to case studies drawn from the finance domain (the financial enterprise, the financial market, and investment). 
In particular, EM contributes: principles for software integration and virtual collaboration in the financial enterprise; a novel modelling approach adapting to the new trading model in the financial market; computer-based support for distributed financial engineering; and principles for a closer integration of the software system development and financial research development activities. This contribution is framed in a Situated Integration Model, a Human Information Behaviour Model, an Open Financial Market Model, a framework for distributed financial engineering, and a situated account of the financial research development cycle.
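The observables-and-dependencies idea at the heart of EM resembles spreadsheet-style dependency maintenance. The toy sketch below (the observable names and formula are invented for illustration) shows a defined observable that is re-evaluated whenever the observables it depends on change.

```python
class Model:
    """A toy observable/dependency store in the spirit of spreadsheet-style modelling."""
    def __init__(self):
        self.values = {}        # observable name -> current value
        self.definitions = {}   # observable name -> function of the model's values

    def observe(self, name, value):
        """Set an independent observable, e.g. in response to agency in the domain."""
        self.values[name] = value

    def define(self, name, func):
        """Define an observable by a dependency on other observables."""
        self.definitions[name] = func

    def value(self, name):
        """Read an observable, re-evaluating its dependency if it has one."""
        if name in self.definitions:
            return self.definitions[name](self)
        return self.values[name]

# Invented example: a portfolio value that stays in step with price and holding.
m = Model()
m.observe("price", 102.5)
m.observe("holding", 40)
m.define("portfolio_value", lambda m: m.value("price") * m.value("holding"))
print(m.value("portfolio_value"))   # 4100.0
m.observe("price", 99.0)
print(m.value("portfolio_value"))   # 3960.0
```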
46

Multiresolution volumetric texture segmentation

Reyes-Aldasoro, Constantino Carlos January 2004
This thesis investigates the segmentation of data in 2D and 3D by texture analysis using Fourier domain filtering. The field of texture analysis is a well-trodden one in 2D, but many applications, such as Medical Imaging, Stratigraphy or Crystallography, would benefit from 3D analysis instead of the traditional, slice-by-slice approach. With the intention of contributing to texture analysis and segmentation in 3D, a multiresolution volumetric texture segmentation (M-VTS) algorithm is presented. The method extracts textural measurements from the Fourier domain of the data via sub-band filtering using a Second Orientation Pyramid. A novel Bhattacharyya space, based on the Bhattacharyya distance, is proposed for selecting the most discriminant measurements and producing a compact feature space. Each dimension of the feature space is used to form a Quad Tree. At the highest level of the tree, new positional features are added to improve the contiguity of the classification. The classified space is then projected to lower levels of the tree where a boundary refinement procedure is performed with a 3D equivalent of butterfly filters. The performance of M-VTS is tested in 2D by classifying a set of standard texture images. The figures contain different textures that are visually stationary. M-VTS yields lower misclassification rates than reported elsewhere ([104, 111, 124]). The algorithm was tested in 3D with artificial isotropic data and three Magnetic Resonance Imaging sets of human knees with satisfactory results. The regions segmented from the knees correspond to anatomical structures that could be used as a starting point for other measurements. By way of example, we demonstrate successful cartilage extraction using our approach.
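For intuition, the Bhattacharyya distance between the class-conditional distributions of a measurement is a standard score of its discriminative power. The sketch below assumes univariate Gaussian class models, which may differ from the thesis's exact construction of the Bhattacharyya space.

```python
import numpy as np

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussians:
    0.25*(mu1-mu2)^2/(var1+var2) + 0.5*ln((var1+var2)/(2*sqrt(var1*var2)))."""
    return (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
            + 0.5 * np.log((var1 + var2) / (2.0 * np.sqrt(var1 * var2))))

def rank_measurements(class_a, class_b):
    """Rank sub-band measurements by how well they separate two texture classes.

    class_a, class_b: arrays of shape (n_samples, n_measurements), one per class.
    Returns measurement indices ordered from most to least discriminant.
    """
    scores = [bhattacharyya_gaussian(a.mean(), a.var(), b.mean(), b.var())
              for a, b in zip(class_a.T, class_b.T)]
    return np.argsort(scores)[::-1]
```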
47

The equivalence of an operational and a denotational semantics for pure dataflow

Faustini, Antony Azio January 1982
In this thesis we prove the equivalence of an operational and a denotational semantics for pure dataflow. The term pure dataflow refers to dataflow nets in which the nodes are functional (i.e., the output history is a function of the input history only) and the arcs are unbounded FIFO queues. Gilles Kahn gave a method for the representation of a pure dataflow net as a set of equations; one equation for each arc in the net. Kahn stated, and we prove, that the operational behaviour of a pure dataflow net is exactly described by the least fixed point solution to the net’s associated set of equations. In our model we do not require that nodes be sequential or deterministic, not even the functional nodes. As a consequence our model has a claim of being completely general. In particular our nets have what we call the encapsulation property in that any subnet can be replaced in any pure dataflow context by a node having exactly the same input/output behaviour. Our model is also complete in the sense that our nodes have what we call the universality property, that is, for any continuous history function there exists a node that will compute it. The proof of the Kahn principle given in this thesis makes use of infinite games of perfect information. Infinite games turn out to be an extremely useful tool for defining and proving results about operational semantics. We use infinite games to give for the first time a completely general definition of subnet functionality. In addition their use in certain proofs is effective in reducing notational complexity. We also look at possible ways of extending Kahn’s denotational model by the introduction of pause objects called hiatons. Finally, we describe interesting ways of refining our operational model.
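A pure dataflow net in this sense can be sketched with functional nodes reading from and writing to unbounded FIFO arcs. The toy net below, run sequentially for simplicity, is illustrative only; the node behaviour and topology are not taken from the thesis.

```python
from collections import deque

class Arc:
    """An unbounded FIFO queue connecting two nodes of the net."""
    def __init__(self):
        self.tokens = deque()
    def put(self, token):
        self.tokens.append(token)
    def get(self):
        return self.tokens.popleft()

def successor(inp, out, n):
    """A functional node: its output history depends only on its input history."""
    for _ in range(n):
        out.put(inp.get() + 1)

# A tiny net, source -> successor -> sink, executed sequentially for simplicity.
a, b = Arc(), Arc()
for x in range(5):                       # the source node writes 0..4 onto arc a
    a.put(x)
successor(a, b, n=5)
print([b.get() for _ in range(5)])       # [1, 2, 3, 4, 5] -- the history carried by arc b
```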
48

Analytic proof systems for classical and modal logics of restricted quantification

Gent, Ian Philip January 1993
This thesis is a study of the relationship between proof systems for propositional logic and for logics of restricted quantification incorporating restriction theories. Such logics are suitable for the study of special purpose reasoning as part of a larger system, an important research topic in automated reasoning. Also, modal and sorted logics can be expressed in this way. Thus, results on restricted quantification apply to a wide range of useful logics. D'Agostino's "expansion systems" are used to generalise results to apply to a variety of tableau-like propositional proof systems. A certain class of propositional expansion systems is defined, and extended for restricted quantification in two different ways. The less general, but more useful, extension is proved sound and complete provided that the restriction theory can be expressed as a set of definite Horn clauses. In the definite clause case, the result is used to present a generalisation of Wallen's matrix characterisations of validity for modal logics. The use of restricted quantification enables more logics to be covered than in Wallen's work, and the use of expansion systems allows analogues of matrices to be defined for proof systems other than tableaux. To derive the results on matrices, the calculi for restricted quantification are made weaker, and so can be unsound for some restriction theories. However, much greater order independence of rule applications is obtained, and the weakening is sound if one of two new conditions introduced here holds, namely "alphabetical monotonicity" or "non-vacuity". Alphabetical monotonicity or non-vacuity is shown to hold for a range of interesting restriction theories associated with order sorted logics and some modal logics. I also show that if non-vacuity holds, then instantiation in restricted quantification can be completely separated from propositional reasoning. The major problem left open by this thesis is whether analogues of the previous matrix characterisations can be produced based on the proof systems introduced for non-definite clause restriction theories.
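The propositional core that such tableau-like systems generalise can be illustrated with a tiny analytic tableau for formulas built from literals, conjunction and disjunction. The encoding below is a sketch of that core only and says nothing about restriction theories or expansion systems themselves.

```python
def satisfiable(formulas, literals=frozenset()):
    """Analytic tableau: put conjuncts on the branch, split on disjunctions,
    and close a branch that contains complementary literals."""
    if not formulas:
        return True                                        # open, fully expanded branch
    head, rest = formulas[0], formulas[1:]
    kind = head[0]
    if kind == "lit":                                      # ("lit", name, polarity)
        _, name, pol = head
        if (name, not pol) in literals:
            return False                                   # branch closes on a contradiction
        return satisfiable(rest, literals | {(name, pol)})
    if kind == "and":                                      # alpha rule: both conjuncts join the branch
        return satisfiable(rest + [head[1], head[2]], literals)
    if kind == "or":                                       # beta rule: the branch splits in two
        return (satisfiable(rest + [head[1]], literals)
                or satisfiable(rest + [head[2]], literals))
    raise ValueError(f"unknown connective: {kind}")

# (p or q) and (not p) and (not q) has a closed tableau, i.e. it is unsatisfiable.
p, not_p = ("lit", "p", True), ("lit", "p", False)
q, not_q = ("lit", "q", True), ("lit", "q", False)
print(satisfiable([("and", ("or", p, q), ("and", not_p, not_q))]))   # False
```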
49

Directional edge and texture representations for image processing

Yao, Zhen January 2007
An efficient representation for natural images is of fundamental importance in image processing and analysis. The commonly used separable transforms such as wavelets are not best suited for images due to their inability to exploit directional regularities such as edges and oriented textural patterns, while most of the recently proposed directional schemes cannot represent these two types of features in a unified transform. This thesis focuses on the development of directional representations for images which can capture both edges and textures in a multiresolution manner. The thesis first considers the problem of extracting linear features with the multiresolution Fourier transform (MFT). Based on a previous MFT-based linear feature model, the work extends the extraction method into the situation when the image is corrupted by noise. The problem is tackled by the combination of a "Signal+Noise" frequency model, a refinement stage and a robust classification scheme. As a result, the MFT is able to perform linear feature analysis on noisy images on which previous methods failed. A new set of transforms called the multiscale polar cosine transforms (MPCT) are also proposed in order to represent textures. The MPCT can be regarded as real-valued MFT with similar basis functions of oriented sinusoids. It is shown that the transform can represent textural patches more efficiently than the conventional Fourier basis. With a directional best cosine basis, the MPCT packet (MPCPT) is shown to be an efficient representation for edges and textures, despite its high computational burden. The problem of representing edges and textures in a fixed transform with less complexity is then considered. This is achieved by applying a Gaussian frequency filter, which matches the dispersion of the magnitude spectrum, on the local MFT coefficients. This is particularly effective in denoising natural images, due to its ability to preserve both types of feature. Further improvements can be made by employing the information given by the linear feature extraction process in the filter’s configuration. The denoising results compare favourably against other state-of-the-art directional representations.
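One way to picture the matched Gaussian frequency filter is as a weight on a patch's Fourier coefficients whose spread follows the second moments of the magnitude spectrum. The sketch below is an illustrative reconstruction under that assumption, not the thesis's MFT-based implementation.

```python
import numpy as np

def gaussian_frequency_filter(patch):
    """Attenuate a patch's spectrum with a Gaussian matched to its magnitude dispersion."""
    spectrum = np.fft.fftshift(np.fft.fft2(patch))
    mag = np.abs(spectrum)

    # Frequency coordinates centred on DC (which fftshift places at index h//2, w//2).
    h, w = patch.shape
    fy = (np.arange(h) - h // 2)[:, None]
    fx = (np.arange(w) - w // 2)[None, :]

    # Second moments of the magnitude spectrum give its dispersion along each axis.
    total = mag.sum() + 1e-12
    var_y = (mag * fy ** 2).sum() / total + 1e-9
    var_x = (mag * fx ** 2).sum() / total + 1e-9

    # Keep energy where the spectrum is concentrated (edges, oriented textures); damp the rest.
    weight = np.exp(-0.5 * (fy ** 2 / var_y + fx ** 2 / var_x))
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * weight))
    return np.real(filtered)

# Example: smoothing of a small random patch.
rng = np.random.default_rng(0)
print(gaussian_frequency_filter(rng.standard_normal((16, 16))).shape)   # (16, 16)
```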
50

Data integrity : an often-ignored aspect of safety systems : executive summary

Faulkner, Alastair January 2004
Data is all-pervasive and is found in all aspects of modern computer systems, and yet many engineers seem reluctant to recognise the importance of data integrity. The conventional view of data, as simply an aspect of software, underestimates the role played by data errors in the behaviour of the system and their potential effect on the integrity of the overall system. In many cases hazard analysis is not applied to data in the same way that it is applied to other system components. Without data integrity requirements, data development and data provision may not attract the degree of rigour that would be required of other system components of a similar integrity. This omission also has implications for safety assessment where the data is often ignored or neglected. This position becomes self-reinforcing, as without integrity requirements the importance of data integrity remains hidden. This research provides a wide-ranging overview of the use (and abuse) of data within safety systems, and proposes a range of strategies and techniques to improve the safety of such systems. A literature review and a survey of industrial practice confirmed the conventional view of data, and showed that there is little consistency in the methods used for data development. To tackle these problems this work proposes a novel paradigm, in which data is considered as a separate and distinct system component. This approach not only ensures that data is given the importance that it deserves, but also simplifies the task of providing guidance that is specific to data. Having developed this conceptual framework for data, the work then goes on to develop lifecycle models to assist with data development, and to propose a range of techniques appropriate for the various lifecycle phases. An important aspect of the development of any safety-related system is the production of a safety argument, and this research looks in some detail at the treatment of data, and data development, within this justification. The industrial survey reveals that in data-intensive systems data is often developed quite separately from other elements of the system. It also reveals that data is often produced by an extended data supply chain that may involve a number of disparate organisations. These characteristics of data distinguish it from other system components and greatly complicate the achievement and demonstration of safety. This research proposes methods of modelling complex data supply chains and proposes techniques for tackling the difficult task of safety justification for such systems.
