101

Segmentation-based and region-adaptive lossless image compression underpinned by a stellar-field image model

Grunler, Christian Dieter January 2010
The central question addressed in this research is whether lossless compression of stellar-field images can be enhanced, in terms of compression ratio, by using image segmentation and region-adaptive bit-allocation based on a suitable image model. To this end, special properties of stellar-field images that compression algorithms could exploit are studied. The research proposes and develops novel lossless compression algorithms for the compaction of stellar-field images. The proposed algorithms are based on image segmentation coupled to a domain-specific image data model and to a region-adaptive allocation of pixel bits. The algorithms exploit the distinctive characteristics of stellar-field images and aim to meet the requirements for compressing scientific-quality astronomical images. The image data model is anchored on the characterisation of this type of image as consisting of “dot-like bright objects on a noisy background”. The novel algorithms segment the dot-like bright objects, corresponding to the high-dynamic-range areas of the image, from the noise-like low-dynamic-range background sky areas. Following the segmentation of the image, the algorithms perform region-adaptive image compression tuned to each specific component of the image data model. Besides the development of novel algorithms, the research also presents a survey of the state of the art in compression algorithms for astronomical images. It reviews and compares existing methods claimed to achieve lossless compression of stellar-field images and contributes an evaluation of a set of existing methods. Experiments to evaluate the performance of the algorithms investigated in this research were conducted using a set of standard astronomical test images. The results show that the novel algorithms developed in this research can achieve compression ratios comparable to, and often better than, existing methods. The evaluation results show that significant compaction can be achieved by image segmentation and region-adaptive bit-allocation anchored on a domain-specific image data model. Based on the evaluation results, this research suggests application classes for the tested algorithms. On the test image set, existing methods which do not explicitly exploit the special characteristics of astronomical images were shown to achieve average compression ratios of 1.97 up to 3.92. Large differences were found between the results on 16-bit-per-pixel images and those on 32-bit-per-pixel images: for these existing methods, the average results range from 1.37 up to 2.81 on 16-bit-per-pixel images, and from 3.81 up to 6.42 on 32-bit-per-pixel images. It is therefore concluded that, for archiving data, compression methods may indeed save costs for storage media or data transfer time, especially if a large part of the raw images is encoded with 32 bits per pixel. With average compression ratios on the test image set in the range of 3.37 to 3.82, the simplest among the new algorithms developed in this research achieved results comparable to the best existing methods. These simple algorithms use general-purpose methods, which have limited performance, for encoding the data streams of the separate image regions corresponding to components of a stellar-field image.
The most advanced of the new algorithms, which uses data encoders tuned to each image signal component, outperformed existing methods in terms of size efficiency by about 10 percent, with an average compression ratio of 4.29 on the test image set; it can yield a compression ratio of 7.87. Especially for applications in which high volumes of image data have to be stored, this most advanced algorithm should also be considered.
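To make the segmentation and region-adaptive idea concrete, the following is a minimal Python sketch, assuming a simple sigma-clipping threshold to separate star pixels from the background and general-purpose zlib coding for each stream; the thesis's actual segmentation and tuned per-region encoders are more sophisticated, and all function and variable names here are illustrative.

```python
import numpy as np
import zlib

def compress_stellar_field(image, k=3.0):
    """Toy segmentation-based compressor: separate the 'dot-like bright objects'
    from the noisy background and encode the two streams separately."""
    background = int(np.median(image))                       # flat sky estimate
    noise = 1.4826 * np.median(np.abs(image - background))   # robust sigma (MAD)

    # Segmentation: pixels well above the background are treated as stellar objects.
    object_mask = image > background + k * noise

    # Region-adaptive bit allocation (illustrative): object pixels keep full
    # precision, background pixels are stored as small residuals about the median.
    object_stream = image[object_mask].astype(np.int32).tobytes()
    background_stream = (image[~object_mask] - background).astype(np.int16).tobytes()
    mask_stream = np.packbits(object_mask).tobytes()

    # General-purpose entropy coding stands in for the tuned per-region coders.
    return {
        "header": (image.shape, background),
        "mask": zlib.compress(mask_stream),
        "objects": zlib.compress(object_stream),
        "background": zlib.compress(background_stream),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(1000, 10, (256, 256))   # noisy sky background
    img[64, 64] += 5000
    img[128, 200] += 3000                    # two bright "stars"
    img = img.astype(np.int32)
    parts = compress_stellar_field(img)
    packed = sum(len(v) for key, v in parts.items() if key != "header")
    print(f"compression ratio ~ {img.nbytes / packed:.2f}")
```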
102

WEBFRAME : a framework for informing web developers' methodology selection

Kinmond, Robert M. January 2012
This research explores web information systems developers' choices and use of methodologies. The stated aim of the research is to identify key features of developers' requirements for methodologies and, from this, to design a framework for use in practice. The literature review reveals that a great many methodologies are available, but recent research also suggests that these are poorly used in practice. This study explores whether or not this is so and, if so, why. Within an interpretivist epistemological framework, the principles of Grounded Theory Methodology are used to conduct a mixed-methods investigation. Structuration Theory offers a theoretical framework for analysis and development of the theory. An initial web-based survey aims to capture a breadth of developers' views and experiences. This is followed by semi-structured interviews, which enable in-depth exploration of the area. The findings suggest that web information system developers are not following a published methodology but prefer instead to develop their own 'bespoke' approach to suit the project. The developers seem to be aware of, and are using, traditional information system tools, importing them as appropriate into their web development methodologies. They are, however, less aware of, or concerned with, published web methodologies, apparently needing greater flexibility and choice for developing web information systems than the published methodologies offer. Thus, the proposed new framework (entitled WEBFRAME) aims to provide web developers with a set of key principles to facilitate the development of web information system development methodologies. The proposed framework is evaluated and validated by an expert panel of web developers, and the findings from this evaluation and validation are reported here.
103

An animated pedagogical agent for assisting novice programmers within a desktop computer environment

Case, Desmond Robert January 2012
Learning to program for the first time can be a daunting process, fraught with difficulty and setback. The novice learner is faced with learning two skills at the same time, each of which depends on the other: how a program needs to be constructed to solve a problem, and how the structures of a program work towards solving it. In addition, the learner has to develop practical skills, such as how to design a solution, how to use the programming development environment, how to recognise errors, how to diagnose their cause, and how to correct them successfully. The nature of learning how to program a computer can cause frustration to many, and lead some to disengage before they have a chance to progress. Numerous authorities have observed that novice programmers make the same mistakes and encounter the same problems when learning their first programming language. Learner errors usually stem from a fixed set of misconceptions that are easily corrected by experience and appropriate guidance. This thesis demonstrates how a virtual animated pedagogical agent, called MRCHIPS, can extend the Beliefs-Desires-Intentions model of agency to provide mentoring and coaching support to novice programmers learning their first programming language, Python. The Cognitive Apprenticeship pedagogy provides the theoretical underpinning of the agent's mentoring strategy. Case-Based Reasoning is also used to support MRCHIPS in reasoning, coaching and interacting with the learner. The results of a small controlled study indicate that novice learners assisted by MRCHIPS are more productive than those working without assistance and perform better on problem-solving exercises; they also show a higher degree of engagement and better learning of the language syntax.
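As a rough illustration of how a Beliefs-Desires-Intentions cycle can be coupled with Case-Based Reasoning to produce a coaching hint, the sketch below uses a toy perceive/deliberate/act loop and word-overlap case retrieval; the class names, case signatures and hints are hypothetical and do not reflect MRCHIPS's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    """A past tutoring episode: an observed error and the hint that helped."""
    error_signature: str
    hint: str

@dataclass
class MentorAgent:
    """Minimal BDI-style loop: update beliefs from the environment, adopt a
    desire, commit to an intention, and act by retrieving a hint from the case base."""
    case_base: list = field(default_factory=list)
    beliefs: dict = field(default_factory=dict)

    def perceive(self, event):
        # Beliefs: what the agent currently knows about the learner's state.
        self.beliefs["last_error"] = event

    def deliberate(self):
        # Desires: help the learner whenever an error is observed.
        return "assist_with_error" if self.beliefs.get("last_error") else "stay_quiet"

    def act(self, intention):
        # Intentions are realised via case-based reasoning: retrieve the most
        # similar stored case (by word overlap) and reuse its hint.
        if intention != "assist_with_error":
            return None
        error = self.beliefs["last_error"]
        best = max(self.case_base,
                   key=lambda c: len(set(c.error_signature.split()) & set(error.split())),
                   default=None)
        return best.hint if best else "Try re-reading the error message."

agent = MentorAgent(case_base=[
    Case("NameError name is not defined", "Check the spelling of your variable names."),
    Case("IndentationError expected an indented block", "Indent the body of your if/for/def."),
])
agent.perceive("NameError: name 'totl' is not defined")
print(agent.act(agent.deliberate()))
```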
104

Sequential recurrent connectionist algorithms for time series modeling of nonlinear dynamical systems

Mirikitani, Derrick Takeshi January 2010
This thesis deals with the methodology of building data-driven models of nonlinear systems through the framework of dynamic modeling. More specifically, this thesis focuses on sequential optimization of nonlinear dynamic models called recurrent neural networks (RNNs). In particular, the thesis considers fully connected recurrent neural networks with one hidden layer of neurons for modeling of nonlinear dynamical systems. The general objective is to improve sequential training of the RNN through sequential second-order methods and to improve generalization of the RNN by regularization. The contributions of the thesis can be summarized as follows:
1. A sequential Bayesian training and regularization strategy for recurrent neural networks, based on an extension of the Evidence Framework, is developed.
2. An efficient ensemble method for Sequential Monte Carlo filtering is proposed. The methodology allows for efficient O(H²) sequential training of the RNN.
3. The Expectation Maximization (EM) framework is proposed for training RNNs sequentially.
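The core idea of sequential second-order training can be illustrated by treating the RNN weights as the state of an extended Kalman filter and updating them one observation at a time. The sketch below is a simplification under stated assumptions: a scalar input/output, a finite-difference Jacobian in place of analytic derivatives, and illustrative noise parameters; it is not the thesis's Evidence Framework, ensemble SMC, or EM derivation.

```python
import numpy as np

def rnn_step(w, x, h, n_h):
    """One step of a fully connected RNN with a single hidden layer.
    w packs [W_in | W_rec | w_out] for a 1-D input and 1-D output."""
    W_in  = w[:n_h].reshape(n_h, 1)
    W_rec = w[n_h:n_h + n_h * n_h].reshape(n_h, n_h)
    w_out = w[n_h + n_h * n_h:].reshape(1, n_h)
    h_new = np.tanh(W_in @ np.atleast_2d(x) + W_rec @ h)
    return float(w_out @ h_new), h_new

def ekf_train(series, n_h=4, q=1e-5, r=1e-2, seed=0):
    """Sequential second-order training: the RNN weights are the state of an
    extended Kalman filter, updated observation by observation."""
    rng = np.random.default_rng(seed)
    n_w = n_h + n_h * n_h + n_h
    w = 0.1 * rng.standard_normal(n_w)
    P = np.eye(n_w)                       # weight covariance
    h = np.zeros((n_h, 1))
    preds = []
    for x, target in zip(series[:-1], series[1:]):
        y, h_next = rnn_step(w, x, h, n_h)
        # Numerical Jacobian of the output w.r.t. the weights (for brevity).
        H, eps = np.zeros((1, n_w)), 1e-6
        for i in range(n_w):
            w_p = w.copy(); w_p[i] += eps
            y_p, _ = rnn_step(w_p, x, h, n_h)
            H[0, i] = (y_p - y) / eps
        S = H @ P @ H.T + r               # innovation variance
        K = (P @ H.T) / S                 # Kalman gain
        w = w + (K * (target - y)).ravel()
        P = P - K @ H @ P + q * np.eye(n_w)
        h = h_next
        preds.append(y)
    return w, np.array(preds)

if __name__ == "__main__":
    t = np.linspace(0, 8 * np.pi, 400)
    series = np.sin(t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)
    w, preds = ekf_train(series)
    print("final one-step-ahead MSE:",
          float(np.mean((preds[-50:] - series[1:][-50:]) ** 2)))
```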
105

Hybrid modelling of time-variant heterogeneous objects

Kravtsov, Denis January 2011
The physical world consists of a wide range of objects of a diverse constitution. Past research was mainly focussed on the modelling of simple homogeneous objects of a uniform constitution. Such research resulted in the development of a number of advanced theoretical concepts and practical techniques for describing such physical objects. As a result, the process of modelling and animating certain types of homogeneous objects became feasible. In fact most physical objects are not homogeneous but heterogeneous in their constitution and it is thus important that one is able to deal with such heterogeneous objects that are composed of diverse materials and may have complex internal structures. Heterogeneous object modelling is still a very new and evolving research area, which is likely to prove useful in a wide range of application areas. Despite its great promise, heterogeneous object modelling is still at an embryonic state of development and there is a dearth of extant tools that would allow one to work with static and dynamic heterogeneous objects. In addition, the heterogeneous nature of the modelled objects makes it appealing to employ a combination of different representations resulting in the creation of hybrid models. In this thesis we present a new dynamic Implicit Complexes (IC) framework incorporating a number of existing representations and animation techniques. This framework can be used for the modelling of dynamic multidimensional heterogeneous objects. We then introduce an Implicit Complexes Application Programming Interface (IC API). This IC API is designed to provide various applications with a unified set of tools allowing these to model time-variant heterogeneous objects. We also present a new Function Representation (FRep) API, which is used for the integration of FReps into complex time-variant hybrid models. This approach allows us to create a practical multilevel modelling system suited for complex multidimensional hybrid modelling of dynamic heterogeneous objects. We demonstrate the advantages of our approach through the introduction of a novel set of tools tailored to problems encountered in simulation applications, computer animation and computer games. These new tools empower users and amplify their creativity by allowing them to overcome a large number of extant modelling and animation problems, which were previously considered difficult or even impossible to solve.
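For readers unfamiliar with FRep-style modelling, the sketch below shows the general flavour of defining objects by real-valued functions, combining them with an R-function union, making them time-variant, and attaching a material attribute; the primitives, the particular union formula and the material rule are generic illustrations, not the IC API or FRep API developed in the thesis.

```python
import numpy as np

def sphere(cx, cy, cz, r):
    """FRep primitive: f(p) >= 0 inside the sphere, < 0 outside."""
    return lambda x, y, z, t: r**2 - (x - cx)**2 - (y - cy)**2 - (z - cz)**2

def union(f, g):
    # R-function union: a smooth set-theoretic operation on defining functions.
    return lambda x, y, z, t: (f(x, y, z, t) + g(x, y, z, t)
                               + np.sqrt(f(x, y, z, t)**2 + g(x, y, z, t)**2))

def translate_in_time(f, vx):
    # A time-variant object: the defining function depends on t as well as space.
    return lambda x, y, z, t: f(x - vx * t, y, z, t)

def material(x, y, z, t, f_metal, f_plastic):
    """Heterogeneous attribute: report the material of whichever component
    dominates at point (x, y, z) and time t."""
    return np.where(f_metal(x, y, z, t) >= f_plastic(x, y, z, t), "metal", "plastic")

# A moving metal sphere merged with a static plastic sphere.
metal   = translate_in_time(sphere(0.0, 0.0, 0.0, 1.0), vx=0.5)
plastic = sphere(1.5, 0.0, 0.0, 1.0)
shape   = union(metal, plastic)

print(shape(0.25, 0.0, 0.0, t=0.5) >= 0)              # is the point inside the union?
print(material(0.25, 0.0, 0.0, 0.5, metal, plastic))  # which material is there?
```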
106

Requirement validation with enactable descriptions of use cases

Kanyaru, J. M. January 2006
The validation of stakeholder requirements for a software system is a pivotal activity for any non-trivial software development project. Often, differences in knowledge regarding development issues, and knowledge regarding the problem domain, impede the elaboration of requirements amongst developers and stakeholders. A description technique that provides a user perspective of the system behaviour is likely to enhance shared understanding between developers and stakeholders. The Unified Modelling Language (UML) use case is such a notation. Use cases describe the behaviour of a system (using natural language) in terms of interactions between the external users and the system. Since the standardisation of the UML by the Object Management Group in 1997, much research has been devoted to use cases. Some researchers have focussed on the provision of writing guidelines for use case specifications, whereas others have focussed on the application of formal techniques. This thesis investigates the adequacy of the use case description for the specification and validation of software behaviour. In particular, the thesis argues that whereas the user-system interaction scheme underpins the essence of the use case notation, the UML specification of the use case does not provide a mechanism by which use cases can describe dependencies amongst constituent interaction steps. Clarifying these issues is crucial for validating the adequacy of the specification against stakeholder expectations. This thesis proposes a state-based approach (the Educator approach) to use case specification, in which constituent events are augmented with pre- and post-states to express both intra-use-case and inter-use-case dependencies. Use case events are enacted to visualise the implied behaviour, thereby enhancing shared understanding among users and developers. Moreover, enaction provides an early "feel" of the behaviour that would result from the implementation of the specification. The Educator approach and the enaction of descriptions are supported by a prototype environment, the EducatorTool, developed to demonstrate the efficacy and novelty of the approach. To validate the work presented in this thesis, an industrial study involving the specification of real-time control software is reported. The study involves the analysis of use case specifications of the subsystems prior to the application of the proposed approach, and the analysis of the specifications where the approach and tool support are applied. In this way, it is possible to determine the efficacy of the Educator approach within an industrial setting.
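A minimal sketch of the pre-/post-state idea is given below: each use-case event carries the states it requires and the states it establishes, and enaction walks the event sequence, flagging the first unmet dependency. The event names and the 'withdraw cash' fragment are invented for illustration and are not taken from the EducatorTool.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A use-case interaction step augmented with pre- and post-states."""
    name: str
    pre: set      # states that must hold before the event can occur
    post: set     # states that hold after the event has occurred

def enact(events, initial_states):
    """Walk through the events, checking dependencies between steps.
    Returns the trace of enacted events, stopping at the first unmet pre-state."""
    states, trace = set(initial_states), []
    for e in events:
        if not e.pre <= states:
            print(f"Cannot enact '{e.name}': missing {e.pre - states}")
            break
        states |= e.post
        trace.append(e.name)
    return trace

# A fragment of a hypothetical 'withdraw cash' use case.
withdraw_cash = [
    Event("insert card",   pre={"atm idle"},        post={"card inserted"}),
    Event("enter PIN",     pre={"card inserted"},   post={"pin verified"}),
    Event("select amount", pre={"pin verified"},    post={"amount selected"}),
    Event("dispense cash", pre={"amount selected"}, post={"cash dispensed"}),
]

print(enact(withdraw_cash, initial_states={"atm idle"}))
```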
107

Concatenative speech synthesis : a framework for reducing perceived distortion when using the TD-PSOLA algorithm

Longster, Jennifer Ann January 2003
This thesis presents the design and evaluation of an approach to concatenative speech synthesis using the Time-Domain Pitch-Synchronous OverLap-Add (TD-PSOLA) signal processing algorithm. Concatenative synthesis systems make use of pre-recorded speech segments stored in a speech corpus. At synthesis time, the 'best' segments available to synthesise the new utterances are chosen from the corpus using a process known as unit selection. During the synthesis process, the pitch and duration of these segments may be modified to generate the desired prosody. The TD-PSOLA algorithm provides an efficient and essentially successful solution to perform these modifications, although some perceptible distortion, in the form of 'buzzyness', may be introduced into the speech signal. Despite the popularity of the TD-PSOLA algorithm, little formal research has been undertaken to address this recognised problem of distortion. The approach in the thesis has been developed towards reducing the perceived distortion that is introduced when TD-PSOLA is applied to speech. To investigate the occurrence of this distortion, a psychoacoustic evaluation of the effect of pitch modification using the TD-PSOLA algorithm is presented. Subjective experiments in the form of a set of listening tests were undertaken using word-level stimuli that had been manipulated using TD-PSOLA. The data collected from these experiments were analysed for patterns of co-occurrence or correlations to investigate where this distortion may occur. From this, parameters were identified which may have contributed to increased distortion. These parameters were concerned with the relationship between the spectral content of individual phonemes, the extent of pitch manipulation, and aspects of the original recordings. Based on these results, a framework was designed for use in conjunction with TD-PSOLA to minimise the possible causes of distortion. The framework consisted of a novel speech corpus design, a signal processing distortion measure, and a selection process for especially problematic phonemes. Rather than being phonetically balanced, the corpus is balanced to the needs of the signal processing algorithm, containing more of the adversely affected phonemes. The aim is to reduce the potential extent of pitch modification of such segments, and hence produce synthetic speech with less perceptible distortion. The signal processing distortion measure was developed to allow the prediction of perceptible distortion in pitch-modified speech. Different weightings were estimated for individual phonemes, trained using the experimental data collected during the listening tests. The potential benefit of such a measure for existing unit selection processes in a corpus-based system using TD-PSOLA is illustrated. Finally, the special-case selection process was developed for highly problematic voiced fricative phonemes to minimise the occurrence of perceived distortion in these segments. The success of the framework, in terms of generating synthetic speech with reduced distortion, was evaluated. A listening test showed that the TD-PSOLA-balanced speech corpus may be capable of generating pitch-modified synthetic sentences with significantly less distortion than those generated using a typical phonetically balanced corpus. The voiced fricative selection process was also shown to produce pitch-modified versions of these phonemes with less perceived distortion than a standard selection process. The listening test then indicated that the signal processing distortion measure was able to predict the resulting amount of distortion at the sentence level after the application of TD-PSOLA, suggesting that it may be beneficial to include such a measure in existing unit selection processes. The framework was found to be capable of producing speech with reduced perceptible distortion in certain situations, although the effects seen at the sentence level were less than those seen in the previous investigative experiments that made use of word-level stimuli. This suggests that the effect of the TD-PSOLA algorithm cannot always be easily anticipated, due to the highly dynamic nature of speech, and that the reduction of perceptible distortion in TD-PSOLA-modified speech remains a challenge to the speech community.
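The pitch-modification step that TD-PSOLA performs can be sketched as follows, assuming pitch marks are already available: two-period, Hann-windowed grains are extracted around analysis marks and overlap-added at synthesis marks whose spacing is scaled by the pitch factor. This toy omits duration modification, unvoiced segments and pitch-mark detection, all of which a real system must handle.

```python
import numpy as np

def td_psola(signal, marks, pitch_factor):
    """Toy TD-PSOLA pitch modification: window two-period grains around each
    analysis pitch mark and overlap-add them at synthesis marks spaced by
    (local period / pitch_factor). pitch_factor > 1 raises the pitch."""
    out = np.zeros(len(signal) + 1)
    t = float(marks[1])                                      # first usable synthesis instant
    while t < marks[-2]:
        i = int(np.argmin(np.abs(np.asarray(marks) - t)))    # nearest analysis mark
        i = min(max(i, 1), len(marks) - 2)
        left, centre, right = marks[i - 1], marks[i], marks[i + 1]
        grain = signal[left:right] * np.hanning(right - left)  # two-period grain
        start = int(round(t)) - (centre - left)
        if 0 <= start and start + len(grain) <= len(out):
            out[start:start + len(grain)] += grain
        t += (right - centre) / pitch_factor                 # local period, rescaled
    return out[:len(signal)]

if __name__ == "__main__":
    fs, f0 = 16000, 100
    n = np.arange(fs // 2)                                   # half a second of a "voiced" tone
    x = np.sin(2 * np.pi * f0 * n / fs) * (1 + 0.3 * np.sin(2 * np.pi * 3 * n / fs))
    marks = list(range(0, len(x), fs // f0))                 # one mark per pitch period
    y = td_psola(x, marks, pitch_factor=1.2)                 # raise pitch by 20 %
    print(len(x), len(y))
```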
108

Machine learning for network based intrusion detection : an investigation into discrepancies in findings with the KDD cup '99 data set and multi-objective evolution of neural network classifier ensembles from imbalanced data

Engen, Vegard January 2010
For the last decade it has become commonplace to evaluate machine learning techniques for network-based intrusion detection on the KDD Cup '99 data set. This data set has served well to demonstrate that machine learning can be useful in intrusion detection. However, it has attracted some criticism in the literature, and it is out of date. Therefore, some researchers question the validity of the findings reported based on this data set. Furthermore, as identified in this thesis, there are also discrepancies in the findings reported in the literature. In some cases the results are contradictory. Consequently, it is difficult to analyse the current body of research to determine the value of the findings. This thesis reports on an empirical investigation to determine the underlying causes of the discrepancies. Several methodological factors, such as choice of data subset, validation method and data preprocessing, are identified and are found to affect the results significantly. These findings have also enabled a better interpretation of the current body of research. Furthermore, the criticisms in the literature are addressed and future use of the data set is discussed, which is important since researchers continue to use it due to a lack of better publicly available alternatives. Due to the nature of the intrusion detection domain, there is an extreme imbalance among the classes in the KDD Cup '99 data set, which poses a significant challenge to machine learning. In other domains, researchers have demonstrated that well-known techniques such as Artificial Neural Networks (ANNs) and Decision Trees (DTs) often fail to learn the minor class(es) due to class imbalance. However, this had not previously been recognised as an issue in intrusion detection. This thesis reports on an empirical investigation that demonstrates that it is the class imbalance that causes the poor detection of some classes of intrusion reported in the literature. An alternative approach to training ANNs is proposed in this thesis, using Genetic Algorithms (GAs) to evolve the weights of the ANNs, referred to as an Evolutionary Neural Network (ENN). When employing evaluation functions that calculate the fitness proportionally to the instances of each class, thereby avoiding a bias towards the major class(es) in the data set, significantly improved true positive rates are obtained whilst maintaining a low false positive rate. These findings demonstrate that the issues of learning from imbalanced data are not due to limitations of the ANNs but rather of the training algorithm. Moreover, the ENN is capable of detecting a class of intrusion that has been reported in the literature to be undetectable by ANNs. One limitation of the ENN is a lack of control over the classification trade-off the ANNs obtain. This is identified as a general issue with current approaches to creating classifiers. Striving to create a single best classifier that obtains the highest accuracy may give an unfruitful classification trade-off, as is demonstrated clearly in this thesis. Therefore, an extension of the ENN is proposed, using a Multi-Objective GA (MOGA), which treats the classification rate on each class as a separate objective. This approach produces a Pareto front of non-dominated solutions that exhibit different classification trade-offs, from which the user can select one with the desired properties. The multi-objective approach is also utilised to evolve classifier ensembles, which yields an improved Pareto front of solutions. Furthermore, the selection of classifier members for the ensembles is investigated, demonstrating how this affects the performance of the resultant ensembles. This is key to explaining why some classifier combinations fail to give fruitful solutions.
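The key idea of an evaluation function that weights classes proportionally, so that minority intrusion classes count as much as the majority traffic, can be sketched as the mean per-class detection rate inside a very small GA that evolves the ANN weights. The network size, GA operators and toy data below are illustrative simplifications, not the thesis's ENN or MOGA configuration.

```python
import numpy as np

def predict(weights, X, n_hidden=8):
    """Tiny one-hidden-layer ANN whose weights are evolved rather than trained
    by backpropagation."""
    n_in = X.shape[1]
    W1 = weights[:n_in * n_hidden].reshape(n_in, n_hidden)
    w2 = weights[n_in * n_hidden:]
    return (np.tanh(X @ W1) @ w2 > 0).astype(int)

def balanced_fitness(weights, X, y):
    """Fitness proportional to per-class performance: the mean of the detection
    rates on each class, so the minority class counts as much as the majority."""
    preds = predict(weights, X)
    rates = [np.mean(preds[y == c] == c) for c in np.unique(y)]
    return float(np.mean(rates))

def evolve(X, y, pop_size=40, gens=60, sigma=0.3, seed=0):
    """Minimal (mu + lambda)-style GA over the ANN weight vector."""
    rng = np.random.default_rng(seed)
    n_w = X.shape[1] * 8 + 8
    pop = rng.standard_normal((pop_size, n_w))
    for _ in range(gens):
        fit = np.array([balanced_fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]                   # truncation selection
        children = parents + sigma * rng.standard_normal(parents.shape)   # Gaussian mutation
        pop = np.vstack([parents, children])
    fit = np.array([balanced_fitness(ind, X, y) for ind in pop])
    return pop[int(np.argmax(fit))]

# Imbalanced toy data: 950 'normal' points vs 50 'intrusions'.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (950, 5)), rng.normal(2.0, 1, (50, 5))])
y = np.array([0] * 950 + [1] * 50)
best = evolve(X, y)
print("balanced fitness of best individual:", round(balanced_fitness(best, X, y), 3))
```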
109

Physically inspired methods and development of data-driven predictive systems

Budka, Marcin January 2010
Traditionally, the building of predictive models is perceived as a combination of science and art. Although the designer of a predictive system effectively follows a prescribed procedure, his domain knowledge as well as expertise and intuition in the field of machine learning are often irreplaceable. However, in many practical situations it is possible to build well-performing predictive systems by following a rigorous methodology and by offsetting not only the lack of domain knowledge, but also a partial lack of expertise and intuition, with computational power. The generalised predictive model development cycle discussed in this thesis is an example of such a methodology which, despite being computationally expensive, has been successfully applied to real-world problems. The proposed predictive system design cycle is a purely data-driven approach. The quality of the data used to build the system is thus of crucial importance. In practice, however, the data is rarely perfect. Common problems include missing values, high dimensionality or a very limited amount of labelled exemplars. In order to address these issues, this work investigated and exploited inspirations coming from physics. The novel use of well-established physical models in the form of potential fields has resulted in the derivation of a comprehensive Electrostatic Field Classification Framework for supervised and semi-supervised learning from incomplete data. Although computational power constantly becomes cheaper and more accessible, it is not infinite. Therefore, efficient techniques able to exploit the finite predictive information content of the data and to limit the computational requirements of the resource-hungry predictive system design procedure are very desirable. In designing such techniques, this work once again investigated and exploited inspirations coming from physics. By using an analogy with a set of interacting particles and the resulting Information Theoretic Learning framework, the Density Preserving Sampling technique has been derived. This technique acts as a computationally efficient alternative to cross-validation and fits well within the proposed methodology. All methods derived in this thesis have been thoroughly tested on a number of benchmark datasets. The proposed generalised predictive model design cycle has been successfully applied to two real-world environmental problems, in which a comparative study of Density Preserving Sampling and cross-validation has also been performed, confirming the great potential of the proposed methods.
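The potential-field intuition behind the Electrostatic Field Classification Framework can be sketched in a purely supervised setting: every labelled training point acts as a charge, and a query point is assigned to the class exerting the strongest accumulated 1/distance potential at its location. The semi-supervised extension and the handling of missing values described in the thesis are not shown, and the exact potential function used here is an assumption for illustration.

```python
import numpy as np

def field_classify(X_train, y_train, X_query, eps=1e-9):
    """Classify each query point by the class whose labelled 'charges' exert the
    strongest electrostatic-like potential (sum of 1/distance) at that point."""
    # Pairwise distances between queries and training points.
    d = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=2)
    potential = 1.0 / (d + eps)                     # Coulomb-style 1/r contribution
    classes = np.unique(y_train)
    # Accumulate the potential generated by each class separately.
    class_potentials = np.stack(
        [potential[:, y_train == c].sum(axis=1) for c in classes], axis=1)
    return classes[np.argmax(class_potentials, axis=1)]

# Toy example with two Gaussian classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
queries = np.array([[-1.0, -1.0], [1.2, 0.8], [0.0, 0.0]])
print(field_classify(X, y, queries))
```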
110

An empirical investigation of software project schedule behaviour

Rainer, Austen William January 1999
Two intensive, longitudinal case studies were conducted at IBM Hursley Park. There were several objectives to these case studies: first, to investigate the actual behaviour of the two projects in depth; second, to develop conceptual structures relating the lower-level processes of each project to the higher-level processes; third, to relate the lower-level and higher-level processes to project duration; fourth, to test a conjecture forwarded by Bradac et al., i.e. that waiting is more prevalent towards the end of a project than during the middle of a project. A large volume of qualitative and quantitative evidence was collected and analysed for each project. This evidence included minutes of status meetings, interviews, project schedules, and information from feedback workshops (which were conducted several months after the completion of the projects). The analysis generated three models and numerous insights into software project behaviour. The models concerned software project schedule behaviour, capability, and an integration of schedule behaviour and capability. The insights concerned characteristics of a project (i.e. the actual progress of phases and milestones, the amount of workload on the project, the degree of capability of the project, tactics of management, and the socio-technical aspects of a project) and characteristics of process areas within a project (i.e. waiting, poor progress and outstanding work). Support for the models and the insights was sought, with some success, from previous research. Despite the approach taken in this investigation (i.e. the collection of a large volume of evidence and the analysis of a wide variety of factors from a very broad perspective), this investigation has been unable to pinpoint definite causes to explain why a project will or will not complete according to its original plan. One 'hint' of an explanation lies in the differences between the socio-technical contexts of the two projects and, related to this, the fact that tactics of management may be constrained by a project's socio-technical context. Furthermore, while the concept of a project as a distinct entity seems reasonable, the actual boundaries of a project in an organisation's 'space-time' are ambiguous and very difficult to define properly. Therefore, it may be that those things that make a project difficult to distinguish from its surrounding organisation are interwoven with the socio-technical contexts of the project, and may be precisely those things that explain the progress of that project. Recommendations, based on the models, the insights and the conclusions, are provided for industry and research.
