21

Aspects and objects : a unified software design framework

Iqbal, Saqib January 2013 (has links)
Aspect-Oriented Software Development provides a means to modularize concerns of a system that are scattered over multiple system modules. These concerns are known as crosscutting concerns, and they cause code to be scattered and tangled across multiple system units. The technique was first proposed at the programming level but has since evolved through the other phases of the software development lifecycle. Aspect-orientation is now addressed in all phases of software development, such as requirements engineering, architecture, design and implementation. This thesis focuses on aspect-oriented software design and provides a design language, Aspect-Oriented Design Language (AODL), to specify, represent and design aspectual constructs. The language has been designed to support co-design of aspectual and non-aspectual constructs. The obliviousness between the constructs has been minimized to improve comprehensibility of the models. The language is applied in three phases, and for each phase a separate set of design notations has been introduced. The design notations and diagrams are extensions of the Unified Modelling Language (UML) and follow UML Meta Object Facility (UML MOF) rules. There is a separate notation for each aspectual construct and a set of design diagrams to represent their structural and behavioural characteristics. In the first phase, join points are identified and represented in the base program. A distinct design notation has been designated for join points, through which they are located using two diagrams, the Join Point Identification Diagram and the Join Point Behavioural Diagram. The former identifies join points in a structural depiction of message passing among objects, and the latter locates them during the behavioural flow of activities of the system. In the second phase, aspects are designed using an Aspect Design Model that models the structural representation of an aspect. The model contains the aspect's elements and the associations among them. A special diagram, known as the pointcut advice diagram, is nested in the model to represent the relationship between pointcuts and their related advices. The remaining features, such as attributes, operations and inter-type declarations, are statically represented in the model. In the third and final phase, composition of aspects is designed. Three diagrams are included in this phase. To design the dynamic composition of aspects with base classes, an Aspect-Class Dynamic Model has been introduced. It depicts the weaving of advices into the base program during the execution of the system. The structural representation of this weaving is modelled using the Aspect-Class Structural Model, which represents the relationships between aspects and base classes. The third model is the Pointcut Composition Model, a fine-grained version of the Aspect-Class Dynamic Model that has been proposed to depict a detailed model of compositions at the pointcut level. Besides these models, a tabular specification of pointcuts has also been introduced that helps in documenting pointcuts along with their parent aspects and interacting classes. AODL has been evaluated in two stages. In the first stage, two detailed case studies have been modelled using AODL. The first case study is an unimplemented system that is forward-designed using AODL notations and diagrams, and the second is an implemented system which is reverse-engineered and designed in AODL.
A qualitative evaluation has been conducted in the second stage of evaluation to assess the efficacy and maturity of the language. The evaluation also compares the language with peer modelling approaches.
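AODL itself is a UML-based design notation, so it has no executable form; as a rough analogy only, the hypothetical Python sketch below illustrates the aspectual constructs the language models (a pointcut selecting join points, and advice woven in before them). The decorator, class and function names are invented for illustration and are not part of the thesis.

```python
# Illustrative Python analogue of the aspectual constructs AODL models:
# a "pointcut" selects join points (here, method calls whose names match a
# pattern) and "advice" runs before them. Names are hypothetical.
import fnmatch
import functools


def aspect(pointcut_pattern, before=None, after=None):
    """Weave before/after advice into methods whose names match the pointcut."""
    def weave(cls):
        for name, attr in list(vars(cls).items()):
            if callable(attr) and fnmatch.fnmatch(name, pointcut_pattern):
                def make_wrapper(method):
                    @functools.wraps(method)
                    def wrapper(*args, **kwargs):   # join point: method execution
                        if before:
                            before(method.__name__)
                        result = method(*args, **kwargs)
                        if after:
                            after(method.__name__)
                        return result
                    return wrapper
                setattr(cls, name, make_wrapper(attr))
        return cls
    return weave


def log_entry(name):
    print(f"[advice] entering {name}")


@aspect("withdraw*", before=log_entry)     # crosscutting logging concern
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount
        return self.balance


print(Account(100).withdraw(30))   # advice fires at the join point, then prints 70
```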
22

Design of a wireless intelligent fuzzy controller network

Saeed, Bahghtar Ibraheem January 2014 (has links)
Since the first application of fuzzy logic in the field of control engineering, fuzzy logic control has been successfully employed in controlling a wide variety of applications, such as commercial appliances, industrial automation, robots, traffic control, cement kilns and automotive engineering. Human knowledge of controlling complex and non-linear processes can be incorporated into a controller in the form of linguistic expressions. Despite these achievements, however, there is still a lack of an empirical or analytical design study that adequately addresses a systematic auto-tuning method. Indeed, tuning is one of the most crucial parts of the overall design of fuzzy logic controllers, and it has become an active research field. Various techniques, ranging from trial and error to very advanced optimisation methods, have been utilised to develop algorithms that fine-tune the controller parameters. The structure of fuzzy logic controllers is not as straightforward as that of PID controllers. In addition, there is a set of parameters that can be adjusted, and it is not always easy to find the relationship between the parameters and the controller performance measures. Moreover, controllers generally operate over a wide range of setpoints; changing from one value to another requires the controller parameters to be re-tuned in order to maintain satisfactory performance over the entire range. This thesis deals with the design and implementation of a new intelligent algorithm for fuzzy logic controllers in a wireless network structure. The algorithm enables the controllers to learn about their plants and systematically tune their gains. The algorithm also provides the capability of retaining the knowledge acquired during the tuning process. Furthermore, this knowledge is shared on the network with other controllers through a wireless communication link. Based on the relationships between controller gains and the closed-loop characteristics, an auto-tuning algorithm is developed. Simulation experiments using standard second-order systems demonstrate the effectiveness of the algorithm with respect to auto-tuning, tracking setpoints and rejecting external disturbances. In addition, a zero-overshoot response is produced with improvements in the transient and steady-state responses. The wireless network structure is implemented using LabVIEW by composing a network of several fuzzy controllers. The results demonstrate that the controllers are able to retain and share the knowledge.
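As a minimal sketch of the kind of fuzzy logic controller discussed (not the thesis's auto-tuning or knowledge-sharing algorithm), the Python fragment below performs one Mamdani-style inference step on an error and change-of-error input: triangular membership functions, a tiny rule base, and centroid defuzzification. All membership parameters and rules are arbitrary assumptions for illustration.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_control(error, d_error):
    """One Mamdani inference step: fuzzify, apply rules, defuzzify (centroid)."""
    # Fuzzify the two inputs over {Negative, Zero, Positive} sets
    e = {"N": tri(error, -2, -1, 0), "Z": tri(error, -1, 0, 1), "P": tri(error, 0, 1, 2)}
    de = {"N": tri(d_error, -2, -1, 0), "Z": tri(d_error, -1, 0, 1), "P": tri(d_error, 0, 1, 2)}

    u = np.linspace(-1.5, 1.5, 301)          # output universe (control action)
    out_sets = {"N": tri(u, -1.5, -1, 0), "Z": tri(u, -1, 0, 1), "P": tri(u, 0, 1, 1.5)}

    # A tiny rule base, e.g. "if error is P and d_error is Z then output is P"
    rules = [("P", "Z", "P"), ("N", "Z", "N"), ("Z", "Z", "Z"),
             ("P", "N", "Z"), ("N", "P", "Z")]

    aggregated = np.zeros_like(u)
    for e_lab, de_lab, out_lab in rules:
        strength = min(e[e_lab], de[de_lab])                  # AND via min
        aggregated = np.maximum(aggregated, np.minimum(strength, out_sets[out_lab]))

    if aggregated.sum() == 0:
        return 0.0
    return float(np.sum(u * aggregated) / np.sum(aggregated))  # centroid defuzzification

print(fuzzy_control(error=0.8, d_error=0.1))   # a positive corrective action
```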
23

The use of advanced soft computing for machinery condition monitoring

Ahmed, Mahmud January 2014 (has links)
The demand for cost-effective, reliable and safe machinery operation requires accurate fault detection and classification. These issues are of paramount importance, as potential failures of rotating and reciprocating machinery can then be managed properly and, in some cases, avoided. Various methods have been applied to tackle these issues, but their accuracy is variable and leaves scope for improvement. This research proposes appropriate methods for fault detection and diagnosis. The main consideration of this study is to use Artificial Intelligence (AI) and related mathematical approaches to build a condition monitoring (CM) system with incremental learning capabilities that selects effective diagnostic features for the fault diagnosis of a reciprocating compressor (RC). The investigation involved a series of experiments conducted on a two-stage RC at its baseline condition and then with faults introduced into the intercooler, the drive belt, and the second-stage discharge and suction valves respectively. In addition, three combined faults were created and simulated to test the model: discharge valve leakage with intercooler leakage, suction valve leakage with intercooler leakage, and discharge valve leakage with suction valve leakage. The vibration data were collected from the experimental RC and processed through pre-processing, feature extraction and feature selection stages before the diagnosis and classification models were built. A large number of potential features are calculated from the time domain, the frequency domain and the envelope spectrum. Applying Neural Networks (NNs), Support Vector Machines (SVMs) and Relevance Vector Machines (RVMs) integrated with Genetic Algorithms (GAs), and principal component analysis (PCA) combined with principal component optimisation, to these features has shown that the features from envelope analysis have the most potential for differentiating various common faults in RCs. The practical results for fault detection, diagnosis and classification show that the proposed methods perform accurately and can be used as effective tools for diagnosing reciprocating machinery failures.
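As a hedged sketch of the envelope-analysis-plus-classifier pipeline described above (not the thesis's actual feature set, GA-based selection or data), the Python fragment below extracts crude envelope-spectrum and time-domain features from synthetic vibration records and scores an SVM classifier; the signals, feature choices and SVM settings are all assumptions.

```python
import numpy as np
from scipy.signal import hilbert
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def envelope_spectrum_features(signal, n_bands=8):
    """Crude feature vector: envelope-spectrum band energies plus RMS and kurtosis."""
    envelope = np.abs(hilbert(signal))                       # demodulate the vibration signal
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    bands = np.array_split(spectrum, n_bands)
    band_energy = [np.sum(b ** 2) for b in bands]
    rms = np.sqrt(np.mean(signal ** 2))
    kurtosis = np.mean((signal - signal.mean()) ** 4) / (signal.std() ** 4 + 1e-12)
    return np.array(band_energy + [rms, kurtosis])

# Synthetic stand-ins for baseline vs. faulty compressor vibration records
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
healthy = [np.sin(2 * np.pi * 50 * t) + 0.3 * rng.standard_normal(t.size) for _ in range(20)]
faulty = [np.sin(2 * np.pi * 50 * t) * (1 + 0.5 * np.sin(2 * np.pi * 7 * t))
          + 0.3 * rng.standard_normal(t.size) for _ in range(20)]   # modulation mimics a fault

X = np.array([envelope_spectrum_features(s) for s in healthy + faulty])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy estimate
```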
24

Analysis of motivational factors influencing acceptance of technologically-enhanced personal, academic and professional development portfolios

Ahmed, Ejaz January 2014 (has links)
This research investigates factors that influence students’ intentions to use electronic portfolios (e-portfolios). E-portfolios are important pedagogical tools and a substantial amount of literature supports their role in personal, academic and professional development. However, achieving students' acceptance of e-portfolios is still a challenge for higher education institutions. One approach to understanding acceptance of e-portfolios is through technology acceptance based theories and models. A theoretical framework based on the Decomposed Theory of Planned Behaviour (DTPB) has therefore been developed, which proposes Attitude towards Behaviour (AB), Subjective Norms (SN) and Perceived Behavioural Control (PBC), and their decomposed factors as determinants of students' Behavioural Intention (BI) to use e-portfolios. Based on a positivistic philosophical standpoint, the study used a deductive research approach to test proposed hypotheses. Data was collected from 204 participants via a cross-sectional survey method and Structural Equation Modeling (SEM) was chosen for data analysis using a two-step approach. First, composite reliability, convergent validity and discriminant validity of the measures were established. Next, the structural model was analysed, in which Goodness of Fit (GoF) indices were observed and hypotheses were analysed. The results demonstrated that the theoretical model attained an acceptable fit with the data. The proposed personal, social and control factors in the model were shown to have significant influences on e-portfolio acceptance. The results suggest that use of DTPB can be extended to predict e-portfolio acceptance behaviour.
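Before the structural model is analysed, measurement reliability is established; as a small illustration of that kind of check (using Cronbach's alpha, a standard reliability coefficient, rather than the specific composite-reliability statistics reported in the thesis), the Python sketch below computes alpha for a hypothetical three-item "Attitude towards Behaviour" scale. The item responses and names are invented.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) array of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point responses to three "Attitude towards Behaviour" items
ab_items = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
])
print(round(cronbach_alpha(ab_items), 3))   # values above ~0.7 are usually taken as acceptable
```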
25

A framework for trend mining with application to medical data

Somaraki, Vassiliki January 2013 (has links)
This thesis presents research work conducted in the field of knowledge discovery. It presents an integrated trend-mining framework and SOMA, the application of that framework to diabetic retinopathy data. Trend mining is the process of identifying and analysing trends in the context of the variation of support of the association/classification rules that have been extracted from longitudinal datasets. The integrated framework covers all major processes from data preparation to the extraction of knowledge. At the pre-processing stage, data are cleaned, transformed if necessary, and sorted into time-stamped datasets using logic rules. At the next stage, the time-stamped datasets are passed to the main processing, in which the Association Rule Mining (ARM) matrix algorithm is applied to identify frequent rules with acceptable confidence. Mathematical conditions are applied to classify the sequences of support values into trends. Afterwards, interestingness criteria are applied to obtain interesting knowledge, and a visualization technique is proposed that maps how objects move from one time stamp to the next. A validation and verification framework (external and internal validation) is described that aims to ensure that the results at the intermediate stages of the framework are correct and that the framework as a whole can yield results that demonstrate causality. To evaluate the framework, SOMA was developed. The dataset is, in itself, also of interest, as it is very noisy (in common with other similar medical datasets) and does not feature a clear association between specific time stamps and subsets of the data. The Royal Liverpool University Hospital has been a major centre for retinopathy research since 1991. Retinopathy is a generic term used to describe damage to the retina of the eye, which can, in the long term, lead to visual loss. Diabetic retinopathy is used to evaluate the framework, to determine whether SOMA can extract knowledge that is already known to the medics. The results show that these datasets can be used to extract knowledge that demonstrates causality between patients' characteristics, such as age at diagnosis, type of diabetes and duration of diabetes, and diabetic retinopathy.
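The core idea of trend mining here is that a rule's support is computed separately in each time-stamped dataset and the resulting sequence of support values is classified into a trend. The Python sketch below illustrates only that idea (it does not reproduce the matrix ARM algorithm or the thesis's trend conditions); the support function is standard, while the trend labels, tolerance and toy patient records are assumptions.

```python
def support(itemset, transactions):
    """Fraction of transactions (sets of items) that contain the itemset."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def trend_label(supports, tol=0.02):
    """Classify a sequence of support values as a simple trend."""
    diffs = [b - a for a, b in zip(supports, supports[1:])]
    if all(d > tol for d in diffs):
        return "increasing"
    if all(d < -tol for d in diffs):
        return "decreasing"
    if all(abs(d) <= tol for d in diffs):
        return "constant"
    return "fluctuating"

# Three time-stamped snapshots of hypothetical patient records, each a list of item sets
episodes = [
    [{"type2", "retinopathy"}, {"type2"}, {"type1"}, {"type2", "retinopathy"}],
    [{"type2", "retinopathy"}, {"type2", "retinopathy"}, {"type1"}, {"type2"}],
    [{"type2", "retinopathy"}] * 3 + [{"type1"}],
]
supports = [support({"type2", "retinopathy"}, snap) for snap in episodes]
print(supports, "->", trend_label(supports))   # [0.5, 0.5, 0.75] -> fluctuating
```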
26

Algorithm engineering : string processing

Berry, Thomas January 2002 (has links)
The string matching problem has attracted a lot of interest throughout the history of computer science, and is crucial to the computing industry. The theoretical community in Computer Science has developed a rich literature on the design and analysis of string matching algorithms. To date, most of this work has been based on the asymptotic analysis of the algorithms. This analysis rarely tells us how an algorithm will perform in practice, and considerable experimentation and fine-tuning are typically required to get the most out of a theoretical idea. In this thesis, promising string matching algorithms discovered by the theoretical community are implemented, tested and refined to the point where they can be usefully applied in practice. In the course of this work we present the following new algorithms. We prove that the average-case time complexity of the new algorithms is linear, and we also compare the new algorithms with existing algorithms by experimentation. We implemented the existing one-dimensional string matching algorithms for English texts; from the experimental results we identified the best two algorithms, combined them, and introduced a new algorithm. We developed a new two-dimensional string matching algorithm that uses the structure of the pattern to reduce the number of comparisons required to search for it. We described a method for efficiently storing text; although this reduces the storage space required, it is not a compression method in the usual sense, since our aim is to improve both the space and the time taken by a string matching algorithm, and our new algorithm searches for patterns in the efficiently stored text without decompressing it. We illustrated that by pre-processing the text we can improve the speed of string matching when searching for a large number of patterns in a given text. Finally, we proposed a hardware solution for searching efficiently stored DNA text.
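The abstract does not name the specific algorithms that were implemented and combined, so as a representative example of the family of practical string matching algorithms being evaluated, the Python sketch below implements the standard Boyer-Moore-Horspool search; it is illustrative only and is not one of the thesis's new algorithms.

```python
def horspool_search(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1.

    Boyer-Moore-Horspool: a bad-character shift table lets the search skip
    ahead by up to len(pattern) characters after each failed alignment.
    """
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1

    # Shift table: distance from each pattern character's last occurrence
    # (excluding the final position) to the end of the pattern.
    shift = {c: m for c in set(text)}
    for i, c in enumerate(pattern[:-1]):
        shift[c] = m - 1 - i

    pos = 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            return pos
        pos += shift.get(text[pos + m - 1], m)   # jump based on the last aligned character
    return -1

print(horspool_search("the quick brown fox", "brown"))   # 10
```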
27

The application of neural networks to problems in fringe analysis

Tipper, David John January 1999 (has links)
No description available.
28

Domain oriented object reuse based on genetic software architectures

Al-Yasiri, Adil January 1997 (has links)
In this thesis, a new systematic approach is introduced for developing software systems from domain-oriented components. The approach, called Domain Oriented Object Reuse (DOOR), is based on domain analysis and Generic Software Architectures. The term 'Generic Software Architectures' is used to denote a new technique for building domain reference architectures using architecture schemas. The architecture schemas are used to model the components' behaviour and dependencies. Component dependencies describe components' behaviour in terms of their inter-relationships within the same domain scope. DOOR uses the architecture schemas as a mechanism for specifying design conceptions within the modelled domain. Such conceptions provide design decisions and solutions to domain-specific problems, which may be applied in the development of new systems. Previous research in the area of domain analysis and component-oriented reuse has established the need for a systematic approach to component-oriented development that emphasises the presentation side of the solution in the technology. DOOR addresses the presentation issue by organising the domain knowledge into levels of abstraction known in DOOR as sub-domains. These levels are organised in a hierarchical taxonomy tree which contains, in addition to the sub-domains, a collection of reusable assets associated with each level. The tree determines the scope of reuse for every domain asset and the boundaries for its application. Thus, DOOR also answers the questions of reuse scope and domain boundaries that have been raised by the reuse community. DOOR's reuse process combines development for reuse with development with reuse. Along with this process, which is supported by a set of integrated tools, a number of guidelines have been introduced to assist in modelling the domain assets and assessing their reusability. The tools are also used for automatic assessment of the domain architecture and the design conceptions of its schemas. Furthermore, when a new system is synthesised, components are retrieved, with the assistance of the tools, according to the scope of reuse within which the system is developed. The retrieval procedure uses the component dependencies to trace and retrieve the relevant components for the required abstraction.
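As one plausible, heavily simplified reading of the taxonomy described above (a hierarchical tree of sub-domains, each carrying reusable assets, with retrieval limited to a scope of reuse), the Python sketch below models such a tree and retrieves the assets visible along the path from the domain root to a target sub-domain. The classes, method names and example domain are all hypothetical and are not DOOR's actual representation.

```python
from dataclasses import dataclass, field

@dataclass
class SubDomain:
    """A node in the taxonomy tree: a sub-domain with its reusable assets."""
    name: str
    assets: list = field(default_factory=list)      # reusable components at this level
    children: list = field(default_factory=list)    # narrower sub-domains

    def add(self, child):
        self.children.append(child)
        return child

    def assets_in_scope(self, target):
        """Assets visible when developing within `target`: those on the path
        from the domain root down to the target sub-domain (assumed scope rule)."""
        path = self._path_to(target)
        return [a for node in path for a in node.assets] if path else []

    def _path_to(self, target):
        if self.name == target:
            return [self]
        for child in self.children:
            tail = child._path_to(target)
            if tail:
                return [self] + tail
        return None

# Hypothetical taxonomy for a network-management domain
root = SubDomain("network-management", assets=["EventLogger"])
monitoring = root.add(SubDomain("monitoring", assets=["PollingScheduler"]))
monitoring.add(SubDomain("snmp-monitoring", assets=["SnmpAgentProxy"]))

print(root.assets_in_scope("snmp-monitoring"))
# ['EventLogger', 'PollingScheduler', 'SnmpAgentProxy']
```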
29

Robust steganographic techniques for secure biometric-based remote authentication

Rashid, Rasber Dhahir January 2015 (has links)
Biometrics are widely accepted as the most reliable proof of identity and entitlement to services, and for crime-related forensics. Using biometrics for remote authentication is becoming an essential requirement for the development of a knowledge-based economy in the digital age. Ensuring the security and integrity of biometric data or templates is critical to the success of deployment, especially because once the data are compromised the whole authentication system is compromised, with serious consequences including identity theft, fraud and loss of privacy. Protecting biometric data, whether stored in databases or transmitted over an open network channel, is a serious challenge, and cryptography may not be the answer. The main premise of this thesis is that digital steganography can provide alternative security solutions that can be exploited to deal with the biometric transmission problem. The main objective of the thesis is to design, develop and test steganographic tools to support remote biometric authentication. We focus on investigating the selection of biometric feature representations suitable for hiding in natural cover images, and on designing steganography systems that are specific to hiding such biometric data rather than being general purpose. The embedding schemes are expected to have high security characteristics, be resistant to several types of steganalysis tools, and maintain recognition accuracy post-embedding. We limit our investigations to embedding face biometrics, but the same challenges and approaches should help in developing similar embedding schemes for other biometrics. To achieve this, our investigations and proposals proceed in several directions, which are explained in the rest of this section. Reviewing the literature on the state of the art in steganography revealed a rich source of theoretical work and creative approaches that have helped generate a variety of embedding schemes as well as steganalysis tools, but almost all are focused on embedding random-looking secrets. The review greatly helped in identifying the main challenges in the field and the main criteria for success, in terms of difficult-to-reconcile requirements on embedding capacity, embedding efficiency, robustness against steganalysis attacks, and stego image quality. On the biometrics front, the review revealed another rich source of different face biometric feature vectors. The review helped shape our primary objectives: (1) identifying a binarised face feature representation with high discriminating power that is amenable to embedding in images; (2) developing special-purpose content-based steganography schemes that can benefit from the well-defined structure of the face biometric data in the embedding procedure while preserving accuracy and without leaking information about the source biometric data; and (3) conducting sufficient sets of experiments to test the performance of the developed schemes and highlight the advantages, as well as any limitations, of the developed system with regard to the above-mentioned criteria. We argue that the well-known Local Binary Pattern (LBP) histogram face biometric scheme satisfies the desired properties, and we demonstrate that our new, more efficient wavelet-based version, called LBPH patterns, is much more compact and has improved accuracy. In fact, the wavelet-based schemes reduce the number of features by 22% to 72% relative to the original LBP scheme, guaranteeing better invisibility post-embedding. We then develop two steganographic schemes.
The first, LSB-witness, is a general-purpose scheme that avoids changing the LSB plane, guaranteeing robustness against targeted steganalysis tools, and establishes the viability of using steganography for remote biometric-based recognition. However, it may modify the 2nd LSB of cover pixels as a witness to the presence of the secret bits in the 1st LSB, and thereby has some disadvantages with regard to stego image quality. Our search for a new scheme that exploits the structure of the secret face LBPH patterns for improved stego image quality led to the development of the first content-based steganography scheme. Embedding is guided by searching for similarities between the LBPH patterns and the structure of the cover image LSB bit-planes partitioned into 8-bit or 4-bit patterns. We demonstrate the benefits of the content-based embedding scheme in terms of improved stego image quality, greatly reduced payload, a reduced lower bound on optimal embedding efficiency, and robustness against all targeted steganalysis tools. Unfortunately, our scheme was not robust against the blind (universal) SRM steganalysis tool. However, we demonstrated robustness against SRM at low payload when the scheme was modified to restrict embedding to edge and textured pixels. The low payload in this case is sufficient to embed the full LBPH patterns of a secret face. Our work opens exciting new opportunities to build successful real-world applications of content-based steganography and presents plenty of research challenges.
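The standard LBP histogram feature on which the thesis builds can be sketched compactly; the Python fragment below computes a basic 8-neighbour LBP code image and its normalised 256-bin histogram. This is the well-known baseline descriptor only, not the thesis's wavelet-based LBPH variant or either of its embedding schemes, and the random "face patch" is a stand-in for real image data.

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour Local Binary Pattern histogram of a 2-D grayscale array."""
    gray = np.asarray(gray, dtype=np.int32)
    centre = gray[1:-1, 1:-1]
    # Offsets of the 8 neighbours, clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(centre)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        codes |= ((neighbour >= centre).astype(np.int32) << bit)   # one bit per neighbour
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()            # normalised 256-bin LBP histogram

rng = np.random.default_rng(1)
face_patch = rng.integers(0, 256, size=(64, 64))   # stand-in for a face image region
print(lbp_histogram(face_patch).shape)             # (256,)
```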
30

Many-objective genetic type-2 fuzzy logic based workforce optimisation strategies for large scale organisational design

Starkey, Andrew J. January 2018 (has links)
Workforce optimisation aims to maximise the productivity of a workforce and is a crucial practice for large organisations. The more effective these workforce optimisation strategies are, the better placed the organisation is to meet its objectives. Usually, the focus of workforce optimisation is scheduling, routing and planning. These strategies are particularly relevant to organisations with large mobile workforces, such as utility companies, and much research has focused on these areas. One aspect of workforce optimisation that is often overlooked is organisational design. Organisational design aims to maximise the potential utilisation of all resources while minimising costs. If done correctly, the other systems (scheduling, routing and planning) will be more effective. This thesis looks at organisational design, from geographical structures and team structures to skilling and resource management. A many-objective optimisation system to tackle large-scale optimisation problems is presented. The system employs interval type-2 fuzzy logic to handle the uncertainties in real-world data, such as travel times and task completion times. The proposed system was developed with data from British Telecom (BT) and was deployed within the organisation. The techniques presented at the end of this thesis led to a very significant improvement of 31.07% over the standard NSGA-II algorithm, with a p-value of 1.86 × 10⁻¹⁰. The system has delivered a 0.5% increase in productivity in BT, saving an estimated £1 million a year, and cut fuel consumption by 2.9%, resulting in an additional saving of over £200k a year. Due to the lower fuel consumption, carbon dioxide (CO2) emissions have been reduced by 2,500 metric tonnes. Furthermore, a report by the United Kingdom's (UK's) Department of Transport found that for every billion vehicle miles travelled, there were 15,409 serious injuries or deaths. The system saved an estimated 7.7 million miles, equating to preventing more than 115 serious casualties and fatalities.
