121

Tool support for systematic reviews in software engineering

Marshall, Christopher January 2016 (has links)
Background: Systematic reviews have become an established methodology in software engineering. However, they are labour intensive, error prone and time consuming. These and other challenges have led to the development of tools to support the process; however, there is limited evidence about their usefulness. Aim: To investigate the usefulness of tools to support systematic reviews in software engineering and to develop an evaluation framework for an overall support tool. Method: A literature review, taking the form of a mapping study, was undertaken to identify and classify tools supporting systematic reviews in software engineering. Motivated by its results, a feature analysis was performed to independently compare and evaluate a selection of tools which aimed to support the whole systematic review process. An initial version of an evaluation framework was developed to carry out the feature analysis and was later refined based on its results. To obtain a deeper understanding of the technology, a survey was undertaken to explore systematic review tools in other domains, with semi-structured interviews carried out with researchers in healthcare and social science. Quantitative and qualitative data were collected, analysed and used to further refine the framework. Results: The literature review showed an encouraging growth of tools to support systematic reviews in software engineering, although many had received limited evaluation. The feature analysis provided new insight into the usefulness of tools, determined the strongest and weakest candidates and established the feasibility of an evaluation framework. The survey provided knowledge about tools used in other domains, which helped further refine the framework. Conclusions: Tools to support systematic reviews in software engineering are still immature. Their potential, however, remains high, and it is anticipated that the need for tools within the community will increase. The evaluation framework presented aims to support the future development, assessment and selection of appropriate tools.
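A feature analysis of the kind described is, at its core, a weighted scoring of candidate tools against framework criteria. The following is a minimal sketch of that idea; the feature names, weights and ratings are invented for illustration and are not the thesis's actual framework criteria.

```python
# Hypothetical feature-analysis scoring (DESMET-style): each candidate tool is
# rated 0-3 per feature (0 = absent, 3 = fully supported), and each feature
# carries an importance weight. All names and numbers below are illustrative.

FEATURES = {                 # feature -> weight (assumed)
    "study_selection": 3,
    "data_extraction": 2,
    "quality_assessment": 2,
    "collaboration": 1,
}

def score_tool(ratings: dict) -> float:
    """Weighted sum of a tool's feature ratings, normalised to 0..1."""
    max_score = sum(3 * w for w in FEATURES.values())
    total = sum(ratings.get(f, 0) * w for f, w in FEATURES.items())
    return total / max_score

tools = {
    "ToolA": {"study_selection": 3, "data_extraction": 2,
              "quality_assessment": 1, "collaboration": 2},
    "ToolB": {"study_selection": 1, "data_extraction": 1,
              "quality_assessment": 3, "collaboration": 0},
}

for name, ratings in sorted(tools.items(), key=lambda t: -score_tool(t[1])):
    print(f"{name}: {score_tool(ratings):.2f}")
```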
122

Performance implications of using diverse redundancy for database replication

Stankovic, Vladimir January 2008 (has links)
Using diverse redundancy for database replication is the focus of this thesis. Traditionally, database replication solutions have been built on the fail-stop failure assumption, i.e. the assumption that crashes cause the majority of failures. Recent findings have refuted this common assumption, however, showing that many faults cause systematic non-crash failures. These findings demonstrate that existing, non-diverse database replication solutions, which use the same database server products, are ineffective fault-tolerant mechanisms. At the same time, the findings motivated the use of diverse redundancy (where different database server products are used) as a promising way of improving dependability. A fault-tolerant server built with diverse database servers could deliver improvements in availability and failure rates compared with individual database servers or their replicated, non-diverse configurations. Besides the potential for improving dependability, one would like to evaluate the performance implications of using diverse redundancy in the context of database replication. This is the focal point of the research. The work performed to that end can be summarised as follows:
- We conducted a substantial performance evaluation of database replication using diverse redundancy, comparing its performance to that of various non-diverse configurations as well as non-replicated databases. The experiments revealed systematic differences in the behaviour of diverse servers and point to the potential for performance improvement when diverse servers are used: under particular workloads, diverse servers performed better than both non-diverse and non-replicated configurations.
- We devised a middleware-based database replication protocol which provides dependability assurance and guarantees database consistency. It uses an eager update-everywhere approach for replica control. Although we focus on the use of diverse database servers, the protocol can also be used with database servers from the same vendor. We provide the correctness criteria of the protocol. Different regimes of operation of the protocol are defined, which allow it to be dynamically optimised for either dependability or performance improvement. Additionally, it can be used in conjunction with high-performance replication solutions.
- We developed an experimental test harness for performance evaluation of different database replication solutions. It enabled us to evaluate the performance of the diverse database replication protocol, e.g. by comparing it against known replication solutions. We show that, as expected, the improved dependability exhibited by our replication protocol carries a performance overhead; nevertheless, when optimised for performance the protocol performs well.
- To minimise the performance penalty introduced by replication, we propose a scheme whereby the database server processes are prioritised to deliver performance improvements in cases of low to modest resource utilisation by the database servers.
- We performed an uncertainty-explicit assessment of database server products. Using an integrated approach, where both performance and reliability are considered, we rank different database server products to aid selection of the components for the fault-tolerant server built out of diverse databases.
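As a rough illustration of the diverse-redundancy idea, a replication middleware can forward each query to two different vendors' database servers and flag any disagreement as a suspected non-crash failure. The sketch below is a simplification under assumed DB-API (PEP 249) connections, not the thesis's actual eager update-everywhere protocol.

```python
# Minimal sketch: run each read query on two *diverse* database servers and
# compare the results. Because the servers are different products, a mismatch
# suggests a systematic non-crash failure in one of them. Connection objects
# are assumed to follow the Python DB-API (PEP 249).

class DiverseReplicaProxy:
    def __init__(self, conn_a, conn_b):
        self.conns = (conn_a, conn_b)      # e.g. two different vendors

    def query(self, sql, params=()):
        results = []
        for conn in self.conns:
            cur = conn.cursor()
            cur.execute(sql, params)
            results.append(sorted(cur.fetchall()))
        if results[0] != results[1]:
            # Diverse replicas disagree: at least one produced a wrong answer.
            raise RuntimeError(f"replica divergence on: {sql!r}")
        return results[0]

    def commit(self):
        # Eager replication in spirit: both replicas commit before returning.
        for conn in self.conns:
            conn.commit()
```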
123

SVG 3D graphical presentation for Web-based applications

Lu, Jisheng January 2015 (has links)
Due to rapid developments in computer graphics and computer hardware, web-based applications are becoming more and more powerful, and the performance gap between web-based and desktop applications is steadily closing. The Internet and the WWW have been widely used for delivering, processing and publishing 3D data, and there is increasing demand for more, and easier, access to 3D content on the web. The better the browser experience, the more potential revenue web-based content can generate for providers and others. The main focus of this thesis is the design, development and implementation of a new generic 3D modelling method based on Scalable Vector Graphics (SVG) for web-based applications. While the model is initialized using classical 3D graphics, the scene model is extended using SVG. A new algorithm to present 3D graphics with SVG is proposed. This includes the definition of a 3D scene in the framework; the integration of 3D objects, cameras, transformations, light models and textures in the scene; and the rendering of 3D objects on the web page, allowing the end-user to interactively manipulate them. A new 3D graphics library for 3D geometric transformation and projection in the SVG GL is designed and developed. A set of primitives in the SVG GL, including the triangle, sphere, cylinder and cone, are designed and developed, as are a set of complex 3D models, including extrusion, revolution, Bezier surfaces and point clouds. New Gouraud shading and Phong shading algorithms in the SVG GL are proposed, designed and developed; these can be used to generate smooth shading and create highlights for 3D models. New texture mapping algorithms for the SVG GL, oriented towards web-based 3D modelling applications, are also proposed, designed and developed for different 3D objects such as triangles, planes, spheres, cylinders and cones. This constitutes a unique and significant contribution to the discipline of web-based 3D modelling, as well as to the process of 3D model popularization.
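The abstract does not give the projection mathematics, but the core step of any SVG-based 3D pipeline is projecting 3D vertices to 2D and emitting SVG elements. A minimal flat-shaded sketch follows; the function names and the pinhole projection are illustrative assumptions, not the thesis's SVG GL API.

```python
# Project 3D triangles onto a view plane and emit them as SVG <polygon>s.

def project(v, d=500.0):
    """Perspective-project a 3D point (x, y, z) onto the view plane z = 0,
    with the eye at distance d behind it (simple pinhole model)."""
    x, y, z = v
    s = d / (d + z)                      # points further away shrink
    return (x * s, y * s)

def triangle_to_svg(tri, colour="#888"):
    """Render one 3D triangle as a flat-shaded SVG polygon."""
    pts = " ".join(f"{px:.1f},{py:.1f}" for px, py in map(project, tri))
    return f'<polygon points="{pts}" fill="{colour}"/>'

tri = [(0, 0, 50), (100, 0, 80), (50, 90, 20)]
print(f'<svg xmlns="http://www.w3.org/2000/svg">{triangle_to_svg(tri)}</svg>')
```

Since SVG has no native per-vertex colour interpolation, Gouraud-style smooth shading over such polygons is usually approximated with gradient fills; the abstract does not detail how the thesis's shading algorithms achieve this.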
124

Analysis of motivational factors influencing acceptance of technologically-enhanced personal, academic and professional development portfolios

Ahmed, Ejaz January 2014 (has links)
This research investigates factors that influence students’ intentions to use electronic portfolios (e-portfolios). E-portfolios are important pedagogical tools, and a substantial body of literature supports their role in personal, academic and professional development. However, achieving students' acceptance of e-portfolios is still a challenge for higher education institutions. One approach to understanding acceptance of e-portfolios is through technology-acceptance theories and models. A theoretical framework based on the Decomposed Theory of Planned Behaviour (DTPB) has therefore been developed, which proposes Attitude towards Behaviour (AB), Subjective Norms (SN) and Perceived Behavioural Control (PBC), together with their decomposed factors, as determinants of students' Behavioural Intention (BI) to use e-portfolios. Based on a positivist philosophical standpoint, the study used a deductive research approach to test the proposed hypotheses. Data were collected from 204 participants via a cross-sectional survey, and Structural Equation Modelling (SEM) was chosen for data analysis using a two-step approach. First, the composite reliability, convergent validity and discriminant validity of the measures were established. Next, the structural model was analysed, in which Goodness of Fit (GoF) indices were observed and the hypotheses were tested. The results demonstrated that the theoretical model attained an acceptable fit with the data, and the proposed personal, social and control factors in the model were shown to have significant influences on e-portfolio acceptance. The results suggest that use of DTPB can be extended to predict e-portfolio acceptance behaviour.
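The measurement-model checks mentioned (composite reliability, convergent validity) follow standard formulae: for standardised loadings λ, composite reliability is CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and average variance extracted is AVE = Σλ²/n. A small sketch, with invented loadings rather than the thesis's data:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error
    variances), where each error variance is 1 - loading^2 for
    standardised loadings."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def ave(loadings):
    """Average Variance Extracted: mean of the squared standardised loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

bi_loadings = [0.82, 0.78, 0.88]        # illustrative values only
print(f"CR  = {composite_reliability(bi_loadings):.3f}")  # > 0.7 usually acceptable
print(f"AVE = {ave(bi_loadings):.3f}")                    # > 0.5 usually acceptable
```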
125

A framework for trend mining with application to medical data

Somaraki, Vassiliki January 2013 (has links)
This thesis presents research work conducted in the field of knowledge discovery. It presents an integrated trend-mining framework and SOMA, an application of the framework to diabetic retinopathy data. Trend mining is the process of identifying and analysing trends in the variation of support of the association/classification rules extracted from longitudinal datasets. The integrated framework covers all major processes, from data preparation to the extraction of knowledge. At the pre-processing stage, data are cleaned, transformed if necessary, and sorted into time-stamped datasets using logic rules. At the next stage, the time-stamped datasets pass through the main processing, in which an Association Rule Mining (ARM) matrix algorithm is applied to identify frequent rules with acceptable confidence. Mathematical conditions are applied to classify the sequences of support values into trends. Afterwards, interestingness criteria are applied to obtain interesting knowledge, and a visualization technique is proposed that maps how objects move from one time stamp to the next. A validation and verification framework (external and internal validation) is described that aims to ensure that the results at the intermediate stages of the framework are correct and that the framework as a whole can yield results that demonstrate causality. To evaluate the thesis, SOMA was developed. The dataset is, in itself, also of interest, as it is very noisy (in common with other similar medical datasets) and does not feature a clear association between specific time stamps and subsets of the data. The Royal Liverpool University Hospital has been a major centre for retinopathy research since 1991. Retinopathy is a generic term describing damage to the retina of the eye, which can, in the long term, lead to visual loss. Diabetic retinopathy data are used to evaluate the framework and to determine whether SOMA can extract knowledge already known to clinicians. The results show that these datasets can be used to extract knowledge demonstrating causal links between patients’ characteristics, such as age at diagnosis, type of diabetes and duration of diabetes, and diabetic retinopathy.
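The "mathematical conditions" that map support sequences to trends are not spelled out in the abstract; the sketch below shows the general idea of classifying a rule's support values across time stamps, with an assumed tolerance threshold and assumed trend labels.

```python
def classify_trend(supports, tol=0.01):
    """Label a sequence of one rule's support values at t1, t2, ..., tn.

    tol is an assumed threshold: changes smaller than tol count as flat.
    """
    deltas = [b - a for a, b in zip(supports, supports[1:])]
    if all(abs(d) <= tol for d in deltas):
        return "constant"
    if all(d >= -tol for d in deltas):
        return "increasing"
    if all(d <= tol for d in deltas):
        return "decreasing"
    return "jumping"                     # mixed rises and falls

print(classify_trend([0.10, 0.14, 0.19, 0.25]))   # -> increasing
print(classify_trend([0.30, 0.12, 0.28, 0.05]))   # -> jumping
```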
126

Computational fluid dynamics based diagnostics and optimal design of hydraulic capsule pipelines

Asim, Taimoor January 2013 (has links)
Scarcity of fossil fuels and the rapid escalation of energy prices around the world are affecting the efficiency of established modes of cargo transport. Extensive research is being carried out on improving the efficiency of existing modes of cargo transport, as well as on developing alternative means of transporting goods. One such alternative is to use the energy contained within fluid flowing in pipelines to transfer goods from one place to another. Although the concept of using fluid pipelines for transportation has been in practice for more than a millennium, detailed knowledge of the flow behaviour in such pipelines is still a subject of active research. This is because most studies on transporting goods in pipelines are based on experimental measurements of global flow parameters, and only rough approximations of the local flow behaviour within these pipelines have been reported. With the emergence of sophisticated analytical tools and the availability of high-performance computing facilities throughout the globe, it is now possible to simulate the flow conditions within these pipelines and gain a better understanding of the underlying flow phenomena. The present study focuses on the use of advanced modelling tools to simulate and quantify the flow within Hydraulic Capsule Pipelines (HCPs). Hydraulic Capsule Pipeline refers to the transport of goods in hollow containers, typically of spherical or cylindrical shape, termed capsules, carried along the pipeline by water. A novel modelling technique has been employed to carry out the investigations under various geometric and flow conditions within HCPs. Both qualitative and quantitative flow diagnostics have been carried out on the flow of both spherical and cylindrical capsules in a horizontal HCP for on-shore applications. Capsule trains ranging from a single capsule to multiple capsules per unit length of the pipeline have been modelled for practical flow velocities. It has been observed that the flow behaviour within an HCP depends on a number of fluid and geometric parameters, and that the pressure drop in such pipelines cannot be predicted from established methods; developing a predictive tool for such applications is one of the aims achieved in this study. Furthermore, investigations have been conducted on vertical pipelines, which are very important for off-shore applications of HCPs. The energy requirements for vertical HCPs are significantly higher than for horizontal HCPs. It has been shown that a minimum average flow velocity is required to transport a capsule in a vertical HCP, depending upon the geometric and physical properties of the capsules. The concentric propagation, along the centreline of the pipe, of high-density capsules in vertical HCPs marks a significant variation from horizontal HCPs transporting such capsules. Bends are an integral part of pipeline networks, and the design of any pipeline must consider their effects on the overall energy requirements. In order to accurately design both horizontal and vertical HCPs, the flow behaviour and energy requirements of varying geometric configurations have been analysed, and a novel modelling technique has been incorporated to accurately predict the velocity, trajectory and orientation of the capsules in pipe bends. Optimisation of HCPs plays a crucial role in the worldwide commercial acceptability of such pipelines. Based on the Least-Cost Principle, an optimisation methodology has been developed for single-stage HCPs for both on-shore and off-shore applications. The input to the optimisation model is the solid throughput required from the system, and the outputs are the optimal diameter of the HCP and the pumping requirements for the capsule transporting system. The optimisation model presented is both robust and user-friendly. A complete flow diagnostics and design methodology, including optimisation, for Hydraulic Capsule Pipelines has thus been presented. The advanced computational techniques incorporated in this study have made it possible to map and analyse the flow structure within HCPs, and detailed analysis of even the smallest-scale flow variations has led to a better understanding of the flow behaviour.
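The Least-Cost Principle optimisation can be pictured as a one-dimensional search over pipe diameter that trades capital cost against pumping cost. The toy sketch below uses invented cost models (the thesis's actual HCP pressure-drop correlations are not given in the abstract); only the shape of the trade-off is the point.

```python
# Toy least-cost pipeline sizing. Capital cost grows with diameter, while
# pumping cost falls with diameter (head loss drops sharply as the pipe
# widens). Both cost models are placeholders for illustration only.

def capital_cost(d):                     # per-metre pipe cost, arbitrary units
    return 1200.0 * d ** 1.5

def pumping_cost(d, throughput=0.05):    # Darcy-Weisbach-like ~ Q^2 / d^5
    return 0.64 * throughput ** 2 / d ** 5

def optimal_diameter(candidates):
    return min(candidates, key=lambda d: capital_cost(d) + pumping_cost(d))

candidates = [0.05 + 0.005 * i for i in range(60)]     # 50 mm .. 345 mm
d_opt = optimal_diameter(candidates)
print(f"optimal diameter ~ {d_opt * 1000:.0f} mm")
```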
127

Neural trust model for multi-agent systems

Lu, Gehao January 2011 (has links)
Introducing trust and reputation into multi-agent systems can significantly improve the quality and efficiency of such systems. Computational trust and reputation also create an environment of survival of the fittest, helping agents recognize and eliminate malevolent agents in the virtual society. The thesis redefines computational trust and analyzes its features from different aspects. A systematic model called the Neural Trust Model for Multi-agent Systems is proposed to support trust learning, trust estimation, reputation generation and reputation propagation. In this model, the thesis extends the traditional Self-Organizing Map (SOM) to create a SOM-based Trust Learning (STL) algorithm and a SOM-based Trust Estimation (STE) algorithm. The STL algorithm solves the problem of learning trust from agents' past interactions, and the STE algorithm solves the problem of estimating trustworthiness with the help of the previously learned patterns. The thesis also proposes a multi-agent reputation mechanism for generating and propagating reputations. The mechanism exploits the patterns learned by the STL algorithm and generates the reputation of a specific agent. Three propagation methods are also designed as part of the mechanism to guide path selection for the reputation. For evaluation, the thesis designs and implements a test bed to evaluate the model in a simulated electronic commerce scenario. The proposed model is compared with a traditional arithmetic-based trust model, and also against itself in situations where there is no reputation mechanism. The results show that the model can significantly improve the quality and efficacy of the test-bed scenario. Some design considerations and the rationale behind the algorithms are also discussed in light of the results.
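The STL algorithm adapts the standard SOM weight update to vectors describing past interactions. A minimal SOM training step in that spirit is sketched below; the four-feature interaction encoding and the learning parameters are assumptions, as the abstract does not define them.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID, DIM = (8, 8), 4     # 8x8 map; 4 assumed trust features per interaction
weights = rng.random((*GRID, DIM))

def train_step(x, t, lr0=0.5, sigma0=3.0, tau=100.0):
    """One SOM update: pull the best-matching unit and its neighbours
    towards the interaction vector x, with decaying rate and radius."""
    lr = lr0 * np.exp(-t / tau)
    sigma = sigma0 * np.exp(-t / tau)
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(dists.argmin(), GRID)       # best-matching unit
    ii, jj = np.indices(GRID)
    grid_d2 = (ii - bmu[0]) ** 2 + (jj - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * sigma ** 2))            # neighbourhood kernel
    weights[:] += lr * h[..., None] * (x - weights)
    return bmu

# x might encode (quality, timeliness, cost, honesty) of one interaction.
for t in range(200):
    train_step(rng.random(DIM), t)
```

After training, the map's clusters act as the "patterns" from which trustworthiness is estimated for new interactions (the STE step).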
128

Aspects and objects : a unified software design framework

Iqbal, Saqib January 2013 (has links)
Aspect-Oriented Software Development provides a means to modularize concerns of a system that are scattered over multiple system modules. These crosscutting concerns cause code to be scattered and tangled across multiple system units. The technique was first proposed at the programming level but has since evolved through the other phases of the software development lifecycle; aspect-orientation is now addressed in requirements engineering, architecture, design and implementation. This thesis focuses on aspect-oriented software design and provides a design language, the Aspect-Oriented Design Language (AODL), to specify, represent and design aspectual constructs. The language has been designed to support the co-design of aspectual and non-aspectual constructs, and the obliviousness between the constructs has been minimized to improve the comprehensibility of the models. The language is applied in three phases, and for each phase a separate set of design notations has been introduced. The design notations and diagrams are extensions of the Unified Modelling Language (UML) and follow UML Meta Object Facility (UML MOF) rules. There is a separate notation for each aspectual construct and a set of design diagrams to represent their structural and behavioural characteristics. In the first phase, join points are identified and represented in the base program. A distinct design notation has been designated for join points, through which they are located using two diagrams: the Join Point Identification Diagram and the Join Point Behavioural Diagram. The former identifies join points in a structural depiction of message passing among objects, and the latter locates them in the behavioural flow of activities of the system. In the second phase, aspects are designed using an Aspect Design Model that models the structural representation of an aspect. The model contains the aspect's elements and the associations among them. A special diagram, known as the Pointcut Advice Diagram, is nested in the model to represent the relationship between pointcuts and their related advices; the remaining features, such as attributes, operations and inter-type declarations, are statically represented in the model. In the third and final phase, the composition of aspects is designed, using three diagrams. To design the dynamic composition of aspects with base classes, an Aspect-Class Dynamic Model has been introduced; it depicts the weaving of advices into the base program during the execution of the system. The structural representation of this weaving is modelled using the Aspect-Class Structural Model, which represents the relationships between aspects and base classes. The third model is the Pointcut Composition Model, a fine-grained version of the Aspect-Class Dynamic Model proposed to depict compositions in detail at the pointcut level. Besides these models, a tabular specification of pointcuts has also been introduced that helps in documenting pointcuts along with their parent aspects and interacting classes. AODL has been evaluated in two stages. In the first stage, two detailed case studies were modelled using AODL: the first an unimplemented system forward-designed using AODL notations and diagrams, and the second an implemented system reverse-engineered and redesigned in AODL. In the second stage, a qualitative evaluation was conducted to assess the efficacy and maturity of the language and to compare it with peer modelling approaches.
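For readers unfamiliar with the vocabulary AODL models (join points, pointcuts, advice, weaving), a loose Python analogy using decorators may help. This illustrates the concepts only; it is not AODL itself, which is a UML-based design notation rather than code.

```python
import functools

def advice_before(pointcut):
    """Weave 'before' advice into any function whose name the pointcut selects."""
    def weave(fn):
        if not pointcut(fn.__name__):
            return fn                          # join point not selected
        @functools.wraps(fn)
        def woven(*args, **kwargs):
            print(f"[advice] entering {fn.__name__}")   # crosscutting concern
            return fn(*args, **kwargs)
        return woven
    return weave

# Pointcut: every operation whose name starts with "save" (a logging concern).
logging_pointcut = lambda name: name.startswith("save")

@advice_before(logging_pointcut)
def save_order(order_id):                      # executions are join points
    return f"order {order_id} saved"

print(save_order(42))
```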
129

Design of a wireless intelligent fuzzy controller network

Saeed, Bahghtar Ibraheem January 2014 (has links)
Since the first application of fuzzy logic in the field of control engineering, fuzzy logic control has been successfully employed to control a wide variety of applications, such as commercial appliances, industrial automation, robots, traffic control, cement kilns and automotive engineering. Human knowledge about controlling complex and non-linear processes can be incorporated into a controller in the form of linguistic expressions. Despite these achievements, however, there is still a lack of an empirical or analytical design study which adequately addresses a systematic auto-tuning method. Indeed, tuning is one of the most crucial parts of the overall design of fuzzy logic controllers, and it has become an active research field. Various techniques have been utilised to develop tuning algorithms, ranging from trial-and-error methods to very advanced optimisation techniques. The structure of fuzzy logic controllers is not as straightforward as that of PID controllers. In addition, there is a set of parameters that can be adjusted, and it is not always easy to find the relationship between the parameters and the controller performance measures. Moreover, controllers generally operate over a wide range of setpoints; changing from one value to another requires the controller parameters to be re-tuned in order to maintain satisfactory performance over the entire range. This thesis deals with the design and implementation of a new intelligent algorithm for fuzzy logic controllers in a wireless network structure. The algorithm enables the controllers to learn about their plants and to systematically tune their gains. It also provides the capability of retaining the knowledge acquired during the tuning process, and this knowledge is shared across the network through a wireless communication link with other controllers. Based on the relationships between controller gains and the closed-loop characteristics, an auto-tuning algorithm is developed. Simulation experiments using standard second-order systems demonstrate the effectiveness of the algorithm with respect to auto-tuning, tracking setpoints and rejecting external disturbances; a zero-overshoot response is produced, with improvements in the transient and steady-state responses. The wireless network structure is implemented in LabVIEW by composing a network of several fuzzy controllers. The results demonstrate that the controllers are able to retain and share the knowledge.
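As a sketch of the kind of controller whose gains such an algorithm tunes, the following is a generic single-input fuzzy inference step with triangular membership functions; the rule base and the gains ke/ku are illustrative assumptions, not the thesis's design.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error, ke=1.0, ku=1.0):
    """One inference step: fuzzify the scaled error, fire three rules,
    defuzzify by weighted average. ke and ku are the tunable gains an
    auto-tuning algorithm would adjust online."""
    e = max(-1.0, min(1.0, ke * error))        # scale and clamp the input
    mu = {                                     # rule firing strengths
        "neg":  tri(e, -2.0, -1.0, 0.0),
        "zero": tri(e, -1.0,  0.0, 1.0),
        "pos":  tri(e,  0.0,  1.0, 2.0),
    }
    centres = {"neg": -1.0, "zero": 0.0, "pos": 1.0}   # output singletons
    num = sum(mu[k] * centres[k] for k in mu)
    den = sum(mu.values()) or 1.0
    return ku * num / den

print(fuzzy_control(0.4))     # a moderate positive control action (0.4)
```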
130

The use of advanced soft computing for machinery condition monitoring

Ahmed, Mahmud January 2014 (has links)
The demand for cost-effective, reliable and safe machinery operation requires accurate fault detection and classification. These issues are of paramount importance, as potential failures of rotating and reciprocating machinery can then be managed properly and, in some cases, avoided. Various methods have been applied to tackle these issues, but their accuracy is variable and leaves scope for improvement. This research proposes appropriate methods for fault detection and diagnosis. The main aim of this study is to use Artificial Intelligence (AI) and related mathematical approaches to build a condition monitoring (CM) system that has incremental learning capabilities and selects effective diagnostic features for the fault diagnosis of a reciprocating compressor (RC). The investigation involved a series of experiments conducted on a two-stage RC, first at baseline condition and then with faults introduced into the intercooler, the drive belt and the second-stage discharge and suction valves respectively. In addition, three combined faults were created and simulated to test the model: discharge valve leakage combined with intercooler leakage, suction valve leakage combined with intercooler leakage, and discharge valve leakage combined with suction valve leakage. The vibration data collected from the experimental RC were processed through a pre-processing stage, feature extraction and feature selection before the diagnosis and classification model was built. A large number of potential features were calculated from the time domain, the frequency domain and the envelope spectrum. Applying Neural Networks (NNs), Support Vector Machines (SVMs) and Relevance Vector Machines (RVMs) integrated with Genetic Algorithms (GAs), and principal component analysis (PCA) combined with principal component optimisation, to these features showed that the features from envelope analysis have the most potential for differentiating the various common faults in RCs. The practical results for fault detection, diagnosis and classification show that the proposed methods perform accurately and can be used as effective tools for diagnosing reciprocating machinery failures.
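Envelope features of the kind found most discriminative here are conventionally obtained via the Hilbert transform. A minimal envelope-spectrum sketch using numpy/scipy follows; the band edges, sampling rate and the synthetic fault signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_spectrum(x, fs):
    """Band-pass the vibration signal, take its Hilbert envelope, and
    return the one-sided amplitude spectrum of that envelope."""
    b, a = butter(4, [2000, 8000], btype="band", fs=fs)  # assumed band
    env = np.abs(hilbert(filtfilt(b, a, x)))
    env -= env.mean()                                    # drop the DC line
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), 1 / fs)
    return freqs, spec

fs = 50_000                            # 50 kHz sampling, illustrative
t = np.arange(fs) / fs                 # one second of signal
# Synthetic valve-fault-like signal: 120 Hz impacts modulating a 5 kHz resonance
x = (1 + np.sin(2 * np.pi * 120 * t)) * np.sin(2 * np.pi * 5000 * t)
x += 0.1 * np.random.default_rng(1).standard_normal(fs)

freqs, spec = envelope_spectrum(x, fs)
print(f"dominant envelope line: {freqs[spec[1:].argmax() + 1]:.0f} Hz")  # ~120
```

Features such as the amplitudes at expected fault frequencies in this envelope spectrum would then feed the NN/SVM/RVM classifiers.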
