Background: Systematic reviews have become an established methodology in software engineering. However, they are labour-intensive, error-prone and time-consuming. These and other challenges have led to the development of tools to support the process. However, there is limited evidence about their usefulness. Aim: To investigate the usefulness of tools to support systematic reviews in software engineering and to develop an evaluation framework for an overall support tool. Method: A literature review, taking the form of a mapping study, was undertaken to identify and classify tools supporting systematic reviews in software engineering. Motivated by its results, a feature analysis was performed to independently compare and evaluate a selection of tools that aimed to support the whole systematic review process. An initial version of an evaluation framework was developed to carry out the feature analysis and was later refined based on its results. To obtain a deeper understanding of the technology, a survey was undertaken to explore systematic review tools in other domains, and semi-structured interviews with researchers in healthcare and social science were carried out. Quantitative and qualitative data were collected, analysed and used to further refine the framework. Results: The literature review showed an encouraging growth of tools to support systematic reviews in software engineering, although many had received limited evaluation. The feature analysis provided new insight into the usefulness of tools, identified the strongest and weakest candidates and established the feasibility of an evaluation framework. The survey provided knowledge about tools used in other domains, which helped further refine the framework. Conclusions: Tools to support systematic reviews in software engineering are still immature. Their potential, however, remains high and it is anticipated that the need for tools within the community will increase.
The evaluation framework presented aims to support the future development, assessment and selection of appropriate tools.
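As a minimal illustration of how a feature-analysis framework can compare tools, the sketch below computes a weighted score per tool across rated features. The feature names, weights and rating scale are invented for illustration and are not taken from the framework described in the abstract.

```python
# Hypothetical feature-analysis scoring: each tool is rated 0-5 per
# feature, and a weighted average yields a comparable overall score.

def feature_score(ratings, weights):
    """Combine per-feature ratings (0-5 scale) into a weighted average."""
    total_weight = sum(weights.values())
    return sum(ratings[f] * weights[f] for f in weights) / total_weight

# Illustrative features covering stages of the systematic review process
weights = {"search": 3, "screening": 2, "data_extraction": 2, "reporting": 1}
tool_a = {"search": 4, "screening": 3, "data_extraction": 2, "reporting": 5}
tool_b = {"search": 2, "screening": 5, "data_extraction": 4, "reporting": 3}

ranked = sorted(["tool_a", "tool_b"],
                key=lambda t: feature_score({"tool_a": tool_a, "tool_b": tool_b}[t], weights),
                reverse=True)
```

A real feature analysis would also justify each weight and rating against documented criteria; the scoring arithmetic itself is the simple part.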
Burton, Daniel John
This thesis describes the design and implementation of a novel hybrid field/zone fire model, linking a fire field model to a zone model. This novel concept was implemented using SMARTFIRE (a fire field model produced at the University of Greenwich) and two different zone models (CFAST, which is produced by NIST, and FSEG-ZONE, which was produced by the author during the course of this work). The intention of the hybrid model is to reduce the amount of computation incurred in using field models to simulate multi-compartment geometries, and it is implemented so that users can employ the zone component without having to make further technical considerations, in line with the existing paradigm of the SMARTFIRE suite. In using the hybrid model, only the most important or complex parts of the geometry are fully modelled using the field model. Other suitable, less important parts of the geometry are modelled using the zone model. From the field model's perspective the zone model is represented as an accurate pressure boundary condition. From the zone model's perspective the energy and mass fluxes crossing the interface between the models are seen as point sources. The models are fully coupled and iterate towards a solution, ensuring global conservation as well as conservation between the regions using different computational methods. With this approach a significant proportion of the computational cells can be replaced by a relatively simple zone model, saving computational time. The hybrid model can be used in a wide range of situations but is especially applicable to large geometries, such as hotels, prisons, factories or ships, where the domain size typically proves extremely computationally expensive for treatment using a field model.
The capability to model such geometries without the associated mesh overheads could eventually permit simulations to be run in 'faster-real-time', allowing the spread of fire and effluents to be modelled, along with a close coupling with evacuation software, to provide a tool not just for research objectives but for real-time incident management in emergency situations. Initial 'proof of concept' work began with the development of one-way coupling regimes to demonstrate that a valid link between models could allow communication and conservation of the respective variables. This was extended to a two-way coupling regime using the CFAST zone model, and results of this implementation are presented. Fundamental differences between the SMARTFIRE and CFAST models led to the development of the FSEG-ZONE model to address several issues; this implementation and numerous results are discussed at length. Finally, several additions were made to the FSEG-ZONE model that are necessary for an accurate treatment of fire simulations. The test cases presented in this thesis show that good agreement with full-field results can be obtained through use of the hybrid model, while the reduction in computational time realised is approximately equivalent to the percentage of domain cells that are replaced by the zone calculations of the hybrid model.
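The two-way coupling described above can be sketched as a fixed-point iteration: the zone model supplies a pressure boundary to the field model, the field model returns interface fluxes treated as point sources by the zone model, and both iterate until the exchanged quantities stop changing. The sketch below is not the SMARTFIRE/FSEG-ZONE implementation; the "physics" is a stand-in linear relaxation, purely to show the convergence structure.

```python
# Toy two-way field/zone coupling loop (illustrative physics only).

def zone_pressure(mass_flux, base_pressure=101325.0, k=50.0):
    """Stand-in zone model: compartment pressure rises with net mass influx."""
    return base_pressure + k * mass_flux

def field_flux(boundary_pressure, ref_pressure=101325.0, c=1e-4):
    """Stand-in field model: interface flux driven by the pressure difference."""
    return c * (ref_pressure + 100.0 - boundary_pressure)

def couple(tol=1e-10, max_iter=100):
    """Iterate flux -> pressure -> flux until the interface values converge."""
    flux = 0.0
    for _ in range(max_iter):
        p = zone_pressure(flux)          # zone side: pressure from flux
        new_flux = field_flux(p)         # field side: flux from pressure
        if abs(new_flux - flux) < tol:   # converged: both sides agree
            return p, new_flux
        flux = new_flux
    return p, flux

p, flux = couple()
```

Because each cycle contracts the flux update (the toy slope is well below 1 in magnitude), the loop converges in a handful of iterations; the real hybrid model must additionally enforce mass and energy conservation across the interface at every step.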
There are many types of bio-signals with various prospective control applications. This dissertation considers possible application domains of the electroencephalographic (EEG) signal. The use of EEG signals as a source of information for the control of external devices has recently become a growing concern in the scientific world. The application of EEG signals in Brain-Computer Interfaces (BCI), a variant of Human-Computer Interfaces (HCI), as an implement enabling direct and fast communication between the human brain and an external device, has recently become very popular. BCI solutions currently available on the market require complex signal processing methodology, which results in the need for expensive equipment with high computing power. In this work, a study of various types of EEG equipment was conducted in order to select the most appropriate one. The analysis of EEG signals is very complex due to the presence of various internal and external artifacts. The signals are also sensitive to disturbances and non-stochastic, which makes the analysis a complicated task. The research was performed on customised equipment (built by the author of this dissertation), on a professional medical device and on the Emotiv EPOC headset. This work concentrated on the application of an inexpensive, easy-to-use Emotiv EPOC headset as a tool for acquiring EEG signals. The project also involved the application of an embedded system platform, the TS-7260. That solution limited the choice of an appropriate signal processing method, as embedded platforms are characterised by limited efficiency and low computing power. This aspect was the most challenging part of the whole work. The use of the embedded platform extends the possible future applications of the proposed BCI. It also gives more flexibility, as the platform is able to simulate various environments. The study did not involve the use of traditional statistical or complex signal processing methods.
The novelty of the solution lies in its implementation using only basic mathematical operations. The efficiency of this method is also presented in this dissertation. Another important aspect of the conducted study is that the research was carried out not only in a laboratory but also in an environment reflecting real-life conditions. The results proved the efficiency and suitability of the proposed solution in real-life environments. Further study will focus on improving the signal-processing method and applying other bio-signals, in order to extend the possible applicability and improve its effectiveness.
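The dissertation reports that only basic mathematical operations were used so the processing could run on a low-power embedded platform such as the TS-7260. The toy pipeline below (a moving average to smooth a channel, then a fixed threshold to trigger a control event) illustrates that style of lightweight processing; the window size, threshold and signal values are invented, not taken from the work.

```python
# Illustrative low-cost EEG processing: smooth, then threshold.
# Only additions, divisions and comparisons are used, matching the
# constraint of a platform with little computing power.

def moving_average(samples, window=4):
    """Simple boxcar smoothing; returns len(samples) - window + 1 values."""
    out = []
    for i in range(len(samples) - window + 1):
        out.append(sum(samples[i:i + window]) / window)
    return out

def detect_events(samples, threshold=50.0, window=4):
    """Indices (into the smoothed series) where the signal exceeds threshold."""
    smoothed = moving_average(samples, window)
    return [i for i, v in enumerate(smoothed) if v > threshold]

# Invented sample values: a burst of activity in the middle of the trace
signal = [10, 12, 11, 60, 70, 65, 72, 9, 11, 10]
events = detect_events(signal)
```

On an embedded target the same loop would run over a fixed-size circular buffer rather than a Python list, but the arithmetic cost per sample is identical.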
A new framework for supporting and managing multi-disciplinary system-simulation in a PLM environment
Mahler, Michael, January 2014
In order to keep products and systems attractive to consumers, developers have to do what they can to meet growing customer requirements. These requirements may be direct demands of customers but may also be the consequence of other influences such as globalisation, customer fragmentation, product portfolios, regulations and so on. In the manufacturing industry, most companies are able to meet these growing requirements with mechatronic, interdisciplinary designed and developed products, which demand collaboration between different disciplines. For example, the generation of a virtual prototype of a mechatronic, multi-disciplinary product or system, and its simulation tools, could require the cooperation of multiple departments within a company or between business partners. In a simulation, a virtual prototype is used for testing a product or a system. This virtual prototype and test approach can be used from the early stages of the development process to the end of the product or system lifecycle. Over the years, different approaches and systems for generating virtual prototypes and testing have been designed and developed. But these systems have not been properly integrated, although some efforts have been made with limited success. Therefore, the requirement exists to propose and develop new technologies, methods and methodologies for achieving this integration. In addition, the use of simulation tools requires special expertise for the generation of simulation models, and the formats of product prototypes and simulation data differ between systems. This adds to the requirement for a guideline or framework for implementing the integration of multi- and inter-disciplinary product design, simulation software and data management during the entire product lifecycle. The main functionality and metadata structures of the new framework have been identified and optimised.
The multi-disciplinary simulation data and their collection processes, the existing PLM (product lifecycle management) software and their applications have been analysed. In addition, the inter-disciplinary collaboration between a variety of simulation software has been analysed and evaluated. The new framework integrates the identified and optimised functionality and metadata structures to support and manage multi- and inter-disciplinary simulation in a PLM system environment. It is believed that this project has made 6 contributions to new knowledge generation: (1) the New Conceptual Framework to Enhance the Support and Management of Multi-Disciplinary System-Simulation, (2) the New System-Simulation Oriented and Process Oriented Data Handling Approach, (3) the Enhanced Traceability of System-Simulation to Sources and Represented Products and Functions, (4) the New System-Simulation Derivation Approach, (5) the New Approach for the Synchronisation of System Describing Structures and (6) the Enhanced System-Simulation Result Data Handling Approach. In addition, the new framework would bring significant benefits to each industry it is applied to. They are: (1) the more effective re-use of individual simulation models in system-simulation context, (2) the effective pre-defining and preparing of individual simulation models, (3) the easy and native reviewable system-simulation structures in relation to input-sources, such as products and / or functions, (4) the easy authoring-software independent update of system-simulation-structures, product-structures and function-structures, (5) the effective, distributed and cohesive post-process and interpretation of system-simulation-results, (6) the effective, easy and unique traceability of the data which means cost reductions in documentation and data security, and (7) the greater openness and flexibility in simulation software interactions with the data holding system. 
Although the proposed and developed conceptual framework has not been implemented (that would require vast resources), it can be expected that the seven benefits above will lead to significant advances in the simulation of new product design and development over the whole lifecycle, offering enormous practical value to the manufacturing industry. Due to time and resource constraints, as well as the effort that would be involved in the implementation of the proposed new framework, there are clearly some limitations to this PhD thesis. Five areas have been identified where further work is needed to improve the quality of this project: (1) an expanded range of industrial sectors and product design and development processes, (2) parameter-oriented system and production description in the new framework, (3) improved user interface design of the new framework, (4) the automatic generation of simulation processes and (5) enhancement of the individual simulation models.
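The traceability contribution above (linking system-simulations back to the product and function structures they represent) can be made concrete with a small data-structure sketch. The field names and record shapes below are assumptions invented for illustration; they are not the framework's actual metadata schema.

```python
# Illustrative metadata records: each individual simulation model keeps
# trace links to its source product and function items, so a
# system-simulation can report what it represents.
from dataclasses import dataclass, field

@dataclass
class SimulationModel:
    name: str
    discipline: str           # e.g. "mechanical", "electrical"
    source_product_id: str    # trace link into the product structure
    source_function_id: str   # trace link into the function structure

@dataclass
class SystemSimulation:
    name: str
    models: list = field(default_factory=list)

    def trace_products(self):
        """Product items this system-simulation represents (deduplicated)."""
        return sorted({m.source_product_id for m in self.models})

sim = SystemSimulation("brake_system")
sim.models.append(SimulationModel("caliper_fem", "mechanical", "P-001", "F-010"))
sim.models.append(SimulationModel("ecu_logic", "electrical", "P-002", "F-011"))
```

In a PLM system these links would be managed relationships between versioned objects rather than plain strings, but the traceability query has the same shape.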
The research work presented herein addresses time representation and temporal reasoning in the domain of artificial intelligence. A general temporal theory, as an extension of Allen and Hayes's, Galton's and Vilain's theories, is proposed which treats both time intervals and time points on an equal footing; that is, both intervals and points are taken as primitive time elements in the theory. This means that neither do intervals have to be constructed out of points, nor do points have to be created as some limiting construction of intervals. This approach is different from that of Ladkin, of Van Beek, of Dechter, Meiri and Pearl, and of Maiocchi, which is either to construct intervals out of points, or to treat points and intervals separately. The theory is presented in terms of a series of axioms which characterise a single temporal relation, "meets", over time elements. The axiomatisation allows non-linear time structures such as branching time and parallel time, and additional axioms specifying the linearity and density of time are also presented. A formal characterisation of the open and closed nature of primitive intervals, which has been a problematic question of time representation in artificial intelligence, is provided in terms of the "meets" relation. It is shown to be consistent with the conventional definitions of open/closed intervals which are constructed out of points. It is also shown that this general theory is powerful enough to subsume some representative temporal theories, such as Allen and Hayes's interval-based theory, Bruce's and McDermott's point-based theories, and the interval and point based theory of Vilain, and of Galton. A finite time network based on the theory is specially addressed, where a consistency checker in two different forms is provided for cases with, and without, duration reasoning, respectively.
Utilising the time axiomatisation, the syntax and semantics of a temporal logic for reasoning about propositions whose truth values are associated with particular intervals/points are explicitly defined. It is shown that the logic is more expressive than that of some existing systems, such as Allen's interval-based logic, the revised theory proposed by Galton, Shoham's point-based interval logic, and Haugh's MTA-based logic; and the corresponding problems with these systems are satisfactorily solved. Finally, as an application of the temporal theory, a new architecture for a temporal database system which allows the expression of relative temporal knowledge of data transaction and data validity times is proposed. A general retrieval mechanism is presented for a database with a purely qualitative temporal component which allows queries with temporal constraints in terms of any logical combination of Allen's temporal relations. To reduce the computational complexity of the consistency checking algorithm when quantitative time duration knowledge is added, a class of databases, termed time-limited databases, is introduced. This class allows absolute-time-stamped and relative time information in a form which is suitable for many practical applications, where qualitative temporal information is only occasionally needed, and the efficient retrieval mechanisms for absolute-time-stamped databases may be adopted.
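To give a flavour of how other temporal relations reduce to the single primitive "meets", the sketch below models time elements as (start, end) pairs, purely for illustration: in the theory itself "meets" is axiomatised as primitive and elements are not constructed from endpoints. For example, "a is before b" holds exactly when some element k fits between them, i.e. meets(a, k) and meets(k, b).

```python
# Endpoint-based illustration of relations definable from "meets".
# In the point-representable case, meets(a, b) means a ends exactly
# where b starts, with no gap and no overlap.

def meets(a, b):
    return a[1] == b[0]

def before(a, b):
    # Definable from "meets": exists k with meets(a, k) and meets(k, b).
    # With endpoint pairs that is equivalent to a strict gap:
    return a[1] < b[0]

def equals(a, b):
    # Definable from "meets": a and b are met by, and meet, the same elements.
    return a == b

i, j, k = (0, 2), (2, 5), (6, 9)
```

Here i meets j (they share the boundary point 2 with no overlap), while i is before k because the element (2, 6) would fit between them.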
The work in this thesis presents an approach towards the effective monitoring of business processes using Case-Based Reasoning (CBR). The rationale behind this research was that business processes constitute a fundamental concept of the modern world and there is a constantly emerging need for their efficient control. They can be efficiently represented, but not necessarily monitored and diagnosed effectively, via an appropriate platform. Motivated by this observation, the research investigated the extent to which workflows can be monitored, diagnosed and explained efficiently. Workflows and their effective representation in terms of CBR were investigated, as well as how similarity measures among them could be established appropriately. The research also examined how monitoring results should subsequently be explained to users, and what software architecture would be appropriate to allow monitoring of workflow executions. Throughout the progress of this research, several sets of experiments were conducted using existing enterprise systems which are coordinated via a predefined workflow business process. Past data produced over several years were used for the conducted experiments. Based on those, the necessary knowledge repositories were built and subsequently used to evaluate the suggested approach towards the effective monitoring and diagnosis of business processes. The produced results show to what extent a business process can be monitored and diagnosed effectively. The results also provide hints on possible changes that would maximise the accuracy of the actual monitoring, diagnosis and explanation. Moreover, the presented approach can be generalised and expanded further to enterprise systems that share as common characteristics a possible workflow representation and the presence of uncertainty.
Further work motivated by this thesis could investigate how the acquired knowledge can be transferred across workflow systems and benefit large-scale multidimensional enterprises. Additionally, temporal uncertainty could be investigated further, in an attempt to address it while reasoning. Finally, the provenance of cases and their solutions could be explored further, identifying correlations with the process of reasoning.
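A common way to realise the similarity measures the thesis investigates is the classic CBR local-global scheme: a local similarity per attribute, combined into a weighted global score used to retrieve the most similar past case. The attributes, weights and values below are invented for illustration; they are not the thesis's actual workflow representation.

```python
# Hedged sketch of CBR retrieval for workflow cases: weighted
# local-global similarity, then nearest-neighbour selection.

def local_sim(a, b):
    """Similarity of two attribute values in [0, 1]."""
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        rng = max(abs(a), abs(b), 1)
        return 1 - abs(a - b) / rng      # normalised numeric distance
    return 1.0 if a == b else 0.0        # exact match for symbols

def case_similarity(case, query, weights):
    """Weighted average of local similarities over the shared attributes."""
    total = sum(weights.values())
    return sum(weights[k] * local_sim(case[k], query[k]) for k in weights) / total

past_case = {"step": "approve", "duration": 40, "outcome": "ok"}
query = {"step": "approve", "duration": 50, "outcome": "ok"}
weights = {"step": 2, "duration": 1, "outcome": 1}
score = case_similarity(past_case, query, weights)
```

Retrieval then ranks the case base by this score; the diagnosis attached to the best-matching past execution becomes the candidate explanation for the monitored one.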
This thesis proposes information security systems to aid network temporal security applications, using multivariate quadratic polynomial equations, image cryptography and image hiding. In the first chapter, some general terms of temporal logic, multivariate quadratic (MQ) problems and image cryptography/hiding are introduced. In particular, explanations of the need for them and the research motivations are given, i.e., a formal characterisation of time-series, an alternative scheme for MQ systems, a hybrid-key based image encryption and authentication system, and a DWT-SVD (Discrete Wavelet Transform and Singular Value Decomposition) based image hiding system. This is followed by a literature review of temporal bases, ergodic matrices, cryptography and information hiding. After these tools are introduced, it is shown how they can be applied in this research. The main part of this thesis concerns using ergodic matrices and temporal logic in cryptography and information hiding. Specifically, it can be described as follows. A formal characterisation of time-series is presented for both complete and incomplete situations, where a time-series is formalised as a triple (ts, R, Dur) denoting, respectively, the temporal order of time-elements, the temporal relationships between time-elements and the temporal duration of each time-element. A cryptosystem based on MQ is proposed. The security of many recently proposed cryptosystems is mainly based on the difficulty of solving large MQ systems. Apart from UOV schemes with proper parameter values, the basic types of these schemes can be broken without great difficulty, and some of the examined schemes have further shortcomings. Therefore, a bisectional multivariate quadratic equation (BMQE) system over a finite field of degree q is proposed. The BMQE system is analysed using Kipnis and Shamir's relinearisation and fixing-variables methods.
It is shown that if the number of equations is larger than or equal to twice the number of variables, and q^n is large enough, the system is complicated enough to resist several existing attack schemes. A hybrid-key and ergodic-matrix based image encryption/authentication scheme is proposed in this work. This is because existing traditional cryptosystems, such as RSA, DES, IDEA, SAFER and FEAL, are not ideal for image encryption: they are slow and do not effectively remove the correlations between adjacent pixels. Another reason is that chaos-based cryptosystems, which have been used extensively over the last two decades, rely almost exclusively on symmetric cryptography. The experimental results, statistical analysis and sensitivity-based tests confirm that, compared to existing chaos-based image cryptosystems, the proposed scheme provides a more secure way of encrypting and transmitting images. However, a visibly encrypted image will easily arouse suspicion. Therefore, a hybrid digital watermarking scheme based on DWT-SVD and ergodic matrices is introduced. Compared to other watermarking schemes, the proposed scheme shows significant improvements in both imperceptibility and robustness under various types of image processing attacks, such as JPEG compression, median filtering, average filtering, histogram equalisation, rotation, cropping, Gaussian noise, speckle noise and salt-and-pepper noise. In general, the proposed method is a useful tool for ownership identification and copyright protection. Finally, two applications based on temporal issues were studied. This is because in real life, when two or more parties communicate, they typically send a series of messages, or they may want to embed multiple watermarks for themselves. Therefore, a formal characterisation of time-series is applied to cryptography (especially encryption) and steganography (especially watermarking).
Consequently, a scheme for temporally ordered image encryption and a temporally ordered dynamic multiple digital watermarking model are introduced.
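The hardness assumption behind MQ-based schemes like the proposed BMQE system can be seen in miniature below: evaluating a set of quadratic polynomials over GF(2) is cheap, while inverting them (recovering x from y) requires solving a quadratic system, which is NP-hard in general and only brute-forceable at toy sizes. The three polynomials here are invented for illustration and bear no relation to the actual BMQE construction.

```python
# Toy MQ system over GF(2): three quadratic polynomials in three bits.
from itertools import product

def mq_eval(x):
    """Evaluate the public quadratic map (cheap: a few ANDs and XORs)."""
    x1, x2, x3 = x
    y1 = (x1 * x2 ^ x3) & 1
    y2 = (x2 * x3 ^ x1 ^ 1) & 1
    y3 = (x1 * x3 ^ x2) & 1
    return (y1, y2, y3)

def brute_force_invert(y):
    """Inversion by exhaustive search: 2^n work, feasible only for tiny n."""
    return [x for x in product((0, 1), repeat=3) if mq_eval(x) == y]

y = mq_eval((1, 0, 1))
preimages = brute_force_invert(y)
```

Real schemes use n of 80 or more variables, putting the 2^n search far out of reach; the cryptanalytic attacks mentioned in the abstract (relinearisation, fixing variables) instead exploit algebraic structure in the specific polynomial map.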
Empirical evidence that proves a serious game is an educationally effective tool for learning computer programming constructs at the computational thinking level
Kazimoglu, Cagin, January 2013
Owing to their engaging and motivational nature, games, predominantly in young age groups, have been omnipresent in education since ancient times. More recently, computer video games have become widely used, particularly in secondary and tertiary education, as a method of enhancing the understanding of some subject areas (especially English language education, geography, history and health), and also as an aid to attracting and retaining students. Many academics have proposed approaches using video game-based learning (GBL) to impart theoretical and applied knowledge, especially in the Computer Science discipline. Despite several years of considerable effort, empirical evidence is still missing from the GBL literature, specifically evidence identifying what students learn from a serious game regarding programming constructs, and whether or not they acquire additional skills after being introduced to a GBL approach. Much of the existing work in this area explores the motivational aspect of video games and does not necessarily focus on what people can learn, or which cognitive skills they can acquire, that would support their learning in introductory computer programming. Hence, this research is concerned with the design, and with determining the educational effectiveness, of a game model focused on the development of computational thinking (CT) skills through the medium of learning introductory programming constructs. The research is aimed at designing, developing and evaluating a serious game through a series of empirical studies in order to identify whether or not this serious game can be an educationally effective tool for learning computer programming at the CT level. The game model and its implementation are created to achieve two main purposes. Firstly, to develop a model that allows students to practise a series of cognitive abilities that characterise CT, regardless of their programming background.
Secondly, to support the learning of applied knowledge in introductory programming by demonstrating how a limited number of key introductory computer programming constructs work, constructs which introductory programming students often find challenging and/or difficult to understand. In order to measure the impact of the serious game and its underlying game model, a pilot study and a series of rigorous empirical studies were designed. The pilot study was conducted as a freeform evaluation to obtain initial feedback on the game's usability. A group of students following Computer Science and related degree programmes, with diverse backgrounds and experience, participated in the pilot study and confirmed that they found the game enjoyable. The feedback obtained also showed that the majority of students believed the game would be beneficial in helping introductory programming students learn computational thinking skills. Having incorporated the feedback into a revised version of the game, a further series of rigorous studies was conducted, analysed and evaluated. In order to accurately measure the effect of the game, the findings of the studies were statistically analysed using parametric or non-parametric measures depending on the distribution of the data gathered. Moreover, the correlations between how well students did in the game, the knowledge gain students felt, and the skills they felt they acquired after their game-play were thoroughly investigated. It was found that intrinsic motivation, attitude towards learning through game-play, students' perception of their programming knowledge, how well students visualise programming constructs and their problem-solving abilities were significantly enhanced after playing the game. The correlations of the studies provided evidence that there is no strong and significant relationship between the progress of students in the game and the computational thinking skills they felt they gained from it.
It was concluded that students developed their computational thinking skills regardless of whether or not they reached the higher levels in the game. In addition to this, it was found that there are no strong and significant correlations between the key computer programming constructs and the computational thinking skills, which provides strong evidence that learning how introductory computer programming constructs work and developing computational thinking skills, are not directly connected to each other in the game environment. It was also found that students felt that their conditional logic, algorithmic thinking and simulation abilities had significantly developed after playing the game. As a result, this research concludes that the designed serious game is an educationally effective tool for a) learning how key introductory computer programming constructs work and b) developing cognitive skills in computational thinking.
Reig Galilea, Fermín Javier
The back end of a compiler performs machine-dependent tasks and low-level optimisations that are laborious to implement and difficult to debug. In addition, in languages that require run-time services such as garbage collection, the back end must interface with the run-time system to provide those services. The net result is that building a compiler back end entails a high implementation cost. In this dissertation I describe reusable code generation infrastructure that enables the construction of a complete programming language implementation (compiler and run-time system) with reduced effort. The infrastructure consists of a portable intermediate language, a compiler for this language and a low-level run-time system. I provide an implementation of this system and I show that it can support a variety of source programming languages, that it reduces the overall effort required to implement a programming language, that it can capture and retain information necessary to support run-time services and optimisations, and that it produces efficient code.
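The architecture described above, where many front ends target one portable intermediate language and share a single back end, can be sketched in miniature. This is not the dissertation's actual intermediate language; it is an invented three-address IR with a trivial "back end" that evaluates it, just to show the division of labour.

```python
# Toy portable IR: a front end for any source language emits a list of
# (op, dst, a, b) three-address instructions; one shared back end
# consumes them. Here the "back end" simply evaluates the program.

def run_ir(ir, env=None):
    """Execute a list of three-address instructions over an environment."""
    env = dict(env or {})
    for op, dst, a, b in ir:
        va = env.get(a, a)   # operand is either a variable name or a literal
        vb = env.get(b, b)
        if op == "add":
            env[dst] = va + vb
        elif op == "mul":
            env[dst] = va * vb
        else:
            raise ValueError(f"unknown op {op}")
    return env

# A front end compiling the expression (x + 2) * 3 would emit:
ir = [("add", "t0", "x", 2), ("mul", "t1", "t0", 3)]
result = run_ir(ir, {"x": 5})["t1"]
```

A real shared back end would instead lower such instructions to machine code, perform register allocation and low-level optimisation, and carry the metadata (stack maps, GC roots) that the run-time system needs, which is exactly the machinery the abstract argues is too costly to rebuild per language.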
Evaluation of information systems deployment in Libyan oil companies: towards an assessment framework
Akeel, Hosian, January 2013
This research work provides an explorative study of information systems deployment in two oil companies in Libya, one domestic and one foreign. It focuses on evaluation, review and assessment methodologies for information systems deployment in oil companies in Libya. It also takes into consideration related issues such as information systems strategies and the strategic alignment of business with information systems. The study begins with an overview of information systems deployment in Libyan-based oil companies. The study thereafter reviews Libya as a business environment and provides a literature review on information systems deployment, information systems strategies and existing assessment models for information systems deployment. A case study of each company is then presented. The research investigates information systems deployment along with associated business functions in the Libyan-based oil companies chosen as case studies. Detailed analysis of the information systems deployed in each company has been carried out, following a comprehensive information gathering process in the case study companies. The analysis was done using existing scholarly models, which include process mapping, system portfolio analysis, Nolan's model, Zuboff's model, the CPIT model and the McFarlan-Peppard model. Earl's model and Gottschalk's model have also been reviewed in the literature and used to provide insightful analysis of the information systems strategies of the case study companies. In the concluding section of this research work, a framework is established for the assessment of information systems deployment in similar business contexts, starting from the basis of process analysis and the information systems used, and considering the interfaces and linkages of the information systems and their suitability in similar business contexts. The developed framework builds on the foundation of existing assessment models for information systems deployment.
The newly developed framework presented in this study is this research work's contribution to knowledge. The framework is suited to assessing information systems deployment in oil companies in Libya and can be adapted to other oil companies in developing countries.