
Generic business process modelling framework for quantitative evaluation

Al-Tuwarijari, Jamal Mustafa January 2013 (has links)
Business processes are the backbone of organisations, used to automate and increase the efficiency and effectiveness of their services and products. The rapid growth of the Internet and other Web-based technologies has sparked competition between organisations attempting to provide a faster, cheaper and smarter environment for customers. In response to these requirements, organisations are examining how their business processes may be evaluated so as to improve business performance. This thesis proposes a generic framework to expand the applicability of various quantitative evaluation techniques to a large class of business processes. The framework introduces a novel engineering methodology that defines a modelling formalism to represent business processes that can be solved for a set of performance and optimisation algorithms. The methodology allows various types of algorithms used in model-based business process improvement and optimisation to be plugged into a single modelling formalism. As part of the framework, a generic modelling formalism (MWF-wR) is developed to represent business processes so as to allow quantitative evaluation and to select the parameters for the associated performance evaluation and optimisation. The generic framework is designed and implemented by developing software support tools in Java, an object-oriented programming language, combining three main modules: (i) a business process specification module to define the components of the business process model, (ii) a stochastic Petri net module to map the business process model to a stochastic Petri net, and (iii) an algorithms module to solve the models for various performance optimisation objectives. Furthermore, a literature survey of different aspects of business processes, including modelling and analysis techniques, provides an overview of the current state of research and highlights gaps in business process modelling and performance analysis.
Finally, experiments are introduced to investigate the validity of the presented approach.
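The mapping from a business process to a stochastic Petri net (module (ii) above) can be illustrated with a minimal sketch. This is not the thesis's MWF-wR formalism or its Java implementation; the place and transition names are hypothetical, and the race-based firing semantics is the standard SPN convention, used here purely for illustration.

```python
import random

class StochasticPetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = []             # (name, inputs, outputs, rate)

    def add_transition(self, name, inputs, outputs, rate):
        self.transitions.append((name, inputs, outputs, rate))

    def enabled(self):
        return [t for t in self.transitions
                if all(self.marking.get(p, 0) >= n for p, n in t[1].items())]

    def step(self):
        """Fire one enabled transition, chosen by a race of
        exponentially distributed delays (standard SPN semantics)."""
        en = self.enabled()
        if not en:
            return None
        # sample a delay for each enabled transition; the minimum wins
        delays = [(random.expovariate(t[3]), t) for t in en]
        delay, (name, ins, outs, _) = min(delays, key=lambda x: x[0])
        for p, n in ins.items():
            self.marking[p] -= n
        for p, n in outs.items():
            self.marking[p] = self.marking.get(p, 0) + n
        return name, delay

# A one-step "receive order -> process order" business process:
net = StochasticPetriNet({"received": 1, "processed": 0})
net.add_transition("process", {"received": 1}, {"processed": 1}, rate=2.0)
print(net.step())        # fires "process" after a sampled delay
print(net.marking)       # {'received': 0, 'processed': 1}
```

Solving such a net for performance measures (e.g. mean throughput) would, in the real framework, be handled by the algorithms module rather than by simulation alone.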

WeDRisk : an approach to managing web and distributed software development risks

Keshlaf, Ayad Ali January 2013 (has links)
Web and distributed software developments are risky and face specific challenges, such as time-zone and cultural differences. These challenges have resulted in new risks and risk management needs. In this thesis, a systematic review of existing software risk management approaches was conducted to investigate their ability to satisfy the risk management needs of web and distributed developments. The review identifies a number of weaknesses in existing approaches, for example the lack of consideration for web and distributed factors and the lack of preparation for atypical risks. A new approach called WeDRisk is introduced to manage risks from the project, process and product perspectives. The WeDRisk approach addresses the weaknesses of existing approaches to risk management, which are less able to deal with the specific challenges of web and distributed development. A key part of the approach is flexibility to deal with the rapid evolution that is typical of such developments. This flexibility is achieved by customizing the risk management process and providing a method for coping with atypical risks. WeDRisk also provides an improved risk estimation equation that considers web and distributed factors. The novel aspects of the WeDRisk approach were subjected to a series of evaluation cycles, including peer review, two controlled experiments, expert evaluation and a case study. In addition to yielding a number of improvement suggestions, the evaluation results illustrate how WeDRisk is useful, understandable, flexible, easy to use, and able to satisfy many web and distributed development risk management needs.
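The thesis's improved risk estimation equation is not reproduced in this abstract, so the following sketch only illustrates the general idea of scaling a classic probability-times-impact exposure by a web/distributed factor; the multiplicative form and the `wd_factor` parameter are assumptions for illustration, not WeDRisk's actual equation.

```python
def risk_exposure(probability, impact, wd_factor=1.0):
    """Classic risk exposure (probability x impact), scaled by a
    web/distributed factor. The multiplicative form is a hypothetical
    illustration -- WeDRisk's actual equation is defined in the thesis."""
    assert 0.0 <= probability <= 1.0
    return probability * impact * wd_factor

# Example: a time-zone coordination risk, amplified for a
# three-site distributed team (wd_factor > 1 is an assumption).
print(risk_exposure(0.4, 5.0, wd_factor=1.5))   # -> 3.0
```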

On the mechanisation of the logic of partial functions

Lovert, Matthew James January 2013 (has links)
It is well known that partial functions arise frequently in formal reasoning about programs. A partial function may not yield a value for every member of its domain. Terms that apply partial functions may thus fail to denote, and coping with such terms is problematic in two-valued classical logic. This raises a question: how can reasoning about logical formulae that contain references to terms that may fail to denote (partial terms) be conducted formally? Over the years a number of approaches to coping with partial terms have been documented. Some of these approaches attempt to stay within the realm of two-valued classical logic, while others are based on non-classical logics. However, as yet there is no consensus on which approach is the best one to use. A comparison of numerous approaches to coping with partial terms is presented, based upon formal semantic definitions. One approach that has received attention over the years is the Logic of Partial Functions (LPF), the logic underlying the Vienna Development Method. LPF is a non-classical three-valued logic designed to cope with partial terms, in which both terms and propositions may fail to denote. As opposed to using concrete undefined values, undefinedness is treated as a "gap", that is, the absence of a defined value. LPF is based upon Strong Kleene logic, where the interpretations of the logical operators are extended to cope with truth-value "gaps". Over the years a large body of research and engineering has gone into the development of proof-based tool support for two-valued classical logic. This has created a major obstacle to the adoption of LPF, since such proof support cannot be carried over directly to LPF. Presently, there is a lack of direct proof support for LPF. An aim of this work is to investigate the applicability of mechanised (automated) proof support for reasoning in LPF about logical formulae that contain references to partial terms.
The focus of the investigation is on the basic but fundamental two-valued classical logic proof procedure of resolution and the associated technique of proof by contradiction. Advanced proof techniques are built on the foundation provided by these basic techniques, so examining their behaviour in LPF is the essential starting point for investigating proof support for LPF. The work highlights the issues that arise when applying these basic techniques in LPF, and investigates the extent of the modifications needed to carry them over to LPF. This provides the essential foundation on which to facilitate research into the modification of advanced proof techniques for LPF.
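The Strong Kleene connectives underlying LPF can be sketched directly, with `None` standing for the truth-value "gap". This is a standard textbook encoding of the three-valued tables, not the mechanisation developed in the thesis.

```python
# Strong Kleene three-valued connectives; None represents the
# truth-value "gap" (absence of a defined value) used by LPF.
def k_not(a):
    return None if a is None else not a

def k_and(a, b):
    # False dominates: a single False decides the conjunction,
    # even if the other operand is a gap.
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def k_or(a, b):
    # Defined by De Morgan duality from k_and.
    return k_not(k_and(k_not(a), k_not(b)))

# A gap propagates unless the other operand already decides the result:
print(k_and(False, None))   # -> False
print(k_or(True, None))     # -> True
print(k_and(True, None))    # -> None
```

This illustrates why classical proof rules need care in LPF: the law of the excluded middle fails, since `k_or(None, k_not(None))` is `None`, not `True`.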

The IT performance in Saudi Arabian hospitals

Almarshad, Abdullah January 2013 (has links)
The objectives of this research are, first, to examine the level of IT implementation in Saudi hospitals; second, to examine possible relationships between IT implementation and performance; and third, to identify the challenges that Saudi hospitals face in implementing IT and achieving IT performance. To achieve these objectives, the research employs various methods, including a questionnaire survey, semi-structured interviews and evaluation methods. The questionnaire survey data analysis showed that there was a lack of implementation of organisational factors in IT departments in Saudi Arabian hospitals. It also showed that there were significant positive relationships between the implementation of organisational factors and the IT performance indicator. The semi-structured interview data analysis suggested a number of factors that negatively affect the implementation of organisational factors in IT departments in Saudi Arabian hospitals: supporting the IT department is given only middle priority; lack of training and knowledge in the IT department; lack of IT improvement; lack of IT planning; unclear IT department objectives; low customer satisfaction; shortage of expertise; IT professionals' turnover; lack of ambition amongst IT employees; and lack of knowledge sharing. The two evaluations indicated that the assessment tool for organisational factor implementation in IT departments developed in this research is applicable in practice. This tool can be used by the IT departments in Saudi Arabian hospitals to assess their organisational management implementation efforts. Through using this tool, hospitals can identify improvement possibilities and formulate an effective improvement plan.

Context aware drivers' behaviour detection system for VANET

Al-Sultan, Saif Jamal January 2013 (has links)
Wireless communications and mobile computing have led to the enhancement of, and improvement in, intelligent transportation systems (ITS) that focus on road safety applications. As a promising technology and a core component of ITS, Vehicle Ad hoc Networks (VANET) have emerged as an application of Mobile Ad hoc Networks (MANET), which use Dedicated Short Range Communication (DSRC) to allow vehicles in close proximity to communicate with one another, or to communicate with roadside equipment. These types of communication open up a wide range of potential safety and non-safety applications, with the aim of providing an intelligent driving environment that will offer road users more pleasant journeys. VANET safety applications are considered a vital step towards improving road safety and enhancing traffic efficiency, as a consequence of their capacity to share information about the road between moving vehicles. This results in decreasing numbers of accidents and increasing opportunities to save people's lives. Many researchers from different disciplines have focused their research on the development of vehicle safety applications. Designing an accurate and efficient driver behaviour detection system that can detect the abnormal behaviours exhibited by drivers (i.e. drunkenness and fatigue) and alert them may have an impact on the prevention of road accidents. Moreover, using context-aware systems in vehicles can improve driving by collecting and analysing contextual information about the driving environment, hence increasing the awareness of the driver while driving his/her car. In this thesis, we propose a novel driver behaviour detection system in VANET by utilising a context-aware system approach. The system is comprehensive, non-intrusive and able to detect four styles of driving behaviour: drunk, fatigued, reckless and normal.
The behaviour of the driver in this study is considered an uncertain context, and is defined as a dynamic interaction between the driver, the vehicle and the environment; that is, it is affected by many factors and develops over time. Therefore, we have introduced a novel Dynamic Bayesian Network (DBN) framework to perform reasoning under uncertainty and to deduce the behaviour of drivers by combining information regarding the above-mentioned factors. A novel On Board Unit (OBU) architecture for detecting the behaviour of the driver has been introduced. The architecture has been built on the concept of context-awareness; it is divided into three phases that represent the three main subsystems of a context-aware system: the sensing, reasoning and acting subsystems. The proposed architecture explains how the system components interact in order to detect abnormal behaviour being exhibited by the driver, so as to alert the driver and prevent accidents from occurring. The proposed system has been implemented using GeNIe version 2.0 software to construct the DBN model. The DBN model has been evaluated using synthetic data in order to demonstrate the detection accuracy of the proposed model under uncertainty, and the importance of including a large amount of contextual information in the detection process.
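The DBN-based reasoning step can be illustrated with a toy discrete Bayesian filter over driver states. The states, transition probabilities and observation likelihoods below are invented for illustration; they are not the parameters of the thesis's GeNIe-built model.

```python
# A minimal discrete Bayesian filter over driver states -- a toy
# stand-in for a full Dynamic Bayesian Network time slice.
STATES = ["normal", "fatigued", "drunk", "reckless"]

# P(state_t | state_{t-1}): drivers mostly stay in the same state
TRANS = {s: {t: (0.85 if s == t else 0.05) for t in STATES} for s in STATES}

# P(observation | state) for a single "lane-weaving" sensor reading
OBS = {"weaving": {"normal": 0.05, "fatigued": 0.4,
                   "drunk": 0.7, "reckless": 0.5}}

def dbn_step(belief, observation):
    # predict: push the belief through the transition model
    predicted = {t: sum(belief[s] * TRANS[s][t] for s in STATES)
                 for t in STATES}
    # update: weight by the observation likelihood, then normalise
    unnorm = {s: predicted[s] * OBS[observation][s] for s in STATES}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

belief = {s: 0.25 for s in STATES}          # uniform prior
belief = dbn_step(belief, "weaving")
print(max(belief, key=belief.get))          # -> drunk
```

Repeating `dbn_step` over successive sensor readings is what lets contextual evidence accumulate across time slices, which is the point of using a DBN rather than a static network.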

An early warning system for risk management

Jadi, Amr January 2013 (has links)
Risk management in healthcare has solved a wide range of healthcare-related issues in Saudi Arabia. However, the limitations of risk management teams working under special conditions (needing to solve critical health-related issues) have highlighted the urgent need for an early risk warning system (ERWS) in healthcare. The influence of changing weather conditions demands that diabetic patients and doctors in Saudi Arabia keep a continuous check on health conditions, and the number of diabetic patients is increasing rapidly in Saudi Arabia. Hence, risk management teams in healthcare must be supported with a system that alerts them to changes before the changes become a significant risk or problem. Our proposed approach does the following: 1) predicts changes in blood pressure (BP) and blood sugar level within the hospital environment at runtime; 2) continually checks patient health status with respect to health condition at runtime; and 3) alerts to changes as they are detected (e.g. a risk or an unknown parameter), and provides feedback for the patient and doctor. We present a computational model that defines the interaction and communication of the system components and describes the prediction and checking process in our proposed approach. We designed the architecture for our proposed approach with respect to this computational model. The thesis proposes an early risk warning system approach, which predicts and checks patient health conditions with respect to the ideal conditions defined by medical standards. The health status of a patient is communicated to doctors and patients in an emergency note if the predicted values fall outside normal conditions. In this way, the risk can be mitigated before damage to patient health occurs at runtime. To implement the proposed approach, neural networks are used to develop the prediction component, programmed in Java. The results of this research successfully predicted the health condition of a patient by checking outputs against medical standards.
The risks defined in this research include hyperglycaemia, hypoglycaemia, hypertension and hypotension. Appropriate results were obtained for almost every patient when checked with four input parameters for 200 patients. Consistent results were produced by the risk prediction component, and alerts were generated every five seconds to communicate with the patients and doctors at runtime. The health status of all 200 patients can also be viewed to check for changes in health conditions in the hospital environment. Finally, a case study with different scenarios, based on changes in patient health status with respect to ideal conditions, was used to evaluate the approach.
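The checking component's threshold logic can be sketched as follows. The ranges below are common clinical rules of thumb, not the thresholds used in the thesis, and the field names are hypothetical; in the real system the input values would come from the neural-network prediction component.

```python
# Sketch of the checking step: compare predicted vitals against
# normal ranges and emit the alert labels named in the thesis.
RANGES = {
    "systolic_bp": (90, 140),       # mmHg (rule-of-thumb range)
    "blood_glucose": (4.0, 7.8),    # mmol/L (rule-of-thumb range)
}

ALERTS = {
    ("systolic_bp", "high"): "hypertension",
    ("systolic_bp", "low"): "hypotension",
    ("blood_glucose", "high"): "hyperglycaemia",
    ("blood_glucose", "low"): "hypoglycaemia",
}

def check_vitals(predicted):
    """Return the list of alerts for a dict of predicted readings."""
    alerts = []
    for name, value in predicted.items():
        low, high = RANGES[name]
        if value > high:
            alerts.append(ALERTS[(name, "high")])
        elif value < low:
            alerts.append(ALERTS[(name, "low")])
    return alerts

print(check_vitals({"systolic_bp": 155, "blood_glucose": 3.1}))
# -> ['hypertension', 'hypoglycaemia']
```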

A knowledge based reengineering approach via ontology and description logic

Zhou, Hong January 2011 (has links)
Traditional software reengineering often involves a great deal of manual effort by software maintainers. This is time consuming and error prone. Due to the knowledge-intensive nature of software reengineering, a knowledge-based solution is proposed in this thesis to semi-automate some of this manual effort. This thesis aims to explore the principal research question: "How can software systems be described by knowledge representation techniques in order to semi-automate the manual effort in software reengineering?" The underlying research procedure of this thesis is the scientific method, which consists of observation, proposition, test and conclusion. Ontology and description logic are employed to model and represent the knowledge in different software systems, integrated with domain knowledge. Model transformation is used to support ontology development. Description logic is used to implement the ontology mapping algorithms, in which the problem of detecting semantic relationships is converted into the problem of deducing the satisfiability of logical formulae. An operating system ontology has been built with a top-down approach, and was deployed to support platform-specific software migration [132] and portable software development [18]. A data-dominant software ontology has been built via a bottom-up approach, and was deployed to support program comprehension [131] and modularisation [130]. This thesis suggests that software systems can be represented by ontology and description logic, which consequently helps in semi-automating some of the manual tasks in software reengineering. However, there are also limitations: bottom-up ontology development may sacrifice some of the complexity of systems, while top-down ontology development may become time consuming and complicated. In terms of future work, a greater number of diverse software system categories could be involved and different kinds of software system knowledge could be explored.
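The reduction of semantic-relationship detection to satisfiability can be illustrated with a toy example: concept subsumption C ⊑ D holds iff C ⊓ ¬D is unsatisfiable. Here concepts are predicates over a tiny finite domain, standing in for a real description-logic reasoner; the concept and individual names are invented.

```python
# Toy illustration of deciding subsumption via (un)satisfiability,
# over a tiny finite domain of operating-system-flavoured individuals.
DOMAIN = ["ls", "grep", "mysqld", "httpd"]

concepts = {
    "Process":       lambda x: True,                      # top-like concept
    "DaemonProcess": lambda x: x in {"mysqld", "httpd"},  # a sub-concept
}

def satisfiable(pred):
    """A predicate is satisfiable if some individual satisfies it."""
    return any(pred(x) for x in DOMAIN)

def subsumed_by(c, d):
    # C is subsumed by D  iff  "C and not D" has no satisfying individual
    return not satisfiable(lambda x: concepts[c](x) and not concepts[d](x))

print(subsumed_by("DaemonProcess", "Process"))   # -> True
print(subsumed_by("Process", "DaemonProcess"))   # -> False
```

A real reasoner decides this over models rather than a fixed finite domain, but the shape of the reduction, a relationship query turned into a satisfiability check, is the same.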

Security policy architecture for web services environment

Aldrawiesh, Khalid January 2012 (has links)
An enhanced observer (e-observer) is a model that observes the behaviour of a service and then automatically reports any changes in the state of the service to an evaluator model. The e-observer observes the state of a service to determine whether it conforms to and obeys its intended behaviour or policy rules. E-observer techniques address most of these problems, govern services, and provide a proven solution that is reusable in similar contexts. This leads to the organisation and formalisation of policy, which is the engine of the e-observer model. Policies refer to specific security rules for particular systems. They are derived from management goals that describe the desired behaviour of distributed heterogeneous systems and networks. These policies should be defended by security, which has become a coherent and crucial issue: security aims to protect these policies wherever possible, and is the first line of protection for resources or assets against events such as loss of availability, unauthorised access or modification of data. The techniques devised to protect information from intruders are general-purpose in nature and, therefore, cannot directly enforce security, which has no universal definition. The high degree of assurance of the security properties of systems used in security-critical areas, such as business, education and finance, is usually achieved by verification. In addition, security policies express the protection requirements of a system in a precise and unambiguous form. They describe the requirements and mechanisms for securing the resources and assets shared between the parties of a business transaction. Service-Oriented Computing (SOC), meanwhile, is a paradigm of computing that considers "services" to be the fundamental elements for developing applications and solutions. SOC has many advantages that support IT in improving and increasing its capabilities, and it allows flexibility to be integrated into application development.
This allows services to be provided in a highly distributed manner by Web services. Many organisations and enterprises have undertaken developments using SOC, and Web services (WSs) are examples of it. WSs have become more powerful and sophisticated in recent years and are being used successfully for interoperable solutions across various networks. The main benefit of Web services is that they use machine-to-machine interaction. This leads us initially to explore the "quality" aspect of services. Quality of Service (QoS) describes techniques that prioritise one type of traffic or programme operating across a network connection. Hence, QoS has rules to determine which requests have priority, and uses these rules to give priority to real-time communications. These rules can be sophisticated and expressed as policies that constrain the behaviour of the services; the rules (policies) should be addressed and enforced by the security mechanism. Moreover, in SOC, and in Web services in particular, services are black boxes whose behaviour may be completely determined by their interaction with other services in a confederation system. Therefore, we propose the design and implementation of the "behaviour of services", constrained by QoS policies. We formulate and implement novel techniques for Web service policy-based QoS, which leads to the development of a framework for observing services and verifying their interactions with each other in a formal and systematic manner. This framework can be used to specify security policies in a succinct and unambiguous manner; thus, we developed a set of rules that can be applied inductively to verify the set of traces generated by the specification of our model's policy. These rules can also be used to verify the functionality of the system.
In order to demonstrate the protection features of an information system that can specify and concisely describe a set of generated traces, we subsequently consider the design and management of the Ponder policy language to express QoS and its associated criteria, such as security. An algorithm was composed for analysing the observations that are constrained by policies, and a prototype system was built to demonstrate the observation architecture within the education sector. Finally, an enforcement system was used to successfully deploy the prototype's infrastructure over Web services, in order to define an optimisation model that captures efficiency requirements. Our approach, therefore, is to trace and observe the communication between services and then take decisions based on their behaviour and history. The central issue is how to ensure that given security requirements are satisfied and enforced. The scenario assumes a confederation system in which the system's components are Web services, these components are black boxes designed and built by various vendors, and the topology is highly changeable. Consequently, the main contributions are: the proposal, design and development of a prototype observation system that manages security policy and its associated aspects by evaluating the outcomes via the evaluator model; and taming the design complexity of the observation system by leaving considerable degrees of freedom for its structure and behaviour, bestowing upon it certain characteristics, and enabling it to learn and adapt with respect to dynamically changing environments.
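The trace-checking idea behind the e-observer can be sketched as a simple policy check over a recorded event trace. The policy rule and event names below are invented for illustration; the thesis expresses its policies in the Ponder language rather than as code.

```python
# Sketch of an observer verifying a service's event trace against a
# policy rule. A trace is just the ordered list of observed events.
def no_access_before_auth(trace):
    """Policy: no 'access' event may precede an 'authenticate' event."""
    authenticated = False
    for event in trace:
        if event == "authenticate":
            authenticated = True
        elif event == "access" and not authenticated:
            return False        # policy violated -> report to evaluator
    return True                 # trace conforms to the policy

good_trace = ["authenticate", "access", "access"]
bad_trace = ["access", "authenticate"]
print(no_access_before_auth(good_trace))   # -> True
print(no_access_before_auth(bad_trace))    # -> False
```

In the full architecture the observer would report a violation to the evaluator model rather than simply returning a boolean, and the rules would be applied inductively over the set of traces generated by the policy specification.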

Towards understanding and improving the process of small group collaborative learning in software engineering education

Oriogun, Peter Kehinde January 2006 (has links)
The research aim of this submission for PhD by Prior Output is to understand and improve the process of small group collaborative learning in software engineering education. The research portfolio supporting the submission deals with a number of background studies (the establishment of an optimal software life cycle process model for teaching software engineering in the small group collaborative setting), leading to the development of an appropriate pedagogical approach for underpinning small group learning, an understanding of the type of learning interaction taking place within such small group learning, and finally the development of appropriate methods for analysing collaborative small group learning in software engineering education. In the portfolio of work submitted for the PhD, I have systematically investigated my research aim and problem in studies involving 241 different students over a period of 8 years. I contend in my submission that I have made a significant contribution to knowledge in my quest to understand and improve the process of small group collaborative learning in software engineering education within higher education, in order to prepare students for employment in software engineering, by (i) developing and testing a documentation toolkit for collaborative problem-based learning, (ii) developing a methodological tool for analysing and understanding inter-rater reliability, and (iii) developing a framework for the development of teamwork and cognitive reasoning when learning in small groups.

Context-aware Personal Learning Environment

Alharbi, Mafawez January 2014 (has links)
Research is now shifting away from Virtual Learning Environments (VLEs) and towards the use of the Personal Learning Environment (PLE). A review of a number of PLE architectures is presented in the literature, and while they convey the concept of a PLE well, they could best be described as high-level architectures (sometimes referred to as frameworks in the literature) which focus mainly on the functionality of PLEs. In particular, little has been published that gives a detailed design of a PLE architecture. Moreover, the published work focuses largely on support for lifelong learning and formal/informal learning, which address two of the main limitations of VLEs. However, this study argues that unexplored potential remains, as there is scope for PLEs to cover more areas. To the best of our knowledge, none of the existing PLE architectures have context-aware systems embedded within them. There is no intelligence in these architectures to filter e-resources and predict the user's needs. In addition, the current PLE architectures are not dynamic: they cannot adapt to the user's current situation, and their users receive too many e-resources. The architecture proposed in this research incorporates a context-aware engine, so intelligence is built into the architecture and the PLE system responds automatically to context information. There are three types of sensors in any context-aware system (physical, virtual and logical), and these are the elements of the system that gather the context information. In this research, the emphasis is on virtual sensors, which gather information from virtual space; virtual space here includes any system which produces information as a set of results. Thus, the context-aware architecture and the implementation of the context-aware engine are the major contributions of the work.
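The role of the context-aware engine can be sketched as a virtual sensor feeding a filter over e-resources. The context fields, resource records and filtering rule below are all illustrative assumptions, not the thesis's actual architecture.

```python
# Minimal sketch of a context-aware filtering engine for a PLE: a
# "virtual sensor" reports the learner's context, and the engine keeps
# only the matching e-resources instead of delivering everything.
def virtual_sensor():
    # In a real PLE this would query another system (e.g. a VLE or
    # timetable service) -- a virtual sensor gathers from virtual space.
    return {"course": "databases", "device": "mobile"}

RESOURCES = [
    {"title": "SQL joins screencast", "course": "databases", "format": "video"},
    {"title": "Normalisation notes",  "course": "databases", "format": "pdf"},
    {"title": "Sorting algorithms",   "course": "algorithms", "format": "pdf"},
]

def filter_resources(resources, context):
    # keep resources for the current course; on mobile, rank video first
    matched = [r for r in resources if r["course"] == context["course"]]
    if context["device"] == "mobile":
        matched.sort(key=lambda r: r["format"] != "video")
    return matched

for r in filter_resources(RESOURCES, virtual_sensor()):
    print(r["title"])
# SQL joins screencast
# Normalisation notes
```

The point of the sketch is the flow, sense context, then filter and rank, which is what distinguishes a context-aware PLE from one that simply delivers every available e-resource.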
