321

Autonomous finite capacity scheduling using biological control principles

Manyonge, Lawrence January 2012 (has links)
The vast majority of research efforts in finite capacity scheduling over the past several years have focused on the generation of precise and almost exact measures for the working schedule, presupposing complete information and a deterministic environment. During execution, however, production may be subject to considerable variability, which may lead to frequent schedule interruptions. Production scheduling mechanisms are typically developed on a centralised control architecture in which all of the knowledge bases and databases are modelled at the same location. This control architecture has difficulty in handling complex manufacturing systems that require knowledge and data at different locations. Adopting biological control principles refers to the process where a schedule is developed prior to the start of processing, after considering all the parameters involved at each resource, and is updated accordingly as the process executes. This research reviews best practices in gene transcription and translation control methods and adopts these principles in the development of an autonomous finite capacity scheduling control logic aimed at reducing the excessive use of manual input in planning tasks. With autonomous decision-making functionality, finite capacity scheduling will, as far as practicably possible, be able to respond autonomously to schedule disruptions by deploying proactive scheduling procedures that may be used to revise or re-optimise the schedule when unexpected events occur. The novelty of this work is the ability of production resources to take decisions autonomously, in the same way that decisions are taken by autonomous entities in the process of gene transcription and translation. The idea has been implemented by integrating simulation and modelling techniques with Taguchi analysis to investigate the contributions of finite capacity scheduling factors and to examine the 'what if' scenarios encountered due to the existence of variability in production processes. The control logic adopts induction rules as used in the gene expression control mechanisms studied in biological systems. Scheduling factors are identified to that effect and are investigated to find their effects on selected performance measures for each resource in use. How these factors are used to deal with variability in the process is a major objective of this research, since it is because of this variability that autonomous decision making becomes of interest. Although different scheduling techniques have been applied successfully in production planning and control, the results obtained from the inclusion of the autonomous finite capacity scheduling control logic show that significant improvement can still be achieved.
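A minimal sketch of what an induction-rule style of local, autonomous rescheduling at a single resource might look like, loosely analogous to a gene being switched on when an inducer is present. The class names, the breakdown signal and the shortest-processing-time rule are illustrative assumptions, not the control logic developed in the thesis.

```python
# Hypothetical sketch: a disruption signal acts like an "inducer" that fires
# a local rule, letting the resource revise its own queue autonomously.
from dataclasses import dataclass, field

@dataclass
class Job:
    job_id: str
    processing_time: float

@dataclass
class Resource:
    name: str
    queue: list = field(default_factory=list)
    breakdown: bool = False  # disruption signal acting as the "inducer"

    def induction_rules(self):
        """Fire local rules in response to an observed disruption signal."""
        if self.breakdown:
            # Example rule: on breakdown, re-sequence remaining jobs by
            # shortest processing time to limit knock-on delay.
            self.queue.sort(key=lambda j: j.processing_time)
            self.breakdown = False

    def execute_next(self):
        self.induction_rules()
        return self.queue.pop(0) if self.queue else None

# Usage: a disruption arrives and the resource revises its schedule locally.
r = Resource("CNC-1", [Job("A", 5.0), Job("B", 2.0), Job("C", 8.0)])
r.breakdown = True
print(r.execute_next().job_id)  # "B" after local re-sequencing
```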
322

Relative-fuzzy : a novel approach for handling complex ambiguity for software engineering of data mining models

Imam, Ayad Tareq January 2010 (has links)
There are two main defined classes of uncertainty, namely fuzziness and ambiguity, where ambiguity is a 'one-to-many' relationship between the syntax and semantics of a proposition. This definition appears to ignore the 'many-to-many' relationship type of ambiguity. In this thesis, we shall use the term complex-uncertainty for the many-to-many relationship ambiguity type of uncertainty. This research proposes a new approach for handling the complex ambiguity type of uncertainty that may exist in data, for the software engineering of predictive Data Mining (DM) classification models. The proposed approach is based on Relative-Fuzzy Logic (RFL), a novel type of fuzzy logic. RFL defines a new formulation of the problem of the ambiguity type of uncertainty in terms of States Of Proposition (SOP). RFL describes its membership (semantic) value by using the new definition of Domain of Proposition (DOP), which is based on the relativity principle as defined by possible-worlds logic. To achieve the goal of proposing RFL, a question needs to be answered: how can these two approaches, i.e. fuzzy logic and possible-worlds logic, be combined to produce a new membership value set (and later a logic) that is able to handle fuzziness and multiple viewpoints at the same time? Achieving this goal comes via giving possible-worlds logic the ability to quantify multiple viewpoints, to model fuzziness in each of these viewpoints, and to express the result as a new set of membership values. Furthermore, a new architecture of Hierarchical Neural Network (HNN) called ML/RFL-Based Net has been developed in this research, along with a new learning algorithm and a new recalling algorithm. The architecture, learning algorithm and recalling algorithm of ML/RFL-Based Net follow the principles of RFL. This new type of HNN is considered to be an RFL computation machine. The ability of the Relative-Fuzzy-based DM prediction model to tackle the problem of the complex ambiguity type of uncertainty has been tested. Special-purpose Integrated Development Environment (IDE) software, called RFL4ASR, which generates a DM prediction model for speech recognition, has also been developed in this research. This special-purpose IDE is an extension of the definition of the traditional IDE. Using multiple sets of TIMIT speech data, the prediction model of type ML/RFL-Based Net achieved a classification accuracy of 69.2308%. This accuracy is higher than the best achievements of the WEKA data mining machines given the same speech data.
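One way to picture the core idea of a membership value that is relative to multiple viewpoints is sketched below: each "possible world" carries its own fuzzy membership function, and the value of a proposition is a set of memberships indexed by world rather than a single scalar. The function names, the triangular membership shape and the formant-frequency example are assumptions for illustration only, not the RFL formulation from the thesis.

```python
# Illustrative sketch of a per-viewpoint (possible-world) fuzzy membership.
def triangular(a, b, c):
    """Return a triangular fuzzy membership function on the real line."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Each viewpoint interprets "the vowel is /a/" with its own membership
# function over a (hypothetical) formant-frequency feature.
worlds = {
    "speaker_model_1": triangular(600.0, 750.0, 900.0),
    "speaker_model_2": triangular(650.0, 800.0, 950.0),
}

def relative_membership(x, worlds):
    """A relative-fuzzy style value: one fuzzy membership per viewpoint."""
    return {name: mu(x) for name, mu in worlds.items()}

print(relative_membership(780.0, worlds))
# {'speaker_model_1': 0.8, 'speaker_model_2': 0.866...}
```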
323

Electronic patient record security policy in Saudi Arabia National Health Service

Aldajani, Mouhamad January 2012 (has links)
Saudi Arabia is in the process of implementing Electronic Patient Records (EPR) throughout its National Health Service. One of the key challenges during the adoption process is the security of EPR. This thesis investigates the current state of EPR security in Saudi Arabia's National Health Service (SA NHS), both from a policy perspective and with regard to its implementation in the SA NHS's information systems. To facilitate the analysis of EPR security, an EPR model has been developed that captures the information stored as part of the electronic record system in conjunction with stated security requirements. This model is used in the analysis of policy consistency and to validate operational reality against stated policies at various levels within the SA NHS. The model is based on a comprehensive literature survey and structured interviews which established the current state of practice with respect to EPRs in a representative Saudi Arabian hospital. The key contribution of this research is the development and evaluation of a structured, model-based analysis approach to EPR security at the early adoption stage in SA, based on the types of information present in EPRs and the needs of the users of EPRs. The key findings show that the SA EPR adoption process is currently proceeding without serious consideration of a security policy to protect EPRs, and that there is a lack of awareness amongst hospital staff.
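A toy illustration of the kind of model-based consistency check described above: EPR information types are mapped to the security controls they require, and a stated policy is checked for gaps. The information categories, control names and data are invented for the example and are not the thesis's actual model.

```python
# Hypothetical sketch of checking a stated policy against an EPR model.
epr_model = {
    "demographics": {"access_control", "audit_logging"},
    "diagnoses":    {"access_control", "audit_logging", "encryption_at_rest"},
    "lab_results":  {"access_control", "encryption_at_rest"},
}

stated_policy = {
    "demographics": {"access_control"},
    "diagnoses":    {"access_control", "audit_logging"},
}

def policy_gaps(model, policy):
    """Return, per information type, the required controls the policy omits."""
    return {info: sorted(required - policy.get(info, set()))
            for info, required in model.items()
            if required - policy.get(info, set())}

print(policy_gaps(epr_model, stated_policy))
# {'demographics': ['audit_logging'], 'diagnoses': ['encryption_at_rest'],
#  'lab_results': ['access_control', 'encryption_at_rest']}
```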
324

Distributive time division multiplexed localization technique for WLANs

Khan, Adnan Umar January 2012 (has links)
This thesis presents research work on the solution of a localization problem in indoor WLANs by introducing a distributive time division multiplexed localization technique based on convex semidefinite programming. Convex optimization has proven to give promising results, but its computational complexity becomes a limitation for larger problem sizes. In the case of the localization problem, the size is determined by the number of nodes to be localized. Thus a convex localization technique could not be applied to real-time tracking of mobile nodes within WLANs that are already providing computationally intensive real-time multimedia services. Here we have developed a distributive technique to circumvent this problem, dividing a larger network into computationally manageable smaller subnets. The division of the larger network is based on the mobility levels of the nodes. There are two types of nodes in a network: mobile and stationary. We place the mobile nodes into separate subnets, which are tagged as mobile, whereas the stationary nodes are placed into subnets tagged as stationary. The purpose of this classification of networks into subnets is to achieve a priority-based localization, with a higher priority given to mobile subnets. The classified subnets are then localized by scheduling them in a time division multiplexed way. For this purpose a time-frame is defined, consisting of a finite number of fixed-duration time-slots, such that within the slot duration a subnet can be localized. The subnets are scheduled within the frames in a 1:n ratio pattern; that is, within n frames each mobile subnet is localized n times, while each stationary subnet, consisting of stationary nodes, is localized once. By using this priority-based scheduling we have achieved real-time tracking of mobile node positions using the computationally intensive convex optimization technique. In addition, we show that the resulting distributive technique can be applied to networks with diverse node densities; that is, a network whose nodes vary from very few to large numbers can be localized by increasing the frame duration, which makes the technique scalable. In addition to computational complexity, another problem that arises while formulating distance-based localization as a convex optimization problem is the high-rank solution. We have also developed a solution based on virtual nodes to circumvent this problem. Virtual nodes are not real nodes; they are added to the network only to achieve a low-rank realization. Finally, we developed a distributive 3D real-time localization technique that exploits mobile user behaviour within multi-storey indoor environments. The estimates of heights obtained using this technique were found to be coarse; therefore, it can only be used to identify the floor on which a node is located.
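The 1:n scheduling pattern described above can be sketched as follows: every mobile subnet gets a slot in every frame, while stationary subnets are spread round-robin across an n-frame cycle so that each is localized once per cycle. The function name and subnet labels are illustrative assumptions, not the thesis's implementation.

```python
# Sketch of building a time-division schedule with the 1:n priority ratio.
def build_schedule(mobile_subnets, stationary_subnets, n_frames):
    frames = []
    for f in range(n_frames):
        slots = list(mobile_subnets)  # every mobile subnet, every frame
        # stationary subnets distributed round-robin across the n-frame cycle
        slots += [s for i, s in enumerate(stationary_subnets)
                  if i % n_frames == f]
        frames.append(slots)
    return frames

mobile = ["M1", "M2"]
stationary = ["S1", "S2", "S3", "S4"]
for f, slots in enumerate(build_schedule(mobile, stationary, n_frames=2)):
    print(f"frame {f}: {slots}")
# frame 0: ['M1', 'M2', 'S1', 'S3']
# frame 1: ['M1', 'M2', 'S2', 'S4']
```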
325

Robust controller for delays and packet dropout avoidance in solar-power wireless network

Al-Azzawi, Waleed January 2013 (has links)
Solar Wireless Networked Control Systems (SWNCS) are a style of distributed control system in which sensors, actuators, and controllers are interconnected via a wireless communication network. This system setup has the benefits of low cost, flexibility, low weight, no wiring, and simplicity of system diagnosis and maintenance. However, it also unavoidably introduces wireless network time delays and packet dropout into the design procedure. A solar lighting system offers a clean source of energy and is therefore able to operate for long periods. SWNCS also offers a multi-service infrastructure solution for both developed and developing countries. The system provides wirelessly controlled lighting, a wireless communications network (Wi-Fi/WiMAX), CCTV surveillance, and wireless sensors for weather measurement, all powered by solar energy.
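To make the delay and dropout issue concrete, the following minimal sketch simulates a discrete-time control loop over a lossy wireless link, where a dropped control packet means the actuator holds its previous value. The plant dynamics, gain and dropout probability are assumed for illustration and are not taken from the thesis.

```python
# Minimal sketch: proportional control of a first-order plant with random
# packet dropout on the controller-to-actuator link (hold-last-input model).
import random

random.seed(1)
a, b, kp = 0.9, 0.1, 2.0        # plant x+ = a*x + b*u, proportional gain
dropout_probability = 0.3
x, u_held, setpoint = 5.0, 0.0, 0.0

for step in range(20):
    u = kp * (setpoint - x)                 # controller computes new input
    if random.random() > dropout_probability:
        u_held = u                          # packet delivered: update actuator
    # else: packet dropped, actuator keeps the previous input (u_held)
    x = a * x + b * u_held                  # plant update
print(round(x, 3))                          # state after 20 steps
```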
326

Ubiquitous robotics system for knowledge-based auto-configuration system for service delivery within smart home environments

Al-Khawaldeh, Mustafa Awwad Salem January 2014 (has links)
The future smart home will be enhanced and driven by recent advances in the Internet of Things (IoT), which advocates the integration of computational devices within an Internet architecture on a global scale [1, 2]. In the IoT paradigm, the smart home will be developed by interconnecting a plethora of smart objects both inside and outside the home environment [3-5]. The recent take-up of these connected devices within home environments is slowly but surely transforming traditional home living environments. Such connected and integrated home environments lead to the concept of the smart home, which has attracted significant research efforts to enhance the functionality of home environments with a wide range of novel services. The wide availability of services and devices within contemporary smart home environments makes their management a challenging and rewarding task. The trend whereby the development of smart home services is decoupled from that of smart home devices increases the complexity of this task. As such, it is desirable that smart home services are developed and deployed independently, rather than pre-bundled with specific devices, although it must be recognised that this is not always practical. Moreover, systems need to facilitate the deployment process and cope with any changes in the target environment after deployment. Maintaining complex smart home systems throughout their lifecycle entails considerable resources and effort. These challenges have stimulated the need for dynamic auto-configurable services amongst such distributed systems. Although significant research has been directed towards achieving auto-configuration, none of the existing solutions is sufficient to achieve auto-configuration within smart home environments. All such solutions are considered incomplete, as they lack the ability to meet all smart home requirements efficiently. These requirements include the ability to adapt flexibly to new and dynamic home environments without direct user intervention. Fulfilling these requirements would enhance the performance of smart home systems and help to address cost-effectiveness, considering the financial implications of the manual configuration of smart home environments. Current configuration approaches fail to meet one or more of the requirements of smart homes: where an approach meets the flexibility criterion, the configuration either cannot be executed online without affecting the system or requires direct user intervention. In other words, there is no adequate solution that allows smart home systems to adapt dynamically to changing circumstances and thereby enables the correct interconnections among components without direct user intervention and without interrupting the whole system. Therefore, it is necessary to develop an efficient, adaptive, agile and flexible system that adapts dynamically to each new requirement of the smart home environment. This research aims to devise methods to automate the activities associated with customised service delivery for dynamic home environments by exploiting recent advances in the field of ubiquitous robotics and Semantic Web technologies. It introduces a novel approach called the Knowledge-based Auto-configuration Software Robot (Sobot) for Smart Home Environments, which utilises the Sobot to achieve auto-configuration of the system.
The research work was conducted under the Distributed Integrated Care Services and Systems (iCARE) project, which was designed to accomplish and deliver integrated distributed ecosystems with a homecare focus. The auto-configuration Sobot, which is the focus of this thesis, is a key component of the iCARE project. It will become one of the key enabling technologies for generic smart home environments and has a profound impact on designing and implementing a high-quality system. Its main role is to generate a feasible configuration that meets the given requirements, using the knowledgebase of the smart home environment as a core component. The knowledgebase plays a pivotal role in helping the Sobot to automatically select the most appropriate resources in a given context-aware system via semantic searching and matching. Ontology, as a knowledgebase representation technique, generally helps to design and develop a model of a specific domain. It is also a key technology for the Semantic Web, which enables a common understanding amongst software agents and people, clarifies domain assumptions and facilitates the reuse and analysis of domain knowledge. The main advantage of the Sobot over traditional applications is its awareness of the changing digital and physical environments and its ability to interpret these changes, extract the relevant contextual data and merge any new information or knowledge. The Sobot is capable of creating new or alternative feasible configurations to meet the system's goal by utilising facts inferred from the smart home ontological model, so that the system can adapt to the changed environment. Furthermore, the Sobot has the capability to execute the generated reconfiguration plan without interrupting the running of the system. A proof-of-concept testbed has been designed and implemented. The case studies carried out have shown the potential of the proposed approach to achieve flexible and reliable auto-configuration of the smart home system, with promising directions for future research.
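The semantic searching and matching mentioned above can be illustrated with a toy capability taxonomy and a subsumption check: a service states a required capability, and devices whose capability is that class or a subclass of it are selected. A real Sobot would query an OWL/RDF ontology; the taxonomy, device names and functions below are hypothetical.

```python
# Illustrative sketch of capability matching via simple subsumption.
taxonomy = {  # child capability -> parent capability
    "ceiling_lamp": "dimmable_light",
    "dimmable_light": "light_source",
    "ip_camera": "camera",
}

def satisfies(capability, required):
    """True if `capability` equals `required` or is a subclass of it."""
    while capability is not None:
        if capability == required:
            return True
        capability = taxonomy.get(capability)
    return False

devices = {"lamp_livingroom": "ceiling_lamp", "cam_hall": "ip_camera"}

def configure(required_capability):
    """Select devices able to deliver the requested service."""
    return [d for d, cap in devices.items()
            if satisfies(cap, required_capability)]

print(configure("light_source"))  # ['lamp_livingroom']
```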
327

Long running transactions within enterprise resource planning systems

Bajahzar, Abdullah January 2014 (has links)
One of the major problems facing organisations in various countries in recent years is managing increasingly complicated operations in order to cope with an increasingly competitive marketplace. This problem can be addressed using Enterprise Resource Planning (ERP) systems, which can offer an integrated view of the whole business process within an organisation in real time. However, those systems have complicated workflows and are costly to analyse when managing the whole business process. Thus, Long Running Transaction (LRT) models have been proposed as solutions that can be used to simplify the analysis of ERP system workflows, to manage the whole organisational process and to ensure that transactions completed in one business process are not processed in any other. In practice, LRT models present various problems, such as rollback and check-pointing activities. This has led to the use of Communication Closed Layers (CCLs) for decomposing processes into layers that can be analysed easily using sequential programs. Therefore, the purpose of this work is to develop an advanced approach to implement and analyse the workflow of an organisation in order to deal with failures in Long Running Transactions (LRTs) within Enterprise Resource Planning (ERP) systems using Communication Closed Layers (CCLs). Furthermore, it aims to examine possible enhancements to the available methodology for ERP systems, based on studying the suitability and applicability of LRTs for modelling ERP workflows, and to offer simple and elegant constructs for implementing those complex and expensive ERP workflow systems. The model implemented in this thesis offers a solution to two main challenges: the incompatibilities that result from applying traditional transaction processing concepts to the ERP context, and the complexity of ERP workflows. The first challenge is addressed by offering new semantics that allow the modelling of concepts such as rollbacks and check-points through various constraints, while the second is addressed through the use of the Communication Closed Layer (CCL) approach. The computational reconfigurable model of an ERP workflow system implemented in this work is able to simulate real ERP workflow systems and allows a better understanding of the use of ERP systems in enterprise environments. Moreover, a case study is introduced to evaluate the application of the implemented model using three scenarios. The evaluation stage explores the effectiveness of executable ERP computational models and offers a simple methodology that can be used to build those systems using novel approaches. Based on a comparison of the current model with two previous models, it can be concluded that the new model outperforms the previous models by benefiting from their features and addressing the limitations that made them inappropriate for use in the context of ERP workflow models.
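A hedged sketch of the rollback idea for long-running transactions: each step carries a compensating action, and on failure the compensations of completed steps are run in reverse order (the compensation, or "saga", style commonly used for LRTs). The step names are illustrative and this is not the thesis's CCL-based formalisation.

```python
# Sketch of compensation-based rollback for a long-running transaction.
def run_lrt(steps):
    """Run (action, compensation) pairs; on failure, compensate in reverse."""
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception as exc:
        print(f"failure: {exc}; compensating {len(done)} completed step(s)")
        for compensate in reversed(done):
            compensate()

def fail(msg):
    raise RuntimeError(msg)

run_lrt([
    (lambda: print("reserve stock"),    lambda: print("release stock")),
    (lambda: print("invoice customer"), lambda: print("cancel invoice")),
    (lambda: fail("payment declined"),  lambda: None),
])
# reserve stock / invoice customer / failure ... / cancel invoice / release stock
```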
328

Identifying memory address disclosures

North, John January 2015 (has links)
Software is still being produced and used that is vulnerable to exploitation. As well as being in devices in the homes of many people around the world, programs with these vulnerabilities are maintaining life-critical systems such as power stations, aircraft and medical devices, and are managing the creation and distribution of billions of pounds every year. These systems are actively being exploited by governments, criminals and opportunists, and this has led to loss of life and loss of wealth. This dependence on software that is vulnerable to exploitation has led to a society with tangible concerns over cyber-crime, cyber-terrorism and cyber-warfare. As well as attempts to eliminate these vulnerabilities, techniques have been developed to mitigate their effects; these prophylactic techniques do not eliminate the vulnerabilities but make them harder to exploit. As software exploitation is an ever-evolving battle between attackers and defenders, identifying methods to bypass these mitigations has become a new battlefield in this struggle, and the techniques used to do so require vulnerabilities of their own. As many of the mitigation techniques depend upon secrecy of one form or another, vulnerabilities which allow an attacker to view those secrets are now of importance to attackers and defenders alike. Leaking the contents of computer memory has always been considered a vulnerability, but until recently it has not typically been considered a serious one. As such leaks can be used to bypass key mitigation techniques, these vulnerabilities are now considered critical to preventing whole classes of software exploitation. This thesis is about detecting these types of leak and the information they disclose. It discusses the importance of these disclosures, both currently and in the future. It then introduces the first published technique able to reliably identify specific classes of these leaks, particularly address disclosures and canary disclosures. The technique is tested against a series of applications, across multiple operating systems, using both artificial examples and software that is critical, commonplace and complex.
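A simple heuristic of the flavour involved in spotting address disclosures is sketched below: program output is scanned for byte sequences that, read as little-endian pointers, land inside known mapped memory regions. This is only an assumed illustration of the general idea; it is not the detection technique published in the thesis, and the memory map and output bytes are fabricated.

```python
# Sketch: flag output offsets whose 8-byte little-endian value falls inside
# a known mapped region, as candidate pointer (address) disclosures.
import struct

mapped_regions = [(0x7f3a00000000, 0x7f3a00200000)]   # e.g. a loaded library

def find_pointer_leaks(output: bytes, regions, width=8):
    hits = []
    for offset in range(len(output) - width + 1):
        value = struct.unpack_from("<Q", output, offset)[0]
        if any(start <= value < end for start, end in regions):
            hits.append((offset, hex(value)))
    return hits

leaky_output = b"id=42 " + struct.pack("<Q", 0x7f3a00123450) + b" status=ok"
print(find_pointer_leaks(leaky_output, mapped_regions))
# [(6, '0x7f3a00123450')]
```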
329

The fabrication of integrated strain sensors for 'smart' implants using a direct write additive manufacturing approach

Wei, Li-Ju January 2015 (has links)
Since the 1980s, the introduction of Additive Manufacturing (AM) technologies has provided alternative methods for the fabrication of complex three-dimensional (3D) synthetic bone tissue implant scaffolds. However, implants are still unable to provide post-surgery feedback, and they often loosen due to mismatched mechanical properties between the implant material and the host bone. The aim of this PhD research is to fabricate an integrated strain gauge that is able to monitor implant strain for diagnosis of the bone healing process. The research presents a method of fabricating electrical resistance strain gauge sensors using a rapid, mask-less process developed experimentally (design of experiments) on the nScrypt 3Dn-300 micro dispensing direct write (MDDW) system. Silver and carbon electrical resistance strain gauges were fabricated and characterised. Carbon resistive strain gauges with gauge factor values greater than 16 were measured using a proven cantilever bending arrangement. This represents a seven- to eight-fold increase in sensitivity over commercial gauges that would be glued to the implant materials. The strain sensor fabrication process was specifically developed for directly fabricating resistive strain sensor structures on synthetic bone implant surfaces (ceramic and titanium) without the use of glue, and to provide feedback for medical diagnosis. The reported novel approach employed biocompatible parylene C as a dielectric layer between the electrically conductive titanium and the strain gauge. The work also showed that parylene C could be used as an encapsulation material over strain gauges fabricated on ceramic without modifying the performance of the strain gauge. It was found that the strain gauges fabricated on titanium had a gauge factor of 10.0±0.7 with a near-linear response up to a maximum applied strain of 200 microstrain. In addition, the encapsulated ceramic strain gauge produced a gauge factor of 9.8±0.6. Both reported strain gauges had a much greater sensitivity than standard commercially available resistive strain gauges.
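For readers unfamiliar with the gauge factor quoted above, it is conventionally defined as GF = (ΔR/R0)/ε, i.e. the relative resistance change per unit strain. The sketch below estimates it as the slope of a least-squares fit of ΔR/R0 against applied strain; the data points are made up for illustration and are not measurements from the thesis.

```python
# Worked sketch: extract a gauge factor from strain/resistance data.
def gauge_factor(strains, resistances, r0):
    """Least-squares slope of (R - R0)/R0 versus applied strain."""
    dr_over_r = [(r - r0) / r0 for r in resistances]
    n = len(strains)
    mean_e = sum(strains) / n
    mean_y = sum(dr_over_r) / n
    num = sum((e - mean_e) * (y - mean_y) for e, y in zip(strains, dr_over_r))
    den = sum((e - mean_e) ** 2 for e in strains)
    return num / den

strain = [50e-6, 100e-6, 150e-6, 200e-6]            # applied strain (microstrain)
resistance = [1000.10, 1000.20, 1000.30, 1000.40]   # ohms, with R0 = 1000 ohms
print(round(gauge_factor(strain, resistance, r0=1000.0), 1))  # ~2.0
```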
330

A new approach to systems integration in the mechatronic engineering design process of manufacturing systems

Proesser, Malte January 2014 (has links)
Creating flexible and automated production facilities is a complex process that requires a high level of cooperation involving all mechatronics disciplines, in which the software tools being utilised have to work together as closely as their users do. Some of these tools are well integrated, but others can hardly exchange any data. This research aims to integrate the software systems applied by the mechatronic engineering disciplines to enable an enhanced design process characterised by a more parallel and iterative workflow. This thesis approaches systems integration from a data modelling point of view, because it sees information transfer between heterogeneous data models as a key element of systems integration. A new approach has been developed, called the middle-in data modelling strategy, since it is a combination of the currently applied top-down and bottom-up approaches. It involves separating the data into core design data, which is modelled top-down, and detailed design data modules, which are modelled bottom-up. The effectiveness of the integration approach has been demonstrated in a case study undertaken for the mechatronic engineering design process of body shop production lines in the automotive industry. However, the application of the middle-in data modelling strategy is not limited to this use case: it can be used to enhance a variety of systems integration tasks. The middle-in data modelling strategy is tested and evaluated in comparison with present top-down and bottom-up data modelling strategies on the basis of three test cases. These test cases simulate how the systems integration solutions based on the different data modelling strategies react to certain disturbances in the data exchange process, as they would likely occur during industrial engineering design work. The result is that the top-down data modelling strategy is best at maintaining data integrity and consistency, while the bottom-up strategy is most flexibly adaptable to further developments of systems integration solutions. The middle-in strategy combines the advantages of the top-down and bottom-up approaches while keeping their weaknesses and disadvantages to a minimum. Hence, it enables the maintenance of data modelling consistency while being responsive to multidisciplinary requirements and adaptive during its step-by-step introduction into an industrial engineering process.
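The separation into core and detail data can be illustrated with a toy merge step: a small top-down core schema holds the shared design entities, discipline-specific detail modules are contributed bottom-up, and the merge rejects detail data that does not reference a known core entity. The entity names, disciplines and attributes are assumptions for illustration, not the thesis's data model.

```python
# Hypothetical sketch of a middle-in style merge of core and detail data.
core_schema = {"Station_010", "Robot_R1", "Gripper_G1"}   # top-down core design data

detail_modules = {                                          # bottom-up detail data
    "electrical": {"Robot_R1": {"bus": "PROFINET", "voltage": 400}},
    "simulation": {"Robot_R9": {"cycle_time_s": 54.2}},     # unknown entity
}

def merge(core, modules):
    """Attach detail modules to core entities; report inconsistencies."""
    integrated, errors = {entity: {} for entity in core}, []
    for discipline, data in modules.items():
        for entity, attrs in data.items():
            if entity in core:
                integrated[entity][discipline] = attrs
            else:
                errors.append(f"{discipline}: unknown core entity {entity}")
    return integrated, errors

model, errors = merge(core_schema, detail_modules)
print(errors)  # ['simulation: unknown core entity Robot_R9']
```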
