651

A market model for controlled resource allocation in distributed operating systems

Messer, Alan January 1999 (has links)
This thesis explores the potential for providing processes with control over their resource allocation in a general-purpose distributed system. Rather than present processes with blind explicit control or leave the decision to the operating system, a compromise called process-centric resource allocation is proposed, whereby processes have informed control of their resource allocation while the operating system ensures fair consumption. The motivations for this approach to resource allocation and its background are reviewed, culminating in the description of a set of desired attributes for such a system. A three-layered architecture called ERA is then proposed and presented in detail. The lowest layer provides a unified framework for processes to choose resources and describe their priority, and describes the range of available resources. A resource information mechanism, used to support choices of distributed resources, then utilises this framework. Finally, experimental demonstrations of process-centric resource allocation are used to illustrate the third layer. This design and its algorithms together provide a resource allocation system wherein distributed resources are shared fairly amongst competing processes which can choose their resources. The system allows processes to mimic traditional resource allocations and perform novel and beneficial resource optimisations. Experimental results are presented indicating that this can be achieved with low overhead and in a scalable fashion.
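To make the market framing concrete, here is a minimal sketch of one way a market-style allocator could mediate between processes that choose resources and a system that enforces fair consumption. The names, the contention-based pricing rule, and the equal-budget fairness device are illustrative assumptions, not the ERA design from the thesis.

```python
# Illustrative market-style allocator: processes spend a fair budget to
# express preferences; prices rise with contention. (Hypothetical sketch,
# not the thesis's ERA design.)
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    capacity: int          # units still available
    price: float = 1.0     # rises as the resource becomes contended

@dataclass
class Process:
    name: str
    budget: float          # equal budgets -> fair consumption overall
    preference: list = field(default_factory=list)  # resource names, best first

def allocate(processes, resources):
    """Each process buys one unit of its most-preferred affordable resource."""
    index = {r.name: r for r in resources}
    grants = {}
    for p in sorted(processes, key=lambda p: -p.budget):  # least-served first
        for want in p.preference:
            r = index[want]
            if r.capacity > 0 and p.budget >= r.price:
                r.capacity -= 1
                p.budget -= r.price
                r.price *= 1.5          # contention makes the resource dearer
                grants[p.name] = r.name
                break
    return grants

if __name__ == "__main__":
    res = [Resource("fast-cpu", 1), Resource("slow-cpu", 4)]
    procs = [Process("a", 10, ["fast-cpu", "slow-cpu"]),
             Process("b", 10, ["fast-cpu", "slow-cpu"])]
    print(allocate(procs, res))  # only one process gets the contended fast CPU
```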
652

Three-dimensional interactive connection diagrams for knowledge engineering

Jones, Sara January 1993 (has links)
This thesis describes research into human factors aspects of the use of 3-dimensional node and link diagrams, called Interactive Connection Diagrams (ICDs), in the human-computer interface of tools for knowledge engineering. This research was carried out in two main stages: the first concentrated on perceptual aspects of 3-d ICDs, and the second on more general aspects of their use in realistic situations. A final section looked briefly at the possibility of formally specifying 3-d ICD representations. The main aim of the first stage was to investigate whether users were able to make effective judgements about the relative depths of components in 3-d ICDs. Controlled experiments were carried out to determine the extent to which such judgements were supported by the use of a particular approach to creating the illusion of depth. The results of these experiments showed that users were able to make reasonably effective judgements about the relative depths of components in 3-d ICDs. 3-d ICDs produced using the approach of interest were therefore argued to be suitable for use in the second stage of the study. In the second stage, case studies were used to investigate the utility of tools supporting 3-d ICDs in more realistic knowledge engineering situations, and the usability of depth-related features of a prototype tool which permits 3-d ICDs to be viewed and edited. On the basis of the findings of these studies it is claimed that tools supporting 3-d ICDs will, in some situations, be more useful than those which employ only more conventional 2-d versions. It was found that depth-related features of the prototype tool were usable but should be improved upon in future implementations. The third and final section of work involved a preliminary investigation into the formal specification of 3-d ICD representations of the kind used in the second set of studies. A scheme for specifying the range of 3-d ICD languages currently supported by the prototype tool was developed, and each of the particular 3-d ICD languages used in the case studies was specified. Implications of the results of this work are discussed and a number of suggestions regarding directions for future work are made. The overall conclusion is that 3-d ICDs have considerable potential as a medium in which to represent knowledge structures for use in knowledge engineering.
653

Towards efficient collective communication in multicomputer interconnection networks

Al-Dubai, Ahmed January 2004 (has links)
No description available.
654

Key management scheme for Smart Grid

Alohali, B. January 2016 (has links)
A Smart Grid (SG) is a modern electricity supply system. It uses information and communication technology (ICT) to run, monitor and control data between the generation source and the end user. It comprises a set of technologies that use sensing, embedded processing and digital communications to intelligently control and monitor an electricity grid with improved reliability, security, and efficiency. SGs are classified as Critical Infrastructures. In the recent past, there have been cyber-attacks on SGs causing substantial damage and loss of services. A recent cyber-attack on Ukraine's SG caused over 2.3 million homes to be without power for around six hours. Apart from the loss of services, some portions of the SG were still not operational due to the damage caused. SGs also face security challenges relating to confidentiality, availability, fault tolerance, and privacy. The communication and networking technologies integrated into the SG mean that new and existing security vulnerabilities must be thoroughly investigated. Key management is one of the most important security requirements for achieving data confidentiality and integrity in an SG system. It is not practical to design a single key management scheme/framework for all systems, actors and segments in the smart grid, since the security requirements of its various sub-systems vary. We address two specific sub-systems categorised by the network connectivity layer: the Home Area Network (HAN) and the Neighbourhood Area Network (NAN). Several security schemes and key management solutions for SGs have been proposed, but these solutions do not adequately protect against common cyber-attacks such as node capture, replay and Sybil attacks. We propose a cryptographic key management scheme that takes into account the differences between the HAN and NAN segments of the SG with respect to topology, authentication and forwarding of data, and that complies with the overall performance requirements of the smart grid. The proposed scheme uses group key management and group authentication in order to address end-to-end security for the HAN and NAN scenarios in a smart grid, fulfilling data confidentiality, integrity and scalability requirements. The security scheme is implemented in a multi-hop sensor network using TelosB motes and a ZigBee OPNET simulation model. In addition, replay attack, Sybil attack and node capture attack scenarios have been implemented and evaluated in a NAN scenario. Evaluation results show that the scheme is resilient against node capture attacks and replay attacks. Smart Meters in a NAN are able to authenticate themselves as a group rather than one at a time, a significant improvement over existing schemes that is discussed through comparisons with other security schemes.
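As a concrete illustration of the kind of mechanism group key management enables, here is a minimal sketch in which a NAN gateway and its meters share a derived group key, and each message carries an HMAC plus a monotonically increasing counter so replays are rejected. The design, names and message format are hypothetical assumptions for illustration, not the scheme proposed in the thesis.

```python
# Illustrative group-key sketch for a NAN of smart meters: a shared group key
# is derived from a master secret, and meters authenticate messages with an
# HMAC plus a counter for replay resistance. (Hypothetical design, not the
# thesis's scheme.)
import hmac, hashlib, os

def derive_group_key(master_secret: bytes, group_id: bytes) -> bytes:
    """HKDF-style one-step derivation of a per-group key."""
    return hmac.new(master_secret, b"group-key|" + group_id, hashlib.sha256).digest()

class Meter:
    def __init__(self, meter_id: str, group_key: bytes):
        self.meter_id = meter_id
        self.group_key = group_key
        self.counter = 0

    def send(self, payload: bytes):
        self.counter += 1                       # freshness against replay
        msg = self.meter_id.encode() + b"|" + str(self.counter).encode() + b"|" + payload
        tag = hmac.new(self.group_key, msg, hashlib.sha256).digest()
        return msg, tag

class Gateway:
    def __init__(self, group_key: bytes):
        self.group_key = group_key
        self.last_counter = {}                  # meter_id -> highest counter seen

    def verify(self, msg: bytes, tag: bytes) -> bool:
        expected = hmac.new(self.group_key, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False                        # forged or corrupted
        meter_id, counter, _ = msg.split(b"|", 2)
        if int(counter) <= self.last_counter.get(meter_id, 0):
            return False                        # replayed message
        self.last_counter[meter_id] = int(counter)
        return True

if __name__ == "__main__":
    gk = derive_group_key(os.urandom(32), b"NAN-7")
    m, gw = Meter("meter-42", gk), Gateway(gk)
    msg, tag = m.send(b"reading=3.2kWh")
    print(gw.verify(msg, tag))   # True
    print(gw.verify(msg, tag))   # False: same counter, rejected as a replay
```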
655

Replica placement in peer-to-peer systems

Wan Awang, Wan Suryani January 2016 (has links)
In today’s distributed applications, replica placement is essential, since moving data into the vicinity of an application provides many benefits. The increasing data requirements of scientific applications and collaborative access to these data make data placement even more important. To date, replication has been one of the main mechanisms used in distributed data management, whereby identical copies of data are generated and stored at various distributed sites to improve data access performance and data availability. Most work considers a file's popularity as one of the important parameters when designing replica placement strategies. However, this thesis argues that a combination of the popularity and affinity of files provides the most important parameters for decision making, improving data access performance and data availability in distributed environments. A replica placement mechanism called the Affinity Replica Placement Mechanism (ARPM) is proposed, focusing on popular files and affinity files. The idea of ARPM is to improve data availability and accessibility in a peer-to-peer (P2P) replica placement strategy. A P2P simulator, PeerSim, was used to evaluate the performance of this dynamic replica placement strategy. The simulation results demonstrated the effectiveness of ARPM, providing evidence that it contributes a new dimension to replica placement strategy by incorporating the affinity and popularity of file replicas in P2P systems.
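A minimal sketch of how popularity and affinity signals could be combined into a replica-selection score follows. The scoring formula, the weight and all names are invented for illustration; the actual ARPM mechanism is defined in the thesis.

```python
# Illustrative replica selection combining file popularity with file affinity
# (how often files are accessed together). Hypothetical sketch, not ARPM.
from collections import Counter
from itertools import combinations

def choose_replicas(access_log, k=2, affinity_weight=0.5):
    """access_log: list of sets, each the files one request touched together."""
    popularity = Counter()
    affinity = Counter()
    for request in access_log:
        popularity.update(request)              # each access raises popularity
        for a, b in combinations(sorted(request), 2):
            affinity[(a, b)] += 1               # files requested together
    score = {}
    for f in popularity:
        co_access = sum(c for pair, c in affinity.items() if f in pair)
        score[f] = popularity[f] + affinity_weight * co_access
    return sorted(score, key=score.get, reverse=True)[:k]

if __name__ == "__main__":
    log = [{"a", "b"}, {"a", "b"}, {"c"}, {"a"}, {"b", "c"}]
    print(choose_replicas(log))   # 'b' and 'a' win: popular and co-accessed
```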
656

Functional programming languages in computing clouds : practical and theoretical explorations

Fritsch, Joerg January 2016 (has links)
Cloud platforms must integrate three pillars: messaging, coordination of workers, and data. This research investigates whether functional programming languages have any special merit when it comes to the implementation of cloud computing platforms. This thesis presents the lightweight message queue CMQ and the DSL CWMWL for the coordination of workers, which we use as artefacts to prove or disprove the special merit of functional programming languages in computing clouds. We have detailed the design and implementation with the broad aim of matching the notions and requirements of computing clouds. Our evaluation is based on criteria derived from a series of comprehensive rationales and specifics that allow the FPL Haskell to be thoroughly analysed. We find that Haskell is excellent for use cases that do not require the distribution of the application across the boundaries of (physical or virtual) systems, but not appropriate as a whole for the development of distributed cloud-based workloads that require communication with remote systems and coordination of decoupled workloads. However, Haskell may yet qualify as a suitable vehicle, given further development of formal mechanisms that embrace non-determinism in the underlying distributed environments, leading to applications that are anti-fragile rather than applications that insist on strict determinism that can only be guaranteed on the local system or via slow blocking communication mechanisms.
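For orientation, the three pillars can be illustrated with a generic in-process sketch: a queue carries messages, workers are coordinated through shutdown sentinels, and the results form the data. This is a plain Python illustration of the pattern only; CMQ and CWMWL themselves are Haskell artefacts with very different implementations.

```python
# Generic messaging + worker-coordination pattern (the abstract's three
# pillars), sketched in-process. Not CMQ/CWMWL.
import queue, threading

def worker(tasks: "queue.Queue", results: "queue.Queue"):
    while True:
        item = tasks.get()
        if item is None:                # sentinel: coordinated shutdown
            tasks.task_done()
            break
        results.put(item * item)        # the "data" pillar: computed results
        tasks.task_done()

if __name__ == "__main__":
    tasks, results = queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(4)]
    for t in threads:
        t.start()
    for n in range(10):
        tasks.put(n)                    # the "messaging" pillar
    for _ in threads:
        tasks.put(None)                 # one sentinel per worker
    tasks.join()                        # wait until every message is handled
    print(sorted(results.queue))        # squares of 0..9
```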
657

Narrative construction in information visualisation

Badawood, Donia January 2015 (has links)
Storytelling has been used throughout the ages as a means of communication, conveying and transmitting knowledge from one person to another and from one generation to the next. In various domains, formulating messages, ideas, or findings into a story has proven its efficiency in making them comprehensible, memorable, and engaging. Information Visualization as an academic field also utilises the power of storytelling to make visualizations more understandable and interesting for a variety of audiences. Although storytelling has been a topic of interest in information visualization for some time, few or no empirical evaluations exist that compare different approaches to storytelling through information visualization. There is also a need for work that addresses in depth some criteria and techniques of storytelling, such as transition types, in visual stories in general and data-driven stories in particular. Two sets of experiments were conducted to explore how two different models of information visualization delivery influence narratives constructed by audiences. The first model involves direct narrative by a speaker using visualization software to tell a data-story, while the second involves constructing a story by interactively exploring the visualization software. The first set of experiments is a within-subject experiment with 13 participants, and the second set is a between-subject experiment with 32 participants. In both rounds, an open-ended questionnaire was used in controlled laboratory settings in which the primary goal was to collect a number of written data-stories derived from the two models. The data-stories and answers written by the participants were all analysed and coded using data-driven and pre-set themes. The themes include reported impressions about the story, insight types reported, narrative structures, curiosity about the data, and ease of telling a story after experimenting with each model. The findings show that while the delivery model has no effect on how easy or difficult the participants found telling a data story to be, it does have an effect on the tendency to identify and use outlier insights in the data story if participants are not distracted by direct narration. It also affects the narrative structure and depth of the data story. Examining more mature domains of visual storytelling, such as films and comics, can be highly beneficial to this new sub-field of data visualization. In the research in hand, a taxonomy of panel-to-panel transitions in comics has been used. The definitions of the components of this taxonomy have been refined to reflect the nature of data-stories in information visualization, and the taxonomy has then been used in coding a number of VAST Challenge videos. The transitions used in each video have been represented graphically with a diagram that shows how the information was added incrementally in order to tell a story that answers a particular question. A number of issues have been taken into account when coding transitions in each video and when designing and creating the visual diagram, such as nested transitions, the use of sub-topics, and delayed transitions. The major contribution of this part of the research is the provision of a taxonomy and description of transition types in the context of narrative visualization, an explanation of how this taxonomy can be used to code transitions in narrative visualization, and a visual summary as a means of summarising that coding.
The approaches to data analysis and different storytelling axes, both in the experimental work and in proposing and applying the framework of transition types used, can be usefully applied to other studies and comparisons of storytelling approaches.
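As a sketch of what such coding might look like in practice, the fragment below records one video as a sequence of typed transitions and tallies them. The category names follow the well-known comics taxonomy of panel-to-panel transitions the abstract references; the example coding and the function names are invented for illustration, not taken from the thesis.

```python
# Illustrative coding of a narrative-visualization video as a sequence of
# panel-to-panel transition types. Hypothetical sketch.
from collections import Counter

TRANSITION_TYPES = {
    "moment-to-moment", "action-to-action", "subject-to-subject",
    "scene-to-scene", "aspect-to-aspect", "non-sequitur",
}

def summarise(coding):
    """Count transition types in one coded video; reject unknown labels."""
    unknown = set(coding) - TRANSITION_TYPES
    if unknown:
        raise ValueError(f"unknown transition types: {unknown}")
    return Counter(coding)

if __name__ == "__main__":
    video = ["scene-to-scene", "subject-to-subject", "subject-to-subject",
             "aspect-to-aspect", "action-to-action"]
    print(summarise(video))   # per-type tallies, the basis of a visual summary
```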
658

Evolutionary combinatorial optimisation for energy storage scheduling, and web-based power systems analysis using PHP

Agamah, Simon January 2016 (has links)
Two research areas are covered in this thesis: the formulation of a novel evolutionary combinatorial optimisation algorithm for energy storage system (ESS) scheduling, and web-based power systems analysis (WBPSA) using PHP programming. An increase in electricity demand usually calls for reinforcement of the network equipment to handle the new load, and network operators sometimes postpone or avoid this reinforcement by using an ESS to store electrical energy when network usage is low and release it into the grid during periods of high demand. ESS operation must be scheduled to be effective, and there are several scheduling methods that depend on energy generation data, flexible time-of-use tariffs or closed-loop set-points. This thesis proposes a method that uses only historic or forecast demand data, which the other methods also require. The methodology formulates an electricity demand profile and ESS as a combination of the one-dimensional bin packing problem and the subset sum problem, and solves them heuristically with specific modifications and transformations to obtain viable schedules. The schedules may then be optimised further using a genetic algorithm. Comparative analyses with other algorithms and case studies using real-world data are used for verification. The algorithm is shown to be effective and to have some advantages over existing algorithms, so it can be used in scenarios where other methods are not applicable. On the second topic, the thesis explores web-based power systems analysis platforms and shows that most use a web server primarily as an interface for exchanging requests and results between a front-end web browser and specialised back-end computation software written in a general programming language. A web server runs programs written in scripting languages such as PHP, the most popular web server programming language. Recent versions of web scripting languages have the computational capabilities required for power systems analysis and can handle the task of modelling networks and analysing them. This provides an opportunity for a slimmer two-tier framework in which the web server also acts as the computation layer. The requirements for general power systems modelling are discussed and a methodology for realising web-based simulation using PHP is developed. Some of the modelling functions are handled natively in PHP and some require the use of extensions. The results show that using PHP for simulations can give simpler access to power systems analysis functions in websites and web applications. The memory consumed by the PHP library developed is shown to be low, and the computation time for reasonably large networks is in the millisecond range.
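A heavily simplified sketch of the peak-shaving idea behind such scheduling follows: discharge the ESS into the highest-demand periods and recharge in the lowest, within an energy budget. This greedy toy ignores efficiency losses and state-of-charge dynamics, and it is not the thesis's bin-packing/subset-sum formulation or its genetic-algorithm refinement; all parameters are invented.

```python
# Greedy peak-shaving sketch for an ESS over a demand profile. Hypothetical
# illustration only; the thesis's algorithm is combinatorial and evolutionary.
def schedule_ess(demand, energy_capacity, power_limit):
    """Return per-period ESS power: negative = charging, positive = discharging."""
    n = len(demand)
    schedule = [0.0] * n
    remaining = energy_capacity
    # Discharge at the peaks until the energy budget is spent.
    for i in sorted(range(n), key=lambda i: demand[i], reverse=True):
        if remaining <= 0:
            break
        p = min(power_limit, remaining, demand[i])
        schedule[i] = p
        remaining -= p
    # Recharge the same amount of energy in the lowest-demand periods.
    to_recharge = energy_capacity - remaining
    for i in sorted(range(n), key=lambda i: demand[i]):
        if to_recharge <= 0:
            break
        if schedule[i] == 0.0:
            p = min(power_limit, to_recharge)
            schedule[i] = -p
            to_recharge -= p
    return schedule

if __name__ == "__main__":
    demand = [3, 4, 9, 10, 8, 4, 2, 1]      # e.g. half-hourly kW
    s = schedule_ess(demand, energy_capacity=6, power_limit=3)
    print(s)                                 # discharge at peaks, charge in troughs
    print([d - x for d, x in zip(demand, s)])  # flatter net profile on the network
```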
659

Network-aware resource management for mobile cloud

Sarathchandra Magurawalage, Chathura M. January 2017 (has links)
The author proposes a novel system architecture for mobile cloud computing (MCC) that includes a controller for managing computing and communication resources in a Cloud Radio Access Network (C-RAN) environment. The monitoring information gathered in the controller is used when making resource allocation/management decisions. A unified protocol has been proposed, which utilises the same packet format for mobile task offloading and resource management; the packet format and the message types of the protocol have been presented. An MCC scenario (i.e., cloudlet+clone) that consists of a cloudlet layer has been studied, in which the cloudlets are deployed next to Wi-Fi access points and serve as localised service points in proximity to mobile devices to improve the performance of mobile cloud services. On top of this, an offloading algorithm is proposed with the main aim of deciding whether to offload to a clone or a cloudlet. The architecture described above has been implemented as a prototype focusing on resource management in the mobile cloud. A partial implementation of a resource monitoring module that monitors both computing and communication resources has also been presented. Auto-scaling enables efficient computing resource management in the mobile cloud, and an empirical performance analysis of cloud vertical scaling for mobile cloud resource management has been conducted. The working procedures of the proposed unified protocol have been illustrated to show the mobile task offloading and resource allocation functions. Simulation results for the cloudlet+clone mobile task offloading algorithm demonstrate the effectiveness and efficiency of the presented task offloading architecture and algorithm in terms of response time and energy consumption. The empirical vertical auto-scaling performance analysis for mobile cloud resource allocation shows that the time delays when scaling resources (CPU, RAM, disk) in the mobile cloud vary, and that the scaling delay depends on the scaling amount at the given iteration.
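A minimal sketch of the shape such an offloading decision could take is given below: estimate the response time for each candidate target as link latency plus transfer time plus execution time, and pick the faster one. The cost model and every parameter are invented assumptions for illustration, not the algorithm from the thesis.

```python
# Illustrative cloudlet-vs-clone offloading decision based on estimated
# response time. Hypothetical cost model, not the thesis's algorithm.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    bandwidth_mbps: float    # link from the mobile device
    rtt_ms: float            # round-trip latency
    cycles_per_sec: float    # effective compute speed

def response_time(target: Target, input_mb: float, cycles: float) -> float:
    transfer = input_mb * 8 / target.bandwidth_mbps       # seconds on the link
    execution = cycles / target.cycles_per_sec            # seconds of compute
    return target.rtt_ms / 1000 + transfer + execution

def choose_target(task_mb, task_cycles, targets):
    return min(targets, key=lambda t: response_time(t, task_mb, task_cycles))

if __name__ == "__main__":
    cloudlet = Target("cloudlet", bandwidth_mbps=100, rtt_ms=5,  cycles_per_sec=5e9)
    clone    = Target("clone",    bandwidth_mbps=20,  rtt_ms=60, cycles_per_sec=2e10)
    small = choose_target(task_mb=8, task_cycles=2e9,  targets=[cloudlet, clone])
    big   = choose_target(task_mb=8, task_cycles=8e10, targets=[cloudlet, clone])
    print(small.name, big.name)  # nearby cloudlet for light jobs, clone for heavy ones
```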
660

Intelligent selection of grinding conditions

Li, Yan January 1996 (has links)
No description available.
