91

A semantic framework for unified cloud service search, recommendation, retrieval and management

Fang, Daren. January 2015.
Cloud computing (CC) is a revolutionary paradigm for consuming Information and Communication Technology (ICT) services. However, when trying to find the optimal services, many users are confused by inadequate service descriptions. Although some efforts have been made in the semantic modelling, retrieval and recommendation of cloud services, existing practices work effectively only in restricted scenarios, for example when dealing with basic, non-interactive service specifications. Meanwhile, various service management tasks are usually performed separately for diverse cloud resources offered by distinct service providers, significantly reducing the effectiveness and efficiency of task implementation. Fundamentally, this is due to the lack of a generic service management interface that enables unified service access and manipulation regardless of provider or resource type. To address these issues, the thesis proposes a semantic-driven framework, which integrates two main novel specification approaches: agility-oriented and fuzziness-embedded cloud service semantic specifications, and cloud service access and manipulation request operation specifications. These enable comprehensive service specification by capturing in-depth cloud concept details and their interactions, even across multiple service categories and abstraction levels. Using the specifications as a CC knowledge foundation, a unified service recommendation and management platform is implemented. Based on considerable experimental data collected on real-world cloud services, the approaches demonstrate marked effectiveness in service search, retrieval and recommendation tasks, whilst the platform shows strong performance for a wide range of service access, management and interaction tasks. Furthermore, the framework includes two sets of innovative specification processing algorithms specifically designed to serve advanced CC tasks: while the fuzzy rating and ontology evolution algorithms enable collaborative cloud service specification, the service orchestration reasoning algorithms reveal a promising means of dynamic service composition.
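To make the fuzziness-embedded idea concrete, here is a minimal, hypothetical sketch in which user ratings of a qualitative service property are modelled as triangular fuzzy numbers and reduced to a crisp ranking score. The representation and the centroid defuzzification are standard fuzzy-set techniques chosen for illustration; they are assumptions, not the thesis's actual algorithms.

```python
# Minimal sketch of fuzziness-embedded service rating: user ratings of
# a qualitative property (e.g. "scalability") are triangular fuzzy
# numbers (low, peak, high), aggregated, then defuzzified to a crisp
# score usable for ranking services. Illustrative only.

def aggregate_fuzzy_ratings(ratings):
    """Average a list of triangular fuzzy numbers (low, peak, high)."""
    n = len(ratings)
    low = sum(r[0] for r in ratings) / n
    peak = sum(r[1] for r in ratings) / n
    high = sum(r[2] for r in ratings) / n
    return (low, peak, high)

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number."""
    return sum(tfn) / 3.0

# Three users rate "scalability" of a service on a 0-10 scale.
ratings = [(6, 7, 8), (5, 7, 9), (7, 8, 9)]
print(defuzzify(aggregate_fuzzy_ratings(ratings)))  # crisp score ~7.3
```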
92

Media fragment semantics : the linked data approach

Li, Yunjia. January 2015.
In the last few years, the explosion of multimedia content on the Web has made multimedia resources "first-class citizens" of the Web. While these resources are easily stored and shared, it is becoming more difficult to find specific video or audio content, and especially to identify, link, navigate, search and share content inside multimedia resources. The concept of a media fragment refers to deep linking into multimedia resources, but annotating media fragments and linking them to other resources on the Web has yet to be widely adopted. The Linked Data principles offer guidelines for publishing data on the Web so that it can be better connected and explored by machines. Publishing media fragments and annotations as Linked Data will enable media fragments to be transparently integrated into current Web content. This thesis takes the Linked Data approach to interlinking media fragments with other resources on the Web and demonstrates how Linked Data can help improve the indexing of media fragments. The thesis first identifies the gap between media fragments and Linked Data, and the major requirements that must be fulfilled to bridge that gap, based on how multimedia data is currently presented and shared on the Web. Then, by extending the Linked Data principles, it proposes Interlinking Media Fragment Principles as the basic rationale and best practice for applying Linked Data principles to media fragments. To further automate the media fragment publishing process, a core RDF model and a media fragment enriching framework are designed to link media fragments into the Linked Open Data Cloud via annotations and to visualise media fragments on Web pages. Several examples are implemented to demonstrate the use of interlinked media fragments, including enriching YouTube videos with named entities and using media fragments for video classification. The Media Fragment Indexing Framework is proposed to solve the fundamental problem of media fragment indexing for search engines and, as an example, Twitter is adopted as the source of media fragment annotations. The thesis concludes that applying Linked Data principles to media fragments will bring semantics to media fragments, improving multimedia indexing at a fine-grained level and opening new research areas based on interlinked media fragments.
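As a simplified illustration of the approach, the sketch below publishes a temporal media fragment as RDF using the Python rdflib library. The #t=10,20 syntax follows the W3C Media Fragments URI specification; the vocabulary, URIs and linked entity are illustrative assumptions, not the thesis's actual core RDF model.

```python
# Publish a media fragment (seconds 10-20 of a video) as Linked Data,
# interlinking it with an entity in the Linked Open Data Cloud.
from rdflib import Graph, URIRef, Namespace, Literal

DCTERMS = Namespace("http://purl.org/dc/terms/")

g = Graph()
g.bind("dcterms", DCTERMS)

# Temporal fragment identified by a W3C Media Fragment URI.
fragment = URIRef("http://example.org/video.mp4#t=10,20")
video = URIRef("http://example.org/video.mp4")

g.add((fragment, DCTERMS.isPartOf, video))
# Annotation linking the fragment into the Linked Open Data Cloud.
g.add((fragment, DCTERMS.subject,
       URIRef("http://dbpedia.org/resource/Semantic_Web")))
g.add((fragment, DCTERMS.description,
       Literal("Speaker introduces the Semantic Web")))

print(g.serialize(format="turtle"))
```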
93

A fault-tolerant mechanism for desktop cloud systems

Alwabel, Abdulelah. January 2015.
Cloud computing is a paradigm that promises to move IT another step towards the age of utility computing. Traditionally, Clouds employ dedicated resources located in data centres to provide services to clients; the resources in such Cloud systems are highly reliable, with a low probability of failure. Desktop Cloud computing is a new type of Cloud computing that aims to provide Cloud services at little or no cost. This ambition can be achieved by combining Cloud computing and Volunteer computing into Desktop Clouds, harnessing non-dedicated resources when idle. The resources can be any type of computing machine, for example a standard PC, but such resources are renowned for their volatility: failures can happen at any time without warning. In Cloud computing, tasks are submitted by Cloud users or brokers to be processed and executed by virtual machines (VMs), and VMs are hosted by physical machines (PMs). In this context, throughput is defined as the proportion of the total number of tasks that are successfully processed, so the failure of a PM can have a negative impact on this measure in a Desktop Cloud system: it destroys all hosted VMs, losing the submitted tasks currently being processed. The aim of this research is to design a VM allocation mechanism for Desktop Cloud systems that is tolerant to node failure. VM allocation mechanisms are responsible for allocating VMs to PMs and migrating them during runtime with the objective of optimisation, yet the available mechanisms pay little attention to node failure events. The contribution of this research is a Fault-Tolerant (FT) VM allocation mechanism that handles PM failure events in Desktop Clouds, employing a replication technique to keep the throughput of the Desktop Cloud system within acceptable levels. Since replication increases the power consumption of PMs, the mechanism is enhanced with a migration policy to minimise this effect, and is evaluated using three metrics: throughput of tasks, power consumption of PMs, and service availability. The evaluation is conducted using DesktopCloudSim, a tool developed for this purpose in this study as an extension to CloudSim, the well-known Cloud simulation tool, to simulate node failure events in Cloud systems, with node failure analysed using real data sets collected from the Failure Trace Archive. The experiments demonstrate that, in the presence of node failures, the FT mechanism yields a statistically significant improvement in the throughput of Cloud systems compared with traditional mechanisms (First Come First Serve, Greedy and RoundRobin), and a statistically significant reduction in power consumption when its migration policy is employed.
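The replication idea can be sketched in a few lines: each VM is placed on two distinct physical machines, so a single node failure promotes the replica instead of losing the tasks in flight. This is a minimal illustration of the general technique under assumed data structures, not the thesis's FT mechanism.

```python
# Replication-based fault-tolerant VM allocation: a primary and a
# replica of each VM live on two distinct PMs; on PM failure, the
# surviving replicas are promoted to primaries.
import random

class PM:
    def __init__(self, pm_id):
        self.pm_id = pm_id
        self.vms = []          # (role, vm_id) pairs hosted on this PM
        self.alive = True

def allocate(vm_id, pms):
    """Place a primary and a replica of vm_id on two distinct live PMs."""
    candidates = [pm for pm in pms if pm.alive]
    primary, replica = random.sample(candidates, 2)
    primary.vms.append(("primary", vm_id))
    replica.vms.append(("replica", vm_id))
    return primary, replica

def handle_failure(failed, pms):
    """Promote replicas of VMs whose primary was on the failed PM."""
    failed.alive = False
    lost = {vm for role, vm in failed.vms if role == "primary"}
    for pm in pms:
        if pm.alive:
            pm.vms = [("primary", vm) if role == "replica" and vm in lost
                      else (role, vm) for role, vm in pm.vms]

pms = [PM(i) for i in range(4)]
primary, _ = allocate("vm-1", pms)
handle_failure(primary, pms)   # tasks on vm-1 survive via the replica
```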
94

Querying the web of data with low latency : high performance distributed SPARQL processing and benchmarking

Wang, Xin. January 2014.
The Web of Data extends the World Wide Web (WWW) so that applications can understand information and cooperate with humans on complex tasks. The basis of performing such tasks is low-latency queries over the Web of Data. The large scale and distributed nature of the Web of Data have negative impacts on several factors critical for efficient query processing, including fast data transmission between datasets, predictable data distribution, and statistics that summarise and describe patterns in the data. Moreover, it is common on the Web of Data for the same resource to be identified by multiple URIs. This phenomenon, named co-reference, potentially increases the complexity of query processing and makes it even harder to obtain accurate statistics. Given these challenges, it is not clear whether efficient queries on the Web of Data are possible at a large scale. In this thesis, we explore techniques to improve the efficiency of querying the Web of Data at a large scale. More specifically, we investigate two typical scenarios: 1) all datasets provide detailed statistics that are possibly available at a large scale, and 2) co-reference is taken into account, and datasets' statistics are not reliable. For each scenario we explore existing and novel optimisation techniques tailored for querying the Web of Data, as well as well-established techniques with careful adjustments. For the scenario with detailed statistics we provide a scheme that implements a statistics-based query optimisation approach requiring detailed statistics, and intensively exploits parallelism. We propose an efficient Parallel Sub-query Identification algorithm to increase the degree of parallelism. The algorithm breaks a SPARQL query into sub-queries that can be processed in parallel without increasing network traffic. We combine it with dynamic programming to produce query plans with both minimum costs and a fair degree of parallelism. Furthermore, we develop a mechanism that maximally exploits the bandwidth and computing power of datasets. For the scenario with co-reference and without reliable statistics we provide a scheme that implements a dynamic query optimisation approach that takes co-reference into account and utilises runtime statistics to improve query efficiency even further. We propose a model called Virtual Graph to transform a query and all its co-referent siblings into a single query with pre-defined bindings. Virtual Graph reduces the large number of outgoing and incoming requests required to process co-referent queries individually. Moreover, Virtual Graph enables query optimisers to find the optimal plan with respect to all co-referent queries as a whole. The parallel sub-query identification algorithm is used in this scheme as well, but provides a higher degree of parallelism with the help of runtime statistics, and a Minimum-Spanning-Tree-based algorithm is used as a result of using runtime statistics. The same parallel execution mechanism used in the first scenario is adopted here as well. In order to examine the effectiveness of our schemes in practice, we deploy the above approaches in two distributed SPARQL engines, LHD-s and LHD-d respectively. Both engines are implemented using a popular Java-based platform for building Semantic Web applications, and can be used either as standalone applications or integrated into existing systems that require quick responses to Linked Data queries.
We also propose a scalable and flexible benchmark, called the Distributed SPARQL Evaluation Framework (DSEF), for evaluating optimisation approaches on the Web of Data. DSEF adopts an expandable virtual-machine-based structure and provides a set of efficient tools to help easily set up RDF networks of arbitrary sizes. We further investigate the proportion and distribution of co-reference in the real world, based on which DSEF is able to simulate co-reference for given RDF datasets. DSEF grounds its soundness in the use of widely accepted assessment data and queries. By comparing both LHD-s and LHD-d with existing approaches using DSEF, we provide evidence that neither the existing statistics provided by datasets nor cost-estimation methods are sufficiently accurate. On the other hand, dynamic optimisation using runtime statistics together with carefully tuned parallelism is promising for significantly reducing the latency of large-scale queries on the Web of Data. We also demonstrate that the parallel sub-query identification and Virtual Graph algorithms significantly increase query efficiency for queries with or without co-reference. In summary, the contributions of this thesis include: 1) two schemes for improving query efficiency in two typical scenarios on the Web of Data; 2) implementations of the two schemes, named LHD-s and LHD-d respectively; 3) a scalable and flexible evaluation framework for distributed SPARQL engines, DSEF; and 4) evidence that runtime-statistics-based dynamic optimisation with parallelism is promising for reducing the latency of Linked Data queries at a large scale.
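The core intuition behind parallel sub-query identification can be illustrated roughly: split a basic graph pattern into groups that different endpoints can answer, evaluate the groups concurrently, and join the bindings locally. The endpoint URLs and the grouping are assumptions for illustration; this is not the thesis's actual algorithm or cost model.

```python
# Evaluate sub-queries against disjoint endpoints in parallel; each
# endpoint is contacted once, so parallelism adds no network traffic.
from concurrent.futures import ThreadPoolExecutor

# Triple patterns annotated with the endpoint assumed to answer them.
subqueries = {
    "http://example.org/sparql/people": [
        "?person <http://xmlns.com/foaf/0.1/name> ?name ."],
    "http://example.org/sparql/pubs": [
        "?paper <http://purl.org/dc/terms/creator> ?person ."],
}

def run_subquery(endpoint, patterns):
    # Placeholder for an HTTP SPARQL request: a real engine would POST
    # the sub-query to `endpoint` and parse the returned bindings.
    query = "SELECT * WHERE { %s }" % " ".join(patterns)
    return (endpoint, query)

with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda kv: run_subquery(*kv),
                            subqueries.items()))
# A local join over the returned bindings would follow here.
print(results)
```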
95

EXPRESS : resource-oriented and RESTful Semantic Web services

Alowisheq, Areeb. January 2014.
This thesis investigates an approach that simplifies the development of Semantic Web services (SWS) by removing the need for additional semantic descriptions. The most actively researched approaches to Semantic Web services introduce explicit semantic descriptions of services on top of the existing semantic descriptions of the service domains, which increases complexity and design overhead. The need to semantically describe the services in such approaches stems from their foundations in service-oriented computing, i.e. the extension of already existing service descriptions. This thesis demonstrates that adopting a resource-oriented approach based on REST will, in contrast to service-oriented approaches, eliminate the need for explicit semantic service descriptions and service vocabularies, reducing development effort while retaining significant functional capability. The approach proposed in this thesis, called EXPRESS (Expressing RESTful Semantic Services), utilises the similarities between REST and the Semantic Web, such as resource realisation, self-describing representations and uniform interfaces. The semantics of a service are elicited from a resource's semantic description in the domain ontology and from the semantics of the uniform interface, hence eliminating the need for additional semantic descriptions. Moreover, stub generation is a by-product of the mapping between entities in the domain ontology and resources. EXPRESS was developed to test the feasibility of eliminating explicit service descriptions and service vocabularies or ontologies, to explore the restrictions this places on domain ontologies, to investigate the impact on the semantic quality of the description, and to explore the benefits and costs to developers. To achieve this, an online demonstrator that allows users to generate stubs was developed. In addition, a matchmaking experiment was conducted to show that the descriptions of the services are comparable to OWL-S in terms of their ability to be discovered, while improving the efficiency of discovery. Finally, an expert review provided evidence of EXPRESS's simplicity and practicality when developing SWS from scratch.
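A hypothetical sketch of EXPRESS-style stub generation follows: REST resource URIs and the uniform interface are derived directly from classes in a domain ontology, so no separate service ontology is needed. The mapping rules and class names here are illustrative assumptions, not the thesis's exact ones.

```python
# Map each ontology class to a resource exposing the uniform interface.

ontology_classes = ["Book", "Author", "Review"]   # from a domain ontology

def generate_stubs(classes):
    """Derive CRUD routes for each ontology class."""
    stubs = {}
    for cls in classes:
        collection = "/%ss" % cls.lower()          # e.g. /books
        item = collection + "/{id}"                # e.g. /books/{id}
        stubs[cls] = {
            ("GET", collection): "list all %s resources" % cls,
            ("POST", collection): "create a %s" % cls,
            ("GET", item): "retrieve one %s" % cls,
            ("PUT", item): "update a %s" % cls,
            ("DELETE", item): "delete a %s" % cls,
        }
    return stubs

for cls, routes in generate_stubs(ontology_classes).items():
    for (verb, path), meaning in routes.items():
        print(f"{verb:6} {path:15} -> {meaning}")
```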
96

Women's internet usage in university settings in Malaysia and the United Kingdom : a comparative case study

Husain, Kalthom. January 2010.
The revolution in information technology has produced innovations with increasingly important effects on the lives of their users, both personal and professional. In particular, the Internet and associated applications such as email and the World Wide Web have had profound impacts over the twenty or so years they have been in widespread use, raising issues about various types of digital divide, including that between more and less developed nations. This thesis reports a study carried out on two continents, Europe and Asia, to compare and contrast the adoption of these innovations in roughly comparable contexts: university departments. Interviews were carried out with 27 women drawn from the administrative and academic staff of the University of Brighton (UK) and Kolej Universiti Teknikal Kebangsaan (Malaysia).
97

A forensically-enabled IaaS cloud computing architecture

Alqahtany, Saad. January 2017.
Cloud computing has been advancing at an intense pace and has become one of the most important research topics in computer science and information systems. It offers enterprise-scale platforms in a short time frame with little effort, delivering significant economic benefits to both commercial and public entities. Despite this, security and the associated incident management requirements are major obstacles to adopting the cloud. Current cloud architectures do not support digital forensic investigators, nor do they comply with today's digital forensics procedures – largely due to the fundamentally dynamic nature of the cloud. When an incident has occurred, an organisation-based investigation will seek to provide potential digital evidence while minimising the cost of the investigation. Data acquisition is the first and most important process within digital forensics, as it ensures data integrity and admissibility. However, access to data and control of resources in the cloud are still very much provider-dependent and complicated by the very nature of the multi-tenanted operating environment. Investigators therefore have no option but to rely on the Cloud Service Providers (CSPs) to acquire evidence for them. Due to the cost and time involved in acquiring a forensic image, some cloud providers will not provide evidence beyond 1 TB despite a court order served on them. Even assuming they are willing or required by law to do so, the evidence collected is still questionable, as there is no way to verify its validity or whether evidence has already been lost. Dependence on CSPs is therefore considered one of the most significant challenges when investigators need to acquire evidence in a timely yet forensically sound manner from cloud systems. This thesis proposes a novel architecture to support forensic acquisition and analysis of IaaS cloud-based systems. The approach, known as the Cloud Forensic Acquisition and Analysis System (Cloud FAAS), is based on a cluster analysis of non-volatile memory that achieves forensically reliable images at the same level of integrity as the normal "gold standard" computer forensic acquisition procedures, with the additional capability to reconstruct the image at any point in time. Cloud FAAS fundamentally shifts access to the data back to the data owner rather than relying on a third party, so organisations are free to undertake investigations at will, requiring no intervention or cooperation from the cloud provider. The novel architecture is validated through a proof-of-concept prototype. A series of experiments illustrate and model how Cloud FAAS is capable of providing a richer and more complete set of admissible evidence than current CSPs are able to provide. Using Cloud FAAS, investigators can obtain a forensic image of the system from after, just prior to, or hours before the incident. The approach can therefore not only create images that are forensically sound but also provide access to deleted and, more importantly, overwritten files – which current computer forensic practices are unable to achieve. This results in an increased level of visibility for the forensic investigator and removes any limitations that data carving and fragmentation may introduce. In addition, an analysis of the economic overhead of operating Cloud FAAS shows that the level of disk change that occurs is well within acceptable limits and relatively small in comparison to the total volume of memory available. The results show Cloud FAAS has both a technical and an economic basis for solving investigations involving cloud computing.
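The point-in-time reconstruction capability can be sketched conceptually: a base disk image plus a time-ordered log of block-level deltas lets an investigator rebuild the disk as it was at any chosen moment, including recovering blocks that were later overwritten. The data layout below is an illustrative assumption, not the system's actual format.

```python
# Rebuild a disk image at an arbitrary timestamp from a base image
# plus a continuously captured delta log.

base_image = {0: b"boot", 1: b"data-v1", 2: b"log-v1"}

# (timestamp, block_number, new_content) captured continuously.
delta_log = [
    (100, 1, b"data-v2"),
    (250, 2, b"log-v2"),
    (400, 1, b"data-v3"),   # overwrites block 1 again
]

def reconstruct(base, deltas, at_time):
    """Replay deltas up to `at_time` over a copy of the base image."""
    image = dict(base)
    for ts, block, content in sorted(deltas):
        if ts > at_time:
            break
        image[block] = content
    return image

# Examine block 1 just before it was overwritten at t=400.
print(reconstruct(base_image, delta_log, at_time=399))  # block 1: data-v2
```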
98

Functional programming languages in computing clouds : practical and theoretical explorations

Fritsch, Joerg. January 2016.
Cloud platforms must integrate three pillars: messaging, coordination of workers, and data. This research investigates whether functional programming languages (FPLs) have any special merit when it comes to implementing cloud computing platforms. The thesis presents the lightweight message queue CMQ and the DSL CWMWL for the coordination of workers, which we use as artefacts to prove or disprove the special merit of FPLs in computing clouds. We detail their design and implementation with the broad aim of matching the notions and requirements of computing clouds. Our evaluation is based on criteria derived from a series of comprehensive rationales and specifics that allow the FPL Haskell to be thoroughly analysed. We find that Haskell is excellent for use cases that do not require distributing the application across the boundaries of (physical or virtual) systems, but not appropriate, as a whole, for developing distributed cloud-based workloads that require communication with remote systems and coordination of decoupled workloads. However, Haskell may qualify as a suitable vehicle in the future, given further development of formal mechanisms that embrace non-determinism in the underlying distributed environments, leading to applications that are anti-fragile rather than applications that insist on strict determinism, which can only be guaranteed on the local system or via slow, blocking communication mechanisms.
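For flavour, here is a tiny Python fragment mirroring the kind of lightweight, connectionless message queue CMQ represents (assumed here to ride on UDP; the real CMQ is a Haskell library, and the API below is purely illustrative).

```python
# Fire-and-forget enqueue over an unreliable transport: no connection
# setup, no blocking, messages bucketed by key on the receiving side.
import json
import socket

def send_message(queue_host, queue_port, key, payload):
    """Non-blocking enqueue: a single datagram, no handshake."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    datagram = json.dumps({"key": key, "payload": payload}).encode()
    sock.sendto(datagram, (queue_host, queue_port))
    sock.close()

def serve(port):
    """Receive datagrams and bucket them by key (a minimal queue)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    queues = {}
    while True:
        data, _ = sock.recvfrom(65535)
        msg = json.loads(data)
        queues.setdefault(msg["key"], []).append(msg["payload"])

send_message("127.0.0.1", 9999, "worker-1", {"task": "render"})
```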
99

Network-aware resource management for mobile cloud

Sarathchandra Magurawalage, Chathura M. January 2017.
The author proposes a novel system architecture for mobile cloud computing (MCC) that includes a controller for managing computing and communication resources in a Cloud Radio Access Network (C-RAN) environment. Monitoring information gathered in the controller is used when making resource allocation and management decisions. A unified protocol is proposed that uses the same packet format for mobile task offloading and resource management; the packet format and the message types of the protocol are presented. An MCC scenario (cloudlet+clone) consisting of a cloudlet layer is studied, in which cloudlets are deployed next to Wi-Fi access points and serve as localised service points in proximity to mobile devices to improve the performance of mobile cloud services. On top of this, an offloading algorithm is proposed whose main aim is deciding whether to offload to a clone or a cloudlet. The architecture described above is implemented as a prototype focusing on resource management in the mobile cloud, along with a partial implementation of a resource monitoring module that monitors both computing and communication resources. Auto-scaling enables efficient computing resource management in the mobile cloud, and an empirical performance analysis of cloud vertical scaling for mobile cloud resource management is conducted. The working procedures of the proposed unified protocol are illustrated for the mobile task offloading and resource allocation functions. Simulation results for the cloudlet+clone mobile task offloading algorithm demonstrate the effectiveness and efficiency of the presented task offloading architecture and algorithm in terms of response time and energy consumption. The empirical vertical auto-scaling performance analysis for mobile cloud resource allocation shows that the time delays when scaling resources (CPU, RAM, disk) in the mobile cloud vary, and that the scaling delay depends on the amount scaled at the given iteration.
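An illustrative sketch of a cloudlet-versus-clone offloading decision follows: estimate the response time of each target from network cost and compute capacity, and pick the smaller. The cost model and all numbers are assumptions for illustration, not the thesis's algorithm.

```python
# Decide where to offload a task by comparing estimated response times.

def response_time(task_cycles, data_bytes, bandwidth_bps, rtt_s, mips):
    transfer = data_bytes * 8 / bandwidth_bps + rtt_s
    compute = task_cycles / (mips * 1e6)
    return transfer + compute

task = {"cycles": 4e9, "data": 2e6}   # 4 Gcycles of work, 2 MB of input

# Cloudlet: nearby (low RTT, high bandwidth) but less compute power.
t_cloudlet = response_time(task["cycles"], task["data"],
                           bandwidth_bps=50e6, rtt_s=0.005, mips=2000)
# Clone in a remote cloud: more compute, but higher network cost.
t_clone = response_time(task["cycles"], task["data"],
                        bandwidth_bps=10e6, rtt_s=0.08, mips=8000)

target = "cloudlet" if t_cloudlet <= t_clone else "clone"
print(target, round(t_cloudlet, 3), round(t_clone, 3))
```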
100

Securing access to cloud computing for critical infrastructure

Younis, Y. A. January 2015.
Cloud computing offers cost-effective services on demand, which encourages critical infrastructure providers to consider migrating to the cloud. Critical infrastructures, such as power plants and water systems, are considered the backbone of modern societies. Information in cloud computing is likely to be shared among different entities and can have various degrees of sensitivity, which requires robust isolation and access control mechanisms. Although various access control models and policies have been developed, they cannot fulfil the requirements of a cloud-based access control system, because cloud computing has a diverse set of security requirements and unique security challenges, such as multi-tenancy and the heterogeneity of security policies, rules and domains. This thesis provides a detailed study of cloud computing security challenges and threats, which was used to identify security requirements for various critical infrastructure providers. We found that an access control system is a crucial security requirement for the surveyed critical infrastructure providers. Furthermore, the requirement analysis was used to propose new criteria for evaluating access control systems for cloud computing. This work then presents a new cloud-based access control model to meet the identified cloud access control requirements. The model not only ensures the secure sharing of resources among potentially untrusted tenants, but also supports different access permissions for the same cloud user. A particular concern addressed by the proposed model is the lack of data isolation at lower levels (CPU caches), which could allow access control models to be bypassed and sensitive information obtained through cache side-channel attacks. The thesis therefore investigates various real attack scenarios and the gaps in existing mitigation approaches. It presents a new Prime and Probe cache side-channel attack, which can give detailed information about the addresses accessed by a virtual machine with no need for any information about the cache sets the virtual machine accesses. The design, implementation and evaluation of a proposed solution for preventing cache side-channel attacks are also presented: a new lightweight solution that introduces very low overhead (less than 15,000 CPU cycles), can be applied in any operating system, and prevents cache side-channel attacks in cloud computing. The thesis also presents a new solution for detecting cache side-channel attacks, which focuses on the infrastructure used to host cloud computing tenants and counts the cache misses caused by a virtual machine. The detection solution has a 0% false-negative rate and a 15% false-positive rate.
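The detection idea can be sketched schematically: sample a cache-miss counter per virtual machine at fixed intervals and flag a VM whose miss rate spikes far above its recent baseline, since Prime and Probe attacks repeatedly evict whole cache sets. A real implementation would read hardware performance counters (e.g. via perf); the numbers and the threshold rule below are illustrative assumptions, not the thesis's tuning.

```python
# Flag sampling intervals whose cache-miss count greatly exceeds the
# statistics of the preceding window.
from statistics import mean, stdev

def detect_anomalies(miss_samples, window=5, k=3.0):
    """Return indices of samples exceeding mean + k*stddev of the
    preceding `window` samples."""
    alerts = []
    for i in range(window, len(miss_samples)):
        history = miss_samples[i - window:i]
        threshold = mean(history) + k * stdev(history)
        if miss_samples[i] > threshold:
            alerts.append(i)
    return alerts

# Simulated per-interval miss counts for one VM: a quiet baseline
# followed by a burst typical of repeated priming and probing.
samples = [1200, 1100, 1250, 1180, 1220, 1190, 9800, 10200, 1210]
print(detect_anomalies(samples))  # -> [6], the onset of the burst
```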
