  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Improving Desktop System Security Using Compartmentalization

January 2018 (has links)
abstract: Compartmentalizing access to content, be it websites accessed in a browser or documents and applications accessed outside the browser, is an established method for protecting information integrity [12, 19, 21, 60]. Compartmentalization solutions change the user experience, introduce performance overhead, and provide varying degrees of security. Striking a balance between usability and security is not an easy task. If the usability aspects are neglected or sacrificed in favor of more security, the resulting solution will have a hard time being adopted by end-users. Usability is affected by factors including (1) the generality of the solution in supporting various applications, (2) the type of changes required, (3) the performance overhead introduced by the solution, and (4) how much of the user experience is preserved. Security is affected by factors including (1) the attack surface of the compartmentalization mechanism, and (2) the security decisions offloaded to the user. This dissertation evaluates existing solutions based on the above factors and presents two novel compartmentalization solutions that are arguably more practical than their existing counterparts. The first solution, called FlexICon, is an attractive alternative in the design space of compartmentalization solutions on the desktop. FlexICon allows for the creation of a large number of containers with a small memory footprint and low disk overhead. This is achieved by using lightweight virtualization based on Linux namespaces. FlexICon uses two mechanisms to reduce user mistakes: 1) a trusted file dialog for selecting files and launching them in the appropriate containers, and 2) a secure URL redirection mechanism that detects the user’s intent and opens the URL in the proper container. FlexICon also provides a language to specify the access constraints that should be enforced by various containers.
The second solution, called Auto-FBI, deals with web-based attacks by creating multiple instances of the browser and providing mechanisms for switching between the browser instances. The prototype implementation for Firefox and Chrome uses system call interposition to control the browser’s network access. Auto-FBI can be ported to other platforms easily due to its simple design and the ubiquity of system call interposition methods on all major desktop platforms. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2018
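The container-routing ideas in this abstract (an access-constraint language plus URL redirection into the proper container) can be sketched roughly as follows. This is a minimal illustration only: the rule syntax, container names, and URLs are invented and are not the actual FlexICon language from the dissertation.

```python
# Hypothetical sketch of a FlexICon-style rule format: each line names a
# container, an action, and a resource pattern. Not the real syntax.
from dataclasses import dataclass

@dataclass
class Rule:
    container: str
    resource: str   # URL or path prefix, '*' suffix means "any continuation"
    action: str     # "allow" or "deny"

def parse_rules(text):
    """Parse lines like: 'banking allow https://bank.example/*'."""
    rules = []
    for line in text.strip().splitlines():
        container, action, resource = line.split()
        rules.append(Rule(container, resource, action))
    return rules

def container_for_url(rules, url):
    """Redirect a URL to the first container whose allow rule matches it."""
    for r in rules:
        if r.action == "allow" and url.startswith(r.resource.rstrip("*")):
            return r.container
    return "default"   # unmatched URLs open in a throwaway container

rules = parse_rules("""
banking allow https://bank.example/*
work allow https://intranet.example/*
""")
print(container_for_url(rules, "https://bank.example/login"))  # banking
```

A real implementation would enforce the chosen container with namespace isolation rather than merely selecting a name, but the dispatch logic follows this shape.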
112

Performance comparison of Linux Containers (LXC) and OpenVZ during live migration : An experiment

Indukuri, Pavan Sutha Varma January 2016 (has links)
Context: Cloud computing is one of the most widely used technologies worldwide, providing numerous products and IT services. Virtualization is one of the innovative technologies in cloud computing, with the advantages of improved resource utilisation and management. Live migration is an innovative feature of virtualization that allows a virtual machine or container to be transferred from one physical server to another. Live migration is a complex process which can have a significant impact on cloud computing when used by cloud-based software. Objectives: In this study, live migration of LXC and OpenVZ containers has been performed, and the performance of LXC and OpenVZ has been compared in terms of total migration time and downtime. Further, CPU utilisation, disk utilisation, and the average load of the servers are evaluated during the process of live migration. The main aim of this research is to compare the performance of LXC and OpenVZ during live migration of containers. Methods: A literature study has been done to gain knowledge about the process of live migration and the metrics that are required to compare the performance of LXC and OpenVZ during live migration of containers. Further, an experiment has been conducted to compute and evaluate the performance metrics identified in the literature study. The experiment was designed and conducted, based on the objectives to be met, to investigate and evaluate the migration process for both LXC and OpenVZ. Results: The results of the experiments include the migration performance of both LXC and OpenVZ. The performance metrics identified in the literature review, total migration time and downtime, were evaluated for LXC and OpenVZ. Further, graphs were plotted for the CPU utilisation, disk utilisation, and average load during the live migration of containers.
The results were analysed to compare the performance differences between OpenVZ and LXC during live migration of containers. Conclusions: LXC has shown higher resource utilisation, and thus lower performance, when compared with OpenVZ. However, LXC has lower migration time and downtime when compared to OpenVZ.
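The two headline metrics compared in this thesis, total migration time and downtime, can be computed from timestamps of migration milestones. The event names and numbers below are invented for illustration; they are not measurements from the study.

```python
# Illustrative computation of the two live-migration metrics the thesis
# compares. A migration is timestamped at three hypothetical milestones:
# when migration starts, when the container is frozen on the source, and
# when it is restored (running again) on the target.
def migration_metrics(events):
    """events: dict of timestamps (seconds) for migration milestones."""
    total_migration_time = events["restored_on_target"] - events["migration_start"]
    downtime = events["restored_on_target"] - events["frozen_on_source"]
    return total_migration_time, downtime

events = {"migration_start": 0.0,
          "frozen_on_source": 8.2,      # pre-copy/state transfer before this
          "restored_on_target": 9.1}
total, down = migration_metrics(events)
print(f"total={total:.1f}s downtime={down:.1f}s")  # total=9.1s downtime=0.9s
```

The distinction matters because most of the migration window (state transfer while the container still runs) is invisible to clients; only the freeze-to-restore gap is user-visible downtime.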
113

Probabilistic Risk Assessment in Clouds: Models and Algorithms

Palhares, André Vitor de Almeida 08 March 2012 (has links)
Cloud reliance is critical to its success. Although fault-tolerance mechanisms are employed by cloud providers, there is always the possibility of failure of infrastructure components. We consequently need to think proactively about how to deal with the occurrence of failures, in an attempt to minimize their effects. In this work, we draw on the risk concept from probabilistic risk analysis in order to achieve this. In probabilistic risk analysis, consequence costs are associated with failure events of the target system, and failure probabilities are associated with infrastructural components. The risk is the expected consequence of the whole system. We use the risk concept to present representative mathematical models for which computational optimization problems are formulated and solved in a cloud computing environment. In these problems, consequence costs are associated with incoming applications that must be allocated in the cloud, and the risk is either seen as an objective function that must be minimized or as a constraint that should be limited. The proposed problems are solved either by optimal algorithm reductions or by approximation algorithms with provable performance guarantees. Finally, the models and problems are discussed from a more practical point of view, with examples of how to assess risk using these solutions.
Also, the solutions are evaluated and results on their performance are established, showing that they can be used in the effective planning of the Cloud.
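The core notion in this abstract, risk as the expected consequence cost over failure events, reduces to a short computation. The probabilities and costs below are made up for illustration only; the thesis formulates full optimization problems around this quantity.

```python
# Minimal numeric sketch of risk as expected consequence: each allocated
# application contributes failure_probability * consequence_cost, and the
# system risk is the sum. All numbers here are invented.
def expected_risk(allocations):
    """allocations: list of (failure_probability, consequence_cost) pairs."""
    return sum(p * c for p, c in allocations)

# Two candidate placements of the same applications onto components with
# different failure probabilities:
plan_a = [(0.01, 1000.0), (0.05, 200.0)]   # risk = 10 + 10 = 20
plan_b = [(0.02, 1000.0), (0.01, 200.0)]   # risk = 20 + 2  = 22
print(min((plan_a, plan_b), key=expected_risk) is plan_a)  # True
```

Treating risk as an objective (pick the placement minimizing this sum) or as a constraint (reject placements whose sum exceeds a budget) corresponds to the two problem variants the abstract mentions.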
114

Assessing performance and security in virtualized home residential gateways

Modig, Dennis January 2014 (has links)
Over the past years the use of digital devices has increased heavily, and home networks continue to grow in size and complexity. By the use of virtualized residential gateways, advanced functionality can be moved out of the home, decreasing the administrative burden for the home user. Using virtualized residential gateways instead of physical devices creates new challenges. This project looks into how the choice of virtualization technology impacts performance and security by investigating operating-system-level virtualization in contrast to full virtualization for use in home residential gateways. Results show that operating-system-level virtualization uses fewer resources in terms of disk, memory, and processor in virtualized residential gateways. The results also show that different choices of setup and virtualization technology give rise to different security issues, which have been analyzed in a lab environment. Recommendations regarding solutions to these security issues are proposed in the concluding parts of this thesis.
115

Virtualization of Data Centers : Case Study on Server Virtualization

Kappagantula, Sri Kasyap January 2018 (has links)
Nowadays, data centers use virtualization as a technique to provision independent virtual resources from the available physical hardware. Virtualization is implemented in data centers to maximize the utilization of physical hardware (which significantly reduces energy consumption and operating costs) without affecting the Quality of Service (QoS). The main objective of this thesis is to study the different network topologies used in data center architecture, to compare the QoS parameters of virtual servers against physical servers, and to identify which technology is better suited for virtualization. The research methodology used in this thesis is qualitative. To measure QoS, we take the latency, packet loss, and throughput of virtual servers under different virtualization technologies (KVM, ESXi, Hyper-V, Fusion, and VirtualBox) and compare their performance against the physical server. The work also investigates CPU and RAM utilization, comparing the behavior of physical and virtual servers under different load conditions. The results show that the virtual servers performed better in terms of resource utilization, latency, and response times when compared to the physical servers. But there are some factors, such as backup and recovery, VM sprawl, capacity planning, and building a private cloud, which should be addressed for the growth of virtual data centers. Parameters that affect the performance of the virtual servers are addressed, and the trade-off between the virtual and physical servers is established in terms of QoS aspects. The overall performance of virtual servers is effective when compared to the performance of physical servers.
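The three QoS measures this thesis compares (latency, packet loss, throughput) are simple aggregates over raw probe data. A hedged sketch of that bookkeeping follows; the sample values are invented, not measurements from the study.

```python
# Illustrative QoS summary from hypothetical probe data: round-trip-time
# samples, packet send/receive counters, and bytes received over a window.
def qos_summary(rtts_ms, sent, received, bytes_rx, seconds):
    latency = sum(rtts_ms) / len(rtts_ms)        # mean RTT in milliseconds
    loss = 100.0 * (sent - received) / sent      # packet loss in percent
    throughput = bytes_rx * 8 / seconds / 1e6    # megabits per second
    return latency, loss, throughput

lat, loss, thr = qos_summary([1.2, 1.4, 1.0], sent=100, received=98,
                             bytes_rx=12_500_000, seconds=10)
print(f"{lat:.1f} ms, {loss:.0f}% loss, {thr:.0f} Mbit/s")  # 1.2 ms, 2% loss, 10 Mbit/s
```

Running the same aggregation against a physical server and against each hypervisor's guest gives directly comparable numbers, which is the shape of the comparison the abstract describes.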
116

Virtualization of Data Centers : study on Server Energy Consumption Performance

Padala, Praneel Reddy January 2018 (has links)
For various reasons, data centers have become ubiquitous in our society. Energy costs are a significant portion of a data center's total lifetime costs, which makes them financially important to operators. This raises serious concern about the energy costs and environmental impact of data centers; power costs and energy efficiency are the major challenges in front of us. Of the overall energy used by IT, 15% is used by the networking portion of a data center. It is estimated that the energy used by network infrastructure in data centers worldwide is 15.6 billion kWh, and this is expected to increase to around 50%. Power costs and energy consumption play a major role throughout the lifetime of a data center, leading to increased financial costs for data center operators and increased usage of power resources. Resource utilization has therefore become a major issue in data centers. The main aim of this thesis is to find an efficient way to utilize resources and decrease the energy costs to operators in data centers using virtualization. Virtualization technology is used to deploy virtual servers on physical servers, which share the same resources and help to decrease the energy consumption of a data center.
117

Port of QEMU to HelenOS

Mareš, Jan January 2015 (has links)
QEMU is a machine emulator that is able to emulate the environment of various hardware platforms, including PC, PowerPC, ARM, and SPARC. The goal of this master thesis is to port QEMU to HelenOS, thus allowing developers to run the emulation of HelenOS inside HelenOS. The thesis contains a detailed analysis of the possible porting approaches (including the port of prerequisite libraries or their replacements) and an analysis of which features of QEMU (a reasonable subset of all of its features) are essential for achieving the goal and which can be omitted in the prototype implementation. The primary focus of the implementation is to support the PC (x86 and x86-64) guest environment. Although not part of the prototype implementation, the thesis also analyzes the requirements for running QEMU as a virtualization hypervisor in HelenOS.
118

Scheduling Design for Advance Virtual Network Services

Bai, Hao 16 November 2016 (has links)
Network virtualization allows operators to host multiple client services over their base physical infrastructures. Today, this technique is being used to support a wide range of applications in cloud computing services, content distribution, large data backup, etc. Accordingly, many different algorithms have been developed to achieve efficient mapping of client virtual network (VN) requests over physical topologies consisting of networking infrastructures and datacenter compute/storage resources. However, as applications continue to expand, there is a growing need to implement scheduling capabilities for virtual network demands in order to improve network resource utilization and guarantee quality of service (QoS) support. The topic of advance reservation (AR) has been studied for the case of scheduling point-to-point connection demands, and many different algorithms have been developed to support various reservation models and objectives. Nevertheless, few studies have looked at scheduling more complex "topology-level" demands, including virtual network services. Moreover, as cloud services expand, many providers want to ensure user quality support at future instants in time, e.g., for special events, sporting venues, conference meetings, etc. In light of the above, this dissertation presents one of the first studies on advance reservation of virtual network services. First, the fixed virtual overlay network scheduling problem is addressed as a special case of the more generalized virtual network scheduling problem, and a related optimization model is presented. Next, the complete virtual network scheduling problem is studied, and a range of heuristic and meta-heuristic solutions are proposed. Finally, some novel flexible advance reservation models are developed to improve service setup and network resource utilization.
The performance of these various solutions is evaluated using various methodologies (discrete event simulation and optimization tools), and comparisons are made with existing strategies.
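The advance reservation (AR) idea underlying this dissertation can be illustrated with a toy admission check: a demand is admitted for a future window only if capacity holds over every slot of that window. Time is discretized here, a single link is considered, and all numbers are invented; the actual problem in the dissertation reserves whole virtual network topologies, not single links.

```python
# Toy AR feasibility check: a demand (bandwidth, start, end) is admitted
# only if the remaining capacity suffices in every slot of its window.
def admit(schedule, capacity, bw, start, end):
    """schedule: bandwidth already reserved per time slot (list of floats)."""
    if any(schedule[t] + bw > capacity for t in range(start, end)):
        return False                       # would exceed capacity somewhere
    for t in range(start, end):
        schedule[t] += bw                  # commit the reservation
    return True

slots = [0.0] * 24                         # one day in hourly slots, 10 Gb/s link
print(admit(slots, 10.0, 6.0, 9, 12))      # True
print(admit(slots, 10.0, 6.0, 10, 14))     # False: slots 10-11 would hold 12 > 10
```

The flexible AR models the abstract mentions generalize this by allowing the start time (or the topology mapping) to shift, which is what creates room for heuristic and meta-heuristic search.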
119

Resilience of Cloud Networking Services for Large Scale Outages

Pourvali, Mahsa 06 April 2017 (has links)
Cloud infrastructure services are enabling organizations and enterprises to outsource a wide range of computing, storage, and networking needs to external service providers. These offerings make extensive use of underlying network virtualization, i.e., virtual network (VN) embedding, techniques to provision and interconnect customized storage/computing resource pools across large network substrates. However, as cloud-based services continue to gain traction, there is a growing need to address a range of resiliency concerns, particularly with regard to large-scale outages. These conditions can be triggered by events such as natural disasters, malicious man-made attacks, and even cascading power failures. Overall, a wide range of studies have looked at network virtualization survivability, with most efforts focusing on pre-fault protection strategies to set aside backup datacenter and network bandwidth resources. These contributions include single node/link failure schemes as well as recent studies on correlated multi-failure "disaster" recovery schemes. However, pre-fault provisioning is very resource-intensive and imposes high costs on clients. Moreover, this approach cannot guarantee recovery under generalized multi-failure conditions. Although post-fault restoration (remapping) schemes have also been studied, the effectiveness of these methods is constrained by the scale of infrastructure damage. As a result, there is a pressing need to investigate longer-term post-fault infrastructure repair strategies to minimize VN service disruption. However, this is a largely unexplored area and requires specialized consideration, as damaged infrastructures will likely be repaired in a time-staged, incremental manner, i.e., via progressive recovery. Furthermore, more specialized multicast VN (MVN) services are also being used to support a range of content distribution and real-time streaming needs over cloud-based infrastructures.
In general, these one-to-many services impose more challenging requirements in terms of geographic coverage, delay, delay variation, and reliability. Some recent studies have looked at MVN embedding and survivability design. In particular, the latter contributions cover both pre-fault protection and post-fault restoration methods, and also include some multi-failure recovery techniques. Nevertheless, there are no known efforts that incorporate risk vulnerabilities into the MVN embedding process. Indeed, there is a strong need to develop such methods in order to reduce the impact of large-scale outages, and this remains an open topic area. In light of the above, this dissertation develops novel solutions to further improve the resiliency of network virtualization services in the presence of large outages. Foremost, new multi-stage (progressive) infrastructure repair strategies are proposed to improve the post-fault recovery of VN services. These contributions include advanced simulated annealing metaheuristics as well as more scalable polynomial-time heuristic algorithms. Furthermore, enhanced "risk-aware" mapping solutions are also developed to achieve more reliable multicast (MVN) embedding, providing a further basis for developing more specialized repair strategies in the future. The performance of these various solutions is evaluated extensively using custom-developed simulation models.
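Progressive recovery, as described in this abstract, orders repairs over time. One plausible greedy baseline (not the dissertation's actual metaheuristics, which use simulated annealing and polynomial-time heuristics) ranks damaged elements by restored demand per unit repair cost. All inputs below are invented.

```python
# Hypothetical greedy sketch of time-staged (progressive) repair ordering:
# at each stage, repair the damaged element yielding the most restored VN
# demand per unit of repair cost.
def repair_order(damaged):
    """damaged: dict element -> (restored_demand, repair_cost)."""
    return sorted(damaged,
                  key=lambda e: damaged[e][0] / damaged[e][1],
                  reverse=True)

damaged = {"link-a": (40.0, 2.0),    # 20 units of demand per cost unit
           "link-b": (90.0, 10.0),   # 9 per cost unit
           "node-c": (30.0, 1.0)}    # 30 per cost unit
print(repair_order(damaged))  # ['node-c', 'link-a', 'link-b']
```

A greedy ratio rule like this ignores dependencies between repairs (restoring a link may only help once an upstream node is back), which is precisely why the dissertation resorts to metaheuristic search for the real problem.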
120

Utilization of Dynamic Attributes in Resource Discovery for Network Virtualization

Amarasinghe, Heli January 2012 (has links)
The success of the internet over the last few decades has mainly depended on various infrastructure technologies to run distributed applications. Due to the diversified, multi-provider nature of the internet, radical architectural improvements that require mutual agreement between infrastructure providers have become highly impractical. This escalating resistance to further growth has created a rising demand for new approaches to address this challenge. Network virtualization is regarded as a prominent solution to surmount these limitations. It decouples the conventional internet service provider's role into infrastructure provider (InP) and service provider (SP), and introduces a third player, known as the virtual network provider (VNP), which creates virtual networks (VNs). Resource discovery aims to assist the VNP in selecting the InP that has the best matching resources for a particular VN request. In the current literature, resource discovery focuses mainly on static attributes of network resources, reflecting the fact that using dynamic attributes imposes significant overhead on the network itself. In this thesis we propose a resource discovery approach that is capable of utilizing dynamic resource attributes to enhance resource discovery and increase the overall efficiency of VN creation. We recognize that resource discovery techniques should be fast and cost-efficient enough not to impose any significant load. Hence, our proposed scheme calculates aggregation values of the dynamic attributes of the substrate resources. By comparing these aggregation values to the VN requirements, a set of potential InPs is selected; the potential InPs satisfy the basic VN embedding requirements. Moreover, we propose further enhancements to the dynamic attribute monitoring process using a vector-based aggregation approach.
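The aggregation idea in this abstract, shortlisting InPs by comparing aggregate values of dynamic attributes against VN requirements, can be sketched as below. The InP names, the choice of free CPU as the dynamic attribute, the mean as the aggregate, and the threshold are all illustrative assumptions, not the thesis's actual scheme.

```python
# Sketch of aggregation-based resource discovery: each InP advertises an
# aggregate of a dynamic attribute (here: mean free CPU across its nodes),
# and the VNP shortlists InPs whose aggregate meets the VN request.
def shortlist(inps, vn_request):
    """inps: name -> list of per-node free-CPU samples (0.0 to 1.0)."""
    potential = []
    for name, free_cpu in inps.items():
        aggregate = sum(free_cpu) / len(free_cpu)   # one number per InP
        if aggregate >= vn_request["min_mean_free_cpu"]:
            potential.append(name)
    return potential

inps = {"inp-1": [0.8, 0.6, 0.7],
        "inp-2": [0.2, 0.3, 0.1]}
print(shortlist(inps, {"min_mean_free_cpu": 0.5}))  # ['inp-1']
```

Transmitting one aggregate per InP instead of per-node measurements is what keeps the monitoring overhead low, which is the cost argument the abstract makes for aggregation.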
