1. Architectures for secure cloud computing servers. Szefer, Jakub M., 27 November 2013.
Cloud computing, enabled by virtualization technologies, has become an important computing paradigm. However, by choosing the cloud computing model, customers give up control, e.g. over the system software, of the servers where their code executes and where their data is stored. In this dissertation, we propose to leverage server hardware to provide protections for the code and data inside a customer's virtual machines on remote cloud servers. In particular, this dissertation explores a threat that has not been addressed by researchers before: that of the virtualization (system) software becoming compromised or malicious and attacking other virtual machines on the server. The high-level goal is to make code and data executing in a remote virtual machine as secure as if it were executing on a dedicated server inside the customer's own office, despite the customer's lack of control over the system software. The first new research direction we present is hypervisor-free virtualization, realized in the NoHype architecture. Hypervisor-free virtualization takes the novel approach of removing the need for a virtualization layer during a virtual machine's runtime. This eliminates the attack surface from potentially malicious virtual machines to the virtualization layer and reduces the attackers' means of gaining virtualization-layer privileges that they could then use to compromise the rest of the system. Hypervisor-free virtualization can be realized on existing hardware. The second new research direction we present is hypervisor-secure virtualization, realized in the HyperWall architecture. The architecture proposes new hardware so that an untrusted virtualization layer can dynamically manage server resources, such as memory allocation, while the confidentiality and integrity of virtual machine memory are protected. We also present hardware trust evidence mechanisms, which can be used to attest to the customer the configuration and enforcement of the protections of their virtual machines. The last part of this dissertation presents a new security verification methodology. Our methodology can be used to help check the correctness of hardware-software security architectures. Performing security verification, which is different from functional verification, can help find security bugs and facilitate committing designs to hardware.
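As a rough illustration of the trust-evidence idea described in this abstract (and not HyperWall's actual hardware mechanism), the sketch below shows a customer checking a signed measurement of a VM's declared protection configuration. The key handling, function names, and policy table are hypothetical.

```python
# Illustrative sketch only: a simplified attestation check in the spirit of
# hardware trust evidence. Names (measurement, sign_evidence) are hypothetical
# and not taken from the HyperWall design.
import hashlib
import hmac


def measurement(vm_id: str, protection_table: dict) -> bytes:
    """Hash the VM identity together with its declared memory-protection map."""
    h = hashlib.sha256()
    h.update(vm_id.encode())
    for page, policy in sorted(protection_table.items()):
        h.update(f"{page}:{policy}".encode())
    return h.digest()


def sign_evidence(hw_key: bytes, digest: bytes) -> bytes:
    """Stand-in for a hardware-held signing key producing trust evidence."""
    return hmac.new(hw_key, digest, hashlib.sha256).digest()


def customer_verifies(hw_key: bytes, evidence: bytes, vm_id: str, expected_table: dict) -> bool:
    """Customer recomputes the expected measurement and checks the evidence."""
    expected = sign_evidence(hw_key, measurement(vm_id, expected_table))
    return hmac.compare_digest(evidence, expected)


if __name__ == "__main__":
    key = b"per-chip-secret"                      # held by hardware in the real setting
    table = {0x1000: "deny-hypervisor", 0x2000: "deny-dma"}
    ev = sign_evidence(key, measurement("vm-42", table))
    print(customer_verifies(key, ev, "vm-42", table))              # True
    print(customer_verifies(key, ev, "vm-42", {0x1000: "allow"}))  # False
```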
2. A study on virtualization technology and its impact on computer hardware. Semnanian, Amir Ali, 09 August 2013.
Underutilization of hardware is one of the challenges that large organizations have been trying to overcome. Most of today's computer hardware is designed and architected to host a single operating system and application. Virtualization is the primary solution to this problem. Virtualization is the capability of a system to host multiple virtual computers while running on a single hardware platform. This has both advantages and disadvantages. This thesis concentrates on introducing virtualization technology and comparing the different techniques through which virtualization is achieved. It examines how computer hardware can be virtualized and the impact virtualization has on different parts of the system. This study evaluates the changes necessary to hardware architectures when virtualization is used. The thesis provides an analysis of the benefits this technology conveys to the computer industry and the disadvantages that accompany this new solution. Finally, the future of virtualization technology and how it can affect the infrastructure of an organization are evaluated.
3. Quality of service (QoS) in software-as-a-service (SaaS). Gupta, Sonalika, 20 August 2013.
Quality of Service (QoS) plays a key role in the successful development of software-as-a-service (SaaS). A quality model needs to be devised to ensure customer satisfaction and to set an expected level of quality standards. In this research, several journals and research papers were reviewed, and the quality models available for service-oriented architectures (SOAs) were analyzed. Much research has been done on SOA systems, but more QoS models are needed for SaaS. In this thesis, the quality attributes for SaaS were examined and reported, and an ontology model for QoS in SaaS was proposed. Service level agreements of companies that implement SaaS were studied and compared in order to identify current market trends in the field of SaaS quality. The thesis provides a method to compare the QoS of various SaaS service providers. The implementation of the proposed ontology model will help improve the current standard of quality for software available as a service.
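To make the notion of comparing providers' QoS concrete, here is a minimal sketch of a weighted score over quality attributes. The attributes, weights, and values are invented examples and are not taken from the thesis or from any real SLA.

```python
# Illustrative sketch only: comparing SaaS providers with a weighted QoS score.
# All attribute names, weights, and figures below are hypothetical.
WEIGHTS = {"availability": 0.4, "response_time": 0.3, "support": 0.2, "security": 0.1}

# Normalized attribute scores in [0, 1]; higher is better.
providers = {
    "provider_a": {"availability": 0.999, "response_time": 0.80, "support": 0.70, "security": 0.90},
    "provider_b": {"availability": 0.990, "response_time": 0.95, "support": 0.60, "security": 0.85},
}


def qos_score(attrs: dict) -> float:
    """Weighted sum of the normalized quality attributes."""
    return sum(WEIGHTS[k] * attrs[k] for k in WEIGHTS)


if __name__ == "__main__":
    ranked = sorted(providers.items(), key=lambda kv: qos_score(kv[1]), reverse=True)
    for name, attrs in ranked:
        print(f"{name}: {qos_score(attrs):.3f}")
```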
4. Reliable SRAM Fingerprinting. Kim, Joonsoo, 20 September 2013.
Device identification, like human identification, has become critical to mitigating growing security problems. In the era of ubiquitous computing, it is important to ensure universal device identities that are versatile in a number of ways, for example to enhance computer security or to enable large-scale data capture, management, and analysis. Simple labeling works only if labels are properly managed under a highly controlled environment. Hard-coded serial numbers can be stored in non-volatile memory, but this is known to be expensive and vulnerable to security attacks. Hence, it is desirable to develop reliable and secure device identification methods using fingerprint-like characteristics of electronic devices.

As technology scales, process variation has become the most critical barrier to overcome for modern chip development. Ironically, some research works exploit this aggressive process variation for the identification of individual devices, finding measurable physical characteristics that are unique to each integrated circuit. Among them, device identification using the initial power-up values of SRAM cells, called SRAM fingerprints, has been emphasized lately, in part due to the abundant availability of SRAM cells in modern microprocessors. More importantly, since the cross-coupled inverter structure of each SRAM cell amplifies even small mismatches between its two inverter nodes, it is very sensitive to and maximizes the effect of random process variation, giving SRAM fingerprints the features needed for a naturally inherent device ID.

This work therefore focuses on achieving reliable device identification using SRAM fingerprints. To date, this dissertation presents the most comprehensive feature characterization of SRAM fingerprints, based on large datasets measured from real devices under various environmental conditions. SRAM fingerprints in three different process technologies (IBM 32nm SOI, IBM 65nm bulk, and TSMC 90nm low-k dielectric) have been investigated across different temperatures and voltages. Using formal statistical tools, the features required for SRAM fingerprints to be usable as device IDs (uniqueness, randomness, independence, reproducibility, etc.) have been empirically demonstrated.

As some previous works have noted, the initial states of SRAM cells are inherently unreliable, so there is always some chance of error during the identification process. It is observed that, under environmental variations, this instability worsens. Most previous work, however, ignores the temperature dependence of SRAM power-up values, which, contrary to past speculation, turns out to be critical and poses a real challenge to realizing reliable SRAM-based device identification. Note that temperature variation is not negligible in many situations, for example the authentication of widely distributed sensors.

We show that it is possible to achieve an SRAM-based device identification system that operates reliably over a wide range of temperatures. The proposed system is composed of three major steps: enrollment, system evaluation, and matching. During enrollment, power-up samples of SRAM fingerprints are captured from each manufactured device, and the feature information, or characterization identifier (CID), is extracted to generate a representative fingerprint value associated with the product device. By collecting the samples and the CIDs, a system database is constructed before devices are distributed to the field. During matching, we take a single sample fingerprint from a power-cycle experiment, the field identifier (FID), and match it against the repository of CIDs of all manufactured devices. An additional monitoring subsystem, called system evaluation, estimates the system accuracy from the system database and controls the system parameters while maintaining the system accuracy requirement.

This work delivers a complete statistical framework that raises the design issues of each step and provides systematic solutions to these inter-related issues. We provide statistical methods to determine the sample size for the enrollment of chip identities, to generate representative fingerprint features from a limited number of test samples, and to estimate the system performance along with the proposed system parameter values and the confidence interval of the estimate. A novel matching scheme is proposed to improve the system accuracy and increase population coverage under environmental variations, especially temperature variation. Several advanced mechanisms that exploit the instability for our benefit are also discussed, along with supporting state-of-the-art circuit technologies. All of these pioneering theoretical frameworks have been validated by comprehensive empirical analysis based on the real SRAM fingerprint datasets introduced earlier.

The main contribution is a comprehensive interdisciplinary framework that enables reliable SRAM fingerprinting even though the fingerprint, depending on ambient conditions, exhibits nondeterministic behavior. Furthermore, the interdisciplinary bases introduced in this work are expected to provide generic methodologies that apply to device fingerprints in general, not just to SRAM fingerprints. (Abstract shortened by UMI.)
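The following sketch illustrates the flavor of enrollment and matching with a simple majority-vote CID and Hamming-distance identification. The bit length, sample counts, noise model, and threshold are arbitrary choices for illustration, not the dissertation's statistical procedure.

```python
# Illustrative sketch only: majority-vote enrollment and Hamming-distance
# matching for SRAM power-up fingerprints. Parameters here are made up.
import random

N_BITS = 256


def enroll(samples: list[list[int]]) -> list[int]:
    """CID: per-bit majority vote over repeated power-up samples of one device."""
    half = len(samples) / 2
    return [1 if sum(col) > half else 0 for col in zip(*samples)]


def hamming(a: list[int], b: list[int]) -> int:
    return sum(x != y for x, y in zip(a, b))


def identify(fid: list[int], database: dict[str, list[int]], threshold: int):
    """Match a field fingerprint (FID) against all enrolled CIDs."""
    name, cid = min(database.items(), key=lambda kv: hamming(fid, kv[1]))
    return name if hamming(fid, cid) <= threshold else None


if __name__ == "__main__":
    random.seed(0)
    true_bits = {d: [random.randint(0, 1) for _ in range(N_BITS)] for d in ("chip0", "chip1")}

    def power_up(device: str, noise: float = 0.05) -> list[int]:
        # Each power-up flips a small fraction of bits (unstable cells).
        return [b ^ (random.random() < noise) for b in true_bits[device]]

    db = {d: enroll([power_up(d) for _ in range(20)]) for d in true_bits}
    print(identify(power_up("chip1"), db, threshold=40))  # expected: chip1
```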
5. Performance evaluation of routing protocols in finding stable paths in VANET. Ibrahim, Mohamed Elsaid Awad, 01 May 2015.
As technology advances, many developers have worked to improve road safety by designing systems such as Vehicular Ad Hoc Networks (VANETs). VANETs provide continuous vehicle-to-vehicle communication between vehicles at close range along the roads in order to prevent road accidents. They also alert vehicles to events occurring in their surroundings by sharing information between vehicles (V2V) and between vehicles and stationary roadside infrastructure (V2I). The MAC sub-layer protocol is a common concern when designing VANET devices, because a VANET cannot help prevent road accidents when messages fail to reach the other party: path breaks cause inconsistency and packet delays from source to destination. Several improvements can be made so that a VANET is effective and efficient at improving road safety through inter-vehicle communication and communication with stationary VANET devices installed along the roads. Since VANETs operate in a wireless environment, they are also subject to interference from other wireless devices such as mobile phones, laptops, and other equipment installed in vehicles. This thesis evaluated the performance of multiple routing protocols for mobile ad hoc networks (MANETs) to assess their ability to find stable paths. The evaluation led to practical suggestions on how to design better routing protocols for VANETs.

Keywords: MAC sub-layer, VANET, RTS, CTS, PASTA, TDMA, and V2V/V2I
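As one concrete example of a "stable path" ingredient, the sketch below estimates how long a V2V link will survive from vehicle positions and velocities under an assumed radio range. The formula, range, and values are generic illustrations, not the protocols evaluated in this thesis.

```python
# Illustrative sketch only: link expiration time between two vehicles, a common
# building block for stable-path routing metrics. All values are made up.
import math


def link_expiration_time(p1, v1, p2, v2, radio_range: float) -> float:
    """Time until the two vehicles move out of radio range (inf if never)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    a = dvx * dvx + dvy * dvy              # |relative velocity|^2
    b = 2 * (dx * dvx + dy * dvy)
    c = dx * dx + dy * dy - radio_range ** 2
    if c > 0:
        return 0.0                          # already out of range
    if a == 0:
        return math.inf                     # same velocity, link never breaks
    disc = b * b - 4 * a * c
    return (-b + math.sqrt(disc)) / (2 * a)


def path_stability(link_lifetimes: list[float]) -> float:
    """A path is only as stable as its shortest-lived link."""
    return min(link_lifetimes)


if __name__ == "__main__":
    t1 = link_expiration_time((0, 0), (20, 0), (50, 0), (15, 0), radio_range=250)
    t2 = link_expiration_time((0, 0), (20, 0), (100, 10), (25, 0), radio_range=250)
    print(round(t1, 1), round(t2, 1), round(path_stability([t1, t2]), 1))
```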
6. QoS-Aware Data Query and Dissemination in Mobile Opportunistic Networks. Liu, Yang, 07 April 2015.
Mobile opportunistic networks are formed by mobile users who share similar interests and connect with one another via Bluetooth and/or WiFi connections. Such networks not only resemble real-world interactions between people, but can also effectively propagate data among mobile users. This dissertation focuses on QoS-aware data query and dissemination in mobile opportunistic networks.

Firstly, I develop a distributed data query protocol for practical applications. To demonstrate the feasibility and efficiency of the proposed scheme and to gain useful empirical insights, I carry out a testbed experiment using 25 off-the-shelf Dell Streak tablets over a period of 15 days. Moreover, extensive simulations are carried out to learn the performance trends under various network settings that are not practical to build and evaluate in a laboratory.

Secondly, the QoS-aware delivery probability (QDP) is introduced to reflect the capability of a node to deliver data to a destination within a given delay budget. Two experiments are carried out to demonstrate and evaluate the proposed QoS-aware data delivery scheme. Moreover, simulation results are obtained under the DieselNet trace and a power-law mobility model to study scalability and performance trends. Our experiments and simulations demonstrate that the proposed scheme allocates resources efficiently according to the desired delay budget, and thus supports effective QoS provisioning.

Finally, I study the problem of delay-constrained least-cost multicast in mobile opportunistic networks. I formally formulate the problem and show that it is NP-complete. Given its NP-completeness, I explore efficient and scalable heuristic solutions. I first introduce a centralized heuristic algorithm that aims to discover a multicast tree meeting the delay constraint at low communication cost. I then develop a distributed online algorithm that makes an efficient decision on every transmission opportunity. I prototype the proposed distributed online multicast algorithm on Nexus tablets and conduct an experiment involving 37 volunteers over 21 days to demonstrate its effectiveness. I also carry out simulations to evaluate the scalability of the proposed schemes.
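A minimal sketch of a QDP-style computation is shown below, assuming exponentially distributed inter-contact times with the destination. The estimator and the forwarding rule are simplifications for illustration rather than the dissertation's actual scheme.

```python
# Illustrative sketch only: a delivery-probability metric under an assumed
# exponential inter-contact model, used here in a toy forwarding decision.
import math


def contact_rate(inter_contact_times: list[float]) -> float:
    """Estimate the contact rate (per hour) from observed inter-contact times."""
    return len(inter_contact_times) / sum(inter_contact_times)


def delivery_probability(rate: float, delay_budget: float) -> float:
    """P(next contact with the destination occurs within the delay budget)."""
    return 1.0 - math.exp(-rate * delay_budget)


def should_forward(my_history: list[float], peer_history: list[float], delay_budget: float) -> bool:
    """Hand the data to a peer only if its delivery probability is higher than ours."""
    mine = delivery_probability(contact_rate(my_history), delay_budget)
    theirs = delivery_probability(contact_rate(peer_history), delay_budget)
    return theirs > mine


if __name__ == "__main__":
    mine = [10.0, 14.0, 9.0, 12.0]   # hours between my meetings with the destination
    peer = [2.0, 3.5, 1.0, 2.5]      # the peer meets the destination far more often
    print(should_forward(mine, peer, delay_budget=4.0))  # True
```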
7. Malware Vectors: A Technique for Discovering Defense Logics. Stocco, Gabriel Fortunato, 10 March 2015.
Organizations face cyber attacks of increasing sophistication, yet detection measures have not kept pace with advances in attack design. Common detection systems use detection rules or heuristics based on the behaviors of known previous attacks, often crafted manually. The result is a defensive system that is both too sensitive, producing many false positives, and not sensitive enough, missing new attacks.

Building upon our work developing the Covertness Capability Calculus, we propose Malware Vectors, a technique for discovering defense logic via remote probing. Malware Vectors builds malware by discovering observables that can be generated without triggering detection: it generates probes to establish a vector of acceptable observable values that an attack may produce while remaining undetected. We test attacks against an unknown defense logic and show that it is trivial to discover a covert way to carry out an attack. We extend this simulation to randomly generated defense logics and find that, without a change in underlying strategy, defenders cannot significantly improve their position. Further, we find that in most cases the full defense logic can be discovered efficiently using only membership queries. Finally, we propose some techniques that a defender could implement to attempt to defend against the Malware Vectors technique.
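The probing idea can be illustrated with a toy example: an attacker issues membership queries against an unknown threshold-style detector to learn how much of each observable stays covert. The defense model, observables, and names below are invented stand-ins, not the paper's Covertness Capability Calculus.

```python
# Illustrative sketch only: membership-query probing of a hidden threshold
# detector to recover a vector of covert observable values.
import random

random.seed(1)
OBSERVABLES = ["bytes_exfiltrated", "registry_writes", "new_connections"]
_HIDDEN_THRESHOLDS = {o: random.randint(10, 100) for o in OBSERVABLES}  # unknown to the attacker


def detected(observation: dict) -> bool:
    """Membership query: does this combination of observables trigger the defense?"""
    return any(observation[o] > _HIDDEN_THRESHOLDS[o] for o in OBSERVABLES)


def probe_max_covert(observable: str, upper: int = 1000) -> int:
    """Binary-search the largest value of one observable that stays undetected."""
    lo, hi = 0, upper
    while lo < hi:
        mid = (lo + hi + 1) // 2
        probe = {o: 0 for o in OBSERVABLES}
        probe[observable] = mid
        if detected(probe):
            hi = mid - 1
        else:
            lo = mid
    return lo


if __name__ == "__main__":
    covert_vector = {o: probe_max_covert(o) for o in OBSERVABLES}
    print("covert vector:", covert_vector)
    print("recovered thresholds exactly:", covert_vector == _HIDDEN_THRESHOLDS)
```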
8. Social network coding rate control in information centric delay tolerant networks. Wood, Samuel Bennett, 19 February 2015.
Tactical and emergency-response networks require efficient communication without a managed infrastructure in order to meet the requirements of mission-critical applications. In these networks, mobility, disruption, limited network resources, and limited host resources are the norm rather than the exception. Despite these constraints, applications must quickly and reliably share data collected from their environment to allow users to coordinate and make critical decisions. Our previous work demonstrates that applying information-centric paradigms at the tactical edge can provide performance benefits over traditional address-centric approaches. We expand on this work and investigate how social relationships can be inferred and exploited to improve network performance in volatile networks.

As a result of this investigation, we propose SOCRATIC (SOCial RATe control for Information Centric networks), a novel approach to dissemination that unifies replication and network coding and takes advantage of social content and context heuristics to improve network performance. SOCRATIC replicates network-coded blocks according to a popularity index metric that captures social relationships and is shared during neighbor discovery. The number of encoded blocks relayed to a node depends on its interest in the data object and on its popularity index, i.e., how often and for how long it meets other nodes. We observe that nodes with similar interests tend to be co-located, and we exploit this through a generalized data-object-to-interest matching function that quantifies the similarity. Encoded blocks are subsequently replicated toward the subscriber if a stable path exists. We evaluate an implementation of SOCRATIC through a detailed network emulation of a tactical scenario and demonstrate that it achieves better performance than existing socially agnostic approaches.
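As an illustration of how a popularity index and interest similarity might drive block allocation, the sketch below uses simple averaged contact statistics and Jaccard similarity. The actual SOCRATIC weighting and protocol details differ, and all numbers and parameter names here are hypothetical.

```python
# Illustrative sketch only: a popularity-index-style metric combined with
# interest similarity to decide how many coded blocks to hand to a neighbor.
def popularity_index(contacts: list[tuple[float, float]], alpha: float = 0.5) -> float:
    """Blend how often a node meets others with how long those meetings last.

    contacts: list of (meetings_per_day, avg_meeting_minutes) with distinct peers.
    """
    if not contacts:
        return 0.0
    freq = sum(f for f, _ in contacts) / len(contacts)
    dur = sum(d for _, d in contacts) / len(contacts)
    return alpha * freq + (1 - alpha) * dur / 60.0


def interest_similarity(object_tags: set[str], node_interests: set[str]) -> float:
    """Jaccard similarity between a data object's tags and a node's interests."""
    if not object_tags or not node_interests:
        return 0.0
    return len(object_tags & node_interests) / len(object_tags | node_interests)


def blocks_to_relay(total_blocks: int, pop: float, sim: float, pop_scale: float = 5.0) -> int:
    """Give more coded blocks to popular, interested neighbors (capped at all blocks)."""
    share = min(1.0, pop / pop_scale) * sim
    return round(total_blocks * share)


if __name__ == "__main__":
    neighbor_contacts = [(4.0, 30.0), (6.0, 10.0), (2.0, 45.0)]
    pop = popularity_index(neighbor_contacts)
    sim = interest_similarity({"map", "casualty", "sector7"}, {"map", "sector7", "logistics"})
    print(round(pop, 2), round(sim, 2), blocks_to_relay(total_blocks=16, pop=pop, sim=sim))
```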
9. Network models for multiprogramming computer systems. Sencer, Mehmet Akin, January 1974.
Abstract not available.
10. Architecture and programming paradigm for a scalable, metamorphic and cloud-collaborative user environment. Tropper, Robin, January 2010.
A growing number of enterprise applications on the Internet ranging from banking transactions to business management make use of real-time collaboration. Simultaneous access from any device to any set of applications shared among many users is a hot area of research and development.
This thesis presents the design of a thick client for real-time collaboration that supports application development and interoperability. It introduces a new programming paradigm, algorithms, and protocols to bring real-time collaboration to a web-based platform. Its component-oriented, metamorphic architecture supports a run-time-scalable multi-desktop environment that connects client applications through automated remote procedure calls and the object request broker pattern, while providing new mechanisms for dynamic resource loading.
The new architecture supports unsolicited server control actions on the client, using an event model to simulate interruptions and sustained user activity during network failure. The results obtained validate the correctness of the approach and the feasibility of an extensible web-based platform for real-time collaboration.
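As a loose illustration of the object-request-broker dispatch and the offline event buffering described above (with class and method names invented for the example, not drawn from the thesis), consider this minimal sketch:

```python
# Illustrative sketch only: a miniature ORB-style dispatcher plus an event
# queue that buffers unsolicited server events while the client is "offline"
# and replays them on reconnect.
class Broker:
    def __init__(self):
        self._objects = {}

    def register(self, name, obj):
        self._objects[name] = obj

    def invoke(self, name, method, *args):
        """Remote-procedure-call style dispatch by object name and method name."""
        return getattr(self._objects[name], method)(*args)


class Client:
    def __init__(self, broker: Broker):
        self.broker = broker
        self.online = True
        self._pending = []          # server events buffered during network failure

    def push_event(self, event: str):
        """Unsolicited server control action; queued if the client is unreachable."""
        if self.online:
            print("apply:", event)
        else:
            self._pending.append(event)

    def reconnect(self):
        self.online = True
        for event in self._pending:
            print("replay:", event)
        self._pending.clear()


class Whiteboard:
    def __init__(self):
        self.shapes = []

    def add_shape(self, shape):
        self.shapes.append(shape)
        return len(self.shapes)


if __name__ == "__main__":
    broker = Broker()
    broker.register("whiteboard", Whiteboard())
    client = Client(broker)
    print(broker.invoke("whiteboard", "add_shape", "circle"))  # RPC-style call -> 1
    client.online = False
    client.push_event("resize desktop")                        # buffered while offline
    client.reconnect()                                          # replayed on reconnect
```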