1

Towards a Scalable Docker Registry

Littley, Michael Brian 29 June 2018 (has links)
Containers are an alternative to virtual machines and are rapidly gaining popularity due to their minimal overhead. To help facilitate their adoption, containers use management systems with central registries to store and distribute container images. However, these registries rely on other, preexisting services to provide load balancing and storage, which limits their scalability. This thesis introduces a new registry design for Docker, the most prevalent container management system. The new design coalesces all the services into a single, highly scalable registry. By increasing the scalability of the registry, the new design greatly decreases the distribution time for container images. This work also describes a new Docker registry benchmarking tool, the trace player, which uses real Docker registry workload traces to test the performance of new registry designs and setups. / Master of Science / Cloud services allow many different web applications to run on shared machines. The applications can be owned by a variety of customers to provide many different types of services. Because these applications are owned by different customers, they need to be isolated to ensure the users' privacy and security. Containers are one technology that can provide isolation to the applications on a single machine, and they are rapidly gaining popularity because they impose less overhead on the applications that use them. This means the applications run faster, with the same isolation guarantees as other isolation technologies. Containers also allow the cloud provider to run more applications on a single machine, letting it serve more customers. Docker is by far the most popular container management system on the market. It provides a registry service for containerized application storage and distribution. Users can store snapshots of their applications on the registry, and then use the snapshots to run multiple copies of the application on different machines.
As more and more users rely on the registry service, the registry slows down and pulling an application takes longer. This increases application start times, making it harder for users to scale their applications out to more machines to accommodate more customers of their services. This work creates a new registry design that allows the registry to handle more users and lets them retrieve their applications faster than is currently possible. This will allow them to more rapidly scale their applications out to more machines to handle more customers. The customers, in turn, will have a better experience.
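The abstract does not reproduce the trace player's implementation. As a rough illustration of the idea of replaying a registry workload trace while measuring per-request latency, here is a minimal, hypothetical sketch; the `TraceEntry` fields and the pluggable `send` callback are assumptions for illustration, not the thesis's actual design:

```python
import time
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TraceEntry:
    delay_s: float  # inter-arrival gap recorded in the trace
    method: str     # "GET" (image pull) or "PUT" (image push)
    blob: str       # content-addressed blob digest
    size: int       # payload size in bytes

def replay(trace: List[TraceEntry], send: Callable[[TraceEntry], None]) -> List[float]:
    """Replay a registry workload trace and return per-request latencies."""
    latencies = []
    for entry in trace:
        time.sleep(entry.delay_s)            # reproduce the traced request timing
        start = time.perf_counter()
        send(entry)                          # e.g. an HTTP GET/PUT against the registry
        latencies.append(time.perf_counter() - start)
    return latencies
```

In practice `send` would issue real HTTP requests against a registry endpoint; stubbing it out lets the same harness be exercised offline.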
2

The Geometric Design of Spherical Mechanical Linkages with Differential Task Specifications: Experimental Set Up and Applications

Kapila Bala, Phani Neehar August 2011 (has links)
The thesis focuses on the development of an experimental setup for a recently developed failure-recovery technique for spatial robot manipulators. Assuming a general configuration of the spatial robot arm, a task is specified. This task contains constraints on position, velocity, and acceleration to be satisfied; these constraints are derived from contact and curvature specifications. The technique synthesizes the serial chain and tests whether the task can still be satisfied in the case of a joint failure. An experimental setup was developed in order to validate the failure-recovery technique. It includes a robot arm mounted on a movable platform. The arm and platform are controlled by an NI sbRIO board and are programmed in LabVIEW. Experimental results for the failure-recovery technique were obtained for the case of elbow failure in robot manipulators. The thesis considers two applications of the synthesis of spherical five-degree-of-freedom serial chains: power assist for human therapeutic movement and synthesis of parallel mechanical linkages. A spherical TS chain has been synthesized for these two applications using the Mathematica software.
3

Failure and Workspace Analysis of Parallel Robot Manipulators

Nazari, Vahid 10 March 2014 (has links)
A failure-recovery methodology based on decomposing the platform task space into major and secondary subtasks is proposed, which enables the manipulator to minimize the least-squares error of the major subtasks and to optimize the secondary criterion. A methodology for wrench recovery of parallel manipulators is also proposed, in which the platform task is divided into recoverable and non-recoverable subtasks based on the number and type of actuator failures, the manipulator configuration, and the task/application purposes. It is shown that when the Jacobian matrix of the manipulator has full row rank and the minimum 2-norm joint velocity vector satisfies the velocity limits of the joints, full recovery of the platform twist is possible. If full recovery of the platform twist cannot be achieved, an optimization method based on a partitioned Jacobian matrix is used to handle the failure recovery. It is verified that the optimization method recovers as many components of the platform velocity vector as possible when the objective function, the 2-norm of the overall velocity vector of the healthy joints, is minimized. To model uncertainty in the kinematic parameters, interval analysis is proposed. Different interval-based algorithms for enclosing the solution set of interval linear systems are applied and the resulting solution sets are compared. A novel approach to characterizing the exact solution of the interval linear system is proposed to deal with the failure recovery of parallel manipulators under joint velocity limits and uncertainty in the kinematic parameters. Simulation results show how the solution sets of the joint velocity vector are characterized by introducing uncertainties in the kinematic parameters. Calculating the exact solution takes more computation time than the interval-based algorithms; the interval-based algorithms, however, give a wider solution box with less computation time.
The effect of variations and/or uncertainties in design parameters on the workspace of wire-actuated parallel manipulators without and with gravity is investigated. Simulation results show how the workspace size and shape are changed under variations in design parameters. / Thesis (Ph.D, Mechanical and Materials Engineering) -- Queen's University, 2014-03-09 16:18:12.74
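The minimum 2-norm recovery condition mentioned above can be illustrated with a small, self-contained sketch. For a full row-rank Jacobian J, the minimum 2-norm joint velocities that reproduce a desired platform twist v are q&#775; = J&#8314;v with J&#8314; = J&#7488;(JJ&#7488;)&#8315;&#185;. The 2&#215;3 Jacobian below is a hypothetical toy example, not the thesis's actual manipulator model:

```python
def pinv_full_row_rank(J):
    """Right pseudoinverse J+ = J^T (J J^T)^-1 of a full row-rank 2x3 Jacobian."""
    # J J^T is 2x2, so it can be inverted in closed form
    JJt = [[sum(J[i][k] * J[j][k] for k in range(3)) for j in range(2)]
           for i in range(2)]
    det = JJt[0][0] * JJt[1][1] - JJt[0][1] * JJt[1][0]
    inv = [[JJt[1][1] / det, -JJt[0][1] / det],
           [-JJt[1][0] / det, JJt[0][0] / det]]
    # J+ = J^T @ inv, a 3x2 matrix
    return [[sum(J[k][i] * inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(3)]

def min_norm_joint_velocity(J, twist):
    """Minimum 2-norm joint velocities reproducing the platform twist: qdot = J+ v."""
    Jp = pinv_full_row_rank(J)
    return [sum(Jp[i][j] * twist[j] for j in range(2)) for i in range(3)]
```

If the resulting joint velocities violate a joint's velocity limit, full twist recovery fails and one falls back to optimization over the healthy joints, as the abstract describes.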
4

Improving Operating System Security, Reliability, and Performance through Intra-Unikernel Isolation, Asynchronous Out-of-kernel IPC, and Advanced System Servers

Sung, Mincheol 28 March 2023 (has links)
Computer systems are vulnerable to security exploits, and the security of the operating system (OS) is crucial as it is often a trusted entity that applications rely on. Traditional OSs have a monolithic design where all components are executed in a single privilege layer, but this design is increasingly inadequate as OS code sizes have become larger and expose a large attack surface. Microkernel OSs and multiserver OSs improve security and reliability through isolation, but they come at a performance cost due to crossing privilege layers through IPCs, system calls, and mode switches. Library OSs, on the other hand, implement kernel components as libraries which avoids crossing privilege layers in performance-critical paths and thereby improves performance. Unikernels are a specialized form of library OSs that consist of a single application compiled with the necessary kernel components, and execute in a single address space, usually atop a hypervisor for strong isolation. Unikernels have recently gained popularity in various application domains due to their better performance and security. Although unikernels offer strong isolation between each instance due to virtualization, there is no isolation within a unikernel. Since the model eliminates the traditional separation between kernel and user parts of the address space, the subversion of a kernel or application component will result in the subversion of the entire unikernel. Thus, a unikernel must be viewed as a single unit of trust, reducing security. The dissertation's first contribution is intra-unikernel isolation: we use Intel's Memory Protection Keys (MPK) primitive to provide per-thread permission control over groups of virtual memory pages within a unikernel's single address space, allowing different areas of the address space to be isolated from each other. We implement our mechanisms in RustyHermit, a unikernel written in Rust. 
Our evaluations show that the mechanisms have low overhead and retain the unikernel's low system-call latency: a 0.6% slowdown on applications including memory- and compute-intensive benchmarks as well as micro-benchmarks. The multiserver OS, a type of microkernel OS, has high parallelism potential due to its inherent compartmentalization. However, the model suffers from inferior performance. This is due to inter-process communication (IPC) client-server crossings, which on single-core systems require context switches that are more expensive than traditional system calls, and which on multi-core systems (now ubiquitous) exhibit poor resource utilization. The dissertation's second contribution is Aoki, a new approach to IPC design for microkernel OSs. Aoki incorporates non-blocking concurrency techniques to eliminate the in-kernel blocking synchronization that causes performance challenges for state-of-the-art microkernels. Aoki's non-blocking (i.e., lock-free and wait-free) IPC design not only improves performance and scalability, but also enhances reliability by preventing thread starvation. In a multiserver OS setting, the design also enables the reconnection of stateful servers after failure without loss of IPC state. Aoki solves two problems that have plagued previous microkernel IPC designs: reducing excessive transitions between user and kernel modes and enabling efficient recovery from failures. We implement Aoki in the state-of-the-art seL4 microkernel. Results from our experiments show that Aoki outperforms the baseline seL4 in both fastpath IPC and cross-core IPC, with improvements of 2.4x and 20x, respectively. The Aoki IPC design enables the design of system servers for multiserver OSs with higher performance and reliability. The dissertation's third and final contribution is the design of a fault-tolerant storage server and a copy-free file system server.
We build both servers using NetBSD OS's rumprun unikernel, which provides robust isolation through hardware virtualization, and is capable of handling a wide range of storage devices including NVMe. Both servers communicate with client applications using Aoki's IPC design, which yields scalable IPC. In the case of the storage server, the IPC also enables the server to transparently recover from server failures and reconnect to client applications, with no loss of IPC state and no significant overhead. In the copy-free file system server's design, applications grant the server direct memory access to file I/O data buffers for high performance. The performance problems solved in the server designs have challenged all prior multiserver/microkernel OSs. Our evaluations show that both servers have a performance comparable to Linux and the rumprun baseline. / Doctor of Philosophy / Computer security is extremely important, especially when it comes to the operating system (OS) – the foundation upon which all applications execute. Traditional OSs adopt a monolithic design in which all of their components execute at a single privilege level (for achieving high performance). However, this design degrades security as the vulnerability of a single component can be exploited to compromise the entire system. The problem is exacerbated when the OS codebase becomes large, as is the current trend. To overcome this security challenge, researchers have developed alternative OS models such as microkernels, multiserver OSs, library OSs, and recently, unikernels. The unikernel model has recently gained popularity in application domains such as cloud computing, the internet of things (IoT), and high-performance computing due to its improved security and performance. In this model, a single application is compiled together with its necessary OS components to produce a single, small executable image. 
Unikernels execute atop a hypervisor, a software layer that provides strong isolation between unikernels, usually by leveraging special hardware instructions. Both ideas improve security. The dissertation's first contribution improves the security of unikernels by enabling isolation within a unikernel. This allows different components of a unikernel (e.g., safe code, unsafe code, kernel code, user code) to be isolated from each other. Thus, the vulnerability of a single component cannot be exploited to compromise the entire system. We used Intel's Memory Protection Keys (MPK), a hardware feature of Intel CPUs, to achieve this isolation. Our implementation of the technique and experimental evaluations revealed that the technique has low overhead and high performance. The dissertation's second contribution improves the performance of multiserver OSs. This OS model has excellent potential for parallelization, but its performance is hindered by slow communication between applications and OS subsystems (which are programmed as clients and servers, respectively). We develop Aoki, an Inter-Process Communication (IPC) technique that enables faster and more reliable communication between clients and servers in multiserver OSs. Our implementation of Aoki in the state-of-the-art seL4 microkernel and evaluations reveal that the technique improves IPC latency over seL4's by as much as two orders of magnitude. The dissertation's third and final contribution is the design of two servers for multiserver OSs: a storage server and a file system server. The servers are built as unikernels running atop the Xen hypervisor and are powered by Aoki's IPC mechanism for communication between the servers and applications. The storage server is designed to recover its state after a failure with no loss of data and little overhead, and the file system server is designed to communicate with applications with little overhead. 
Our evaluations show that both servers achieve their design goals: they have comparable performance to that of state-of-the-art high-performance OSes such as Linux.
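MPK itself is a hardware feature: page-table entries carry one of 16 protection keys, and each thread's PKRU register (written via the WRPKRU instruction) grants or denies access to pages tagged with each key. The dissertation's mechanisms are not reproduced here, but the access-control idea behind intra-unikernel isolation can be modeled conceptually; all names in this toy sketch are illustrative:

```python
class MPKSimulator:
    """Toy model of MPK-style isolation within a single address space.

    Illustrative only: real MPK tags pages with protection keys in the
    page tables and checks each access against the thread's PKRU register.
    """

    def __init__(self):
        self.page_key = {}     # page name -> protection key it is tagged with
        self.thread_keys = {}  # thread name -> set of keys it may access

    def tag_page(self, page, key):
        self.page_key[page] = key

    def grant(self, thread, key):
        self.thread_keys.setdefault(thread, set()).add(key)

    def can_access(self, thread, page):
        # Access succeeds only if the thread's "PKRU" allows the page's key
        return self.page_key.get(page) in self.thread_keys.get(thread, set())
```

Isolating, say, unsafe code from kernel data then amounts to tagging their page groups with different keys and granting each thread only the keys it needs.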
5

Efficient and Reliable In-Network Query Processing in Wireless Sensor Networks

Malhotra, Baljeet Singh 11 1900 (has links)
Wireless Sensor Networks (WSNs) have emerged as a new paradigm for collecting and processing data from physical environments, such as wildlife sanctuaries, large warehouses, and battlefields. Users can access sensor data by issuing queries over the network, e.g., to find the 10 highest temperature values in the network. Typically, a WSN operates by constructing a logical topology, such as a spanning tree, built on top of the physical topology of the network. The constructed logical topology is then used to disseminate queries in the network, and also to process and return the results of such queries back to the user. A major challenge in this context is prolonging the network's lifetime, which mainly depends on the energy cost of data communication via wireless radios; this cost is known to be very high compared to the cost of data processing within the network. In this research, we investigate some of the core problems of in-network query processing in WSNs. In that context, we propose an efficient filtering-based algorithm for top-k query processing in WSNs. Through a systematic study of top-k query processing in WSNs we propose several solutions in this thesis, which are applicable not only to top-k queries but also to in-network query processing problems in general. Specifically, we consider broadcasting and convergecasting, two basic operations required by many in-network query processing solutions. Scheduling broadcasting and convergecasting is another problem that is important for energy efficiency in WSNs. Failure of communication links, which is common in WSNs, is yet another important issue that needs to be addressed. In this research, we take a holistic approach to the above problems while processing top-k queries in WSNs. To this end, the thesis makes several contributions.
In particular, our proposed solutions include new logical topologies, scheduling algorithms, and an overall communication framework that allows top-k queries to be processed efficiently and with increased reliability. Extensive simulation studies reveal that our solutions are not only energy efficient, saving up to 50% of the energy cost compared to current state-of-the-art solutions, but also robust to link failures.
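As a rough illustration of in-network top-k aggregation over a spanning tree (a generic sketch, not the thesis's filtering algorithm): each node merges its children's candidate lists with its own readings and forwards at most k values upward, so the payload on every link stays bounded by k rather than growing with subtree size.

```python
import heapq

def topk_in_network(node, k):
    """Compute the top-k readings over a spanning tree of sensor nodes.

    Each node merges its children's candidates with its own readings and
    forwards at most k values upward, bounding per-link message size.
    """
    candidates = list(node["readings"])
    for child in node.get("children", []):
        candidates.extend(topk_in_network(child, k))  # each child sends <= k values
    return heapq.nlargest(k, candidates)
```

Filtering schemes like the one the thesis proposes go further, suppressing values that provably cannot enter the global top-k before they are ever transmitted.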
6

Improving Efficiency and Effectiveness of Multipath Routing in Computer Networks

Lee, Yong Oh May 2012 (has links)
In this dissertation, we studied methods for improving the efficiency and effectiveness of multipath routing in computer networks. We showed that multipath routing can improve network performance in failure recovery, load balancing, Quality of Service (QoS), and energy consumption. We presented a method for reducing the overhead of computing dynamic path metrics, one of the obstacles to implementing dynamic multipath routing in real-world networks. In the first part, we proposed a method for building disjoint multipaths that can be used for local failure recovery as well as for multipath routing. Proactive failure-recovery schemes have recently been proposed to provide continuous service for delay-sensitive applications during failure transients, at the cost of extra infrastructural support in the form of routing-table entries, extra addresses, etc. This extra infrastructure can be exploited to build alternative disjoint paths in those frameworks, while keeping the lengths of the alternative paths close to those of the primary paths. The evaluations showed that it is possible to extend the proactive failure-recovery schemes to provide support for nearly disjoint paths, which can be employed in multipath routing for load balancing and QoS. In the second part, we proposed a method for reducing the overhead of measuring dynamic link-state information for multipath routing, specifically the path delays used in Wardrop routing. Even when dynamic routing can be shown to converge without oscillations, it has not been widely adopted; one reason is the expected cost of keeping link metrics updated at the various nodes in the network. We proposed threshold-based updates, which propagate the link state only when the currently measured link state differs considerably from the last updated state.
Threshold-based updates were shown, through analysis and simulations, to offer bounded guarantees on path quality while significantly reducing the cost of propagating dynamic link-metric information. The simulation studies indicated that threshold-based updates can reduce the number of link updates by up to 90-95% in some cases. In the third part, we proposed methods of using multipath routing to reduce energy consumption in computer networks. Two kinds of approaches have been advocated earlier, ranging from traffic engineering and topology control to hardware-based techniques. We proposed solutions at two different time scales. On a finer time granularity, we employed forwarding through alternate paths to enable longer sleep schedules for links. The proposed schemes achieved more energy saving by increasing the utilization of active links and the down time of sleeping links, as well as by avoiding overly frequent link-state changes. To the best of our knowledge, this was the first technique combining a routing scheme with a hardware scheme to reduce energy consumption in networks. In our evaluation, alternative forwarding reduced energy consumption by 10% on top of a hardware-based sleeping scheme. On a longer time granularity, we proposed a technique that combines multipath routing with topology control. The proposed scheme achieved increased energy savings by maximizing link utilization on a reduced topology in which the number of active nodes and links is minimized. The proposed technique reduced energy consumption by an additional 17% over previous schemes with single/shortest-path routing.
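The threshold-based update rule described above is simple enough to sketch directly (a generic illustration of the suppression logic; the dissertation's actual propagation protocol and its integration with Wardrop routing are not shown):

```python
def threshold_updates(samples, threshold):
    """Advertise a link metric only when it drifts beyond `threshold`
    from the value last propagated to the rest of the network."""
    advertised = None
    updates = []
    for t, value in enumerate(samples):
        if advertised is None or abs(value - advertised) > threshold:
            advertised = value
            updates.append((t, value))  # this sample triggers a network-wide update
    return updates
```

The trade-off is between update traffic saved and how stale the advertised metric may be; the threshold bounds that staleness, which is what yields the bounded path-quality guarantees.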
8

A Framework for the Development of Scalable Heterogeneous Robot Teams with Dynamically Distributed Processing

Martin, Adrian 08 August 2013 (has links)
As the applications of mobile robotics evolve, it has become increasingly impractical for researchers to design custom hardware and control systems for each problem. This research presents a new approach to control-system design that looks beyond end-of-lifecycle performance and considers control-system structure, flexibility, and extensibility. Toward these ends the Control ad libitum philosophy is proposed, stating that to make significant progress in the real-world application of mobile robot teams, the control system must be structured such that teams can be formed in real time from diverse components. The Control ad libitum philosophy was applied to the design of the HAA (Host, Avatar, Agent) architecture: a modular hierarchical framework built with provably correct distributed algorithms. A control system for exploration and mapping, search and deploy, and foraging was developed to evaluate the architecture in three sets of hardware-in-the-loop experiments. First, the basic functionality of the HAA architecture was studied, specifically the ability to: a) dynamically form the control system, b) dynamically form the robot team, c) dynamically form the processing network, and d) handle heterogeneous teams. Second, the real-time performance of the distributed algorithms was tested and proved effective for the moderate-sized systems evaluated. Furthermore, the distributed Just-in-time Cooperative Simultaneous Localization and Mapping (JC-SLAM) algorithm demonstrated accuracy equal to or better than traditional approaches in resource-starved scenarios, while reducing exploration time significantly. The JC-SLAM strategies are also suitable for integration into many existing particle-filter SLAM approaches, complementing their unique optimizations. Third, the control system was subjected to concurrent software and hardware failures in a series of increasingly complex experiments.
Even with unrealistically high rates of failure, the control system was able to successfully complete its tasks. The HAA implementation designed following the Control ad libitum philosophy proved to be capable of dynamic team formation and extremely robust against both hardware and software failure, and, due to the modularity of the system, there is significant potential for reuse of assets and future extensibility. One future goal is to make the source code publicly available and establish a forum for the development and exchange of new agents.
10

Individual Reactions To Organizational Ethical Failures And Recovery Attempts: A Recovery Paradox?

Caldwell, James 01 January 2009 (has links)
The vast majority of behavioral ethics research focuses on the antecedents of unethical behavior. Consequently, questions involving the consequences of organizational unethical behavior remain largely unanswered, and extant business ethics research largely neglects the impact of organizational unethical behavior on individuals. Likewise, questions about what organizations can do to correct or recover from having engaged in unethical behavior, and about individual responses to those efforts, are mostly ignored. The purpose of this study is therefore to investigate the impact of unethical activity on employees and to examine organizations that have failed ethically and their attempts at recovery. This study explores two issues. First, how do employees react to organizational unethical behavior (OUB), and to what extent are those reactions dependent on contextual and individual factors? Second, to what extent can organizations recover from the negative impacts of ethical failure? More specifically, is it possible for organizations that fail in their ethical responsibilities to recover such that they are paradoxically "better off" than counterparts that never failed in the first place? To explore these issues I review, integrate, and draw upon the ethical decision-making and service failure recovery literatures for theoretical support. Empirical testing included two studies. The first was a field study using survey data acquired from the Ethics Resource Center (ERC), in which over 29,000 participants were asked about their perceptions of ethics at work. The second was a supplemental field study in which 100 employees rated the characteristics of unethical acts (e.g., severity). Results revealed a negative direct effect of the severity and controllability of the OUB on perceptions of organizational ethicality, and a negative direct effect of the controllability of the OUB on organizational satisfaction.
Ethical context moderated the relationship between OUB controllability and perceived organizational ethicality. Partial support was found for the moderating effects of ethical context on the relationship between OUB severity and perceived organizational ethicality. Results also supported an ethical failure recovery paradox.
