611. Using Distributed Computing To Improve The Performance Of Genetic Algorithms For Job Shop Scheduling Problems. Shah, Nihar. January 2004.
No description available.

612. RMBench: A Benchmarking Suite for Distributed Real-Time Middleware. Delaney, Matthew. 10 October 2005.
No description available.

613. Parameterized Verification and Synthesis for Distributed Agreement-Based Systems. Nouraldin Jaber. 19 September 2022.
<p>Distributed agreement-based systems use common distributed agreement protocols such as leader election and consensus as building blocks for their target functionality—processes in these systems may need to agree on a leader, on the members of a group, on owners of locks, or on updates to replicated data. Such distributed agreement-based systems are common and potentially permit modular, scalable verification approaches that mimic their modular design. Interestingly, while there are many verification efforts that target agreement protocols themselves, little attention has been given to distributed agreement-based systems that build on top of these protocols. </p>
<p>In this work, we aim to develop a fully-automated, modular, and usable parameterized verification approach for distributed agreement-based systems. To do so, we need to overcome the following challenges. First, the fully automated parameterized verification problem, i.e., the problem of algorithmically checking if the system is correct for any number of processes, is a well-known <em>undecidable </em>problem. Second, to enable modular verification that leverages the inherently-modular nature of these agreement-based systems, we need to be able to support <em>abstractions </em>of agreement protocols. Such abstractions can replace the agreement protocols’ implementations when verifying the overall system, enabling modular reasoning. Finally, even when the verification is fully automated, a system designer still needs assistance in <em>modeling </em>their distributed agreement-based systems. </p>
<p>We systematically tackle these challenges through the following contributions. </p>
<p>First, we support efficient, decidable verification of distributed agreement-based systems by developing a computational model—the GSP model—for reasoning about distributed (agreement-based) systems that admits decidability and <em>cutoff </em>results. Cutoff results enable practical verification by reducing the parameterized verification problem to the verification problem of a system with a fixed, finite number of processes. The GSP model supports generalized communication primitives and global guards, both of which are essential to enable abstractions of agreement protocols. </p>
<p>Then, we address the usability and modularity aspects by developing a framework, QuickSilver, tailored for modeling and modular parameterized verification of distributed agreement-based systems. QuickSilver provides an intuitive domain-specific language, called Mercury, that is equipped with two agreement primitives capable of abstracting away agreement protocols when modeling agreement-based systems, enabling modular verification. QuickSilver extends the decidability and cutoff results of the GSP model to provide fully automated, efficient parameterized verification for a large class of systems modeled in Mercury. </p>
<p>Finally, we leverage synthesis techniques to further enhance the usability of our approach and propose Cinnabar, a tool that supports synthesis of distributed agreement-based systems with efficiently-decidable parameterized verification. Cinnabar allows a system designer to provide a sketch of their Mercury model and uses a counterexample-guided synthesis procedure to search for model completions that both belong to the efficiently-decidable fragment of Mercury and are correct. </p>
<p>We evaluate our contributions on various interesting distributed agreement-based systems adapted from real-world applications, such as a data store, a lock service, a surveillance system, a pathfinding algorithm for mobile robots, and more. </p>

614. Context Sensitive Interaction Interoperability for Distributed Virtual Environments. Ahmed, Hussein Mohammed. 23 June 2010.
The number and types of input devices and related interaction techniques are growing rapidly. Innovative input devices such as game controllers are no longer used just for games, proprietary consoles, and specific applications; they are also used in many distributed virtual environments, especially the so-called serious virtual environments.
In this dissertation, a distributed, service-based framework is presented to offer context-sensitive interaction interoperability that can support mapping between input devices and suitable application tasks given the attributes (device, application, user, and interaction technique) and the current user context, without negatively impacting the performance of large-scale distributed environments.
The mapping is dynamic and context-sensitive, taking into account the context dimensions of both the virtual and real planes. Which device or device component to use, and how and when to use it, depend on the application, the task performed, the user, and the overall context, including location and the presence of other users. Interaction interoperability can also serve as a testbed for input devices and interaction techniques, making it possible to test reality-based interfaces and interaction techniques with legacy applications.
The dissertation describes how the framework provides these affordances and discusses the motivations, goals, and challenges addressed. Several proof-of-concept implementations were developed, and an evaluation of the framework's performance (in terms of system characteristics) demonstrates viability, scalability, and negligible delays. / Ph. D.

615. New Techniques in the Design of Distributed Power Systems. Watson, Robert III. 17 August 1998.
Power conversion system design issues are expanding their role in information technology equipment design philosophies. These issues include not only improving power conversion efficiency, but also increased concerns regarding the cost and complexity of the power conversion design techniques utilized to satisfy the host system's total performance requirements. In particular, in computer system (personal computers, workstations, and servers) designs, the power "supplies" are rapidly becoming a limiting factor in meeting overall design objectives.
This dissertation addresses the issue of simplifying the architecture of distributed power systems incorporated into computing equipment. In the dissertation's first half, the subject of the design of the distributed power system's front-end converter is investigated from the perspective of simplifying the conversion process while simultaneously improving efficiency. This is initially accomplished by simplifying the second-stage DC/DC converter in the standard two-stage front-end design (PFC followed by DC/DC conversion) through the incorporation of secondary-side control. Unique modifications are then made to two basic topologies (the flyback and boost converter topologies) that enable the two-stage front-end design to be reduced to an isolated PFC conversion stage, resulting in a front-end design that features reduced complexity and higher efficiency.
In the dissertation's second half, the overall DC distributed power system design concept is simplified through the elimination of power processing conversion steps - the result being the creation of a high-frequency (HF) AC distributed power system. Design techniques for generating, distributing, and processing HF AC power in this new system are developed and experimentally verified. Also, an experimental comparison between both DC and AC distributed power systems is performed, illustrating in a succinct fashion the merits and limitations of both approaches. / Ph. D.

616. Distributed Feedback Control Algorithms for Cooperative Locomotion: From Bipedal to Quadrupedal Robots. Kamidi, Vinaykarthik Reddy. 25 March 2022.
This thesis synthesizes general and scalable distributed nonlinear control algorithms with application to legged robots. It explores both naturally decentralized problems in legged locomotion, such as the collaborative control of a human-lower-extremity prosthesis, and the decomposition of the high-dimensional controller of a naturally centralized problem into a network of low-dimensional controllers while preserving equivalent performance. In doing so, strong nonlinear interaction forces arise, which this thesis considers and sufficiently addresses. The approach generalizes to both symmetric and asymmetric combinations of subsystems. Specifically, this thesis results in two distinct distributed control algorithms based on the decomposition approach.
Towards synthesizing the first algorithm, this thesis presents a formal foundation based on decomposition, Hybrid Zero Dynamics (HZD), and scalable optimization to develop distributed controllers for hybrid models of collaborative human-robot locomotion. This approach considers a centralized controller and then decomposes the dynamics and parameterizes the feedback laws to synthesize local controllers. The Jacobian matrix of the Poincaré map with local controllers is studied and compared with the centralized one. An optimization problem is then set up to tune the parameters of the local controllers for asymptotic stability. It is shown that the proposed approach can significantly reduce the number of controller parameters to be optimized for the synthesis of distributed controllers, rendering the method computationally tractable. To evaluate the analytical results, we consider a human amputee with the point of separation just above the knee and assume the average physical parameters of a human male. For the lower-extremity prosthesis, we consider the PRleg, a powered knee-ankle prosthetic leg; together, they form a 19 Degrees of Freedom (DoF) model. A multi-domain hybrid locomotion model is then employed to rigorously assess the performance of the afore-stated control algorithm via numerical simulations. Various simulations involving the application of unknown external forces and altering the physical parameters of the human model unbeknownst to the local controllers still result in stable amputee locomotion, demonstrating the inherent robustness of the proposed control algorithm.
In the later part of this thesis, we are interested in developing distributed algorithms for the real-time control of legged robots. Inspired by the increasing popularity of Quadratic Programming (QP)-based nonlinear controllers in the legged locomotion community, due to their ability to encode control objectives subject to physical constraints, this thesis exploits the idea of distributed QPs. In particular, it presents a formal foundation to systematically decompose QP-based centralized nonlinear controllers into a network of lower-dimensional local QPs. The proposed approach formulates a feedback structure between the local QPs and leverages a one-step communication delay protocol. The properties of the local QPs are analyzed, and it is established that their steady-state solutions on periodic orbits (representing gaits) coincide with that of the centralized QP. The asymptotic convergence of the local QPs' solutions to the steady-state solution is studied via Floquet theory. Subsequently, to evaluate the effectiveness of the analytical results, we consider an 18 DoF quadrupedal robot, A1, as a representative example. The network of distributed QPs mentioned earlier is condensed to two local QPs by considering a front-hind decomposition scheme. The robustness of the distributed QP-based controller is then established through rigorous numerical simulations that involve exerting unmodelled external forces and introducing unknown ground height variations. It is further shown that the proposed distributed QPs have reduced sensitivity to noise propagation when compared with the centralized QP.
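As a toy illustration of the decomposition idea (not the thesis's actual controller; the cost matrix and numbers below are invented for the sketch), one can split a single unconstrained QP into two local solvers that each optimize their own block while treating the other block's previous-iterate solution as a constant, mimicking the one-step communication delay protocol:

```python
import numpy as np

# Centralized QP: minimize 0.5 * x^T Q x + c^T x (unconstrained, Q positive definite).
# Q and c are arbitrary illustrative values; block-diagonal dominance of Q
# makes the delayed iteration below converge.
Q = np.array([[4.0, 0.5, 0.3, 0.2],
              [0.5, 4.0, 0.2, 0.3],
              [0.3, 0.2, 4.0, 0.5],
              [0.2, 0.3, 0.5, 4.0]])
c = np.array([1.0, -2.0, 0.5, -1.0])

# "Front-hind" split: variables 0-1 form the front QP, variables 2-3 the hind QP.
Q11, Q12 = Q[:2, :2], Q[:2, 2:]
Q21, Q22 = Q[2:, :2], Q[2:, 2:]

x_front = np.zeros(2)
x_hind = np.zeros(2)
for _ in range(100):
    # Each local QP minimizes over its own block, holding the other block's
    # *previous* solution fixed (the one-step delay).
    new_front = -np.linalg.solve(Q11, c[:2] + Q12 @ x_hind)
    new_hind = -np.linalg.solve(Q22, c[2:] + Q21 @ x_front)
    x_front, x_hind = new_front, new_hind

x_distributed = np.concatenate([x_front, x_hind])
x_centralized = -np.linalg.solve(Q, c)
print(np.max(np.abs(x_distributed - x_centralized)))  # residual shrinks to near zero
```

A real locomotion QP would also carry torque limits and contact constraints in each local problem; the unconstrained case is only meant to make the fixed-point behavior of the delayed exchange visible.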
Finally, to demonstrate that the resultant distributed QP-based nonlinear control algorithm translates equivalently well to hardware, an extensive set of blind locomotion experiments on the A1 robot are undertaken. Similar to the numerical simulations, unknown external forces in the form of aggressive pulls and pushes were applied, and terrain uncertainties were introduced with the help of arbitrarily displaced wooden blocks and compliant surfaces. Additionally, outdoor experiments involving a wide range of terrains such as gravel, mulch, and grass at various speeds up to 1.0 m/s reiterate the robust locomotion observed in numerical simulations. These experiments also show that the computation time drops significantly when the distributed QPs are used instead of the centralized QP. / Doctor of Philosophy / Inspiration from animals and human beings has long driven the research of legged locomotion and the subsequent design of robotic counterparts: bipedal and quadrupedal robots. Legged robots have also been extended to assist human amputees through powered prostheses and to aid people with paraplegia through the development of exoskeleton suits. However, in an effort to capture the same robustness and agility demonstrated by nature, our design abstractions have become increasingly complicated. As a result, the ensuing control algorithms that drive and stabilize the robot are equivalently complicated and subject to the curse of dimensionality. This complication is undesirable, as failing to compute and prescribe a control action quickly destabilizes the robot and renders it uncontrollable.
This thesis addresses this issue by looking to nature for inspiration from a different perspective. Specifically, earlier biological studies on cats observed that some form of locality is implemented in animal motor control. This thesis extends this observation to the control of legged robots by advocating an unconventional solution. It proposes that a single high-dimensional legged agent be viewed as a virtual composition of multiple low-dimensional subsystems. While this outlook is not new and has precedent in the vast literature on distributed control, the focus there has always been on large-scale systems, such as power networks or urban traffic networks, that are, mathematically speaking, sparse. On the contrary, legged robots are underactuated systems with strong interaction forces acting among the subsystems and dense mathematical structures. This thesis considers this problem in great detail and proposes developments that provide theoretical stability guarantees for the distributed control of interconnected legged robots. As a result, two distinctly different distributed control algorithms are formulated.
We consider a naturally decentralized structure appearing in the form of a human-lower-extremity prosthesis to synthesize distributed controllers using the first control algorithm.
Subsequently, the resultant local controllers are rigorously validated through extensive full-order simulations. In order to validate the second algorithm, this thesis considers the problem of quadrupedal locomotion as a representative example. It assumes, for the purposes of control synthesis, that the quadruped comprises two subsystems separated at the geometric center, resulting in a front and a hind subsystem. In addition to rigorous validation via numerical simulations, in the latter part of this thesis, to demonstrate that distributed controllers preserve practicality, rigorous and extensive experiments are undertaken in indoor and outdoor settings on a readily available quadrupedal robot, A1.

617. Optimizing Distributed Tracing Overhead in a Cloud Environment with OpenTelemetry. Elias, Norgren. January 2024.
To gain observability in distributed systems, some telemetry generation and gathering must be implemented. This is especially important when systems have layers of dependencies on other microservices. One method for observability is called distributed tracing. Distributed tracing builds causal event chains between microservices, called traces. With traces, it is possible to find bottlenecks and dependencies within each call chain. One framework for implementing distributed tracing is OpenTelemetry. The developer must make design choices when deploying OpenTelemetry in a Kubernetes cluster. For example, OpenTelemetry provides a collector that gathers spans, the parts of a trace emitted by microservices. Collectors can be deployed one per node, called a daemonset setup, or one per service, called a sidecar setup. This study compared the performance impact of the sidecar and daemonset setups to that of having no OpenTelemetry implemented. The resources analyzed were CPU usage, network usage, and RAM usage. Tests were done as permutations of four different scenarios: experiments were run on 4 and 2 nodes, with both balanced and unbalanced service placements. The experiments were run in a cloud environment using Kubernetes. The tested system was an emulation of one of Nasdaq's systems based on real data from the company. The study concluded that adding OpenTelemetry increased resource usage in all cases. Compared to no OpenTelemetry, the daemonset setup increased CPU usage by 46.5%, network usage by 18.25%, and memory usage by 47.5% on average. The sidecar setup performed worse than the daemonset setup in most cases and for most resources, especially in RAM and CPU usage.

618. Performance Overhead Of OpenTelemetry Sampling Methods In A Cloud Infrastructure. Karkan, Tahir Mert. January 2024.
This thesis explores the overhead of distributed tracing in OpenTelemetry, using different sampling strategies, in a cloud environment. Distributed tracing produces telemetry data that allows developers to analyse causal events in a system with temporal information. This comes at the cost of overhead, in terms of CPU, memory, and network usage, as the telemetry data has to be generated and sent through collectors that handle traces and finally forward them to a backend. By sampling with three different strategies, head-based and tail-based sampling and a mixture of the two, overhead can be reduced at the price of losing some information. To measure how this information loss impacts application performance, synthetic error messages are introduced in traces and used to gauge how many traces with errors each sampling strategy can detect. All three sampling strategies were compared for services that sent more and less data between nodes in Kubernetes. The experiments were also run in two- and four-node setups. This thesis was conducted with Nasdaq, as it is in their interest to have high-performing monitoring tools, and their systems were analysed and emulated for relevance. The thesis concluded that tail-based sampling had the highest overhead (71.33% CPU, 23.7% memory, and 5.6% network average overhead compared to head-based sampling), for the benefit of capturing all the errors. Head-based sampling had the least overhead, except on the node that had deployed Jaeger as the backend for traces, where its higher total sampling rate added on average 12.75% CPU overhead in the four-node setup compared to mixed sampling; mixed sampling, however, captured more errors. When measuring the overall time taken for the experiments, the highest impact was observed when more requests had to be sent between nodes.
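The head/tail trade-off can be illustrated with a small, self-contained simulation (hypothetical trace data, not the thesis's workload): head-based sampling decides at the root span, before knowing whether the trace will contain an error, while tail-based sampling decides after the whole trace is buffered and can therefore keep every error trace.

```python
import random

random.seed(7)
# Hypothetical workload: 10,000 traces, roughly 5% of which contain an error span.
traces = [{"trace_id": i, "has_error": random.random() < 0.05}
          for i in range(10_000)]

def head_based(traces, rate):
    # Decision is made up front, blind to the trace's content.
    return [t for t in traces if random.random() < rate]

def tail_based(traces, rate):
    # Decision is deferred until the full trace is known: always keep
    # error traces, sample the rest at `rate` (the cost is buffering).
    return [t for t in traces if t["has_error"] or random.random() < rate]

total_errors = sum(t["has_error"] for t in traces)
head_errors = sum(t["has_error"] for t in head_based(traces, 0.10))
tail_errors = sum(t["has_error"] for t in tail_based(traces, 0.10))
print(total_errors, head_errors, tail_errors)
```

At a 10% rate, head-based sampling keeps only about a tenth of the error traces, while tail-based sampling keeps all of them; the buffering that makes this possible is consistent with the memory and CPU overhead reported above, and a mixed strategy sits between the two.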

619. Distributed Pressure and Temperature Sensing Based on Stimulated Brillouin Scattering. Wang, Jing. 4 February 2014.
Brillouin scattering has been verified to be an effective mechanism for temperature and strain sensing. Such sensors can be applied to civil structural monitoring of pipelines, railroads, and other infrastructure for disaster prevention. This thesis first presents a novel fiber sensing scheme for long-span, fully-distributed pressure measurement based on Brillouin scattering in a side-hole fiber. It then demonstrates that the Brillouin frequency maintains a linear relation with temperature up to 1000°C, showing that Brillouin scattering is a promising mechanism for high-temperature distributed sensing.
A side-hole fiber has two longitudinal air holes in the fiber cladding. When pressure is applied to the fiber, the two principal axes of the fiber birefringence yield different Brillouin frequency shifts in the Brillouin scattering. Measuring this differential Brillouin shift continuously along the fiber thus permits distributed pressure measurement. Our sensor system was designed to analyze the Brillouin scattering in the two principal axes of a side-hole fiber in the time domain. The developed system was tested under pressures from 0 to 10,000 psi for 100 m and 600 m side-hole fibers, respectively. Experimental results show that fibers with side holes of different sizes possess different pressure sensitivities. The highest measured sensitivity of the pressure-induced differential Brillouin frequency shift is 0.0012 MHz/psi. The demonstrated spatial resolution is 2 m, which may be further improved by using shorter light pulses. / Master of Science
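Given the reported sensitivity, converting a measured differential Brillouin frequency shift back to pressure is a one-line linear model (a sketch assuming a zero intercept and the 0.0012 MHz/psi figure above; a real calibration would fit both slope and offset):

```python
SENSITIVITY_MHZ_PER_PSI = 0.0012  # highest sensitivity reported above

def pressure_psi(differential_shift_mhz: float) -> float:
    """Invert the assumed linear pressure response of the side-hole fiber."""
    return differential_shift_mhz / SENSITIVITY_MHZ_PER_PSI

# At this sensitivity, a 6 MHz differential shift corresponds to 5,000 psi.
print(pressure_psi(6.0))
```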

620. On the Fault-tolerance and High Performance of Replicated Transactional Systems. Hirve, Sachin. 28 September 2015.
With the technological developments of the last few decades, there has been a notable shift in the way business and consumer transactions are conducted. These transactions are usually triggered over the internet, and transactional systems working in the background ensure that they are processed. The majority of these transactions nowadays fall into the Online Transaction Processing (OLTP) category, where low latency is a preferred characteristic. In addition to low latency, OLTP systems also require high service continuity and dependability.
Replication is a common technique that makes services dependable and therefore helps in providing reliability, availability, and fault-tolerance. Deferred Update Replication (DUR) and Deferred Execution Replication (DER) are the two well-known transaction execution models for replicated transactional systems. Under DUR, a transaction is executed locally at one node before a global certification is invoked to resolve conflicts against other transactions running on remote nodes. On the other hand, DER postpones transaction execution until agreement on a common order of transaction requests is reached. Both DUR and DER require a distributed ordering layer, which ensures a total order of transactions even in case of faults.
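A minimal sketch of DUR-style certification (a deliberately simplified single-node model with invented data structures, not the systems evaluated in this dissertation): each transaction records the versions it read, and at certification time it commits only if none of those objects were overwritten by a concurrently committed transaction.

```python
# Versioned store: object name -> current committed version number.
store = {"x": 0, "y": 0}

def certify_and_commit(store, read_set, write_set):
    """DUR certification: abort if any read version is stale, else install writes."""
    if any(store[obj] != version for obj, version in read_set.items()):
        return False  # conflict with a concurrently committed transaction
    for obj in write_set:
        store[obj] += 1  # commit: bump the version of every written object
    return True

# T1 and T2 both read x at version 0 and both want to write x.
t1 = {"read_set": {"x": 0}, "write_set": ["x"]}
t2 = {"read_set": {"x": 0}, "write_set": ["x"]}

t1_ok = certify_and_commit(store, t1["read_set"], t1["write_set"])  # commits
t2_ok = certify_and_commit(store, t2["read_set"], t2["write_set"])  # aborts: x moved to version 1
print(t1_ok, t2_ok)
```

In an actual DUR deployment this check runs after an ordering step so every replica certifies in the same order; even this toy shows the weakness under contention (the later of two conflicting transactions always aborts) that motivates the DER model and the X-DUR work below.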
In today's distributed transactional systems, performance is of paramount importance. Any loss in performance, e.g., increased latency due to slow processing of client requests, may entail loss of revenue for businesses. On one hand, the DUR model is a good candidate for transaction processing in systems where conflicts among transactions are rare, while it can be detrimental for high-conflict workload profiles. On the other hand, the DER model is an attractive choice because of its ability to behave independently of the characteristics of the workload, but trivial realizations of the model ultimately do not offer a good performance margin: transactions are executed sequentially, and the total order layer can be a serious bottleneck for latency and scalability.
This dissertation proposes novel solutions and system optimizations to enhance the overall performance of replicated transactional systems. The first presented result is HiperTM, a DER-based transaction replication solution that is able to alleviate the costs of the total order layer via speculative execution techniques. HiperTM exploits the time between the broadcast of a client request and the finalization of the order for that request to speculatively execute the request, so as to achieve an overlap between replica coordination and transaction execution. HiperTM comprises two main components: OS-Paxos, a novel total order layer that is able to deliver requests early and optimistically according to a tentative order, which is then either confirmed or rejected by a final total order; and SCC, a lightweight speculative concurrency control protocol that is able to exploit the optimistic delivery of OS-Paxos and execute transactions in a speculative fashion. SCC still processes write transactions serially in order to minimize code instrumentation overheads, but it is able to parallelize the execution of read-only transactions thanks to its built-in object multiversion scheme.
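The overlap HiperTM exploits can be caricatured in a few lines (hypothetical toy transactions, not OS-Paxos or SCC themselves): work is executed against the optimistically delivered order, and the speculative state is promoted only if the final total order confirms that tentative order; otherwise the transactions are replayed.

```python
def run(order, state):
    """Apply pure transactions (functions state -> state) in the given order."""
    for txn in order:
        state = txn(state)
    return state

def finalize(tentative, speculative_state, final, initial_state):
    # If the final total order confirms the optimistic delivery, the
    # speculatively computed state is committed with no extra work;
    # otherwise fall back to re-executing in the final order.
    if final == tentative:
        return speculative_state
    return run(final, initial_state)

# Two non-commutative toy transactions on an account balance.
double = lambda s: {"balance": s["balance"] * 2}
add_ten = lambda s: {"balance": s["balance"] + 10}

initial = {"balance": 100}
tentative = [double, add_ten]
speculative = run(tentative, initial)  # executed before the final order arrives

confirmed = finalize(tentative, speculative, [double, add_ten], initial)
reordered = finalize(tentative, speculative, [add_ten, double], initial)
print(confirmed, reordered)
```

The real protocol of course does not replay from an initial state on a mismatch; the sketch only shows why a tentative order that is usually confirmed lets ordering and execution proceed concurrently.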
The second contribution of this dissertation is X-DUR, a novel transaction replication system that addresses the high cost of local and remote aborts in DUR-based approaches under high contention on shared objects, which adversely affects performance. Exploiting knowledge of clients' transaction locality, X-DUR incorporates the benefits of the state machine approach to scale up the distributed performance of DUR systems.
As the third contribution, this dissertation proposes Archie, a DER-based replicated transactional system that improves on HiperTM in two aspects. First, Archie includes a highly optimized total order layer that combines optimistic delivery and batching, thus allowing the anticipation of a large amount of work before the total order is finalized. Second, the concurrency control is able to process transactions speculatively and with a higher degree of parallelism, although the order of the speculative commits still follows the order defined by the optimistic delivery.
Both HiperTM and Archie perform well up to a certain number of nodes in the system, beyond which their performance is impacted by the limitations of a single-leader-based total-order layer. This motivates the design of Caesar, the fourth contribution of this dissertation, which is a transactional system based on a novel multi-leader partial order protocol. Caesar enforces a partial order on the execution of transactions according to their conflicts, letting non-conflicting transactions proceed in parallel and without enforcing any synchronization during execution (e.g., no locks).
As the last contribution, this dissertation presents Dexter, a replication framework that exploits the commonly observed phenomenon that not all read-only workloads require up-to-date data. It harnesses the application-specific freshness and content-based constraints of read-only transactions to achieve high scalability. Dexter services read-only requests according to the freshness guarantees specified by the application and routes the read-only workload accordingly in the system to achieve high performance and low latency. As a result, the Dexter framework also alleviates the interference between read-only and read-write requests, thereby helping to improve the performance of read-write request execution as well. / Ph. D.