31

Improving the Selection of Surrogates During the Cold-Start Phase of a Cyber Foraging Application to Increase Application Performance

Kowalczk, Brian 31 August 2014 (has links)
Mobile devices are generally less powerful and more resource constrained than their desktop counterparts, yet many of the applications of most value to mobile users are resource intensive and difficult to support on a mobile device. Applications such as games, video playback, image processing, voice recognition, and facial recognition often exceed the limits of mobile devices. Cyber foraging is an approach that allows a mobile device to discover and utilize surrogate devices present in the local environment to augment its capabilities. Cyber foraging has been shown to help mobile devices conserve power, increase performance, and increase application fidelity. The cyber foraging scheduler determines which operation to execute remotely and which surrogate to use to execute it. Virtually all cyber foraging schedulers in use today incorporate historical data into the scheduling algorithm, so if historical data about a surrogate is unavailable, execution history must be generated before the scheduler's algorithm can utilize that surrogate. The period between the arrival of a surrogate and the time historical data become available is called the cold-start state. The cold-start state delays the utilization of potentially beneficial surrogates and can degrade system performance. The major contribution of this research was the extension of a historical-based prediction algorithm into a low-overhead, estimation-enhanced algorithm that eliminated the cold-start state. This new algorithm performed better than the historical and random scheduling algorithms in every operational scenario. The four operational scenarios simulated typical use cases for a mobile device.
The scenarios simulated an unconnected environment, an environment where every surrogate was available, an environment where all surrogates were initially unavailable and joined the system slowly over time, and an environment where surrogates randomly and quickly joined and departed the system. One future research possibility is to extend the heuristic to include storage system I/O performance. Additional extensions include accounting for architectural differences between CPUs and the use of Bayesian estimates to provide metrics based upon performance specifications rather than direct measurement.
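The estimation-enhanced idea described above can be sketched in a few lines. The sketch below is illustrative only: the class, function names, and the spec-based formula are assumptions of this example, not the thesis's actual algorithm. With history, the scheduler predicts from the observed mean (the historical predictor); without history, it falls back to an estimate derived from the surrogate's advertised performance specifications, so a newly arrived surrogate is schedulable immediately and the cold-start state never occurs.

```python
import statistics

class Surrogate:
    def __init__(self, name, cpu_ghz, cores):
        self.name = name
        self.cpu_ghz = cpu_ghz      # advertised clock speed
        self.cores = cores          # advertised core count
        self.history = []           # observed execution times (seconds)

    def record(self, seconds):
        self.history.append(seconds)

def predicted_time(surrogate, op_cost):
    """Predict execution time for an operation of `op_cost` work units.

    With history: use the observed mean (the classic historical predictor).
    Without history: estimate from performance specs instead of waiting for
    samples, which is what eliminates the cold-start state.
    """
    if surrogate.history:
        return statistics.mean(surrogate.history)
    # Hypothetical spec-based estimate: more aggregate compute,
    # lower predicted time.
    return op_cost / (surrogate.cpu_ghz * surrogate.cores)

def pick_surrogate(surrogates, op_cost):
    """Schedule the operation on the surrogate with the best prediction."""
    return min(surrogates, key=lambda s: predicted_time(s, op_cost))
```

Under this sketch, a freshly discovered surrogate with strong specs can win the selection immediately, whereas a purely historical scheduler would have to ignore it until samples accumulated.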
32

The control of flexible robots

Shifman, Jeffrey Joseph January 1991 (has links)
No description available.
33

Specification and proof in real-time systems

Davies, Jim January 1991 (has links)
No description available.
34

Distributed Estimation of a Class of Nonlinear Systems

Park, Derek Heungyoul 12 December 2012 (has links)
This thesis proposes a distributed observer design for a class of nonlinear systems that arise in the application of model reduction techniques. Distributed observer design techniques have been proposed in the literature to address estimation problems over sensor networks. In large, complex sensor networks, an efficient technique that minimizes the extent of the required communication is highly desirable. This is especially true when sensors have physical limitations that produce incorrect information at the local level, affecting the estimation of states globally. To address this problem, scalable algorithms for a suitable distributed observer have been developed. Most such algorithms focus on large linear dynamical systems and are not directly generalizable to nonlinear systems. In this thesis, scalable algorithms for distributed observers are proposed for a class of large-scale observable nonlinear systems. Distributed systems model multi-agent systems in which each agent attempts to accomplish local tasks. To achieve global objectives, the agents must agree on certain commonly known variables that depend on the state of all agents; these variables are called consensus states. Once identified, consensus states can be exploited in the development of distributed consensus algorithms, which define information exchange protocols between agents such that global objectives are met through local action. In this thesis, a higher-order observer is applied in the distributed sensor network to design a distributed observer for a class of nonlinear systems. The first method applies fusion of measurement and covariance information to the higher-order filter: a consensus filter is embedded in the local nonlinear observer to fuse the data. The second method is based on the communication of state estimates between neighbouring sensors rather than fusion of measurements and covariances.
The second method is found to reduce disagreement in the state estimates across sensors. The performance of these new algorithms is demonstrated by simulation, and the second method is shown to outperform the first. / Thesis (Master, Chemical Engineering) -- Queen's University, 2012-12-12 11:22:49.113
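The second method's core mechanism, exchanging state estimates between neighbouring sensors so that disagreement shrinks, can be illustrated with a standard consensus update. This is a generic sketch of consensus on estimates, not the thesis's observer design; the network, gain `eps`, and scalar state are assumptions of the example.

```python
def consensus_step(estimates, neighbors, eps=0.2):
    """One round of estimate exchange: each sensor nudges its state
    estimate toward its neighbours' estimates, reducing disagreement
    without sharing raw measurements or covariances."""
    new = {}
    for i, x_i in estimates.items():
        correction = sum(estimates[j] - x_i for j in neighbors[i])
        new[i] = x_i + eps * correction
    return new

# Fully connected network of three sensors with diverging local estimates.
estimates = {0: 1.0, 1: 2.0, 2: 6.0}
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(50):
    estimates = consensus_step(estimates, neighbors)
# The symmetric update preserves the network average (3.0), and every
# sensor's estimate contracts toward it, so disagreement goes to zero.
```

In the thesis's setting each sensor would also run its own nonlinear observer between exchange rounds; this sketch isolates only the agreement step.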
35

Performance Optimization Techniques and Tools for Distributed Graph Processing

Kalavri, Vasiliki January 2016 (has links)
In this thesis, we propose optimization techniques for distributed graph processing. First, we describe a data processing pipeline that leverages an iterative graph algorithm for automatic classification of web trackers. Using this application as a motivating example, we examine how asymmetrical convergence of iterative graph algorithms can be used to reduce the amount of computation and communication in large-scale graph analysis. We propose an optimization framework for fixpoint algorithms and a declarative API for writing fixpoint applications. Our framework uses a cost model to automatically exploit asymmetrical convergence and evaluate execution strategies during runtime. We show that our cost model achieves speedups of up to 1.7x and communication savings of up to 54%. Next, we propose to use the concepts of semi-metricity and the metric backbone to reduce the amount of data that needs to be processed in large-scale graph analysis. We provide a distributed algorithm for computing the metric backbone using the vertex-centric programming model. Using the backbone, we can reduce graph sizes by up to 88% and achieve speedups of up to 6.7x.
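Asymmetrical convergence means different parts of the graph reach their fixpoint at different times, so converged regions can be skipped. A minimal single-machine sketch of this idea, using min-label propagation (connected components) as the fixpoint algorithm, is shown below; the function and its details are illustrative and not the thesis's framework or API.

```python
from collections import defaultdict

def fixpoint_min_label(edges, num_vertices):
    """Vertex-centric fixpoint: propagate the minimum vertex id through
    each connected component. Only vertices whose value changed in the
    previous superstep stay active, so already-converged parts of the
    graph cost nothing in later iterations."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    label = {v: v for v in range(num_vertices)}   # initial value: own id
    active = set(range(num_vertices))
    while active:
        next_active = set()
        for u in active:
            for v in adj[u]:
                if label[u] < label[v]:            # smaller label wins
                    label[v] = label[u]
                    next_active.add(v)             # only changed vertices re-run
        active = next_active
    return label

labels = fixpoint_min_label([(0, 1), (1, 2), (3, 4)], 5)
# Component {0, 1, 2} converges to label 0, component {3, 4} to label 3.
```

A cost-model-driven framework like the one described would additionally decide at runtime whether maintaining the active set is worth its bookkeeping overhead versus simply recomputing every vertex.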
36

Theories for Session-based Governance for Large-scale Distributed Systems

Chen, Tsu-Chun January 2013 (has links)
Large-scale distributed systems and distributed computing are the pillars of today's IT infrastructure and society. Robust theoretical principles for designing, building, managing, and understanding the interactive behaviours of such systems need to be explored. A promising approach for establishing such principles is to view the session as the key unit for design, execution, and verification. Governance is a general term for verifying whether activities meet specified requirements and for enforcing safe behaviours among processes. This thesis, based on the asynchronous π-calculus and the theory of session types, provides a monitoring framework and a theory for validating specifications, verifying mutual behaviours during runtime, and taking actions when noncompliant behaviours are detected. We explore properties and principles for governing large-scale distributed systems, in which autonomous and heterogeneous system components interact with each other over the network to accomplish application goals. Incorporating lessons from my participation in a substantial practical project, the Ocean Observatories Initiative (OOI), this thesis proposes an asynchronous monitoring framework and a process calculus for dynamically governing the asynchronous interactions among multiple distributed applications. We prove that this monitoring model guarantees the satisfaction of global assertions, and we state and prove theorems of local and global safety, transparency, and session fidelity. We also introduce the semantic mechanisms for runtime session-based governance and the principles of validating stateful specifications by capturing runtime asynchronous interactions.
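The monitoring idea can be illustrated at its most basic: a local session type rendered as a finite-state machine, with every observed message checked against the expected transitions. This is a deliberately simplified sketch, not the thesis's π-calculus-based framework, and the protocol, states, and labels below are invented for the example.

```python
class SessionMonitor:
    """Check each observed message against a local session type given as a
    finite-state machine; out-of-protocol messages raise a violation."""
    def __init__(self, transitions, start):
        # transitions: {state: {(direction, label): next_state}}
        self.transitions = transitions
        self.state = start

    def observe(self, direction, label):
        key = (direction, label)
        allowed = self.transitions.get(self.state, {})
        if key not in allowed:
            raise RuntimeError(f"violation: {key} in state {self.state!r}")
        self.state = allowed[key]

# Toy request/response session: send 'request', then receive 'ok' or 'error'.
proto = {
    "start": {("send", "request"): "await"},
    "await": {("recv", "ok"): "done", ("recv", "error"): "done"},
}
m = SessionMonitor(proto, "start")
m.observe("send", "request")
m.observe("recv", "ok")        # conforming run ends in state "done"
```

The thesis's framework goes much further (asynchrony, global assertions, safety and fidelity theorems), but the runtime check-against-specification step has this shape.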
37

A General Framework for Multiparty Computations

Reistad, Tord Ingolf January 2012 (has links)
Multiparty computation is a computation between multiple players who want to compute a common function of their private inputs. It was first proposed over 20 years ago and has since matured into a well-established science. The goal of this thesis has been to develop efficient protocols for different operations used in multiparty computation and to propose uses for multiparty computation in real-world systems. This thesis therefore gives the reader an overview of multiparty computation, from the simplest primitives to the current state of software frameworks, and provides ideas for future applications. Included in this thesis is a proposed model of multiparty computation based on a model of communication complexity. This model provides a good foundation for the included papers and for measuring the efficiency of multiparty computation protocols. In addition to this model, a more practical approach is also included, which examines different secret sharing schemes and how they are used as building blocks for basic multiparty computation operations. This thesis identifies five basic multiparty computation operations: sharing, recombining, addition, multiplication, and negation, and shows how these five operations can be used to create more complex operations. In particular, two operations, “less-than” and “bitwise decomposition”, are examined in detail in the included papers. “Less-than” applies the “<” operator to two secret-shared values, yielding a secret-shared result, and “bitwise decomposition” transforms a secret-shared value into a vector of secret-shared bit values. The overall goal of this thesis has been to create efficient methods for multiparty computation so that it might be used in practical applications in the future.
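Four of the five basic operations named above can be demonstrated with additive secret sharing, one common scheme (the thesis surveys several, so this is an illustrative choice, not necessarily its scheme). Sharing, recombining, addition, and negation are all local arithmetic modulo a prime; multiplication is the odd one out, requiring interaction between players (e.g. via preprocessed multiplication triples), and is omitted here.

```python
import random

P = 2_147_483_647  # a prime modulus; all share arithmetic is mod P

def share(secret, n=3):
    """Sharing: split `secret` into n random shares that sum to it mod P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def recombine(shares):
    """Recombining: the secret is the sum of all shares mod P."""
    return sum(shares) % P

def add(shares_a, shares_b):
    """Addition is local: each player adds its two shares."""
    return [(a + b) % P for a, b in zip(shares_a, shares_b)]

def negate(shares):
    """Negation is local: each player negates its share."""
    return [(-s) % P for s in shares]

x = share(20)
y = share(22)
assert recombine(add(x, y)) == 42          # 20 + 22, never revealed in pieces
assert recombine(negate(x)) == (-20) % P
```

Protocols such as “less-than” and “bitwise decomposition” are built on top of exactly these primitives, which is why their efficiency dominates the cost of richer computations.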
38

Dynamic Binding of Names in Calculi for Mobile Processes

Vivas Frontana, Jose Luis January 2001 (has links)
No description available.
39

Performance Isolation in Cloud Storage Systems

Singh, Akshay K. 09 1900 (has links)
Cloud computing enables data centres to provide resource sharing across multiple tenants. This sharing, however, usually comes at the cost of reduced isolation between tenants, which can lead to inconsistent and unpredictable performance. This variability becomes an impediment for clients whose services rely on consistent, responsive performance in cloud environments. The problem is exacerbated for applications that rely on cloud storage systems, as performance in these systems is affected by disk access times, which often dominate overall request service times for these types of data services. In this thesis we introduce MicroFuge, a new distributed caching and scheduling middleware that provides performance isolation for cloud storage systems. MicroFuge's cache eviction policy is tenant- and deadline-aware: it provides isolation between tenants and ensures that data for queries with more urgent deadlines, which are most likely to be affected by competing requests, are less likely to be evicted than data for other queries. MicroFuge also provides simplified, intelligent scheduling and request admission control, using a performance model of the underlying storage system to reject requests whose deadlines are unlikely to be satisfied. The middleware approach of MicroFuge makes it unique among systems that provide performance isolation in cloud storage systems. Rather than providing performance isolation for one particular cloud storage system, MicroFuge can be deployed on top of any already-deployed storage system without modifying it. Given the wide spectrum of cloud storage systems available today, such an approach makes MicroFuge very adoptable.
In this thesis, we show that MicroFuge can provide significantly better performance isolation between tenants with different latency requirements than Memcached, and, with admission control enabled, can ensure that more than a certain percentage of requests meet their deadlines.
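The deadline-aware part of the eviction policy can be sketched as follows. This is a minimal illustration of the principle stated in the abstract (evict data for the least urgent queries first), not MicroFuge's actual data structures or API; the class, fields, and victim-selection rule are assumptions of the example.

```python
class DeadlineAwareCache:
    """Toy deadline-aware cache: when full, evict the entry whose deadline
    is furthest away, since requests with urgent deadlines suffer most
    from a cache miss."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}                 # key -> (tenant, deadline, value)

    def put(self, key, tenant, deadline, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # Victim: the entry with the most distant (least urgent) deadline.
            victim = max(self.data, key=lambda k: self.data[k][1])
            del self.data[victim]
        self.data[key] = (tenant, deadline, value)

    def get(self, key):
        entry = self.data.get(key)
        return entry[2] if entry else None

cache = DeadlineAwareCache(capacity=2)
cache.put("a", "tenant1", deadline=1.0, value="va")   # urgent
cache.put("b", "tenant1", deadline=9.0, value="vb")   # least urgent
cache.put("c", "tenant2", deadline=2.0, value="vc")   # forces eviction of "b"
```

A real policy would also weigh per-tenant fairness (the tenant-aware half of the design) and hit frequency, not deadlines alone.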
40

System Support for Strong Accountability

Yumerefendi, Aydan Rafet January 2009 (has links)
Computer systems not only provide unprecedented efficiency and numerous benefits, but also offer powerful means and tools for abuse. This reality is increasingly evident as deployed software spans trust domains and enables interactions between self-interested participants with potentially conflicting goals. With systems growing more complex and interdependent, there is a growing need to localize, identify, and isolate faults and unfaithful behavior. Conventional techniques for building secure systems, such as secure perimeters and Byzantine fault tolerance, are insufficient to ensure that trusted users and software components are indeed trustworthy. Secure perimeters do not work across trust domains and fail when a participant acts within the limits of the existing security policy yet deliberately manipulates the system to her own advantage. Byzantine fault tolerance offers techniques to tolerate misbehavior, but offers no protection when replicas collude or are under the control of a single entity.
Complex interdependent systems necessitate new mechanisms that complement existing solutions to identify improper behavior and actions, limit the propagation of incorrect information, and assign responsibility when things go wrong. This thesis addresses the problems of misbehavior and abuse by offering tools and techniques to integrate accountability into computer systems. A system is accountable if it offers means to identify and expose semantic misbehavior by its participants. An accountable system can construct undeniable evidence to demonstrate its correctness: the evidence serves as explicit proof of misbehavior and can be strong enough to serve as a basis for social sanction external to the system.
Accountability offers strong disincentives for abuse and misbehavior, but may have to be "designed in" to an application's specific protocols, logic, and internal representation; achieving accountability with general techniques is a challenge. Extending responsibility to end users for actions performed by software components on their behalf is not trivial, as it requires the ability to determine whether a component correctly represents a user's intentions. Leaks of private information are yet another concern: even correctly functioning applications can leak sensitive information for which their owners may be accountable. Important infrastructure services, such as distributed virtual resource economies, raise a range of application-specific issues, including fine-grained resource delegation, virtual currency models, and complex workflows.
This thesis addresses these problems by designing, implementing, applying, and evaluating a generic methodology for integrating accountability into network services and applications. Our state-based approach decouples application state management from application logic, enabling services to demonstrate that they maintain their state in compliance with user requests: state changes do take place, and the service presents a consistent view to all clients and observers. Internal state managed in this way can then feed application-specific verifiers that determine the correctness of the service's logic and identify the responsible party. The state-based approach supports strong accountability: any detected violation can be proven to a third party without depending on replication and voting.
In addition to the generic state-based approach, this thesis explores how to leverage application-specific knowledge to integrate accountability into an example application. We study the invariants and accountability requirements of a lease-based virtual resource economy and present the design and implementation of several key elements needed to provide accountability in the system. In particular, we describe solutions to the problems of resource delegation, currency spending, and lease protocol compliance. These solutions illustrate a technique complementary to the general-purpose state-based approach developed in the earlier parts of this thesis.
Separating the actions of software from those of its user is at the heart of the third component of this dissertation. We design, implement, and evaluate an approach to detect information leaks in a commodity operating system. Our novel OS abstraction, a doppelganger process, helps track information flow without requiring application rewrites or instrumentation. Doppelganger processes help identify sensitive data as they are about to leave the confines of the system. Users can then be alerted about the potential breach and can choose to prevent the leak, avoiding accountability for the actions of software acting on their behalf. / Dissertation
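One common building block for the kind of undeniable evidence the abstract describes is a tamper-evident, hash-chained record of state changes: each record commits to its predecessor, so later alteration of recorded history is detectable by anyone holding the log. The sketch below is a generic illustration of that idea, not the thesis's state-management protocol; the record format and field names are invented for the example.

```python
import hashlib
import json

GENESIS = "0" * 64  # digest used as the predecessor of the first record

def append_entry(log, state_change):
    """Append a state change; each record's digest covers the previous
    record's digest, chaining the whole history together."""
    prev = log[-1]["digest"] if log else GENESIS
    body = json.dumps(state_change, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "change": state_change, "digest": digest})

def verify(log):
    """Recompute the chain; any mutated, dropped, or reordered record
    breaks a link and is reported as a violation."""
    prev = GENESIS
    for rec in log:
        body = json.dumps(rec["change"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if rec["prev"] != prev or rec["digest"] != expected:
            return False
        prev = rec["digest"]
    return True

log = []
append_entry(log, {"op": "write", "key": "x", "value": 1})
append_entry(log, {"op": "write", "key": "x", "value": 2})
assert verify(log)
log[0]["change"]["value"] = 99   # tamper with recorded history
assert not verify(log)           # the violation is now detectable
```

Strong accountability needs more than tamper evidence (signatures to bind records to a responsible party, and verifiers for application semantics), but a committed history of state changes is the substrate such proofs are built on.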
