31

Theories for Session-based Governance for Large-scale Distributed Systems

Chen, Tsu-Chun January 2013 (has links)
Large-scale distributed systems and distributed computing are pillars of today's IT infrastructure and society. Robust theoretical principles for designing, building, managing and understanding the interactive behaviours of such systems need to be explored. A promising approach for establishing such principles is to view the session as the key unit for design, execution and verification. Governance is a general term for verifying whether activities meet the specified requirements and for enforcing safe behaviours among processes. This thesis, based on the asynchronous π-calculus and the theory of session types, provides a monitoring framework and a theory for validating specifications, verifying mutual behaviours at runtime, and taking action when noncompliant behaviours are detected. We explore properties and principles for governing large-scale distributed systems, in which autonomous and heterogeneous system components interact with each other over the network to accomplish application goals. Incorporating lessons from my participation in a substantial practical project, the Ocean Observatories Initiative (OOI), this thesis proposes an asynchronous monitoring framework and a process calculus for dynamically governing the asynchronous interactions among multiple distributed applications. We prove that this monitoring model guarantees the satisfaction of global assertions, and we state and prove theorems of local and global safety, transparency, and session fidelity. We also introduce semantic mechanisms for runtime session-based governance and principles for validating stateful specifications by capturing runtime asynchronous interactions.
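To make the session-based monitoring idea concrete, the following is a minimal Python sketch (not the thesis's formal π-calculus framework): a monitor checks each observed message of one endpoint against a toy local session protocol and flags noncompliant behaviour. The protocol steps, labels and Violation handling are illustrative assumptions.

```python
# Illustrative sketch only: a toy runtime monitor that checks observed
# messages against a local session protocol (a linear sequence of steps).
# The protocol, message format, and Violation handling are assumptions
# made for illustration, not the thesis's formal framework.

from dataclasses import dataclass

@dataclass
class Step:
    direction: str   # "send" or "recv", from the monitored endpoint's view
    label: str       # message label expected at this step

class Violation(Exception):
    pass

class SessionMonitor:
    def __init__(self, local_protocol):
        self.steps = list(local_protocol)
        self.pos = 0

    def observe(self, direction, label):
        """Check one observed message; raise Violation if it is noncompliant."""
        if self.pos >= len(self.steps):
            raise Violation(f"unexpected {direction} '{label}' after protocol end")
        expected = self.steps[self.pos]
        if (direction, label) != (expected.direction, expected.label):
            raise Violation(
                f"expected {expected.direction} '{expected.label}', "
                f"observed {direction} '{label}'")
        self.pos += 1

    def completed(self):
        return self.pos == len(self.steps)

# Example: a buyer endpoint must send 'request', receive 'quote', send 'accept'.
monitor = SessionMonitor([Step("send", "request"),
                          Step("recv", "quote"),
                          Step("send", "accept")])
monitor.observe("send", "request")
monitor.observe("recv", "quote")
monitor.observe("send", "accept")
assert monitor.completed()
```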
32

A General Framework for Multiparty Computations

Reistad, Tord Ingolf January 2012 (has links)
Multiparty computation is a computation between multiple players who want to compute a common function of their private inputs. It was first proposed over 20 years ago and has since matured into a well-established science. The goal of this thesis has been to develop efficient protocols for different operations used in multiparty computation and to propose uses for multiparty computation in real-world systems. This thesis therefore gives the reader an overview of multiparty computation, from the simplest primitives to the current state of software frameworks for multiparty computation, and provides ideas for future applications. Included in this thesis is a proposed model of multiparty computation based on a model of communication complexity. This model provides a good foundation for the included papers and for measuring the efficiency of multiparty computation protocols. In addition to this model, a more practical approach is also included, which examines different secret sharing schemes and how they are used as building blocks for basic multiparty computation operations. This thesis identifies five basic multiparty computation operations: sharing, recombining, addition, multiplication and negation, and shows how these five operations can be used to create more complex operations. In particular, two operations, “less-than” and “bitwise decomposition”, are examined in detail in the included papers. “Less-than” evaluates the “<” operator on two secret-shared values and produces a secret-shared result, while “bitwise decomposition” takes a secret-shared value and transforms it into a vector of secret-shared bits. The overall goal of this thesis has been to create efficient methods for multiparty computation so that it might be used for practical applications in the future.
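As a concrete illustration of the basic primitives named above, here is a minimal Python sketch of additive secret sharing over a prime field, covering sharing, recombining, addition and negation; multiplication of two shared values requires an interactive protocol and is omitted. The modulus and number of players are arbitrary choices, not taken from the thesis.

```python
# Illustrative sketch only: additive secret sharing over Z_p, showing the
# "sharing", "recombining", "addition" and "negation" primitives.
# Multiplication of two shared values needs an interactive protocol
# and is omitted. Parameters (p, number of players) are assumptions.

import secrets

P = 2**61 - 1          # a prime modulus, chosen for the example
N_PLAYERS = 3

def share(secret):
    """Split a secret into N_PLAYERS additive shares mod P."""
    shares = [secrets.randbelow(P) for _ in range(N_PLAYERS - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def recombine(shares):
    return sum(shares) % P

def add(shares_a, shares_b):
    """Each player adds its shares locally; no communication needed."""
    return [(a + b) % P for a, b in zip(shares_a, shares_b)]

def negate(shares_a):
    return [(-a) % P for a in shares_a]

x, y = 1234, 5678
sx, sy = share(x), share(y)
assert recombine(add(sx, sy)) == (x + y) % P
assert recombine(negate(sx)) == (-x) % P
```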
33

Dynamic Binding of Names in Calculi for Mobile Processes

Vivas Frontana, Jose Luis January 2001 (has links)
No description available.
34

System Support for Strong Accountability

Yumerefendi, Aydan Rafet January 2009 (has links)
Computer systems not only provide unprecedented efficiency and numerous benefits, but also offer powerful means and tools for abuse. This reality is increasingly evident as deployed software spans trust domains and enables the interactions of self-interested participants with potentially conflicting goals. With systems growing more complex and interdependent, there is a growing need to localize, identify, and isolate faults and unfaithful behavior. Conventional techniques for building secure systems, such as secure perimeters and Byzantine fault tolerance, are insufficient to ensure that trusted users and software components are indeed trustworthy. Secure perimeters do not work across trust domains and fail when a participant acts within the limits of the existing security policy and deliberately manipulates the system to her own advantage. Byzantine fault tolerance offers techniques to tolerate misbehavior, but offers no protection when replicas collude or are under the control of a single entity.

Complex interdependent systems necessitate new mechanisms that complement existing solutions to identify improper behavior and actions, limit the propagation of incorrect information, and assign responsibility when things go wrong. This thesis addresses the problems of misbehavior and abuse by offering tools and techniques to integrate accountability into computer systems. A system is accountable if it offers means to identify and expose semantic misbehavior by its participants. An accountable system can construct undeniable evidence to demonstrate its correctness; the evidence serves as explicit proof of misbehavior and can be strong enough to be used as a basis for social sanction external to the system. Accountability offers strong disincentives for abuse and misbehavior but may have to be "designed in" to an application's specific protocols, logic, and internal representation; achieving accountability using general techniques is a challenge. Extending responsibility to end users for actions performed by software components on their behalf is not trivial, as it requires an ability to determine whether a component correctly represents a user's intentions. Leaks of private information are yet another concern: even correctly functioning applications can leak sensitive information, for which their owners may be accountable. Important infrastructure services, such as distributed virtual resource economies, raise a range of application-specific issues such as fine-grained resource delegation, virtual currency models, and complex workflows.

This thesis addresses the aforementioned problems by designing, implementing, applying, and evaluating a generic methodology for integrating accountability into network services and applications. Our state-based approach decouples application state management from application logic to enable services to demonstrate that they maintain their state in compliance with user requests, i.e., that state changes do take place and that the service presents a consistent view to all clients and observers. Internal state managed in this way can then be used to feed application-specific verifiers that determine the correctness of the service's logic and identify the responsible party. The state-based approach provides support for strong accountability: any detected violation can be proven to a third party without depending on replication and voting.

In addition to the generic state-based approach, this thesis explores how to leverage application-specific knowledge to integrate accountability into an example application. We study the invariants and accountability requirements of an example application, a lease-based virtual resource economy. We present the design and implementation of several key elements needed to provide accountability in the system. In particular, we describe solutions to the problems of resource delegation, currency spending, and lease protocol compliance. These solutions illustrate a technique complementary to the general-purpose state-based approach developed in the earlier parts of this thesis.

Separating the actions of software from those of its user is at the heart of the third component of this dissertation. We design, implement, and evaluate an approach to detect information leaks in a commodity operating system. Our novel OS abstraction, a doppelganger process, helps track information flow without requiring application rewriting or instrumentation. Doppelganger processes help identify sensitive data as it is about to leave the confines of the system. Users can then be alerted about the potential breach and can choose to prevent the leak, avoiding accountability for the actions of software acting on their behalf. / Dissertation
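For intuition about how a service can produce verifiable evidence about its state changes, the sketch below shows a generic hash-chained state log in Python. This is a simplified stand-in chosen for illustration, not the dissertation's state-based accountability mechanism or its doppelganger abstraction.

```python
# Illustrative sketch only: a generic hash-chained state log, one common way
# to let a service commit to its sequence of state changes so that a verifier
# can detect tampering or omissions. A simplified stand-in, not the
# dissertation's actual mechanism.

import hashlib
import json

class StateLog:
    def __init__(self):
        self.entries = []            # (request, new_state, chained digest)
        self.head = b"genesis"

    def record(self, request, new_state):
        payload = json.dumps({"request": request, "state": new_state},
                             sort_keys=True).encode()
        self.head = hashlib.sha256(self.head + payload).digest()
        self.entries.append((request, new_state, self.head))
        return self.head              # digest the service can sign and publish

    @staticmethod
    def verify(entries):
        """Recompute the chain; any modified or dropped entry changes the head."""
        head = b"genesis"
        for request, state, digest in entries:
            payload = json.dumps({"request": request, "state": state},
                                 sort_keys=True).encode()
            head = hashlib.sha256(head + payload).digest()
            if head != digest:
                return False
        return True

log = StateLog()
log.record({"op": "put", "key": "x", "value": 1}, {"x": 1})
log.record({"op": "put", "key": "y", "value": 2}, {"x": 1, "y": 2})
assert StateLog.verify(log.entries)
```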
36

DistNeo4j: Scaling Graph Databases through Dynamic Distributed Partitioning

Nicoara, Daniel 14 October 2014 (has links)
Social networks are large graphs that require multiple servers to store and manage them. Providing performant, scalable systems that store these graphs by partitioning them into subgraphs is an important problem. In such systems, each partition is hosted by a server so as to satisfy multiple objectives. These objectives include balancing server loads, reducing remote traversals (the number of edges cut), and adapting the partitioning to changes in the structure of the graph in the face of changing workloads. To address these issues, a dynamic repartitioning algorithm is required that modifies an existing partitioning to maintain good-quality partitions. Such a repartitioner should not impose significant overhead on the system. This thesis introduces a greedy repartitioner, which dynamically modifies a partitioning using a small amount of resources. In contrast to existing repartitioning algorithms, the greedy repartitioner is performant in terms of time and memory, making it suitable for implementation and use in a real system. The greedy repartitioner is integrated into DistNeo4j, which is designed as an extension of the open-source Neo4j graph database system to support workloads over partitioned graph data distributed across multiple servers. Using real-world data sets, this thesis shows that DistNeo4j leverages the greedy repartitioner to maintain high-quality partitions and provides a 2 to 3 times performance improvement over de facto hash-based partitioning.
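The following Python sketch illustrates the general flavour of greedy dynamic repartitioning (it is not DistNeo4j's actual algorithm): each vertex may migrate to the partition holding most of its neighbours, reducing cut edges, subject to a load-balance slack. The example graph, slack factor and pass count are illustrative assumptions.

```python
# Illustrative sketch only: a tiny greedy vertex-migration heuristic in the
# spirit described above (not DistNeo4j's actual algorithm). A vertex moves
# to the partition that holds most of its neighbours, provided the move does
# not unbalance partition sizes beyond a slack factor.

from collections import Counter

def greedy_repartition(adjacency, assignment, n_parts, slack=1.5, passes=2):
    sizes = Counter(assignment.values())
    max_size = slack * len(adjacency) / n_parts
    for _ in range(passes):
        for v, neighbours in adjacency.items():
            here = assignment[v]
            # Count neighbours per partition to estimate cut-edge reduction.
            counts = Counter(assignment[u] for u in neighbours)
            best = max(counts, key=counts.get, default=here)
            gain = counts[best] - counts.get(here, 0)
            if best != here and gain > 0 and sizes[best] + 1 <= max_size:
                sizes[here] -= 1
                sizes[best] += 1
                assignment[v] = best
    return assignment

# Two triangles joined by one edge, initially split across both partitions.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
initial = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
print(greedy_repartition(graph, initial, n_parts=2))
# -> {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}: only one edge remains cut.
```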
37

Coding and Maintenance Strategies for Cloud Storage: Correlated Failures, Mobility and Architecture Awareness

Calis, Gokhan January 2017 (has links)
As a result of ever-growing data and recent interest in storing and analyzing it, distributed storage systems (DSS), also known as cloud storage, have become one of the most important research areas in the literature. Not only are such networks used as backbone systems by companies like Google, Microsoft and Facebook, but they have also accelerated the growth of cloud computing, which is an essential business line for institutions such as IBM, Amazon and Salesforce. This dissertation focuses on the storage side of the cloud in order to address important questions in designing such systems. First, a coding-theoretic approach is taken to handle correlated failures of multiple storage nodes. In particular, this dissertation studies distributed storage systems that can provide resilience against correlated failure patterns that affect the availability of multiple storage nodes, e.g., a power loss that may affect multiple disks. The maximum file size that can be stored in such a DSS is studied, and optimal code constructions are provided. This dissertation also studies cloud storage systems that prevent data loss under mixed failure patterns of disks and sectors in disk drives. Specifically, a general code construction is proposed to overcome such failures for any given parameter set. Because the proposed construction requires a large field size, a relaxation of the storage system's efficiency is considered in order to provide codes with smaller field sizes. Maintenance of cloud storage systems is also studied. To that end, this dissertation first studies the maintenance of DSS that include a backup node, called hierarchical DSS. Hierarchical DSS can model cellular networks, such as femtocells, as well as caching in wireless networks. In particular, we present an upper bound on the file size that can be stored over a hierarchical DSS and propose optimal code constructions. Then, the maintenance cost and the data access cost for users of such DSS are studied. Lastly, the effects of mobility on cloud storage over wireless devices are studied. Specifically, an analysis of a mobile cloud storage system that initiates the maintenance process after a certain number of devices remain in the network is performed, and different maintenance strategies are proposed that are optimal with respect to average cost in certain mobility regimes.
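As a toy illustration of how redundancy lets a distributed storage system survive node failures, the sketch below implements the simplest possible erasure code, a single XOR parity block. The dissertation's constructions address far richer correlated and mixed disk/sector failure patterns; this example tolerates only one lost block and is purely illustrative.

```python
# Illustrative sketch only: the simplest erasure code, a single XOR parity
# block across storage nodes, to show how redundancy lets a distributed
# storage system rebuild data after a node failure. This toy code tolerates
# only one lost block.

def encode(data_blocks):
    """Append one parity block that is the XOR of all data blocks."""
    parity = bytes(len(data_blocks[0]))
    for block in data_blocks:
        parity = bytes(a ^ b for a, b in zip(parity, block))
    return data_blocks + [parity]

def recover(blocks, lost_index):
    """Rebuild the single missing block by XOR-ing all surviving blocks."""
    rebuilt = bytes(len(next(b for b in blocks if b is not None)))
    for i, block in enumerate(blocks):
        if i != lost_index:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, block))
    return rebuilt

nodes = encode([b"AAAA", b"BBBB", b"CCCC"])   # 3 data nodes + 1 parity node
lost = 1
surviving = [b if i != lost else None for i, b in enumerate(nodes)]
assert recover(surviving, lost) == b"BBBB"
```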
38

Collaborative detection of cyberbullying behavior in Twitter data

Mangaonkar, Amrita January 2017 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / As the size of Twitter data increases, so do undesirable behaviors of its users. One such undesirable behavior is cyberbullying, which can lead to catastrophic consequences. Hence, it is critical to efficiently detect cyberbullying behavior by analyzing tweets, in real time if possible. Prevalent approaches to identifying cyberbullying are mainly stand-alone and thus time-consuming. This thesis proposes a new approach, called the distributed-collaborative approach, for cyberbullying detection. It consists of a network of detection nodes, each of which is independent and capable of classifying the tweets it receives. These detection nodes collaborate with each other when they need help classifying a given tweet. The study empirically evaluates various collaborative patterns and assesses the performance of each pattern in detail. Results indicate an improvement in the recall and precision of the detection mechanism over the stand-alone paradigm. Further, this research analyzes the scalability of the approach by increasing the number of nodes in the network. The empirical results obtained from experimentation show that the system is scalable. The study also includes experiments that analyze the behavior of the distributed-collaborative approach in the case of failures in the system. Additionally, this thesis tests the approach on a different domain, politics, to explore whether the results generalize.
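A minimal Python sketch of the collaboration pattern described above: a node answers on its own when its classifier is confident, and otherwise asks its peers and takes a majority vote. The placeholder classifier, thresholds and node names are illustrative assumptions, not the thesis's actual models or topology.

```python
# Illustrative sketch only: confidence-gated collaboration between detection
# nodes with a placeholder classifier. Thresholds and the classifier itself
# are stand-ins, not the thesis's actual models.

import random

class DetectionNode:
    def __init__(self, name, threshold=0.75):
        self.name = name
        self.threshold = threshold
        self.peers = []

    def classify(self, tweet):
        """Placeholder model: returns (label, confidence in [0, 1])."""
        score = random.random()               # stand-in for a real classifier
        return ("bullying" if score > 0.5 else "benign"), abs(score - 0.5) * 2

    def detect(self, tweet):
        label, confidence = self.classify(tweet)
        if confidence >= self.threshold or not self.peers:
            return label
        # Low confidence: collaborate with peers and take a majority vote.
        votes = [label] + [peer.classify(tweet)[0] for peer in self.peers]
        return max(set(votes), key=votes.count)

nodes = [DetectionNode(f"node-{i}") for i in range(3)]
for node in nodes:
    node.peers = [p for p in nodes if p is not node]
print(nodes[0].detect("example tweet text"))
```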
39

ARTS and CRAFTS: Predictive Scaling for Request-Based Services in the Cloud

Guenther, Andrew 01 June 2014 (has links) (PDF)
Modern web services can see well over a billion requests per day. Data and services at such scale require advanced software and large amounts of computational resources to process requests in reasonable time. Advancements in cloud computing now allow us to acquire additional resources faster than in traditional capacity planning scenarios. Companies can scale systems up and down as required, allowing them to meet the demand of their customers without having to purchase their own expensive hardware. Unfortunately, these now-routine scaling operations remain a primarily manual task. To solve this problem, we present CRAFTS (Cloud Resource Anticipation For Timing Scaling), a system for automatically identifying application throughput and predictively scaling cloud computing resources based on historical data. We also present ARTS (Automated Request Trace Simulator), a request-based workload generation tool for constructing diverse and realistic request patterns for modern web applications. ARTS allows us to evaluate CRAFTS' algorithms on a wide range of scenarios. In this thesis, we outline the design and implementation of both ARTS and CRAFTS and evaluate the effectiveness of various prediction algorithms applied to real-world request data and artificial workloads generated by ARTS.
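To illustrate the general idea of predictive scaling from historical request data, here is a small Python sketch: it forecasts the next interval's request rate from a moving average with a safety margin and converts the forecast into an instance count. The per-instance capacity, window and margin are assumed values, not CRAFTS' actual algorithm or parameters.

```python
# Illustrative sketch only: a predictive scaler in the spirit described above,
# not CRAFTS' actual algorithm. It forecasts the next interval's request rate
# from recent history and converts that into an instance count.

import math
from collections import deque

class PredictiveScaler:
    def __init__(self, capacity_per_instance, window=6, margin=1.2):
        self.capacity = capacity_per_instance   # requests/sec one instance handles
        self.history = deque(maxlen=window)     # recent observed request rates
        self.margin = margin                    # safety headroom on the forecast

    def observe(self, requests_per_sec):
        self.history.append(requests_per_sec)

    def recommended_instances(self):
        if not self.history:
            return 1
        forecast = sum(self.history) / len(self.history) * self.margin
        return max(1, math.ceil(forecast / self.capacity))

scaler = PredictiveScaler(capacity_per_instance=500)
for rate in [1200, 1500, 1800, 2400, 3000, 3600]:   # rising load
    scaler.observe(rate)
print(scaler.recommended_instances())   # -> 6 instances for the forecast load
```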
40

Rich Cloud-based Web Applications with Cloudbrowser 2.0

Pan, Xiaozhong 21 June 2015 (has links)
When developing web applications using traditional methods, developers need to partition the application logic between the client side and the server side, implement these two parts separately (often in two different programming languages), and write the communication code that synchronizes the application's state between the two parts. CloudBrowser is a server-centric web framework that eliminates this need for partitioning applications entirely. In CloudBrowser, the application code is executed in server-side virtual browsers, which preserve the application's presentation state. The client web browsers act as rendering devices, fetching and rendering the presentation state from the virtual browsers. The client-server communication and user-interface rendering are implemented by the framework under the hood. CloudBrowser applications are developed in a way similar to regular web pages, using no more than HTML, CSS and JavaScript. Since the user interface state is preserved, the framework also provides a continuous experience for users, who can disconnect from the application at any time and reconnect to pick up where they left off. The original implementation of CloudBrowser was single-threaded and supported deployment on only one process. We implemented CloudBrowser 2.0, a multi-process implementation of CloudBrowser. CloudBrowser 2.0 can be deployed on a cluster of servers as well as on a single multi-core server. It distributes the virtual browsers to multiple processes and dispatches client requests to the associated virtual browsers. CloudBrowser 2.0 also refines the CloudBrowser application deployment model to make the framework a PaaS platform. Developers can develop and deploy different types of applications, and the platform will automatically scale them to multiple servers. / Master of Science
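One simple way to dispatch each client request to the process hosting its virtual browser is a stable hash of the browser identifier, sketched below in Python. This is an illustrative assumption about the dispatch step, not CloudBrowser 2.0's actual dispatcher or deployment model.

```python
# Illustrative sketch only: dispatching client requests to the worker process
# that hosts the corresponding virtual browser via a stable hash of the
# browser id. Worker addresses and ids are illustrative.

import hashlib

class Dispatcher:
    def __init__(self, worker_addresses):
        self.workers = list(worker_addresses)

    def worker_for(self, browser_id):
        """Stable mapping: the same virtual browser always maps to one worker."""
        digest = hashlib.sha256(browser_id.encode()).hexdigest()
        return self.workers[int(digest, 16) % len(self.workers)]

dispatcher = Dispatcher(["10.0.0.1:9000", "10.0.0.2:9000", "10.0.0.3:9000"])
print(dispatcher.worker_for("session-42"))   # all requests for this virtual
print(dispatcher.worker_for("session-42"))   # browser go to the same worker
```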
