  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
521

Designing incentives for peer-to-peer systems

January 2010 (has links)
Peer-to-peer systems, networks of egalitarian nodes without a central authority, can achieve massive scalability and fault tolerance by pooling individual resources. Unfortunately, most nodes represent self-interested, or rational, parties that will attempt to maximize their consumption of shared resources while minimizing their own contributions. This constitutes a type of attack that can destabilize the system. The first contribution of this thesis is a proposed taxonomy of these rational attacks and of the most common solutions used in contemporary designs to thwart them. One approach is to design the P2P system with incentives for cooperation, so that rational nodes voluntarily behave. We broadly classify these incentives as either genuine or artificial, with the former describing incentives inherent in peer interactions and the latter describing a secondary enforcement system. We observe that genuine incentives tend to be more robust to rational manipulation than their artificial counterparts. Based on this observation, we also propose two extensions to BitTorrent, a P2P file distribution protocol. While this system is popular, accounting for approximately one-third of current Internet traffic, it has known limitations. Our extensions use genuine incentives to address some of these problems. The first extension improves seeding, an altruistic mode wherein nodes that have completed their download continue to provide upload service. We incentivize seeding by giving clients long-term identifiers, enabling seeding clients to be recognized and rewarded in subsequent downloads. Simulations demonstrate that our method is highly effective in protecting swarms from aggressive clients such as BitTyrant. Finally, we introduce the BitTorrent Anonymity Marketplace, wherein each peer simultaneously joins multiple swarms to disguise its true download intentions.
Peers then trade one torrent for another, making the cover traffic valuable as a means of obtaining the real target. Thus, when a neighbor receives a request from a peer for blocks of a torrent, it does not know whether the peer is really downloading that torrent or only using it in trade. Using simulation, we demonstrate that nodes cannot determine peer intent from observed interactions.
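The long-term-identifier mechanism can be illustrated with a minimal sketch. The class names and the credit policy below are hypothetical, not the thesis's actual protocol; the point is only that a persistent identifier lets upload performed while seeding be remembered and rewarded in later swarms.

```python
# Hypothetical sketch: a persistent peer ID accrues "seed credit" for upload
# done after the peer's own download completes, and past seeders are then
# preferred when filling unchoke slots in subsequent swarms.
class Peer:
    def __init__(self, peer_id):
        self.peer_id = peer_id      # long-term identifier, stable across swarms
        self.seed_credit = 0.0      # bytes uploaded while seeding, carried forward

    def record_seeding(self, bytes_uploaded):
        self.seed_credit += bytes_uploaded

def pick_unchoked(peers, slots=4):
    # Reward past seeders: fill unchoke slots by descending seed credit.
    return sorted(peers, key=lambda p: p.seed_credit, reverse=True)[:slots]
```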
522

Efficient tamper-evident data structures for untrusted servers

January 2010 (has links)
Many real-world applications run on untrusted servers or on servers that are subject to strong insider attacks. Although we cannot prevent an untrusted server from modifying or deleting data, with tamper-evident data structures we can discover when this has occurred. If an untrusted server knows that a particular reply will not be checked for correctness, it is free to lie. Auditing for correctness is thus a frequent but overlooked operation. In my thesis, I present and evaluate new efficient data structures for tamper-evident logging and tamper-evident storage of changing data on untrusted servers, focusing on the costs of the entire system. The first data structure is a new tamper-evident log design. I propose new semantics for tamper-evident logs in terms of the auditing process required to detect misbehavior. To accomplish efficient auditing, I describe and benchmark a new tree-based data structure that can generate such proofs with logarithmic size and space, significantly improving over previous linear constructions while also offering a flexible query mechanism with authenticated results. The remaining data structures are designs for a persistent authenticated dictionary (PAD) that allows users to send lookup requests to an untrusted server and get authenticated answers, signed by a trusted author, for both the current and historical versions of the dataset. Improving on prior constructions that require logarithmic storage and time, I present new classes of efficient PAD algorithms offering constant-sized authenticated answers or constant storage per update. I implement 21 different versions of PAD algorithms and perform a comprehensive evaluation, using contemporary cloud-computing prices for computing and bandwidth to determine the most monetarily cost-effective designs.
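A tree-based tamper-evident log with logarithmic membership proofs can be sketched with a standard Merkle tree; this is a generic illustration, not the thesis's exact history-tree design or its auditing semantics:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class MerkleLog:
    """Append-only log: the root commits to all entries, and a membership
    proof is a path of sibling hashes with O(log n) size."""

    def __init__(self):
        self.leaves = []

    def append(self, entry: bytes):
        # Domain-separate leaf hashes from interior hashes.
        self.leaves.append(h(b"\x00" + entry))

    def root(self) -> bytes:
        level = self.leaves[:]
        while len(level) > 1:
            if len(level) % 2:                 # duplicate last node on odd levels
                level = level + [level[-1]]
            level = [h(b"\x01" + level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    def proof(self, index):
        """Sibling hashes from the leaf up to the root."""
        path, level, i = [], self.leaves[:], index
        while len(level) > 1:
            if len(level) % 2:
                level = level + [level[-1]]
            sib = i ^ 1
            path.append((level[sib], sib < i))  # (hash, sibling-is-on-the-left)
            level = [h(b"\x01" + level[j] + level[j + 1])
                     for j in range(0, len(level), 2)]
            i //= 2
        return path

def verify(entry: bytes, path, root: bytes) -> bool:
    """An auditor recomputes the root from the entry and the proof path."""
    node = h(b"\x00" + entry)
    for sib, sib_is_left in path:
        node = h(b"\x01" + sib + node) if sib_is_left else h(b"\x01" + node + sib)
    return node == root
```

Any server-side modification of a logged entry changes the recomputed root, so a lie is caught as soon as the reply is audited against a trusted root.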
523

Grid-centric scheduling strategies for workflow applications

January 2010 (has links)
Grid computing faces a great challenge because its resources are not localized but distributed, heterogeneous, and dynamic. Thus, it is essential to provide a set of programming tools that execute an application on Grid resources with as little input from the user as possible. The thesis of this work is that Grid-centric scheduling techniques for workflow applications can provide good usability of the Grid environment by reliably executing the application on a large-scale distributed system with good performance. We support our thesis with new and effective approaches in the following five aspects. First, we modeled the performance of the existing scheduling approaches in a multi-cluster Grid environment. We implemented several widely used scheduling algorithms and identified the best candidate. The study further introduced a new measurement, based on our experiments, which can improve the schedule quality of some scheduling algorithms as much as 20-fold in a multi-cluster Grid environment. Second, we studied the scalability of the existing Grid scheduling algorithms. To deal with Grid systems consisting of hundreds of thousands of resources, we designed and implemented a novel approach that performs explicit resource selection decoupled from scheduling. Our experimental evaluation confirmed that our decoupled approach can scale in such an environment without sacrificing the quality of the schedule by more than 10%. Third, we proposed solutions to address the dynamic nature of Grid computing with a new cluster-based hybrid scheduling mechanism. Our experimental results, collected from real executions on production clusters, demonstrated that this approach produces programs that run 30% to 100% faster than the other scheduling approaches we implemented, on both reserved and shared resources. Fourth, we improved the reliability of Grid computing by incorporating fault-tolerance and recovery mechanisms into workflow application execution.
Our experiments on a simulated multi-cluster Grid environment demonstrated the effectiveness of our approach and also characterized the three-way trade-off between reliability, performance, and resource usage when executing a workflow application. Finally, we reduced the large batch-queue wait times often found in production Grid clusters. We developed a novel approach to partition the workflow application and submit the parts judiciously to achieve a lower total batch-queue wait time. Experimental results derived from production-site batch-queue logs show that our approach can reduce total wait time by as much as 70%. Combined, our approaches can greatly improve the usability of Grid computing while increasing the performance of workflow applications in a multi-cluster Grid environment.
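The decoupling of resource selection from scheduling can be sketched as two independent steps. The selection rule (fastest processors first) and the greedy scheduler below are illustrative stand-ins, not the dissertation's algorithms:

```python
def select_resources(clusters, need):
    """Step 1 (decoupled resource selection): pick the `need` fastest
    processors across all clusters, ignoring the workflow structure."""
    procs = [(speed, cid) for cid, speeds in clusters.items() for speed in speeds]
    return sorted(procs, reverse=True)[:need]

def schedule(tasks, procs):
    """Step 2: greedy earliest-finish-time list scheduling of tasks
    (assumed already topologically ordered) onto the selected processors."""
    finish = {p: 0.0 for p in procs}     # next-free time per processor
    placement = {}
    for name, work in tasks:
        best = min(procs, key=lambda p: finish[p] + work / p[0])
        finish[best] += work / best[0]
        placement[name] = best
    return placement, max(finish.values())
```

Because step 2 only ever sees the small selected subset, the scheduler's cost no longer grows with the hundreds of thousands of raw Grid resources.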
524

Parameterization and adaptive search for graph coloring register allocation

January 2010 (has links)
Graph coloring register allocators use heuristics for register coalescing and allocation; these heuristics affect the number of physical registers that a group of virtual registers will occupy after allocation. Because the best choices cannot be determined accurately during allocation, we made them tunable by introducing new parameters: thresholds for coalescing and thresholds for defining constrained live intervals during simplification. Experiments demonstrated that neither the aggressive method nor the conservative method outperforms the other on all tests, and that the best parameters vary significantly among programs. This parameterization is profitable: the best running time reached by varying the parameters is up to 16% faster than the best fixed-parameter method. Hill-climbing and random-probe algorithms were used to find good parameters, and the latter performed better. Further analysis reveals that the search space has many irregular fluctuations that are not suitable for the hill-climber.
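The two search strategies can be sketched over a one-dimensional parameter space; the cost function here stands in for measured program running time, and both routines are generic versions rather than the thesis's tuned implementations:

```python
import random

def random_probe(evaluate, space, trials=50, seed=0):
    """Sample parameter settings uniformly at random and keep the best."""
    rng = random.Random(seed)
    return min((rng.choice(space) for _ in range(trials)), key=evaluate)

def hill_climb(evaluate, space, start=0, steps=50):
    """Repeatedly move to the better neighboring setting; it stops at the
    first local minimum, which is why an irregular, fluctuating search
    space defeats it while random probing is unaffected."""
    i = start
    for _ in range(steps):
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(space)]
        j = min(neighbors, key=lambda k: evaluate(space[k]))
        if evaluate(space[j]) >= evaluate(space[i]):
            break                      # local minimum reached
        i = j
    return space[i]
```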
525

Dynamic Software Reconfiguration in Sensor Networks

Kogekar, Sachin Vijay 08 September 2004 (has links)
Dynamic software reconfiguration poses a major challenge to the widespread application of sensor networks. This thesis presents an approach for implementing dynamic software reconfiguration in sensor networks. The thesis proposes a software reconfiguration architecture that utilizes explicit models of the design space of the embedded application, captured by formally modeling all application software components. System requirements are expressed as formal constraints on QoS parameters that are measured at runtime. Reconfiguration is performed by transitioning from one point of the operation space to another based on the constraints. The software reconfiguration architecture implements components that communicate the latest configurations of the embedded application to the sensor nodes and that monitor the health of the network. The thesis presents a simple sensor network application that performs one-dimensional tracking to demonstrate the software reconfiguration architecture deployed over a sensor network testbed.
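The constraint-driven transition between operating points can be sketched as a rule over a small design space. The configuration table, the battery QoS parameter, and all names below are hypothetical illustrations, not the thesis's formal models:

```python
# All names, QoS parameters, and thresholds are illustrative only.
CONFIGS = [  # ordered from most to least demanding operating point
    {"name": "high_rate", "sample_hz": 50, "min_battery": 0.5},
    {"name": "low_rate",  "sample_hz": 10, "min_battery": 0.1},
]

def reconfigure(battery_level):
    """Transition to the first operating point whose constraint holds
    for the QoS parameter measured at runtime."""
    for cfg in CONFIGS:
        if battery_level >= cfg["min_battery"]:
            return cfg
    return CONFIGS[-1]  # degraded fallback when no constraint is satisfiable
```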
526

GME-MOF: AN MDA METAMODELING ENVIRONMENT FOR GME

Emerson, Matthew Joel 24 March 2005 (has links)
Versatile model-based design demands languages and tools which are suitable for the creation, manipulation, transformation, and composition of domain-specific modeling languages and domain models. The Meta Object Facility (MOF) forms the cornerstone of the OMG's Model Driven Architecture (MDA) as the standard metamodeling language for the specification of domain-specific languages. This thesis describes an implementation of MOF v1.4 as an alternative metamodeling language for the Generic Modeling Environment (GME), the flagship tool of Model Integrated Computing (MIC). This implementation utilizes model-to-model transformations specified with the Graph Rewriting and Transformation tool suite (GReAT) to translate between MOF and the UML-based GME metamodeling language. The technique described here illustrates the role formally well-defined metamodeling and metamodel-based model transformation approaches can play in interfacing MIC technology to new and evolving modeling standards.
527

Interpretive Parsing Techniques for Building Object Networks

Somogyi, Nora Eva 27 April 2005 (has links)
Model-based programming is becoming increasingly popular and important because it can be used not only by programmers but also by people who have specific domain knowledge but little programming knowledge. Many people encounter the problem that old domain-specific data elements are stored in some kind of text format, and modeling tools do not provide interfaces to import these data. There is thus a need to import existing data into the modeling tool in an easy and comprehensive way, avoiding loss of data or cumbersome manual transformation of data from one application to the other. Many model-based tools, both academic and commercial, are offered on the market that can be used to build and manipulate large object network models. Unfortunately, these tools usually do not provide reusable mechanisms that convert text into data structures, although they usually do provide processes to extract and format text from the object networks. Some tools for creating and manipulating such network models are developed at ISIS (the Institute for Software Integrated Systems), such as GME (Generic Modeling Environment), UDM (Universal Data Model), and GReAT (Graph Rewriting and Transformation Engine). In this thesis, interpretive parsing techniques are introduced that provide a convenient way to parse a text file and interpret the object construction actions defined in a corresponding grammar, generating data structures from the text file. A grammar editor tool and a simple interpretive parser that processes the text file and the constructed grammar are the right choice for solving the problem. This thesis consists of two parts: the first part focuses on modeling grammars, and the second part focuses on parsing.
Two toolsets, GME and UDM, are used to accomplish the task of generating object networks from text files: GME is used to describe an environment for modeling grammars, and UDM is used to parse text files and create object networks. The developed grammar modeling tool supports the construction of context-free grammars and allows the user to conveniently assign object construction actions to the structure of the grammar. The developed interpretive parser works as a recursive-descent predictive LL(k) parser, offers early detection mechanisms to indicate errors in the grammar, and interprets the object construction actions included in its grammar description to build up an object network.
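The idea of grammar rules that carry object construction actions can be sketched with a toy recursive-descent parser. The grammar and the "object network" (nested Python lists) are illustrative only and unrelated to the GME/UDM APIs:

```python
def tokenize(text):
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse_list(tokens, pos=0):
    """Grammar:  list := '(' item* ')'   item := NAME | list
    Each rule carries a construction action (marked in comments) that
    builds the object network -- here just nested Python lists."""
    assert tokens[pos] == "(", "expected '('"
    pos += 1
    node = []                              # action: create a container object
    while tokens[pos] != ")":
        if tokens[pos] == "(":
            child, pos = parse_list(tokens, pos)
            node.append(child)             # action: link child into parent
        else:
            node.append(tokens[pos])       # action: create a leaf object
            pos += 1
    return node, pos + 1                   # consume the closing ')'
```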
528

Impaired Cognitive Flexibility and Intact Cognitive Control in Autism: A Computational Cognitive Neuroscience Approach

Kriete, Trenton Edward 08 April 2005 (has links)
In people with autism, the ability to enact a behavior in the presence of competing responses appears intact, while the ability to fluently adapt cognitive control in the face of changing task contingencies is impaired. In this paper, the Cross-Task Generalization model (Rougier et al., in press), which offers a formal account of the effect of dopamine on frontal cortex function, is used to capture performance of both normally functioning individuals and people with autism on a classic test of cognitive control, the Stroop task (Stroop, 1935), and one of cognitive flexibility, the Wisconsin Card Sort Test (Berg, 1948). By weakening the effect of the dopamine signal on frontal cortex, the model fits quantitative and qualitative results of autistic performance on these tasks and demonstrates the potential usefulness of computational cognitive neuroscience approaches in autism research.
529

Synthesizing and Evaluating Data-Driven Motion Transitions

Wang, Jing 12 August 2005 (has links)
to be entered
530

Sensor Node Localization Using Mobile Acoustic Beacons

Kushwaha, Manish 12 August 2005 (has links)
Node localization, also known as self-localization, is the problem of localizing the physical sensor nodes in a given sensor network deployment. Localization is an essential tool for the deployment of low-cost sensor networks for use in location-aware applications and ubiquitous networking. In this thesis, we present a sensor node localization method based on a mobile acoustic beacon. Our technique is passive in that the sensor nodes themselves do not need to generate an acoustic signal for ranging; this saves cost and energy and provides stealthy operation. The acoustic ranging method used in this work provides longer range and higher accuracy because the mobile beacon can generate much more acoustic energy than a severely resource-constrained sensor node. The localization algorithm is especially designed to work in reverberant environments such as dense urban terrain, and it handles the non-Gaussian ranging errors caused by echoes. Node locations are computed centrally by solving a global non-linear optimization problem in an iterative and incremental fashion.
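The centralized iterative solution can be sketched as gradient descent on the sum of squared range residuals for a single node; this simple sketch assumes Gaussian errors and omits the thesis's handling of non-Gaussian echo outliers:

```python
import math

def localize(beacons, ranges, iters=200, lr=0.1):
    """Minimize sum((dist(node, beacon_i) - range_i)^2) by gradient
    descent, started from the centroid of the beacon positions."""
    x = sum(bx for bx, _ in beacons) / len(beacons)
    y = sum(by for _, by in beacons) / len(beacons)
    for _ in range(iters):
        gx = gy = 0.0
        for (bx, by), r in zip(beacons, ranges):
            d = math.hypot(x - bx, y - by) or 1e-9   # avoid division by zero
            gx += (d - r) * (x - bx) / d
            gy += (d - r) * (y - by) / d
        x -= lr * gx
        y -= lr * gy
    return x, y
```

A robust variant would down-weight large residuals (echoes) instead of squaring them, which is where the non-Gaussian error handling would enter.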
