261
Status report of research on distributed information and decision systems in command-and-control / Research on distributed information and decision systems in command-and-control. January 1982
Prepared by: Michael Athans [et al.] / Description based on: Sept. 1981/Sept. 1982. / Prepared under contract ONR/N00014-77-C-0532 (NR 041-519 and NR 277-300x).
262
Eye movements in reading strategies: how reading strategies modulate effects of distributed processing and oculomotor control. Wotschack, Christiane. January 2009
Throughout its history, eye movement research on reading has been aware of differences in reading behavior induced by individual differences and task demands. This work introduces a novel, comprehensive concept of reading strategy, comprising individual differences in reading style and reading skill as well as reader goals. In a series of sentence reading experiments recording eye movements, the influence of reading strategies on reader- and word-level effects was investigated under the assumption of distributed processing. Results provide evidence for strategic, top-down influences on eye movement control that extend our understanding of eye guidance in reading. / Since the beginnings of eye movement research on reading, researchers have been aware of differences in gaze behavior associated with individual differences and task demands. Under the label 'reading strategy', these differences have been used mainly for diagnostic purposes. This study adopts a new, comprehensive definition of reading strategy that takes into account individual differences in reading style and reading skill as well as the reader's goals and intentions. In a series of sentence reading experiments in which eye movements were recorded, the influence of reading strategies on reader- and word-level effects was investigated under the assumption of distributed processing during reading. The results provide evidence for strategic, top-down influences on eye movements and make an important contribution to a better understanding of eye movement control in reading.
263
Programming Idioms and Runtime Mechanisms for Distributed Pervasive Computing. Adhikari, Sameer. 13 October 2004
The emergence of pervasive computing power and networking infrastructure is enabling new applications. Still, many milestones need to be reached before pervasive computing becomes an
integral part of our lives. An important missing piece is the middleware that allows developers to easily create interesting pervasive computing applications.
This dissertation explores the middleware needs of distributed pervasive applications. The main contributions of this thesis are the design, implementation, and evaluation of two systems: D-Stampede and Crest. D-Stampede allows pervasive applications to access live stream data from multiple sources using time as an index. Crest allows applications to organize historical events, and to reason about them using time, location, and identity. Together they meet the important needs of pervasive computing applications.
D-Stampede supports a computational model called the thread-channel graph. The threads map to computing devices ranging from small to high-end processing elements. Channels serve as the conduits among the threads, specifically tuned to handle time-sequenced streaming data. D-Stampede allows the dynamic creation of threads and channels, and for the dynamic establishment (and removal) of the plumbing among them.
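To make the thread-channel idea concrete, the following minimal Python sketch shows a producer thread and a consumer exchanging time-indexed items over a channel; the Channel class and its put/get methods are hypothetical illustrations of the concept, not the actual D-Stampede API.

```python
import threading
import time
from bisect import bisect_right

class Channel:
    """Toy time-indexed channel: items are stored with timestamps and
    readers fetch the latest item at or before a requested time."""
    def __init__(self):
        self._items = []          # list of (timestamp, payload), kept sorted
        self._lock = threading.Lock()

    def put(self, payload, timestamp=None):
        ts = time.time() if timestamp is None else timestamp
        with self._lock:
            self._items.append((ts, payload))
            self._items.sort(key=lambda item: item[0])

    def get(self, at_time):
        """Return the newest item with timestamp <= at_time, or None."""
        with self._lock:
            idx = bisect_right([ts for ts, _ in self._items], at_time)
            return self._items[idx - 1] if idx else None

# A producer thread and a consumer connected by one channel.
chan = Channel()
producer = threading.Thread(target=lambda: [chan.put(f"frame-{i}") for i in range(5)])
producer.start(); producer.join()
print(chan.get(time.time()))      # newest frame available "now"
```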
The Crest system assumes a universe that consists of participation servers and event stores, supporting a set of applications. Each application consists of distributed software entities working together. The participation server helps the application entities to discover each other for interaction purposes. Application entities can generate events, store them at an event store, and correlate events. The entities can communicate with one another directly, or indirectly through the event store.
We have qualitatively and quantitatively evaluated D-Stampede and Crest. The qualitative aspect refers to the ease of programming afforded by our programming abstractions for pervasive applications. The quantitative aspect measures the cost of the API calls, and the performance
of an application pipeline that uses the systems.
264
Dynamic Differential Data Protection for High-Performance and Pervasive Applications. Widener, Patrick M. (Patrick McCall). 20 July 2005
Modern distributed applications are long-lived, are expected to
provide flexible and adaptive data services, and must meet the
functionality and scalability challenges posed by dynamically changing
user communities in heterogeneous execution environments. The
practical implications of these requirements are that reconfiguration
and upgrades are increasingly necessary, but opportunities to perform
such tasks offline are greatly reduced. Developers are responding to
this situation by dynamically extending or adjusting application
functionality and by tuning application performance, a typical method
being the incorporation of client- or context-specific code into
applications' execution loops.
Our work addresses a basic roadblock in deploying such solutions: the protection of key
application components and sensitive data in distributed applications.
Our approach, termed Dynamic Differential Data Protection (D3P),
provides fine-grain methods for providing component-based protection
in distributed applications. Context-sensitive, application-specific
security methods are deployed at runtime to enforce restrictions in
data access and manipulation. D3P is suitable for low- or
zero-downtime environments, since deployments are performed while
applications run. D3P is appropriate for high performance environments
and for highly scalable applications like publish/subscribe, because
it creates native code via dynamic binary code generation. Finally,
due to its integration into middleware, D3P can run across a wide
variety of operating system and machine platforms.
This dissertation introduces D3P, using sample
applications from the high performance and pervasive computing domains
to illustrate the problems addressed by our D3P solution. It also
describes how D3P can be integrated into modern middleware. We
present experimental evaluations which demonstrate the fine-grain
nature of D3P, that is, its ability to capture individual end users'
or components' needs for data protection, and also describe the
performance implications of using D3P in data-intensive applications.
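As a rough illustration of the D3P idea, the sketch below installs per-subscriber protection filters into a live publish/subscribe channel at runtime; the ProtectedChannel class and its methods are hypothetical, and plain Python callables stand in for the native filter code that D3P would produce via dynamic binary code generation.

```python
from typing import Callable, Dict

Filter = Callable[[dict, dict], dict]   # (event, subscriber context) -> redacted event

class ProtectedChannel:
    """Toy publish/subscribe channel where per-subscriber protection
    filters can be installed and replaced while the channel is live."""
    def __init__(self):
        self._filters: Dict[str, Filter] = {}
        self._subscribers: Dict[str, Callable[[dict], None]] = {}
        self._contexts: Dict[str, dict] = {}

    def subscribe(self, sub_id: str, callback, context: dict) -> None:
        self._subscribers[sub_id] = callback
        self._contexts[sub_id] = context
        self._filters[sub_id] = lambda event, ctx: event   # default: no redaction

    def install_filter(self, sub_id: str, flt: Filter) -> None:
        """Hot-swap the protection policy for one subscriber, no downtime."""
        self._filters[sub_id] = flt

    def publish(self, event: dict) -> None:
        for sub_id, callback in self._subscribers.items():
            redacted = self._filters[sub_id](event, self._contexts[sub_id])
            callback(redacted)

chan = ProtectedChannel()
chan.subscribe("nurse", print, {"role": "nurse"})
chan.install_filter("nurse",
                    lambda ev, ctx: {k: v for k, v in ev.items() if k != "ssn"})
chan.publish({"patient": "p-17", "ssn": "000-00-0000", "pulse": 72})
```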
265
Clearwater: An Extensible, Pliable, and Customizable Approach to Code Generation. Swint, Galen Steen. 10 July 2006
Since the advent of the RPC stub generator, software tools that translate a high-level specification into executable programs have been instrumental in facilitating the development of distributed software systems. Developers write programs at a high level of abstraction, with high readability and reduced initial development cost. However, existing approaches to building code generation tools for such systems have difficulty evolving these tools to meet the challenges of new standards, new platforms and languages, or changing product scopes, resulting in generator tools with limited lifespans.
The difficulties in evolving generator tools can be characterized as a combination of three challenges that appear inherently difficult to solve simultaneously: the abstraction mapping challenge (translating a high-level abstraction into a low-level implementation), the interoperable heterogeneity challenge stemming from multiple input and output formats, and the flexible customization challenge of extending base functionality for evolution or new applications. The Clearwater approach to code generation uses XML-based technologies and software tools to resolve these three challenges with three important code generation features: specification extensibility, whereby an existing specification format can accommodate extensions or variations at low cost; generator pliability, which allows the generator to operate on an extensible specification and/or support multiple and new platforms; and flexible customization, which allows an application developer to make controlled changes to the output of a code generator to support application-specific goals.
The presentation will outline the Clearwater approach and apply it to meet the above three challenges in two domain areas. The first area is information flow applications (e.g., multimedia streaming and event processing), a horizontal domain in which the ISG code generator creates QoS-customized communication code using the Infopipe abstraction and specification language. The second area is enterprise application staging (e.g., complex N-tier distributed applications), a vertical domain in which the Mulini code generator creates multiple types of source code supporting automatic staging of distributed heterogeneous applications in a data center environment. The success of applying Clearwater to these domains shows the effectiveness of our approach.
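The following toy generator hints at the specification-driven style described above, walking a small XML specification and emitting a source-code stub; the infopipe spec, templates, and generate function are illustrative inventions, not the actual ISG or Mulini generators.

```python
import xml.etree.ElementTree as ET

SPEC = """
<infopipe name="VideoFeed">
  <port direction="in"  name="raw"      type="Frame"/>
  <port direction="out" name="filtered" type="Frame"/>
</infopipe>
"""

STUB_TEMPLATE = """\
class {name}Stub:
{ports}
    def run(self):
        raise NotImplementedError("application-specific processing goes here")
"""

PORT_TEMPLATE = "    {name}: '{type}'  # {direction} port\n"

def generate(spec_xml: str) -> str:
    """Walk the XML specification and emit a source-code stub.
    Unknown spec extensions are simply ignored rather than rejected,
    which is one way to keep a generator pliable."""
    root = ET.fromstring(spec_xml)
    ports = "".join(PORT_TEMPLATE.format(name=p.get("name"),
                                         type=p.get("type"),
                                         direction=p.get("direction"))
                    for p in root.findall("port"))
    return STUB_TEMPLATE.format(name=root.get("name"), ports=ports)

print(generate(SPEC))
```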
266
Meeting Data Sharing Needs of Heterogeneous Distributed Users. Zhan, Zhiyuan. 16 January 2007
The fast growth of wireless networking and mobile computing devices has enabled us to access information from anywhere at any time. However, varying user needs and system resource constraints are two major heterogeneity factors that pose a challenge to information sharing systems. For instance, when a new information item is produced, different users may have different requirements for when the new value should become visible. The resources that each device can contribute to such information sharing applications also vary. Therefore, how to enable information sharing across computing platforms with varying resources to meet different user demands is an important problem for distributed systems research.
In this thesis, we address the heterogeneity challenge faced by such systems. We assume that shared information is encapsulated in distributed objects, and we use object replication to increase system scalability and robustness, which introduces the consistency problem. Many consistency models have been proposed in recent years, but they are either too strong and do not scale very well, or too weak to meet many users' requirements. We propose a Mixed Consistency (MC) model as a solution. We introduce an access-constraint-based approach to combine strong and weak consistency models. We also propose an MC protocol that combines existing implementations with minimal modifications. It is designed to tolerate crash failures and slow processes/communication links in the system. We also explore how the heterogeneity challenge can be addressed in the transport layer by developing an agile dissemination protocol. We implement our MC protocol on top of a distributed publish/subscribe middleware, Echo. We finally measure the performance of our MC implementation. The results of the experiments are consistent with our expectations. Based on the functionality and performance of mixed consistency protocols, we believe that this model is effective in addressing the heterogeneity of user requirements and available resources in distributed systems.
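The access-constraint idea behind mixed consistency can be sketched roughly as follows: keys declared "strong" are written synchronously at every replica, while "weak" keys are written locally and propagated lazily. The classes and the sync step below are hypothetical simplifications, not the MC protocol itself.

```python
class Replica:
    def __init__(self, name):
        self.name = name
        self.state = {}
        self.pending = []                 # lazily propagated weak updates

    def apply(self, key, value):
        self.state[key] = value

class MixedConsistencyObject:
    """Toy replicated object: 'strong' keys are updated synchronously at
    every replica, 'weak' keys are updated locally and synced later."""
    def __init__(self, replicas, strong_keys):
        self.replicas = replicas
        self.strong_keys = set(strong_keys)

    def write(self, origin: Replica, key, value):
        if key in self.strong_keys:
            for r in self.replicas:       # synchronous, strongly consistent path
                r.apply(key, value)
        else:
            origin.apply(key, value)      # fast local write
            for r in self.replicas:
                if r is not origin:
                    r.pending.append((key, value))

    def sync(self):
        for r in self.replicas:
            while r.pending:
                r.apply(*r.pending.pop(0))

a, b = Replica("a"), Replica("b")
obj = MixedConsistencyObject([a, b], strong_keys={"balance"})
obj.write(a, "balance", 100)   # visible everywhere immediately
obj.write(a, "status", "ok")   # visible at 'a' now, at 'b' only after sync()
```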
267
Analysis of Passive End-to-End Network Performance Measurements. Simpson, Charles Robert, Jr. 02 January 2007
NETI@home, a distributed network measurement infrastructure that collects passive end-to-end network measurements from Internet end-hosts, was developed and discussed. The data collected by this infrastructure, as well as other datasets, were used to study the behavior of the network and of network users, as well as the security issues affecting the Internet. A flow-based comparison of honeynet traffic, representing malicious traffic, and NETI@home traffic, representing typical end-user traffic, was conducted. This comparison showed that a large portion of flows in both datasets were failed and potentially malicious connection attempts. We additionally found that worm activity can linger for more than a year after the initial release date. Malicious traffic was also found to originate from across the allocated IP address space. Other security-related observations include the suspicious use of ICMP packets and attacks on our own NETI@home server. Utilizing observed TTL values, studies were also conducted into the distance of Internet routes and the frequency with which they vary. The frequency and use of network address translation and the private IP address space were also discussed. Various protocol options and flags were analyzed to determine their adoption and use by the Internet community. Network-independent empirical models of end-user network traffic were derived for use in simulation. Two such models were created: the first models traffic for a specific TCP or UDP port, and the second models all TCP or UDP traffic for an end-user. These models were implemented and used in GTNetS. Further anonymization of the dataset and the public release of the anonymized data and their associated analysis tools were also discussed.
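A simplified version of the flow-based classification described above might look like the sketch below, which flags TCP flows that carried a SYN but never completed a handshake; the Flow record layout is a hypothetical stand-in for the actual NETI@home data format.

```python
from collections import namedtuple

# Hypothetical flow record: TCP flags observed over the lifetime of the flow.
Flow = namedtuple("Flow", "src dst dport flags_seen")

def is_failed_attempt(flow: Flow) -> bool:
    """A flow that carried a SYN but never completed the handshake
    (no ACK observed) is counted as a failed, possibly malicious,
    connection attempt."""
    return "SYN" in flow.flags_seen and "ACK" not in flow.flags_seen

flows = [
    Flow("10.0.0.5", "192.0.2.7", 445, {"SYN"}),               # probe, never answered
    Flow("10.0.0.5", "192.0.2.8", 80,  {"SYN", "ACK", "FIN"}),  # normal web flow
]

failed = [f for f in flows if is_failed_attempt(f)]
print(f"{len(failed)} of {len(flows)} flows were failed connection attempts")
```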
268
Towards IQ-Appliances: Quality-awareness in Information Virtualization. Niranjan Mysore, Radhika. 03 May 2007
Our research addresses two important problems that arise in modern large-scale distributed systems:
1. The necessity to virtualize their data flows by applying actions such as filtering, format translation, coalescing or splitting, etc.
2. The desire to separate such actions from application-level logic, to make it easier for future service-oriented code to interoperate in diverse and dynamic environments.
This research considers the runtimes of the 'information appliances' used for these purposes, particularly with respect to their ability to provide diverse levels of Quality of Service (QoS) in the face of dynamic application behaviors and the consequent changes in the resource needs of their data flows. Our specific contribution is the enrichment of these runtimes with methods for QoS-awareness, giving them the ability to deliver desired levels of QoS even under sudden requirement changes and thereby turning them into IQ-appliances. For experimental evaluation, we enrich a prototype implementation of an IQ-appliance, based on the Intel IXP network processor, with the additional functionality needed to guarantee QoS constraints for diverse data streams. Measurements demonstrate the feasibility and utility of the approach. Further, we enhance the Self-Virtualized Network Interface developed in previous work from our group with QoS awareness and demonstrate the importance of such functionality in end-to-end virtualized infrastructures.
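One simple way to picture QoS-awareness in such a runtime is a weighted scheduler that services each stream in proportion to its QoS level, as in the sketch below; the QoSScheduler class is an illustrative simplification and bears no relation to the actual IXP-based implementation.

```python
from collections import deque

class QoSScheduler:
    """Toy weighted scheduler: each stream gets service in proportion to
    its QoS weight, so a sudden burst on one stream cannot starve others."""
    def __init__(self):
        self.queues = {}     # stream id -> deque of packets
        self.weights = {}    # stream id -> integer service quantum

    def register(self, stream_id, weight):
        self.queues[stream_id] = deque()
        self.weights[stream_id] = weight

    def enqueue(self, stream_id, packet):
        self.queues[stream_id].append(packet)

    def service_round(self):
        """One scheduling round: emit up to `weight` packets per stream."""
        out = []
        for sid, q in self.queues.items():
            for _ in range(min(self.weights[sid], len(q))):
                out.append((sid, q.popleft()))
        return out

sched = QoSScheduler()
sched.register("video", weight=3)     # higher QoS level
sched.register("telemetry", weight=1)
for i in range(5):
    sched.enqueue("video", f"v{i}")
    sched.enqueue("telemetry", f"t{i}")
print(sched.service_round())          # 3 video packets, 1 telemetry packet
```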
269
A market-based approach to resource allocation in manufacturing. Brydon, Michael. 11 1900
In this thesis, a framework for market-based resource allocation in manufacturing is
developed and described. The most salient feature of the proposed framework is that
it builds on a foundation of well-established economic theory and uses the theory to
guide both the agent and market design. There are two motivations for introducing
the added complexity of the market metaphor into a decision-making environment
that is traditionally addressed using monolithic, centralized techniques. First, markets
are composed of autonomous, self-interested agents with well defined boundaries,
capabilities, and knowledge. By decomposing a large, complex decision problem along
these lines, the task of formulating the problem and identifying its many conflicting
objectives is simplified. Second, markets provide a means of encapsulating the many
interdependencies between agents into a single mechanism—price. By ignoring the
desires and objectives of all other agents and selfishly maximizing their own expected
utility over a set of prices, the agents achieve a high degree of independence from one
another. Thus, the market provides a means of achieving distributed computation.
To test the basic feasibility of the market-based approach, a prototype system is used
to generate solutions to small instances of a very general class of manufacturing
scheduling problems. The agents in the system bid in competition with other agents
to secure contracts for scarce production resources. In order to accurately model the
complexity and uncertainty of the manufacturing environment, agents are
implemented as decision-theoretic planners. By using dynamic programming, the
agents can determine their optimal course of action given their resource requirements.
Although each agent-level planning problem (like the global level planning problem)
induces an unsolvably large Markov Decision Problem, the structured dynamic
programming algorithm exploits sources of independence within the problem and is
shown to greatly increase the size of problems that can be solved in practice.
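As a toy illustration of the decision-theoretic planning involved, the sketch below runs value iteration on a miniature bidding MDP in which an agent must secure two machine-hours; the states, probabilities, and costs are invented for illustration and are far smaller than the structured problems the thesis addresses.

```python
# States: how many of the 2 required machine-hours the job has secured.
# Actions: bid "high" or "low" for the next hour; a high bid is more
# likely to win the resource but costs more.
STATES = [0, 1, 2]            # 2 = job fully resourced (terminal)
ACTIONS = {"high": (0.9, -5.0), "low": (0.4, -1.0)}   # (win probability, bid cost)
REWARD_DONE = 20.0
GAMMA = 0.95

def value_iteration(eps=1e-6):
    V = {s: 0.0 for s in STATES}
    while True:
        delta, new_V = 0.0, {}
        for s in STATES:
            if s == 2:                     # terminal state: nothing left to do
                new_V[s] = 0.0
                continue
            best = float("-inf")
            for p_win, cost in ACTIONS.values():
                nxt = s + 1
                bonus = REWARD_DONE if nxt == 2 else 0.0
                q = cost + GAMMA * (p_win * (bonus + V[nxt]) + (1 - p_win) * V[s])
                best = max(best, q)
            new_V[s] = best
            delta = max(delta, abs(new_V[s] - V[s]))
        V = new_V
        if delta < eps:
            return V

print(value_iteration())   # expected value of bidding optimally from each state
```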
In the final stage of the framework, an auction is used to determine the ultimate
allocation of resource bundles to parts. Although the resulting combinatorial auctions
are generally intractable, highly optimized algorithms do exist for finding efficient
equilibria. In this thesis, a heuristic auction protocol is introduced and is shown to be
capable of eliminating common modes of market failure in combinatorial auctions.
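The winner-determination step of such an auction can be approximated with a simple greedy heuristic, sketched below; the bids and the value-per-slot ranking rule are illustrative assumptions, not the heuristic protocol developed in the thesis.

```python
# Each bid names a bundle of resource slots and a price; the toy winner
# determination rule greedily accepts the highest-value-per-slot bids
# whose bundles do not overlap with anything already allocated.
bids = [
    ("job-A", {"mill:9-10", "mill:10-11"}, 14.0),
    ("job-B", {"mill:10-11", "lathe:9-10"}, 11.0),
    ("job-C", {"lathe:9-10"}, 6.0),
]

def greedy_winners(bids):
    allocated, winners = set(), []
    ranked = sorted(bids, key=lambda b: b[2] / len(b[1]), reverse=True)
    for name, bundle, price in ranked:
        if bundle.isdisjoint(allocated):
            winners.append((name, price))
            allocated |= bundle
    return winners

# job-A wins both mill slots, job-C wins the lathe slot, and job-B is
# rejected because its bundle conflicts with already-allocated slots.
print(greedy_winners(bids))
```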
270
A reconfigurable distributed process control environment for a network of PC's using Ada and NetBIOS. Randelhoff, Mark Charles. January 1992
No abstract / Thesis (M.Sc.-Electronic Engineering)-University of Natal, 1992.