431. Specifying and Verifying Compliance in Commitment Protocols
Venkatraman, Mahadevan (18 July 1999)
Interaction protocols are specific, often standard, constraints on the behaviors of autonomous agents in a multiagent system. Protocols are essential to the functioning of open systems, such as those that arise in most interesting web applications. A variety of common protocols in negotiation and electronic commerce are best treated as commitment protocols, which may be defined and analyzed in terms of the creation, satisfaction, or manipulation of the commitments among the participating agents. When protocols are employed in open environments, such as the Internet, they must be executed by agents that behave more or less autonomously and whose internal designs are not known. In such settings, therefore, there is a risk that the participating agents may fail to comply with the given protocol. Without a rigorous means to verify compliance, the very idea of protocols for interoperation is subverted. We develop an approach for verifying whether the behavior of an agent complies with a given commitment protocol. Our approach requires the specification of commitment protocols in temporal logic, and involves a novel way of synthesizing and applying ideas from distributed computing and logics of programs.
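The abstract leaves the formal machinery to the thesis itself, but the flavor of commitment tracking can be shown with a small sketch. The life cycle below (create, discharge, cancel) and the end-of-run compliance criterion are illustrative assumptions, not the thesis's actual temporal-logic formalism:

```python
# A minimal sketch of commitment tracking, assuming a simple
# create/discharge/cancel life cycle; the thesis's temporal-logic
# machinery is richer than this illustration.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Commitment:
    debtor: str      # agent who owes the condition
    creditor: str    # agent to whom it is owed
    condition: str   # proposition the debtor must bring about

@dataclass
class ComplianceChecker:
    active: set = field(default_factory=set)
    violations: list = field(default_factory=list)

    def create(self, c: Commitment):
        self.active.add(c)

    def discharge(self, c: Commitment):
        # Satisfying the condition releases the commitment.
        self.active.discard(c)

    def cancel(self, c: Commitment, sanctioned: bool = False):
        # Cancelling without sanction counts as a violation.
        if not sanctioned:
            self.violations.append(c)
        self.active.discard(c)

    def compliant_at_end(self) -> bool:
        # A run complies if no commitment is left pending or violated.
        return not self.active and not self.violations

# Example run: a buyer commits to pay the seller, then pays.
checker = ComplianceChecker()
pay = Commitment("buyer", "seller", "pay")
checker.create(pay)
checker.discharge(pay)
print(checker.compliant_at_end())   # True
```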
432. Packet Loss Recovery for Unicast Interactive Video Transmission over the Internet
Joshi, Srinath R (16 February 2000)
The current Internet is not reliable; packet loss rates are frequently high and vary over time. Transmitting high-quality interactive video over the Internet is challenging because the quality of compressed video is very susceptible to packet losses. Loss of packets belonging to a video frame manifests itself not only in the reduced quality of that frame but also in the propagation of that distortion to successive frames. This error propagation problem is inherent in many motion-based video codecs due to the interdependence of encoded video frames. Recently, a new approach to improving the error resilience of compressed video, called Recovery from Error Spread using Continuous Updates (RESCU), has been proposed. In RESCU, picture coding patterns can be adjusted to enable the use of transport-level recovery schemes to recover lost packets without introducing any playout delay at the receiving end. In this thesis, we propose dynamic loss recovery schemes which, when combined with RESCU, effectively alleviate error propagation in the transmission of interactive video. In these schemes, picture coding patterns are dynamically adapted to current network conditions in order to maximize the effectiveness of hybrid transport-level recovery (employing both forward error correction and retransmission) in reducing error propagation. Through an experimental study based on actual Internet transmission traces representing various network conditions, we study the effectiveness and limitations of the proposed techniques and compare their performance with that of existing video error recovery techniques such as Intra-H.261 and H.263+ (NEWPRED). Our study indicates that the proposed techniques exhibit better error resilience and incur much less bit overhead than existing techniques. This document also describes the implementation of a prototype interactive video transmission system employing a proposed loss recovery scheme.
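RESCU itself is not reproduced in the abstract; as a rough illustration only, the sketch below shows the kind of adaptation described: choosing among retransmission, forward error correction, and a hybrid, based on measured round-trip time and loss rate. The thresholds and redundancy formula are assumptions, not the thesis's algorithm:

```python
# Illustrative sketch (not the thesis's algorithm): pick a per-frame
# recovery strategy from measured network conditions. A lost packet is
# worth retransmitting only if the repair can arrive before the frame
# is needed as a reference; otherwise FEC redundancy must carry the load.

def choose_recovery(rtt_ms, loss_rate, inter_frame_gap_ms,
                    frames_until_reference):
    """Return (strategy, fec_ratio); thresholds are assumptions."""
    repair_deadline_ms = inter_frame_gap_ms * frames_until_reference
    can_retransmit = rtt_ms < repair_deadline_ms
    # Redundancy roughly proportional to the loss rate, capped at 50%.
    fec_ratio = min(0.5, 2.0 * loss_rate)
    if can_retransmit and loss_rate < 0.05:
        return "retransmit", 0.0        # losses rare and time allows
    if can_retransmit:
        return "hybrid", fec_ratio / 2  # light FEC plus retransmission
    return "fec", fec_ratio             # no time for a round trip

# 80 ms RTT, 2% loss, 30 fps video, reference needed 5 frames later.
print(choose_recovery(80.0, 0.02, 33.3, 5))   # ('retransmit', 0.0)
```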
433. An Adaptive Non-parametric Kernel Method for Classification
Kaszycki, Gregory John (28 March 2000)
One statistical method of estimating an underlying distribution from a set of samples is the non-parametric kernel method. This method can be used in classification to estimate the underlying distributions of the various classes. Since it can be shown that there is no perfect shape for a kernel function used to estimate an underlying distribution, several functions have been proposed and none is superior in all cases. This thesis demonstrates that a function can be created that adapts its shape to fit the properties of the underlying data set. The function adapts its shape based on a pair of parameters, and the algorithm uses hill climbing to determine the best pair of parameters for the data set. This method achieves consistently better accuracy than existing non-parametric kernel methods and comparable accuracy to other classification techniques.

In addition, this method can estimate information about the underlying characteristics of the data, based on the optimal parameters to the kernel function. This information includes the complexity of the underlying concept function and the general noise level in the measured attributes of the examples.
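The abstract does not name the kernel family or the two parameters; as an assumed stand-in, the sketch below uses a generalized-Gaussian kernel exp(-|u/h|^p) whose bandwidth h and shape exponent p are tuned by hill climbing on leave-one-out accuracy:

```python
# Sketch of an adaptive kernel classifier with an assumed kernel family
# (not necessarily the thesis's): per-class densities are estimated with
# exp(-|u/h|**p) kernels, and (h, p) are tuned by greedy hill climbing.
import numpy as np

def kernel_density(x, samples, h, p):
    # Unnormalized generalized-Gaussian kernel density estimate at x.
    return np.mean(np.exp(-np.abs((x - samples) / h) ** p))

def classify(x, classes, h, p):
    # Pick the class whose estimated density at x is highest.
    return max(classes, key=lambda c: kernel_density(x, classes[c], h, p))

def loo_accuracy(classes, h, p):
    # Leave-one-out: a point must not vote for itself.
    correct = total = 0
    for c, xs in classes.items():
        for i, x in enumerate(xs):
            held = {k: (np.delete(v, i) if k == c else v)
                    for k, v in classes.items()}
            correct += classify(x, held, h, p) == c
            total += 1
    return correct / total

def hill_climb(classes, h=1.0, p=2.0, step=0.25, iters=50):
    # Greedy coordinate search over the (h, p) kernel parameters.
    best = loo_accuracy(classes, h, p)
    for _ in range(iters):
        improved = False
        for dh, dp in [(step, 0), (-step, 0), (0, step), (0, -step)]:
            nh, npow = max(h + dh, 0.05), max(p + dp, 0.25)
            a = loo_accuracy(classes, nh, npow)
            if a > best:
                h, p, best, improved = nh, npow, a, True
        if not improved:
            break
    return h, p, best

rng = np.random.default_rng(0)
classes = {"A": rng.normal(0.0, 1.0, 50), "B": rng.normal(3.0, 1.0, 50)}
print(hill_climb(classes))   # tuned (h, p) and leave-one-out accuracy
```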
434. Inconsistency Identification and Resolution in Goal-Driven Requirements Analysis
Dempster, John Hampton (19 June 2000)
This thesis addresses the need for inconsistency identification and resolution techniques to assist practitioners employing goal-based requirements analysis. Several approaches to inconsistency identification exist; however, the techniques introduced in this thesis are targeted specifically at analyzing goals in conjunction with the Goal-Based Requirements Analysis Method (GBRAM). Reducing inconsistency in a requirements specification greatly improves its quality, as well as the overall success of a software development effort. Addressing and reducing the number of inconsistencies before development begins minimizes the need to continually revisit what needs to be developed (the requirements), and consequently reduces development time and costs, since inconsistencies are resolved early on (prior to system design and implementation). This thesis introduces a series of techniques and associated heuristics for identifying and resolving inconsistency in requirements specifications. These techniques and heuristics are demonstrated in the analysis of two case studies, the second of which is further described in Deriving Goals from a Use Case Based Requirements Specification for an Electronic Commerce System [ADS00]. The main contributions of this thesis are techniques that augment the GBRAM with specific guidance for the identification of inconsistency, a structured goal syntax to aid analysts in forming goals for future analysis, and heuristics to aid in inconsistency identification and resolution.
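GBRAM's actual heuristics are not reproduced in the abstract; purely as a toy illustration of the idea, the sketch below writes goals in a structured verb-object form and flags pairs whose verbs are declared opposites over the same object, leaving resolution to the analyst. The antonym table and goal syntax are assumptions:

```python
# Toy sketch (not GBRAM itself): goals in a structured <verb, object>
# syntax, with an assumed antonym table used to flag candidate
# inconsistencies for an analyst to review.
OPPOSITES = {("allow", "deny"), ("store", "delete")}

def conflicts(goals):
    found = []
    for i, (v1, obj1) in enumerate(goals):
        for v2, obj2 in goals[i + 1:]:
            # Two goals over the same object with opposing verbs are
            # candidate inconsistencies.
            if obj1 == obj2 and ((v1, v2) in OPPOSITES
                                 or (v2, v1) in OPPOSITES):
                found.append(((v1, obj1), (v2, obj2)))
    return found

goals = [("allow", "guest_checkout"), ("deny", "guest_checkout"),
         ("store", "credit_card_number"), ("delete", "credit_card_number")]
for g1, g2 in conflicts(goals):
    print("possible inconsistency:", g1, "vs", g2)
```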
435. Deadlock Analysis of Message-Passing Programs with Identical Processes
Zhou, Jun (10 January 2001)
Deadlocks are a common type of fault in message-passing programs. One approach to detecting deadlocks in a message-passing program is to perform reachability analysis, which involves deriving the possible global states of the program. The resulting state graph is referred to as a reachability graph (RG). The size of the RG of a message-passing program, in the worst case, is an exponential function of the number of processes in the program. This problem, referred to as the state explosion problem, makes reachability analysis impractical for message-passing programs with many processes.

Assume that P is a message-passing program that contains one process type T with a dynamic number of instances. Let P_m denote the version of P that has m instances of T. To detect deadlocks in P, we apply reachability analysis to P_1, P_2, ..., and P_n, where n is an integer chosen randomly or according to some criterion. If the value of n is large, reachability analysis of P_n is impractical. If the value of n is small, we have little confidence in whether P_k is deadlock-free for any k > n. A deadlock cutoff number c for P means that, under certain conditions, if P_c has no deadlocks, then P_k has no deadlocks for any k > c. For message-passing programs that contain two or more process types with dynamic numbers of instances, deadlock cutoff vectors are defined in a similar way.

In this dissertation we present four approaches to finding deadlock cutoff numbers for synchronous message-passing programs. These approaches are based on the use of observational equivalence, projection equivalence, client/server reachability graphs, and abstract client/server reachability graphs, respectively. The first three approaches assume that each process in a synchronous message-passing program is modeled as a labeled transition system (LTS). The fourth approach assumes that each process is modeled as a communicating finite state machine (CFSM) or extended CFSM.

Observational equivalence is an important concept in verifying properties of LTS systems. Let L be an instance of process type T in P. The environment of L in P_m, m > 0, is defined to be the composition of the processes in P_m excluding L. In other words, P_m is the composition of L and its environment in P_m. We show that if there exists an integer n such that the environments of L in P_n and P_{n+1} are observationally equivalent and P_n has no global deadlocks, then P_k, k > n, has no global deadlocks and n is a deadlock cutoff number for P. We also show how to apply this approach to ring-structured programs. One major problem with this approach is that it fails to find deadlock cutoff numbers for many message-passing programs.

To improve the above approach, we define a new equivalence relation called projection equivalence, which is weaker than observational equivalence. The projection of L in P_m, m > 0, is defined to be the behavior of L in P_m. The environments of L in P_i and P_j, i != j, are said to be projection equivalent if L has the same projection in P_i and P_j. We show how to apply projection equivalence to find deadlock cutoff numbers for client/server programs and ring-structured programs. A client/server program contains a server and a number of clients. Clients call the server to request service; they do not communicate with each other. The server cannot call individual clients. For client/server programs, we define a new type of reduced reachability graph, called client/server reachability graphs or CSRGs. The size of the CSRG of a client/server program is a polynomial function of the number of clients. Based on CSRGs, we show how to determine the existence of a deadlock cutoff number for a client/server program and how to find this number if it exists. We also show how to find deadlock cutoff vectors for client/server programs with two or more types of clients.

Finally, we consider client/server programs with two-way communication, which allows the server to call individual clients. Each client in such a program is represented as a communicating finite state machine (CFSM), while the server is represented as an extended CFSM. For such programs, we define a new type of reduced reachability graph, called abstract CSRGs or ACSRGs. Based on ACSRGs, we show how to find deadlock cutoff numbers for client/server programs with two-way communication. Our empirical studies show that ACSRGs are much smaller than their corresponding RGs. For example, for a solution to the gas station problem with one pump and six customers, the ACSRG has 74 states while the RG has 25,394 states.
436. Operating System Kernel for All Real Time Systems
Pinnix, Justin Everett (26 March 2001)
PINNIX, JUSTIN EVERETT. Operating System Kernel for All Real Time Systems. (Under the direction of Robert J. Fornaro and Vicki E. Jones.)

This document describes the requirements, design, and implementation of OSKAR, a hard real time operating system for Intel Pentium compatible personal computers. OSKAR provides rate monotonic scheduling, fixed and dynamic priority scheduling, semaphores, message passing, priority ceiling protocols, TCP/IP networking, and global time synchronization using the Global Positioning System (GPS). It is intended to provide researchers a test bed for real time projects that is inexpensive, simple to understand, and easy to extend.

The design of the system is described with special emphasis on design tradeoffs made to improve real time requirements compliance. The implementation is covered in detail at the source code level. Experiments to qualify functionality and obtain performance profiles are included and the results explained.
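The kernel itself is not shown here; as a language-neutral illustration of the rate monotonic scheduling OSKAR provides, the sketch below applies two standard admission tests, the Liu and Layland utilization bound and exact response-time analysis. The task set is invented:

```python
# Two standard admission tests for rate monotonic scheduling (the policy
# OSKAR provides); the task set below is an invented example.

def utilization_bound_ok(tasks):
    """Liu & Layland sufficient test: U <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    return u <= n * (2 ** (1 / n) - 1)

def response_time_ok(tasks):
    """Exact response-time analysis; deadlines equal periods."""
    tasks = sorted(tasks, key=lambda ct: ct[1])   # shorter period = higher priority
    for i, (c, t) in enumerate(tasks):
        r, prev = float(c), 0.0
        while r != prev:
            prev = r
            # Interference from every higher-priority task: ceil(r/Tj)*Cj.
            r = c + sum(-(-prev // tj) * cj for cj, tj in tasks[:i])
            if r > t:
                return False            # response time exceeds the period
    return True

tasks = [(1, 4), (2, 6), (3, 13)]       # (worst-case execution time, period)
print(utilization_bound_ok(tasks))      # False: the sufficient test is pessimistic
print(response_time_ok(tasks))          # True: the exact test admits the set
```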
437. Developing Predictor Surfaces for Vowels and Voiced Fricatives for Lip Synchronization
Krothapalli, Chandrika (11 June 2001)
KROTHAPALLI, CHANDRIKA. Developing Predictor Surfaces for Vowels and Voiced Fricatives for Lip Synchronization. (Under the direction of David F. McAllister, Robert D. Rodman and Donald Bitzer.)

This paper describes a method to construct predictor surfaces for mouth parameters using Delaunay triangulation. The first and second moments of the input speech signal are mapped to the shape of the mouth. Predictor surfaces are built for four external shape parameters of the mouth. The surfaces include shapes for vowels and some voiced fricatives. Also described is a method for developing real time animation synchronized with sound for vowel and voiced fricative utterances, with or without silence, and for sequences of utterances. The content or kind of speech is not known in advance. Spectral analysis is used to classify the type of sound. Training sounds are used to generate the predictor surfaces. Voiced samples containing a single sound, without any mouth movement during the utterance, are used to train the system. All extreme mouth positions and frequently occurring mouth shapes are taken into consideration during training. Relative values of the mouth parameters are set for these sounds; interpolatory surfaces are built using this known data and are used to predict the parameter values for future recordings. Hermite cubic polynomials are used to generate the shapes necessary to depict a human mouth and jaw. The voices of three speakers were recorded, and a comparison of the surfaces of these speakers is made. A speaker-dependent lip synchronization system that develops animation for vowel and voiced fricative utterances is developed.
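The thesis's surfaces and training data are its own; the sketch below only shows the general technique: scattered (first moment, second moment) training points carrying a known mouth parameter, Delaunay-triangulated and interpolated piecewise-linearly. SciPy's LinearNDInterpolator performs exactly this triangulation-based interpolation; the training values here are invented:

```python
# Sketch of one predictor surface: scattered (moment1, moment2) training
# points with a known value of one mouth parameter are triangulated
# (Delaunay) and interpolated piecewise-linearly. Values are invented;
# the thesis builds one such surface per mouth shape parameter.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# (first moment, second moment) of training utterances.
train_xy = np.array([[0.1, 0.2], [0.8, 0.3], [0.4, 0.9],
                     [0.9, 0.8], [0.2, 0.7], [0.6, 0.5]])
mouth_opening = np.array([0.0, 0.9, 0.3, 1.0, 0.1, 0.6])

# LinearNDInterpolator triangulates the inputs internally (via Delaunay)
# and interpolates linearly within each triangle.
surface = LinearNDInterpolator(train_xy, mouth_opening)

# Predict the parameter from the moments of an incoming speech frame.
print(surface(0.5, 0.5))   # inside the convex hull: interpolated value
print(surface(2.0, 2.0))   # outside the hull: nan (no extrapolation)
```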
438. Appearance Preserving Data Simplification
Walter, Jason David (04 April 2001)
Many visualization environments constantly face the issue of dealing with large, complex datasets. Often these datasets are so complex that rendering a visualization would seem impractical. Likewise, enormous amounts of data may overwhelm the human visual system, thereby rendering the data incomprehensible. Thus, the need arises to deal with these datasets in some manner such that the resulting dataset represents the original whole while reducing the cost on the human and computer visual systems.

A closely related problem can be found in geometric models, typically represented as a piecewise linear collection of connected polygons (a mesh). Meshes can be obtained from range scanners or created with a computer aided design package. However, these meshes are often very dense and have high spatial frequency. An active area of computer graphics research is directed at the simplification of these dense meshes. Initially, mesh simplification research aimed at preserving only the topology, but the most recent research, appearance preserving mesh simplification, aims at simplification while preserving surface properties of the mesh, such as color or texture.

Our work addresses the use of appearance preserving mesh simplification in a data simplification environment, as well as the issues of doing so. As a result, we present and demonstrate a general method to simplify large multidimensional datasets using any appearance preserving mesh simplification algorithm. We add the use of principal components analysis to reduce the dimensionality of the data prior to simplification, which allows faster simplification of high dimensional data; despite the reduction in dimensionality, we have shown full preservation of key features in the dataset. In addition, we introduce spatial locks to preserve important data elements during the simplification process.
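The simplifier itself can be any appearance preserving algorithm; the sketch below illustrates only the PCA preprocessing step the thesis adds, using a plain SVD. The attribute matrix is invented:

```python
# Sketch of the PCA preprocessing step: reduce high-dimensional
# per-vertex attributes before simplification. The attribute matrix is
# invented; the reduced data would then feed any appearance-preserving
# mesh simplifier.
import numpy as np

def pca_reduce(attributes, k):
    """Project n x d attribute rows onto their top-k principal axes."""
    centered = attributes - attributes.mean(axis=0)
    # SVD of the centered data: the rows of vt are principal directions.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    return centered @ vt[:k].T, explained

rng = np.random.default_rng(1)
# 1000 vertices with 16 appearance attributes that really lie near a
# 3-dimensional subspace, plus a little noise.
latent = rng.normal(size=(1000, 3))
attrs = latent @ rng.normal(size=(3, 16)) + 0.01 * rng.normal(size=(1000, 16))

reduced, kept = pca_reduce(attrs, k=3)
print(reduced.shape, f"{kept:.4f} of variance retained")
```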
439. EPRAM: A Risk Analysis and Mitigation-Based Evolutionary Prototyping Model for Quality Requirements Development
Carter, Ryan Alden (05 April 2001)
Evolutionary prototyping focuses on the iteration of software planning, implementation, and evaluation while gathering a correct and consistent set of requirements. The process lends particular strength to building quality software due, in part, to the ongoing clarification of existing requirements and the discovery of previously missing or unknown requirements. Traditionally, however, the iterative reexamination of a system's requirements has not been the panacea that practitioners sought, owing to the predisposition for requirements creep and the difficulty of managing it. This thesis describes the use of evolutionary prototyping in conjunction with an aggressive risk-mitigation strategy. Together, these techniques support successful requirements discovery and clarification and guard against the negative effects of requirements creep, an aspect that general evolutionary prototyping methodologies have not mastered. These techniques are embodied in a comprehensive software development model, christened the EPRAM (Evolutionary Prototyping with Risk Analysis and Mitigation) model. To ensure that quality is inherent in this process model, the Software Engineering Institute's (SEI) Capability Maturity Model (CMM) was tailored to the development environments of small teams, projects, and organizations and was used as a mature base upon which to build the model. The model was intentionally designed to comply with the Level 2 Key Process Areas (KPAs) of the CMM. The EPRAM model has been validated on several software development efforts that employed it to support the rapid development of electronic commerce applications.
440. Multicasting in a Partially Tunable Broadcast WDM Network
Thaker, Dhaval V (03 May 2001)
We consider the problem of scheduling multicast packet transmissions in a broadcast single-hop WDM network. Tunability is provided only at one end, namely at the transmitter. Our objective is to schedule multicast transmissions in a tunable-transmitter, fixed-receiver broadcast WDM network. In a single-hop WDM network with fixed receivers, unicast and multicast traffic can be scheduled by a single scheduling algorithm. The problem of scheduling multicast traffic then reduces to a wavelength assignment problem: assigning wavelengths to the fixed receivers before scheduling multicast packet transmissions. A receiver-to-channel assignment has to meet two conflicting requirements. The first is to minimize the number of retransmissions; retransmissions are caused when members of a multicast group are assigned to different wavelengths and the group's traffic must be transmitted on each of these wavelengths. The second is to maximize channel utilization by balancing the incoming traffic optimally over all the available wavelengths. We address a fairly general version of the problem, allowing arbitrary traffic demands and arbitrary multicast group membership distributions. First, we define the wavelength assignment problem formally and prove it to be NP-hard. Since the problem is intractable, we then develop several heuristics. The heuristics are evaluated on how well they trade off low running time against the closeness of the obtained result to the optimum solution. Finally, we vary system parameters such as the number of nodes, channels, and multicast groups, and analyze their influence on the performance of the developed heuristics.
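The thesis's heuristics are not spelled out in the abstract; as an assumed illustration of the tradeoff they must strike, the greedy sketch below assigns each fixed receiver to a wavelength by rewarding channels that already carry members of the receiver's multicast groups (fewer retransmissions) and penalizing loaded channels (better balance):

```python
# Greedy sketch (an assumed illustration, not the thesis's heuristics):
# assign each fixed receiver to a wavelength, trading fewer multicast
# retransmissions (co-locate group members) against channel balance.

def assign_wavelengths(receivers, groups, num_channels, alpha=1.0):
    """groups: {group: set of receivers}. Returns {receiver: channel}."""
    assignment, load = {}, [0] * num_channels
    for r in receivers:
        def score(ch):
            # Reward channels already holding members of r's groups;
            # penalize heavily loaded channels (alpha sets the tradeoff).
            affinity = sum(1 for g, members in groups.items()
                           if r in members
                           for m in members
                           if assignment.get(m) == ch)
            return affinity - alpha * load[ch]
        best = max(range(num_channels), key=score)
        assignment[r] = best
        load[best] += 1
    return assignment

groups = {"g1": {"r1", "r2", "r3"}, "g2": {"r3", "r4"}, "g3": {"r5", "r6"}}
print(assign_wavelengths(["r1", "r2", "r3", "r4", "r5", "r6"],
                         groups, num_channels=2))
```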