About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Improving correctness and failure handling in workflow management systems

Kamath, Mohan Umesh 01 January 1998 (has links)
A workflow management system (WFMS) facilitates the coordinated execution of applications (steps) that comprise a business process (workflow) across distributed nodes. State-of-the-art WFMSs do not adequately support the various correctness and failure-handling requirements of workflows. Different correctness requirements arise due to the dependencies between steps that access shared resources. Failure of steps in a workflow and system failures can cause data inconsistencies if handled improperly. Scalability is also a problem in current WFMSs since they use a centralized workflow control architecture that can easily become a performance bottleneck. In this thesis, we have developed the concepts and infrastructure necessary to address the above issues. To handle dependencies across workflows, we have developed techniques for expressing and handling mutual-exclusion, relative-ordering and rollback-dependency requirements across workflow instances. To handle failure of steps within a workflow, we have proposed a new opportunistic scheme that avoids unnecessary compensations and re-executions when workflows are rolled back partially and re-executed. To handle system failures, we have designed suitable logging schemes and protocols. To achieve scalability while satisfying the different correctness and failure-handling requirements, we have enhanced our techniques to work on parallel and distributed workflow control architectures. To realize the above concepts, we have designed a workflow specification language, a two-stage compiler and a rule-based run-time system. A workflow designer specifies the workflow schema and the resources accessed by the steps from a global database of resources. The two-stage workflow compiler determines data dependencies and translates the high-level schema into a uniform set of rules. The run-time system interprets these rules and executes the workflows in accordance with their requirements under central, parallel and distributed workflow control. To demonstrate the usefulness and practicality of our approach, we have implemented a prototype system that offers better correctness, performance and functionality than state-of-the-art WFMSs.
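To make the opportunistic rollback idea concrete, here is a minimal Python sketch of compensation-based partial rollback. All names (`Step`, `rollback_to`, the `reads`/`writes` sets) are hypothetical illustrations, not the thesis's actual design: the real system derives data dependencies with its two-stage rule compiler rather than declaring them by hand.

```python
class Step:
    """A workflow step with a forward action, a compensating action,
    and the (hypothetical) sets of data items it reads and writes."""
    def __init__(self, name, action, compensate, reads=(), writes=()):
        self.name, self.action, self.compensate = name, action, compensate
        self.reads, self.writes = set(reads), set(writes)

def rollback_to(executed, restart_index, dirty):
    """Compensate completed steps in reverse order, but only those that
    touched data invalidated by the failure (`dirty`). Steps with no
    conflict keep their results, avoiding unnecessary compensations and
    re-executions when the workflow is partially rolled back."""
    survivors = []
    for step in reversed(executed[restart_index:]):
        if step.reads & dirty or step.writes & dirty:
            step.compensate()
            dirty |= step.writes   # its outputs are now invalid too
        else:
            survivors.append(step) # opportunistically keep its work
    return survivors
```

On re-execution, only the compensated steps need to run again; the survivors' results remain valid, which is the saving the opportunistic scheme targets.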
2

Network and end-host support for heterogenous services in the Internet

Sahu, Sambit 01 January 2001 (has links)
The rapid growth of the Internet has been accompanied by the emergence of new applications, ranging from simple data services to complex applications such as IP telephony, video on demand, and interactive multimedia communication. These new applications require an end-to-end IP architecture that can support multiple levels of service while preserving the scalability and simplicity of the current so-called “best-effort” Internet service model. This dissertation examines and proposes network and end-host support for these applications, accounting for both the heterogeneity in service requirements and the scalability concern. In the first part of this dissertation, we focus on the end-host and propose solutions for handling the variable resource requirements of multimedia streaming applications. We show that commonly-used round-based scheduling techniques for transferring data between the disk and the network interface are not well suited for the retrieval of multimedia streams. We propose lazy-EDF scheduling and show that it minimizes the server buffer and disk bandwidth requirements for a static set of requests. Simulation studies with MPEG traces demonstrate the significant performance gains possible under lazy-EDF. Using the optimality properties of lazy-EDF, we next propose greedy-fill with lazy discard (GFD) scheduling to address dynamic request arrivals and admission control in a disk subsystem. In addition, we propose an efficient workload description for stored VBR video that we show increases resource utilization by as much as 200–250% over the best-known result. In the second part of this dissertation, we focus on scalable network support for providing different classes of services to multimedia applications. In particular, we focus on the achievable services and limitations of the recently proposed differentiated services (diffserv) architecture, in which simple mechanisms are used within the network core and more complicated functionality is possible only at the network edge. First, we determine the mechanisms at the network core that are best suited to providing service differentiation. To do so, we compare the ability of proposed diffserv mechanisms to provide delay and loss differentiation across service classes. Next, we examine the impact of diffserv mechanisms on applications that use the TCP congestion control protocol. We present a simple yet accurate analytical model for TCP when “profile-based” marking is used at the network edge to provide service differentiation. Using this model, we examine whether it is possible to provide rate guarantees to TCP flows solely by setting the profile parameters at the network edge. We find that it is not always easy to regulate or guarantee the rate achieved by a TCP flow using this approach. We derive conditions that determine which rates are achievable and provide insights for choosing appropriate profile parameters for these achievable services. These observations are validated in a testbed implementation using PCs running a modified Linux kernel.
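The "lazy" half of lazy-EDF can be illustrated with a toy scheduler. The sketch below is an assumption-laden simplification of the idea described above: blocks are served in earliest-deadline-first order, but a disk round does no work when the most urgent block could still be fetched later without missing its deadline, which is what bounds server buffering. The inputs (`requests` as `(deadline, stream, block)` tuples, one deadline unit per round) are hypothetical.

```python
import heapq

def lazy_edf(requests, slots_per_round):
    """Toy lazy-EDF: fetch up to `slots_per_round` blocks per round,
    earliest deadline first, deferring any block whose deadline is
    still more than one round away."""
    heap = list(requests)          # (deadline, stream, block) tuples
    heapq.heapify(heap)
    schedule, t = [], 0
    while heap:
        fetched = 0
        while heap and fetched < slots_per_round:
            deadline, stream, block = heap[0]
            if deadline > t + 1:   # lazy: no deadline is pressing yet
                break
            heapq.heappop(heap)
            schedule.append((t, stream, block))
            fetched += 1
        t += 1
    return schedule

# e.g. two streams, two disk slots per round
print(lazy_edf([(1, "a", 0), (1, "b", 0), (3, "a", 1)], slots_per_round=2))
```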
3

Designing Ummeli: A Case for Mediated Design, a participatory approach to designing interactive systems for semi-literate users

Gitau, Shuko 10 1900 (has links)
This dissertation documents a journey into the design of Ummeli with a community of semi-literate job seekers in Khayelitsha, Cape Town, whose primary access to the internet was through their mobile phones. Working closely with this community over many months, we developed Ummeli, a suite of tools that allows users to build their CVs; browse and apply for employment and training opportunities; recommend and post jobs; get employment tips; and connect to other job seekers. To design Ummeli, Ethnographic Action Research (EAR) was embraced not as a methodology but as a research approach, a foundation from which to incorporate participatory approaches to designing information and communication technologies for development (ICT4D). User Centered Design (UCD) was incorporated as a design approach. Ummeli was built from a combination of insights drawn from lived-in experience and UCD-informed methods of participatory design (PD). Here we employed the Human Access Point (HAP), a form of PD in which a member of the community acts as a proxy for the design process. Learn to Earn, an NGO based in Khayelitsha, became the HAP and took on a critical role: they highlighted, translated, evaluated and represented what was most crucial for the community, and their input allowed Ummeli to match the community's needs. In the process, we came across concepts such as Umqweno, which represents yearnings and desires, replacing our own notion of system requirements; Siyazenzela, representing a communal, participatory approach to doing life; and Ubuntu, which captures the spirit behind Africa's communal identity. All of these were adopted into the original EAR framework. In this document we set out to demonstrate what it means to be a “reflective practitioner” as we adopted, appropriated and reconfigured aspects of participatory UCD methods to fit culturally relevant contexts. The process allowed for constant reflection, leading to “aha” moments. In the end, we had created Ummeli, with over 80,000 users, and developed Mediated Design, a culturally grounded participatory approach to designing interactive systems with and for semi-literate people.
4

Automatic and efficient data virtualization system for scientific datasets

Weng, Li, January 2006 (has links)
Thesis (Ph. D.)--Ohio State University, 2006. / Title from first page of PDF file. Includes bibliographical references (p. 128-134).
5

Translation of an Object Role Model schema into the formal language Z

Ravalli, Gilbert. January 2002 (has links)
Thesis (MSc.)--Swinburne University of Technology, Faculty of Information & Communication Technologies, 2002. / Dissertation for the degree of Master of Science (Research), Faculty of Information and Communication Technologies, Swinburne University of Technology, 2002. Typescript. Bibliography p. 142-146.
6

System architectures based on functionality offloading

Bohra, Aniruddha. January 2008 (has links)
Thesis (Ph. D.)--Rutgers University, 2008. / "Graduate Program in Computer Science." Includes bibliographical references (p. 138-153).
7

Multi-aspect component models enabling the reuse of engineering analysis models in SysML

Jobe, Jonathan Michael January 2008 (has links)
Thesis (M. S.)--Mechanical Engineering, Georgia Institute of Technology, 2009. / Committee Chair: Paredis, Chris; Committee Member: McGinnis, Leon; Committee Member: Schaefer, Dirk
8

Design and control of parallel rule-firing production systems

Neiman, Daniel E 01 January 1992 (has links)
This dissertation studies the issues raised by the parallel execution of rules in a pattern-matching production system. There are two main areas of concern: maintaining correctness during the course of simultaneous rule executions, and controlling the execution of productions without introducing serial bottlenecks. It is demonstrated that guaranteeing program correctness using a serializability criterion introduces an unacceptable overhead and reduces the potential parallel speedup to a single order of magnitude. Instead of attempting to automatically extract coexecutable sets of parallel rules, the approach taken in this research is to define a minimal set of language constructs which allow correct parallel programs to be designed. The view that the rule-based computation has an algorithmic structure allows us to attach a semantic interpretation to rule firing. By examining the role of each rule in the overall computation, we can understand and begin to find a solution to the problems of controlling rule firing and ensuring correctness while maximizing effective use of parallel processing resources. When rules are executed in parallel, the conventional control mechanisms applied to rule-based systems act to limit parallel activity. Two novel rule-firing policies are described: an asynchronous rule-firing policy that causes rules to be executed as soon as they become enabled, and a task-based scheduler that allows multiple independent tasks to run asynchronously with respect to each other while allowing rules to execute either synchronously or asynchronously within the context of each task. Because the asynchronous execution of rules reduces the opportunities for performing conflict resolution, methods for performing heuristic discrimination at various points in the rule execution cycle are discussed. The experimental results of this research are presented in the context of UMass Parallel OPS5, a rule-based language that incorporates parallelism at the rule, action, and match levels, and provides language constructs for supporting the design of parallel rule-based programs including a locking scheme for working memory elements and operators for specifying local synchronization of rules and actions. Results are presented for a number of programs illustrating common AI paradigms including search, inference, and constraint satisfaction problems.
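The abstract's locking scheme for working memory elements and asynchronous rule firing can be sketched briefly. The Python below is an illustrative assumption, not UMass Parallel OPS5 itself (which is OPS5-based): `WorkingMemory`, `Rule`, and `fire_rule` are hypothetical names, and locks are acquired in a fixed global order so that simultaneous firings cannot deadlock.

```python
import threading
from collections import namedtuple

# A rule names the working-memory elements it touches and carries its
# right-hand-side action.
Rule = namedtuple("Rule", ["touched", "action"])

class WorkingMemory:
    def __init__(self):
        self.elements = {}              # wme id -> value
        self._locks = {}                # wme id -> lock
        self._guard = threading.Lock()  # protects the lock table itself

    def lock_for(self, wme_id):
        with self._guard:
            return self._locks.setdefault(wme_id, threading.Lock())

def fire_rule(wm, rule):
    """Fire a rule as soon as it is enabled, locking the elements it
    touches so concurrent firings cannot interleave their updates.
    Sorted acquisition order prevents deadlock between rules."""
    locks = [wm.lock_for(w) for w in sorted(rule.touched)]
    for lock in locks:
        lock.acquire()
    try:
        rule.action(wm)                 # execute the right-hand side
    finally:
        for lock in reversed(locks):
            lock.release()
```

In this style each enabled rule can run on its own thread without a central conflict-resolution step, which is the serial bottleneck the asynchronous policy is designed to avoid.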
9

Learning situation-specific control in multi-agent systems

Nagendraprasad, Maram V 01 January 1997 (has links)
The work presented in this thesis deals with techniques to improve the problem-solving control skills of cooperative agents through machine learning. In a multi-agent system, the local problem-solving control of an agent can interact in complex and intricate ways with the problem-solving control of other agents. In such systems, an agent cannot make effective control decisions based purely on its local problem-solving state. Effective cooperation requires that the global problem-solving state influence the local control decisions made by an agent. We call such an influence cooperative control. An agent with a purely local view of the problem-solving situation cannot learn effective cooperative control decisions that may have global implications, due to the uncertainty about the overall state of the system. This gives rise to the need for learning more globally situated control knowledge: an agent needs to associate appropriate views of the global situation with the knowledge learned about effective control decisions. We call this form of knowledge situation-specific control. This thesis investigates learning such situation-specific cooperative control knowledge. Despite agreement among researchers in multi-agent systems on the importance of agents being able to learn and improve their performance, this work represents one of the few attempts to demonstrate the utility and viability of machine learning techniques for learning control in complex, heterogeneous multi-agent systems. More specifically, this thesis empirically demonstrates the effectiveness of learning situation-specific control for three aspects of cooperative control: (1) Organizational roles. Organizational roles are policies for assigning responsibilities for the various tasks to be performed by each of the agents in the context of global problem solving. This thesis studies learning organizational roles in a multi-agent parametric design system called L-TEAM. (2) Negotiated search. One way agents can overcome the problem of partial local perspectives is by engaging in a failure-driven exchange of non-local requirements to develop the closest possible approximation to the actual composite search space. This thesis uses a case-based learning method to endow agents with the capability to approximate non-local search requirements in a given situation, thus avoiding the need for communication. (3) Coordination strategies. Coordination mechanisms provide an agent with the ability to behave more coherently in a particular problem-solving situation. The work presented in this thesis incorporates learning capabilities into agents so they can choose a suitable subset of coordination mechanisms, based on the present problem-solving situation, to derive approximate coordination strategies.
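At its simplest, situation-specific control learned case-by-case looks like a nearest-neighbor lookup from situation features to a control decision. The sketch below is a minimal illustration of that idea only; the feature vectors, distance metric, and decision labels are invented for the example and are not the thesis's actual representation.

```python
def nearest_case(case_base, situation):
    """Reuse the control decision of the most similar stored situation
    (squared Euclidean distance over hypothetical feature vectors)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(case_base, key=lambda c: distance(c["situation"], situation))
    return best["decision"]

# Hypothetical cases: (feature vector) -> control decision taken, outcome.
case_base = [
    {"situation": (0.9, 0.1), "decision": "negotiate",      "outcome": 0.8},
    {"situation": (0.2, 0.7), "decision": "search_locally", "outcome": 0.6},
]
print(nearest_case(case_base, (0.8, 0.2)))   # -> "negotiate"
```

A case base like this lets an agent approximate a non-local requirement from past situations instead of asking other agents, which is the communication saving the negotiated-search component exploits.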
10

Knowledge-based interval modeling method for efficient global optimization and process tuning

Yang, Dong Zhe 01 January 2000 (has links)
A Knowledge-Based Interval Modeling (KBIM) Method is introduced for global optimization and process tuning. A novel feature of the KBIM Method is its ability to take advantage of a priori knowledge of the system by incorporating the linear/nonlinear sensitivity information between the objective function/constraints and the system variables in the form of an interval model. The interval model is then used to estimate the feasible/plausible region within the input space as the basis of the search for the global optimum. Notable features of the KBIM Method are that (1) initial trials are not required to construct the interval model, (2) the interval model produces bounds for the objective function and constraints so that the search is not trapped in local optima, and (3) learning is incorporated to update the interval model based on new input-output data that become available during the search. The updated interval model is shown to lead to more accurate estimates of the feasible/plausible region. The utility of the KBIM Method is demonstrated in three different fields: global optimization, injection molding process tuning, and helicopter track and balance. In global optimization, the KBIM Method is used to search for the global optimum of both unconstrained and constrained benchmark problems. In injection molding, the method is used as an on-line tuning method to define the feasible region (process window) of the process and to search for a set of feasible machine setpoints in order to improve production yield. In helicopter track and balance, the KBIM Method selects a set of blade modifications so as to reduce the vibration of the aircraft to within the specification limits. The application results indicate that the method provides a viable means of incorporating a priori knowledge for global optimization and process tuning.
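The bounding idea behind feature (2) can be illustrated with hand-rolled interval arithmetic: if an interval model gives a lower bound on the objective over a whole box of the input space, boxes whose lower bound cannot beat the incumbent can be discarded without sampling them. The toy objective f(x) = x² − 4x and the `prune` helper below are assumptions for illustration; the actual method builds its intervals from a priori sensitivity knowledge and refines them from new input-output data.

```python
def interval_mul(a, b):
    """Product of two intervals (lo, hi)."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def objective_bounds(x):
    """Bounds of the toy objective f(x) = x^2 - 4x over the box x."""
    sq = interval_mul(x, x)
    return (sq[0] - 4 * x[1], sq[1] - 4 * x[0])

def prune(boxes, best_so_far):
    """Keep only boxes whose lower bound could still beat the incumbent,
    so the global search is not misled by a deceptive local optimum."""
    return [x for x in boxes if objective_bounds(x)[0] <= best_so_far]

boxes = [(-2.0, 0.0), (0.0, 2.0), (2.0, 4.0)]
print(prune(boxes, best_so_far=-3.0))   # -> [(0.0, 2.0), (2.0, 4.0)]
```

The first box is safely discarded because f is provably non-negative there, while the surviving boxes still admit values at or below the incumbent of −3.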
