About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

A design methodology for self-timed VLSI systems

Al-Helwani, A. M. January 1985 (has links)
No description available.
22

Robust adaptive control

Fu, Ye January 1989 (has links)
This thesis describes discrete robust adaptive control methods based on slow sampling and slow adaptation. For the stability analysis, we assume that the plant model order is not exactly known and that the estimation model order is lower than the plant model order. A stability condition is derived, with an upper limit on the adaptation gain that is related to a strictly positive real operator, followed by a discussion of the relation between sampling and the stability condition. For the robust adaptive control design, we study slow adaptation and predictive control. In slow adaptation, the main idea is to use only good estimates together with a compensation filter; some frequency-domain information about the plant is necessary for this method. For predictive control, we discuss the relationship between the extended control horizon and the critical sampling rate, and show for a simple case that a larger extended control horizon yields more robust adaptive control. The purpose of this thesis is to provide design guidelines for robust discrete adaptive controllers, particularly when a slow sampling frequency and a slow adaptation rate are used. In practice, for various discrete adaptive control algorithms, slow sampling and a slow adaptation rate improve robustness; they are also simple and economical to implement, so a careful choice of sampling rate and adaptation rate is highly recommended. This thesis provides guidelines for choosing these rates for robust discrete adaptive control. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
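The "slow adaptation" idea in the abstract above, keeping the adaptation gain small so that parameter estimates move slowly, can be illustrated with a standard normalized-gradient estimator. This is a generic sketch, not the thesis's algorithm: the plant, the gain value, and the signal names are all assumptions made for illustration.

```python
# Illustrative sketch of slow adaptation: a normalized-gradient parameter
# estimator with a small adaptation gain. Plant and gain are invented.

def adapt_step(theta, phi, y, gamma=0.05):
    """One normalized-gradient update with (slow) adaptation gain gamma."""
    y_hat = sum(t * p for t, p in zip(theta, phi))   # model prediction
    error = y - y_hat
    norm = 1.0 + sum(p * p for p in phi)             # normalization term
    return [t + gamma * error * p / norm for t, p in zip(theta, phi)]

# Identify a first-order plant y[k] = 0.8*y[k-1] + 0.5*u[k-1].
theta = [0.0, 0.0]
y_prev, u_prev = 0.0, 1.0
for k in range(2000):
    y = 0.8 * y_prev + 0.5 * u_prev
    theta = adapt_step(theta, [y_prev, u_prev], y)
    y_prev, u_prev = y, (1.0 if k % 7 < 4 else -1.0)  # mildly exciting input
```

With a small gamma each update moves the estimate only slightly, which is exactly what trades convergence speed for robustness to unmodelled dynamics.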
23

Hierarchical synthesis of control systems at the conceptual design stage

Joshi, Sanjay Kumar 01 January 1991 (has links)
We have developed a systematic procedure, for a limited class of chemical processes, that includes control problems at the early stage of flow sheet development. The procedure decomposes the control problem into a set of sub-problems. For the economic evaluations it is assumed that the raw-material and recycle costs dominate the process economics, and therefore the variables that affect the input-output and the recycle material and energy flows are taken as the optimization variables (process-flow optimizations). The results obtained from this procedure are helpful in the following areas: (1) identifying potentially inoperable flow sheets due to trace component impurities present in the feed streams or produced in the reactor; (2) estimating the economically justified modifications (both in the flow sheet structure and the sizes of process units) to the optimum base-case design; (3) generating alternative process-flow control structures (a set of controlled and manipulated variables, along with their pairing, that can drive the input-output and the recycle flows to their desired steady states); (4) estimating the magnitude of the overshoot of the manipulated variables during transients, and changing the flow sheet structure, the equipment sizes, or the steady-state control structure to accommodate disturbances in an optimum way. Based on the process economics and the relative gain analysis, the optimum control structure is synthesized to minimize the total operating cost in the face of disturbances. The optimum sizes of the constrained unit designs (which restrict the process-flow optimization in the face of disturbances) and the optimum values of the process variables (i.e., the design variables and the process flows) are determined by solving a two-stage optimization problem. 
A method for developing approximate dynamic models for the process flows of continuous chemical plants with recycle streams is described. (Abstract shortened with permission of author.)
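The "relative gain analysis" mentioned above is a standard pairing-screening tool; a minimal sketch for a 2x2 system is below. The gain values are invented for illustration and are not from the thesis.

```python
# Minimal relative gain array (RGA) sketch for a 2x2 steady-state gain matrix.
# For 2x2, lambda_11 = 1 / (1 - g12*g21 / (g11*g22)); rows and columns sum to 1.

def rga_2x2(g11, g12, g21, g22):
    """RGA of a 2x2 gain matrix; diagonal lambda near 1 favors diagonal pairing."""
    lam11 = 1.0 / (1.0 - (g12 * g21) / (g11 * g22))
    return [[lam11, 1.0 - lam11],
            [1.0 - lam11, lam11]]

# Example (made-up gains): mild interaction, so pair y1-u1 and y2-u2.
rga = rga_2x2(2.0, 0.5, 0.3, 1.5)
```

A diagonal relative gain close to 1 indicates weak loop interaction, which is the usual basis for selecting the controlled/manipulated variable pairing.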
24

Decentralized design for robust performance of large-scale interconnected systems

Wong, Jor Yan 01 January 1993 (has links)
A large-scale interconnected system consists of several subsystems that interact dynamically with one another through an interconnection network. Because of the constraints on information flow, a decentralized control system usually provides a more practical solution than a centralized one. Previous studies on the control of large-scale interconnected systems often ignore the important issue of modelling uncertainties in the subsystems and the interconnection dynamics, and thus result in control systems that are inadequate. This thesis develops a procedure for designing decentralized control systems to achieve robust performance of large-scale interconnected systems. Based on a set of subsystem interface specifications, the global design problem is first decomposed into a set of subsystem design problems, which are then solved independently. The decomposition procedure has the property that if all the subsystem design problems are solved, then the original global design problem is also solved. A number of issues arise from the decentralized design procedure: the problem of interface selection; the problem of characterizing robust performance for systems whose external signals are modelled using independent bounds on subsets of the signals (component-bounded signals); and the problem of selecting a single compensator to simultaneously satisfy the design objectives of several design models (multiple design models). The first issue arises from the problem decomposition procedure; in the context of this thesis, the latter two arise from the solution of the subsystem design problems. These problems can also arise from other practical problem formulations and are of interest in their own right. All three issues are addressed in this thesis, and solutions or algorithmic approaches are provided.
25

Synthesis of integrated chemical systems

Chang, Wen-Chi 01 January 1998 (has links)
Algorithmic and heuristic-based approaches are proposed for synthesizing integrated chemical systems. The former is used in the synthesis of reactor networks and reactor-recycle-separator systems; the latter in the synthesis of integrated crystallization systems. In the algorithmic method, a network, or superstructure, is generated that embeds all possible equipment to be used in the process and the potential interconnections among the equipment. The procedure for generating the reactor network and the reactor-recycle-separator flow sheet structure is described. A nonlinear programming (NLP) problem is then formulated for the network, and the optimal flow sheet and accompanying operating conditions are obtained by solving it. For integrated crystallization process synthesis, a heuristic-based systematic procedure is developed that guides the user, step by step, to generate alternative flow sheets for a given crystallization task. First, the required unit operations are determined by comparing the product specifications (production rate, product purity, and others) with the crystallizer effluent characteristics (occlusions, inclusions, crystal size, and others). Second, the destinations of the reaction solvent, mother liquor, wash liquid, recrystallization solvent, and drowning-out solvent are assigned. Then, the solvent recovery system is considered, to recover the solvents and unconverted reactants and to remove impurities from the system. Downstream problems such as excessive filtration time and/or filter size are often caused by unfavorable crystal size. Various crystallizer designs to improve the crystal size distribution are discussed; short-cut equipment models are used to evaluate the alternatives for potential improvement. Issues related to minimization of inclusion impurities and heat integration are also examined. Guidelines are provided to help the user add more detail to the flow sheet at each level.
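The superstructure-optimization idea above can be sketched on a toy scale: embed a structural alternative in the flowsheet model as a continuous variable, then optimize. The model, costs, and numbers below are invented for illustration; a real formulation would be a full NLP solved with a dedicated solver.

```python
# Toy superstructure sketch: a reactor bypass fraction x in [0, 1] is embedded
# in the flowsheet model, and the value minimizing total cost is selected.
# All cost coefficients and the conversion model are invented.

def flowsheet_cost(x):
    """Cost = reactor cost (grows with reactor load) + separation cost
    (grows with unconverted feed when too much material is bypassed)."""
    reactor_load = 1.0 - x            # fraction sent through the reactor
    conversion = 0.9 * reactor_load   # simple linear conversion model
    reactor_cost = 4.0 * reactor_load
    separation_cost = 10.0 * (1.0 - conversion)
    return reactor_cost + separation_cost

# Coarse 1-D search over the single structural variable (a stand-in for NLP).
best_x = min((i / 1000.0 for i in range(1001)), key=flowsheet_cost)
```

Here separation is more expensive than reaction, so the optimum sends everything through the reactor (no bypass); in a real superstructure the solver makes such structural choices for many alternatives simultaneously.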
26

A process boundary based approach to separations synthesis

Pressly, Thomas Gilbert 01 January 1998 (has links)
Process boundaries and difficult regions limit the feasible products of separation units and the recovery of those products. When process boundaries are encountered, a separating agent and/or combinations of different types of equipment are used; in this manner, a number of steps collectively meet the separations objective. One such equipment configuration, the distillation-membrane hybrid, has been studied for binary and multicomponent systems. In this hybrid, the distillation column performs the bulk of the separation, because of the favorable economics of distillation, while the membrane is used to bypass process boundaries and difficult regions. Methods for applying and screening these hybrids were developed, and several configurations were examined conceptually. Case studies were performed on the following systems: water-acetic acid, ethanol-water, propylene-propane, benzene-heptane-octane, and methanol-ethanol-water. Separations synthesis using all possible separation units (crystallization, membranes, distillation, decantation, extraction, etc.) was then examined. A design methodology for generating flowsheets of process alternatives to separate multicomponent systems was developed, based on representing process boundaries with linear hyperplanes. This approximation allows process alternatives to be generated using relatively simple calculations.
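The linear-hyperplane approximation at the end of the abstract reduces a boundary test to a dot product: approximate the boundary by a plane n.x = c in composition space and classify a stream by the sign of n.x - c. The normal vector and compositions below are invented for illustration.

```python
# Sketch of the hyperplane approximation of a process boundary: classify a
# composition x by which side of the plane n.x = offset it falls on.

def side_of_boundary(x, normal, offset):
    """Return +1, -1, or 0 for the side of the hyperplane n.x = offset."""
    s = sum(n_i * x_i for n_i, x_i in zip(normal, x)) - offset
    return (s > 0) - (s < 0)

# Made-up boundary x_A - x_B = 0.1 in a ternary system (mole fractions).
normal, offset = [1.0, -1.0, 0.0], 0.1
feed_1 = [0.60, 0.30, 0.10]   # lies on the + side of the boundary
feed_2 = [0.20, 0.50, 0.30]   # lies on the - side of the boundary
```

Two feeds on opposite sides cannot both be processed by the same simple column sequence, which is what triggers the use of a separating agent or a hybrid unit.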
27

Improving correctness and failure handling in workflow management systems

Kamath, Mohan Umesh 01 January 1998 (has links)
A workflow management system (WFMS) facilitates the coordinated execution of applications (steps) that comprise a business process (workflow) across distributed nodes. State-of-the-art WFMSs do not have adequate support for handling various correctness and failure handling requirements of workflows. Different correctness requirements arise due to the dependencies between steps that access shared resources. Failure of steps in a workflow and system failures can cause data inconsistencies if handled improperly. Scalability is also a problem in current WFMSs since they use a centralized workflow control architecture that can easily become a performance bottleneck. In this thesis, we have developed the concepts and infrastructure necessary to address the above issues. To handle dependencies across workflows we have developed techniques for expressing and handling mutual-exclusion, relative ordering and rollback dependency requirements across workflow instances. To handle failure of steps within a workflow, we have proposed a new opportunistic scheme that avoids unnecessary compensations and re-executions when workflows are rolled back partially and re-executed. To handle system failures we have designed suitable logging schemes and protocols. To achieve scalability while satisfying the different correctness and failure handling requirements, we have enhanced our techniques to work on parallel and distributed workflow control architectures. To realize the above concepts, we have designed a workflow specification language, a two stage compiler and a rule-based run-time system. A workflow designer specifies the workflow schema and the resources accessed by the steps from a global database of resources. The two stage workflow compiler determines data dependencies and translates the high level schema into a uniform set of rules. 
The run-time system interprets these rules and executes the workflows in accordance with their requirements under central, parallel and distributed workflow control. To demonstrate the usefulness and practicality of our approach, we have implemented a prototype system that can offer better correctness, performance and functionality than state-of-the-art WFMSs.
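The partial-rollback behavior described above can be sketched with the classic compensation pattern: run steps in order and, on failure, run compensations for completed steps in reverse. Step names and the failing step are invented; note the thesis's "opportunistic" scheme goes further by skipping compensations that a re-execution would make unnecessary, which this sketch does not model.

```python
# Minimal sketch of compensation-based rollback in a workflow engine.
# Each step is (name, action, compensate); on failure, completed steps
# are compensated in reverse order.

def run_workflow(steps):
    """Execute steps; on the first failure, compensate completed steps."""
    log, done = [], []
    for name, action, compensate in steps:
        try:
            action()
            log.append(("done", name))
            done.append((name, compensate))
        except Exception:
            log.append(("failed", name))
            for prev_name, comp in reversed(done):  # roll back in reverse
                comp()
                log.append(("compensated", prev_name))
            break
    return log

def fail():
    raise RuntimeError("step failed")

log = run_workflow([
    ("reserve_stock", lambda: None, lambda: None),
    ("charge_card",   lambda: None, lambda: None),
    ("ship_order",    fail,         lambda: None),
])
```

In a rule-based system like the one described, both the forward steps and the compensations would be generated by the compiler from the workflow schema rather than listed by hand.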
28

On security issues in data networks

Cai, Songlin 01 January 2005 (has links)
This dissertation studies several security issues in data networks: revealing vulnerabilities, proposing defense mechanisms, providing better tools for analysis, and developing sound security architectures. It consists of three parts: (1) Internet-like topologies that capture the inherent properties of the Internet are desirable for studying its resilience against malicious attacks or normal failures. A novel hierarchical Internet topology generator is proposed that captures two inherent properties of Internet topology: power-law degree distribution and hierarchical structure. (2) An analysis of the inherent trust built into TCP shows that a client can stretch a TCP connection tens of times and keep occupying server resources with little detectable abnormality; this could potentially be exploited in a denial-of-service attack. (3) Some security settings, such as the Bounded Storage Model, call for high-speed random number generation that current true random number generators cannot offer. A hybrid random-bit sequence produced by a pseudo-random number generator with randomly specified parameters might be useful in this setting. A study of a hybrid system using linear congruential recurrences is presented, which will hopefully provide insight for the study of hybrid systems using one-way functions.
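The hybrid idea in part (3) can be sketched as a linear congruential generator (LCG) whose own parameters are chosen at random. This is a generic illustration of the construction, not the dissertation's scheme; the modulus and the seeded parameter choices are assumptions made so the example is reproducible.

```python
# Sketch of a hybrid generator: an LCG x_{k+1} = (a*x_k + c) mod m whose
# parameters a, c, and seed are themselves drawn at random (seeded here
# for reproducibility). Parameter choices are illustrative only.

import random

def lcg_stream(modulus, a, c, seed, n):
    """Return n successive values of the LCG x_{k+1} = (a*x_k + c) mod m."""
    x, out = seed, []
    for _ in range(n):
        x = (a * x + c) % modulus
        out.append(x)
    return out

rng = random.Random(42)             # stands in for a true-random source
m = 2**31 - 1                       # a large prime modulus
a = rng.randrange(2, m)
c = rng.randrange(1, m)
values = lcg_stream(m, a, c, seed=rng.randrange(m), n=8)
```

The point of the hybrid is speed: once the parameters are fixed from a slow true-random source, the LCG itself produces output at high rate, at the cost of the LCG's well-known statistical weaknesses.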
29

Perturbations of H-infinity state feedback systems

Geray, Okan 01 January 1996 (has links)
In this dissertation, a new approach to perturbations of state feedback $H_\infty$ optimization techniques has been developed. New methods based on sensitivity theory have been devised that make the application of formal $H_\infty$ synthesis techniques to feedback system design more efficient. The sensitivity of the optimal state feedback $H_\infty$ solution is quantified for a certain class of regular and singular perturbations. This dissertation considers the problem of adjusting $H_\infty$ weighting functions to improve a design by parametric variations. Estimates for open- and closed-loop transfer functions are provided to assess the parametric change in design. Full state is assumed to be available for feedback. Both regular and singular perturbation results have been developed for high-frequency variations in weighting functions. The state feedback $H_\infty$ optimal solution is characterized in order to estimate the first order change in the $H_\infty$ optimal value resulting from both regularly and singularly perturbed weighting functions used as design parameters.
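The "first order change in the $H_\infty$ optimal value" refers to an expansion of the following generic form, where $\varepsilon$ parameterizes the weighting-function perturbation; the notation here is mine, not the dissertation's:

```latex
\gamma^{*}(\varepsilon) \;=\; \gamma^{*}_{0}
\;+\; \varepsilon \left.\frac{\partial \gamma^{*}}{\partial \varepsilon}\right|_{\varepsilon = 0}
\;+\; O(\varepsilon^{2})
```

The sensitivity term is what the dissertation estimates for both regular perturbations (where it is an ordinary derivative) and singular perturbations (where the limit must be handled separately).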
30

Network and end-host support for heterogeneous services in the Internet

Sahu, Sambit 01 January 2001 (has links)
The rapid growth of the Internet has been accompanied by an evolution of new applications, ranging from simple data services to complex applications such as IP telephony, video on demand, and interactive multimedia communication. These new applications require an end-to-end IP architecture that can support multiple levels of service while preserving the scalability and simplicity of the current, so-called "best-effort" Internet service model. This dissertation examines and proposes network and end-host support for such applications, accounting for both the heterogeneity of their service requirements and the concern for scalability. In the first part of this dissertation, we focus on the end host and propose solutions for handling the variable resource requirements of multimedia streaming applications. We show that commonly used round-based scheduling techniques for transferring data between the disk and the network interface are not well suited to the retrieval of multimedia streams. We propose lazy-EDF scheduling and show that it minimizes the server buffer and disk bandwidth requirements for a static set of requests. Simulation studies with MPEG traces demonstrate the significant performance gains possible under lazy-EDF. Using the optimality properties of lazy-EDF, we next propose greedy-fill with lazy discard (GFD) scheduling to address dynamic request arrivals and admission control in a disk subsystem. In addition, we propose an efficient workload description for stored VBR video that we demonstrate increases resource utilization by as much as 200-250% over the best known result. In the second part of this dissertation, we focus on scalable network support for providing different classes of service to multimedia applications. 
In particular, we focus on the achievable services and limitations of the recently proposed differentiated services (diffserv) architecture, in which simple mechanisms are used within the network core and more complicated functionality is possible only at the network edge. First, we determine the mechanisms at the network core that are best suited to provide service differentiation. To do so, we compare the ability of proposed diffserv mechanisms to provide delay and loss differentiation across service classes. Next, we examine the impact of diffserv mechanisms on applications that use the TCP congestion control protocol. We present a simple yet accurate analytical model for TCP when "profile-based" marking is used at the network edge to provide service differentiation. Using this model, we examine whether it is possible to provide rate guarantees to TCP flows solely by setting the profile parameters at the network edge. We find that it is not always easy to regulate or guarantee the rate achieved by a TCP flow using this approach. We derive conditions that determine which rates are achievable and provide insights for choosing appropriate profile parameters for these achievable services. These observations are validated in a testbed implementation using PCs running a modified Linux kernel.
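For context on the kind of analytical modeling described above, the widely used "square-root" estimate of steady-state TCP throughput is shown below. This is the standard textbook formula, given only as a simpler stand-in: the dissertation's profile-marking model is more detailed and accounts for in-profile versus out-of-profile packet treatment, which this formula does not.

```python
# The classic square-root estimate of steady-state TCP throughput:
# rate ~ MSS * sqrt(1.5) / (RTT * sqrt(p)), where p is the loss probability.

from math import sqrt

def tcp_rate_estimate(mss_bytes, rtt_s, loss_prob):
    """Approximate steady-state TCP throughput in bytes per second."""
    return mss_bytes * sqrt(1.5) / (rtt_s * sqrt(loss_prob))

# Example: 1460-byte segments, 100 ms RTT, 1% loss probability.
rate = tcp_rate_estimate(1460, 0.100, 0.01)
```

The inverse dependence on loss probability is what makes edge marking a lever on TCP rates: changing the marking profile changes the drop probability a flow sees in the core, and hence its achievable throughput.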
