201 |
Application of the discrete maximum principle to the short-term scheduling of a hydroelectric power system. Bensalem, Ahmed. January 1988.
The purpose of this study is to develop a general algorithm that solves, in a robust, flexible and fast way, the short-term scheduling problems of two different multi-reservoir hydroelectric power systems. The first system consists of four reservoirs in series; the second corresponds to that of the Ste Maurice river. / The solution method is based on the discrete maximum principle. A gradient method is used for the solution of the two-point boundary value problem. Two algorithms are suggested for dealing with the difficulties posed by the state variable constraints: the first uses the augmented Lagrangian method, the second is iterative. / Both algorithms give a satisfactory solution to the problem. The first requires more memory space than the second but converges more rapidly; the second executes an iteration in less CPU time. For large-scale problems the second (iterative) algorithm is recommended.
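The flavour of the method can be sketched for a single toy reservoir: a forward pass simulates storage under a candidate release schedule, a backward costate recursion from the discrete maximum principle yields the gradient of the Hamiltonian with respect to each release, and projected steepest descent enforces the release bounds. All parameters below (inflows, prices, bounds) are illustrative assumptions, not data from the thesis.

```python
import numpy as np

T = 24                       # hourly stages
w = np.full(T, 1.0)          # assumed inflows
p = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(T) / T)  # assumed price profile
x0, u_min, u_max = 20.0, 0.0, 2.0                     # initial storage, release bounds

def simulate(u):
    """Forward pass: storage trajectory under releases u (state bounds omitted)."""
    x = np.empty(T + 1)
    x[0] = x0
    for k in range(T):
        x[k + 1] = x[k] + w[k] - u[k]
    return x

def gradient(u, x):
    """Backward costate pass. With cost J = -sum_k p_k u_k x_k (negated,
    head-dependent revenue) the Hamiltonian is
    H_k = -p_k u_k x_k + lam_{k+1} (x_k + w_k - u_k)."""
    lam = np.zeros(T + 1)                    # free terminal state: lam_T = 0
    g = np.empty(T)
    for k in reversed(range(T)):
        g[k] = -p[k] * x[k] - lam[k + 1]     # dH_k/du_k
        lam[k] = -p[k] * u[k] + lam[k + 1]   # dH_k/dx_k
    return g

u = np.full(T, 1.0)
for _ in range(500):                         # projected steepest descent
    u = np.clip(u - 0.05 * gradient(u, simulate(u)), u_min, u_max)

print("revenue:", float(np.sum(p * u * simulate(u)[:T])))
```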
|
202 |
Action diagrams : a methodology for the specification and verification of real-time systems. Khordoc, Karim. January 1996.
In this thesis, we address issues in the specification, simulation, and formal verification of systems characterized by real-time constraints and a mix of protocol and data-computation aspects. We propose a novel specification language and modeling methodology, HAAD (Hierarchical Annotated Action Diagrams). In HAAD, the interface behavior of a system is captured as a hierarchy of action diagrams, while the internal behavior is modeled by an Extended Finite State Machine (EFSM). A leaf action diagram defines a behavior (a template) over a set of ports; procedures and predicates are attached to actions in order to describe the functional aspects of the interface. / We propose algorithms and methods for the automatic generation of simulation models and response verification scripts from HAAD specifications. These models perform "on-the-fly parsing" of actions received at their I/O ports, sequence through state transitions based on the result of this parsing, detect incorrect or ill-formed interface operations (bus cycles), verify that all timing constraints at the input of the model are met, and drive the model outputs with appropriate delays. / We formalize the operational semantics of leaf action diagrams under linear timing constraints, based on the concepts of a block machine and a causal block machine. We state the realizability of an action diagram in terms of the existence of a causal block machine derived from it. We examine the problem of the compatibility of concurrent, communicating leaf action diagrams described by linear timing constraints and show the inaccuracies of known methods that address this problem. We define the action diagram compatibility problem in terms of the compatibility of all possible combinations of causal block machines derived from these action diagrams, and we prove that such enumeration is not needed to answer the compatibility question. This leads to an exact and efficient compatibility verification procedure.
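The compatibility question ultimately reduces to reasoning about systems of linear timing constraints. As a point of reference only (this is the textbook difference-constraint check, not the block-machine procedure developed in the thesis), a set of constraints of the form t_j - t_i <= c is satisfiable exactly when the corresponding constraint graph has no negative cycle:

```python
import itertools

def consistent(n_events, constraints):
    """Check linear timing constraints t_j - t_i <= c via Floyd-Warshall:
    the system is satisfiable iff the constraint graph has no negative cycle."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n_events)] for i in range(n_events)]
    for i, j, c in constraints:   # edge i -> j with weight c encodes t_j - t_i <= c
        d[i][j] = min(d[i][j], c)
    for k, i, j in itertools.product(range(n_events), repeat=3):
        if d[i][k] + d[k][j] < d[i][j]:
            d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n_events))

# hypothetical events: 0 = request, 1 = ack
print(consistent(2, [(1, 0, -2), (0, 1, 10)]))   # True:  2 <= t1 - t0 <= 10
print(consistent(2, [(1, 0, -12), (0, 1, 10)]))  # False: 12 <= t1 - t0 <= 10
```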
|
203 |
Planning of manipulator pushing operations : dynamic simulation and analysis. Consol, Christian Patrick. January 1990.
In the field of robotic manipulation, a key problem addressed by researchers is the presence of uncertainties in the position and orientation of objects to be manipulated by a computer-controlled robotic manipulator. One approach to reducing these uncertainties, without the use of sensors, is the use of pushing operations. In this thesis we first use the quasi-static assumption made by several researchers to plan trajectories of pushed sliding objects. Quasi-static pushing refers to operations in which the mass or acceleration of the object is sufficiently small that inertial forces are dominated by frictional forces. The travel distance of the object, which is first pushed at one point of contact before coming into contact with a second point of the pusher, is calculated. We then compare these trajectories with ones predicted by a dynamical model of the system. This new model takes inertial forces into consideration and incorporates a more realistic friction model that includes a velocity-dependent coefficient of friction. Results show that the predicted direction of object rotation is the same in both cases, but that, predictably, the object rotation rate in the dynamical model is slower due to inertial forces, so a longer travel distance is needed to reorient an object.
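A minimal numerical illustration of the quasi-static/dynamic contrast (a hypothetical one-degree-of-freedom toy with invented parameters, not the thesis's pushing model): under a torque balance the quasi-static rotation rate follows directly, while integrating the dynamic equation with inertia and a velocity-dependent friction coefficient shows the rotation lagging behind it.

```python
# toy parameters (all assumed): an object reorienting about a contact point
I = 0.02            # moment of inertia (kg m^2)
N, r = 9.81, 0.05   # normal force (N) and effective friction radius (m)
mu_c, k = 0.3, 0.8  # Coulomb coefficient and velocity-dependent friction term
tau = 0.25          # pushing torque (N m)

# quasi-static prediction: inertia neglected, torque balance gives omega directly
omega_qs = (tau / (N * r) - mu_c) / k

# dynamic model: I * domega/dt = tau - (mu_c + k * omega) * N * r
dt, T = 1e-3, 2.0
omega, theta = 0.0, 0.0
for _ in range(int(T / dt)):
    domega = (tau - (mu_c + k * omega) * N * r) / I
    omega += domega * dt
    theta += omega * dt

print(f"quasi-static rotation after {T}s: {omega_qs * T:.3f} rad")
print(f"dynamic rotation after {T}s:     {theta:.3f} rad (lags due to inertia)")
```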
|
204 |
Environment support for business modelling : concepts, architecture and implementation. Shen, Xijin, 1966-. January 1994.
The goal of business modelling is the design, analysis, and simulation of an enterprise's architectural structures and their information technology components. To comprehensively support business modelling, an appropriate modelling environment with adequate visualization mechanisms is required. Such an environment must handle model information in a flexible, yet expressive way and support substitution, documentation, validation and dynamic analysis of models, as well as model visualization and alternative representations. / We have developed a business modelling approach based on the formalism of extended, colored Petri nets. To support and validate our approach, we have engineered the Macrotec environment. Macrotec meets a list of requirements we have identified as crucial for the quality of a comprehensive modelling environment. It is conceived as a set of seamlessly integrated tools. Our experience with Macrotec suggests that our concepts and environment substantially facilitate business modelling.
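The firing semantics at the core of such a Petri-net-based approach can be sketched in a few lines (ordinary place/transition nets only; Macrotec's extended, colored nets additionally carry typed tokens and annotations, and the place names below are invented for illustration):

```python
marking = {"order_received": 2, "clerk_free": 1, "order_processed": 0}

# each transition maps to (tokens consumed, tokens produced)
transitions = {
    "process_order": ({"order_received": 1, "clerk_free": 1},
                      {"order_processed": 1, "clerk_free": 1}),
}

def enabled(t):
    pre, _ = transitions[t]
    return all(marking[p] >= n for p, n in pre.items())

def fire(t):
    pre, post = transitions[t]
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] += n

while enabled("process_order"):   # run until no order tokens remain
    fire("process_order")
print(marking)  # {'order_received': 0, 'clerk_free': 1, 'order_processed': 2}
```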
|
205 |
Mechanizing dynamic security analysis. Marceau, Richard J. January 1993.
The objective of software frameworks is to mechanize human processes in order to accomplish high-level tasks that call upon diverse software tools. This thesis describes the ELISA framework prototype, which performs power-system dynamic security analysis in the operations planning environment. ELISA mechanizes routines essential to power-system dynamic security analysis that are traditionally carried out by experts, greatly accelerating the realization of complex processes. Typically, ELISA executes appropriate load-flow and transient-stability simulations (using commercially available simulation software), performs result analysis, identifies and executes changes to the input, and repeats this process until a user-defined goal, such as finding transient stability transfer limits, has been achieved. / A taxonomy of dynamic security analysis in operations planning is proposed, employing the semantic net, class-object-property and rule paradigms. All of these are required to cover the full spectrum of knowledge found in the high-level goals, the process details, the complex conditional structures and the acceptance criteria which characterize dynamic security analysis. This taxonomy also describes the language of operations planners, defining not only the features presently supported by ELISA but also providing a roadmap to future enhancements. Typical sensitivity studies are presented using a 700-bus production model of the Hydro-Quebec network to illustrate the considerable leverage afforded by ELISA-like software. / In addition, the thesis addresses the issue of how such tools can assist in performing research to improve our understanding of fundamental power-system behaviour. Using the ELISA prototype as a laboratory test bed, it is shown that the signal energy E of a network's transient response acts as a barometer of the relative severity of any normal contingency with respect to power generation or transfer P. For a given contingency, as P is varied and the network approaches instability, signal energy increases smoothly and predictably towards an asymptote which defines the network's stability limit; this limit, in turn, permits us to compare the severity of different contingencies. This behaviour can be explained in terms of the effect of increasing power on the damping component of dominant poles, and a simple function is derived which estimates network stability limits with surprising accuracy from two or three stable simulations. / As a corollary, it is also shown that a network's transient response can be screened for instability using a simple frequency-domain criterion. Essentially, this criterion requires performing the Fourier transform of a network's transient voltage response at various monitoring locations; when P is varied and the network goes beyond its stability limit, the angle of the Fourier transform's polar plot fundamentally changes its behaviour, passing from clockwise to counterclockwise rotation about the origin. This is confirmed by results obtained from stability-limit searches on the Hydro-Quebec system. Used in conjunction with signal energy analysis for determining stability-limit proximity, this criterion can be quite useful for mechanized security-limit-determination tools such as ELISA.
/ Signal energy limit estimation and the proposed stability criterion are shown to be applicable to all normal contingencies, and these results hold notwithstanding the presence of many active, nonlinear elements in the network.
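To illustrate the kind of estimate involved (the hyperbolic form below is an assumption for illustration, not the function derived in the thesis), suppose the signal energy grows toward the limit as E(P) = a / (P_lim - P); then two stable simulations suffice to solve for P_lim, and a third provides a consistency check:

```python
def estimate_limit(P1, E1, P2, E2):
    """Solve E1*(P_lim - P1) = E2*(P_lim - P2) = a for P_lim."""
    return (E2 * P2 - E1 * P1) / (E2 - E1)

# synthetic "simulation" data with an assumed true limit of 5000 MW
a, P_lim_true = 2.0e5, 5000.0
E = lambda P: a / (P_lim_true - P)

P1, P2, P3 = 4000.0, 4400.0, 4700.0
print(f"estimated limit: {estimate_limit(P1, E(P1), P2, E(P2)):.0f} MW")
print(f"third-point check: {estimate_limit(P2, E(P2), P3, E(P3)):.0f} MW")
```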
|
206 |
Robust control of uncertain time-delay systems. Haurani, Ammar. January 2004.
This work addresses the problem of robust stabilization and robust H∞ control of uncertain time-delay systems. The time-delays are considered to be present in the states and/or the outputs, and the uncertainties in the system representation are of the parametric norm-bounded type. Both cases of actuators, with and without saturation, are studied, and state-feedback and output-feedback control designs are presented. Two methods for the analysis and synthesis of controllers are used: the first is based on the transfer function, the second on the use of functionals. / In the context of the design method based on transfer functions, the problem of H∞ output-feedback design is considered for a class of uncertain linear continuous-time or discrete-time systems with delayed states and/or outputs (the latter only in the continuous-time case) and norm-bounded parametric uncertainties. The objective is to design a linear output-feedback controller such that, for unknown state and output time-delays and all admissible norm-bounded parameter uncertainties, the feedback system remains robustly stable and the transfer function from the exogenous disturbances to the state-error outputs meets a prescribed H∞ norm upper-bound constraint. The output-feedback structure does not depend on the time-delay. The conditions for the existence of the desired robust H∞ output feedback, and the analytical expression of these controllers, are then characterized in terms of matrix Riccati-type inequalities. In the continuous-time context, both the time-invariant and the time-varying cases are treated. Finally, examples are presented to demonstrate the validity and solvability of the proposed design methods. / Still in the same context, the state-feedback robust stabilization problem for neutral systems with time-varying delays and saturating actuators is addressed. The systems considered are continuous-time, with parametric uncertainties entering all the matrices in the system representation. Actuator saturations are modeled by differential inclusions. A saturating control law is designed, and a region of initial conditions is specified within which local asymptotic stability of the closed-loop system is ensured. / Finally, the robust output-feedback stabilization problem for state-delayed systems with time-varying delays and saturating actuators is addressed. The systems considered are again continuous-time, with parametric uncertainties entering all the matrices in the system representation. Two models are used for the representation of actuator saturations: sector modeling and differential inclusions. Saturating control laws are designed and, in the case of differential inclusions, a region of initial conditions is specified within which local asymptotic stability of the closed-loop system is ensured. (Abstract shortened by UMI.)
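Feasibility conditions of this kind are nowadays routinely checked numerically as linear matrix inequalities. A minimal sketch, assuming the standard delay-independent Lyapunov-Krasovskii test for a nominal (uncertainty-free, saturation-free) system and the cvxpy modelling package; the thesis's Riccati-type conditions are considerably richer:

```python
import numpy as np
import cvxpy as cp

# xdot = A x(t) + Ad x(t - tau) is stable for every delay tau >= 0 if
# P, Q > 0 exist with [[A'P + PA + Q, P Ad], [Ad'P, -Q]] < 0.
A = np.array([[-3.0, 0.0], [0.0, -3.0]])   # assumed example matrices
Ad = np.array([[1.0, 0.5], [0.0, 1.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
lmi = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
               [Ad.T @ P, -Q]])
lmi = (lmi + lmi.T) / 2                    # symmetrize for the PSD constraint
eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n), Q >> eps * np.eye(n),
                   lmi << -eps * np.eye(2 * n)])
prob.solve()
print("delay-independent stability certified:", prob.status == cp.OPTIMAL)
```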
|
207 |
Implementation of a global router for symmetrical FPGAs based on the dual-affine variant of Karmarkar's interior-point method. Buhescu, Razvan Mihai. January 1995.
FPGAs have been among the fastest growing segments of the semiconductor industry, and this growth is expected to continue over the next several years. The complexity of FPGAs makes manual design time-consuming and error-prone; design automation tools are essential to the effective use of FPGAs. Routing is one part of the VLSI design process that is computationally complex and hence very time-consuming. / The arrival of the interior-point method proposed by N. Karmarkar was an important event in linear programming, and several variants have been developed since then. Recent experiments in a variety of areas report that Karmarkar's algorithm performs asymptotically faster than the traditional simplex method and is competitive with simulated annealing. / In this thesis, a global router for symmetrical FPGAs is implemented based on the dual-affine variant of Karmarkar's interior-point method. Current approaches in VLSI global routing, the model of symmetrical FPGAs and the formulation of the global-routing problem as a mixed-integer linear program are reviewed. The objective of the global router is to find a routing that balances the channel-segment densities in the symmetrical FPGA. Specific implementation details are covered in depth, and experimental results for samples of random global-routing problems are reported.
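The core dual-affine scaling iteration is compact: rescale by the current slacks, solve a linear system for the ascent direction, and step a fixed fraction of the way to the constraint boundary. The sketch below runs on a generic toy LP, not the router's mixed-integer routing formulation:

```python
import numpy as np

def dual_affine(A, b, c, y0, iters=50, gamma=0.95):
    """Dual-affine scaling sketch for max b'y s.t. A'y <= c,
    starting from a strictly feasible interior point y0."""
    y = y0.astype(float)
    for _ in range(iters):
        s = c - A.T @ y                        # strictly positive slacks
        S2 = np.diag(1.0 / s**2)
        dy = np.linalg.solve(A @ S2 @ A.T, b)  # ascent direction
        ds = -A.T @ dy
        steps = [s[i] / -ds[i] for i in range(len(s)) if ds[i] < 0]
        if not steps:
            raise ValueError("problem is unbounded")
        y = y + gamma * min(steps) * dy        # stop short of the boundary
    return y

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])  # constraints y1 <= 1, y2 <= 1, y1 + y2 <= 1.5
b = np.array([1.0, 1.0])
c = np.array([1.0, 1.0, 1.5])
y = dual_affine(A, b, c, y0=np.array([0.1, 0.1]))
print(y, "objective:", b @ y)    # objective approaches the optimum 1.5
```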
|
208 |
Specification driven architectural modelling environment for telecommunication systems synthesis. Tanir, Oryal. January 1994.
Design automation has steadily contributed to the improvements witnessed in the system design process. Initial applications addressed low-level design concerns such as transistor layout and simulation; however, the focus of tools has slowly been progressing up the design abstraction scale. The current state of the art provides modelling capabilities at different levels of abstraction, but solutions for synthesis issues at the register-transfer and lower levels are the norm. The proliferation of design description languages at different abstraction levels has prompted the need for standardization (VHDL and Open-Verilog) to promote design migration and re-use. / While design automation has helped in reducing design time-lines and design churn, a major source of design difficulties is only recently being addressed and promises to be the next wave in design automation applicability. The problems arise at the architectural (or system) level of abstraction, very early in the design cycle. Recent research in this field attempts to bridge the design-process gap between specification and design, and provides a platform for experimenting with hardware and software trade-offs. / This dissertation studies the requirements for an architectural design environment. In particular, an environment specific to the telecommunications domain is proposed in order to limit the potentially large design exploration space. An intermediate design language is also introduced to accommodate both high-level modelling and synthesis driven by the user and the environment. Finally, a Design Analysis and Synthesis Environment (DASE) is described to facilitate architectural-level activities. The environment, a proof of concept, provides a generic model library, simulation, synthesis and Petri-net analysis support. Realistic design examples are explored to illustrate architectural design activities with the environment.
|
209 |
Reasons for non-compliance with mandatory information assurance policies by a trained population. Shelton, D. Cragin. 13 May 2015.
Information assurance (IA) is about protecting key attributes of information and of the data systems that handle it. Treating IA as a system, it is appropriate to consider the three major elements of any system: people, processes, and tools. While IA tools exist in the form of hardware and software, tools alone cannot assure key information attributes; IA procedures, and the people who must follow those procedures, are also part of the system. It is well established that people do not always follow IA procedures. A review of the literature showed not only that there is no general consensus on why people fail to follow IA procedures, but that no studies were found that simply asked people their reasons. Published studies addressed reasons for non-compliance, but always within the framework of one of several assumed theories of human performance. The study described here took a first small step by asking a sample from an under-studied population, users of U.S. federal government information systems, why, and how often, they have failed to comply with two IA procedures related to password management. The results may lay the groundwork for extending the same methodology across a range of IA procedures, eventually suggesting new approaches to motivating people, modifying procedures, or developing tools to better meet IA goals. In the course of the study, an unexpected result occurred: the study plan had included comparing the data for workers with and without IA duties, yet almost all survey respondents declared having IA duties. Consideration of a comment by a pilot study participant suggested that IA awareness programs emphasizing universal responsibility for information security may have caused the unexpected responses. The study conclusions address suggestions for refining the question in future studies.
|
210 |
Post-Disaster Interim Housing : Forecasting Requirements and Determining Key Planning Factors. Jachimowicz, Adam. 16 September 2014.
Common tenets in the field of emergency management hold that all disasters are different and that all disasters involve a great deal of uncertainty. For these and many other reasons, providing post-disaster assistance to victims presents many challenges. The Federal Emergency Management Agency (FEMA) has identified post-disaster interim housing as one of its greatest challenges, highlighted in the media by the spectacular failures witnessed during the recovery efforts for Hurricane Katrina. Partly in response, FEMA developed the "National Disaster Housing Strategy", which establishes the framework and strategic goals for providing housing to disaster victims. This strategy calls for emergency management professionals both to anticipate needs and to balance a host of factors in order to provide quick, economical, community-based housing solutions that meet individual, family, and community needs while enabling recovery. The first problem is that emergency management officials need to make decisions early on, without actual event data, in order to provide timely interim housing options to victims. The second problem is that there is little guidance, and there are no quantitative measures, for prioritizing the many factors these officials must weigh when providing interim housing. This research addressed both problems. To anticipate needs, a series of forecast models was developed using historical data provided by FEMA and regression analysis. The models forecast the cost of a housing mission, the number of individuals applying to FEMA for assistance, the number of people eligible for housing assistance, and the number of trailers FEMA will provide as interim housing. The variables analyzed and used to make the predictions were: population, wind speed, homeownership rate, number of households, income, and poverty level. Of the four models developed, the first three demonstrated statistical significance, while the last one did not. The models are limited to wind-related hazards. These models and the associated forecasts can assist federal, state, and local government officials in scoping and planning a housing mission; they also provide insight into how the six predictor variables influence the forecast. The second part of this research used a structured feedback process (Delphi) and expert opinion to develop a ranked list of the most important factors that emergency management officials should consider when conducting operational planning for a post-disaster housing mission. This portion of the research took guidance from the "National Disaster Housing Strategy" and attempted to quantify it based on the consensus opinion of a group of experts. The top three factors determined by the Delphi were: 1) house disaster survivors as soon as possible, 2) the availability of existing housing, and 3) the status of infrastructure.
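The forecasting side amounts to regression on the six predictors named above. The sketch below fits an ordinary least-squares model to synthetic placeholder data (the FEMA historical dataset and the thesis's fitted coefficients are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # synthetic "disasters"
predictors = ["population", "wind_speed", "homeownership", "households",
              "income", "poverty_rate"]
X = rng.uniform(size=(n, len(predictors)))
true_beta = np.array([0.8, 1.5, -0.3, 0.9, -0.4, 0.6])  # invented for the demo
y = X @ true_beta + rng.normal(scale=0.1, size=n)       # e.g. housing-mission cost

X1 = np.column_stack([np.ones(n), X])                   # add intercept column
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)           # ordinary least squares
for name, b in zip(["intercept"] + predictors, beta):
    print(f"{name:>14}: {b:+.3f}")
```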
|