271 |
Automating test generation for discrete event oriented real-time embedded systems. Cunning, Steven J., 1963- January 2000 (has links)
The purpose of this work is to provide a method for the automatic generation of test scenarios from the behavioral requirements of a system. The goal of the generated suite of test scenarios is to allow the system design to be validated against the requirements. The benefits of automatic test generation include improved efficiency, completeness (coverage), and objectivity (removal of human bias). The Model-based Codesign method is refined by defining a design process flow. This process flow includes the generation of test suites from requirements and the application of these tests across multiple levels of the design path. An approach is proposed that utilizes what is called a requirements model and a set of four algorithms. The requirements model is an executable model of the proposed system, defined in a deterministic state-based modeling formalism. Each action in the requirements model that changes the state of the model is identified with a unique requirement identifier. The scenario generation algorithms perform controlled simulations of the requirements model in order to generate a suite of test scenarios applicable to black-box testing. A process defining the generation and use of the test scenarios is developed. This process also treats temporal requirements, which are considered separately from the generation of the test scenarios. An algorithm is defined to combine the test scenarios with the environmental temporal requirements to produce timed test scenarios in the IEEE standard C/ATLAS test language. An algorithm is also defined to describe the behavior of the test environment as it interprets and applies the C/ATLAS test programs. Finally, an algorithm to analyze the test results logged while applying the test scenarios is defined. Measurements of several metrics on the scenario generation algorithms have been collected using prototype tools.
The results support the position that the algorithms are performing reasonably well, that the generated test scenarios are adequately efficient, and that the processing time needed for test generation grows slowly enough to support much larger systems.
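The core idea — controlled simulation of a requirement-tagged state model to produce black-box test scenarios — can be sketched as follows. Everything here (the `Transition` record, the toy state machine, the depth-bounded exploration) is an illustrative assumption, not the thesis's actual algorithms or formalism:

```python
# Hypothetical sketch: a deterministic state machine whose transitions carry
# unique requirement identifiers is explored by controlled simulation; each
# explored event path becomes one black-box test scenario, annotated with the
# requirements it covers.
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    src: str        # current state
    event: str      # external stimulus applied to the black box
    dst: str        # next state
    req_id: str     # unique requirement identifier for this state-changing action

TRANSITIONS = [
    Transition("IDLE",    "start",  "RUNNING", "REQ-01"),
    Transition("RUNNING", "pause",  "PAUSED",  "REQ-02"),
    Transition("PAUSED",  "resume", "RUNNING", "REQ-03"),
    Transition("RUNNING", "stop",   "IDLE",    "REQ-04"),
]

def generate_scenarios(initial, max_depth=3):
    """Depth-bounded exploration; every completed path is one test scenario."""
    scenarios = []
    def explore(state, path, covered):
        outgoing = [t for t in TRANSITIONS if t.src == state]
        if not outgoing or len(path) == max_depth:
            if path:
                scenarios.append((tuple(e for e, _ in path), frozenset(covered)))
            return
        for t in outgoing:
            if t.req_id in covered:      # stop rather than re-cover a requirement
                if path:
                    scenarios.append((tuple(e for e, _ in path), frozenset(covered)))
                continue
            explore(t.dst, path + [(t.event, t.req_id)], covered | {t.req_id})
    explore(initial, [], set())
    return scenarios

for events, reqs in generate_scenarios("IDLE"):
    print(list(events), "covers", sorted(reqs))
```

Each scenario pairs an event sequence (the test inputs) with the requirement identifiers it exercises, which is what makes coverage measurable.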
|
272 |
Integrated scenario and process modeling support for collaborative requirements elicitation. Hickey, Ann Marie January 1999 (has links)
Information systems development research has documented the importance and the difficulty of eliciting requirements from users. Research on the use of Group Support Systems (GSS) for requirements elicitation led to the development of the Collaborative Software Engineering Methodology (CSEM) and identified the need for collaborative methods and tools to provide a dynamic picture of the business processes that a system must support. Recent research suggests that scenarios can fill this need. A review of the scenario literature showed that although there is widespread agreement on the usefulness of scenarios, there are many questions on how to implement a user-focused, scenario-based systems development process. The purpose of this research was to advance understanding in this area and to determine: What are the collaborative modeling processes, tools, and facilitation techniques needed to effectively elicit scenarios from users in a group environment? A two-phase, multi-method systems development research approach was used. The first phase focused on the use of a general-purpose GSS for collaborative scenario elicitation. A conceptual framework and initial methodology were developed and then evaluated during exploratory case studies and a laboratory experiment. The second phase focused on the development and evaluation of a special-purpose GSS and methodology. Phase I results showed that users can easily define scenarios that provide rich pictures of the problem domain; that an iterative, collaborative methodology with scenario and action prompts is needed to ensure scenario completeness; and that limitations of the general-purpose GSS negatively impacted productivity. The Collaborative Distributed Scenario and Process Analyzer (SPA) provides integrated textual scenario and graphical process modeling capabilities which successfully overcame these limitations. This research made several contributions. CSEM was extended to define scenario usage opportunities throughout development.
Scenario content, form, group process, and facilitation techniques, which practitioners can use now, were defined for collaborative scenario elicitation using a general-purpose GSS. A special-purpose GSS tool (SPA) was developed and integrated into a comprehensive methodology which allows user groups to rapidly define and analyze scenarios in face-to-face and distributed settings. Finally, the flexibility designed into SPA opens up many other potential uses for SPA and serves as a first step toward a build-your-own GSS tool.
|
273 |
Mechatronic design of flexible manipulators. Zhou, Pixuan January 1999 (has links)
The construction of lightweight manipulators with a larger speed range is one of the major goals in the design of well-behaved industrial robotic arms. Their use will lead to higher productivity and lower energy consumption than is common with heavier, rigid arms. However, due to the flexibility involved with link deformation and the complexity of distributed parameter systems, the modeling and control of flexible manipulators remain a major challenge in robotics research. A compromise between modeling costs and control efficiency for real-time implementation is inevitable. The interdependency of subsystems results in only locally optimal performance in the traditional design scheme. An important research topic in flexible manipulator design is the pursuit of better system performance while avoiding model-intensive or control-intensive work. This problem can be solved using the proposed mechatronic design approach, which treats the mechanical, electrical, and control components of a flexible manipulator concurrently. The result is an improved design, with an explicit link shape and controller parameters, in which modeling accuracy and the control problem are no longer critical for obtaining the desired performance. The dynamics of flexible manipulators with rotatory inertia are derived, and state-space equations integrating the DC motor dynamics are developed as a theoretical basis for mechatronic designs. Two case studies, based on the LQR formulation and H-infinity control, are considered. The beam shape and controller parameters are obtained using an adaptive iterative algorithm that accommodates various geometrical constraints. Also, different output feedback strategies are investigated to evaluate the impact of various controller structures. Finally, a sensitivity analysis in terms of parameter variations and model uncertainties is conducted to reveal the robustness of this mechatronic design.
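One inner step of such an iteration — computing an optimal state-feedback gain for a given linearized state-space model — can be sketched with a standard LQR solve. The two-state model below is a placeholder invented for illustration (the thesis's models include link deformation modes and motor electrical dynamics); an outer design loop would re-derive A and B from an updated beam shape and repeat:

```python
# Minimal LQR sketch for one controller-design step of a mechatronic iteration.
# A, B, Q, R are illustrative placeholders, not the thesis's actual dynamics.
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder 2-state model (e.g., joint angle and rate).
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weighting
R = np.array([[0.1]])      # control effort weighting

P = solve_continuous_are(A, B, Q, R)   # solve the algebraic Riccati equation
K = np.linalg.inv(R) @ B.T @ P         # optimal feedback: u = -K x

closed_loop = A - B @ K
print("LQR gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(closed_loop))
```

For a controllable pair (A, B) with positive-definite Q, the resulting closed loop is guaranteed stable, which is what makes LQR a convenient inner solver inside a shape/controller co-design loop.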
|
274 |
Potential features for object identification by robots. Hussain, Sheikh Akmal, 1963- January 1991 (has links)
Object identification is very important for robotic manipulation of objects. A study of potential features has been done. Three prime feature-extraction techniques have been analyzed: vision, touch, and nondestructive evaluation (NDE) of materials. The role of perceptual organization in aiding object identification is also discussed. A list of features has been obtained for each technique. An evaluation of the feature-extraction techniques, based on the cost and accuracy of computation, is presented. Some sample object identification systems have been designed using the Classification Expert System Maker (CESM). Use of the D-matrix (distinguishability matrix) is emphasized to obtain the optimal feature set used to generate a classification tree. The classification tree is transferred into the CESM knowledge base to obtain an expert system. A comprehensive multisensor system for object identification is proposed.
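The D-matrix idea — record which features distinguish which pairs of objects, then pick a small covering feature set before building the classification tree — can be sketched as a greedy set cover. The objects, features, and function names below are invented for illustration and are not taken from CESM:

```python
# Hypothetical D-matrix sketch: a feature "distinguishes" an object pair when
# its values differ; a greedy set cover picks features until every pair of
# objects is distinguished.
from itertools import combinations

# toy feature values per object
OBJECTS = {
    "bolt":   {"shiny": 1, "round": 1, "soft": 0, "magnetic": 1},
    "washer": {"shiny": 1, "round": 1, "soft": 0, "magnetic": 0},
    "eraser": {"shiny": 0, "round": 0, "soft": 1, "magnetic": 0},
}
FEATURES = ["shiny", "round", "soft", "magnetic"]

def greedy_feature_set(objects, features):
    uncovered = set(combinations(objects, 2))   # object pairs still to separate
    chosen = []
    while uncovered:
        # pick the feature that distinguishes the most remaining pairs
        best = max(features, key=lambda f: sum(
            objects[a][f] != objects[b][f] for a, b in uncovered))
        newly = {(a, b) for a, b in uncovered
                 if objects[a][best] != objects[b][best]}
        if not newly:
            raise ValueError("remaining pairs are indistinguishable")
        chosen.append(best)
        uncovered -= newly
    return chosen

print(greedy_feature_set(OBJECTS, FEATURES))
```

The selected features then index the internal nodes of a classification tree; any object can be identified by testing only the chosen features.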
|
275 |
Intelligent space laboratory organizational design using system entity structure concepts. Kelly, Michael Robert, 1953- January 1990 (has links)
This thesis is the product of a knowledge acquisition effort whose objective was to obtain information essential to the modelling and simulation of a robotically operated laboratory on board the forthcoming space station "Freedom." The information is represented using the system entity structure, a knowledge representation scheme that utilizes artificial intelligence concepts. The system entity structure details the design information and associated knowledge required for the intelligent autonomous operation of the space-based laboratory. The approach proves very beneficial for organizing and displaying the vast amounts of information that constitute this intricate system design. Knowledge management, representation, and the nature of a future software implementation are also addressed.
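A system entity structure alternates entity nodes with aspect (decomposition) and specialization (variant) nodes. The toy laboratory decomposition below is invented for illustration; the thesis's actual "Freedom" laboratory structure is far richer:

```python
# Toy system entity structure (SES) sketch: entities decompose through aspects
# and branch into variants through specializations. Names are illustrative.
class Node:
    def __init__(self, name, kind, children=()):
        assert kind in ("entity", "aspect", "specialization")
        self.name, self.kind, self.children = name, kind, list(children)

ses = Node("SpaceLab", "entity", [
    Node("physical-decomposition", "aspect", [
        Node("RobotArm", "entity", [
            Node("arm-type", "specialization", [
                Node("Cartesian", "entity"),
                Node("Articulated", "entity"),
            ]),
        ]),
        Node("ExperimentRack", "entity"),
    ]),
])

def leaves(node):
    """Entities with no further decomposition (candidate atomic models)."""
    if node.kind == "entity" and not node.children:
        return [node.name]
    return [n for c in node.children for n in leaves(c)]

print(leaves(ses))
```

Pruning such a structure (choosing one child per specialization) yields one concrete system configuration, which is what makes the SES useful for organizing large design spaces.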
|
276 |
Internal quality audit program in the aerospace industry. Tubalado, Dario M. 18 September 2013 (has links)
<p>Internal quality auditing (IQA) in the aerospace defense industry is not optional. Under Part 46 of the Federal Acquisition Regulation (FAR), all businesses providing products and services to the U.S. government are required to comply with their contracts' quality requirements. The number of compliance audits an organization receives is directly proportional to the number of government-related contracts it holds. Therefore, most organizations are forced to focus IQAs on compliance to survive. The release of the AS9100 international aerospace standard in 1999 was pivotal in eliminating the multiple audit requirements that had plagued the industry. However, the focus on compliance has remained rooted within the IQA system. Audit experts claim that recent updates included in AS9100 Rev C will change IQA's focus from auditing for compliance to auditing for effectiveness and performance. </p>
|
277 |
Data-based Analysis and Control for Nonlinear Dynamical Systems. Wang, Zhuo 03 October 2013 (has links)
<p> In recent years, human society has entered the age of big data. With the rapid development of information science and technology, many businesses and industries have undergone great changes. While the scale of enterprises is increasing, production technology, production equipment, and industrial processes are becoming more and more complex. As a consequence, traditional methods for analyzing and controlling such systems, which need to establish mathematical models based on physical and chemical mechanisms, have become infeasible. Thanks to modern digital sensing, storage, communication, and processing technologies and their universal application, these enterprises generate a vast amount of data on a daily basis, which reflect various information about the system dynamics. How to effectively use these online and offline data, combining data mining, pattern recognition, and computer technologies with control theory and systems engineering, has become a very important issue that needs to be addressed. </p><p> The aim of our work is to develop data-based methods not only for system analysis but also for nonlinear systems control when no explicit mathematical model is available. The study of system properties is an important topic in control theory and systems engineering. We first develop a series of data-based methods for analyzing the state/output controllability and state observability, the stability of the equilibrium point, the input-to-state stability, and the input-to-output stability of linear discrete-time systems with unknown parameter matrices. These data-based methods use only measured data to compute the state/output controllability matrices and the state observability matrix, in order to verify the corresponding properties. 
Compared with traditional model-based approaches, which have to identify the system parameter matrices, they have the advantages of higher calculation precision and lower computational complexity. We then develop data-based methods to analyze the stability of a class of nonlinear discrete-time systems with unknown mathematical models. These methods also study the domain of attraction of the asymptotically stable equilibrium point, using measured state data. </p><p> After system analysis, we give a direct data-based output feedback control (DDBOFC) method for a class of nonlinear systems with unknown mathematical models. This method is characterized by a low requirement for a priori knowledge about the system dynamics, and it studies the control problem in two stages. In the first stage, we assume that there is no measurement or process noise in the measured data. We apply a fast sampling technique to measure the output signal, which contains information about the nonlinear system. A zero-order hold (ZOH) and a control switch are also applied to collect system information. The corresponding Jacobian matrices are then calculated from these sampled output data, and the feedback gain matrix is calculated and adjusted based on these Jacobian matrices. Theoretical analysis of the convergence, together with simulation results, demonstrates the feasibility of this DDBOFC method. In the second stage, we study the case where there is measurement noise in the sampled output data. We still apply the fast sampling, the ZOH, and the control switch for information collection. The method applies a data-based least squares estimation (DBLSE) algorithm to obtain the best unbiased estimators of the corresponding Jacobian matrices, based on which the feedback gain matrix is calculated and adjusted. 
Again, we present theoretical analysis and computer simulation to demonstrate feasibility.</p><p> The last part of this thesis is an indirect data-based output feedback control (IDBOFC) method for a class of nonlinear discrete-time systems with unknown mathematical models. The IDBOFC method likewise relies on some prior knowledge about the system. We first use a neural network and historical I/O data to establish an equivalent model that approximates the original system. Then, with this approximate model and the measured output data, we use a nonlinear programming method to estimate the corresponding Jacobian matrices. The feedback controller, designed in real time from these Jacobian matrices, can drive the output signal to its desired value. </p>
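The common ingredient of both methods — estimating a local Jacobian of an unknown map from sampled, noisy data by least squares — can be illustrated with a small sketch. The toy plant, perturbation sizes, and noise level below are invented for illustration; the thesis's DBLSE operates on sampled output data from the real closed loop:

```python
# Sketch of least-squares Jacobian estimation from data: perturb the input of
# an unknown map around an operating point, record noisy output deviations,
# and fit the local Jacobian by linear least squares.
import numpy as np

rng = np.random.default_rng(0)

def plant(u):                      # unknown nonlinear map (for demo only)
    return np.array([np.sin(u[0]) + 0.5 * u[1],
                     u[0] * u[1]])

u0 = np.array([0.3, -0.2])         # operating point
y0 = plant(u0)

# collect perturbed input/output pairs with additive measurement noise
dU = 0.01 * rng.standard_normal((50, 2))
dY = np.array([plant(u0 + du) - y0 for du in dU])
dY += 1e-4 * rng.standard_normal(dY.shape)

# least-squares fit: dY ≈ dU @ J.T  =>  J.T = lstsq(dU, dY)
J = np.linalg.lstsq(dU, dY, rcond=None)[0].T

true_J = np.array([[np.cos(u0[0]), 0.5],
                   [u0[1],         u0[0]]])
print("estimated Jacobian:\n", J)
print("max abs error:", np.abs(J - true_J).max())
```

The estimated Jacobian then plays the role the analytical linearization would play in a model-based design: the feedback gain is computed and adjusted from it.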
|
278 |
A path planning and obstacle avoidance hybrid system using a connectionist network. Schuster, Christopher Emmet January 1991 (has links)
Automated path planning and obstacle avoidance has been the subject of intensive research in recent times. Most efforts in the field of semiautonomous mobile-robotic navigation involve using Artificial Intelligence search algorithms on a structured environment to achieve either good or optimal paths. Other approaches, such as incorporating Artificial Neural Networks, have also been explored. By implementing a hybrid system using the parallel-processing features of connectionist networks and simple localized search techniques, good paths can be generated using only low-level environmental sensory data. This system can negotiate structured two- and three-dimensional grid environments, from a start position to a goal, while avoiding all obstacles. Major advantages of this method are that solution paths are good in a global sense and path planning can be accomplished in real time if the system is implemented in customized parallel-processing hardware. This system has been proven effective in solving two- and three-dimensional maze-type environments.
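The hybrid idea can be sketched on a 2-D grid: a connectionist-style relaxation (here, a wavefront of hop counts spreading from the goal, which each cell could compute from its neighbors in parallel) followed by simple local search that descends the resulting field. The grid and function names are invented for illustration, not taken from the thesis:

```python
# Sketch: parallel-style relaxation (wavefront from the goal) plus localized
# greedy descent from the start. Descending a BFS field always reaches the
# goal along a shortest path, because every cell has a strictly closer neighbor.
from collections import deque

GRID = ["....#...",
        "..#.#.#.",
        "..#...#.",
        "....#..."]          # '#' = obstacle
ROWS, COLS = len(GRID), len(GRID[0])
NBRS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def wavefront(goal):
    """Breadth-first relaxation: each cell settles one hop above its best neighbor."""
    dist = {goal: 0}
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in NBRS:
            nr, nc = r + dr, c + dc
            if 0 <= nr < ROWS and 0 <= nc < COLS and GRID[nr][nc] != '#' \
                    and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return dist

def descend(start, goal, dist):
    """Local search: repeatedly step to the neighbor with the lowest field value."""
    path = [start]
    while path[-1] != goal:
        r, c = path[-1]
        path.append(min(((r + dr, c + dc) for dr, dc in NBRS
                         if (r + dr, c + dc) in dist),
                        key=dist.get))
    return path

field = wavefront((3, 7))
route = descend((0, 0), (3, 7), field)
print("path length:", len(route) - 1, "steps")
```

Because the relaxation step is purely local (each cell looks only at its neighbors), it maps naturally onto parallel connectionist hardware, which is the real-time advantage the abstract refers to.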
|
279 |
Towards a behavioral approach to linear approximate modeling. Gatt, George John January 1993 (has links)
In this thesis, the foundations for the development of a behavioral approach to linear approximate modeling are established. A particular data set, consisting of stable, discrete-time, purely exponential time series, and a specific class of dynamical models are considered. A misfit function between the data measurements and a system belonging to this model class is defined, and the problem of characterizing all members of the model class for which the misfit remains below a prespecified error level is addressed. The block Hankel matrix constructed from the data measurements is then introduced, and it is shown that optimal Hankel-norm approximation theory provides the main tool for a partial solution of this problem.
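The block Hankel construction is easy to illustrate for a scalar exponential time series: a signal that is a sum of exponential modes fills a Hankel matrix whose rank equals the number of modes, which is what makes Hankel-based (and Hankel-norm approximation) machinery applicable. The series below is an invented example:

```python
# Sketch: build a Hankel matrix from a purely exponential time series and
# observe that its numerical rank equals the number of exponential modes.
import numpy as np

def hankel_from_series(w, rows):
    cols = len(w) - rows + 1
    return np.array([w[i:i + cols] for i in range(rows)])

t = np.arange(20)
series = 2.0 * 0.9 ** t - 0.5 * 0.4 ** t      # two stable exponential modes
H = hankel_from_series(series, rows=5)

sv = np.linalg.svd(H, compute_uv=False)
print("singular values:", np.round(sv, 6))
# Only two singular values are numerically nonzero, matching the two modes;
# truncating small singular values yields a lower-order approximate model.
```

The thesis's misfit/error-level question then becomes a question about how far these singular values can be truncated while staying below the prespecified error.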
|
280 |
Message passing versus distributed shared memory on networks of workstations. Lu, Honghui January 1995 (has links)
We compared the message passing library Parallel Virtual Machine (PVM) with the distributed shared memory system TreadMarks on networks of workstations. We presented the performance of nine applications, including Water and Barnes-Hut from the SPLASH benchmarks; 3-D FFT, Integer Sort and Embarrassingly Parallel from the NAS benchmarks; ILINK, a widely used genetic analysis program; and SOR, TSP, and QuickSort.
TreadMarks performed nearly identically to PVM on computation-bound programs, such as the Water simulation of 1728 molecules. For most of the other applications, including ILINK, TreadMarks performed within 75% of PVM with 8 processes. The separation of synchronization and data transfer, and the additional messages needed to request updates for data in the invalidate-based shared-memory protocol, were two of the reasons for TreadMarks's lower performance. TreadMarks also suffered from extra data communication due to false sharing. Moreover, PVM benefited from the ability to aggregate scattered data in a single message.
|