331

HIERARCHICAL HYBRID-MODEL BASED DESIGN, VERIFICATION, SIMULATION, AND SYNTHESIS OF MISSION CONTROL FOR AUTONOMOUS UNDERWATER VEHICLES

Bhattacharyya, Siddhartha 01 January 2005 (has links)
The objective of the modeling, verification, and synthesis of hierarchical hybrid mission control for an underwater vehicle is to (i) propose a hierarchical architecture for mission control of an autonomous system, (ii) develop extended hybrid state machine models for the mission control, (iii) use these models to verify logical correctness, (iv) check the feasibility of simulation software to model the mission executed by an autonomous underwater vehicle (AUV), (v) perform synthesis of high-level mission coordinators for coordinating lower-level mission controllers in accordance with the given mission, and (vi) suggest further design changes for improvement. The dissertation describes a hierarchical architecture in which mission-level controllers based on hybrid systems theory have been, and are being, developed using a hybrid systems design tool that allows graphical design, iterative redesign, and code generation for rapid deployment onto the target platform. The goal is to support current and future autonomous underwater vehicle (AUV) programs as they meet evolving requirements and capabilities. While the tool facilitates rapid redesign and deployment, it is crucial to include safety and performance verification in each step of the (re)design process. To this end, the modeling of the hierarchical hybrid mission controller is formalized to facilitate the use of available tools and newly developed methods for formal verification of safety and performance specifications. A hierarchical hybrid architecture for mission control of autonomous systems, with application to AUVs, is proposed, and a theoretical framework for the models that make up the architecture is outlined. An underwater vehicle, like any other autonomous system, is a hybrid system: the dynamics of the vehicle and its vehicle-level control are continuous, whereas the mission-level control is discrete, making the overall system a hybrid system, i.e., one possessing both continuous and discrete states. The hybrid state machine models of the mission controller modules are derived from their implementation in TEJA, a software tool for representing hybrid systems with support for automatic code generation. Their logical correctness properties have been verified using UPPAAL, a software tool for verification of timed automata, a special kind of hybrid system. A TEJA-to-UPPAAL converter, called dem2xml, has been created at the Applied Research Lab; it converts a hybrid (timed) autonomous system description in TEJA to an UPPAAL system description. The verification work involved developing abstract models of the lower-level vehicle controllers with which the mission controller modules interact, and follows a hierarchical approach: assuming the correctness of level-zero (vehicle) controllers, we establish the correctness of level-one mission controller modules, then the correctness of level-two modules, and so on. The goal of verification is to show that any valid mission formalized in our framework executes its actions safely and correctly. Simulation of the sequence of actions executed for each operation gives a better view of the combined working of the mission coordinators and the low-level controllers, so we next examined the feasibility of simulating the operations executed during a mission. A Perl program has been developed to convert the UPPAAL .xml files into OpenGL graphics files, which animate the steps involved in a sequence of operations executed by an AUV. The highest-level coordinators send mission orders to be executed by the lower-level controllers, so a more generalized design of the highest-level controllers would help accommodate a variety of missions across a broad range of applications. Initially, we consider manually synthesized mission coordinator modules; later, we design automated synthesis of coordinators. This method synthesizes mission coordinators that coordinate the lower-level controllers for the execution of the ordered missions and can be used for any autonomous system.
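To make the layered control idea concrete, here is a minimal Python sketch of a level-one mission coordinator sequencing abstracted level-zero vehicle behaviours. It is illustrative only: the phase names, commands, and the always-successful vehicle controller are invented for the example and do not reproduce the TEJA/UPPAAL models developed in the dissertation.

```python
# Minimal sketch of a two-level hierarchical mission controller.
# The level-one coordinator is a discrete state machine that sequences
# abstracted level-zero vehicle behaviours; all names are illustrative.
from enum import Enum, auto

class Phase(Enum):
    TRANSIT = auto()
    SEARCH = auto()
    RETURN = auto()
    DONE = auto()

class VehicleController:
    """Abstraction of a level-zero controller: reports only success/failure."""
    def execute(self, command, target):
        print(f"level-0: {command} -> {target}")
        return True  # assume the continuous controller reaches its set-point

class MissionCoordinator:
    """Level-one coordinator: discrete transitions driven by level-zero outcomes."""
    def __init__(self, waypoints, home):
        self.phase = Phase.TRANSIT
        self.waypoints = list(waypoints)
        self.home = home
        self.vehicle = VehicleController()

    def step(self):
        if self.phase is Phase.TRANSIT:
            if self.vehicle.execute("goto", self.waypoints[0]):
                self.phase = Phase.SEARCH
        elif self.phase is Phase.SEARCH:
            wp = self.waypoints.pop(0)
            self.vehicle.execute("survey", wp)
            if not self.waypoints:
                self.phase = Phase.RETURN
        elif self.phase is Phase.RETURN:
            if self.vehicle.execute("goto", self.home):
                self.phase = Phase.DONE

    def run(self):
        while self.phase is not Phase.DONE:
            self.step()

MissionCoordinator(waypoints=[(10, 0), (10, 10)], home=(0, 0)).run()
```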
332

Synthesis of correct-by-design schedulers for hybrid systems

Soulat, Romain 18 February 2014 (has links) (PDF)
In this thesis, we are interested in designing schedulers for hybrid systems. We consider two specific subclasses of hybrid systems: real-time systems, where tasks compete for access to common resources, and sampled switched systems, where a choice has to be made about which dynamics the system follows in order to reach goals. Scheduling consists of defining the order in which the tasks will be run on the processors so that all tasks complete before a given deadline. In the first part of this thesis, we are interested in the scheduling of periodic tasks on multiprocessor architectures. We are especially interested in the robustness of schedulers, i.e., in proving which values of the system parameters can be modified, and up to what value, while preserving the scheduling order and meeting the deadlines. The Inverse Method can be used to prove the robustness of parametric timed systems. In this thesis, we introduce a state-space reduction technique that allows us to treat challenging case studies, such as one provided by Astrium EADS for the Ariane 6 launcher. We also present how an extension of the Inverse Method, the Behavioral Cartography, can solve the schedulability problem, i.e., finding the area of the parameter space in which there exists a scheduler that satisfies all the deadlines. We compare this approach to an analytic method to illustrate its interest. In the second part of this thesis, we are interested in the control of affine switched systems. These systems are governed by a finite family of affine differential equations. At each time step, a controller can choose which dynamics will govern the system for the next time step. Control in this sense can be seen as scheduling the order of the dynamics the system will use. The objective for the controller can be to make the system stay in a given area of the state space (stability) or to reach a given region of the state space (reachability). In this thesis, we propose a novel approach that computes a scheduler whose strategy is uniform on dense subsets of the state space. Moreover, our approach uses only forward computation, which is better suited than backward computation for contractive systems. We show that, under our designed controllers, systems evolve to a limit cyclic behavior. We apply our method to several case studies from the literature and to a real-life prototype of a multilevel voltage converter. Moreover, we show that our approach can be extended to systems with perturbations and non-linear dynamics.
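The following Python sketch illustrates the sampled switched-system setting in miniature: at each sampling step a controller simulates one step of every affine mode and keeps the one that brings the state closest to a reference point. The matrices, reference point, and greedy one-step rule are invented for illustration; the thesis instead synthesizes strategies that are uniform over dense subsets of the state space using forward computation.

```python
# Illustrative mode scheduling for a sampled switched affine system
# x_{k+1} = A_i x_k + b_i: pick, at each step, the mode whose successor
# state is closest to a reference point. All numbers are invented.
import numpy as np

modes = [
    (np.array([[0.9, 0.1], [0.0, 0.8]]), np.array([0.1, 0.0])),   # mode 0
    (np.array([[0.8, 0.0], [0.1, 0.9]]), np.array([0.0, 0.1])),   # mode 1
]
x_ref = np.array([0.5, 0.5])

def schedule(x0, steps=20):
    x, trace = x0, []
    for _ in range(steps):
        # forward computation: simulate one step under every mode, keep the best
        candidates = [(np.linalg.norm(A @ x + b - x_ref), i, A @ x + b)
                      for i, (A, b) in enumerate(modes)]
        _, mode, x = min(candidates)
        trace.append(mode)
    return x, trace

x_final, switching_sequence = schedule(np.array([2.0, -1.0]))
print("final state:", x_final)
print("mode sequence:", switching_sequence)
```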
333

Commissioning of modulator-based IMRT with XiO treatment planning system

Obata, Yasunori, Oguchi, Hiroshi 01 1900 (has links)
No description available.
334

Accelerating Mixed-Abstraction SystemC Models on Multi-Core CPUs and GPUs

Kaushik, Anirudh Mohan January 2014 (has links)
Functional verification is a critical part of the hardware design cycle, accounting for nearly two-thirds of the overall development time. With the increasing complexity of hardware designs and shrinking time-to-market constraints, the time and resources spent on functional verification have increased considerably. To mitigate this increasing cost, the research community has proposed techniques for improving the simulation of hardware designs, which is a key technique in the functional verification process. However, the proposed techniques for accelerating the simulation of hardware designs do not leverage the performance benefits offered by the multi-core and heterogeneous processors available today. With the growing ubiquity of powerful heterogeneous computing systems, which combine multi-processor/multi-core systems with heterogeneous processors such as GPUs, it is important to utilize these computing systems to address the functional verification bottleneck. In this thesis, I propose a technique for accelerating SystemC simulations across multi-core CPUs and GPUs. In particular, I focus on accelerating the simulation of SystemC models that are described at both the Register-Transfer Level (RTL) and Transaction Level (TL) abstractions. The main contributions of this thesis are (1) a methodology for accelerating the simulation of mixed-abstraction SystemC models defined at the RTL and TL abstractions on multi-core CPUs and GPUs, and (2) an open-source static framework for parsing, analyzing, and performing source-to-source translation of identified portions of a SystemC model for execution on multi-core CPUs and GPUs.
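As a conceptual illustration of why such simulations can be parallelized at all, the sketch below (in Python rather than SystemC/C++, and not the thesis framework) evaluates independent RTL-style processes concurrently within a delta cycle and applies signal updates only after all evaluations finish; the process and signal names are made up.

```python
# Conceptual sketch: RTL-style processes evaluated in parallel within a
# delta cycle, with signal updates deferred until every evaluation is done --
# the evaluate/update separation that makes parallel simulation safe.
from concurrent.futures import ThreadPoolExecutor

signals = {"a": 1, "b": 2, "sum": 0, "carry": 0}

def adder_sum(s):      # combinational process: reads current values only
    return ("sum", (s["a"] + s["b"]) & 1)

def adder_carry(s):
    return ("carry", (s["a"] + s["b"]) >> 1)

processes = [adder_sum, adder_carry]

def delta_cycle(signals):
    with ThreadPoolExecutor() as pool:                 # evaluate phase (parallel)
        updates = list(pool.map(lambda p: p(dict(signals)), processes))
    for name, value in updates:                        # update phase (sequential)
        signals[name] = value

delta_cycle(signals)
print(signals)   # {'a': 1, 'b': 2, 'sum': 1, 'carry': 1}
```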
335

Examining Methods and Practices of Source Data Verification in Canadian Critical Care Randomized Controlled Trials

Ward, Roxanne E. 21 March 2013 (has links)
Statement of the Problem: Source data verification (SDV) is the process of comparing data collected at the source to data recorded on a Case Report Form, either paper or electronic (1), to ensure that the data are complete, accurate, and verifiable. Good Clinical Practice (GCP) guidelines are vague and lack evidence as to the required degree of SDV and whether or not SDV affects study outcomes. Methods of Investigation: We performed systematic reviews to establish the published evidence base for methods of SDV and to examine the effect of SDV on study outcomes. We then conducted a national survey of Canadian critical care investigators and research coordinators regarding their attitudes and beliefs about SDV. This was followed by an audit of the completed and in-progress randomized controlled trials (RCTs) of the Canadian Critical Care Trials Group (CCCTG). Results: Systematic Review of Methods of SDV: The most commonly reported or recommended approach (10/14; 71%) was either to base the frequency of source data verification on level of risk or to conduct it early (i.e., after the first patient is enrolled). The amount of SDV recommended or reported varied from 5% to 100%. Systematic Review of the Impact of SDV on Study Outcomes: There was no difference in study outcomes for one trial, and the effect could not be assessed in the other. National Survey of Critical Care Investigators and Research Coordinators: The survey found that 95.8% (115/120) of respondents believed that SDV was an important part of quality assurance; 73.3% (88/120) felt that academic studies should do more SDV; and 62.5% (75/120) felt that there is insufficient funding available for SDV. Audit of Source Data Verification Practices in CCCTG RCTs: In the national audit of in-progress and completed CCCTG RCTs, 9/15 (60%) included a plan for SDV and 8/15 (53%) actually conducted SDV. Of the 9 completed published trials, 44% (4/9) conducted SDV. Conclusion: There is little evidence on the methods of SDV and its effect on study outcomes. Based on the results of the systematic reviews, survey, and audit, more research is needed to build the evidence base for the methods and effect of SDV on study outcomes.
336

Collaborative supply chain modelling and performance measurement

Angerhofer, Bernhard J. January 2002 (has links)
For many years, supply chain research focused on operational aspects and therefore mainly on the optimisation of parts of the production and distribution processes. Recently, there has been increasing interest in supply chain management and collaboration between supply chain partners. However, there is no model that takes into consideration all the aspects required to adequately represent and measure the performance of a collaborative supply chain. This thesis proposes a model of a collaborative supply chain consisting of six constituents, all of which are required to provide a complete picture of such a supply chain. In conjunction with that, a collaborative supply chain performance indicator is developed. It is based on three types of measures to allow the adequate measurement of collaborative supply chain performance. The proposed model of a collaborative supply chain and the collaborative supply chain performance indicator are implemented as a computer simulation, in the form of a decision support environment whose purpose is to show how changes in any of the six constituents affect collaborative supply chain performance. The decision support environment is configured and populated with information and data obtained in a case study. Verification and validation testing in three different scenarios demonstrate that the decision support environment adequately fulfils its purpose.
337

Combining vision verification with a high level robot programming language

Yin, Baolin January 1984 (has links)
This thesis describes work on using vision verification within an object-level language for describing robot assembly (RAPT). The motivation for this thesis is provided by two problems. The first is how to enhance a high-level robot programming language so that it can encompass vision commands to locate the workpieces of an assembly. The second is how to find a way of making full use of sensory information to update the robot system's knowledge about the environment. The work described in this thesis consists of three parts: (1) adding vision commands to the RAPT input language so that the user can specify vision verification tasks; (2) implementing a symbolic geometrical reasoning system so that vision data can be reasoned about symbolically at compile time in order to speed up run-time operations; (3) providing a framework which enables the RAPT system to make full use of the sensory information. The vision commands allow partial information about positions to be combined with sensory information in a general way, and the symbolic reasoning system allows much of the reasoning about vision information to be done before the actual information is obtained. The framework combines a verification vision facility with an object-level language in an intelligent way so that all ramifications of the effects of sensory data are taken account of. The heart of the framework is the modifying factor array. The position of each object is expressed as the product of two parts: the planned position and the difference between this and the actual one. This difference, referred to as the modifying factor of an object, is stored in the modifying factor array. The planned position is described by the user in the usual way in a RAPT program and its value is inferred by the RAPT reasoning system. Modifying factors of objects whose positions are directly verified are defined at compile time as symbolic expressions containing variables whose values will become known at run time. The modifying factors of other objects (not directly verified) may depend upon the positions of objects that are verified. At compile time the framework reasons about the influence of the sensory information on the objects that are not verified directly by the vision system, and establishes connections among the modifying factors of objects in each situation. This framework makes the representation of the influence of vision information on the robot's knowledge of the environment compact and simple. All the programming has been done; it has been tested with simulated data and works successfully.
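A minimal sketch of the modifying-factor idea follows, assuming positions are represented as 4x4 homogeneous transforms (the RAPT internals are not reproduced here, and the object names and offsets are invented): the actual position is the planned position composed with a correction, and an unverified object rigidly related to a verified one inherits that correction.

```python
# Sketch of the "modifying factor" idea with 4x4 homogeneous transforms:
# actual = planned @ modifying_factor. When one object's position is verified
# by vision, objects rigidly related to it inherit the same correction.
import numpy as np

def translation(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

planned = {"block": translation(100, 50, 0),
           "peg":   translation(100, 50, 20)}   # peg sits on top of the block

# Vision verifies the block and finds it 3 mm off in x; store the difference.
modifying_factor = {"block": translation(3, 0, 0)}

# The peg was not observed directly, but it rests on the block, so its
# modifying factor is tied to the block's.
modifying_factor["peg"] = modifying_factor["block"]

actual = {name: planned[name] @ modifying_factor[name] for name in planned}
print(actual["peg"][:3, 3])   # [103.  50.  20.]
```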
338

Task effects on sentence processing using eye-tracking

玉岡, 賀津雄, 早川, 杏子, TAMAOKA, Katsuo, HAYAKAWA, Kyoko, MANSBRIDGE, Michael 05 December 2014 (has links)
No description available.
339

Semantic Integration of Time Ontologies

Ong, Darren 15 December 2011 (has links)
Here we consider the verification and semantic integration of a set of first-order time ontologies by Allen-Hayes, Ladkin, and van Benthem that axiomatize time as points, intervals, or a combination of both, within an ontology repository environment. Semantic integration of the set of time ontologies is explored via the notion of theory interpretations, using an automated reasoner as part of the methodology. We use the notion of representation theorems for verification, characterizing the models of each ontology up to isomorphism and proving that they are equivalent to the intended structures for the ontology. We provide a complete account of the meta-theoretic relationships among the ontologies, along with corrections to their axioms, translation definitions, proofs of representation theorems, and a discussion of various issues such as class-quantified interpretations, the impact of namespacing support for Common Logic, and ontology repository support for semantic integration as it relates to the time ontologies examined.
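As a toy illustration of a theory interpretation between time ontologies (not the formal translation definitions used in the thesis), the sketch below interprets intervals as ordered pairs of endpoints and defines a few Allen-style interval relations by endpoint order; the relation names and example intervals are illustrative.

```python
# Sketch of interpreting an interval ontology in a point ontology: an interval
# is an ordered pair of endpoints, and interval relations reduce to endpoint order.
from collections import namedtuple

Interval = namedtuple("Interval", ["start", "end"])   # interpretation: begin/end points

def meets(i, j):     # i ends exactly where j begins
    return i.end == j.start

def before(i, j):    # i ends strictly before j begins
    return i.end < j.start

def during(i, j):    # i is strictly contained in j
    return j.start < i.start and i.end < j.end

breakfast = Interval(7, 8)
morning = Interval(6, 12)
meeting = Interval(8, 9)

print(meets(breakfast, meeting))     # True
print(during(breakfast, morning))    # True
print(before(breakfast, meeting))    # False (they meet rather than lie strictly apart)
```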
