191

Detection of Feature Interactions in Automotive Active Safety Features

Juarez Dominguez, Alma L. January 2012
With the introduction of software into cars, many functions are now realized with reduced cost, weight and energy. Following the traditional approach of the automotive industry, these software systems are developed independently by distributed suppliers, while the car maker handles the integration. However, the integration can lead to unexpected and unintended interactions among software systems, a phenomenon known as feature interaction. This dissertation addresses the problem of automatically detecting feature interactions in automotive active safety features. Active safety features control the vehicle's motion control systems independently of the driver's request, with the intention of increasing passengers' safety (e.g., by applying hard braking when an imminent collision is identified), but their unintended interactions could instead endanger the passengers (e.g., a simultaneous throttle increase and sharp steering manoeuvre causing the vehicle to roll over). My method decomposes the problem into three parts: (I) creation of a definition of feature interactions based on the set of actuators and domain expert knowledge; (II) translation of automotive active safety features designed in a subset of Matlab's Stateflow into the input language of the model checker SMV; (III) analysis using model checking at design time to detect a representation of all feature interactions, based on partitioning the counterexamples into equivalence classes. The key novel characteristic of my work is exploiting domain-specific information about the feature interaction problem and the structure of the model to produce a method that finds a representation of all distinct feature interactions among automotive active safety features at design time. My method is validated by a case study with a set of non-proprietary automotive feature design models I created. The method generates a set of counterexamples that represents the whole set of feature interactions in the case study. By showing only a set of representative feature interaction cases, the information is concise and useful for feature designers. Moreover, because these results are generated from feature models designed in Matlab's Stateflow and translated into SMV models, feature designers can trace the counterexamples generated by SMV and understand the results in terms of the Stateflow model. I believe that my results and techniques will be relevant to the solution of the feature interaction problem in other cyber-physical systems, and will have a direct impact on assessing the safety of automotive systems.
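The core idea — flagging states in which two features simultaneously issue conflicting commands to the same actuator — can be illustrated with a small sketch. The Python fragment below is a hypothetical illustration, not the dissertation's Stateflow-to-SMV toolchain: it enumerates the joint sensor inputs of two invented toy features and reports actuator conflicts, the kind of property a model checker such as SMV would verify exhaustively over the full state space.

```python
from itertools import product

# Hypothetical toy features, not the dissertation's models: each maps
# (state, sensor_reading) -> (next_state, {actuator: command}).
def collision_avoidance(state, obstacle_near):
    if obstacle_near:
        return "braking", {"brake": "full", "throttle": "closed"}
    return "idle", {}

def cruise_control(state, below_set_speed):
    if below_set_speed:
        return "accelerating", {"throttle": "open"}
    return "holding", {}

def detect_interactions():
    """Enumerate joint sensor inputs and flag cases where the two
    features issue different commands to the same actuator."""
    conflicts = []
    for obstacle_near, below_set_speed in product([False, True], repeat=2):
        _, cmds_a = collision_avoidance("idle", obstacle_near)
        _, cmds_b = cruise_control("holding", below_set_speed)
        for actuator in cmds_a.keys() & cmds_b.keys():
            if cmds_a[actuator] != cmds_b[actuator]:
                conflicts.append((obstacle_near, below_set_speed, actuator,
                                  cmds_a[actuator], cmds_b[actuator]))
    return conflicts

for c in detect_interactions():
    print("conflict:", c)  # throttle "closed" vs "open" when both trigger
```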
192

Development Of A Library For Automated Verification Of Uml Models

Celik, Makbule Filiz 01 April 2006
Software designs are mostly modeled as Unified Modeling Language (UML) diagrams when object-oriented software development is concerned. Popular industry practices such as Model-Driven Development, automated generation of test cases in the early phases of software development, and automated generation of code from design models all rely on UML designs. These practices share a common requirement: the models must be accurate with respect to the UML meta-model, or problems will surface in later phases of the development process. Detecting design inconsistencies therefore requires support for fully checking the design models. This thesis presents an approach for the automated verification of UML design models and describes the implementation of a library called UMLChecker.
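As a rough illustration of what checking a model against the meta-model involves — the abstract does not publish UMLChecker's API, so every name below is hypothetical — a checker walks the model and verifies well-formedness rules such as "every association end must reference a declared class":

```python
# Hypothetical sketch of a meta-model well-formedness check; the real
# UMLChecker library's API is not shown in the abstract.
model = {
    "classes": {"Order", "Customer"},
    "associations": [
        {"name": "places", "ends": ("Customer", "Order")},
        {"name": "ships_to", "ends": ("Order", "Address")},  # Address undeclared
    ],
}

def check_association_ends(model):
    """Return one violation per association end that references a class
    missing from the model -- a typical UML well-formedness rule."""
    violations = []
    for assoc in model["associations"]:
        for end in assoc["ends"]:
            if end not in model["classes"]:
                violations.append(
                    f"association '{assoc['name']}': end type '{end}' is not declared")
    return violations

for v in check_association_ends(model):
    print(v)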
193

Software verification for programmable logic controllers

Huuck, Ralf. 2003
Doctoral dissertation, University of Kiel.
194

Abstraction-based verification of parameterized networks

Baukus, Kai. 2003
Doctoral dissertation, University of Kiel.
195

Exploring the limits of parameterized system verification

Stahl, Karsten. 2003
Doctoral dissertation, University of Kiel.
196

Safety-aware apprenticeship learning

Zhou, Weichao 03 July 2018
It is well acknowledged in the AI community that finding a good reward function for reinforcement learning is extremely challenging. Apprenticeship learning (AL) is a class of "learning from demonstration" techniques in which the reward function of a Markov Decision Process (MDP) is unknown to the learning agent, and the agent uses inverse reinforcement learning (IRL) methods to recover the expert policy from a set of expert demonstrations. However, because the agent learns exclusively from observations, there is no verification of, nor any guarantee on, the learnt policy's satisfaction of a given constraint on the probability of the agent running into unwanted situations. In this dissertation, we study the problem of how to guide AL to learn a policy that is inherently safe while still meeting its learning objective. By combining formal methods with imitation learning, a Counterexample-Guided Apprenticeship Learning algorithm is proposed. We consider a setting where the unknown reward function is assumed to be a linear combination of a set of state features, and the safety property is specified in Probabilistic Computation Tree Logic (PCTL). By embedding probabilistic model checking inside AL, we propose a novel counterexample-guided approach that can ensure both the safety and the performance of the learnt policy. The algorithm guarantees that, given a formal safety specification defined in probabilistic temporal logic, the learnt policy satisfies that specification. We demonstrate the effectiveness of our approach on several challenging AL scenarios where safety is essential.
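The overall counterexample-guided loop can be sketched as follows. This is a schematic rendering of the approach described in the abstract, with all function names hypothetical and the IRL and PCTL model-checking steps left as stubs (a PRISM-style probabilistic model checker would stand behind `model_check`):

```python
# Schematic counterexample-guided apprenticeship learning loop.
# All functions are hypothetical stubs standing in for the real
# IRL learner and PCTL probabilistic model checker.

def learn_policy(mdp, demonstrations, extra_constraints):
    """IRL step: fit reward weights so the induced policy matches the
    expert's feature expectations, subject to accumulated constraints."""
    raise NotImplementedError

def model_check(mdp, policy, pctl_spec):
    """Return (True, None) if the induced Markov chain satisfies the PCTL
    safety spec, else (False, counterexample) with a violating path set."""
    raise NotImplementedError

def cegal(mdp, demonstrations, pctl_spec, max_iters=50):
    constraints = []
    for _ in range(max_iters):
        policy = learn_policy(mdp, demonstrations, constraints)
        safe, counterexample = model_check(mdp, policy, pctl_spec)
        if safe:
            return policy  # meets both the safety spec and the AL objective
        # Steer the next learning round away from the violating behaviour.
        constraints.append(counterexample)
    raise RuntimeError("no safe policy found within the iteration budget")
```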
197

An Approach for Dealing with Inconsistencies in Data Mashups

Eveline Russo Sacramento Ferreira 24 May 2016
With the amount of data available on the Web, consumers can mash up and quickly integrate data from different sources belonging to the same application domain. However, data mashups constructed from independent and heterogeneous data sources may contain inconsistencies and, therefore, puzzle the user observing the data. This thesis addresses the problem of creating a consistent data mashup from mutually inconsistent data sources. Specifically, it deals with the problem of testing when the data to be combined is inconsistent with respect to a predefined set of constraints. The main contributions of this thesis are: (1) the formalization of the notion of consistent data mashups, treating the data returned from the data sources as a default theory and considering a consistent data mashup as an extension of this theory; (2) a model checker for a family of Description Logics, which analyzes and separates consistent from inconsistent data and also tests the consistency and completeness of the obtained data mashups; (3) a heuristic procedure for computing such consistent data mashups.
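The default-theory view — keep as much source data as possible while respecting the constraints — can be illustrated with a small sketch. This is a naive illustration of computing extensions as maximal consistent subsets, not the thesis's Description Logic machinery; the facts and the constraint are invented:

```python
from itertools import combinations

# Invented facts from two sources about the same hotel; the constraint
# "one star rating per hotel" is likewise invented for illustration.
facts = [
    ("HotelA", "stars", 4),      # from source 1
    ("HotelA", "stars", 5),      # from source 2, mutually inconsistent
    ("HotelA", "city", "Rio"),
]

def consistent(subset):
    """Check the invented constraint: at most one 'stars' value per subject."""
    seen = {}
    for subj, pred, obj in subset:
        if pred == "stars" and seen.setdefault(subj, obj) != obj:
            return False
    return True

def extensions(facts):
    """Naively enumerate maximal consistent subsets (the 'extensions' of
    the default theory); exponential, for illustration only."""
    maximal = []
    for k in range(len(facts), 0, -1):
        for subset in combinations(facts, k):
            if consistent(subset) and not any(set(subset) <= set(m) for m in maximal):
                maximal.append(subset)
    return maximal

for ext in extensions(facts):
    print(ext)   # two extensions: one per conflicting star rating
```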
198

Autonomic management of communicating objects in machine-to-machine systems operating under temporal constraints

Gharbi, Ghada 08 November 2016
The decrease in communication costs, the improvement of network performance and the evolution of dedicated service platforms managing multiple objects have led to the appearance of new practices and applications gathered under the designation of Machine-to-Machine communications (M2M). M2M systems have to integrate, in a coordinated way, various devices and software modules such as sensors, actuators, displays and middleware. The expansion of M2M calls for extensive data exploitation and for effective routing and reasoning mechanisms, enabling appropriate decision making and coordinated control in both a predictive and a reactive way. This work aims to meet the self-management challenges characterized by recent studies of autonomic computing. It deals with the modeling and validation of M2M systems operating in a dynamic context and under a set of functional and non-functional properties, specifically temporal ones. To do so, we rely on graph grammars and model-checking techniques, which allow a set of communicating objects to be configured and reconfigured under a set of constraints. First, we addressed the design-time validation of M2M communications operating under temporal constraints: a verification and validation approach based on timed automata was proposed, and a smart grid scenario was developed to validate the model. This step is necessary but not sufficient, since M2M systems are dynamic and verification at run time is also important. To validate the execution of an M2M system, we focused on its functional and temporal aspects. We referred to the European standard smartM2M to define an architectural style for M2M systems; this standard was selected for (1) its independence from the application domain and the objects' communication technology, (2) its broad scope and (3) its deployment in industrial systems. To validate the M2M system's functionalities, a multi-model approach was proposed: a first model, called functional, representing a real-time view of the M2M system, and a second model, called formal, based on a graph grammar incorporating the concepts of the functional layer. To carry out dynamic reconfiguration actions, graph transformation rules were defined, and bi-directional communication mechanisms were set up to maintain coherence between the real system and its models. A smart metering use case was developed to validate this approach. To also validate the temporal properties of an M2M system during its execution, the approach was then extended with new concepts. We defined a three-layer model of the features and temporal properties of an M2M system: an application layer, which incorporates the concepts defined in the formal layer of the previous approach with extensions to express temporal properties between M2M applications; a service layer, describing the components necessary to meet the specification of the upper layer; and an infrastructure layer, describing their deployment. An autonomic manager interacts with these layers, which form part of its knowledge base, to supervise and control the temporal behaviour of the system. The autonomic manager's architecture and its dynamic reconfiguration mechanisms are detailed, and an eHealth scenario was designed to illustrate the approach.
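The supervision loop of such an autonomic manager follows the classic monitor-analyze-plan-execute (MAPE-K) pattern from autonomic computing. The sketch below is a hypothetical, heavily simplified rendering: the deadlines are invented, and the monitoring and reconfiguration steps (in the real work, graph transformation rules over the layered model) are left as stubs:

```python
import time

# Hypothetical end-to-end deadlines (seconds) between M2M applications;
# the real system expresses these as timed properties over a layered graph.
deadlines = {("sensor_app", "alert_app"): 0.5}

def measure_latency(src, dst):
    """Monitor step (stubbed): return the observed end-to-end latency."""
    raise NotImplementedError

def reconfigure(src, dst):
    """Execute step (stubbed): apply a graph transformation rule, e.g.
    migrate the service hosting `dst` closer to `src` in the infrastructure."""
    raise NotImplementedError

def mape_loop(poll_interval=1.0):
    """Simplified MAPE-K loop: Monitor latencies, Analyze them against the
    deadlines, then Plan/Execute a reconfiguration on violation."""
    while True:
        for (src, dst), deadline in deadlines.items():
            if measure_latency(src, dst) > deadline:   # Analyze
                reconfigure(src, dst)                  # Plan + Execute
        time.sleep(poll_interval)
```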
199

A Comparison of DIMTEST and Generalized Dimensionality Discrepancy Approaches to Assessing Dimensionality in Item Response Theory

January 2013
abstract: Dimensionality assessment is an important component of evaluating item response data. Existing approaches to evaluating common assumptions of unidimensionality, such as DIMTEST (Nandakumar & Stout, 1993; Stout, 1987; Stout, Froelich, & Gao, 2001), have been shown to work well under large-scale assessment conditions (e.g., large sample sizes and item pools; see e.g., Froelich & Habing, 2007). It remains to be seen how such procedures perform in the context of small-scale assessments characterized by relatively small sample sizes and/or short tests. The fact that some procedures come with minimum allowable values for characteristics of the data, such as the number of items, may even render them unusable for some small-scale assessments. Other measures designed to assess dimensionality do not come with such limitations and, as such, may perform better under conditions that do not lend themselves to evaluation via statistics that rely on asymptotic theory. The current work aimed to evaluate the performance of one such metric, the standardized generalized dimensionality discrepancy measure (SGDDM; Levy & Svetina, 2011; Levy, Xu, Yel, & Svetina, 2012), under both large- and small-scale testing conditions. A Monte Carlo study was conducted to compare the performance of DIMTEST and the SGDDM statistic in terms of evaluating assumptions of unidimensionality in item response data under a variety of conditions, with an emphasis on the examination of these procedures in small-scale assessments. Similar to previous research, increases in either test length or sample size resulted in increased power. The DIMTEST procedure appeared to be a conservative test of the null hypothesis of unidimensionality. The SGDDM statistic exhibited rejection rates near the nominal rate of .05 under unidimensional conditions, though the reliability of these results may have been less than optimal due to high sampling variability resulting from a relatively limited number of replications. Power values were at or near 1.0 for many of the multidimensional conditions. It was only when the sample size was reduced to N = 100 that the two approaches diverged in performance. Results suggested that both procedures may be appropriate for sample sizes as low as N = 250 and tests as short as J = 12 (SGDDM) or J = 19 (DIMTEST). When used as a diagnostic tool, SGDDM may be appropriate with as few as N = 100 cases combined with J = 12 items. The study was somewhat limited in that it did not include any complex factorial designs, nor were the strength of item discrimination parameters or correlation between factors manipulated. It is recommended that further research be conducted with the inclusion of these factors, as well as an increase in the number of replications when using the SGDDM procedure. / Dissertation/Thesis / M.A. Educational Psychology 2013
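The study's Monte Carlo logic — generate data under a known dimensionality, apply the statistic, and tally rejections — can be sketched generically. The Python sketch below is hypothetical throughout: the data generator is a toy compensatory multidimensional IRT model, and the statistic is left as a stub because the SGDDM computation (Levy & Svetina, 2011) is not reproduced in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_item_responses(n_persons, n_items, n_dims):
    """Generate toy dichotomous responses from a compensatory MIRT model;
    a stand-in for the study's data-generation step, not its actual design."""
    theta = rng.standard_normal((n_persons, n_dims))
    loadings = rng.uniform(0.5, 1.5, (n_dims, n_items)) / np.sqrt(n_dims)
    difficulty = rng.uniform(-1.5, 1.5, n_items)
    prob = 1 / (1 + np.exp(-(theta @ loadings - difficulty)))
    return (rng.random(prob.shape) < prob).astype(int)

def placeholder_statistic(data):
    """Hypothetical stub returning a p-value for the null of
    unidimensionality; the real SGDDM is not reproduced here."""
    raise NotImplementedError

def rejection_rate(n_reps, n_persons, n_items, n_dims, alpha=0.05):
    """Estimate the rejection rate: Type I error when n_dims == 1 (null
    condition), power when n_dims > 1."""
    rejections = 0
    for _ in range(n_reps):
        data = simulate_item_responses(n_persons, n_items, n_dims)
        if placeholder_statistic(data) < alpha:
            rejections += 1
    return rejections / n_reps
```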
200

Design and Implementation of a Tool for Modeling, Simulation and Verification of Component-based Embedded Systems

Wang, Xiaobo January 2004
Nowadays, embedded systems are becoming more and more complex. For this reason, designers increasingly adopt component-based methods for their designs, and there is consequently growing interest in the modeling and verification of component-based embedded systems. In this thesis, a tool that integrates the modeling, simulation and verification of component-based embedded systems is designed and implemented. The tool uses PRES+ (Petri net based Representation for Embedded Systems) to model component-based embedded systems; both simulation and verification are based on the PRES+ models. The tool consists of three integrated sub-tools, each with a graphical interface: the PRES+ Modeling tool, the PRES+ Simulation tool and the PRES+ Verification tool. The PRES+ Modeling tool is a graphical editor with which system designers can easily model component-based embedded systems. The PRES+ Simulation tool, used to validate systems, visualizes the execution of a model in an intuitive manner. The PRES+ Verification tool provides convenient access to a model checker, in which models can be formally verified with respect to temporal logic formulas.
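PRES+ extends ordinary Petri nets with token values and transition delays; its full semantics are beyond an abstract, but the basic firing step of an untimed Petri net — the part a simulator animates — can be sketched as follows (the net below is invented):

```python
# Minimal untimed Petri net firing step; PRES+ additionally attaches values
# to tokens and delays/functions to transitions, which this sketch omits.
marking = {"p_input": 1, "p_cpu_free": 1, "p_done": 0}    # invented net

transitions = {
    "t_process": {"inputs": ["p_input", "p_cpu_free"], "outputs": ["p_done"]},
}

def enabled(t, marking):
    """A transition is enabled when every input place holds a token."""
    return all(marking[p] > 0 for p in transitions[t]["inputs"])

def fire(t, marking):
    """Consume one token from each input place, produce one in each output."""
    assert enabled(t, marking), f"{t} is not enabled"
    for p in transitions[t]["inputs"]:
        marking[p] -= 1
    for p in transitions[t]["outputs"]:
        marking[p] += 1

fire("t_process", marking)
print(marking)   # {'p_input': 0, 'p_cpu_free': 0, 'p_done': 1}
```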
