361

Uncovering bugs in P4 programs with assertion based verification / Revelando bugs em programação P4 com verificação baseada em asserções

Freire, Lucas Menezes January 2018 (has links)
Recent trends in software-defined networking have extended network programmability to the data plane through programming languages such as P4. Unfortunately, the chance of introducing bugs in the network also increases significantly in this new context. To prevent bugs from violating network properties, the techniques of enforcement or verification can be applied. While enforcement seeks to actively monitor the data plane to block property violations, verification aims to find bugs by assuring that the program meets its requirements. Existing data plane verification approaches that are able to model P4 programs impose severe restrictions on the set of properties that can be verified. In this work, we propose ASSERT-P4, a data plane program verification approach based on assertions and symbolic execution. Network programmers annotate P4 programs with assertions expressing general correctness properties. The annotated programs are transformed into C models and all their possible paths are symbolically executed. Since symbolic execution is known to have scalability challenges, we also propose a set of techniques that can be applied in this domain to make verification feasible. Namely, we investigate the effect of the following techniques on verification performance: parallelization, compiler optimizations, packet and control-flow constraints, bug reporting strategy, and program slicing. We implemented a prototype to study the efficacy and efficiency of the proposed approach. We show that it can uncover a broad range of bugs and software flaws, and can do so in less than a minute for various P4 applications proposed in the literature. We also show how a selection of the optimization techniques applied to more complex programs can reduce verification time by approximately 85 percent.
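To make the assertion-annotation idea concrete, here is a minimal sketch of the kind of C model such an approach could work with. The header fields, table logic, and assertion are hypothetical, not ASSERT-P4's actual generated code; a symbolic-execution engine would treat the packet fields as unconstrained inputs and explore every path.

```c
/* Hypothetical sketch of a C model for a tiny P4-like pipeline,
 * annotated with a correctness assertion.  A symbolic-execution
 * engine would treat the packet fields below as symbolic inputs
 * and explore every path, reporting any input that violates the
 * assertion.  Names and fields are illustrative only. */
#include <assert.h>
#include <stdint.h>

struct headers {
    uint8_t  ipv4_valid;   /* 1 if the IPv4 header was parsed */
    uint8_t  ttl;
    uint16_t egress_port;  /* 0 means "not set yet" */
};

/* Simplified ingress control: forward IPv4 packets, drop the rest. */
static void ingress(struct headers *h) {
    if (h->ipv4_valid && h->ttl > 1) {
        h->ttl -= 1;
        h->egress_port = 2;          /* table hit: forward */
    } else if (h->ipv4_valid) {
        h->egress_port = 511;        /* drop port */
    }
    /* Assertion expressing the intended property: every parsed IPv4
     * packet must leave the control with an egress port assigned. */
    if (h->ipv4_valid)
        assert(h->egress_port != 0);
}

int main(void) {
    /* Under symbolic execution these fields would be unconstrained;
     * here we just exercise one concrete path. */
    struct headers h = { .ipv4_valid = 1, .ttl = 64, .egress_port = 0 };
    ingress(&h);
    return 0;
}
```

Under symbolic execution, any combination of field values that reaches the assertion with `egress_port` still zero would be reported as a violation, which is exactly the kind of bug the approach is meant to uncover.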
362

Putting on a Show or Showing My True Self? When and Why Consumers Signal Accurate versus Enhanced Self-View Information

January 2016 (has links)
abstract: This research investigates the conditions under which people use consumption choices to signal accurate versus enhanced information about themselves to others. Across five studies, I demonstrate that activating a self-verification, as opposed to self-enhancement, motive leads consumers to choose products that signal accurate information about a self-view, even when this view is negative. I replicate this finding across several self-view domains, including physical attractiveness, power, and global self-esteem. However, I find that this effect is attenuated when consumers have a high fear of negative social evaluation. My findings suggest that this type of consumption, in which choice is driven by the desire to be seen accurately (vs. positively), can explain a great deal of real-world behavior, contradicting the notion that consumers choose products primarily for self-enhancement. / Dissertation/Thesis / Doctoral Dissertation Business Administration 2016
363

Formal Specification and Verification of Interactive Systems with Plasticity : Applications to Nuclear-Plant Supervision / Spécification formelle et vérification de systèmes interactifs avec plasticité : applications à la supervision nucléaire

Oliveira, Raquel Araùjo de 03 December 2015 (has links)
The advent of ubiquitous computing and the increasing variety of platforms and devices change user expectations in terms of user interfaces. Systems should be able to adapt themselves to their context of use, i.e., the platform (e.g. a PC or a tablet), the users who interact with the system (e.g. administrators or regular users), and the environment in which the system executes (e.g. a dark room or outdoors). The capacity of a UI to withstand variations in its context of use while preserving usability is called plasticity. Plasticity provides users with different versions of a UI. Although it enhances UI capabilities, plasticity adds complexity to the development of user interfaces: the consistency between multiple versions of a given UI should be ensured. Given the large number of possible versions of a UI, it is time-consuming and error-prone to check these requirements by hand. Some automation must be provided to verify plasticity. This complexity is further increased when it comes to UIs of safety-critical systems. Safety-critical systems are systems in which a failure has severe consequences. The complexity of such systems is reflected in the UIs, which are now expected not only to provide correct, intuitive, non-ambiguous and adaptable means for users to accomplish a goal, but also to cope with safety requirements aiming to make sure that systems are reasonably safe before they enter the market. Several techniques to ensure the quality of systems in general exist, and they can also be applied to safety-critical systems. Formal verification provides a rigorous way to perform verification, which is suitable for safety-critical systems. Our contribution is an approach to verify safety-critical interactive systems provided with plastic UIs using formal methods. With powerful tool support, our approach permits:
- The verification of sets of properties over a model of the system. Using model checking, our approach permits the verification of properties over the system formal specification. Usability properties verify whether the system follows ergonomic properties to ensure good usability. Validity properties verify whether the system follows the requirements that specify its expected behavior.
- The comparison of different versions of UIs. Using equivalence checking, our approach verifies to what extent UIs present the same interaction capabilities and appearance. We can show whether two UI models are equivalent or not. When they are not equivalent, the UI divergences are listed, thus providing the possibility of leaving them out of the analysis. Furthermore, the approach can show that one UI contains at least all the interaction capabilities of another.
We also present in this thesis three industrial case studies in the nuclear power plant domain to which the approach was applied, providing additional examples of successful use of formal methods in industrial systems.
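As a rough illustration of the containment and divergence checks described above, the sketch below reduces two hypothetical UI versions to flat lists of interaction capabilities and lists their differences. The real approach works on labelled transition systems and also compares appearance, so this captures only the set-containment intuition, and every name in it is invented.

```c
/* Illustrative sketch only: the thesis compares UI models as labelled
 * transition systems with equivalence checking; here each UI version is
 * reduced to a flat list of interaction capabilities so containment and
 * divergences can be shown with plain set logic. */
#include <stdio.h>
#include <string.h>

static int contains(const char *const caps[], int n, const char *cap) {
    for (int i = 0; i < n; i++)
        if (strcmp(caps[i], cap) == 0)
            return 1;
    return 0;
}

/* Prints capabilities of `a` missing from `b`; returns how many are missing. */
static int divergences(const char *const a[], int na,
                       const char *const b[], int nb, const char *label) {
    int missing = 0;
    for (int i = 0; i < na; i++)
        if (!contains(b, nb, a[i])) {
            printf("only in %s: %s\n", label, a[i]);
            missing++;
        }
    return missing;
}

int main(void) {
    const char *const pc_ui[]     = { "open_valve", "close_valve", "show_trend", "ack_alarm" };
    const char *const tablet_ui[] = { "open_valve", "close_valve", "ack_alarm" };

    int pc_only  = divergences(pc_ui, 4, tablet_ui, 3, "PC UI");
    int tab_only = divergences(tablet_ui, 3, pc_ui, 4, "tablet UI");

    if (pc_only == 0 && tab_only == 0)
        puts("UIs are equivalent (same interaction capabilities)");
    else if (tab_only == 0)
        puts("PC UI contains at least all capabilities of the tablet UI");
    return 0;
}
```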
364

VEasy : a tool suite towards the functional verification challenges / VEasy: um conjunto de ferramentas direcionado aos desafios da verificação funcional

Pagliarini, Samuel Nascimento January 2011 (has links)
This thesis describes a tool suite, VEasy, which was developed specifically to aid the process of Functional Verification. VEasy contains four main modules that perform linting, simulation, coverage collection/analysis and testcase generation, which are considered key challenges of the process. Each of these modules is discussed in detail throughout the chapters. All the modules are integrated and built on top of a Graphical User Interface. This framework enables a testcase automation methodology based on layers, where one is capable of creating complex test scenarios using drag-and-drop operations. Whenever possible the usage of the modules is exemplified using simple Verilog designs. The capabilities of this tool and its performance were compared with some commercial and academic functional verification tools. Finally, some conclusions are drawn, showing that the overall simulation time is considerably lower than that of commercial and academic simulators. The results also show that the methodology enables a great deal of testcase automation through the layering scheme.
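The layer-based testcase methodology can be pictured, very loosely, as a stack of small refinement steps applied to a stimulus. The sketch below is a generic illustration of that composition idea, not VEasy's internal representation, and every name in it is hypothetical.

```c
/* Sketch of layered testcase construction: each layer refines the
 * stimulus produced by the layer below it, so complex scenarios are
 * composed from simple reusable pieces. */
#include <stdio.h>

struct stimulus {
    unsigned addr;
    unsigned data;
    int      inject_error;  /* set by a higher layer to stress the design */
};

typedef void (*layer_fn)(struct stimulus *);

static void base_layer(struct stimulus *s)        { s->addr = 0x10; s->data = 0xAB; }
static void corner_case_layer(struct stimulus *s) { s->data = 0xFFFFFFFFu; }
static void error_layer(struct stimulus *s)       { s->inject_error = 1; }

int main(void) {
    /* A testcase is just an ordered stack of layers. */
    layer_fn testcase[] = { base_layer, corner_case_layer, error_layer };
    struct stimulus s = {0};

    for (unsigned i = 0; i < sizeof testcase / sizeof testcase[0]; i++)
        testcase[i](&s);

    printf("addr=0x%x data=0x%x inject_error=%d\n", s.addr, s.data, s.inject_error);
    return 0;
}
```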
365

Characterization of Cost Excess in Cloud Applications

January 2012 (has links)
abstract: The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e. minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst Case Execution Time (WCET) analysis is presented here to statically determine worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify path sections that contribute most to cost excess. A hybrid approach for determining cost excesses is also presented that is comprised mostly of dynamic measurements but that also incorporates calculations based on the static analysis approach. This approach uses operational profiles to increase the precision and usefulness of the calculations. / Dissertation/Thesis / Ph.D. Computer Science 2012
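The core of such a static analysis can be sketched as worst-case cost propagation over a control-flow graph. The block costs, graph shape, and budget below are invented for illustration and do not reproduce the thesis' actual algorithm.

```c
/* Sketch of worst-case monetary cost propagation over a small
 * control-flow DAG: each block has a per-execution cost (e.g. priced
 * API calls, storage writes), and the worst-case cost of a block is
 * its own cost plus the most expensive successor path.  Paths whose
 * worst case exceeds the budget are candidates for cost-excess review. */
#include <stdio.h>

#define NBLOCKS 5

/* cost[i]   : monetary cost of basic block i (illustrative units) */
/* succ[i][j]: successors of block i, terminated by -1             */
static const double cost[NBLOCKS] = { 2.0, 1.0, 10.0, 3.0, 0.5 };
static const int succ[NBLOCKS][3] = {
    { 1, 2, -1 },   /* block 0 branches to 1 or 2        */
    { 3, -1, -1 },  /* cheap path                        */
    { 3, -1, -1 },  /* expensive path (priced API call)  */
    { 4, -1, -1 },
    { -1, -1, -1 }, /* exit block                        */
};

static double worst_cost(int b) {
    double worst_succ = 0.0;
    for (int j = 0; j < 3 && succ[b][j] != -1; j++) {
        double c = worst_cost(succ[b][j]);
        if (c > worst_succ)
            worst_succ = c;
    }
    return cost[b] + worst_succ;
}

int main(void) {
    const double budget = 10.0;               /* cost threshold            */
    double wc = worst_cost(0);                /* worst case from the entry */
    printf("worst-case cost: %.2f (budget %.2f)\n", wc, budget);
    if (wc > budget)
        puts("some control-flow path can exceed the budget: inspect the "
             "blocks on the most expensive path");
    return 0;
}
```

A dynamic or hybrid variant would replace the fixed per-block costs with measured costs weighted by an operational profile, along the lines the abstract suggests.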
366

Decentralized firmware attestation for in-vehicle networks

Khodari, Mohammad January 2018 (has links)
Today's vehicles are controlled by several so-called electronic control units (ECUs). ECUs can be seen as small computers that work together in order to perform a common task. They control everything from critical tasks such as engine control to less critical functionality such as window control. The most prominent trend that can be observed today is the development of self-driving functionality. Due to the inherent complexity of self-driving functionality, ECUs are becoming more dependent on each other. A fundamental problem in today's vehicles is that there is no efficient way of achieving trust in the vehicle's internal network. How can ECUs be assured that the output of other ECUs can be trusted? If an ECU produces the wrong output when the vehicle is in autonomous mode, it can lead to the vehicle performing unsafe actions and risking the lives of the passengers and driver. In this thesis we evaluate several already established firmware attestation solutions for achieving trust in a decentralized network. Furthermore, three new firmware attestation solutions specially tailored for the automotive domain are proposed. We demonstrate that all of the existing solutions found have a fundamental flaw: they all have a single point of failure, meaning that if the right node is eliminated, the entire attestation process stops functioning. Thus, a new, more robust solution specially tailored for the automotive domain needed to be developed. Three different consistency verification mechanisms were designed: a parallel solution, a serial solution and a Merkle-tree solution. Two of the three proposed solutions, the parallel solution and the serial solution, were implemented and assessed. Two tests were conducted: a detection performance test and a timing performance test. By assessing the detection and timing performance of the serial and parallel solutions, it was concluded that the parallel solution showed a significant improvement in both stability and performance over the serial solution.
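Of the three mechanisms, the Merkle-tree variant is the easiest to sketch: each ECU hashes its firmware, the hashes are combined pairwise into a single root, and every ECU can recompute and cross-check that root with its peers instead of trusting one central verifier. The code below is an illustrative sketch only; FNV-1a stands in for a real cryptographic hash, and the ECU names are invented.

```c
/* Sketch of Merkle-tree aggregation of firmware measurements.  FNV-1a
 * keeps the sketch self-contained; a deployment would use a proper
 * cryptographic hash such as SHA-256 over the actual flash contents. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint64_t fnv1a(const void *data, size_t len, uint64_t seed) {
    const unsigned char *p = data;
    uint64_t h = seed ? seed : 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* Combine two child digests into a parent digest. */
static uint64_t combine(uint64_t left, uint64_t right) {
    uint64_t pair[2] = { left, right };
    return fnv1a(pair, sizeof pair, 0);
}

/* Reduce an array of leaf digests to a single Merkle root (n >= 1). */
static uint64_t merkle_root(uint64_t *leaves, size_t n) {
    while (n > 1) {
        size_t out = 0;
        for (size_t i = 0; i < n; i += 2) {
            uint64_t right = (i + 1 < n) ? leaves[i + 1] : leaves[i];
            leaves[out++] = combine(leaves[i], right);
        }
        n = out;
    }
    return leaves[0];
}

int main(void) {
    /* Firmware strings are stand-ins; real ECUs would hash flash images. */
    const char *firmware[4] = { "brake-ecu-v3", "engine-ecu-v7",
                                "gateway-v2", "adas-ecu-v5" };
    uint64_t leaves[4];
    for (int i = 0; i < 4; i++)
        leaves[i] = fnv1a(firmware[i], strlen(firmware[i]), 0);

    printf("attestation root: %016llx\n",
           (unsigned long long)merkle_root(leaves, 4));
    return 0;
}
```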
368

A Software Verification & Validation Management Framework for the Space Industry

Schulte, Jan January 2009 (has links)
Software for space applications has special requirements in terms of reliability and dependability. As the verification & validation activities (VAs) of these software systems account for more than 50% of the development effort, and the industry faces political and market pressure to deliver software faster and cheaper, new ways need to be established to reduce this verification & validation effort. In a research project together with RUAG Aerospace Sweden AB and the Swedish Space Corporation, Blekinge Tekniska Högskola is trying to find out how to optimize the VAs with respect to effectiveness and efficiency. The goal of this thesis is therefore to develop a coherent framework for the management and optimization of verification & validation activities (VAMOS); the framework is evaluated at RUAG Aerospace Sweden AB in Göteborg.
369

Systematic Review of Verification and Validation in Dynamic Programming Languages

Saeed, Farrakh, Saeed, Muhammad January 2008 (has links)
Verification and validation provide support for improving the quality of software: they ensure that the product is stable and developed according to the requirements of the end user. This thesis presents a systematic review of dynamic programming languages and of the verification & validation practices used for dynamic languages, covering the period 1985–2008. The study starts from the identification of dynamic aspects and the differences between static and dynamic languages, and also gives an overview of the verification and validation practices for dynamic languages. Moreover, to validate the verification and validation results, a survey consisting of (i) interviews and (ii) an online survey was conducted. The analysis of the systematic review found that dynamic languages are making progress in areas such as integration with common development frameworks, language enhancements, and dynamic aspects, but that they still lag behind static languages in performance. The study also identified factors that can raise the popularity of dynamic languages in industry. Based on the analysis of the systematic review, the interviews and the online survey, it is concluded that there is no difference between the methodologies available for verification and validation, and that dynamic languages provide support for maintaining software quality through their characteristics and dynamic features; they also support testing software developed in static languages. It is concluded that test-driven development should be adopted when working with dynamic languages and should be treated as a mandatory part of such work.
370

An Effective Verification Strategy for Testing Distributed Automotive Embedded Software Functions: A Case Study

Chunduri, Annapurna January 2016 (has links)
Context. The share and importance of software within automotive vehicles is growing steadily. Most functionalities in modern vehicles, especially safety related functions like advanced emergency braking, are controlled by software. A complex and common phenomenon in today’s automotive vehicles is the distribution of such software functions across several Electronic Control Units (ECUs) and consequently across several ECU system software modules. As a result, integration testing of these distributed software functions has been found to be a challenge. The automotive industry neither has infinite resources, nor has the time to carry out exhaustive testing of these functions. On the other hand, the traditional approach of implementing an ad-hoc selection of test scenarios based on the tester’s experience, can lead to test gaps and test redundancies. Hence, there is a pressing need within the automotive industry for a feasible and effective verification strategy for testing distributed software functions. Objectives. Firstly, to identify the current approach used to test the distributed automotive embedded software functions in literature and in a case company. Secondly, propose and validate a feasible and effective verification strategy for testing the distributed software functions that would help improve test coverage while reducing test redundancies and test gaps. Methods. To accomplish the objectives, a case study was conducted at Scania CV AB, Södertälje, Sweden. One of the data collection methods was through conducting interviews of different employees involved in the software testing activities. Based on the research objectives, an interview questionnaire with open-ended and close-ended questions has been used. Apart from interviews, data from relevant artifacts in databases and archived documents has been used to achieve data triangulation. Moreover, to further strengthen the validity of the results obtained, adequate literature support has been presented throughout. Towards the end, a verification strategy has been proposed and validated using existing historical data at Scania. Conclusions. The proposed verification strategy to test distributed automotive embedded software functions has given promising results by providing means to identify test gaps and test redundancies. It helps establish an effective and feasible approach to capture function test coverage information that helps enhance the effectiveness of integration testing of the distributed software functions.
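The gap/redundancy bookkeeping that such a strategy relies on can be sketched with a simple requirement-to-test mapping; the requirement and test names below are hypothetical and the coverage matrix is invented for illustration.

```c
/* Sketch of gap/redundancy detection: each integration test is mapped to
 * the requirements of a distributed function it exercises.  A gap is a
 * requirement no test touches; a redundant test adds no requirement
 * beyond what earlier tests already cover. */
#include <stdio.h>

#define NREQ 5
#define NTEST 4

static const char *req[NREQ] = {
    "AEB_detect_obstacle", "AEB_brake_request", "AEB_driver_warning",
    "AEB_cancel_on_override", "AEB_log_event"
};

/* covers[t][r] == 1 when test t exercises requirement r */
static const int covers[NTEST][NREQ] = {
    { 1, 1, 0, 0, 0 },   /* TC1: detection and brake request    */
    { 1, 1, 0, 0, 0 },   /* TC2: same scope as TC1 -> redundant */
    { 0, 0, 1, 0, 0 },   /* TC3: driver warning                 */
    { 0, 0, 0, 1, 0 },   /* TC4: driver override                */
};

int main(void) {
    int covered[NREQ] = {0};

    for (int t = 0; t < NTEST; t++) {
        int adds_new = 0;
        for (int r = 0; r < NREQ; r++)
            if (covers[t][r] && !covered[r]) {
                covered[r] = 1;
                adds_new = 1;
            }
        if (!adds_new)
            printf("redundant test: TC%d\n", t + 1);
    }
    for (int r = 0; r < NREQ; r++)
        if (!covered[r])
            printf("test gap: %s is not exercised by any test\n", req[r]);
    return 0;
}
```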
