391

Validace parametrů sítě založená na sledování síťového provozu / Validation of Network Parameters Based on Network Monitoring

Martínek, Radim January 2011
The Master's thesis presents a theoretical introduction to the problem and the implementation of a "network parameter validation" tool founded on the principle of network traffic monitoring. First, current practice in computer network configuration and its limitations are analyzed. This serves as the starting point for introducing a new approach to implementing and checking required network settings, one that uses techniques of verification, simulation, and validation. After the context is established, validation techniques are examined in detail. The thesis's main contribution lies in determining which parameters are appropriate for validation and in implementing the tool that carries out the validation process. The network traffic that characterizes the behavior of the network is collected with NetFlow technology, which generates network flows; these flows are then consumed by the designed tool to validate the required network parameters. Overall, this process verifies whether the main requirements on the computer network have been met.
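To make the flow-based idea concrete, here is a minimal Python sketch (not taken from the thesis) that checks one hypothetical requirement against simplified NetFlow-style records; all field names and the no-Telnet rule are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    """A simplified NetFlow-style record (fields are illustrative)."""
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str  # "tcp" or "udp"
    octets: int

def validate_no_telnet(flows):
    """Check a hypothetical requirement: no TCP traffic to port 23.

    Returns the flows that violate the requirement; an empty list
    means the observed traffic is consistent with the policy.
    """
    return [f for f in flows if f.protocol == "tcp" and f.dst_port == 23]

if __name__ == "__main__":
    observed = [
        Flow("10.0.0.5", "10.0.0.9", 443, "tcp", 12840),
        Flow("10.0.0.7", "10.0.0.9", 23, "tcp", 96),
    ]
    for f in validate_no_telnet(observed):
        print(f"violation: {f.src_ip} -> {f.dst_ip}:{f.dst_port}")
```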
392

Prostředí pro verifikaci digitálních filtrů / Software for digital filter verification

Tesařík, Jan January 2016
The diploma thesis deals with the design of a verification environment for analyzing systems of digital filters. The verification environment is written in SystemVerilog and is generated by a program that also generates the input data for the filter system. Reference data are obtained from the Matlab environment. Simulation of the designed digital-filter circuit is performed with ModelSim. The key metric is functional coverage, which indicates how large a part of the HDL description has been exercised.
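The reference-model comparison at the heart of such an environment can be illustrated outside the SystemVerilog/ModelSim flow; the Python sketch below stands in for it under stated assumptions (the `fir_dut` placeholder is hypothetical and would be replaced by simulated HDL output).

```python
import numpy as np

def fir_reference(x, coeffs):
    """Golden model: direct-form FIR filter (plays the role of the Matlab reference)."""
    return np.convolve(x, coeffs)[: len(x)]

def fir_dut(x, coeffs):
    """Placeholder for the device under test; in the real flow this would be
    the output captured from the ModelSim simulation of the HDL design."""
    return fir_reference(x, coeffs)

def check(x, coeffs, tol=1e-9):
    """Compare DUT output against the golden model, sample by sample."""
    ref, out = fir_reference(x, coeffs), fir_dut(x, coeffs)
    return np.flatnonzero(np.abs(ref - out) > tol)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stimulus = rng.normal(size=256)        # random input vector
    taps = np.array([0.25, 0.5, 0.25])     # simple low-pass taps
    bad = check(stimulus, taps)
    print("PASS" if bad.size == 0 else f"FAIL at samples {bad.tolist()}")
```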
393

Vérification par model-checking de programmes concurrents paramétrés sur des modèles mémoires faibles / Verification via Model Checking of Parameterized Concurrent Programs on Weak Memory Models

Declerck, David 24 September 2018
Modern multiprocessors and microprocessors implement weak or relaxed memory models, in which the apparent order of memory operations does not follow the sequential consistency (SC) proposed by Leslie Lamport. Any concurrent program running on such an architecture and designed with an SC model in mind may exhibit new behaviors during its execution, some of which may be incorrect. For instance, a mutual exclusion algorithm that is correct under an interleaving semantics may no longer guarantee mutual exclusion when implemented on a weaker architecture. Reasoning about the semantics of such programs is a difficult task. Moreover, most concurrent algorithms are designed for an arbitrary number of processes. We would therefore like to ensure the correctness of concurrent algorithms regardless of the number of processes involved. For this purpose, we rely on the Model Checking Modulo Theories (MCMT) framework, developed by Ghilardi and Ranise, which allows for the verification of safety properties of parameterized concurrent programs, that is, programs involving an arbitrary number of processes. We extend this technology with a theory for reasoning about weak memory models. The result of this work is an extension of the Cubicle model checker called Cubicle-W, which allows the verification of safety properties of parameterized transition systems running under a weak memory model similar to TSO.
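Cubicle-W has its own input language, which the abstract does not show; as background only, the Python sketch below illustrates why TSO admits outcomes that sequential consistency forbids, using the standard store-buffering litmus test.

```python
from itertools import permutations

# Classic "store buffering" (SB) litmus test:
#   P0: x = 1; r0 = y        P1: y = 1; r1 = x
# Under sequential consistency (SC) the outcome r0 == r1 == 0 is impossible;
# under TSO each store may wait in a per-processor store buffer past the load.

P0 = [("store", "x", 1), ("load", "y", "r0")]
P1 = [("store", "y", 1), ("load", "x", "r1")]

def run_sc(schedule):
    """Execute one SC interleaving: every store hits memory immediately."""
    mem, regs, pcs, progs = {"x": 0, "y": 0}, {}, [0, 0], [P0, P1]
    for p in schedule:
        op, var, arg = progs[p][pcs[p]]
        if op == "store":
            mem[var] = arg
        else:
            regs[arg] = mem[var]
        pcs[p] += 1
    return regs

def run_tso_delayed():
    """One TSO execution: both stores stay buffered until after both loads."""
    mem, regs = {"x": 0, "y": 0}, {}
    buffered = [("x", 1), ("y", 1)]   # stores enter the per-CPU buffers
    regs["r0"] = mem["y"]             # P0 loads y; P1's store is still buffered
    regs["r1"] = mem["x"]             # P1 loads x; P0's store is still buffered
    for var, val in buffered:         # buffers drain to memory afterwards
        mem[var] = val
    return regs

sc_outcomes = {tuple(sorted(run_sc(s).items()))
               for s in set(permutations([0, 0, 1, 1]))}
print("SC reaches r0=r1=0:", (("r0", 0), ("r1", 0)) in sc_outcomes)  # False
print("TSO run yields:", run_tso_delayed())                          # r0=0, r1=0
```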
394

System Transition and Integration Engineering

Cory, Jeffrey A. 24 October 2009
see document / Master of Science
395

Z textové specifikace k formální verifikaci / From textual specification to formal verification

Šimko, Viliam January 2013
Textual use-cases have traditionally been used at the design stage of the software development process to describe software functionality from the user's perspective. Because use-cases typically rely on natural language, they cannot be directly subjected to formal verification. Another important artefact is the domain model, a high-level overview of the most important concepts in the problem space. A domain model is usually not constructed en bloc; rather, it undergoes refinement starting from a first prototype elicited from text. This thesis covers two closely related topics: formal verification of use-cases and elicitation of a domain model from text. The former is addressed by a method (called FOAM) that features simple user-definable annotations inserted into a use-case to make it suitable for verification; a model-checking tool is employed to verify temporal invariants associated with the annotations while keeping the use-cases understandable to non-experts. The latter is addressed by a method (titled Prediction Framework) that features an in-depth linguistic analysis of the text and a sequence of statistical classifiers (log-linear maximum entropy models) to predict the domain model.
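FOAM's concrete annotation syntax and invariants are not given in the abstract; as a loose illustration only, the Python sketch below checks a response-style temporal invariant over a finite trace of annotated use-case steps (all annotation names are invented).

```python
def check_response(trace, trigger, response):
    """Check G(trigger -> F response) on one finite trace: every step
    carrying `trigger` must be followed (same step or later) by a step
    carrying `response`."""
    pending = False
    for step in trace:          # each step is a set of annotations
        if trigger in step:
            pending = True
        if response in step:
            pending = False
    return not pending

# A toy annotated use-case; the step names are purely illustrative.
use_case = [
    {"user_inserts_card"},
    {"request_sent"},           # annotated trigger
    {"machine_processes"},
    {"response_received"},      # annotated response
]
print(check_response(use_case, "request_sent", "response_received"))  # True
```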
396

Formal Verification of Tree Ensembles in Safety-Critical Applications

Törnblom, John January 2020
In the presence of data and computational resources, machine learning can be used to synthesize software automatically. For example, machines are now capable of learning complicated pattern recognition tasks and sophisticated decision policies, two key capabilities in autonomous cyber-physical systems. Unfortunately, humans find software synthesized by machine learning algorithms difficult to interpret, which currently limits their use in safety-critical applications such as medical diagnosis and avionic systems. In particular, successful deployments of safety-critical systems mandate the execution of rigorous verification activities, which often rely on human insights, e.g., to identify scenarios in which the system shall be tested. A natural pathway towards a viable verification strategy for such systems is to leverage formal verification techniques, which, in the presence of a formal specification, can provide definitive guarantees with little human intervention. However, formal verification suffers from scalability issues with respect to system complexity. In this thesis, we investigate the limits of current formal verification techniques when applied to a class of machine learning models called tree ensembles, and identify model-specific characteristics that can be exploited to improve the performance of verification algorithms when applied specifically to tree ensembles. To this end, we develop two formal verification techniques specifically for tree ensembles: one fast and conservative, and one exact but more computationally demanding. We then combine these two techniques into an abstraction-refinement approach, which we implement in a tool called VoTE (Verifier of Tree Ensembles). Using a couple of case studies, we recognize that sets of inputs that lead to the same system behavior can be captured precisely as hyperrectangles, which enables tractable enumeration of input-output mappings when the input dimension is low. Tree ensembles with a high-dimensional input domain, however, seem generally difficult to verify. In some cases, though, conservative approximations of input-output mappings can greatly improve performance. This is demonstrated in a digit recognition case study, where we assess the robustness of classifiers confronted with additive noise.
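The hyperrectangle observation can be made concrete with a small sketch; the code below is not from VoTE, just a minimal Python illustration of exact branch enumeration for a single decision tree when the input is confined to a box (all names are hypothetical).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A binary decision-tree node: go left when x[feature] <= threshold."""
    feature: int = 0
    threshold: float = 0.0
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    value: Optional[float] = None   # set on leaves only

def reachable_values(node, box):
    """Collect every leaf value reachable when the input lies in `box`,
    a hyperrectangle given as one (lo, hi) interval per feature.
    Both branches are explored whenever the interval straddles the split."""
    if node.value is not None:
        return {node.value}
    lo, hi = box[node.feature]
    out = set()
    if lo <= node.threshold:                    # left branch feasible
        left_box = list(box)
        left_box[node.feature] = (lo, min(hi, node.threshold))
        out |= reachable_values(node.left, left_box)
    if hi > node.threshold:                     # right branch feasible
        right_box = list(box)
        right_box[node.feature] = (max(lo, node.threshold), hi)
        out |= reachable_values(node.right, right_box)
    return out

tree = Node(feature=0, threshold=0.5,
            left=Node(value=-1.0), right=Node(value=1.0))
print(reachable_values(tree, [(0.0, 0.3)]))   # {-1.0}: provably one class
print(reachable_values(tree, [(0.0, 1.0)]))   # {-1.0, 1.0}: both reachable
```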
397

Methods for Verification of Post-Impact Control including Driver Interaction

Beltran Gutierrez, Javier; Song, Yujiao January 2011
This thesis project focuses on a verification method for a safety function called PIC, which stands for Post-Impact Control and which controls the motion of passenger cars after exposure to external disturbances produced by a first impact, aiming at avoiding or mitigating secondary events. The main objective was to select a promising method, among several candidates, to develop further for testing the function and its interaction with the driver. To do this, it was first necessary to map the real destabilized states of motion targeted by the function. These states are referred to as the post-impact problem space and are a combination of variables describing the host vehicle's motion at the instant the destabilizing force has ceased. Knowing which states are requested by the solution candidates, it is possible to grade the rig candidates based on their capability to cover the problem space. The proposed rig solutions were then simulated with Matlab/Simulink models to investigate which candidate covers the problem space best. The result of the simulations and other criteria is that a moving-base simulator (Simulator SIM4) is best suited to research verification. The second most advantageous solution is the rig alternative called Built-in Actuators.
398

Validation and Verification of Digital Twins

Pedro, Leonardo January 2021
Digital Twin is a new technology that is taking over manufacturing and production processes while lowering their costs. The technology has proven to be a key enabler of efficient verification and validation processes, which underlines the importance of its own validation and accreditation phase. This study emphasizes the importance of validation and verification for digital twins (DTs), as well as for models and cyber-physical systems. Current V&V techniques are listed and described, addressing which model requirements are necessary to validate models and translate them into DTs.
399

Checkpoint : A case study of a verification project during the 2019 Indian election

Svensson, Linus January 2019
This thesis examines the Checkpoint research project, a verification initiative introduced to address misinformation in private messaging applications during the 2019 Indian general election. Over two months, throughout the seven phases of the election, a team of analysts verified election-related misinformation spread on the closed messaging network WhatsApp. Building on new automated technology, the project introduced a WhatsApp tipline that allowed users of the application to submit content to a team of analysts who verified user-generated content in an unprecedented way. The thesis presents a detailed ethnographic account of the implementation of the verification project. Ethnographic fieldwork has been combined with a series of semi-structured interviews in which the analysts describe the challenges they faced throughout the project. Among these challenges, the study found that India's legal framework limited the scope of the project, so that the organisers had to change their approach from an editorial project to a research-based one. Another problem concerned the methodology of verification: analysts perceived online verification tools as a limiting factor, as they experienced a need for more traditional journalistic verification methods. Technology was also a limiting factor. The tipline was quickly flooded with verification requests, the majority of which were unverifiable, and the team had to sort the queries manually. Existing technology, such as image-match checks, could be further implemented to deal more efficiently with multiple queries in future projects.
400

Designing a Verification Tool for Easier Quality Assurance of Interoperable Master Format Packages

Sjölund, Martin January 2020
With today's global distribution of movies, series, documentaries, and more, the need for a standardised system for storing content has emerged. Over-the-top media services such as Netflix, HBO, and Amazon Prime store large amounts of content, and by providing it internationally, the content multiplies as it has to conform to regional standards and regulations. In light of this, the Society of Motion Picture and Television Engineers (SMPTE) has created a standard called the Interoperable Master Format (IMF). This component-based media format lowers storage costs drastically by storing and managing only the media elements that are unique between versions. In the management of media content, one of the tasks is verification, a process in which the content is checked for errors. By incorporating this process into an IMF workflow, efficiency could be improved considerably. The objective of this thesis is to explore the use of IMF today and to design a tool for verification of IMF package data, solving present problems in the verification workflow. By looking more deeply into the IMF standard and the needs of people working with verification, a prototype could be created that attends to the needs of the user while conforming to the IMF workflow. The prototype was well received by design experts, and there is potential for its further development.
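One core verification step, checking package assets against their recorded digests, can be sketched briefly. A real IMF Packing List is XML per SMPTE ST 2067 with base64 digests, so the flat manifest below is a simplifying assumption, and the file names are invented.

```python
import hashlib
from pathlib import Path

def sha1_of(path, chunk=1 << 20):
    """Stream a file through SHA-1, the digest family used by IMF packing lists."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_package(root, manifest):
    """Check each asset in a simplified manifest {filename: expected_sha1_hex}."""
    errors = []
    for name, expected in manifest.items():
        path = Path(root) / name
        if not path.is_file():
            errors.append(f"{name}: missing from package")
        elif sha1_of(path) != expected:
            errors.append(f"{name}: hash mismatch")
    return errors

if __name__ == "__main__":
    manifest = {"video.mxf": "da39a3ee5e6b4b0d3255bfef95601890afd80709"}
    for problem in verify_package("imf_package", manifest):
        print("FAIL:", problem)
```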
