261
A numerical investigation of mesoscale predictability
Beattie, Jodi C. 03 1900 (has links)
Approved for public release; distribution is unlimited. / As mesoscale models increase in resolution there is a greater need to understand predictability on smaller scales. The predictability of a model is related to forecast skill. It is possible that the uncertainty of one scale of motion can affect the other scales due to the nonlinearity of the atmosphere. Some suggest that topography is one factor that can lead to an increase in forecast skill and therefore predictability. This study examines the uncertainty of a mesoscale model and attempts to characterize the predictability of the wind field. The data were collected in summer, when the synoptic forcing is relatively benign. Mesoscale Model 5 (MM5) lagged forecasts are used to create a three-member ensemble over a 12-hour forecast cycle. The differences in these forecasts are used to determine the spread of the wind field. Results show that some mesoscale features have high uncertainty and others have low uncertainty, shedding light on the potential predictability of these features with a mesoscale model. Results indicate that topography is a large source of uncertainty. This is seen in all data sets, contrary to other studies. The ability of the model to properly forecast the diurnal cycle also had a substantial impact on the character and evolution of forecast spread. The persistent mesoscale features were represented reasonably well; however, the detailed structure of these features had a fair amount of uncertainty. / Lieutenant Junior Grade, United States Navy
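As a concrete illustration of the lagged-ensemble spread idea described above, the sketch below (hypothetical Python with synthetic data; not the study's actual MM5 processing) builds a three-member ensemble of wind components and computes the spread as the across-member standard deviation at each grid point:

```python
# A minimal illustration (not the study's MM5 processing) of forecast
# spread from a lagged ensemble: three forecasts valid at the same time,
# started at successively earlier initialization times.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 10 m wind fields on a small grid: u and v components for
# each of the three lagged members, shape (member, ny, nx).
u = rng.normal(5.0, 1.0, size=(3, 50, 50))
v = rng.normal(0.0, 1.0, size=(3, 50, 50))

# Spread of the wind field: standard deviation across members at each
# grid point, combined over both wind components.
spread = np.sqrt(u.var(axis=0) + v.var(axis=0))

print(f"domain-mean spread: {spread.mean():.2f} m/s")
print(f"max spread:         {spread.max():.2f} m/s")
```

Regions of persistently large spread in such a map point to features the model has difficulty predicting, e.g. flow near steep topography.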
262
Forward looking logics and automata
Ley, Clemens January 2011 (has links)
This thesis is concerned with extending properties of regular word languages to richer structures. We consider intricate properties like the relationship between one-way and two-way temporal logics, minimization of automata, and the ability to effectively characterize logics. We investigate whether these properties can be extended to tree languages or word languages over an infinite alphabet. It is known that linear temporal logic (LTL) is as expressive as first-order logic over finite words [Kam68, GPSS80]. LTL is a unidirectional logic that can only navigate forwards in a word, so it is quite surprising that it can capture all of first-order logic. In fact, one of the main ideas of the proof of [GPSS80] is to show that the expressiveness of LTL is not increased if modalities for navigating backwards are added. It is also known that an extension of bidirectional LTL to ordered trees, called Conditional XPath, is first-order complete [Mar04]. We investigate whether the unidirectional fragment of Conditional XPath is also first-order complete. We show that this is not the case. In fact, we show that there is a strict hierarchy of expressiveness consisting of languages that are all weaker than first-order logic. Unidirectional Conditional XPath is contained in the lowest level of this hierarchy. In the second part of the thesis we consider data word languages, that is, word languages over an infinite alphabet. We extend the theorem of Myhill and Nerode to a class of automata for data word languages, called deterministic finite memory automata (DMA). We give a characterization of the languages that are accepted by DMA, and also provide an algorithm for minimizing DMA. Finally, we extend theorems of Büchi, Schützenberger, McNaughton, and Papert to data word languages. A theorem of Büchi states that a language is regular iff it can be defined in monadic second-order logic. Schützenberger, McNaughton, and Papert have provided an effective characterization of first-order logic, that is, an algorithm for deciding whether a regular language can be defined in first-order logic. We provide a counterpart of Büchi's theorem for data languages. More precisely, we define a new logic and show that it has the same expressiveness as non-deterministic finite memory automata. We then turn to a smaller class of data languages: those that are recognized by algebraic objects called orbit-finite data monoids. We define a second new logic and show that it can define precisely the languages accepted by orbit-finite data monoids. We provide an effective characterization of a first-order variant of this second logic, as well as of restrictions of first-order logic, such as its two-variable fragment and local variants.
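For readers unfamiliar with finite memory automata, the toy Python sketch below illustrates the register mechanism (based on the standard Kaminski–Francez definition, not the thesis's own formalization): a single register suffices to accept the data language of words whose letters all equal the first letter, something no automaton over a fixed finite alphabet can express:

```python
# A toy deterministic finite-memory automaton over an infinite alphabet.
# Finitely many registers store data values seen so far; transitions may
# only compare the current letter against register contents for equality.
class DMA:
    def accepts(self, word):
        register = None          # one register of finite memory
        state = "start"
        for letter in word:
            if state == "start":     # store the first data value
                register = letter
                state = "check"
            elif state == "check":   # compare against the register
                if letter != register:
                    return False
        return True

dma = DMA()
print(dma.accepts(["a", "a", "a"]))  # True: all letters equal the first
print(dma.accepts([7, 7, 8]))        # False: the last letter differs
```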
263
Kinerja: a workflow execution environment
Procter, Sam January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / John Hatcliff / Like all businesses, clinical care groups and facilities are under a range of pressures to enhance the efficacy of their operations. Though there are a number of ways to go about these improvements, one exciting methodology involves the documentation and analysis of clinical workflows. Unfortunately, there is no industry-standard tool that supports this, and many available workflow documentation technologies are not only proprietary but technologically insufficient as well. Ideally, these workflows would be documented at a formal enough level to support their execution; this would allow the partial automation of documented clinical procedures. However, the difficulty involved in this automation effort is substantial: not only is there the irreducible complexity inherent to automation, but a number of the solutions presented so far layer on additional complexity.
To solve this, the author introduces Kinerja, a state-of-the-art execution environment for formally specified workflows. Operating on a subset of the academically and industrially proven workflow language YAWL, Kinerja allows for both human-guided governance and computer-guided verification of workflows, and for seamless switching between the two modalities. Though the base of Kinerja is essentially an integrated framework allowing for considerable extensibility, a number of modules have already been developed to support the checking and execution of clinical workflows. One such module integrates symbolic execution, which greatly reduces the time and space necessary for a complete exploration of a workflow's state space.
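To give a feel for why exploring a workflow's state space is costly, and thus why the symbolic-execution module matters, here is a minimal, hypothetical Python sketch (not Kinerja's actual engine; the workflow and task names are invented) that enumerates the reachable states of a tiny YAWL-like workflow by brute force:

```python
# Brute-force enumeration of reachable workflow states. Each task moves
# one token from a source place to a destination place; real workflows
# with concurrency make this state space grow combinatorially.
from collections import deque

TASKS = {
    "register": ("start", "triage"),
    "triage":   ("triage", "treat"),
    "treat":    ("treat", "done"),
}

def successors(marking):
    for task, (src, dst) in TASKS.items():
        if marking.get(src, 0) > 0:
            nxt = dict(marking)
            nxt[src] -= 1
            nxt[dst] = nxt.get(dst, 0) + 1
            yield nxt

def explore(initial):
    seen, queue = set(), deque([initial])
    while queue:
        marking = queue.popleft()
        key = frozenset(marking.items())
        if key in seen:
            continue
        seen.add(key)
        queue.extend(successors(marking))
    return seen

print(f"{len(explore({'start': 1}))} reachable states")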
264
Ambiente integrado para verificação e teste da coordenação de componentes tolerantes a falhas / An integrated environment for verification and test of fault-tolerant components coordination
Hanazumi, Simone 01 September 2010 (has links)
Nowadays, because of continuous change and a competitive market, companies and organizations need to adapt their business practices to satisfy the varied requirements of their customers and thereby maintain an advantage over their competitors. A promising approach to reaching this goal is Component-Based Development (CBD), whose basic idea is that new software can be built quickly from preexisting components. However, assembling reliable, fault-tolerant corporate systems from integrated components is a relatively complex task, and the need to assure that such integration does not fail has become essential, especially because the consequences of a failure can be extremely serious. To gain confidence that the software is fault-tolerant, testing and formal verification of programs should be carried out; together, these activities assure the developer that the system resulting from the integration is, in fact, reliable. The practical feasibility of these activities, however, depends on supporting tools, since both are costly to perform during software development. Given the need to make testing and verification easier in component-based systems, this Master's work develops an integrated environment for the verification and testing of protocols for coordinating the exceptional behaviour of components.
265
Uncovering bugs in P4 programs with assertion based verification / Revelando bugs em programação P4 com verificação baseada em asserções
Freire, Lucas Menezes January 2018 (has links)
Recent trends in software-defined networking have extended network programmability to the data plane through programming languages such as P4. Unfortunately, the chance of introducing bugs in the network also increases significantly in this new context. To prevent bugs from violating network properties, the techniques of enforcement or verification can be applied. While enforcement seeks to actively monitor the data plane to block property violations, verification aims to find bugs by assuring that the program meets its requirements. Existing data plane verification approaches that are able to model P4 programs impose severe restrictions on the set of properties that can be verified. In this work, we propose ASSERT-P4, a data plane program verification approach based on assertions and symbolic execution. Network programmers annotate P4 programs with assertions expressing general correctness properties. The annotated programs are transformed into C models and all their possible paths are symbolically executed. Since symbolic execution is known to have scalability challenges, we also propose a set of techniques that can be applied in this domain to make verification feasible. Namely, we investigate the effect of the following techniques on verification performance: parallelization, compiler optimizations, packet and control flow constraints, bug reporting strategy, and program slicing. We implemented a prototype to study the efficacy and efficiency of the proposed approach. We show that it can uncover a broad range of bugs and software flaws, and can do so in less than a minute for various P4 applications proposed in the literature. We also show how a selection of the optimization techniques applied to more complex programs can reduce the verification time by approximately 85 percent.
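The path-by-path assertion checking at the core of this approach can be sketched as follows. The Python snippet below is a hypothetical, simplified stand-in: it concretely enumerates representative header values for a toy forwarding pipeline and checks an assertion on every resulting control-flow path, whereas ASSERT-P4 symbolically covers all packets per path of the generated C model:

```python
# Toy stand-in for assertion-based data plane verification: exhaustively
# exercise a tiny pipeline's branches and check a correctness property.
from itertools import product

def pipeline(ttl, acl_hit):
    """Toy pipeline: decrement TTL, then drop (port 0) or forward."""
    ttl -= 1
    egress = 1 if acl_hit else 0
    return ttl, egress

# Assertion in the spirit of the paper's annotations: any forwarded
# packet must leave with a positive TTL.
violations = []
for ttl, acl_hit in product(range(3), [False, True]):
    out_ttl, egress = pipeline(ttl, acl_hit)
    if egress != 0 and out_ttl <= 0:
        violations.append((ttl, acl_hit))

print("assertion violated on inputs:", violations)
# -> packets arriving with ttl 0 or 1 are forwarded with non-positive TTL
```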
266
Development of a Computer Program for the Verification and Validation of Numerical Simulations in Roadside Safety
Mongiardini, Mario 06 May 2010 (has links)
Roadside safety hardware has traditionally been approved on the basis of full-scale crash tests. In recent years, nonlinear dynamic Finite Element (FE) programs like LS-DYNA, PAM-Crash or ABAQUS Explicit have been widely used in evaluating new or improved designs of roadside hardware. Although powerful tools, numerical models must be properly verified and validated in order to provide reliable results. Typically, the verification and validation (V&V) process involves a visual comparison of two curves and is based on a purely subjective judgment. This research investigated the use of comparison metrics, which are mathematical measures that quantify the level of agreement between two curves, for comparing simulation and experimental outcomes in an objective manner. A computer program was developed in Matlab® to automatically evaluate most of the comparison metrics available in the literature. The software can be used to preprocess and compare either single or multiple channels, guiding the user through user-friendly graphical interfaces. Acceptance criteria suitable for representing the typical scatter of experimental tests in roadside safety were determined by comparing ten essentially identical full-scale vehicle crash tests. The robustness and reliability of the implemented method were tested by comparing the qualitative score of the computed metrics for a set of velocity waveforms with the corresponding subjective judgment of experts. Moreover, the implemented method was applied to two real validation cases, involving a numerical model in roadside safety and a model in biomechanics, respectively. Ultimately, the program proved to be an effective tool for assessing the similarities and differences between two curves and, hence, for assisting engineers and analysts in performing verification and validation activities objectively.
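One widely used pair of curve-comparison metrics in this domain is the Sprague–Geers magnitude and phase error. The Python sketch below (an illustration of the published formulas on synthetic curves, not the thesis's Matlab implementation) shows how such metrics reduce the agreement between a test curve and a simulation curve to a few numbers:

```python
# Sprague-Geers comparison metrics: M captures amplitude disagreement,
# P captures phase disagreement, C combines both; all are zero for
# perfectly matching curves sampled at the same instants.
import numpy as np

def sprague_geers(measured, computed):
    m = np.asarray(measured, dtype=float)
    c = np.asarray(computed, dtype=float)
    s_mm, s_cc, s_mc = (m * m).sum(), (c * c).sum(), (m * c).sum()
    magnitude = np.sqrt(s_cc / s_mm) - 1.0
    phase = np.arccos(np.clip(s_mc / np.sqrt(s_mm * s_cc), -1.0, 1.0)) / np.pi
    combined = np.sqrt(magnitude**2 + phase**2)
    return magnitude, phase, combined

t = np.linspace(0.0, 1.0, 500)
test = np.sin(2 * np.pi * t)               # e.g. a crash-test velocity trace
sim = 0.95 * np.sin(2 * np.pi * t - 0.1)   # slightly scaled and shifted
M, P, C = sprague_geers(test, sim)
print(f"M={M:+.3f}  P={P:.3f}  C={C:.3f}")
```

Acceptance criteria of the kind determined in this research then amount to thresholds on such metric values, calibrated against the scatter of repeated identical crash tests.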
267
Children and gambling : attitudes, behaviour, harm prevention and regulatory responses
Malgorzata, Anna Carran January 2015 (has links)
Gambling constitutes an inherent part of the British cultural landscape, but due to its potential to cause significant detriments it remains controversial. The Gambling Act 2005 liberalised the UK gambling industry and created an environment where commercial gambling, although regulated, can be offered within a relatively free market setting and its consumption can be stimulated by advertising. The task of the law is to provide a framework where the need for customer choice, a flourishing market, and respect for private liberties can be adequately balanced with the duty to protect vulnerable individuals such as minors. The Gambling Act has been positioned as containing sufficient protective measures to prevent minors from being harmed by gambling, but there is still a relative paucity of research that focuses specifically on how this regime affects this age group. This thesis fills some of the gaps by analysing whether the existing legal and regulatory framework reconciles the conflicting priorities adequately. It uniquely combines legal doctrinal analysis with empirical evidence collected from a sample of British pupils to show that the liberalisation of gambling has brought severe limitations on the protection of minors that are not sufficiently counterbalanced by existing measures. This thesis demonstrates that the legal definition of prohibited gambling does not incorporate all activities that may lead to gambling-related harm. While the age verification measures adopted by online gambling providers appear to be successful, young people continue to have easy access to gambling in land-based venues and are exposed to significant volumes of gambling advertising that appeals to them, and these factors are not sufficiently compensated for by any holistic regulatory strategy. However, the thesis indicates that the correlation between fun and real gambling games should not be attributed to overlaps in minors' motivations for engaging in either form, or to minors' lack of accurate differentiation between the two.
268
Extraction of Rust code from the Why3 verification platform
Fitinghoff, Nils January 2019 (has links)
It is hard to ensure correctness as software grows more complex. There are many ways to tackle this problem. Improved documentation can prevent misunderstandings about what an interface does. Well-built abstractions can prevent some kinds of misuse. Tests can find errors, but unless they are exhaustive, they can never guarantee the absence of errors. The use of formal methods can improve the reliability of software. This work uses the Why3 program verification platform to produce Rust code. Possible semantics-preserving mappings from WhyML to Rust are evaluated, and a subset of the mappings is automated using Why3's extraction framework.
269
A formal verification approach to process modelling and composition
Papapanagiotou, Petros January 2014 (has links)
Process modelling is a design approach where a system or procedure is decomposed into a number of abstract, independent, but connected processes, and then recomposed into a well-defined workflow specification. Research in formal verification, and in theorem proving in particular, is focused on the rigorous verification of system properties using logical proof. This thesis introduces a systematic methodology for process modelling and composition based on formal verification. Our aim is to augment the numerous benefits of a workflow-based specification, such as modularity, separation of concerns, interoperability between heterogeneous (including human-based) components, and optimisation, with the high level of trust provided by formally verified properties, such as type correctness, systematic resource accounting (including exception handling), and deadlock-freedom. More specifically, we focus on bridging the gap between the deeply theoretical proofs-as-processes paradigm and the highly pragmatic tasks of process specification and composition. To accomplish this, we embed the proofs-as-processes paradigm within the modern proof assistant HOL Light. This allows the formal, mechanical translation of Classical Linear Logic (CLL) proofs to π-calculus processes. Our methodology then relies on the specification of abstract processes in CLL terms and their composition using CLL inference. A fully diagrammatic interface is used to guide our developed set of high-level, semi-automated reasoning tools, and to perform intuitive composition actions including sequential, parallel, and conditional composition. The end result is a π-calculus specification of the constructed workflow, with guarantees of correctness for the aforementioned properties. We can then apply a visual, step-by-step simulation of this workflow or perform an automated workflow deployment as executable code in the programming language Scala. We apply our methodology to a use-case of a holiday booking web agent and to the modelling of real-world collaboration patterns in healthcare, thus demonstrating the capabilities of our framework and its potential use in a variety of scenarios.
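The flavor of typed, correctness-checked composition can be conveyed with a toy sketch. The Python snippet below is hypothetical (the actual framework derives π-calculus terms from CLL proofs in HOL Light; the process names are invented), but it mirrors the sequential and parallel composition actions on the holiday-booking example, with composition failing loudly when connected inputs and outputs do not match:

```python
# Toy typed process composition: each process declares input and output
# types, and composition checks that connections are type-correct.
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    inputs: tuple
    outputs: tuple

def seq(p, q):
    """Sequential composition: p's outputs must feed q's inputs."""
    if p.outputs != q.inputs:
        raise TypeError(f"cannot connect {p.name} -> {q.name}")
    return Process(f"({p.name} ; {q.name})", p.inputs, q.outputs)

def par(p, q):
    """Parallel composition: run p and q side by side."""
    return Process(f"({p.name} | {q.name})",
                   p.inputs + q.inputs, p.outputs + q.outputs)

book_flight = Process("BookFlight", ("Dates",), ("FlightRes",))
book_hotel = Process("BookHotel", ("Dates",), ("HotelRes",))
pay = Process("Pay", ("FlightRes", "HotelRes"), ("Receipt",))

holiday = seq(par(book_flight, book_hotel), pay)
print(holiday.name, holiday.inputs, "->", holiday.outputs)
```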
270
A Runtime Verification and Validation Framework for Self-Adaptive Software
Sayre, David B. 01 January 2017 (has links)
The concepts that make self-adaptive software attractive also make it more difficult for users to gain confidence that these systems will consistently meet their goals under uncertain contexts. To improve user confidence in self-adaptive behavior, machine-readable conceptual models have been developed to instrument the adaptation behavior of the target software system and its primary feedback loop. By comparing these machine-readable models to the self-adaptive system, runtime verification and validation may be introduced as another method to increase confidence in self-adaptive systems; however, the existing conceptual models do not provide the semantics needed to institute this runtime verification or validation. This research confirms that the introduction of runtime verification and validation for self-adaptive systems requires the expansion of existing conceptual models with quality-of-service metrics, a hierarchy of goals, and states with temporal transitions. Based on these expanded semantics, runtime verification and validation was introduced as a second-level feedback loop to improve the performance of the primary feedback loop and to quantitatively measure the quality of service achieved in a state-based, self-adaptive system. A web-based purchasing application running in a cloud-based environment was the focus of experimentation. In order to meet changing customer purchasing demand, the self-adaptive system monitored external context changes and increased or decreased the number of available application servers. The runtime verification and validation system operated as a second-level feedback loop to monitor quality-of-service goals based on internal context, and corrected self-adaptive behavior when goals were violated. Two competing quality-of-service goals were introduced to maintain customer satisfaction while minimizing cost. The research demonstrated that the addition of a second-level runtime verification and validation feedback loop did quantitatively improve self-adaptive system performance, even with simple, static monitoring rules.
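The two-level feedback structure described above can be sketched in a few lines. The following Python snippet is a hypothetical illustration (the thresholds, goals, and trace values are invented, not the dissertation's implementation) of a second-level loop that monitors two competing quality-of-service goals and corrects the server count chosen by the primary adaptation loop:

```python
# Hypothetical second-level V&V feedback loop: it observes internal
# context (measured response time) and corrects the number of servers
# chosen by the primary adaptation loop when a QoS goal is violated.
def vv_loop(servers, response_ms):
    # Goal 1 (customer satisfaction): keep response time under 200 ms.
    if response_ms > 200:
        return servers + 1, "response-time goal violated: scale up"
    # Goal 2 (cost): do not over-provision when comfortably within goal.
    if response_ms < 80 and servers > 1:
        return servers - 1, "cost goal violated: scale down"
    return servers, "goals satisfied"

# Simulated monitoring intervals with varying purchasing demand.
servers = 2
for response_ms in [150, 230, 260, 180, 60, 70]:
    servers, action = vv_loop(servers, response_ms)
    print(f"response={response_ms:3d} ms -> servers={servers} ({action})")
```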