31 |
Model checking requirements written in a controlled natural language. BARZA, Sérgio, 25 February 2016 (has links)
Software Maintainability (SM) has been studied since it became one of the key components of the software quality models accepted around the world. Such models support researchers and practitioners in evaluating the quality level of their systems. Accordingly, many researchers have proposed metrics to be used as SM indicators. On the other hand, there is a suspicion that SM metrics are used differently in industry than in the academic context; in this case, practitioners do not adopt the metrics proposed and used by academia. Consequently, the goal of this research is to investigate the adoption and applicability of SM metrics in the Brazilian industrial context. This study makes it possible to confirm whether practitioners use the SM metrics proposed by academics around the globe or whether they propose their own metrics for SM measurement. As the empirical method for data collection, we used a survey divided into two steps. The first step focused on gathering information that allowed us to draw a specific picture of the use and applicability of SM metrics; semi-structured interviews were chosen as the research instrument. The second step focused on a more general scenario, encompassing the Brazilian software production industry as a whole, and used an online questionnaire as the research instrument. Practitioners holding different positions in several companies participated in this work; data were collected from requirements engineers, quality analysts, testers, developers, and project managers. Seven software companies participated in the first part of the study, and 68 valid answers were collected in the second part, resulting in 31 SM metrics being listed. The results show that about 90% of the companies perform maintenance on their software products, yet only about 60% confirm using maintainability metrics, revealing a discrepancy between software maintenance and the use of SM metrics. Nearly half of the companies surveyed have well-defined processes to collect these metrics; the others have no formal methodology and instead use the SM metrics that best fit the needs of a specific project. The conclusions of this study point to an issue that is nothing new in academic research around the world: many of the results produced, mainly at universities, are not reaching the software industry, and this also holds when the subject is software maintenance. The results of this research may lead to discussions on how SM metrics are being proposed nowadays. / Software Maintainability (SM) has been studied since it became one of the components of the quality models accepted globally. Such models help researchers and industry practitioners assess the quality level of their systems. As a consequence, many researchers have been proposing metrics that can be used as SM indicators. On the other hand, there is a suspicion that the use of SM metrics happens differently from academia; in this case, companies are not adopting the metrics being proposed in the academic environment. The objective of this research is to investigate the adoption and application of software maintainability metrics in the Brazilian industrial context. This study makes it possible to state whether these companies use SM attributes proposed by academics around the world or whether they propose their own metrics for SM measurement. To access the data for this research, the empirical survey method was used, divided into two stages. The first stage aimed to gather information that would allow a more specific picture of the use and application of such metrics; semi-structured interviews were chosen as the research instrument. The second stage takes a broader focus, encompassing the whole Brazilian software production industry; an online questionnaire was used as the research instrument. Professionals in different positions at several companies took part in this research; data were collected from requirements engineers, quality analysts, testers, developers, project managers, and others. Seven companies participated in the first stage of the research and 68 valid answers were gathered in the second, from which 31 SM metrics were identified. The results show that about 90% of the companies perform maintenance on their software products, but only about 60% stated that they use SM metrics, revealing a discrepancy between software maintenance and the use of metrics. Almost half of the companies have well-defined processes for collecting these metrics; however, many of them still do not have such formal collection processes and instead use the attributes that best fit the needs of a specific project. The conclusions of this study point to problems that are nothing new in academic research around the world: the sample investigated in this work reinforces the suspicion that many of the results of scientific research carried out at universities are not reaching industry, and this is also reflected when the subject is software maintenance. The results of this study present data that may prompt discussions about the way maintainability metrics are proposed today.
|
32 |
Kvalitetssäkring vid leverans av IFC-modell : Vid export från Tekla Structures och Revit / Quality assurance of IFC model delivery: for exports from Tekla Structures and Revit. Rashid, Zhiar; Rustum, Hawar, January 2019 (has links)
The use of different BIM tools is becoming increasingly frequent in the construction sector. Information exchange of 3D models between the various actors of a project takes place continuously during the design phase. Since different actors use different software, an open file format is needed for the delivery of 3D models. IFC is a neutral file format that enables information exchange between different disciplines and coordination of BIM models. This degree project is a preliminary study of both qualitative and quantitative kinds, with the aim of developing an internal working method for quality assurance of IFC models at delivery. The qualitative parts include interviews, which are conducted to gain an understanding of how the employees at WSP Byggprojektering Göteborg perceive different programs and the handling of IFC. This has given the authors a deeper insight into, and understanding of, the benefits and challenges that the respondents encounter, which are highlighted in the results section. A practical part is also included in the degree project: models are created in Tekla Structures and Revit and then analyzed using the review programs Solibri Model Checker and Navisworks Manage. Two review methods are created in this thesis, a visual review and an Excel report. The visual review method involves visually inspecting the IFC model in the same review program as the coordinator uses. The second method is a prototype that compares reports from the respective software packages. The conclusions of this report are as follows: • The human factor in modeling and the use of correct tools when exporting to IFC. • The IFC format, and the various programs' ability to interpret the IFC information, can in some cases fail for non-standardized geometries. • Review programs can interpret the IFC model differently. / The use of different BIM tools is becoming increasingly frequent in the construction sector. Information exchange of 3D models between the various actors of the project takes place continuously during the design phase. Because different actors use different software, an open file format is needed for the delivery of 3D models. IFC is a neutral file format that enables information exchange between different disciplines and coordination of BIM models. This degree project is a preliminary study of a qualitative and quantitative variety, with the aim of developing an internal working method for quality assurance of IFC model delivery. The qualitative parts include interviews. Interviews are made to gain an understanding of what the employees at WSP Byggprojektering Göteborg have in mind about different programs and management of IFC. It has given the authors a deeper insight and understanding of the benefits and challenges that respondents encounter, which are highlighted in the results section. A practical part is included in the degree project. During the practical steps, models are built in Tekla Structures and Revit and then analyzed with the help of the review programs Solibri Model Checker and Navisworks Manage. Two review methods are created in this thesis, a visual review and an Excel report. The visual inspection method involves visually reviewing the IFC model in the same review program as the coordinator. The second method is a prototype that compares different reports from the respective software. The conclusions in this report are as follows: • Human factor in the modeling and use of correct tools when exporting to IFC. • The IFC format and the ability of the various programs to interpret the IFC information may in some cases fail for non-standard geometries. • Review programs can interpret the IFC model differently.
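The second review method, comparing element reports exported from the authoring tools with a report taken from the delivered IFC model, can be sketched as follows. This is only an illustration: the file names and the GUID/IfcType/Name column layout are assumptions, not the format used by WSP or by the thesis prototype.

```python
import csv

def load_report(path, key="GUID"):
    """Read an exported element report (one row per IFC element) into a dict keyed by GUID."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[key]: row for row in csv.DictReader(f)}

def compare_reports(source_csv, ifc_csv):
    """Report elements missing from the delivered IFC model and elements whose type changed."""
    source = load_report(source_csv)
    delivered = load_report(ifc_csv)
    missing = sorted(set(source) - set(delivered))
    changed = [g for g in source if g in delivered
               and source[g].get("IfcType") != delivered[g].get("IfcType")]
    return missing, changed

if __name__ == "__main__":
    # Hypothetical report files exported from Revit/Tekla and from the IFC review tool.
    missing, changed = compare_reports("revit_export.csv", "solibri_ifc_report.csv")
    print(f"{len(missing)} elements missing from the IFC delivery")
    print(f"{len(changed)} elements with a different IFC type")
```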
|
33 |
Tool for linguistic quality evaluation of student texts / Verktyg för språklig utvärdering av studenttexter. Kärde, Wilhelm, January 2015 (has links)
Spell checkers are nowadays a common occurrence in most editors. A student writing an essay in school will often have a spell checker available. However, the feedback from a spell checker seldom correlates with the feedback from a teacher, one reason being that the teacher considers more aspects when evaluating a text. The teacher will, as opposed to the spell checker, evaluate a text based on aspects such as genre adaptation, structure, and word variation. This thesis evaluates how well those aspects translate to NLP (Natural Language Processing) and implements those that translate well in a rule-based solution called Granska. / Grammar checkers are nowadays available in most word processors. A student writing an essay almost always has access to a grammar checker. However, the feedback the student receives from the grammar checker differs greatly from the feedback given by the teacher, since the teacher usually considers more aspects when assessing a student text. The teacher, unlike the grammar checker, assesses a text on aspects such as how well it fits a given genre, its structure, and its word variation. This thesis explores how well these aspects can be adapted to NLP (Natural Language Processing) and implements those that fit well in a rule-based solution called Granska.
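To illustrate how an aspect such as word variation can be turned into something computable, the sketch below derives two crude measures from a student text; these are stand-in measures for illustration only, not the rules actually implemented in Granska.

```python
import re

def word_variation(text):
    """Type-token ratio: share of distinct words, a crude measure of word variation."""
    words = re.findall(r"[a-zA-ZåäöÅÄÖ]+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def avg_sentence_length(text):
    """Average number of words per sentence, a crude structural measure."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return (sum(len(s.split()) for s in sentences) / len(sentences)) if sentences else 0.0

essay = "The cat sat. The cat sat again. Then the very same cat sat once more."
print(f"word variation: {word_variation(essay):.2f}")
print(f"avg sentence length: {avg_sentence_length(essay):.1f} words")
```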
|
34 |
Responding to Policies at Runtime in TrustBuilder. Smith, Bryan J., 20 April 2004 (has links) (PDF)
Automated trust negotiation is the process of establishing trust between entities with no prior relationship through the iterative disclosure of digital credentials. One approach to negotiating trust is for the participants to exchange access control policies to inform each other of the requirements for establishing trust. When a policy is received at runtime, a compliance checker determines which credentials satisfy the policy so they can be disclosed. In situations where several sets of credentials satisfy a policy and some of the credentials are sensitive, a compliance checker that generates all the sets is necessary to ensure that the negotiation succeeds whenever possible. Compliance checkers designed for trust management do not usually generate all the satisfying sets. In this thesis, we present two practical algorithms for generating all satisfying sets given a compliance checker that generates only one set. The ability to generate all of the combinations provides greater flexibility in how the system or user establishes trust. For example, the least sensitive credential combination could be disclosed first. These ideas have been implemented in TrustBuilder, our prototype system for trust negotiation.
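A standard way to enumerate every satisfying set when only a one-set compliance checker is available is to withhold, in turn, each credential of a found set and query the checker again. The sketch below illustrates this idea with a toy checker; it is not necessarily one of the two algorithms presented in the thesis.

```python
def all_satisfying_sets(checker, credentials):
    """Enumerate every credential set the one-set checker can produce.

    checker(available) must return a subset of `available` that satisfies the
    policy, or None if no subset of `available` does.
    """
    results, seen = set(), set()

    def search(available):
        if available in seen:            # avoid re-exploring the same credential pool
            return
        seen.add(available)
        sat = checker(available)
        if sat is None:
            return
        sat = frozenset(sat)
        results.add(sat)
        for cred in sat:                 # withholding any used credential may expose another set
            search(available - {cred})

    search(frozenset(credentials))
    return results

# Toy policy: (employee AND clearance) OR member_card; the checker returns one satisfying set.
def toy_checker(available):
    if {"employee", "clearance"} <= available:
        return {"employee", "clearance"}
    if "member_card" in available:
        return {"member_card"}
    return None

print(all_satisfying_sets(toy_checker, {"employee", "clearance", "member_card"}))
# -> {frozenset({'employee', 'clearance'}), frozenset({'member_card'})}
```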
|
35 |
On-chip Tracing for Bit-Flip Detection during Post-silicon Validation. Vali, Amin, January 2018 (has links)
Post-silicon validation is an important step during the implementation flow of digital integrated circuits and systems. Most of the validation strategies are based on ad-hoc solutions, such as guidelines from best practices, decided on a case-by-case basis for a specific design and/or application domain. Developing systematic approaches for post-silicon validation can mitigate the productivity bottlenecks that have emerged due to both design diversification and shrinking implementation cycles.
Ever since integrating on-chip memory blocks became affordable, embedded logic analysis has been used extensively for post-silicon validation. Deciding at design time which signals should be traceable in the post-silicon phase was posed as an algorithmic problem a decade ago. Most of the proposed solutions focus on how to restore as much data as possible within a software simulator in order to facilitate the analysis of functional bugs, assuming that there are no electrically-induced design errors, e.g., bit-flips. In this thesis, it is first shown that analyzing logic inconsistencies in the post-silicon traces can aid the detection of bit-flips and their root-cause analysis. Furthermore, when a bit-flip is detected, a list of suspect nets can be automatically generated.
Since the rate of bit-flip detection, as well as the size of the list of suspects, depends on the debug data that was acquired, it is necessary to select the trace signals carefully. Subsequently, new methods are presented to improve bit-flip detectability through an algorithmic approach to selecting the on-chip trace signals. Hardware assertion checkers can also be integrated on-chip in order to detect events of interest, as defined by the user. For example, they can detect a violation of a design property that captures a relationship between internal signals that is supposed to hold indefinitely, so long as no bit-flips occur in the physical prototype. Consequently, information collected from hardware assertion checkers can also provide useful debug information during post-silicon validation. Based on this observation, the last contribution of this thesis is a novel method to concurrently select a set of trace signals and a set of assertions to be integrated on-chip.
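The core idea of detecting a bit-flip as a logic inconsistency between traced values and the design's logic, and then deriving a list of suspect nets, can be illustrated with a deliberately tiny sketch; the netlist, the choice of traced signals, and the injected flip below are invented and much simpler than the gate-level analysis in the thesis.

```python
# Each net is either a primary input or defined by an AND/OR/NOT of other nets
# (listed in topological order so a single pass can evaluate them).
NETLIST = {
    "n1": ("AND", "a", "b"),
    "n2": ("OR", "n1", "c"),
    "out": ("NOT", "n2"),
}

def evaluate(inputs):
    """Golden (fault-free) values of all nets for one clock cycle."""
    val = dict(inputs)
    for net, (op, *args) in NETLIST.items():
        x = [val[a] for a in args]
        val[net] = {"AND": all(x), "OR": any(x), "NOT": not x[0]}[op]
    return {k: int(v) for k, v in val.items()}

def drives(src, dst):
    """True if net `src` is in the fan-in cone feeding net `dst`."""
    if dst not in NETLIST:
        return False
    args = NETLIST[dst][1:]
    return src in args or any(drives(src, a) for a in args)

def check_cycle(inputs, traced):
    """Compare traced values against the golden model; return (detected, suspect nets)."""
    golden = evaluate(inputs)
    mismatched = {net for net, v in traced.items() if golden[net] != v}
    if not mismatched:
        return False, set()
    # A net is suspect if its traced value already disagrees, or if it is untraced
    # and lies in the fan-in cone of every observed mismatch.
    candidates = {net for net in golden if net not in traced or net in mismatched}
    suspects = {net for net in candidates
                if all(m == net or drives(net, m) for m in mismatched)}
    return True, suspects

# One cycle: inputs a=1, b=1, c=0; a bit-flip on n1 makes the traced 'out' read 1 instead of 0.
detected, suspects = check_cycle({"a": 1, "b": 1, "c": 0}, {"a": 1, "b": 1, "c": 0, "out": 1})
print(detected, suspects)   # True, with n1, n2 and out as the suspect nets
```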
|
36 |
Conception et réalisation d'un vérificateur de modèles AltaRica / Design and implementation of an AltaRica model checker. Vincent, Aymeric, 05 December 2003 (has links) (PDF)
The AltaRica formalism, developed at LaBRI together with industrial partners, makes it possible to analyze the same system with several different methods (fault trees, Petri nets, Markov chains) in order to carry out dependability studies. These methods are supported by industrial tools. The goal of this thesis was to develop a formal verification tool based on a symbolic data structure, binary decision diagrams, which allows transition systems to be represented compactly. This tool was given a very expressive specification language, Park's mu-calculus, which is first-order logic extended with fixed points over relations. This dissertation describes the specification logic used in the model checker (Mec 5) that we developed, the AltaRica formalism, and the extensions made to the AltaRica language during this thesis. Some aspects of the implementation of Mec 5 are then described, such as the software architecture and some essential components, including the module that manages binary decision diagrams. Next, an elegant and very generic solution to the controller synthesis problem is described, which makes it possible to specify branching (tree-shaped) control objectives and thus constitutes a natural extension of the framework proposed by Ramadge and Wonham. This method reduces the controller synthesis problem to the problem of computing winning strategies. Finally, a method for computing winning strategies in a parity game is proposed, and it is shown that Mec 5 can compute such strategies.
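To give a flavour of the fixed-point evaluation at the heart of such a mu-calculus model checker, the sketch below computes the least fixed point mu X. goal ∪ pre(X), i.e. the states that can reach a goal set; explicit Python sets stand in for the binary decision diagrams used by Mec 5, and the example transition system is invented.

```python
def pre(transitions, states):
    """Existential predecessors: states with at least one successor in `states`."""
    return {s for (s, t) in transitions if t in states}

def reach(transitions, goal):
    """Least fixed point  mu X. goal ∪ pre(X):  all states that can reach `goal`."""
    x = set()
    while True:
        nxt = set(goal) | pre(transitions, x)
        if nxt == x:
            return x
        x = nxt

# Tiny example: 0 -> 1 -> 2, and 3 -> 3; which states can reach state 2?
T = {(0, 1), (1, 2), (3, 3)}
print(reach(T, {2}))   # {0, 1, 2}
```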
|
37 |
Spatial prediction of wind farm outputs for grid integration using the augmented Kriging-based model. Hur, Jin (1973-), 12 July 2012 (has links)
Wind generating resources have been increasing more rapidly than any other renewable generating resource.
Wind power forecasting is an important issue for deploying higher wind power penetrations on power grids.
The existing work on power output forecasting for wind farms has focused on the temporal issues.
As wind farm outputs depend on natural wind resources that vary over space and time, spatial analysis and modeling is also needed.
Predictions about suitability for locating new wind generating resources can be performed using spatial modeling.
In this dissertation, we propose a new approach to spatial prediction of wind farm outputs for grid integration based on Kriging techniques.
First, we investigate the characteristics of wind farm outputs.
Wind power is variable, uncontrollable, and uncertain compared to traditional generating resources.
In order to understand the characteristics of wind power outputs, we study the variability of wind farm outputs using correlation analysis and estimate the Power Spectral Density (PSD) from empirical data.
Following Apt [1], we classify the estimated PSD into four frequency ranges having different slopes.
We subsequently focus on the slope of the estimated PSD in the low-frequency range, because our spatial prediction covers daily to monthly timescales.
Since most of the energy is in the lower frequency components (the second, third, and fourth slope regions have much lower spectral density than the first), the conclusion is that the dominant issues regarding energy will be captured by the low frequency behavior.
Consequently, most of the issues regarding energy (at least at longer timescales) will be captured by the first slope, since relatively little energy is in the other regions.
We propose a slope estimation model for new wind farm production.
When the existing wind farms are highly correlated and the low-frequency slope of each wind farm has been estimated, we can predict the low-frequency slope of a new wind farm through the proposed spatial interpolation techniques, as sketched below.
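A minimal sketch of such a low-frequency slope estimate, assuming an hourly output series, a Welch PSD estimate, and a log-log linear fit over an illustrative daily-to-monthly band (the cut-offs and segment length are assumptions, not the settings used in the dissertation):

```python
import numpy as np
from scipy.signal import welch

def low_frequency_slope(output_mw, fs_per_hour=1.0, f_low=1/(30*24), f_high=1/24):
    """Fit log10(PSD) vs log10(f) between monthly (f_low) and daily (f_high) frequencies."""
    f, psd = welch(output_mw, fs=fs_per_hour, nperseg=min(len(output_mw), 4096))
    band = (f >= f_low) & (f <= f_high) & (psd > 0)
    slope, _ = np.polyfit(np.log10(f[band]), np.log10(psd[band]), 1)
    return slope

# Synthetic hourly wind farm output, used only to exercise the function.
rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
series = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)
print(f"low-frequency PSD slope: {low_frequency_slope(series):.2f}")
```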
Second, we propose a new approach, based on Kriging techniques, to predict wind farm outputs.
We introduce Kriging techniques for spatial prediction, modeling semivariograms for spatial correlation, and mathematical formulation of the Kriging system.
The aim of spatial modeling is to calculate a target value of wind production at unmeasured or new locations from the values already measured at existing locations, taking into account the spatial correlation between the measured values.
We propose the multivariate spatial approach based on Co-Kriging to consider multiple variables for better prediction.
Co-Kriging is a multivariate spatial technique to predict spatially distributed and correlated variables and it adds auxiliary variables to a single variable of interest at unmeasured locations.
Third, we develop the Augmented Kriging-based Model, to predict power outputs at unmeasured or new wind farms that are geographically distributed in a region.
The proposed spatial prediction model consists of three stages: collection of wind farm data for spatial analysis, performance of spatial analysis and prediction, and verification of the predicted wind farm outputs.
The proposed spatial prediction model provides univariate prediction based on Universal Kriging techniques and multivariate prediction based on Universal and Co-Kriging techniques. The proposed multivariate prediction model considers multiple variables: the measured wind power output as the primary variable and the type or hub height of the wind turbines, or the low-frequency slope, as a secondary variable. The multivariate problem is solved by Co-Kriging techniques.
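The univariate case can be sketched as ordinary kriging with an assumed exponential semivariogram; the AKM itself uses Universal and Co-Kriging through R and the Gstat library, so the coordinates, outputs, and variogram parameters below are purely illustrative.

```python
import numpy as np

def exp_variogram(h, nugget=0.0, sill=1.0, corr_range=50.0):
    """Exponential semivariogram model gamma(h)."""
    return nugget + sill * (1.0 - np.exp(-h / corr_range))

def ordinary_kriging(coords, values, target, **vg):
    """Predict the value at `target` from measured (coords, values)."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    # Kriging system: [[Gamma, 1], [1^T, 0]] [w, mu]^T = [gamma_0, 1]^T
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d, **vg)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(coords - target, axis=1), **vg)
    w = np.linalg.solve(A, b)[:n]
    return float(w @ values)

# Invented wind farm locations (km) and mean outputs (MW).
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 15.0], [12.0, 12.0]])
values = np.array([32.0, 28.0, 40.0, 35.0])
prediction = ordinary_kriging(coords, values, np.array([5.0, 5.0]), sill=30.0, corr_range=20.0)
print(f"predicted output at new site: {prediction:.1f} MW")
```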
In addition, we propose the $p$ indicator as a categorical variable that reflects the data configuration of wind farms connected to electrical power grids.
Although the interconnection voltage does not influence the wind regime, it does affect transmission system issues such as the level of curtailments, which, in turn, affect power production.
Voltage level is therefore used as a proxy to the effect of the transmission system on power output.
The Augmented Kriging-based Model (AKM) is implemented in the R system environment, and the latest Gstat library is used for the implementation of the AKM.
Fourth, we demonstrate the performance of the proposed spatial prediction model based on Kriging techniques in the context of the McCamey and Central areas of ERCOT CREZ.
Spatial prediction of ERCOT wind farms is performed in daily, weekly, and monthly time scales for January to September 2009.
These time scales all correspond to the lowest frequency range of the estimated PSD.
We propose a merit function to provide practical information to find optimal wind farm sites based on spatial wind farm output prediction, including correlation with other wind farms.
Our approach can predict what will happen when a new wind farm is added at various locations.
Fifth, we propose the Augmented Sequential Outage Checker (ASOC) as a possible approach to study the transmission system, including grid integration of wind-powered generation resources.
We analyze cascading outages caused by a combination of thermal overloads, low voltages, and under-frequencies following an initial disturbance using the ASOC.
|
38 |
BMCLua: Metodologia para Verificação de Códigos Lua utilizando Bounded Model Checking / BMCLua: a methodology for verifying Lua code using bounded model checking. Januário, Francisco de Assis Pereira, 01 April 2015 (has links)
CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico / The development of programs written in the Lua programming language, which is widely used in applications for digital TV and games, can give rise to errors, deadlocks, arithmetic overflow, and division by zero. This work proposes a methodology for checking programs written in the Lua programming language using the Efficient SMT-Based Context-Bounded Model Checker (ESBMC) tool, which represents the state of the art in context-bounded model checking. ESBMC is used for ANSI-C/C++ programs and has the ability to verify array out-of-bounds accesses, division by zero, and user-defined assertions. The proposed approach consists in translating programs written in Lua into an intermediate language, which is then verified by ESBMC. The translator is developed with the ANTLR (ANother Tool for Language Recognition) tool, which is used to build the lexer and parser from the Lua language grammar. This work is motivated by the need to extend the benefits of bounded model checking, based on satisfiability modulo theories, to programs written in the Lua programming language. The experimental results show that the proposed methodology can be very effective with regard to model checking (safety) properties of the Lua programming language. / The development of programs written in the Lua programming language, which is widely used in applications for digital TV and games, can give rise to errors, deadlocks, arithmetic overflow, and division by zero. This work aims to propose a verification methodology for programs written in the Lua programming language using the Efficient SMT-Based Context-Bounded Model Checker (ESBMC) tool, which represents the state of the art in context-bounded model checking. ESBMC is applied to embedded ANSI-C/C++ programs and is able to verify violations of array bounds, division by zero, and user-defined assertions. The proposed approach consists in translating programs written in Lua into an intermediate language, which is subsequently verified by ESBMC. The translator was developed with the ANTLR ("ANother Tool for Language Recognition") tool, which is used to build lexical and syntactic analyzers from the grammar of the Lua language. This work is motivated by the need to extend the benefits of model checking, based on satisfiability theories, to programs written in the Lua programming language. The experimental results show that the proposed methodology can be very effective with respect to the verification of (safety) properties of the Lua programming language.
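The translation idea behind BMCLua can be illustrated with a toy sketch that maps a tiny Lua subset onto ANSI-C and relies only on the standard assert from <assert.h> as the property to check; the line-by-line rules are an assumption made for illustration, and the real ANTLR-generated translator is far more complete. The generated C file would then be handed to ESBMC for verification.

```python
import re

# Toy line-by-line rules for a tiny Lua subset: integer locals, assignments,
# assert(...) properties, and end-of-block markers. Purely illustrative.
RULES = [
    (re.compile(r"^local\s+(\w+)\s*=\s*(.+)$"), r"int \1 = \2;"),
    (re.compile(r"^(\w+)\s*=\s*(.+)$"),          r"\1 = \2;"),
    (re.compile(r"^assert\((.+)\)$"),            r"assert(\1);"),
    (re.compile(r"^end$"),                       r"}"),
]

def lua_to_c(lua_source):
    body = []
    for line in lua_source.strip().splitlines():
        line = line.strip()
        for pattern, repl in RULES:
            if pattern.match(line):
                body.append("    " + pattern.sub(repl, line))
                break
        else:
            body.append("    /* untranslated: " + line + " */")
    return "#include <assert.h>\nint main(void) {\n" + "\n".join(body) + "\n    return 0;\n}\n"

lua_snippet = """
local x = 10
local y = x * 2
assert(y > x)
"""
print(lua_to_c(lua_snippet))   # the generated C file would then be passed to ESBMC
```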
|
39 |
Automatická konstrukce hlídacích obvodů založených na konečných automatech / Automatic Construction of Checking Circuits Based on Finite Automata. Matušová, Lucie, January 2014 (has links)
The goal of this work was to study active automata learning, to design and implement a software architecture for the automatic construction of a checking circuit for a given unit implemented in an FPGA, and to verify the functionality of the checking circuit by means of fault injection. The checking circuit, a so-called online checker, is responsible for protecting the given unit against faults. The checker is constructed from a model derived through active automata learning, which is driven by communication with a simulator. The LearnLib library, which provides active automata learning algorithms and their optimizations, was used to implement the learning environment. An experimental platform enabling controlled fault injection into a design in an FPGA was designed and implemented in order to test the checker. The experimental results show that, using the checker and reconfiguration, the error rate of the design can be reduced by more than 98%.
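LearnLib is a Java library driven by membership and equivalence queries; the Python sketch below only illustrates the L*-style loop behind it (observation table, closedness, counterexample handling, with equivalence approximated by bounded testing). The member function stands in for communication with the simulator, and the even-parity target language is invented for the example.

```python
from itertools import product

ALPHABET = ["a", "b"]

def member(word):
    """Membership query: stands in for running the unit/simulator on an input word.
    Toy behaviour: accept words with an even number of 'a's."""
    return word.count("a") % 2 == 0

def find_counterexample(hyp, depth=6):
    """Bounded-testing equivalence query: search for a word the hypothesis misclassifies."""
    for n in range(depth + 1):
        for w in map("".join, product(ALPHABET, repeat=n)):
            state = hyp["start"]
            for a in w:
                state = hyp["delta"][(state, a)]
            if (state in hyp["accept"]) != member(w):
                return w
    return None

def learn(max_rounds=20):
    S, E = [""], [""]                    # access prefixes and distinguishing suffixes
    table = {}

    def fill():
        for s in S + [s + a for s in S for a in ALPHABET]:
            for e in E:
                if s + e not in table:
                    table[s + e] = member(s + e)

    def row(s):
        return tuple(table[s + e] for e in E)

    for _ in range(max_rounds):
        fill()
        # Closedness: every one-letter extension of S must have a row already present in S.
        changed = True
        while changed:
            changed = False
            for s, a in product(list(S), ALPHABET):
                if row(s + a) not in {row(t) for t in S}:
                    S.append(s + a)
                    fill()
                    changed = True
        # Build the hypothesis automaton from the rows.
        rep = {row(s): s for s in S}
        hyp = {
            "start": row(""),
            "accept": {r for r, s in rep.items() if table[s]},
            "delta": {(row(s), a): row(s + a) for s in S for a in ALPHABET},
        }
        cex = find_counterexample(hyp)
        if cex is None:
            return hyp
        # Counterexample handling: add all its suffixes as new distinguishing experiments.
        for i in range(len(cex) + 1):
            if cex[i:] not in E:
                E.append(cex[i:])
    return hyp

hypothesis = learn()
n_states = len({hypothesis["delta"][k] for k in hypothesis["delta"]} | {hypothesis["start"]})
print(f"{len(hypothesis['accept'])} accepting state(s) in a {n_states}-state hypothesis")
```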
|
40 |
Verknüpfung von formaler Verifikation und modellgetriebener Entwicklung / Combining formal verification and model-driven development. Ammann, Christian, 29 April 2015 (has links)
Model-driven development (MDD) is an approach for automatically translating formal models into executable software. Although this procedure reduces the frequency of errors in the generated source code, the source models themselves can still be faulty. Additional checking techniques, such as the use of a model checker, are therefore worthwhile: a model checker automatically ensures that a model satisfies all stated requirements. The goal of this work is thus to integrate a model checker into the model-driven development process in order to improve the quality of software products. For this purpose, the so-called DSL Verification Framework (DVF) is presented. It provides developers with ready-made language constructs to simplify the implementation of parsers and transformers. Furthermore, it also takes the state space explosion into account so that larger models remain verifiable. To evaluate the practical use of the DVF, two industrial case studies are carried out.
|