41 |
Static MySQL Error Checking. Zarinkhail, Mohammad Shuaib, January 2010
Masters of Science / Coders of databases repeatedly face the problem of checking their Structured Query Language (SQL) code, and instructors face the difficulty of checking student projects and lab assignments in database courses. We collect and categorize common MySQL programming errors into three groups: data definition errors, data manipulation errors, and transaction control errors. We build these into a comprehensive list of MySQL errors that novices are inclined to make during database programming, drawn both from the technical literature and directly from errors noted in assignments handed in by students. In the results section of this research, we check and summarize occurrences of these errors according to three characteristics: semantics, syntax, and logic. These data form the basis of a future static MySQL checker that will eventually assist database coders in correcting their code automatically. The errors also form a useful checklist to guide students away from the mistakes they are prone to make.
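As a rough illustration of the intended checker, the sketch below flags one invented error from each of the three categories; the example statements and the pattern-matching checks are illustrations only, not the thesis's actual error list or method:

```python
import re

# Invented examples of the three error categories (not the thesis's list).
EXAMPLES = {
    "data definition":     "CREATE TABLE t (id INT PRIMARY KEY, id INT);",
    "data manipulation":   "INSERT INTO t (id, name) VALUES (1);",
    "transaction control": "START TRANSACTION; UPDATE t SET id = 2;",
}

def naive_static_checks(sql: str) -> list[str]:
    """Toy static checks in the spirit of the proposed MySQL checker."""
    findings = []
    # Data definition: duplicate column name in a CREATE TABLE definition.
    m = re.search(r"CREATE\s+TABLE\s+\w+\s*\((.*)\)", sql, re.I | re.S)
    if m:
        cols = [c.strip().split()[0].lower() for c in m.group(1).split(",") if c.strip()]
        if len(cols) != len(set(cols)):
            findings.append("duplicate column name in CREATE TABLE")
    # Data manipulation: INSERT column list and VALUES list disagree in length.
    m = re.search(r"INSERT\s+INTO\s+\w+\s*\(([^)]*)\)\s*VALUES\s*\(([^)]*)\)", sql, re.I)
    if m and len(m.group(1).split(",")) != len(m.group(2).split(",")):
        findings.append("INSERT column list and VALUES list differ in length")
    # Transaction control: transaction opened but never closed.
    if re.search(r"START\s+TRANSACTION", sql, re.I) and not re.search(r"\b(COMMIT|ROLLBACK)\b", sql, re.I):
        findings.append("transaction opened but never committed or rolled back")
    return findings

for category, statement in EXAMPLES.items():
    print(f"{category}: {naive_static_checks(statement)}")
```

A real checker would parse the SQL rather than pattern-match it, but the sketch shows how each error category can map to a mechanical check.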
|
42 |
A formal framework for run-time verification of Web applications: an approach supported by scope-extended linear temporal logic. Haydar, May, January 2007
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
|
43 |
Redações do ENEM: estudo dos desvios da norma padrão sob a perspectiva de corpos / ENEM essays: a study of deviations from the standard norm from a corpus perspective. Pinheiro, Gisele Montilha, 27 March 2008
Deviations from the standard norm, usually called "mistakes", are common in the writing of learners of the standard variety of a native language such as Brazilian Portuguese. Treated as "an evil to be fought", they are in fact important evidence of the process by which native speakers assimilate standard written language. They reveal the direction of the change that naturally occurs in a language, showing, for instance, the obsolescence of traditional grammars, which reject certain constructions that are nonetheless very frequent. But is it possible to detect a pattern in these deviations? Are there deviations typical of a particular writer profile? These questions motivated the present investigation, which rests on the view that such studies are empirical in nature and on the notion that language operates as a probabilistic system in which trends, of change for example, can be forecast. This is, therefore, an investigation in the light of Corpus Linguistics. We compiled a corpus, named Corvo, of essays from the 2002 edition of the Brazilian National Secondary Education Examination (ENEM), provided by the National Institute for Educational Studies and Research (INEP) together with selected traits of the writers' profiles. The corpus covers a specific bracket of texts, those with the worst ENEM performance in the command-of-the-standard-norm category, so we observed texts in which deviations are presumably more frequent and more varied. The research methodology relied on the automatic grammar checker ReGra, which is very popular in Brazil and helps users write standard Portuguese correctly. In addition, we built our own tool for detecting and classifying grammatical deviations, increasing the capacity for automatic treatment of the texts. It was thus possible to generate a version of the corpus annotated with deviations, i.e., the texts indicate when each deviation occurs and which type it is. The result is a mapping of the Corvo, an overview of the deviations typical of a given writer profile. We found poor spelling to be the typical trait of the group investigated but, above all, that spelling is the engine on which fully functional automatic grammar checking depends. The ReGra checker proved unable to process texts by this kind of writer satisfactorily; even so, it showed that these texts present grammatical deviations whose treatment is complex and whose correction by the checker, when it happens, barely alters their overall quality. Regarding the deviation typology, we confirmed the validity of the typology applied in the research, which comes from ReGra and therefore stands apart from orthodox theoretical discussions. Certain types of deviation do recur, at a frequency that allows us to conclude that some grammatical rules taken as basic (e.g., punctuation, agreement, and government of prepositions) are only weakly assimilated. Regarding writer profiles, we found that the texts with the greatest potential for revision, that is, those whose quality improves significantly under targeted checking interventions, are precisely those produced by students completing secondary education rather than by those who had already graduated.
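As a sketch of what a deviation-annotated corpus makes possible, the fragment below counts deviation types; the inline tag format and the example sentence are invented for illustration, not the actual Corvo annotation scheme:

```python
import re
from collections import Counter

# Invented annotation scheme: <dev type="...">...</dev> marks a deviation
# and its type. The real Corvo annotations may be encoded differently.
annotated_text = (
    'Eles <dev type="agreement">foi</dev> embora sem avisar e '
    '<dev type="spelling">eskreveu</dev> um bilhete'
    '<dev type="punctuation"> ,</dev> curto.'
)

# Frequency of each deviation type: the raw material for mapping the
# typical deviations of a writer profile.
counts = Counter(re.findall(r'<dev type="([^"]+)">', annotated_text))
print(counts)  # e.g. Counter({'agreement': 1, 'spelling': 1, 'punctuation': 1})
```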
|
44 |
Questões ortográficas: Rafael Bluteau e o Novo Acordo num percurso historiográfico [Orthographic issues: Rafael Bluteau and the New Accord in a historiographical survey]. Silva, Raisimar Arruda da, 18 May 2010
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This master's thesis takes as its subject an examination of the orthography developed by Rafael Bluteau, following the research line of History and Description of the Portuguese Language. The period in question is the pseudo-etymological period, specifically the second half of the seventeenth century and the first half of the eighteenth. With the aim of standardizing Portuguese spelling, Bluteau issued orthographic rules and laws based on the origin of words; the corpus of the study is his Vocabulario Portuguez e Latino. The research problem lies in identifying orthographic rules that remain unsettled to this day; in other words, in asking to what extent the problems pointed out by Bluteau recur in the twenty-first century. It is from this web of imprecisions that the need to investigate such problems arises. The objective guiding this research is to contribute to studies of the orthography of the Portuguese language. To that end, Rafael Bluteau's orthographic rules are described and then compared with the New Orthographic Accord. The methodological foundation follows the postulates of Linguistic Historiography, based on the work of Konrad Koerner and his three principles: contextualization (Bluteau is portrayed in the historical and cultural context of his time), immanence (his orthographic proposal is described and then explained using his own terminology), and adequacy (Bluteau's proposal is set against that of the New Accord using modern terminology). The results were satisfactory: after examining Rafael Bluteau's orthography and comparing it with that of the New Accord, a certain rejection of his orthographic rules by eighteenth-century Portuguese society was observed, owing to the author's place of birth, even though there was political will on the part of D. João V.
|
45 |
Using timed automata formalism for modeling and analyzing home care plans / L'utilisation du formalisme des automates temporisés pour la modélisation et l'analyse des plans de soins à domicile. Gani, Kahina, 02 December 2015
In this thesis we are interested in the problems underlying the design and management of home care plans. A home care plan defines the set of medical and/or social activities that are carried out day after day at a patient's home. Such a care plan is usually constructed through a complex process involving a comprehensive assessment of the patient's needs as well as his or her social and physical environment. Specifying home care plans is challenging for several reasons: they are inherently unstructured processes that involve repetitive but irregular activities, whose specification requires complex temporal expressions. These features make home care plans difficult to model using traditional process modeling technologies. First, we present a DSL (Domain Specific Language) based approach tailored to expressing home care plans with high-level, user-oriented abstractions; through this DSL we propose a temporalities language for specifying the temporal constraints of home care plan activities. Then, we describe how home care plans, formalized as timed automata, can be generated from these abstractions. We propose a three-step approach that consists in (i) mapping elementary temporal specifications to timed automata called pattern automata, (ii) combining pattern automata to build activity automata using our composition algorithm, and (iii) constructing the global care plan automaton. The resulting care plan automaton encompasses all the allowed schedules of activities for a given patient. Finally, we show how verification and monitoring of the resulting care plan can be handled with existing techniques and tools, in particular the UPPAAL model checker.
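As a minimal sketch of step (i), the fragment below encodes a pattern automaton in plain Python; the encoding, the clock convention, and the example pattern are all assumptions for illustration, not the thesis's actual pattern catalogue:

```python
from dataclasses import dataclass, field

@dataclass
class TimedAutomaton:
    """A timed automaton: locations plus clock-guarded, clock-resetting edges."""
    locations: set
    initial: str
    clocks: set
    # Each edge: (source, guard over clocks, action, clocks to reset, target).
    edges: list = field(default_factory=list)

# Hypothetical pattern: an activity to be performed daily between 8:00 and
# 10:00, with clock x counting hours since the start of the day.
daily_morning_activity = TimedAutomaton(
    locations={"waiting", "done"},
    initial="waiting",
    clocks={"x"},
    edges=[
        ("waiting", "8 <= x <= 10", "perform_activity", [],    "done"),
        ("done",    "x == 24",      "new_day",           ["x"], "waiting"),
    ],
)
```

Steps (ii) and (iii) would then combine such pattern automata, in the style of a synchronized product, into activity automata and finally into the global care plan automaton.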
|
46 |
Verificação e comprovação de erros em códigos C usando bounded model checker [Verification and confirmation of errors in C code using a bounded model checker]. Rocha, Herbert Oliveira, 04 February 2011
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The use of computer-based systems in several domains has increased significantly over the last years, and one of the main challenges in the development of such systems is ensuring their correctness and reliability. Software verification therefore plays an important role in ensuring overall product quality, aimed mainly at the characteristics of predictability and reliability. In the context of software verification with the model checking technique, bounded model checkers have already been applied to discover subtle errors in real system designs, contributing effectively to this verification process. The value of the counterexamples and safety properties generated by bounded model checkers for creating test cases and debugging these systems is widely recognized in the state of the practice. When a bounded model checker (BMC) finds an error it produces a counterexample. However, BMCs often produce counterexamples that are large or difficult to understand and manipulate, mainly because of the size of the software and the values chosen by the underlying solver. In this work we demonstrate and analyze the use of formal methods (through the model checking technique) in the development of C programs, exploiting features already provided by model checking, such as the counterexample and the identification and verification of safety properties. We present two approaches: (i) a method to integrate the bounded model checker ESBMC with the CUnit framework, which extracts the safety properties generated by ESBMC in order to generate test cases automatically using the rich set of assertions provided by CUnit; and (ii) a method that automates the collection and manipulation of counterexamples in order to instantiate the analyzed C program and confirm the root cause of the identified error. These methods may be seen as complementary to the verification performed by BMCs. We show the effectiveness of the proposed methods on publicly available benchmarks of C programs.
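To make the two approaches concrete, the sketch below turns BMC output into an executable CUnit test: given a violated safety property and the variable assignment of a counterexample, it emits a C test case that reproduces the violation. Both inputs are invented for illustration; ESBMC's actual output format is different and considerably richer:

```python
# Hypothetical counterexample data, as a BMC might report it: a violated
# safety property and the variable values along the error trace.
counterexample = {"x": 3, "y": -4}
safety_property = "x + y >= 0"  # invented property, reported as violated

declarations = "\n    ".join(f"int {var} = {val};" for var, val in counterexample.items())
test_source = f"""#include <CUnit/Basic.h>

/* Test instantiated from the counterexample: running it reproduces,
   and thereby confirms, the violation found by the bounded model checker. */
void test_from_counterexample(void) {{
    {declarations}
    CU_ASSERT({safety_property});
}}
"""
print(test_source)
```

The generated test fails exactly when the solver's chosen values do violate the property, which is the confirmation step the second approach automates.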
|
48 |
Webový portál pro aplikaci metodik pro zvyšování spolehlivosti / Web Portal for Fault Tolerant Methodology Application. Poupě, Petr, January 2012
This master's thesis deals with the development of a web portal for the application of fault-tolerance methodologies. It introduces the issue of fault-tolerant systems and analyzes the requirements of users working in this field. It describes the development cycle from analysis and specification of the application's system design through to implementation and testing, focusing in particular on the design of a portal that provides a comprehensive and versatile solution to the problem and leads to the final implementation. This implementation is part of the thesis.
|
49 |
Multi-Agent Games of Imperfect Information: Algorithms for Strategy Synthesis. Åkerblom Jonsson, Viktor; Berisha, David, January 2021
The aim of this project was to improve upon a tool for strategy synthesis for multi-agent games of imperfect information against nature, and to compare the tool with the original tool we improved upon and with the Strategic Model Checker (SMC). For the strategy synthesis, an existing construction for expanding the games, the Multi-Agent Knowledge-Based Subset Construction, was used. The construction creates a new knowledge-based game in which strategies can be tested. Strategies were synthesized for the individual agents, and joint profiles of the individual strategies were then tested to see whether they were winning. Four different algorithms for traversing the game graphs were tested against the other existing tools. The new, improved tool was faster at synthesizing a strategy than both the old tool and the SMC for almost all games tested. For the games where the new tool was outperformed, the results indicate that this is due to a combination of chance and how the games are perceived by the tools. No single algorithm or tool proved to be the best performing for all games. / Bachelor's thesis in electrical engineering (Kandidatexjobb i elektroteknik) 2021, KTH, Stockholm
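A minimal single-agent sketch of the knowledge-based subset construction follows; the encoding is an assumption for illustration, and the multi-agent construction used in the project applies a construction of this kind per agent and combines the results:

```python
def knowledge_subset_construction(initial, trans, obs, actions):
    """Expand a game of imperfect information into a knowledge-based game
    whose states are the sets of states the agent considers possible.

    trans: dict mapping (state, action) -> set of successor states,
           with nature resolving the nondeterminism;
    obs:   dict mapping each state to what the agent observes there.
    """
    start = frozenset({initial})
    frontier, visited, edges = [start], {start}, {}
    while frontier:
        K = frontier.pop()
        for a in actions:
            successors = set().union(*(trans.get((s, a), set()) for s in K))
            # The agent's new knowledge: successors grouped by observation.
            by_obs = {}
            for t in successors:
                by_obs.setdefault(obs[t], set()).add(t)
            for o, cell in by_obs.items():
                K2 = frozenset(cell)
                edges[(K, a, o)] = K2
                if K2 not in visited:
                    visited.add(K2)
                    frontier.append(K2)
    return visited, edges

# Tiny invented game: action "go" from s0 may land in s1 or s2, which the
# agent cannot tell apart, so its knowledge after "go" is {s1, s2}.
trans = {("s0", "go"): {"s1", "s2"}}
obs = {"s0": "start", "s1": "room", "s2": "room"}
states, edges = knowledge_subset_construction("s0", trans, obs, ["go"])
print(edges)  # one edge into the knowledge state frozenset({'s1', 's2'})
```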
|
50 |
ScaleSem: model checking et web sémantique [ScaleSem: model checking and the Semantic Web]. Gueffaz, Mahdi, 11 December 2012
The rapid growth of networks, and of the Internet in particular, has considerably widened the gap between heterogeneous information systems. A review of studies on the interoperability of heterogeneous information systems shows that work in this field aims at resolving problems of semantic heterogeneity. The W3C (World Wide Web Consortium) proposes standards for representing semantics through ontologies, and ontologies are becoming an indispensable support for the interoperability of information systems, particularly at the semantic level. The structure of an ontology is a combination of concepts, properties and relations; this combination is also called a semantic graph. Several languages have been developed for the Semantic Web, most of them using the XML (eXtensible Markup Language) syntax. OWL (Ontology Web Language) and RDF (Resource Description Framework), both based on XML, are the most important Semantic Web languages. RDF is the first W3C standard for enriching Web resources with detailed descriptions, and it makes the automatic processing of Web resources easier. The descriptions may be characteristics of resources, such as the author or the content of a website; such descriptions are metadata, and enriching the Web with metadata enables the development of what is called the Semantic Web. RDF is also used to represent semantic graphs corresponding to a specific knowledge model. RDF files are generally stored in a relational database and manipulated using SQL or derived languages such as SPARQL. Unfortunately, this solution, well suited to small RDF graphs, is not well suited to large ones. These graphs evolve quickly, and adapting them to change can introduce inconsistencies. Applying changes while maintaining the consistency of semantic graphs is a crucial task, costly in time and complexity, so an automated process is essential. For these large RDF graphs, we suggest a new approach based on formal verification, namely model checking. Model checking is a verification technique that explores all possible states of a system; in this way, one can show that a model of a given system satisfies a given property. This thesis contributes a new method for verifying and querying semantic graphs. We propose an approach, named ScaleSem, that transforms semantic graphs into graphs understandable by a model checker (the verification tool of the model checking method). Software tools are needed to translate a graph described in one formalism into the same graph (or an adaptation of it) described in another formalism.
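A hedged sketch of the kind of translation involved: the fragment below is not the ScaleSem tool itself; it merely loads a toy RDF graph with rdflib and prints its triples as the labeled transition list a model checker front end could consume. The predicate name and data are invented:

```python
from rdflib import Graph, URIRef

# Toy RDF data; ScaleSem's real inputs are large semantic graphs.
turtle = """
@prefix ex: <http://example.org/> .
ex:draft  ex:nextState ex:review .
ex:review ex:nextState ex:published .
"""

g = Graph()
g.parse(data=turtle, format="turtle")

# Read each matching (subject, predicate, object) triple as a labeled
# transition between states of a Kripke-structure-like state graph.
NEXT = URIRef("http://example.org/nextState")
for s, _, o in g.triples((None, NEXT, None)):
    print(f"{s} --> {o}")
```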
|