341

Relieving the cognitive load of constructing molecular biological ontology-based queries by means of visual aids.

O'Neill, Kieran. January 2007 (has links)
The domain of molecular biology is complex and vast. Bio-ontologies and information visualisation have arisen in recent years as means of helping biologists make sense of this information. Ontologies can enable the construction of conceptual queries, but existing systems for doing so are too technical for most biologists. OntoDas, the software developed as part of this thesis work, demonstrates how applying techniques from information visualisation and human-computer interaction can yield software that enables biologists to construct conceptual queries. / Thesis (M.Comp.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2007.
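To make "conceptual query" concrete: such a query combines ontology terms with set operations over the genes annotated to them. The sketch below is purely illustrative; the Gene Ontology term labels and the toy annotation table are invented stand-ins, not OntoDas's actual data model or API.

```python
# Illustrative sketch: a "conceptual query" as set algebra over ontology
# annotations. The gene/term assignments below are made up for the example.
annotations = {
    "GO:0006915 apoptotic process": {"BAX", "TP53", "CASP3"},
    "GO:0016301 kinase activity":   {"TP53", "AKT1", "MAPK1"},
}

# Conceptual query: genes annotated with BOTH terms (intersection), which a
# visual tool can let the user compose without writing any code.
result = annotations["GO:0006915 apoptotic process"] & \
         annotations["GO:0016301 kinase activity"]
print(sorted(result))  # ['TP53']
```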
342

Representation of scientific knowledge for transfer to industry: application to the recovery of end-of-life wood-based products

Jmal, Aymen 19 February 2013 (has links)
This thesis addresses the transfer of scientific knowledge about the recovery of end-of-life wood-based products to practitioners in the wood sector. The research question is: how can scientific knowledge on recovered wood be collected, represented, and transmitted so that it can be assimilated and used by non-scientist practitioners in the wood sector? Knowledge was first collected through interviews with specialists in the reuse of recovered wood, combined with knowledge acquisition from scientific publications. The collected knowledge was then reformulated to ease its transfer: relevant concepts, influence relations between concepts, and relevant scientific results were represented, respectively, as concept maps, influence graphs, and knowledge sheets. A canonical model of concept maps was proposed to give the concepts a homogeneous representation.

Transmission, assimilation, and potential use of the transferred knowledge were treated as follows. Transmission is handled through an electronic (hypermedia) knowledge book. Assimilation is supported by the reformulation of the knowledge and its graphical representation following the canonical concept-map model and a predefined sheet format. Potential use is reinforced by representing the levers of action on domain concepts as influence graphs. Transfer performance was evaluated in terms of the user's comprehension of the book's content, the cognitive load incurred while using the knowledge book, and the disorientation it caused. The experiment showed that the canonical form is intuitive and that, like navigation within the book, it causes neither disorientation nor cognitive overload, which favours assimilation and use of the book's content. The results demonstrate, within the wood sector, the value of the proposed representation for transferring scientific knowledge to professionals.
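The influence graphs mentioned above lend themselves to a very small data structure. The sketch below is a minimal illustration, assuming signed directed edges between domain concepts; the concepts and signs are invented for the example, not taken from the thesis.

```python
# Minimal sketch of an influence graph: directed edges carry the sign of the
# influence one concept has on another. Concepts and signs here are invented
# for illustration only.
influences = {
    ("moisture content", "fungal decay"): +1,        # more moisture, more decay
    ("fungal decay", "mechanical strength"): -1,
    ("preservative treatment", "fungal decay"): -1,
}

def levers_on(target):
    """Concepts that directly influence `target`, with the influence sign."""
    return {src: sign for (src, dst), sign in influences.items() if dst == target}

print(levers_on("fungal decay"))
# {'moisture content': 1, 'preservative treatment': -1}
```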
343

ATTENTION TO SHARED PERCEPTUAL FEATURES INFLUENCES EARLY NOUN-CONCEPT PROCESSING

Ryan Peters (7027685) 15 August 2019 (has links)
Recent modeling work shows that patterns of shared perceptual features relate to the group-level order of acquisition of early-learned words (Peters & Borovsky, 2019). Here we present results from two eye-tracked word recognition studies showing that patterns of shared perceptual features likewise influence processing of known and novel noun-concepts in individual 24- to 30-month-old toddlers. In the first study (Chapter 2, N=54), we explored the influence of perceptual connectivity on both initial attentional biases to known objects and subsequent label processing. In the second study (Chapter 3, N=49), we investigated whether perceptual connectivity influences patterns of attention during learning opportunities for novel object-features and object-labels, subsequent pre-labeling attentional biases, and object-label learning outcomes. Results across studies revealed four main findings. First, patterns of shared (visual-motion and visual-form-and-surface) perceptual features do relate to differences in early noun-concept processing at the individual level. Second, such influences are tentatively at play from the outset of novel noun-concept learning. Third, connectivity-driven attentional biases to both recently learned and well-known objects follow a similar time course and show similar patterns of individual differences. Fourth, initial, pre-labeling attentional biases to objects relate to subsequent label processing, but do not linearly explain effects of connectivity. Finally, we consider whether these findings provide support for shared-feature-guided selective attention to object features as a mechanism underlying early lexico-semantic development.
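In this literature, a word's perceptual connectivity is typically operationalised as the number of perceptual features it shares with other words in the modelled vocabulary. A toy computation, with invented feature norms, might look like this:

```python
# Toy computation of perceptual connectivity: how many visual features a noun
# shares with the rest of a (hypothetical) early vocabulary. Feature norms
# here are invented, not from the thesis's dataset.
features = {
    "dog":  {"has_legs", "has_fur", "moves"},
    "cat":  {"has_legs", "has_fur", "moves"},
    "ball": {"is_round", "moves"},
}

def connectivity(word):
    others = (f for w, f in features.items() if w != word)
    return sum(len(features[word] & f) for f in others)

print({w: connectivity(w) for w in features})
# {'dog': 4, 'cat': 4, 'ball': 2}
```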
344

Data Driven Dense 3D Facial Reconstruction From 3D Skull Shape

Anusha Gorrila (7023152) 13 August 2019 (has links)
This thesis explores a data-driven, machine-learning-based solution for facial reconstruction from three-dimensional (3D) skull shape, for recognizing or identifying unknown subjects during forensic investigation. With over 8,000 unidentified bodies over the past three decades, facial reconstruction of decomposed remains has been a critical aid to identification for forensic practitioners. Historically, clay modelling has been used for facial reconstruction; it not only requires an expert in the field but also demands a substantial amount of time, even after the skull model has been acquired. Such manual reconstruction typically takes from one month to over three months of effort. The solution presented in this thesis uses 3D Cone Beam Computed Tomography (CBCT) data collected from many people to build a model of the relationship of facial skin to skull bone over a dense set of locations on the face. It then uses this skin-to-bone relationship model, learned from the data, to reconstruct a predicted face model from the skull shape of an unknown subject. The thesis also extends the algorithm so that the reconstructed face model can be modified interactively to account for the effects of age or weight: using the predicted face model as a starting point, it creates different hypotheses of facial appearance for different physical attributes. Attributes such as age and body mass index (BMI) are used to show how physical facial appearance changes, with the help of a tool we constructed. This could improve the identification process. The thesis also presents methods designed for testing and validating the facial reconstruction algorithm.
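One way to read the skin-to-bone relationship model is as a per-vertex regression from local skull geometry (plus covariates such as age and BMI) to soft-tissue depth. The sketch below is a heavily simplified, hypothetical rendition: the shapes, features, and regressor are invented, and it assumes dense vertex correspondence across subjects has already been established, which is a substantial step in itself.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Heavily simplified sketch: one tissue-depth regressor per skull vertex,
# trained on CBCT-derived (skull, face) pairs. All shapes and covariates are
# invented; real pipelines must first register skulls into correspondence.
n_subjects, n_vertices = 100, 500
rng = np.random.default_rng(0)
skull_feats = rng.normal(size=(n_subjects, n_vertices, 3))  # local skull geometry
covariates = rng.normal(size=(n_subjects, 2))               # e.g. age, BMI
depths = rng.normal(size=(n_subjects, n_vertices))          # soft-tissue depths

models = []
for v in range(n_vertices):
    X = np.hstack([skull_feats[:, v, :], covariates])
    models.append(Ridge(alpha=1.0).fit(X, depths[:, v]))

# For a new skull, estimate tissue depth at every vertex; a real pipeline
# would then displace each skull vertex along its surface normal by that
# amount to obtain the predicted face surface.
new_skull = rng.normal(size=(n_vertices, 3))
new_cov = np.array([0.2, -0.1])  # hypothetical normalized age, BMI
pred = np.array([m.predict(np.r_[new_skull[v], new_cov][None])[0]
                 for v, m in enumerate(models)])
```

Changing `new_cov` and re-predicting is, in spirit, how interactive age/BMI adjustment of the reconstruction can be exposed to a user.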
345

Photoplethysmogram (PPG) Signal Reliability Analysis in a Wearable Sensor-Kit

Deena Alabed (6634382) 14 May 2019 (has links)
In recent years, there has been an increase in the popularity of wearable sensors such as electroencephalography (EEG) sensors, electromyography (EMG) sensors, gyroscopes, accelerometers, and photoplethysmography (PPG) sensors. This work focuses on PPG sensors, which are used to measure heart rate in real time. They are currently used in many commercial products such as the Fitbit watch and the Muse headband. Due to their low cost and relative implementation simplicity, they are easy to add to custom-built wearable devices.

We built an Arduino-based wearable wrist sensor-kit that consists of a PPG sensor in addition to other low-cost commercial biosensors to measure biosignals such as pulse rate, skin temperature, skin conductivity, and hand motion. The purpose of the sensor-kit is to analyze the effects of stress on students in a classroom based on changes in their biometric signals. We noticed some failures in the measured PPG signal, which could negatively affect the accuracy of our analysis. We conjectured that one cause of failure is movement. Therefore, in this thesis, we build automatic failure detection methods and use them to study the effect of movement on the signal.

Using the sensor-kit, PPG signals were collected in two settings. In the first setting, the participants sat still; these signals were manually labeled and used for signal analysis and method development. In the second setting, signals were acquired in three scenarios with increasing levels of activity; these were used to investigate the effect of movement on the reliability of the PPG sensor.

Four types of failure detection methods were developed: Support Vector Machines (SVM), Deep Neural Networks (DNN), K-Nearest Neighbors (K-NN), and Decision Trees. Classification accuracy is evaluated by comparing the resulting Receiver Operating Characteristic (ROC) curves and Area Above the Curve (AAC), as well as the durations of failure and non-failure sequences. The DNN and Decision Tree results are the most promising, showing the highest failure detection accuracy.

The proposed classifiers are also used to assess the reliability of the PPG sensor in the three activity scenarios. Our findings indicate a significant presence of failures in the measured PPG signals even at rest, increasing with movement. They also show that it is hard to obtain long sequences of pulses without failure. These findings should be taken into account when designing wearable systems that use heart rate values as input.
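As a sketch of how one of these detectors can be framed: failure detection reduces to binary classification over windows of the PPG signal. Everything below is synthetic, and the window features are guesses; the thesis's actual features and labels come from its manually annotated recordings. It only illustrates the shape of the approach with scikit-learn.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# Minimal sketch of PPG failure detection as binary classification over
# signal windows. Data and labels are synthetic placeholders.
rng = np.random.default_rng(1)
n = 1000
signal_windows = rng.normal(size=(n, 100))   # 100-sample PPG windows
labels = rng.integers(0, 2, size=n)          # 1 = failure (placeholder)

# Guessed per-window features: mean, std, peak-to-peak amplitude.
X = np.c_[signal_windows.mean(axis=1),
          signal_windows.std(axis=1),
          np.ptp(signal_windows, axis=1)]
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = DecisionTreeClassifier(max_depth=5).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```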
346

ESTIMATING PHENYLALANINE IN COMMERCIAL FOODS: A COMPARISON BETWEEN A MATHEMATICAL APPROACH AND A MACHINE LEARNING APPROACH

Amruthavarshini Talikoti (6634508) 14 May 2019 (has links)
Phenylketonuria (PKU) is an inherited metabolic disorder affecting 1 in every 10,000 to 15,000 newborns in the United States every year. Caused by a genetic mutation, PKU results in an excessive build-up of the amino acid phenylalanine (Phe) in the body, leading to symptoms including but not limited to intellectual disability, hyperactivity, psychiatric disorders, and seizures. Most PKU patients must follow a strict diet limited in Phe. The aim of this research is to formulate, implement, and compare techniques for Phe estimation in commercial foods using the information on the food label (Nutrition Facts label and ordered ingredient list). Ideally, the techniques should be both accurate and amenable to a user-friendly implementation as a Phe calculator that would help PKU patients monitor their dietary Phe intake.

The first approach is a mathematical one comprising three steps, which were separately proposed as methods by Jieun Kim in her dissertation. It was assumed that the third method, the most computationally expensive, was the most accurate one. However, by performing the three methods as successive steps and combining the results, we actually obtained better results than by using the third method alone.

The first step uses the protein content of the food and Phe:protein multipliers. The second step enumerates all the ingredients in the food and uses the minimum and maximum Phe:protein multipliers of the ingredients along with the protein content. The third step lists the ingredients in decreasing order of their weights, which gives rise to inequality constraints; these constraints hold assuming there is no loss in the preparation process. The inequality constraints are optimized numerically in two phases: the first estimates nutrient content by approximating the ingredient amounts, and the second refines these estimates using the Simplex algorithm. The final Phe range is obtained by intersecting the intervals produced by the three steps. We implemented all three steps as web applications. Our proposed three-step method yields high accuracy (error <= +/- 13.04 mg Phe per serving for 90% of foods).

This mathematical procedure is contrasted with a machine learning approach that uses an existing database as training data to infer the Phe content of any given food. Specifically, we use K-Nearest Neighbors (K-NN) classification with a feature vector containing the (rounded) nutrient data: the Phe content of the test food is a weighted average of the Phe values of its nearest neighbors, using the nutrient values as attributes. Four-fold cross-validation is used to determine the hyper-parameters, and training uses the United States Department of Agriculture (USDA) food nutrient database. Our tests indicate that this approach is not very accurate for general foods (error <= +/- 50 mg Phe per 100 g in about 38% of the foods tested). However, for the low-protein foods typically consumed by PKU patients, the accuracy increases significantly (error <= +/- 50 mg Phe per 100 g in over 77% of foods).

The machine learning approach is more user-friendly than the mathematical approach: it is convenient, fast, and easy to use, since it takes into account just the nutrient information. In contrast, the mathematical method additionally requires a detailed ingredient list as input, which is cumbersome to locate in a food database and enter. However, the mathematical method has the added advantage of providing error bounds for the Phe estimate, and it is more accurate than the ML method. This may be because the nutrition facts alone are not sufficient to estimate Phe, and additional information such as the ingredient list is required.
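The ordered-ingredient constraints of the third step form a small linear program: ingredient amounts must be non-negative, non-increasing in list order, and consistent with the label totals, and the Phe range follows from minimising and maximising total Phe over that feasible set. Here is a toy three-ingredient version with SciPy; all densities and label values are invented for illustration.

```python
from scipy.optimize import linprog

# Toy version of the ordered-ingredient-list step: ingredient amounts w
# (grams per serving) satisfy w1 >= w2 >= w3 >= 0, sum to the serving size,
# and reproduce the label's protein value. Phe per gram of each ingredient
# would come from a reference database; the numbers here are invented.
protein_per_g = [0.10, 0.02, 0.0]   # e.g. flour, sugar, water
phe_per_g     = [5.0, 0.8, 0.0]     # mg Phe per gram (hypothetical)
serving, label_protein = 100.0, 6.0

A_eq = [[1, 1, 1], protein_per_g]
b_eq = [serving, label_protein]
A_ub = [[-1, 1, 0],                 # w2 - w1 <= 0, i.e. w1 >= w2
        [0, -1, 1]]                 # w3 - w2 <= 0, i.e. w2 >= w3
b_ub = [0, 0]                       # non-negativity is linprog's default

# Minimise and maximise total Phe over the feasible set to get the range.
lo = linprog(phe_per_g, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
hi = linprog([-c for c in phe_per_g], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(f"Phe per serving: [{lo.fun:.0f}, {-hi.fun:.0f}] mg")
```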
347

BAYESIAN OPTIMAL DESIGN OF EXPERIMENTS FOR EXPENSIVE BLACK-BOX FUNCTIONS UNDER UNCERTAINTY

Piyush Pandita (6561242) 10 June 2019 (has links)
Researchers and scientists across various areas face the perennial challenge of selecting experimental conditions, or inputs for computer simulations, in order to achieve promising results. The aim of conducting these experiments could be to study the production of a material with great applicability, or to accurately model and analyze a physical process through a high-fidelity computer code. The presence of noise in the experimental observations or simulator outputs, called aleatory uncertainty, is usually accompanied by a limited amount of data due to budget constraints, which gives rise to what is known as epistemic uncertainty. This problem of designing experiments with a limited number of allowable experiments or simulations, under aleatory and epistemic uncertainty, needs to be treated in a Bayesian way. The aim of this thesis is to extend the state of the art in Bayesian optimal design of experiments, where one can optimize and infer statistics of the expensive experimental observation(s) or simulation output(s) under uncertainty.
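A standard concrete instance of this setting is Gaussian-process-based design: fit a GP to the few noisy runs the budget allows (the white-noise kernel term stands in for the aleatory noise, the GP posterior for the epistemic uncertainty), then choose the next experiment via an acquisition function such as expected improvement. This is a generic sketch with an invented objective, not the specific methods developed in the thesis.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def f(x):                      # hypothetical expensive experiment
    return -np.sin(3 * x) - x**2 + 0.7 * x

rng = np.random.default_rng(2)
X = rng.uniform(-1, 2, size=(6, 1))            # small budget: 6 runs
y = f(X).ravel() + rng.normal(0, 0.1, 6)       # aleatory (observation) noise

# GP posterior captures epistemic uncertainty about the response surface.
gp = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(X, y)

grid = np.linspace(-1, 2, 200)[:, None]
mu, sd = gp.predict(grid, return_std=True)
best = y.max()
z = (mu - best) / sd
ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
print("next experiment at x =", grid[ei.argmax(), 0])
```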
348

Ontologies and DSLs in the generation of decision support systems: the SustenAgro case study

Suarez, John Freddy Garavito 03 May 2017 (has links)
Decision Support Systems (DSSs) organize and process data and information to generate results that support decision making in a specific domain. They integrate knowledge from domain experts in each of their components: models, data, the mathematical operations that process the data, and analysis results. In traditional development methodologies, this knowledge must be interpreted and implemented by software developers, because domain experts cannot formalize it in a computable model that can be integrated into a DSS. In practice, the knowledge modeling is therefore carried out by the developers, which biases the domain knowledge and hinders the agile development of DSSs, since the experts cannot modify the code directly. To solve this problem, a method and web tool are proposed that use ontologies, in the Web Ontology Language (OWL), to represent expert knowledge, and a Domain Specific Language (DSL) to model DSS behavior. OWL ontologies are a computable knowledge representation, allowing DSSs to be defined in a format that is understandable and accessible to both humans and machines. This method was used to create the Decisioner framework for instantiating DSSs. Decisioner automatically generates a DSS from an ontology and a description in its DSL, including the DSS interface (built with a Web Components library). An online ontology editor, using a simplified format, lets domain experts modify aspects of the ontology and immediately see the consequences of their changes in the DSS. The method was validated by instantiating the SustenAgro DSS in the Decisioner framework. The SustenAgro DSS evaluates the sustainability of sugarcane production systems in the center-south region of Brazil. Evaluations by sustainability experts from Embrapa Environment (partners in this project) showed that domain experts can change the ontology and the DSL program used, without the help of software developers, and that the system produces correct sustainability analyses.
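Purely to illustrate the architectural idea (declarative expert knowledge from which the DSS is generated, editable without programmers), here is a toy stand-in in Python. The indicator names, weights, and dict format are invented and do not reflect Decisioner's actual OWL ontology or DSL.

```python
# Toy illustration of the idea behind Decisioner: experts maintain a
# declarative description (a dict standing in for the OWL ontology plus DSL
# program), and the DSS behaviour is derived from it, not hand-coded.
# Indicator names and weights are invented, not from SustenAgro.
ontology = {
    "indicators": {
        "soil_quality": {"weight": 0.6, "question": "Soil organic matter (%)?"},
        "water_use":    {"weight": 0.4, "question": "Irrigation (mm/season)?"},
    }
}

def evaluate(answers):
    """Weighted sustainability score from the expert-defined indicators."""
    ind = ontology["indicators"]
    return sum(ind[k]["weight"] * answers[k] for k in ind)

# An expert can edit `ontology` (add an indicator, change a weight) and the
# generated DSS changes behaviour immediately, with no programmer involved.
print(evaluate({"soil_quality": 0.8, "water_use": 0.5}))  # ~0.68
```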
349

A knowledge representation framework for the design and the evaluation of a product variety

Giovannini, Antonio 16 January 2015 (has links)
La conception de variété (ou diversité) de produit est un processus essentiel pour atteindre le niveau de flexibilité requis par la personnalisation de masse. Pendant le processus de conception de la variété, les clients et les experts sont impliqués dans la définition de la meilleure solution. Par conséquent, la compréhension des liens entre les connaissances provenant de ces différents domaines, i.e. client, produit, processus est devenue nécessaire. Dans cette thèse, nous nous intéressons en particulier à la formalisation de ces connaissances. En effet, même si plusieurs efforts ont étés accomplis dans le domaine de la représentation de la connaissance, la pensée logiciste (i.e. utilisation de méthode à base de logiques formelles) reste la base de la majeure partie des travaux sur la formalisation de la connaissance. Des réflexions appropriées sur l’utilisation des logiques peuvent montrer les risques d’ambiguïté de la représentation: l’utilisation de la logique conduit souvent à une représentation sujette à plusieurs interprétations, i.e. une représentation ambiguë. Une représentation avec cette caractéristique ne répond pas à l’exigence de bien comprendre les liens entre les différentes connaissances impliquées dans la conception de la variété. Notre travail s’intéresse, donc, au développement d’un cadre de modélisation de la connaissance de conception basé sur l’anti-logicisme. Les travaux sur les systèmes développés à partir des principes de cette école de représentation de la connaissance montrent à travers des applications concrètes dans les domaines de la robotique ou des systèmes multi-agents que les comportements intelligents peuvent être obtenus sans une représentation de la connaissance basée sur les logiques. Ce cadre permet de développer une variété de produit-processus à partir d’une clientèle définie au départ. Finalement, un critère pour comparer les différentes alternatives de variété générées est aussi proposé. Une méthode pour instancier le cadre de modélisation sur un logiciel de CAO 3D a été développée. De plus, un prototype pour utiliser les modèles de connaissance avec un solveur mathématique a été conçu et développé. Les propositions ont été testées sur un cas d’étude industriel, i.e. batterie froide d’un appareil de réfrigération. Ce test a permis de discuter les avantages et les limites de nos propositions / The product variety design is an essential process in order to deal with the flexibility requested by the mass-customisation. During the product variety stage, customers and expert are involved in the definition of the best variety. Therefore a deep understanding of the links between knowledge coming from the customer domain, product domain and process domain is needed. In this thesis the research focus is on the formalisation of this knowledge. Indeed, even if many efforts are present in the knowledge representation literature, logics are always used to build these links. But appropriate reflections about the use of logics can lead to recognise the risk of ambiguity of the representations, i.e. more than one interpretation of the same represented object are possible. This ambiguity would make the represented knowledge not appropriate for the product variety design. In this work, we propose a framework for the knowledge representation based on the anti-logicism. Since the samples of anti-logicist systems (e.g. 
multi-agents, robots) have shown an intelligent behaviour without a representation based on logics, we use the principles the anti-logicism to propose our knowledge representation framework. A knowledge representation framework that allows to connect the customer requirements to the manufacturing process parameters is proposed. The core feature of the models based on this framework is the non-ambiguity. Indeed, each piece of knowledge that composes the model can be interpreted in one unique way. This feature allows the perfect collaboration between customer, product engineers and process engineering during the variety design stage. Once the pieces of knowledge coming from different domains are integrated in one model, the framework explains how to generate alternatives of product-process variety by starting from a given customer set. Finally a criterion to compare the different generated alternatives of product-process variety is proposed. A method to instantiate the framework on a 3D CAD has been developed. Moreover, a prototype that uses the knowledge model along with a mathematical solver to propose the best variety has been developed. The impact of the framework on the selection process and on the design process of a customisable product (i.e. water coil) is tested. The test of the instantiation and the prototype allows to show the advantages and the limit of the proposals
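To picture the generate-and-compare step: enumerate parameter combinations as candidate variants and rank them by how well they cover the customer set. The attributes, the stand-in performance formula, and the coverage criterion below are all invented for illustration.

```python
from itertools import product

# Toy rendition of variety generation and comparison: enumerate feasible
# product-process alternatives and score how many customers each covers.
# Attributes, options, and the criterion are invented, not from the thesis.
options = {
    "tube_diameter_mm": [8, 10],
    "fin_spacing_mm":   [2.0, 2.5, 3.0],
}
customers = [  # hypothetical required cooling capacities (kW)
    {"capacity_kw": 1.5}, {"capacity_kw": 2.2},
]

def capacity(alt):  # invented stand-in linking design parameters to performance
    return 0.5 * alt["tube_diameter_mm"] / alt["fin_spacing_mm"]

alternatives = [dict(zip(options, v)) for v in product(*options.values())]

def score(alt):     # criterion: number of customers this variant satisfies
    return sum(capacity(alt) >= c["capacity_kw"] for c in customers)

best = max(alternatives, key=score)
print(best, "covers", score(best), "customers")
```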
350

A formal framework to integrate domain knowledge into system design: application to the Event-B formalism

Kherroubi, Souad 21 December 2018 (has links)
Cette thèse vise à définir des techniques pour mieux exploiter les connaissances du domaine dans l’objectif de rendre compte de la réalité de systèmes qualifiés de complexes et critiques. La modélisation est une étape indispensable pour effectuer des vérifications et exprimer des propriétés qu’un système doit satisfaire. La modélisation est une représentation simplificatrice, mais réductionniste de la réalité d’un système. Or, un système complexe ne peut se réduire à un modèle. Un modèle doit s’intégrer dans sa théorie observationnelle pour rendre compte des anomalies qu’il peut y contenir. Notre étude montre clairement que le contexte est la première problématique à traiter car principale source de conflits dans le processus de conception d’un système. L’approche retenue dans cette thèse est celle d’intégrer des connaissances du domaine en associant le système à concevoir à des formalismes déclaratifs qualifiés de descriptifs appelés ontologies. Notre attention est portée au formalisme Event-B dont l’approche correct-par-construction appelée raffinement est le principal mécanisme dans ce formalisme qui permet de faire des preuves sur des représentations abstraites de systèmes pour exprimer/vérifier des propriétés de sûreté et d’invariance. Le premier problème traité concerne la représentation et la modélisation des connaissances du contexte en V&V de modèles. Suite à l’étude des sources de conflits, nous avons établi de nouvelles règles pour une extraction de connaissances liées au contexte par raffinement pour la V&V. Une étude des formalismes de représentation et d’interprétation logiques du contexte a permis de définir un nouveau mécanisme pour mieux structurer les modèles Event-B. Une deuxième étude concerne l’apport des connaissances du domaine pour la V&V. Nous définissons une logique pour le formalisme Event-B avec contraintes du domaine fondées sur les logiques de description, établissons des règles à exploiter pour l’intégration de ces connaissances à des fins de V&V. L’évaluation des propositions faites portent sur des études de cas très complexes telles que les systèmes de vote dont des patrons de conception sont aussi développés dans cette thèse. Nous soulevons des problématiques fondamentales sur la complémentarité que peut avoir l’intégration par raffinement des connaissances du domaine à des modèles en exploitant les raisonnements ontologiques, proposons de définir de nouvelles structures pour une extraction partiellement automatisée / This thesis aims at defining techniques to better exploit the knowledge provided from the domain in order to account for the reality of systems described as complex and critical. Modeling is an essential step in performing verifications and expressing properties that a system must satisfy according to the needs and requirements established in the specifications. Modeling is a representation that simplifies the reality of a system. However, a complex system can not be reduced to a model. A model that represents a system must always fit into its observational theory to account for any anomalies that it may contain. Our study clearly shows that the context is the first issue to deal with as the main source of conflict in the design process of a system. The approach adopted in this thesis is that of integrating knowledge of the domain by associating the system to design with declarative formalisms qualified of descriptive ones that we call ontologies. 
We pay a particular attention to the Event-B formalism, whose correct-by-construction approach called refinement is the main mechanism at the heart of this formalism, which makes it possible to make proofs on abstract representations of systems for expressing and verifying properties of safety and invariance. The first problem treated is the representation and modeling of contextual knowledge in V&V of models. Following to the study looked at the different sources of conflict, we established new definitions and rules for a refinement context knowledge extraction for Event-B V&V. A study of logical formalisms that represent and interpret the context allowed us to define a new mechanism for better structuring Event-B models. A second study concerns the contribution that domain knowledge can make to the V&V of models. We define a logic for the Event-B formalism with domain constraints based on the description logic, and we define rules to integrate domain knowledge for model V&V. The evaluation of the proposals made deal with very complex case studies such as voting systems whose design patterns are also developed in this thesis. We raise fundamental issues about the complementarity that the integration of domain knowledge can bring to Event-B models by refinement using ontological reasoning, and we propose to define a new structures for a partially automated extraction on both levels, namely the V&V
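As a schematic of what "domain knowledge as invariants" looks like in Event-B notation, here is a made-up voting-machine fragment (not one of the thesis's developed patterns): the seen context carries the domain vocabulary, and a domain rule such as "one ballot per voter" becomes an invariant that every refinement's proof obligations must preserve.

```
MACHINE Voting0
SEES VotingCtx                      // context: carrier set VOTERS from the domain
VARIABLES voted, total
INVARIANTS
  inv1: voted ⊆ VOTERS
  inv2: total ∈ ℕ
  inv3: total = card(voted)         // domain rule: one ballot per voter
EVENTS
  INITIALISATION ≙ BEGIN voted ≔ ∅ ∥ total ≔ 0 END
  Vote ≙ ANY v WHERE v ∈ VOTERS ∖ voted THEN
    voted ≔ voted ∪ {v} ∥ total ≔ total + 1
  END
END
```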
