21

CEManTIKA: a Domain-independent framework for designing context sensitive systems

SANTOS, Vaninha Vieira dos 31 January 2008 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / At a time when users must process an ever-growing amount of information and perform increasingly complex tasks in less time, introducing the concept of context into computing systems becomes a necessity. Context is defined as the interrelated conditions in which something exists or occurs. Context is what makes it possible to identify what is and is not relevant in a given situation. Context-sensitive systems are those that use context to provide information or services relevant to the execution of a task. Designing a context-sensitive system is not trivial, since one must deal with questions such as what kind of information to treat as context, how to represent that information, how it can be acquired and processed, and how to design the system's use of context. Although there is work addressing specific challenges in the development of context-sensitive systems, most solutions are proprietary or restricted to a particular kind of application and are not easily replicable across application domains. A further problem is that software designers find it difficult to specify exactly what to consider as context and how to design its representation, management, and use. This thesis proposes a framework to support the design of context-sensitive systems in different domains, composed of four main elements: (i) a generic architecture for context-sensitive systems; (ii) a domain-independent context metamodel that guides context modeling in different applications; (iii) a set of UML profiles covering the structure of context and of context-sensitive behavior; and (iv) a process that directs the activities related to context specification and to the design of context-sensitive systems. To investigate the feasibility of the proposal, we designed two applications in different domains. For one of these applications, a working prototype was built and evaluated by end users.
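The abstract describes the context metamodel and UML profiles only at a high level. As a hedged illustration of the kind of domain-independent structure such a metamodel might prescribe, the Python sketch below uses hypothetical class and attribute names (ContextualEntity, ContextualElement, relevant_for); it is not CEManTIKA's actual metamodel.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ContextualElement:
    """A single piece of context (e.g., 'location', 'current task') attached to an entity."""
    name: str
    acquisition: Callable[[], object]                       # how the value is obtained (sensor, user input, ...)
    relevant_for: List[str] = field(default_factory=list)   # tasks for which this element is relevant

@dataclass
class ContextualEntity:
    """Anything whose context matters to the application (user, device, environment)."""
    name: str
    elements: List[ContextualElement] = field(default_factory=list)

    def context_for(self, task: str) -> dict:
        """Collect only the contextual elements relevant to the given task."""
        return {e.name: e.acquisition() for e in self.elements if task in e.relevant_for}

# Hypothetical usage: a user entity whose location is relevant to a 'recommend_restaurant' task.
user = ContextualEntity("user", [
    ContextualElement("location", acquisition=lambda: (-8.05, -34.9),
                      relevant_for=["recommend_restaurant"]),
])
print(user.context_for("recommend_restaurant"))
```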
22

Abordagem para guiar a reprodução de experimentos computacionais: aplicações em biologia computacional / An approach to guide the reproduction of computational experiments: applications in computational biology

Knop, Igor de Oliveira 31 March 2016 (has links)
Systems Biology is one of the most powerful emerging areas of the third millennium, combining, in an interdisciplinary way, knowledge and tools from Biology, Computer Science, Medicine, Chemistry and Engineering. However, the continued development of computational experiments is accompanied by problems such as the manual integration of simulation and analysis tools, the loss of models due to software obsolescence, and the difficulty of reproducing experiments for lack of details about the execution environment used. Most quantitative models published in Biology are lost because they either are no longer available or are insufficiently characterized to allow reproduction. This work proposes an approach to guide the recording of in silico experiments with a focus on their reproduction. The approach involves the creation of a series of annotations during computational modeling, supported by a software environment in which the researcher carries out tool integration, process description, and the execution of experiments. The goal is to capture the modeling process non-invasively in order to increase the exchange of knowledge, allow repetition and validation of results, and reduce rework in interdisciplinary research groups. A prototype environment was built, and two different tool workflows were integrated as case studies. The first uses models and tools from cardiac electrophysiology to build new applications on top of the environment. The second presents a new use of system-dynamics metamodeling to simulate the response of the innate immune system in a planar section of tissue. Complete capture of the simulation workflow and of the output-data processing was observed in both control experiments. The environment allowed experiments to be reproduced and adapted at three different levels: the creation of new experiments using the same structure as the original; the definition of new applications that use variations of the original experiment's structure; and the reuse of the workflow with changes to the original models and conditions.
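The capture mechanism is described only in general terms above. Below is a minimal sketch of the kind of non-invasive annotation such an environment could record per workflow step; the commands, file names, and JSON layout are hypothetical, not the thesis's actual format.

```python
import hashlib, json, platform, subprocess, sys, time
from pathlib import Path

def file_fingerprint(path):
    """Checksum and size of a data file, so an experiment's inputs/outputs can be matched later."""
    data = Path(path).read_bytes()
    return {"path": str(path), "md5": hashlib.md5(data).hexdigest(), "bytes": len(data)}

def run_step(cmd, inputs, outputs, log):
    """Run one workflow step and append an annotation describing exactly what was executed."""
    start = time.time()
    proc = subprocess.run(cmd, capture_output=True, text=True)
    log.append({
        "command": cmd,
        "returncode": proc.returncode,
        "duration_s": round(time.time() - start, 3),
        "platform": platform.platform(),
        "inputs": [file_fingerprint(p) for p in inputs if Path(p).exists()],
        "outputs": [file_fingerprint(p) for p in outputs if Path(p).exists()],
    })
    return proc

# Hypothetical experiment: one simulation step followed by one post-processing step.
annotations = []
run_step([sys.executable, "simulate.py", "--model", "model.xml"], ["model.xml"], ["raw.csv"], annotations)
run_step([sys.executable, "postprocess.py", "raw.csv"], ["raw.csv"], ["summary.csv"], annotations)
Path("experiment_log.json").write_text(json.dumps(annotations, indent=2))
```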
23

Physical-Statistical Modeling and Optimization of Cardiovascular Systems

Du, Dongping 01 January 2002 (has links)
Heart disease remains the leading cause of death in the U.S. and in the world. To improve cardiac care services, there is an urgent need to develop early diagnosis of heart disease and optimal intervention strategies, which in turn calls for a better understanding of the pathology of heart disease. Computer simulation and modeling have been widely applied to overcome many practical and ethical limitations of in-vivo, ex-vivo, and whole-animal experiments. Computer experiments provide physiologists and cardiologists an indispensable tool to characterize, model and analyze cardiac function both in the healthy and in the diseased heart. Most importantly, simulation modeling empowers the analysis of causal relationships of cardiac dysfunction from ion channels to the whole heart, which physical experiments alone cannot achieve. Growing evidence shows that aberrant glycosylation has a dramatic influence on cardiac and neuronal function. Variable but modest reduction in glycosylation among congenital disorders of glycosylation (CDG) subtypes has multi-system effects leading to a high infant mortality rate. In addition, CDG in all young patients tends to cause atrial fibrillation (AF), the most common sustained cardiac arrhythmia. The mortality rate from AF has been increasing in the past two decades. Given the increasing healthcare burden of AF, studying AF mechanisms and developing optimal ablation strategies are urgently needed. Very little is known about how glycosylation modulates cardiac electrical signaling, and it is a significant challenge to experimentally connect changes at one organizational level (e.g., electrical conduction in cardiac tissue) to measured changes at another (e.g., ion channels). In this study, we integrate data from in-vitro experiments with in-silico models to simulate the effects of reduced glycosylation on the gating kinetics of cardiac ion channels, i.e., hERG, Na+, and K+ channels, and to predict the dynamics of glycosylation modulation in individual cardiac cells and tissues. The complex gating kinetics of Na+ channels is modeled with a 9-state Markov model that has voltage-dependent transition rates of exponential form. Model calibration is challenging because the Markov model is non-linear, non-convex, ill-posed, and has a large parameter space. We developed a new metamodel-based simulation optimization approach for calibrating the model with the in-vitro experimental data. The proposed algorithm is shown to be efficient in learning the Markov model of the Na+ channel; moreover, it can easily be adapted to many other optimization problems in computer modeling. In addition, the understanding of AF initiation and maintenance has remained sketchy at best. One salient problem is the inability to interpret intracardiac recordings, which prevents us from reconstructing the rhythmic mechanisms of AF, because multiple wavelets circulate, clash and continuously change direction in the atria. We are designing computer experiments to simulate single and multiple activations on atrial tissues and the corresponding intracardiac signals. This research will create a novel computer-aided decision support tool to optimize AF ablation procedures.
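The 9-state Markov model and its calibrated parameters are not given in the abstract. As a hedged illustration of the general construction it refers to (voltage-dependent transition rates of exponential form driving state occupancies), here is a toy 3-state sketch with invented parameter values.

```python
import numpy as np

def rate(a, b, V):
    """Voltage-dependent transition rate of the exponential form alpha(V) = a*exp(b*V)."""
    return a * np.exp(b * V)

def q_matrix(V):
    """Toy 3-state (closed-open-inactivated) transition-rate matrix.
    Parameter values are hypothetical; the thesis's 9-state model is not reproduced here."""
    k_co = rate(0.5, 0.04, V)    # closed -> open
    k_oc = rate(0.1, -0.03, V)   # open -> closed
    k_oi = rate(0.2, 0.02, V)    # open -> inactivated
    k_io = rate(0.05, -0.02, V)  # inactivated -> open
    return np.array([
        [-k_co,        k_co,           0.0 ],
        [ k_oc, -(k_oc + k_oi),        k_oi],
        [ 0.0,         k_io,          -k_io],
    ])

def simulate(V, p0, dt=0.01, steps=1000):
    """Integrate dp/dt = p Q(V) with forward Euler; returns state occupancies over time."""
    p = np.array(p0, dtype=float)
    Q = q_matrix(V)
    traj = [p.copy()]
    for _ in range(steps):
        p = p + dt * (p @ Q)
        traj.append(p.copy())
    return np.array(traj)

# Example: voltage step to -20 mV from an all-closed initial condition.
occupancy = simulate(V=-20.0, p0=[1.0, 0.0, 0.0])
print("final open probability:", occupancy[-1, 1])
```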
24

Extension des systèmes de métamodélisation persistant avec la sémantique comportementale / Handling behavioral semantics in persistent metamodeling systems

Bazhar, Youness 13 December 2013 (has links)
Modeling and model management have attracted great interest in software development, since they accelerate the development process and facilitate maintenance. But with the increasing size of models and their instances, managing them with tools that operate in main memory shows shortcomings related to scalability: classical tools using central memory have reached their limits when facing large-scale models and instances. Thus, to overcome the scalability problem, managing models in databases becomes a necessity. Two solutions have been proposed. The first consists in equipping modeling and model management tools with dedicated databases, called model repositories (e.g., EMFStore), that store metamodels, models and instances. These model repositories are equipped with exploitation languages restricted to querying capabilities, so that they serve only as model warehouses: processing model management tasks still requires loading the whole model into central memory. The second solution, on which our approach focuses, consists in defining database environments for metamodeling and model management. These systems, called Persistent MetaModeling Systems (PMMSs), aim at providing a database environment for metamodeling and model management. A PMMS consists of (i) a database that stores metamodels, models and their instances, and (ii) an associated exploitation language possessing metamodeling and model management capabilities. Several PMMSs have been proposed (e.g., ConceptBase, OntoDB/OntoQL); they focus mainly on the structural definition of metamodels and models in terms of (meta-)classes, (meta-)attributes, etc. Yet existing PMMSs provide limited capabilities for defining the behavioral semantics needed for model and data management. Behavioral semantics could be useful to compute derivations, perform model transformations, generate source code, etc. In our work, we propose to extend PMMSs with the capability to introduce, dynamically, user-defined model and data management operations. These operations can be implemented using flexible and heterogeneous mechanisms: internal database mechanisms (e.g., stored procedures) as well as external mechanisms such as web services or external programs (e.g., Java, C++). This extension enhances PMMSs, giving them broader functional coverage and further flexibility. It has been implemented on the OntoDB/OntoQL prototype, evaluated to check the scalability of our approach, and applied in three different contexts: (1) to compute derived concepts of ontologies, (2) to enhance an ontology-based database design methodology, and (3) to transform and analyze models of real-time and embedded systems.
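The abstract does not show OntoQL syntax, so the sketch below only illustrates the idea of dynamically registered operations dispatched to heterogeneous implementations (a plain SQL query standing in for a stored procedure, and an external web service); the operation names, table, and URL are hypothetical.

```python
import sqlite3
import urllib.request

class OperationRegistry:
    """Operations introduced at run time and dispatched to heterogeneous implementations
    (in-database logic, a web service, or a local program)."""
    def __init__(self):
        self._ops = {}

    def register(self, name, implementation):
        self._ops[name] = implementation

    def invoke(self, name, *args):
        return self._ops[name](*args)

# Implementation 1: inside the database (a plain query stands in for a stored procedure).
def count_instances(class_name, db_path="models.db"):
    with sqlite3.connect(db_path) as con:
        return con.execute("SELECT COUNT(*) FROM instances WHERE class = ?", (class_name,)).fetchone()[0]

# Implementation 2: an external web service (URL is hypothetical).
def transform_model(model_id):
    with urllib.request.urlopen(f"https://example.org/transform?model={model_id}") as r:
        return r.read()

registry = OperationRegistry()
registry.register("countInstances", count_instances)
registry.register("toRelationalSchema", transform_model)
# registry.invoke("countInstances", "Vehicle")   # would run against the hypothetical database
```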
25

The Role of Constitutive Model in Traumatic Brain Injury Prediction

Kacker, Shubhra 28 October 2019 (has links)
No description available.
26

Multi-Objective Design Optimization Using Metamodelling Techniques and a Damage Material Model

Brister, Kenneth Eugene 11 August 2007 (has links) (PDF)
In this work, the effectiveness of multi-objective design optimization using metamodeling techniques and an internal state variable (ISV) plasticity damage material model as a design tool is demonstrated. Multi-objective design optimization, metamodeling, and ISV plasticity damage material models are brought together to provide a design tool capable of meeting the stringent structural design requirements of today and of the future. The process of implementing this tool is laid out, and two case studies using multi-objective design optimization were carried out. The first was the optimization of a Chevrolet Equinox rear subframe; the optimized subframe was 12% lighter and met design requirements not achieved by the heavier initial design. The second was the optimization of a Formula SAE front upright; the optimized upright meets all the design constraints and is 22% lighter.
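As a hedged sketch of the metamodel-based multi-objective workflow the abstract describes, the example below fits second-order response surfaces to a handful of invented simulation samples and filters a candidate set for Pareto-optimal designs; the numbers are illustrative, not the Equinox or Formula SAE data.

```python
import numpy as np

# Hypothetical design-of-experiments results: thickness (mm) vs. mass (kg) and peak damage (-).
X = np.array([[2.0], [2.5], [3.0], [3.5], [4.0]])
mass = np.array([8.1, 9.6, 11.2, 12.9, 14.5])
damage = np.array([0.92, 0.71, 0.55, 0.44, 0.37])

def fit_quadratic(x, y):
    """Second-order polynomial response surface (metamodel) in one design variable."""
    return np.polyfit(x.ravel(), y, deg=2)

mass_rs, damage_rs = fit_quadratic(X, mass), fit_quadratic(X, damage)

# Evaluate the cheap metamodels on a dense candidate set instead of re-running the FE simulation.
candidates = np.linspace(2.0, 4.0, 201)
f1 = np.polyval(mass_rs, candidates)     # objective 1: minimize mass
f2 = np.polyval(damage_rs, candidates)   # objective 2: minimize damage

def pareto_front(f1, f2):
    """Indices of non-dominated candidates (both objectives minimized)."""
    idx = []
    for i in range(len(f1)):
        dominated = np.any((f1 <= f1[i]) & (f2 <= f2[i]) & ((f1 < f1[i]) | (f2 < f2[i])))
        if not dominated:
            idx.append(i)
    return idx

front = pareto_front(f1, f2)
print("Pareto-optimal thicknesses (mm):", candidates[front][:5], "...")
```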
27

Engineering Modeling, Analysis and Optimal Design of Custom Foot Orthotic

Trinidad, Lieselle Enid 01 September 2011 (has links)
This research details a procedure for the systematic design of custom foot orthotics (CFOs) based on simulation models and their validation through experimental and clinical studies. These models may ultimately be able to replace the use of empirical tables for designing custom foot orthotics and enable optimal design thicknesses based on the body weight and activities of end-users. Similarly, they may facilitate effortless simulation of various orthotic and loading conditions, changes in material properties, and foot deformities by simply altering model parameters. Finally, these models and the corresponding results may also form the basis for subsequent design of a new generation of custom foot orthotics. Two studies were carried out, the first involving a methodical approach to the development of engineering analysis models using the FEA technique. Subsequently, for model verification and validation purposes, detailed investigations were executed through experimental and clinical studies. The results were within a 15% difference for the experimental studies and 26% for the clinical studies, and most of the probability values were greater than α = 0.05, supporting our null hypothesis that the FEA model data and the clinical trial data are not significantly different. The accuracy of the FEA model was further enhanced when the uniform loading condition was replaced with a more realistic pressure distribution of 70% of the weight in the heel and the rest in the front portion of the orthotic. This alteration brought the values down to within a 22% difference of the clinical studies, with the P-values once again showing no significant difference between the modified FEA model and the clinical studies for most of the scenarios. The second study dealt with the development of surrogate models from FEA results, which can then be used in lieu of the computationally intensive FEA-based analysis models in the engineering design of CFOs. Four techniques were studied: the second-order polynomial response surface, Kriging, non-parametric regression, and neural networks. All four techniques were found to be computationally efficient, with an average of over 200% savings in time, and the Kriging technique was found to be the most accurate, with an average percent difference below 0.30 for each of the loading conditions (light, medium and heavy). The two studies clearly indicate that engineering modeling, analysis and design using FEA techniques coupled with surrogate modeling methods offer a consistent, accurate and reliable alternative to empirical clinical studies. This simulation-based design framework can be a viable and valuable tool in the custom design of orthotics based on an individual's unique needs and foot characteristics. With these capabilities, the CFO prescriber would be able to design and develop the best-fit CFO with the optimal design characteristics for each individual customer without relying upon extensive and expensive trial-and-error ad hoc approaches. Such a model could also facilitate the inspection of the robustness of resulting designs, as well as enable visual inspection of the impact of even small changes on the overall performance of the CFO. By adding the results from these studies to the CFO community, the prescription process may become more efficient and therefore more affordable and accessible to all populations and groups.
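Of the four surrogate techniques compared (the abstract reports Kriging as the most accurate), here is a minimal Kriging-style sketch using a Gaussian-process regressor; the sample points stand in for FEA outputs and are invented, as are the variable names and units.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Invented FEA samples: orthotic thickness (mm) and applied load (N) -> peak plantar pressure (kPa).
X = np.array([[3.0, 400], [3.0, 800], [4.0, 400], [4.0, 800], [5.0, 400], [5.0, 800]])
y = np.array([310.0, 520.0, 265.0, 450.0, 230.0, 395.0])

kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 200.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# The surrogate answers in microseconds, where each FEA run would take minutes to hours.
query = np.array([[3.5, 600]])
mean, std = gp.predict(query, return_std=True)
print(f"predicted pressure: {mean[0]:.1f} kPa (+/- {std[0]:.1f})")
```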
28

Consistency and Uniform Bounds for Heteroscedastic Simulation Metamodeling and Their Applications

Zhang, Yutong 05 September 2023 (has links)
Heteroscedastic metamodeling has gained popularity as an effective tool for analyzing and optimizing complex stochastic systems. A heteroscedastic metamodel provides an accurate approximation of the input-output relationship implied by a stochastic simulation experiment whose output is subject to input-dependent noise variance. Several challenges remain unsolved in this field. First, in-depth investigations into the consistency of heteroscedastic metamodeling techniques, particularly from the sequential prediction perspective, are lacking. Second, sequential heteroscedastic metamodel-based level-set estimation (LSE) methods are scarce. Third, the increasingly high computational cost required by heteroscedastic Gaussian process-based LSE methods in the sequential sampling setting is a concern. Additionally, when constructing a valid uniform bound for a heteroscedastic metamodel, the impact of noise variance estimation is not adequately addressed. This dissertation aims to tackle these challenges and provide promising solutions. First, we investigate the information consistency of a widely used heteroscedastic metamodeling technique, stochastic kriging (SK). Second, we propose SK-based LSE methods leveraging novel uniform bounds for input-point classification. Moreover, we incorporate the Nystrom approximation and a principled budget allocation scheme to improve the computational efficiency of SK-based LSE methods. Lastly, we investigate empirical uniform bounds that take into account the impact of noise variance estimation, ensuring an adequate coverage capability. / Doctor of Philosophy / In real-world engineering problems, understanding and optimizing complex systems can be challenging and prohibitively expensive. Computer simulation is a valuable tool for analyzing and predicting system behaviors, allowing engineers to explore different scenarios without relying on costly physical prototypes. However, the increasing complexity of simulation models leads to a higher computational burden. Metamodeling techniques have emerged to address this issue by accurately approximating the system performance response surface based on limited simulation experiment data to enable real-time decision-making. Heteroscedastic metamodeling goes further by considering varying noise levels inherent in simulation outputs, resulting in more robust and accurate predictions. Among various techniques, stochastic kriging (SK) stands out by striking a good balance between computational efficiency and statistical accuracy. Despite extensive research on SK, challenges persist in its application and methodology. These include little understanding of SK's consistency properties, an absence of sequential SK-based algorithms for level-set estimation (LSE) under heteroscedasticity, and the increasingly low computational efficiency of SK-based LSE methods in implementation. Furthermore, a precise construction of uniform bounds for the SK predictor is also missing. This dissertation aims at addressing these aforementioned challenges. First, the information consistency of SK from a prediction perspective is investigated. Then, sequential SK-based procedures for LSE in stochastic simulation, incorporating novel uniform bounds for accurate input-point classification, are proposed. Furthermore, a popular approximation technique is incorporated to enhance the computational efficiency of the SK-based LSE methods. Lastly, empirical uniform bounds are investigated considering the impact of noise variance estimation.
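The dissertation's estimators and bounds are not reproduced in the abstract. As a hedged sketch of the standard stochastic-kriging prediction form it builds on (an extrinsic spatial covariance plus an intrinsic noise term estimated from replications), consider the following, with invented data and a Gaussian correlation kernel assumed.

```python
import numpy as np

def sk_predict(x0, X, ybar, s2, n, beta0, tau2, theta):
    """Stochastic-kriging-style prediction at x0 from replicated simulation output.

    X     : (k, d) design points        ybar : (k,) sample means at the design points
    s2    : (k,) sample variances       n    : (k,) replication counts
    beta0 : constant trend              tau2, theta : extrinsic (spatial) covariance parameters
    A sketch of the standard form with a Gaussian kernel, not the dissertation's exact estimator.
    """
    def cov(a, b):
        d = a[:, None, :] - b[None, :, :]
        return tau2 * np.exp(-theta * np.sum(d ** 2, axis=-1))

    Sigma_M = cov(X, X)                        # extrinsic covariance between design points
    Sigma_eps = np.diag(s2 / n)                # intrinsic (simulation-noise) covariance of the sample means
    k_vec = cov(np.atleast_2d(x0), X).ravel()  # covariance between x0 and the design points
    w = np.linalg.solve(Sigma_M + Sigma_eps, ybar - beta0)
    return beta0 + k_vec @ w

# Tiny invented example in one dimension.
X = np.array([[0.0], [0.5], [1.0]])
ybar = np.array([1.2, 0.4, 1.9])
s2 = np.array([0.30, 0.05, 0.60])   # heteroscedastic: noise variance differs across the design space
n = np.array([20, 20, 20])
print(sk_predict(np.array([0.25]), X, ybar, s2, n, beta0=1.0, tau2=1.0, theta=2.0))
```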
29

Metamodeling Driven IP Reuse for System-on-chip Integration and Microprocessor Design

Mathaikutty, Deepak Abraham 02 December 2007 (has links)
This dissertation addresses two important problems in reusing intellectual properties (IPs) in the form of reusable design or verification components. The first problem is associated with fast and effective integration of reusable design components into a System-on-chip (SoC), so that faster design turn-around time can be achieved, leading to faster time-to-market. The second problem has the same goal of a faster product design cycle, but emphasizes verification model reuse rather than design component reuse; specifically, it addresses the reuse of verification IPs to enable a "write once, use many times" verification strategy. The dissertation is accordingly divided into Part I and Part II, which describe these two related problems and our solutions to them. Both problems, faced by system design companies, are tackled through an approach that had hitherto been used only in the software engineering domain: metamodeling, which allows creating a customized meta-language to describe the syntax and semantics of a modeling domain. It provides a way to create, transform and analyze domain-specific languages, which are themselves described by metamodels, and the transformation and processing of models in such languages are also described by metamodels. This makes machine-based interpretation and translation of these models an easier and more formal task. In Part I, we consider the problem of rapid system-level integration of existing reusable components such that (i) the required architecture of the SoC can be expressed formally, (ii) components can be selected automatically from an IP library to match the needs of the system being integrated, (iii) integrability of the components is provable or automatically checkable, and (iv) structural and behavioral type systems for each component can be utilized, through inferencing and matching techniques, to ensure their compatibility. Our solutions include a component composition language, algorithms for component selection, type matching and inferencing algorithms, temporal-property-based behavioral typing, and finally a software system built on top of an existing metamodeling environment. In Part II, we use the same metamodeling environment to create a framework for modeling generative verification IPs. Our main contributions relate to Intel's microprocessor verification environment, and our solution spans several abstraction levels (system, architectural, and microarchitectural). We provide a unified language that can be used to model verification IPs at all abstraction levels, and verification collateral such as testbenches, simulators, and coverage monitors can be generated from these models, thereby enhancing reuse in verification. / Ph. D.
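The component-selection and type-matching machinery is only named above. The sketch below illustrates structural port-type matching against an IP library in the simplest possible form, with hypothetical components and type-widening rules; it omits the behavioral (temporal-property) typing the dissertation also uses.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Port:
    name: str
    datatype: str      # structural type of the signal carried by the port
    direction: str     # "in" or "out"

@dataclass
class Component:
    name: str
    ports: List[Port]

def compatible(out_port: Port, in_port: Port, widening: Dict[str, List[str]]) -> bool:
    """An output can drive an input if the types match exactly or via an allowed widening."""
    return (out_port.direction == "out" and in_port.direction == "in" and
            (out_port.datatype == in_port.datatype or
             in_port.datatype in widening.get(out_port.datatype, [])))

def select_candidates(required: Port, library: List[Component], widening) -> List[str]:
    """Pick library components with at least one output port that satisfies the required input."""
    return [c.name for c in library
            if any(compatible(p, required, widening) for p in c.ports)]

# Hypothetical IP library and a required 32-bit input on the SoC bus.
library = [
    Component("uart_core", [Port("tx_data", "bit8", "out")]),
    Component("dma_ctrl",  [Port("rd_data", "bit32", "out")]),
]
widening = {"bit8": ["bit16", "bit32"]}   # allowed structural widenings
print(select_candidates(Port("bus_in", "bit32", "in"), library, widening))
```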
30

Functional Programming and Metamodeling frameworks for System Design

Mathaikutty, Deepak Abraham 19 May 2005 (has links)
System-on-Chip (SoC) and other complex distributed hardware/software systems contain heterogeneous components whose behaviors are best captured by different models of computation (MoCs). As a result, any design framework for such systems requires the capability to express heterogeneous MoCs. Although a number of system-level design languages (SLDLs) and frameworks have proliferated over the last few years, most of them are lacking in multiple ways. Some of the SLDLs and system design frameworks we have worked with are SpecC, Ptolemy II, and SystemC-H, among others. From our analysis of these, we identify the following shortcomings: first, their dependence on specific programming-language artifacts (Java or C/C++) makes them less amenable to formal analysis; second, the refinement strategies proposed in the design flows based on these languages lack formal semantic underpinnings, making it difficult to prove that refinements preserve correctness; and third, none of the available SLDLs is easily customizable by users. In our work, we address these problems as follows. To alleviate the first problem, we follow Axel Jantsch's paradigm of function-based semantic definitions of MoCs and formulate a functional programming framework called SML-Sys, and we illustrate through a number of examples how to model heterogeneous computing systems with it. Our framework supports formal reasoning thanks to the formal semantic underpinning inherited from SML's precise denotational semantics. To handle the second problem and apply refinement strategies at a higher level, we propose a refinement methodology and provide a semantics-preserving transformation library within our framework. To address the third shortcoming, we have developed EWD, which allows users to customize MoC-specific visual modeling syntax defined as a metamodel. EWD is built using the metamodeling framework GME (Generic Modeling Environment) and allows automatic design-time syntactic and semantic checks on the models for conformance to their metamodel. Modeling in EWD facilitates saving the model in an XML-based interoperability language (IML) we defined for this purpose. The IML format is in turn automatically translated into Standard ML or Haskell models, which may then be executed and analyzed either by our existing model analysis tool SML-Sys or by the ForSyDe environment. We also generate SMV-based templates from the XML representation to obtain verification models. / Master of Science
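SML-Sys itself is written in Standard ML; as a language-neutral illustration of the function-based style of MoC definition it follows (process constructors such as mapSY applied to signals), here is a small Python sketch with lists standing in for signals. The constructor names and types are only evocative of that style, not SML-Sys's API.

```python
from typing import Callable, List, TypeVar

A = TypeVar("A")
B = TypeVar("B")
S = TypeVar("S")

def map_sy(f: Callable[[A], B], signal: List[A]) -> List[B]:
    """Synchronous-MoC process constructor: apply a combinational function to every event."""
    return [f(x) for x in signal]

def moore_sy(next_state: Callable[[S, A], S], output: Callable[[S], B],
             s0: S, signal: List[A]) -> List[B]:
    """Moore-style sequential process: state evolves per event, output depends only on the state."""
    outs, state = [], s0
    for x in signal:
        outs.append(output(state))
        state = next_state(state, x)
    return outs

# A two-process system: scale an input signal, then accumulate it.
inp = [1, 2, 3, 4]
scaled = map_sy(lambda x: 2 * x, inp)
accum = moore_sy(lambda s, x: s + x, lambda s: s, 0, scaled)
print(scaled, accum)   # [2, 4, 6, 8] [0, 2, 6, 12]
```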
