1 |
An Object Oriented Simulator for Conceptual Graphs. Sastry, Kiran Srinivasa, 12 May 2001.
This thesis deals with the design and implementation of an object-oriented simulator for conceptual graphs. Conceptual graphs are a means of representing information and knowledge. In particular, they may be used to represent the behavior of mechanisms. Conceptual graph simulation provides the means for verifying that the conceptual graph model of the system is a proper representation of the mechanism. The motivation for the design of this simulator is to help a conceptual graph model designer overcome the imprecision and ambiguity inherent in the English language. When a person translates an English language specification of a system to a conceptual graph model, the model may be incomplete, owing to semantic gaps in the English language specification.
The simulator attempts to help the designer fill in these gaps by pointing out missing concepts and relations needed to simulate the model. This thesis covers the issues involved in designing such a simulator and its implementation in Java. The operation of the simulator is demonstrated by simulating sample conceptual graphs. A set of action procedures and a small library of device schema graphs are also created, so that devices may be modeled effectively. / Master of Science
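As a rough illustration of the idea only (this is not the thesis's Java simulator; the class and function names below are hypothetical), a conceptual graph can be stored as a set of typed concepts plus typed relations, and a pre-simulation check can report the gaps described above:

```python
# Minimal sketch (hypothetical names): a conceptual graph as concepts plus
# typed relations, and a pre-simulation check that flags gaps in the model.

class ConceptualGraph:
    def __init__(self):
        self.concepts = {}            # concept name -> concept type
        self.relations = []           # (relation type, source, target)

    def add_concept(self, name, ctype):
        self.concepts[name] = ctype

    def add_relation(self, rtype, source, target):
        self.relations.append((rtype, source, target))

def find_gaps(graph, required_relations):
    """Report concepts referenced by relations but never defined, and
    relation types a device schema requires but the graph lacks."""
    missing_concepts = {c for (_, s, t) in graph.relations
                        for c in (s, t) if c not in graph.concepts}
    present_types = {r for (r, _, _) in graph.relations}
    missing_relations = set(required_relations) - present_types
    return missing_concepts, missing_relations

g = ConceptualGraph()
g.add_concept("Switch", "Device")
g.add_relation("controls", "Switch", "Lamp")   # "Lamp" never defined
print(find_gaps(g, required_relations={"controls", "powered-by"}))
# -> ({'Lamp'}, {'powered-by'})
```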
|
2 |
Profilování překladu konceptuálních schémat do XML schémat / Profiling translation of conceptual schemas to XML schemas. Berka, Lukáš, January 2010.
In the present work, we analyze the algorithm introduced in [4], which translates a conceptual schema into an XML schema expressed in the XML Schema language. We look for limitations of the algorithm and try to discover parameters that can be used to influence its behavior, and we propose solutions to the most serious limitations. We also introduce the concept of translation profiling, based on a configuration that contains a set of parameters, and we modify the algorithm to use the user requirements specified in this configuration. Thanks to these improvements, the new algorithm works with XML Namespaces, uses XML Schema designs, and focuses on eliminating redundancy. Eliminating redundancy in the algorithm's output is an important part of this work, and we create a formal model that helps us solve this task.
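As a purely illustrative sketch (the configuration keys and the toy translation below are invented for this example and are not the algorithm of [4]), a profiled translation step might look like this:

```python
# Hypothetical sketch: translate one conceptual entity type into an XML Schema
# fragment, steered by a profiling configuration (namespace, attribute style).

def translate_entity(name, properties, config):
    ns = config.get("target_namespace", "http://example.org/model")
    as_attributes = config.get("properties_as_attributes", False)

    lines = [f'<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"',
             f'           targetNamespace="{ns}" elementFormDefault="qualified">',
             f'  <xs:element name="{name}">',
             f'    <xs:complexType>']
    if as_attributes:
        lines += [f'      <xs:attribute name="{p}" type="xs:{t}"/>'
                  for p, t in properties]
    else:
        lines.append('      <xs:sequence>')
        lines += [f'        <xs:element name="{p}" type="xs:{t}"/>'
                  for p, t in properties]
        lines.append('      </xs:sequence>')
    lines += ['    </xs:complexType>', '  </xs:element>', '</xs:schema>']
    return "\n".join(lines)

config = {"target_namespace": "http://example.org/orders",
          "properties_as_attributes": False}
print(translate_entity("Order", [("number", "int"), ("date", "date")], config))
```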
|
3 |
Vzájemný převod mezi XSEM PSM diagramy a jazykem Schematron / Mutual Conversion between XSEM PSM Diagrams and the Schematron Language. Benda, Soběslav, January 2012.
In the present work, we study possibilities for the automatic construction of Schematron schemas from a conceptual model for XML, as well as possibilities for reverse-engineering Schematron schemas. The work introduces the reader to conceptual schemas for XML and to rule-based validation of XML documents, and surveys existing techniques for mutual conversion between XML schema languages and conceptual models. The main part of the work is the design and implementation of a new method for deriving Schematron schemas from conceptual schemas for XML. This method makes it possible to obtain XML schemas that in some respects outperform other popular schema languages. The work also discusses Schematron schema reverse-engineering and shows possibilities in this area, establishing a basis for further research.
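For orientation only, here is a hypothetical sketch of the kind of rule such a derivation could emit: a simple cardinality constraint from a conceptual schema turned into a Schematron pattern (the constraint and the generator are invented for this example, not the thesis's method):

```python
# Hypothetical sketch: turn a simple cardinality constraint from a conceptual
# schema into a Schematron pattern for rule-based validation of XML documents.

def min_occurrence_rule(parent, child, minimum):
    return f"""<sch:pattern xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <sch:rule context="{parent}">
    <sch:assert test="count({child}) &gt;= {minimum}">
      A {parent} element must contain at least {minimum} {child} element(s).
    </sch:assert>
  </sch:rule>
</sch:pattern>"""

print(min_occurrence_rule("Order", "Item", 1))
```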
|
4 |
Data Quality By Design: A Goal-oriented Approach. Jiang, Lei, 13 August 2010.
A successful information system is one that meets its design goals. Expressing these goals and subsequently translating them into a working solution is a major challenge for information systems engineering. This thesis adopts concepts and techniques from goal-oriented (software) requirements engineering research for conceptual database design, with a focus on data quality issues. Based on a real-world case study, a goal-oriented process is proposed for database requirements analysis and modeling. It spans from analysis of high-level stakeholder goals to detailed design of a conceptual database schema. This process is then extended specifically for dealing with data quality issues: data of low quality may be detected and corrected by performing various quality assurance activities; to support these activities, the schema needs to be revised to accommodate additional data requirements. The extended process therefore focuses on analyzing and modeling quality assurance data requirements.
A quality assurance activity supported by a revised schema may involve manual work and/or rely on automatic techniques, which often depend on the specification and enforcement of data quality (DQ) rules. To address the constraint aspect in conceptual database design, data quality rules are classified according to a number of domain- and application-independent properties. This classification can be used to guide rule designers and to facilitate building a rule repository. A quantitative framework is then proposed for measuring and comparing DQ rules according to one of these properties: effectiveness. This framework relies on the derivation of formulas that represent the effectiveness of DQ rules under different probabilistic assumptions. A semi-automatic approach is also presented to derive these effectiveness formulas.
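The thesis's formulas are not reproduced here; as a purely illustrative sketch under simplified assumptions (independent errors and a single rule, which need not match the thesis's probabilistic models), the effectiveness of a DQ rule could be scored from its detection and false-alarm probabilities:

```python
# Illustrative only: a toy probabilistic "effectiveness" estimate for a data
# quality (DQ) rule, assuming independent errors -- not the thesis's formulas.

def rule_effectiveness(p_error, p_detect, p_false_alarm):
    """p_error: probability a value is erroneous;
    p_detect: probability the rule flags an erroneous value;
    p_false_alarm: probability the rule flags a correct value."""
    flagged = p_error * p_detect + (1 - p_error) * p_false_alarm
    precision = (p_error * p_detect) / flagged if flagged else 0.0
    recall = p_detect
    # Harmonic mean of precision and recall as a single effectiveness score.
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Compare two hypothetical rules on the same attribute.
print(rule_effectiveness(p_error=0.05, p_detect=0.90, p_false_alarm=0.02))
print(rule_effectiveness(p_error=0.05, p_detect=0.99, p_false_alarm=0.20))
```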
|
5 |
Validation of UML conceptual schemas with OCL constraints and operations. Queralt Calafat, Anna, 02 March 2009.
To ensure the quality of an information system, it is essential that the conceptual schema representing the knowledge about its domain and the functions it has to perform is semantically correct. The correctness of a conceptual schema can be seen from two different perspectives. On the one hand, from the point of view of its definition, determining the correctness of a conceptual schema consists in answering the question "Is the conceptual schema right?". This can be achieved by determining whether the schema fulfills certain properties, such as satisfiability, non-redundancy or operation executability. On the other hand, from the perspective of the requirements that the information system should satisfy, the conceptual schema must not only be right, it must also be the right one. To ensure this, the designer must be provided with some kind of help and guidance during the validation process, so that they can understand the exact meaning of the schema and see whether it corresponds to the requirements to be formalized.
In this thesis we provide an approach that improves the results of previous proposals addressing the validation of a UML conceptual schema with its constraints and operations formalized in OCL. Our approach allows the conceptual schema to be validated both from the point of view of its definition and of its correspondence to the requirements. The validation is performed by means of a set of tests that are applied to the schema, including automatically generated tests and ad hoc tests defined by the designer. All the validation tests are formalized in such a way that they can be treated uniformly, regardless of the specific property they test.
Our approach can be applied either to a complete conceptual schema or only to its structural part. When only the structural part is validated, we provide a set of conditions to determine whether any validation test performed on the schema will terminate. For those cases in which these termination conditions are satisfied, we also provide a reasoning procedure that takes advantage of this situation and works more efficiently than in the general case. This approach allows the validation of very expressive schemas and ensures completeness and decidability at the same time. To show the feasibility of our approach, we have implemented the complete validation process for the structural part of a conceptual schema. Additionally, for the validation of a conceptual schema with a behavioral part, the reasoning procedure has been implemented as an extension of an existing method.
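As a rough, hypothetical illustration of one property such tests target (satisfiability), and not the thesis's reasoning procedure, the sketch below searches bounded domains for an instance that fulfils an OCL-like invariant:

```python
# Hypothetical sketch: check satisfiability of an OCL-like invariant by
# searching for a small instance of the schema that fulfils it.

from itertools import product

def invariant(employee):
    # OCL-style: context Employee inv: self.salary > 0 and self.age >= 18
    return employee["salary"] > 0 and employee["age"] >= 18

def is_satisfiable(inv, salary_domain, age_domain):
    """Bounded search: the constraint is satisfiable if at least one
    candidate instance within the finite domains fulfils it."""
    return any(inv({"salary": s, "age": a})
               for s, a in product(salary_domain, age_domain))

print(is_satisfiable(invariant, salary_domain=range(0, 3), age_domain=range(16, 20)))  # True
print(is_satisfiable(lambda e: e["salary"] > 0 and e["salary"] < 0,
                     salary_domain=range(0, 3), age_domain=range(16, 20)))             # False
```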
|
6 |
Cocreating Value in Knowledge-intensive Business Services: An Empirically-grounded Design Framework and a Modelling Technique. Lessard, Lysanne, 22 July 2014.
While knowledge-intensive business services (KIBS) play an important role in industrialized economies, little research has focused on how best to support their design. The emerging understanding of service as a process of value cocreation – or collaborative value creation – can provide the foundations for this purpose; however, this body of literature lacks empirically grounded explanations of how value is actually cocreated and does not provide adequate design support for the specific context of KIBS. This research thus first identifies generative mechanisms of value cocreation in KIBS engagements; it then develops a design framework from this understanding; finally, it elaborates a modeling technique fulfilling the requirements derived from this design framework. A multiple-case study of two academic research and development service engagements, as a particular type of KIBS engagement, was first undertaken to identify generative mechanisms of value cocreation. Data was gathered through interviews, observation, and documentation, and was analyzed both inductively and deductively according to key concepts of value cocreation proposed in the literature. Data from a third case study was then used to evaluate the ability of the modeling technique to support the analysis of value cocreation processes in KIBS engagements.
Empirical findings identify two contextual factors, one core mechanism, six direct mechanisms, four supporting mechanisms, and two overall processes of value cocreation: aligning and integrating. These findings emphasize the strategic nature of value cocreation in KIBS engagements. Results include an empirically grounded design framework that identifies points of intervention to foster value cocreation in KIBS engagements, and from which modeling requirements are derived. To fulfill these requirements, a modeling technique, Value Cocreation Modeling 2 (VCM2), was created by adapting and combining concepts from several existing modeling approaches developed for strategic actors modeling, value network modeling, and business intelligence modeling.
|
7 |
Migrating an Operational Database Schema to Data Warehouse Schemas. Phipps, Cassandra J., 22 May 2002.
No description available.
|
8 |
[en] Provenance Conceptual Models / [pt] Modelos Conceituais para Proveniência. Marins, Andre Luiz Almeida, 07 July 2008.
Information systems, developed for several economic segments, increasingly demand data traceability functionality. To endow information systems with such capacity, we depend on data provenance modeling. Provenance enables legal compliance, experiment validation, and quality control, among others. Provenance also helps identify participants (determinants or immanents) such as people, organizations, and software agents, as well as their association with activities, events or processes. It can also be used to establish levels of trust for data transformations. This dissertation proposes a generic conceptual model for provenance, designed by aligning fragments of upper ontologies, international standards and broadly recognized projects. The contributions are in two directions: a provenance conceptual model - extensively documented - that facilitates interoperability, and the application of a design methodology based on ontology alignment.
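As a loose illustration (the classes below use common provenance vocabulary of agents, activities and entities; they are not the dissertation's model), provenance records enabling this kind of traceability might be sketched as:

```python
# Hypothetical sketch: minimal provenance records linking data entities to the
# activities that produced them and the agents involved, with a trust level.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Agent:
    name: str
    kind: str                 # e.g. "person", "organization", "software agent"

@dataclass
class Activity:
    name: str
    agents: List[Agent]
    trust: float              # level of trust assigned to this transformation

@dataclass
class Entity:
    name: str
    derived_from: List["Entity"] = field(default_factory=list)
    generated_by: Optional[Activity] = None

def trace(entity):
    """Walk the provenance chain of an entity, yielding each derivation step."""
    if entity.generated_by:
        yield entity.name, entity.generated_by.name, entity.generated_by.trust
    for source in entity.derived_from:
        yield from trace(source)

raw = Entity("raw_measurements")
clean = Entity("cleaned_dataset", derived_from=[raw],
               generated_by=Activity("data cleaning",
                                     [Agent("cleaning script", "software agent")],
                                     trust=0.8))
print(list(trace(clean)))
```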
|
9 |
Representação de variabilidade estrutural de dados por meio de famílias de esquemas de banco de dados / Representing structural data variability using families of database schemas. Rodrigues, Larissa Cristina Moraes, 09 December 2016.
Different organizations within the same application domain usually have very similar data requirements. Nevertheless, each organization also has specific needs that should be considered in the design and development of database systems for that domain. These specific needs result in structural variations in the data of organizations from the same domain. Traditional database conceptual modeling techniques (such as the Entity Relationship Model - ERM - and the Unified Modeling Language - UML) do not allow this variability to be expressed in a single data schema. To address this problem, this work proposes a new conceptual modeling method based on the use of Database Feature Diagrams (DBFDs). This method was designed to support the creation of families of conceptual database schemas. A family of conceptual database schemas comprises all possible variations of conceptual database schemas for a particular application domain. DBFDs are an extension of the Feature Diagram concept used in Software Product Line Engineering. Through DBFDs, it is possible to generate customized conceptual database schemas that address the specific needs of users or organizations while ensuring a standardized treatment of the data requirements of an application domain. In this work, a Web tool called DBFD Creator was also developed to facilitate the use of the new modeling method and the creation of DBFDs. To evaluate the proposed method, a case study was conducted in the domain of neuroscience experimental data. Through the case study, it was possible to conclude that the proposed method is feasible for modeling the data variability of a real application domain. In addition, an exploratory study was conducted with a group of people who received training, executed tasks and filled out evaluation questionnaires about the modeling method and its supporting software tool. The results of this exploratory study showed that the proposed method is reproducible and that the software tool has good usability, properly supporting the execution of the method's step-by-step procedure.
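As a hedged sketch of the general idea (the feature names and the derivation function are invented here and do not come from the thesis), deriving one schema variant of a family from a feature selection could look like this:

```python
# Hypothetical sketch: a tiny "feature diagram" for a database schema family,
# from which a concrete conceptual schema is derived for one feature selection.

FEATURE_MODEL = {
    "Experiment": {"mandatory": True,  "entities": ["Experiment", "Researcher"]},
    "EEGData":    {"mandatory": False, "entities": ["EEGRecording", "Electrode"]},
    "Imaging":    {"mandatory": False, "entities": ["ImageScan", "Scanner"]},
}

def derive_schema(selected_features):
    """Return the entity types of the schema variant for a feature selection;
    mandatory features are always included, unknown features are rejected."""
    unknown = set(selected_features) - set(FEATURE_MODEL)
    if unknown:
        raise ValueError(f"Unknown features: {sorted(unknown)}")
    entities = []
    for name, feature in FEATURE_MODEL.items():
        if feature["mandatory"] or name in selected_features:
            entities.extend(feature["entities"])
    return entities

# One organization needs EEG data but no imaging:
print(derive_schema({"EEGData"}))
# -> ['Experiment', 'Researcher', 'EEGRecording', 'Electrode']
```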
|