11

Data Quality By Design: A Goal-oriented Approach

Jiang, Lei 13 August 2010 (has links)
A successful information system is one that meets its design goals. Expressing these goals and subsequently translating them into a working solution is a major challenge for information systems engineering. This thesis adopts concepts and techniques from goal-oriented (software) requirements engineering research for conceptual database design, with a focus on data quality issues. Based on a real-world case study, a goal-oriented process is proposed for database requirements analysis and modeling. It spans from analysis of high-level stakeholder goals to detailed design of a conceptual database schema. This process is then extended specifically for dealing with data quality issues: data of low quality may be detected and corrected by performing various quality assurance activities; to support these activities, the schema needs to be revised to accommodate additional data requirements. The extended process therefore focuses on analyzing and modeling quality assurance data requirements. A quality assurance activity supported by a revised schema may involve manual work and/or rely on automatic techniques, which often depend on the specification and enforcement of data quality (DQ) rules. To address the constraint aspect in conceptual database design, data quality rules are classified according to a number of domain- and application-independent properties. This classification can be used to guide rule designers and to facilitate the building of a rule repository. A quantitative framework is then proposed for measuring and comparing DQ rules according to one of these properties, effectiveness; this framework relies on the derivation of formulas that represent the effectiveness of DQ rules under different probabilistic assumptions. A semi-automatic approach is also presented to derive these effectiveness formulas.
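The thesis itself does not include code, but the idea of scoring a data quality rule's effectiveness under probabilistic assumptions can be illustrated with a small sketch. The rule, field names, and probability model below are hypothetical illustrations, not taken from the thesis.

```python
# Hypothetical sketch: scoring a data quality (DQ) rule's effectiveness
# under a simple probabilistic assumption about how errors occur.

def age_rule(record):
    """DQ rule: 'age' must lie in [0, 120]. Returns True when the value passes."""
    return 0 <= record["age"] <= 120

def empirical_effectiveness(rule, clean_records, corrupted_records):
    """Fraction of known-bad records the rule actually flags (detection rate),
    together with the false-alarm rate on known-good records."""
    detected = sum(1 for r in corrupted_records if not rule(r))
    false_alarms = sum(1 for r in clean_records if not rule(r))
    return detected / len(corrupted_records), false_alarms / len(clean_records)

def analytic_effectiveness(lo=0, hi=120, error_lo=-50, error_hi=500):
    """Probability the rule catches an error, assuming an erroneous age is
    drawn uniformly from [error_lo, error_hi] (the probabilistic assumption)."""
    outside = (lo - error_lo) + (error_hi - hi)
    return outside / (error_hi - error_lo)

if __name__ == "__main__":
    clean = [{"age": a} for a in (5, 34, 67, 90)]
    corrupted = [{"age": a} for a in (-3, 140, 25, 999)]  # 25 is an undetectable error
    print(empirical_effectiveness(age_rule, clean, corrupted))  # (0.75, 0.0)
    print(analytic_effectiveness())                             # ~0.78
```

Comparing the empirical rate against the analytic formula is one way such effectiveness measures could be contrasted across rules, under whatever error model one is willing to assume.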
13

Validation of UML conceptual schemas with OCL constraints and operations

Queralt Calafat, Anna 02 March 2009 (has links)
To ensure the quality of an information system, it is essential that the conceptual schema that represents the knowledge about its domain and the functions it has to perform is semantically correct. The correctness of a conceptual schema can be seen from two different perspectives. On the one hand, from the point of view of its definition, determining the correctness of a conceptual schema consists in answering the question "Is the conceptual schema right?". This can be achieved by determining whether the schema fulfills certain properties, such as satisfiability, non-redundancy or operation executability. On the other hand, from the perspective of the requirements that the information system should satisfy, not only must the conceptual schema be right, but it must also be the right one.
To ensure this, the designer must be provided with some kind of help and guidance during the validation process, so that they are able to understand the exact meaning of the schema and see whether it corresponds to the requirements to be formalized. In this thesis we provide an approach which improves on previous proposals that address the validation of a UML conceptual schema with its constraints and operations formalized in OCL. Our approach makes it possible to validate the conceptual schema both from the point of view of its definition and of its correspondence to the requirements. The validation is performed by means of a set of tests that are applied to the schema, including automatically generated tests and ad hoc tests defined by the designer. All the validation tests are formalized in such a way that they can be treated uniformly, regardless of the specific property they check. Our approach can be applied either to a complete conceptual schema or only to its structural part. When only the structural part is validated, we provide a set of conditions to determine whether any validation test performed on the schema will terminate. For those cases in which these termination conditions are satisfied, we also provide a reasoning procedure that takes advantage of this situation and works more efficiently than in the general case. This approach allows the validation of very expressive schemas and ensures completeness and decidability at the same time. To show the feasibility of our approach, we have implemented the complete validation process for the structural part of a conceptual schema. Additionally, for the validation of a conceptual schema with a behavioral part, the reasoning procedure has been implemented as an extension of an existing method.
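The tests described in the thesis are formalized over UML/OCL schemas; as a very rough illustration of one property they target (satisfiability), the sketch below tries to build a small instance of a toy schema and checks whether all invariants can hold at once. The class, invariants, and witness instances are invented for illustration and are far simpler than the actual approach.

```python
# Hypothetical sketch: checking a (toy) schema property by trying to build
# a valid instantiation, in the spirit of "is the conceptual schema right?".

from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    salary: float
    bonus: float

# Two toy OCL-like invariants over Employee, expressed as Python predicates.
invariants = [
    lambda e: e.salary >= 0,             # salaries are non-negative
    lambda e: e.bonus <= 0.5 * e.salary  # bonus capped at half the salary
]

def satisfiable(candidates):
    """The toy schema is satisfiable if at least one candidate instance
    fulfills every invariant (i.e., a finite witness instantiation exists)."""
    return any(all(inv(c) for inv in invariants) for c in candidates)

if __name__ == "__main__":
    witnesses = [Employee("a", -10, 0), Employee("b", 1000, 400)]
    print(satisfiable(witnesses))  # True: Employee "b" is a valid witness
```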
14

Testing and test-driven development of conceptual schemas

Tort Pugibet, Albert 11 April 2012 (has links)
The traditional focus for Information Systems (IS) quality assurance relies on the evaluation of its implementation. However, the quality of an IS can be largely determined in the first stages of its development. Several studies reveal that more than half the errors that occur during systems development are requirements errors. A requirements error is defined as a mismatch between the requirements specification and stakeholders' needs and expectations. Conceptual modeling is an essential activity in requirements engineering aimed at developing the conceptual schema of an IS. The conceptual schema is the general knowledge that an IS needs to know in order to perform its functions. A conceptual schema specification has semantic quality when it is valid and complete. Validity means that the schema is correct (the knowledge it defines is true for the domain) and relevant (the knowledge it defines is necessary for the system). Completeness means that the conceptual schema includes all relevant knowledge. The validation of a conceptual schema pursues the detection of requirements errors in order to improve its semantic quality. Conceptual schema validation is still a critical challenge in requirements engineering. In this work we contribute to this challenge, taking into account that, since conceptual schemas of IS can be specified in executable artifacts, they can be tested. In this context, the main contributions of this thesis are (1) an approach to test conceptual schemas of information systems, and (2) a novel method for the incremental development of conceptual schemas supported by continuous test-driven validation. As far as we know, this is the first work that proposes and implements an environment for automated testing of UML/OCL conceptual schemas, and the first work that explores the use of test-driven approaches in conceptual modeling. The testing of conceptual schemas may be an important and practical means for their validation. It allows checking correctness and completeness according to stakeholders' needs and expectations. Moreover, in conjunction with the automatic check of basic test adequacy criteria, we can also analyze the relevance of the elements defined in the schema. The testing environment we propose requires a specialized language for writing tests of conceptual schemas. We defined the Conceptual Schema Testing Language (CSTL), which may be used to specify automated tests of executable schemas specified in UML/OCL. We also describe a prototype implementation of a test processor that makes the approach feasible in practice. The conceptual schema testing approach supports test-last validation of conceptual schemas, but it also makes sense to test incomplete conceptual schemas while they are developed. This fact lays the groundwork for Test-Driven Conceptual Modeling (TDCM), which is our second main contribution. TDCM is a novel conceptual modeling method based on the main principles of Test-Driven Development (TDD), an extreme programming method in which a software system is developed in short iterations driven by tests. We have applied the method in several case studies, in the context of Design Research, which is the general research framework we adopted. Finally, we also describe an approach for integrating TDCM into a broad set of software development methodologies, including the Unified Process development methodology, MDD-based approaches, storytest-driven agile methods, and goal- and scenario-oriented requirements engineering methods.
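CSTL's concrete syntax is not reproduced here; as a loose analogue of what a conceptual schema test might assert, the sketch below exercises a tiny executable "schema" in Python and checks the expected state after a domain event, in the test-first spirit of TDCM. The entity type, event, and test names are invented for illustration.

```python
# Hypothetical analogue of a conceptual schema test: an executable toy
# "schema" (one entity type and one domain event) exercised by unit tests.

import unittest

class Book:
    def __init__(self, title):
        self.title = title
        self.on_loan = False

    def lend(self):
        # Domain event: a book that is already on loan cannot be lent again.
        if self.on_loan:
            raise ValueError("book already on loan")
        self.on_loan = True

class BookLendingTest(unittest.TestCase):
    def test_lending_marks_book_as_on_loan(self):
        book = Book("Conceptual Modeling of Information Systems")
        book.lend()
        self.assertTrue(book.on_loan)

    def test_lending_twice_is_rejected(self):
        book = Book("Conceptual Modeling of Information Systems")
        book.lend()
        with self.assertRaises(ValueError):
            book.lend()

if __name__ == "__main__":
    unittest.main()
```

In a test-driven style, tests like these would be written before the corresponding schema elements exist, and the schema grown in short iterations until they pass.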
15

Cocreating Value in Knowledge-intensive Business Services: An Empirically-grounded Design Framework and a Modelling Technique

Lessard, Lysanne 22 July 2014 (has links)
While knowledge-intensive business services (KIBS) play an important role in industrialized economies, little research has focused on how best to support their design. The emerging understanding of service as a process of value cocreation – or collaborative value creation – can provide the foundations for this purpose; however, this body of literature lacks empirically grounded explanations of how value is actually cocreated and does not provide adequate design support for the specific context of KIBS. This research thus first identifies generative mechanisms of value cocreation in KIBS engagements; it then develops a design framework from this understanding; finally, it elaborates a modeling technique fulfilling the requirements derived from this design framework. A multiple-case study of two academic research and development service engagements, as a particular type of KIBS engagement, was first undertaken to identify generative mechanisms of value cocreation. Data was gathered through interviews, observation, and documentation, and was analyzed both inductively and deductively according to key concepts of value cocreation proposed in the literature. Data from a third case study was then used to evaluate the ability of the modeling technique to support the analysis of value cocreation processes in KIBS engagements. Empirical findings identify two contextual factors, one core mechanism, six direct mechanisms, four supporting mechanisms, and two overall processes of value cocreation: aligning and integrating. These findings emphasize the strategic nature of value cocreation in KIBS engagements. Results include an empirically grounded design framework that identifies points of intervention to foster value cocreation in KIBS engagements and from which modeling requirements are derived. To fulfill these requirements, a modeling technique, Value Cocreation Modeling 2 (VCM2), was created by adapting and combining concepts from several existing modeling approaches developed for strategic actor modeling, value network modeling, and business intelligence modeling.
16

Migrating an Operational Database Schema to Data Warehouse Schemas

PHIPPS, CASSANDRA J. 22 May 2002 (has links)
No description available.
17

[en] PROVENANCE CONCEPTUAL MODELS / [pt] MODELOS CONCEITUAIS PARA PROVENIÊNCIA

ANDRE LUIZ ALMEIDA MARINS 07 July 2008 (has links)
Information systems, developed for several economic segments, increasingly demand data traceability functionality. To endow information systems with such capacity, we depend on data provenance modeling. Provenance enables legal compliance, experiment validation, and quality control, among others. Provenance also helps identify participants (determinants or immanents) such as people, organizations, and software agents, as well as their association with activities, events or processes. It can also be used to establish levels of trust for data transformations. This dissertation proposes a generic conceptual model for provenance, designed by aligning fragments of upper ontologies, international standards and broadly recognized projects. The contributions are in two directions: a provenance conceptual model, extensively documented, that facilitates interoperability, and the application of a design methodology based on ontology alignment.
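The dissertation's generic provenance model is not reproduced here; purely as an illustration of what a minimal provenance record might capture (who did what to which data, and when), the sketch below defines a few hypothetical classes. The names and attributes are assumptions for illustration, not the model proposed in the dissertation.

```python
# Hypothetical sketch of a minimal provenance record: agents perform
# activities that use and generate data items, so each item can be traced.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Agent:
    name: str   # person, organization, or software agent
    kind: str   # e.g. "person", "organization", "software"

@dataclass
class DataItem:
    identifier: str

@dataclass
class Activity:
    label: str
    performed_by: Agent
    started_at: datetime
    used: List[DataItem] = field(default_factory=list)
    generated: List[DataItem] = field(default_factory=list)

def lineage(item: DataItem, activities: List[Activity]) -> List[Activity]:
    """All recorded activities that generated the given data item."""
    return [a for a in activities if item in a.generated]

if __name__ == "__main__":
    raw = DataItem("raw-sample-001")
    clean = DataItem("clean-sample-001")
    etl = Activity("cleaning run", Agent("cleaner.py", "software"),
                   datetime(2008, 7, 7), used=[raw], generated=[clean])
    print([a.label for a in lineage(clean, [etl])])  # ['cleaning run']
```

Recording which agent performed each transformation is also what makes it possible to attach trust levels to derived data, as the abstract describes.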
18

Representação de variabilidade estrutural de dados por meio de famílias de esquemas de banco de dados / Representing structural data variability using families of database schemas

Rodrigues, Larissa Cristina Moraes 09 December 2016 (has links)
Different organizations within the same application domain usually have very similar data requirements. Nevertheless, each organization also has specific needs that should be considered in the design and development of database systems for that domain. These specific needs result in structural variations in the data of organizations within the same domain. The traditional techniques of database conceptual modeling (such as the Entity-Relationship Model, ERM, and the Unified Modeling Language, UML) do not allow this variability to be expressed in a single data schema. To address this problem, this work proposes a new conceptual modeling method based on the use of Database Feature Diagrams (DBFDs). This method was designed to support the creation of families of conceptual database schemas. A family of conceptual database schemas includes all possible variations of database conceptual schemas for a particular application domain. DBFDs are an extension of the concept of feature diagrams used in Software Product Line Engineering.
Through DBFDs, it is possible to generate customized database conceptual schemas that address the specific needs of users or organizations while ensuring a standardized treatment of the data requirements of an application domain. In this work, a Web tool called DBFD Creator was also developed to facilitate the use of the new modeling method and the creation of DBFDs. To evaluate the proposed method, a case study was developed in the domain of neuroscience experimental data. Through the case study, it was possible to conclude that the proposed method is feasible for modeling the data variability of a real application domain. In addition, an exploratory study was conducted with a group of people who received training, executed tasks, and filled out evaluation questionnaires about the modeling method and its supporting software tool. The results of this exploratory study showed that the proposed method is reproducible and that the software tool has good usability, properly supporting the execution of the method's step-by-step procedure.
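The DBFD notation itself is graphical and is not reproduced here; as a rough sketch of the underlying idea, deriving one member of a schema family from a feature selection, the code below uses a hypothetical, hand-rolled feature model. The feature names, the toy domain, and the mapping to entities are all invented for illustration.

```python
# Hypothetical sketch: deriving one member of a family of conceptual schemas
# from a feature selection, in the spirit of a feature-diagram-based approach.

# A toy feature model for a neuroscience-like domain: mandatory features are
# always included; optional features are chosen per organization.
FEATURE_MODEL = {
    "Subject": {"mandatory": True, "entities": ["Subject"]},
    "EEGRecording": {"mandatory": False, "entities": ["EEGSession", "Electrode"]},
    "BehavioralTest": {"mandatory": False, "entities": ["TestRun", "Score"]},
}

def derive_schema(selected_features):
    """Return the entity set of the conceptual schema for one configuration."""
    entities = []
    for feature, spec in FEATURE_MODEL.items():
        if spec["mandatory"] or feature in selected_features:
            entities.extend(spec["entities"])
    return entities

if __name__ == "__main__":
    # Two organizations in the same domain, with different optional features.
    print(derive_schema({"EEGRecording"}))
    # ['Subject', 'EEGSession', 'Electrode']
    print(derive_schema({"EEGRecording", "BehavioralTest"}))
    # ['Subject', 'EEGSession', 'Electrode', 'TestRun', 'Score']
```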
19

[en] AN APPROACH TO MODEL, STORE AND ACCESS BIOLOGICAL SEQUENCES / [pt] UMA ABORDAGEM PARA MODELAR, ARMAZENAR E ACESSAR SEQUÊNCIAS BIOLÓGICAS

CRISTIAN TRISTAO 03 April 2013 (has links)
Research in molecular biology has been producing a large volume of data, which needs to be well organized, structured, and persisted. Most biological data are stored in text-format files. For large volumes of data, the natural way would be to use a DBMS to manage them. However, these systems do not have adequate structures to represent and manipulate domain-specific data. For example, biological sequences are typically treated as simple strings (text/varchar types) or BLOBs, and a whole set of compositional, positional, and content information is thus lost. This thesis argues that data management (structure, storage, and access) has become a major problem for research in bioinformatics. We therefore propose a conceptual model for representing the biological information of the central dogma of molecular biology, as well as an abstract data type (ADT) specific to the manipulation of biological sequences and their derivatives.
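The thesis defines its ADT over a DBMS; as a standalone illustration of the kind of compositional, positional, and content operations such a type might expose (rather than the actual ADT proposed), here is a small sketch. The class and operation names are assumptions.

```python
# Hypothetical sketch of a biological-sequence abstract data type exposing
# compositional, positional, and content operations instead of a raw string.

class DNASequence:
    ALPHABET = set("ACGT")

    def __init__(self, bases: str):
        bases = bases.upper()
        if not set(bases) <= self.ALPHABET:
            raise ValueError("sequence contains non-ACGT symbols")
        self._bases = bases

    def gc_content(self) -> float:
        """Compositional operation: fraction of G/C bases."""
        gc = sum(1 for b in self._bases if b in "GC")
        return gc / len(self._bases)

    def subsequence(self, start: int, end: int) -> "DNASequence":
        """Positional operation: 1-based, inclusive slice."""
        return DNASequence(self._bases[start - 1:end])

    def find_motif(self, motif: str) -> list:
        """Content operation: 1-based start positions of a motif."""
        motif = motif.upper()
        return [i + 1 for i in range(len(self._bases) - len(motif) + 1)
                if self._bases[i:i + len(motif)] == motif]

if __name__ == "__main__":
    seq = DNASequence("ATGCGCATG")
    print(seq.gc_content())              # ~0.556
    print(seq.subsequence(4, 6)._bases)  # 'CGC'
    print(seq.find_motif("ATG"))         # [1, 7]
```

Storing sequences behind such a type, instead of as plain text or BLOB columns, is what keeps the compositional and positional information queryable.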
20

Modelagem conceitual de ontologia de tarefa para as operações agrícolas da cana-de-açucar. / Task ontology conceptual modeling for the sugar cane agriculture field operations.

Abrahão, Elcio 01 December 2017 (has links)
Brazil is one of the world's largest producers of sugar cane. Sugar cane agricultural operations account for approximately 67% of sugar and alcohol production costs, in a sector that makes intense use of technology. One of the most common problems in the agricultural information systems area is the difficulty of interoperability among agents in the production chain. The lack of a standard to represent the technical knowledge of sugar cane agricultural operations makes it difficult to share this knowledge and increases the cost of maintaining expert systems. The present work proposes a conceptual model for a task ontology that represents the sugar cane agricultural operations, in order to enable interoperability between computational systems and knowledge sharing through an ontological formalism. Standards for agricultural data exchange, methods for task modeling, and ontologies have been studied. The proposed conceptual model is based on an extension of an existing UML profile for representing tasks, to which a notation was added for representing events external to a task that can change its state; this notation did not exist in the original profile. The results were evaluated with respect to the conformity of the proposed extension with the meta-model of the original conceptual modeling language and the capacity of the model to represent the specific structures of sugar cane agricultural operations. The proposed model serves as a basis for implementations via RDF or OWL, with the ontological formalism ensuring interoperability between sugar cane software systems.
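The abstract mentions implementations via RDF or OWL; as a sketch of how one field operation and an external event affecting it might be encoded as RDF triples, the snippet below uses the rdflib library. The namespace, class names, and the `suspends` property are invented for illustration and are not the ontology defined in the thesis.

```python
# Hypothetical sketch: encoding one field operation and an external event
# that changes its state as RDF triples, using the rdflib library.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/sugarcane#")  # invented namespace

g = Graph()
g.bind("ex", EX)

# A field operation (task) and an external event that can suspend it.
g.add((EX.MechanicalHarvesting, RDF.type, EX.FieldOperation))
g.add((EX.MechanicalHarvesting, RDFS.label, Literal("Mechanical harvesting")))
g.add((EX.HeavyRain, RDF.type, EX.ExternalEvent))
g.add((EX.HeavyRain, EX.suspends, EX.MechanicalHarvesting))

print(g.serialize(format="turtle"))
```

Serializing the graph in Turtle (or mapping the same structure to OWL classes and properties) is one way the shared vocabulary could be exchanged between the systems in the production chain.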
