1 |
Merging and Consistency Checking of Distributed Models. Sabetzadeh, Mehrdad, 26 February 2009 (has links)
Large software projects are characterized by distributed environments consisting of teams at different organizations and geographical locations. These teams typically build multiple overlapping models, representing different perspectives, different versions across time, different variants in a product family, different development concerns, etc. Keeping track of the relationships between these models, constructing a global view, and managing consistency are major challenges.
Model Management is concerned with describing the relationships between distributed models, i.e., models built in a distributed development environment, and providing systematic operators to manipulate these models and their relationships. Such operators include, among others, Match, for finding relationships between disparate models, Merge, for combining models with respect to known or hypothesized relationships between them, Slice, for producing projections of models and relationships based on given criteria, and Check-Consistency, for verifying models and relationships against the consistency properties of interest.
In this thesis, we provide automated solutions for two key model management operators, Merge and Check-Consistency. The most novel aspects of our work on model merging are (1) the ability to combine arbitrarily large collections of interrelated models and (2) support for tolerating incompleteness and inconsistency. Our consistency checking technique employs model merging to reduce the problem of checking inter-model consistency to that of checking intra-model consistency of a merged model. This enables a flexible way of verifying global consistency properties that is not possible with existing approaches.
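As a rough sketch of the merge-then-check reduction described above, the following Python fragment merges two small graph models under a given correspondence relation and then checks a single global property on the merged model. The graph encoding, the correspondence format, and the sample property are assumptions made for illustration; they are not the thesis's formalism or the TReMer+ implementation.

    # Minimal sketch of merge-based consistency checking, assuming models
    # are graphs (sets of nodes and edges) and a correspondence pairs up
    # nodes across two models. Illustrative only; not the TReMer+ code.

    def merge(m1, m2, correspondence):
        """Union two models, unifying nodes related by the correspondence."""
        rename = {b: a for (a, b) in correspondence}
        nodes = m1["nodes"] | {rename.get(n, n) for n in m2["nodes"]}
        edges = m1["edges"] | {(rename.get(s, s), rename.get(t, t))
                               for (s, t) in m2["edges"]}
        return {"nodes": nodes, "edges": edges}

    def acyclic(model):
        """Sample intra-model property: the merged graph has no cycle."""
        succ = {}
        for s, t in model["edges"]:
            succ.setdefault(s, []).append(t)
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {n: WHITE for n in model["nodes"]}
        def dfs(n):
            color[n] = GRAY
            for t in succ.get(n, []):
                if color[t] == GRAY or (color[t] == WHITE and not dfs(t)):
                    return False
            color[n] = BLACK
            return True
        return all(dfs(n) for n in model["nodes"] if color[n] == WHITE)

    # Two stakeholder views of a workflow; "ship" in one view is asserted
    # to correspond to "deliver" in the other.
    view1 = {"nodes": {"order", "pay", "ship"},
             "edges": {("order", "pay"), ("pay", "ship")}}
    view2 = {"nodes": {"deliver", "order"},
             "edges": {("deliver", "order")}}

    merged = merge(view1, view2, correspondence={("ship", "deliver")})
    print(acyclic(merged))  # False: the views are jointly inconsistent

Neither view contains a cycle on its own; the violation is visible only on the merged model, which is precisely what the reduction from inter-model to intra-model checking exploits.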
We develop a prototype tool, TReMer+, implementing our merge and consistency checking approaches. We use TReMer+ to demonstrate that our contributions facilitate understanding and refinement of the relationships between distributed models.
|
2 |
A Web-Services-Based Approach to Model Management. Chiu, Ching-Chih, 26 July 2006 (has links)
Decision support systems (DSS) are increasingly important in today's highly competitive environment, in which organizations face frequent and complex decision problems. The growing popularity of web technology has pushed most information systems to become web-based, and DSS are no exception. Model management, a major component of DSS, is moving in the same direction. This research therefore examines how web services technology can be used to implement model management for decision support.
In this research, a model is treated as a service. The primary purpose of this research is to achieve model integration capabilities. We adopt existing techniques for web service composition to implement a system that supports model integration; specifically, we use WSDL and BPEL4WS to develop a model integration and management method that supports organizational decision making.
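To give a flavor of the model-as-a-service idea, here is a minimal Python sketch in which two decision models are exposed as callable services and a small composition engine executes them in sequence, passing one model's output to the next, loosely in the spirit of a BPEL4WS process over WSDL-described operations. The models and the wiring are invented for illustration; the actual artifacts in this line of work are WSDL and BPEL documents rather than Python code.

    # Sketch: each decision model is a "service"; a declarative process
    # wires one model's output into the next, loosely analogous to a
    # BPEL4WS <sequence>. All models and names here are illustrative.

    def forecast_demand(history):
        """Toy forecasting model: three-period moving average."""
        return sum(history[-3:]) / 3

    def order_quantity(demand, on_hand, safety_stock=10):
        """Toy inventory model: order up to demand plus safety stock."""
        return max(0, round(demand) + safety_stock - on_hand)

    # Each step: (service, names of its inputs, name of its output).
    PROCESS = [
        (forecast_demand, ["history"], "demand"),
        (order_quantity,  ["demand", "on_hand"], "order"),
    ]

    def run(process, inputs):
        """Minimal composition engine: execute services in sequence,
        passing named values through a shared variable pool."""
        pool = dict(inputs)
        for service, in_names, out_name in process:
            pool[out_name] = service(*(pool[n] for n in in_names))
        return pool

    result = run(PROCESS, {"history": [100, 120, 110, 130], "on_hand": 40})
    print(result["order"])  # 90: forecast of 120, plus 10 safety, minus 40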
Service-oriented architecture (SOA) is the direction in which future information systems are heading, and web service technology is the foundation for this paradigm shift. The method developed in this research can help an organization better manage its decision models and thereby increase its competitive advantage.
|
3 |
Extension des systèmes de métamodélisation persistants avec la sémantique comportementale / Handling behavioral semantics in persistent metamodeling systems. Bazhar, Youness, 13 December 2013 (has links)
Model-Driven Engineering (MDE) has attracted great interest thanks to the advantages it offers; in particular, it aims to accelerate the development process and to ease software maintenance. But with the ever-growing size of models and their instances, classical tools that operate in main memory show scalability limits when exploiting large-scale models. Storing and managing models in databases is one proposed answer to this problem, and two approaches have emerged. The first equips modeling tools with databases dedicated to model storage, called model repositories (e.g., EMFStore). These repositories come with exploitation languages restricted to querying models and instances, so they serve only as model warehouses: processing a model management task still requires loading the whole model into main memory, and the languages offer no support for advanced operations such as model transformation or code generation. The second approach, which we follow in this work, defines persistent database environments dedicated to metamodeling and model management, called Persistent MetaModeling Systems (PMMSs). A PMMS consists of (i) a database that stores metamodels, models, and their instances, and (ii) an associated exploitation language with metamodeling and model management capabilities. Several PMMSs have been proposed (e.g., ConceptBase, OntoDB/OntoQL). They mainly support the definition of the structural and descriptive semantics of metamodels and models in terms of (meta-)classes, (meta-)attributes, etc., but provide only limited mechanisms for defining the behavioral semantics needed to exploit models and instances. Yet behavioral semantics is useful, for example, to compute derived concepts, perform model transformations, or generate source code. We therefore propose to extend PMMSs with the ability to dynamically introduce user-defined model and data management operations that can be implemented with heterogeneous mechanisms: internal database mechanisms (e.g., stored procedures) as well as external ones such as web services or external programs (e.g., in Java or C++). This extension gives PMMSs broader functional coverage and greater flexibility. We implemented it on the OntoDB/OntoQL prototype, checked the scalability of our approach experimentally, and applied the extension in three different contexts: (1) computing derived concepts in ontology-based databases, (2) improving an ontology-based database design methodology, and (3) transforming and analyzing models of real-time and embedded systems.
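The following Python sketch illustrates the shape of the proposed extension: an operation registry in which a user-defined operation may be backed by different implementation mechanisms (an in-database routine, a web service, or an external program) and is invoked uniformly. All names and the dispatch scheme are assumptions made for illustration; OntoQL's actual syntax and the OntoDB implementation differ.

    import subprocess
    import urllib.request

    # Sketch of a PMMS-style operation registry: one logical operation,
    # several possible implementation mechanisms. Illustrative only; this
    # is not the OntoDB/OntoQL implementation.

    REGISTRY = {}

    def define_operation(name, mechanism, target):
        """Dynamically register an operation, playing the role of an
        OntoQL-like CREATE OPERATION ... IMPLEMENTED BY ... statement."""
        REGISTRY[name] = (mechanism, target)

    def invoke(name, *args):
        mechanism, target = REGISTRY[name]
        if mechanism == "stored_procedure":
            return target(*args)  # stand-in for calling a DB routine
        if mechanism == "web_service":
            query = "&".join(f"arg{i}={a}" for i, a in enumerate(args))
            with urllib.request.urlopen(f"{target}?{query}") as resp:
                return resp.read().decode()
        if mechanism == "external_program":
            out = subprocess.run([target, *map(str, args)],
                                 capture_output=True, text=True, check=True)
            return out.stdout.strip()
        raise ValueError(f"unknown mechanism: {mechanism}")

    # Example: a derived-concept computation registered as a "procedure".
    define_operation("superclasses", "stored_procedure",
                     lambda cls: {"Person"} if cls == "Student" else set())
    print(invoke("superclasses", "Student"))  # {'Person'}

The point of the indirection is that the exploitation language sees a single operation name, while the backing mechanism can be swapped without touching the models that use it.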
|
4 |
A Practical Approach to Merging Multidimensional Data Models. Mireku Kwakye, Michael, 30 November 2011 (has links)
Schema merging is the process of incorporating data models into an integrated, consistent schema from which query solutions satisfying all incorporated models can be derived. The efficiency of such a process depends on an effective semantic representation of the chosen data models, as well as on the mapping relationships between the elements of the source data models.
Consider a scenario where, as a result of company mergers or acquisitions, a number of related but possibly disparate data marts need to be integrated into a global data warehouse. The ability to retrieve data across these disparate, but related, data marts poses an important challenge. Intuitively, forming an all-inclusive data warehouse involves the tedious tasks of identifying related fact and dimension table attributes, as well as designing a schema merge algorithm for the integration. Additionally, it becomes difficult to evaluate the combined set of correct answers to queries that would otherwise be posed independently to each data mart.
Model management refers to a high-level, abstract programming language designed to efficiently manipulate schemas and mappings. In particular, model management operations such as match, compose mappings, apply functions, and merge offer a way to handle the above-mentioned data integration problem within the domain of data warehousing.
In this research, we introduce a methodology, based on model management, for integrating star-schema source data marts into a single consolidated data warehouse. Our methodology develops three main streamlined steps to facilitate the generation of a global data warehouse: we adopt techniques for deriving attribute correspondences and for schema mapping discovery, and we then formulate and design a merge algorithm based on multidimensional star schemas, which is the core contribution of this research. Our approach focuses on delivering a polynomial-time solution suited to the expected volume of data and its associated large-scale query processing.
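As a rough sketch of the kind of merge step this methodology builds on, the following Python fragment folds two star-schema descriptions into one, unifying attributes related by a given set of correspondences. The schema encoding, the name-based matching of dimensions, and the correspondence format are simplifying assumptions for illustration; they are not the merge algorithm developed in this research.

    # Sketch: merge two star schemas given attribute correspondences.
    # A schema is {"fact": {attrs}, "dims": {dim_name: {attrs}}}; the
    # correspondences map schema-2 attribute names onto schema-1 names,
    # and dimensions are matched by name. Illustrative only.

    def merge_star_schemas(s1, s2, corr):
        unify = lambda a: corr.get(a, a)
        merged = {
            "fact": s1["fact"] | {unify(a) for a in s2["fact"]},
            "dims": {d: set(attrs) for d, attrs in s1["dims"].items()},
        }
        for dim, attrs in s2["dims"].items():
            merged["dims"].setdefault(dim, set()).update(
                unify(a) for a in attrs)
        return merged

    sales_a = {"fact": {"qty", "amount"},
               "dims": {"date": {"day", "month"}, "store": {"store_id"}}}
    sales_b = {"fact": {"quantity", "amount"},
               "dims": {"date": {"day", "year"}, "product": {"sku"}}}

    merged = merge_star_schemas(sales_a, sales_b, corr={"quantity": "qty"})
    print(merged["fact"])          # {'qty', 'amount'}
    print(merged["dims"]["date"])  # {'day', 'month', 'year'}

Each pass over the input schemas is linear in their size, which is consistent with the polynomial-time goal stated above.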
The experimental evaluation shows that an integrated schema, alongside instance data, can be derived based on the type of mappings adopted in the mapping discovery step. Adopting Global-And-Local-As-View (GLAV) mapping models delivered a maximally-contained or exact representation of all fact and dimension instance data tuples needed for query processing on the integrated data warehouse. Additionally, different forms of conflicts, such as semantic conflicts for related or unrelated dimension entities and descriptive conflicts for differing attribute data types, were encountered and resolved in the developed solution. Finally, this research has highlighted some critical and inherent issues regarding functional dependencies in mapping models, integrity constraints at the source data marts, and multi-valued dimension attributes. These issues arose during the integration of the source data marts and when comparing the results of queries processed on the merged data warehouse against those on the independent data marts.
|
5 |
Aplicação de indicadores de sustentabilidade para avaliar a gestão integrada de resíduos sólidos urbanos no município de Caucaia - CE ante a política nacional dos resíduos sólidos / Applying sustainability indicators to evaluate the integrated management of municipal solid waste in the municipality of Caucaia - CE in light of the national solid waste policy. Cavalcanti, Deborah de Freitas Guimarães, 31 July 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The issue of solid waste received greater attention after the enactment of the National Solid Waste Policy, established by Federal Law 12,305 of 2010 and regulated by Decree 7,404 of 2010, legislation that spent 19 years moving through the National Congress before managing to shake up the old structures and revise the old frameworks with which the public authorities had treated the matter. Earlier policies were remedial in character, with low technical quality, high public spending, and little social reach, leaving Brazilian cities with an inefficient management model. This model had negative repercussions on public health and on the growing number of people who live informally and draw their livelihood from Brazilian waste, such as the pickers of recyclable materials operating in an informal and still barely profitable recyclables market. The main objective of this work is to evaluate the current management on the basis of a set of sustainability indicators and to create a model of actions aimed at municipal managers, to be used by them as a tool for drawing up a broader municipal management plan within the new framework of Brazilian legislation. The results show that the Municipality of Caucaia manages its waste well, with collection covering about 100% of its urban territory. However, the municipality is also a major generator of organic waste, which accounted for about 57% of the samples extracted in the gravimetric analysis by the quartering method, and this waste does not receive a final treatment suitable for its type. The analysis of the sustainability indicators tends to be unfavorable because of the absence of social policies encompassing the social actors involved in the waste chain; the Municipality must therefore revise its laws and adopt other technologies in order to meet the new requirements of the Law.
|
6 |
A Study of Autonomous Agents in Decision Support Systems. Hess, Traci J., 12 May 1999 (has links)
Software agents have been heralded as the most important emerging technology of the decade. As software development firms eagerly attempt to integrate these autonomous programs into their products, researchers attempt to define the concept of agency and to develop architectures that will improve agent capabilities. Decision Support System (DSS) researchers have been eager to integrate agents into their applications, and exploratory works in which agents have been used within a DSS have been documented. This dissertation attempts to further this exploration by studying the agent features and underlying architectures that can lead to the successful integration of agents in DSS.
This exploration is carried out in three parts. In the first part, a review of the relevant research streams is provided. The history and current status of software agents are discussed first. Similarly, a historical and current view of DSS research is provided. Lastly, a historical, tutorial-style discussion of Artificial Intelligence (AI) planning is given. This review of the relevant literature provides a general background for the conceptual analyses and implementations carried out in the next two parts.
In the second part, the literature on software agents is synthesized to develop a definition of agency applicable to DSS. Using this definition, an agent-integrated DSS that supports variance analysis is designed and developed. Following this implementation, a general framework for agent-enabling DSS is suggested. The use of this framework promises to raise some DSS to a new level of capability, whereby "what-if" systems are transformed into real-time, proactive systems.
The third part utilizes this general framework to agent-enable a corporate-planning DSS and extends the framework from the second part through the introduction of an automated-planning agent. The agent uses AI planning to generate decision-making alternatives, providing a means to integrate and sequence the models in the DSS. The architecture used to support this planning agent is described. This new kind of DSS enables not only the monitoring of goals, but also the maintenance of these goals through agent-generated plans.
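To give a flavor of this architecture, the following Python sketch shows a toy goal-maintaining agent: it monitors a goal over a corporate-planning state and, when the goal is violated, searches for a short sequence of model invocations that restores it. The state, the actions, and the brute-force breadth-first planner are invented for illustration and are far simpler than the dissertation's planning architecture.

    from itertools import product

    # Toy goal-maintaining planning agent. When the monitored goal fails,
    # search for a short sequence of model "actions" that restores it.
    # The state and actions below are illustrative assumptions.

    ACTIONS = {
        "cut_costs":     lambda s: {**s, "costs": s["costs"] * 0.9},
        "raise_prices":  lambda s: {**s, "revenue": s["revenue"] * 1.05},
        "expand_market": lambda s: {**s, "revenue": s["revenue"] * 1.10,
                                         "costs":   s["costs"] * 1.04},
    }

    def goal(state):
        """Maintain a profit margin of at least 20% of revenue."""
        return state["revenue"] - state["costs"] >= 0.2 * state["revenue"]

    def plan(state, max_depth=3):
        """Breadth-first search over action sequences (a minimal planner)."""
        for depth in range(1, max_depth + 1):
            for seq in product(ACTIONS, repeat=depth):
                s = state
                for name in seq:
                    s = ACTIONS[name](s)
                if goal(s):
                    return list(seq)
        return None

    state = {"revenue": 100.0, "costs": 90.0}
    if not goal(state):        # the agent detects a violated goal...
        print(plan(state))     # ...and proposes ['cut_costs', 'cut_costs']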
The conclusion summarizes the contributions of this work and outlines in considerable detail potential research opportunities in the realm of software agents, DSS, and planning. / Ph. D.
|