
Assisting in the reuse of existing materials to build adaptive hypermedia

Zemirline, Nadjet 12 July 2011 (has links) (PDF)
Nowadays there is a growing demand for personalization, and the "one-size-fits-all" approach to hypermedia systems is no longer adequate. Adaptive hypermedia (AH) systems adapt their behaviour to the needs of individual users. However, because their authoring process is complex and demands varied skills from authors, few such systems have been built. In recent years considerable effort has gone into assisting authors in creating their own AH, yet, as this thesis explains, several problems remain. We tackle two of them. The first concerns integrating authors' materials (information and user profiles) into the models of existing systems, thus allowing authors to reuse existing adaptation reasoning directly and execute it on their own materials. We propose a semi-automatic merging/specialization process for integrating an author's model into the model of an existing system. Our objectives are twofold: to support the definition of mappings between elements of the existing system's model and elements of the author's model, and to help create consistent, relevant models that integrate the two while taking those mappings into account. The second problem concerns the adaptation specification, widely regarded as the hardest part of authoring adaptive web-based systems. We propose the EAP framework, with three main contributions: a set of elementary adaptation patterns for adaptive navigation, a typology organizing those patterns, and a semi-automatic process that generates adaptation strategies by using and combining patterns. The aim is to let authors define high-level adaptation strategies easily by combining simple ones.
Furthermore, we study the expressiveness of several existing solutions for specifying adaptation against the EAP framework and, on that basis, discuss the pros and cons of various decisions about how an adaptation language should ideally be defined. From this analysis of these solutions and our framework we derive a unified vision of adaptation and adaptation languages, together with a study of adaptation expressiveness and of interoperability between solutions, resulting in an adaptation typology. The unified vision and the typology are not limited to the solutions analysed and can be used to compare and extend other approaches in the future. Besides these theoretical qualitative studies, the thesis also describes implementations and experimental evaluations of our contributions in an e-learning application.
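The idea of building an adaptation strategy by combining elementary adaptation patterns can be sketched in a few lines. This is an illustrative toy, not the thesis's actual EAP framework: the pattern names (`annotate_visited`, `hide_not_ready`) and the first-match combination rule are assumptions made for the example.

```python
# Illustrative sketch: elementary adaptation patterns as predicates over a
# user model, combined into a navigation-adaptation strategy.

def annotate_visited(user, link):
    """Elementary pattern: mark links to concepts the user has already seen."""
    return "visited" if link["concept"] in user["seen"] else None

def hide_not_ready(user, link):
    """Elementary pattern: hide links whose prerequisites are unmet."""
    unmet = set(link["prereqs"]) - set(user["known"])
    return "hidden" if unmet else None

def combine(*patterns):
    """Build a strategy: the first pattern that fires decides the link state."""
    def strategy(user, link):
        for pattern in patterns:
            state = pattern(user, link)
            if state:
                return state
        return "normal"
    return strategy

strategy = combine(hide_not_ready, annotate_visited)
user = {"seen": {"intro"}, "known": {"intro"}}
links = [
    {"concept": "intro", "prereqs": []},
    {"concept": "advanced", "prereqs": ["basics"]},
]
states = [strategy(user, link) for link in links]
```

The composition order encodes a design choice (hiding takes precedence over annotation); a real strategy generator would make such conflicts explicit.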

Evaluation of functional data models for database design and use

Kulkarni, Krishnarao Gururao January 1983 (has links)
The problems of designing, operating, and maintaining databases with the three most popular database management system types (hierarchical, CODASYL/DBTG, and relational) are well known. Users of these systems must make conscious, often complex mappings between real-world structures and the data structuring options (data models) the systems provide. In addition, much of the semantics associated with the data either is not expressed at all or is embedded procedurally in application programs in an ad hoc way. In recent years, a large number of data models (called semantic data models) have been proposed with the aim of simplifying database design and use; however, the lack of usable implementations has so far inhibited the widespread adoption of these concepts. The present work reports on an effort to evaluate and extend one such semantic model by means of an implementation. It is based on the functional data model proposed earlier by Shipman (SHIP81); we call the result the Extended Functional Data Model (EFDM). Like Shipman's proposal, EFDM is a marriage of three advanced modelling concepts found in both database and artificial intelligence research: the entity, representing an object in the real world; type hierarchies among entity types; and derived data for modelling procedural knowledge. The model's functional notation lends itself to high-level data manipulation languages in which data selection is expressed simply as function application. Further, the functional approach makes it possible to incorporate general-purpose computation facilities in the data languages without having to embed them in procedural languages. In addition to the usual database facilities, the implementation also provides a mechanism for specifying multiple user views of the database.
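The flavour of "data selection as function application", with derived data as ordinary functions, can be illustrated independently of EFDM's own syntax. Everything below (the entity layout, `average_mark`) is invented for the sketch; it shows the style of modelling, not Shipman's or the thesis's actual notation.

```python
# Hedged sketch of the functional-data-model idea: entities, stored
# functions as attributes, and derived functions encoding procedural
# knowledge. Names are illustrative only.

class Entity:
    def __init__(self, **attrs):
        self.__dict__.update(attrs)

# Stored functions: name(student) -> str, dept(student) -> Entity
cs = Entity(name="Computer Science")
students = [
    Entity(name="Ann", dept=cs, marks=[70, 80]),
    Entity(name="Bob", dept=cs, marks=[60, 61]),
]

# Derived function: computed on demand, not stored as a column
def average_mark(s):
    return sum(s.marks) / len(s.marks)

# Data selection expressed as function application and composition
high = [s.name for s in students if average_mark(s) >= 70]
```

Because `average_mark` is just a function, general-purpose computation composes with selection directly, which is the property the abstract attributes to the functional approach.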

Asset management data warehouse data modelling

Mathew, Avin D. January 2008 (has links)
Data are the lifeblood of an organisation, being employed by virtually all business functions within a firm. Data management is therefore critical to prolonging the life of a company and to the success of each of its business functions. Over the last decade and a half, data warehousing has risen in priority within corporate data management, as it provides an effective supporting platform for decision-support tools. A cross-sectional survey conducted for this research showed that organisations are starting to use data warehousing for engineering asset management; however, industry uptake is slow, with much room for development and improvement. This conclusion is also evidenced by the lack of systematic scholarly research on asset management data warehousing compared with data warehousing for other business areas. Motivated by this gap, this research attempts to provide original contributions to the area, focusing on data modelling. Integration is a fundamental characteristic of a data warehouse and facilitates the analysis of data from multiple sources. While several integration models exist for asset management, they cover only selected areas of it. This research presents a novel conceptual data warehousing data model that integrates the numerous asset management data areas. The comprehensive ethnographic modelling methodology involved a diverse set of inputs (including data model patterns, standards, information system data models, and business process models) that described asset management data. Used as an integrated data source, the conceptual data model was verified by more than 20 experts in asset management and validated against four case studies. A large share of asset management data is stored in relational form, owing to the maturity and pervasiveness of relational database management systems.
Data warehousing offers the alternative of structuring data in a dimensional format, which promises faster data retrieval and reduced analysis complexity for end users. To investigate the benefits of moving asset management data from a relational to a multidimensional format, this research presents a relational versus multidimensional model evaluation procedure. For an equitable comparison, the compared multidimensional models are derived from an asset management relational model; to that end, the research presents an original multidimensional modelling derivation methodology for asset management relational models. Multidimensional models were derived from the relational models in the asset management data exchange standard MIMOSA OSA-EAI, and the two kinds of model were compared through a series of queries. The multidimensional schemas reduced data size and hence data insertion time, decreased the complexity of query conceptualisation, and improved query execution performance across a range of query types. To facilitate quicker uptake of these multidimensional data warehouse models within organisations, an alternative modelling methodology was investigated: an innovative case-based reasoning approach to data warehouse schema design. Using purpose-built case representation and indexing techniques, the system also draws on a business vocabulary repository to augment case searching and adaptation. The system was validated through a case study in which multidimensional schema design speed and accuracy were measured. The case-based reasoning system provided a marginal benefit overall, with greater benefits when confronted with more difficult scenarios.
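The relational-versus-dimensional comparison can be miniaturised with SQLite: the same measurements are stored once in a normalized layout (two joins to reach a descriptive attribute) and once in a star layout (one join to a denormalized dimension). Table and column names are invented for the sketch, not taken from MIMOSA OSA-EAI, and the example only demonstrates that both layouts answer the same aggregate query, not the performance results the thesis reports.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
-- Relational (normalized): three tables, two joins to reach the site name
CREATE TABLE site(site_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE asset(asset_id INTEGER PRIMARY KEY, site_id INTEGER);
CREATE TABLE measurement(asset_id INTEGER, value REAL);
-- Dimensional (star): one fact table, one denormalized dimension
CREATE TABLE dim_asset(asset_key INTEGER PRIMARY KEY, site_name TEXT);
CREATE TABLE fact_meas(asset_key INTEGER, value REAL);
""")
cur.executemany("INSERT INTO site VALUES (?,?)", [(1, "north"), (2, "south")])
cur.executemany("INSERT INTO asset VALUES (?,?)", [(10, 1), (11, 2)])
cur.executemany("INSERT INTO measurement VALUES (?,?)",
                [(10, 10.0), (10, 20.0), (11, 30.0)])
# Same data loaded dimensionally: the site name is folded into the dimension
cur.executemany("INSERT INTO dim_asset VALUES (?,?)",
                [(10, "north"), (11, "south")])
cur.executemany("INSERT INTO fact_meas VALUES (?,?)",
                [(10, 10.0), (10, 20.0), (11, 30.0)])

q_rel = cur.execute(
    """SELECT s.name, AVG(m.value) FROM measurement m
       JOIN asset a ON a.asset_id = m.asset_id
       JOIN site  s ON s.site_id  = a.site_id
       GROUP BY s.name ORDER BY s.name""").fetchall()
q_dim = cur.execute(
    """SELECT d.site_name, AVG(f.value) FROM fact_meas f
       JOIN dim_asset d ON d.asset_key = f.asset_key
       GROUP BY d.site_name ORDER BY d.site_name""").fetchall()
```

The star query needs one join where the normalized query needs two, which is the structural source of the simpler query conceptualisation the abstract mentions.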

Agrupamento personalizado de pontos em web maps usando um modelo multidimensional - APPWM / Multidimensional model for cluster points in web maps

Bigolin, Marcio January 2014 (has links)
The growth of georeferenced information makes it extremely important to develop techniques that improve how this information is visualized. Web maps have accordingly become an increasingly common way of disseminating it: they let users explore geographic trends quickly, without much technical knowledge of cartography or specialized software.
Map areas where the same event occurs with high incidence produce confusing views that hinder proper decision making. When such areas are represented as points (which is quite common), the density of information causes massive overplotting. This dissertation proposes a technique that uses a multidimensional data model to drive the display of information on a web map according to the user's context. The model organizes data by geographic level, allowing a better understanding of the displayed information. The experiments showed the technique to be easy to use and to require little prior knowledge to perform the tasks: of the 59 queries proposed, only 7 needed significant changes to execute. These results indicate that the model is a good alternative for supporting decision making over maps produced in a web environment.
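A zoom-dependent clustering of map points can be sketched with a plain grid: coarser cells stand in for lower zoom levels, roughly analogous to the geographic levels of the dissertation's multidimensional model. The grid approach and all names here are assumptions for illustration, not the APPWM technique itself.

```python
from collections import defaultdict

def cluster(points, cell_size):
    """Aggregate (x, y) points into grid cells; coarser cells = lower zoom."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(int(x // cell_size), int(y // cell_size))].append((x, y))
    # One representative marker per cell: centroid plus point count
    return {
        cell: (sum(p[0] for p in pts) / len(pts),
               sum(p[1] for p in pts) / len(pts),
               len(pts))
        for cell, pts in cells.items()
    }

points = [(0.1, 0.2), (0.4, 0.1), (3.5, 3.5)]
coarse = cluster(points, cell_size=4)   # zoomed out: one marker for all points
fine = cluster(points, cell_size=1)     # zoomed in: markers separate again
```

Replacing many overlapping points with one counted marker per cell is what removes the overplotting the abstract describes; choosing cells by administrative or geographic level rather than a uniform grid is where a multidimensional model adds context.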

Application of IEC 61970 for data standardization and smart grid interoperability. / AplicaÃÃo da norma IEC 61970 para padronizaÃÃo de dados e interoperabilidade de redes elÃtricas inteligentes

Mario Barreto de Moura Neto 28 February 2014 (has links)
Coordenação de Aperfeiçoamento de Nível Superior / In the current modernization of electric power systems, the Smart Grid concept and its foundations serve as guidelines. In the search for interoperability, communication between heterogeneous systems has been the subject of constant and growing development. Against this background, this dissertation focuses on the study and application of the data model contained in the IEC 61970 series of standards, best known as the Common Information Model (CIM).
With this purpose, the general aspects of the standard are presented with the support of UML (Unified Modeling Language) and XML (eXtensible Markup Language) concepts, which are essential for a complete understanding of the model. Certain features of CIM, such as its extensibility and generality, are emphasized; they qualify it as a data model well suited to establishing interoperability. To exemplify the use of the model, a case study modelled a medium-voltage electrical distribution network so as to make it suitable for integration with a multi-agent system in a standardized format and, consequently, ready for interoperability. The complete process of modelling an electrical network with CIM is shown. Finally, an interface is developed as a mechanism that enables human intervention in the data flow between the integrated systems; the use of PHP with a MySQL database is justified by their suitability across varied usage environments. Together, the interface, the electrical network simulator, and the multi-agent system for automatic service restoration formed a system whose information was fully integrated.
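Serializing a network element in a CIM-flavoured RDF/XML shape can be sketched with the Python standard library. The class and attribute names (`cim:Breaker`, `cim:IdentifiedObject.name`) follow common CIM usage, but this snippet is not a validated IEC 61970 profile, and the namespace URI shown is only one of several versioned CIM namespaces.

```python
import xml.etree.ElementTree as ET

# Assumed namespaces: the standard RDF namespace and one versioned CIM
# schema namespace (illustrative choice, not mandated by the dissertation).
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
CIM = "http://iec.ch/TC57/2013/CIM-schema-cim16#"
ET.register_namespace("rdf", RDF)
ET.register_namespace("cim", CIM)

# One breaker of a distribution feeder, identified by an rdf:ID
root = ET.Element(f"{{{RDF}}}RDF")
breaker = ET.SubElement(root, f"{{{CIM}}}Breaker", {f"{{{RDF}}}ID": "BRK_1"})
name = ET.SubElement(breaker, f"{{{CIM}}}IdentifiedObject.name")
name.text = "Feeder 1 breaker"

xml_text = ET.tostring(root, encoding="unicode")
```

A document in this shape is what heterogeneous applications (here, the multi-agent system) can parse against the shared CIM vocabulary instead of a proprietary format.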

O impacto de variáveis climáticas sobre o valor da produção agrícola - análise para alguns estados brasileiros / The climate impacts on the agricultural production - an analysis for some Brazilian states

Nicole Rennó Castro 05 February 2015 (has links)
The influence of climate on agriculture has been widely discussed in the economic literature, and the results suggest that agriculture will be the sector most affected under current climate projections. For Brazil the issue is particularly relevant: the agricultural sector and its related activities account for a significant share of national GDP, so the country's economic performance is tied to the sector's results. Moreover, Brazilian agriculture holds a significant share of the international market, making the country an important player in the global supply of commodities. Studies that help reduce the potential impacts of climate on Brazilian agriculture are therefore relevant, given the effects on the international commodity market and on the national economy.
In this context, this study empirically evaluated the potential impact of climate on agricultural production in the country's main producing states, by estimating elasticities between temperature and precipitation and the real value of production in those states. A fixed-effects model was estimated on a panel of ten states from 1990 to 2012. The results suggest significant climate impacts on agriculture, with the temperature effects markedly larger in magnitude than the precipitation effects. The estimated temperature relationships were predominantly negative; for precipitation, the opposite held. Responses also diverged widely across states: Rio Grande do Sul and Espírito Santo proved the most vulnerable to climate variation; only in Goiás did agriculture respond positively to temperature increases; and in Bahia and Mato Grosso no statistically significant relationships were found.
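The estimation idea, a log-log fixed-effects regression whose temperature coefficient reads as an elasticity, can be reproduced on synthetic data via the within transformation (demeaning each state's series). The numbers below are simulated, not the thesis's dataset; the point is only that demeaning removes the state effect and recovers the elasticity.

```python
import numpy as np

# Simulated panel: 10 states x 23 years, mimicking the study's dimensions.
rng = np.random.default_rng(0)
n_states, n_years = 10, 23
state_effect = rng.normal(0, 1, n_states)           # unobserved heterogeneity
log_temp = rng.normal(3.0, 0.1, (n_states, n_years))
true_elasticity = -1.5                              # assumed for the sketch
log_value = (state_effect[:, None] + true_elasticity * log_temp
             + rng.normal(0, 0.05, (n_states, n_years)))

# Within transformation: demean by state, then pooled OLS on demeaned data.
# The state fixed effects cancel out of the demeaned regression.
x = log_temp - log_temp.mean(axis=1, keepdims=True)
y = log_value - log_value.mean(axis=1, keepdims=True)
beta = (x * y).sum() / (x * x).sum()                # estimated elasticity
```

In a log-log specification, `beta` is directly the elasticity: a 1% change in temperature is associated with a `beta`% change in the real value of production, holding state effects fixed.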

Proposta para o banco de dados do projeto WebMaps / A proposal for the database of the WebMaps project

Martins, Rodrigo Grassi 12 December 2006 (has links)
Orientador: Claudia Maria Bauzer Medeiros / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / The goal of the WebMaps project is the specification and development of a Web information system to support crop monitoring and planning in Brazil. Work towards this goal is the object of leading multidisciplinary research worldwide. One of the problems to be faced is the design of the WebMaps database. This dissertation discusses the project's database needs and proposes a basic model for it. The proposed database supports the registration of users, properties, and parcels (talhões), and manages the remaining data, satellite images in particular.
The main contributions of this work are: the specification of a data model with spatio-temporal management support; the specification of a set of temporal, spatial, and spatio-temporal queries; and the implementation of a prototype using PostgreSQL/PostGIS. / Mestrado / Sistemas de Informação / Mestre em Ciência da Computação
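The kind of spatio-temporal query such a database must answer, e.g. which satellite images cover a given parcel in a given period, can be sketched with plain bounding boxes standing in for PostGIS geometries. All identifiers below are invented for the example.

```python
from datetime import date

# Toy catalog: one parcel and three images, each with a bounding box
# (xmin, ymin, xmax, ymax) and an acquisition date.
parcels = {"talhao_1": (0, 0, 10, 10)}
images = [
    {"id": "img_a", "bbox": (5, 5, 20, 20), "taken": date(2006, 3, 1)},
    {"id": "img_b", "bbox": (50, 50, 60, 60), "taken": date(2006, 3, 2)},
    {"id": "img_c", "bbox": (0, 0, 4, 4), "taken": date(2005, 1, 1)},
]

def intersects(a, b):
    """Axis-aligned bounding-box overlap test (the spatial predicate)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def images_for(parcel, start, end):
    """Spatio-temporal query: images overlapping the parcel within [start, end]."""
    box = parcels[parcel]
    return [i["id"] for i in images
            if intersects(box, i["bbox"]) and start <= i["taken"] <= end]

hits = images_for("talhao_1", date(2006, 1, 1), date(2006, 12, 31))
```

In the actual prototype this combination of a spatial predicate and a time interval would be a single SQL query using PostGIS geometry operators instead of bounding boxes.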

Production et transmission des données de suivi des patients atteints de maladies chroniques dans un contexte de télémédecine et intégration dans un système d'information pour l'aide à la décision / Production and transmission of chronic disease patients monitoring data in a context of telemedicine and integration into an information system for decision support

Finet, Philippe 15 December 2017 (has links)
The aging of the population is accompanied by a growing number of patients with chronic diseases, and hence by more hospital visits and examinations. Telemedicine can improve both the quality and safety of care and reduce health expenditure; it appears promising but is still insufficiently deployed. We identified the most frequent chronic conditions compatible with telemedicine: heart failure, diabetes, respiratory failure, and renal failure. This study highlighted a set of comorbidities associated with these four conditions and showed the need for overall patient care. A state-of-the-art review of telemedicine experiments worldwide for these diseases revealed that the applications proposed so far are partially redundant and not interoperable with one another. As a result, a patient may perform the same measurement twice for the same medical examination, once for each of two different conditions. These two problems can lead to redundant development for each telemedicine application, to reduced efficiency of telemedicine applications when deployed, and to risks of worsening the patient's health, since a health professional's action on one condition can affect another. The study also revealed requirements common to these conditions. Our work therefore consisted in developing a generic architecture that lets different telemedicine applications, each specific to a chronic disease, share a common technical platform.
The originality of this work lies, on the one hand, in the study of the communication norms and standards needed for the interoperability of the proposed infrastructure and, on the other, in a data model for the vital signs analysed and their context. The latter contains all the information that can influence the interpretation of results, such as the date and time of the measurement, the nature of the acquired data, and the characteristics of the sensors used. To validate our model of a telemonitoring application for chronic diseases, we carried out two experiments. The first, in collaboration with the company AZNetwork, implemented a digital platform for collecting and archiving seniors' medical data within the Silver@Home project. The second, in partnership with the TELAP care network on the Domoplaies project, extended our model to a system for exchanging medical information among health professionals.
This work yields a proposed telemedicine application model that not only conforms to the Health Information Systems Interoperability Framework (CI-SIS) of the Agence des Systèmes d'Information Partagés de Santé (ASIP Santé), but also proposes extending that framework to data acquisition at the patient's home.
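The thesis's point that a vital-sign value is only interpretable together with its context (when it was measured, with which sensor, under what conditions) can be sketched as a record type. The field names below are illustrative assumptions, not the CI-SIS schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class VitalSignRecord:
    """A measured value plus the context needed to interpret it."""
    patient_id: str
    kind: str                 # e.g. "blood_pressure_systolic" (illustrative)
    value: float
    unit: str
    measured_at: datetime     # date and time of the measurement
    sensor_model: str         # device characteristics affect interpretation
    context: dict = field(default_factory=dict)  # e.g. posture, activity

rec = VitalSignRecord(
    patient_id="p01", kind="blood_pressure_systolic", value=135.0,
    unit="mmHg", measured_at=datetime(2017, 12, 15, 8, 30),
    sensor_model="home-cuff-x1", context={"posture": "sitting"},
)
```

Keeping the unit, timestamp, and sensor alongside the value is what lets two disease-specific applications share one measurement instead of asking the patient to repeat it.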

Towards tool support for phase 2 in 2G

Stefánsson, Vilhjálmur January 2002 (has links)
When systematically adopting a CASE (Computer-Aided Software Engineering) tool, an organisation evaluates candidate tools against a framework of requirements and selects the most suitable one. A method called 2G has been proposed for developing such frameworks from the needs of a specific organisation. It includes a pilot evaluation phase in which state-of-the-art CASE tools are explored to gain a better understanding of the requirements that the adopting organisation puts on candidate tools. This exploration produces output data, parts of which are used in interviews to discuss the findings with the organisation. This project focused on identifying the characteristics of these data and, subsequently, on hypothesising a representation for them, with the aim of providing guidelines for future tool support for the 2G method. The approach was to conduct a case study of a new application of the pilot evaluation phase and to analyse the resulting data for such characteristics. This yielded a hypothesised data representation that fit the data from the conducted application well, although certain situations were identified that the representation might not be able to handle.
