11

COMMUNICATION THROUGH MODELS IN THE CONTEXT OF SOFTWARE DEVELOPMENT

Ferreira, Juliana Soares Jansen 02 August 2016 (has links)
Software development is a highly collaborative process in which software construction is the common goal. It is supported at several stages by computational tools, including software modeling tools. Models are important artifacts of the software development process and are the focus of this research, which investigates the communicability of software models produced and consumed with the support of modeling tools. Software model communicability is the capacity of such artifacts to carry out a communication process among people, or to serve as instruments for a significant part of that process. Modeling tools have a direct impact on this communicability, since model producers and consumers interact with those tools throughout the software development process. During that interaction, software models, which are intellectual artifacts, are created, changed, evolved, transformed, and shared by the people involved in the specification, analysis, design, and implementation of the software under development. Besides the influence of tools, software modeling must also take previously defined notations into account as premises for modeling activities. This research investigates how modeling tools and notations influence and support the intellectual process of producing and consuming software models. Semiotic Engineering is our guiding theory, and one of its essential concerns frames the work: a careful study of the tools that people involved in software development use to build, use, and share the models through which they coordinate their teamwork. The use of models in the software development process is a phenomenon involving several factors that cannot be isolated from one another. We therefore propose the Tool-Notation-People (TNP) triplet as an articulation device for characterizing the issues observed about models in software development throughout the research. Along with the TNP triplet, we introduce a method that combines cognitive and semiotic perspectives to evaluate software modeling tools, producing data about designer-user metacommunication, the users in this case being software developers. We aim to trace potential relations between the human-computer interaction experience of those involved in the software development process while creating, reading, and editing models and: (a) the products (types of models) generated in the process; and (b) the interpretations such models evoke when actually used in everyday practical situations to communicate and express ideas and understandings. Semiotic Engineering is of twofold interest to this research. On the one hand, as an observation lens, the theory offers many resources for investigating and understanding the construction and use of computational artifacts, their meanings, and their roles in the communication process. On the other hand, a better understanding of the complete process that ultimately results in the user experience during interaction with the software is relevant to the evolution of the theory itself. In other words, this research has produced further knowledge about the conditions for communication and mutual understanding of those who, according to the theory, communicate their intent and design principles through the interface, a potentially valuable source of explanations for communicability problems in HCI.
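The abstract describes the TNP triplet only conceptually. As a rough illustration of how observed issues could be articulated along its three dimensions, here is a minimal sketch in Python; the record type, field names, and the example issue are invented for illustration and are not taken from the thesis.

from dataclasses import dataclass

@dataclass(frozen=True)
class TNPIssue:
    """One observed modeling issue, articulated along the
    Tool-Notation-People triplet described in the abstract.
    All field names are illustrative assumptions."""
    tool: str      # modeling tool involved (e.g. a UML editor)
    notation: str  # notation in use (e.g. "UML class diagram")
    people: str    # producers/consumers affected
    issue: str     # the communicability problem observed

# Hypothetical example: a tool that silently drops constraint
# annotations on export, breaking designer-to-reader communication.
issue = TNPIssue(
    tool="ExampleUMLTool",
    notation="UML class diagram with OCL constraints",
    people="developer reading an exported model",
    issue="constraints attached by the analyst are not shown, "
          "so the intended meaning never reaches the reader",
)
print(issue)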
12

Language Family Engineering with Features and Role-Based Composition

Wende, Christian 16 March 2012 (has links)
The benefits of Model-Driven Software Development (MDSD) and Domain-Specific Languages (DSLs) with respect to efficiency and quality in software engineering increase the demand for custom languages and the need for efficient language engineering methods. This motivated the introduction of language families, which aim to further reduce the development costs and maintenance effort for custom languages. The basic idea is to exploit commonalities and to provide means for systematic variation among a set of related languages. Current language engineering techniques and methodologies are not prepared for the particular challenges of language families. First, language engineering processes lack means for the systematic analysis, specification, and management of the variability found in language families. Second, technical approaches to the modular specification and realisation of languages offer insufficient modularity: they lack means for information hiding, explicit module interfaces, loose coupling, and flexible module integration. Our first contribution, Feature-Oriented Language Family Engineering (LFE), adapts methods from Software Product Line Engineering to the domain of language engineering. It extends Feature-Oriented Software Development to support the metamodelling approaches used for language engineering and replaces state-of-the-art processes with a variability- and reuse-oriented LFE process. Feature-oriented techniques serve as means for systematic variability analysis, variability management, language variant specification, and the automatic derivation of custom language variants. Our second contribution, Integrative Role-Based Language Composition, extends existing metamodelling approaches with roles. Role models introduce enhanced modularity for object-oriented specifications such as abstract syntax metamodels. We introduce a role-based language for the specification of language components, a role-based composition language, and an extensible composition system for evaluating role-based language composition programs. The composition system introduces integrative, grey-box composition techniques for language syntax and semantics that realise the statics and dynamics of role composition, respectively. To evaluate the introduced approaches and show their applicability, we apply them in three major case studies. First, we use feature-oriented LFE to implement a language family for the ontology language OWL. Second, we employ role-based language composition to realise a component-based version of the language OCL. Third, we apply both approaches in combination to develop SumUp, a family of languages for mathematical equations.

Table of contents:
1. Introduction
1.1. The Omnipresence of Language Families
1.2. Challenges for Language Family Engineering
1.3. Language Family Engineering with Features and Role-Based Composition
2. Review of Current Language Engineering
2.1. Language Engineering Processes
2.1.1. Analysis Phase
2.1.2. Design Phase
2.1.3. Implementation Phase
2.1.4. Applicability in Language Family Engineering
2.1.5. Requirements for an Enhanced LFE Process
2.2. Technical Approaches in Language Engineering
2.2.1. Specification of Abstract Syntax
2.2.2. Specification of Concrete Syntax
2.2.3. Specification of Semantics
2.2.4. Requirements for an Enhanced LFE Technique
3. Feature-Oriented Language Family Engineering
3.1. Foundations of Feature-Oriented SPLE
3.1.1. Introduction to SPLE
3.1.2. Feature-Oriented Software Development
3.2. Feature-Oriented Language Family Engineering
3.2.1. Variability and Variant Specification in LFE
3.2.2. Product-Line Realisation, Mapping and Variant Derivation for LFE
3.3. Case Study: Scalability in Ontology Specification, Evaluation and Application
3.3.1. Review of Evolution, Customisation and Combination in the OWL Language Family
3.3.2. Application of Feature-Oriented Language Family Engineering for OWL
3.4. Discussion
3.4.1. Contributions
3.4.2. Related Work
3.4.3. Conclusion
4. Integrative, Role-Based Composition for Language Family Engineering
4.1. Foundations of Role-Based Modelling
4.1.1. Information Hiding and Interface Specification in Role Models
4.1.2. Loose Coupling and Flexible Integration in Role Composition
4.2. The LanGems Language Composition System
4.2.1. The Language Component Specification Language
4.2.2. The Language Composition Language
4.2.3. Techniques of Language Composition
4.3. Case Study: Component-based OCL
4.3.1. Role-Based OCL Modularisation
4.3.2. Role-Based OCL Composition
4.4. Discussion
4.4.1. Contributions
4.4.2. Related Work
4.4.3. Conclusion
5. LFE with Integrative, Role-Based Syntax and Semantics Composition
5.1. Integrating Features and Roles
5.2. SumUp Case Study
5.2.1. Motivation
5.2.2. Feature-Oriented Variability and Variant Specification
5.2.3. Role-Based Component Realisation
5.2.4. Feature-Oriented Variability and Variant Evolution
5.2.5. Model-driven Concrete Syntax Realisation
5.2.6. Model-driven Semantics Realisation
5.2.7. Role-Based Composition and Feature Mapping
5.2.8. Language Variant Derivation
5.3. Conclusion
6. Conclusion
6.1. Contributions
6.2. Outlook
6.2.1. Co-Evolution in Language Families
6.2.2. Role-Based Tool Integration
6.2.3. Automatic Modularisation of Existing Language Families
6.2.4. Language Component Library
Appendix A
Appendix B
Bibliography
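The core mechanism of the first contribution, deriving a custom language variant from a feature selection, can be illustrated with a small sketch. The toy feature model below is invented for illustration (feature names and "requires" constraints are assumptions, not taken from the thesis); it shows only the idea of closing a selection under dependencies, not the thesis's actual LFE process.

# Toy feature model: each feature maps to the features it requires.
FEATURES = {
    "core": set(),                       # always present
    "variables": {"core"},
    "functions": {"core", "variables"},
    "plotting": {"core", "functions"},
}

def derive_variant(selected):
    """Close a feature selection under 'requires' constraints,
    rejecting selections that name unknown features."""
    unknown = selected - FEATURES.keys()
    if unknown:
        raise ValueError(f"unknown features: {unknown}")
    variant = set(selected) | {"core"}
    changed = True
    while changed:  # transitively add required features
        changed = False
        for f in list(variant):
            missing = FEATURES[f] - variant
            if missing:
                variant |= missing
                changed = True
    return variant

# Selecting only "plotting" pulls in its transitive requirements.
print(sorted(derive_variant({"plotting"})))
# ['core', 'functions', 'plotting', 'variables']

In a real language family, each selected feature would additionally be mapped to the language components (metamodel fragments, syntax, semantics) that realise it; the dependency closure shown here is just the variant-validity step.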
13

On-the-Fly Dynamic Dead Variable Analysis

Self, Joel P. 22 March 2007 (has links) (PDF)
State explosion in model checking continues to be the primary obstacle to widespread use of software model checking. The large input ranges of variables used in software are the main cause of state explosion, and as software grows in size and complexity, the problem only becomes worse. Consequently, model checking research into data abstraction as a way of mitigating state explosion has become increasingly important. Data abstractions aim to reduce the effect of large input ranges. This work focuses on a static program analysis technique called dead variable analysis. The goal of dead variable analysis is to discover variable assignments that are never used. When applied to model checking, this allows us to ignore the entire input range of dead variables and thus reduce the size of the explored state space. Prior research into dead variable analysis for model checking does not make full use of the dynamic run-time information that is present during model checking. We present an algorithm for intraprocedural dead variable analysis that uses dynamic run-time information to find more dead variables on-the-fly and further reduce the size of the explored state space. We introduce a definition of the maximal state space reduction achievable through an on-the-fly dead variable analysis and then show that our algorithm produces a maximal reduction in the absence of non-determinism.
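For background, the static analysis this thesis builds on is classic backward liveness: a variable is dead at a point if no path from that point reads it before it is rewritten. The sketch below shows the standard intraprocedural fixpoint computation on an invented three-block control-flow graph; it illustrates only the static baseline, not the thesis's dynamic on-the-fly refinement.

# Classic backward liveness analysis on a toy control-flow graph.
# Per block: "use" = variables read before any write in the block,
# "def" = variables written, "succ" = successor blocks.
blocks = {
    "entry": {"use": {"n"},           "def": {"i", "s"}, "succ": ["loop"]},
    "loop":  {"use": {"i", "n", "s"}, "def": {"i", "s"}, "succ": ["loop", "exit"]},
    "exit":  {"use": {"s"},           "def": set(),      "succ": []},
}

def liveness(blocks):
    """Iterate live_in[b] = use[b] | (live_out[b] - def[b]) to a fixpoint."""
    live_in = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b, info in blocks.items():
            out = (set().union(*(live_in[s] for s in info["succ"]))
                   if info["succ"] else set())
            inn = info["use"] | (out - info["def"])
            if out != live_out[b] or inn != live_in[b]:
                live_out[b], live_in[b] = out, inn
                changed = True
    return live_in, live_out

live_in, live_out = liveness(blocks)
# A variable absent from live_out at a block is dead there, so a model
# checker may ignore its value range when exploring successor states.
for b in blocks:
    print(b, "live-out:", sorted(live_out[b]))

The on-the-fly variant described in the abstract goes further by consulting the values actually observed during state exploration, which can kill variables that a purely static analysis must conservatively keep live.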
