51

Integrating a software engineering approach and instructional factors in instructional software development--illustrated by a prototype in theoretical computer science

De Villiers, Mary Ruth 09 1900 (has links)
This dissertation is a multi-disciplinary study that integrates a software engineering approach with instructional factors in the decision-making, analysis, design and development processes of instructional software. Software engineering models, tools and representations are used in the process of software construction. With reference to the fundamental characteristics of the software product, several disciplines and factors from both instructional and computing perspectives are considered, and the most appropriate approaches selected. Software engineering, instructional design and instructional theory are considered the pillars of courseware engineering. The object-oriented design paradigm and a prototyping life-cycle model are found to be most suitable for the development of computer-aided instruction. The conceptual study is illustrated by the prototype development of a component-based multi-activity practice environment in theoretical Computer Science, which offers perusal or practice in various instructional modes, according to the user's preferred learning style or need. / Computing / M. Sc. (Information Systems)
52

Desarrollo de sistemas de tiempo real basados en componentes utilizando modelos de comportamiento reactivos / Development of component-based real-time systems using reactive behavioural models

López Martínez, Patricia 23 September 2010 (has links)
The objective of this thesis is to define a methodology for the development of real-time component-based applications, focused on applications whose timing requirements are specified according to a reactive model of their timing behaviour. The methodology is built through a set of extensions that incorporate into the standard specifications, reference models and processes of conventional component engineering (i.e. components without timing requirements) the data structures and processes required for the specification, design and analysis of timing behaviour. The methodology relies on four main contributions:
- The Mod-MAST modular modelling methodology, which allows building the real-time model of a component-based application by composing the models of the components that form it.
- The RT-D&C extension of the OMG's Deployment and Configuration of Component-based Distributed Applications specification, which allows including metadata related to timing behaviour in the descriptors of components, execution platforms and applications.
- The RT-CCM component technology, an extension of the OMG's standard Lightweight CCM specification that adds the mechanisms needed to develop applications with predictable timing behaviour.
- The Ada-CCM component technology, a concrete implementation of RT-CCM based on the Ada 2005 programming language.
All these elements are integrated into a complete real-time design process for component-based applications.
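To make the composition idea concrete, here is a minimal sketch of how an application's real-time model could be assembled from per-component timing models, in the spirit of Mod-MAST. The component and operation names are invented for the example, and this is not the actual MAST metamodel, which is far richer (events, schedulers, shared resources):

```python
from dataclasses import dataclass, field

@dataclass
class OperationModel:
    """Timing model of one component operation (WCET in microseconds)."""
    name: str
    wcet_us: float
    calls: list = field(default_factory=list)  # operations invoked synchronously

    def composed_wcet(self) -> float:
        # WCET of this operation plus everything it invokes, composed recursively
        return self.wcet_us + sum(op.composed_wcet() for op in self.calls)

# Hypothetical component models: a sensor component that invokes a logger service.
log_write = OperationModel("Logger.write", wcet_us=40.0)
read_raw  = OperationModel("Sensor.read_raw", wcet_us=120.0)
process   = OperationModel("Sensor.process", wcet_us=300.0, calls=[log_write])

end_to_end = OperationModel("Sensor.acquire", wcet_us=25.0,
                            calls=[read_raw, process])
print(f"Composed WCET: {end_to_end.composed_wcet():.1f} us")  # 485.0 us
```

The point of the design is that each component ships its own timing model, and the application model falls out of composing them, mirroring the structure of the application itself.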
54

Component-Based Model-Driven Software Development

Johannes, Jendrik 07 January 2011 (has links) (PDF)
Model-driven software development (MDSD) and component-based software development are both paradigms for reducing complexity and for increasing abstraction and reuse in software development. In this thesis, we aim at combining the advantages of each by introducing methods from component-based development into MDSD. In MDSD, all artefacts that describe a software system are regarded as models of the system and are treated as the central development artefacts. To obtain a system implementation from such models, they are transformed and integrated until implementation code can be generated from them. Models in MDSD can have very different forms: they can be documents, diagrams, or textual specifications defined in different modelling languages. Integrating these models of different formats and abstraction in a consistent way is a central challenge in MDSD. We propose to tackle this challenge by explicitly separating the tasks of defining model components and composing model components, which is also known as distinguishing programming-in-the-small and programming-in-the-large. That is, we promote a separation of models into models for modelling-in-the-small (models that are components) and models for modelling-in-the-large (models that describe compositions of model components). To perform such component-based modelling, we introduce two architectural styles for developing systems with component-based MDSD (CB-MDSD). For CB-MDSD, we require a universal composition technique that can handle models defined in arbitrary modelling languages. A technique that can handle arbitrary textual languages is universal invasive software composition for code fragment composition. We extend this technique to universal invasive software composition for graph fragments (U-ISC/Graph) which can handle arbitrary models, including graphical and textual ones, as components. Such components are called graph fragments, because we treat each model as a typed graph and support reuse of partial models. To put the composition technique into practice, we developed the tool Reuseware that implements U-ISC/Graph. The tool is based on the Eclipse Modelling Framework and can therefore be integrated into existing MDSD development environments based on the framework. To evaluate the applicability of CB-MDSD, we realised for each of our two architectural styles a model-driven architecture with Reuseware. The first style, which we name ModelSoC, is based on the component-based development paradigm of multi-dimensional separation of concerns. The architecture we realised with that style shows how a system that involves multiple modelling languages can be developed with CB-MDSD. The second style, which we name ModelHiC, is based on hierarchical composition. With this style, we developed abstraction and reuse support for a large modelling language for telecommunication networks that implements the Common Information Model industry standard.
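As an illustration of fragment-based composition over typed graphs, the following sketch composes two model fragments by binding a variation point, loosely in the spirit of U-ISC/Graph. The classes, hook naming and binding rule are invented for the example; they are not Reuseware's API:

```python
# A model is treated as a typed graph; a fragment may expose "slots"
# (variation points) that another fragment can be bound into.
class GraphFragment:
    def __init__(self, name, nodes, edges, slots=()):
        self.name = name
        self.nodes = set(nodes)   # node ids, each standing for a model element
        self.edges = set(edges)   # (source, target) pairs
        self.slots = set(slots)   # variation points awaiting composition

    def bind(self, slot, fragment, anchor):
        """Replace a slot node with another fragment, rerouting edges to its anchor."""
        assert slot in self.slots and anchor in fragment.nodes
        nodes = (self.nodes - {slot}) | fragment.nodes
        edges = {(anchor if s == slot else s, anchor if t == slot else t)
                 for (s, t) in self.edges} | fragment.edges
        slots = (self.slots - {slot}) | fragment.slots
        return GraphFragment(f"{self.name}+{fragment.name}", nodes, edges, slots)

core = GraphFragment("core", {"Sys", "<<hook>>"}, {("Sys", "<<hook>>")},
                     slots={"<<hook>>"})
addon = GraphFragment("addon", {"Log", "Store"}, {("Log", "Store")})
composed = core.bind("<<hook>>", addon, anchor="Log")
print(composed.nodes, composed.edges)
```

Because composition operates on graphs rather than on text, the same mechanism applies uniformly to graphical and textual models, which is the property the thesis exploits.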
55

Effective reuse of coupling technologies for Earth System Models

Dunlap, Ralph S. 16 September 2013 (has links)
Designing and implementing coupled Earth System Models (ESMs) is a challenge for climate scientists and software engineers alike. Coupled models incorporate two or more independent numerical models into a single application, allowing for the simulation of complex feedback effects. As ESMs increase in sophistication, incorporating higher fidelity models of geophysical processes, developers are faced with the issue of managing increasing software complexity. Recently, reusable coupling software has emerged to aid developers in building coupled models. Effective reuse of coupling infrastructure means increasing the number of coupling functions reused, minimizing code duplication, reducing the development time required to couple models, and enabling flexible composition of coupling infrastructure with existing constituent model implementations. Despite the widespread availability of software packages that provide coupling infrastructure, effective reuse of coupling technologies remains an elusive goal: coupling models is effort-intensive, often requiring weeks or months of developer time to work through implementation details, even when starting from a set of existing software components. Coupling technologies are never used in isolation: they must be integrated with multiple existing constituent models to provide their primary services, such as model-to-model data communication and transformation. Unfortunately, the high level of interdependence between coupling concerns and scientific concerns has resulted in high interdependence between the infrastructure code and the scientific code within a model’s implementation. These dependencies are a source of complexity which tends to reduce reusability of coupling infrastructure. This dissertation presents mechanisms for increasing modeler productivity based on improving reuse of coupling infrastructure and raising the level of abstraction at which modelers work. This dissertation argues that effective reuse of coupling technologies can be achieved by decomposing existing coupling technologies into a salient set of implementation-independent features required for coupling high-performance models, increasing abstraction levels at which model developers work, and facilitating integration of coupling infrastructure with constituent models via component-based modularization of coupling features. The contributions of this research include: (1) a comprehensive feature model that identifies the multi-dimensional design space of coupling technologies used in high-performance Earth System Models, (2) Cupid, a domain-specific language and compiler for specifying coupling configurations declaratively and generating their implementations automatically, and (3) Component-based Coupling Operators (CC-Ops), a modular approach to code reuse of coupling infrastructure based on component technologies for high-performance scientific settings. The Cupid domain-specific language is evaluated by specifying a coupling configuration for an example fluid dynamics model and measuring the amount of code generated by the Cupid compiler compared to a hand coded version. The CC-Op approach is evaluated by implementing several CC-Ops using an existing high-performance component framework and measuring performance in terms of scalability and overhead.
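To illustrate what a modular coupling operator buys, the following sketch places a hypothetical regridding operator between two toy model components, hiding the grid mismatch from both. Real CC-Ops target high-performance component frameworks and far richer transformations; all names here are invented:

```python
import numpy as np

class RegridOperator:
    """Couples a source field to a target grid by 1-D linear interpolation."""
    def __init__(self, src_coords, dst_coords):
        self.src = np.asarray(src_coords)
        self.dst = np.asarray(dst_coords)

    def transfer(self, field):
        return np.interp(self.dst, self.src, field)

# Toy "atmosphere" exports temperature on a coarse grid; the "ocean"
# imports it on a finer grid. Neither model knows about the other's grid.
atm_grid = np.linspace(0.0, 1.0, 5)
ocn_grid = np.linspace(0.0, 1.0, 9)
atm_temp = 280.0 + 10.0 * atm_grid           # coarse field from the atmosphere

coupler = RegridOperator(atm_grid, ocn_grid)
ocn_forcing = coupler.transfer(atm_temp)      # field seen by the ocean model
print(ocn_forcing)
```

Packaging such transformations as standalone operators is what lets the coupling infrastructure be reused across model pairings instead of being rewritten inside each model.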
57

Algorithmes parallèles pour le suivi de particules / Parallel algorithms for tracking of particles

Bonnier, Florent 12 December 2018 (has links)
Particle tracking methods are widely used in fluid mechanics thanks to their unique ability to reconstruct long trajectories with high spatial and temporal resolution. Indeed, many industrial applications involving gas-particle flows, such as aeronautical turbines, rely on an Euler-Lagrange formalism. The rapid growth in the computing power of massively parallel machines, and the arrival of petaflops-scale systems, open a new path for simulations that were prohibitive only a decade ago. Implementing an efficient parallel code that maintains good performance on a large number of processors must therefore be studied, with particular attention to keeping the load well balanced across processors. Careful attention to the data structures is also needed to preserve the simplicity, portability and adaptability of the code across different architectures and different problems using a Lagrangian approach; some algorithms must be rethought to take these constraints into account. The computing power needed to solve these problems is offered by new distributed architectures with a large number of cores. However, exploiting these architectures efficiently is a delicate task requiring mastery of the target architectures, the associated programming models and the intended applications. The complexity of these new generations of distributed architectures is essentially due to a very large number of multi-core nodes, which can be heterogeneous and sometimes remote. The approach of most parallel libraries (PBLAS, ScaLAPACK, P_ARPACK) consists in implementing distributed versions of their base operations, which means that the subroutines of these libraries cannot adapt their behaviour to the data types: they must be defined once for the sequential case and again for the parallel case. The component-based approach provides modularity and extensibility for some numerical libraries (such as PETSc) while allowing the reuse of sequential and parallel code. This recent approach to building sequential/parallel numerical libraries is very promising thanks to its reuse possibilities and its lower maintenance cost. In industrial applications, the need for software engineering techniques in scientific computing, of which reusability is one of the most important elements, is increasingly evident; however, these techniques are not yet mastered and the models are not yet well defined. The search for methodologies to design and build reusable libraries is motivated, among other things, by the needs of industry in this field. The main objective of this thesis is to define strategies for designing a parallel numerical library for Lagrangian particle tracking using a component-based approach. These strategies must allow the reuse of sequential code in the parallel versions while enabling performance optimization. The study is based on a separation between control flow and data-flow management, and extends to parallelism models that exploit a large number of cores in shared and distributed memory.
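The central design constraint, reusing the sequential kernel unchanged inside the parallel version while keeping control flow separate from data management, can be sketched as follows. The function names are illustrative and this is not the thesis's library; a real implementation would target MPI-style distributed memory rather than Python processes:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def advect(chunk, dt=1e-3, steps=100):
    """Sequential kernel: advance one block of particles (control flow only)."""
    pos, vel = chunk
    for _ in range(steps):
        pos = pos + dt * vel                  # explicit Euler step
    return pos

def advect_parallel(pos, vel, nworkers=4):
    """Parallel driver: splits the data, reuses the sequential kernel per block."""
    chunks = list(zip(np.array_split(pos, nworkers),
                      np.array_split(vel, nworkers)))
    with ProcessPoolExecutor(max_workers=nworkers) as pool:
        return np.concatenate(list(pool.map(advect, chunks)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.random((100_000, 3))
    vel = rng.standard_normal((100_000, 3))
    print(advect_parallel(pos, vel)[:2])
```

Only the driver knows about data distribution; the kernel is the same code that would run in a purely sequential build, which is exactly the reuse property the thesis targets.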
58

Towards decision-making to choose among different component origins

Badampudi, Deepika January 2016 (has links)
Context: The amount of software in solutions provided in various domains is continuously growing. These solutions are a mix of hardware and software, often referred to as software-intensive systems. Companies seek to improve the software development process to avoid delays or cost overruns.

Objective: The overall goal of this thesis is to improve the software development/building process so as to provide timely, high-quality and cost-efficient solutions. The objective is to select the origin of the components (in-house, outsourced, components off-the-shelf (COTS) or open source software (OSS)) that facilitates this improvement. A system can be built of components from one origin or from a combination of two or more (or even all) origins. Selecting a proper origin for a component is important to get the most out of it and to optimize the development.

Method: Deciding among the different origins requires investigating them. We conducted a case study to explore the existing challenges in software development. The next step was to identify the factors that influence the choice among component origins, through a systematic literature review using a snowballing (SB) strategy and a database (DB) search. Furthermore, a Bayesian synthesis process is proposed to integrate the evidence from the literature into practice.

Results: The results of this thesis indicate that the context of software-intensive systems, such as domain regulations, hinders software development improvement. In addition to in-house development, alternative component origins (outsourcing, COTS and OSS) are being used. Several factors, such as time, cost and license implications, influence the selection of component origins. Solutions have been proposed to support the decision-making, but they consider only a subset of the factors identified in the literature.

Conclusions: Each component origin has advantages and disadvantages, and depending on the scenario one origin is more suitable than the others. Investigating the different scenarios and the suitability of each component origin is recognized as future work of this thesis, which also aims at providing models to support the decision-making process.
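As a purely illustrative sketch of this kind of decision support, the following weighted-score comparison ranks the four component origins against a subset of the factors named above. The weights and scores are invented for the example and do not come from the study, which proposes a Bayesian synthesis of evidence rather than this simple additive model:

```python
# Factor weights sum to 1; scores in [0, 1], higher is better for the factor.
FACTORS = {"time_to_market": 0.4, "cost": 0.35, "license_risk": 0.25}

ORIGINS = {
    "in-house":  {"time_to_market": 0.3, "cost": 0.4, "license_risk": 1.0},
    "outsource": {"time_to_market": 0.5, "cost": 0.5, "license_risk": 0.8},
    "COTS":      {"time_to_market": 0.8, "cost": 0.6, "license_risk": 0.5},
    "OSS":       {"time_to_market": 0.7, "cost": 0.9, "license_risk": 0.4},
}

def score(origin: str) -> float:
    """Weighted sum of the origin's factor scores."""
    return sum(w * ORIGINS[origin][f] for f, w in FACTORS.items())

for origin in sorted(ORIGINS, key=score, reverse=True):
    print(f"{origin:10s} {score(origin):.2f}")
```

Changing the weights to match a given scenario changes the ranking, which is precisely why the thesis argues that the choice is scenario-dependent.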
59

Metodología y herramientas UML para el modelado y análisis de sistemas de tiempo real orientados a objetos / UML methodology and tools for the modelling and analysis of object-oriented real-time systems

Medina Pasaje, Julio Luis 22 September 2005 (has links)
The main objective of this work is the definition of a methodology for the representation and analysis of the timing behaviour of real-time distributed systems designed following the object-oriented paradigm. The proposed methodology, called UML-MAST, reconciles the mismatch between the views of the object-oriented designer and the real-time systems designer. To this end, it defines a level of abstraction that holds all the modelling elements needed to represent real-time behaviour, structures those models in parallel with the logical architecture of the system, and links them to it. The modelling semantics follows the UML Profile for Schedulability, Performance and Time (SPT) standardized by the Object Management Group (OMG), of which UML-MAST can be considered an implementation of the schedulability analysis sub-profile; part of the results of this work have been incorporated by the OMG into the SPT profile. UML-MAST is integrated with the MAST (Modeling and Analysis Suite for Real-Time Applications) design and analysis tools, which analyse the models and feed the results back into the original model for interpretation by the designer. Criteria have also been defined for extending the methodology to other levels of abstraction, such as component-based systems and systems implemented in Ada 95.
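The kind of schedulability analysis that MAST automates can be illustrated with the classic fixed-priority response-time recurrence of Joseph and Pandya. This standalone sketch is not MAST itself, which handles distributed transactions and far richer workload models:

```python
import math

def response_times(tasks):
    """Response-time analysis for fixed-priority preemptive scheduling.
    tasks: list of (C, T) = (WCET, period) pairs, highest priority first,
    with deadlines equal to periods. Returns worst-case response times,
    or None for a task that misses its deadline."""
    results = []
    for i, (C_i, T_i) in enumerate(tasks):
        R = C_i
        while True:
            # Interference from all higher-priority tasks released during R
            interference = sum(math.ceil(R / T_j) * C_j
                               for C_j, T_j in tasks[:i])
            R_next = C_i + interference
            if R_next == R:            # fixed point reached: R is the WCRT
                break
            if R_next > T_i:           # deadline (= period) missed
                R_next = None
                break
            R = R_next
        results.append(R_next)
    return results

# Example task set: (WCET, period), highest priority first.
print(response_times([(1, 4), (2, 6), (3, 12)]))  # [1, 3, 10]
```

A model-based front end such as UML-MAST exists precisely so that designers can state tasks, priorities and timing requirements at the model level and have an analysis of this kind run on the derived model.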
