234

Generative and Model-Driven Software Visualisation Using the Example of the City Metaphor

Zilch, Denise 03 February 2015 (has links)
A city metaphor for representing software is to be implemented for the visualisation generator of the research group "Softwarevisualisierung in drei Dimensionen und virtueller Realität" (software visualisation in three dimensions and virtual reality). "CodeCity" serves as the template, and its realisation of the city metaphor is to be transferred to the generator. The requirements are derived from an analysis of both parts in order to ensure a structured approach. The generator artefacts are implemented using Xtext, to create a metamodel that describes the entities of the new metaphor, and Xtend, which is used to modify the data models and transform them into source code. Building on this, the thesis concludes with an abstraction to a process model for generative and model-driven software visualisation that is intended to serve as a guideline for future implementations.
Table of contents: outline; lists of figures, tables, listings, and abbreviations; 1 Introduction (1.1 Motivation and problem statement; 1.2 Objectives of the thesis; 1.3 Structure of the thesis); 2 Foundations of the visualisation generator (2.1 Generative and model-driven software development; 2.2 FAMIX; 2.3 Xtext and Xtend; 2.4 X3D); 3 Implementation of the prototype (3.1 Analysis of the target metaphor: 3.1.1 Foundations of "CodeCity", 3.1.2 Requirements, 3.1.3 Analysis results; 3.2 Selection and analysis of the reference metaphor: 3.2.1 Foundations of the reference metaphor, 3.2.2 Extended requirements; 3.3 The metamodel; 3.4 The workflow; 3.5 Model-to-model transformation; 3.6 Model modification; 3.7 Model-to-text transformation; 3.8 Adjustments and additions); 4 Abstracted process model; 5 Summary and outlook; Appendix A – metamodel of the Recursive Disk metaphor; Appendix B – guidance on Eclipse configurations; Appendix C – concepts for performing the model modification; Appendix D – development stages of the city metaphor; bibliography; declaration of authorship.
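
As an illustration of the model-to-text step described above, the following minimal sketch maps classes with simple metrics to buildings of a city expressed in X3D markup. It is written in plain Java rather than the Xtext/Xtend tooling the thesis uses, and the ClassInfo type, the metric-to-geometry mapping, and the row layout are illustrative assumptions, not the generator's actual artefacts.

    import java.util.List;
    import java.util.Locale;

    /** Sketch of a model-to-text transformation for a city metaphor:
     *  each class becomes a building whose height follows the number of
     *  methods and whose footprint follows the number of attributes.
     *  ClassInfo and the layout are invented for this illustration. */
    public class CityMetaphorSketch {

        record ClassInfo(String name, int methods, int attributes) {}

        static String toX3d(List<ClassInfo> classes) {
            StringBuilder x3d = new StringBuilder("<X3D><Scene>\n");
            double x = 0;
            for (ClassInfo c : classes) {
                double height = 1 + c.methods();             // metric -> building height
                double footprint = 1 + c.attributes() * 0.5;  // metric -> base size
                x3d.append(String.format(Locale.ROOT,
                    "  <Transform translation='%.1f %.1f 0'>"
                    + "<Shape><Box size='%.1f %.1f %.1f'/></Shape></Transform> <!-- %s -->%n",
                    x, height / 2, footprint, height, footprint, c.name()));
                x += footprint + 1;                           // naive row layout
            }
            return x3d.append("</Scene></X3D>\n").toString();
        }

        public static void main(String[] args) {
            System.out.print(toX3d(List.of(
                new ClassInfo("Parser", 12, 4),
                new ClassInfo("Generator", 25, 8))));
        }
    }

A real generator would, as the thesis describes, derive such output from a metamodel and a chain of model transformations rather than from hard-coded records.
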
235

From EFRE Project Funding to Permanent Operation: On the Founding of the finc User Community

Schneider, Ulrich Johannes 26 November 2014 (has links)
As part of the EFRE-funded projects, a search interface based on the open-source software VuFind was developed for eleven Saxon university libraries in the years 2011–2014. With the integration of the user services of the library management systems, the representation of hierarchical multi-volume works, and the incorporation of authority data, a proof of concept had demonstrated the capability of the open-source system early on. Now, at the end of the project period, we can confirm the initial forecasts and say: 'finc', as the new search system is called, is a great success (cf. BIS 2012, issue 2, pp. 72–76).
236

Hackathons: Experimental Laboratories for Cultural Institutions

Nauber, Jens 19 November 2015 (has links)
5 July 2015, probably one of the hottest summer days of the year: full of curiosity and anticipation, I am on my way to Berlin for the closing event and award ceremony of the culture hackathon "Coding da Vinci". Ten weeks earlier I had already travelled the same route to Berlin, at that time to present some of the finest pieces from the SLUB's Digital Collections at the "Coding da Vinci" kick-off event. Among them were the fascinating Maya manuscript, the Codex Dresdensis from the 13th century, culinary items from the "Bibliotheca Gastronomica" of the collector Walter Putz, travel photographs by Oswald Lübeck and Franz Grasser from the holdings of the Deutsche Fotothek, and the travel diary of Johann Andreas Silbermann, an Alsatian organ builder and nephew of Gottfried Silbermann, only recently acquired by the SLUB.
237

Component-Based Model-Driven Software Development

Johannes, Jendrik 15 December 2010 (has links)
Model-driven software development (MDSD) and component-based software development are both paradigms for reducing complexity and for increasing abstraction and reuse in software development. In this thesis, we aim at combining the advantages of each by introducing methods from component-based development into MDSD. In MDSD, all artefacts that describe a software system are regarded as models of the system and are treated as the central development artefacts. To obtain a system implementation from such models, they are transformed and integrated until implementation code can be generated from them. Models in MDSD can have very different forms: they can be documents, diagrams, or textual specifications defined in different modelling languages. Integrating these models of different formats and levels of abstraction in a consistent way is a central challenge in MDSD. We propose to tackle this challenge by explicitly separating the tasks of defining model components and composing model components, which is also known as distinguishing programming-in-the-small from programming-in-the-large. That is, we promote a separation of models into models for modelling-in-the-small (models that are components) and models for modelling-in-the-large (models that describe compositions of model components). To perform such component-based modelling, we introduce two architectural styles for developing systems with component-based MDSD (CB-MDSD). For CB-MDSD, we require a universal composition technique that can handle models defined in arbitrary modelling languages. A technique that can handle arbitrary textual languages is universal invasive software composition for code fragment composition. We extend this technique to universal invasive software composition for graph fragments (U-ISC/Graph), which can handle arbitrary models, including graphical and textual ones, as components. Such components are called graph fragments, because we treat each model as a typed graph and support reuse of partial models. To put the composition technique into practice, we developed the tool Reuseware, which implements U-ISC/Graph. The tool is based on the Eclipse Modelling Framework and can therefore be integrated into existing MDSD development environments based on that framework. To evaluate the applicability of CB-MDSD, we realised a model-driven architecture with Reuseware for each of our two architectural styles. The first style, which we name ModelSoC, is based on the component-based development paradigm of multi-dimensional separation of concerns. The architecture we realised with that style shows how a system that involves multiple modelling languages can be developed with CB-MDSD. The second style, which we name ModelHiC, is based on hierarchical composition. With this style, we developed abstraction and reuse support for a large modelling language for telecommunication networks that implements the Common Information Model industry standard.
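
To make the idea of composing model components more concrete, here is a small, self-contained sketch of treating models as typed graphs with named variation points and invasively plugging one fragment into another. It only illustrates the notion summarised in the abstract; the Node and Fragment types and the single hook-binding operation are simplifying assumptions, not the API of the EMF-based Reuseware tool.

    import java.util.*;

    /** Toy illustration of composing model fragments treated as typed graphs,
     *  in the spirit of the composition idea summarised above. The real
     *  implementation is the EMF-based Reuseware tool; these types are
     *  simplifying assumptions made for this sketch. */
    public class GraphFragmentSketch {

        static final class Node {
            final String type, name;
            final List<Node> children = new ArrayList<>();
            Node(String type, String name) { this.type = type; this.name = name; }
        }

        /** A model fragment: a typed graph plus named variation points (hooks). */
        static final class Fragment {
            final Node root;
            final Map<String, Node> hooks = new HashMap<>();
            Fragment(Node root) { this.root = root; }
        }

        /** Invasive composition: the addressed hook of 'target' receives the
         *  root of 'addition' as new content. */
        static void compose(Fragment target, String hookName, Fragment addition) {
            Node hook = target.hooks.get(hookName);
            if (hook == null) throw new IllegalArgumentException("unknown hook " + hookName);
            hook.children.add(addition.root);
        }

        public static void main(String[] args) {
            // Fragment 1: a model of a network element with an open slot.
            Node element = new Node("NetworkElement", "Switch");
            Node slot = new Node("Slot", "cards");
            element.children.add(slot);
            Fragment base = new Fragment(element);
            base.hooks.put("cards", slot);

            // Fragment 2: a reusable partial model to be plugged into the slot.
            Fragment card = new Fragment(new Node("Card", "EthernetCard"));

            compose(base, "cards", card);   // modelling-in-the-large step
            print(base.root, 0);
        }

        static void print(Node n, int depth) {
            System.out.println("  ".repeat(depth) + n.type + " " + n.name);
            n.children.forEach(c -> print(c, depth + 1));
        }
    }
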
238

Round-trip engineering concept for hierarchical UML models in AUTOSAR-based safety projects

Pathni, Charu 30 September 2015 (has links)
Product development begins at a very abstract level of understanding the requirements, and the resulting data needs to be passed on to the next phase of development. This happens after every stage, until finally a product is made. This thesis deals specifically with the data exchange in the software development process. The problem lies in handling the data with respect to redundancy and the different versions of the data. In addition, once data has been passed on to the next stage, there is no evident way of exchanging it in the reverse direction. The results found in this thesis address the problem by bringing all the data to the same level in terms of its format. Once this concept is in place, it provides the opportunity to use the data according to the given requirements. This research deals with data consistency and data verification; the data is used during development and when merging data from various sources. The concept that is formulated can be extended to a wide variety of applications in the development process. Wherever the process involves an exchange of data, scalability and generalisation are the main foundation concepts contained within it.
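
The following toy sketch illustrates the round-trip idea in the abstract: a model element is exported to a common exchange format, imported back, and compared with the original as a consistency check. The Requirement type and the key=value line format are invented for the example; real AUTOSAR tool chains exchange far richer ARXML data.

    import java.util.*;

    /** Minimal round-trip sketch: a model element is exported to a common
     *  textual format and re-imported, and the result is checked against the
     *  original. The types and format here are illustrative assumptions. */
    public class RoundTripSketch {

        record Requirement(String id, String text, int version) {}

        /** Forward step: model -> exchange format. */
        static String export(Requirement r) {
            return "id=" + r.id() + ";version=" + r.version() + ";text=" + r.text();
        }

        /** Reverse step: exchange format -> model. */
        static Requirement importFrom(String line) {
            Map<String, String> fields = new HashMap<>();
            for (String part : line.split(";", 3)) {
                String[] kv = part.split("=", 2);
                fields.put(kv[0], kv[1]);
            }
            return new Requirement(fields.get("id"), fields.get("text"),
                                   Integer.parseInt(fields.get("version")));
        }

        public static void main(String[] args) {
            Requirement original = new Requirement("REQ-42", "Brake signal latency < 10 ms", 3);
            Requirement roundTripped = importFrom(export(original));
            // Consistency check: the reverse transformation must reproduce the model.
            System.out.println(original.equals(roundTripped) ? "round trip consistent"
                                                             : "data lost in exchange");
        }
    }
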
239

Cognitive Computing

11 November 2015 (has links) (PDF)
"Cognitive Computing" has initiated a new era in computer science. Cognitive computers are not rigidly programmed computers anymore, but they learn from their interactions with humans, from the environment and from information. They are thus able to perform amazing tasks on their own, such as driving a car in dense traffic, piloting an aircraft in difficult conditions, taking complex financial investment decisions, analysing medical-imaging data, and assist medical doctors in diagnosis and therapy. Cognitive computing is based on artificial intelligence, image processing, pattern recognition, robotics, adaptive software, networks and other modern computer science areas, but also includes sensors and actuators to interact with the physical world. Cognitive computers – also called "intelligent machines" – are emulating the human cognitive, mental and intellectual capabilities. They aim to do for human mental power (the ability to use our brain in understanding and influencing our physical and information environment) what the steam engine and combustion motor did for muscle power. We can expect a massive impact of cognitive computing on life and work. Many modern complex infrastructures, such as the electricity distribution grid, railway networks, the road traffic structure, information analysis (big data), the health care system, and many more will rely on intelligent decisions taken by cognitive computers. A drawback of cognitive computers will be a shift in employment opportunities: A raising number of tasks will be taken over by intelligent machines, thus erasing entire job categories (such as cashiers, mail clerks, call and customer assistance centres, taxi and bus drivers, pilots, grid operators, air traffic controllers, …). A possibly dangerous risk of cognitive computing is the threat by “super intelligent machines” to mankind. As soon as they are sufficiently intelligent, deeply networked and have access to the physical world they may endanger many areas of human supremacy, even possibly eliminate humans. Cognitive computing technology is based on new software architectures – the “cognitive computing architectures”. Cognitive architectures enable the development of systems that exhibit intelligent behaviour.
240

Impact and Challenges of Software in 2025

22 September 2014 (has links) (PDF)
Today (2014), software is the key ingredient of most products and services. Software generates innovation and progress in many modern industries. Software is an indispensable element of evolution, of quality of life, and of our future. Software development is (slowly) evolving from a craft to an industrial discipline. Software – and the ability to efficiently produce and evolve high-quality software – is the single most important success factor for many highly competitive industries. Software technology, development methods and tools, and applications in more and more areas are rapidly evolving. The impact of software in 2025 on nearly all areas of life, work, relationships, culture, and society is expected to be massive. The question of the future of software is therefore important; however, like all predictions, it is quite difficult to answer. Some market forces, industrial developments, social needs, and technology trends are visible today. How will they develop, and how will they influence the software we will have in 2025?
