About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Generisches Modellrefactoring für EMFText / Generic Model Refactoring for EMFText

Reimann, Jan 28 April 2011 (has links) (PDF)
Code refactorings are well researched, and most development environments support them. With the rise of model-driven software development (MDSD), however, a new challenge emerges: numerous new domain-specific languages (DSLs) are being developed, which raises the question of how to equip them with tools that enable model refactorings. This diploma thesis develops an approach to generic model refactoring in which the core of a refactoring, consisting of the participating elements and the transformation steps, is defined once and then made available to arbitrary DSLs through a simple mapping.
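To make the idea concrete, here is a minimal, hypothetical sketch (in Java) of the role-mapping principle the abstract describes: the refactoring is written once against an abstract role such as "NamedElement", and a small per-DSL mapping binds that role to a concrete metaclass. All class, role, and metaclass names are invented for illustration and are not the EMFText refactoring API.

```java
import java.util.Map;

public class GenericRefactoringDemo {
    /** Generic rename logic: written once against the abstract role "NamedElement". */
    static String describeRename(RoleMapping mapping) {
        return "'Rename' applies to every " + mapping.resolve("NamedElement");
    }

    /** Maps abstract role names to the concrete metaclass names of one DSL. */
    record RoleMapping(Map<String, String> roleToMetaclass) {
        String resolve(String role) { return roleToMetaclass.get(role); }
    }

    public static void main(String[] args) {
        // The same generic refactoring becomes available to two DSLs
        // simply by providing two different role mappings.
        RoleMapping forJavaDsl  = new RoleMapping(Map.of("NamedElement", "ClassDeclaration"));
        RoleMapping forFormsDsl = new RoleMapping(Map.of("NamedElement", "FormField"));
        System.out.println(describeRename(forJavaDsl));
        System.out.println(describeRename(forFormsDsl));
    }
}
```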
2

Prototypische Entwicklung eines mandantenfähigen dezentralen Austauschsystems für hochsensible Daten / Prototypical Development of a Multi-Tenant, Decentralized Exchange System for Highly Sensitive Data

Stockhaus, Christian 01 March 2017 (has links) (PDF)
This thesis describes the development of a prototype for transferring highly sensitive data between different companies. It covers every step of the development, from the requirements analysis through the evaluation of a suitable technology and the actual implementation to testing and administration.
3

Energieeffizienz in Workflowsystemen / Energy Efficiency in Workflow Systems

Püschel, Georg 30 October 2012 (has links) (PDF)
In the CoolSoftware project, metamodels, algorithms, and architectural patterns for energy-efficient software were designed. As soon as a complex control flow is to be executed on such a system, its dynamic energy behaviour must be included in the optimization. To meet this challenge, this thesis extends the CoolSoftware approach with additional model elements and algorithms. Among other things, a simulation is employed to evaluate the functionally feasible configurations. The control flow can be defined with the workflow management system Open Service Process Platform. As a result, the system can execute a workflow with as little energy as its complexity allows.
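As an illustration of the configuration selection described above, the following hedged sketch (in Java, with invented types and numbers, not CoolSoftware's actual model or API) filters the functionally feasible configurations and picks the one whose simulated energy consumption is lowest:

```java
import java.util.Comparator;
import java.util.List;

public class ConfigurationSelection {
    // A candidate system configuration with its simulated energy cost (illustrative).
    record Configuration(String name, double simulatedJoules, boolean feasible) {}

    static Configuration cheapest(List<Configuration> candidates) {
        return candidates.stream()
                .filter(Configuration::feasible)              // functional feasibility first
                .min(Comparator.comparingDouble(Configuration::simulatedJoules))
                .orElseThrow(() -> new IllegalStateException("no feasible configuration"));
    }

    public static void main(String[] args) {
        List<Configuration> candidates = List.of(
                new Configuration("cpu-lowfreq", 42.0, true),
                new Configuration("cpu-highfreq", 61.5, true),
                new Configuration("gpu-offload", 30.0, false)); // fails a functional check
        System.out.println("Selected: " + cheapest(candidates).name()); // cpu-lowfreq
    }
}
```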
4

Risiken in der Softwareentwicklung / Risks in Software Development

Aßmann, Uwe, Demuth, Birgit, Hartmann, Falk 05 March 2007 (has links) (PDF)
The overall costs of software production are of crucial importance to many companies, as software is involved in a constantly increasing number of business processes and products. On the other hand, completing software projects in accordance with defined quality, time, and cost requirements involves a high level of risk. This paper enumerates specific risks within software development, drawing on failed projects, outlines possibilities for risk prevention, and illustrates the treatment of risks within the software development process. The increasing use of software within embedded systems also creates a need for knowledge about the risks of software development in the established engineering disciplines.
5

VAMPIR: Visualization and Analysis of MPI Resources

Nagel, Wolfgang E., Arnold, Alfred, Weber, Michael, Hoppe, Hans-Christian, Solchenbach, Karl 04 February 2010 (has links) (PDF)
Performance analysis is most often based on detailed knowledge of program behavior. One option to obtain this information is tracing. Based on the research tool PARvis, the visualization environment VAMPIR was developed at KFA; it now supports the message passing standard MPI. VAMPIR translates a given trace file into a variety of graphical views, e.g., state diagrams, activity charts, time-line displays, and statistics. Moreover, it supports an animation mode that can help to locate performance bottlenecks, and it provides flexible filter operations to reduce the amount of information displayed. The most interesting part of VAMPIR is its powerful zooming feature, which allows problems to be identified at any level of detail.
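The kind of aggregation behind an activity chart can be sketched in a few lines. The following toy example (in Java, with an invented trace-event format, not VAMPIR's trace file format) sums the time spent per activity across a list of timestamped events:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TraceStatistics {
    // One traced interval on one process (format invented for illustration).
    record Event(int process, String activity, long startUs, long endUs) {}

    // Total time spent per activity, across all processes.
    static Map<String, Long> activityTotals(List<Event> trace) {
        Map<String, Long> totals = new HashMap<>();
        for (Event e : trace) {
            totals.merge(e.activity(), e.endUs() - e.startUs(), Long::sum);
        }
        return totals;
    }

    public static void main(String[] args) {
        List<Event> trace = List.of(
                new Event(0, "MPI_Send", 0, 120),
                new Event(1, "MPI_Recv", 10, 150),
                new Event(0, "compute", 120, 900));
        System.out.println(activityTotals(trace)); // e.g. {compute=780, MPI_Recv=140, MPI_Send=120}
    }
}
```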
6

Managing Service Dependencies in Service Compositions

Winkler, Matthias 21 December 2010 (has links) (PDF)
In the Internet of Services (IoS), providers and consumers of services engage in business interactions on service marketplaces. Provisioning and consumption of services are regulated by service level agreements (SLAs), which are negotiated between providers and consumers. Trading composite services requires the providers to manage the SLAs that are negotiated with the providers of atomic services and the consumers of the composition. The management of SLAs involves the negotiation and renegotiation of SLAs as well as their monitoring during service provisioning. The complexity of this task arises from the dependencies that exist between the different services in a composition. Dependencies between services occur because the complex task of a composition is distributed among atomic services. Thus, the successful provisioning of the composite service depends on its atomic building blocks. At the same time, atomic services depend on other atomic services, e.g. because of data or resource requirements, or time relationships. These dependencies need to be considered in the management of composite service SLAs. This thesis aims at developing a management approach for dependencies between services in service compositions to support SLA management. Information about service dependencies is not explicitly available. Instead, it is implicitly contained in the workflow description of a composite service and its negotiated SLAs, and in the application domain knowledge of experts, which makes handling this information more complex. Thus, the dependency management approach needs to capture this dependency information in an explicit way. The dependency information is then used to support SLA management in three ways. First of all, dependency information is used during SLA negotiation to ensure that the different SLAs enable the successful collaboration of the services towards the composite service goal. Secondly, during SLA renegotiation, dependency information is used to determine which effects the renegotiation has on other SLAs. Finally, dependency information is used during SLA monitoring to determine the effects of detected violations on other services. Based on a literature study and two use cases from the logistics and healthcare domains, different types of dependencies were analyzed and classified. The results from this analysis were used as a basis for the development of an approach to analyze and represent dependency information according to the different dependency properties. Furthermore, a lifecycle and an architecture for managing dependency information were developed. In an iterative approach, the different artifacts were implemented, tested on the two use cases, and refined according to the test results. Finally, the prototype was evaluated against detailed test cases, and performance measurements were executed. The resulting dependency management approach makes four main contributions. Firstly, it represents a holistic approach for managing service dependencies with regard to composite SLA management. It extends existing work by supporting the handling of dependencies between atomic services, as well as between atomic and composite services, at design time and during service provisioning. Secondly, a semi-automatic approach to capturing dependency information is provided. It helps to achieve a higher degree of automation compared to other approaches. Thirdly, a metamodel for representing dependency information for SLA management is presented.
Dependency information is kept separate from SLA information to achieve a better separation of concerns. This facilitates the use of the dependency management functionality with different SLA management approaches. Fourthly, a dependency management architecture is presented. The design of the architecture ensures that its components can be integrated with different SLA management approaches. The test-case-based evaluation of the dependency management approach showed its feasibility and correct functioning in two different application domains. Furthermore, the performance evaluation showed that the automated dependency management tasks execute within milliseconds for both use cases. The dependency management approach is thus well suited to support the different SLA management tasks. It supports the work of composite service providers by facilitating the SLA management of complex service compositions.
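A hedged sketch of the central mechanism: once dependency information is kept explicitly, the services affected by an SLA violation can be computed by traversing the dependency relation transitively. The service names and the plain-map representation below (in Java) are invented for illustration; the thesis uses a dedicated metamodel instead.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class DependencyPropagation {
    // dependent service -> services it depends on (e.g. for data, resources, timing)
    static final Map<String, List<String>> DEPENDS_ON = Map.of(
            "compositeDelivery", List.of("warehousePicking", "truckTransport"),
            "truckTransport", List.of("warehousePicking"));

    // Collect all services (transitively) affected when 'violated' misses its SLA.
    static void collectAffected(String violated, Set<String> affected) {
        for (var entry : DEPENDS_ON.entrySet()) {
            if (entry.getValue().contains(violated) && affected.add(entry.getKey())) {
                collectAffected(entry.getKey(), affected); // follow transitive effects
            }
        }
    }

    public static void main(String[] args) {
        Set<String> affected = new LinkedHashSet<>();
        collectAffected("warehousePicking", affected);
        System.out.println(affected); // [compositeDelivery, truckTransport] (any order)
    }
}
```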
7

Contributions To Ontology-Driven Requirements Engineering

Siegemund, Katja 27 March 2015 (has links) (PDF)
Today, it is well known that missing, incomplete or inconsistent requirements lead to faulty software designs, implementations and tests, resulting in software of improper quality or safety risks. An improved Requirements Engineering thus contributes to safer and better-quality software, reduces the risk of overrunning time and budgets and, most of all, decreases or even eliminates the risk of project failure. One significant problem requirements engineers have to cope with is inconsistencies in the Software Requirements Specification. Such inconsistencies result from the acquisition, specification, and evolution of goals and requirements from multiple stakeholders and sources. In order to regain consistency, requirements information is removed from the specification, which often leads to incompleteness. Due to this causal relationship between consistency, completeness and correctness, we can formally improve the correctness of requirements knowledge by increasing its completeness and consistency. Furthermore, the poor quality of individual requirements is a primary reason why so many projects continue to fail, and it needs to be considered in order to improve the Software Requirements Specification. These flaws in the Software Requirements Specification are hard to identify with current methods and thus usually remain unrecognised. While the validation of requirements ensures that they are correct, complete, consistent and meet the customer and user intents, the requirements engineer receives little support from automated validation methods. In this thesis, a novel approach to the automated validation and measurement of requirements knowledge is presented, which automatically identifies incomplete or inconsistent requirements and quality flaws. Furthermore, the requirements engineer is guided by knowledge-specific suggestions on how to resolve them. For this purpose, a requirements metamodel, the Requirements Ontology, has been developed that provides the basis for the validation and measurement support. This requirements ontology is suited to Goal-oriented Requirements Engineering and allows for the conceptualisation of requirements knowledge, facilitated by ontologies. It provides a large set of predefined requirements metadata, requirements artefacts and various relations among them. Thus, the Requirements Ontology enables the documentation of structured, reusable, unambiguous, traceable, complete and consistent requirements, as demanded by the IEEE specification for Software Requirements Specifications. We demonstrate our approach with a prototypic implementation called OntoReq. OntoReq allows for the specification of requirements knowledge while keeping the ontology invisible to the requirements engineer, and enables the validation of the knowledge captured within. The validation approach presented in this thesis can be applied to any domain ontology. To this end, we formulate various guidelines and use a running example to demonstrate the transfer to the domain of medical drugs. The Requirements Ontology as well as OntoReq have been evaluated by different methods. The Requirements Ontology has been shown to be capable of capturing the requirements knowledge of a real Software Requirements Specification, and OntoReq has proven feasible for use as a requirements engineering tool that highlights inconsistencies, incompleteness and quality flaws during real-time requirements modelling.
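To illustrate the flavour of such automated checks, the following toy sketch (in Java) flags goals without a refining requirement as incomplete and reports declared conflicts as inconsistencies. The object model is invented for illustration; OntoReq itself reasons over an OWL ontology rather than plain objects.

```java
import java.util.List;

public class RequirementsValidation {
    // A requirement refining one goal, possibly conflicting with others (illustrative).
    record Requirement(String id, String refinesGoal, List<String> conflictsWith) {}

    static void validate(List<String> goals, List<Requirement> reqs) {
        // Completeness rule: every goal must be refined by at least one requirement.
        for (String goal : goals) {
            boolean refined = reqs.stream().anyMatch(r -> goal.equals(r.refinesGoal()));
            if (!refined) System.out.println("INCOMPLETE: goal " + goal + " has no requirement");
        }
        // Consistency rule: declared conflicts between requirements are reported.
        for (Requirement r : reqs) {
            for (String other : r.conflictsWith()) {
                System.out.println("INCONSISTENT: " + r.id() + " conflicts with " + other);
            }
        }
    }

    public static void main(String[] args) {
        validate(List.of("G1", "G2"),
                 List.of(new Requirement("R1", "G1", List.of("R2")),
                         new Requirement("R2", "G1", List.of())));
        // -> INCOMPLETE: goal G2 ...; INCONSISTENT: R1 conflicts with R2
    }
}
```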
8

Energy-Aware Development and Labeling for Mobile Applications

Wilke, Claas 14 April 2014 (has links) (PDF)
Today, mobile devices such as smart phones and tablets have become ubiquitous and are used everywhere. Millions of software applications can be purchased and installed on these devices, customizing them to personal interests and needs. However, the frequent use of mobile devices has made a new problem omnipresent: their limited operation time, due to their limited energy capacities. Although energy consumption can be considered a hardware problem, the amount of energy required by today's mobile devices depends strongly on their current workloads and is thus highly influenced by the software running on them. Hence, although only hardware modules consume energy, operating systems, middleware services, and mobile applications strongly influence the energy consumption of mobile devices, depending on how efficiently they use and control the hardware modules. Nevertheless, most of today's mobile applications totally ignore their influence on the devices' energy consumption, leading to wasted energy, shorter operation times, and thus frustrated application users. A major reason for this energy-unawareness is the lack of appropriate tooling for the development of energy-aware mobile applications. As many of today's mobile applications behave in an energy-unaware manner, and as various mobile applications providing similar services exist, users aim to optimize their devices by installing applications known to be energy-saving or energy-aware, meaning that they consume less energy while providing the same services as their competitors. However, scarce information on the applications' energy usage is available, and users are thus forced to install and try many applications manually before finding those that fulfil their personal functional, non-functional, and energy requirements. This thesis addresses the lack of tooling for the development of energy-aware mobile applications and the lack of comparability of mobile applications in terms of energy-awareness with two contributions: First, it proposes JouleUnit, an energy profiling and testing framework that uses unit tests to execute application workloads while profiling their energy consumption in parallel. By extending a well-known testing concept and providing tooling integrated into the Eclipse development environment, JouleUnit keeps the learning curve for integration into existing development and testing processes low. Second, for the comparability of mobile applications in terms of energy efficiency, this thesis proposes an energy benchmarking and labeling service. Mobile applications belonging to the same usage domain are energy-profiled while executing a usage-domain-specific benchmark. Thus, their energy consumption for specific use cases can be evaluated and compared afterwards. To abstract and summarize the profiling results, energy labels are derived that condense the applications' energy consumption over all evaluated use cases into a simple energy grade, ranging from A to G. In addition, users can decide how to weight specific use cases for the computation of energy grades, as different users are likely to use the same applications differently.
The energy labeling service has been implemented for Android applications and evaluated for three different usage domains (web browsers, email clients, and live wallpapers), showing that different mobile applications indeed differ in their energy consumption for the same services and that their comparison is therefore both possible and sensible. To the best of my knowledge, this is the first approach that provides mobile application users with comparable energy consumption information on mobile applications without requiring them to install and test the applications on their own devices.
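A minimal sketch of the labeling computation, under stated assumptions: a user-weighted average of per-use-case energy is mapped onto the grades A to G by linear banding between a best and a worst observed value. The thresholds, the linear scale, and all numbers below are assumptions for illustration, not the thesis' exact formula.

```java
import java.util.Map;

public class EnergyLabel {
    // User-weighted average energy over the evaluated use cases.
    static double weightedConsumption(Map<String, Double> joulesPerUseCase,
                                      Map<String, Double> userWeights) {
        double total = 0, weightSum = 0;
        for (var e : joulesPerUseCase.entrySet()) {
            double w = userWeights.getOrDefault(e.getKey(), 1.0);
            total += w * e.getValue();
            weightSum += w;
        }
        return total / weightSum;
    }

    // Map a value within [best, worst] onto the seven grades A..G (assumed linear bands).
    static char grade(double value, double best, double worst) {
        int band = (int) Math.floor(7 * (value - best) / (worst - best));
        return (char) ('A' + Math.min(Math.max(band, 0), 6));
    }

    public static void main(String[] args) {
        double avg = weightedConsumption(
                Map.of("loadPage", 12.0, "scroll", 3.0),
                Map.of("loadPage", 2.0, "scroll", 1.0)); // this user browses heavily
        System.out.println("grade: " + grade(avg, 5.0, 25.0)); // avg = 9.0 -> grade: B
    }
}
```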
9

Automatic Generation of Trace Links in Model-driven Software Development

Grammel, Birgit 02 December 2014 (has links) (PDF)
Traceability data provides knowledge about the dependencies and logical relations that exist among the artefacts created during software development. By reasoning over traceability data, conclusions can be drawn that increase the quality of software. The paradigm of Model-driven Software Development (MDSD) promotes the generation of software from models. The latter are specified through different modelling languages. In subsequent model transformations, these models are used to generate programming code automatically. Traceability data about the artefacts involved in an MDSD process can be used to increase software quality by providing the necessary knowledge described above. Existing traceability solutions in MDSD generate traceability data from the model mappings executed by the transformation engine. Yet, these solutions still entail a wide range of open challenges. One challenge is that the collected traceability data does not adhere to a unified formal definition, which leads to poorly integrated traceability data and aggravates reasoning over it. Furthermore, these traceability solutions all depend on the existence of a transformation engine. However, a transformation engine cannot be accessed in all MDSD settings, for instance when proprietary transformation engines or manually implemented transformations are used. In these cases, the transformation engine cannot be instrumented to generate traceability data, resulting in a lack of such data. In this work, we address these shortcomings by proposing a generic traceability framework for augmenting arbitrary transformation approaches with a traceability mechanism. To integrate traceability data from different transformation approaches, our approach features a design-pattern-based methodology for such augmentation. The design pattern supplies the engineer with recommendations for designing the traceability mechanism and for modelling traceability data. Additionally, to provide a traceability mechanism for inaccessible transformation engines, we leverage parallel model matching to generate traceability data for arbitrary source and target models. This approach is based on a language-agnostic concept of three similarity measures for matching. To realise the similarity measures, we exploit metamodel matching techniques for graph-based model matching. Finally, we evaluate our approach on a set of transformations from an SAP business application and from the MDSD domain.
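The following hedged sketch (in Java) illustrates how several similarity measures can be combined into one matching score from which trace links are derived. The three placeholder measures below (name, type, and neighbourhood overlap) merely stand in for the thesis' own language-agnostic measures, and the equal weighting is an assumption.

```java
import java.util.List;

public class ModelMatching {
    // A model element with a name, a type, and references to neighbours (illustrative).
    record Element(String name, String type, List<String> neighbours) {}

    static double nameSimilarity(Element a, Element b) {
        return a.name().equalsIgnoreCase(b.name()) ? 1.0 : 0.0; // crude placeholder measure
    }

    static double typeSimilarity(Element a, Element b) {
        return a.type().equals(b.type()) ? 1.0 : 0.0;
    }

    static double structureSimilarity(Element a, Element b) {
        long shared = a.neighbours().stream().filter(b.neighbours()::contains).count();
        int max = Math.max(a.neighbours().size(), b.neighbours().size());
        return max == 0 ? 0.0 : (double) shared / max;
    }

    // Create a trace link when the combined score reaches the threshold.
    static boolean traceLink(Element source, Element target, double threshold) {
        double score = (nameSimilarity(source, target)
                + typeSimilarity(source, target)
                + structureSimilarity(source, target)) / 3.0;
        return score >= threshold;
    }

    public static void main(String[] args) {
        Element model = new Element("Customer", "Class", List.of("Order"));
        Element code  = new Element("Customer", "JavaClass", List.of("Order", "Invoice"));
        System.out.println(traceLink(model, code, 0.5)); // true: (1.0 + 0.0 + 0.5) / 3 = 0.5
    }
}
```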
10

Hardware Error Detection Using AN-Codes

Schiffel, Ute 08 July 2011 (has links) (PDF)
Due to continuously decreasing feature sizes and the increasing complexity of integrated circuits, commercial off-the-shelf (COTS) hardware is becoming less and less reliable. However, dedicated reliable hardware is expensive and usually slower than commodity hardware. Thus, economic pressure will most likely result in the use of unreliable COTS hardware in safety-critical systems. This, in turn, creates the need for software-implemented solutions for handling execution errors caused by the unreliable hardware. In this thesis, we provide techniques for detecting hardware errors that disturb the execution of a program. The detection provided facilitates the handling of these errors, for example, by retry or graceful degradation. We realize the error detection by transforming unsafe programs, which are not guaranteed to detect execution errors, into safe programs that detect execution errors with high probability. To this end, we use arithmetic AN-, ANB-, ANBD-, and ANBDmem-codes. These codes detect errors that modify data during storage or transport, as well as errors that disturb computations. Furthermore, the error detection provided is independent of the hardware used. We present the following novel encoding approaches:
- Software Encoded Processing (SEP), which transforms an unsafe binary into a safe execution at runtime by applying an ANB-code, and
- Compiler Encoded Processing (CEP), which applies encoding at compile time and provides different levels of safety by using different arithmetic codes.
In contrast to existing encoding solutions, SEP and CEP make it possible to encode applications whose data and control flow are not completely predictable at compile time. For encoding, SEP and CEP use our set of encoded operations, also presented in this thesis. To the best of our knowledge, we are the first to present the encoding of a complete RISC instruction set, including boolean and bitwise logical operations, casts, unaligned loads and stores, shifts, and arithmetic operations. Our evaluations show that encoding with SEP and CEP significantly reduces the amount of erroneous output caused by hardware errors. Furthermore, they show that, in contrast to replication-based approaches to error detection, arithmetic encoding facilitates the detection of permanent hardware errors. This increased reliability does not come for free; unexpectedly, however, the runtime costs for the different arithmetic codes supported by CEP, compared to redundancy, increase only linearly, while the gained safety increases exponentially.
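A worked toy example of the underlying AN-code principle: every value x is stored as A*x, so every valid code word is divisible by A, and arithmetic such as addition preserves this property, while a bit flip almost certainly breaks it. The constant and the demo below are illustrative, not the thesis' implementation; the ANB- and ANBD-code variants add further checks for operand mix-ups and lost updates.

```java
public class AnCodeDemo {
    static final long A = 58659;          // an A often cited in AN-code literature

    static long encode(long x)  { return A * x; }

    static long decode(long xc) {
        // Any valid code word is a multiple of A; anything else signals an error.
        if (xc % A != 0) throw new IllegalStateException("hardware error detected");
        return xc / A;
    }

    // Encoded addition: A*x + A*y = A*(x+y), so sums stay valid code words,
    // while a faulty addition almost certainly yields a non-multiple of A.
    static long encodedAdd(long xc, long yc) { return xc + yc; }

    public static void main(String[] args) {
        long a = encode(20), b = encode(22);
        System.out.println(decode(encodedAdd(a, b)));   // 42
        long corrupted = a ^ (1L << 13);                // simulate a bit flip in memory
        try { decode(corrupted); }
        catch (IllegalStateException e) { System.out.println(e.getMessage()); }
    }
}
```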
