Impact and Challenges of Software in 2025: Collected Papers
Furrer, Frank J.; Reimann, Jan. 22 September 2014
Today (2014), software is the key ingredient of most products and services. Software generates innovation and progress in many modern industries. Software is an indispensable element of evolution, of quality of life, and of our future. Software development is (slowly) evolving from a craft to an industrial discipline. Software – and the ability to efficiently produce and evolve high-quality software – is the single most important success factor for many highly competitive industries.
Software technology, development methods and tools, and applications are evolving rapidly and reaching more and more areas. The impact of software in 2025 on nearly all areas of life, work, relationships, culture, and society is expected to be massive.
The question of the future of software is therefore important. However, like all predictions, it is quite difficult to answer. Some market forces, industrial developments, social needs, and technology trends are visible today. How will they develop and influence the software we will have in 2025?
Contents:
Impact of Heterogeneous Processor Architectures and Adaptation Technologies on the Software of 2025 (Kay Bierzynski) 9
Facing Future Software Engineering Challenges by Means of Software Product Lines (David Gollasch) 19
Capabilities of Digital Search and Impact on Work and Life in 2025 (Christina Korger) 27
Transparent Components for Software Systems (Paul Peschel) 37
Functionality, Threats and Influence of Ubiquitous Personal Assistants with Regard to the Society (Jonas Rausch) 47
Evolution-driven Changes of Non-Functional Requirements and Their Architecture (Hendrik Schön) 57
Cognitive Computing: Collected Papers
Püschel, Georg; Furrer, Frank J. 11 November 2015
'Cognitive Computing' has initiated a new era in computer science. Cognitive computers are no longer rigidly programmed computers; instead, they learn from their interactions with humans, from the environment, and from information. They are thus able to perform amazing tasks on their own, such as driving a car in dense traffic, piloting an aircraft in difficult conditions, taking complex financial investment decisions, analysing medical-imaging data, and assisting medical doctors in diagnosis and therapy. Cognitive computing is based on artificial intelligence, image processing, pattern recognition, robotics, adaptive software, networks, and other modern computer science areas, but also includes sensors and actuators to interact with the physical world.
Cognitive computers – also called 'intelligent machines' – emulate human cognitive, mental, and intellectual capabilities. They aim to do for human mental power (the ability to use our brain in understanding and influencing our physical and information environment) what the steam engine and combustion motor did for muscle power. We can expect a massive impact of cognitive computing on life and work. Many modern complex infrastructures, such as the electricity distribution grid, railway networks, the road traffic structure, information analysis (big data), the health care system, and many more, will rely on intelligent decisions taken by cognitive computers.
A drawback of cognitive computers will be a shift in employment opportunities: a rising number of tasks will be taken over by intelligent machines, thus erasing entire job categories (such as cashiers, mail clerks, call and customer assistance centres, taxi and bus drivers, pilots, grid operators, air traffic controllers, …).
A possibly dangerous risk of cognitive computing is the threat that “super intelligent machines” pose to mankind. As soon as they are sufficiently intelligent, deeply networked, and have access to the physical world, they may endanger many areas of human supremacy and possibly even eliminate humans.
Cognitive computing technology is based on new software architectures – the “cognitive computing architectures”. Cognitive architectures enable the development of systems that exhibit intelligent behaviour.
Contents:
Introduction 5
1. Applying the Subsumption Architecture to the Genesis Story Understanding System – A Notion and Nexus of Cognition Hypotheses (Felix Mai) 9
2. Benefits and Drawbacks of Hardware Architectures Developed Specifically for Cognitive Computing (Philipp Schröppel) 19
3. Language Workbench Technology For Cognitive Systems (Tobias Nett) 29
4. Networked Brain-based Architectures for more Efficient Learning (Tyler Butler) 41
5. Developing Better Pharmaceuticals – Using the Virtual Physiological Human (Ben Blau) 51
6. Management of existential Risks of Applications leveraged through Cognitive Computing (Robert Richter) 61
Object-Oriented Development for Reconfigurable Architectures
Fröhlich, Dominik. 20 June 2007
Reconfigurable hardware architectures have been available for several years. Yet application development for such architectures is still a challenging and error-prone task, since the methods, languages, and tools used for development are inappropriate for handling the complexity of the problem. This thesis introduces a novel approach that tackles the complexity challenge by raising the level of abstraction to the system level and increasing the degree of automation. The approach is centered on the paradigms of object-orientation, platforms, and modeling. An application and all platforms used for its design, implementation, and deployment are modeled with objects using UML and an action language. The application model is then transformed into an implementation, whereby the transformation is steered by the platform models.
In this thesis, solutions for the relevant problems behind this approach are discussed. It is shown how UML can be used for complete and precise modeling of applications and platforms. Application development is done at the system level using a set of well-defined, orthogonal platform models. Thereby the core features of object-orientation - data abstraction, encapsulation, inheritance, and polymorphism - are fully supported. Novel algorithms are presented that automatically map such application models to the target architecture. In this context, the problems of platform mapping, estimation of implementation characteristics, and synthesis of UML models are discussed. The thesis explores the utilization of platform models for the generation of highly optimized implementations in an automatic yet adaptable way. The approach is evaluated on a number of relevant applications.
The execution of the generated implementations is supported by a run-time service. This service manages the hardware configurations and objects comprising the application. Moreover, it serves as a broker for hardware objects. The efficient management of configurations and objects at run-time is discussed, and optimized life cycles for these entities are proposed. Mechanisms are presented that make the approach portable among different physical hardware architectures.
Further, this thesis presents UML profiles and example platforms that support system-level design. These extensions are embodied in a novel type of model compiler. The compiler is accompanied by an implementation of the run-time service. Both have been used to evaluate and improve the presented concepts and algorithms.
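As a hedged illustration of the run-time service's broker role (the class names and the lazy reconfiguration policy are assumptions for illustration, not the thesis's actual API): applications request hardware objects through the service, which loads the required configuration on demand and reuses it across objects.

```python
class RuntimeService:
    # Illustrative broker for hardware objects: it tracks which
    # configuration (bitstream) is currently loaded and hands out
    # proxy objects backed by that configuration.
    def __init__(self):
        self._loaded = None   # name of the active configuration
        self._objects = {}    # live hardware object proxies

    def _ensure_configuration(self, config: str) -> None:
        # Reconfigure only when a different bitstream is needed;
        # reconfiguration is expensive, so reuse is essential.
        if self._loaded != config:
            print(f"loading configuration '{config}' onto the fabric")
            self._loaded = config

    def acquire(self, cls: str, config: str):
        # Broker call: return a proxy for a hardware object of class
        # 'cls', implemented by the circuit in 'config'.
        self._ensure_configuration(config)
        proxy = f"<hw object {cls}#{len(self._objects)} on {config}>"
        self._objects[cls] = proxy
        return proxy

service = RuntimeService()
print(service.acquire("FirFilter", config="dsp_accelerators"))
print(service.acquire("FftUnit", config="dsp_accelerators"))  # no reload
```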
VS2DRT: Variably saturated two dimensional reactive transport modeling in the vadose zone
Haile, Sosina Shimeles. 22 February 2013
Contaminant transport in the vadose zone is a major concern, since the vadose zone is the main pathway for groundwater recharge. Understanding this process is crucial in order to prevent contamination and to protect and rehabilitate groundwater resources. Reactive transport models are instrumental for such purposes, and there are numerous solute transport simulation programs for both groundwater and the vadose zone. Most of these models, however, are limited to simple linear, Langmuir, and Freundlich sorption models and first-order decay, and fail to simulate more complex geochemical reactions that are common in the vadose zone, such as cation exchange, surface complexation, redox reactions, and biodegradation. It is therefore necessary to enhance the capabilities of solute transport models by incorporating well-tested hydrogeochemical models such as PHREEQC, so that they can closely approximate geochemical transport processes in the subsurface.
In this PhD research, a new reactive transport model called VS2DRT was created by coupling the existing public-domain solute and heat transport models VS2DT and VS2DH with the hydrochemical model PHREEQC using a non-iterative operator-splitting technique. VS2DRT was compiled with the MinGW compiler using tools like autotools and automake. A graphical user interface was also created using Qt Creator and the Argus ONE numerical development tools. The new model was tested for one-dimensional conservative Cl transport, surface complexation, cation exchange, dissolution of calcite and gypsum, and heat and solute transport, as well as for a two-dimensional cation exchange case. The results were compared with those of the VS2DT, VS2DH, HP1, and HP2 models and are in good agreement.
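A hedged sketch of the non-iterative (sequential) operator-splitting idea behind such a coupling: each time step first advances transport, then lets a chemistry solver react each cell's composition, with no iteration between the two. The function names, the simplified 1D upwind advection, and the toy equilibrium chemistry are illustrative assumptions, not VS2DRT's or PHREEQC's actual interfaces.

```python
import numpy as np

def transport_step(c, velocity, dx, dt):
    # Simplified 1D upwind advection of a concentration profile,
    # a stand-in for the VS2DT flow and solute transport solver.
    c_new = c.copy()
    c_new[1:] -= velocity * dt / dx * (c[1:] - c[:-1])
    return c_new

def chemistry_step(c, dt, rate=0.1, c_eq=0.2):
    # Stand-in for a PHREEQC call: relax every cell's concentration
    # towards a local chemical equilibrium value.
    return c + dt * rate * (c_eq - c)

def simulate(c0, velocity=1.0, dx=0.1, dt=0.05, steps=100):
    # Non-iterative operator splitting: per time step, transport
    # first, then chemistry, once each.
    c = c0.copy()
    for _ in range(steps):
        c = transport_step(c, velocity, dx, dt)
        c = chemistry_step(c, dt)
    return c

if __name__ == "__main__":
    c0 = np.zeros(50)
    c0[0] = 1.0  # contaminated inflow boundary cell
    print(simulate(c0)[:10])
```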
QoS Contract Negotiation in Distributed Component-Based Software
Mulugeta Dinku, Mesfin. 15 June 2007
Currently, several mature and commercial component models (e.g., EJB, .NET, COM+) exist on the market. These technologies were designed largely for applications with business-oriented non-functional requirements such as data persistence, confidentiality, and transactional support. They provide only limited support for the development of components and applications with non-functional properties (NFPs) like QoS (e.g., throughput, response time). The integration of QoS into component infrastructure requires, among other things, support for the specification, negotiation, and adaptation of components' QoS contracts. This thesis focuses on contract negotiation.
For applications in which the consideration of non-functional properties is essential (e.g., Video-on-Demand, eCommerce), a component-based solution demands the appropriate composition of the QoS contracts specified at the different ports of the collaborating components. The ports must be properly connected so that the QoS level required by one is matched by the QoS level provided by the other. Generally, QoS contracts of components depend on run-time resources (e.g., network bandwidth, CPU time) or quality attributes to be established dynamically, and are usually specified in multiple QoS-Profiles. QoS contract negotiation enables the selection of appropriate concrete QoS contracts between collaborating components. In our approach, the component containers perform the contract negotiation at run-time.
This thesis addresses the QoS contract negotiation problem by first modelling it as a constraint satisfaction optimization problem (CSOP). As a basis for this modelling, the provided and required QoS as well as the resource demand are specified at the component level. The notion of utility is applied to select a good solution according to some negotiation goal (e.g., the user's satisfaction). We argue that performing QoS contract negotiation in multiple phases simplifies the negotiation process and makes it more efficient. Based on this classification, the thesis presents heuristic algorithms that comprise coarse-grained and fine-grained negotiations for collaborating components deployed on distributed nodes in the following scenarios: (i) single-client - single-server, (ii) multiple-clients, and (iii) multi-tier scenarios.
To motivate the problem as well as to validate the proposed approach, we have examined three componentized distributed applications: (i) video streaming, (ii) stock quote, and (iii) billing (to evaluate certain security properties). An experiment was conducted to specify the QoS contracts of the collaborating components in one of the applications we studied. In a run-time system that implements our algorithm, we simulated different behaviours concerning: (i) the user's QoS requirements and preferences, (ii) resource availability conditions concerning the client, the server, and the network bandwidth, and (iii) the specified QoS-Profiles of the collaborating components. Under various conditions, the outcome of the negotiation confirms the claim we made with regard to obtaining a good solution.
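To convey the flavour of modelling the negotiation as a CSOP, here is a minimal sketch: each port offers several QoS profiles with a resource demand and a utility, and the negotiator searches for the feasible combination with maximal total utility. The profile fields, the single shared resource, and the brute-force search are simplifying assumptions; the thesis's coarse- and fine-grained heuristics are more elaborate.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class QoSProfile:
    name: str
    provided_qos: float     # e.g. frames per second a server port provides
    required_qos: float     # e.g. frames per second a client port needs
    resource_demand: float  # e.g. share of network bandwidth
    utility: float          # user-perceived value of this level

def negotiate(client_profiles, server_profiles, resource_budget):
    # Brute-force CSOP solver: a combination is feasible if the QoS
    # levels match and the shared resource budget is respected; the
    # objective is to maximize the summed utility.
    best, best_utility = None, float("-inf")
    for c, s in product(client_profiles, server_profiles):
        if s.provided_qos < c.required_qos:
            continue  # provided level does not satisfy required level
        if c.resource_demand + s.resource_demand > resource_budget:
            continue  # violates the shared resource constraint
        utility = c.utility + s.utility
        if utility > best_utility:
            best, best_utility = (c, s), utility
    return best

client = [QoSProfile("low", 0, 15, 0.2, 1.0), QoSProfile("high", 0, 30, 0.5, 2.0)]
server = [QoSProfile("sd", 20, 0, 0.3, 1.0), QoSProfile("hd", 40, 0, 0.6, 2.5)]
print(negotiate(client, server, resource_budget=1.0))
```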
Designing Round-Trip Systems by Change Propagation and Model Partitioning
Seifert, Mirko. 28 June 2011
Software development processes incorporate a variety of different artifacts (e.g., source code, models, and documentation). For multiple reasons, the data contained in these artifacts exhibits some degree of redundancy. Ensuring global consistency across artifacts during all stages in the development of software systems is required, because inconsistent artifacts can lead to failures. Consistency can be ensured either by reducing the amount of redundancy or by synchronizing the information that is shared across multiple artifacts. The discipline of software engineering that addresses these problems is called Round-Trip Engineering (RTE).
In this thesis we present a conceptual framework for the design of RTE systems. This framework delivers precise definitions for essential terms in the context of RTE and a process that can be used to address new RTE applications. The main idea of the framework is to partition models into parts that require synchronization - skeletons - and parts that do not - clothings. Once such a partitioning is obtained, the relations between the elements of the skeletons determine whether a deterministic RTE system can be built. If not, developers may be required to make manual decisions. Based on this conceptual framework, two concrete approaches to RTE are presented.
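As a minimal sketch of the partitioning idea (under the simplifying assumption that elements shared between two artifacts can be identified by name): shared elements form the skeleton that must be synchronized, while the rest is clothing that each artifact owns privately. The data structures are illustrative, not the thesis's metamodel.

```python
def partition(model_elements, shared_names):
    # Split a model into the skeleton (elements that also occur in
    # another artifact and therefore require synchronization) and
    # the clothing (artifact-private elements that never need it).
    skeleton = [e for e in model_elements if e in shared_names]
    clothing = [e for e in model_elements if e not in shared_names]
    return skeleton, clothing

code_model = ["ClassA", "ClassA.run", "ClassA.cache", "helper_fn"]
doc_model = ["ClassA", "ClassA.run", "overview_text"]

# Elements present in both artifacts must stay consistent.
shared = set(code_model) & set(doc_model)
skeleton, clothing = partition(code_model, shared)
print("skeleton:", skeleton)  # -> ['ClassA', 'ClassA.run']
print("clothing:", clothing)  # -> ['ClassA.cache', 'helper_fn']
```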
The first one - Backpropagation-based RTE - employs change translation, traceability and synchronization fitness functions to allow for synchronization of artifacts that are connected by non-injective transformations. The second approach - Role-based Tool Integration - provides means to avoid redundancy. To do so, a novel tool design method that relies on role modeling is presented. Tool integration is then performed by the creation of role bindings between role models.
In addition to the two concrete approaches to RTE, which form the main contributions of the thesis, we investigate the creation of bridges between technical spaces. We consider these bridges an essential prerequisite for performing logical synchronization between artifacts. The feasibility of semantic web technologies is also a subject of the thesis, because the specification of synchronization rules was identified as a blocking factor during our problem analysis.
The thesis is complemented by an evaluation of all presented RTE approaches in different scenarios. Based on this evaluation, the strengths and weaknesses of the approaches are identified. Also, the practical feasibility of our approaches is confirmed with respect to the presented RTE applications.
The Logistics-Oriented Object Platform LOOP: Component-Oriented Software Development Against the Background of Fluid Organization
Teichmann, Gunter; Dittes, Benjamin. January 2006
The business of SALT Solutions GmbH is the design and implementation of IT solutions for logistics, retail, and production, as well as the integration of these solutions into the business processes and system landscapes of its customers. While in the past the focus was on selecting and introducing suitable standard software or on implementing optimally tailored custom software, we observe, particularly in the contract logistics market, a growing interest in solutions that can be adapted dynamically to ever faster-changing requirements. This interest stems from a central trend towards "high-end" contract logistics, characterized by logistics companies taking over increasingly comprehensive and complex services which, in the sense of "business on demand", must be delivered with ever shorter reaction times, up to immediate response to customer needs. (...)
Generative and Feature-Oriented Development of Software Product Lines with Noninvasive Frames
Körber, Hans Jörg. 18 October 2013
Frames are parameterized elements for generating programs in an arbitrary target programming language. They are easy to handle and quick to learn. However, using frames "pollutes" the program code that serves as the basis for generator development with commands of the generator language. This makes it harder to keep using the familiar development environment of the target programming language, and any further development of the program base must subsequently be done in the form of frames.
This thesis describes noninvasive frames, in which information about the positions of the frames is kept separate from the program code. The two are merged in a separate step, either for display or for the actual code generation. The process of generator development based on noninvasive frames fits well into the processes of feature-oriented (FOSD) and generative software development (GSE), because noninvasive frames support the automated checking of all programs the generator can produce with respect to syntax and certain semantic properties, and they enable generation by selecting the desired program features. The feasibility of developing software generators with noninvasive frames is demonstrated in two case studies.
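A hedged sketch of the noninvasive idea: the program text stays plain and valid, while the frame information (source spans plus parameter names) lives in a separate table and is merged with the code only in the generation step. The span table and the parameter naming are assumptions made for illustration.

```python
# Plain, valid program text: no generator commands embedded in it,
# so it can still be edited and checked with the usual tools.
base_code = 'class Point:\n    def scale(self, f):\n        return Point()\n'

# Noninvasive frame information, kept apart from the code: each entry
# records a span (start, end offsets) in base_code and the frame
# parameter whose value replaces that span at generation time.
frame_spans = [
    (6, 11, "CLASS"),    # the identifier 'Point' in the class header
    (52, 57, "CLASS"),   # the identifier 'Point' in the return statement
]

def generate(code, spans, params):
    # Weaving step: apply replacements back-to-front so earlier
    # offsets stay valid while later spans are rewritten.
    for start, end, name in sorted(spans, reverse=True):
        code = code[:start] + params[name] + code[end:]
    return code

print(generate(base_code, frame_spans, {"CLASS": "Vector"}))
```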
Design of a Framework for CTI Solutions in Call Centers
Bauer, Nikolai. 06 December 2002
Computer Telephony Integration (CTI), the integration of IT systems and telephone systems, plays an especially important role in call centers. Although this integration is usually solved satisfactorily at the technical level, software development in this area still lags behind. This thesis takes up this problem and attempts to map the CTI approach onto the level of distributed application development. The goal is to determine to what extent a general base model can be defined as a framework for the development of CTI applications and what added value it brings. In parallel, the thesis examines to what extent proven methods and technologies of distributed systems can be applied in this specialized field. To this end, a general application model for CTI solutions and, building on it, an object-oriented, distributed framework are designed. The framework itself is implemented as a prototype and subjected to various performance measurements, which confirm the practicability of the concept.
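As an illustration of the kind of abstraction such a framework can provide, here is a minimal sketch of a call-event bus through which distributed application components subscribe to telephony events; all class and event names are invented for illustration and do not reflect the thesis's actual design.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class CallEvent:
    kind: str       # e.g. "ringing", "connected", "disconnected"
    caller: str     # calling party number
    extension: str  # call-center extension the event belongs to

class CtiEventBus:
    # Central piece of a CTI framework: decouples the telephony
    # switch from the IT-side applications (agent desktop, CRM, ...).
    def __init__(self) -> None:
        self._listeners: Dict[str, List[Callable[[CallEvent], None]]] = {}

    def subscribe(self, kind: str, listener: Callable[[CallEvent], None]) -> None:
        self._listeners.setdefault(kind, []).append(listener)

    def publish(self, event: CallEvent) -> None:
        # In a distributed deployment this dispatch would cross
        # process boundaries (e.g. via CORBA or RMI, circa 2002).
        for listener in self._listeners.get(event.kind, []):
            listener(event)

bus = CtiEventBus()
bus.subscribe("ringing", lambda e: print(f"Screen pop for {e.caller} at {e.extension}"))
bus.publish(CallEvent("ringing", caller="+49 351 123456", extension="42"))
```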
A unifying mathematical definition enables the theoretical study of the algorithmic class of particle methods
Pahlke, Johannes. 05 June 2023
Mathematical definitions provide a precise, unambiguous way to formulate concepts. They also provide a common language between disciplines. Thus, they are the basis for a well-founded scientific discussion. In addition, mathematical definitions allow for deeper insights into the defined subject based on mathematical theorems that are incontrovertible under the given definition. Besides their value in mathematics, mathematical definitions are indispensable in other sciences like physics, chemistry, and computer science. In computer science, they help to derive the expected behavior of a computer program and provide guidance for the design and testing of software. Therefore, mathematical definitions can be used to design and implement advanced algorithms.
One class of widely used algorithms in computer science is the class of particle-based algorithms, also known as particle methods. Particle methods can solve complex problems in various fields, such as fluid dynamics, plasma physics, or granular flows, using diverse simulation methods, including Discrete Element Methods (DEM), Molecular Dynamics (MD), Reproducing Kernel Particle Methods (RKPM), Particle Strength Exchange (PSE), and Smoothed Particle Hydrodynamics (SPH). Despite the increasing use of particle methods driven by improved computing performance, the relation between these algorithms remains formally unclear. In particular, particle methods lack a unifying mathematical definition and precisely defined terminology. This prevents the determination of whether an algorithm belongs to the class and what distinguishes the class.
Here we present a rigorous mathematical definition of particle methods and demonstrate its importance by applying it to several canonical algorithms, as well as to algorithms not previously recognized as particle methods. Furthermore, we base proofs of theorems about parallelizability and computational power on it, and we use it to develop scientific computing software.
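To convey the flavour of such a definition without reproducing the thesis's formalism, here is a hedged sketch of a generic particle-method state transition: particles carry a position and a property, interact pairwise within a neighborhood, and then evolve individually. The interact/evolve split and all names are illustrative assumptions.

```python
# A particle is (position, property); here: a 1D position and a scalar.
particles = [(0.0, 1.0), (0.5, 0.0), (1.2, 0.0)]

def neighborhood(p, q, radius=1.0):
    # Which pairs interact: here, a simple cutoff radius.
    return abs(p[0] - q[0]) <= radius

def interact(p, q):
    # Pairwise interaction: move a fraction of the property difference
    # (a toy stand-in for e.g. a PSE or SPH kernel sum).
    return (p[0], p[1] + 0.1 * (q[1] - p[1]))

def evolve(p):
    # Per-particle update, e.g. advancing the position.
    return (p[0] + 0.01, p[1])

def state_transition(particles):
    # One step of the generic loop: all interactions first,
    # then the individual evolution of every particle.
    interacted = []
    for i, p in enumerate(particles):
        for j, q in enumerate(particles):
            if i != j and neighborhood(p, q):
                p = interact(p, q)
        interacted.append(p)
    return [evolve(p) for p in interacted]

for _ in range(3):
    particles = state_transition(particles)
print(particles)
```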
Our definition unifies, for the first time, the so far loosely connected notion of particle methods. It thus marks the necessary starting point for a broad range of joint formal investigations and applications across fields.
Contents:
1 Introduction
1.1 The Role of Mathematical Definitions
1.2 Particle Methods
1.3 Scope and Contributions of this Thesis
2 Terminology and Notation
3 A Formal Definition of Particle Methods
3.1 Introduction
3.2 Definition of Particle Methods
3.2.1 Particle Method Algorithm
3.2.2 Particle Method Instance
3.2.3 Particle State Transition Function
3.3 Explanation of the Definition of Particle Methods
3.3.1 Illustrative Example
3.3.2 Explanation of the Particle Method Algorithm
3.3.3 Explanation of the Particle Method Instance
3.3.4 Explanation of the State Transition Function
3.4 Conclusion
4 Algorithms as Particle Methods
4.1 Introduction
4.2 Perfectly Elastic Collision in Arbitrary Dimensions
4.3 Particle Strength Exchange
4.4 Smoothed Particle Hydrodynamics
4.5 Lennard-Jones Molecular Dynamics
4.6 Triangulation refinement
4.7 Conway's Game of Life
4.8 Gaussian Elimination
4.9 Conclusion
5 Parallelizability of Particle Methods
5.1 Introduction
5.2 Particle Methods on Shared Memory Systems
5.2.1 Parallelization Scheme
5.2.2 Lemmata
5.2.3 Parallelizability
5.2.4 Time Complexity
5.2.5 Application
5.3 Particle Methods on Distributed Memory Systems
5.3.1 Parallelization Scheme
5.3.2 Lemmata
5.3.3 Parallelizability
5.3.4 Bounds on Time Complexity and Parallel Scalability
5.4 Conclusion
6 Turing Powerfulness and Halting Decidability
6.1 Introduction
6.2 Turing Machine
6.3 Turing Powerfulness of Particle Methods Under a First Set of Constraints
6.4 Turing Powerfulness of Particle Methods Under a Second Set of Constraints
6.5 Halting Decidability of Particle Methods
6.6 Conclusion
7 Particle Methods as a Basis for Scientific Software Engineering
7.1 Introduction
7.2 Design of the Prototype
7.3 Applications, Comparisons, Convergence Study, and Run-time Evaluations
7.4 Conclusion
8 Results, Discussion, Outlook, and Conclusion
8.1 Problem
8.2 Results
8.3 Discussion
8.4 Outlook
8.5 Conclusion