211

Planen im Fluentkalkül mit binären Entscheidungsdiagrammen

Störr, Hans-Peter 21 April 2005 (has links)
Human intelligence has long been a fascinating object of study for many researchers and philosophers. With the advent of computer technology, the dream of not merely understanding some of these distinctly human abilities, but of reproducing them or in some areas even surpassing them, appears realistic for the first time. An important part of this research field, labeled "Artificial Intelligence", is reasoning about actions and change, which attempts to replicate the human ability to foresee the effects of one's actions and to forge plans for achieving goals. An active research area within this field is the fluent calculus, a formalism for modeling actions and change. It provides the means for an automated agent to represent its environment and the effects of its actions within mathematical logic, so that it can plan its behavior by logical reasoning. Although there is much work on the fluent calculus that extends its range of applications and its semantics, there is still relatively little work on efficient reasoning; this is a main focus of the present thesis. An algorithm is developed that transfers insights from efficient model-checking techniques based on binary decision diagrams (BDDs) and extends them to a fragment of the fluent calculus. Fluent calculus planners can thus now also solve planning problems that are better suited to the symbolic breadth-first search realized here than to the heuristic depth-first search used exclusively so far. To make fluent-calculus-based planning methods easier to compare with other planning algorithms, a translation of the ADL fragment of the planning domain definition language PDDL into the fluent calculus was created. Numerous planning problems from the literature and from planning-domain libraries can thereby also be processed by fluent calculus planners. At the same time, the translation can serve as a formal semantics for PDDL, which is itself specified only informally.
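The central algorithmic idea translates into a small sketch. Below is a minimal, purely illustrative Python version of the breadth-first reachability loop that such planners run: it expands whole frontiers layer by layer, which is the operation a BDD-based planner performs symbolically on entire state sets rather than on individual states. The toy domain, fluent names, and actions are invented for illustration.

```python
# A minimal sketch of the breadth-first reachability loop that BDD-based
# planners implement symbolically. State sets are explicit Python frozensets
# here; in the thesis they are encoded as BDDs, so the same loop operates
# on whole state sets per step. Domain, actions, and goal are hypothetical.
from typing import Callable, FrozenSet, Iterable, Tuple

State = FrozenSet[str]          # a state is a set of true fluents

def successors(state: State) -> Iterable[Tuple[str, State]]:
    # Hypothetical actions in a toy domain: open a door, switch on a lamp.
    if "door_closed" in state:
        yield "open_door", (state - {"door_closed"}) | {"door_open"}
    if "lamp_off" in state:
        yield "switch_on", (state - {"lamp_off"}) | {"lamp_on"}

def bfs_plan(init: State, goal: Callable[[State], bool]):
    frontier = {init: []}       # state -> plan that reaches it
    visited = {init}
    while frontier:
        next_frontier = {}
        for state, plan in frontier.items():
            if goal(state):
                return plan     # shortest plan, by BFS layer
            for action, succ in successors(state):
                if succ not in visited:
                    visited.add(succ)
                    next_frontier[succ] = plan + [action]
        frontier = next_frontier
    return None                 # goal unreachable

plan = bfs_plan(frozenset({"door_closed", "lamp_off"}),
                lambda s: "lamp_on" in s and "door_open" in s)
print(plan)                     # ['open_door', 'switch_on']
```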
212

Integrierte und hybride Konstruktion von Software-Produktlinien

Dinger, Ulrich 12 June 2009 (has links)
The concepts for building software product lines serve the engineering-style, company-internal reuse of existing software artifacts. Existing approaches use hand-written and hand-maintained composition programs to assemble the products according to a variant selection. Employing an automatic planning component together with a simple, extensible component meta-model helps to process the data involved with computer support. Integrating both concepts into a hybrid approach makes it possible to build new products that were not conceived as a product line from the outset, without unnecessarily complicating a later rework that uses the automatic planning component.
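To illustrate the role of the planning component, here is a hedged sketch of the underlying idea: components declare what they require and provide, and a planner derives the assembly order for a variant selection instead of a hand-maintained composition program. The component catalog is hypothetical and far simpler than the thesis' meta-model.

```python
# A minimal sketch, under assumed names, of replacing a hand-written
# composition program with automatic planning: close the selection under
# requirements, then order components so that each one's requirements are
# provided by components assembled before it.
from typing import Dict, List, Set

COMPONENTS: Dict[str, Dict[str, Set[str]]] = {   # hypothetical catalog
    "core":      {"requires": set(),           "provides": {"core"}},
    "gui":       {"requires": {"core"},        "provides": {"gui"}},
    "cli":       {"requires": {"core"},        "provides": {"cli"}},
    "reporting": {"requires": {"core", "gui"}, "provides": {"reporting"}},
}

def plan_assembly(selection: Set[str]) -> List[str]:
    # Close the selection under requirements first.
    needed, todo = set(), list(selection)
    while todo:
        feature = todo.pop()
        if feature not in needed:
            needed.add(feature)
            todo.extend(COMPONENTS[feature]["requires"])
    # Greedy topological ordering by satisfied requirements.
    order, provided = [], set()
    while needed:
        ready = [f for f in needed if COMPONENTS[f]["requires"] <= provided]
        if not ready:
            raise ValueError("unsatisfiable selection: " + str(needed))
        for f in sorted(ready):
            order.append(f)
            provided |= COMPONENTS[f]["provides"]
            needed.remove(f)
    return order

print(plan_assembly({"reporting"}))   # ['core', 'gui', 'reporting']
```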
213

Extended Version of Multi-Perspectives on Feature Models

Schroeter, Julia, Lochau, Malte, Winkelmann, Tim 17 January 2012 (has links)
Domain feature models concisely express commonality and variability among the variants of a software product line. For separation of concerns, e.g., due to legal restrictions, technical considerations, and business requirements, multi-view approaches restrict the configuration choices on feature models for different stakeholders. However, recent approaches lack a formalization for precise yet flexible specifications of views that ensures that every derivable configuration perspective obeys feature model semantics. Here, we introduce a novel approach for clustering feature models to create multi-perspectives. Such customized perspectives result from the composition of multiple concern-relevant views. A structured view model is used to organize feature groups, in which a feature can be contained in multiple views. We provide formalizations for view composition and for guaranteed consistency of the resulting perspectives w.r.t. feature model semantics. Building on this, we provide an efficient algorithm to verify consistency for entire clusterings. We present an implementation and evaluate our concepts on two case studies.
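The core notions lend themselves to a compact sketch. The following hedged Python fragment illustrates, under strongly simplified assumptions (a plain parent map instead of a full feature model with groups and cross-tree constraints), how perspectives arise as unions of views and what a minimal consistency check w.r.t. the feature tree can look like; the names and the example tree are invented.

```python
# Hedged sketch: a view is a set of features, a perspective is the union of
# the views relevant to a stakeholder, and a minimal consistency condition
# is closure under the feature tree (no feature without its ancestors).
# The paper's formalization also covers groups and cross-tree constraints.
from typing import Dict, Optional, Set

PARENT: Dict[str, Optional[str]] = {      # hypothetical feature tree
    "root": None, "payment": "root", "cash": "payment",
    "card": "payment", "ui": "root", "touch": "ui",
}

def perspective(*views: Set[str]) -> Set[str]:
    combined: Set[str] = set()
    for v in views:
        combined |= v                      # view composition = union
    return combined

def is_consistent(p: Set[str]) -> bool:
    # Closed under parents: every contained feature's parent is visible too.
    return all(PARENT[f] is None or PARENT[f] in p for f in p)

sales = {"root", "payment", "cash", "card"}
frontend = {"root", "ui", "touch"}
print(is_consistent(perspective(sales, frontend)))   # True
print(is_consistent({"touch"}))                      # False: 'ui' missing
```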
214

Knowledge-Based General Game Playing

Schiffel, Stephan 29 July 2011 (has links)
The goal of General Game Playing (GGP) is to develop a system that is able to play previously unseen games well, given only the rules of the game. In contrast to traditional game-playing programs, a general game player cannot be given game-specific knowledge. Instead, the program has to discover this knowledge and use it to play the game well without human intervention. In this thesis, we present such a program and general methods that solve a variety of knowledge discovery problems in GGP. Our main contributions are methods for the automatic construction of heuristic evaluation functions, the automated discovery of game structures, a system for proving properties of games, and symmetry detection and exploitation for general games. Contents: 1. Introduction; 2. Preliminaries; 3. Components of Fluxplayer; 4. Game Tree Search; 5. Generating State Evaluation Functions; 6. Distance Estimates for Fluents and States; 7. Proving Properties of Games; 8. Symmetry Detection; 9. Related Work; 10. Discussion
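One known ingredient of such automatically constructed evaluation functions (used, e.g., in Fluxplayer) is fuzzy evaluation of the goal formula: rather than testing whether the goal holds, the function estimates the degree to which a state satisfies it. The sketch below illustrates this with a product t-norm; the formula encoding, the parameter p, and the tic-tac-toe-style atoms are illustrative assumptions.

```python
# Hedged sketch: fuzzy evaluation of a GGP goal formula. True atoms score
# p, false atoms 1 - p; conjunction uses the product t-norm and disjunction
# its dual, so partially satisfied goals receive intermediate scores.
import math
from typing import Set, Tuple, Union

Formula = Union[str, Tuple]  # atom | ("and", ...) | ("or", ...) | ("not", f)

def degree(formula: Formula, state: Set[str], p: float = 0.9) -> float:
    if isinstance(formula, str):                  # atomic fluent
        return p if formula in state else 1.0 - p
    op, *args = formula
    values = [degree(a, state, p) for a in args]
    if op == "not":
        return 1.0 - values[0]
    if op == "and":
        return math.prod(values)                  # product t-norm
    if op == "or":                                # dual t-conorm
        result = 0.0
        for v in values:
            result = result + v - result * v
        return result
    raise ValueError(f"unknown operator: {op}")

# Tic-tac-toe-style goal: x on (1,1) and on one of two other cells.
goal = ("and", "cell_1_1_x", ("or", "cell_1_2_x", "cell_2_2_x"))
print(degree(goal, {"cell_1_1_x"}))                # 0.171: partly satisfied
print(degree(goal, {"cell_1_1_x", "cell_2_2_x"}))  # 0.819: nearly satisfied
```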
215

Erstellung von Echtzeitmotormodellen aus den Konstruktionsdaten von Verbrennungsmotoren

Kämmer, Alexander 30 June 2003 (has links)
Engine management systems in modern vehicles are becoming ever larger and more complex. The electronic control units (ECUs), as the central component of these systems, are defined in their functionality by hardware and software; they are the result of a long development and production process. Road tests and engine test-bench experiments for testing ECUs are very time-consuming and costly. An alternative is to test ECUs outside their real environment by operating them on a hardware-in-the-loop test bench. The large number of individual and interlinked ECU functions requires a structured and reproducible test procedure. Such tests, however, are only possible once an engine prototype has been completed, because the parameters of the existing models are derived from measured test-bench data. A further question concerns model depth: today's models are based on data averaged over one working cycle, so investigations such as the instantaneous acceleration of the crankshaft are not possible. For these reasons, this thesis investigates strategies for the model-based testing of engine management systems and considers the time-saving potential of a new engine model of great depth.
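The difference in model depth can be made concrete with a small simulation sketch: a cycle-averaged model sees only the mean torque, whereas integrating the crankshaft dynamics J·dω/dt = T_gas(θ) − T_load at crank-angle resolution exposes the instantaneous speed fluctuation the thesis refers to. All parameters and the torque shape below are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch of a crank-angle-resolved engine model: explicit Euler
# integration of the crankshaft dynamics reveals the speed ripple within
# one working cycle that cycle-averaged models cannot show.
import math

J = 0.25          # crankshaft inertia [kg m^2], assumed
T_LOAD = 40.0     # constant load torque [N m], assumed

def gas_torque(theta: float) -> float:
    # Crude single-cylinder four-stroke shape: one torque pulse per 720 deg.
    phase = theta % (4 * math.pi)
    return 40.0 + 120.0 * max(0.0, math.sin(phase / 2.0)) ** 3

def simulate(omega0: float = 100.0, dt: float = 1e-4, cycles: int = 2):
    theta, omega, trace = 0.0, omega0, []
    while theta < cycles * 4 * math.pi:
        domega = (gas_torque(theta) - T_LOAD) / J
        omega += domega * dt            # explicit Euler step
        theta += omega * dt
        trace.append(omega)
    return trace

trace = simulate()
print(f"speed ripple: {min(trace):.1f} .. {max(trace):.1f} rad/s")
```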
216

Extending Artemis With a Rule-Based Approach for Automatically Assessing Modeling Tasks

Rodestock, Franz 27 September 2022 (has links)
The Technische Universität Dresden has multiple e-learning projects in use. The Chair of Software Technology uses Inloop to teach students object-oriented programming through automatic feedback. In recent years, interest has grown in giving students automated feedback on modeling tasks as well, which is why Hamann developed an extension in 2020 that automates their assessment. The TU Dresden currently plans to replace Inloop with Artemis, a comparable system. Artemis supports the semi-automatic assessment of modeling exercises. In contrast, the system proposed by Hamann, called Inloom, follows a rule-based approach and provides instant feedback. A rule-based system has certain advantages over a similarity-based one, such as the generally better feedback it produces. To give instructors more flexibility and choice, this work identifies possible ways of extending Artemis with the rule-based approach of Inloom and, in a second step, provides a proof-of-concept implementation. Furthermore, a comparison between the different systems is developed to help instructors choose the system best suited to their use case. Contents: Introduction; Background; Related Work; Analysis; System Design; Implementation; Evaluation; Conclusion and Future Work; Bibliography; Appendix
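The contrast between the two assessment styles can be sketched compactly: a rule-based assessor reduces the student diagram to facts and runs explicit rules, each of which emits targeted feedback immediately. The following Python fragment is a hypothetical illustration of that principle, not the Inloom or Artemis API; the diagram encoding and the rules are invented.

```python
# Hedged sketch of rule-based assessment of a modeling task: the diagram
# is a set of class names plus (source, kind, target) relation facts, and
# each rule checks one expectation and emits immediate, targeted feedback.
from typing import List, Set, Tuple

def assess(classes: Set[str],
           relations: Set[Tuple[str, str, str]]) -> List[str]:
    feedback = []
    # Rule 1: expected classes must exist.
    for expected in ("Customer", "Order"):
        if expected not in classes:
            feedback.append(f"Missing class '{expected}'.")
    # Rule 2: expected association with the correct kind.
    if ("Customer", "places", "Order") not in relations:
        feedback.append("Expected an association 'places' from "
                        "Customer to Order.")
    # Rule 3: a common error that deserves a specific hint.
    if ("Order", "inherits", "Customer") in relations:
        feedback.append("Order should not inherit from Customer; "
                        "use an association instead.")
    return feedback or ["All checks passed."]

print(assess({"Customer"}, set()))
# ["Missing class 'Order'.", "Expected an association 'places' from
#  Customer to Order."]
```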
217

A Sample Advisor for Approximate Query Processing

Rösch, Philipp, Lehner, Wolfgang 25 January 2023 (has links)
The rapid growth of current data warehouse systems makes random sampling a crucial component of modern data management systems. Although there is a large body of work on database sampling, the problem of automatic sample selection has remained (almost) unaddressed. In this paper, we tackle the problem with a sample advisor. We propose a cost model to evaluate a sample for a given query. Based on this, our sample advisor determines the optimal set of samples for a given set of queries specified by an expert. We further propose an extension that utilizes recorded workload information; in this case, the sample advisor takes the set of queries and a given memory bound into account when computing a sample advice. Additionally, we consider merging samples in the case of overlapping sample advice and present both an exact and a heuristic solution. In our evaluation, we analyze the properties of the cost model and compare the proposed algorithms. We further demonstrate the effectiveness and efficiency of the heuristic solutions with a variety of experiments.
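The workload-driven variant can be sketched as a greedy selection under a memory bound: each candidate sample has a footprint and estimated per-query benefits, and the advisor repeatedly picks the sample with the largest marginal benefit that still fits. This is a hedged illustration of the general idea, not the paper's cost model; all names and numbers are assumptions.

```python
# Hedged sketch: greedy sample advice under a memory bound. Benefits are
# assumed per-query cost savings; the marginal gain of a candidate counts
# only its improvement over the samples already chosen.
from typing import Dict, List

candidates: Dict[str, Dict] = {           # sample -> footprint + benefit
    "s_customers": {"size": 40, "benefit": {"q1": 9.0, "q3": 4.0}},
    "s_orders":    {"size": 70, "benefit": {"q2": 8.0, "q3": 5.0}},
    "s_lineitems": {"size": 30, "benefit": {"q2": 3.0}},
}

def advise(memory_bound: int) -> List[str]:
    chosen, used, covered = [], 0, {}     # covered: query -> best benefit
    while True:
        best, best_gain = None, 0.0
        for name, c in candidates.items():
            if name in chosen or used + c["size"] > memory_bound:
                continue
            gain = sum(max(0.0, b - covered.get(q, 0.0))
                       for q, b in c["benefit"].items())
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:
            return chosen
        chosen.append(best)
        used += candidates[best]["size"]
        for q, b in candidates[best]["benefit"].items():
            covered[q] = max(covered.get(q, 0.0), b)

print(advise(memory_bound=100))           # ['s_customers', 's_lineitems']
```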
218

Statistical determination of atomic-scale characteristics of nanocrystals based on correlative multiscale transmission electron microscopy

Neumann, Stefan 21 December 2023 (has links)
The exceptional properties of nanocrystals (NCs) are strongly influenced by many different characteristics, such as their size and shape, but also by characteristics on the atomic scale, such as their crystal structure and surface structure, as well as by potential microstructure defects. While the size and shape of NCs are frequently determined in a statistical manner, atomic-scale characteristics are usually quantified only for a small number of individual NCs and thus with limited statistical relevance. Within this work, a characterization workflow was established that is capable of determining relevant NC characteristics simultaneously in a sufficiently detailed and statistically relevant manner. The workflow is based on transmission electron microscopy, networked by a correlative multiscale approach that combines atomic-scale information on NCs obtained from high-resolution imaging with statistical information on NCs obtained from low-resolution imaging, assisted by a semi-automatic segmentation routine. The approach is complemented by other characterization techniques, such as X-ray diffraction, UV-vis spectroscopy, dynamic light scattering, and alternating gradient magnetometry. The general applicability of the developed workflow is illustrated with several examples: the classification of Au NCs with different structures, the statistical determination of the facet configurations of Au nanorods, the study of the hierarchical structure of multi-core iron oxide nanoflowers and its influence on their magnetic properties, and the evaluation of the interplay between size, morphology, microstructure defects, and optoelectronic properties of CdSe NCs.
Contents:
List of abbreviations and symbols
1 Introduction; 1.1 Types of nanocrystals; 1.2 Characterization of nanocrystals; 1.3 Motivation and outline of this thesis
2 Materials and methods; 2.1 Nanocrystal synthesis; 2.1.1 Au nanocrystals; 2.1.2 Au nanorods; 2.1.3 Multi-core iron oxide nanoparticles; 2.1.4 CdSe nanocrystals; 2.2 Nanocrystal characterization; 2.2.1 Transmission electron microscopy; 2.2.2 X-ray diffraction; 2.2.3 UV-vis spectroscopy; 2.2.3.1 Au nanocrystals; 2.2.3.2 Au nanorods; 2.2.3.3 CdSe nanocrystals; 2.2.4 Dynamic light scattering; 2.2.5 Alternating gradient magnetometry; 2.3 Methodical development; 2.3.1 Correlative multiscale approach – Statistical information beyond size and shape; 2.3.2 Semi-automatic segmentation routine
3 Classification of Au nanocrystals with comparable size but different morphology and defect structure; 3.1 Introduction; 3.1.1 Morphologies and structures of Au nanocrystals; 3.1.2 Localized surface plasmon resonance of Au nanocrystals; 3.1.3 Motivation and outline; 3.2 Results; 3.2.1 Microstructural characteristics of the Au nanocrystals; 3.2.2 Insufficiency of two-dimensional size and shape for an unambiguous classification of the Au nanocrystals; 3.2.3 Statistical classification of the Au nanocrystals; 3.2.4 Advantage of a multidimensional characterization of the Au nanocrystals; 3.2.5 Estimation of the density of planar defects in the Au nanoplates; 3.3 Discussion; 3.4 Conclusions
4 Statistical determination of the facet configurations of Au nanorods; 4.1 Introduction; 4.1.1 Growth mechanism and facet formation of Au nanorods; 4.1.2 Localized surface plasmon resonance of Au nanorods; 4.1.3 Catalytic activity of Au nanorods; 4.1.4 Motivation and outline; 4.2 Results; 4.2.1 Statistical determination of the size and shape of the Au nanorods; 4.2.2 Microstructural characteristics and facet configurations of the Au nanorods; 4.2.3 Statistical determination of the facet configurations of the Au nanorods; 4.3 Discussion; 4.4 Conclusions
5 Influence of the hierarchical architecture of multi-core iron oxide nanoflowers on their magnetic properties; 5.1 Introduction; 5.1.1 Phase composition and phase distribution in iron oxide nanoparticles; 5.1.2 Magnetic properties of iron oxide nanoparticles; 5.1.3 Mono-core vs. multi-core iron oxide nanoparticles; 5.1.4 Motivation and outline; 5.2 Results; 5.2.1 Phase composition, vacancy ordering, and antiphase boundaries; 5.2.2 Arrangement and coherence of individual cores within the iron oxide nanoflowers; 5.2.3 Statistical determination of particle, core, and shell size; 5.2.4 Influence of the coherence of the cores on the magnetic properties; 5.3 Discussion; 5.4 Conclusions
6 Interplay between size, morphology, microstructure defects, and optoelectronic properties of CdSe nanocrystals; 6.1 Introduction; 6.1.1 Polymorphism in CdSe nanocrystals; 6.1.2 Optoelectronic properties of CdSe nanocrystals; 6.1.3 Nucleation, growth, and coarsening of CdSe nanocrystals; 6.1.4 Motivation and outline; 6.2 Results; 6.2.1 Influence of the synthesis temperature on the optoelectronic properties of the CdSe nanocrystals; 6.2.2 Microstructural characteristics of the CdSe nanocrystals; 6.2.3 Statistical determination of size, shape, and amount of oriented attachment of the CdSe nanocrystals; 6.3 Discussion; 6.4 Conclusions
7 Summary and outlook
References
Publications
219

Top-k Entity Augmentation using Consistent Set Covering

Eberius, Julian, Thiele, Maik, Braunschweig, Katrin, Lehner, Wolfgang 19 September 2022 (has links)
Entity augmentation is a query type in which, given a set of entities and a large corpus of possible data sources, the values of a missing attribute are to be retrieved. State-of-the-art methods return a single result that is fused from a potentially large set of data sources in order to cover all queried entities. We argue that queries on large corpora of heterogeneous sources using information retrieval and automatic schema matching methods cannot easily return a single result the user can trust, especially if the result is composed from a large number of sources that the user has to verify manually. We therefore propose to process these queries in a top-k fashion, in which the system produces multiple minimal consistent solutions from which the user can choose in order to resolve the uncertainty of the data sources and methods used. In this paper, we introduce and formalize the problem of consistent multi-solution set covering and present algorithms based on a greedy and a genetic optimization approach. We then apply these algorithms to Web table-based entity augmentation. The publication further includes a Web table corpus with 100M tables and a Web table retrieval and matching system in which these algorithms are implemented. Our experiments show that the consistency and minimality of the augmentation results can be improved using our set covering approach, without loss of precision or coverage, and while producing multiple alternative query results.
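The multi-solution idea can be sketched in a few lines: build one greedy cover, then ban its sources and build the next, so the user receives several internally consistent alternatives instead of one opaque fusion. The sketch below simplifies consistency to source-disjointness between alternatives and uses toy coverage data; it is an illustration of the problem, not the paper's algorithms.

```python
# Hedged sketch of top-k set covering over data sources: a greedy pass
# builds one cover of the queried entities; banning its sources forces
# each further cover to be a genuine alternative.
from typing import Dict, List, Optional, Set

SOURCES: Dict[str, Set[str]] = {          # source -> entities it covers
    "tableA": {"e1", "e2", "e3"},
    "tableB": {"e3", "e4"},
    "tableC": {"e1", "e2"},
    "tableD": {"e3", "e4"},
}

def greedy_cover(entities: Set[str], banned: Set[str]) -> Optional[List[str]]:
    uncovered, cover = set(entities), []
    while uncovered:
        best = max((s for s in SOURCES if s not in banned),
                   key=lambda s: len(SOURCES[s] & uncovered),
                   default=None)
        if best is None or not SOURCES[best] & uncovered:
            return None                    # cannot complete this cover
        cover.append(best)
        uncovered -= SOURCES[best]
    return cover

def top_k_covers(entities: Set[str], k: int) -> List[List[str]]:
    covers, banned = [], set()
    while len(covers) < k:
        cover = greedy_cover(entities, banned)
        if cover is None:
            break
        covers.append(cover)
        banned |= set(cover)               # force the next cover to differ
    return covers

print(top_k_covers({"e1", "e2", "e3", "e4"}, k=2))
# [['tableA', 'tableB'], ['tableC', 'tableD']] -- two alternative covers
```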
220

BLAINDER—A Blender AI Add-On for Generation of Semantically Labeled Depth-Sensing Data

Reitmann, Stefan, Neumann, Lorenzo, Jung, Bernhard 02 July 2024 (has links)
Common machine-learning (ML) approaches for scene classification require a large amount of training data. However, for the classification of depth sensor data, in contrast to image data, relatively few databases are publicly available, and the manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables a largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., the influence of rain or dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.
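The principle behind such simulated, semantically labeled depth data fits in a short sketch: rays are cast from a virtual sensor over an angular grid, intersected with labeled scene geometry, and every hit becomes a labeled 3D point. The following self-contained Python fragment uses analytic spheres instead of Blender meshes and is not the BLAINDER API; scene content and sensor parameters are assumptions.

```python
# Hedged sketch of LiDAR-style labeled point-cloud generation: cast rays
# over a horizontal angular grid, keep the nearest hit per ray, and record
# each hit as (x, y, z, label).
import math
from typing import List, Tuple

# label, center (x, y, z), radius -- toy scene
SCENE = [("tree", (4.0, 0.0, 0.0), 1.0), ("car", (6.0, 2.5, 0.0), 1.5)]

def ray_sphere(origin, direction, center, radius):
    """Return the distance to the nearest intersection, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * sum(d * o for d, o in zip(direction, (ox, oy, oz)))
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c                  # quadratic with a == 1 (unit rays)
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def scan(h_steps: int = 90, fov: float = math.pi / 2) -> List[Tuple]:
    cloud = []
    for i in range(h_steps):
        yaw = -fov / 2 + fov * i / (h_steps - 1)
        d = (math.cos(yaw), math.sin(yaw), 0.0)   # horizontal scan line
        hits = [(t, label) for label, c, r in SCENE
                if (t := ray_sphere((0, 0, 0), d, c, r)) is not None]
        if hits:
            t, label = min(hits)          # keep the nearest surface only
            cloud.append((t * d[0], t * d[1], t * d[2], label))
    return cloud

cloud = scan()
print(len(cloud), "labeled points;", cloud[0])
```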
