101

Automated Theorem Proving for General Game Playing

Haufe, Sebastian 22 June 2012 (has links)
While automated game-playing systems like Deep Blue perform excellently within their domain, handling a different game or even a slight change of rules is impossible without the programmer's intervention. Considered a great challenge for Artificial Intelligence, General Game Playing is concerned with the development of techniques that enable computer programs to play arbitrary, possibly unknown n-player games given nothing but the game rules in a tailor-made description language. A key to success in this endeavour is the ability to reliably extract hidden game-specific features from a given game description automatically. An informed general game player can efficiently play a game by exploiting structural game properties to choose the currently most appropriate algorithm, to construct a suitable heuristic, or to apply techniques that reduce the search space. In addition, an automated method for property extraction can provide valuable assistance in discovering specification bugs during game design by providing information about the mechanics of the currently specified game description. The recent extension of the description language to games with incomplete information and elements of chance further induces the need to detect game properties involving player knowledge at several stages of the game. In this thesis, we develop a formal proof method for the automatic acquisition of rich game-specific invariance properties. To this end, we first introduce a simple yet expressive property description language to address knowledge-free game properties which may involve arbitrary finite sequences of successive game states. We specify a semantics based on state transition systems over the Game Description Language, and develop a provably correct formal theory which makes it possible to show the validity of game properties with respect to this semantics across all reachable game states. Our proof theory does not require visiting every single reachable state. Instead, it applies an induction principle on the game rules based on the generation of answer set programs, allowing any off-the-shelf answer set solver to be applied to practically verify invariance properties even in complex games whose state space cannot be fully explored. To account for the recent extension of the description language to games with incomplete information and elements of chance, we extend our induction method in a provably correct way to properties involving player knowledge. An extensive evaluation shows its practical applicability even in complex games.
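The induction principle described here can be illustrated independently of the Game Description Language and the answer-set encoding used in the thesis: an invariant is established for all reachable states by showing that it holds in the initial state and that every legal move preserves it. A minimal sketch on a toy transition system (the toy game and the helper names below are assumptions for illustration only, not the thesis's formalization) follows:

```python
"""Illustrative sketch of proving an invariant by induction over game moves.
This is not the thesis's GDL/answer-set encoding; the toy game is an assumption."""
from itertools import product

# Toy game: two counters, each move increments one counter modulo 5.
INITIAL = (0, 0)

def successors(state):
    a, b = state
    return [((a + 1) % 5, b), (a, (b + 1) % 5)]

def invariant(state):
    # Candidate property: both counters always stay within 0..4.
    return all(0 <= x < 5 for x in state)

def holds_by_induction(candidate_states, initial, succ, inv):
    # Base case: the invariant holds in the initial state.
    if not inv(initial):
        return False
    # Step case: no state satisfying the invariant has a successor violating it.
    # An answer set solver searches for such a counterexample symbolically;
    # here we simply enumerate a finite candidate set.
    for s in candidate_states:
        if inv(s) and not all(inv(t) for t in succ(s)):
            return False
    return True

if __name__ == "__main__":
    states = list(product(range(5), repeat=2))
    print(holds_by_induction(states, INITIAL, successors, invariant))  # True
```

If both cases succeed, the invariant holds in every reachable state without the reachable state space ever being enumerated, which is what makes the approach usable for games whose state space is too large to explore.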
102

Development of a Class Framework for Flood Forecasting

Krauße, Thomas January 2007 (has links)
From the introduction: The calculation and prediction of river flow is a very old problem. Extremely high runoff values in particular can cause enormous economic damage. A system that precisely predicts the runoff and issues warnings in case of a flood event can prevent a large share of this damage. On the basis of a good flood forecast, preventive measures and warnings can be put in place, and efficient structural flood retention can greatly reduce the effects of a flood event. With a precise runoff prediction at longer lead times (>48 h), the dam administration can instruct its gatekeepers to empty dams and reservoirs quickly, following a smart strategy. With good timing, this later enables the dams to store and retain the peak of the flood and to reduce damage downstream. Warning people in potentially flooded areas with a longer lead time enables them to evacuate movable items such as cars, computers, and important documents. Additionally, the underlying rainfall-runoff model can be used to perform runoff simulations in order to determine which areas are threatened by which precipitation events and the associated runoff in the river. Altogether, these methods can avoid a huge amount of economic damage.

Contents:
- List of Symbols and Abbreviations (p. III)
- 1 Introduction (p. 1)
- 2 Process based Rainfall-Runoff Modelling (p. 5): 2.1 Basics of runoff processes (p. 5); 2.2 Physically based rainfall-runoff and hydrodynamic river models (p. 15)
- 3 Portraying Rainfall-Runoff Processes with Neural Networks (p. 21): 3.1 The Challenge in General (p. 22); 3.2 State-of-the-art Approaches (p. 24); 3.3 Architectures of neural networks for time series prediction (p. 26)
- 4 Requirements specification (p. 33)
- 5 The PAI-OFF approach as the base of the system (p. 35): 5.1 Pre-Processing of the Input Data (p. 37); 5.2 Operating and training the PoNN (p. 47); 5.3 The PAI-OFF approach - an Intelligent System (p. 52)
- 6 Design and Implementation (p. 55): 6.1 Design (p. 55); 6.2 Implementation (p. 58); 6.3 Exported interface definition (p. 62); 6.4 Displaying output data with involvement of uncertainty (p. 64)
- 7 Results and Discussion (p. 69): 7.1 Evaluation of the Results (p. 69); 7.2 Discussion of the achieved state (p. 75)
- 8 Conclusion and Future Work (p. 77): 8.1 Access to real-time meteorological input data (p. 77); 8.2 Using further developed prediction methods (p. 79); 8.3 Development of a graphical user interface (p. 80)
- Bibliography (p. 83)
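The PAI-OFF approach at the core of the system feeds pre-processed meteorological and catchment inputs into a neural network that predicts runoff ahead of time. As a rough illustration of this kind of data-driven forecasting (a sketch on synthetic data, assuming lagged precipitation and runoff values as features; it does not reproduce the thesis's PoNN or its pre-processing), consider:

```python
# Minimal, illustrative sketch of neural-network runoff forecasting. The synthetic
# series, the feature layout, and the 48 h lead time are assumptions for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic hourly series: precipitation [mm/h] and a smoothed runoff response [m^3/s].
hours = 2000
precip = rng.gamma(0.3, 2.0, size=hours)
runoff = np.convolve(precip, np.exp(-np.arange(48) / 12.0), mode="full")[:hours] + 5.0

LAGS, LEAD = 48, 48  # use the last 48 h of inputs to predict runoff 48 h ahead

X, y = [], []
for t in range(LAGS, hours - LEAD):
    X.append(np.concatenate([precip[t - LAGS:t], runoff[t - LAGS:t]]))
    y.append(runoff[t + LEAD])
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("test RMSE:", np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2)))
```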
103

Artificial Intelligence in University Teaching: Empirical Studies on the AI Acceptance of Students at (Saxon) Universities

Stützer, Cathleen M. 04 March 2022 (has links)
The extent to which AI can effectively accompany new forms of university teaching and learning is being investigated in the BMBF joint project 'tech4comp: Personalised competence development through scalable mentoring processes'. The project partners jointly research socio-technical artefacts for personalised, digitally supported mentoring for students. To this end, framework conditions and (social) contextual factors, among others, are examined in order to support the implementation of AI in higher education. It is assumed that, regardless of the type of technology and of the pandemic context, it is above all the acceptance and willingness of the stakeholders involved that contributes to the successful use of intelligent educational technologies. Within the research project, the ZQA/KfBH at TU Dresden, under the direction of Dr. Cathleen M. Stützer, is dedicated to elaborating the fields of action that arise from socio-technical research on AI in higher education. Case studies address, among other things, questions about the conditions for success and the effectiveness of digital higher education in order to (prospectively) support the successful implementation of AI-supported adaptive mentoring systems with evidence-based research reports.

Contents:
- Preface & Acknowledgements
- List of Figures
- List of Tables
- List of Abbreviations
- 1. Introduction
- 2. Methods
- 3. Results
- 4. Implications: 4.1 Influencing factors and conditions for success of AI acceptance; 4.2 Recommendations for action
- 5. Summary and Conclusion
- 6. Limitations
- 7. Bibliography
- Appendix
104

Improving Online Debates with Artificial Intelligence

Geißler, Holger 16 December 2019 (has links)
The form of communication underlying online debates such as chats and forum discussions is computer-mediated communication. The term covers a wide variety of forms of communication that have in common that a computer is involved as a medial conveyor of meaning. Compared with other forms of communication, such as face-to-face communication, the transfer of information through computer-mediated communication is therefore severely restricted (Taddicken, 2008, 30ff.). As a result, online discussions struggle with a number of difficulties that face-to-face discussions do not: they quickly become confusing, they go around in circles, arguments are repeated, participants talk past each other, and joint decisions or compromises are rarely negotiated. Social norms recede into the background and insults come to the fore, especially among participants who do not know each other personally. These difficulties, together with the liability that legislation imposes on website operators, have led to the comment functions on sites such as Tagesschau, Deutsche Welle, or Stern being switched off in whole or in part (e.g. Pohl, 2018). [... from Section 1]
105

Design Solutions with the Help of Artificial Intelligence

Gründer, Willi, Polyakov, Denis 03 January 2020 (has links)
This article proposes an approach for an 'intellectual design assistant' based on digitized experience. These assistants, which build on analytical and numerical methods, are created using methods of Artificial Intelligence. They are intended to absorb already known knowledge elements and experience and to keep extending them through continuous reflection against reality, without elaborate algorithm development and time-consuming numerical computation hindering the transfer of new, often inherent insights into daily practice and thus into quality management. In this way, knowledge gaps between departments can be closed quickly and differences in training between employees can be balanced out. On the other hand, companies can also use this to advance the mapping of particular strengths through an automatic comparison of similar designs. [... from the introduction]
106

The Analysis Competence / Market Priority Portfolio for Comparing Data Analysis Projects in Product Development

Klement, Sebastian, Saske, Bernhard, Arndt, Stephan, Stelzer, Ralph 03 January 2020 (has links)
Artificial Intelligence (AI), with its subordinate research fields such as machine learning (ML), speech recognition, and robotics, is on everyone's lips. The performance and stability of applications that use AI in the broader sense to handle tasks have increased, and such applications permeate society more and more. Worldwide, AI is perceived as a key technology that will continue to gain importance over the coming years (Bitkom, DFKI 2017). The call for proposals issued by the German Federal Ministry for Economic Affairs and Energy in February 2019 likewise aims to promote AI as a pacemaker technology for "[...] economically relevant ecosystems" (BMWi 2019). As production equipment is increasingly fitted with sensors and these sensors are increasingly networked, the amount of available data that can be used to generate knowledge grows as well (Fraunhofer 2018). ML, as a subfield of AI, benefits particularly from this. The data obtained are as varied as the tasks within mechanical engineering that can be tackled with them. Goals associated with the use of ML include, for example, self-optimizing production systems or demand-oriented maintenance of plants based on the most accurate possible prediction of the failure time of components. Like any other technology, the use of ML requires resources that are only available to a limited extent in companies. The decision for or against the use of ML in mechanical engineering products is currently clearly a strategic one and requires the involvement of various departments up to the company's management (Saltz et al. 2017). A strategic discussion and decision tool is therefore needed that can represent a project from a technological and an economic perspective, can be used across disciplines, and enables a structured procedure. For decision-making, the authors propose the analysis competence / market priority portfolio introduced here, which is tailored specifically to the question of ML use in mechanical engineering. Evaluation tables are presented and their use explained; they are oriented toward the process steps required for complex data analyses (Shearer 2000, Klement et al. 2018). The derivation of standard strategies is discussed on the basis of the final representation of the portfolio. [... from the introduction]
107

Towards Efficient Convolutional Neural Architecture Design

Richter, Mats L. 10 May 2022 (has links)
The design and adjustment of convolutional neural network architectures is an opaque and mostly trial-and-error-driven process. The main reasons for this are the lack of proper paradigms, beyond general conventions, for the development of neural network architectures, and the lack of effective insights into the models that can be propagated back to design decisions. In order for the task-specific design of deep learning solutions to become more efficient and goal-oriented, novel design strategies need to be developed that are founded on an understanding of convolutional neural network models. This work develops tools for the analysis of the inference process in trained neural network models. Based on these tools, characteristics of convolutional neural network models are identified that can be linked to inefficiencies in predictive and computational performance. Building on these insights, this work presents methods for effectively diagnosing such design faults before and during training with little computational overhead. These findings are empirically tested and demonstrated on architectures with sequential and multi-pathway structures, covering all the common types of convolutional neural network architectures used for classification. Furthermore, this work proposes simple optimization strategies that allow for goal-oriented and informed adjustment of the neural architecture, opening the potential for a less trial-and-error-driven design process.
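Analysis tools of this kind typically observe a trained network from the inside while it performs inference. A minimal sketch of the general probing mechanism, assuming PyTorch forward hooks that merely record the output shape of every convolutional layer (the thesis's actual diagnostics are not reproduced here), might look like this:

```python
# Illustrative sketch only: record per-layer output shapes during a forward pass using
# forward hooks. The tiny stand-in network below is an assumption; the mechanism is the
# same for any convolutional architecture.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

records = {}

def attach_shape_probes(model):
    handles = []
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            def hook(mod, inputs, output, name=name):
                records[name] = tuple(output.shape[1:])  # (channels, height, width)
            handles.append(module.register_forward_hook(hook))
    return handles

handles = attach_shape_probes(net)
with torch.no_grad():
    net(torch.randn(1, 3, 64, 64))  # one dummy forward pass triggers the hooks

for layer, shape in records.items():
    print(layer, shape)

for h in handles:
    h.remove()  # detach the probes again
```

Probes like this add little computational overhead, which is why such analyses can run before and during training rather than only after the fact.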
108

Improving nuclear medicine with deep learning and explainability: two real-world use cases in parkinsonian syndrome and safety dosimetry

Nazari, Mahmood 17 March 2022 (has links)
Computer vision in the area of medical imaging has improved rapidly during recent years as a consequence of developments in deep learning and explainability algorithms. In addition, imaging in nuclear medicine is becoming increasingly sophisticated, with the emergence of targeted radiotherapies that enable treatment and imaging on a molecular level (“theranostics”), where radiolabeled targeted molecules are directly injected into the bloodstream. Based on our recent work, we present two use cases in nuclear medicine: first, the impact of automated organ segmentation required for personalized dosimetry in patients with neuroendocrine tumors, and second, purely data-driven identification and verification of brain regions for the diagnosis of Parkinson's disease. A convolutional neural network was used for automated organ segmentation on computed tomography images. The segmented organs were used to calculate the energy deposited in the organs at risk for patients treated with a radiopharmaceutical. Our method resulted in faster and cheaper dosimetry and differed by only 7% from dosimetry performed by two medical physicists. The identification of brain regions, in turn, was analyzed on dopamine-transporter single-photon emission computed tomography images using a convolutional neural network together with an explainability method, the layer-wise relevance propagation algorithm. Our findings confirm that the extra-striatal brain regions, i.e., insula, amygdala, ventromedial prefrontal cortex, thalamus, anterior temporal cortex, superior frontal lobe, and pons, contribute to the interpretation of images beyond the striatal regions. In current diagnostic practice, however, only the striatum serves as the reference region, while extra-striatal regions are neglected. We further demonstrate that deep learning-based diagnosis combined with an explainability algorithm can be recommended to support the interpretation of this image modality in clinical routine for parkinsonian syndromes, with a total computation time of three seconds, which is compatible with a busy clinical workflow. Overall, this thesis shows for the first time that deep learning with explainability can achieve results competitive with human performance and generate novel hypotheses, thus paving the way towards improved diagnosis and treatment in nuclear medicine.
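Layer-wise relevance propagation redistributes the network's output score backwards through the layers so that every input element receives a share of the decision. A minimal sketch of the epsilon rule on a tiny fully connected ReLU network (an illustration of the principle only; the trained models and imaging data from the thesis are not reproduced) follows:

```python
# Illustrative LRP-epsilon sketch for a tiny fully connected ReLU network (NumPy only).
# Weights and input are random; this demonstrates the relevance redistribution rule,
# not the thesis's models.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(6, 4))   # layer 1: 6 input features -> 4 hidden units
W2 = rng.normal(size=(4, 1))   # layer 2: 4 hidden units  -> 1 output score

def forward(x):
    hidden = np.maximum(0.0, x @ W1)   # ReLU hidden activations (biases omitted)
    return hidden, hidden @ W2         # raw output score

def lrp_epsilon(a_prev, W, relevance, eps=1e-6):
    """Redistribute `relevance` from a layer's outputs to its input activations:
    R_j = sum_k a_j * W_jk / (z_k + eps * sign(z_k)) * R_k, with z = a_prev @ W."""
    z = a_prev @ W
    denom = z + eps * np.where(z >= 0, 1.0, -1.0)   # sign-stabilized, never zero
    return a_prev * ((relevance / denom) @ W.T)

x = rng.normal(size=6)
hidden, score = forward(x)
R_hidden = lrp_epsilon(hidden, W2, score)     # relevance of the hidden units
R_input = lrp_epsilon(x, W1, R_hidden)        # relevance of each input feature
print("score:", score, "input relevances:", R_input)
```

Applied layer by layer to an image classifier, the same rule yields a relevance map over the input voxels, which is what allows extra-striatal regions to be identified as contributing to the decision.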
109

Fusing DL Reasoning with HTN Planning as a Deliberative Layer in Mobile Robotics

Hartanto, Ronny 08 March 2010 (has links)
Action planning has been used in the field of robotics for solving long-running tasks. In the field of robot architectures, it is also known as the deliberative layer. However, there is still a gap between the symbolic representation on the one hand and the low-level control and sensor representation on the other. In addition, the definition of a planning problem for a complex, real-world robot is not trivial, and the planning process can become intractable as the search space grows large. Since the defined planning problem determines the complexity and tractability of solving it, it should contain only relevant states. In this work, a novel approach which amalgamates Description Logic (DL) reasoning with Hierarchical Task Network (HTN) planning is introduced. The planning domain description as well as fundamental HTN planning concepts are represented in DL and can therefore be subject to DL reasoning; from these representations, concise planning problems are generated for HTN planning. The method is presented through an example in the robot navigation domain. In addition, a case study of the RoboCup@Home domain is given. As a proof of concept, a well-known planning problem that often serves as a benchmark, namely the blocks world, is modeled and solved using this approach. An analysis of the performance of the approach has been conducted, and the results show that it yields significantly smaller planning problem descriptions than those generated by current representations in HTN planning.
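The central benefit, namely that reasoning over the domain description filters the world down to the entities actually relevant for the task before a planning problem is generated, can be caricatured in a few lines. The following sketch uses a hand-written taxonomy as a stand-in for a DL reasoner, and the classes, objects, and navigation task are illustrative assumptions rather than the thesis's formalization:

```python
# Toy sketch: use a small class hierarchy (a stand-in for DL reasoning) to keep only the
# objects relevant to a navigation task, then emit a compact planning problem for an HTN
# planner. Object names, classes, and the task are illustrative assumptions.

SUBCLASS_OF = {                      # simple taxonomy; a DL reasoner would infer this
    "Room": "Location",
    "Corridor": "Location",
    "Door": "Connector",
    "Cup": "Object",
}

WORLD = {                            # facts about the robot's environment
    "kitchen": "Room",
    "lab": "Room",
    "hallway": "Corridor",
    "door1": "Door",
    "red_cup": "Cup",
}

def is_a(entity, target_class):
    """Climb the subclass chain to decide class membership (poor man's subsumption)."""
    cls = WORLD[entity]
    while cls is not None:
        if cls == target_class:
            return True
        cls = SUBCLASS_OF.get(cls)
    return False

def navigation_problem(goal_room):
    """Keep only Locations and Connectors; loose Objects are irrelevant for navigation."""
    relevant = [e for e in WORLD if is_a(e, "Location") or is_a(e, "Connector")]
    return {"task": ("navigate-to", goal_room), "objects": relevant}

print(navigation_problem("lab"))
# -> red_cup is pruned away, so the HTN planner searches a much smaller problem
```

Keeping irrelevant objects out of the generated problem is what shrinks the planning problem descriptions and keeps the HTN search tractable on a real robot.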
110

Multi-wavelength laser line profile sensing for agricultural applications

Strothmann, Wolfram 03 November 2016 (has links)
This dissertation elaborates on the novel sensing approach of multi-wavelength laser line profiling (MWLP). It is a novel sensor concept that expands on the well-known and broadly adopted laser line profile sensing concept for triangulation-based range imaging. Rather than a single line laser, the MWLP concept uses multiple line lasers at different wavelengths, scanned by a single monochrome imager. Moreover, it collects not only the 3D distance values but also evaluates the reflection intensity and backscattering of the laser lines. The system collects spectrally selective image-based data in an active manner. Thus, it can be geared toward an application-specific wavelength configuration by mounting a set of lasers of the required wavelengths. Consequently, with this system image-based 3D range data can be collected along with reflection intensity and backscattering data at multiple, selectable wavelengths using just a single monochrome image sensor. Starting from a basic draft of the idea, the approach was realized in terms of hardware and software design and implementation. The approach was shown to be feasible, and the prototype performed well in comparison with other state-of-the-art sensor systems. The sensor raw data can be visualized and accessed as overlaid distance images, point clouds, or meshes. Furthermore, for selected example applications it was demonstrated that the sensor data gathered by the system can serve as descriptive input for real-world agricultural classification problems. The sensor data were classified in a pixel-based manner, which allows very flexible, quick, and easy adaptation of the classification to new field situations.
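Laser line profiling recovers depth by triangulation: the lateral displacement of the laser line in the camera image is converted into a distance using the known geometry of camera and laser. A simplified sketch of that relationship (pinhole camera with an offset, tilted laser plane; all parameter values are illustrative assumptions, not the calibration of the MWLP prototype) is given below:

```python
# Simplified laser-triangulation depth estimate. A line laser is mounted at a baseline
# offset from the camera and projected at a known tilt; the pixel offset at which the
# line appears determines the distance. All parameter values are illustrative assumptions.
import math

FOCAL_LENGTH_PX = 800.0                 # camera focal length, in pixels
BASELINE_M = 0.10                       # camera-to-laser offset [m]
LASER_ANGLE_RAD = math.radians(30.0)    # laser tilt relative to the optical axis

def depth_from_pixel_offset(pixel_offset):
    """Distance along the optical axis for a laser line seen `pixel_offset` pixels
    away from the principal point (classic triangulation with a tilted light plane)."""
    # Camera ray through the pixel: x = z * pixel_offset / f
    tan_cam = pixel_offset / FOCAL_LENGTH_PX
    # Laser plane in the same x-z section: x = BASELINE_M - z * tan(laser_angle)
    # Intersecting ray and plane and solving for z gives:
    return BASELINE_M / (tan_cam + math.tan(LASER_ANGLE_RAD))

for px in (0.0, 50.0, 150.0):
    print(f"offset {px:6.1f} px -> depth {depth_from_pixel_offset(px):.3f} m")
```

In the MWLP setup the same geometric relation is evaluated for each of the differently coloured laser lines, while the intensity and backscatter of each line additionally provide the spectrally selective reflectance information.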
