
Self-Organizing Neural Networks for Sequence Processing

Strickert, Marc 27 January 2005 (has links)
This work investigates the self-organizing representation of temporal data in prototype-based neural networks. Extensions of the supervised learning vector quantization (LVQ) and the unsupervised self-organizing map (SOM) are considered in detail. The principle of Hebbian learning through prototypes yields compact data models that can be easily interpreted by similarity reasoning. In order to obtain robust prototype dynamics, LVQ is extended by neighborhood cooperation between neurons to prevent a strong dependence on the initial prototype locations. Additionally, implementations of more general, adaptive metrics are studied, with a particular focus on the built-in detection of the data attributes relevant for a given classification task. For unsupervised sequence processing, two modifications of SOM are pursued: the SOM for structured data (SOMSD), realizing an efficient back-reference to the previous best-matching neuron in a triangular low-dimensional neural lattice, and the merge SOM (MSOM), expressing the temporal context as a fractal combination of the previously most active neuron and its context. The first extension, SOMSD, tackles data dimension reduction and planar visualization; the second, MSOM, is designed to obtain higher quantization accuracy. The supplied experiments underline the data modeling quality of the presented methods.
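The merge-SOM context described above can be sketched in a few lines. This is a minimal illustration only: the function names, the squared-Euclidean distance, and the blending parameters alpha and beta are assumptions for the sketch, not the thesis's implementation.

```python
import numpy as np

def msom_winner(x, ctx, weights, contexts, alpha=0.5):
    """Best-matching neuron under the merge-SOM distance: a blend of
    the match to the current input and the match to the context."""
    d = (1 - alpha) * np.sum((weights - x) ** 2, axis=1) \
        + alpha * np.sum((contexts - ctx) ** 2, axis=1)
    return int(np.argmin(d))

def msom_context(prev_winner, weights, contexts, beta=0.5):
    """Fractal context descriptor: a merge of the previous winner's
    weight vector and its stored context vector."""
    return (1 - beta) * weights[prev_winner] + beta * contexts[prev_winner]
```

Iterating `msom_context` over a sequence is what gives the context its fractal character: each descriptor recursively folds in the contexts of all earlier winners.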

Entwicklung eines Monte-Carlo-Verfahrens zum selbständigen Lernen von Gauß-Mischverteilungen

Lauer, Martin 03 March 2005 (has links)
This thesis develops a novel learning method for Gaussian mixture models. It is based on Markov chain Monte Carlo techniques and is able to determine both the size of the mixture and its parameters in a single pass. The method is characterized by a good fit to the training data as well as good generalization performance. Starting from a description of the stochastic foundations and an analysis of the problems that arise when learning Gaussian mixtures, the thesis develops the new learning method step by step and examines its properties. An experimental comparison with known learning methods for Gaussian mixtures also confirms the suitability of the new method empirically.
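The thesis's sampler is not reproduced in the abstract, but the general Markov chain Monte Carlo idea behind it can be sketched for a 1-D Gaussian mixture. This is a minimal Metropolis-Hastings update of the component means only, with fixed weights and variances; all names and parameter values are assumptions, and the thesis's method additionally samples the mixture size.

```python
import math
import random

def log_likelihood(data, means, sigma=1.0, w=None):
    """Log-likelihood of data under a 1-D Gaussian mixture with
    equal (or given) component weights and a shared variance."""
    w = w or [1.0 / len(means)] * len(means)
    ll = 0.0
    for x in data:
        p = sum(wk * math.exp(-(x - m) ** 2 / (2 * sigma ** 2))
                / (sigma * math.sqrt(2 * math.pi))
                for wk, m in zip(w, means))
        ll += math.log(p)
    return ll

def mh_step(data, means, step=0.3, rng=random):
    """One Metropolis-Hastings update: propose jittered means and
    accept with the usual likelihood-ratio rule (flat prior)."""
    prop = [m + rng.gauss(0, step) for m in means]
    if math.log(rng.random()) < log_likelihood(data, prop) - log_likelihood(data, means):
        return prop
    return means
```

Running many such steps yields a chain of mean vectors whose stationary distribution is the posterior; the thesis's contribution lies in making such a chain also explore mixtures of different sizes.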

Hypermediale Navigation in Vorlesungsaufzeichnungen: Nutzung und automatische Produktion hypermedial navigierbarer Aufzeichnungen von Lehrveranstaltungen

Mertens, Robert 08 November 2007 (has links)
In the mid-nineties, electronic lecture recording emerged as a new area of research. The aim behind most early research activities in this field was the cost-efficient production of e-learning content as a by-product of traditional lectures. These efforts have led to systems that can produce recordings of a lecture in a fraction of the time, and at a fraction of the cost, that other methods require for producing similar e-learning content. While the production of lecture recordings has been investigated thoroughly, the conditions under which the resulting content can be used efficiently have shifted into the focus of research only recently. Employing lecture recordings in the right way is, however, crucial to how effectively they can be used. This thesis therefore gives a detailed overview of archetypical application scenarios. A closer examination of these scenarios reveals navigation in recorded lectures as a critical factor for teaching and learning success. In order to improve navigation, a hypermedia navigation concept for recorded lectures is developed. Hypermedia navigation has proven a successful navigation paradigm in classic text- and picture-based media, but adapting it to time-based media such as recorded lectures requires a number of conceptual changes. The concept developed in this thesis tackles this problem by combining time- and structure-based navigation paradigms and by modifying existing hypermedia navigation facilities. Even a highly developed navigation concept cannot, however, be put into practice efficiently when the production cost of suitable recordings is too high. This thesis therefore also shows that suitable lecture recordings can be produced at minimal cost, realized by implementing a fully automatic production chain for recording and indexing lectures.

Reinforcement Learning with History Lists

Timmer, Stephan 13 March 2009 (has links)
A very general framework for modeling uncertainty in learning environments is given by Partially Observable Markov Decision Processes (POMDPs). In a POMDP setting, the learning agent infers a policy for acting optimally in all possible states of the environment while receiving only observations of these states. The basic idea for coping with partial observability is to include memory in the representation of the policy. Perfect memory is provided by the belief space, i.e. the space of probability distributions over environmental states. However, computing policies defined on the belief space requires a considerable amount of prior knowledge about the learning problem and is expensive in terms of computation time. In this thesis, we present a reinforcement learning algorithm for solving deterministic POMDPs based on short-term memory. Short-term memory is implemented by sequences of past observations and actions, called history lists. In contrast to belief states, history lists are not capable of representing optimal policies, but they are far more practical and require no prior knowledge about the learning problem. The presented algorithm learns policies consisting of two separate phases. During the first phase, the learning agent collects information by actively establishing a history list that identifies the current state. This phase is called the efficient identification strategy. After the current state has been determined, the Q-Learning algorithm is used to learn a near-optimal policy. We show that such a procedure can also be used to solve large Markov Decision Processes (MDPs). Solving MDPs with continuous, multi-dimensional state spaces requires some form of abstraction over states. One particular way of establishing such an abstraction is to ignore the original state information and consider only features of states. This form of state abstraction is closely related to POMDPs, since features of states can be interpreted as observations of states.
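The second phase, applying Q-Learning on top of history lists, can be sketched as a standard tabular update in which the table is keyed by a history list (a tuple of past observations and actions) instead of the hidden environment state. The function name and learning parameters are assumptions for the sketch, not the thesis's implementation.

```python
from collections import defaultdict

def q_update(Q, history, action, reward, next_history, actions,
             alpha=0.1, gamma=0.95):
    """One tabular Q-learning step where the 'state' key is a
    history list rather than the (unobservable) true state."""
    best_next = max(Q[(next_history, a)] for a in actions)
    td_target = reward + gamma * best_next
    Q[(history, action)] += alpha * (td_target - Q[(history, action)])
    return Q
```

Because the keys are histories, the update only behaves like ordinary Q-learning once the identification phase has made the history long enough to pin down the current state; that is exactly the role of the efficient identification strategy described above.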

Self-Regulating Neurons. A model for synaptic plasticity in artificial recurrent neural networks

Ghazi-Zahedi, Keyan Mahmoud 04 February 2009 (has links)
Robustness and adaptivity are important behavioural properties observed in biological systems, but are still widely absent in artificial intelligence applications. Such static or non-plastic artificial systems are limited to their very specific problem domain. This work introduces a general model for synaptic plasticity in embedded artificial recurrent neural networks, related to short-term plasticity by synaptic scaling in biological systems. The model is general in the sense that it does not require trigger mechanisms or artificial limitations, and it operates on recurrent neural networks of arbitrary structure. A Self-Regulating Neuron is defined as a homeostatic unit which regulates its activity against external disturbances towards a target value by modulating its incoming and outgoing synapses. Embedded and situated in the sensori-motor loop, a network of these neurons is permanently driven by external stimuli and will generally not settle at its asymptotically stable state. The system's behaviour is determined by the local interactions of the Self-Regulating Neurons. The neuron model is analysed as a dynamical system with respect to its attractor landscape and its transient dynamics. The latter analysis is based on different control structures for obstacle avoidance, of increasing structural complexity, derived from the literature. The result is a controller that shows first traces of adaptivity. Next, two controllers for different tasks are evolved and their transient dynamics are fully analysed. The results of this work not only show that the proposed neuron model enhances the behavioural properties, but also point out the limitations of short-term plasticity, which does not account for learning and memory.
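The homeostatic regulation described above can be caricatured in a few lines. This is a toy sketch only: the actual Self-Regulating Neuron model defines the modulation inside a recurrent network dynamic, and the multiplicative scaling rule, its sign conventions, the rate, and all names below are assumptions made for illustration.

```python
def srn_step(activation, target, w_in, w_out, rate=0.05):
    """One homeostatic update: scale the neuron's incoming and
    outgoing synapses so its activity drifts toward a target value.
    If the neuron is under-active, synapses are strengthened;
    if over-active, they are weakened."""
    error = target - activation
    w_in = [w * (1 + rate * error) for w in w_in]
    w_out = [w * (1 + rate * error) for w in w_out]
    return w_in, w_out
```

The key property the sketch preserves is locality: each unit adjusts only its own synapses from its own activity, yet in a recurrent network these local updates interact to shape the global behaviour.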

Multi-threaded User Interfaces in Java

Ludwig, Elmar 27 July 2006 (has links)
With the rise of modern programming languages like Java that include native support for multi-threading, the issue of concurrency in graphical applications becomes more and more important. Traditional graphical component libraries for Java have always used the threading concepts provided by the language very carefully, telling the programmer that the use of threads in this context is often unnecessarily burdensome and complex. On the other hand, experience gained from systems like Inferno or BeOS shows that the use of concurrency in graphical applications is clearly manageable and leads to a different program design once the application is dissociated from the limitations of the GUI library. This thesis describes the design of a general architecture that separates a program's user interface from the application logic using thread communication. It enables the use of concurrency in the application code without requiring any degree of thread-safety at the native interface component level.
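The core idea of the architecture, application logic running on its own thread and handing results to a single-threaded UI loop through message passing, can be sketched in a few lines. This is a toy model in Python rather than Java; the queue stands in for the thesis's thread-communication layer, and the "UI" is reduced to a consuming loop.

```python
import queue
import threading

def run_app(tasks):
    """The application thread computes results and posts them to a
    thread-safe queue; the 'UI' loop consumes them one by one, so
    the UI code itself never needs to be thread-safe."""
    results = queue.Queue()

    def worker():
        for t in tasks:
            results.put(t * t)   # stand-in for real application logic
        results.put(None)        # sentinel: work finished

    threading.Thread(target=worker, daemon=True).start()

    shown = []
    while True:
        item = results.get()     # the UI loop blocks only here
        if item is None:
            break
        shown.append(item)       # stand-in for updating a widget
    return shown
```

Since all widget updates happen on the consuming side, no locking is needed in the interface components, which mirrors the thesis's goal of concurrency in the application without thread-safety requirements at the component level.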

Übersicht über die Habilitationen an der Fakultät für Mathematik und Informatik der Universität Leipzig von 1993 bis 1997

Universität Leipzig 12 March 1999 (has links)
No description available.

Ein mechanisches Finite-Elemente-Modell des menschlichen Kopfes

Hartmann, Ulrich 28 November 2004 (has links)
This thesis describes a three-dimensional model of the human head that allows mechanical influences on the head to be modeled with the finite element method. An exact geometric description of an individual model is obtained from a magnetic resonance scan of the head. Starting from these medical image data, a discrete representation of the head as an assembly of finite elements is generated with a mesh generator. This fast and stable algorithm enables the creation of spatially high-resolution finite element representations of the skull and internal neuroanatomical structures, with a choice between anisotropic and isotropic hexahedral and tetrahedral meshes. On this basis, the differential equations underlying the physical processes are solved numerically with the finite element method. The FE analyses comprise static, dynamic, and modal simulations. The numerical procedures required to run the simulations were optimized and implemented on a parallel computer architecture. Each of the above analysis types is associated with a clinically relevant application: the nonlinear static analysis examines the mechanical consequences of tumor growth, the dynamic analysis studies the effects of focal impacts on the head, and the modal analysis provides insight into the vibrational behavior of the head. The model is validated by comparing simulation results with experimentally obtained data. / A new FEM-based approach to model the mechanical response of the head is presented. To overcome restrictions of previous approaches, our head model is based on individual datasets of the head obtained from magnetic resonance imaging (MRI).
The use of parallel computers allows biomechanical simulations to be carried out on FE meshes with a spatial resolution about five times higher than that of previous models. A fully automatic procedure generates FE meshes of the head starting from MR datasets, so models for individual clinical cases can be set up within minutes. Clinically relevant simulations (impact studies, tumor growth consequences) are carried out and discussed by comparing simulation results with experimentally obtained data.
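The finite element method at the heart of the model can be illustrated on the simplest possible case, a 1-D elastic bar built from two-node elements. This is a sketch only: the thesis works with 3-D hexahedral and tetrahedral meshes of the head, and the function names, material values, and boundary conditions below are illustrative assumptions.

```python
import numpy as np

def assemble_stiffness(n_elems, length=1.0, EA=1.0):
    """Global stiffness matrix for a 1-D bar split into equal
    two-node elements: each element contributes a 2x2 block."""
    h = length / n_elems
    k_local = (EA / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K = np.zeros((n_elems + 1, n_elems + 1))
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += k_local  # scatter into global matrix
    return K

def solve_bar(n_elems, load=1.0):
    """Static analysis: clamp the left end, pull the right end.
    The exact solution is the linear field u(x) = load * x / EA."""
    K = assemble_stiffness(n_elems)
    f = np.zeros(n_elems + 1)
    f[-1] = load
    u = np.zeros(n_elems + 1)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])  # eliminate fixed node 0
    return u
```

The 3-D analyses in the thesis follow the same pattern at scale: assemble element matrices into a large sparse system and solve it, which is why the parallel implementation of the solvers matters.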

Übersicht über die Habilitationen an der Fakultät für Mathematik und Informatik der Universität Leipzig von 1998 bis 2000

Universität Leipzig 06 August 2001 (has links)
No description available.

Event-Oriented Dynamic Adaptation of Workflows: Model, Architecture and Implementation

Müller, Robert 28 November 2004 (has links)
Workflow management is widely accepted as a core technology to support long-term business processes in heterogeneous and distributed environments. However, conventional workflow management systems do not provide sufficient flexibility to cope with the broad range of failure situations that may occur during workflow execution. In particular, most systems do not allow a workflow to be dynamically adapted in response to a failure situation, e.g., by dynamically dropping or inserting execution steps. As a contribution to overcoming these limitations, this dissertation introduces the agent-based workflow management system AgentWork. AgentWork supports the definition, the execution and, as its main contribution, the event-oriented and semi-automated dynamic adaptation of workflows. Two strategies for automatic workflow adaptation are provided. Predictive adaptation adapts workflow parts affected by a failure in advance (predictively), typically as soon as the failure is detected. This is advantageous in many situations and leaves enough time to meet organizational constraints for adapted workflow parts. Reactive adaptation is typically performed when predictive adaptation is not possible. In this case, adaptation is performed when the affected workflow part is about to be executed, e.g., before an activity is executed it is checked whether it is subject to a workflow adaptation such as dropping, postponement or replacement.

In particular, AgentWork provides the following contributions:

A formal model for workflow definition, execution, and estimation: AgentWork first provides an object-oriented workflow definition language. This language allows the definition of a workflow's control and data flow; furthermore, a workflow's cooperation with other workflows or workflow systems can be specified. Second, AgentWork provides a precise workflow execution model. This is necessary, as a running workflow is usually a complex collection of concurrent activities and data flow processes, and as failure situations and dynamic adaptations affect running workflows. Furthermore, mechanisms for estimating a workflow's future execution behavior are provided. These mechanisms are of particular importance for predictive adaptation.

Mechanisms for determining and processing failure events and failure actions: AgentWork provides mechanisms to decide whether an event constitutes a failure situation and what has to be done to cope with it. This is formally achieved by evaluating event-condition-action rules, where the event-condition part describes under which condition an event has to be viewed as a failure event and the action part represents the actions needed to cope with the failure. To support the temporal dimension of events and actions, this dissertation provides a novel event-condition-action model based on a temporal object-oriented logic.

Mechanisms for the adaptation of affected workflows: In case of failure situations, it has to be decided how an affected workflow is to be dynamically adapted on the node and edge level. AgentWork provides a novel approach that combines the two principal strategies, reactive adaptation and predictive adaptation; depending on the context of the failure, the appropriate strategy is selected. Furthermore, control flow adaptation operators are provided which translate failure actions into structural control flow adaptations, and data flow operators adapt the data flow after a control flow adaptation, if necessary.

Mechanisms for handling inter-workflow implications of failure situations: AgentWork provides novel mechanisms to decide whether a failure situation occurring in a workflow affects other workflows that communicate and cooperate with it. In particular, AgentWork derives the temporal implications of a dynamic adaptation by estimating the duration needed to process the changed workflow definition (in comparison with the original definition). Furthermore, qualitative implications of the dynamic change are determined; for this purpose, so-called quality measuring objects are introduced. All mechanisms provided by AgentWork allow users to interact during the failure handling process; in particular, the user may reject or modify suggested workflow adaptations.

A prototypical implementation: Finally, a prototypical CORBA-based implementation of AgentWork is described. This implementation supports the integration of AgentWork into the distributed and heterogeneous environments of real-world organizations such as hospitals or insurance enterprises.
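The event-condition-action evaluation at the core of the failure handling can be sketched as a simple rule scan. This is a minimal sketch only: AgentWork's actual model is based on a temporal object-oriented logic, and the dictionary-based rule format and names below are illustrative assumptions.

```python
def handle_event(event, rules, workflow):
    """Evaluate event-condition-action rules: the first rule whose
    event type matches and whose condition holds classifies the
    event as a failure and yields the adaptation actions to apply
    (e.g. dropping, postponing, or replacing activities)."""
    for rule in rules:
        if rule["event"] == event["type"] and rule["condition"](event, workflow):
            return rule["actions"]
    return []  # not a failure event; no adaptation needed
```

In the full system, the returned failure actions would then be translated by control flow adaptation operators into structural changes of the running workflow, predictively or reactively depending on the failure context.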
