• About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
61

Hypermediale Navigation in Vorlesungsaufzeichnungen: Nutzung und automatische Produktion hypermedial navigierbarer Aufzeichnungen von Lehrveranstaltungen

Mertens, Robert 08 November 2007
In the mid-nineties, electronic lecture recording emerged as a new area of research. The aim behind most early research activities in this field was the cost-efficient production of e-learning content as a by-product of traditional lectures. These efforts have led to the development of systems that can produce recordings of a lecture in a fraction of the time and at a fraction of the cost that other methods require for the production of similar e-learning content. While the production of lecture recordings has been investigated thoroughly, the conditions under which the content produced can be used efficiently have shifted into the focus of research only recently. Employing lecture recordings in the right way is, however, crucial for the effectiveness with which they can be used. Therefore, this thesis gives a detailed overview of archetypical application scenarios. A closer examination of these scenarios reveals navigation in recorded lectures as a critical factor for teaching and learning success. In order to improve navigation, a hypermedia navigation concept for recorded lectures is developed. Hypermedia navigation has proven to be a successful navigation paradigm in classic text- and picture-based media. In order to adapt it for time-based media such as recorded lectures, a number of conceptual changes have to be applied. The hypermedia navigation concept developed in this thesis tackles this problem by combining time- and structure-based navigation paradigms and by modifying existing hypermedia navigation facilities. Even a highly developed navigation concept for recorded lectures cannot, however, be put into practice efficiently when the production costs of suitable recordings are too high. Therefore, this thesis also shows that suitable lecture recordings can be produced at minimal cost, which is realized by implementing a fully automatic production chain for recording and indexing lectures.
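The combination of time- and structure-based navigation the abstract describes can be sketched as follows. All names and data structures here are illustrative assumptions, not the thesis's actual implementation: a recording is indexed by slide-segment start times, so slide numbers (structure) and playback times (time) can be mapped onto each other, and a hypermedia-style back/forward history operates over visited positions.

```python
from bisect import bisect_right

# Hypothetical index: start time (seconds) of each slide segment in a recording.
slide_starts = [0.0, 95.0, 240.0, 410.0]

def slide_at(t):
    """Time-based lookup: which slide is shown at playback time t?"""
    return bisect_right(slide_starts, t) - 1

def seek_to_slide(i):
    """Structure-based navigation: jump to the start of slide i."""
    return slide_starts[i]

class NavigationHistory:
    """Hypermedia-style back/forward over visited time positions."""
    def __init__(self):
        self._back, self._fwd = [], []
    def visit(self, t):
        self._back.append(t)
        self._fwd.clear()          # a new visit invalidates the forward chain
    def back(self):
        if len(self._back) > 1:
            self._fwd.append(self._back.pop())
        return self._back[-1]
```

A viewer built this way can answer both "where am I in the slide structure?" and "take me back to where I was", which is the hypermedia adaptation the thesis argues time-based media need.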
62

Reinforcement Learning with History Lists

Timmer, Stephan 13 March 2009
A very general framework for modeling uncertainty in learning environments is given by Partially Observable Markov Decision Processes (POMDPs). In a POMDP setting, the learning agent infers a policy for acting optimally in all possible states of the environment, while receiving only observations of these states. The basic idea for coping with partial observability is to include memory in the representation of the policy. Perfect memory is provided by the belief space, i.e., the space of probability distributions over environmental states. However, computing policies defined on the belief space requires a considerable amount of prior knowledge about the learning problem and is expensive in terms of computation time. In this thesis, we present a reinforcement learning algorithm for solving deterministic POMDPs based on short-term memory. Short-term memory is implemented by sequences of past observations and actions, called history lists. In contrast to belief states, history lists are not capable of representing optimal policies, but they are far more practical and require no prior knowledge about the learning problem. The algorithm presented learns policies consisting of two separate phases. During the first phase, the learning agent collects information by actively establishing a history list that identifies the current state. This phase is called the efficient identification strategy. After the current state has been determined, the Q-Learning algorithm is used to learn a near-optimal policy. We show that such a procedure can also be used to solve large Markov Decision Processes (MDPs). Solving MDPs with continuous, multi-dimensional state spaces requires some form of abstraction over states. One particular way of establishing such an abstraction is to ignore the original state information and consider only features of states. This form of state abstraction is closely related to POMDPs, since features of states can be interpreted as observations of states.
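The core idea of keying value estimates on history lists instead of raw observations can be illustrated with a toy sketch. The environment, history length, and learning constants below are illustrative assumptions, not taken from the thesis: states 0..3 emit aliased observations (state % 2), so a single observation cannot identify the state, but a short list of recent (observation, action) pairs can.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, K = 0.5, 0.9, 2        # learning rate, discount, history length

def step(state, action):
    """Deterministic chain over states 0..3; reward on reaching state 3."""
    nxt = max(0, min(3, state + (1 if action == "R" else -1)))
    return nxt, (1.0 if nxt == 3 else 0.0)

# Q-values are keyed on (history list, action) rather than on the observation.
Q = defaultdict(float)

def run_episode():
    state, history = 0, ()
    for _ in range(10):
        obs = state % 2                                  # aliased observation
        action = random.choice(["L", "R"])               # exploratory policy
        state, reward = step(state, action)
        nxt_hist = (history + ((obs, action),))[-K:]     # sliding history list
        best_next = max(Q[(nxt_hist, a)] for a in ["L", "R"])
        Q[(history, action)] += ALPHA * (reward + GAMMA * best_next
                                         - Q[(history, action)])
        history = nxt_hist

random.seed(0)
for _ in range(200):
    run_episode()
```

This is only the memory mechanism; the thesis's actual algorithm additionally learns which history list to establish actively (the efficient identification strategy) before switching to Q-Learning.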
63

Self-Regulating Neurons. A model for synaptic plasticity in artificial recurrent neural networks

Ghazi-Zahedi, Keyan Mahmoud 04 February 2009
Robustness and adaptivity are important behavioural properties observed in biological systems which are still widely absent in artificial intelligence applications. Such static or non-plastic artificial systems are limited to their very specific problem domain. This work introduces a general model for synaptic plasticity in embedded artificial recurrent neural networks, which is related to short-term plasticity by synaptic scaling in biological systems. The model is general in the sense that it does not require trigger mechanisms or artificial limitations, and it operates on recurrent neural networks of arbitrary structure. A Self-Regulating Neuron is defined as a homeostatic unit which regulates its activity against external disturbances towards a target value by modulating its incoming and outgoing synapses. Embedded and situated in the sensori-motor loop, a network of these neurons is permanently driven by external stimuli and will generally not settle at its asymptotically stable state. The system's behaviour is determined by the local interactions of the Self-Regulating Neurons. The neuron model is analysed as a dynamical system with respect to its attractor landscape and its transient dynamics. The latter analysis is conducted with different control structures for obstacle avoidance of increasing structural complexity derived from the literature. The result is a controller that shows first traces of adaptivity. Next, two controllers for different tasks are evolved and their transient dynamics are fully analysed. The results of this work not only show that the proposed neuron model enhances the behavioural properties, but also point out the limitations of short-term plasticity, which does not account for learning and memory.
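The homeostatic idea, regulation towards a target activity by scaling synapses, can be sketched in a few lines. The update rule and all constants below are illustrative assumptions for a single neuron, not the thesis's actual model (which also scales outgoing synapses and operates on full recurrent networks): the neuron compares its activity with a target value and scales its incoming weights up when under-activated and down when over-activated.

```python
import math

TARGET, RATE = 0.5, 0.1          # hypothetical target activity and scaling rate

def activity(weights, inputs):
    """Sigmoid activation of the weighted input sum."""
    return 1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(weights, inputs))))

def regulate(weights, inputs):
    """One self-regulation step: scale incoming synapses toward TARGET."""
    a = activity(weights, inputs)
    scale = 1.0 + RATE * (TARGET - a)   # >1 if under-activated, <1 if over-
    return [w * scale for w in weights]

# Constant external stimulus drives the neuron; the weights settle so that
# the activity approaches the target value.
weights, inputs = [2.0, 1.5], [1.0, 1.0]
for _ in range(500):
    weights = regulate(weights, inputs)
```

In a network embedded in the sensori-motor loop the stimulus keeps changing, which is why, as the abstract notes, the system generally does not settle at its asymptotically stable state.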
64

Multi-threaded User Interfaces in Java

Ludwig, Elmar 27 July 2006
With the rise of modern programming languages like Java that include native support for multi-threading, the issue of concurrency in graphical applications becomes more and more important. Traditional graphical component libraries for Java have always used the threading concepts provided by the language very carefully, telling the programmer that the use of threads in this context is often unnecessarily burdensome and complex. On the other hand, experience gained from systems like Inferno or BeOS shows that the use of concurrency in graphical applications is clearly manageable and leads to a different program design, once the application is dissociated from the limitations of the GUI library. This thesis describes the design of a general architecture that allows for the separation of a program's user interface from the application logic using thread communication. It enables the use of concurrency in the application code without requiring any degree of thread-safety at the native interface component level.
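The architecture the abstract describes, UI and application logic in separate threads exchanging messages, is language-independent, so it can be sketched here in Python even though the thesis targets Java GUIs (the message shapes are illustrative assumptions): the application code never touches interface components directly, which is why the GUI side needs no thread-safety guarantees.

```python
import queue
import threading

# Two one-way channels decouple the UI thread from the application logic.
requests, updates = queue.Queue(), queue.Queue()

def application_logic():
    """Worker thread: processes UI requests without touching the GUI."""
    while True:
        msg = requests.get()
        if msg is None:                        # shutdown sentinel
            break
        updates.put(("result", msg.upper()))   # post result back to the UI

worker = threading.Thread(target=application_logic)
worker.start()

requests.put("hello")            # the UI thread posts a request...
kind, payload = updates.get()    # ...and later consumes the resulting update
requests.put(None)
worker.join()
```

In a real GUI, the `updates` queue would be drained on the event-dispatch thread, so all component access stays on that single thread.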
65

Übersicht über die Habilitationen an der Fakultät für Mathematik und Informatik der Universität Leipzig von 1993 bis 1997

Universität Leipzig 12 March 1999
No description available.
66

Ein mechanisches Finite-Elemente-Modell des menschlichen Kopfes

Hartmann, Ulrich 28 November 2004
This thesis describes a three-dimensional model of the human head that makes it possible to simulate mechanical influences on the head with the finite element method. An exact geometric description of an individual model is obtained from a magnetic resonance scan of the head. Starting from these medical image data, a mesh generator derives the discrete representation of the head as an assembly of finite elements. This fast and stable algorithm allows the creation of high-resolution finite-element representations of the skull and of internal neuroanatomical structures, with a choice between anisotropic and isotropic hexahedral and tetrahedral meshes. On this basis, the differential equations underlying the physical processes are solved numerically with the finite element method. The FE analyses comprise static, dynamic, and modal simulations. The numerical procedures required for these simulations were optimized and implemented on a parallel computer architecture, which permits meshes with a spatial resolution about five times higher than that of previous models; models for individual clinical cases can be set up within minutes. Each of the analysis types is associated with a clinically relevant application: the nonlinear static analysis examines the mechanical consequences of tumour growth, the dynamic analysis serves to study the effects of focal impacts on the head, and the modal analysis characterizes the vibrational behaviour of the head. The model is validated by comparing simulation results with experimentally obtained data.
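The thesis assembles large 3D element meshes of the head; the core finite-element step it relies on, assembling element contributions into a global stiffness matrix, can be illustrated in one dimension. All numbers below are illustrative: each two-node bar element contributes a 2x2 block k * [[1, -1], [-1, 1]] (k = EA/L) at its node indices.

```python
def assemble_stiffness(n_nodes, elements):
    """Assemble a global stiffness matrix from 1D bar elements.

    elements: list of (node_i, node_j, k) with k = EA/L for that element.
    """
    K = [[0.0] * n_nodes for _ in range(n_nodes)]
    for i, j, k in elements:
        K[i][i] += k
        K[j][j] += k
        K[i][j] -= k
        K[j][i] -= k
    return K

# Three nodes joined by two identical elements: the shared middle node
# accumulates stiffness from both of its neighbouring elements.
K = assemble_stiffness(3, [(0, 1, 2.0), (1, 2, 2.0)])
```

The 3D hexahedral and tetrahedral elements of the thesis follow the same assembly pattern, only with larger per-element blocks derived from the element geometry and material properties.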
67

Übersicht über die Habilitationen an der Fakultät für Mathematik und Informatik der Universität Leipzig von 1998 bis 2000

Universität Leipzig 06 August 2001
No description available.
68

Event-Oriented Dynamic Adaptation of Workflows: Model, Architecture and Implementation

Müller, Robert 28 November 2004
Workflow management is widely accepted as a core technology to support long-term business processes in heterogeneous and distributed environments. However, conventional workflow management systems do not provide sufficient flexibility to cope with the broad range of failure situations that may occur during workflow execution. In particular, most systems do not allow a workflow to be adapted dynamically in response to a failure, e.g., by dropping or inserting execution steps. As a contribution to overcoming these limitations, this dissertation introduces the agent-based workflow management system AgentWork. AgentWork supports the definition, the execution and, as its main contribution, the event-oriented and semi-automated dynamic adaptation of workflows. Two strategies for automatic workflow adaptation are provided. Predictive adaptation adapts workflow parts affected by a failure in advance (predictively), typically as soon as the failure is detected; this is advantageous in many situations and leaves enough time to meet organizational constraints for the adapted workflow parts. Reactive adaptation is typically performed when predictive adaptation is not possible; in this case, adaptation is performed when the affected workflow part is about to be executed, e.g., before an activity is executed it is checked whether it is subject to an adaptation such as dropping, postponement or replacement. In particular, AgentWork provides the following contributions:

A formal model for workflow definition, execution, and estimation: AgentWork first provides an object-oriented workflow definition language. This language allows for the definition of a workflow's control and data flow; furthermore, a workflow's cooperation with other workflows or workflow systems can be specified. Second, AgentWork provides a precise workflow execution model. This is necessary, as a running workflow usually is a complex collection of concurrent activities and data flow processes, and as failure situations and dynamic adaptations affect running workflows. Furthermore, mechanisms for estimating a workflow's future execution behavior are provided; these are of particular importance for predictive adaptation.

Mechanisms for determining and processing failure events and failure actions: AgentWork provides mechanisms to decide whether an event constitutes a failure situation and what has to be done to cope with it. This is formally achieved by evaluating event-condition-action rules, where the event-condition part describes under which condition an event has to be viewed as a failure event, and the action part represents the actions needed to cope with the failure. To support the temporal dimension of events and actions, this dissertation provides a novel event-condition-action model based on a temporal object-oriented logic.

Mechanisms for the adaptation of affected workflows: In case of a failure it has to be decided how an affected workflow is to be dynamically adapted on the node and edge level. AgentWork provides a novel approach that combines the two principal strategies, reactive and predictive adaptation; depending on the context of the failure, the appropriate strategy is selected. Furthermore, control flow adaptation operators are provided which translate failure actions into structural control flow adaptations, and data flow operators adapt the data flow after a control flow adaptation, if necessary.

Mechanisms for handling inter-workflow implications of failure situations: AgentWork provides novel mechanisms to decide whether a failure situation occurring in one workflow affects other workflows that communicate and cooperate with it. In particular, AgentWork derives the temporal implications of a dynamic adaptation by estimating the duration that will be needed to process the changed workflow definition (in comparison with the original definition). Furthermore, qualitative implications of the dynamic change are determined; for this purpose, so-called quality measuring objects are introduced. All mechanisms provided by AgentWork allow users to interact during the failure handling process; in particular, the user may reject or modify suggested workflow adaptations.

A prototypical implementation: Finally, a prototypical CORBA-based implementation of AgentWork is described. This implementation supports the integration of AgentWork into the distributed and heterogeneous environments of real-world organizations such as hospitals or insurance companies.
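The event-condition-action evaluation at the heart of this approach can be sketched as follows. The rule format, event shapes, and action names are illustrative assumptions, not AgentWork's actual API (which is based on a temporal object-oriented logic): each rule decides whether an incoming event constitutes a failure and, if so, which adaptation action applies.

```python
# Hypothetical ECA rule table: (event type, condition on the event, action).
rules = [
    ("lab_result", lambda e: e["value"] > 100, "drop_activity"),
    ("deadline_missed", lambda e: True, "replace_activity"),
]

def handle_event(event):
    """Return the adaptation actions triggered by this event.

    An empty result means the event is not a failure event under any rule.
    """
    return [action for etype, cond, action in rules
            if event["type"] == etype and cond(event)]

actions = handle_event({"type": "lab_result", "value": 140})
```

In the full system, a triggered action would then be handed to the control flow adaptation operators, which decide, predictively or reactively, how to restructure the affected workflow part.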
69

Multiresolution image segmentation

Salem, Mohammed Abdel-Megeed Mohammed 27 November 2008
More and more computer vision systems take part in the automation of various applications. The main task of such systems is to automate the process of visual recognition and to extract relevant information from the images or image sequences acquired or produced by such applications. One essential and critical component in almost every computer vision system is image segmentation; the quality of the segmentation determines to a great extent the quality of the final results of the vision system. New algorithms for image and video segmentation based on multiresolution analysis and the wavelet transform are proposed. The concept of multiresolution is explained as existing independently of the wavelet transform. The wavelet transform is extended to two and three dimensions to allow image and video processing. For still image segmentation, the Resolution Mosaic Expectation Maximization (RM-EM) algorithm is proposed. The resolution mosaic enables the algorithm to exploit the spatial correlation between pixels; the level of the local resolution depends on the information content of the individual parts of the image. The use of various resolutions speeds up the processing and improves the results. New algorithms based on the 3D wavelet transform and 3D wavelet packet analysis are proposed for extracting moving objects from image sequences. These algorithms have the advantage of considering the relevant spatial as well as temporal information of the movement. Because of the low computational complexity of the wavelet transform, FPGA-based hardware was designed for the primary segmentation step. Real applications are used to investigate and evaluate all algorithms: the segmentation of magnetic resonance images of the human brain and the detection of moving objects in image sequences of traffic scenes. The new algorithms are robust against noise and changing ambient conditions and give better segmentation results.
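The wavelet decomposition the segmentation builds on can be illustrated with a minimal single-level 2D Haar transform (a pure-Python sketch; the thesis uses more general 2D and 3D wavelet transforms): averaging and differencing along rows and then columns splits an image into an approximation (LL) subband and detail (LH, HL, HH) subbands.

```python
def haar_1d(v):
    """Single-level 1D Haar step: pairwise averages, then differences."""
    avg = [(v[2 * i] + v[2 * i + 1]) / 2 for i in range(len(v) // 2)]
    dif = [(v[2 * i] - v[2 * i + 1]) / 2 for i in range(len(v) // 2)]
    return avg + dif

def haar_2d(image):
    """Single-level 2D Haar decomposition: transform rows, then columns."""
    rows = [haar_1d(r) for r in image]
    cols = [haar_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

# A 4x4 image of four flat 2x2 blocks: the LL quadrant of the result holds
# the block averages, and all detail coefficients vanish.
out = haar_2d([[4, 4, 2, 2],
               [4, 4, 2, 2],
               [6, 6, 0, 0],
               [6, 6, 0, 0]])
```

The LL subband is a half-resolution version of the image, which is exactly what a resolution mosaic exploits: smooth regions can be processed at this coarser level while detailed regions keep the full resolution.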
70

Kontinuierliche Bewertung psychischer Beanspruchung an informationsintensiven Arbeitsplätzen auf Basis des Elektroenzephalogramms

Radüntz, Thea 21 January 2016
Advanced information and communication technology has fundamentally changed the working environment. Complex and highly automated systems impose high demands on employees with respect to cognitive capacity and the ability to cope with workload. Registering the mental workload of employees at workplaces with high cognitive demands makes it possible to prevent over- or underload. The subject of this dissertation is therefore the development, implementation and testing of a novel system for the continuous assessment of mental workload at information-intensive workplaces on the basis of the electroencephalogram. In the theoretical section of the thesis, concepts for defining mental workload are given; furthermore, models for describing human information processing are introduced and the relevant terminology, such as strain, workload, and performance, is clarified. The evaluation of an array of experiments with cognitive tasks forms the basis for the conceptual design and testing of the novel system for indexing mental workload. Descriptions of these tasks, the sample, and the experimental set-up and procedure are included in the experimental section. A 25-channel electroencephalogram was recorded from the subjects while they performed the tasks. Subsequently, artifact elimination was carried out, for which a new, automated, real-time-capable procedure was developed. Segments of the electroencephalogram are then classified, and thus indexed, into classes of low, medium, and high workload on the basis of a likewise newly developed method whose central elements are Dual Frequency Head Maps. Hence, a complete system emerges that integrates the single processing steps and satisfies the scope of this thesis: it can be applied on-site at information-intensive workplaces for the continuous assessment of mental workload on the basis of the electroencephalogram.
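EEG-based workload indexing of this kind typically starts from spectral features of the signal. The band limits, sampling rate, and the feature itself below are illustrative assumptions, not the thesis's method (which builds Dual Frequency Head Maps over 25 channels): power in the theta (4-8 Hz) and alpha (8-13 Hz) bands of a single EEG segment, computed with a naive discrete Fourier transform.

```python
import cmath
import math

FS = 128                                       # assumed sampling rate in Hz

def band_power(signal, lo, hi):
    """Sum of DFT power over bins whose frequency lies in [lo, hi)."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        freq = k * FS / n
        if lo <= freq < hi:
            coef = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n))
            total += abs(coef) ** 2 / n
    return total

# One second of synthetic "EEG": a dominant 6 Hz (theta) oscillation.
segment = [math.sin(2 * math.pi * 6 * t / FS) for t in range(FS)]
theta = band_power(segment, 4.0, 8.0)
alpha = band_power(segment, 8.0, 13.0)
```

A workload classifier then operates on such per-band, per-channel features; the thesis's contribution is the specific two-band head-map representation and the classification method built on it.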
