31

Konzeption und prototypische Modellierung einer objektorientierten Architektur für Management Support Systeme (MSS)

Krüger, Dietmar 30 May 2002 (has links)
This thesis investigates how the consistent application of object-oriented concepts to the architecture of Management Support Systems (MSS) can overcome the observed integration limitations of MSS functionalities that have so far been modelled heterogeneously. To identify, select and configure the relevant potentials and concepts of the object-oriented paradigm, and to later evaluate the resulting object-oriented MSS architecture, a three-level catalogue of criteria is first established, oriented towards the specific integration requirements of an MSS. The MSS concept developed, presented in the form of a concrete Smalltalk-based object-oriented MSS (ooMSS), comprises three model areas: information model, interaction model and MSS model. The information model is divided into an operational domain model for representing heterogeneous information elements and sets, and a tool model for mapping analytical relationships and evaluation functions onto these information elements, including multidimensional data analysis (OLAP). For the implementation, the specialised classes InformationObject and ToolObject are introduced, connected via the Adapter and Interpreter patterns respectively. The interaction model, which allows direct, user-specific configuration and manipulation of relevant sections of the information model (data and functions), is realised following the Morphic framework. The MSS model represents the technically and functionally possible or sensible MSS functions and workflows as well as their content-related results; it is modelled by means of the specialised class InformationAspect on a meta level of the information model. Finally, the application of the overall concept is documented in the form of MSS support scenarios at the levels of domain user, advanced user and developer.
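The Adapter coupling named in the abstract can be pictured with a short sketch. The following Python fragment (the thesis itself uses Smalltalk) shows heterogeneous records wrapped behind one InformationObject interface that a ToolObject consumes; apart from those two class names, every identifier is an illustrative assumption, not the thesis's design.

```python
# Minimal sketch of the Adapter idea described above: heterogeneous sources
# are wrapped so that analysis tools only ever see one InformationObject
# interface. Only InformationObject/ToolObject are named in the abstract;
# everything else here is an illustrative assumption.

class InformationObject:
    """Uniform interface for heterogeneous information elements."""
    def values(self):
        raise NotImplementedError

class CsvAdapter(InformationObject):
    """Adapts a raw comma-separated record to the common interface."""
    def __init__(self, line: str):
        self._line = line
    def values(self):
        return [float(v) for v in self._line.split(",")]

class DictAdapter(InformationObject):
    """Adapts a key/value record to the common interface."""
    def __init__(self, record: dict):
        self._record = record
    def values(self):
        return list(self._record.values())

class ToolObject:
    """Evaluation function operating on any InformationObject."""
    def evaluate(self, info: InformationObject) -> float:
        vals = info.values()
        return sum(vals) / len(vals)   # e.g. a simple aggregation

tool = ToolObject()
print(tool.evaluate(CsvAdapter("1.0,2.0,3.0")))          # 2.0
print(tool.evaluate(DictAdapter({"q1": 10, "q2": 20})))  # 15.0
```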
32

Validation of a Regional Distribution Model in Environmental Risk Assessment of Substances / Validierung eines regionalen Ausbreitungsmodells in der Umweltrisikoabschätzung von Substanzen

Berding, Volker 06 November 2000 (has links)
The aim of this investigation was to determine the applicability and weaknesses of the regional distribution model SimpleBox and to make proposals for improvement. The validation was performed using a scheme whose main aspect is the division into internal and external validation. With its default values, the regional distribution model represents a generic region, and it is coupled with a model that estimates indirect emissions from sewage treatment plants. The examination was carried out using a set of sample substances whose characteristics cover a wide range of physico-chemical properties, use patterns and emissions, chosen to allow general statements on the model's applicability. Altogether, the model complies with its designated purpose of calculating regional background concentrations. A scrutiny of the theory did not reveal serious errors or defects. Regarding sensitivity, it could be shown that the model contains only a few parameters with negligible influence on the results. The comparison with measured values showed good agreement in many cases. The largest deviations occur when the preliminary estimates of emissions, degradation rates and partition coefficients deliver unrealistic values. Altering the regional default parameters has a smaller influence on the modelled results than replacing unrealistic substance properties with better ones. Generally, the model employed is a reasonable compromise between complexity and simplification. For the sewage treatment model, it could be shown that its influence on the predicted concentrations is very low, and a much simpler model fulfils the purpose in a similar way. Several improvements are proposed, e.g. alternative estimation functions for partition coefficients, but the main focus of future work should be on better release estimates and substance characteristics.
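As a rough illustration of the external-validation step described above, the following hedged sketch compares modelled and measured concentrations by their ratio and flags deviations beyond a tolerance factor; the substances, numbers and the factor-of-ten threshold are invented for illustration and are not data from the thesis.

```python
# Illustrative external-validation check: modelled regional background
# concentrations are compared with measurements via their ratio and flagged
# when outside a tolerance factor. All values below are made-up examples.
pairs = {               # substance: (modelled, measured), arbitrary units
    "substance_a": (0.80, 1.10),
    "substance_b": (5.00, 0.30),
    "substance_c": (0.02, 0.015),
}
TOLERANCE_FACTOR = 10.0

for name, (modelled, measured) in pairs.items():
    ratio = modelled / measured
    ok = 1.0 / TOLERANCE_FACTOR <= ratio <= TOLERANCE_FACTOR
    print(f"{name}: modelled/measured = {ratio:.2f} "
          f"({'within' if ok else 'outside'} factor {TOLERANCE_FACTOR:g})")
```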
33

Self-Organizing Neural Networks for Sequence Processing

Strickert, Marc 27 January 2005 (has links)
This work investigates the self-organizing representation of temporal data in prototype-based neural networks. Extensions of the supervised learning vector quantization (LVQ) and the unsupervised self-organizing map (SOM) are considered in detail. The principle of Hebbian learning through prototypes yields compact data models that can be easily interpreted by similarity reasoning. In order to obtain robust prototype dynamics, LVQ is extended by neighborhood cooperation between neurons to prevent a strong dependence on the initial prototype locations. Additionally, implementations of more general, adaptive metrics are studied, with a particular focus on the built-in detection of the data attributes relevant to a given classification task. For unsupervised sequence processing, two modifications of SOM are pursued: the SOM for structured data (SOMSD), realizing an efficient back-reference to the previous best-matching neuron in a triangular low-dimensional neural lattice, and the merge SOM (MSOM), expressing the temporal context as a fractal combination of the previously most active neuron and its context. The SOMSD extension tackles data dimension reduction and planar visualization, while MSOM is designed for higher quantization accuracy. The supplied experiments underline the data modeling quality of the presented methods.
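The MSOM context mentioned in the abstract is commonly described as a convex combination of the previous winner's weight and context vectors. The sketch below illustrates that recursive (fractal) context under assumed mixing and learning parameters; it is a simplified illustration, not the thesis's implementation.

```python
# Sketch of the merge-SOM (MSOM) context idea: each neuron stores a weight
# and a context vector, and the temporal context is a convex combination of
# the previous winner's weight and stored context. Learning rates, the merge
# parameter beta and the tiny network size are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dim = 20, 1
W = rng.normal(size=(n_neurons, dim))   # weight vectors
C = np.zeros((n_neurons, dim))          # context vectors
alpha, beta, lr = 0.5, 0.5, 0.1         # mixing and learning parameters

context = np.zeros(dim)
winner = 0
for x in rng.normal(size=(100, dim)):   # a toy input sequence
    # context = merge of previous winner's weight and its stored context
    context = (1.0 - beta) * W[winner] + beta * C[winner]
    # best match combines distance in weight space and in context space
    d = alpha * np.sum((W - x) ** 2, axis=1) \
        + (1.0 - alpha) * np.sum((C - context) ** 2, axis=1)
    winner = int(np.argmin(d))
    # Hebbian update of the winner towards the current input and context
    W[winner] += lr * (x - W[winner])
    C[winner] += lr * (context - C[winner])
```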
34

Entwicklung eines Monte-Carlo-Verfahrens zum selbständigen Lernen von Gauß-Mischverteilungen

Lauer, Martin 03 March 2005 (has links)
This thesis develops a novel learning method for Gaussian mixture models. It is based on Markov chain Monte Carlo techniques and is able to determine both the size of the mixture model and its parameters in a single run. The method is characterised by a good fit to the training data as well as good generalisation performance. Starting from a description of the stochastic foundations and an analysis of the problems that arise when learning Gaussian mixture models, the thesis develops the new learning method step by step and examines its properties. An experimental comparison with known learning methods for Gaussian mixture models also demonstrates the suitability of the new method empirically.
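To give the flavor of MCMC over mixture models, the following sketch runs Gibbs-style sweeps for a Gaussian mixture with a fixed number of components, unit variances and flat priors, all simplifying assumptions; the thesis's method additionally infers the size of the mixture in the same run.

```python
# Minimal MCMC illustration for a Gaussian mixture. Unlike the method in the
# thesis, the number of components K is fixed here; unit variances and flat
# priors are further simplifying assumptions for brevity.
import numpy as np

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 150)])
K = 2
means = rng.normal(0, 1, K)
weights = np.full(K, 1.0 / K)

for sweep in range(200):
    # 1) sample component assignments given current parameters
    logp = -0.5 * (data[:, None] - means[None, :]) ** 2 + np.log(weights)
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=row) for row in p])
    # 2) sample means given assignments (unit-variance likelihood, flat prior)
    for k in range(K):
        members = data[z == k]
        n = len(members)
        if n:
            means[k] = rng.normal(members.mean(), 1.0 / np.sqrt(n))
    # 3) sample weights from the Dirichlet(1,...,1) posterior over counts
    counts = np.bincount(z, minlength=K)
    weights = rng.dirichlet(counts + 1.0)

print(np.sort(means))   # should approach [-2, 3]
```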
35

Hypermediale Navigation in Vorlesungsaufzeichnungen: Nutzung und automatische Produktion hypermedial navigierbarer Aufzeichnungen von Lehrveranstaltungen

Mertens, Robert 08 November 2007 (has links)
In the mid-nineties, electronic lecture recording emerged as a new area of research. The aim behind most early research activities in this field was the cost-efficient production of e-learning content as a by-product of traditional lectures. These efforts have led to systems that can record a lecture in a fraction of the time, and at a fraction of the cost, that other methods require to produce similar e-learning content. While the production of lecture recordings has been investigated thoroughly, the conditions under which the recorded content can be used efficiently have only recently come into the focus of research. Employing lecture recordings in the right way is, however, crucial for how effectively they can be used. This thesis therefore gives a detailed overview of archetypical application scenarios. A closer examination of these scenarios reveals navigation in recorded lectures as a critical factor for teaching and learning success. In order to improve navigation, a hypermedia navigation concept for recorded lectures is developed. Hypermedia navigation has proven a successful paradigm for classic text- and picture-based media, but adapting it to time-based media such as recorded lectures requires a number of conceptual changes. The concept developed in this thesis combines time- and structure-based navigation paradigms and modifies existing hypermedia navigation facilities accordingly. Even a highly developed navigation concept cannot, however, be put into practice efficiently if the production costs of suitable recordings are too high. This thesis therefore also shows that suitable lecture recordings can be produced at minimal cost, and realizes this by implementing a fully automatic production chain for recording and indexing lectures.
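The combination of structure- and time-based navigation can be pictured as a mapping between structural units and time intervals in the recording. The sketch below is an illustrative data structure with invented slide titles and timestamps, not the navigation implementation from the thesis.

```python
# Illustrative structure/time mapping: each structural unit (e.g. a slide)
# covers a time interval in the recording, so a click on the structure
# resolves to a seek position and a playback time resolves back to its
# structural context. Titles and times are invented examples.
from bisect import bisect_right

slides = [                     # (start_seconds, title)
    (0,    "Introduction"),
    (185,  "Related Work"),
    (540,  "Hypermedia Navigation Concept"),
    (1210, "Automatic Production Chain"),
]
starts = [s for s, _ in slides]

def seek_position(slide_index: int) -> int:
    """Structure -> time: jump target for a table-of-contents link."""
    return slides[slide_index][0]

def current_slide(t: float) -> str:
    """Time -> structure: which slide is shown at playback time t."""
    return slides[bisect_right(starts, t) - 1][1]

print(seek_position(2))        # 540
print(current_slide(600.0))    # "Hypermedia Navigation Concept"
```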
36

Reinforcement Learning with History Lists

Timmer, Stephan 13 March 2009 (has links)
A very general framework for modeling uncertainty in learning environments is given by Partially Observable Markov Decision Processes (POMDPs). In a POMDP setting, the learning agent infers a policy for acting optimally in all possible states of the environment while receiving only observations of these states. The basic idea for coping with partial observability is to include memory in the representation of the policy. Perfect memory is provided by the belief space, i.e. the space of probability distributions over environmental states. However, computing policies defined on the belief space requires a considerable amount of prior knowledge about the learning problem and is expensive in terms of computation time. In this thesis, we present a reinforcement learning algorithm for solving deterministic POMDPs based on short-term memory. Short-term memory is implemented by sequences of past observations and actions, called history lists. In contrast to belief states, history lists are not capable of representing optimal policies, but they are far more practical and require no prior knowledge about the learning problem. The algorithm presented learns policies consisting of two separate phases. During the first phase, the learning agent collects information by actively establishing a history list that identifies the current state; this phase is called the efficient identification strategy. After the current state has been determined, the Q-learning algorithm is used to learn a near-optimal policy. We show that such a procedure can also be used to solve large Markov Decision Processes (MDPs). Solving MDPs with continuous, multi-dimensional state spaces requires some form of abstraction over states. One way of establishing such an abstraction is to ignore the original state information and consider only features of states. This form of state abstraction is closely related to POMDPs, since features of states can be interpreted as observations of states.
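A minimal sketch of the history-list idea: Q-values are keyed by a bounded sequence of recent observations and actions rather than by the hidden state. The toy environment, horizon and parameters below are assumptions, and the two-phase structure of the thesis's algorithm (identification, then learning) is not reproduced in full.

```python
# Q-learning keyed by history lists instead of (hidden) states. The toy
# deterministic POMDP, the history length H and the learning parameters are
# assumptions for illustration only.
import random
from collections import defaultdict, deque

ACTIONS = ["left", "right"]
ALPHA, GAMMA, EPS, H = 0.1, 0.95, 0.1, 3   # rate, discount, exploration, memory

Q = defaultdict(float)                      # Q[(history, action)]

def step(state, action):
    """Toy deterministic POMDP: several hidden states share an observation."""
    nxt = (state + (1 if action == "right" else -1)) % 4
    obs = "dark" if nxt in (0, 2) else "light"   # aliased observations
    reward = 1.0 if nxt == 3 else 0.0
    return nxt, obs, reward

state, history = 0, deque(maxlen=2 * H)     # alternating action/obs entries
for t in range(5000):
    key = tuple(history)
    if random.random() < EPS:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(key, a)])
    state, obs, reward = step(state, action)
    history.extend([action, obs])
    nkey = tuple(history)
    best_next = max(Q[(nkey, a)] for a in ACTIONS)
    Q[(key, action)] += ALPHA * (reward + GAMMA * best_next - Q[(key, action)])
```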
37

Self-Regulating Neurons. A model for synaptic plasticity in artificial recurrent neural networks

Ghazi-Zahedi, Keyan Mahmoud 04 February 2009 (has links)
Robustness and adaptivity are important behavioural properties observed in biological systems that are still widely absent from artificial intelligence applications. Such static or non-plastic artificial systems are limited to their very specific problem domain. This work introduces a general model for synaptic plasticity in embedded artificial recurrent neural networks, related to short-term plasticity by synaptic scaling in biological systems. The model is general in the sense that it does not require trigger mechanisms or artificial limitations, and it operates on recurrent neural networks of arbitrary structure. A Self-Regulating Neuron is defined as a homeostatic unit which regulates its activity against external disturbances towards a target value by modulating its incoming and outgoing synapses. Embedded and situated in the sensori-motor loop, a network of these neurons is permanently driven by external stimuli and will generally not settle at its asymptotically stable state. The system's behaviour is determined by the local interactions of the Self-Regulating Neurons. The neuron model is analysed as a dynamical system with respect to its attractor landscape and its transient dynamics. The latter analysis is conducted on control structures for obstacle avoidance of increasing structural complexity derived from the literature; the result is a controller that shows first traces of adaptivity. Next, two controllers for different tasks are evolved and their transient dynamics are fully analysed. The results of this work not only show that the proposed neuron model enhances the behavioural properties, but also point out the limitations of short-term plasticity, which does not account for learning and memory.
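The homeostatic principle described above can be sketched as follows: each neuron compares its activity with a target value and scales its incoming and outgoing synapses against the deviation. The concrete update rule, rates and network size are assumptions made for illustration, not the thesis's exact equations.

```python
# Homeostatic synaptic-scaling sketch: neurons regulate their activity
# towards a target value by scaling incoming (rows) and outgoing (columns)
# weights. The update rule, rates and network size are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 5
W = rng.normal(scale=0.5, size=(n, n))      # recurrent weights
a = np.zeros(n)                             # activities
target, eta = 0.2, 0.05                     # target activity, scaling rate

for t in range(500):
    stimulus = rng.normal(scale=0.3, size=n)     # external sensor drive
    a = np.tanh(W @ a + stimulus)
    error = target - np.abs(a)                   # per-neuron deviation
    # scale each neuron's incoming and outgoing synapses against the error
    W *= (1.0 + eta * error)[:, None]            # incoming (rows)
    W *= (1.0 + eta * error)[None, :]            # outgoing (columns)

print(np.abs(a).round(2))   # activities pushed towards the target value
```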
38

Multi-threaded User Interfaces in Java

Ludwig, Elmar 27 July 2006 (has links)
With the rise of modern programming languages like Java that include native support for multi-threading, the issue of concurrency in graphical applications becomes more and more important. Traditional graphical component libraries for Java have always used the threading concepts provided by the language very carefully, telling the programmer that the use of threads in this context is often unnecessarily burdensome and complex. On the other hand, experience gained from systems like Inferno or BeOS shows that the use of concurrency in graphical applications is clearly manageable and leads to a different program design, once the application is dissociated from the limitations of the GUI library. This thesis describes the design of a general architecture that separates a program's user interface from the application logic using thread communication. It enables the use of concurrency in the application code without requiring any degree of thread-safety at the native interface component level.
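The architecture described, application logic decoupled from the interface through thread communication, can be sketched language-agnostically; the fragment below uses Python and a message queue (the thesis itself addresses Java GUI libraries), with the printing loop standing in for a real GUI event loop.

```python
# Sketch of the separation described above: application logic runs in its
# own thread and talks to the user-interface thread only through a message
# queue, so no interface component needs to be thread-safe. The print loop
# is a stand-in for a real GUI event loop.
import queue
import threading
import time

ui_events = queue.Queue()           # the only channel into the "UI thread"

def application_logic():
    """Long-running work, fully decoupled from the interface."""
    for i in range(3):
        time.sleep(0.1)             # stands in for real computation
        ui_events.put(f"progress {i + 1}/3")
    ui_events.put(None)             # sentinel: work finished

threading.Thread(target=application_logic, daemon=True).start()

# Only this loop touches the "widgets"; it consumes messages sequentially.
while True:
    event = ui_events.get()
    if event is None:
        print("done")
        break
    print("update label:", event)
```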
39

Multiresolution image segmentation

Salem, Mohammed Abdel-Megeed Mohammed 27 November 2008 (has links)
Systeme der Computer Vision spielen in der Automatisierung vieler Prozesse eine wichtige Rolle. Die wichtigste Aufgabe solcher Systeme ist die Automatisierung des visuellen Erkennungsprozesses und die Extraktion der relevanten Information aus Bildern oder Bildsequenzen. Eine wichtige Komponente dieser Systeme ist die Bildsegmentierung, denn sie bestimmt zu einem großen Teil die Qualität des Gesamtsystems. Für die Segmentierung von Bildern und Bildsequenzen werden neue Algorithmen vorgeschlagen. Das Konzept der Multiresolution wird als eigenständig dargestellt, es existiert unabhängig von der Wavelet-Transformation. Die Wavelet-Transformation wird zur Verarbeitung von Bildern und Bildsequenzen zu einer 2D- bzw. 3D-Wavelet-Transformation erweitert. Für die Segmentierung von Bildern wird der Algorithmus Resolution Mosaic Expectation Maximization (RM-EM) vorgeschlagen. Das Ergebnis der Vorverarbeitung sind unterschiedlich aufgelöste Teilbilder, das Auflösungsmosaik. Durch dieses Mosaik lassen sich räumliche Korrelationen zwischen den Pixeln ausnutzen. Die Verwendung unterschiedlicher Auflösungen beschleunigt die Verarbeitung und verbessert die Ergebnisse. Für die Extraktion von bewegten Objekten aus Bildsequenzen werden neue Algorithmen vorgeschlagen, die auf der 3D-Wavelet-Transformation und auf der Analyse mit 3D-Wavelet-Packets beruhen. Die neuen Algorithmen haben den Vorteil, dass sie sowohl die räumlichen als auch die zeitlichen Bewegungsinformationen berücksichtigen. Wegen der geringen Berechnungskomplexität der Wavelet-Transformation ist für den ersten Segmentierungsschritt Hardware auf der Basis von FPGA entworfen worden. Aktuelle Anwendungen werden genutzt, um die Algorithmen zu evaluieren: die Segmentierung von Magnetresonanzbildern des menschlichen Gehirns und die Detektion von bewegten Objekten in Bildsequenzen von Verkehrsszenen. Die neuen Algorithmen sind robust und führen zu besseren Segmentierungsergebnissen. / More and more computer vision systems take part in the automation of various applications. The main task of such systems is to automate the process of visual recognition and to extract relevant information from the images or image sequences acquired or produced by such applications. One essential and critical component in almost every computer vision system is image segmentation. The quality of the segmentation determines to a great extent the quality of the final results of the vision system. New algorithms for image and video segmentation based on multiresolution analysis and the wavelet transform are proposed. The concept of multiresolution is explained as existing independently of the wavelet transform. The wavelet transform is extended to two and three dimensions to allow image and video processing. For still-image segmentation the Resolution Mosaic Expectation Maximization (RM-EM) algorithm is proposed. The resolution mosaic enables the algorithm to exploit the spatial correlation between pixels; the level of local resolution depends on the information content of the individual parts of the image. The use of various resolutions speeds up the processing and improves the results. New algorithms based on the 3D wavelet transform and 3D wavelet packet analysis are proposed for extracting moving objects from image sequences. The new algorithms have the advantage of considering the relevant spatial as well as temporal information of the movement. Because of the low computational complexity of the wavelet transform, FPGA hardware was designed for the primary segmentation step. Real applications are used to investigate and evaluate all algorithms: the segmentation of magnetic resonance images of the human brain and the detection of moving objects in image sequences of traffic scenes. The new algorithms show robustness against noise and changing ambient conditions and give better segmentation results.
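A one-level 2D Haar decomposition illustrates the kind of low-cost multiresolution transform underlying the approach; this is a generic textbook sketch (with plain averaging normalization), not the RM-EM algorithm or the FPGA design from the thesis.

```python
# One level of a 2D Haar wavelet decomposition: an even-sized image is split
# into an approximation subband at half resolution plus three detail
# subbands. Plain averaging normalization is used for simplicity.
import numpy as np

def haar2d_level(img: np.ndarray):
    """Split an even-sized image into approximation + 3 detail subbands."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0      # approximation (half resolution)
    lh = (a - b + c - d) / 4.0      # horizontal details
    hl = (a + b - c - d) / 4.0      # vertical details
    hh = (a - b - c + d) / 4.0      # diagonal details
    return ll, lh, hl, hh

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar2d_level(img)
print(ll.shape)   # (4, 4): the next, coarser resolution level
```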
40

Kontinuierliche Bewertung psychischer Beanspruchung an informationsintensiven Arbeitsplätzen auf Basis des Elektroenzephalogramms

Radüntz, Thea 21 January 2016 (has links)
Die Informations- und Kommunikationstechnologien haben die Arbeitswelt grundlegend verändert. Durch den Einsatz komplexer, hochautomatisierter Systeme werden an die kognitive Leistungsfähigkeit und Belastbarkeit von Arbeitnehmern hohe Anforderungen gestellt. Über die Ermittlung der psychischen Beanspruchung des Menschen an Arbeitsplätzen mit hohen kognitiven Anforderungen wird es möglich, eine Über- oder Unterbeanspruchung zu vermeiden. Gegenstand der Dissertation ist deshalb die Entwicklung, Implementierung und der Test eines neuen Systems zur kontinuierlichen Bewertung psychischer Beanspruchung an informationsintensiven Arbeitsplätzen auf Basis des Elektroenzephalogramms. Im theoretischen Teil der Arbeit werden die Konzepte zur Definition der psychischen Beanspruchung und Modelle zur Beschreibung der menschlichen Informationsverarbeitung zusammengestellt. Die Auswertung einer Reihe von Experimenten ist die Basis für die Konzeption und den Test des neuen Systems zur Indexierung der psychischen Beanspruchung. Die Aufgabenbatterie, die Stichprobenbeschreibung, der Versuchsaufbau und -ablauf sind Bestandteil des experimentellen Teils der Arbeit. Während der Aufgabenlösung wird von den Probanden das Elektroenzephalogramm mit 25 Kanälen abgeleitet. Es folgt eine Artefakteliminierung, für die ein neues automatisch und in Echtzeit arbeitendes Verfahren entwickelt wurde. Die Klassifikation und damit die Indexierung von Segmenten des Elektroenzephalogramms in die Klassen niedriger, mittlerer oder hoher Beanspruchung erfolgt auf Basis einer ebenfalls neu entwickelten Methode, deren Grundlage Dual Frequency Head Maps sind. Damit ist ein vollständiges System entstanden, das die einzelnen Verfahrensschritte integriert und die Aufgabenstellung der Arbeit erfüllt: Es kann an informationsintensiven Arbeitsplätzen eingesetzt werden, um kontinuierlich die Bewertung der psychischen Beanspruchung auf Basis des Elektroenzephalogramms vorzunehmen. / Advanced information and communication technology has fundamentally changed the working environment. Complex and highly automated systems impose high demands on employees with respect to cognitive capacity and the ability to cope with workload. Registering the mental workload of employees at workplaces with high cognitive demands makes it possible to prevent over- or underload. The subject of this dissertation is therefore the development, implementation, and testing of a novel system for continuous assessment of mental workload at information-intensive workplaces on the basis of the electroencephalogram. In the theoretical section of the thesis, concepts for defining mental workload are given; furthermore, models for describing human information processing are introduced, and the relevant terminology such as strain, workload, and performance is clarified. The evaluation of an array of experiments with cognitive tasks forms the basis for the conceptual design and testing of the novel system for indexing mental workload. Descriptions of these tasks, the sample, and the experimental set-up and procedure are included in the experimental section. The electroencephalogram was recorded from the subjects over 25 channels while they performed the tasks. Subsequently, artifacts were eliminated using a newly developed automated, real-time capable procedure. Segments of the electroencephalogram are classified, and thus indexed, into classes of low, medium, and high workload on the basis of a likewise newly developed method whose central elements are Dual Frequency Head Maps. Hence, a complete system emerges that integrates the single processing steps and satisfies the scope of this thesis: it can be applied at information-intensive workplaces for continuous assessment of mental workload on the basis of the electroencephalogram.
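Workload indexing from the EEG builds on the spectral content of short segments. The sketch below extracts per-channel band power in two workload-sensitive frequency bands (theta and alpha are common choices) from a simulated 25-channel segment; the sampling rate and segment length are assumptions, and the Dual Frequency Head Maps and the classifier themselves are not reproduced here.

```python
# Generic band-power feature extraction from a 25-channel EEG segment, the
# kind of spectral input that workload indexing builds on. Sampling rate,
# segment length and the simulated signal are assumptions; the thesis's
# Dual Frequency Head Maps and classifier are not reproduced.
import numpy as np

FS = 250                                  # sampling rate in Hz (assumed)
rng = np.random.default_rng(3)
segment = rng.normal(size=(25, 2 * FS))   # 25 channels, 2-second segment

freqs = np.fft.rfftfreq(segment.shape[1], d=1.0 / FS)
power = np.abs(np.fft.rfft(segment, axis=1)) ** 2

def band_power(lo: float, hi: float) -> np.ndarray:
    """Mean spectral power per channel within [lo, hi) Hz."""
    band = (freqs >= lo) & (freqs < hi)
    return power[:, band].mean(axis=1)

theta = band_power(4.0, 8.0)    # typically rises with mental workload
alpha = band_power(8.0, 13.0)   # tends to drop under high workload
features = np.concatenate([theta, alpha])   # 50-dim feature vector
print(features.shape)           # (50,)
```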
