131

TU-Spektrum 4/1996, Magazin der Technischen Universität Chemnitz-Zwickau

Steinebach, Mario, Gieß, Hubert J., Häckel-Riffler, Christine 06 December 2002
A magazine on current topics at TU Chemnitz, published four times a year
132

TU-Spektrum 1/1997, Magazin der Technischen Universität Chemnitz-Zwickau

Steinebach, Mario, Gieß, Hubert J., Häckel-Riffler, Christine 18 December 2002
A magazine on current topics at TU Chemnitz, published four times a year
133

TU-Spektrum 3/2002, Magazin der Technischen Universität Chemnitz

Steinebach, Mario, Friebel, Alexander, Häckel-Riffler, Christine, Lopez, Daniela, Schellenberger, Peggy, Tzschucke, Volker 27 September 2002
A magazine on current topics at TU Chemnitz, published four times a year
134

25 Linux-Kniffe

Team der Chemnitzer Linux-Tage 25 February 2009 (PDF)
A collection of 25 useful programs and tools for Linux, compiled on the occasion of the Chemnitzer Linux-Tage 2009
135

Context-specific Consistencies in Information Extraction: Rule-based and Probabilistic Approaches / Kontextspezifische Konsistenzen in der Informationsextraktion: Regelbasierte und Probabilistische Ansätze

Klügl, Peter January 2015 (PDF)
Large amounts of communication, documentation as well as knowledge and information are stored in textual documents. Most often, texts such as webpages, books, tweets or reports are only available in an unstructured representation since they are created and interpreted by humans. In order to take advantage of this huge amount of concealed information and to include it in analytic processes, it needs to be transformed into a structured representation. Information extraction addresses exactly this task: it tries to identify well-defined entities and relations in unstructured data and especially in textual documents. Interesting entities are often consistently structured within a certain context, especially in semi-structured texts. However, their actual composition varies and is possibly inconsistent among different contexts. Information extraction models fall short of their potential and return inferior results if they do not take these consistencies into account during processing. This work presents a selection of practical and novel approaches for exploiting these context-specific consistencies in information extraction tasks. The approaches are not limited to a single technique, but are based on handcrafted rules as well as probabilistic models. A new rule-based system called UIMA Ruta has been developed in order to provide optimal conditions for rule engineers. This system consists of a compact rule language with high expressiveness and strong development support. Both elements facilitate rapid development of information extraction applications and improve the general engineering experience, which reduces the necessary effort and cost when specifying rules. The advantages and applicability of UIMA Ruta for exploiting context-specific consistencies are illustrated in three case studies, which utilize different engineering approaches for including the consistencies in the information extraction task.
Either the recall is increased by finding additional entities with similar composition, or the precision is improved by filtering inconsistent entities. Furthermore, another case study highlights how transformation-based approaches are able to correct preliminary entities using knowledge about the occurring consistencies. The machine learning approaches of this work rely on Conditional Random Fields, popular probabilistic graphical models for sequence labeling. They take advantage of a consistency model that is automatically induced while processing the document. The approach based on stacked graphical models utilizes the learnt descriptions as feature functions that have a static meaning for the model, but change their actual function for each document. The other two models extend the graph structure with additional factors dependent on the learnt model of consistency. They include feature functions for consistent and inconsistent entities as well as for additional positions that fulfill the consistencies. The presented approaches are evaluated in three real-world domains: segmentation of scientific references, template extraction in curricula vitae, and identification and categorization of sections in clinical discharge letters. They achieve remarkable results and provide an error reduction of up to 30% compared to commonly applied techniques. / This thesis deals with rule-based and probabilistic approaches to information extraction that exploit context-specific consistencies and thereby improve extraction accuracy.
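As a toy illustration of the stacked idea described above, a consistency model can be induced from preliminary predictions on a single document and then exposed as a feature function whose meaning is static while its value is document-specific. The following Python sketch uses hypothetical names and a deliberately coarse token pattern; it is not the thesis implementation:

```python
from collections import Counter

def token_pattern(token):
    """Coarse shape of a token: number, capitalized word, or lowercase."""
    if token.isdigit():
        return "NUM"
    if token.istitle():
        return "CAP"
    return "LOW"

def induce_consistency_model(preliminary_entities):
    """For each entity type, remember the most frequent composition
    (here: the tuple of token shapes) among the preliminary predictions
    of the current document."""
    by_type = {}
    for etype, tokens in preliminary_entities:
        pattern = tuple(token_pattern(t) for t in tokens)
        by_type.setdefault(etype, []).append(pattern)
    return {etype: Counter(pats).most_common(1)[0][0]
            for etype, pats in by_type.items()}

def consistency_feature(model, etype, tokens):
    """Feature with a static meaning ('candidate matches the document's
    dominant pattern for its type') but a document-specific value."""
    pattern = tuple(token_pattern(t) for t in tokens)
    return 1.0 if model.get(etype) == pattern else 0.0
```

For a reference-segmentation document whose preliminary author entities all follow the pattern (CAP, CAP), the feature fires for new candidates with the same composition and stays off for inconsistent ones.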
136

Lokalisierung und Kartenbau mit mobilen Robotern

Lingemann, Kai 08 April 2014
Three-dimensional mapping of the environment plays a major role in robotics in particular and is the basis for almost all tasks that go beyond purely reactive interaction with that environment. This thesis describes the path to such maps. Starting with pure (2D) localization of a mobile robot, as a first, fundamental step towards autonomous exploration and mapping, the text describes the registration of scans for the automatic, efficient generation of 3D maps and simultaneous localization in six degrees of freedom (the SLAM problem). Solution strategies follow for dealing with accumulated errors, particularly in large explored areas: a GraphSLAM variant yields globally consistent maps, optionally supported by a real-time-capable heuristic for online loop closing. The thesis concludes with an alternative localization approach for 3D mapping using cooperating robots.
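The loop-closing step can be illustrated with a deliberately reduced toy: a 1D pose graph (real GraphSLAM optimizes 6-DoF poses including rotations) in which odometry constraints and one loop-closure constraint are reconciled by least squares. The function name and setup are illustrative only:

```python
def optimize_pose_graph(num_poses, constraints, iterations=2000, lr=0.05):
    """Minimize the sum of squared constraint errors (x_j - x_i - d)^2
    by gradient descent. Each constraint (i, j, d) says pose j should
    lie distance d after pose i; pose 0 stays fixed to anchor the graph."""
    x = [0.0] * num_poses
    for _ in range(iterations):
        grad = [0.0] * num_poses
        for i, j, d in constraints:
            e = x[j] - x[i] - d  # how far the constraint is violated
            grad[j] += 2 * e
            grad[i] -= 2 * e
        for k in range(1, num_poses):  # pose 0 is the anchor
            x[k] -= lr * grad[k]
    return x
```

With odometry steps 1.0, 1.0, 1.1 and a loop-closure measurement of 3.0 between the first and last pose, the 0.1 discrepancy is distributed over the whole trajectory instead of being absorbed by the final pose, which is the essence of globally consistent mapping.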
137

Ontology Matching by Combining Instance-Based Concept Similarity Measures with Structure

Todorov, Konstantin 12 April 2011
Ontologies describe the semantics of data and provide a uniform framework of understanding between different parties. The most common definition describes ontologies as bodies of knowledge that provide a formal representation of a shared conceptualization of a domain: the objects, concepts and other entities that are assumed to exist in a certain area of interest, together with the relationships holding among them. However, in open and evolving systems of a decentralized nature (for example, the Semantic Web), it is unlikely that different parties adopt the same ontology. The problem of ontology matching arises from the need to align ontologies that cover the same or similar domains of knowledge. The task is to reduce ontology heterogeneity, which occurs in several forms that rarely appear in isolation from one another. Syntactically heterogeneous ontologies are expressed in different formal languages. Terminological heterogeneity stands for variations in names when referring to the same entities and concepts. Conceptual heterogeneity refers to differences in coverage, granularity or scope when modeling the same domain of interest. Finally, pragmatic heterogeneity is about mismatches in how entities are interpreted by people in a given context. The work presented in this thesis is a contribution to the problem of reducing the terminological and conceptual heterogeneity of hierarchical ontologies (defined as ontologies that contain a hierarchical body), populated with text documents. We make use of both intensional (structural) and extensional (instance-based) aspects of the input ontologies and combine them in order to establish correspondences between their elements. In addition, the proposed procedures yield assertions on the granularity and the extensional richness of one ontology compared to another, which is helpful in assisting a process of ontology merging.
Although we put an emphasis on the application of instance-based techniques, we show that combining them with intensional approaches leads to more efficient (both conceptually and computationally) similarity judgments. The thesis is oriented towards both researchers and practitioners in the domain of ontology matching and knowledge sharing. The proposed solutions can be applied successfully to the problem of matching web directories and facilitating the exchange of knowledge at web scale.
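The core combination of extensional and intensional evidence can be sketched as follows; the bag-of-words representation, the cosine measure and the convex weighting are simplifying assumptions, not the exact measures developed in the thesis:

```python
import math
from collections import Counter

def bow(documents):
    """Bag-of-words vector over a concept's instance documents."""
    counts = Counter()
    for doc in documents:
        counts.update(doc.lower().split())
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def combined_similarity(docs1, docs2, parent_sim, alpha=0.7):
    """Convex combination of extensional (instance-based) and
    intensional (structural) evidence."""
    return alpha * cosine(bow(docs1), bow(docs2)) + (1 - alpha) * parent_sim
```

Here each concept is represented by the texts of its instance documents, and `parent_sim` stands in for whatever structural similarity score is computed from the two hierarchies.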
138

Integrated management of indoor and outdoor utilities by utilizing BIM and 3DGIS

Hijazi, Ihab 09 January 2012
Computer Aided Design (CAD) and Geographic Information Systems (GIS) are two technologies that are used in tandem in different phases of a civil infrastructure project. CAD systems provide tools to design and manage the interior space of buildings, while GIS provides information about the geo-context. These two technologies encroach upon each other's territory; in fact, the business processes related to them do not even have these boundaries. Utilities infrastructure is an area wherein integrated information management, facilitated by input from both systems, is crucial. This research provides a framework and a data model, "Network for Interior Building Utilities" (NIBU), for integrated analysis and management of interior building utilities in a micro-scale environment. The framework considers managing individual network systems by providing a semantic categorization of utilities, as well as a graph structure based on a "Modern" adjacency list data structure. The framework also considers managing the interdependencies between different network systems and the building structure. NIBU is a graph-based spatial data model that can be used to provide a technician with the location and specifications of interior utilities for a maintenance operation, or to estimate the effect of different maintenance operations at different locations along utility service systems. The model focuses on two important aspects: 1) the relationship between interior utilities and building elements (or spaces), and 2) the building hierarchy structure to which the utilities network is related. A proper hierarchy of the building that supports the generation of human-oriented descriptions of interior utilities is also developed during the research. In addition, a method for partitioning large building elements (and spaces) is utilized. The connections between the different utility network systems and buildings are established using joints, based on a containment relationship.
A user requirement study consisting of three use case scenarios ("maintenance operation", "emergency response" and "inspection operation") was conducted during the research, and these use cases were used to develop the required functionalities and to test the proposed framework. The framework relies on standard data models related to Building Information Models (BIM)/CAD and GIS, and these standard models were used as data sources for obtaining information about the utilities. BIMs support the semantic and geometric representation of interior building utilities, and, more recently, the City Geography Markup Language (CityGML) has been extended to model utility infrastructure. Semantic harmonization was employed to achieve the integration and provide a formal mapping between the BIM, i.e. the Industry Foundation Classes (IFC), CityGML and NIBU. The semantic and connectivity information from these BIM/GIS standards was mapped onto NIBU. Furthermore, the building structure and the required hierarchy were obtained from these models. The research shows that BIMs provide the amount of information needed by the framework and model (i.e. NIBU), whereas CityGML does not provide the amount of detail required by NIBU. The research also provides an information system that facilitates the use of BIM for geo-analysis purposes by populating and implementing NIBU and its functions. BIM4GeoA is a concept for combining existing Open Source Software (OSS) and Open Specifications (OS) for efficient data management and analysis of building information within its broader context. The core components of the system are a spatial database (PostgreSQL/PostGIS), a Building Information Model server, a virtual globe application (the Google Earth 3D viewer), and the models of existing BIM/3D Open Geospatial Consortium (OGC) standards (IFC, the Keyhole Markup Language (KML), and CityGML).
Following the system development, a thorough analysis of the strengths and weaknesses of the different components was completed in order to reinforce their strengths and eliminate their weaknesses. The system is used to implement the NIBU model and its functions; i.e. NIBU is mapped to the PostgreSQL/PostGIS spatial database management system (DBMS). The model is populated directly from a BIM server with the help of an IFC parser developed during the research. Five analysis functions are implemented in the system to support spatial operations: trace upstream, trace downstream, find ancestors, find source, and find disconnected. The investigation shows that NIBU provides the semantics and attributes, the connectivity information and the required relationships necessary to facilitate the analysis of interior utility networks and to manage their relations with building structures.
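The listed analysis functions operate on exactly this kind of adjacency-list graph. A minimal Python sketch of three of them, with illustrative class and attribute names (not the NIBU implementation) and edges directed in flow direction:

```python
from collections import defaultdict

class UtilityNetwork:
    """Directed utility network stored as adjacency lists; an edge
    a -> b means the medium flows from a to b."""

    def __init__(self):
        self.down = defaultdict(set)  # node -> downstream neighbors
        self.up = defaultdict(set)    # node -> upstream neighbors
        self.nodes = set()

    def add_pipe(self, a, b):
        self.nodes.update((a, b))
        self.down[a].add(b)
        self.up[b].add(a)

    def _trace(self, start, adjacency):
        """Depth-first reachability over the given adjacency lists."""
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            for nxt in adjacency[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    def trace_downstream(self, node):
        return self._trace(node, self.down)

    def trace_upstream(self, node):
        return self._trace(node, self.up)

    def find_disconnected(self, sources):
        """Nodes not supplied by any of the given sources."""
        reachable = set(sources)
        for s in sources:
            reachable |= self.trace_downstream(s)
        return self.nodes - reachable
```

Tracing upstream from a fixture yields every component that supplies it, which is the kind of query a technician would run before shutting off a line for maintenance.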
139

Automatische Generierung dreidimensionaler Polygonkarten für mobile Roboter

Wiemann, Thomas 07 May 2013
3D mapping of environments plays an increasingly important role in robotics. With 3D sensors, three-dimensional environments can be captured precisely. However, even high-resolution scanners only sample surfaces at discrete points, and the resulting point clouds consume a lot of memory. One way to resolve this discretization and optimize the representation is to generate a polygonal environment representation from the point data. This thesis presents a method for automatically creating compressed polygon maps. The surface reconstruction is based on a modified marching-cubes algorithm. The polygon meshes produced by this method are converted through optimization steps into a compact representation that is suitable for robotics applications, as demonstrated in several use cases.
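The 2D analogue of the reconstruction step, marching squares, conveys the idea behind the marching-cubes family on a manageable scale. The following sketch is purely illustrative (not the modified algorithm of the thesis) and resolves the two ambiguous corner configurations with a fixed choice:

```python
# Edge index -> corner pair (corners are listed as tl, tr, br, bl).
EDGES = {0: (0, 1), 1: (1, 2), 2: (3, 2), 3: (0, 3)}

# For each of the 16 inside/outside corner configurations, the cell
# edges crossed by the contour (0=top, 1=right, 2=bottom, 3=left).
SEGMENTS = {
    0: [], 15: [],
    1: [(3, 2)], 14: [(3, 2)],
    2: [(2, 1)], 13: [(2, 1)],
    3: [(3, 1)], 12: [(3, 1)],
    4: [(0, 1)], 11: [(0, 1)],
    6: [(0, 2)], 9: [(0, 2)],
    7: [(0, 3)], 8: [(0, 3)],
    5: [(0, 1), (3, 2)], 10: [(0, 3), (2, 1)],  # ambiguous: fixed choice
}

def interpolate(p, q, vp, vq, iso):
    """Point on segment p-q where the field crosses the iso-value."""
    t = (iso - vp) / (vq - vp)
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

def marching_squares(field, iso):
    """field[y][x]: scalar samples on a regular grid. Returns line
    segments approximating the iso-contour."""
    segments = []
    for y in range(len(field) - 1):
        for x in range(len(field[0]) - 1):
            corners = [(x, y), (x + 1, y), (x + 1, y + 1), (x, y + 1)]
            vals = [field[y][x], field[y][x + 1],
                    field[y + 1][x + 1], field[y + 1][x]]
            case = sum(8 >> i for i, v in enumerate(vals) if v >= iso)
            for e1, e2 in SEGMENTS[case]:
                points = []
                for e in (e1, e2):
                    i, j = EDGES[e]
                    points.append(interpolate(corners[i], corners[j],
                                              vals[i], vals[j], iso))
                segments.append(tuple(points))
    return segments
```

Each cell is classified by which corners lie above the iso-value, and the lookup entry says which cell edges the contour crosses; in the thesis, such raw geometry is then compacted through mesh optimization steps.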
140

Scheduling of flow shops with synchronous movement

Waldherr, Stefan 28 October 2015
This thesis presents a thorough introduction to flow shop problems with synchronous movement, a variant of the non-preemptive permutation flow shop. Jobs have to be moved from one machine to the next by an unpaced synchronous transportation system, which implies that processing is organized in synchronized cycles: in each cycle, the current jobs start at the same time on their corresponding machines and, after processing, have to wait until the last job is finished. Afterwards, all jobs are moved to the next machine simultaneously. In this thesis, flow shops with synchronous movement are systematically embedded into the flow shop scheduling framework. The problem is defined for the most common objective functions as well as for many extensions and additional constraints that can be observed in real-world applications. The thesis offers an extensive study of the complexity of the discussed problems. Several exact and heuristic solution algorithms are proposed and evaluated. Further, a project in cooperation with an industrial practitioner, in which flow shops with synchronous movement and resource constraints appear in a real-world application, is discussed. The results of the implemented heuristic approach are compared with the actual production of the industrial partner.
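The synchronized-cycle structure makes the makespan of a fixed job sequence straightforward to compute: with n jobs and m machines there are n + m - 1 cycles, and each cycle lasts as long as its longest operation. A small sketch under these assumptions (illustrative only, not an algorithm from the thesis):

```python
def synchronous_flowshop_makespan(p):
    """p[j][i]: processing time of job j (in sequence order) on
    machine i. Jobs advance one machine per cycle; each cycle lasts
    as long as the longest operation currently running."""
    n, m = len(p), len(p[0])
    makespan = 0
    for cycle in range(n + m - 1):
        longest = 0
        for i in range(m):
            j = cycle - i  # job occupying machine i in this cycle
            if 0 <= j < n:
                longest = max(longest, p[j][i])
        makespan += longest
    return makespan
```

Evaluating a sequence is cheap; swapping the two jobs in the test example below changes the makespan, which illustrates why the hard part is the sequencing decision itself.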
