211

Digital Intelligence – Möglichkeiten und Umsetzung einer informatikgestützten Frühaufklärung / Digital Intelligence – opportunities and implementation of a data-driven foresight

Walde, Peter 18 January 2011
The goal of Digital Intelligence, i.e. data-driven strategic foresight, is to support the shaping of the future on the basis of valid and well-founded digital information, with comparatively little effort and enormous savings in time and cost. Support comes from innovative technologies for (semi-)automatic language and data processing, such as information retrieval, (temporal) data, text and web mining, information visualization, conceptual structures, and informetrics. They make it possible to detect key topics and latent relationships in good time within unmanageably large, distributed and inhomogeneous data sets such as patents, scientific publications, press documents or web content, and to make them available quickly and in a targeted manner. Digital Intelligence thus renders intuitively sensed patterns and developments explicit and measurable. This research work aims, first, to show what computer science can contribute to data-driven foresight and, second, to implement these possibilities in a pragmatic context. Its starting point is an introduction to the discipline of strategic foresight and its data-driven branch, Digital Intelligence. The theoretical and, in particular, computer-science foundations of foresight are discussed and classified, above all the options for time-oriented data exploration. Several methods and software tools are designed and developed that support the time-oriented exploration of unstructured text data in particular (temporal text mining). Only techniques that can be used pragmatically in the context of a large institution and under the specific requirements of strategic foresight are considered. A platform for collective search and an innovative method for identifying weak signals deserve particular mention. Finally, a Digital Intelligence service is presented and discussed that was built on this basis and successfully deployed in a global technology-oriented corporation, enabling systematic competitor, market and technology analysis based on the digital traces people leave behind.
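
A minimal sketch of the temporal text-mining idea described above - tracking how the relative frequency of terms develops over time and flagging steadily growing ones as candidate weak signals - might look as follows (the toy corpus, tokenisation and growth threshold are purely illustrative and not the thesis's actual pipeline):

```python
from collections import Counter, defaultdict
import re

# Hypothetical mini-corpus of (year, text) documents; in the thesis setting these
# would be patents, scientific publications or press documents, not toy strings.
docs = [
    (2008, "combustion engine efficiency and classic drivetrain research"),
    (2008, "combustion engine emissions with a note on battery assisted starters"),
    (2009, "battery research for hybrid drivetrain concepts"),
    (2010, "solid state battery research and lithium battery electrodes"),
]

def yearly_term_frequencies(docs):
    """Relative frequency of each term per year (deliberately crude tokenisation)."""
    counts, totals = defaultdict(Counter), Counter()
    for year, text in docs:
        tokens = re.findall(r"[a-z]+", text.lower())
        counts[year].update(tokens)
        totals[year] += len(tokens)
    return {y: {t: c / totals[y] for t, c in counts[y].items()} for y in counts}

def emerging_terms(freqs, min_growth=1.5):
    """Flag terms whose relative frequency grows by at least min_growth from their
    first to their last occurrence - a crude stand-in for weak-signal detection."""
    years = sorted(freqs)
    vocabulary = {t for y in years for t in freqs[y]}
    candidates = []
    for term in vocabulary:
        series = [freqs[y][term] for y in years if freqs[y].get(term, 0.0) > 0]
        if len(series) >= 2 and series[-1] >= min_growth * series[0]:
            candidates.append((term, series[0], series[-1]))
    return sorted(candidates, key=lambda c: -c[2])

print(emerging_terms(yearly_term_frequencies(docs)))
# e.g. [('battery', ...), ('research', ...)] - terms with rising relative frequency
```
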
212

Superpixels and their Application for Visual Place Recognition in Changing Environments

Neubert, Peer 03 December 2015
Superpixels are the result of an image oversegmentation. They are an established intermediate-level image representation used for various applications including object detection, 3D reconstruction and semantic segmentation. While there are various approaches to creating such segmentations, there is a lack of knowledge about their properties; in particular, contradictory results have been published in the literature. This thesis identifies segmentation quality, stability, compactness and runtime as important properties of superpixel segmentation algorithms. While established evaluation methodologies are available for some of these properties, this is not the case for segmentation stability and compactness. This thesis therefore presents two novel metrics for their evaluation based on ground-truth optical flow. These two metrics are used together with other novel and existing measures to create a standardized benchmark for superpixel algorithms, which is used for an extensive comparison of available algorithms. The evaluation results motivate two novel segmentation algorithms that better balance the trade-offs of existing algorithms: the proposed Preemptive SLIC algorithm incorporates a local preemption criterion into the established SLIC algorithm and saves about 80 % of the runtime; the proposed Compact Watershed algorithm combines seeded watershed segmentation with compactness constraints to create regularly shaped, compact superpixels at the even higher speed of the plain watershed transformation. Operating autonomous systems based on visual navigation over the course of days, weeks or months requires repeated recognition of places despite severe appearance changes, as induced, for example, by illumination changes, day-night cycles, changing weather or seasons - a severe problem for existing methods. The second part of this thesis therefore presents two novel approaches that incorporate superpixel segmentations into place recognition in changing environments. The first is the learning of systematic appearance changes. Instead of matching images between, for example, summer and winter directly, an additional prediction step is proposed: based on superpixel vocabularies, a predicted image is generated that shows how the summer scene might look in winter, or vice versa. The presented results show that, if certain assumptions on the appearance changes and the available training data are met, existing holistic place recognition approaches can benefit from this additional prediction step. Holistic approaches to place recognition are known to fail in the presence of viewpoint changes. This thesis therefore presents a new place recognition system based on local landmarks and Star-Hough. Star-Hough is a novel approach for incorporating the spatial arrangement of local image features into the computation of image similarities. It is based on star graph models and Hough voting and is particularly suited for local features with low spatial precision and high outlier rates, as are expected in the presence of appearance changes. The novel landmarks are a combination of local region detectors and descriptors based on convolutional neural networks. This thesis presents and evaluates several new approaches to incorporating superpixel segmentations into local region detection. While the proposed system can be used with different types of local regions, in particular the combination with regions obtained from the novel multiscale superpixel grid proves superior to state-of-the-art methods - a promising basis for practical applications.
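
As a concrete illustration of what an oversegmentation looks like in practice, the following sketch computes SLIC superpixels with scikit-image; it only demonstrates the class of algorithms benchmarked in the thesis and is not the proposed Preemptive SLIC or Compact Watershed implementation (image and parameters are illustrative):

```python
from skimage import data, segmentation

# Example image shipped with scikit-image; any RGB image would do.
image = data.astronaut()

# SLIC oversegmentation into roughly 400 compact superpixels.
# 'compactness' trades colour homogeneity against regular, compact shapes -
# the same trade-off the thesis quantifies with its stability/compactness metrics.
labels = segmentation.slic(image, n_segments=400, compactness=10.0, start_label=0)

num_superpixels = labels.max() + 1
avg_size = image.shape[0] * image.shape[1] / num_superpixels
print(f"{num_superpixels} superpixels, {avg_size:.1f} pixels each on average")

# Visual sanity check: overlay the superpixel boundaries on the input image.
overlay = segmentation.mark_boundaries(image, labels)
```
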
213

A Novel Approach for Spherical Stereo Vision / Ein Neuer Ansatz für Sphärisches Stereo Vision

Findeisen, Michel 27 April 2015
The Professorship of Digital Signal Processing and Circuit Technology at Chemnitz University of Technology conducts research in the field of three-dimensional space measurement with optical sensors. This field has made major progress in recent years. For example, innovative active techniques such as the "structured light" principle can measure even homogeneous surfaces and have found their way into the consumer electronics market, most prominently in the form of Microsoft's Kinect®. Furthermore, high-resolution optical sensors enable powerful passive stereo vision systems for indoor surveillance, opening up new application domains such as security and assistance systems for domestic environments. However, the constrained field of view remains an essential limitation of all these technologies. For instance, in order to measure a volume the size of a living room, two to three 3D sensors currently have to be deployed. This is because the commonly utilized perspective projection principle constrains the visible area to a field of view of approximately 120°. In contrast, novel fish-eye lenses allow the realization of omnidirectional projection models, enlarging the visible field of view to more than 180°. In combination with a 3D measurement approach, the number of sensors required for entire room coverage can thus be reduced considerably. Motivated by the requirements of indoor surveillance, the present work focuses on combining the established stereo vision principle with omnidirectional projection methods. The complete 3D measurement of a living space by means of a single sensor is the major objective. As a starting point, Chapter 1 discusses the underlying requirements with reference to various relevant fields of application and states the distinct purpose of the present work. Chapter 2 then reviews the necessary mathematical foundations of computer vision. Based on the geometry of the optical imaging process, the projection characteristics of relevant principles are discussed and a generic method for modeling fish-eye cameras is selected. Chapter 3 deals with the extraction of depth information using classical (perspectively imaging) binocular stereo vision configurations. In addition to a complete recap of the processing chain, the measurement uncertainties that occur are investigated in particular. Chapter 4 then addresses methods for converting between different projection models. The example of mapping an omnidirectional to a perspective projection is used to develop a method for accelerating this process and thereby reducing the associated computational load. The errors that occur, as well as the necessary adjustment of image resolution, are an integral part of the investigation. As a practical example, a person-tracking application is used to demonstrate to what extent the use of "virtual views" can increase the recognition rate of people detectors in the context of omnidirectional monitoring. Subsequently, an extensive survey of omnidirectional stereo vision techniques is conducted in Chapter 5. It turns out that the complete 3D capture of a room is achievable by generating a hemispherical depth map, for which three cameras have to be combined into a trinocular stereo vision system.
A known trinocular stereo vision method is selected as a basis for further research. Furthermore, it is hypothesized that the performance can be increased considerably by applying a modified geometric constellation of cameras, more precisely an equilateral triangle, and by using an alternative method to determine the depth map. A novel method is presented that requires fewer operations to calculate the distance information and avoids the computationally costly depth map fusion step required by the comparative method. In order to evaluate the presented approach as well as the hypotheses, a hemispherical depth map is generated in Chapter 6 by means of the new method. Simulation results, based on artificially generated 3D space information and realistic system parameters, are presented and subjected to a subsequent error estimate. A demonstrator for generating real measurement data is introduced in Chapter 7, together with the methods applied for calibrating the system intrinsically as well as extrinsically. It turns out that the calibration procedure used cannot estimate the extrinsic parameters sufficiently. Initial measurements yield a hemispherical depth map and thus confirm the operativeness of the concept, but also reveal the drawbacks of the calibration used. The current implementation of the algorithm shows almost real-time behaviour. Finally, Chapter 8 summarizes the results obtained in the course of the studies and discusses them in the context of comparable binocular and trinocular stereo vision approaches. For example, the simulations carried out show a saving of up to 30% in stereo correspondence operations in comparison with the reference trinocular method. Furthermore, the concept introduced avoids a weighted averaging step for depth map fusion based on precision values that are costly to compute, while the achievable accuracy remains comparable for both trinocular approaches. In summary, it can be stated that, in the context of the present thesis, a measurement system has been developed that has great potential for future fields of application in industry, security in public spaces, and home environments.
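
The omnidirectional-to-perspective conversion discussed in Chapter 4 can be sketched as a lookup-table construction: for every pixel of the desired "virtual view", the corresponding viewing ray is projected into the fisheye image. The sketch below assumes an ideal equidistant fisheye model (r = f·θ) and is meant only to illustrate the principle, not the thesis's implementation; all parameters are invented:

```python
import numpy as np

def virtual_view_maps(persp_size, f_persp, f_fish, fish_center):
    """Build per-pixel lookup maps that sample an equidistant fisheye image
    (r = f_fish * theta) to render a pinhole 'virtual view' looking along the
    optical axis. Returns (map_u, map_v) with fisheye pixel coordinates."""
    h, w = persp_size
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Viewing ray of each virtual-view pixel in camera coordinates.
    x = (u - w / 2.0) / f_persp
    y = (v - h / 2.0) / f_persp
    z = np.ones_like(x)
    theta = np.arctan2(np.hypot(x, y), z)   # angle to the optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f_fish * theta                      # equidistant projection radius
    map_u = fish_center[0] + r * np.cos(phi)
    map_v = fish_center[1] + r * np.sin(phi)
    return map_u.astype(np.float32), map_v.astype(np.float32)

# Example: a 640x480 virtual view with roughly 53 deg horizontal field of view,
# sampled from a fisheye image whose centre is assumed to lie at (800, 600).
map_u, map_v = virtual_view_maps((480, 640), f_persp=640.0, f_fish=500.0,
                                 fish_center=(800.0, 600.0))
# With OpenCV available, the view could then be rendered via:
#   view = cv2.remap(fisheye_image, map_u, map_v, interpolation=cv2.INTER_LINEAR)
```
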
214

High-Level-Synthese von Operationseigenschaften / High-Level Synthesis Using Operation Properties

Langer, Jan 12 December 2011
The complete verification approach using special operation properties is an accepted methodology for the formal verification of digital circuits. Operation properties describe the behavior of a circuit during a fixed time interval and can be sequentially concatenated in order to specify the overall behavior. Additionally, a formal completeness check proves that the set of properties uniquely and exhaustively determines the outputs of the circuit under verification for every valid sequence of input signal values. This work examines how a circuit description can be automatically derived from a set of operation properties whose completeness has been proven. In contrast to the traditional design flow at register-transfer level (RTL), this method offers two advantages. First, the proof of completeness helps to avoid many kinds of design errors. Second, a description in terms of operation properties resembles the timing diagrams often used in textual specifications, so the design level is brought closer to the specification level and errors caused by manual refinement steps are avoided. The design tool vhisyn performs the high-level synthesis (HLS) from a complete set of operation properties to a description at RTL. The results show that both the synthesis algorithms and the generated circuit descriptions are efficient and allow the realization of larger designs, as demonstrated by means of two case studies.
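
To give a rough intuition for what completeness of a set of operation properties means, the following toy sketch (a drastically simplified model, not the vhisyn flow or an actual property language) checks that every state/input combination of a small controller is covered by exactly one property, so that the outputs are uniquely determined for every input sequence:

```python
from itertools import product

# Drastically simplified model of "operation properties": each property fixes the
# response of a small controller for one conceptual state and one input condition.
# Tuple layout: (state, input predicate, output value, next state) - names invented.
properties = [
    ("IDLE", lambda x: x == 0, 0, "IDLE"),
    ("IDLE", lambda x: x == 1, 1, "BUSY"),
    ("BUSY", lambda x: True,   0, "IDLE"),
]

def is_complete(properties, states=("IDLE", "BUSY"), inputs=(0, 1)):
    """Case-split check: every (state, input) pair must be covered by exactly one
    property, so outputs and successor states are uniquely determined for every
    possible input sequence."""
    for state, value in product(states, inputs):
        matching = [p for p in properties if p[0] == state and p[1](value)]
        if len(matching) != 1:
            return False, (state, value, len(matching))
    return True, None

print(is_complete(properties))  # (True, None) for this toy property set
```
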
215

Inhaltsbasierte Analyse und Segmentierung narrativer, audiovisueller Medien / Content-based Analysis and Segmentation of Narrative, Audiovisual Media

Rickert, Markus 26 September 2017
Audiovisual media, especially movies and TV shows, have developed into major mass media within the last hundred years. Today, large collections of audiovisual media are managed in databases and media libraries and made available to professional users as well as private consumers. A particular challenge lies in indexing, searching and describing these multimedia assets. The segmentation of audiovisual media, as a branch of video analysis, forms the basis for various applications in multimedia information retrieval, content browsing and video summarization. The segmentation of narrative media into semantically meaningful scenes or sequences is especially difficult: it requires a particular understanding of the cinematic style elements that were used during the creative process of film production to support plot and narration. This work examines these cinematic style elements and how they can be exploited in algorithmic analysis methods. For this purpose, an analysis framework was developed, along with a method for the sequence segmentation of films and videos. It can be shown that, using a multi-stage analysis process based on visual MPEG-7 descriptors, semantic relationships can be found in narrative audiovisual media that lead to an appropriate sequence segmentation.
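
A minimal example of the kind of low-level signal such a multi-stage analysis can start from (not the thesis's actual framework) is hard-cut detection by comparing colour histograms of consecutive frames; scene or sequence boundaries would then be derived from such shot boundaries together with higher-level stylistic cues. The file name below is a placeholder:

```python
import cv2

def detect_hard_cuts(video_path, threshold=0.5):
    """Return frame indices where the HSV colour histogram changes abruptly -
    a classic first stage before any semantic sequence segmentation."""
    cap = cv2.VideoCapture(video_path)
    cuts, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Correlation close to 1 means similar frames; a sharp drop signals a cut.
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < threshold:
                cuts.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts

# Hypothetical usage; "movie.mp4" is a placeholder path.
# print(detect_hard_cuts("movie.mp4"))
```
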
216

Schlussbericht zum InnoProfile Forschungsvorhaben sachsMedia / Final Report on the InnoProfile Research Project sachsMedia - Cooperative Producing, Storage, Retrieval, and Distribution of Audiovisual Media (FKZ: 03IP608)

Berger, Arne, Eibl, Maximilian, Heinich, Stephan, Knauf, Robert, Kürsten, Jens, Kurze, Albrecht, Rickert, Markus, Ritter, Marc 29 September 2012
Over the past 20 years, around 60 private regional television stations have established themselves in Saxony, more than in any other German state. These broadcasters often take on information-provision tasks that the public-service broadcasters fulfil only insufficiently. The InnoProfile research project sachsMedia focused on the existential and multifaceted period of upheaval facing small and medium-sized enterprises in the field of regional media distribution. Particularly critical for the media industry was the transition from analogue to digital television broadcasting in 2010. The research initiative sachsMedia addressed the underlying problems and worked on fundamental research questions in the two thematic areas of Annotation & Retrieval and Media Distribution. The present research report summarizes the results achieved.
217

Measuring coselectional constraint in learner corpora: A graph-based approach

Shadrova, Anna Valer'evna 24 July 2020 (has links)
This corpus-linguistic thesis analyzes the acquisition of coselectional constraint in learners of German as a second language in a quasi-longitudinal design based on the Kobalt corpus. In addition to a number of statistical analyses, the thesis primarily develops a graph-based analysis built on the graph metric of Louvain modularity. The metric is computed for a range of subcorpora chosen by various criteria and extensively validated internally through several sampling techniques. The results robustly indicate a dependency of modularity on language acquisition progress, higher modularity in L1 than in L2 speakers, lower modularity in Belarusian than in Chinese learners, and a U-shaped learning development in Belarusian, but not in Chinese learners. Group differences are discussed from typological, cognitive, cultural-discursive and register perspectives. Finally, future applications of graph-based modeling in core-linguistic research are outlined. In addition, gaps in the theoretical, usage-based description of coselection phenomena (phraseology, idiomaticity, collocation) are identified and a multidimensional, functional model is proposed as an alternative.
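
The core measurement can be sketched in a few lines: build a word co-occurrence graph from a (sub)corpus and compute the modularity of its Louvain partition. The toy sentences and windowing below are illustrative only; the thesis works on the Kobalt corpus with its own preprocessing and extensive sampling-based validation:

```python
import itertools
import networkx as nx

# Toy learner sentences; in the thesis these would be lemmatised texts from the
# Kobalt corpus, split into subcorpora by L1 background and acquisition stage.
sentences = [
    ["decision", "make", "important"],
    ["decision", "make", "quickly"],
    ["problem", "solve", "quickly"],
    ["problem", "solve", "together"],
]

# Co-occurrence graph: words are nodes, an edge connects words occurring in the
# same sentence, and edge weights count how often they co-occur.
G = nx.Graph()
for tokens in sentences:
    for a, b in itertools.combinations(sorted(set(tokens)), 2):
        weight = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=weight)

# Louvain community detection and the resulting modularity score
# (requires networkx >= 2.8; the python-louvain package works analogously).
communities = nx.community.louvain_communities(G, weight="weight", seed=42)
modularity = nx.community.modularity(G, communities, weight="weight")
print(f"{len(communities)} communities, modularity = {modularity:.3f}")
```
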
218

Semi-automated Ontology Generation for Biocuration and Semantic Search

Wächter, Thomas 01 February 2011
Background: In the life sciences, the amount of literature and experimental data grows at a tremendous rate. In order to effectively access and integrate these data, biomedical ontologies – controlled, hierarchical vocabularies – are being developed. Creating and maintaining such ontologies is a difficult, labour-intensive, manual process. Many computational methods that can support ontology construction have been proposed in the past; however, good, validated systems are largely missing. Motivation: The biocuration community plays a central role in the development of ontologies. Any method that can support their efforts has the potential to have a huge impact in the life sciences. Recently, a number of semantic search engines were created that make use of biomedical ontologies for document retrieval. To transfer the technology to other knowledge domains, suitable ontologies need to be created. One area where ontologies may prove particularly useful is the search for alternative methods to animal testing, where comprehensive search is of special interest in order to determine whether alternative methods are available. Results: The Dresden Ontology Generator for Directed Acyclic Graphs (DOG4DAG) developed in this thesis is a system that supports the creation and extension of ontologies by semi-automatically generating terms, definitions, and parent-child relations from text in PubMed, the web, and PDF repositories. The system is seamlessly integrated into OBO-Edit and Protégé, two widely used ontology editors in the life sciences. DOG4DAG generates terms by identifying statistically significant noun phrases in text; for definitions and parent-child relations it employs pattern-based web searches. Each generation step has been systematically evaluated using manually validated benchmarks. The term generation yields high-quality terms that are also found in manually created ontologies. Definitions can be retrieved for up to 78% of terms and child-ancestor relations for up to 54%. No other validated system exists that achieves comparable results. To improve the search for information on alternative methods to animal testing, an ontology has been developed that contains 17,151 terms, of which 10% were newly created and 90% were re-used from existing resources. This ontology is the core of Go3R, the first semantic search engine in this field. When a user performs a search query with Go3R, the search engine expands the request using the structure and terminology of the ontology. The machine classification employed in Go3R is capable of distinguishing documents related to alternative methods from those that are not, with an F-measure of 90% on a manual benchmark. Approximately 200,000 of the 19 million documents listed in PubMed were identified as relevant, either because they contain a specific term or due to the automatic classification. The Go3R search engine is available online at www.Go3R.org.
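
The ontology-based query expansion that a semantic search engine such as Go3R performs can be illustrated with a small, entirely hypothetical ontology fragment: a query term is expanded with its synonyms and all descendant terms before retrieval (terms and structure below are invented for illustration, not taken from the actual Go3R ontology):

```python
# Hypothetical ontology fragment: term -> (synonyms, child terms)
ontology = {
    "alternative method": ([], ["in vitro method", "in silico method"]),
    "in vitro method": (["cell-based assay"], ["organ-on-chip"]),
    "in silico method": (["computational model"], []),
    "organ-on-chip": ([], []),
}

def expand_query(term, ontology):
    """Expand a query term with its synonyms and all descendant terms,
    as an ontology-backed semantic search engine would before retrieval."""
    expanded, stack = set(), [term]
    while stack:
        current = stack.pop()
        if current in expanded:
            continue
        expanded.add(current)
        synonyms, children = ontology.get(current, ([], []))
        expanded.update(synonyms)
        stack.extend(children)
    return expanded

print(expand_query("alternative method", ontology))
# e.g. {'alternative method', 'in vitro method', 'cell-based assay',
#       'organ-on-chip', 'in silico method', 'computational model'}
```
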
219

Integration von Generalisierungsfunktionalität für die automatische Ableitung verschiedener Levels of Detail von OpenStreetMap Webkarten / Integration of generalization functionality for the automatic derivation of different levels of detail in OpenStreetMap web maps

Klammer, Ralf 16 June 2011
OpenStreetMap (OSM) has established itself very quickly since its founding in 2004 and has become a viable alternative to comparable commercial applications. This success is clearly due to the revolutionary basic concept of the project: spatial data are collected by members worldwide and contributed to the OSM project, and the underlying licence agreement ensures that OSM data are freely available and can be reused free of charge. Above all, the idea of independence from proprietary data has led to strong and still growing global participation, and as a result the available data now reach high density and accuracy. The most widespread visualizations are interactive, freely scalable world maps rendered by the fully automated software solutions Mapnik and Osmarender. As a consequence, cartographic principles and rules must be formalized and implemented. Particularly with respect to cartographic generalization, the corresponding implementations show some serious shortcomings, which form the starting point of this thesis.
Based on an analysis of the current state, existing deficiencies are identified and possibilities for integrating generalization functionality are then examined. Recent developments aim at deploying interoperable systems in the context of cartographic generalization, with the goal of providing generalization functionality over the Internet. The basis for this are the Web Processing Services (WPS) specified by the Open Geospatial Consortium (OGC), which enable the analysis and processing of spatial data. In this context, Web Generalization Services (WebGen-WPS) are examined for possible integration into the software solutions and thus represent a central object of investigation of the present work. Mapnik, not least because of its open-source code, offers ideal conditions for such implementations. For processing OSM data, Mapnik uses the free spatial database PostGIS, which itself provides functions for analyzing and processing spatial data. In this context, it is additionally examined to what extent PostGIS functions offer potential for applying cartographic generalization.
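
One of the simplest generalization operators such a WPS- or PostGIS-based pipeline would expose is line simplification. The sketch below applies Douglas-Peucker simplification via Shapely to an invented, OSM-like polyline at two zoom-dependent tolerances; it illustrates the kind of generalization functionality discussed, not the concrete services examined in the thesis:

```python
from shapely.geometry import LineString

# A road-like polyline in projected metre coordinates (invented sample data;
# a real input would be an OSM way fetched from PostGIS).
road = LineString([(0, 0), (12, 3), (25, 4), (40, 1), (55, 8), (70, 7), (90, 15)])

# Douglas-Peucker simplification: larger tolerances for smaller map scales.
for zoom, tolerance_m in [("detailed", 2.0), ("overview", 8.0)]:
    simplified = road.simplify(tolerance_m, preserve_topology=True)
    print(zoom, len(simplified.coords), "of", len(road.coords), "vertices kept")

# In PostGIS the analogous operation would be ST_SimplifyPreserveTopology(geom, tolerance).
```
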
