101

Dynamische Erzeugung von Diagrammen aus standardisierten Geodatendiensten

Mann, Ulrich 07 August 2012 (has links)
Spatial data infrastructures (SDI) have become increasingly widespread in recent years through the creation of new standards for the exchange of geodata. The open service-interface descriptions developed by the Open Geospatial Consortium (OGC), a consortium of research institutions and private companies, improve interoperability in SDI. OGC-compliant geoservices are currently used mainly for the acquisition, management, processing, and visualisation of geodata. With the growing number of geoservices, the availability of geodata is rising. At the same time, the trend towards generating ever larger volumes of data, for example through scientific simulations, continues (Unwin et al., 2006). This leads to a growing need for functionality to effectively explore and analyse geodata, since complex relationships in large datasets have to be examined and relevant information filtered out. The techniques applied to this end are described comprehensively in the research field of Visual Analytics, which is concerned with the development of tools and techniques for automated analysis and interactive visualisation that foster the understanding of large and complex datasets (Keim et al., 2008). Current web-based applications for exploration and analysis are mainly client-server systems operating on tightly coupled databases (see subsection 3.3). With the growing capabilities of spatial data infrastructures, there is increasing interest in offering data-analysis functionality within an SDI. The interplay of established analysis techniques and standards for processing geodata can give users the possibility to work interactively, in a web application, on geodata integrated ad hoc. Using current technologies, this makes it possible to gain insights into complex data, understand the relationships underlying them, and derive statements for decision support. This thesis investigates the suitability of the OGC WMS GetFeatureInfo operation for analysing spatio-temporal geodata in an SDI. The focus is on the dynamic generation of diagrams using external Web Map Services (WMS) as data sources. After a review of the basics of data modelling and SDI standards, relevant aspects of data analysis and diagram visualisation are discussed. A task taxonomy is compiled to determine which spatio-temporal analyses can be realised with the GetFeatureInfo operation. A system architecture for performing data analysis on distributed geodata is then designed. To ensure consistent, OGC-compliant data exchange between the system components, a GML schema is developed. Finally, a prototype implementation verifies the feasibility of diagram-based analysis on climate simulation data from the ECHAM5 model.
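As an illustration of the data access underlying such diagram generation, the following Python sketch assembles a WMS 1.3.0 GetFeatureInfo request and asks for a GML response; the endpoint URL and layer names are hypothetical stand-ins, not services used in the thesis.

```python
import urllib.parse
import urllib.request

# Hypothetical WMS endpoint and layer; real services and names will differ.
WMS_URL = "https://example.org/wms"

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetFeatureInfo",
    "LAYERS": "temperature",         # layer rendered in the map
    "QUERY_LAYERS": "temperature",   # layer(s) to query at the clicked pixel
    "CRS": "EPSG:4326",
    "BBOX": "47.0,5.0,55.0,15.0",    # WMS 1.3.0: lat/lon axis order for EPSG:4326
    "WIDTH": "800",
    "HEIGHT": "600",
    "I": "400",                      # pixel column of the query point
    "J": "300",                      # pixel row of the query point
    "INFO_FORMAT": "application/vnd.ogc.gml",  # request GML for further processing
}

url = WMS_URL + "?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    gml = resp.read().decode("utf-8")  # feature data to feed a diagram component
print(gml[:500])
```

A client issuing such requests for a series of timestamps can collect the per-pixel attribute values from which a time-series diagram is then rendered.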
102

Human Mobility and Application Usage Prediction Algorithms for Mobile Devices

Baumann, Paul 19 August 2016 (has links)
Mobile devices such as smartphones and smart watches are ubiquitous companions in humans' daily life. Since 2014, there have been more mobile devices on Earth than humans. Mobile applications utilize the sensors and actuators of these devices to support individuals in their daily life. In particular, 24% of Android applications leverage users' mobility data. For instance, this data allows applications to understand which places an individual typically visits, and thus to provide her with transportation information, location-based advertisements, or smart home heating control. These and similar scenarios require Internet access from everywhere and at any time; indeed, 83% of the applications available in the Android Play Store require the Internet to operate properly. Mobile applications such as Google Now or Apple Siri utilize human mobility data to anticipate where a user will go next or which information she is likely to access en route to her destination. However, predicting human mobility is a challenging task. Existing mobility prediction solutions are typically optimized a priori for a particular application scenario and mobility prediction task. There is no approach that allows for automatically composing a mobility prediction solution depending on the underlying prediction task and other parameters. Such an approach is required if mobile devices are to support a plethora of mobile applications, each of which supports its users by leveraging mobility predictions in a distinct application scenario. Mobile applications also rely strongly on the availability of the Internet to work properly, yet mobile cellular network providers are struggling to provide the necessary cellular resources. Mobile applications generated a monthly average mobile traffic volume that ranged between 1 GB in Asia and 3.7 GB in North America in 2015. The Ericsson Mobility Report Q1 2016 predicts that by the end of 2021 this mobile traffic volume will increase twelvefold. The consequences are higher costs for both providers and consumers and a reduced quality of service due to congested mobile cellular networks. Several countermeasures can be applied to cope with these problems. For instance, mobile applications can apply caching strategies to prefetch application content by predicting which applications will be used next. However, existing solutions suffer from two major shortcomings: they either (1) do not incorporate traffic volume information into their prefetching decisions and thus generate a substantial amount of cellular traffic, or (2) require a modification of mobile application code. In this thesis, we present novel human mobility and application usage prediction algorithms for mobile devices. These two major contributions address the aforementioned problems of (1) selecting a human mobility prediction model and (2) prefetching mobile application content to reduce cellular traffic. First, we address the selection of human mobility prediction models. We report on an extensive analysis of the influence of temporal, spatial, and phone context data on the performance of mobility prediction algorithms. Building upon our analysis results, we present (1) SELECTOR, a novel algorithm for selecting individual human mobility prediction models, and (2) MAJOR, an ensemble learning approach for human mobility prediction.
Furthermore, we introduce population mobility models and demonstrate their practical applicability. In particular, we analyze techniques that focus on the detection of wrong human mobility predictions; among these, we design and evaluate an ensemble learning algorithm called LOTUS. Second, we present EBC, a novel algorithm for prefetching mobile application content. EBC's goal is to reduce cellular traffic consumption while keeping application content fresh. With respect to existing solutions, EBC contributes novel techniques (1) to adapt the prefetching strategy to the available network type and (2) to incorporate application traffic volume predictions into the prefetching decisions. EBC also achieves a reduction in application launch time at the cost of a negligible increase in energy consumption. Developing human mobility and application usage prediction algorithms requires access to human mobility and application usage data. To this end, in this thesis we leverage three publicly available data sets. Furthermore, we address the shortcomings of these data sets, namely (1) the lack of ground-truth mobility data and (2) the lack of human mobility data at short-term events like conferences. With JK2013 and the UbiComp Data Collection Campaign (UbiDCC), we contribute two human mobility data sets that address these shortcomings. We also develop and make publicly available a mobile application called LOCATOR, which was used to collect our data sets. In summary, the contributions of this thesis are a step further towards supporting mobile applications and their users. With SELECTOR, we contribute an algorithm that optimizes the quality of human mobility predictions by appropriately selecting parameters. To reduce the cellular traffic footprint of mobile applications, we contribute with EBC a novel approach for prefetching mobile application content by leveraging application usage predictions. Furthermore, we provide insights into how, and to what extent, wrong and uncertain human mobility predictions can be detected. Lastly, with our mobile application LOCATOR and two human mobility data sets, we contribute practical tools for researchers in the human mobility prediction domain.
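To make the ensemble idea behind approaches like MAJOR concrete, here is a minimal Python sketch of majority voting over simple next-place predictors; the individual predictors and the unweighted voting scheme are illustrative assumptions, not the algorithms evaluated in the thesis.

```python
from collections import Counter

# Hedged sketch of an ensemble next-place predictor; the thesis's actual
# models, features, and weighting scheme are not reproduced here.

def markov_predictor(history):
    """Predict the place that most often followed the current place."""
    current = history[-1]
    followers = [history[i + 1] for i in range(len(history) - 1)
                 if history[i] == current]
    return Counter(followers).most_common(1)[0][0] if followers else current

def frequency_predictor(history):
    """Predict the overall most visited place."""
    return Counter(history).most_common(1)[0][0]

def recency_predictor(history):
    """Predict a return to the previously visited place."""
    return history[-2] if len(history) > 1 else history[-1]

def ensemble_predict(history, predictors):
    """Majority vote over the individual predictors."""
    votes = Counter(p(history) for p in predictors)
    return votes.most_common(1)[0][0]

visits = ["home", "work", "gym", "home", "work", "cafe", "home", "work"]
prediction = ensemble_predict(
    visits, [markov_predictor, frequency_predictor, recency_predictor])
print("predicted next place:", prediction)
```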
103

Estimating the motility parameters of single motor proteins from censored experimental data

Ruhnow, Felix 16 December 2016 (has links)
Cytoskeletal motor proteins are essential to the function of a wide range of intracellular mechano-systems. The biophysical characterization of the movement of motor proteins along their filamentous tracks is therefore of great importance. Towards this end, in vitro stepping motility assays are commonly used to determine the motors' velocities and runlengths. However, comparing results from such experiments has proved difficult due to influences from variations in the experimental setups, the experimental conditions, and the data analysis methods. This work describes a novel unified method to evaluate traces of fluorescently labeled, processive dimeric motor proteins and proposes an algorithm to correct the measurements for finite filament length as well as photobleaching. Statistical errors of the proposed evaluation method are estimated by a bootstrap method. Numerical simulations and experimental data from GFP-labeled kinesin-1 motors stepping along immobilized microtubules were used to verify the proposed approach, and it was shown (i) that the velocity distribution should be fitted by a t location-scale probability density function rather than a normal distribution, (ii) that the temperature during the experiments should be controlled with a precision well below 1 K, (iii) that the impossibility of measuring events shorter than the image acquisition time needs to be accounted for, (iv) that the motor's runlength can be estimated independently of the filament length distribution, and (v) that the dimeric nature of the motors needs to be considered when correcting for photobleaching. This allows for a better statistical comparison of motor proteins influenced by other external factors, e.g. ionic strength, ATP concentration, or post-translational modifications of the filaments. In this context, the described method was then applied to experimental data to investigate the influence of the nucleotide state of the microtubule on the motility behavior of kinesin-1 motor proteins. Here, a small but significant difference was found in the velocity measurements, but no significant difference in the runlength and interaction time measurements. Consequently, this work provides a framework for the evaluation of a wide range of experiments with single fluorescently labeled motor proteins.
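A minimal Python sketch of the statistical recipe described above — fitting a t location-scale distribution to velocity data and bootstrapping the statistical error — follows; the velocity numbers are synthetic stand-ins for measured kinesin-1 traces, not data from the thesis.

```python
import numpy as np
from scipy import stats

# Synthetic velocities (nm/s) standing in for measured kinesin-1 traces.
rng = np.random.default_rng(0)
velocities = stats.t.rvs(df=4, loc=800, scale=60, size=300, random_state=rng)

# Fit a t location-scale distribution: df (shape), loc (velocity estimate),
# scale (spread). This is more robust to heavy tails than a normal fit.
df, loc, scale = stats.t.fit(velocities)
print(f"fitted velocity: {loc:.1f} nm/s (df={df:.1f}, scale={scale:.1f})")

# Bootstrap the location parameter to estimate the statistical error.
boot_locs = []
for _ in range(500):
    sample = rng.choice(velocities, size=len(velocities), replace=True)
    boot_locs.append(stats.t.fit(sample)[1])
lo, hi = np.percentile(boot_locs, [2.5, 97.5])
print(f"95% bootstrap CI for velocity: [{lo:.1f}, {hi:.1f}] nm/s")
```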
104

Geographic object-based image analysis

Marpu, Prashanth Reddy 17 April 2009 (has links)
The field of earth observation (EO) has seen tremendous development in recent years owing to the increasing quality of sensor technology and the increasing number of operational satellites launched by space organizations and companies around the world. Traditionally, satellite data has been analyzed by considering only the spectral characteristics measured at a pixel; spatial relations and context were often ignored. With the advent of very high resolution satellite sensors providing a spatial resolution of ≤ 5 m, the shortcomings of traditional pixel-based image processing techniques became evident. The need for new methods then led to a focus on the so-called object-based image analysis (OBIA) methodologies. Unlike pixel-based methods, object-based methods, which rest on segmenting the image into homogeneous regions, use the shape, texture, and context associated with the patterns, thus providing an improved basis for image analysis. Remote sensing data normally has to be processed differently from other types of images. In the geographic sense, OBIA is referred to as Geographic Object-Based Image Analysis (GEOBIA), where the pseudo-prefix GEO emphasizes the geographic component. This thesis provides an overview of the principles of GEOBIA, describes some fundamentally new contributions to OBIA in the geographic context and, finally, summarizes the current status with ideas for future developments.
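The core OBIA idea — describe and classify segmented objects by shape, texture, and context rather than single pixels — can be sketched in a few lines of Python with scikit-image; the sample image and the feature choices below are illustrative assumptions only.

```python
from skimage import data, segmentation, measure

# A sample photograph stands in for a very high resolution satellite scene.
image = data.astronaut()

# Segment the image into homogeneous regions (the "objects" of OBIA).
segments = segmentation.slic(image, n_segments=250, compactness=10,
                             start_label=1)

# Per-object features: a mean "spectral" response plus shape descriptors,
# instead of a per-pixel spectral value alone.
for region in measure.regionprops(segments, intensity_image=image[..., 0]):
    features = {
        "mean_intensity": region.mean_intensity,  # spectral property
        "area": region.area,                      # shape property
        "eccentricity": region.eccentricity,      # shape property
    }
    if region.label <= 3:  # print a few objects only
        print(region.label, features)
```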
105

FCART: A New FCA-based System for Data Analysis and Knowledge Discovery

Neznanov, Alexey A., Ilvovsky, Dmitry A., Kuznetsov, Sergei O. 28 May 2013 (has links)
We introduce a new software system called Formal Concept Analysis Research Toolbox (FCART). Our goal is to create a universal integrated environment for knowledge and data engineers. FCART is constructed upon an iterative data analysis methodology and provides a built-in set of research tools based on Formal Concept Analysis techniques for working with object-attribute data representations. The provided toolset allows for the fast integration of extensions on several levels: from internal scripts to plugins. FCART was successfully applied in several data mining and knowledge discovery tasks. Examples of applying the system in medicine and criminal investigations are considered.
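To illustrate the object-attribute data representations FCART operates on, the following Python sketch enumerates the formal concepts of a toy context by brute-force closure; FCART itself relies on far more efficient FCA algorithms and an iterative analysis methodology.

```python
from itertools import combinations

# Toy formal context: objects with their attribute sets.
objects = ["duck", "owl", "cat"]
attrs = {"duck": {"flies", "swims"},
         "owl": {"flies", "hunts"},
         "cat": {"hunts"}}

def common_attrs(objs):
    """Intent: attributes shared by all given objects."""
    sets = [attrs[o] for o in objs]
    return set.intersection(*sets) if sets else set.union(*attrs.values())

def objects_having(intent):
    """Extent: objects possessing every attribute of the intent."""
    return {o for o in objects if intent <= attrs[o]}

# Every formal concept arises as the closure of some object subset.
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        intent = common_attrs(list(objs))
        extent = objects_having(intent)
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(set(extent) or "{}", "<->", set(intent) or "{}")
```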
106

Komplexe Datenanalyseprozesse in serviceorientierten Umgebungen

Habich, Dirk 08 December 2008 (has links)
This dissertation deals with the embedding of complex data analysis processes in service-oriented environments. The treatment begins with a concrete application domain in which such analysis processes play a decisive role in knowledge discovery and without whose help no progress can be achieved. In the second part, concrete complex data analysis processes are developed, which form the starting point for the discussion of their embedding in a service-oriented environment. This embedding is finally addressed in the third part of the dissertation, and corresponding extensions to the technologies of the best-known realisation form are presented. The evaluation shows that this new form is considerably better suited to complex data analysis processes than the previous variant.
107

Musikgeschichte anders erzählen? Das Beispiel der 1970er in Österreich. Musikhistoriographie in der Zeit der Digitalisierung

Berner, Elias, Jaklin, Julia, Provaznik, Peter, Santi, Matej, Szabó-Knotik, Cornelia 29 October 2020 (has links)
The project “Telling Sounds” (www.mdw.ac.at/imi/tellingsounds) has the goal of preparing audio(visual) sources available online (clips) as a basis for understanding contemporary music history. The metadata of these clips will be enriched and grouped according to thematic aspects as a starting point for case studies. As the basis of such a digital research environment, a special tool will be developed that makes it possible to visualize the connections between clips, entities, and meanings, thus opening them up for further research. As an example of the consequences and possibilities of such a music-historical representation, the following text relates different musical and media forms of expression in Vienna in the 1970s: the Beethoven anniversary, the history of Austropop, the communication of women-related topics on the radio, and the propagandistic significance of this medium during the Cold War in connection with the topos “Music Country Austria” are thus made comprehensible as facets of music-related constructions of meaning in a concrete historical time and place.
108

Modeling of census data in a multidimensional environment

Günzel, Holger, Lehner, Wolfgang, Eriksen, Stein, Folkedal, Jon 13 June 2023 (has links)
The general aim of the KOSTRA project, initiated by Statistics Norway, is to set up a data reporting chain from the Norwegian municipalities to a central database at Statistics Norway. In this paper, we present an innovative data model for supporting a data analysis process consisting of two sequential data production phases using two conceptual database schemas. The first schema must provide a sound basis for efficient analysis, reflecting a multidimensional view on the data. The second schema must cover all structural information, which is essential for supporting the generation of electronic forms as well as for performing consistency checks on the gathered information. The resulting modeling approach provides a seamless solution to both challenges. Based on the relational model, both schemas are powerful enough to cover the heterogeneity of the data sources, handle complex structural information, and provide a versioning mechanism for long-term analysis.
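One hedged way to picture the two-schema approach is the following Python/SQLite sketch: one schema carries structural information with versioning for form generation and consistency checks, the other a multidimensional fact table for analysis. All table and column names are hypothetical illustrations, not the actual KOSTRA model.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Structural schema: which indicators exist, with versioning for
# long-term analysis and metadata usable for consistency checks.
cur.execute("""
CREATE TABLE indicator (
    indicator_id INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    unit         TEXT NOT NULL,
    valid_from   INTEGER NOT NULL,   -- first reporting year (versioning)
    valid_to     INTEGER             -- NULL = still valid
)""")

# Multidimensional schema: a fact table over municipality/year/indicator
# dimensions, supporting the analytical view on the reported data.
cur.execute("""
CREATE TABLE report_fact (
    municipality_id INTEGER NOT NULL,
    year            INTEGER NOT NULL,
    indicator_id    INTEGER NOT NULL REFERENCES indicator(indicator_id),
    value           REAL,
    PRIMARY KEY (municipality_id, year, indicator_id)
)""")

cur.execute("INSERT INTO indicator VALUES (1, 'kindergarten places', 'count', 1999, NULL)")
cur.execute("INSERT INTO report_fact VALUES (301, 2000, 1, 412.0)")
print(cur.execute("""
    SELECT i.name, f.year, f.value
    FROM report_fact f JOIN indicator i USING (indicator_id)
""").fetchall())
```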
109

The Impact of Land-use change on the Livelihoods of Rural Communities: A case-study in Edd Al-Fursan Locality, South Darfur, Sudan

Bashir, Masarra 31 January 2013 (has links)
The objectives of this work are to determine the dominant land-use types in the study area of Edd Al-Fursan (Sudan) and to map and analyse land-use changes between 1972 and 2008 using multi-temporal satellite imagery (Landsat MSS, TM, and ETM as well as Terra ASTER). In addition, the impact of land-use changes on the livelihoods of the rural population with respect to the availability of resources is evaluated using quantitative research methods. To this end, three change detection methods are applied: Post Classification Comparison (PCC), Change Vector Analysis (CVA) based on the Tasseled Cap Transformation (TCT), and Iteratively Reweighted Multivariate Alteration Detection (IR-MAD) combined with the Maximum Autocorrelation Factor (MAF). Besides the remote sensing analyses, a socio-economic field study was conducted, comprising pre-structured questionnaires, interviews, and group discussions with persons in regional and local key positions and with elderly people. A maximum likelihood classification of the satellite images yields five land-use and land-cover classes: grassland, woodland, fallow land, cultivated land, and agriculturally unused land. The classification provides an accurate basis for mapping, quantifying, and analysing the changes; the overall accuracy is 83% for 1972 and 1984, 85% for 1989, 87% for 1999, and 92% for 2008. The investigations show that Post Classification Comparison (PCC) is a fully suitable and easily applied method of change analysis. Change Vector Analysis (CVA) based on the Tasseled Cap Transformation (TCT) is likewise used for mapping and determining land-use changes: the TCT transforms the spectral image content into the components greenness and brightness, and the CVA is carried out in the coordinate system thus defined. The results, change vectors with measurable direction and magnitude, demonstrate that the method is suitable for mapping vegetation cover, in particular deforestation and reforestation. Applying Multivariate Alteration Detection (MAD) in combination with the Maximum Autocorrelation Factor (MAF) visualises changes of the land-use classes over the period under consideration. The results prove that MAD is very well suited for change analysis in multi-spectral satellite images. Moreover, it is shown that the combination with MAF can decisively improve the MAD results, since noise and minor changes are suppressed while significant changes stand out more clearly and thus become easier to interpret. To identify the causes of land-use change and the impact of these changes on the livelihoods of the rural population in the project area, a survey using a pre-structured questionnaire, interviews, and group discussions was carried out with 100 respondents aged between 42 and 65 in four randomly selected villages.
The evaluation of the socio-economic data allows the extraction of the factors influencing land use and its change, and of the specific effects of these changes on livelihoods in the villages with respect to the availability of natural resources. The results of this research show that remote sensing and socio-economic data analysis can be linked efficiently to reveal anthropogenic influences on the type and dynamics of land use. For the time series under consideration, the investigations prove that increasing population numbers in the Edd Al-Fursan area are directly linked to changes in land use.
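A simplified Python sketch of the Change Vector Analysis step described above: compute per-pixel change magnitude and direction in the greenness/brightness space. Real Tasseled Cap Transformations use sensor-specific band coefficients; the two components here are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (100, 100)

# Greenness/brightness per pixel at two dates (synthetic stand-ins for
# Tasseled-Cap-transformed Landsat scenes).
green_t1, bright_t1 = rng.random(shape), rng.random(shape)
green_t2, bright_t2 = rng.random(shape), rng.random(shape)

dg = green_t2 - green_t1    # change along the greenness axis
db = bright_t2 - bright_t1  # change along the brightness axis

magnitude = np.hypot(dg, db)                 # how strongly a pixel changed
direction = np.degrees(np.arctan2(dg, db))   # what kind of change occurred

# Threshold the magnitude to map "changed" pixels; the angle then helps
# separate e.g. vegetation loss from regrowth.
changed = magnitude > np.percentile(magnitude, 90)
print("changed pixels:", changed.sum())
print("mean change angle of changed pixels: %.1f deg"
      % direction[changed].mean())
```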
110

Universality and variability in the statistics of data with fat-tailed distributions: the case of word frequencies in natural languages

Gerlach, Martin 10 March 2016 (has links)
Natural language is a remarkable example of a complex dynamical system which combines variation with universal structure emerging from the interaction of millions of individuals. Understanding the statistical properties of texts is not only crucial in applications of information retrieval and natural language processing, e.g. search engines, but also allows deeper insights into the organization of knowledge in the form of written text. In this thesis, we investigate the statistical and dynamical processes underlying the co-existence of universality and variability in word statistics. We combine a careful statistical analysis of large empirical databases on language usage with analytical and numerical studies of stochastic models. We find that the fat-tailed distribution of word frequencies is best described by a generalized Zipf's law characterized by two scaling regimes, in which the values of the parameters are extremely robust with respect to time as well as the type and the size of the database under consideration, depending only on the particular language. We provide an interpretation of the two regimes in terms of a distinction of words into a finite core vocabulary and a (virtually) infinite noncore vocabulary. Proposing a simple generative process of language usage, we establish the connection to the problem of vocabulary growth, i.e. how the number of different words scales with the database size, from which we obtain a unified perspective on different universal scaling laws that appear simultaneously in the statistics of natural language. On the one hand, our stochastic model accurately predicts the expected number of different items as measured in empirical data spanning hundreds of years and nine orders of magnitude in size, showing that the supposed vocabulary growth over time is mainly driven by database size and not by a change in vocabulary richness. On the other hand, analysis of the variation around the expected size of the vocabulary shows anomalous fluctuation scaling, i.e. the vocabulary is a non-self-averaging quantity, and therefore fluctuations are much larger than expected. We derive how this results from topical variations in a collection of texts from different authors, disciplines, or times, which manifest themselves as correlations between the frequencies of different words due to their semantic relations. We explore the consequences of topical variation in applications to language change and topic models, emphasizing the difficulties (and presenting possible solutions) that arise because the statistics of word frequencies are characterized by a fat-tailed distribution. First, we propose an information-theoretic measure based on the Shannon-Gibbs entropy and suitable generalizations, quantifying the similarity between different texts, which allows us to determine how fast the vocabulary of a language changes over time. Second, we combine topic models from machine learning with concepts from community detection in complex networks in order to infer large-scale (mesoscopic) structures in a collection of texts. Finally, we study language change of individual words on historical time scales, i.e. how a linguistic innovation spreads through a community of speakers, providing a framework to quantitatively combine microscopic models of language change with empirical data that is only available on a macroscopic level (i.e. averaged over the population of speakers).
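As a hedged illustration of a two-regime generalized Zipf's law of the kind discussed above, one continuous parameterization reads as follows; the symbols (exponents γ₁, γ₂ and crossover rank b) are notation chosen here for illustration, not necessarily the thesis's exact form.

```latex
% Frequency F(r) of the word with rank r: two power laws with a crossover
% at rank b separating the core from the noncore vocabulary. The prefactor
% b^{\gamma_2-\gamma_1} makes the two branches match at r = b.
\[
  F(r) \propto
  \begin{cases}
    r^{-\gamma_1}, & r \le b \quad \text{(core vocabulary)}\\[4pt]
    b^{\,\gamma_2-\gamma_1}\, r^{-\gamma_2}, & r > b \quad \text{(noncore vocabulary)}
  \end{cases}
\]
```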
