71 |
Topics in Soft Computing
Keukelaar, J. H. D. January 2002 (has links)
No description available.
|
72 |
A Framework for Participatory Sensing Systems
Mendez Chaves, Diego 01 January 2012 (has links)
Participatory sensing (PS) systems are an emerging sensing paradigm based on the cooperative participation of cellular users. Due to the spatio-temporal granularity that a PS system can provide, it is now possible to detect and analyze events that occur at different scales, at a low cost. While PS systems have attractive characteristics, they also create new problems. Since the measuring devices are cheap and in the hands of the users, PS systems face several design challenges: the poor accuracy and high failure rate of the sensors, the possibility of malicious users tampering with the data, the violation of user privacy, the need for methods that encourage user participation, and the effective visualization of the data. This dissertation presents four main contributions toward solving some of these challenges.
This dissertation presents a framework that guides the design and implementation of PS applications considering all these aspects. The framework consists of five modules: sample size determination, data collection, data verification, data visualization, and density map generation. The remaining contributions map one-to-one onto three of these modules: data verification, data visualization, and density maps.
Data verification, in the context of PS, is the process of detecting and removing spatial outliers so that the variables of interest can be properly reconstructed. A new algorithm for spatial outlier detection and removal is proposed, implemented, and tested. This hybrid neighborhood-aware algorithm accounts for the uneven spatial density of the users, the number of malicious users, the level of conspiracy, and inaccurate or malfunctioning sensors. The experimental results show that the proposed algorithm performs as well as the best estimator while considerably reducing the execution time.
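The neighborhood-aware idea can be illustrated with a simple robust-statistics sketch (an illustration only, not the dissertation's exact hybrid algorithm; `k` and `thresh` are assumed parameters): each reading is compared against the median of its k nearest spatial neighbors, and flagged when it deviates by more than a few median absolute deviations.

```python
import numpy as np
from scipy.spatial import cKDTree

def spatial_outliers(xy, z, k=5, thresh=3.0):
    """Flag readings that deviate strongly from their spatial neighborhood.

    xy : (n, 2) sensor locations, z : (n,) readings.
    A reading is an outlier if |z_i - median(neighbors)| exceeds
    `thresh` times the MAD of all neighborhood deviations.
    """
    tree = cKDTree(xy)
    # k + 1 because the nearest neighbor of a point is the point itself
    _, idx = tree.query(xy, k=k + 1)
    neigh_median = np.median(z[idx[:, 1:]], axis=1)
    diff = np.abs(z - neigh_median)
    mad = np.median(np.abs(diff - np.median(diff))) + 1e-12
    return diff > thresh * mad
```

A single corrupted sensor in a field of agreeing neighbors is flagged, while honest readings near it are not, since the neighborhood median is robust to one bad value.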
The problem of data visualization in PS applications is also of special interest. A typical PS application generates multivariate time-space series with many gaps in time and space. With this in mind, a new method is presented that combines the kriging technique with Principal Component Analysis and Independent Component Analysis. Additionally, a new technique to interpolate data in time and space is proposed that is better suited to PS systems. The results indicate that the accuracy of the estimates improves with the amount of data used, i.e., one variable, multiple variables, and space and time data. The results also clearly show the advantage of a PS system over a traditional measuring system in terms of the precision and spatial resolution of the information provided to the users.
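The kriging step can be sketched in a few lines of ordinary kriging with an exponential covariance (the covariance model and its `sill`/`range_` parameters are illustrative assumptions, not the dissertation's specification):

```python
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_new, range_=1.0, sill=1.0, nugget=1e-6):
    """Ordinary kriging with covariance C(h) = sill * exp(-h / range_)."""
    xy_obs, xy_new = np.asarray(xy_obs, float), np.asarray(xy_new, float)

    def cov(a, b):
        h = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-h / range_)

    n = len(xy_obs)
    # Kriging system augmented with a Lagrange multiplier (unbiasedness constraint)
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = cov(xy_obs, xy_obs) + nugget * np.eye(n)
    K[n, :n] = 1.0
    K[:n, n] = 1.0
    K[n, n] = 0.0
    rhs = np.vstack([cov(xy_obs, xy_new), np.ones((1, len(xy_new)))])
    w = np.linalg.solve(K, rhs)      # one column of weights per prediction site
    return w[:n].T @ z_obs
```

With a small nugget, ordinary kriging is an exact interpolator: predicting at an observed location essentially returns the observed value, while gaps between sensors are filled with a weighted average that honors spatial correlation.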
A key challenge in PS systems is determining how many users to sample and at which locations, so that the variables of interest can be accurately represented with a small number of participants. To address this challenge, the use of density maps is proposed, a technique based on the current estimates of the variable. The density maps are then used by the incentive mechanism to encourage the participation of the users indicated in the map. The experimental results show that density maps greatly improve the quality of the estimates while keeping the total number of users in the system stable and low.
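A minimal form of a density map can be sketched as a grid of sample counts over the monitored area, with under-sampled cells marking where the incentive mechanism should recruit new participants (grid resolution and the `min_samples` threshold are illustrative assumptions):

```python
import numpy as np

def density_map(lon, lat, bbox, nx=10, ny=10, min_samples=3):
    """Count current samples per grid cell and flag under-sampled cells.

    bbox = (lon_min, lon_max, lat_min, lat_max). Returns the count grid
    and a boolean grid marking cells that need more participants.
    """
    counts, _, _ = np.histogram2d(
        lon, lat, bins=[nx, ny],
        range=[[bbox[0], bbox[1]], [bbox[2], bbox[3]]])
    return counts, counts < min_samples
```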
P-Sense, a PS system to monitor pollution levels, has been implemented and tested, and is used as a validation example for all the contributions presented here. P-Sense integrates gas and environmental sensors with a cell phone, in order to monitor air quality levels.
|
73 |
Design of platforms for computing context with spatio-temporal locality
Ziotopoulos, Agisilaos Georgios 02 June 2011 (has links)
This dissertation is in the area of pervasive computing.
It focuses on designing platforms for storing, querying, and computing contextual information.
More specifically, we are interested in platforms for storing and querying spatio-temporal events where queries exhibit locality.
Recent advances in sensor technologies have made it possible to gather a variety of information on the status of users, the environment, machines, etc.
Combining this information with computation, we are able to extract context, i.e., a filtered, high-level description of the situation.
In many cases, the information gathered exhibits locality both in space and time, i.e., an event is likely to be consumed in a location close to the location where the event was produced, at a time which is close to the time the event was produced.
This dissertation builds on this observation to create better platforms for computing context.
We claim three key contributions.
We have studied the problem of designing and optimizing spatial organizations for exchanging context.
Our thesis has original theoretical work on how to create a platform based on cells of a Voronoi diagram for optimizing the energy and bandwidth required for mobiles to exchange contextual information that is tied to specific locations in the platform.
Additionally, we applied our results to the problem of optimizing a system for surveilling the locations of entities within a given region.
We have designed a platform for storing and querying spatio-temporal events exhibiting locality.
Our platform is based on a P2P infrastructure of peers organized based on the Voronoi diagram associated with their locations to store events based on their own associated locations.
We have developed theoretical results based on spatial point processes for the delay experienced by a typical query in this system.
Additionally, we used simulations to study heuristics to improve the performance of our platform.
We also developed protocols for the replicated storage of events in order to increase the fault tolerance of our platform.
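Storing an event at the peer whose Voronoi cell contains it reduces, by definition of the Voronoi diagram, to a nearest-neighbor lookup over peer locations. A sketch of this placement rule (peer coordinates and the in-memory store are made up for illustration; the real platform is a distributed P2P system):

```python
import numpy as np
from scipy.spatial import cKDTree

class VoronoiOverlay:
    """Route each spatio-temporal event to the peer whose Voronoi cell contains it."""

    def __init__(self, peer_locations):
        self.peers = np.asarray(peer_locations, dtype=float)
        self.tree = cKDTree(self.peers)  # nearest peer == owning Voronoi cell
        self.store = {i: [] for i in range(len(self.peers))}

    def publish(self, event_location, payload):
        _, owner = self.tree.query(event_location)
        self.store[owner].append(payload)
        return owner

    def query(self, location):
        # Locality assumption: queries land near where events were produced,
        # so the owning peer usually holds the relevant events.
        _, owner = self.tree.query(location)
        return self.store[owner]
```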
Finally, in this thesis we propose a design for a platform, based on RFID tags, to support context-aware computing for indoor spaces.
Our platform exploits the structure found in most indoor spaces to encode contextual information in suitably designed RFID tags.
The elements of our platform collaborate based on a set of messages we developed to offer context-aware services to the users of the platform.
We validated our research with an example hardware design of the RFID tag and a software emulation of the tag's functionality. / text
|
74 |
An innovative technique for the spatial division of city maps and the representation of spatial data on them
Καλαβρουζιώτης, Νικόλαος (Kalavrouziotis, Nikolaos) 02 April 2014 (has links)
The purpose of this thesis is to understand and study Geographic Information Systems (GIS) and, subsequently, to develop a web application that uses specific GIS features.

More specifically, with a clear picture of the characteristics of GIS, we proceed to the design and implementation of an application whose goal is to segment the map of a given urban area into smaller regions. This division does not follow an arbitrary partitioning scheme; it is based on a division into city blocks, thus creating smaller regions of interest.

Several techniques for representing spatial information on a map have been implemented in the past, such as heat maps, dot density maps, and proportional symbol maps. Although these techniques are quite effective in many cases, they are usually hard for non-expert users to interpret. In this work, each new region of interest (city block) is assigned a color according to the data we extract from social media, a technique that is easily grasped and understood and from which conclusions can readily be drawn.
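The block-coloring idea reduces to mapping each block's value (e.g., a count of social-media posts) onto a color ramp. A minimal sketch, assuming a two-color linear ramp (the specific colors and the plain dict of block counts are illustrative, not the thesis implementation):

```python
def value_to_color(value, vmin, vmax,
                   low=(255, 255, 178), high=(189, 0, 38)):
    """Linearly interpolate an RGB color for `value` between vmin and vmax."""
    t = 0.0 if vmax == vmin else (value - vmin) / (vmax - vmin)
    t = min(1.0, max(0.0, t))  # clamp out-of-range values
    return tuple(round(l + t * (h - l)) for l, h in zip(low, high))

def color_blocks(block_counts):
    """Assign a color to every city block given its activity count."""
    vmin, vmax = min(block_counts.values()), max(block_counts.values())
    return {b: value_to_color(c, vmin, vmax) for b, c in block_counts.items()}
```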
|
75 |
Clustered visualization of geographic data using web technologies
Χαρπαντίδης, Βασίλειος (Charpantidis, Vasileios) 05 February 2015 (has links)
Nowadays people constantly search for new information, and this search often depends on or is based on geographic data. Plotting the information on a map is the classic method for serving this need.

This thesis started from the observation that, while a large volume of information exists, its depiction on maps is very poor. Almost all applications, with the exception of the computing giants (Microsoft and Google), display only a limited amount of information. This observation came from a detailed survey of the practices followed both by commercial applications and by the corresponding research work. As a result, someone using these services struggles to grasp the overall information for the geographic area under examination.

In this thesis we try to solve the above problem, that is, to depict the total information for a geographic area that is not fixed in advance. The information set is also not static; it changes according to various criteria that the user may select.

The work rests on two main pillars. First, we solve the simpler problem of presenting to the user the quantity of information in the corresponding geographic areas. Then, building on that solution, we attempt the more complex problem of grouping the displayed information items within the geographic areas the user is examining. Both solutions are based on clustering: the first problem is solved with a grid-clustering algorithm, and the second builds on several variants of the Minimum Description Length (MDL) principle. Although classic algorithms are used, this thesis applies them in a novel way (no corresponding use was found in the literature) and provides innovative solutions at many subtle points.

These two solutions are not implemented merely as a laboratory system on which an expert user (the researcher) can run experiments, but as a modern web application. As a consequence, the application produced by this thesis is directly accessible to everyone, which is a further advantage of this work.

The thesis concludes with a brief discussion of the solutions and of their results on a real data set, in which the logical correlation between the clustered points and the geographic areas can be observed. The common factor of all the proposed techniques is grouping, via algorithms such as grid clustering, so that the dynamic map can present both the quantity and the quality of the information at any place without overwhelming the user.
|
76 |
SPATIAL-TEMPORAL DATA ANALYTICS AND CONSUMER SHOPPING BEHAVIOR MODELING
Yan, Ping January 2010
RFID technologies have recently been adopted in the retail space to track consumer in-store movements. The RFID-collected data are location sensitive and constantly updated as a consumer moves inside a store. By capturing the entire shopping process, including the movement path, rather than analyzing merely the shopping basket at check-out, the RFID-collected data provide unique and exciting opportunities to study consumer purchase behavior and thus lead to actionable marketing applications.

This dissertation research focuses on (a) advancing the representation and management of the RFID-collected shopping path data; and (b) analyzing, modeling, and predicting customer shopping activities with a spatial pattern discovery approach and a dynamic probabilistic modeling methodology to enable advanced spatial business intelligence. The spatial pattern discovery approach identifies similar consumers based on a similarity metric between consumer shopping paths. Direct applications of this approach include a novel consumer segmentation methodology and an in-store real-time product recommendation algorithm. A hierarchical decision-theoretic model based on dynamic Bayesian networks (DBN) is developed to model consumer in-store shopping activities. This model can be used to predict a shopper's purchase goal in real time, infer her shopping actions, and estimate the exact product she is viewing at a given time. We develop an approximate inference algorithm based on particle filters and a learning procedure based on the Expectation-Maximization (EM) algorithm to perform filtering and prediction for the network model. The developed models are tested on a real RFID-collected shopping trip dataset, with promising results in terms of the prediction accuracy of consumer purchase interests. This dissertation contributes to the marketing and information systems literature in several areas.
First, it provides empirical insights into the correlation between spatial movement patterns and consumer purchase interests. This correlation is demonstrated with in-store shopping data, but can be generalized to other marketing contexts such as consumers' store visit decisions and a retailer's location and category management decisions. Second, our study shows the possibility of using consumer in-store movement to predict consumer purchases. The predictive models we developed have the potential to become the basis of an intelligent shopping environment in which store managers customize marketing efforts to provide location-aware recommendations to consumers as they travel through the store.
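A shopping-path similarity metric of the kind described above can be sketched by representing each trip as a sequence of store zones and scoring the longest common subsequence of two trips (a generic illustration; the dissertation's actual metric may differ):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if x == y
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[-1][-1]

def path_similarity(path_a, path_b):
    """Similarity in [0, 1] between two store-zone visit sequences."""
    if not path_a or not path_b:
        return 0.0
    return lcs_length(path_a, path_b) / max(len(path_a), len(path_b))
```

Two shoppers who visit the same zones in the same order score near 1, which is the property a segmentation or in-store recommendation step would exploit.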
|
77 |
Assessing Dynamic Externalities from a Cluster Perspective: The Case of the Motor Metropolis in Japan
Kawakami, Tetsu, Yamada, Eri 08 1900 (has links)
No description available.
|
78 |
A GIS Based Spatial Data Analysis In Knidian Amphora Workshops In Resadiye
Kiroglu, Fatih Mehmet 01 December 2003 (has links) (PDF)
The main objective of this study is to determine the main activity locations and the correlations between different artifact types in an archaeological site using geographical information systems (GIS) and spatial data analysis.
Knidian amphora workshops in the Datça peninsula are studied in order to apply GIS and spatial statistical techniques. GIS capabilities are coupled with spatial statistical software, and the steps of spatial data analysis are followed. Both point and area datasets are examined for an effective analysis of the same set of spatial phenomena.
Visualizing the artifact distribution with GIS tools enables proposing hypotheses about the study area. In the exploration part of the study, those hypotheses are tested and developed with the help of explorative methods and GIS. The results are discussed and assessed within the archaeological framework. Finally, the results are compared with the archaeo-geophysical anomalies and the excavation results.
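One standard explorative statistic for such artifact point patterns is the Clark-Evans nearest-neighbor index R (observed mean nearest-neighbor distance divided by the value expected under complete spatial randomness): R < 1 suggests clustering, R > 1 dispersion. A sketch of the statistic (an example of the explorative methods mentioned above, not necessarily the one used in the thesis):

```python
import numpy as np
from scipy.spatial import cKDTree

def clark_evans(points, area):
    """Clark-Evans index: R < 1 clustered, R ~ 1 random, R > 1 dispersed."""
    pts = np.asarray(points, dtype=float)
    d, _ = cKDTree(pts).query(pts, k=2)        # k=2: nearest neighbor besides self
    observed = d[:, 1].mean()
    expected = 0.5 / np.sqrt(len(pts) / area)  # CSR expectation at this density
    return observed / expected
```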
|
79 |
Bayesian Analysis for Large Spatial Data
Park, Jincheol 2012 August 1900
The Gaussian geostatistical model has been widely used in Bayesian modeling of spatial data. A core difficulty of this model is inverting the n x n covariance matrix, where n is the sample size. The computational complexity of matrix inversion grows as O(n^3). This difficulty arises in almost all statistical inference approaches for the model, such as kriging and Bayesian modeling. In Bayesian inference, the inverse of the covariance matrix must be evaluated at each iteration of the posterior simulation, so the Bayesian approach is infeasible for large sample sizes n given current computational power.
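The bottleneck can be made concrete with a likelihood evaluation for a zero-mean Gaussian process under an exponential covariance (an illustrative model, not the dissertation's exact specification). The Cholesky factorization below is the O(n^3) step that a posterior sampler must repeat at every iteration:

```python
import numpy as np

def gaussian_loglik(y, X, sigma2=1.0, range_=1.0):
    """Log-likelihood of y ~ N(0, C) with C_ij = sigma2 * exp(-|x_i - x_j| / range_)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    C = sigma2 * np.exp(-d / range_) + 1e-8 * np.eye(len(y))  # dense n x n matrix
    L = np.linalg.cholesky(C)                                 # the O(n^3) step
    alpha = np.linalg.solve(L, y)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (alpha @ alpha + logdet + len(y) * np.log(2 * np.pi))
```

Doubling n roughly multiplies the cost of the Cholesky step by eight, which is why both ALM and BSS below are designed to avoid forming and factoring the full covariance.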
In this dissertation, we propose two approaches to address this computational issue: the auxiliary lattice model (ALM) approach and the Bayesian site selection (BSS) approach. The key feature of ALM is to introduce a latent regular lattice which links a Gaussian Markov Random Field (GMRF) with the Gaussian Field (GF) of the observations. The GMRF on the auxiliary lattice represents an approximation to the Gaussian process. What distinguishes ALM from other approximations is that it completely avoids the matrix inversion problem by using the analytical likelihood of the GMRF. The computational complexity of ALM is rather attractive: it increases linearly with the sample size.
The second approach, Bayesian site selection (BSS), attempts to reduce the dimension of the data through a smart selection of a representative subset of the observations. The BSS method first splits the observations into two parts: the observations near the target prediction sites (part I) and the remaining observations (part II). Then, by treating the observations in part I as the response variable and those in part II as explanatory variables, BSS forms a regression model which relates all observations through a conditional likelihood derived from the original model. The dimension of the data can then be reduced by applying a stochastic variable selection procedure to the regression model, which selects only a subset of the part II data as explanatory data. BSS can give us a better understanding of the underlying true Gaussian process, as it works directly on the original process without any approximations.
The practical performance of ALM and BSS will be illustrated with simulated data and real data sets.
|
80 |
Semantic interoperability of geospatial ontologies: a model-theoretic analysis
Farrugia, James A. January 2007 (links) (PDF)
Thesis (Ph.D.) in Spatial Information Science and Engineering--University of Maine, 2007. / Includes vita. Includes bibliographical references (leaves 145-153).
|