1 |
Projection pursuit and other graphical methods for multivariate data. Eslava-Gomez, Guillermina, January 1989.
No description available.
|
2 |
Multi-Purpose Boundary-Based Clustering on Proximity Graphs for Geographical Data Mining. Lee, Ickjai, January 2002.
With the growth of geo-referenced data and the increasing sophistication and complexity of spatial databases, data mining and knowledge discovery techniques have become essential tools for the successful analysis of large spatial datasets. Spatial clustering is fundamental and central to geographical data mining. It partitions a dataset into smaller homogeneous groups based on spatial proximity. The resulting groups represent geographically interesting concentrations for which further investigation should be undertaken to find possible causal factors. In this thesis, we propose a spatial-dominant generalization approach that mines multivariate causal associations among geographical data layers using cluster analysis. First, we propose a generic framework for multi-purpose exploratory spatial clustering in the form of the Template Method pattern. Based on this object-oriented framework, we design and implement an automatic multi-purpose exploratory spatial clustering tool. The first instance of the framework uses the Delaunay diagram as its underlying proximity graph. Our spatial clustering incorporates the peculiar characteristics of spatial data that make space special. Thus, our method is able to identify high-quality spatial clusters within O(n log n) time, including clusters of arbitrary shapes, clusters of heterogeneous densities, clusters of different sizes, closely located high-density clusters, clusters connected by multiple chains, sparse clusters near high-density clusters, and clusters containing clusters. It derives parameter values from the data, which maximizes user-friendliness and minimizes the user-supplied bias and constraints that hinder exploratory data analysis and geographical data mining. The sheer volume of spatial data stored in spatial databases is not the only concern: the heterogeneity of datasets is a common issue in data-rich environments, yet one left open by exploratory tools. Our spatial clustering therefore extends to the Minkowski metric, in the absence or presence of obstacles, to deal with situations where interactions between spatial objects are not adequately modeled by the Euclidean distance. The framework is generic enough that our clustering methodology extends to various spatial proximity graphs beyond the default Delaunay diagram. We also investigate an extension of our clustering to higher-dimensional datasets that robustly identifies higher-dimensional clusters within O(n log n) time. The versatility of our clustering is further illustrated by its deployment to multi-level clustering: we develop a multi-level clustering method that reveals hierarchical structures hidden in complex datasets within O(n log n) time, and we introduce weighted dendrograms to effectively visualize the resulting cluster hierarchies. Since the interpretability and usability of clustering results are of great importance, we propose an automatic pattern spotter that produces high-level descriptions of clusters. Finally, we develop an effective and efficient cluster polygonization process for mining causal associations: it automatically approximates the shapes of clusters and robustly reveals asymmetric causal associations among data layers. Since it does not require domain-specific concept hierarchies, its applicability is broad.
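As an illustration of the proximity-graph idea, the following is a minimal sketch, not the thesis's exact algorithm: it builds a Delaunay triangulation, prunes edges longer than a threshold derived from the data (a global mean-plus-k-standard-deviations heuristic, simpler than the thesis's locally derived criteria), and reads clusters off as connected components. The function name and the parameter k are illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def delaunay_clusters(points, k=1.0):
    points = np.asarray(points)
    tri = Delaunay(points)
    # Collect the unique edges of the Delaunay triangulation.
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    edges = np.array(list(edges))
    lengths = np.linalg.norm(points[edges[:, 0]] - points[edges[:, 1]], axis=1)
    # Derive the cut-off from the data itself rather than from the user.
    keep = lengths < lengths.mean() + k * lengths.std()
    e = edges[keep]
    graph = coo_matrix((np.ones(len(e)), (e[:, 0], e[:, 1])),
                       shape=(len(points), len(points)))
    # Clusters are the connected components of the pruned proximity graph.
    n_clusters, labels = connected_components(graph, directed=False)
    return n_clusters, labels
```

The Delaunay construction dominates the cost at O(n log n), consistent with the complexity bound cited above.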
|
3 |
Användning av geografisk data vid mjukvaruutveckling (Use of geographical data in software development). Genfors, Casper, January 2017.
Gemit Solutions is a company that develops web-based solutions for geographical analysis of pollution. One of these solutions, EnvoMap, is a map-based tool for analyzing pollution loads on the water and sewage network. The assignment addressed in this project is to implement a new feature in EnvoMap: the user should be able to select an area on the map and retrieve the underlying data for that area. An iterative development method is used, and the result is a prototype that meets the desired functionality. The assignment also provides the opportunity to study an interesting question about the use of geographical data in software development. The methods used for the study are literature study, implementation, documentation, and analysis. The results include theories, experiences, and recommendations from the project work, and aim to answer the thesis question about the use of geographical data in software development.
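A minimal sketch of the described select-an-area feature, assuming features arrive as GeoJSON-like dicts; the names `features` and `selected_in` are illustrative, not part of EnvoMap's actual API.

```python
from shapely.geometry import shape, Polygon

def selected_in(area_coords, features):
    # Polygon drawn by the user on the map, as a list of (x, y) vertices.
    area = Polygon(area_coords)
    # Return the features whose geometry overlaps the selected area.
    return [f for f in features
            if area.intersects(shape(f["geometry"]))]
```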
|
4 |
Dinaminė geografinių ir atributinių duomenų sąsaja Oracle DBVS pagrindu / Dynamic linking of geographical and attributive data using Oracle DBMS. Racibara, Giedrius, 04 March 2009.
This work reviews the advantages and weaknesses of existing GIS solutions and explores the use of digital maps for rendering enterprise-specific data. Oracle DBMS software is analyzed to show that it has all the components necessary for spatial data manipulation. Summarizing the analysis results, we offer a new GIS solution that renders enterprise data on a map dynamically, without knowledge of the enterprise data structure and with minimal programming work. To implement this dynamic GIS conception, a new dynamic GIS architecture is designed, together with the missing data integration component for representing business data on a map. To ensure dynamic integration and simple use of the component, its functionality is based on the concept of business rules. To demonstrate dynamic integration of enterprise and spatial data on the map, the dynamic GIS architecture was implemented using Oracle software, with the data integration component configured to use Oracle MapViewer. For testing, an interface was created that allows business rules to be written directly in a web page and the integration results to be viewed on the map. Existing spatial data was imported into the Oracle database, and some spatial data was created manually for testing purposes.
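A minimal sketch of the business-rule idea, assuming a rule names the tables, join key, and filter condition; all identifiers here are hypothetical, and in a real system the rule fields would have to be validated before being interpolated into SQL.

```python
def rule_to_sql(rule):
    # Build the join between a spatial table and an attribute table from a
    # declarative rule, so the mapping layer needs no knowledge of the
    # enterprise schema. NOTE: validate rule fields in real use; naive
    # string interpolation is vulnerable to SQL injection.
    return (
        f"SELECT s.geom, a.* "
        f"FROM {rule['spatial_table']} s "
        f"JOIN {rule['attribute_table']} a "
        f"ON a.{rule['key']} = s.{rule['key']} "
        f"WHERE {rule['condition']}"
    )

# Hypothetical rule: show parcels whose pollution level exceeds a limit.
sql = rule_to_sql({
    "spatial_table": "city_parcels",
    "attribute_table": "pollution_stats",
    "key": "parcel_id",
    "condition": "a.level > 10",
})
```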
|
5 |
Implementation of a 3D terrain-dependent Wave Propagation Model in WRAP. Blakaj, Valon and Gashi, Gent, January 2014.
Radio wave propagation prediction is one of the key elements in designing an efficient radio network. WRAP International has developed software for spectrum management and radio network planning. This software includes several wave propagation models used to predict path loss. The current propagation models in WRAP perform the calculation in a vertical 2D plane, the plane between the transmitter and the receiver. The goal of this thesis is to investigate and implement a 3D wave propagation model that takes reflections and diffractions from the sides into account. The implemented model should be both fast and accurate: a full 3D model using high-resolution geographical data may be accurate, but it is inefficient in terms of memory usage and computation time. Based on the observation that in urban areas the strongest path between receiver and transmitter involves no joint vertical and horizontal diffraction [10], the propagation calculation can be divided into a vertical and a horizontal part. The two parts are computed independently and their results are then combined. This approach yields lower computational complexity, faster calculation, and lower memory usage, while still maintaining good accuracy. The proposed model is implemented in C++ and sped up using parallel programming techniques. Using the provided high-resolution geographical data for Stockholm, simulations are performed and the results are compared with real measurements and with other wave propagation models. In addition to path loss, the proposed model can also be used to estimate the channel power delay profile and the delay spread.
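A minimal sketch of one way the two independent parts could be combined, assuming each yields a path loss in dB; summing the two received-power contributions is a plausible choice for illustration, not necessarily the combination rule used in WRAP.

```python
import math

def combine_path_loss(loss_vertical_db, loss_horizontal_db):
    # Convert each dB loss to a linear received-power fraction, add the
    # two propagation contributions, and convert back to a total loss.
    p = 10 ** (-loss_vertical_db / 10) + 10 ** (-loss_horizontal_db / 10)
    return -10 * math.log10(p)

# Example: a 120 dB vertical-plane path and a 130 dB lateral path yield a
# combined loss slightly below 120 dB, since the lateral path adds energy.
print(combine_path_loss(120.0, 130.0))
```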
|
6 |
Fusion de connaissances imparfaites pour l'appariement de données géographiques : proposition d'une approche s'appuyant sur la théorie des fonctions de croyance / Imperfect knowledge fusion for matching geographical data: an approach based on belief theory. Olteanu, Ana-Maria, 24 October 2008.
Nowadays, there are many geographical databases (GDB) covering the same reality. The geographical data are represented differently (for example, a river can be represented by a line or by a polygon), they serve different applications (visualisation, analysis), and they are created using various modes of acquisition (sources, processes). All these factors create independence between GDBs, which causes problems for both producers and users. One solution is to make the relationships between the various database objects explicit, i.e. to match homologous objects that represent the same reality. This process is known as spatial data matching. Because of the complexity of the matching process, existing approaches depend on the types of data (points, lines or polygons) and on the level of detail of the GDB. We observed that most approaches are based on the geometry and topology of the geographical objects, and that very few take the descriptive information of geographical objects into account. Moreover, in most approaches the criteria are applied one after the other and the knowledge is buried inside the process. Following this analysis, we propose a matching approach that is guided by knowledge and takes all criteria into account simultaneously, exploiting the geometry, the descriptive information, and the relations between geographical objects. In order to formalise the knowledge and model its imperfections (imprecision, uncertainty and incompleteness), we use the theory of belief functions [Shafer, 1976]. Our matching approach consists of five steps: after a selection of candidates, the masses of belief are initialised by analysing each candidate separately from the others, using different pieces of knowledge expressed as various matching criteria; then the matching criteria and the candidates are fused; finally, a decision is taken. The approach has been tested on real data with different levels of detail, representing relief (point data) and road networks (linear data).
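A minimal sketch of the fusion step, using Dempster's rule of combination from the theory of belief functions [Shafer, 1976]; hypotheses are modelled as frozensets, and the example mass values are illustrative only, not taken from the thesis.

```python
from itertools import product

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, va), (b, vb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + va * vb
        else:
            conflict += va * vb  # mass assigned to contradictory pairs
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    # Normalise the remaining mass by the non-conflicting proportion.
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Example: fusing a geometric and a descriptive matching criterion over
# the hypotheses {match}, {no_match} and their union (ignorance).
m_geom = {frozenset({"match"}): 0.6, frozenset({"match", "no_match"}): 0.4}
m_attr = {frozenset({"match"}): 0.5, frozenset({"no_match"}): 0.2,
          frozenset({"match", "no_match"}): 0.3}
print(dempster_combine(m_geom, m_attr))
```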
|
7 |
Informační systém pro správu vizualizací geografických dat / Information System for Management of Geographical Data Visualizations. Grossmann, Jan, January 2021.
The goal of this thesis is to create an information system for the visualization of geographical data. The main idea is to allow users to create visualizations with their own geographical data, which they can either import from files or obtain by attaching their own database system as a data source and using the data in real time. The result is a new web information system that acts as a point of contact between users, geographical data, and visualizations.
|
8 |
Utilization of ETL Processes for Geographical Data Migration : A Case Study at Metria AB. Sihvola, Toni, January 2024.
This study investigates how safely ETL processes can be used to migrate geographical data between heterogeneous data sources, and whether certain data structures are more prone to integrity loss during such migrations. Geographical data in various vector structures was migrated using the ETL software FME from a legacy data source (Oracle 11g with integrated Esri geodatabases) to another (PostgreSQL 14.10 with the PostGIS extension). Data integrity after migration was assessed by comparing the geodata housed in Oracle 11g (the source) with that in PostgreSQL 14.10 (the destination), using ArcGIS Pro's built-in tools and a Python script. To further evaluate the role of ETL processes in geographical data migration, interviews were conducted with specialists in databases, data migration, and FME, both before and after the migration. The study concludes that different vector structures are affected differently: points and lines maintained 100% data integrity across all datasets, whereas polygons achieved 99.95% accuracy in one of the three tested datasets. This issue can be addressed by implementing a repair process during the Transform stage of the ETL process, although such a process does not guarantee a fully successful outcome; the affected area was significantly reduced after repair, but the polygons contained a higher number of mismatches.
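A minimal sketch of such a post-migration integrity check, assuming the source and destination geometries are available as WKT strings; this mirrors the idea of the comparison, not the actual ArcGIS Pro tooling or Python script used in the study.

```python
from shapely import wkt

def geometry_mismatch(source_wkt, destination_wkt):
    src = wkt.loads(source_wkt)
    dst = wkt.loads(destination_wkt)
    if src.equals(dst):
        return 0.0  # geometry survived the migration intact
    # For polygons, the area belonging to exactly one of the two
    # geometries quantifies the integrity loss.
    return src.symmetric_difference(dst).area
```

Running this per feature over a whole dataset gives the kind of per-structure accuracy figures reported above.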
|
9 |
Rigid barrier or not? : Machine Learning for classifying Traffic Control Plans using geographical data. Wallander, Cornelia, January 2018.
In this thesis, four Machine Learning models and algorithms are evaluated for classifying Traffic Control Plans in the City of Helsingborg. Before a roadwork can start, a Traffic Control Plan must be created and submitted to the city's Traffic unit. The plan contains information about the roadwork and about how the work can be performed safely, both for road workers and for the car drivers, pedestrians, and cyclists passing by. To determine which safety barriers are needed, the Swedish Association of Local Authorities and Regions (SALAR) and the Swedish Transport Administration (STA) have classified roads to guide contractors and traffic technicians on which safety barriers provide a safe workplace. The road classifications are built on two rules: the amount of traffic and the speed limit of the road. However, real-world cases have shown that these classifications are not applicable to every situation, so each roadwork must be judged and evaluated from its specific attributes. By creating and training a Machine Learning model that determines whether a rigid safety barrier is needed, this classification can be made from historical data. This thesis presents the performance of several Machine Learning models and datasets for classifying Traffic Control Plans. The algorithms used for the classification task were Random Forest, AdaBoost, K-Nearest Neighbour, and an Artificial Neural Network. To decide which attributes to include in the dataset, participant observations combined with interviews were conducted with a traffic technician at the City of Helsingborg. The datasets used for training were primarily based on geographical data, but information about the roadwork and its time period was also included. The results indicate that it is preferable to include road attribute information in the dataset, and that classification accuracy is higher when the attribute values of the geographical data are continuous rather than categorical. The AdaBoost algorithm showed the highest performance, although the differences between the algorithms were small.
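A minimal sketch of the four-way model comparison, assuming the Traffic Control Plans have already been flattened into a numeric feature matrix X with binary labels y (rigid barrier needed or not); the hyperparameters shown are illustrative defaults, not those used in the thesis.

```python
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

models = {
    "Random Forest": RandomForestClassifier(n_estimators=100),
    "AdaBoost": AdaBoostClassifier(),
    "K-Nearest Neighbour": KNeighborsClassifier(n_neighbors=5),
    "Neural Network": MLPClassifier(max_iter=1000),
}

def compare(X, y):
    # 5-fold cross-validated accuracy for each of the four algorithms.
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```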
|
10 |
Marketingový výzkum vhodnosti aplikování GIS na produkty firmy Geodis Brno / Marketing Research of GIS Suitability for Geodis Products. Chmelař, Vladimír, January 2009.
The aim of this diploma thesis is to determine, with the help of marketing research, which of the geographic information systems (GIS) available on the Czech market, intended for use in state administration, local government, and the private sector, is best suited from a marketing point of view for commercial distribution together with the geographical data of the company GEODIS BRNO. The theoretical part deals with marketing research and related marketing tools. The practical part covers the analysis of the company GEODIS, the analysis of GIS systems, the results of the analysis, and recommendations.
|