  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Projection pursuit and other graphical methods for multivariate data

Eslava-Gomez, Guillermina January 1989
No description available.
2

Multi-Purpose Boundary-Based Clustering on Proximity Graphs for Geographical Data Mining

Lee, Ickjai Lee January 2002
With the growth of geo-referenced data and the sophistication and complexity of spatial databases, data mining and knowledge discovery techniques have become essential tools for the successful analysis of large spatial datasets. Spatial clustering is fundamental and central to geographical data mining. It partitions a dataset into smaller homogeneous groups on the basis of spatial proximity. The resulting groups represent geographically interesting patterns of concentration for which further investigation should be undertaken to find possible causal factors. In this thesis, we propose a spatial-dominant generalization approach that mines multivariate causal associations among geographical data layers using clustering analysis. First, we propose a generic framework of multi-purpose exploratory spatial clustering in the form of the Template Method pattern. Based on an object-oriented framework, we design and implement an automatic multi-purpose exploratory spatial clustering tool. The first instance of this framework uses the Delaunay diagram as an underlying proximity graph. Our spatial clustering incorporates the peculiar characteristics of spatial data that make space special. Thus, our method is able to identify high-quality spatial clusters, including clusters of arbitrary shapes, clusters of heterogeneous densities, clusters of different sizes, closely located high-density clusters, clusters connected by multiple chains, sparse clusters near high-density clusters and clusters containing clusters, within O(n log n) time. It derives parameter values from the data and thus maximizes user-friendliness. Therefore, our approach minimizes the user-oriented bias and constraints that hinder exploratory data analysis and geographical data mining. The sheer volume of spatial data stored in spatial databases is not the only concern: the heterogeneity of datasets is a common issue in data-rich environments, but one left open by existing exploratory tools.
Our spatial clustering extends to the Minkowski metric, in the absence or presence of obstacles, to deal with situations where interactions between spatial objects are not adequately modeled by the Euclidean distance. The genericity of the framework is such that our clustering methodology extends to various spatial proximity graphs beyond the default Delaunay diagram. We also investigate an extension of our clustering to higher-dimensional datasets that robustly identifies higher-dimensional clusters within O(n log n) time. The versatility of our clustering is further illustrated by its deployment to multi-level clustering. We develop a multi-level clustering method that reveals hierarchical structures hidden in complex datasets within O(n log n) time. We also introduce weighted dendrograms to effectively visualize the cluster hierarchies. Interpretability and usability of clustering results are of great importance. We propose an automatic pattern spotter that reveals high-level descriptions of clusters. We develop an effective and efficient cluster polygonization process towards mining causal associations. It automatically approximates the shapes of clusters and robustly reveals asymmetric causal associations among data layers. Since it does not require domain-specific concept hierarchies, its applicability is enhanced. / PhD Doctorate
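The core idea of clustering on a proximity graph (connect nearby points, remove edges that are unusually long, and read off the connected components as clusters) can be sketched as follows. This is a simplified illustration: it uses a k-nearest-neighbour graph rather than the thesis's Delaunay diagram, and its global edge-length threshold (mean plus one standard deviation) is an assumption made for the sketch, not the rule derived in the thesis.

```python
import math
from collections import defaultdict

def knn_graph(points, k=3):
    """Build an undirected k-nearest-neighbour proximity graph."""
    edges = set()
    for i, p in enumerate(points):
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(points) if j != i
        )
        for _, j in dists[:k]:
            edges.add((min(i, j), max(i, j)))
    return edges

def cluster(points, k=3):
    """Remove unusually long edges, then return connected components."""
    edges = knn_graph(points, k)
    lengths = [math.dist(points[i], points[j]) for i, j in edges]
    mean = sum(lengths) / len(lengths)
    std = math.sqrt(sum((x - mean) ** 2 for x in lengths) / len(lengths))
    keep = [(i, j) for (i, j) in edges
            if math.dist(points[i], points[j]) <= mean + std]
    # Connected components of the pruned graph via depth-first search.
    adj = defaultdict(list)
    for i, j in keep:
        adj[i].append(j)
        adj[j].append(i)
    seen, clusters = set(), []
    for s in range(len(points)):
        if s in seen:
            continue
        comp, stack = [], [s]
        seen.add(s)
        while stack:
            v = stack.pop()
            comp.append(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        clusters.append(sorted(comp))
    return clusters
```

On two well-separated groups of points, the pruned graph falls apart into one component per group, which is the boundary-based intuition the thesis builds on.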
3

Användning av geografisk data vid mjukvaruutveckling / Use of geographical data in software development

Genfors, Casper January 2017
Gemit Solutions is a company that develops web-based solutions for geographical analysis of pollution. One of these solutions, EnvoMap, is a map-based tool for analyzing the pollution load on the water and sewage network. The assignment addressed in this project is to implement a new feature in EnvoMap: the user should be able to select an area on the map and retrieve the underlying data for that area. An iterative development method is used, and the result is a prototype that meets the desired functionality. The assignment also provides the opportunity to study an interesting question about the use of geographical data in software development. The methods used for the study are literature study, implementation, documentation and analysis. The results include theories, experiences and recommendations from working on the project, and attempt to answer the thesis question about the use of geographical data in software development.
4

Dinaminė geografinių ir atributinių duomenų sąsaja Oracle DBVS pagrindu / Dynamic linking of geographical and attributive data using Oracle DBMS

Racibara, Giedrius 04 March 2009
This work reviews the advantages and weaknesses of existing GIS solutions and explores the use of digital maps for rendering enterprise-specific data. Oracle DBMS software is also analyzed to show that it has all the components necessary for spatial data manipulation. After summarizing the analysis results, we propose a new GIS solution that allows enterprise data to be rendered on a map dynamically, without knowing the enterprise data structure and with minimal programming work. To implement this dynamic GIS concept, a new dynamic GIS architecture is designed together with the missing data integration component for representing business data on a map. To ensure dynamic integration and simple use of the component, its functionality is based on the concept of business rules. To demonstrate dynamic integration of enterprise and spatial data on the map, the dynamic GIS architecture was implemented using Oracle software, with the data integration component configured to use Oracle MapViewer. For testing the dynamic GIS, a test interface was created that makes it possible to write business rules directly in a web page and see the integration results on the map. Existing spatial data was imported into the Oracle database, and some spatial data was created manually for testing purposes.
5

Implementation of a 3D terrain-dependent Wave Propagation Model in WRAP

Blakaj, Valon, Gashi, Gent January 2014
Radio wave propagation prediction is one of the key elements in designing an efficient radio network. WRAP International has developed software for spectrum management and radio network planning. This software includes several wave propagation models that are used to predict path loss. The current propagation models in WRAP perform the calculation in a vertical 2D plane, the plane between the transmitter and the receiver. The goal of this thesis is to investigate and implement a 3D wave propagation model in which reflections and diffractions from the sides are also taken into account. The implemented 3D wave propagation model should be both fast and accurate. A full 3D model that uses high-resolution geographical data may be accurate, but it is inefficient in terms of memory usage and computation time. Based on the observation that in urban areas the strongest path between the receiver and the transmitter involves no joint between vertical and horizontal diffractions [10], the radio wave propagation can be divided into two parts, a vertical and a horizontal part. The calculations along the horizontal and vertical parts are performed independently, and the results are then combined. This approach leads to lower computational complexity, faster calculation and lower memory usage, while still maintaining good accuracy. The proposed model is implemented in C++ and sped up using parallel programming techniques. Using the provided high-resolution geographical data for Stockholm, simulations are performed and the results are compared with real measurements and with other wave propagation models. In addition to path loss, the proposed model can also be used to estimate the channel power delay profile and the delay spread.
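Splitting propagation into independently computed vertical and horizontal parts implies a combination step at the end. A minimal sketch of that step, using free-space path loss as a stand-in per-path model and a power-sum combination of the two paths: both choices are assumptions for illustration, not necessarily the models or combination rule used in WRAP.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def combine_paths_db(loss_vertical_db, loss_horizontal_db):
    """Combine two independently computed path losses by summing the
    received powers contributed by the two propagation paths."""
    power = 10 ** (-loss_vertical_db / 10) + 10 ** (-loss_horizontal_db / 10)
    return -10 * math.log10(power)
```

Two equal 100 dB paths combine to roughly 97 dB: adding a second, equally strong path raises the received power by 3 dB.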
6

Fusion de connaissances imparfaites pour l'appariement de données géographiques : proposition d'une approche s'appuyant sur la théorie des fonctions de croyance / Imperfect knowledge fusion for matching geographical data : approach based on belief theory

Olteanu, Ana-Maria 24 October 2008
Nowadays, many geographic databases (GDB) cover the same territory. The geographical data are modelled differently (for example, a river can be represented by a line or by a polygon), they serve different applications (visualisation, analysis) and they are created through various modes of acquisition (sources, processes). All these factors create an independence between the GDBs, which causes problems for both producers and users. One solution is to make explicit the relationships between the objects of the different databases, i.e. to match homologous objects that represent the same reality. This process is known as geographical data matching. Because of the complexity of the matching process, existing approaches vary with the needs the matching serves, the types of data to be matched (points, lines or polygons) and the level of detail. We observed that most approaches are based on the geometry and topological relations of the geographical objects, and that very few take into account their descriptive information. Moreover, in most approaches the criteria are applied one after the other and the knowledge is buried inside the process. Following this analysis, we propose a knowledge-guided data matching approach that takes all criteria into account simultaneously, exploiting the geometry, the descriptive information and the relations between objects. In order to formalise the knowledge and model its imperfections (imprecision, uncertainty and incompleteness), we use the theory of belief functions [Shafer, 1976]. Our matching approach consists of five steps: after a selection of candidates, we initialise the belief masses by analysing each candidate independently of the others, using the different pieces of knowledge expressed by various matching criteria. We then fuse the matching criteria and the candidates. Finally, a decision is made. We tested our approach on real data at different levels of detail representing relief (point data) and road networks (linear data).
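Fusing matching criteria with belief functions can be illustrated by Dempster's rule of combination. The sketch below combines two mass functions over the frame {match, non-match}; the criterion names and the mass values are invented for the example and are not taken from the thesis.

```python
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions (dicts mapping frozensets of
    hypotheses to masses) with Dempster's rule: intersect focal
    sets, accumulate products, and renormalise the conflict."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Frame of discernment: does a candidate pair match (M) or not (N)?
M, N = frozenset({"M"}), frozenset({"N"})
MN = M | N  # ignorance: mass assigned to the whole frame

# Hypothetical evidence from two matching criteria (e.g. distance, name).
m_geometry = {M: 0.6, MN: 0.4}
m_toponym = {M: 0.5, N: 0.2, MN: 0.3}
fused = dempster(m_geometry, m_toponym)
```

Note how mass placed on the whole frame `MN` expresses incompleteness of a criterion's knowledge, which is exactly what the belief-function formalism adds over a plain probabilistic score.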
7

Informační systém pro správu vizualizací geografických dat / Information System for Management of Geographical Data Visualizations

Grossmann, Jan January 2021
The goal of this thesis is to create an information system for the visualization of geographical data. The main idea is to allow users to create visualizations of their own geographical data, which they can either import from files or provide by attaching their own database system as a data source, making use of the data in real time. The result is a new web information system that acts as a point of contact between users, geographical data and visualizations.
8

Rigid barrier or not? : Machine Learning for classifying Traffic Control Plans using geographical data

Wallander, Cornelia January 2018
In this thesis, four different machine learning models and algorithms are evaluated for the task of classifying Traffic Control Plans in the City of Helsingborg. Before a roadwork can start, a Traffic Control Plan must be created and submitted to the city's traffic unit. The plan contains information about the roadwork and how it can be performed safely, both for the road workers and for the car drivers, pedestrians and cyclists passing by. To indicate what safety barriers are needed, the Swedish Association of Local Authorities and Regions (SALAR) and the Swedish Transport Administration (STA) have made a classification of roads that guides contractors and traffic technicians in choosing safety barriers suitable for a safe workplace. The road classification is built on two criteria: the amount of traffic and the speed limit of the road. However, real-world cases have shown that this classification is not applicable to every single case, so each roadwork must be judged and evaluated from its specific attributes. By creating and training a machine learning model able to determine whether a rigid safety barrier is needed, the classification can instead be made on the basis of historical data. This thesis presents the performance of several machine learning models and datasets when Traffic Control Plans are classified. The algorithms used for the classification task were Random Forest, AdaBoost, K-Nearest Neighbour and an Artificial Neural Network. To decide what attributes to include in the dataset, participant observations combined with interviews were carried out with a traffic technician at the City of Helsingborg. The datasets used for training were primarily based on geographical data, but information about the roadwork and the time period was also included. The results of this study indicate that it is preferable to include road attribute information in the dataset.
It was also discovered that classification accuracy was higher when the attribute values of the geographical data were continuous rather than categorical. The results revealed that the AdaBoost algorithm performed best, although the difference compared to the other algorithms was small.
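Of the algorithms evaluated above, K-Nearest Neighbour is simple enough to sketch directly. The feature values below (already scaled to [0, 1], e.g. a normalized speed limit and traffic volume) and the label meaning (1 = rigid barrier needed) are invented for the illustration; the thesis's actual datasets and preprocessing are not reproduced here.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest
    training points. `train` is a list of (features, label) pairs."""
    neighbours = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical plans: (normalized speed limit, normalized traffic volume)
plans = [
    ((0.20, 0.10), 0),  # low speed, low traffic  -> no rigid barrier
    ((0.10, 0.05), 0),
    ((0.50, 0.40), 1),  # higher speed and volume -> rigid barrier
    ((0.90, 0.80), 1),
]
```

Scaling the attributes first matters: raw traffic counts would dominate the distance over speed limits, which is one reason the thesis's finding about continuous versus categorical attributes is plausible for distance-based learners.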
9

Marketingový výzkum vhodnosti aplikování GIS na produkty firmy Geodis Brno / Marketing Research of GIS suitability for Geodis Products

Chmelař, Vladimír January 2009
The aim of this diploma thesis is to analyze, with the help of marketing research, which of the geographic information systems (GIS) available on the Czech market, intended for use in state administration, local government and the private sector, is best suited from a marketing point of view for commercial distribution together with the geographical data of the company GEODIS BRNO. The theoretical part deals with marketing research and the related marketing tools. The practical part deals with the analysis of the company GEODIS, the analysis of GIS systems, the results of the analysis and recommendations.
10

Comparing machine learning methods for classification and generation of footprints of buildings from aerial imagery

Jerkenhag, Joakim January 2019
Up-to-date mapping data is of great importance for social services and disaster relief as well as for city planning. The vast amounts of data and the constant stream of geographical changes lead to a heavy load of continuous manual analysis. This thesis takes the process of updating maps and breaks it down to the problem of discovering buildings, comparing different machine learning methods for automating the detection of buildings. The chosen methods, YOLOv3 and Mask R-CNN, are based on the Region Convolutional Neural Network (R-CNN) family, owing to their image analysis capabilities in both speed and accuracy. Image data supplied by Lantmäteriet makes up the training and testing data, which is then used by the chosen machine learning methods. The methods are trained with different time limits, the generated models are tested and the results are analysed. The results lay the ground for whether such a model is reasonable to use in a fully or partly automated system for updating mapping data from aerial imagery. The tested methods showed volatile results through their first hour of training, YOLOv3 more so than Mask R-CNN. From the first hour until the eighth hour, YOLOv3 shows a higher level of accuracy than Mask R-CNN. For YOLOv3, it seems that with more training recall increases while precision decreases; for Mask R-CNN there is instead some trade-off between recall and precision throughout the eight hours of training. While there is 90% confidence that the accuracy of YOLOv3 decreases for each hour of training after the first hour, the Mask R-CNN method shows its accuracy increasing for every hour of training, although with low confidence, so this cannot be scientifically relied upon.
Due to differences in setup, the image size varies between the methods, even though they train and test on the same areas to give a fair comparison; in this evaluation, YOLOv3 analyses one square kilometre 1.5 times faster than Mask R-CNN. Both methods show potential for automated generation of footprints. However, YOLOv3 only generates bounding boxes, leaving the polygonization step to manual work, whereas Mask R-CNN, as the name implies, creates a mask that encapsulates the object. This extra step is expected to further automate the manual process and, with viable results, speed up the updating of map data.
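The precision and recall figures discussed above are typically computed by matching predicted boxes to ground-truth boxes at an intersection-over-union (IoU) threshold. A minimal sketch of that evaluation step, with invented boxes and the common 0.5 threshold as assumptions (the thesis's exact matching protocol is not reproduced here):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_recall(predictions, ground_truth, threshold=0.5):
    """Greedily match each prediction to its best unmatched ground-truth
    box; a match with IoU >= threshold counts as a true positive."""
    unmatched = list(ground_truth)
    tp = 0
    for p in predictions:
        best = max(unmatched, key=lambda g: iou(p, g), default=None)
        if best is not None and iou(p, best) >= threshold:
            unmatched.remove(best)
            tp += 1
    precision = tp / len(predictions) if predictions else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```

The same machinery works for Mask R-CNN by replacing box IoU with mask IoU, which is why the two detectors can be compared on equal terms despite producing different output shapes.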
