21

Digital spatial data quality evaluation based on user-defined parameters.

Salisso Filho, João Luiz 02 May 2013 (has links)
Spatial information is increasingly widespread in the everyday life of ordinary people, businesses and government institutions. Applications such as Google Earth, Bing Maps and GPS-based location services present spatial information as a commodity. More and more public and private organizations incorporate spatial data into their decision processes, making the quality of this kind of data ever more critical. Given the multidisciplinary nature and, especially, the volume of information available to users, a data quality evaluation method supported by computational processes is needed, one that enables the end user to assess the true fitness for use that such data have for an intended purpose. This Master's dissertation proposes a structured, computer-supported methodology for evaluating spatial data. The methodology, based on standards published by the International Organization for Standardization (ISO), allows users of spatial data to evaluate its quality against parameters they define themselves, and to compare the quality of the spatial data with the quality information provided by the data producer. The method thus helps the user determine the real suitability of the spatial data for its intended use.
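The comparison the abstract describes, user-defined quality parameters checked against producer-reported quality, can be sketched in a few lines. This is a hypothetical illustration, not the dissertation's actual method or software; all element names and threshold values below are invented for the example.

```python
# Hypothetical sketch: compare producer-reported quality measures against
# user-defined limits, in the spirit of ISO data quality elements.
# Element names and numbers are illustrative assumptions, not real metadata.

def fitness_for_use(producer_quality: dict, user_requirements: dict) -> dict:
    """Per quality element, report whether the dataset meets the user's limit.

    Both dicts map an element name (e.g. 'positional_accuracy_m') to a
    numeric value; lower values are assumed to mean better quality.
    """
    report = {}
    for element, max_allowed in user_requirements.items():
        measured = producer_quality.get(element)
        if measured is None:
            report[element] = "not reported by producer"
        else:
            report[element] = "pass" if measured <= max_allowed else "fail"
    return report

# A dataset with 2.5 m positional accuracy and 3% omission, evaluated by a
# user who tolerates at most 5 m positional error and 2% omission.
producer = {"positional_accuracy_m": 2.5, "omission_rate_pct": 3.0}
user = {"positional_accuracy_m": 5.0, "omission_rate_pct": 2.0}
print(fitness_for_use(producer, user))
```

The point of the sketch is the shape of the evaluation: a per-element verdict, so the user sees exactly which requirement a dataset fails rather than a single yes/no answer.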
22

Metadata challenges faced by producers and users of spatial data in South Africa.

Alford, Judith. January 2009 (has links)
A large number of spatial datasets have inconsistent and/or outdated metadata; in certain cases, metadata is entirely absent. Some spatial data producers report that metadata creation and maintenance is a time-consuming and labour-intensive process. Conversely, users have difficulty understanding and accessing spatial datasets when the associated metadata is insufficient or non-existent. Ultimately, deficient metadata may cause the meaning of spatial data to be lost and its very existence to be forgotten. The purpose of this study was to assess the main challenges hindering metadata creation and maintenance on the part of producers, and metadata usage on the part of users, in South Africa. The main findings showed that: data was accessed at expected levels via the internet; most data users accepted alternative spatial data media, including compact disks and hardcopy; the spatial data industry generally operates under budget constraints; particularly in the public sector, a lack of personnel skilled in spatial metadata management resulted in staff-turnover problems; the framework datasets had outdated metadata; and different producers used inconsistent metadata standards, with a number of organizations at a rudimentary stage of spatial metadata development. In conclusion, spatial data producers should be encouraged to maintain data with complete documentation in a standardized spatial metadata format to assure information consistency for users. Raising awareness of the benefits of spatial metadata may encourage data managers and senior leaders to prioritize metadata. Moreover, strong compliance with SDI policy requires solid cooperation within the spatial data community. / Thesis (M.Sc.) - University of KwaZulu-Natal, Pietermaritzburg, 2009.
23

Integration of vector datasets

Hope, Susannah Jayne January 2008 (has links)
As the spatial information industry moves from an era of data collection to one of data maintenance, new integration methods to consolidate or to update datasets are required. These must reduce the discrepancies that are becoming increasingly apparent when spatial datasets are overlaid. It is essential that any such methods consider the quality characteristics of, firstly, the data being integrated and, secondly, the resultant data. This thesis develops techniques that give due consideration to data quality during the integration process.
24

GeoDrill: using SQL to integrate heterogeneous spatial data sources with or without a schema.

ACIOLI FILHO, José Amilton Moura. 21 May 2018 (has links)
With the evolution of the web and of information systems, organizations have been acquiring data in many formats, structures and types, notably spatial data. Because these data have distinct characteristics, they end up stored in heterogeneous data sources, so it is increasingly necessary to invest in solutions that can integrate and analyse data from different sources. Some of these solutions can analyse the spatial component of the data; however, that analysis is limited by the data types or spatial functions supported. This work addresses the problem of integrating spatial data from heterogeneous data sources, with or without a schema, using the SQL language. This is an open problem in the field of spatial data integration, since existing solutions have numerous limitations, such as the query language used, the means of data access, the technologies that can be integrated, the functions made available and the spatial data types supported. To address this problem, the GeoDrill solution was developed: an extension of Apache Drill that supports all spatial functions standardized by the OGC (Open Geospatial Consortium) through the SQL language, and can query data with or without a schema. To validate GeoDrill's data integration capabilities, an experiment was conducted to analyse its functionality and performance. GeoDrill was able to integrate spatial data from heterogeneous sources, presenting itself as an alternative for resolving part of the limitations existing in the field.
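The kind of cross-source spatial join that a SQL-on-anything engine such as GeoDrill performs can be mimicked, purely for illustration, in plain Python over one schema-less JSON source and one tabular CSV source. This sketch is not GeoDrill's API; the sources, field names and the point-in-bounding-box predicate (a crude analogue of an OGC `ST_Within` call) are all assumptions made for the example.

```python
# Illustrative sketch of integrating a schema-less source (JSON records) with
# a tabular source (CSV) on a spatial predicate - the kind of join GeoDrill
# expresses in SQL over Apache Drill. All data here is invented.
import csv
import io
import json

json_source = '[{"name": "sensor-1", "x": 10.0, "y": 20.0}]'   # schema-less records
csv_source = "region,minx,miny,maxx,maxy\nA,0,0,15,25\nB,16,0,30,25\n"

points = json.loads(json_source)
regions = list(csv.DictReader(io.StringIO(csv_source)))

def within(px, py, r):
    """Point-in-bounding-box test (a rough ST_Within analogue)."""
    return (float(r["minx"]) <= px <= float(r["maxx"])
            and float(r["miny"]) <= py <= float(r["maxy"]))

# The cross-source "join": pair each point with every region it falls inside.
joined = [(p["name"], r["region"])
          for p in points for r in regions if within(p["x"], p["y"], r)]
print(joined)  # sensor-1 falls inside region A only
```

In GeoDrill itself this pairing would be a single SQL statement joining the two storage plugins on a spatial function; the sketch only shows why a common query layer over both sources is useful.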
26

Quality Assessment of Spatial Data: Positional Uncertainties of the National Shoreline Data of Sweden

Hast, Isak January 2014 (has links)
This study investigates the planimetric (x, y) positional accuracy of the National Shoreline (NSL) data, produced in collaboration between the Swedish mapping agency Lantmäteriet and the Swedish Maritime Administration (SMA). Owing to the compound nature of shorelines, such data are afflicted by substantial positional uncertainties, whereas the positional accuracy requirements for NSL data are high. An apparent problem is that Lantmäteriet does not measure the positional accuracy of NSL in accordance with the NSL data product specification. In addition, there is currently little understanding of the latent positional changes of shorelines over time, which directly influence the accuracy of NSL. Therefore, in line with the two specific aims of this study, an accuracy assessment technique is first applied to measure the positional accuracy of NSL; second, positional changes of NSL over time are analysed. The study provides an overview of potential problems and future prospects of NSL that Lantmäteriet can use to improve the quality assurance of the data. Two line-based NSL datasets within the NSL-classified regions of Sweden are selected, and their positional uncertainties are investigated using two distinct methodologies. First, an accuracy assessment method is applied and accuracy metrics based on the root-mean-square error (RMSE) are derived. These metrics are checked against the specification and standard accuracy tolerances; the calculated RMSE values, compared with the tolerances, indicate an acceptable accuracy for the tested data. Second, positional changes of the NSL data are measured using a proposed space-time analysis technique. The results reveal significant discrepancies between the two areas investigated, indicating that one of the test areas is subject to much greater positional change over time.
The accuracy assessment method used in this study has a number of constraints, one of which is the potential presence of bias in the derived accuracy metrics. Given these restrictions, the preferred method for assessing the positional accuracy of NSL is visual inspection against aerial photographs. From the space-time analysis, one important conclusion can be drawn: the time-dependent positional discrepancies between the two areas indicate that Swedish coastlines are affected by divergent degrees of positional change over time. Lantmäteriet should therefore consider updating NSL data at different intervals, depending on the prevailing regional changes, in order to assure the specified positional accuracy of the entire NSL data structure.
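The RMSE metric at the heart of the assessment above is straightforward to sketch, assuming matched pairs of tested and reference coordinates. The coordinates and the 1.0 m tolerance below are invented for illustration and are not Lantmäteriet's specification values.

```python
# Minimal sketch of planimetric (x, y) RMSE between tested points and
# their reference ("true") positions. All coordinates are invented.
import math

def planimetric_rmse(tested, reference):
    """Root-mean-square error of horizontal position over matched point pairs."""
    sq = [(xt - xr) ** 2 + (yt - yr) ** 2
          for (xt, yt), (xr, yr) in zip(tested, reference)]
    return math.sqrt(sum(sq) / len(sq))

tested = [(100.0, 200.0), (150.0, 250.0), (200.0, 300.0)]
reference = [(100.3, 200.4), (150.0, 250.0), (199.6, 300.3)]

rmse = planimetric_rmse(tested, reference)
# An assumed tolerance check, mirroring the thesis's "metrics vs tolerances" step.
print(f"RMSE = {rmse:.3f} m, within 1.0 m tolerance: {rmse <= 1.0}")
```

Note that RMSE aggregates both systematic bias and random error into one number, which is exactly the constraint the abstract raises: a biased dataset can still produce a seemingly acceptable RMSE.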
27

Bayesian Spatial Quantile Regression.

Reich, BJ, Fuentes, M, Dunson, DB 03 1900 (has links)
Tropospheric ozone is one of the six criteria pollutants regulated by the United States Environmental Protection Agency under the Clean Air Act and has been linked with several adverse health effects, including mortality. Due to the strong dependence on weather conditions, ozone may be sensitive to climate change and there is great interest in studying the potential effect of climate change on ozone, and how this change may affect public health. In this paper we develop a Bayesian spatial model to predict ozone under different meteorological conditions, and use this model to study spatial and temporal trends and to forecast ozone concentrations under different climate scenarios. We develop a spatial quantile regression model that does not assume normality and allows the covariates to affect the entire conditional distribution, rather than just the mean. The conditional distribution is allowed to vary from site-to-site and is smoothed with a spatial prior. For extremely large datasets our model is computationally infeasible, and we develop an approximate method. We apply the approximate version of our model to summer ozone from 1997-2005 in the Eastern U.S., and use deterministic climate models to project ozone under future climate conditions. Our analysis suggests that holding all other factors fixed, an increase in daily average temperature will lead to the largest increase in ozone in the Industrial Midwest and Northeast. / Dissertation
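The core of quantile regression is the check (pinball) loss, under which the best constant predictor at level tau is the empirical tau-quantile. The sketch below illustrates only that basic idea, not the paper's Bayesian spatial model; the ozone values are invented for the example.

```python
# Sketch of the check (pinball) loss underlying quantile regression.
# Minimizing total check loss over a constant recovers an empirical
# tau-quantile of the data; the ozone values below are invented.

def check_loss(residual, tau):
    """rho_tau(u) = u * (tau - I(u < 0)): asymmetric absolute loss."""
    return residual * (tau - (1.0 if residual < 0 else 0.0))

def best_constant(data, tau):
    """Data value minimizing total check loss: an empirical tau-quantile."""
    return min(data, key=lambda c: sum(check_loss(y - c, tau) for y in data))

ozone = [30, 35, 40, 45, 50, 55, 60, 90]  # illustrative daily ozone values (ppb)
print(best_constant(ozone, 0.5))   # a central (median-like) value
print(best_constant(ozone, 0.9))   # an upper-tail value
```

Fitting upper quantiles (e.g. tau = 0.9) rather than the mean is what lets the paper's model speak directly about high-ozone days, the ones that matter for health effects, without assuming normality.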
28

NEW METHODS FOR MINING SEQUENTIAL AND TIME SERIES DATA

Al-Naymat, Ghazi January 2009 (has links)
Doctor of Philosophy (PhD) / Data mining is the process of extracting knowledge from large amounts of data. It covers a variety of techniques aimed at discovering diverse types of patterns on the basis of the requirements of the domain. These techniques include association rules mining, classification, cluster analysis and outlier detection. The availability of applications that produce massive amounts of spatial, spatio-temporal (ST) and time series data (TSD) is the rationale for developing specialized techniques to excavate such data. In spatial data mining, the spatial co-location rule problem is different from the association rule problem, since there is no natural notion of transactions in spatial datasets that are embedded in continuous geographic space. Therefore, we have proposed an efficient algorithm (GridClique) to mine interesting spatial co-location patterns (maximal cliques). These patterns are used as the raw transactions for an association rule mining technique to discover complex co-location rules. Our proposal includes certain types of complex relationships – especially negative relationships – in the patterns. The relationships can be obtained from only the maximal clique patterns, which have never been used until now. Our approach is applied on a well-known astronomy dataset obtained from the Sloan Digital Sky Survey (SDSS). ST data is continuously collected and made accessible in the public domain. We present an approach to mine and query large ST data with the aim of finding interesting patterns and understanding the underlying process of data generation. An important class of queries is based on the flock pattern. A flock is a large subset of objects moving along paths close to each other for a predefined time. 
One approach to processing a “flock query” is to map ST data into a high-dimensional space and to reduce the query to a sequence of standard range queries that can be answered using a spatial indexing structure; however, the performance of spatial indexing structures rapidly deteriorates in high-dimensional space. This thesis sets out a preprocessing strategy that uses a random projection to reduce the dimensionality of the transformed space. We use probabilistic arguments to prove the accuracy of the projection and present experimental results showing that the curse of dimensionality can be managed in an ST setting by combining random projections with traditional data structures. In time series data mining, we devised a new space-efficient algorithm (SparseDTW) to compute the dynamic time warping (DTW) distance between two time series, which always yields the optimal result. This is in contrast to other approaches, which typically sacrifice optimality to attain space efficiency. The main idea behind our approach is to dynamically exploit similarity and/or correlation between the time series: the greater the similarity between the time series, the less space is required to compute the DTW between them. Other techniques for speeding up DTW impose a priori constraints and do not exploit similarity characteristics that may be present in the data. Our experiments demonstrate that SparseDTW outperforms these approaches. Applying the SparseDTW algorithm, we discover an interesting pattern, “pairs trading”, in a large stock-market dataset of daily index prices from the Australian Stock Exchange (ASX) from 1980 to 2002.
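For reference, the standard full dynamic-programming DTW that SparseDTW matches in optimality can be written in a few lines. This is only a baseline sketch: the sparse bookkeeping that gives SparseDTW its space savings is not reproduced here.

```python
# Classic full-matrix DTW with absolute-difference cost. SparseDTW computes
# the same optimal distance while filling far fewer matrix cells when the
# two series are similar; this baseline fills all O(n*m) cells.

def dtw(a, b):
    """Optimal DTW distance between sequences a and b."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: match, insertion, deletion.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

print(dtw([1, 2, 3, 4], [1, 2, 2, 3, 4]))  # 0.0: warping absorbs the repeated 2
```

The baseline's O(n·m) space is exactly what SparseDTW attacks: when the series are similar, most cells of `d` never lie on or near the optimal warping path and need not be stored.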
29

Developing Australian Spatial Data Policies - Existing Practices and Future Strategies

Mason, Renate, Surveying & Spatial Information Systems, Faculty of Engineering, UNSW January 2002 (has links)
This thesis investigates the problems associated with the development of Spatial Data Infrastructures (SDIs). The results of this investigation are used as input for developing new spatial data policy strategies for individual organisations, to enable better facilitation of SDIs. The policy issues that an organisation needs to consider when developing spatial data policies were identified as: SDI requirements; organisational issues; technical issues; governmental/organisational duties; ownership/custodianship; privacy and confidentiality; legal liability, contracts and licences; intellectual property law; economic analysis; data management; outreach, cooperation and political mandate; and users' choices, rights and obligations. In order to understand current spatial data policy practices and to devise new policy strategies, a spatial data survey was conducted addressing the identified SDI problem areas. Some 6630 questionnaires were mailed out and more than 400 responses were returned, of which 379 were usable. Once analysed, the results were compared with the findings of the SDI investigation and used throughout the thesis. The results of the spatial data survey are displayed in tables and graphs throughout Chapters 3, 4, 5 and 6 and in Appendix 2, showing the answers to the questionnaire as a percentage of the total number of respondents. The survey found that many organisations had no spatial data policies, nor individual policies on spatial data pricing and/or intellectual property protection. The thesis established that SDI requirements are not being met by many spatial data policies used by individual organisations. Hence, it studied the spatial data policy issues involved when an organisation develops new policies, with the aim of aiding the development of SDIs.
It uniquely established current Australian spatial data policy practices in the areas of spatial data quality, access, pricing, and legal issues to form the basis for future strategies. It reviewed the current knowledge of intellectual property law applied to spatial data and devised new approaches to deal with all the identified policy issues. Finally, the thesis defines spatial data policies that facilitate SDI development.
30

Quantifying Vein Patterns in Growing Leaves

Assaf, Rebecca 16 May 2011 (has links)
How patterns arise from an apparently uniform group of cells is one of the classical problems in developmental biology. The mechanism is complicated by the fact that patterning occurs on a growing medium: changes in an organism's size and shape affect the patterning processes, and patterning itself may in turn affect growth. This interaction between growth and patterning leads to the generation of complex shapes and structures from simpler ones. Studying such interactions requires the ability to monitor both processes in vivo. To this end, we developed a new technique to monitor and quantify vein patterning in a growing leaf over time, using leaves of Arabidopsis thaliana as a model system. We used a transgenic line with fluorescent markers associated with the venation, and followed individual leaves in many samples in vivo through time-lapse imaging. Custom-made software allowed us to extract the leaf surface and vein pattern from images of each leaf at each time point. Average spatial maps generated from multiple samples then revealed spatio-temporal gradients. Our quantitative description of wild-type vein patterns during leaf development revealed that there is no constant size at which a region of tissue enclosed by vasculature becomes irrigated by a new vein; instead, vein formation appears to depend on the growth rate of the tissue. This is the first time that vein patterning in growing leaves has been quantified. The techniques developed will later be used to explore the interaction between growth and patterning through a variety of approaches, including mutant analysis, pharmacological treatments and variation of environmental conditions.
