  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Analysis of the positional accuracy of linear features.

Lawford, Geoffrey John Unknown Date (has links) (PDF)
Although the positional accuracy of spatial data has long been of fundamental importance in GIS, it is still largely unknown for linear features. This is compromising the ability of GIS practitioners to undertake accurate geographic analysis and hindering GIS in fulfilling its potential as a credible and reliable tool. As early as 1987 the US National Center for Geographic Information and Analysis identified accuracy as one of the key elements of successful GIS implementation. Yet two decades later, while there is a large body of geodetic literature addressing the positional accuracy of point features, there is little research addressing the positional accuracy of linear features, and still no accepted accuracy model for linear features. It has not helped that national map and data accuracy standards continue to define accuracy only in terms of “well-defined points”. This research aims to address these shortcomings by exploring the effect on linear feature positional accuracy of feature type, complexity, segment length, vertex proximity and e-scale, that is, the scale of the paper map from which the data were originally captured or to which they are customised for output.

The research begins with a review of the development of map and data accuracy standards, and a review of existing research into the positional accuracy of linear features. A geographically sensible error model for linear features using point matching is then developed and a case study undertaken. Features of five types, at five e-scales, are selected from commonly used, well-regarded Australian topographic datasets, and tailored for use in the case study. Wavelet techniques are used to classify the case study features into sections based on their complexity. Then, using the error model, half a million offsets and summary statistics are generated that shed light on the relationships between positional accuracy and e-scale, feature type, complexity, segment length, and vertex proximity. Finally, auto-regressive time series modelling and moving block bootstrap analysis are used to correct the summary statistics for correlation.

The main findings are as follows. First, metadata for the tested datasets significantly underestimates the positional accuracy of the data. Second, positional accuracy varies with e-scale but not, as might be expected, in a linear fashion. Third, positional accuracy varies with feature type, but not as the rules of generalisation suggest. Fourth, complex features lose accuracy faster than less complex features as e-scale is reduced. Fifth, the more complex a real-world feature, the worse its positional accuracy when mapped. Finally, accuracy mid-segment is greater than accuracy end-segment.
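The point-matching idea behind such an error model, offsetting each vertex of a tested line to its nearest point on a reference line and then summarising the offsets, can be sketched as follows. This is a minimal illustration, not Lawford's actual model; the polylines and function names are invented for the example:

```python
import math

def point_segment_dist(p, a, b):
    # Distance from point p to the closest point on segment a-b.
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter t to the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def line_offsets(tested, reference):
    # Offset of each tested vertex = distance to the nearest reference segment.
    return [min(point_segment_dist(p, reference[i], reference[i + 1])
                for i in range(len(reference) - 1))
            for p in tested]

def rmse(offsets):
    # Root-mean-square of the offsets, a common summary statistic.
    return math.sqrt(sum(d * d for d in offsets) / len(offsets))

ref = [(0, 0), (10, 0)]               # reference feature (e.g. surveyed line)
test = [(0, 0.3), (5, -0.4), (10, 0.2)]  # tested feature (e.g. captured data)
offs = line_offsets(test, ref)
print(round(rmse(offs), 3))  # → 0.311
```

Summary statistics of this kind are what the correlation corrections (auto-regressive modelling, moving block bootstrap) would then be applied to, since neighbouring offsets along a line are not independent.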
2

Quality Assessment of Spatial Data: Positional Uncertainties of the National Shoreline Data of Sweden

Hast, Isak January 2014 (has links)
This study investigates the planimetric (x, y) positional accuracy of the National Shoreline (NSL) data, produced in collaboration between the Swedish mapping agency Lantmäteriet and the Swedish Maritime Administration (SMA). Owing to the compound nature of shorelines, such data are affected by substantial positional uncertainties, while the positional accuracy requirements of NSL data are high. An apparent problem is that Lantmäteriet does not measure the positional accuracy of NSL in accordance with the NSL data product specification. In addition, there is currently little understanding of the latent positional changes of shorelines over time, which directly influence the accuracy of NSL. Therefore, in line with the two specific aims of this study, an accuracy assessment technique is first applied to measure the positional accuracy of NSL; second, positional changes of NSL over time are analysed. The study provides an overview of potential problems and future prospects of NSL that Lantmäteriet can use to improve the quality assurance of the data. Two line-based NSL data sets within the NSL classified regions of Sweden are selected, and their positional uncertainties are investigated using two distinct methodologies. First, an accuracy assessment method is applied and accuracy metrics based on the root-mean-square error (RMSE) are derived and checked against the specification and standard accuracy tolerances. Comparison of the calculated RMSE metrics with the tolerances indicates that the tested data meet the required accuracy. Second, positional changes of NSL data are measured using a proposed space-time analysis technique. The results reveal significant discrepancies between the two areas investigated, indicating that one of the test areas is affected by much greater positional change over time.

The accuracy assessment method used in this study has a number of constraints, one of which is the potential presence of bias in the derived accuracy metrics. Given these restrictions, the preferred method for assessing the positional accuracy of NSL is visual inspection against aerial photographs. From the space-time analysis, one important conclusion can be drawn: the time-dependent positional discrepancies between the two areas indicate that Swedish coastlines are affected by differing degrees of positional change over time. Lantmäteriet should therefore consider updating NSL data at intervals that depend on the prevailing regional changes, so as to maintain the specified positional accuracy of the entire NSL data structure.
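An RMSE check against a specification tolerance of the kind described can be sketched as follows. The discrepancy values and the tolerance are invented stand-ins; the NSL specification figures are not reproduced here:

```python
import math

def rmse(errors):
    # Root-mean-square error of a sample of signed discrepancies.
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical per-point discrepancies (metres) in easting and northing
dx = [0.8, -1.1, 0.4, 1.6, -0.7]
dy = [0.5, 0.9, -1.2, 0.3, 1.0]

# Planimetric (radial) RMSE combines the two component RMSEs.
rmse_xy = math.sqrt(rmse(dx) ** 2 + rmse(dy) ** 2)

tolerance = 2.0  # metres; a made-up stand-in for the specification value
print(round(rmse_xy, 2), rmse_xy <= tolerance)  # → 1.32 True
```

A check like this accepts or rejects a data set, but as the abstract notes it says nothing about bias: a systematic shift inflates the RMSE without being separately identified, which is one reason visual inspection against aerial photographs is preferred here.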
3

Análise da qualidade posicional das bases do Google Maps, Bing Maps e da Esri para referência espacial em projetos em SIG: aplicação para o município de São Paulo. / Horizontal positional accuracy of Bing Maps, Google Maps and Esri's World Imagery as spatial references within a geographic information system for the municipality of São Paulo.

Sztutman, Paulo 09 December 2014 (has links)
This research analysed the horizontal positional accuracy of the Bing Maps, Google Maps and Esri World Imagery basemaps when used as online spatial references within a Geographic Information System (GIS) for the municipality of São Paulo. The methodology was based on the criteria defined by Brazilian Federal Decree 89,817/84 and on the statistical analysis proposed by Merchant (1982). The accuracy analysis was based on the discrepancies between the coordinates of 240 points selected from the 1:1,000 sheets of the Digital Map of the City of São Paulo (MDC) and the corresponding coordinates in the three basemaps, with the North and East coordinate directions considered separately. The Google Maps basemap for the municipality was divided in two (an orthophoto mosaic for the central area and a satellite image mosaic for the peripheral areas) because of the considerable difference in accuracy between the two products. To classify each basemap under Decree 89,817, two scales were determined: the scale at which only 10% of the discrepancies exceed the PEC (the decree's planimetric accuracy standard), and the scale at which the root mean square error (RMSE) of the discrepancy sample equals 60.8% of the PEC; the final scale selected was the smaller (less detailed) of the two. The statistical analysis comprised a bias test and a precision test. Because all three basemaps showed bias, the scale defined by the precision test was not considered in the final computation, given the difficulty of eliminating bias in these basemaps when they are used in a GIS. The final class A scales obtained were: Google Maps (satellite images), 1:12,400; Google Maps (orthophotos), 1:3,588; Bing Maps, 1:10,881; and Esri World Imagery, 1:8,420.

It was concluded that the three products with scales close to 1:10,000 are accurate enough to be used as GIS basemaps in urban planning studies, and that Google Maps (orthophotos, with a scale close to 1:4,000) can equally be used for planning and, given its higher accuracy, also for the management of urban services. The main limitation found for these basemaps as spatial references in GIS was the tilt of features far from the image or orthophoto nadir (off-nadir effects) and the consequent occlusion of adjacent areas; however, this limitation proved almost negligible at the scales determined for the basemaps in the accuracy analysis.
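The two Decree 89,817 criteria described, at most 10% of discrepancies above the PEC and an RMSE of at most 60.8% of the PEC, can be turned into a scale estimate along these lines. This is a sketch assuming the commonly used class A planimetric PEC of 0.5 mm at map scale; the discrepancy values and function names are invented:

```python
import math

def percentile90(values):
    # Nearest-rank 90th percentile.
    s = sorted(values)
    return s[max(0, math.ceil(0.9 * len(s)) - 1)]

def class_a_scale_denominator(discrepancies_m, pec_mm=0.5):
    # Class A planimetric PEC = pec_mm at map scale; the decree requires
    # 90 % of discrepancies within the PEC and RMSE <= 60.8 % of the PEC.
    p90 = percentile90(discrepancies_m)
    r = math.sqrt(sum(d * d for d in discrepancies_m) / len(discrepancies_m))
    d_pec = p90 / (pec_mm / 1000.0)         # denominator at which 90 % pass
    d_rmse = r / (0.608 * pec_mm / 1000.0)  # denominator at which RMSE passes
    return max(d_pec, d_rmse)               # the less detailed scale wins

# Hypothetical discrepancies (metres) between basemap and reference points
d = [1.2, 2.5, 0.8, 3.1, 1.9, 2.2, 0.5, 4.0, 1.4, 2.8]
print(round(class_a_scale_denominator(d)))  # → 7533, i.e. roughly 1:7,500
```

Taking the larger denominator mirrors the study's rule that the final scale is the least detailed of those defined by the two criteria.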
4

Integration of vector datasets

Hope, Susannah Jayne January 2008 (has links)
As the spatial information industry moves from an era of data collection to one of data maintenance, new integration methods to consolidate or to update datasets are required. These must reduce the discrepancies that are becoming increasingly apparent when spatial datasets are overlaid. It is essential that any such methods consider the quality characteristics of, firstly, the data being integrated and, secondly, the resultant data. This thesis develops techniques that give due consideration to data quality during the integration process.
5

Proposta de um novo método para o planejamento de redes geodésicas / A proposal of a new method for the design of geodetic networks

Klein, Ivandro January 2014 (has links)
The aim of this work is to develop and propose a new method for the design of geodetic networks. The design (or pre-analysis) of a geodetic network consists of planning (or optimising) the network so that it meets pre-established quality criteria, such as precision, reliability and cost, in accordance with the project objectives. In the method proposed here, the criteria considered at the design stage are: the minimum acceptable levels of reliability and homogeneity of the observations; the positional accuracy of the network points, taking into account both precision and (possible) bias effects at a given confidence level; the maximum allowable number of undetected outliers; and the minimum power of the test of the Data Snooping (DS) procedure in the n-dimensional scenario, i.e., with all observations tested individually. According to the classifications found in the literature, the proposed method is a combined design problem, solved by a trial-and-error approach, and it introduces some novel aspects in its design criteria. To demonstrate its practical application, a numerical example of the design of a GNSS (Global Navigation Satellite System) network is presented and described. The results obtained after processing the GNSS network data agreed with the values estimated at the design stage; that is, the proposed method performed satisfactorily in practice. It was also investigated how the pre-established criteria, the geometry/configuration of the network, and the initial precision/correlation of the observations influence the results of the design stage.

These experiments showed, among other findings, that all the design criteria of the proposed method are intrinsically interrelated: for example, low redundancy leads to a relatively larger precision component and, consequently, a relatively smaller bias component (keeping the final accuracy constant), which in turn leads to a significantly lower minimum power of the test in both the one-dimensional and the n-dimensional scenarios.
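The precision and reliability quantities used in such a pre-analysis can be computed from the design matrix alone, before any observation is made, which is what makes trial-and-error design possible. The sketch below uses a toy levelling network with equal weights, not the thesis's GNSS example; all numbers are invented:

```python
import math

# Design matrix for a tiny levelling network: unknown heights h1, h2,
# observations: h1, (h2 - h1), h2, all with sigma = 2 mm (equal weights, P = I).
A = [[1, 0], [-1, 1], [0, 1]]
sigma = 2.0  # mm

# Normal matrix N = A^T A and its inverse Qx (2x2 case, done by hand).
N = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)] for i in range(2)]
det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
Qx = [[N[1][1] / det, -N[0][1] / det], [-N[1][0] / det, N[0][0] / det]]

# Precision criterion: a-priori standard deviation of each unknown height.
sd = [sigma * math.sqrt(Qx[i][i]) for i in range(2)]

# Reliability criterion: redundancy numbers r_i = (I - A Qx A^T)_ii for P = I;
# their sum equals the network redundancy n - u = 3 - 2 = 1.
r = [1 - sum(A[i][j] * Qx[j][k] * A[i][k] for j in range(2) for k in range(2))
     for i in range(3)]
print([round(s, 2) for s in sd], [round(x, 2) for x in r])
# → [1.63, 1.63] [0.33, 0.33, 0.33]
```

If the computed standard deviations or redundancy numbers fail the chosen criteria, the designer adds or re-weights observations and repeats the computation, which is the trial-and-error loop the abstract refers to.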
9

Jämförelse av mätosäkerhet i höjd över tid med NRTK : Undersökning av tidsvariationer för GNSS-höjder inmätta med NRTK / Comparison of measurement uncertainty for GNSS-derived heights using NRTK

Werling, Kristoffer, Höglund, Andreas January 2021 (has links)
Global Navigation Satellite Systems (GNSS) combined with Network RTK (NRTK) have become a common method for geodetic surveying in plane and height. NRTK surveying has the advantage of being relatively easy to use and of delivering coordinates in real time with low measurement uncertainty (2–3 cm). However, users have reported that large deviations in height are more common than deviations in plane, and that these vary over time. Previous studies have not found a relationship between NRTK height deviations and variation over time. The aim of this study was to investigate how heights measured with NRTK vary over time, what affects the height deviation, and whether a pattern in the deviations can be found. A GNSS receiver with NRTK was used to collect data at two control points, once per minute for three hours at each point, over a total of two days. The data stored and analysed were the number of satellites, PDOP, vertical precision, and the time and value of each measured height. Two different elevation masks, 10 and 15 degrees, were also used during the fieldwork to see how the measurement uncertainty was affected. In addition, a double-run levelling between the control points was carried out as a check of the stated height coordinates. The results show that the variation of an individual measured height is random. An examination of PDOP, number of satellites, and horizontal and vertical precision revealed no correlation with the measurement results. The standard uncertainty in height for the measurement series at point 8316 was calculated to 1.2 cm with a 10° elevation mask and 1.6 cm with 15°; at point 8318 it was 2.6 cm with 10° and 3.4 cm with 15°. The analysis of the data makes clear that the reliability of GNSS measurement with NRTK needs to be improved: for point 8318 with a 15° elevation mask, the standard uncertainty exceeds the stated measurement uncertainty of the method.
With a 10° elevation mask, the target of 68 % of the measurements within sigma level 2 was not reached, and just over 13 % fell within sigma levels 2–3. When the standard uncertainty was calculated with averaging, marked improvements were obtained at point 8316 for both measurement series up to 20 minutes, with diminishing improvements thereafter. At point 8318, improvements continued all the way to 60 minutes, the longest interval examined. The difference between the largest and smallest deviation of a measured height from the known height was 9.2 cm for point 8316 with a 15° elevation mask and 6.9 cm with 10°; for point 8318 the corresponding figures were 16.5 cm with 15° and 12.3 cm with 10°. GNSS users need to be aware that individual measurements with low values displayed on the handheld unit do not guarantee good results. Several measurements over a short period can yield a very low standard uncertainty, yet an hour later the same low standard uncertainty for new measurements may still represent a deviating measured height. Averaging, time separation and a suitable elevation mask are probably required to reliably ensure that the method achieves a 2–3 cm measurement uncertainty in height for 68 % of the measurements. / The usage of GNSS is becoming increasingly common due to its efficiency and time-saving capacity. Network RTK is a method that uses relative positioning and is expected to deliver measurements with a positional accuracy of about 2–3 cm in plane and height. Previous research shows variation in measurement results when using NRTK at different times or days, but its focus was on aspects other than time itself. This study focused on time and its impact on GNSS-derived heights, linked to methods used in practical GNSS surveying, with the intent of producing guidelines or routines for increased reliability of the measurement data where the stated positional accuracy is exceeded.
Measurement data were gathered at two vertical control points during three hours each, on two separate days, with data collected at a one-minute interval. The findings of this study show that the variation over time is random, and that there are no standard settings or routines that guarantee the method will deliver the commonly stated positional accuracy. However, we found that certain steps improve the measurements: time separation, averaging measurements over at least twenty minutes, and a good understanding of how to set an appropriate elevation mask.
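The averaging effect the abstract describes can be sketched in a few lines of Python (an illustration only, not code from the thesis; the one-minute height epochs and the window length below are invented): the standard uncertainty of raw epochs is compared with the uncertainty of non-overlapping window means.

```python
import statistics

def standard_uncertainty(heights, known_height):
    """Sample standard deviation (1-sigma) of deviations from a known height."""
    deviations = [h - known_height for h in heights]
    return statistics.stdev(deviations)

def windowed_means(heights, window):
    """Means of consecutive non-overlapping windows, e.g. 20 one-minute epochs."""
    return [statistics.mean(heights[i:i + window])
            for i in range(0, len(heights) - window + 1, window)]

# Hypothetical one-minute NRTK height epochs around a known height of 10.000 m.
known = 10.000
epochs = [10.012, 9.991, 10.004, 9.987, 10.009, 9.996,
          10.015, 9.989, 10.002, 9.994, 10.007, 9.998]

raw_u = standard_uncertainty(epochs, known)                      # per-epoch uncertainty
avg_u = standard_uncertainty(windowed_means(epochs, 4), known)   # uncertainty of window means
# For noise-like series, averaging over windows lowers the standard uncertainty,
# which is the motivation for the 20-minute averaging recommendation above.
```

As the thesis cautions, a low per-window uncertainty alone does not rule out a systematic offset; time separation between windows is still needed.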

Automating Geographic Object-Based Image Analysis and Assessing the Methods Transferability : A Case Study Using High Resolution Geografiska SverigedataTM Orthophotos

Hast, Isak, Mehari, Asmelash January 2016 (has links)
Geographic object-based image analysis (GEOBIA) is an innovative image classification technique that treats spatial features in an image as objects rather than as pixels, thus more closely resembling human perception of geographic space. However, the process of a GEOBIA application allows for multiple interpretations. Particularly sensitive parts of the process include image segmentation and training data selection. The multiresolution segmentation algorithm (MSA) is commonly applied. The performance of segmentation depends primarily on the algorithm's scale parameter, since scale controls the size of the image objects produced. The fact that the scale parameter is unitless makes it a challenge to select a suitable value, leaving the analyst with a method of trial and error that can introduce bias. Additionally, apart from segmentation, training area selection usually means that the data have to be collected manually. This is not only time consuming but also prone to subjectivity. In order to overcome these challenges, we tested a GEOBIA scheme that involved automatic methods of MSA scale parameterisation and training area selection, which enabled us to classify images more objectively. Three study areas within Sweden were selected. The data used were high-resolution Geografiska Sverigedata (GSD) orthophotos from the Swedish mapping agency, Lantmäteriet. We objectively determined the scale for each classification using a previously published technique embedded as a tool in the eCognition software. Based on the orthophoto inputs, the tool calculated local variance and its rate of change at different scales. These figures helped us to determine the scale value for the MSA segmentation. Moreover, in this study we developed a novel method for automatic training area selection. The method is based on thresholded feature-statistics layers computed from the orthophoto band derivatives. Thresholds were detected by Otsu's single and multilevel algorithms.
The layers were then run through a filtering process that left only those fit for use in the classification process. We also tested the transferability of classification rule-sets for two of the study areas. This test helped us to investigate the degree to which automation can be realised. In this study we have made progress toward a more objective way of object-based image classification, realised by automating the scheme. Particularly noteworthy is the proposed algorithm for automatic training area selection, which, compared to manual selection, restricts human intervention to a minimum. Results of the classification show overall well-delineated classes, in particular the border between open area and forest, aided by the elevation data. On the other hand, some challenges persist in separating deciduous from coniferous forest. Furthermore, although water was accurately classified in most instances, in one of the study areas the water class showed contradictory results between its thematic and positional accuracy, stressing the importance of assessing the result on more than thematic accuracy alone. From the transferability test we noted the importance of considering the spatial and spectral characteristics of an area before transferring rule-sets, as these factors are key to determining whether a transfer is possible.
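The thresholding step at the heart of the automatic training-area selection can be illustrated with a minimal single-level Otsu implementation in plain Python (a sketch only; the thesis applied Otsu's single and multilevel algorithms to feature-statistics layers inside eCognition, and the pixel values below are invented):

```python
def otsu_threshold(pixels):
    """Single-level Otsu threshold for integer values in 0..255.

    Picks the threshold t that maximises the between-class variance
    of the two classes {values <= t} and {values > t}.
    """
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_b = 0.0          # cumulative value*count of the lower ("background") class
    w_b = 0              # cumulative count of the lower class
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                    # mean of the lower class
        m_f = (sum_all - sum_b) / w_f        # mean of the upper class
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Applied to a feature-statistics layer, pixels above the returned threshold would form candidate training areas; a multilevel variant simply searches for several thresholds at once.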
