1

Viability of the 'Yin-Yang grid' as a basis for future generations of atmospheric models

Goddard, Jacqueline Clare January 2014 (has links)
This thesis concerns the viability of the 'Yin-Yang grid' for future generations of atmospheric models. The 'Yin-Yang grid' is an overset grid in which two segments of the classical latitude-longitude grid, with the poles excised, are rotated relative to each other and fit together rather like the surface of a tennis ball. We investigate whether wave propagation can be accurately modelled across the overlap regions, and whether transport of air mass properties, such as entropy or water content, can be modelled accurately and conservatively across the overlap regions without significant 'grid imprinting' on the solution. The wave propagation results demonstrate that the overlapping regions support computational (spurious) wave modes, and methods for controlling these modes are discussed. Transport schemes are investigated in one dimension using the Chesshire and Henshaw conservative interpolation scheme [Chesshire and Henshaw 1984] and the new 'Zerroukat mass fixer' scheme. The Zerroukat mass fixer scheme is extended to the (two-dimensional) Yin-Yang grid. The results demonstrate that the Zerroukat mass fixer scheme is successful in conserving mass. However, the Zerroukat scheme interferes with flux-limiter schemes, so overshoots can occur. It also reduces convergence rates by two orders of accuracy; therefore, if second-order convergence is required, a fourth-order scheme would need to be used.
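As a minimal sketch of the Yin-Yang construction described above, the snippet below maps a point on one latitude-longitude patch (Yin) to the coordinates of the rotated patch (Yang) using the standard Cartesian rotation from the literature (Kageyama and Sato, 2004); it is illustrative only and not the thesis code.

```python
import numpy as np

def yin_to_yang(lat, lon):
    """Map a point (degrees) on the Yin patch to Yang-patch coordinates.

    The Yang patch is the Yin patch rotated so that its polar axis lies
    on the Yin equator; in Cartesian terms (x, y, z) -> (-x, z, y).
    """
    phi, lam = np.radians(lat), np.radians(lon)
    # Cartesian coordinates on the unit sphere (Yin frame)
    x = np.cos(phi) * np.cos(lam)
    y = np.cos(phi) * np.sin(lam)
    z = np.sin(phi)
    # Rotate into the Yang frame (the transform is its own inverse)
    xy, yy, zy = -x, z, y
    lat_yang = np.degrees(np.arcsin(zy))
    lon_yang = np.degrees(np.arctan2(yy, xy))
    return lat_yang, lon_yang

# Example: a point near the edge of the Yin patch, inside the overlap region
print(yin_to_yang(40.0, 130.0))
```

Points inside the overlap region have valid coordinates on both patches, which is where the interpolation and mass-fixing issues discussed in the abstract arise.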
2

The study of a mesoscale model applied to the prediction of offshore wind resource

Hughes, James January 2014 (has links)
The Supergen wind research consortium is a group of research centres whose work is primarily aimed at reducing the cost of offshore wind farming. In this research the WRF mesoscale NWP model is applied to offshore wind resource assessment to assess its potential as an operational tool. WRF is run in a variety of configurations for a number of locations to establish and optimise its level of performance and to assess how accessible that performance might be to an end user. Three studies set out to establish a level of performance at two different sites and to improve it through optimisation of the model setup and post-processing techniques. WRF was found to simulate wind speed well by comparison with similar studies, though performance varied through the course of the model runs and with location. An average correlation coefficient of 0.9 was found for the Shell Flats resource assessment at 6-hourly resolution, with an RMSE of 1.7 m s⁻¹. Performance at Scroby Sands was not as high as that seen for Shell Flats, with an average correlation coefficient for wind speed of 0.64 and an RMSE of 2 m s⁻¹. A range of variables was simulated by the model in the Shell Flats investigation to test the flexibility of the model output. Wind direction was produced to a moderate level of accuracy at 10-minute resolution, while aggregated stability statistics showed that the model captured the frequency of observed cases well. Areas of uncertainty in model performance were addressed through model optimisation techniques, including the generation of two ensembles and observational nudging. Both techniques were found to add value to the model output as well as improving performance. The difference between the performance observed at Shell Flats and at Scroby Sands shows that while the model clearly has inherent skill, it is sensitive to the environment to which it is applied. To maximise performance, as large a computing resource as possible is recommended, together with a concerted effort to optimise the model setup so that it can perform to its best ability. There is room for improvement in the application of mesoscale NWP to offshore wind resource assessment, but these results confirm an inherent skill in model performance. With further validation, improvements to model setup on a case-by-case basis and the application of optimisation techniques, it is anticipated that mesoscale NWP can perform to a level which would justify its operational adoption by the industry. The flexibility offered in spatial and temporal coverage, as well as the range of variables which can be produced, makes it an attractive option to developers if consistently high performance can be established.
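To illustrate the kind of verification statistics quoted above (correlation coefficient and RMSE between modelled and observed wind speed), a minimal sketch with synthetic data is given below; the series and values are illustrative, not taken from the thesis.

```python
import numpy as np

def verify_wind_speed(model, obs):
    """Pearson correlation coefficient and RMSE between modelled and
    observed wind-speed series of equal length (m/s)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    corr = np.corrcoef(model, obs)[0, 1]
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    return corr, rmse

# Synthetic 6-hourly example
rng = np.random.default_rng(0)
obs = 8.0 + 2.5 * rng.standard_normal(200)      # "observed" wind speed
model = obs + 1.5 * rng.standard_normal(200)    # "modelled" with random error
print(verify_wind_speed(model, obs))
```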
3

Predikce bleskové aktivity numerickým modelem předpovědi počasí / Lightning activity prediction using a numerical weather prediction model

Uhlířová, Iva January 2020 (has links)
Lightning activity is considered a severe meteorological hazard that needs to be studied, monitored and predicted. This thesis focuses on the prediction of lightning activity using the Lightning Potential Index (LPI) in the COSMO numerical weather prediction (NWP) model, which comprises 1- and 2-moment (1M and 2M, respectively) cloud microphysical schemes. The objective of this thesis is to investigate the correlation between the predicted lightning activity and that detected by the European lightning detection network EUCLID. Events of the years 2018 and 2019 with significant lightning activity over Czechia are considered for the analyses. For the first time over the Czech region, the prognostic values of LPI calculated for each event are verified. In particular, the spatio-temporal distribution of the predicted versus detected lightning activity is evaluated. Both the spatial characteristics and the diurnal course of the detected lightning activity correspond well to theoretical knowledge. Thus, spatial (horizontal) and temporal approaches are applied to verify the lightning activity prediction. The results of this thesis successfully verify the LPI prognostic values both in space, by comparing the LPI values with the proximity of detected lightning flashes, and in time by contrasting the...
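A minimal sketch of the proximity-based spatial verification described above is given below: for each grid cell where the predicted LPI exceeds a threshold, check whether a detected flash lies within a small neighbourhood. The grid size, threshold and radius are illustrative assumptions, not the thesis settings.

```python
import numpy as np

def proximity_hit_rate(lpi, flash_mask, threshold=0.5, radius=2):
    """Fraction of grid cells with LPI >= threshold that have at least
    one detected flash within `radius` grid cells (neighbourhood check)."""
    ny, nx = lpi.shape
    hits = total = 0
    for j, i in zip(*np.where(lpi >= threshold)):
        j0, j1 = max(0, j - radius), min(ny, j + radius + 1)
        i0, i1 = max(0, i - radius), min(nx, i + radius + 1)
        total += 1
        if flash_mask[j0:j1, i0:i1].any():
            hits += 1
    return hits / total if total else np.nan

# Illustrative fields: predicted LPI and a detected-flash mask on one grid
rng = np.random.default_rng(1)
lpi = rng.random((50, 50))
flash_mask = rng.random((50, 50)) > 0.95
print(proximity_hit_rate(lpi, flash_mask))
```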
4

An investigation into the use of balance in operational numerical weather prediction

Devlin, David J. J. January 2011 (has links)
Presented in this study is a wide-ranging investigation into the use of properties of balance in an operational numerical weather prediction context. Initially, a joint numerical and observational study is undertaken. We used the Unified Model (UM), the suite of atmospheric and oceanic prediction software used at the UK Met Office (UKMO), to locate symmetric instabilities (SIs), an indicator of imbalanced motion. These are areas of negative Ertel potential vorticity (in the Northern Hemisphere) calculated on surfaces of constant potential temperature. Once located, the SIs were compared with satellite and aircraft observational data. As a full three-dimensional calculation of Ertel PV proved outwith the scope of this study, we calculated the two-dimensional, vertical component of the absolute vorticity to assess the inertial stability criterion. We found that at the synoptic scale in the atmosphere, if a symmetric instability existed, it was dominated by an inertial instability. With the appropriate observational data, evidence of inertial instability from the vertical component of the absolute vorticity predicted by the UM was found at 12 km horizontal grid resolution. Varying the horizontal grid resolution allowed the estimation of a grid length scale, of approximately 20 km, above which the inertial instability was not captured by the observational data. Independently, aircraft data were used to estimate that horizontal grid resolutions above 20-25 km should not model any features of imbalance, providing a real-world estimate of the lower bound of the grid resolution that should be employed by a balanced atmospheric prediction model. A further investigation of the UM concluded that the data assimilation scheme and time of initialisation had no effect on the generation of SIs. An investigation was then made into the robustness of balanced models in the shallow water context, employing the contour-advective semi-Lagrangian (CASL) algorithm (Dritschel & Ambaum, 1997), a novel numerical algorithm that exploits the underlying balance observed within a geophysical flow at leading order. Initially two algorithms were considered, which differed by the prognostic variables employed. Each algorithm had its three-time-level semi-implicit time integration scheme de-centred to mirror the time integration scheme of the UM. We found that the version with potential vorticity (PV), divergence and acceleration divergence, CA_{δ,γ}, as prognostic variables preserved the Bolin-Charney balance to a much greater degree than the model with PV, divergence and depth anomaly, CA_{h̃,δ}, as prognostic variables. This demonstrated that CA_{δ,γ} was better equipped to benefit from de-centring, an essential property of any operational numerical weather prediction (NWP) model. We then investigated the robustness of CA_{δ,γ} by simulating flows with Rossby and Froude numbers of O(1), to find the operational limits of the algorithm. We also investigated increasing the efficiency of CA_{δ,γ} by increasing the time-step Δt while relaxing specific convergence criteria of the algorithm, while preserving accuracy. We find that significant efficiency gains are possible for predominantly mid-latitude flows, a necessary step for the use of CA_{δ,γ} in an operational NWP context.
The study is concluded by employing CASL in the non-hydrostatic context under the Boussinesq approximation, which allows weak stratification to be considered, a step closer to physical reality than the shallow water case. CASL is compared to the primitive equation pseudospectral (PEPS) and vorticity-based pseudospectral (VPS) algorithms, both, as the names suggest, spectral-based algorithms, which again differ in the prognostic variables employed. This comparison is drawn to highlight the computational advantages that CASL has over numerical methods commonly used in many operational forecast centres. We find that CASL requires significantly less artificial numerical diffusion than its pseudospectral counterparts in simulations at Rossby number O(1). Consequently, CASL obtains a much less diffuse, more accurate solution at a lower resolution and therefore lower computational cost. At low Rossby number, where the flow is strongly influenced by the Earth's rotation, CASL is found to be the most cost-effective method. In addition, CASL preserves a much greater proportion of balance, diagnosed with nonlinear quasigeostrophic (NQG) balance, another significant advantage over its pseudospectral counterparts.
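To illustrate the inertial-stability diagnostic used in the first part of the abstract (the vertical component of absolute vorticity, with negative values indicating inertial instability in the Northern Hemisphere), a minimal finite-difference sketch on a regular grid follows; the grid spacing, Coriolis parameter and wind fields are illustrative.

```python
import numpy as np

def absolute_vorticity(u, v, dx, dy, f):
    """Vertical component of absolute vorticity, zeta + f, with
    zeta = dv/dx - du/dy, on a regular Cartesian grid (SI units)."""
    dvdx = np.gradient(v, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    return dvdx - dudy + f

# Illustrative 12 km grid and Coriolis parameter at roughly 55 deg N
dx = dy = 12e3
f = 1.2e-4
rng = np.random.default_rng(2)
u = 20.0 + 5.0 * rng.standard_normal((40, 40))
v = 5.0 * rng.standard_normal((40, 40))
abs_vort = absolute_vorticity(u, v, dx, dy, f)
# Flag potential inertial instability (Northern Hemisphere criterion)
print("unstable points:", int((abs_vort < 0).sum()))
```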
5

Exploiting weather forecast data for cloud detection

Mackie, Shona January 2009 (has links)
Accurate, fast detection of clouds in satellite imagery has many applications, for example in Numerical Weather Prediction (NWP) and in climate studies of both the atmosphere and the Earth's surface temperature. Most operational techniques for cloud detection rely on the differences between observations of cloud and of clear sky being more or less constant in space and in time. In reality, this is not the case: different clouds have different spectral properties, and different cloud types are more or less likely in different places and at different times, depending on atmospheric conditions and on the Earth's surface properties. Observations of clear sky also vary in space and time, depending on atmospheric and surface conditions and on the presence or absence of aerosol particles. The Bayesian approach adopted in this project allows pixel-specific physical information (for example from NWP) to be used to predict pixel-specific observations of clear sky. A physically based, spatially and temporally specific probability that each pixel contains a cloud observation is then calculated. An advantage of this approach is that identification of ambiguously classed pixels from a probabilistic result is straightforward, in contrast to the binary result generally produced by operational techniques. This project has developed and validated the Bayesian approach to cloud detection, and has extended the range of applications for which it is suitable, achieving skill scores that match or exceed those achieved by operational methods in every case. High temperature gradients can make observations of clear sky around ocean fronts, particularly at thermal wavelengths, appear similar to cloud observations. To address this potential source of ambiguous cloud detection results, a region of imagery acquired by the AATSR sensor, noted to contain some ocean fronts, was selected. Pixels in the region were clustered according to their spectral properties with the aim of separating pixels that correspond to different thermal regimes of the ocean. The mean spectral properties of the pixels in each cluster were then processed using the Bayesian cloud detection technique and the resulting posterior probability of clear assigned to the individual pixels. Several clustering methods were investigated, and the most appropriate, which allowed pixels to be associated with multiple clusters through a normalised vector of 'membership strengths', was used to conduct a case study. The distribution of final calculated probabilities of clear became markedly more bimodal when clustering was included, indicating fewer ambiguous classifications, but at the cost of some single-pixel clouds being missed. While further investigation could provide a solution to this, the computational expense of the clustering method made it impractical to include in the work of this project. This new Bayesian approach to cloud detection has been successfully developed by this project to a point where it has been released under public license. Initially designed as a tool to aid retrieval of sea surface temperature from night-time imagery, this project has extended the Bayesian technique to be suitable for imagery acquired over land as well as sea, and for day-time as well as night-time imagery. This was achieved using the land surface emissivity and surface reflectance parameter products available from the MODIS sensor.
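A minimal sketch of the Bayesian classification at the heart of this approach follows: a pixel-specific clear-sky prediction (e.g. from NWP fields passed through an RTM) is combined with a cloud observation distribution and a prior to give a per-pixel probability of clear. The Gaussian clear-sky likelihood, the flat cloud PDF and the prior value are illustrative assumptions, not the released algorithm.

```python
import numpy as np

def prob_clear(obs_bt, clear_bt_pred, clear_sigma, prior_clear=0.7,
               cloud_pdf_value=1.0 / 60.0):
    """Posterior probability that a pixel is clear, given an observed
    brightness temperature, a pixel-specific clear-sky prediction with
    uncertainty, and a (here flat) cloud observation PDF."""
    # Likelihood of the observation under the clear-sky hypothesis
    p_obs_clear = (np.exp(-0.5 * ((obs_bt - clear_bt_pred) / clear_sigma) ** 2)
                   / (clear_sigma * np.sqrt(2.0 * np.pi)))
    # Likelihood under the cloud hypothesis (empirical; flat here)
    p_obs_cloud = cloud_pdf_value
    num = p_obs_clear * prior_clear
    den = num + p_obs_cloud * (1.0 - prior_clear)
    return num / den

# A clear-looking pixel versus a much colder, cloud-like pixel
print(prob_clear(288.0, 289.0, 1.5))   # close to prediction -> high P(clear)
print(prob_clear(255.0, 289.0, 1.5))   # far below prediction -> low P(clear)
```

The probabilistic output is what allows ambiguous pixels (posterior near 0.5) to be identified, in contrast to a binary mask.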
This project added a visible Radiative Transfer Model (RTM), developed at the University of Edinburgh, and a kernel-based surface reflectance model, adapted here from that used by the MODIS sensor, to the cloud detection algorithm. In addition, the cloud detection algorithm was made more flexible, so that its implementation for data from the SEVIRI sensor was straightforward. A database of 'difficult' cloud and clear targets, representing a wide range of both spatial and temporal locations, was provided by Météo-France and used in this work to validate the extensions made to the cloud detection scheme and to compare the skill of the Bayesian approach with that of operational approaches. For night-time land and sea imagery, the Bayesian technique, with the improvements and extensions developed by this project, achieved skill scores 10% and 13% higher than Météo-France respectively. For daytime sea imagery, the skill scores of the two approaches were within 1% of each other, while for land imagery the Bayesian method achieved a 2% higher skill score. The main strength of the Bayesian technique is the physical basis of the differentiation between clear and cloud observations. Using NWP information to predict pixel-specific observations of clear sky is relatively straightforward, but making such predictions for cloud observations is more complicated. The technique therefore relies on an empirical distribution rather than a pixel-specific prediction for cloud observations. To address this, this project developed a means of predicting cloudy observations through fast forward-modelling of pixel-specific NWP information. All cloud fields in the pixel-specific NWP data were set to zero, and clouds were added to the profile at discrete intervals through the atmosphere, with cloud water and ice path (cwp, cip) set to values spaced exponentially at discrete intervals up to saturation, and with cloud pixel fraction set to 25%, 50%, 75% and 100%. Only single-level, single-phase clouds were modelled, with the justification that the resulting distribution of predicted observations, once smoothed through consideration of uncertainties, is likely to include observations that would correspond to multi-phase and multi-level clouds. A fast RTM was run on the profile information for each of these individual clouds, and cloud altitude-, cloud pixel fraction- and channel-specific relationships between cwp (and similarly cip) and predicted observations were calculated from the results of the RTM. These relationships were used to infer predicted observations for clouds with cwp/cip values other than those explicitly forward-modelled. The parameters used to define the relationships were interpolated to define relationships for predicted observations of cloud at 10 m vertical intervals through the atmosphere, with pixel coverage ranging from 25% to 100% in increments of 1%. A distribution of predicted cloud observations is thus achieved without explicit forward-modelling of an impractical number of atmospheric states. Weights are applied to the representation of individual clouds within the final Probability Density Function (PDF) in order to make the distribution of predicted observations realistic, according to the pixel-specific NWP data and to distributions seen in a global reference dataset of NWP profiles from the European Centre for Medium-Range Weather Forecasts (ECMWF).
The distribution is then convolved with uncertainties in the forward modelling and in the NWP data, and with sensor noise, to create the final PDF in observation space, from which the conditional probability that the pixel observation corresponds to a cloud observation can be read. Although a relatively fast computational implementation of the technique was achieved, the results are disappointingly poor for the SEVIRI-acquired dataset, provided by Météo-France, against which validation was carried out. This is thought to be because the uncertainties in the NWP data, and the dependence of the forward modelling on those uncertainties, are poorly understood and treated too optimistically in the algorithm. Including more errors in the convolution introduces the problem of quantifying those errors (a non-trivial task) and would increase the processing time, making implementation impractical. In addition, if the uncertainties considered are too high then a PDF flatter than the empirical distribution currently used would be produced, making the technique less useful.
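The final step above amounts to smoothing a discretised observation-space PDF with the combined uncertainty. A minimal sketch of such a convolution with a Gaussian kernel is given below; the bin width, error magnitude and brightness-temperature values are illustrative assumptions.

```python
import numpy as np

def convolve_with_noise(pdf, bin_width, sigma):
    """Smooth a discretised observation-space PDF with a Gaussian kernel
    of standard deviation `sigma` (same units as the observation axis),
    then renormalise so it integrates to one."""
    half = int(np.ceil(4 * sigma / bin_width))
    x = np.arange(-half, half + 1) * bin_width
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    smoothed = np.convolve(pdf, kernel, mode="same")
    return smoothed / (smoothed.sum() * bin_width)

# Forward-modelled cloud brightness temperatures, binned at 1 K intervals
bt_bins = np.arange(200.0, 300.0, 1.0)
raw_pdf = np.zeros_like(bt_bins)
raw_pdf[[20, 45, 70]] = [0.2, 0.5, 0.3]          # a few discrete modelled clouds
smooth_pdf = convolve_with_noise(raw_pdf, 1.0, sigma=3.0)
```

If sigma is set too large, the smoothed PDF flattens out, which mirrors the limitation noted in the abstract.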
6

The Verification of different model configurations of the Unified Atmospheric Model over South Africa

Mahlobo, Dawn Duduzile January 2013 (has links)
In 2006 a Numerical Weather Prediction (NWP) model known as the Unified Model (UM), from the United Kingdom Meteorological Office (UK Met Office), was installed at the South African Weather Service (SAWS). Since then it has been used operationally at SAWS, replacing the Eta model that was previously used. The research documented in this dissertation was inspired by the need to verify the performance of the UM in simulating and predicting weather over South Africa. To achieve this aim, three model configurations of the UM were compared against each other and against observations. Verification of rainfall, as well as of minimum and maximum temperature, was carried out for the year 2008, the first year since installation for which all the UM configurations used in the study were available. For rainfall, the model was first verified subjectively using eyeball verification over the entire domain of South Africa, followed by objective verification of categorical forecasts for rainfall regions grouped according to standardised monthly rainfall totals obtained by cluster analysis, and finally objective verification using continuous variables for selected stations over South Africa. Minimum and maximum temperatures were verified subjectively using eyeball verification over the entire domain of South Africa, followed by objective verification of continuous variables for selected stations over South Africa, grouped according to height above mean sea level (AMSL). Both the subjective and the objective verification of the three model configurations of the UM (for rainfall as well as for minimum and maximum temperatures) suggest that the 12 km UM simulation with data assimilation (DA) gives better and more reliable results than the 12 km and 15 km UM simulations without DA. It was further shown that, although there was no significant difference between the outputs of the 12 km and 15 km UM without DA, the 15 km UM simulation without DA proved to be more reliable and accurate than the 12 km UM simulation without DA in simulating minimum and maximum temperatures over South Africa; on the other hand, the 12 km UM simulation without DA is more reliable and accurate than the 15 km UM simulation without DA in simulating rainfall over South Africa. / Dissertation (MSc)--University of Pretoria, 2013. / gm2014 / Geography, Geoinformatics and Meteorology / unrestricted
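The categorical rainfall verification mentioned above is conventionally built on a 2x2 contingency table of forecast versus observed events. A minimal sketch of the usual scores, under an illustrative rain/no-rain threshold, is shown below; it is not the dissertation's code.

```python
import numpy as np

def categorical_scores(forecast, observed, threshold=1.0):
    """Probability of detection (POD), false alarm ratio (FAR) and
    frequency bias from paired forecast/observed rainfall amounts (mm)
    and an event threshold."""
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    pod = hits / (hits + misses) if hits + misses else np.nan
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else np.nan
    bias = (hits + false_alarms) / (hits + misses) if hits + misses else np.nan
    return pod, far, bias

# Illustrative daily rainfall totals at a handful of stations
forecast = [0.0, 2.5, 5.0, 0.2, 12.0, 0.0]
observed = [0.0, 1.5, 0.0, 3.0, 10.0, 0.0]
print(categorical_scores(forecast, observed))
```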
7

Verification of simulated DSDs and sensitivity to CCN concentration in EnKF analysis and ensemble forecasts of the 30 April 2017 tornadic QLCS during VORTEX-SE

Connor Paul Belak (10285328) 16 March 2021 (has links)
Storms in the SE-US often evolve in different environments than those in the central Plains. Many poorly understood aspects of these differing environments may impact the tornadic potential of SE-US storms. Among these differences are potential variations in CCN concentration owing to differences in land cover, combustion, industrial and urban activity, and proximity to maritime environments. The relative influence of warm and cold rain processes is sensitive to CCN concentration, with higher CCN concentrations producing smaller cloud droplets and more efficient cold rain processes. Cold rain processes result in DSDs with relatively larger drops from melting ice compared to warm rain processes. Differences in DSDs impact cold pool and downdraft size and strength, which influence tornado potential. This study investigates the impact of CCN concentration on DSDs in the SE-US by comparing DSDs from ARPS-EnKF model analyses and forecasts to DSDs observed by portable disdrometer-equipped probes, collected through a collaboration between Purdue University, the University of Oklahoma (OU), the National Severe Storms Laboratory (NSSL), and the University of Massachusetts, in a tornadic QLCS on 30 April 2017 during VORTEX-SE.

The ARPS-EnKF configuration, which consists of 40 ensemble members, is used with the NSSL triple-moment microphysics scheme. Surface and radar observations are both assimilated. Data assimilation experiments with CCN concentrations ranging from 100 cm⁻³ (maritime) to 2,000 cm⁻³ (continental) are conducted to characterize the variability of DSDs, and the model output DSDs are verified against the disdrometer observations. The sensitivity of the DSD variability to CCN concentration is evaluated. Results indicate that continental CCN concentrations (close to 1,000 cm⁻³) produce DSDs that align closest with the observed DSDs. Other thermodynamic variables also agree better with observations in intermediate CCN concentration environments.
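The drop size distributions being verified are commonly represented in multi-moment microphysics schemes by a gamma form N(D) = N0 D^mu exp(-lambda D), whose low-order moments correspond to number concentration, a mass proxy and a reflectivity proxy. A minimal sketch of evaluating these moments is given below, with illustrative parameter values rather than those of the NSSL scheme or this study.

```python
from math import gamma

def gamma_dsd_moment(n0, mu, lam, k):
    """k-th moment of a gamma DSD N(D) = n0 * D**mu * exp(-lam * D),
    integrated over D in [0, inf):
    M_k = n0 * Gamma(mu + k + 1) / lam**(mu + k + 1)."""
    return n0 * gamma(mu + k + 1.0) / lam ** (mu + k + 1.0)

# Illustrative parameters (D in mm)
n0, mu, lam = 8.0e3, 2.0, 2.5
m0 = gamma_dsd_moment(n0, mu, lam, 0)   # total number concentration
m3 = gamma_dsd_moment(n0, mu, lam, 3)   # proportional to liquid water content
m6 = gamma_dsd_moment(n0, mu, lam, 6)   # proportional to radar reflectivity
print(m0, m3, m6)
```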
8

The Study of Water Quality Improvement and Planning Strategy for Urban Wetland Park

Chen, Li-yu 12 February 2005 (has links)
Wetlands provide many functions, which include supplying surface water, recharging groundwater, breeding and producing natural resources, offering natural landscapes and tourist attractions, providing areas for ecological education and research, and regulating regional ecosystems. Their role in the environment cannot be ignored, and they therefore deserve to be protected. One part of this research focused on the Niaosung Wetland Park (NWP). The Niaosung Wetland Park was developed from the settling pool of the Cheng Chin Lake Branch of the Taiwan Water Supply Company. The site was originally designed to precipitate sediments from wastewater discharged from the Cheng Chin Lake Water Treatment Plant. With sufficient water and nutrients, the settling pool gradually became a small-scale artificial wetland. NWP was launched in September 2000 and has been operating for more than four years. Although the construction methods of NWP were disputable and destroyed the existing ecosystem, NWP has still tended slowly toward a natural state after four years of natural recovery. In practice, NWP is difficult to manage because it is located in the greater Kaohsiung municipal area. The other part of this research focused on the Neiweipi Cultural Park (NCP) at the Kaohsiung Museum of Fine Arts. NCP was established in 2000 and is divided into three areas: a hill area, a river area and a wetland. It combines the Art Museum with an ecological park, offering citizens art, culture, recreation and ecology. In this study, we monitored the water quality and assessed the habitat of both wetland parks to derive strategies for managing and maintaining them so that they become more sustainable, stable and diverse. After monitoring for one year, the results show that both artificial wetland parks, although not intended for wastewater treatment (purification of water quality), could reduce some non-point source pollution. To maintain both wetland parks sustainably, regular management interventions guided by monitoring data, covering plants, birds, insects and water quality, are needed to arrest degradation and improve habitat quality. Habitat recovery is one such intervention, which can keep the habitats in the best condition to attract diverse creatures to forage and perch.
9

Machine Learning for Improvement of Ocean Data Resolution for Weather Forecasting and Climatological Research

Huda, Md Nurul 18 October 2023 (has links)
Severe weather events like hurricanes and tornadoes pose major risks globally, underscoring the critical need for accurate forecasts to mitigate impacts. While advanced computational capabilities and climate models have improved predictions, the lack of high-resolution initial conditions still limits forecast accuracy. Most storms arise in the Atlantic's "Hurricane Alley" region, which therefore needs robust in-situ ocean data and atmospheric profiles to enable precise hurricane tracking and intensity forecasts. Examination of satellite datasets reveals that radio occultation (RO) provides the most accurate atmospheric measurements at 5-25 km altitude. However, below 5 km, accuracy remains insufficient over oceans compared with land areas. Recent benchmark studies, e.g. Patil and Iiyama (2022) and Wei and Guan (2022), proposed deep learning models for sea surface temperature (SST) prediction, reporting errors ranging from 0.35°C to 0.75°C in the Tohoku region and root-mean-square errors increasing from 0.27°C to 0.53°C over the China seas, respectively. The approach developed here remains unparalleled in its domain to date. This research is divided into two parts and aims to develop a data-driven, satellite-informed machine learning system to combine high-quality but sparse in-situ ocean data with more readily available low-quality satellite data. In the first part of the work, a novel data-driven, satellite-informed machine learning algorithm was implemented that combines high-quality/low-coverage in-situ point ocean data (e.g. ARGO floats) and low-quality/high-coverage satellite ocean data (e.g. HYCOM, MODIS-Aqua, G-COM) and generated high-resolution data with an RMSE of 0.58°C over the Atlantic Ocean. In the second part of the work, a novel GNN algorithm was implemented for the Gulf of Mexico and shown to successfully capture the complex interactions within the ocean and mimic the path of an ARGO float, with an RMSE of 1.40°C. / Doctor of Philosophy / Severe storms like hurricanes and tornadoes are a major threat around the world. Accurate weather forecasts can help reduce their impacts. While climate models have improved predictions, the lack of detailed initial conditions still limits forecast accuracy. The Atlantic's "Hurricane Alley" sees many storms form and needs good ocean and atmospheric data for precise hurricane tracking and strength forecasts. Studying satellite data shows that radio occultation provides the most accurate high-altitude (5-25 km) measurements over oceans, but below 5 km accuracy remains insufficient compared with over land. Recent research proposed using deep learning models for sea surface temperature prediction with low errors. Our approach remains unmatched in this area currently. This research has two parts. First, we developed a satellite-informed machine learning system combining limited high-quality ocean data with more available low-quality satellite data. This generated high-resolution Atlantic Ocean data with an error of 0.58°C. Second, we implemented a new algorithm for the Gulf of Mexico, successfully modeling complex ocean interactions and hurricane paths with an error of 1.40°C. Overall, this research advances hurricane forecasting by combining different data sources through innovative machine learning techniques. More accurate predictions can help better prepare communities in hurricane-prone regions.
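A minimal sketch of the general idea in the first part, with synthetic data and an illustrative feature set, is given below: a regressor is trained on satellite values co-located with sparse high-quality (ARGO-style) temperatures, then applied over the full satellite grid to produce a corrected field. It is not the thesis implementation and the model choice is an assumption.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)

# Synthetic "truth" SST field and a biased, noisy "satellite" version of it
truth = 20.0 + 5.0 * rng.random((60, 60))
satellite = truth + 1.0 + 0.8 * rng.standard_normal(truth.shape)

# Sparse in-situ (ARGO-style) samples of the truth at random grid locations
n_obs = 300
jj = rng.integers(0, 60, n_obs)
ii = rng.integers(0, 60, n_obs)
X_train = satellite[jj, ii].reshape(-1, 1)   # feature: co-located satellite SST
y_train = truth[jj, ii]                      # target: in-situ SST

model = GradientBoostingRegressor().fit(X_train, y_train)

# Apply the learned correction wherever the satellite field is available
corrected = model.predict(satellite.reshape(-1, 1)).reshape(satellite.shape)
rmse_before = np.sqrt(np.mean((satellite - truth) ** 2))
rmse_after = np.sqrt(np.mean((corrected - truth) ** 2))
print(rmse_before, rmse_after)
```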
10

Função de mapeamento brasileira da atmosfera neutra e sua aplicação no posicionamento GNSS na América do Sul / Brazilian neutral atmosphere mapping function and its application to GNSS positioning in South America

Gouveia, Tayná Aparecida Ferreira. January 2019 (has links)
Orientador (Advisor): João Francisco Galera Monico / Abstract: Global Navigation Satellite Systems (GNSS) technology has been widely used in positioning, from day-to-day applications (metric accuracy) to applications that require high accuracy (a few cm or dm). For high accuracy, different techniques may be applied to minimise the effects that the signal suffers from its transmission at the satellite to its reception. The GNSS signal, when propagating through the neutral atmosphere (from the surface up to 50 km), is affected by hydrostatic gases and water vapour. The variation of these atmospheric constituents causes a refraction of the signal that generates a delay. This delay can cause errors in the measurements of at least 2.5 m (zenith) and greater than 25 m (slant). The determination of the delay in the slant direction (satellite to receiver), according to the elevation angle, is performed by mapping functions. One of the techniques for calculating the delay is ray tracing. This technique makes it possible to map the actual path travelled by the signal and to model the interference of the neutral atmosphere on it. Different approaches can be used to obtain the information describing the neutral atmosphere constituents (temperature, pressure and humidity), including radiosonde measurements, numerical weather prediction (NWP) models, GNSS measurements and theoretical models. Regional NWP models from the Center for Weather Forecasting and Climate Studies (CPTEC) of the National Institute for Space Research (INPE) are a good alternative to provide atmospheri... (Complete abstract click electronic access below) / Doutor (Doctorate)
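To illustrate how a mapping function turns a zenith delay into a slant delay, a minimal sketch using the standard normalised continued-fraction form (Herring/Niell type) is given below; the coefficients and zenith delay are illustrative values, not those of the Brazilian mapping function developed in the thesis, which derives such information from CPTEC NWP fields via ray tracing.

```python
import numpy as np

def mapping_function(elev_deg, a, b, c):
    """Normalised continued-fraction mapping function m(e), relating the
    slant delay to the zenith delay: slant = zenith * m(e)."""
    s = np.sin(np.radians(elev_deg))
    num = 1.0 + a / (1.0 + b / (1.0 + c))
    den = s + a / (s + b / (s + c))
    return num / den

# Illustrative hydrostatic coefficients and a typical zenith hydrostatic delay
a, b, c = 1.2e-3, 2.9e-3, 62.6e-3
zenith_delay_m = 2.3
for elev in (90.0, 30.0, 10.0, 5.0):
    m = mapping_function(elev, a, b, c)
    print(elev, round(zenith_delay_m * m, 3))   # slant delay grows at low elevation
```

At low elevation angles the mapped delay reaches tens of metres, consistent with the magnitudes quoted in the abstract.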
